Speaker

Raghavi Janaswamy

Sr. Principal Engineer at Optum, Research Scholar where technology meets fine arts

I am currently a Senior Principal Software Engineer and have grown alongside several organizations, working on technologies ranging from client-server architectures to enterprise applications at cloud scale. I have learned a lot over the course of this journey; today I help teams adopt engineering best practices and love sharing that experience with technical communities.

I am also a Research Scholar pursuing my Ph.D., where technology meets fine arts.

Area of Expertise

  • Arts
  • Information & Communications Technology
  • Health & Medical

Data as a Service at Scale for a Large Healthcare Enterprise

UHG is on a transformational journey, modernizing its technology assets to improve the overall experience for its customers. One of the challenges was delivering a real-time experience to consumers by processing big data at scale. By leveraging a modernized streaming platform powered by Apache Kafka, the Provider Data platform at UHG was able to set up data pipelines that process billions of events at a throughput of more than 100,000 events per second.

The data processing platform, which consumes data from various sources and computes metrics in real time, is implemented as a set of microservices using Kafka features such as Kafka Streams joins and aggregations, Kafka Connect, and the RocksDB state store.
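As a rough illustration of the kind of stateful, keyed aggregation described above, the sketch below models a running per-key metric over a stream of events. This is a conceptual analogue only: the actual pipeline uses the Java Kafka Streams API (e.g. `groupByKey().aggregate()` materialized into a RocksDB-backed state store), and the class, event keys, and metric here are hypothetical.

```python
# Conceptual sketch of a keyed streaming aggregation, standing in for a
# Kafka Streams topology. In the real pipeline the state store would be
# RocksDB, backed by a changelog topic for fault tolerance.
from collections import defaultdict

class MetricAggregator:
    """Consumes (key, value) events and maintains per-key running metrics,
    much like a table materialized from a streaming aggregate."""

    def __init__(self):
        # Local state store keyed by event key (hypothetical schema).
        self.store = defaultdict(lambda: {"count": 0, "sum": 0.0})

    def process(self, key, value):
        # Update running state for this key; Kafka Streams would invoke
        # this per record as events arrive on the input topic.
        state = self.store[key]
        state["count"] += 1
        state["sum"] += value
        return state

    def metric(self, key):
        # Derived real-time metric: running average for the key.
        state = self.store[key]
        return state["sum"] / state["count"] if state["count"] else 0.0

agg = MetricAggregator()
for event in [("provider-1", 10.0), ("provider-1", 20.0), ("provider-2", 5.0)]:
    agg.process(*event)

print(agg.metric("provider-1"))  # -> 15.0
```

The in-memory dictionary stands in for the RocksDB state store; the fault-tolerance and repartitioning concerns that Kafka Streams handles are deliberately out of scope for this sketch.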

In this session, we will dive deep into the architecture and tools we used to implement our pipeline, the operational processes we established, the performance improvements we achieved, and the day-to-day operational challenges we faced.

End to End Data Traceability Using Kafka Interceptors

Data is the most critical asset for delivering value to our customers, so we invest heavily in its quality and traceability. In a big data context, the data tracing challenges are multifold, owing to the data volume, the number of transformations, the cost, and the maturity of the available tools.

In this session, I will discuss the core concepts of Kafka interceptors and how we leveraged them to build an enterprise-wide library that traces the billions of events processed by our data pipelines. This reusable library is architected to be used across all of our microservices, tracing data as it travels from various sources into Kafka topics, is processed as streams, feeds the computation of various metrics, and eventually reaches persistent storage. The tracing data itself is persisted to Elasticsearch for further interpretation through visual dashboards.

In particular, the solution enables us to trace the complete lifecycle of the data.
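The interceptor pattern described above can be sketched as follows. Kafka's interceptors are a Java client API (`ProducerInterceptor` with `onSend` and `onAcknowledgement` callbacks); this Python sketch mirrors that contract conceptually, and the class name, trace-record fields, and sink are hypothetical, not the session's actual library.

```python
# Conceptual sketch of the tracing hooks a Kafka producer interceptor
# exposes. on_send / on_acknowledgement mirror the Java
# ProducerInterceptor contract; the sink and record shape are
# hypothetical stand-ins for the enterprise tracing library.
import time

class TracingInterceptor:
    """Emits a trace event for every record sent and acknowledged,
    analogous to a library shipping trace records to Elasticsearch."""

    def __init__(self, sink):
        self.sink = sink  # e.g. a buffer periodically flushed to Elasticsearch

    def on_send(self, record):
        # Invoked before the record is serialized and sent to the broker.
        self.sink.append({
            "phase": "send",
            "topic": record["topic"],
            "key": record["key"],
            "ts": time.time(),
        })
        return record  # an interceptor may also enrich the record here

    def on_acknowledgement(self, metadata, error=None):
        # Invoked when the broker acknowledges the record (or on failure),
        # letting us trace delivery as well as production.
        self.sink.append({
            "phase": "ack",
            "topic": metadata["topic"],
            "offset": metadata.get("offset"),
            "error": str(error) if error else None,
            "ts": time.time(),
        })

trace_sink = []
interceptor = TracingInterceptor(trace_sink)
interceptor.on_send({"topic": "provider-events", "key": "evt-1", "value": b"..."})
interceptor.on_acknowledgement({"topic": "provider-events", "offset": 42})
print([e["phase"] for e in trace_sink])  # -> ['send', 'ack']
```

Because the hooks fire on both send and acknowledgement, each event leaves a pair of trace records, which is what makes end-to-end lifecycle tracing possible once matching consumer-side hooks are in place.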
