Speaker

Akaash Vishal Hazarika

Senior Member of Technical Staff - Salesforce

Seattle, Washington, United States

I have more than five years of experience as a software developer working on large-scale systems, back-end development, and performance optimization. I currently work at Salesforce, on the team that manages the marketing databases in Salesforce (the largest SQL Server deployment in the world). I have previously worked for companies such as Splunk, AWS, and Google. I also hold a master's degree in computer science from NC State. I am always interested in learning new things and believe that a system's value comes from its simplicity, maintainability, and testability.

Area of Expertise

  • Information & Communications Technology

Topics

  • Distributed Systems
  • Algorithms
  • NLP
  • AI
  • ML
  • Databases
  • DevOps
  • Resiliency
  • Software Development
  • Cloud & DevOps

Building Resilient Distributed Systems

Disaster recovery keeps distributed systems resilient by protecting against failures in complex, large-scale environments. This presentation explores essential strategies such as replication, redundancy, and failover mechanisms that preserve availability and data integrity. We will examine the trade-offs between synchronous and asynchronous replication, the importance of automatic failover, and the challenge of ensuring consistency in light of the CAP theorem. A hands-on example will illustrate these concepts in practice.
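
As a minimal, self-contained sketch of the synchronous-versus-asynchronous replication trade-off mentioned above (not the talk's own hands-on example), the following Python snippet models a primary and its replicas as in-memory dictionaries; the class name, the simulated latency, and the thread-pool mechanics are illustrative assumptions.

from concurrent.futures import ThreadPoolExecutor
import time

class ReplicatedStore:
    """Toy key-value store with one primary and N replicas."""

    def __init__(self, replica_count=2, synchronous=True):
        self.primary = {}
        self.replicas = [{} for _ in range(replica_count)]
        self.synchronous = synchronous
        self._pool = ThreadPoolExecutor(max_workers=replica_count)

    def _replicate(self, replica, key, value):
        time.sleep(0.05)          # simulate network latency to the replica
        replica[key] = value

    def put(self, key, value):
        self.primary[key] = value
        futures = [self._pool.submit(self._replicate, r, key, value)
                   for r in self.replicas]
        if self.synchronous:
            # Synchronous: acknowledge only after every replica has the write,
            # trading higher latency for durability and consistency.
            for f in futures:
                f.result()
        # Asynchronous: return immediately; replicas catch up in the background,
        # so a primary failure can lose the most recent writes.

store = ReplicatedStore(synchronous=False)
store.put("order:42", "confirmed")   # returns before the replicas are up to date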

Balancing Ethics and Scalability in Distributed Systems

The rise of large-scale AI systems built on distributed architectures presents unique challenges for ensuring responsible AI practices. Traditional distributed systems issues intersect with ethical considerations, complicating the development of transparent, fair, and reliable AI. Drawing on real-world production examples, this presentation highlights these challenges and proposes actionable solutions that prioritize ethical AI deployment without compromising system performance or scalability.

Approaching Distributed Training of ML Models

In today's era of large-scale machine learning models, training on a single machine often becomes impractical due to resource constraints and time limitations. Distributed training provides an efficient solution by leveraging multiple computing resources to accelerate model training and handle larger datasets. This talk explores approaches to distributed training, including data and model parallelism and synchronous versus asynchronous strategies, using frameworks such as TensorFlow and PyTorch.
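
As a rough illustration of the data-parallel approach (not material from the talk itself), here is a minimal PyTorch DistributedDataParallel sketch; the single-process world size, the gloo backend, and the toy linear model are assumptions made so the snippet can run standalone on a CPU. A real job would launch one process per GPU (for example with torchrun) and give each worker its own shard of the data.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Single-process "cluster" for illustration; real jobs set these per worker.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = torch.nn.Linear(10, 1)
    ddp_model = DDP(model)            # gradients are all-reduced across workers
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for _ in range(5):                # each worker would iterate over its own shard
        inputs = torch.randn(32, 10)
        targets = torch.randn(32, 1)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(inputs), targets)
        loss.backward()               # DDP synchronizes gradients here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()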
