Pratik Patel
Lead Developer Relations Engineer
Atlanta, Georgia, United States
Pratik Patel is a Java Champion, a developer advocate at Azul Systems, and the author of three books on programming (Java, Cloud, and OSS). An all-around software and hardware nerd, he has experience in the healthcare, telecom, financial services, and startup sectors. He's also a co-organizer of the Atlanta Java User Group and Enterprise AI Atlanta, conference chairperson for Devnexus, a frequent speaker at tech events, and a master builder of nachos.
Building A Real World AI Application
In this workshop, we'll build an AI application that allows users to perform data queries and extract insights from massive datasets using natural language. We’ll explore the potential of combining Iceberg, Spark and LLMs. We'll start with understanding the structure and architecture of a large dataset. Then we'll look at options for querying the dataset using Apache Spark and Trino. Finally, we'll use an LLM to query the dataset using natural language. We'll also look at other uses of LLMs as part of an overall solution, and explore the differences between different LLMs.
This workshop will give you a real-world example to take home and explore as you build your own data-driven AI applications. We will use these technologies:
* Apache Iceberg
* Apache Spark
* Trino
* LM Studio for running your own LLM
You will need a laptop with an IDE, Docker, and the ability to install LM Studio and download data and AI models. We'll use Spring Boot and Spring AI as the base for the application.
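To give a flavor of the end result, here is a minimal sketch of the natural-language query bridge we'll build, assuming Spring AI's ChatClient. The system prompt, table name, and the runQuery() helper are illustrative placeholders, not the workshop's final code.

```java
// Sketch: turn a natural-language question into SQL against the Iceberg
// dataset, then hand it to the query layer (Spark or Trino).
// Assumes Spring AI's auto-configured ChatClient.Builder bean;
// runQuery() is a hypothetical placeholder for the execution step.
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class NaturalLanguageQueryService {

    private final ChatClient chatClient;

    public NaturalLanguageQueryService(ChatClient.Builder builder) {
        this.chatClient = builder
                .defaultSystem("You translate questions into SQL for the "
                        + "'sales' Iceberg table. Return only the SQL statement.")
                .build();
    }

    public String ask(String question) {
        // Ask the LLM to produce SQL for the user's question.
        String sql = chatClient.prompt()
                .user(question)
                .call()
                .content();
        // Execute the generated SQL against the dataset.
        return runQuery(sql);
    }

    private String runQuery(String sql) {
        // Placeholder: in the workshop this runs against Trino or Spark.
        return "executed: " + sql;
    }
}
```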
Java Perf & Scale: Mastering Techniques for Efficient Applications
Building performant and scalable Java applications involves several key strategies that span coding, architecture, and deployment. In this session, we'll start at a high level, then dive deep into code and talk about scaling. We'll cover these topics:
1. Code Profiling and Bottleneck Identification
2. Efficient Coding Practices
3. Caching and Connection Pooling
4. Memory Management and Garbage Collection
5. Scalability Techniques
After this session, you'll have specific techniques you can apply to your own Java applications to make them run with lower latency, higher throughput, and greater stability!
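As a taste of topic 3, here is a minimal sketch of caching combined with connection pooling, assuming Caffeine and HikariCP; the class, table, and column names are illustrative.

```java
// Sketch of topic 3: a bounded, expiring cache in front of a pooled
// data source. Assumes Caffeine and HikariCP on the classpath.
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.time.Duration;

public class CustomerLookup {

    // Pooled connections avoid the per-request cost of opening a socket
    // and authenticating against the database.
    private final HikariDataSource dataSource;

    // A bounded, expiring cache keeps hot rows out of the database entirely.
    private final LoadingCache<Long, String> customerNames;

    public CustomerLookup(String jdbcUrl) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(jdbcUrl);
        config.setMaximumPoolSize(10);   // size to your database's capacity
        this.dataSource = new HikariDataSource(config);

        this.customerNames = Caffeine.newBuilder()
                .maximumSize(10_000)
                .expireAfterWrite(Duration.ofMinutes(5))
                .build(this::loadCustomerName);
    }

    public String nameFor(long customerId) {
        // Cache hit: no database round trip at all.
        return customerNames.get(customerId);
    }

    private String loadCustomerName(Long customerId) {
        // Cache miss: borrow a pooled connection instead of opening one.
        try (var conn = dataSource.getConnection();
             var stmt = conn.prepareStatement(
                     "SELECT name FROM customers WHERE id = ?")) {
            stmt.setLong(1, customerId);
            try (var rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        } catch (java.sql.SQLException e) {
            throw new IllegalStateException(e);
        }
    }
}
```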
Big Data and AI Architecture: Apache Iceberg, Spark and LLMs
This presentation delves into integrating LLMs with Apache Spark and Apache Iceberg as part of a foundational Big-Data-to-AI architecture. In this session we'll explore the potential of combining Iceberg, Spark, and LLMs to give you a real-world AI architecture that uses your own data.
We'll build an AI application that allows users to perform data queries and extract insights from massive datasets using natural language. We'll start with understanding the structure and architecture of a large dataset. Then we'll look at options for querying the dataset using Apache Spark and Trino. Finally, we'll use an LLM to query the dataset using natural language. We'll also look at other uses of LLMs as part of an overall solution, and explore the differences between different LLMs.
We'll also discuss where event streaming (Kafka and Flink) fits into this architecture. The design is meant to be flexible, giving your dev team the ability to choose different technologies for processing and querying. I'll leave you with a concrete example that you can run on your laptop to explore the possibilities. Again, this will be a real-world application: the dataset is home sales data from the last 15 years.
We will use these technologies:
* Apache Iceberg
* Apache Spark
* Spring AI
* Ollama
* Various LLMs
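As a flavor of the laptop example, here is a minimal sketch of querying a home-sales Iceberg table through Spark SQL. The catalog name, warehouse path, and table schema are illustrative assumptions, not the session's exact code.

```java
// Sketch: query a home-sales Iceberg table through Spark SQL.
// Assumes the iceberg-spark-runtime jar is on the classpath;
// catalog name, warehouse path, and table name are illustrative.
import org.apache.spark.sql.SparkSession;

public class HomeSalesQuery {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("home-sales")
                .master("local[*]")
                // Register a local Hadoop-backed Iceberg catalog.
                .config("spark.sql.catalog.local",
                        "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.local.type", "hadoop")
                .config("spark.sql.catalog.local.warehouse", "/tmp/warehouse")
                .getOrCreate();

        // The same kind of SQL an LLM might generate from a
        // natural-language question about price trends.
        spark.sql("""
                SELECT year(sale_date) AS yr, avg(price) AS avg_price
                FROM local.db.home_sales
                GROUP BY year(sale_date)
                ORDER BY yr
                """).show();

        spark.stop();
    }
}
```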
AI Native Architecture for Java Applications
We are currently moving from "AI-enabled" systems, where artificial intelligence is an additive feature, to "AI-native" systems, where intelligence is the foundational, architectural core. An AI-native application is not merely a traditional application with a machine learning model bolted on; it is an entirely new class of software designed from the ground up to learn, adapt, and act autonomously. These systems are architected around continuous data ingestion, real-time model interaction, and a contextual understanding of the application and runtime environment.
We'll discuss the key elements mentioned above and the difference between existing applications that have added AI capability as an accessory and this new class of applications that are built with AI in mind from the start. While we'll use the Java ecosystem in our examples, the principles we'll discuss are language agnostic. We'll focus on the architecture and discuss these topics (a sketch of the basic shape follows the list):
* Big Data and data pipelines
* Integration with other services and APIs
* Testing considerations
* Evolution of UI/AX
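To make the "AI at the core" idea concrete, here is a minimal, hypothetical sketch of an AI-native request path in Java. Every type here names an architectural role, not a real library API: the model sits in the main request path, grounded in continuously ingested context, rather than bolted on at the edge.

```java
// Illustrative shape of an AI-native component. All types are hypothetical;
// they name architectural roles rather than any specific framework.
import java.util.List;

interface ContextStore {
    // Continuously updated from data pipelines (e.g. Kafka/Flink jobs).
    List<String> relevantContext(String request);
}

interface ModelGateway {
    // Real-time model interaction: every decision flows through here.
    String complete(String prompt);
}

public class AiNativeHandler {
    private final ContextStore context;
    private final ModelGateway model;

    public AiNativeHandler(ContextStore context, ModelGateway model) {
        this.context = context;
        this.model = model;
    }

    public String handle(String request) {
        // Ground the model in fresh application context before it acts,
        // instead of calling it as an afterthought on a finished result.
        String prompt = String.join("\n", context.relevantContext(request))
                + "\n\nUser request: " + request;
        return model.complete(prompt);
    }
}
```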