Most Active Speaker

Mary Grygleski

AI Practice Lead, TEDx Speaker, Technical Advocate, Java Champion, President of Chicago-JUG, Chapter Co-Lead of AICamp-Chicago

Senior Software Engineer Advocate, Java Champion, Chicago Java Users Group Leader, AICamp Chicago Chapter Co-Lead

Chicago, Illinois, United States

Mary is a Java Champion and the AI Practice Lead at Callibrity, a consulting firm based in Cincinnati, Ohio. She started as an engineer in Unix/C, transitioned to Java around 2000, and has never looked back. After 20+ years as a software engineer and technical architect, she discovered her true passion in developer and customer advocacy. Most recently she has served companies of various sizes, such as IBM, US Cellular, Bank of America, and the Chicago Mercantile Exchange, in topic areas that include Java, GenAI, streaming systems, open source, cloud, and distributed messaging systems. She is also a very active tech community leader outside of her day job: she is the President of the Chicago Java Users Group (CJUG) and the Chicago Chapter Co-Lead for AICamp.

Area of Expertise

  • Information & Communications Technology

Topics

  • Java
  • Reactive Systems
  • Reactive Programming
  • Open Source
  • Cloud Computing
  • Mobile Development
  • Emerging Technologies
  • Java and Server-side
  • Enterprise Java / Jakarta EE
  • AI/ML + Data Processing
  • Event Streaming
  • GenAI
  • Vector Databases
  • Vector Search
  • LLMs

Optimizing Complex Workflows with Event-Driven Multi-Agentic Approach

This presentation explores the integration of event-driven data streaming techniques with multi-agentic generative AI workflows, offering a powerful approach to complex system design. By leveraging event streaming, we enable real-time data flow and processing across multiple AI agents, each specializing in distinct tasks such as reflection, tool use, planning, and collaboration.

The proposed architecture allows for the following (a minimal agent sketch follows the list):

1. Scalability: Easily add or modify agents without disrupting the entire system.
2. Flexibility: Dynamically route tasks and information based on event triggers.
3. Resilience: Distributed processing reduces single points of failure.
4. Efficiency: Parallel processing of tasks by specialized agents.
5. Adaptability: Real-time adjustments to workflow based on streaming data.
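
As a taste of what such an agent can look like, here is a minimal sketch of a single "planning" agent, assuming Kafka as the event backbone; the topic names and the plan() step (where an LLM call would go) are illustrative only:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Illustrative "planning" agent: it consumes task events and produces plan
// events for downstream specialist agents. Topic names are hypothetical.
public class PlanningAgent {

    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "planning-agent");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("agent.tasks"));
            while (true) {
                for (ConsumerRecord<String, String> event : consumer.poll(Duration.ofSeconds(1))) {
                    String plan = plan(event.value());   // in a real agent, this is where an LLM call would go
                    producer.send(new ProducerRecord<>("agent.plans", event.key(), plan));
                }
            }
        }
    }

    // Placeholder for the agent's reasoning step.
    private static String plan(String task) {
        return "steps-for:" + task;
    }
}
```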

Exploring Cloud Native MLOps for GenAI Vector Search

ChatGPT has taken center stage since early this year. We will first take a look at this exciting new subfield of Generative AI, and understand what LLMs and NLP are and the challenges they present. We will also highlight the importance of vector search, and the role a vector DB plays in storing embeddings and enabling fast, index-based similarity matching and search. While this is exciting, we also need to ensure that building, integration, and continuous deployment are handled in the most efficient way. By leveraging a cloud native environment with Kubernetes, we will examine how the process can be optimized by taking advantage of the serverless and event-driven nature of a typical cloud native environment.

MLOps—machine learning operations, or DevOps for machine learning—is the intersection of people, process, and platform for gaining business value from machine learning. It streamlines development and deployment via monitoring, validation, and governance of machine learning models. With the rapid rise in popularity of GenAI, we will explore how the operational side of things will be impacted and how MLOps differs from DevOps.

Build a ChatGPT RAG Data Pipeline with RisingWave Stream Processor and Vector Store

Enter the exciting brave new world of GenAI by building a ChatGPT data pipeline that leverages RisingWave's efficient stream-processing jobs on real-time data drawn from an X (formerly Twitter) feed, enriched with vector data and similarity search.

We'll explore the exciting ChatGPT world, building an efficient data pipeline that's enriched with vector embeddings stored in a vector DB (PgVector), and see how it can pair with the performant, cloud-based RisingWave stream processor for its write jobs. We will illustrate a sample use case with live coding, as follows (a minimal retrieval sketch follows the list):

* Simulate a streaming data feed from X (formerly Twitter), using Kafka as the message broker for data ingestion
* RisingWave will consume the data stream, and perform data analysis
* Construct prompts based on the top 3 hashtags identified by RisingWave
* Prompts will be used for inferencing against a RAG-based BOT built with PgVector
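
As a preview of the retrieval step, here is a minimal sketch of the PgVector similarity search, assuming a hypothetical tweets(content, embedding) table and the standard PostgreSQL JDBC driver; connection details are illustrative only:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Retrieve the tweets most similar to the hashtag prompt from PgVector, to be
// injected as RAG context. Table, column, and connection details are hypothetical.
public class VectorRetriever {

    public static List<String> topMatches(float[] promptEmbedding, int k) throws Exception {
        // pgvector accepts a '[x,y,...]' text literal for vector values
        String vectorLiteral = Arrays.toString(promptEmbedding).replace(" ", "");
        String sql = "SELECT content FROM tweets ORDER BY embedding <-> ?::vector LIMIT ?";
        List<String> matches = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/rag", "postgres", "postgres");
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, vectorLiteral);
            stmt.setInt(2, k);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    matches.add(rs.getString("content"));
                }
            }
        }
        return matches;
    }
}
```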

Exploring the Reactive Landscape

As Java is an object-oriented language that inherently supports the imperative programming style, asynchronicity presents a challenge that can turn the code into a nightmare. One way to deal with the complexity of asynchronicity is to introduce reactivity at the coding level (reactive programming), and/or to handle it at the design and architecture level (reactive systems design).

Reactive programming and reactive systems are not brand new concepts, as the underlying problems that they are trying to solve, namely concurrency processing and distributed systems, have existed since the beginning of computing time, and there have always been various techniques and implementations that are "reactive" in nature to address those problems. With the advances in multicore hardware and virtualization, plus cloud technologies in the 2010s, the time has become ripe for reactive software tooling and frameworks to take center stage. This talk introduces the concepts of reactive programming and reactive systems, discusses the Reactive Manifesto and the Reactive Streams specification, and highlights a few popular reactive libraries, including MicroProfile Reactive Messaging, SmallRye/Mutiny, Vert.x, RxJava, Spring Reactor, and RSocket.
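
At its core, the Reactive Streams specification defines just four interfaces (Publisher, Subscriber, Subscription, Processor), mirrored since Java 9 in java.util.concurrent.Flow. A minimal sketch of a subscriber that requests one item at a time:

```java
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Minimal Reactive Streams example using the JDK's Flow API: the subscriber
// controls the pace by requesting one item at a time (back pressure).
public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    this.subscription = s;
                    s.request(1);                        // ask for one item at a time
                }
                @Override public void onNext(String item) {
                    System.out.println("received: " + item);
                    subscription.request(1);             // pull the next item
                }
                @Override public void onError(Throwable t) { t.printStackTrace(); }
                @Override public void onComplete() { System.out.println("done"); }
            });
            publisher.submit("tick");
            publisher.submit("tock");
        }
        Thread.sleep(500);                               // give the async delivery a moment to finish
    }
}
```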

The takeaways for the audience will be an understanding of the key differences between reactive programming and reactive systems, the purpose of the Reactive Streams specification, and the strengths and weaknesses of each of the surveyed libraries and frameworks.

Integrating Vector DataStore into a ChatGPT-powered ChatBot Service

With ChatGPT taking center stage since the beginning of 2023, let's take a look at how we can quickly set up a ChatGPT-powered chatbot server using the "Supersonic, Subatomic" Quarkus framework, which is capable of working with several popular LLMs. We will enhance the chatbot server by integrating a Cassandra-powered Astra DB datastore, with vector similarity search capability provided by JVector.
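
As a rough sketch of the REST layer only, assuming a Quarkus 3 (jakarta.*) stack; the ChatService interface and its stub implementation are hypothetical stand-ins for the real LLM call and the Astra DB/JVector similarity search:

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// Minimal Quarkus REST resource for the chatbot; ChatService is a hypothetical
// abstraction over the LLM call plus the vector similarity search.
@Path("/chat")
public class ChatResource {

    @Inject
    ChatService chatService;

    @POST
    @Consumes(MediaType.TEXT_PLAIN)
    @Produces(MediaType.TEXT_PLAIN)
    public String chat(String question) {
        return chatService.answer(question);
    }
}

interface ChatService {
    String answer(String question);
}

// Stub implementation so the sketch is self-contained; a real version would
// embed the question, run a vector similarity search, and call the LLM.
@ApplicationScoped
class CannedChatService implements ChatService {
    @Override
    public String answer(String question) {
        return "stub answer for: " + question;
    }
}
```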

Leveraging Event Streaming to Boost LLMs with Retrieval Augmented Generation

Large pre-trained language foundation models (LLMs), such as the ones behind ChatGPT, are trained offline and have been shown to store factual knowledge in their parameters and to achieve state-of-the-art results when fine-tuned on downstream natural language processing (NLP) tasks. However, their ability to access and precisely manipulate knowledge, especially up-to-date knowledge, is still limited, and hence on knowledge-intensive tasks their performance lags behind task-specific architectures.

To overcome such limitations, Retrieval-Augmented Generation (RAG) is a technique that retrieves data from outside the foundation model and augments the prompt by injecting the relevant retrieved data into the context. RAG has proven to be more cost-effective and efficient than pre-training or fine-tuning foundation models, and can help reduce hallucinations in LLMs.
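
The pattern boils down to "retrieve first, then inject into the prompt". A minimal sketch, with hypothetical retriever and model interfaces standing in for the real vector store and LLM:

```java
import java.util.List;

// The RAG pattern in a nutshell: retrieve relevant documents, build an
// augmented prompt, then call the model. The interfaces are hypothetical.
public class RagPipeline {

    interface Retriever { List<String> retrieve(String query, int topK); }
    interface LanguageModel { String complete(String prompt); }

    private final Retriever retriever;
    private final LanguageModel llm;

    public RagPipeline(Retriever retriever, LanguageModel llm) {
        this.retriever = retriever;
        this.llm = llm;
    }

    public String answer(String question) {
        List<String> context = retriever.retrieve(question, 3);   // e.g. a vector similarity search
        String prompt = "Answer using only the context below.\n"
                + "Context:\n" + String.join("\n", context) + "\n"
                + "Question: " + question;
        return llm.complete(prompt);                              // the augmented prompt goes to the LLM
    }
}
```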

We'll take a look at how an event-driven streaming approach, using the open-source library LangStream, can quickly integrate your existing data in motion into generative AI applications, such as prompt engineering and the RAG pattern.

Enter the Brave New World of GenAI

With ChatGPT taking center stage since the beginning of 2023, developers who have not had a chance to work with any form of Artificial Intelligence or Machine Learning system may find themselves intrigued by the "maze" of new terminology; some may be eager to learn more, while a smaller group may prefer not to venture into territory that's unknown to them.
The truth is that, whether we like it or not, we have all been “thrust” into this new era of computing.

Instead of procrastinating, let's start by learning about Generative AI with this presentation. We will go over the history and evolution of AI and ML, then look at how it has evolved to where it is today. We will touch upon many of the new concepts that have popped up in the last 6-9 months, including: Generative AI (GenAI), ChatGPT, Large Language Models (LLMs), Natural Language Processing (NLP), vector DBs, and the growing importance of vector search, as well as multi-agentic workflows. We will discuss the benefits of this new wave of technology as well as the challenges that it brings to the industry and the marketplace.

Leveraging Event Streaming to Boost LLMs with Retrieval Augmented Generation

Large pre-trained language foundation models (LLMs), such as the ones behind ChatGPT, are trained offline and have been shown to store factual knowledge in their parameters and to achieve state-of-the-art results when fine-tuned on downstream natural language processing (NLP) tasks. However, their ability to access and precisely manipulate knowledge, especially up-to-date knowledge, is still limited, and hence on knowledge-intensive tasks their performance lags behind task-specific architectures.

To overcome such limitations, Retrieval-Augmented Generation (RAG) is a technique that retrieves data from outside the foundation model and augments the prompt by injecting the relevant retrieved data into the context. RAG has proven to be more cost-effective and efficient than pre-training or fine-tuning foundation models, and can help reduce hallucinations in LLMs.

We'll take a look at how an event-driven streaming approach, using a new open-source library, LangStream, can quickly integrate your existing data in motion into generative AI applications, such as prompt engineering and the RAG pattern. LangStream can be used with either Apache Kafka or Apache Pulsar underneath as the event streaming and message broker platform.

Exploring Cloud Native MLOps for GenAI Vector Search

ChatGPT has taken center stage since early this year. We will first take a look at this exciting new subfield of Generative AI, and understand what LLMs and NLP are and the challenges they present. We will also highlight the importance of vector search, and the role a vector DB plays in storing embeddings and enabling fast, index-based similarity matching and search.

MLOps—machine learning operations, or DevOps for machine learning—is the intersection of people, process, and platform for gaining business value from machine learning. It streamlines development and deployment via monitoring, validation, and governance of machine learning models. With the rapid rise in popularity of GenAI, we will explore how the operational side of things will be impacted and how MLOps differs from DevOps.

Enter the Brave New World of GenAI with Vector Search

With ChatGPT taking center stage since the beginning of 2023, developers who have not had a chance to work with any form of Artificial Intelligence or Machine Learning system may find themselves intrigued by the "maze" of new terminology; some may be eager to learn more, while a smaller group may prefer not to venture into territory that's unknown to them.

The truth is that, whether we like it or not, we have all been "thrust" into this new era of computing. Instead of procrastinating, let's start by learning about Generative AI with this presentation. We will go over the history and evolution of AI and ML, then look at how it has evolved to where it is today. We will touch upon many of the new concepts that have popped up in the last 6-9 months, including: Generative AI (GenAI), ChatGPT, Large Language Models (LLMs), Natural Language Processing (NLP), vector DBs, and the growing importance of vector search. We will also point out the new operational concerns when it comes to managing the life cycle of a machine learning environment. We will then look at a demo of how vector search is done behind the scenes. We will discuss the benefits of this new wave of technology as well as the challenges that it brings to the industry and the marketplace.
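
Behind the scenes, a vector search ranks candidates by a simple similarity measure; here is a minimal sketch of cosine similarity between two embedding vectors (the sample vectors are illustrative only):

```java
// Cosine similarity between two embedding vectors: dot product divided by
// the product of the vector norms.
public class CosineSimilarity {

    public static double cosine(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        float[] query = {0.1f, 0.7f, 0.2f};
        float[] doc   = {0.2f, 0.6f, 0.1f};
        System.out.printf("similarity = %.3f%n", cosine(query, doc));
    }
}
```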

Building a real-time analytics dashboard with Streamlit, Apache Pinot, and Apache Pulsar

When you hear "decision maker", it's natural to think, "C-suite", or "executive". But these days, we're all decision-makers. Restaurant owners, bloggers, big box shoppers, diners - we all have important decisions to make and need instant actionable insights. In order to provide these insights to end-users like us, businesses need access to fast, fresh analytics.

In this session we will learn how to build our own real-time analytics application on top of a streaming data source using Apache Pulsar, Apache Pinot, and Streamlit. Pulsar is a distributed, open source pub-sub messaging and streaming platform for real-time workloads, Pinot is an OLAP database designed for ultra-low-latency analytics, and Streamlit is a Python-based tool that makes it super easy to build data-based apps.

After introducing each of these tools, we’ll stream data into Pulsar using its Python client, ingest that data into a Pinot real-time table, and write some basic queries using Pinot’s Python SDK. Once we've done that, we’ll bring everything together with an auto refreshing Streamlit dashboard so that we can see changes to the data as they happen. There will be lots of graphs and other visualisations!

This session is aimed at application developers and data engineers who want to quickly make sense of streaming data.

Reactive for the impatient

As Java is an object-oriented language that inherently supports the imperative programming style, asynchronicity presents a challenge that can turn the code into nightmare. One way to deal with the complexity of asynchronicity is to introduce reactivity onto the coding level (reactive programming), and/or to handle it on the design and architecture level (reactive systems design).

This talk presents to the audience a few of the major Java-based reactive frameworks and toolkits in the market today, such as RxJava, Spring Reactor, Akka, and Vert.x. It will start by going over the basic tenets of reactive systems and some examples of the problems that these systems aim to solve. It will discuss the two most commonly used Java frameworks for implementing reactive coding - RxJava and Spring Reactor - and will show some code samples. It will then bring the audience to the next level of "reactivity" by introducing two reactive frameworks - Akka and Vert.x - which are usually used for implementing reactive microservices. It will draw some comparisons between these two frameworks and cite some real-life examples of their usage.
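
As a flavor of the code samples, here is the same small pipeline written in RxJava 3 and in Spring Reactor; both implement the Reactive Streams Publisher contract (the pipeline itself is illustrative only):

```java
import io.reactivex.rxjava3.core.Flowable;
import reactor.core.publisher.Flux;

// The same three-step pipeline expressed in RxJava 3 and in Spring Reactor.
public class ReactiveComparison {
    public static void main(String[] args) {
        // RxJava 3
        Flowable.range(1, 5)
                .map(i -> i * 10)
                .filter(i -> i > 20)
                .subscribe(i -> System.out.println("rxjava: " + i));

        // Spring Reactor
        Flux.range(1, 5)
            .map(i -> i * 10)
            .filter(i -> i > 20)
            .subscribe(i -> System.out.println("reactor: " + i));
    }
}
```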

The takeaways for the audience will be an understanding of the key differences between reactive programming and reactive systems, and the strengths and weaknesses of each of the surveyed frameworks.

The Art and Benefits of Computing Eventfully

The term "event" has become overloaded in the computing world. Event streaming, event processing, event messaging, event sourcing, event storming, event-driven architecture, and so on, all represent different aspects of eventful computing. We will examine each of these terms in detail, and explain - with a few illustrated code examples - why these emerging concepts are rising in importance and the problems that they are trying to solve. We will also discuss variations and evolutions of other event-based concepts that are being applied to reactive systems, serverless apps, and stateful as well as stateless microservices. Just like anything else, there are challenges, but the benefits of the event-based approach should outweigh any obstacles in the long run, and the approach will prove itself to be a viable and dynamic solution for today's modern, heavily data-driven systems.
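
To make one of these terms concrete, here is a toy sketch of event sourcing: state is never updated in place, it is derived by replaying an append-only event log (the account example is illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

// Toy event-sourced aggregate: the event log is the source of truth and the
// current balance is derived by folding over it.
public class EventSourcedAccount {

    sealed interface Event permits Deposited, Withdrawn {}
    record Deposited(long amount) implements Event {}
    record Withdrawn(long amount) implements Event {}

    private final List<Event> log = new ArrayList<>();

    public void append(Event event) {
        log.add(event);                        // events are only ever appended, never mutated
    }

    public long balance() {
        long balance = 0;
        for (Event e : log) {                  // current state = replay of the event log
            if (e instanceof Deposited d) balance += d.amount();
            else if (e instanceof Withdrawn w) balance -= w.amount();
        }
        return balance;
    }

    public static void main(String[] args) {
        EventSourcedAccount account = new EventSourcedAccount();
        account.append(new Deposited(100));
        account.append(new Withdrawn(30));
        System.out.println(account.balance()); // 70
    }
}
```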

Processing Real-Time Trading Data with Event Streaming

The world is moving at an unprecedented pace, and much of it has been powered by innovations in software and systems. While event handling, messaging, and processing are not necessarily brand new concepts, recent advances in hardware, such as virtualization and multi-core processors, are in fact pushing the envelope in software design and Kubernetes-native development, elevating it to levels of capability never seen before. In the case of streaming, which very often leverages the underlying messaging mechanism(s) to serve higher-level purposes such as financial/trading systems, IoT edge applications, and AI/ML data pipelines, the event streaming platform has indeed become the "glue" that enables data to flow through disparate systems in the pipeline in a very dynamic fashion. We will build a simple "trade matching engine" to simulate the buy and sell sides of an exchange using Apache Pulsar pub/sub and Quarkus.
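
A minimal sketch of the pub/sub backbone of such a matching engine, using the Apache Pulsar Java client; the topic, subscription, and order format are illustrative only:

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

// Orders are published to a Pulsar topic and consumed by the matcher.
public class OrderFlow {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic("orders")
                .create();
        producer.send("BUY,ACME,100,25.50");
        producer.send("SELL,ACME,100,25.45");

        Consumer<String> matcher = client.newConsumer(Schema.STRING)
                .topic("orders")
                .subscriptionName("matching-engine")
                .subscribe();
        for (int i = 0; i < 2; i++) {
            Message<String> msg = matcher.receive();
            System.out.println("matching: " + msg.getValue());   // a real engine would pair buys and sells here
            matcher.acknowledge(msg);
        }
        client.close();
    }
}
```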

Leveraging Event Streaming to Super-Charge your Business Solutions

The world is moving at an unprecedented pace, and much of it has been powered by innovations in software and systems. While event handling, messaging, and processing are not necessarily brand new concepts, recent advances in hardware, such as virtualization and multi-core processors, are in fact pushing the envelope in software design and development, elevating it to levels of capability never seen before. In the case of streaming, which very often leverages the underlying messaging mechanism(s) to serve higher-level purposes such as IoT/IIoT applications, AI/ML data pipelines, or even eCommerce recommendations, the event streaming platform has indeed become the "glue" that enables data to flow through disparate systems in the pipeline in a very dynamic fashion.

This talk on event streaming is meant for anyone interested in learning about it, understanding how it fits into modern software design and architecture, and seeing some of the challenges it faces, especially in the cloud native environment. We'll then take a look at an open source platform - Apache Pulsar - which is poised to become the de facto new generation of distributed messaging and streaming platform, one that will bring joy to developers and enable systems and applications to be highly responsive with its true real-time capabilities.

Event Streaming in the Cloud Native World with Apache Pulsar

The event streaming platform has become the "glue" that enables data to flow through disparate systems in the pipeline in a very dynamic fashion. This talk on event streaming is meant for anyone interested in learning about it, understanding how it fits into modern software design and architecture, and seeing some of the challenges it faces, especially in the cloud native environment. We'll then take a look at an open source platform - Apache Pulsar - which is poised to become the de facto new generation of distributed messaging and streaming platform, one that will bring joy to developers and enable systems and applications to be highly responsive with its true real-time capabilities.

Event-Driven Change Data Capture Pattern using Apache Pulsar

Change Data Capture (CDC) is a set of software design patterns used in databases or data repositories for determining, capturing, and tracking changes to data, so that actions can be taken accordingly based on the changed data. With Apache Pulsar's unique capability of unifying enterprise messaging systems, we will take a look at how we can leverage Pulsar to apply the CDC pattern to a database, such as Cassandra, to track the changes that occur in the data. This talk will go into some of the details of how Apache Pulsar can help with the CDC pattern. We will demonstrate how to implement a Pulsar stream on Apache Cassandra using DataStax's Astra cloud platform, and build a data pipeline from Cassandra that transfers the data to an output sink, such as Astra DB.

Building An Efficient Streaming Data Pipeline with Apache Cassandra and Apache Pulsar

Event streaming is one of the most important software technologies of the current computing era, as it enables systems to process huge volumes of data at blazingly high speed and in real time. It is indeed the "glue" that enables data to flow through the disparate systems and pipelines that are typical in cloud environments. Leveraging the pub/sub pattern for the message flow, and designed with the cloud in mind, Apache Pulsar has emerged as a powerful distributed messaging and event streaming platform in recent years. With its flexible and decoupled messaging style, it can integrate and work well with many other modern-day libraries and frameworks.

In this workshop, we will build a modern, efficient streaming data pipeline using Apache Pulsar and Apache Cassandra. Apache Pulsar will handle the data ingest. The external data that comes in will be further processed by Pulsar Functions, which will in turn reference tables in Cassandra as the data lookup sources. The transformed data will then be egressed and sent to an Astra DB sink. We will also examine how we can further optimize the entire processing pipeline.
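
A minimal sketch of the transformation step as a Pulsar Function; the Cassandra lookup is a hypothetical placeholder for a real query via the DataStax Java driver:

```java
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Enriches each incoming record with a value looked up from Cassandra before
// the result is routed onward (e.g. to the Astra DB sink).
public class EnrichmentFunction implements Function<String, String> {

    @Override
    public String process(String input, Context context) {
        String enrichment = lookupFromCassandra(input);   // hypothetical: query a Cassandra lookup table
        return input + "|" + enrichment;
    }

    // Placeholder: a real function would use the DataStax Java driver here.
    private String lookupFromCassandra(String key) {
        return "reference-data-for-" + key;
    }
}
```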

Building a real-time analytics application with Apache Pulsar and Apache Pinot

Apache Pulsar is a distributed, open source pub-sub messaging and streaming platform for real-time workloads, managing hundreds of billions of events per day. It is run in production, processing millions of messages per second across millions of topics, and has been adopted by companies such as Yahoo!, Verizon Media, Splunk, and more.

In this talk we'll learn how analytical queries can be run on top of Pulsar's event data with Apache Pinot, a real-time distributed OLAP datastore, which is used to deliver scalable real-time analytics with low latency.

We'll explore the integration between Pulsar and Pinot, explaining the features that it supports and the challenges faced while building it.

After that we'll demonstrate how to build a real-time analytics dashboard with these technologies. We’ll stream data into Pulsar using its Python client, ingest that data into a Pinot real-time table, and write some basic queries using Pinot’s Python SDK. Once we've done that, we’ll bring everything together with an auto refreshing dashboard using Plot.ly Dash, so that we can see changes to the data as they happen.

Best of Both Worlds: Apache Pulsar and Apache Kafka

Which streaming technology is right for me? Do I need to use Apache Pulsar or do I choose Apache Kafka?

That is mostly the question, but did you also know that you can combine them?

In this session we will show how to use the best of both worlds. Let's compare both architectures and their Java client implementations, and decide what's best for you!

Do you pick the battle? Or are you making allies?
It's up to you!

Modernizing Jakarta/JMS Messaging system with Apache Pulsar

In this session we will briefly describe Apache Pulsar and Jakarta Messaging (JMS). We will see how Apache Pulsar concepts map to the Jakarta Messaging specification. You will also see how to connect a Jakarta EE application to Pulsar just by dropping a resource adapter into your application server, with basically zero code changes.
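
A minimal sketch of what "zero code changes" means in practice: the bean below is standard Jakarta Messaging code, and only the injected ConnectionFactory (provided by the Pulsar resource adapter, under illustrative JNDI names) changes underneath:

```java
import jakarta.annotation.Resource;
import jakarta.ejb.Stateless;
import jakarta.jms.ConnectionFactory;
import jakarta.jms.JMSContext;
import jakarta.jms.Queue;

// Plain Jakarta Messaging producer; the resource adapter backs the injected
// ConnectionFactory with Pulsar. JNDI names are illustrative.
@Stateless
public class OrderNotifier {

    @Resource(lookup = "java:/jms/PulsarConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "java:/jms/queue/orders")
    private Queue ordersQueue;

    public void notifyOrder(String payload) {
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer().send(ordersQueue, payload);
        }
    }
}
```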

Operational Benefits of the Event-Driven Cloud Native Apache Pulsar

Apache Pulsar has taken enterprise event-driven messaging and streaming systems to new heights, unifying message streaming, queueing, mediation, and transformation on the same platform. Best of all, Pulsar is capable of operating efficiently in cloud native environments and has freed developers from having to worry as much about typical concerns such as scalability and resiliency. Apache Pulsar was born in the early 2010s at Yahoo, when the company had to handle a massive amount of data flowing in and out every day across its global services. Yahoo donated the project to the Apache Software Foundation in 2016, and it became a top-level project in 2018. Since then we have seen steady growth in its adoption, because Pulsar was designed from the beginning to address auto-scaling, geo-replication, and multi-tenancy concerns, in addition to its event-driven messaging, streaming, and transformation capabilities. This talk will introduce the audience to Apache Pulsar and demo its usage, including a use case of it running on the Apache Cassandra cloud platform: Astra by DataStax.

Retrofit your Java App with a Reactive Flow Pipeline

Legacy applications that were developed in bygone days may appear to be close to unsalvageable. In reality, these applications are still running in production and carrying out the important day-to-day missions for their respective companies. After all, companies have spent a considerable amount of time and money on developing those applications, and despite the lack of perfection, these applications nonetheless keep their companies in operation. How about if we re-design the system, identify pieces of the complex business functionality in the legacy system that can potentially be "recycled", and retrofit them into a new system that leverages the power of a reactive data flow pipeline?

This presentation will be a lively discussion with hands-on coding to illustrate how to construct a reactive, event-driven data flow pipeline, composed of different library implementations of the Reactive Streams specification and their integration with Kafka Streams.
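
As a minimal sketch of one stage of such a pipeline on the Kafka Streams side (topic names and the transformation are illustrative stand-ins for the recycled business logic):

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

// Read legacy events, apply the recycled business rule, publish the result.
public class RetrofitPipeline {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> legacyEvents = builder.stream("legacy-orders");
        legacyEvents
                .mapValues(value -> value.toUpperCase())        // stand-in for the recycled business rule
                .to("modern-orders");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "retrofit-pipeline");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```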

Paradigm shift from imperative to reactive in Java

In this session, we will take a look at the differences between the traditional, imperative approach and the functional, reactive, or declarative approach to programming. We will start with some basic concepts of each of these approaches, then work through a few code samples to illustrate the differences. We will also discuss their pros and cons, and when to use which style.
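
As a preview, here is the same computation written three ways - imperative, declarative (java.util.stream), and reactive (Reactor); the example itself is illustrative only:

```java
import java.util.List;
import reactor.core.publisher.Flux;

// Sum all prices above 10, expressed in three styles.
public class StylesComparison {
    public static void main(String[] args) {
        List<Integer> prices = List.of(12, 45, 7, 30);

        // Imperative: we say how, step by step, mutating an accumulator.
        int imperativeTotal = 0;
        for (int price : prices) {
            if (price > 10) {
                imperativeTotal += price;
            }
        }
        System.out.println(imperativeTotal);

        // Declarative: we say what, as a pipeline of transformations.
        int declarativeTotal = prices.stream()
                .filter(p -> p > 10)
                .mapToInt(Integer::intValue)
                .sum();
        System.out.println(declarativeTotal);

        // Reactive: the same pipeline shape, but values are pushed to the
        // subscriber and could just as well arrive asynchronously over time.
        Flux.fromIterable(prices)
            .filter(p -> p > 10)
            .reduce(0, Integer::sum)
            .subscribe(System.out::println);
    }
}
```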

Exploring Stateful Microservices in the Cloud Native World

How does one choose to architect a system that has microservice / REST API endpoints? There are many solutions out there, and some are better than others. Should state be held in a server-side component, or externally? Generally we are told this is not good practice for a cloud native system, since the 12-factor guidelines seem to be all about stateless containers - but is that really the case? It's unclear, and this confusion may lead to poor technology stack choices that are impossible or extremely hard to change later on as your system evolves in terms of demand and performance.

While stateless systems are easier to work with, the reality is that we live in a stateful world, so we have to handle the state of data accordingly to ensure data integrity beyond securing it.

We will examine and demonstrate the fundamentals of a Cloud Native system with Stateful Microservices that’s built with Open Liberty in Kubernetes:

* Microservices/REST API – options to use when running your apps in the JVM, including the Saga programming pattern and the Long-Running Actions (LRA) feature of MicroProfile (a minimal LRA sketch follows this list)

* Concurrency – how to take advantage of multi-core CPUs and clustered distributed systems

* Stateful vs Stateless - while stateless apps are easier to implement, the bulk of the apps in production are stateful, which involves a higher level of complexity and risk, especially when data needs to travel across multiple machines and network boundaries

* Deployment – how about containerization and orchestration using Kubernetes?
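
A minimal sketch of the Saga/LRA item above, assuming MicroProfile LRA with the jakarta.ws.rs namespace; paths and business logic are illustrative only:

```java
import java.net.URI;
import jakarta.ws.rs.HeaderParam;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.PUT;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;

// The order endpoint joins a long-running action; the compensate method is
// called back if the saga is cancelled.
@Path("/orders")
public class OrderResource {

    @POST
    @LRA(LRA.Type.REQUIRED)                       // start (or join) a long-running action
    public Response placeOrder(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                               String order) {
        // reserve stock, charge payment, etc., all associated with lraId
        return Response.ok("order accepted under " + lraId).build();
    }

    @PUT
    @Path("/compensate")
    @Compensate                                   // invoked if the LRA is cancelled
    public Response compensate(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        // undo the work done for this lraId (release stock, refund, ...)
        return Response.ok().build();
    }
}
```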

Deploying a Modern Serverless Reactive Microservice to the Cloud

We have been hearing a lot about the benefits of using the reactive approach to solving concurrency problems in distributed systems. While reactive programming refers to the implementation techniques used at the coding level, on the systems deployment and runtime level we can leverage a robust yet very flexible and lightweight framework such as Vert.x to deliver. In this session, we will first learn about the missions of a reactive system, which, among many things, include handling multiple concurrent data stream flows, controlling back pressure, and managing errors in an elegant manner. The very loosely-coupled nature of a reactive system also lends itself very well to building microservices that can communicate well within its messaging infrastructure. We will also discuss the special polyglot nature of Vert.x, its event loop, and its use of the Verticle model. Live coding will accompany this session to illustrate how to program a simple use case using multiple JVM languages such as Java and Kotlin; we will then build and dockerize it to be deployed as a serverless container to a Knative cluster in the cloud in a delightful manner.
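
A minimal sketch of a Vert.x verticle, the unit that the session containerizes and deploys to Knative; the HTTP endpoint is illustrative only:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// A non-blocking HTTP endpoint handled on the Vert.x event loop.
public class GreetingVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("hello from the event loop"))
             .listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new GreetingVerticle());
    }
}
```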

Growing the Technical Community

Development work is a very complicated activity, especially for large projects with multiple teams. Even with just one team of multiple developers, things can get out of hand when people do not collaborate well or someone refuses to cooperate. As much as the coding work and all of the other related tasks are technical in nature - and as developers we run into technical issues that cause us headaches - we are also working with other developers and folks from other functional areas in a social environment, which can sometimes (or for some of us, often) be the source of our frustrations. While we are all working to make a living, it is also in our nature to want to keep learning and growing professionally. This talk shares the experience I have gained as the organizer of the 3000+ member Chicago Java Users Group. I started as the meeting director in 2017, and this year, 2020, I was voted in as president; even with the untimely pandemic situation, CJUG's community is still growing in numbers. What are some of the strategies I use to grow the community? I would love to share my experiences with the audience.

Reactive in Practice

So you're a Developer and you've heard of Reactive and are excited to see how you can use its guiding principles to make your application more responsive and resilient. In this workshop we'll take you through step by step how to transform your application into a truly reactive system that gracefully handles failure, elastically distributes resources and is responsive to your end users.

With the advances in hardware, containerization, and virtualization technology within the past decade, software such as reactive systems is catching up to take advantage of them. Implementing reactive systems and writing code using the reactive approach may sound difficult, but we will illustrate that this can be done in a very manageable manner. While a reactive system is not a "one-size-fits-all" cure for all problems, it does look promising as a solution to computing challenges such as system responsiveness, resiliency, and scalability.

This workshop will provide guidance on how to transform a legacy, monolithic, on-premises application into a modern, highly responsive, microservices- and cloud-based system. We will utilize a series of "Quick Labs" from the IBM Developer resource library (https://developer.ibm.com/) and work through building a few sample microservices together using an open-source reactive library (such as Eclipse MicroProfile or Vert.x).

Thirst-Quenching Streams for the Reactive Mind

With the advances in multicore hardware and virtualization technologies, and the demand for highly responsive, resilient, and elastic systems and increasingly sophisticated applications, an array of reactive data stream processing libraries has emerged to address these needs. Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure. It encompasses efforts aimed at runtime environments, including the JVM and JavaScript, as well as network protocols. So how do the various library implementations of Reactive Streams, such as Spring Reactor, Reactive Extensions (Rx)'s Observables, and RSocket, stack up against each other?


This presentation will go into some detail on how streams leverage the underlying multicore processors to achieve parallelism. It will then explain the push vs. the pull streaming model, and use a simple use case with code examples to illustrate the different API usages, as well as runtime processing analysis, across a few popular Java implementations of Reactive Streams.
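
As a preview of the parallelism discussion, here is a minimal Reactor sketch that fans work out across CPU cores and merges the results back into one sequence:

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

// Split the flux into rails, run each rail on the parallel scheduler, then
// merge the rails back into a single sequence.
public class ParallelStreamDemo {
    public static void main(String[] args) throws InterruptedException {
        Flux.range(1, 8)
            .parallel()                              // one rail per CPU core by default
            .runOn(Schedulers.parallel())            // execute each rail on the parallel scheduler
            .map(i -> {
                System.out.println(Thread.currentThread().getName() + " -> " + i);
                return i * i;
            })
            .sequential()                            // merge the rails back together
            .subscribe(result -> System.out.println("result: " + result));

        Thread.sleep(1000);                          // keep the JVM alive for the async work
    }
}
```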

Reactive in Practice - Transforming a Legacy Application

So you're a Developer and you've heard of Reactive and are excited to see how you can use its guiding principles to make your application more responsive and resilient. In this workshop we'll take you through step by step how to transform your application into a truly reactive system that gracefully handles failure, elastically distributes resources and is responsive to your end users.

With the advances in hardware, containerization, and virtualization technology within the past decade, software such as reactive systems is catching up to take advantage of them. Implementing reactive systems and writing code using the reactive approach may sound difficult, but we will illustrate that this can be done in a very manageable manner. While a reactive system is not a "one-size-fits-all" cure for all problems, it does look promising as a solution to computing challenges such as system responsiveness, resiliency, and scalability.

This workshop will provide guidance on how to transform a legacy, monolithic, on-premises application into a modern, highly responsive, microservices- and cloud-based system. We will utilize the IBM Stock Trader application (https://developer.ibm.com/blogs/introducing-stocktrader/) and work through building a few sample microservices together using an open-source reactive library (such as Eclipse Vert.x or RxJava).

We will first have a quick study of the Stock Trader application in its legacy monolithic form, which can only be deployed as an on-premises app. This will help us understand the common issues facing many legacy applications today.

Next, we will discuss how to redesign the monolithic Stock Trader application and break up the different components of the system into microservices, using a reactive approach where appropriate.

Participants will then get some hands-on reactive implementation experience by selecting one or more microservices and working on their implementations with guided examples.

Participants will also be able to deploy their transformed application to Minishift or OpenShift on an open cloud platform (such as the IBM Cloud). For ease of deployment, the workshop materials will include a fully implemented solution, so that participants can utilize any of the sample microservices they need, or simply take the entire sample application, in order to try out the cloud deployment.

Thirst-quenching Streams for the Reactive Mind

With the advances in multicore hardware and virtualization technologies, and the demand for highly responsive, resilient, and elastic systems and increasingly sophisticated applications, an array of reactive data stream processing libraries has emerged to address these needs. Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure. It encompasses efforts aimed at runtime environments, including the JVM and JavaScript, as well as network protocols. So how do the various library implementations of Reactive Streams, such as Spring Reactor, Reactive Extensions (Rx)'s Observables, and Akka Streams, stack up against each other?

This presentation will go into some detail on how streams leverage the underlying multicore processors to achieve parallelism. It will then explain the push vs. the pull streaming model, and use a simple use case with code examples to illustrate the different API usages, as well as runtime processing analysis, across a few popular Java implementations of Reactive Streams.

Teach your Pacman to play with Machine Learning and Reactive Streams

Today the adoption of Machine Learning is enormous. We use it almost everywhere: on clusters, on our phones, on watches, and even on teapots. But who knows how challenging the development of continuously improved (or even real-time) machine learning can be? Server capacity, stability, and performance, along with proper ML algorithm selection and many other properties, must be taken into account. In this presentation, we will demonstrate training bots for a multiplayer Pacman game. We will show how elegant the solution to such a problem can be with Kafka, Akka, RSocket, and Reinforcement/Q-Learning.
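
For reference, the heart of tabular Q-learning is a one-line value update; a minimal sketch (the state/action encoding, the environment, and the alpha/gamma values are illustrative, not the presenters' exact setup):

```java
// Tabular Q-learning value update:
// Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
public class QLearning {
    static final double ALPHA = 0.1;   // learning rate
    static final double GAMMA = 0.9;   // discount factor

    static void update(double[][] q, int state, int action, double reward, int nextState) {
        double bestNext = Double.NEGATIVE_INFINITY;
        for (double value : q[nextState]) {
            bestNext = Math.max(bestNext, value);
        }
        q[state][action] += ALPHA * (reward + GAMMA * bestNext - q[state][action]);
    }

    public static void main(String[] args) {
        double[][] q = new double[4][4];   // 4 toy states x 4 actions (up/down/left/right)
        update(q, 0, 1, 5.0, 2);
        System.out.println(q[0][1]);       // 0.5 after one update
    }
}
```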

Co-presented with international speaker Oleh Dokuka. First presented at Codemotion Berlin 2019; subsequent presentations include JFuture Minsk 2019 (voted second favorite by the audience).

Event Messaging and Streaming with Apache Pulsar

The world is moving at an unprecedented pace, and much of it has been powered by innovations in software and systems. While event handling, messaging, and processing are not necessarily brand new concepts, recent advances in hardware, such as virtualization and multi-core processors, are in fact pushing the envelope in software design and development, elevating it to levels of capability never seen before. In the case of streaming, which very often leverages the underlying messaging mechanism(s) to serve higher-level purposes such as IoT/IIoT applications, AI/ML data pipelines, or even eCommerce recommendations, the event streaming platform has indeed become the "glue" that enables data to flow through disparate systems in the pipeline in a very dynamic fashion.

This talk on event streaming is meant for anyone interested in learning about it, understanding how it fits into modern software design and architecture, and seeing some of the challenges it faces, especially in the cloud native environment. We'll then take a look at an open source platform - Apache Pulsar - which is poised to become the de facto new generation of distributed messaging and streaming platform, one that will bring joy to developers and enable systems and applications to be highly responsive with its true real-time capabilities.

Jfokus 2025 Sessionize Event Upcoming

February 2025 Stockholm, Sweden

CodeMash 2025 Sessionize Event Upcoming

January 2025 Sandusky, Ohio, United States

KCDC 2024 Sessionize Event

June 2024 Kansas City, Missouri, United States

JNation 2024 Sessionize Event

June 2024 Coimbra, Portugal

Community Over Code EU 2024 Sessionize Event

June 2024 Bratislava, Slovakia

JCON EUROPE 2024 Sessionize Event

May 2024 Köln, Germany

Microsoft JDConf 2024 Sessionize Event

March 2024

JChampions Conference 2024 Sessionize Event

January 2024

WeAreDevelopers Live 2024 (Season 7) Sessionize Event

January 2024

CodeMash 2024 Sessionize Event

January 2024 Sandusky, Ohio, United States

Open Source Analytics Conference 2023 Sessionize Event

December 2023

Cassandra Summit + AI.dev 2023 Sessionize Event

December 2023 San Jose, California, United States

JCON WORLD 2023 Sessionize Event

November 2023

2023 All Day DevOps Sessionize Event

October 2023

DeveloperWeek CloudX 2023 Sessionize Event

August 2023 San Mateo, California, United States

KCDC 2023 Sessionize Event

June 2023 Kansas City, Missouri, United States

JCON EUROPE 2023 Sessionize Event

June 2023 Köln, Germany

JNation 2023 Sessionize Event

June 2023 Coimbra, Portugal

Pulsar Virtual Summit Europe 2023 Sessionize Event

May 2023

TEQnation 2023 Sessionize Event

May 2023 Utrecht, The Netherlands

WeAreDevelopers Live 2023 (Season 5) Sessionize Event

April 2023

Devnexus 2023 Sessionize Event

April 2023 Atlanta, Georgia, United States

JChampions Conference 2023 Sessionize Event

January 2023

CodeMash 2023 Sessionize Event

January 2023 Sandusky, Ohio, United States

Pulsar Summit Asia 2022 Sessionize Event

November 2022

2022 All Day DevOps Sessionize Event

November 2022

Build Stuff 2022 Lithuania Sessionize Event

November 2022 Vilnius, Lithuania

Momentum 2022 Sessionize Event

October 2022 Cincinnati, Ohio, United States

Real Time Analytics Summit Sessionize Event

September 2022 San Francisco, California, United States

JConf.dev 2022 Sessionize Event

September 2022 Chicago, Illinois, United States

JCON 2022 ONLINE (virtual) Sessionize Event

September 2022

Code PaLOUsa 2022 Sessionize Event

August 2022 Louisville, Kentucky, United States

KCDC 2022 Sessionize Event

August 2022 Kansas City, Missouri, United States

WeAreDevelopers World Congress 2022 Sessionize Event

June 2022 Berlin, Germany

JNation 2022 Sessionize Event

June 2022 Coimbra, Portugal

CloudWorld 2022 Sessionize Event

February 2022 Oakland, California, United States

CodeMash 2022 Sessionize Event

January 2022 Sandusky, Ohio, United States

Build Stuff 2021 Lithuania Sessionize Event

November 2021 Vilnius, Lithuania

DeveloperWeek Austin 2021 Sessionize Event

November 2021

2021 All Day DevOps Sessionize Event

October 2021

JCON 2021 Sessionize Event

October 2021

KCDC 2021 Sessionize Event

September 2021 Kansas City, Missouri, United States

Kafka Summit APAC 2021 Sessionize Event

July 2021

JNation 2021 Sessionize Event

June 2021

TECH(K)NOW Day on INTERNATIONAL WOMEN'S DAY Sessionize Event

March 2021

DeveloperWeek 2021 Sessionize Event

February 2021 Oakland, California, United States

Build Stuff 2020 Lithuania Sessionize Event

November 2020

DevFest WI 2020 Sessionize Event

October 2020

All The Talks Sessionize Event

April 2020

CodeMash 2020 Sessionize Event

January 2020 Sandusky, Ohio, United States

Vermont Code Camp 11 (2019) Sessionize Event

September 2019 Burlington, Vermont, United States

KCDC 2019 Sessionize Event

July 2019 Kansas City, Missouri, United States

Beer City Code 2019 Sessionize Event

May 2019 Grand Rapids, Michigan, United States

CodeStock 2019 Sessionize Event

April 2019 Knoxville, Tennessee, United States

Devnexus 2019

Reactive for the Impatient

March 2019 Atlanta, Georgia, United States

Google DevFest Florida 2019

Enterprise Hybrid Mobile App Development

January 2019 Orlando, Florida, United States

Devoxx Ukraine

Enterprise Hybrid Mobile App Development

November 2018 Kyiv, Ukraine

Google DevFest Hong Kong 2018

What to do when your boss needs a native mobile app within 24 hours?

November 2018 Hong Kong

Oracle Code One

What to do when your boss needs a native mobile app within 24 hours?

October 2018 San Francisco, California, United States

Devnexus 2018

Enterprise Hybrid Mobile App Development

February 2018 Atlanta, Georgia, United States

JavaOne

How to run a Successful Java Users Group

October 2017 San Francisco, California, United States
