Session

Adding Generative AI to Real-Time Streaming Pipelines

In this talk I walk through several use cases where bringing real-time data to LLMs solves some interesting problems.

In one case, we use Apache NiFi to provide a live chat between a person in Slack and several LLM models, all orchestrated via NiFi and Kafka. In another, NiFi ingests live travel data and feeds it to Hugging Face and Ollama LLM models for summarization. I also demo a live chatbot. We also augment LLM prompts and results with live data streams, all with ASF projects. I call this pattern FLaNK AI.
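The prompt-augmentation step described above can be sketched in a few lines: a live record (for example, a travel event that NiFi consumed from Kafka) is injected into the prompt before it reaches the model. This is a minimal, hypothetical illustration, not the talk's actual NiFi flow; the function and field names are assumptions.

```python
def augment_prompt(question: str, live_record: dict) -> str:
    """Prepend live streaming context to a user question before the LLM call."""
    # Render each field of the live record as a "key: value" line.
    context = "\n".join(f"{key}: {value}" for key, value in live_record.items())
    return (
        "Use the following live data to answer.\n"
        "--- live context ---\n"
        f"{context}\n"
        "--- end context ---\n"
        f"Question: {question}"
    )

# Example record, standing in for an event ingested from a travel feed.
record = {"route": "I-95 South", "status": "delayed", "delay_minutes": 25}
prompt = augment_prompt("Should I leave now?", record)
print(prompt)
```

The augmented prompt would then be sent to the model (Hugging Face, Ollama, etc.) by the downstream processor.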

https://github.com/tspannhw/FLaNK-HuggingFace-BLOOM-LLM
https://medium.com/@tspann/mixtral-generative-sparse-mixture-of-experts-in-dataflows-59744f7d28a9
https://medium.com/@tspann/building-an-llm-bot-for-meetups-and-conference-interactivity-c211ea6e3b61

Timothy Spann

Principal Developer Advocate for Data in Motion @ Cloudera

Princeton, New Jersey, United States

