Jayita Bhattacharyya
Hackathon Wizard | Official Code-breaker | Generative AI | Machine Learning | Software Engineer | Traveller
Bengaluru, India
Passionate about the AI/ML space and keen to adopt new technologies to solve real-world problems. My current focus is generative AI: along with my team, I help customers incorporate AI into software engineering.
Commudle account - https://www.commudle.com/users/jayita13
Invited Speaker Events:
* Analytics Vidhya Datahour - https://community.analyticsvidhya.com/c/datahour/rag-brag-supercharging-with-groq
* Analytics Vidhya DataHack Summit 2024 - https://www.analyticsvidhya.com/datahacksummit/sessions/jayita-bhattacharyya
* Google Cloud Community Days Kolkata 2024 - https://ccd2024.gdgcloudkol.org/speakers
* Infosys TechCohere WIT Q2-FY25
* GDG Durgapur Developer Summit 2024 - https://gdg.community.dev/events/details/google-gdg-durgapur-presents-developer-summit-durgapur-2024/
* GDSC AOT Build with AI 2024 - https://gdg.community.dev/events/details/google-gdg-on-campus-academy-of-technology-hooghly-india-presents-build-with-ai/
* GDG Delhi DevFest 2024 - https://www.commudle.com/users/jayita13
* GDSC IIIT Kalyani Build with AI 2024 - https://gdg.community.dev/events/details/google-gdg-on-campus-ideal-institute-of-engineering-kalyani-india-presents-build-with-ai/
* GDG Siliguri DevFest 2024 - https://gdg.community.dev/events/details/google-gdg-siliguri-presents-devfest-siliguri-2024/
* GDG Mangalore DevFest 2024 - https://gdg.community.dev/events/details/google-gdg-cloud-mangalore-presents-devfest-mangalore-2024/
* ROAD to AI Community Day 2024 - https://www.commudle.com/communities/machine-learning-kolkata/events/road-to-ai-community-day
I have taken part in multiple hackathons to build end-to-end solutions, winning some and learning from all, and gained a great deal of experience interacting and networking with people of diverse mindsets.
* ATMECS Gen AI Hackathon 2024 2nd runner-up
* TIU AI Unite Hackathon 2024 Finalist
* Unisys UHackHive 2024 Judge's Choice Award
* M&G Global innovAIte knacktohack 2024 Finalist
* Infosys Data For AI 2024 Winner
* Informatica Data Engineering 2024 Winner
* Google GenAI APAC top 50 teams
* Lablab.ai Vectara Unhallucinate Challenge 1st runner-up
* Intel OneAPI Hackathon 2023 2nd runner-up
* TBO Voyagehacks 2023 Finalist
* Cyient CyientifIQ 2023 Finalist
* KPMG world after Covid 1st runner-up
* Intel GenAI hackathon 2023 Finalist
* Kaggle AI report competition 2023 top 7%
* H2S TPF GenAI rush buildathon 2023 Finalist
Area of Expertise
Topics
Securing LLM Apps with Guardrails AI
As AI technologies become increasingly integrated into various sectors, the potential risks associated with their deployment, such as generating inaccurate information, exposing sensitive data, and executing unreliable actions, have garnered significant attention. Guardrails AI addresses these concerns with a suite of tools designed to enhance the safety and reliability of AI applications, providing real-time hallucination detection, more reliable AI agents, and prevention of sensitive data leaks.
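To give a flavour of how this looks in practice, here is a minimal sketch using the guardrails-ai package with a PII-detection validator; the validator name, its parameters, and the example strings are assumptions and may differ across library versions.

```python
# Minimal sketch (assumed guardrails-ai API; validator names and signatures
# may differ across versions). DetectPII is installed from the Guardrails Hub,
# e.g. `guardrails hub install hub://guardrails/detect_pii`.
from guardrails import Guard
from guardrails.hub import DetectPII

# Build a guard that scrubs emails and phone numbers from model output.
guard = Guard().use(
    DetectPII,
    pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"],
    on_fail="fix",  # redact offending spans instead of raising an error
)

# Validate a raw LLM response before returning it to the user.
raw_output = "You can reach our analyst at jane.doe@example.com."
result = guard.validate(raw_output)
print(result.validated_output)  # PII spans replaced/redacted
```

The same pattern extends to other validators (toxicity, grounding/hallucination checks), so the guard sits between the model and the application as a policy layer.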
Combatting Hallucinations with DataGemma & RIG
Lack of grounding can lead to hallucinations, instances where the model generates incorrect or misleading information. Building responsible and trustworthy AI systems is a core focus, and addressing the challenge of hallucination in LLMs is crucial to achieving this goal.
DataGemma represents a significant leap forward in addressing the challenges associated with AI hallucinations by grounding its outputs in real-world data. By combining advanced LLM capabilities with robust retrieval techniques from Data Commons, Google aims to enhance the reliability and trustworthiness of AI-generated information across various applications.
Two-pronged approach:
* Retrieval-Augmented Generation (RAG): retrieves relevant contextual information before generating a response, grounding the output in verified data.
* Retrieval-Interleaved Generation (RIG): retrieves data in real time during the response-generation process, verifying generated figures against external sources to improve factual accuracy and reduce hallucinations (a schematic sketch follows below).
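To make the two approaches concrete, below is a schematic Python sketch of the RIG idea; generate_with_placeholders, query_data_commons, and the [DC(...)] placeholder syntax are hypothetical stand-ins for illustration, not the actual DataGemma or Data Commons interface.

```python
import re

# Hypothetical stand-ins for the real model and the Data Commons client;
# the [DC(...)] placeholder syntax is illustrative, not the DataGemma format.
def generate_with_placeholders(question: str) -> str:
    # A RIG-tuned model drafts an answer and marks statistics it wants verified.
    return "Roughly [DC(share of renewable electricity in India)] of India's electricity is renewable."

def query_data_commons(natural_language_query: str) -> str:
    # In practice this would query a trusted statistical source such as Data Commons.
    return "<verified value>"

def answer_with_rig(question: str) -> str:
    draft = generate_with_placeholders(question)
    # Substitute every marked statistic with the retrieved, verified value,
    # so the numbers in the final answer are grounded in external data.
    return re.sub(r"\[DC\((.*?)\)\]",
                  lambda m: query_data_commons(m.group(1)),
                  draft)

print(answer_with_rig("How much of India's electricity comes from renewables?"))
```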
Mitigating Hallucinations in Multimodal LLMs with HALVA
HALVA: Hallucination Attenuated Language and Vision Assistant
A new contrastive tuning strategy mitigates hallucinations while retaining general performance in multimodal LLMs.
Data-augmented contrastive tuning has been introduced to mitigate object hallucination in MLLMs. The proposed method effectively mitigates object hallucinations and related failure modes while retaining or improving performance on general vision-language tasks. Moreover, the contrastive tuning is simple, fast, and requires minimal training, with no additional overhead at inference. The method may also apply in other areas; for example, it might be adapted to mitigate bias and harmful language generation.
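As a rough illustration of the underlying idea (not the exact HALVA objective), a phrase-level contrastive loss can push the model to prefer the correct object phrase over a hallucinated substitute at the same position; the tensor names and toy values below are assumptions for illustration.

```python
import torch

def contrastive_hallucination_loss(logp_correct: torch.Tensor,
                                   logp_hallucinated: torch.Tensor) -> torch.Tensor:
    """Schematic contrastive objective (not the exact HALVA loss).

    logp_correct / logp_hallucinated: model log-probabilities (summed over the
    tokens of the phrase) for the correct object phrase and for a hallucinated
    substitute (e.g. "dog" -> "cat") in the same caption. Minimising the loss
    raises the likelihood of the correct phrase relative to the hallucinated one.
    """
    stacked = torch.stack([logp_correct, logp_hallucinated], dim=-1)
    # -[log p(correct) - log(p(correct) + p(hallucinated))], averaged over the batch
    return -(logp_correct - torch.logsumexp(stacked, dim=-1)).mean()

# Toy usage with made-up log-probabilities for a batch of 3 phrase pairs.
lp_correct = torch.tensor([-2.0, -1.5, -3.0])
lp_halluc = torch.tensor([-1.8, -2.5, -2.9])
print(contrastive_hallucination_loss(lp_correct, lp_halluc))
```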
Building Knowledge Graph RAG using Neo4j
* Knowledge graphs are used in development to structure complex data relationships, drive intelligent search functionality, and build powerful AI applications that can reason over different data types.
* Knowledge graphs can connect data from both structured and unstructured sources (databases, documents, etc.), providing an intuitive and flexible way to model complex, real-world scenarios.
* Unlike tables or simple lists, knowledge graphs can capture the meaning and context behind the data, allowing you to uncover insights and connections that would be difficult to find with conventional databases.
* This rich, structured context is ideal for improving the output of large language models (LLMs), because you can build more relevant context for the model than with semantic search alone.
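Below is a minimal sketch of the retrieval side using the official neo4j Python driver; the connection details, node labels, relationship shapes, and prompt construction are assumptions for illustration.

```python
from neo4j import GraphDatabase

# Assumed connection details and graph schema; labels and relationships are illustrative.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

CYPHER = """
MATCH (c:Company {name: $name})-[r]-(n)
RETURN c.name AS company, type(r) AS relation, coalesce(n.name, n.title) AS neighbour
LIMIT 25
"""

def graph_context(entity_name: str) -> str:
    """Pull the entity's neighbourhood from the graph and flatten it into
    text that can be prepended to the LLM prompt as grounded context."""
    with driver.session() as session:
        records = session.run(CYPHER, name=entity_name)
        return "\n".join(
            f"{r['company']} -[{r['relation']}]-> {r['neighbour']}" for r in records
        )

question = "Who are Acme Corp's key partners?"
prompt = f"Answer using only this graph context:\n{graph_context('Acme Corp')}\n\nQuestion: {question}"
# `prompt` would then be passed to the LLM of choice.
```

The graph traversal supplies explicit relationships (who partners with whom, which clause references which entity), which is the extra context a pure vector similarity search would miss.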
Monitoring Production-Grade Agentic RAG Pipelines
It's time to say goodbye to naive/vanilla RAG systems, where we could simply plug in clean sample data and query it with an LLM. Moving from PoC to production requires tuning several parameters, with performance being a key factor in achieving enhanced results. Search and retrieval systems need proper data preprocessing before ingestion into vector databases. Let us walk through a few building blocks for setting up such an advanced RAG pipeline that can be deployed and scaled in real time.
Implementing robust and performant RAG systems is the industry's next big goal. Handling multiple operations while keeping latency low can be challenging, and AI agents have proven handy for automating such routing tasks. Observability tools are the next step toward scalability, allowing LLM debugging at each step of these workflows; the stack trace helps with app session handling and supports a deep dive into the inference flow and its outcome.
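One way to obtain such a per-step trace is to wrap each pipeline stage in a span. The sketch below uses the OpenTelemetry Python API with stubbed retrieve/generate functions; the span names, attributes, and stubs are assumptions, and dedicated LLM-observability tools expose similar traces.

```python
from opentelemetry import trace

tracer = trace.get_tracer("agentic-rag")

# Stub components; in a real pipeline these would call the vector DB,
# a router agent, and the LLM.
def retrieve(query: str) -> list[str]:
    return ["chunk about policy X", "chunk about clause Y"]

def generate(query: str, context: list[str]) -> str:
    return "Grounded answer based on retrieved chunks."

def answer(query: str) -> str:
    # One span per pipeline step, so latency and inputs/outputs can be
    # inspected per session in the tracing backend.
    with tracer.start_as_current_span("rag.retrieve") as span:
        chunks = retrieve(query)
        span.set_attribute("rag.num_chunks", len(chunks))
    with tracer.start_as_current_span("rag.generate") as span:
        response = generate(query, chunks)
        span.set_attribute("rag.response_chars", len(response))
    return response

print(answer("What does clause Y cover?"))
```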
This helps uniquely implement and augment LLMs over large knowledge bases, and lets you manage and control the data to make informed decisions for your business use cases across BFSI, legal, healthcare, and other domains.