
Trustworthy AI for Enterprise: Enhancing LLMs

GPT Is an Unreliable Information Store
Current large language models (LLMs) exhibit limitations in deductive reasoning and cognitive architecture, posing challenges to their reliability in enterprise applications. This talk presents novel techniques to address these limitations and to evaluate and improve LLM performance.

We will highlight the issue of epistemological blindness in LLMs and propose a solution that pairs an embeddings model endpoint with a vector index database for enhanced factual accuracy. Additionally, we will discuss innovative approaches to automating feature engineering, including automatic feature extraction with LLMs and LangChain integration in an MLOps pipeline, so that the accuracy of these tools can be evaluated.
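The grounding approach above can be sketched minimally: embed documents, store the vectors in an index, and retrieve the most semantically similar document with cosine similarity to supply the LLM with factual context. The toy `VectorIndex` class and hand-made vectors below are hypothetical stand-ins; a real pipeline would call an embeddings endpoint (e.g., one of OpenAI's ada embedding models) and a vector store such as Chroma or Pinecone.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class VectorIndex:
    """Toy in-memory vector index (illustrative only)."""

    def __init__(self):
        self.entries = []  # list of (text, embedding) pairs

    def add(self, text, embedding):
        self.entries.append((text, embedding))

    def query(self, embedding, top_k=1):
        # Rank stored documents by similarity to the query embedding.
        ranked = sorted(
            self.entries,
            key=lambda entry: cosine_similarity(embedding, entry[1]),
            reverse=True,
        )
        return [text for text, _ in ranked[:top_k]]

index = VectorIndex()
# Hand-made 3-dimensional "embeddings" for illustration; real ones
# come from an embeddings model and have hundreds of dimensions.
index.add("The fiscal year ends in June.", [0.9, 0.1, 0.0])
index.add("Support tickets are triaged daily.", [0.1, 0.9, 0.2])

# A query embedding close to the first document retrieves it as
# grounding context for the LLM.
print(index.query([0.85, 0.15, 0.05], top_k=1))
# → ['The fiscal year ends in June.']
```

Retrieved passages are then injected into the LLM prompt, so answers are grounded in indexed facts rather than the model's parametric memory.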

This presentation aims to offer valuable insights for practitioners and researchers, contributing to the development of responsible and trustworthy AI in enterprise contexts.

Tags: Explainability, ML Model Pipelines, Chroma, Pinecone, OpenAI, ada, Embeddings, Cosine Similarity, Semantic Search

Noble Ackerson

Responsible AI Product Strategy & Data Governance


