Daniël Spee
Search Engineer at Luminis
Amsterdam, The Netherlands
Daniël is a seasoned Search Engineer at Luminis, where his passion for technology shows in the search solutions he creates and enhances. He brings a distinctive perspective to the field, believing that an excellent search experience transcends the pure technology underpinning it. His focus on the user experience, along with his innovative approach to problem-solving, enables him to develop search systems that are functional, efficient, and exceptionally user-friendly.
His proficiency in search engineering is matched by his excellent communication skills and team spirit. Daniël excels in translating complex concepts into easily understandable language, enabling effective collaboration with diverse teams and clients. His dedication to fostering a positive, productive work environment is as instrumental to his success as his technical expertise.
Staying ahead of the curve in the rapidly evolving field of search technology, Daniël consistently adopts the latest techniques to provide superior search solutions. His dedication, skills, and unique approach make him a crucial member of the Luminis team, contributing significantly to the field of search engineering.
Topics
Retrieval Evaluated: The Quest for Search Quality
In this session, I’m excited to delve into the fascinating world of search and information retrieval systems, especially as they play a pivotal role in the cutting-edge field of Retrieval Augmented Generation (RAG). I’ll be sharing the ins and outs of how we measure success in search systems, touching on those all-important metrics like precision, recall, and beyond – but all with a friendly, accessible twist.
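To make the metrics mentioned above concrete, here is a minimal sketch of precision@k and recall@k for a single query; the document IDs and relevance judgments are invented purely for illustration:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant documents that appear in the top k."""
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

# Hypothetical ranked result list and relevance judgments for one query.
retrieved = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2", "d5"}

print(precision_at_k(retrieved, relevant, 3))  # only d1 is a hit in the top 3
print(recall_at_k(retrieved, relevant, 3))
```

In a real evaluation these numbers are averaged over many queries with human (or LLM-assisted) relevance judgments; the trade-off between the two metrics is exactly what the session explores.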
Get ready for a journey through some engaging examples that highlight the importance of getting retrieval just right, not only for the sake of user satisfaction but also for the integrity and usefulness of the information being served.
But that’s not all – I’ll also guide you through the evolving landscape of search evaluation, from traditional methods to the latest innovations that are reshaping how we understand and improve search functionalities in the era of AI and machine learning. Expect practical advice, thought-provoking insights, and maybe a few laughs as we explore how to keep our search systems on the cutting edge and truly responsive to user needs.
Join me for a session filled with valuable takeaways, whether you’re deeply embedded in the tech world or just curious about how modern search technologies are shaping our access to information. Together, we’ll uncover the secrets behind making search systems more effective, intuitive, and, ultimately, more human.
Revolutionizing Customer Experience: Enhancing Information Access with Question-Answering Systems
Are you tired of customers struggling to find the answers they need on your website? Say goodbye to frustrating searches and embrace a new era of effortless information access. Join us for an exciting talk on revolutionizing customer experience through question-answering systems.
In this presentation, we invite you to explore the potential of the newest language libraries and tools. Discover how they can transform users’ interactions with your online platform.
We'll start by demystifying the core concepts behind question-answering systems, breaking down their purpose and functionality. Dive into the world of index-based search, vector-based search, and Large Language Models (LLMs), the essential building blocks powering this technological leap forward.
But we won't stop at theory alone. We'll give a demonstration where you'll see these systems in action. Follow along as we walk you through a content pipeline that extracts content from different sources and indexes it into vector databases, followed by different patterns for retrieving information from those databases and generating answers with multiple LLMs, enabling the dynamic generation of accurate, personalized answers in real time.
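The index-retrieve-generate pipeline described above can be sketched in a few lines. Everything here is a toy stand-in: the character-count "embedding" replaces a real embedding model, the in-memory list replaces a vector database, and the documents are invented for the example — only the shape of the pipeline is the point:

```python
import math

def embed(text):
    """Toy embedding: a character-frequency vector. A real pipeline
    would call a sentence-embedding model here instead."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Indexing step: extract content and store (text, vector) pairs.
documents = [
    "Opening hours are 9 to 5 on weekdays.",
    "Returns are accepted within 30 days.",
    "Shipping is free for orders over 50 euros.",
]
index = [(doc, embed(doc)) for doc in documents]

# Retrieval step: rank stored chunks against the query vector.
def retrieve(query, top_k=1):
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

# Generation step: a real system would send this prompt to an LLM;
# here we only construct it.
context = retrieve("When can I return a product?")[0]
prompt = f"Answer the question using this context: {context}"
```

Swapping the toy pieces for a real embedding model, a vector database, and an LLM client gives the pipeline demonstrated in the talk.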
By the end of our presentation, you'll understand the underlying structure of a question-answering system, empowering you to enhance information access on your website or within your application. Join us on this exciting journey and unlock the true potential of question-answering systems!
The Art of Questions: Creating a Semantic Search-Based Question-Answering System with LLMs
Ever thought about building your very own question-answering system? Like the one that powers Siri, Alexa, or Google Assistant? Well, we've got something awesome lined up for you!
In our hands-on workshop, we'll guide you through the ins and outs of creating a question-answering system. We prefer Python for the workshop and have prepared a GUI that works with Python. If you prefer another language, you can still do the workshop, but you will miss the GUI for testing your application. You'll get your hands dirty with vector stores and Large Language Models, and we'll help you combine the two in a way you've never done before.
You've probably used search engines for keyword-based searches, right? Well, prepare to have your mind blown. We'll dive into something called semantic search, which is the next big thing after traditional searches. It’s like moving from asking Google to search "best pizza places" to "Where can I find a pizza place that my gluten-intolerant, vegan friend would love?" – you get the idea, right?
We’ll be teaching you how to build an entire pipeline, starting from collecting data from various sources, converting that into vectors (yeah, it’s more math, but it’s cool, we promise), and storing it so you can use it to answer all sorts of queries. It's like building your own mini Google!
We've got a repository ready to help you set up everything you need on your laptop. By the end of our workshop, you'll have your question-answering system ready and running.
So, why wait? Grab your laptop, bring your coding hat, and let's start building something fantastic together. Trust us, it’s going to be a blast!
Some of the highlights of the workshop:
Use a vector store (OpenSearch, Elasticsearch, Weaviate)
Use a Large Language Model (OpenAI, HuggingFace, Cohere, PaLM, Bedrock)
Use a tool for content extraction (Unstructured, Llama)
Create your pipeline (Langchain, Custom)
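Put together, the building blocks above follow a retrieve-then-generate pattern. The sketch below stubs out both the vector store and the LLM, since each provider in the list has its own client library; the function names, questions, and knowledge snippets are all hypothetical, and only the overall pattern reflects what the workshop builds:

```python
def search_vector_store(question: str, top_k: int = 2) -> list[str]:
    """Stand-in for a real vector-store query (e.g. a k-NN search)."""
    knowledge = {
        "pricing": "The basic plan costs 10 euros per month.",
        "support": "Support is available on weekdays via chat.",
    }
    return list(knowledge.values())[:top_k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; echoes prompt length for the demo."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    """Retrieve context, build a grounded prompt, and generate an answer."""
    context = "\n".join(search_vector_store(question))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

print(answer("How much does the basic plan cost?"))
```

In the workshop, the two stubs are replaced by real clients from the toolkits listed above, while the `answer` function keeps essentially this shape.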