From SQL to Multimodal AI: Fusing Relational Data + Images for Real-World Predictions

Data teams increasingly need to combine relational or tabular data (e.g., SQL Server, PostgreSQL, Cosmos DB) with unstructured data (images, text, audio) to build production-ready AI solutions. In this talk, we’ll walk through a practical, end-to-end blueprint for multimodal modeling using a real-world healthcare case study: handheld fundus images paired with patient clinical and demographic metadata to predict diabetic retinopathy severity and related outcomes.
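The core idea the abstract describes — pairing each image with the patient's tabular record and feeding both to one model — is often implemented as late fusion: encode each modality into a feature vector, concatenate, and classify. A minimal sketch, with stand-in encoders and hypothetical field names (`age`, `hba1c`) in place of a real CNN and a real clinical schema:

```python
# Late-fusion sketch (hypothetical names): each modality is encoded into a
# feature vector; the vectors are concatenated and scored by a linear head.
# A production system would use a pretrained CNN/ViT for the fundus image
# and learned embeddings for the tabular fields.

def encode_image(pixels):
    """Stand-in image encoder: mean and variance as a 2-d feature vector."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return [mean, var]

def encode_tabular(record):
    """Stand-in tabular encoder: scale two clinical fields into [0, 1]."""
    return [record["age"] / 100.0, record["hba1c"] / 15.0]

def fuse_and_score(pixels, record, weights, bias):
    """Concatenate both feature vectors and apply a linear scoring head."""
    features = encode_image(pixels) + encode_tabular(record)
    return sum(w * f for w, f in zip(weights, features)) + bias

# Example: one image (as a flat pixel list) plus one patient record.
score = fuse_and_score(
    pixels=[0.0, 1.0],
    record={"age": 60, "hba1c": 7.5},
    weights=[1.0, 1.0, 1.0, 1.0],
    bias=0.0,
)
```

In practice the fused vector would feed a trained classifier rather than fixed weights; the point is that tabular and image features share one prediction head.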

Yingfan Xu

Ph.D. Researcher @ Oklahoma State University | AI/ML Researcher | Multimodal Learning | End-to-End ML Deployment
