Yingfan Xu
Ph.D. Researcher @ Oklahoma State University | AI/ML Researcher | Multimodal Learning | End-to-End ML Deployment
Yingfan Xu is a Ph.D. researcher in Engineering (Data Science concentration) at Oklahoma State University, working on real-world multimodal AI. His recent work focuses on building reproducible pipelines that fuse relational clinical metadata with medical images for patient-level prediction tasks, and on translating those models into deployable systems. He has research experience across computer vision, multimodal learning, and machine learning engineering, and has published work on deep learning for early disease prediction, wildlife monitoring, and acoustic species identification. Yingfan enjoys sharing practical, end-to-end patterns that help data professionals move from SQL-managed data to production-ready AI.
From SQL to Multimodal AI: Fusing Relational Data + Images for Real-World Predictions
Data teams increasingly need to combine relational/tabular data (SQL Server, PostgreSQL, Cosmos DB, etc.) with unstructured data (images, text, audio) to build production-ready AI solutions. In this talk, we’ll walk through a practical, end-to-end blueprint for multimodal modeling using a real-world healthcare case study: handheld fundus images paired with patient clinical/demographic metadata for predicting diabetic retinopathy severity and related outcomes.
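The core fusion pattern the abstract alludes to can be sketched in a few lines: extract a fixed-length embedding from the image, concatenate it with the patient's tabular features, and score the fused vector with a classification head. A minimal NumPy sketch, with all shapes, feature names, and the randomly initialized head purely hypothetical (a real pipeline would use a trained image encoder and learned weights):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_score(image_embedding, tabular_features, head_weights):
    """Late-fusion sketch: concatenate both modalities, apply a linear
    head, and return a probability per severity grade via softmax."""
    fused = np.concatenate([image_embedding, tabular_features])
    logits = fused @ head_weights
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Hypothetical sizes: a 128-dim image embedding, 8 clinical/demographic
# features, and 5 diabetic-retinopathy severity grades.
img_emb = rng.normal(size=128)          # stand-in for a CNN embedding
clinical = rng.normal(size=8)           # stand-in for normalized tabular row
head = rng.normal(size=(128 + 8, 5))    # untrained linear head
probs = fuse_and_score(img_emb, clinical, head)
```

This "late fusion by concatenation" is only one of several strategies the talk's blueprint could cover; alternatives include attention-based fusion or training both branches jointly end-to-end.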