AI Model Management Life Circles: MLOps for Generative AI Models, from Research to Deployment

I share a novel framework called MM Life Circles, which allows teams to visualize and manage the entire lifecycle of generative AI models, from research to deployment, including best practices for handover between phases.

So essentially, I start with choosing a generative AI model as an example, then cover the data side, followed by training frameworks. Finally, I wrap it up in a complete end-to-end journey that spans the model lifecycle from research to benchmarking to deployment.

The tools I dig into can vary, but it's usually a mix of PyTorch Lightning, TensorFlow, TensorBoard, MLflow, Kubeflow, Weights & Biases, neptune.ai, and a few newer ones.
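To give a flavor of the kind of demo code I walk through, here is a minimal sketch of a PyTorch Lightning module logging its training loss to Weights & Biases. The model, data, and project name are placeholders for illustration, not the actual demo:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger


class TinyRegressor(pl.LightningModule):
    """Toy model standing in for a real generative architecture."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)  # picked up by the attached logger
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Synthetic data so the sketch runs end to end.
ds = TensorDataset(torch.randn(256, 16), torch.randn(256, 1))

trainer = pl.Trainer(
    max_epochs=2,
    # Hypothetical project name; assumes a W&B account (or offline mode).
    logger=WandbLogger(project="mm-life-circles-demo"),
)
trainer.fit(TinyRegressor(), DataLoader(ds, batch_size=32))
```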

We cover all the essential phases of the model lifecycle:

- Phase I: Research & Prototyping
- Phase II: Hyperparameter Tuning & Model Selection
- Phase III: Model Pruning & Optimization
- Phase IV: Production, Testing & Deployment

We aim to document and manage every pathway your model goes through during its life, from research all the way to deployment. The framework covers:

- Research platforms such as Python, RStudio, and associated libraries
- Model building and tuning frameworks such as [PyTorch Lightning](https://www.pytorchlightning.ai/index.html), [Weights & Biases](https://wandb.ai/site), [MLflow](https://github.com/mlflow/mlflow), etc.
- Benchmarking and optimization tools such as [ONNX](https://onnx.ai/), [TensorBoard](https://www.tensorflow.org/tensorboard), and [TensorRT](https://developer.nvidia.com/tensorrt) (see the export sketch after this list)
- Testing and deployment tools, comprising unit, functional, and integration tests along with CI/CD services such as GitHub, GitLab, and [Jenkins](https://www.jenkins.io/)
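As an example, the benchmarking and optimization hand-off typically starts from an ONNX export of the trained network, which can then be profiled or optimized further (e.g. with TensorRT). A minimal sketch, with a placeholder model and input shape, looks roughly like this:

```python
import torch
from torch import nn

# Placeholder network standing in for a trained generative model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).eval()
dummy_input = torch.randn(1, 16)  # example input that defines the graph's shapes

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                              # artifact handed to the optimization phase
    input_names=["features"],
    output_names=["prediction"],
    dynamic_axes={"features": {0: "batch"}},   # allow variable batch size at inference
)
```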

The structure of the presentation is usually a few slides on each section of the lifecycle (training, benchmarking, registration), followed by a short demo either in the tool or at a bash prompt. So I usually switch between PowerPoint and a terminal screen share.

Target Audience:
- Data Scientists
- AI Researchers
- MLOps Engineers
- ML Engineers
- Data Engineers

Key Takeaways for the Audience:

- What does the model lifecycle for generative AI models look like?
- Which platforms let us import boilerplate code and infrastructure for building models (PyTorch Lightning, TensorBoard)?
- How do we build or integrate model management platforms (MLflow, W&B, etc.)?
- How do we hand off models to production, software, and backend teams? (A minimal registration sketch follows this list.)
- How should we deploy models to production, and what test coverage should data scientists own to ensure high-quality model testing?
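For a flavor of the hand-off step, here is a minimal sketch of logging a trained model to MLflow and registering it in the Model Registry so downstream teams can pull a versioned artifact. The registry name and metric value are hypothetical, and it assumes an MLflow tracking server with a registry-capable backend:

```python
import mlflow
import mlflow.pytorch
import torch
from torch import nn

# Placeholder model standing in for the trained network from earlier phases.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

with mlflow.start_run():
    mlflow.log_param("hidden_dim", 32)
    mlflow.log_metric("val_loss", 0.123)            # placeholder metric
    mlflow.pytorch.log_model(
        model,
        artifact_path="model",
        registered_model_name="generative-demo",    # creates/updates a registry entry
    )
```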

Ideally, the session is a mix of PowerPoint, jumping in and out of tools, VS Code for demos, and terminal runs. The goal is to simulate the workflow and take a peek at each phase and tool as we go through it.

Joy Chatterjee

Founder

Berlin, Germany
