Deploying ML Models in Production: An Overview

Valerio Velardo - The Sound of AI

The deployment of ML models in production is a delicate process filled with challenges. You can deploy a model via a REST API, on an edge device, or as an offline unit used for batch processing. You can build the deployment pipeline from scratch, or use ML deployment frameworks.
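
As a concrete illustration of the simplest of these strategies, here is a minimal sketch of serving a model behind a REST API with Flask. The model file name (model.pkl) and the JSON request format are assumptions made for this example, not something prescribed in the video.

```python
# Minimal sketch: exposing a pickled scikit-learn model via a Flask REST API.
# "model.pkl" and the JSON payload shape are hypothetical.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the trained model once at startup.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()["features"]
    prediction = model.predict([features])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

This is the "basic ML deployment" approach discussed in the video: quick to build, but you take on serialization, scaling, and monitoring yourself, which is where the dedicated frameworks come in.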

In this video, you'll learn about the different strategies to deploy ML in production. I provide a short review of the main ML deployment tools on the market (TensorFlow Serving, MLflow Models, Seldon Deploy, KServe from Kubeflow). I also present BentoML - the focus of this mini-series - describing its features in detail.
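
To give a feel for what a BentoML service looks like, here is a minimal sketch using the BentoML 1.0 API. The model tag ("iris_clf:latest") and the service name are assumptions for illustration; the full walkthrough comes later in the mini-series.

```python
# Minimal BentoML service sketch (BentoML 1.0 API).
# Assumes a scikit-learn model was previously saved with
# bentoml.sklearn.save_model("iris_clf", model); the tag is hypothetical.
import numpy as np

import bentoml
from bentoml.io import NumpyNdarray

# Load the saved model as a runner, BentoML's unit for scalable inference.
iris_clf_runner = bentoml.sklearn.get("iris_clf:latest").to_runner()

svc = bentoml.Service("iris_classifier", runners=[iris_clf_runner])

@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def classify(input_series: np.ndarray) -> np.ndarray:
    # Delegate inference to the runner.
    return iris_clf_runner.predict.run(input_series)
```

During development, a service like this is typically started with the bentoml serve command pointed at the service file, and BentoML exposes the API endpoint plus an OpenAPI UI for you.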

=================

1st The Sound of AI Hackathon (register here!):
https://musikalkemist.github.io/theso...

Join The Sound Of AI Slack community:
https://valeriovelardo.com/the-sound-...

Interested in hiring me as a consultant/freelancer?
https://valeriovelardo.com/

Connect with Valerio on Linkedin:
LinkedIn: valeriovelardo

Follow Valerio on Facebook:
Facebook: TheSoundOfAI

Follow Valerio on Twitter:
Twitter: musikalkemist

=================

Content:

0:00 Intro
0:36 ML deployment strategies
1:32 Basic ML deployment
3:27 Disadvantages of basic ML deployment
4:57 Overview of ML deployment tools
9:54 BentoML
14:00 What's next?