During the course, we simulate real-world end-to-end scenarios: building a Machine Learning pipeline to train a model and deploy it in a Kubeflow environment. We'll walk through practical MLOps use cases for creating reproducible, scalable, and modular data science code. Next, we'll propose a solution for running pipelines on Google Cloud Platform, leveraging managed and serverless services. All exercises can be done using either a local Docker environment or a GCP account.
Target Audience
Data scientists and DevOps engineers interested in implementing MLOps best practices and building Machine Learning pipelines.
Requirements
Some experience coding in Python and a basic understanding of cloud computing and machine learning concepts.
Participant’s ROI
Training Materials
All participants will receive training materials as PDF files containing the theory slides and an exercise manual with a detailed description of all exercises. During the workshops, the exercises can be done using either a local Docker environment or your own IDE.
Time Box
This is a one-day event (9:00-16:00), with breaks between sessions.
Agenda
Session #1 - Introduction to Machine Learning Operations (MLOps)
Session #2 - Kedro - a framework to structure your ML pipeline
Session #3 - Kubeflow and Kubeflow Pipelines
Session #4 - Building infrastructure for your Machine Learning platform
Session #5 - Summary and wrap-up
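To give a flavor of the pipeline-structuring ideas covered in Sessions #2 and #3, here is a minimal, framework-free sketch of a modular pipeline: nodes as pure functions, wired together through a shared data catalog. All names and data are illustrative assumptions, not part of the course materials; Kedro and Kubeflow Pipelines provide richer, production-grade versions of these abstractions.

```python
# Sketch of a modular ML pipeline: each "node" is a pure function, and the
# pipeline is an ordered list of (node, input, output) entries resolved
# against a shared data catalog (a plain dict here). Frameworks such as
# Kedro formalize exactly this pattern.

def clean(raw):
    """Drop records with missing values (illustrative node)."""
    return [r for r in raw if all(v is not None for v in r)]

def split(rows):
    """Split rows into features and labels (last column)."""
    features = [r[:-1] for r in rows]
    labels = [r[-1] for r in rows]
    return features, labels

def train(data):
    """'Train' a trivial model that predicts the mean label (illustrative)."""
    _features, labels = data
    mean = sum(labels) / len(labels)
    return {"predict": lambda _x, m=mean: m}

PIPELINE = [
    (clean, "raw", "cleaned"),
    (split, "cleaned", "dataset"),
    (train, "dataset", "model"),
]

def run(pipeline, catalog):
    """Execute nodes in order, reading inputs from and writing outputs to the catalog."""
    for node, inp, out in pipeline:
        catalog[out] = node(catalog[inp])
    return catalog

catalog = run(PIPELINE, {"raw": [(1.0, 2.0, 3.0), (4.0, None, 6.0), (7.0, 8.0, 9.0)]})
print(catalog["model"]["predict"]([1.0, 2.0]))  # mean of the labels 3.0 and 9.0
```

Because each node only depends on its declared inputs, nodes can be tested in isolation and the same pipeline definition can later be mapped onto orchestrated steps (e.g. Kubeflow Pipelines components) without rewriting the logic.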
Trainer: