Introducing MLflow: An Open Source Platform for the Machine Learning Lifecycle for On-Prem or in the Cloud
Hadoop_Summit
About This Presentation
Specialized tools for machine learning development and model governance are becoming essential. MLflow is an open source platform for managing the machine learning lifecycle. Just by adding a few lines of code to the function or script that trains their model, data scientists can log parameters, metrics, artifacts (plots, miscellaneous files, etc.) and a deployable packaging of the ML model. Every time that function or script is run, the results are logged automatically as a byproduct of those added lines, even if the person doing the training run makes no special effort to record them. MLflow application programming interfaces (APIs) are available for the Python, R and Java programming languages, and MLflow offers a language-agnostic REST API as well. Over a relatively short time period, MLflow has garnered more than 3,300 stars on GitHub, almost 500,000 monthly downloads and 80 contributors from more than 40 companies. Most significantly, more than 200 companies are now using MLflow. We will demo the MLflow Tracking, Projects and Models components with Azure Machine Learning (AML) Services and show you how easy it is to get started with MLflow on-prem or in the cloud.
Size: 9.23 MB
Language: en
Added: Jun 05, 2019
Slides: 29 pages
Slide Content
Introduction to MLflow: An Open Source Platform for the Machine Learning Lifecycle for On-Prem or in the Cloud
Alex Zeltov
Alex Zeltov, Big Data Solutions Architect / AI Engineer
Background:
Sr. Solutions Architect, part of a Global Black Belt Team for Big Data & AI at Microsoft
Sr. Solutions Engineer at Hortonworks, specializing in HDP and HDF
Research Scientist at Independence Blue Cross, Big Data and ML
Sr. Software Engineer at Oracle
Machine Learning E2E Development is Complex!!!
Typical E2E process: Prepare Data, Build Model (your favorite IDE), Train & Test Model, Register and Manage Model, Build Image, Deploy Service, Monitor Model, ...
Phases: Prepare, Experiment, Deploy, Orchestrate
Result: it is difficult to productionize and share.
Motivation: the Deployment Process
Diagram: the Data Scientist hands a pickled model off to the Data Engineer and DevOps ("What's Pickle?").
ML Development Challenges
100s of software tools to leverage
Hard to track & reproduce results: code, data, params, etc.
Hard to productionize models
Needs large scale for best results
Custom ML Platforms: Facebook FBLearner, Uber Michelangelo, Google TFX
+ Standardize the data prep / training / deploy loop: if you work with the platform, you get these!
– Limited to a few algorithms or frameworks
– Tied to one company's infrastructure
– Out of luck if you left the company...
Can we provide similar benefits in an open manner?
Introduction to MLflow
Open source platform for the machine learning lifecycle
API first: allows submitting runs and models; works with any ML library & language
MLflow APIs are available for Python, R and Java, plus a REST API
Runs the same way anywhere: on-prem or any cloud
> 3,700 stars on GitHub, 92 contributors from > 40 companies
> 200 companies are now using MLflow *
* https://www.oreilly.com/ideas/specialized-tools-for-machine-learning-development-and-model-governance-are-becoming-essential
MLflow Components
Tracking: record and query experiments: code, data, config, results
Projects: packaging format for reproducible runs on any platform
Models: general model format that supports diverse deployment tools
Distinct components: use different components individually based on your needs
Tracking Experiments with MLflow Tracking Server
MLflow Tracking is:
a logging API specific to machine learning
agnostic to the libraries and environments that do the training
organized around the concept of runs, which are executions of data science code
runs are aggregated into experiments, so many runs can be part of a given experiment
an MLflow server can host many experiments
MLflow Tracking Server
Runs on Azure Machine Learning, Databricks, IaaS cloud, or on-premise (CDH / HDP)
pip install mlflow
mlflow.set_tracking_uri(URI)
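A minimal sketch of configuring the tracking client, assuming a tracking server is reachable at a hypothetical address; the experiment name is illustrative.

import mlflow

# Point the client at a remote tracking server (placeholder URI);
# without this call, runs are logged to the local ./mlruns directory.
mlflow.set_tracking_uri("http://mlflow.example.com:5000")

# Create the experiment if it does not exist and make it the active one;
# subsequent runs are grouped under it.
mlflow.set_experiment("mlflow-demo")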
Experiments in Tracking Server
Parameters: key-value inputs to your code
Metrics: numeric values (can update over time)
Artifacts: arbitrary files, including models
Tags/Notes: info about a run
Source: what code ran
Version: git version

import mlflow

# log model's tuning parameters
with mlflow.start_run():
    mlflow.log_param("layers", layers)
    mlflow.log_param("alpha", alpha)

    # log model's metrics
    mlflow.log_metric("mse", model.mse())
    mlflow.log_artifact("plot", model.plot(test_df))
    mlflow.tensorflow.log_model(model)
Experiments Tracking API: record and query experiments: code, configs, results, etc.
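As a hedged sketch of the query side, the snippet below lists the runs of an experiment through the MlflowClient API; the experiment name is the illustrative one from the tracking-server sketch above.

from mlflow.tracking import MlflowClient

client = MlflowClient()

# Look up an experiment by name and iterate over its runs,
# printing the logged parameters and metrics of each run.
experiment = client.get_experiment_by_name("mlflow-demo")
for run in client.search_runs([experiment.experiment_id]):
    print(run.info.run_id, run.data.params, run.data.metrics)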
Demo: MLflow Tracking Server
MLflow Projects
There are a number of reasons why teams need to package their machine learning projects:
Projects have various library dependencies, and shipping a machine learning solution involves the environment in which it was built. MLflow allows this environment to be a conda environment or a Docker container, which means teams can easily share and publish their code for others to use.
Machine learning projects become increasingly complex as time goes on. This includes ETL and featurization steps, machine learning models used for pre-processing, and finally the model training itself.
Each component of a machine learning pipeline needs to allow for tracing its lineage; if there's a failure at some point, tracing the full end-to-end lineage of a model allows for easier debugging.
MLflow Projects
Project Spec (Code, Data, Config) drives both Local Execution and Remote Execution
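As a hedged illustration of the packaging format, a minimal MLproject file might look like the sketch below; the project name, conda file and training script are hypothetical.

# MLproject file (YAML), stored at the root of the project
name: mlflow-demo
conda_env: conda.yaml
entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.01}
    command: "python train.py --alpha {alpha}"

Such a project can then be executed reproducibly with mlflow run . -P alpha=0.05 from the command line, or with mlflow.run(".", parameters={"alpha": 0.05}) from Python.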
MLflow Models
Once a model has been trained and bundled with the environment it was trained in, the next step is to package the model so that it can be used by a variety of serving tools.
Current deployment options include:
Container-based REST servers
Continuous deployment using Spark Streaming
Batch scoring
Managed cloud platforms such as Azure ML and AWS SageMaker
Packaging the final model in a platform-agnostic way offers the most flexibility in deployment options and allows for model reuse across a number of platforms.
MLflow Models
A standard format for ML models that connects ML frameworks and inference code to batch & stream scoring and serving tools.
Model flavors: mlflow.pyfunc, mlflow.h2o, mlflow.keras, mlflow.pytorch, mlflow.sklearn, mlflow.spark, mlflow.tensorflow
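As a hedged sketch of one flavor in action, the snippet below trains a tiny scikit-learn model and logs it; the toy data and the "model" artifact path are illustrative.

import mlflow
import mlflow.sklearn
from sklearn.linear_model import LinearRegression

# A toy regression model standing in for a real training pipeline.
X = [[0.0], [1.0], [2.0]]
y = [0.0, 1.0, 2.0]
model = LinearRegression().fit(X, y)

with mlflow.start_run():
    # Saves the model with both the sklearn flavor and the generic
    # python_function flavor, so any pyfunc-aware tool can load it.
    mlflow.sklearn.log_model(model, "model")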
Example MLflow Model
>>> mlflow.tensorflow.log_model(...)
my_model/
├── MLmodel
└── estimator/
    ├── saved_model.pb
    └── variables/
        ...
The MLmodel file describes the available flavors: the tensorflow flavor is usable by tools that understand the TensorFlow model format, and the python_function flavor is usable by any tool that can run Python (Docker, Spark, etc!).
run_id: 769915006efd4c4bbd662461
time_created: 2018-06-28T12:34
flavors:
  tensorflow:
    saved_model_dir: estimator
    signature_def_key: predict
  python_function:
    loader_module: mlflow.tensorflow
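The python_function flavor is what makes a logged model usable from any Python environment; below is a hedged sketch of loading and scoring such a model, with a placeholder run ID.

import mlflow.pyfunc
import pandas as pd

# "runs:/<RUN_ID>/model" is a placeholder URI for a model logged as above;
# a local path such as ./my_model would work as well.
model = mlflow.pyfunc.load_model("runs:/<RUN_ID>/model")
predictions = model.predict(pd.DataFrame({"x": [0.5, 1.5]}))
print(predictions)

The same model URI can be handed to the mlflow models serve CLI command to stand up a local REST scoring endpoint.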
What is Azure Machine Learning service?
A set of Azure cloud services plus a Python SDK that enables you to: prepare data, build models, train models, manage models, track experiments, and deploy models.
Machine Learning on Azure
Domain-specific pretrained models to reduce time to market: Vision, Speech, Language, Search, ...
Familiar data science tools to simplify model development: Visual Studio Code, PyCharm, Jupyter, command line
Popular frameworks to build advanced deep learning solutions: TensorFlow, PyTorch, ONNX, Scikit-Learn
Productive services to empower data science and development teams: Azure Databricks, Machine Learning VMs, Azure Machine Learning
Powerful infrastructure to accelerate deep learning: CPU, GPU, FPGA
From the Intelligent Cloud to the Intelligent Edge
Azure ML service Key Artifacts
Workspace, Compute Targets, Data Stores, Experiments, Pipelines, Models, Images, Deployments
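A hedged sketch of routing MLflow tracking to an Azure ML workspace via the azureml-mlflow integration; it assumes the azureml-sdk and azureml-mlflow packages are installed and a workspace config.json is present locally.

import mlflow
from azureml.core import Workspace

# Load the workspace definition from a local config.json
# (downloadable from the Azure portal).
ws = Workspace.from_config()

# Send MLflow tracking calls to the workspace; runs then show up
# as Azure ML experiments alongside native AML runs.
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("mlflow-demo")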
Azure ML: How to deploy models at scale
Demo: MLflow + AML + Azure Databricks
https://eastus2.azuredatabricks.net/?o=3336252523001260#notebook/2623556200920093/command/2623556200920094
Conclusion + Q & A
MLflow can greatly simplify the ML lifecycle:
Simplifies lifecycle development
Lightweight, open platform that integrates easily
Available APIs: Python, Java & R
Easy to install and use
Develop locally and track locally or remotely
Deploy locally, in the cloud, or on premise
Visualize experiments
Learning More About MLflow
pip install mlflow to get started
Find docs & examples at mlflow.org
https://github.com/mlflow/mlflow
tinyurl.com/mlflow-slack
https://docs.azuredatabricks.net/applications/mlflow/quick-start.html