
How to Implement ML Models: Azure and Jupyter for Production

Learn how to implement machine learning models using Azure and Jupyter for production environments: from model development to deployment, including environment setup, training, and real-time predictions. Understand the advantages of using Azure's robust infrastructure and Jupyter's flexible interface to streamline the entire process.

Introduction

As Data Scientists, one of our most pressing challenges is operationalizing machine learning models so that they are robust, cost-effective, and scalable enough to handle traffic demand. With advanced cloud technologies and serverless computing, cost-effective (pay per use) and auto-scalable platforms (scaling in and out with traffic) are now available. Data scientists can use these to accelerate machine learning model deployment without having to worry about the infrastructure.

This blog discusses one such methodology: deploying machine learning code and models developed locally in Jupyter notebooks to the Azure environment for real-time predictions.

ML Implementation Architecture

[Figure: ML Implementation Architecture on Azure]

We have used Azure Functions to deploy the Model Scoring and Feature Store Creation code into production. Azure Functions is a FaaS (Function as a Service) offering, which provides event-based, serverless computing that accelerates development without the need to manage infrastructure. Azure Functions comes with some interesting functionalities:

1. Choice of Programming Languages

You can work with a language of your choice: C#, Node.js, Java, or Python.

2. Event-driven and Scalable

You can use built-in triggers and bindings, such as HTTP, event, timer, and queue triggers, to define when a function is invoked. The architecture scales with the workload.

ML Implementation Process

Once the code is developed, the following best practices help make the machine learning code production-ready. The steps below show how to deploy the Azure Function.

[Figure: ML Implementation Process]

Azure Function Deployment Steps Walkthrough

The Visual Studio Code editor with the Azure Functions extension is used to create a serverless HTTP endpoint with Python.

1. Sign in to Azure


2. Create a new project. In the prompts that appear, select Python as the language and HTTP trigger as the trigger (based on the requirement).


3. The Azure Function is created with a standard folder structure. Write your logic in __init__.py, or copy the code into it if already developed.

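For illustration, a minimal model-scoring handler in __init__.py could look like the sketch below; the score_payload helper and the response shape are assumptions, not the original code.

import json
import logging

import azure.functions as func


def score_payload(features: dict) -> float:
    # Placeholder scoring logic: in practice, load the trained model
    # (e.g., from Blob Storage, as in step 7) and call model.predict here
    return 0.0


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Entry point invoked by the HTTP trigger defined in function.json
    logging.info("Model scoring request received.")
    try:
        payload = req.get_json()
    except ValueError:
        return func.HttpResponse("Request body must be valid JSON.", status_code=400)
    prediction = score_payload(payload)
    return func.HttpResponse(
        json.dumps({"prediction": prediction}),
        mimetype="application/json",
    )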

4. function.json defines the bindings; in this case, the function is triggered by an HTTP trigger.

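A typical function.json for an HTTP-triggered Python function looks like the following; the authLevel and methods shown are common defaults, not necessarily the exact original settings.

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}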

5. local.settings.json contains all the environment variables used in the code as key-value pairs.

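For example, a local.settings.json might look like this; the BLOB_CONNECTION_STRING and SQL_CONNECTION_STRING keys are hypothetical names used in the sketches below.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage-account-connection-string>",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "BLOB_CONNECTION_STRING": "<blob-storage-connection-string>",
    "SQL_CONNECTION_STRING": "<azure-sql-connection-string>"
  }
}

Note that local.settings.json is only used for local runs; when deployed, the same keys should be added as application settings on the Function App.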

6. requirements.txt contains all the libraries that need to be installed with pip.

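A plausible requirements.txt for a scoring function like this one (the exact list depends on your model and data access) might be:

azure-functions
azure-storage-blob
pandas
scikit-learn
pyodbc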

7. As the model is stored in Blob Storage, add code to read it from the Blob.

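Here is a minimal sketch using the azure-storage-blob SDK, assuming a pickled model and hypothetical container and blob names:

import os
import pickle

from azure.storage.blob import BlobServiceClient

# Connection string is read from local.settings.json / Function App settings
blob_service_client = BlobServiceClient.from_connection_string(
    os.environ["BLOB_CONNECTION_STRING"]
)
# "models" and "model.pkl" are placeholder names for illustration
blob_client = blob_service_client.get_blob_client(container="models", blob="model.pkl")
model = pickle.loads(blob_client.download_blob().readall())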

8. Read the Feature Store data from Azure SQL DB.

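One way to do this is with pyodbc and pandas; the connection string, table, and query below are placeholders:

import os

import pandas as pd
import pyodbc

# SQL_CONNECTION_STRING is a hypothetical app setting, e.g.
# DRIVER={ODBC Driver 17 for SQL Server};SERVER=<server>;DATABASE=<db>;UID=<user>;PWD=<password>
conn = pyodbc.connect(os.environ["SQL_CONNECTION_STRING"])
features = pd.read_sql(
    "SELECT * FROM feature_store WHERE entity_id = ?", conn, params=["<entity-id>"]
)
conn.close()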

9. Test locally. Choose Debug -> Start Debugging; the function runs locally and exposes a local API endpoint.

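Once it is running, you can exercise the endpoint with a quick test script; Azure Functions Core Tools serves on port 7071 by default, and the function name and payload here are placeholders:

import requests

# Local endpoint format: http://localhost:7071/api/<function-name>
resp = requests.post(
    "http://localhost:7071/api/ModelScoring",  # hypothetical function name
    json={"feature_1": 1.0, "feature_2": "A"},  # hypothetical payload
)
print(resp.status_code, resp.json())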

10. Publish to the Azure account using the following command:

func azure functionapp publish <functionappname> --build remote --additional-packages "python3-dev libevent-dev unixodbc-dev build-essential libssl-dev libffi-dev"


11. Log in to the Azure Portal and go to the Azure Functions resource to get the API endpoint for Model Scoring.


Conclusion

With this, the model and scoring code developed locally in Jupyter are deployed on Azure and exposed through an API endpoint. This API can also be integrated with front-end applications for real-time predictions.

Happy Learning!
