Deploy Machine Learning Models in a Production environment as APIs (Python Flask + Visual Studio)

Building intelligent applications essentially consists of integrating machine learning based predictive components into apps and systems. In most cases, data scientists or AI engineers are responsible for building these machine learning models.

When it comes to integration and deployment in a production environment, the problem is platform dependency. Most data scientists and AI engineers are comfortable with Python or R and develop their models in those languages, while the rest of the system may be a .NET or Java based application.

One of the best approaches to connecting these components is deploying the ML predictive module as a web API and calling the API from the application. Any programmer can work with an API once they have its definition.

Flask is a small and powerful web framework for Python. It’s easy to learn and simple to use, enabling you to build your web app in a short amount of time. Visual Studio provides an easy way to create Python Flask web applications through its templates. Here are the steps I’ve gone through for deploying the ML experiment as a REST API.

01. Create the machine learning model, train, tune and evaluate it.

Here, what I’ve done is a simple linear regression for predicting the monthly salary according to the years of experience. The scikit-learn Python library has been used for performing the regression. The dataset used for the experiment is from SuperDataScience.

The code is available in the GitHub repository.
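
For reference, here’s a minimal sketch of what that training step might look like. The file name Salary_Data.csv and the column names YearsExperience and Salary are assumptions, so adjust them to match your copy of the dataset.

```python
# Minimal training sketch (file and column names are assumptions).
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

data = pd.read_csv('Salary_Data.csv')
X = data[['YearsExperience']]   # feature: years of experience
y = data['Salary']              # target: monthly salary

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)

# Quick evaluation on the held-out split
print('R^2 on test set:', r2_score(y_test, model.predict(X_test)))
```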

02. Create the pickle

When you deploy the predictive model in a production environment, there’s no need to retrain the model with code again and again. Python has a built-in way of persisting objects called pickle. The pickle module can serialize objects or data into a file that we can save and load from; the deployed service simply loads that binary and generates the output. scikit-learn has its own model persistence method we will use: joblib. It is more efficient for scikit-learn models because it is better at handling the large NumPy arrays that may be stored inside them.
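
A minimal sketch of the persistence step is below; the file name salary_model.pkl is just an illustration.

```python
# Persist the trained model with joblib (file name is illustrative).
# Older scikit-learn versions exposed this as: from sklearn.externals import joblib
import joblib

joblib.dump(model, 'salary_model.pkl')

# Later, in the API, load the binaries instead of retraining:
model = joblib.load('salary_model.pkl')
```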

03. Create a Python Flask web application.

Simply go for Visual Studio. (I’m using VS2017, which comes with Python by default.) Select a web project. The step by step guide is here. I would recommend going with option 2 mentioned in the blog because it cuts out a lot of unnecessary overhead.

To be on the safe side, use Python virtual environments. They help you avoid many of the hassles that occur with library dependencies. I’ve used an Anaconda environment as the base of the virtual environment.


04. Create the API.

Create a new Python file in your project and set it as the startup file. (In my case, MLService.py is the startup file which contains the API code.) The pickle file that contains the model binaries is the only dependency the API needs when it is deployed.

Here the API operates through a POST method which accepts its input as JSON.
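
The actual MLService.py is in the repo; a minimal sketch of such an API could look like the following. The pickle file name, the /predict route, and the JSON field names are assumptions, not the exact structure of my code.

```python
# MLService.py -- minimal sketch; route and field names are assumptions.
import joblib
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the persisted model once at startup; no retraining in production.
model = joblib.load('salary_model.pkl')

@app.route('/predict', methods=['POST'])
def predict():
    # Expecting a JSON body such as {"YearsExperience": 5}
    payload = request.get_json(force=True)
    years = float(payload['YearsExperience'])
    salary = float(model.predict([[years]])[0])
    return jsonify({'PredictedSalary': salary})

if __name__ == '__main__':
    app.run(debug=True)
```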

04. Run & Test

You can run the API and test it by sending POST requests to the URL with a JSON body. Here I’ve used Postman to send a POST request, and it gives me the predicted salary for the entered years of experience.
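
If you prefer testing from code instead of Postman, a request like this should behave the same way; the URL, port, and field names follow the assumptions made in the sketch above.

```python
# Quick smoke test of the running API (URL and field names are assumptions).
import requests

response = requests.post(
    'http://localhost:5000/predict',
    json={'YearsExperience': 5},
)
print(response.json())  # prints the predicted salary as JSON
```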


You can access the whole code of the project through my GitHub repo here.


Do comment if you have any suggestions to change the API structure.


Handling Big snakes on Visual Studio

In the last post we discussed setting up a Windows rig for deep learning. If you still haven’t set up your machine, go do it first :D

After getting the so-called big snakes, Python and Anaconda, onto the machine, we need a proper IDE for coding.

There are many good IDEs you can use in a Windows environment to code in Python; PyCharm and Spyder are some popular tools.

If you’re familiar with Visual Studio, the so-called father of all IDEs, Python works smoothly with VS. There are a few configurations that need to be done.

There’s no need to purchase Visual Studio Enterprise or Ultimate; the freely available Visual Studio Community edition works fine. In the 2017 version, Python comes alongside the default installation options. For earlier versions you need to install Python Tools for Visual Studio (PTVS) separately.

https://docs.microsoft.com/en-us/visualstudio/python/python-in-visual-studio

Refer to this guide for more details.

The Python environments configured on the machine can be seen in the ‘Python Environments’ pane of Visual Studio. (If it is not there, go to Tools -> Python -> Python Environments.)


By default, your Anaconda environment and the default Python environment should be there. First, refresh those environments so that IntelliSense picks up the installed libraries into its completion database.

For our deep learning experiments, we configured a separate Python environment earlier. To add that environment to Visual Studio, follow these steps.

01. Click Custom in the ‘Python Environments’ pane

02. Go to your Anaconda environments and activate the one you pre-configured for deep learning (mine is tensorflow-gpu)


03. Copy the interpreter path of the environment

04. Paste it into the interpreter path field and click ‘Auto Detect’. Visual Studio will detect the rest


05. Click Apply

It may take a few minutes to refresh the packages as well as IntelliSense. Make the configured environment your default and open the Interactive window. You are good to go 😊
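
To confirm everything is wired up, you can run a quick check like this in the Interactive window; it assumes TensorFlow is installed in the tensorflow-gpu environment mentioned above.

```python
# Quick check that the deep learning environment is the active interpreter
# (assumes TensorFlow is installed in it).
import sys
import tensorflow as tf

print(sys.executable)   # should point at the tensorflow-gpu environment
print(tf.__version__)   # confirms TensorFlow is importable
```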