In the previous post, we explored the basic concepts of the PyTorch Profiler and the newest capabilities that come with its recent updates. One of the coolest things I tried is the TensorBoard plugin that comes with PyTorch Profiler. Yes.. you heard that right.. TensorBoard, the well-known deep learning visualisation platform, has a Profiler plugin which makes network analysis much easier.
I just tried the official PyTorch Profiler tutorials, and the visualisations seem pretty descriptive with their analysis. I’ll do a complete deep dive into the tool in the next article.
One of the cool things I’ve noticed is the performance recommendations. Most of the recommendations made by the tool make sense, and I’m pretty sure they’re going to improve model training performance.
In the meantime you can play around with the tool and see how convenient it is to use in your deep learning experiments. Here’s the script I used for taking my first steps with the tool.
import torch
import torch.profiler
import torchvision
import torchvision.transforms as T

transform = T.Compose(
    [T.ToTensor(),
     T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_set = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
device = torch.device("cuda:0")
model = torchvision.models.resnet18(pretrained=True).cuda(device)
criterion = torch.nn.CrossEntropyLoss().cuda(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

def train(data):
    inputs, labels = data[0].to(device=device), data[1].to(device=device)
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# use profiler to record execution events and write traces for TensorBoard
with torch.profiler.profile(
    schedule=torch.profiler.schedule(wait=1, warmup=1, active=3, repeat=2),
    on_trace_ready=torch.profiler.tensorboard_trace_handler('./log/resnet18'),
    record_shapes=True
) as prof:
    for step, batch_data in enumerate(train_loader):
        if step >= (1 + 1 + 3) * 2:
            break
        train(batch_data)
        prof.step()
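If you want to view the recorded traces, the tutorial installs the TensorBoard plugin with pip install torch_tb_profiler and then points TensorBoard at the trace directory (tensorboard --logdir=./log). The profiler views then show up under a dedicated PyTorch Profiler tab.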
Deep neural networks are complex! It literally takes quite an amount of effort and time to make them work near to perfect. Despite the effort you put into fitting the model well to your data and getting an admirable accuracy, you have to keep your eye on model efficiency and performance. Sometimes it’s a trade-off between model accuracy and inference efficiency. To manage this, analysing the memory and computation usage of the networks is essential. This is where profiling neural networks comes into the scene.
Since PyTorch is my preferred deep learning framework, I’ve been using the profiling tool it had for a while under torch.autograd.profiler. It was pretty sleek and had some basic functionality for profiling DNNs. With a major update, PyTorch 1.8.1 announced PyTorch Profiler, the improved performance debugging profiler for PyTorch DNNs.
One of the major improvements is the performance visualisations integrated with TensorBoard. As mentioned in the release article, there are 5 major features included in PyTorch Profiler, among them:
Distributed training view
Cloud storage support
Jump to source code
You don’t need an extensive set of code for analyzing the performance of the network — just a few simple Profiler API calls. To get things started, let’s see how you can use PyTorch Profiler for analyzing the execution time and memory consumption of the popular resnet18 architecture. You need PyTorch 1.8.1 or higher to perform these actions.
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity

use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")

# init simple resnet model
model = models.resnet18().to(device)

# create a dummy input
inputs = torch.randn(5, 3, 224, 224).to(device)

# Analyze execution time
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True) as prof:
    with record_function("model_inference"):
        model(inputs)

# print the output sorted by CPU execution time
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))

# Analyze memory consumption
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             profile_memory=True, record_shapes=True) as prof:
    model(inputs)

# print the output sorted by CPU memory consumption
print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10))
I’ll discuss using Profiler visualizations for analyzing model behaviour in the next post.
In previous blog posts I’ve written about some of the challenges I faced in my deep learning based experiments and the approaches I used to overcome them. This is going to be one of those posts, where I explain a technique I used for calculating the perspective relationship between two different planes in a computer vision application.
Computer vision is widely used in surveillance applications, object detection and sports analytics. Mapping the imagery/video footage generated from a single camera or from a set of cameras to a relative space is one of the major tasks we may have to deal with. Mostly this need comes with mapping people/object locations.
Use Case –
Imagine a sports analytics application where you capture a soccer game from a fixed camera and run a human detection algorithm on the image to find the player positions. That’s quite straightforward. (You can see that has been done in the following figure.) The tricky part is mapping the player positions, which are in camera space, to the actual soccer field coordinates and generating a graph of player positions relative to the soccer field (or you may want to normalize the location coordinates). What we need is an output similar to the bottom-left one in the figure.
How to do that?
We can clearly see that the soccer field is rectangular in shape. So, if we know the frame-space coordinates of the 4 corners of the field, we can easily transform any point inside that polygon into a given coordinate space. In geometry this is called a “perspective transformation”. (This is a bit different from an affine transformation, which is a more common application.)
If you’re interested in digging deeper to see how this mathematical transformation happens, I strongly encourage you to follow this link and see the matrix calculations behind the operation.
The transformation needs three inputs:
1. Coordinates of the 4 corner points of the polygon to be transformed
2. Coordinates of the point/points to be transformed
3. 4 corner points of the transformed polygon (this can be a rectangle or any 4-point polygon)
The perspective_transform method takes the input polygon coordinates and the point coordinates, and outputs resultPoint relative to the resultRect we have defined. (In this code I’ve used a 1-1 plane to map the points.)
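Here’s a minimal OpenCV-based sketch of how such a perspective_transform method can be implemented (the field corner and player coordinates below are hypothetical):

import cv2
import numpy as np

def perspective_transform(polygon, point, result_rect):
    # homography mapping the source polygon corners onto the target rectangle
    matrix = cv2.getPerspectiveTransform(
        np.float32(polygon), np.float32(result_rect))
    # cv2.perspectiveTransform expects points shaped (1, N, 2)
    src = np.float32([point]).reshape(1, 1, 2)
    result_point = cv2.perspectiveTransform(src, matrix)
    return result_point[0][0]

# 4 corners of the field in camera space (hypothetical pixel values)
field_corners = [(45, 70), (600, 60), (630, 400), (20, 410)]
# target plane: a 1-1 square, so outputs are normalized field coordinates
result_rect = [(0, 0), (1, 0), (1, 1), (0, 1)]

print(perspective_transform(field_corners, (320, 240), result_rect))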
Feel free to use this method in your applications and let me know your thoughts on this. Cheers!
When it comes to real world data collection, we don’t have the luxury of perfectly balanced labelled datasets for training models. Most machine learning algorithms are not immune to imbalanced classes, which leads to less accurate and biased models. There are many approaches we can follow to tackle the imbalanced data problem: either we choose an ML algorithm that is robust to imbalanced data, or we generate synthetic data in order to balance the classes.
Neural networks are trained using backpropagation, which treats each class the same when calculating the loss. If the data is not balanced, that makes the model biased towards one class over another.
I had to face this issue when experimenting with a computer vision based multi-class classification problem. The data I had was heavily skewed, and some classes had very little data compared to the majority class. The model was not performing well at all, and I needed to take action to tackle the class imbalance problem.
These were the solutions I thought of trying out.
Creating synthetic data – Creating new synthetic data points is one of the main methods; it is used mostly for numerical data and, in some cases, for imagery data too with the help of GANs and image augmentations. As a starting point, I decided not to go with synthetic data generation since it may introduce abnormal characteristics to my dataset, so I kept that for later.
Sampling the dataset with balanced classes – In this approach, what we normally do is sample the dataset so there’s a similar number of samples for each data label. For example, say we have a dataset with 3 classes named A, B & C, with 100, 50 and 20 data points respectively. When sampling, we randomly select 20 samples from each of the A, B & C classes and get a dataset with 60 data points.
In some cases this approach is a good option, if we have very large amounts of data for each class (even for the minority classes). In my case, I couldn’t afford to lose a huge portion of my data by sampling it down to the number of data points in the minority class.
Since neither method was going to work for me, I used a weighted loss function for training my neural network. As this is a multi-class classification problem, I used Cross Entropy Loss in PyTorch as my loss function. (You can follow a similar approach if you’re using BCELoss for binary classification too.)
import torch
import torch.nn as nn

# class weights for 6-class multi-class classification
# (nn.CrossEntropyLoss expects the weights as a Tensor)
class_weights = torch.tensor([0.5281, 0.8411, 0.9619, 0.8634, 0.8477, 0.9577])

# loss function with class weights
criterion = nn.CrossEntropyLoss(weight=class_weights)
How did I calculate the weight for each class?
It’s quite simple. What I did was calculate a manual re-scaling weight for each class and pass it to the “weight” parameter of the loss function. Make sure the class weights are a Tensor with size equal to the number of classes. (In simpler words, each class should have a weight.)
Hint: If you’re using a GPU for model training, make sure to move your class weights tensor to the GPU too.
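For illustration, here’s one common recipe for deriving such weights — inverse class frequency, normalized so the weights average to 1. (The class counts below are hypothetical, and this isn’t necessarily the exact formula behind the numbers above.)

import torch

# hypothetical number of training samples per class
class_counts = torch.tensor([900., 330., 150., 320., 340., 160.])

# inverse-frequency weights, normalized to average 1
class_weights = class_counts.sum() / (len(class_counts) * class_counts)

# if training on GPU, move the weights tensor there too
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class_weights = class_weights.to(device)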
Did it work? Hell yeah! I was able to train my model accurately, with less bias and without overfitting to a single class, using this simple trick. Let me know any other tricks you use for training neural network models with imbalanced data.
Machine learning or deep learning is not all about algorithms and training predictive models on some set of data. It involves a wide range of tools, techniques and computing approaches to handle various steps of the machine learning process pipeline.
Starting from a raw data point, up to the stage of exposing the model as a REST API, there are numerous places where we need to pay attention to data handling approaches. (Yes! Data is the key component of any ML/DL pipeline.)
In this article I’m describing a problem I faced in a deep learning experiment and the approach I took to overcome it. I’m pretty sure you may face similar issues if you’re using massive amounts of structured/unstructured data for training your deep learning models.
Here’s the issue I faced:
In order to train a computer vision related deep learning model, I had to write a PyTorch custom dataloader for loading a set of annotation data. The data points were stored in JSON format and, believe me, that massive JSON file was nearly 4GB! It was not a simple data structure with keys and values, but had a mixed set of data structures including lists, single float values and keys in String format.
As usual I wrote a PyTorch custom dataset class and tried to load the massive JSON file inside __init__. Yep! It crashed! Memory was not enough for handling such a big file. Can’t you move that into __getitem__? No. It’s not possible — loading the file on every call is far too inefficient. I had to think of a solution which doesn’t load the massive file as a whole into RAM, while still allowing data inside the file to be retrieved by index.
The first dumb idea I got was converting the data into a multidimensional numpy array and saving it to a file, but I figured out that just gives birth to another massive file, which doesn’t solve my problem. With a suggestion from my co-supervisor, I started looking into HDF5, the Hierarchical Data Format. Yes! It was the solution, and it nicely solved my issue.
What is the Hierarchical Data Format (HDF5)?
The Hierarchical Data Format version 5 (HDF5) is an open source file format that supports large, complex, heterogeneous data. It uses a ‘directory-like’ structure to store data. In simpler terms, an HDF5 file can be seen as a file system (the way files and directories are stored on your computer) contained in a single file.
There are two important terms used in the HDF5 format.
Groups – Folder-like elements within the HDF5 file which can contain subgroups or datasets.
Datasets – The actual data contained within the HDF5 file (numpy arrays etc.).
In simpler terms, if your data is large, complex, heterogeneous and needs random access, HDF5 is most probably the best option you can go forward with.
How to use HDF5?
We all speak Python when it comes to machine learning. Python supports the HDF5 format through the h5py package. Since it is a wrapper around the native HDF5 C API, it provides almost the full functionality.
Creating an HDF5 file from a JSON array
Here I’ve included a very brief code snippet for creating an HDF5 file from a JSON array which contains the data from the famous iris dataset. The entries of the JSON array I used look like the sample below. (You can get the full dataset from here.)
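For reference, an entry of that array looks like this (the values here are illustrative, taken from the standard iris data):

[
  {"sepalLength": 5.1, "sepalWidth": 3.5, "petalLength": 1.4, "petalWidth": 0.2, "species": "setosa"},
  ...
]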
Here I created a separate group for each entry (3 JSON objects in the array means 3 groups in the HDF5 file). The 5 data points in each object are stored as datasets.
import json
import os
import h5py
import numpy as np

hdf5_filename = 'iris_hdf5.hfd5'

# read iris.json file
with open('iris.json') as jsonfile:
    iris_data = json.load(jsonfile)

# create HDF5 file
h = h5py.File(hdf5_filename, 'w')

# loop through all entries in the JSON array
index = 0
for entry in iris_data:
    for k, v in entry.items():
        dataset_name = os.path.join(str(index), k)  # groups are divided by '/'
        if isinstance(v, str):
            # string values (like species) are stored as-is
            h.create_dataset(dataset_name, data=v)
        else:
            h.create_dataset(dataset_name, data=np.asarray(v, dtype=np.float32))
    index = index + 1

h.close()
print('Iris data HDF5 file created.')
# read data from HDF5
h_read = h5py.File(hdf5_filename, 'r')

# read a single entry
print(h_read['0'].keys())
# output : <KeysViewHDF5 ['petalLength', 'petalWidth', 'sepalLength', 'sepalWidth', 'species']>

np.asarray(h_read['0']['petalLength'])
# output : array(1.4, dtype=float32)
Though this is a very simple data structure, you can extend the approach to complex and large files. You’ll find it much easier to use HDF5 than holding huge lists inside the __init__ of a custom dataloader. Here’s a rough sketch of the PyTorch custom dataset class I created for the above example.
import h5py
import numpy as np
from torch.utils.data.dataset import Dataset

hdf5_filename = 'iris_hdf5.hfd5'

class IrisDataset(Dataset):
    def __init__(self):
        # All the data preparation tasks can be defined here
        # - HDF5 file is referenced here
        self.h_read = h5py.File(hdf5_filename, 'r')

    def __getitem__(self, index):
        # Returns data and labels
        # - access HDF5 file through indexing (group names are strings)
        item = np.asarray(self.h_read[str(index)]['petalLength'])
        return item

    def __len__(self):
        # return count of how many examples you have
        return len(self.h_read.keys())
This is only one use of the HDF5 file format in machine learning. Share your experiences with HDF5 here too. 🙂
This famous quote of Steve Jobs is one of the most precious quotes I always keep in mind. Stepping into the IT industry 11 years ago as a teenager, I was always eager to push myself beyond the barriers and keep trying new things. That hunger led me to explore Artificial Intelligence, undoubtedly the most used buzzword in today’s industry. I always make sure to stay foolish and open to learning new things.
When I started exploring data science and related technologies 6 years ago, almost all the new things I experimented with in my work life were self-taught from online resources. Even today I really enjoy going through documentation on different technologies and making myself familiar with them.
In the AI space, Microsoft Azure is a dominant player with its vast variety of tools and services. Having worked with different AI related tools for years, I’m super thrilled to see the advancements in the Azure data & AI space. When the MVP cloud skills challenge was launched, I had no hesitation in going forward with the Data & AI path, since I needed to sharpen my skills and update myself on the new capabilities Azure AI provides.
Azure AI is equipped with tools and services for anyone interested in AI, no matter their level of expertise. You can easily use Azure Cognitive Services to add AI capabilities to your application by just calling a REST API. If you want to develop an advanced machine learning/deep learning experiment, Azure AI allows you to use your favorite open-source tools and frameworks and adapt the power of the cloud to your development.
What did I actually learn?
The challenge consists of learning modules which cover most of the prominent parts of the Azure AI domain, including Azure Machine Learning, Azure Cognitive Services, Azure Cognitive Search and the Bot Framework. It had been a long time since I built bots, so working with the new capabilities and functions of the Bot Framework was a pretty good experience. In addition, Azure Cognitive Search is one of the services I’ve used least in my developments, and I always wanted to give it a try. The simple but well managed learning modules gave me a perfect start to sketch my first cognitive search application.
Here comes the most interesting part!
One of the most common questions I get when doing sessions in the community is “do we actually need to know coding to perform machine learning experiments?”
With no hesitation I say no, because Azure Machine Learning offers two powerful tools for zero-code machine learning experiments. Automated machine learning supports training supervised machine learning models for classification, regression and time series forecasting. You can create and publish a machine learning experiment as a REST API with just a few clicks!
Azure Machine Learning Designer provides a simple drag-and-drop interface where you can create machine learning pipelines and publish them as REST endpoints. If you want advanced functionality, it allows you to add Python or R code snippets inside the pipelines too.
I know you’re super eager to learn about these zero-code machine learning development tools. Check out the Microsoft Learn collection I created specifically focusing on these two tools. Don’t forget to share your learning experience with me.
Docker has become the new norm of the software industry. Everyone is so obsessed with it, since Docker solves most of the issues software engineers and system administrators have had with platform dependencies in application development and deployment.
“Docker is a tool that helps users to exploit operating-system-level virtualization to develop and deliver software in packages called containers.”
Though the technical explanation sounds a bit complicated, Docker can simply be identified as a ‘VM-like’ environment where you can build and deploy your software applications.
Why Docker for machine learning/deep learning?
We have endless discussions on how hard it is to configure development and deployment environments in machine learning. Since Python is the most used language for ML and DL experiments, dealing with Python packages and making them all work seamlessly on your hardware can be a nightmare. Using cloud-based machine learning platforms or virtual machines are some of the options we can use to deal with this problem.
Being more flexible than virtual machines and offering easy migration, Docker is one of the best ways of managing machine learning environments. Since Docker has become a key component of MLOps, it’s time for data scientists to adopt Docker in their development.
Where and how can we use Docker?
For me, Docker helps out in 4 main stages of the machine learning experiment pipeline.
1. As a development environment.
I do a lot of experiments in the domain of computer vision and deep learning. You may have experienced the pain of dealing with libraries like OpenCV in Python. So, I always use custom Docker images with all the dependencies installed for running my experiments. This makes it easy for me to collaborate with my peers, without the hassle of replicating my development environment on their machines.
What about the huge amounts of data? Should that also go inside the Docker container? Nah. I always keep the data, as well as the output files created by the experiments, in mounted volumes.
If you need GPU supported Docker images, NVIDIA provides image variations that match your needs on Docker Hub.
2. As a training environment.
You all know ML/DL models normally take quite a long time to train. In my case, I use remote shared servers with GPUs for training my experiments. For that, the easiest way is containerizing the experiment and pushing it to the server.
3. As a deployment environment.
Another popular use case of Docker is in the deployment phase. Normally the deployment environment should fulfil the required dependencies in order to run inference on the ML/DL model seamlessly. Since a Docker container can be shipped across platforms easily without worrying about hardware level dependencies, it’s really easy to use Docker for deploying ML models.
4. Docker for cloud-based machine learning
Most data scientists today use cloud-based machine learning platforms like Azure Machine Learning for their flexibility and resources. Containerized experiments are the main component these services use in order to run workloads on the cloud. When it comes to Azure ML, you can use the default Docker image for experiments, or you can specify your own custom base image for model development and training.
Take a look at this documentation on deploying Azure ML models using a custom Docker base image.
So, Docker has become a life saver for me, since it reduces a lot of the headaches that occur in the machine learning model life-cycle. I’ll come up with a sample experiment on using Docker for training a machine learning model in the next post.
Computer vision based applications have become one of the most popular research areas, and have gained a lot of interest in different industrial domains. The popularity and advancements of deep learning have boosted the hype around computer vision.
Having been a researcher focused on computer vision based applications for nearly 3 years, here are some tips I’d give a developer who’s stepping into a computer vision related experiment/deployment.
Before going further into the discussion, you may need to get an idea of the difference between traditional computer vision approaches and deep learning based approaches. Here’s a quick overview of that.
01. Do we really have to use deep learning based computer vision approaches to solve this?
This is the very first thing to consider! When you see a problem fresh, you may think applying deep learning is the saviour. That’s not true in some cases. You may be able to solve the problem easily using traditional methods, like line detection filters, without wasting the time and energy of training a deep learning model for the task. Observe the problem thoroughly, then make the decision to move forward or not.
02. Analyze the input data and the desired output
Obviously, deep learning based computer vision models take images or videos as their input modalities. Before starting the project implementation, we should consider the following factors of the input data we have.
Size of the data –
Since DL models need a huge amount of data (in most cases) for training without getting overfitted, we need to make sure we have a good amount of data in hand. We can’t specify exact numbers here; I’d say the more the better!
Quality of the data –
Some image inputs or video streams we get are blurred and don’t cover the most important features we need to build the models. Getting images/videos in higher resolution is always better. When considering the quality of the data, it’s also worth looking at factors like class imbalance if it’s a classification problem.
Similarity of training data and the data inputs at inference time –
I’ve seen cases where the data the model receives at inference time is very different from the data used in training (for example, the model is trained using cat images from cartoons and gets real life cat images at inference time). Unless the model is specifically designed for domain adaptation, you should NEVER make this mistake.
03. Building from scratch? Is it necessary?
As I said previously, computer vision is one of the most widely researched areas in deep learning. So you have the privilege of using pre-built models, as well as online services, to perform your computer vision workloads.
Services such as Azure Cognitive Services, Google Vision APIs etc. provide pre-built web APIs which you can directly use for many vision related tasks. From an OCR task of reading text in a scanned document, to APIs which can even identify human faces and their emotions — no need to build from scratch. You can just consume the service as a web service in your application.
Going a step beyond the pre-built services, Microsoft Azure Cognitive Services offers a Custom Vision service where you can train your own image classification models with your own data. This comes in handy in many practical applications where you don’t want to spend time on building the model or configuring the training environment.
04. Building from scratch? Is it REALLY necessary?
Yep! Again, a decision to take. If your problem cannot be addressed by the pre-built computer vision services available online, the option you have is to build a deep learning model and train it on your own data. When it comes to model development, one of the big mistakes we make is neglecting the existing models built by researchers for various purposes.
I’m pretty sure most of the computer vision tasks you have fall under famous computer vision areas such as image classification, action recognition in videos, human pose detection, human/object tracking etc. There are many pre-built methods which have achieved state-of-the-art accuracy in solving these problems and have been benchmarked on most of the publicly available big datasets. For example, ResNet models are specifically designed for image classification and have shown the best accuracy on the ImageNet dataset. You can easily use these models (most of them are available in the model zoos of popular deep learning frameworks), adapt their last layers for your needs, and get higher accuracies than building your own model from scratch — as in the sketch below.
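Here’s a minimal PyTorch sketch of that idea (the 4-class setup is a hypothetical example):

import torch.nn as nn
import torchvision.models as models

# load an ImageNet pretrained ResNet from the torchvision model zoo
model = models.resnet18(pretrained=True)

# optionally freeze the pretrained backbone and fine-tune only the head
for param in model.parameters():
    param.requires_grad = False

# replace the last fully connected layer to match our own classes
num_classes = 4  # hypothetical number of classes
model.fc = nn.Linear(model.fc.in_features, num_classes)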
Papers with Code is a great place to search for existing models for various computer vision tasks.
I recently came across the OpenMMLab repositories, which come in pretty handy for such tasks (mostly video analysis stuff).
05. Use the correct method
When building models, make sure you follow the path that matches your data input. For example, if you only have a few training images for your classification model, you may need to look at areas like few-shot learning to train it. Tricks such as adding batch normalization, using correct loss functions, adding more input modalities, using learning rate schedulers and transfer learning will surely increase your model accuracy — see the small sketch below for one of them.
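For instance, here’s a tiny sketch of a step learning rate scheduler in PyTorch (the model and schedule values are placeholders):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# decay the learning rate by 10x every 10 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(30):
    # ... run the training loop for one epoch here ...
    scheduler.step()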
06. Data augmentation is a saviour!
The more data the better! Always take a look at sensible data augmentation methods to make sure your model is not overfitted to the training data. And always visualize your data inputs before using them for model training, to make sure your augmentations make sense. A rough example of such a pipeline follows.
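A sketch of a sensible augmentation pipeline for natural images using torchvision (the parameter values are illustrative, not tuned):

import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomResizedCrop(224),                     # random crop and resize
    T.RandomHorizontalFlip(),                     # mirror half the images
    T.ColorJitter(brightness=0.2, contrast=0.2),  # mild colour noise
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])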
07. Model training should not be a nightmare
This is the most time-consuming part of developing computer vision models. We all know training deep learning models needs a lot of computation power. It’ll be a nightmare to train an image classifier on 100,000 images using just your CPU! Make sure you have good enough GPUs for the computations, and that they are configured correctly for training models.
08. Model inference time should not be years!
Model inference is the least considered part of model development, though it is a vital one, since this is where the outcome is shown to the outside world. Sometimes your trained model may take a lot of time for inference, which can make it useless in a real-world application. Think of a human detection system you implemented taking 1-2 minutes to identify a human who’s accessing a secured location… there’s no use for such a system, since it doesn’t meet the need for real-time surveillance. Always aim to develop the simplest model that gives the best accuracy. Sometimes you may have to compromise a few digits off the accuracy numbers to increase model efficiency. That’s totally fine in a real-world application. Before pushing the model into production, take a look at converting the model to ONNX, or at model pruning. It’ll help you deploy efficient models.
09. Take a look at your deployment target
This connects directly with the points we discussed about model inference time. We don’t have the luxury of high end machines powered with GPUs in all deployment locations, or of high powered cloud services. Sometimes our deployment target may be an IoT device. So make sure you design a lightweight model which still provides good performance while consuming fewer resources.
10. Privacy concerns
Last but not least, we have to look at privacy concerns. Since we are dealing with image and video data which may contain a lot of personal information about people, we need to make sure we are following privacy guidelines and that the data we use for model training has enough security clearance for such tasks.
A bit lengthy… but I hope you got some clues before getting into your next computer vision project. Happy coding 😊
In the current AI landscape, there are plenty of programming languages, frameworks, runtime environments and hardware devices used by practitioners for developing and deploying their machine learning and deep learning models. This technology stack gets even wider when it comes to integrating these machine learning models into software development processes.
From experience with software development, we know that handling platform dependencies and getting all the components to work smoothly is one of the biggest headaches developers face. It’s no different in the machine learning space.
ONNX is an open format to represent both deep learning and traditional machine learning models. It increases the interoperability of models without depending on the runtime environment or the development tools.
In simple words, you can build your neural network in a deep learning framework like PyTorch and then run inference on it in a TensorFlow environment by converting it into an ONNX model!
ONNX is widely supported by most frameworks, tools and hardware. (Since it’s evolving rapidly, I’m pretty sure many more frameworks will come under ONNX in the near future.)
Since ONNX is backed by big players in the AI space such as Facebook, Microsoft and AWS, you can easily use your familiar frameworks with ONNX.
Let’s take a scenario where you have built a deep learning based classification model for classifying grocery items, using PyTorch as your deep learning framework. At a later stage of development, you need to use the model in an iOS mobile application, where machine learning based operations are built on Core ML. You can export the PyTorch model as an ONNX model and then use it with the Core ML runtime for inference.
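As a rough sketch, the export side of that scenario looks like this (I’m using a stock resnet18 as a stand-in for the grocery classifier; names and shapes are placeholders):

import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)
model.eval()

# a dummy input defining the expected input shape
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "classifier.onnx",
                  input_names=["input"], output_names=["output"])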
ONNX has proven its success in scenarios where we have to deploy deep learning based models on IoT devices with less computation power, and has shown a noticeable performance increase in inference times.
With ONNX, you don’t need to package the various platform dependencies into the deployment target. You just need the ONNX runtime — a quick sketch follows.
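For example, running the model exported above with the onnxruntime package looks roughly like this (the file and tensor names match the export sketch):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("classifier.onnx")

# a dummy image batch matching the exported input shape
inputs = {"input": np.random.randn(1, 3, 224, 224).astype(np.float32)}
outputs = session.run(None, inputs)
print(outputs[0].shape)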
You can find the list of ONNX supported tools and frameworks through this link.
In the coming posts, I’m going to discuss my experiences with setting up the ONNX runtime and using it with my favourite deep learning framework, PyTorch!
Accessing data in different data sources is one of the main tasks in the machine learning model development life cycle. Let’s discuss one of the most common data access scenarios.
Say we have a set of relational data points stored in an Azure SQL server, and we want to develop a machine learning model using Azure Machine Learning. Let’s see how to leverage data stored in an Azure SQL database in an Azure Machine Learning experiment.
The process consists of three main steps:
1. Set access permissions of the Azure SQL database
2. Connect the Azure SQL database to an Azure ML datastore
3. Register the data in the datastore as an Azure ML dataset
1. Set access permissions of Azure SQL database
By default, Azure SQL databases are protected by a firewall which limits outside access to the data. Since we’re going to allow traffic from IPs belonging to Azure resources and services, make sure you allow Azure services to access your SQL server.
2. Connect Azure SQL database to an Azure ML datastore
Azure ML datastores can be seen as an abstraction over data sources for the ML workspace, or as the interconnection between the data resource and the Azure ML workspace.
Go to your Azure Machine Learning Studio (ml.azure.com) and click ‘New datastore’. Provide a datastore name and select ‘Azure SQL database’ as the datastore type. Make sure to authenticate the access with the Azure SQL server’s user ID and password. The same step can also be done from code, as sketched below.
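If you prefer the SDK over the Studio UI, here’s a rough sketch using the Azure ML Python SDK (v1); the names and credentials below are placeholders:

from azureml.core import Workspace, Datastore

ws = Workspace.from_config()

# register the Azure SQL database as a datastore
sql_datastore = Datastore.register_azure_sql_database(
    workspace=ws,
    datastore_name='my_sql_datastore',
    server_name='my-sql-server',    # Azure SQL server name
    database_name='my-database',
    username='sql_user',            # SQL authentication
    password='sql_password')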
3. Register the data in the datastore as an Azure ML dataset.
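As a rough sketch of this step with the same SDK (the query and names are placeholders), a tabular dataset can be created from a SQL query against the datastore and registered in the workspace:

from azureml.core import Dataset

# create a TabularDataset from a SQL query against the datastore
query = (sql_datastore, 'SELECT * FROM my_table')
dataset = Dataset.Tabular.from_sql_query(query)

# register it so experiments can consume it by name
dataset = dataset.register(workspace=ws, name='my_sql_dataset')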