How to Streamline Machine Learning/ Data Science Projects?

CRISP-DM (Image from Wikipedia)

When it comes to designing, developing and implementing a project related to data mining, machine learning or deep learning, it is always better to follow a framework that streamlines the project flow.

It is OK to adapt a software development framework such as Scrum or the waterfall method to manage an ML-related project, but I feel that a more streamlined process which pays attention to data would be an advantage for the success of such a project.

In my understanding, there are two variations of ML-related projects.

  1. Projects based solely on machine learning/data science
  2. Software development projects where ML-related services are a sub-component of the main project.

The step-by-step process I am explaining can be used in both of these variations, with your own additions and modifications.

Basically, this is what I do when an ML-related project lands in my hands.

I follow the steps of a good old standard process known as the Cross-Industry Standard Process for Data Mining (CRISP-DM) to streamline the project flow. Let's go step by step.

Step 1 : Business understanding

First, you have to identify the problem you are going to address with the project. Then you have to be open-minded and answer the following questions.

  1. What is the current situation of this project? (Is it already using some conventional algorithm to solve the problem, etc.?)
  2. Do we really need to use machine learning to solve this problem? (Using ML or deep learning to solve some problems may be over-engineering. Check whether ML is essential for the project.)
  3. What is the benefit of implementing the project? (ML projects are quite expensive and resource-hungry. Make sure you get sufficient RoI from the implementation.)
  4. What are the constraints, limitations and risks? (It's always better to do a risk assessment prior to the project. The data you have to use may have compliance issues. Look into those aspects for sure!)
  5. What tools and techniques am I going to use? (It may be a bit hard to determine the full tech stack before dipping your feet into the project, but it is good to have even a rough idea of the tools, platforms and services you are going to use for development and implementation. DON'T forget the implementation phase. You may end up with a pretty cool development that is hard to integrate into the desired application. So make sure you know your tool-set first.)

Tip : If you feel you don't have experience with this phase, never hesitate to discuss it with peers and experts in the field. They may come up with easy shortcuts and techniques to make your project a success.

Step 2 : Data understanding

Data is the most vital part of any data science/ML-related project. When it comes to understanding the data, I prefer answering these questions.

  1. How big/small is the data? (Training deep learning models may need a lot of annotated data, which is hard to find.)
  2. How credible/accurate is the data?
  3. What is the distribution of the data?
  4. Which attributes are key and which are not so important?
  5. How has the data been stored? (Data comes in CSVs, JSONs, flat files etc.)
  6. What does a simple statistical analysis of the data show?

Before digging into the main problem, you can save a lot of time by taking a closer look at the data you have or the data you are going to get.
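As a rough illustration, a few lines of pandas can answer most of these questions quickly; the file name and the `label` column below are placeholders, assuming a tabular CSV dataset.

```python
import pandas as pd

# Hypothetical CSV file; replace with your own data source.
df = pd.read_csv("customer_records.csv")

print(df.shape)          # How big/small is the data?
print(df.dtypes)         # How has the data been stored/typed?
print(df.isna().mean())  # Rough credibility check: share of missing values per column
print(df.describe())     # Distribution and simple statistics of the numeric columns
print(df["label"].value_counts(normalize=True))  # Class balance of a hypothetical target column
```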

Step 3 : Data preparation

To be honest, this step takes 80% of the total project time most of the time. Data that we find in the real world is not clean or in perfect shape. Perfectly cleaned and pre-processed data will save a lot of time in later stages. Make sure you follow the correct methodologies for data cleansing. This step may include tasks such as writing data loaders for your data. Make sure to document the data preparation steps you applied to the original dataset. Otherwise you may get confused in later stages.
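A minimal sketch of what such a cleansing pass might look like, assuming a pandas DataFrame; the file and column names are made up, and the comments double as the documentation of each step.

```python
import pandas as pd

df = pd.read_csv("raw_data.csv")  # hypothetical raw dataset

# Step 1: drop exact duplicate rows
df = df.drop_duplicates()

# Step 2: fix data types (dates parsed as strings, numbers stored as text, etc.)
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df["age"] = pd.to_numeric(df["age"], errors="coerce")

# Step 3: handle missing values (median for numerics, an explicit flag for categoricals)
df["age"] = df["age"].fillna(df["age"].median())
df["city"] = df["city"].fillna("unknown")

# Step 4: persist the cleaned version so later stages all start from the same file
df.to_csv("clean_data.csv", index=False)
```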

Step 4 : Modelling

This is the step where you actually make use of machine learning algorithms and related approaches. What I normally do is access the data and try some simple modelling techniques to interpret the data I have. For example, let's say I have a set of images to be classified using an artificial neural network based classifier. I'd first use a simple neural network with one or two hidden layers and see whether the problem formulation and modelling strategy make any sense. If that's successful, I'll move on to more complex approaches.
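For instance, a baseline like the sketch below, written with Keras and assuming 28x28 grayscale images with 10 classes and pre-loaded `x_train`/`y_train` arrays, is usually enough to tell whether the formulation makes sense before reaching for deeper architectures.

```python
import tensorflow as tf

# x_train: (n, 28, 28) image array, y_train: integer class labels -- assumed to already exist
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),    # a single small hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.2)
```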

Tip : NEVER forget documentation! Your project may grow exponentially to thousands of lines of code, and you may try hundreds of modelling techniques to get the best accuracy. So keep clear documentation of what you did, to make sure you can roll back and see what you have done before.

Step 5 : Evaluation

Evaluating the models we developed is essential to determine whether we have done the right thing. Just as with software review processes, I prefer having a set framework to evaluate ML projects. Make sure to select appropriate evaluation metrics. Some may not indicate the real behaviour of the models you build.

When performing an ML model evaluation, I plan ahead and make a set structure for the evaluation report. It makes it easy to compare the results against different parameter changes of a single model.

In most cases, we neglect the execution or inference time when evaluating ML models. These can be vital factors in some applications. So plan your evaluation wisely.
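A rough sketch of what I mean by a set structure: collect the chosen metrics and the measured inference time into one small report per model variant, so different parameter changes are easy to compare side by side. The helper function and its names are just an illustration, not a fixed recipe.

```python
import time
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(model, X_test, y_test, model_name):
    """Build a small, comparable evaluation report for one model variant."""
    start = time.perf_counter()
    y_pred = model.predict(X_test)
    inference_time = time.perf_counter() - start

    return {
        "model": model_name,
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred, average="weighted"),
        "recall": recall_score(y_test, y_pred, average="weighted"),
        "f1": f1_score(y_test, y_pred, average="weighted"),
        "inference_seconds": inference_time,
    }
```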

Step 6 : Deployment & Maintenance

Deployment is everything! If the deployment fails in production, there's no value in all the model development work you did.

You should select the technologies and approaches used to deliver the ML services (as REST web services, on Kubernetes, as container instances etc.). I personally prefer containerising since it's neat and clean. The deployed models should be monitored regularly. Predictions can drift over time, and sometimes the data distribution changes. Make sure you create a robust monitoring plan beforehand.

Tip : What about the health of the published web endpoints or the capacity of the inference clusters you are using? Yep! Make sure you monitor the infrastructure too.

https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/overview

This is just a high-level guideline that you can follow for streamlining data science/machine learning related tasks. This is an iterative process, and there are no hard-and-fast rules saying you MUST follow these steps. Microsoft has introduced the Team Data Science Process (TDSP), adapting and improving this concept with their own tool-sets.

Key takeaway : Please don't follow cowboy coding for machine learning/data science projects! Having a streamlined process is always better! 🙂

Different Computation Options on Azure Machine Learning

In a previous article we discussed the different data storage methods we can use with Azure Machine Learning. In this article I am going to briefly discuss the different computation options we have with Azure ML.

Since computation power is one of the key advantages we get from cloud-based machine learning, choosing the correct computation resource for our machine learning experiments is important.

Azure ML offers four main compute types.

01. Compute instances –

If you don't want to spend time setting up your local computer for ML experiments, or you want to leverage GPUs or powerful CPUs for your experiments, Azure compute instances offer fully managed virtual machines loaded with most of the essential frameworks/libraries for performing machine learning and data science experiments. When you use AzureML notebooks (the Jupyter notebook instance attached to AzureML), the compute instance is where the Jupyter notebook runs.

Different methods can be used to access compute instances

You can access compute instances using different methods. Accessing them through Jupyter notebooks and JupyterLab is the all-time favourite of most data scientists. If you are an R person, you can use RStudio with compute instances. Accessing the compute instance through SSH is really useful (you may have to enable SSH access when creating the compute instance) on occasions where you have to install custom packages and such on the compute instance. (The machine is Ubuntu-based and you can use all your bash scripts there!)

Basically, a compute instance can be defined as a virtual machine fully loaded with data science and machine learning essentials which you can use right out of the box.

02. Compute clusters –

Compute clusters differ from compute instances in their ability to have one or more compute nodes. These compute nodes can be created with your desired hardware configurations.

Why have more than one node? That brings the ability to use parallel processing for computations. If you are going to do hyperparameter tuning, GPU-based complex computations or several machine learning runs at once, you may have to create a compute cluster.

If you are running Automated Machine Learning experiments with AzureML, you must have a compute cluster to perform the computations.

When selecting the node configuration, you can go with either CPU-based nodes or GPU-based nodes. GPU-based nodes (NC type etc.) are a bit pricey. If you are not using GPU-based computing, don't waste your dollars by creating a compute cluster with fancy configurations.

One other key setting is 'Virtual machine priority'. If you are OK with pushing your experiment to the cloud and getting the result without a hurry, you can go with low-priority nodes, which will save you a lot of dollars compared to dedicated VMs. No harm is going to come to the experimentation accuracy and such.
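As a rough sketch with the Azure ML Python SDK (v1), a low-priority CPU cluster could be provisioned like this; the workspace config, cluster name and VM size are assumptions you would adjust to your own subscription.

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # assumes a config.json downloaded from the workspace

# Low-priority CPU cluster that scales down to zero nodes when idle
cluster_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",   # CPU-only SKU; pick a GPU SKU (NC series) only if you need it
    vm_priority="lowpriority",
    min_nodes=0,
    max_nodes=4,
)
cluster = ComputeTarget.create(ws, "cpu-cluster", cluster_config)
cluster.wait_for_completion(show_output=True)
```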

03. Inference clusters –

There are two options for deploying Azure Machine Learning web services as REST endpoints: 1) use ACI (Azure Container Instances), or 2) use AKS (Azure Kubernetes Service).

Deploying the REST web service on ACI is good for testing and development, while AKS would be the go-to for production-level, large deployments. You can configure the AKS cluster according to your needs through AzureML as well as from the Azure portal. These AKS clusters are pretty much the same as the AKS clusters you have worked with in any other Azure-based deployments.
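For illustration, a test deployment to ACI with the Azure ML Python SDK (v1) might look roughly like the following; the registered model name, scoring script and conda environment file are placeholders, not real artifacts from this article.

```python
from azureml.core import Workspace, Environment
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
model = Model(ws, name="salary-model")  # a previously registered model (placeholder name)

# Environment defined by a conda YAML file listing the inference dependencies (placeholder file)
env = Environment.from_conda_specification(name="inference-env", file_path="environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Small ACI container, fine for dev/test traffic
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "salary-service-test", [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```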

04. Attached compute –

Azure Machine Learning is not limited to doing computations on compute clusters. You can attach Azure Databricks, Data Lake Analytics, HDInsight or an existing VM as a compute target for your workspace. Keep in mind that Azure Machine Learning only supports virtual machines running Ubuntu. These compute targets will not be managed by Azure Machine Learning itself, so you may have to perform some additional steps to make sure they are compatible with your experiments.

Choosing the correct compute resource is a key factor in the success of machine learning experiments. On the other hand, bad computation choices may leave you with huge Azure bills! 😀

There are no hard-and-fast rules for selecting between the different compute options across your machine learning life cycle. Just make sure you use the right tool at the right time.

Zero-code Predictive Model Development with AutomatedML on Azure Machine Learning

Designing and implementing predictive experiments requires an understanding of the problem domain as well as knowledge of machine learning algorithms and methodologies. Extensive programming knowledge is a necessity when it comes to real-world machine learning model training and implementation.

Automated machine learning is capable of training and tuning a machine learning model for a given dataset and specified target metrics by selecting the appropriate algorithms and parameters on its own. Azure Machine Learning offers a user-friendly, wizard-like Automated ML feature for training and implementing predictive models without giving you the burden of algorithm and hyperparameter selection.

Azure Automated ML comes in handy where you want to implement a complete machine learning pipeline without a single line of code. It saves time and compute resources, since the model tuning is done following data science best practices.

Azure Machine Learning currently supports three types of machine learning use cases in its AutomatedML pipeline.

1. Classification – To predict one of several categories in the target column

2. Time series forecasting – To predict values based on time

3. Regression – To predict continuous numeric values

Let's go through the step-by-step process of developing a machine learning experiment pipeline with Azure Automated ML.

01. CREATE AN AZURE MACHINE LEARNING WORKSPACE

The Azure Machine Learning Workspace is the resource you create on Azure to perform all machine learning related activities in the cloud. The steps are straightforward, the same as creating any other Azure resource. Make sure the workspace edition you create is 'Enterprise', since AutoML is not available in the Basic edition.

02. CREATE AUTOMATEDML EXPERIMENT

Create AutomatedML experiment

The ml.azure.com web interface is the one-stop portal for accessing all the tools and services related to machine learning on Azure. You have to create a new Automated ML run by selecting Automated ML in the Author section of the left pane.

03. SELECT DATASET

Select dataset from the source

As of now, AutomatedML supports tabular data formats only. You can upload your dataset from local storage, import it from a registered datastore, fetch it from a web file or retrieve it from Azure open datasets.

04. CONFIGURE RUN

Configuring the Automated ML run

In this section you have to specify the target column of the experiment. If it's a classification task, this should be the column that indicates the class values; if it's regression, it's the column containing the numerical value to be predicted. Select a training cluster on which the experiments are going to run. Make sure you select a cluster that is sufficient for the complexity of the dataset you provided.

05. SELECT TASK TYPE AND SETTINGS

Select the task type

Select the task type that is appropriate for the dataset you selected. If you have textual data in your dataset, you can enable deep learning (which is in preview) to extract the features.

In the settings of the run, you can specify the evaluation metric, any algorithms that you don't want to use, the validation type, exit criteria etc. for the experiment. If you wish to select only a specific set of features from the provided dataset, you can configure that through the settings.

Configuring the evaluation metrics, algorithms to block, validation type, exit criterion

Running the experiment may take some time depending on the complexity of the dataset, the algorithms used and the exit criteria you specified.

When the run is completed, AzureML provides a summary of the run, indicating the best performing algorithm. You can directly deploy or download the best performing model as a .pkl file from the portal.

Details of the run after the completion

Deployment comes as a REST API which runs on Azure Kubernetes Service (AKS) or Azure Container Instances (ACI).
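If you ever prefer to kick off the same kind of run from code instead of the wizard, a hedged sketch with the SDK's `AutoMLConfig` (v1) might look like this; the dataset name, target column and compute cluster are placeholders.

```python
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
train_data = Dataset.get_by_name(ws, "training-data")  # a registered tabular dataset (placeholder)

automl_config = AutoMLConfig(
    task="classification",
    training_data=train_data,
    label_column_name="target",       # the target column of the experiment
    primary_metric="accuracy",
    compute_target="cpu-cluster",     # an existing compute cluster
    experiment_timeout_minutes=30,    # exit criterion
)

run = Experiment(ws, "automl-classification").submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()  # the best performing model once the run completes
```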

AutomatedML comes in handy when you need to do fast prototyping on a specific set of data, and it supports the agile process of intelligent application development. We will look at the other tools and features we have in the Azure AI stack in coming articles.

Reference : https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml

The Story of Deep Pan Pizza: AI Explained for Dummies

Artificial Intelligence, Machine Learning, Neural Networks, Deep Learning….

Most probably, the words above are the most widely used and widely discussed buzzwords today. Even the big companies use them to make their products appear more futuristic and "market candy" (like a certain 'tech giant' recently introducing something called a 'neural engine')!

Though AI and related buzzwords are so popular, there are still some misconceptions about their definitions. One thing you should clearly know is that AI, machine learning & deep learning deviate a lot from the field called "Big Data". It's true that some ML & DL experiments use big data for training… but keep in mind that handling big data and doing operations with big data is a separate discipline.

So, what is Artificial Intelligence?

“Artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals.” – Wikipedia

Simple as that. If a system has been developed to perform tasks that need human intelligence, such as visual perception, speech recognition or decision making, it can be defined as an intelligent system or a so-called AI!

The most famous “Turing Test” developed by Alan Turing (Yes. The Enigma guy in the Imitation Game movie!) proposed a way to evaluate the intelligent behavior of an AI system.


Turing Test

There are two closed rooms… let's say A & B. In room A we have a human, while in room B we have a system. The interrogator, person C, is given the task of identifying which room the human is in. C is limited to using written questions to make the determination. If C fails to do it, the computer in room B can be defined as an AI! Though this test is not that valid for the intelligent systems we have today, it gives a basic idea of what AI is.

Then Machine Learning?

Machine learning is a sub-component of AI that consists of methods and algorithms allowing computer systems to statistically learn patterns from data. Isn't that statistics? No. Machine learning doesn't rely on rule-based programming (meaning an if-else ladder is not ML 😀 ), whereas statistical modeling is mostly about formulating relationships between data in the form of mathematical equations.

There are many machine learning algorithms out there: SVMs, decision trees, unsupervised methods like K-means clustering, and the so-called neural networks.

That’s ma boy! Artificial Neural Networks?

Inspired by the neural networks we all have inside our bodies, artificial neural network systems "learn" to perform tasks by considering many examples. Simply put, we show a thousand images of cute cats to an ANN, and the next time the ANN sees a cat it is going to yell, "Hey, it seems like a cat!"

If you wanna know all the math and magic behind that… just Google! Tons of resources there.

Alright… then Deep Learning?

Yes! That's deep! Imagine the typical vanilla neural network as a thin crust pizza… It has an input layer (the crust), one or two hidden layers (the thinly soft part in the middle) and the output layer (the topping). When it comes to deep learning or deep neural networks, that's DEEP PAN PIZZA!


DNNs are just like Deep Pan Pizzas

Deep Neural Networks consist of many hidden layers between the input layer and the output layer. Not only typical propagation operations, but also some add-ins (like pineapple) in the middle. Pooling layers, activation functions…. MANY!

So, the CNNs… RNNs…

You can have many flavours of deep pan pizza! Some are good for spice lovers… some are good for meat lovers. Same with deep neural networks. Many good researchers have found interesting ways of connecting the hidden layers (or baking the yummy middle) of DNNs. Some of them are very good at image interpretation while others are good at predicting values that involve time or state. Convolutional Neural Networks and Recurrent Neural Networks are the most famous flavours of these deep pan pizzas!

These deep pan pizzas have proven that they are able to perform some tasks with close-to-human accuracy, and sometimes even with higher accuracy than humans!

Don’t panic! Robots would not invade the world soon…

 

Image Courtesy : DataScienceCentral | Wikipedia

One-Hot Encoding in Practice

Data is the king in machine learning. In the process of building machine learning models, data is used as the input features.

Input features come in all shapes and sizes. To build a predictive model with a better accuracy rate, we should understand the data as well as the logic behind the algorithm we are going to use to fit the model.

Data Understanding, the second step of CRISP-DM, guides us in understanding the types of data we get and the way they have been represented. We can distinguish three main kinds of data features.

  1. Quantitative data – Data with a numerical scale (age of a person in years, price of a house in dollars etc.)
  2. Ordinal features – Data without a scale but with an ordering (ordered sets / first, second, third etc.)
  3. Categorical features – Data with neither a numerical scale nor an ordering. These features don't allow any statistical summary. (Car manufacturer categories, civil status, n-grams in NLP etc.)

Most machine learning algorithms, such as linear regression, logistic regression, neural networks and support vector machines, work better with numerical features.

Quantitative features come with a numerical value and can be used directly (sometimes data preprocessing or normalization may be needed) as input features of ML algorithms.

Ordinal features can easily be represented as numbers (e.g. first = 1, second = 2, third = 3 …). This is called integer encoding. Representing ordinal features using numbers makes sense because the dependency between the representations can be notated in a numerical way.

There are some algorithms that can directly deal with joint discrete distributions, such as Markov chains, Naive Bayes, Bayesian networks, tree-based methods, etc. These algorithms can work with categorical data without any encoding, while for other ML algorithms we should encode the categorical features in a way that represents them numerically. That means it's better to convert categorical features to numerical ones most of the time 😊

There are some special cases too. For example, while naïve Bayes classification only really handles categorical features, many geometric models go in the other direction by only handling quantitative features.

How to convert categorical data to numerical data?

There are a few ways to convert categorical data to numerical data.

  • Dummy encoding
  • One-hot encoding / one-of-K scheme

are the most prominent among them.

One-hot encoding is the process of converting categorical features into numerical ones by performing a "binarization" of the category and including the result as features to train the model.

In mathematics, we can define one-hot encoding as…

One hot encoding transforms:

a single variable with n observations and d distinct values,

to

d binary variables with n observations each, each observation indicating the presence (1) or absence (0) of the corresponding distinct value.

Let's get this clear with an example. Suppose you have a 'flower' feature which can take the values 'daffodil', 'lily', and 'rose'. One-hot encoding converts the 'flower' feature into three features, 'is_daffodil', 'is_lily', and 'is_rose', all of which are binary.
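A minimal sketch of that flower example, assuming a pandas DataFrame; pandas' get_dummies (or scikit-learn's OneHotEncoder, which is handy inside ML pipelines) does the binarization for us.

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({"flower": ["daffodil", "lily", "rose", "lily"]})

# Option 1: pandas -- one binary column per distinct value (is_daffodil, is_lily, is_rose)
one_hot = pd.get_dummies(df["flower"], prefix="is")
print(one_hot)

# Option 2: scikit-learn -- the fitted encoder can be reused on new data at prediction time
encoder = OneHotEncoder()
encoded = encoder.fit_transform(df[["flower"]]).toarray()
print(encoder.categories_)
print(encoded)
```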

A common application of OHE is in Natural Language Processing (NLP), where it can be used to turn words into vectors easily. Here comes a con of OHE: the vector size may get very large with respect to the number of distinct values in the feature column. If there are only two distinct categories in the feature, there is no need to construct additional columns; you can just replace the feature column with a single Boolean column.


OHE in word vector representation

You can easily perform one-hot encoding in AzureML Studio by using the 'Convert to Indicator Values' module. The purpose of this module is to convert columns that contain categorical values into a series of binary indicator columns that can more easily be used as features in a machine learning model, which is the same thing that happens in OHE. Let's look at performing one-hot encoding using Python in the next article.

Mission Plan for building a Predictive model

When it comes to a machine learning or data science related problem, the most difficult part is often finding the best approach to tackle the task. Simply, getting an idea of where to start!

Cross-Industry Standard Process for Data Mining, commonly known by its acronym CRISP-DM, is a data mining process model that describes commonly used approaches data mining experts use to tackle problems. This process can easily be adapted for developing machine learning based predictive models as well.


CRISP-DM

No matter what tools/IDEs/languages you use for the process, you can adapt your tooling according to the requirements you have.

Let’s walk through each step of the CRISP-DM model to see how it can be adopted for building machine learning models.

Business Understanding –

This is the step where you need technical know-how as well as a little bit of knowledge about the problem domain. You should have a clear idea of what you are going to build and what the functional value of the prediction you are supposed to make through the model would be. You can use Decision Model & Notation (https://en.wikipedia.org/wiki/Decision_Model_and_Notation) to describe the business need of the predictive model. Sometimes the business need you have might be solvable using simple statistics rather than going for a machine learning model.

Identifying the data sources is a task you should do in this step. You should check whether the data sources are reliable, legal and ethical to use in your application.

Data Understanding –

I would suggest the following steps to get to know your data better.

  1. Data definition – A detailed description of each data field in the data source. The notation of the data points and the units in which the data points have been measured are the things you should consider.
  2. Data visualization – Hundreds or thousands of numerical data points may not give you a clear idea of what the data is about or about the shape of your data. You may be able to find interesting subsets of your data after visualizing it. It's really easy to see clustering patterns or the trending nature of the data in a visualized plot.
  3. Statistical analysis – Starting from simple statistical calculations such as the mean and median, you can calculate the correlation between data fields, which will help you to get a good idea of the data distribution. Feature engineering can increase the accuracy of the machine learning model, and for that a descriptive statistical analysis is a great asset. (A short sketch follows this list.)
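As a small illustration of points 2 and 3, assuming a pandas DataFrame read from a hypothetical CSV, histograms plus a correlation matrix already reveal a lot about the shape of the data.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("training_data.csv")  # hypothetical dataset

# Point 2: visualize the distribution of every numeric column
df.hist(figsize=(10, 8), bins=30)
plt.tight_layout()
plt.show()

# Point 3: simple statistics and pairwise correlations between numeric fields
print(df.describe())
print(df.select_dtypes("number").corr())
```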

For data understanding, the Interactive Data Exploration, Analysis and Reporting tool (IDEAR) can be used without the hassle of writing all the code from scratch. (Will discuss IDEAR at length soon.)

Data Preparation –

Data preparation takes roughly 80% of the time of the process, implying it's the most vital part of building predictive models.

This is the phase where you convert the raw data you got from the data sources into the final datasets that you use for building the ML models. Most data coming from raw sources like IoT sensors or similar collections is filled with outliers, missing values and disruptions. In the data preparation phase, you should apply data preprocessing tasks to make those data fields usable in modeling.
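For instance, a hedged sketch of handling missing values and outliers in sensor-like data, assuming a pandas DataFrame with a numeric `temperature` column (both the file and the column are made up):

```python
import pandas as pd

df = pd.read_csv("sensor_readings.csv")  # hypothetical raw IoT data

# Impute missing numeric readings with the column median
df["temperature"] = df["temperature"].fillna(df["temperature"].median())

# Clip extreme outliers to the 1st/99th percentiles instead of dropping the rows
low, high = df["temperature"].quantile([0.01, 0.99])
df["temperature"] = df["temperature"].clip(lower=low, upper=high)
```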

Modeling –

Modeling is the part where the algorithms come onto the scene. You train and fit your data to a particular predictive model to perform the desired prediction. You may sometimes need to check the math behind the algorithms to select the best algorithm and to avoid a model that overfits or underfits.
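As an illustrative sketch, cross-validating a couple of candidate models side by side is a cheap way to see which model family fits before worrying about fine-tuning; the feature matrix `X` and target `y` are assumed to come from the data preparation phase.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # X, y: prepared features and labels (assumed)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```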

Different modeling methods may need the data in different forms, so you may need to go back to the data preparation phase.

Evaluation –

Evaluation is a must before deploying a model. The objective of evaluating the model is to see whether the predictive model meets the business objectives that we figured out in the beginning. The evaluation can be done with many measures such as accuracy, AUC etc.

Evaluation may lead you to adjust the parameters of the model, and you might have to choose another algorithm that performs better. Don't expect the machine learning model to be 100% accurate. If it is 100%, it is most probably an overfitted case.
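A small sketch of that idea: compare the training and test scores and use a threshold-independent measure like AUC; a large gap between the two scores (or a suspiciously perfect training score) usually signals overfitting. The fitted `model` and the train/test splits are assumed to exist, and the AUC line assumes a binary classifier.

```python
from sklearn.metrics import roc_auc_score

train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])  # binary case

print(f"train accuracy: {train_acc:.3f}, test accuracy: {test_acc:.3f}, test AUC: {test_auc:.3f}")
# A big gap between train and test accuracy suggests the model is overfitted.
```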

Deployment –

Deployment of the machine learning model is the phase where the client or the end user is going to consume it. In most cases, the predictive model will be part of an intelligent application, acting as a service that receives a set of information and gives a prediction as its output.

I would suggest deploying the model as a single component, so that it's easy to scale as well as to maintain. APIs and Docker environments are some cool technologies that you can adopt for deploying machine learning models.

CRISP-DM won't do all the magic of producing a perfect model as the output, but it will definitely help you avoid ending up at a dead end.

Deploy Machine Learning Models in a Production environment as APIs (Python Flask + Visual Studio)

Intelligent application building basically consists of integrating machine learning based predictive components into apps and systems. Mostly, data scientists or AI engineers are accountable for building these machine learning models.

When it comes to integration and deployment in a production environment, the problem that occurs is platform dependency. Most data scientists and AI engineers are pretty comfortable with Python or R and develop their models with them, while the rest of the system may be a .NET or Java based application.

One of the best approaches to connecting these components is deploying the ML predictive module as a web API and calling the API from the application. When it comes to APIs, any programmer can work with them given the API definition.

Flask is a small and powerful web framework for Python. It's easy to learn and simple to use, enabling you to build your web app in a short amount of time. Visual Studio provides an easy way to create Python Flask web applications through its templates. Here are the steps I've gone through for deploying the ML experiment as a REST API.

01. Create the machine learning model, train, tune and evaluate it.

What I've done here is a simple linear regression for predicting the monthly salary according to the years of experience. The scikit-learn Python library has been used for performing the regression. The dataset used for the experiment is from SuperDataScience.

The code is available in the GitHub repository.

02. Creating the pickle

When you deploy the predictive model in a production environment, there is no need to train the model with code again and again. Python has a built-in method of persisting data called pickle. The pickle module can serialize objects or data into a file that we can save and load from; you can just use the pickle as a binary reference for generating the output. scikit-learn has its own model persistence method we will use: joblib. This is more efficient with scikit-learn models because it is better at handling the larger numpy arrays that may be stored in the models.
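A rough sketch of that step, assuming the regression model from step 01; the CSV file and its column names are assumptions based on the typical years-of-experience/salary dataset, so adjust them to your data. Fit once, dump the binary with joblib, and load it back at serving time without any re-training.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
import joblib

data = pd.read_csv("Salary_Data.csv")             # assumed columns: YearsExperience, Salary
X, y = data[["YearsExperience"]], data["Salary"]

model = LinearRegression().fit(X, y)
joblib.dump(model, "salary_model.pkl")            # persist the trained model as a binary

# Later, in the API process: no training code needed, just load the binary
model = joblib.load("salary_model.pkl")
print(model.predict([[5.0]]))                     # predicted salary for 5 years of experience
```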

03. Create a Python Flask web application.

Simply go to Visual Studio (I'm using VS2017, which comes with Python by default) and select a web project. The step-by-step guide is here. I would recommend going with option 2 mentioned in the blog because it reduces a lot of unnecessary overhead.

To be on the safe side, use Python virtual environments. It avoids many of the hassles that occur with library dependencies. I've used an Anaconda environment as the base of the virtual environment.


04. Create the API.

Create a new Python file in your project and set it as the startup file. (In my case MLService.py is the startup file which contains the API code.) The pickle file that contains the model binaries is the only dependency the API has when it is deployed.

Here the API operates through POST methods which accept the input as JSON.
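A stripped-down sketch of such a startup file; the route name, JSON field and pickle file name are placeholders for illustration, not the exact code from the repo.

```python
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("salary_model.pkl")  # the persisted model is the only dependency

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                      # e.g. {"years_experience": 5.0}
    years = float(payload["years_experience"])
    prediction = model.predict([[years]])[0]
    return jsonify({"predicted_salary": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```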

05. Run & Test

You can run the API and test it by sending POST requests to the URL with a JSON body. Here I've used Postman to send a POST request, and it gives me the predicted salary for the entered years of experience.


You can access the whole code of the project through my GitHub repo here.


    Do comment if you have any suggestions for changing the API structure.

Competing in Kaggle with Azure Machine Learning

Data science is one of the most trending buzzwords in the industry today. Obviously, you have to have a whole lot of experience with data analytics and an understanding of different data science related problems and their solutions to become a good data scientist.

Kaggle (www.kaggle.com) is a place where you can explore the possibilities of data science, machine learning and related fields. Kaggle is also known as "the home of data science" because of its rich content and the wide community behind it. You can find hundreds of interesting datasets uploaded by data science enthusiasts from all around the world on Kaggle. The most fascinating thing you can find on Kaggle is competitions! Some competitions come with exciting prize tags, while some offer wonderful job opportunities when you score a top rank.

As we discussed in previous posts, Azure Machine Learning enables you to deploy and test predictive analytics experiments easily. Sometimes you don't need to code a single line to develop a machine learning model. So let's start our journey on Kaggle with Azure Machine Learning.

01. Sign up for Kaggle – Go to kaggle.com & sign up using your Facebook/Google or LinkedIn account. It’s totally free! 🙂


Kaggle landing page

02. Register for a Kaggle competition – Under the competitions section, you can find many competitions. We'll start with a simple one that doesn't come with any prize tag or job offering but is worth trying out as your first experience on Kaggle.


Can you classify monsters?

03. Ghouls, Goblins, and Ghosts… Boo! Search for this competition, categorized under the 'Knowledge' sector of the competitions. The task you have to do in the competition is described precisely under 'Competition Details'.

04. Get the data – After accepting Kaggle's terms and conditions, you can download the training dataset, test dataset and the sample submission in .csv format. Make sure to take a close look at the features and understand whether you need some kind of data preprocessing before jumping into the task 😉

05. Understand the problem – You can easily figure out that this is a multi-class classification machine learning problem. So let's handle it that way!

06. Get the data to your Studio – Here comes Azure Machine Learning! Go to AML Studio (setting up Azure Machine Learning is discussed here) and upload the data files through the 'Add Files' option.

07. Build the classifier experiment – Same as building a normal AML experiment. Here I've split the training dataset to evaluate the model. The model with the highest accuracy was chosen to do the predictions. 'Tune Model Hyperparameters' was used to find the optimal model parameters.


Classifier Experiment

08. Do the prediction – Now it's time to use the trained model to predict the type of ghost using the data in the test dataset. You can download the predicted output using the 'Convert to CSV' module.


Predicting with the trained model

09. Submission – Make sure to create the output according to the sample submission.

10. Upload the submission to Kaggle –  You can compete as a team or individual. See where you are in the list!


Here I am, ranked 278th! 🙂

That's it! You've just completed your first Kaggle competition. This might not lift you to the top of the competitors list, but it shows that it's perfectly possible to use Azure Machine Learning for real-world machine learning problem solving.

 

Let’s Jump In! – Azure ML Part 01


“In the world of intelligent applications, data will be the king!” Regardless of the way they make their revenue, data has become the main asset of every company. Sales and distribution data, customer data repositories, employee records, all sorts of structured and unstructured data have become the lifeblood of a company's business processes, because accurate and relevant data is vital for making correct business decisions and relevant business-related predictions.

Digital data and cloud storage follow Moore’s law: the world’s data doubles every two years, while the cost of storing that data declines at roughly the same rate.


This abundance of data enables more features and tasks, and better machine learning models and methodologies need to be created for predictive analytics.

When the data is widely available in the cloud, and when processing and analyzing those data repositories needs large computation power and infrastructure, the best move is the cloud!

Machine learning (ML) is starting to move to the cloud, where a scalable web service is an API call away. Data scientists will no longer need to manage infrastructure or implement custom code. The systems will scale for them, generating new models on the fly, and delivering faster, more accurate results.

What is Machine Learning?

Simply put, machine learning is teaching the silicon chips to think! 😀 If we use the general definition: "Machine learning is the systematic study of algorithms and systems that improve their knowledge or performance with experience."

When you go through the theory behind machine learning, you may find it is closely related to computational statistics, where you use computers to make predictions. Machine learning covers a range of computing tasks that solve problems where designing and programming explicit algorithms is infeasible.

All of these things mean it’s possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results – even on a very large scale. The result? High-value predictions that can guide better decisions and smart actions in real time without human intervention.

Where the hell ML is used?

Did you notice that eBay pushes you to buy a protective glass after you buy a fancy phone case for your iPhone? That Netflix suggests movies for you? Siri or Cortana speech recognition? All these tiny miracles are possible thanks to the power of machine learning. Spam filtering in your emails, speech recognition and recommender systems in electronic commerce are some famous applications of machine learning.

So… how are we going to do it?

If you google or do a Bing search on machine learning, you'll find hundreds of ways of applying machine learning techniques in practical applications, and tools that we can use to create machine learning models.


Here’s a glimpse of Intelligent App Stack

In my post series, I am mainly going to take you on a journey with Azure Machine Learning Studio, which comes under the Cortana Intelligence Suite.

Why AzureML?


With advanced capabilities, free access, strong support for R, cloud hosting benefits, drag-and-drop development and many more features, Azure ML is ready to take the consumerization of ML to the next level.

It's as easy as ABC and powerful enough to handle petabytes of data with the power of Azure.

Theories??

Basics of computing and statistics will be useful going forward. It's fantastic if you have a rough idea about machine learning algorithms, data preparation methods and that kind of stuff. Don't worry. Here's a book to read! 🙂

So we'll take the first step into Azure ML in the coming post.

Part 02