Tips & Tricks for building a better LUIS model

Chatbots have become a ‘trend’ today. Everyone wants to attach a chatbot to their business, either on the public-facing side or as the interface to an internal business process. If you observe the social media handles of the major brands, you'll see they are using chatbots to communicate with their customers.

Building a bot from scratch is not an easy task, and there's no need to start from scratch! There are plenty of frameworks and language understanding services you can get the aid of.

Api.ai, Recast.ai, Motion.ai, rasa NLU, Wit.ai, Amazon Lex, Watson Conversation and LUIS.ai are some of the services you can use individually, or together with other channels and frameworks, to build your bot.

Here I've listed some of the best practices I use when creating a language understanding engine with LUIS.ai, Microsoft's NLU offering.

(To get started with LUIS.ai, refer to the documentation here – https://docs.microsoft.com/en-us/azure/cognitive-services/luis/home )

Narrow down the scope of the bot

Don't ever try to put up a bot with a wide scope. The first thing you should keep in mind is that LUIS doesn't generate answers to questions; it only routes the questions that come into the service to pre-defined answer lines. Although LUIS limits the number of intents in a single model to 80, try to keep the number of intents low, because fewer intents increase the probability of matching the right intent for the questions you ask. Too many intents may confuse the language understanding algorithm behind LUIS and lead your program to the wrong intent.
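On top of a narrow scope, your bot code can simply refuse to act on low-confidence predictions. Here's a minimal Python sketch, assuming the JSON shape returned by the LUIS endpoint (a `topScoringIntent` object with an `intent` name and a `score`); the threshold value is just a starting point to tune against your own test utterances.

```python
def is_confident(luis_result, threshold=0.5):
    """Accept a LUIS prediction only when the top intent clears the threshold.

    `luis_result` is the parsed JSON returned by the LUIS endpoint.
    """
    top = luis_result.get("topScoringIntent", {})
    return top.get("score", 0.0) >= threshold
```

If the check fails, hand the question to your default ("None") handler instead of guessing.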

Use a good naming convention for the intents and entities

When you plan a LUIS model, choose a good naming convention; otherwise it will be hard to refer to a particular intent from your code. Don't use lengthy intent names – just short, descriptive wording. Camel case or dot-separated phrases are a good practice.

As an example, here are some sample intents you could use for a pizza shop bot.

  • General.Introduction – Shows a general introduction to the pizza shop
  • General.ContactDetails – Shows the contact details
  • General.ShowPriceList – Displays the full price list of the pizza shop
  • Order.Pizza – Starts the pizza ordering process
  • Order.ConfirmOrder – Confirms the placed order
  • Order.CancelOrder – Cancels a placed pizza order
  • Feedback.ReceiveFeedback – Receives customer feedback

Using dot-separated phrases like this also groups related intents together in your LUIS dashboard.
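Short, dot-separated names also keep the dispatch code in your bot tidy. A minimal sketch (the handler functions and replies below are hypothetical, and the `topScoringIntent` field is the shape the LUIS endpoint returns):

```python
# Map each LUIS intent name to a handler; the handlers are illustrative only.
def show_introduction(result):
    return "Welcome to our pizza shop!"

def start_pizza_order(result):
    return "Sure, which pizza would you like?"

INTENT_HANDLERS = {
    "General.Introduction": show_introduction,
    "Order.Pizza": start_pizza_order,
}

def handle(luis_result):
    intent = luis_result["topScoringIntent"]["intent"]
    # Unknown intents (including "None") fall back to a default reply.
    handler = INTENT_HANDLERS.get(intent, lambda r: "Sorry, I didn't get that.")
    return handler(luis_result)
```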

Use Entities wisely

Entities let you change the answer of a particular intent dynamically, and they also let you reduce the number of intents. If you use dynamic SQL queries to fetch data for your bot, entities help you build those queries with the appropriate parameters.
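For example, a single Order.Pizza intent can cover every pizza on the menu if the pizza name arrives as an entity. Here's a minimal sketch, assuming the `entities` array in the LUIS response; the "PizzaName" entity and the `menu` table are hypothetical, and `conn` is any DB-API connection (e.g. sqlite3):

```python
def extract_entity(luis_result, entity_type):
    """Return the first value of the given entity type from a LUIS response."""
    for ent in luis_result.get("entities", []):
        if ent.get("type") == entity_type:
            return ent.get("entity")
    return None

def price_for(luis_result, conn):
    """Look up the price of the pizza mentioned in the utterance, if any."""
    pizza = extract_entity(luis_result, "PizzaName")  # hypothetical List entity
    if pizza is None:
        return None
    # Parameterized query: the entity value is never concatenated into the SQL.
    row = conn.execute("SELECT price FROM menu WHERE name = ?", (pizza,)).fetchone()
    return row[0] if row else None
```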

There are four types of custom entities: Simple, Composite, Hierarchical and List. If you want to identify a set of names in a question (names of pizzas, etc.), use the List type; it increases the discoverability of the entity. You can add up to 20,000 items to a single list. Never forget to add synonyms for the list items – the bot has to think in the user's language.


Adding an item to a List type entity

There is a set of pre-built entities you can use. If you want to extract a number, an age, a date or similar data from the question, don't hesitate to use them.


Pre-built Entities

Train the model with the maximum number of utterances that you can think of!

The programmer never knows how the end user is going to think. So, train each and every intent with as many utterances as you can think of.

Do check the model and improve it regularly

Even after publishing the LUIS model to production, make sure to check the suggested utterances and retrain the model with them assigned to the correct intents.

Beware! LUIS is not from your country!

LUIS may be capable of identifying the name of a city in your question where you want it as an entity value. Sadly, LUIS may not recognize the name of your hometown as a city if it's not as well known as Paris or New York! So, use the ‘Feature List’ for these kinds of cases. LUIS will learn that similar items in your feature list should be treated the same way. It doesn't do a strict mapping; it's just a hint for the algorithms behind the LUIS engine.

Do version control

Training the LUIS model with some utterances may lead it to make wrong predictions. Maintaining clones of previous versions helps you roll back and start again from the point where the model was right. Do clone the model when needed.

These are just a few small tips you can use when building your own LUIS model. Do comment with any best practice you find useful for building an accurate model for your bot.

 


Image Classification with CustomVision.ai

Extracting the teeny tiny features in images, feeding them into deep neural networks with many hidden layers, and granting silicon chips “eyes” to see has become a hot topic today. Computer vision has come a long way from the era of pattern recognition and hand-crafted feature engineering. With the advancement of machine learning algorithms combined with deep learning, understanding the content of images and using it in real-world applications has become a MUST more than a trend.

Recently, during the Microsoft Build 2017 conference, Microsoft announced a handy tool for training a machine learning image classification model to tag or label your own images. The most interesting part of this tool is that it provides an easy-to-use interface for uploading your own images to train the model.

After training and tuning the model, you can use it as a web service. Using the REST API, you just push a request to the web service and it does the magic for you.

I just did a tiny experiment with this tool by building an image classifier that classifies a few famous landmarks.

I used the following image set:

  • Eiffel tower – 6 images
  • Great wall – 11 images
  • KL tower – 7 images
  • Stonehenge – 7 images
  • Space Needle – 7 images
  • Taj Mahal – 7 images
  • Sigiriya – 8 images

Let’s get started!

Go to customvision.ai – just sign in with your email ID and you'll land on the “My Projects” page.

Fill in the name and description, and select the domain you're going to build the model for. Here I've selected Landmarks because the images I'm going to use contain landmarks and structural buildings.

I had the images of each landmark in separate folders on my local machine and uploaded them category by category. The system will detect if you upload duplicate images.

Altogether, 53 images with different tags were uploaded for training.

Training takes a few minutes. Optimize the probability threshold to get the best precision and recall, then grab the prediction URL. All you have to do is forward a JSON input to the Prediction API.
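Here's a minimal Python sketch of that call. The exact endpoint URL and key come from the “Prediction URL” dialog in the portal, and the field names may differ between API versions, so treat the values below as placeholders:

```python
import requests

# Both values are copied from the "Prediction URL" dialog at customvision.ai;
# the ones shown here are placeholders.
PREDICTION_URL = "https://<region>.api.cognitive.microsoft.com/customvision/v1.0/Prediction/<project-id>/url"
PREDICTION_KEY = "<your-prediction-key>"

def classify(image_url):
    response = requests.post(
        PREDICTION_URL,
        headers={"Prediction-Key": PREDICTION_KEY,
                 "Content-Type": "application/json"},
        json={"Url": image_url},
    )
    response.raise_for_status()
    # The service returns a probability per tag; keep the most likely one.
    predictions = response.json()["Predictions"]
    return max(predictions, key=lambda p: p["Probability"])

print(classify("https://example.com/some-landmark.jpg"))
```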

You can retrain the model by tagging the images used for testing. In a production environment, you can use the user inputs to make the prediction model more accurate. The retrained model appears as a new iteration, and you have the freedom to choose which iteration goes live behind the API.

You can quickly test how well the model you built is performing. Note that no ML model gives you 100% accuracy.


A prediction from the API

If you prefer to do this programmatically, or your application needs to do all the training and calling in the backend, just use the Custom Vision SDK.

https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/Custom-Vision-Service/csharp-tutorial.md

The SDK comes in pretty handy for creating new models, adding labels to the images, and training before publishing the prediction API.

Grab a set of images. Build a classifier or a tagger. Make your clients WOW! 😃

Azure ML Web Services gets a new look

There's a huge buzz going on about machine learning. What for? Building intelligent apps is one of the dominant uses of machine learning, and a web service is a “language” software developers understand. If data scientists can provide a web service for the line of devs, they'll be super excited, because they only have to deal with JSON; not regression algorithms or neural networks! 😀

Azure ML Studio gives you the power to deploy web services easily, with an interface a software developer can understand. Consuming a web service built with Azure Machine Learning is pretty easy because it even provides you with code samples and sample JSON for the requests and responses.
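As a rough Python sketch of what consuming such a service looks like (the endpoint URL, API key and column names below are placeholders; the request shape follows the sample code the Studio generates for request-response services):

```python
import requests

# URL and key come from the web service page in Azure ML Studio; the input
# columns below are placeholders for your own experiment's schema.
URL = ("https://<region>.services.azureml.net/workspaces/<workspace-id>"
       "/services/<service-id>/execute?api-version=2.0&details=true")
API_KEY = "<your-api-key>"

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["age", "income"],
            "Values": [["34", "58000"]],
        }
    },
    "GlobalParameters": {},
}

response = requests.post(
    URL,
    headers={"Authorization": "Bearer " + API_KEY,
             "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json()["Results"]["output1"])
```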


 

Recently, Azure ML Studio came out with a new interface for managing web services. Now it's pretty easy to manage and monitor the behavior of your web services.

Go to your ML Studio. In the web services section, you'll find a new link directing you to the “New web services experience”. Currently it's in preview.


New web services dashboard

 

The dashboard shows the performance of the web service you built, including the average execution time. You can even get a glimpse of the cost attached to consuming the web service.

Testing the web services can be done through the new portal. If you want to build a web application that consumes the web service you built, you can use the pre-built Azure web app template for consuming ML web services.

Take a look at http://services.azureml.net – you'll get used to it! 😀

 

 

Modules & Capabilities of Azure Machine Learning – Azure ML Part 03

Through the journey of getting familiar with Azure Machine Learning, Microsoft's cloud-based machine learning platform, we discussed the very first steps of getting started.
When you open up the online studio in your favorite web browser, you'll be directed to create a blank experiment. Let's start with it.

Blank Experiment in Azure ML Studio

On the left-hand side of the studio, you can see the pre-built modules you can use to develop your experiments. If they are not enough for your case, you can use R or Python scripts in your experiment.
With Azure ML Studio, you get the ability to build models for almost all machine learning problem types. The algorithms you can use for classification, regression and clustering are in the Azure ML cheat sheet, which you can download here: http://download.microsoft.com/download/A/6/1/A613E11E-8F9C-424A-B99D-65344785C288/microsoft-machine-learning-algorithm-cheat-sheet-v6.pdf

Let's take a look at the sections the modules are categorized into. If you want to find a specific module, just search for it from the search box.

Saved datasets – You can find a set of sample datasets to use in your experiments. Most of the popular machine-learning datasets, like the iris dataset, are available here. If you want your own dataset in the studio, you can upload it here.

Trained models – These are the models you get as output after training on data with an appropriate algorithm and methodology. They can be used to build another experiment or a web service later.

Data Format Conversions – The data coming into and going out of an experiment can be converted to a desired format using the modules in this section. If you wish to convert the output of your experiment to ARFF format (which is supported by Weka) or to a CSV file, you can use the modules here.

Data Input & Output – Azure ML has the ability to get data from various sources directly. You can use an Azure SQL database, Azure Blob storage or a Hive query to get the data. Fetching data from an on-premises SQL Server is still in preview (August 2016).

Data Transformation – Data transformation tasks like normalization, clipping etc. can be done using the modules listed in this section. You can also use SQL queries to do the data transformations if you want.

Feature Selection – Appropriate feature selection can increase the accuracy of your machine learning model drastically. There are three methods – Filter-Based Feature Selection, Fisher Linear Discriminant Analysis and Permutation Feature Importance – that you can use according to your requirement.

Machine Learning – In this section you can find the modules built for training machine learning models, evaluating accuracy and so on. Most of the popular machine learning algorithms used for classification, clustering and regression problems are listed here as modules. The parameters of each module can be changed, or you can use the Tune Model Hyperparameters module to tune the experiment for the optimal output.

OpenCV Library Modules – ML is widely used in image recognition. Azure ML includes a Pretrained Cascade Image Classification module that is trained to identify images containing front-facing human faces.

Python Language Modules – Python is one of the most widely used languages in data mining and machine learning applications. With Azure ML Studio you have the ability to execute your own Python script using this module (a minimal sketch of such a script appears after this list of modules). 200+ common Python libraries are supported in Azure ML right now.

R Language Modules – Just like Python, R is one of the favorite statistical languages among data scientists. You can use your favorite R scripts and train models with R using these modules. Most R packages are supported in Azure ML, and if a package is not there you can import it for the experiment. (Unfortunately there are some limitations: some R packages like rJava and openNLP are not supported yet in Azure ML – Aug. 2016.)

Statistical Functions – If you want to apply mathematical functions to the data or perform statistical operations, here you can find the modules for that. A basic descriptive statistical analysis of the dataset can also be performed using these modules.

Text Analytics – Machine learning models can be used for text analytics. There are modules included in Azure ML Studio for text preprocessing (removing stop words, punctuation marks, white space etc.), named entity recognition (a pre-trained module) and many more. The Vowpal Wabbit learning system library is also included in the modules.

Web Service – One of the most notable advantages of Azure ML is the ability to deploy a model as a web service. Here are the web service input and output modules that can be used in your experiments.

Deprecated – Assigning data to clusters, binning, quantizing data and cleansing missing data can be done using these older modules.
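As promised under the Python Language Modules above, here's a minimal sketch of a script for the Execute Python Script module. The `azureml_main` entry point is what the module expects; the column names are placeholders for a hypothetical dataset.

```python
# Entry point expected by the Execute Python Script module: it receives up to
# two pandas DataFrames from the module's input ports and must return a tuple
# containing the DataFrame that goes to the output port.
def azureml_main(dataframe1=None, dataframe2=None):
    # Illustrative only: 'quantity' and 'unit_price' are placeholder columns
    # from a hypothetical dataset.
    dataframe1["total"] = dataframe1["quantity"] * dataframe1["unit_price"]
    return (dataframe1,)
```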

Building Azure ML experiments and deploying web applications using them is not that hard.

This is one of the best step-by-step guides for that task, from MSDN.

In the coming posts we'll discuss interesting applications and Azure ML hacks to build your predictive models.
Play with the tool and leave your experience in the comments below. 🙂

  

Behind the Scene – Azure ML Part 02

With the power of the cloud, we're going to play with data now! 🙂

Machine learning is a niche part of predictive analysis. Predictive analysis gets its power from tools and techniques like mathematics, statistics, data mining, machine learning and so on. Predictive analysis doesn't refer only to predicting future events; real-time detection of fraudulent credit card transactions also falls under predictive analysis.

I'm not going to discuss the uses of machine learning or what you can do with machine learning methods. Let's see what benefits you get by using Azure ML Studio for your analysis.

Fully managed, scalable cloud service – You have to deal with thousands, often millions, of data records when you do your analysis. The computation power of a local machine may not be sufficient for those kinds of mammoth tasks. Make use of Azure's scalable and efficient cloud; it will make your predictions super fast.

Ability to develop & deploy – Want to build an application that gets its intelligence from an ML backend? Azure ML Studio is the best solution then. It gives you the ability to easily deploy a web service from the ML model you built and use it in your application. REST will do the rest. 🙂

Friendly user interface for the data science workflow – I'm pretty sure dragging and dropping is your ‘thing’, right? Then AML Studio suits you! 😀 From data loading to deployment of the web service, you get a friendly UI where you can mostly just drag and drop the modules into the workspace without bothering about the underlying complex algorithms.

Wide range of ML algorithms built in – No need to start from scratch. There are plenty of ML algorithms pre-built as modules in AML Studio. You can use them right away for building models.

R & Python integration – For data scientists, R and Python are like lifeblood. If you wish to integrate your own scripts into the model, with AML Studio you have the chance. You can choose R, Python or both; AML Studio takes care of it.

Support for R libraries – R has a vibrant user community and a rich set of libraries. With AML Studio you get access to most of the R libraries, and you can add more libraries if you want to.


Azure Machine Learning Process

Let's go through the process. It all starts with defining the objective. Before jumping into the problem, you should have a clear idea of what you are going to do. Whether it's classification, linear regression or recommendation, you should be able to figure it out by skimming through the data sources and the problem definition.

 

Then the data! The data may be a set of sales records in your enterprise cloud or in your local storage. Identify the relevant data fields and components you want for building the model. If the dataset exceeds 10 GB, it's better to store the data in an Azure SQL database first and get it through the ‘Import Data’ module. You can also use data stored in HDInsight via Hive queries.

Pay attention to data quality. Normally, real-world data is noisy and full of outliers, error values, missing values etc., so data preprocessing should be done first. Make sure the data fields are of the appropriate type (numerical, categorical, etc.). In Azure ML there are plenty of modules with which you can perform data preprocessing tasks.

Model development! Here's the fun part. You can use the ML algorithms that come with the Studio, or go with your own scripts in R or Python. If you are familiar with ML model development platforms like Weka, RapidMiner or Orange, you will find this is not so different. You have to put the right module in the right place and use the right algorithm to make the right decision.

After developing the model, we normally have to train it. For that you can use the past data that you have. Always keep a portion of your dataset aside for testing the model too.

Is it over after training the model? No, there's more to the process. You should score and evaluate the model you built. It's useless if the predictions made with the model have a high error rate; you may not have used the appropriate algorithm, or the correct and optimal parameters. So, using the ‘Score Model’ and ‘Evaluate Model’ modules, you can compare different algorithms for the particular task and pick the best one.

It's obvious that ML algorithms are never 100% accurate. But the model you build should be more accurate than a wild guess.

After building your predictive magic box, you can publish it as a web service. This allows you to consume it from a custom application, Microsoft Excel or a similar tool.

For better accuracy, this process normally runs in an iterative manner.

Enough of the theory – let's get our hands dirty with our experiments!

Simply put, there are 3 steps to start working with Azure ML:

  1. Navigate to AzureML and choose your subscription plan
  2. Create a Machine Learning workspace in Azure Portal
  3. Sign in to ML Studio

Step 01 – Go to http://www.azure.com and navigate to Products -> Analytics -> Machine Learning

You can use Azure ML absolutely for free, but if you want to deploy a web service and play with serious tasks, you have to go for an appropriate subscription. If you have an MSDN subscription, you can use it here 🙂


Azure ML subscriptions

Step 02 – You need an Azure account here. If you don't have one, go for the 3-month free trial.

In the portal, go to New -> Data + Analytics -> Machine Learning

From there you can create your workspace to do the machine learning tasks.

Step 03 – Sign in to the Azure ML Studio from https://studio.azureml.net

Now you are there! Click on New -> Blank Experiment!

We are ready to start now.

The GUI of the AML Studio is pretty clear and easy to understand. Try to find the place to upload datasets and the modules that contain the ML algorithms in the pane on the left-hand side.

We'll explore some cool capabilities of Azure ML in the coming posts. Here's a video for your motivation.

Part 01

Microsoft Project 2016 Is Here!!!

With the Microsoft Office 2016 release, the ultimate project management client, Microsoft Project, also got an update. There's more fluidity in the interface, and the cool new Office 2016 feature “Tell Me” is integrated with Project. It helps you do the tasks you want much more easily.

It is said that Office Apps now work with Microsoft Project. If you write an app for Project, it can now update the file – way more useful and powerful!

More flexible timelines: Project 2016 supports multiple timelines in a single project!

There's much more to discover. I'll update you with the useful features in Project 2016 soon.

Here's the official Office support article on the new features in Microsoft Project 2016.

Microsoft HoloLens – Future Is Here


Microsoft HoloLens

Reporting from the Imagine Cup finals at the Microsoft Head Office in Redmond, WA (near Seattle). Yesterday I was fortunate enough to walk into the Microsoft HoloLens Academy! Yeah! It was like jumping into science fiction.

As Microsoft says “Microsoft HoloLens is the first fully untethered, see-through holographic computer. It enables high-definition holograms to come to life in your world, seamlessly integrating with your physical places, spaces, and things. We call this experience mixed reality. Holograms mixed with your real world will unlock all-new ways to create, communicate, work, and play.”

It's more than augmented reality. HoloLens was announced back in January, and Microsoft is still working to enhance its user experience.

Satya Nadella, CEO of Microsoft, said “think of it as Kinect++, and then square it” 😀 That was his explanation of this cool device.

I walked into the HoloLens Academy to see some magic. 😀 It was restricted to carry electronic devices or mobiles into the academy – almost NOTHING was allowed. Company policies 😉 But I was not disappointed.

The first thing I saw was a set of computers running Windows 10 on desks. I turned around, and there was the HoloLens – a thing much like a device we've seen in science fiction.

It was not hard to wear the device, though it was a bit… heavy (I'm sure Microsoft will solve this tiny issue). No wires attached, and it was able to create holograms that blended perfectly with my surrounding environment.


Microsoft HoloLens Experience

Think of a Minecraft game with augmented reality: the characters running on your own room floor… on your sofa! That's mind-blowing, guys. I bet that with Microsoft HoloLens, gaming culture will enter a new era. Gamers will love to bring their surroundings into the gaming environment.

Designers and researchers will get the ability to interact with the virtual objects they have created using gestures. Unlimited freedom on a 3D canvas!

My first experience with this cool gadget was a little red jeep that comes to wherever you gaze! Then two paper balls rolling on the actual floor, obeying all the laws of physics.

So guys, programming for HoloLens… yeah, it's not rocket science. Keep your eyes open and polish your skills with the Unity game engine. 😉 It'll help you get into the pool!

Keep calm… You’ll experience the magic! 😀