What’s next after ChatGPT?

A Large Language Model as a Van Gogh Artwork (Dall-E2)

The hype around Generative AI is still going strong. Everyone is looking for applications of GenAI and investing in AI-based interventions to gain a competitive advantage for their businesses. Here are a couple of questions I came across recently, along with my views on LLMs and their future.

Can we achieve everything with LLMs now?

The straightforward answer is NO! LLMs can’t handle all tasks; they are best suited as tools for natural language processing tasks, particularly information retrieval and conversational applications. Despite their strengths, plenty of simpler approaches still find practical use in real-world scenarios. In simpler terms, LLMs have their unique use cases, but they can’t do everything like a wizard!

Are LLMs taking over the tech world?

Is this the end of traditional ML? Not at all. As mentioned earlier, LLMs don’t cover all machine learning tasks. Most data analytical and machine learning use cases involve numerical data often organized in a relational structure, where traditional machine learning algorithms excel. Traditional machine learning techniques are expected to remain relevant for the foreseeable future.

Artificial General Intelligence (AGI)?

Have we reached it? Artificial General Intelligence (AGI) envisions AI systems with human-like abilities across a wide variety of tasks. However, we are not there yet. While some level of AGI may eventually be achievable, current LLMs, including applications like ChatGPT, should not be confused with AGI. LLMs are proficient at predicting likely text continuations using transformers but struggle with complex analytical tasks where human expertise is crucial.

Are enterprises ready for the AI hype?

Having worked with numerous enterprises, I’ve observed a willingness to invest in AI projects for streamlining business processes. However, many struggle to identify suitable use cases with a considerable Return on Investment (RoI). Some organizations, even if prepared for advanced analytics, face extensive groundwork in their IT and data infrastructure. Despite these challenges, the AI hype has prompted businesses to recognize the potential of leveraging organizational data resources effectively. In the coming months, we anticipate a significant boost not only in LLM-based applications but also in traditional machine learning and deep learning applications across industries.

Ethical AI? What’s happening there?

With the public’s increasing adoption of ChatGPT and large language models, conversations about responsible AI use have gained traction. The European Union has passed pioneering AI legislation, while Australia is actively working on regulating AI systems, has introduced an AI ethics framework, and has established a National AI Centre to promote responsible AI practices and innovation.

Leading companies like Microsoft are contributing to responsible AI by introducing guidelines and toolboxes for transparent machine learning application development. Governments and corporations are moving towards regulating and controlling AI applications, a positive development in ensuring responsible AI use.

There’s no turning back now. We must all adapt to the next wave of AI and prepare to harness its full potential.

Using PySpark’s PartitionBy Function for String Column in Data Partitioning

In the previous blog post, we looked into data partitioning in Spark applications and the benefits it brings to the table. An essential consideration for data partitioning is deciding which data field or column to employ as the basis for partitioning your data.

When your data includes categorical or temporal columns, the choice for a partitioning column is relatively straightforward. However, what if you find yourself faced with a string column that needs to serve as the partitioning factor? It doesn’t make sense to partition data based on a multitude of distinct string values, and this approach won’t necessarily lead to optimal Spark performance.

There’s a clever technique that can be employed to partition Spark DataFrames based on the values within a string column. It involves using the first letter of each string value as the partitioning criterion. This method not only simplifies the partitioning process but can also enhance the efficiency of your Spark operations.

Here’s a simple example of partitioning data based on the first letter of a string column.

from pyspark.sql import SparkSession
from pyspark.sql.functions import regexp_replace

# get or create the Spark session (already available in Synapse/Databricks notebooks)
spark = SparkSession.builder.getOrCreate()

# create a list of tuples containing data
data = [("Alice", 25), ("Bob", 30), ("Charlie", 35),
        ("Dave", 40), ("Eliza", 23), ("'Owen", 21)]

# create a PySpark DataFrame from the list of tuples
df = spark.createDataFrame(data, ["Name", "Age"])

# Create the partitioning column and add it to the DataFrame
df = df.withColumn("Name_first_letter", df.Name.substr(1, 1))

# Remove non-alphanumeric characters from the Name_first_letter column
# This ensures that the partitioning column is a valid directory name
df = df.withColumn("Name_first_letter",
                   regexp_replace("Name_first_letter", "[^a-zA-Z0-9]", "_"))

# Write the PySpark DataFrame to a Parquet file
# partitioned by Name_first_letter
df.write.partitionBy("Name_first_letter").parquet("data/output/df_parquet")

Partitioning a PySpark DataFrame by the first letter of the values in a string column can have several advantages, depending on your specific use case and data distribution. Here are some potential advantages:

  • Improved Query Performance: Partitioning can significantly improve query performance when you often filter, or aggregate data based on the first letter of the string column. By partitioning the data by the first letter, you can avoid scanning the entire DataFrame for each query, and instead, the query engine can efficiently skip irrelevant partitions.
  • Faster Data Retrieval: When you’re looking for specific records that start with a particular letter, partitioning makes it faster to retrieve those records. It reduces the amount of data that needs to be read and processed.
  • Parallelism: Partitioning allows for better parallelism in data processing. Different tasks can work on different partitions concurrently, which can lead to improved overall performance.
  • Reduced Shuffling: Partitioning can reduce data shuffling during operations like joins and aggregations.
  • Easier Data Maintenance: If your data frequently changes, partitioning can make it easier to update or append data to specific partitions without affecting the entire dataset. This can be particularly useful for scenarios where you have a large historical dataset and regularly receive new data.

It’s worth noting that while partitioning can offer these advantages, it also comes with some trade-offs. Partitioning can increase storage overhead because it creates multiple directories/files on the storage system. Additionally, it requires careful planning to choose the right partitioning strategy based on your query patterns and data characteristics.  
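To see the partition pruning advantage in practice, here’s a minimal sketch (assuming the same output path as in the example above) that reads the partitioned data back and filters on the partition column, so Spark only scans the matching first-letter directory instead of the whole dataset.

# Read the partitioned Parquet data back; Spark discovers the
# Name_first_letter partition column from the directory structure
names_df = spark.read.parquet("data/output/df_parquet")

# Filtering on the partition column lets Spark prune every other
# first-letter directory and only scan Name_first_letter=A
a_names = names_df.filter(names_df.Name_first_letter == "A")
a_names.show()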

Optimising Spark Analytics: The PartitionBy Function

Apache Spark is an open-source, distributed computing framework designed for big data processing and analytics. Spark was created to address the shortcomings of MapReduce in terms of ease of use and versatility.

Recently, I have been using the Spark framework (through the PySpark API) on Azure Synapse Analytics extensively for analysing bulk retail data. Following good optimisation practices makes a huge difference in runtimes when it comes to Spark queries.

Among several Spark optimisation methods, I thought of discussing the importance of data partitioning in Spark and ways we can partition data.

Most enterprises are adopting a data lakehouse architecture today, and it’s common for most of the data to be stored in Parquet format. Parquet is a columnar storage file format designed to optimise performance when dealing with large analytical workloads.

In some experiments I did with large datasets, I found that the “partitionBy” option with Parquet in Spark makes a huge difference in query performance in most use cases. It specifies how the data should be organised within the storage system based on one or more columns in your dataset.

Using “partitionBy” column values has several advantages.

  • Improved Query Performance: When you partition data by specific column values, Spark can take advantage of partition pruning during query execution. This means that when you run a query that filters or selects data based on the partitioned column, Spark can skip reading irrelevant partitions, significantly reducing the amount of data that needs to be processed. This results in faster query execution.
  • Optimized Joins: If you often join tables based on the partitioned column, partitioning by that column can lead to more efficient join operations. Spark can perform joins on data that is collocated in the same partitions, minimizing data shuffling and reducing the overall execution time of the join operations.
  • Reduced Data Skew: Data skew occurs when certain values in a column are much more frequent than others. By partitioning data, you can distribute the skewed values across multiple partitions, preventing a single partition from becoming a bottleneck during processing.
  • Simplified Data Management: Each partition represents a distinct subset of your data based on the partitioned column’s values. This organization simplifies tasks such as data backup, archiving, and deletion, as you can target specific partitions rather than working with entire datasets.
  • Enhanced Parallelism: Different partitions can be processed concurrently by separate tasks or executors, maximizing resource utilization and speeding up data processing tasks.
  • Facilitates Time-Based Queries: If you have time-series data, partitioning by a timestamp or date column can be especially useful. It allows you to efficiently perform time-based queries, such as retrieving data for a specific date range, by only reading the relevant partitions.
  • Easier Maintenance: Partitioning can simplify data maintenance tasks, like adding new data or updating existing data. You can easily add new partitions or drop old ones without affecting the entire dataset.

Let’s walk through a practical experience to understand the advantages of using partitionBy column values in Spark.

Scenario: Imagine you’re working with a large dataset of sales transactions from an e-commerce platform. Each transaction has various attributes, including a timestamp representing the transaction date. You want to analyse the data to find out which products sold the most during a specific date range.

Step 1 – Partitioning by Date: First, partition the sales data by the transaction date using the partitionBy command in Spark when writing the data to the storage. Each partition represents a specific date, making it easy to organize and query the data.

# Specify the partition column
partition_column = "transaction_date"

# Write the DataFrame to Parquet with partitionBy
df.write.partitionBy(partition_column).parquet("/path/to/output/directory")

Step 2 – Query for Date Range: Let’s say you want to analyse sales for a specific date range. When you run your PySpark query, it automatically prunes irrelevant partitions and only reads the partitions that contain data for the selected date range.

# Read the partitioned data from Parquet
from datetime import date

partitioned_df = spark.read.parquet("/path/to/output/directory")

# Query data for a specific date
specific_date = date(2023, 7, 1)
result = partitioned_df.filter(partitioned_df[partition_column] == specific_date)

result.show()
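Since the scenario talks about a date range rather than a single date, here’s a sketch of a range filter as well; it assumes the partition values were written in yyyy-MM-dd form so that pruning still applies.

# Query data for a date range; partitions outside July 2023 are pruned
range_result = partitioned_df.filter(
    partitioned_df[partition_column].between("2023-07-01", "2023-07-31")
)
range_result.show()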

In summary, using partitionBy column values in Spark can significantly enhance query performance, optimize joins, improve data organization and management, and provide advantages in terms of data skew handling and parallelism. It’s a powerful technique for optimizing Spark jobs, particularly when dealing with large and diverse datasets.

Share your experience and approaches you use when it comes to optimising Spark query performance.

Empowering Conversations with ChatGPT: The Art of Prompt Engineering

The release of ChatGPT in November 2022 gave a mega boost to the world of AI. With that, everyone started playing around with Large Language Models (LLMs) and exploring their possibilities.

While opening up interesting possibilities, new approaches and methodologies for interacting with LLMs came into play. Prompt engineering is one of the major domains even the general public is interested in right now.

To get myself familiar with the context and not feel left behind, I followed several online resources, including Prompt Engineering for Developers from Deeplearning.ai. Here are some interesting points that caught my attention. This brief would be helpful for anyone who’s using ChatGPT or any LLM, not only in development scenarios but also for general daily chaos.

What’s actually Prompt Engineering?

Without any hesitation, I asked this question from ChatGPT 😀

Prompt engineering refers to the process of designing and refining prompts to elicit desired responses from a language model like ChatGPT. It involves crafting the initial instruction or query in a way that effectively communicates the desired task or information to the model. Prompt engineering can significantly influence the quality and relevance of the model’s responses.

ChatGPT

In simple terms,

Prompt engineering is the methodology of giving clear instructions to LLMs. It helps the language model understand the instructions clearly and provide a more precise output.

LLMs in the GPT family (GPT-3, GPT-3.5, GPT-4, etc.) are trained to predict the next word in a given sequence. Given that, the instructions we provide to the model should be specific and understandable. There are no hard-and-fast rules for writing prompts, but the more precise, the better.

There are two key principles we should keep in mind when prompting.

  1. Write clear and specific instructions.
  2. Give the model time to “think”.

Be clear! Be Precise!

There are many tactics we can follow in order to make our instructions clear and easy to understand for an LLM.

Use delimiters to indicate distinct parts of the input

Let’s take the example of using a GPT model to summarize a particular text. It’s always better to clearly indicate which part of the prompt is the instruction and which is the actual text to be summarized. You can use any delimiter you feel comfortable with; here I’m using double quotes to mark the text to be summarized.
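The original prompt was shown as a screenshot; here’s a minimal sketch of the same tactic in Python. The text and wording are illustrative, and the double quotes simply mark which part of the prompt is the content to be summarized.

text = (
    "Sunny Sands Hotel sits right by the beach in Galle. The rooms are "
    "clean, the staff are friendly, and the food is excellent."
)

# The instruction comes first; the text to summarize is wrapped in
# double quotes so the model can't confuse it with the instruction.
prompt = (
    "Summarize the text delimited by double quotes in a single sentence.\n"
    f'"{text}"'
)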

Ask for structured outputs

When it comes to LLM-aided application development, we use OpenAI APIs to perform several natural language processing tasks. For example, we can use GPT models to extract key entities and the sentiment from a set of product reviews. When using the output of the model in a software system, it’s much easier to work with output in a structured format such as a JSON object or HTML.

Here’s a prompt that takes feedback for a hotel as the input and gives a structured JSON output. This comes in handy in many analytics scenarios and when integrating OpenAI APIs into production environments.
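As an illustrative sketch (the field names and feedback text here are my own, not from the original screenshot), the prompt can explicitly ask for a JSON object with named keys so the response is easy to parse downstream.

feedback = "The pool was great, but check-in was slow and the room smelled musty."

prompt = (
    "Extract the key entities and the overall sentiment from the hotel "
    "feedback delimited by double quotes. Respond only with a JSON object "
    'with the keys "entities" (a list of strings) and "sentiment" '
    '("positive", "negative" or "neutral").\n'
    f'"{feedback}"'
)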

It’s always good to go step by step

Even for humans, it’s better to provide instructions in steps to complete a particular task. It works well with LLMs too. Here’s an example of performing a summarization and two translations of a customer review through a single prompt. Observe the structure of the prompt and how it guides the LLM to the desired output; make sure you construct your prompt as procedural instructions.
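The prompt itself appeared as a screenshot in the original post; the sketch below is a hedged reconstruction of what such a procedural prompt could look like, written to produce a JSON object similar to the output shown next.

review = (
    "I recently stayed at Sunny Sands Hotel in Galle, Sri Lanka, and it was "
    "amazing! The hotel is right by the beautiful beach, and the rooms were "
    "clean and comfy. The staff was friendly, and the food was delicious."
)

# Procedural instructions: each numbered step guides the model towards
# the structured output we want.
prompt = (
    "Perform the following actions on the customer review delimited by "
    "double quotes:\n"
    "1 - Summarize the review in one sentence.\n"
    "2 - Translate the summary into French.\n"
    "3 - Translate the summary into Spanish.\n"
    "4 - Output a JSON object with the keys: review, English_summary, "
    "French_summary, Spanish_summary.\n"
    f'"{review}"'
)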

Here’s the output in JSON.

{
  "review": "I recently stayed at Sunny Sands Hotel in Galle, Sri Lanka, and it was amazing! The hotel is right by the beautiful beach, and the rooms were clean and comfy. The staff was friendly, and the food was delicious. I also loved the pool and the convenient location near tourist attractions. Highly recommend it for a memorable stay in Sri Lanka!",
  "English_summary": "A delightful stay at Sunny Sands Hotel in Galle, Sri Lanka, with its beautiful beachfront location, clean and comfortable rooms, friendly staff, delicious food, lovely pool, and convenient proximity to tourist attractions.",
  "French_summary": "Un séjour enchanté à l'hôtel Sunny Sands à Galle, Sri Lanka, avec son emplacement magnifique en bord de plage, ses chambres propres et confortables, son personnel amical, sa délicieuse cuisine, sa belle piscine et sa proximité pratique des attractions touristiques.",
  "Spanish_summary": "Una estancia encantadora en el hotel Sunny Sands en Galle, Sri Lanka, con su hermosa ubicación frente a la playa, habitaciones limpias y cómodas, personal amable, comida deliciosa, hermosa piscina y ubicación conveniente cerca de atracciones turísticas."
}

Hallucination is one of the major limitations of LLMs. It occurs when the model produces coherent-sounding, verbose but inaccurate information due to a lack of understanding of cause and effect. Following proper prompting methodologies and tactics can prevent hallucinations to some extent, but not 100%.

It’s always the developer’s responsibility to build AI systems that follow responsible AI principles and to stay accountable for the process.

Feel free to share your experiences with prompt engineering and how you are using LLMs in your development scenarios.

Happy coding!

Do we really need AI?

DALL-E Artwork

Since the launch of ChatGPT last November, not only the tech community but also the general public has started peeping into the world of AI. As mentioned in my article “AI Summer is Here”, organisations are looking for avenues where they can use the power of AI and advanced analytics to empower their business processes and gain competitive advantage.

Though everyone is looking forward to using AI, are we really ready? Are we there?

These are my thoughts on the pathway an organisation may follow to adopt AI in their business processes with a sensible return on investment.

First of all, there’s a key concept we should keep in mind: “AI is not a wizard or a magical thing that can do everything.” It’s a man-made concept built upon mathematics, statistics and computer science which we can use as a toolset for certain tasks.

We want to use AI! We want to do something with it! OR We want to do everything with it!

Hold on… Though there’s a ‘trend’ for AI, you should not jump at it without knowing anything or without analysing your business use cases thoroughly. You should first identify what value the organisation is going to gain from using AI or any advanced analytics capability. Most likely you can’t do everything with AI (yet). It’s all about identifying the correct use case and the correct approach that aligns with your business process.

Let’s not focus on doing something with AI. Let’s focus on doing the right thing with it.

We have a lot of data! So, we are there, right?

Data is the key asset we have when it comes to any analytical use case. Most organisations have been collecting data from their processes since day 1. The problem lies in the way data is managed and how data assets and the platform are maintained. Some may have a proper data warehouse or lakehouse architecture that has been properly managed with CI/CD concepts, but some may have spreadsheets sitting on a local computer which they call their “data”!

The very first thing an organisation should do before moving into advanced analytics is streamlining its data platform. Implementing a proper data warehouse or data lake architecture, following something similar to the Medallion architecture, is essential before moving into any analytics workloads.

If the organisation is growing and has a plan to use machine learning and data science in a broader perspective, it is strongly recommended to enable MLOps capabilities within the organisation. That would provide a clear platform for model development, maintenance and monitoring.

Having a lot of data doesn’t mean you are right on track. Clearing out the path and streamlining the data management process is essential.

Do we really need to use AI or advance analytics?

This question is real! I have seen many cases where people tend to invest in advanced analytics even before getting their business processes aligned with modern infrastructure needs. It’s not something you can just enable with the click of a button. Before investing your time and money in AI, first make sure your IT infrastructure, data platforms, workforce and IT processes are up to date and ready to expand for future needs.

For example, let’s say you are running a retail business and planning to use machine learning to perform sales forecasts. If your daily sales data is still sitting on a local SQL server and that’s the same infrastructure you are going to use for predictive analytics, that’s definitely going to be a failure. First make sure your data is secured (maybe on cloud infrastructure) and ready to be exposed to an analytical platform without hampering daily business operations.

ChatGPT can do everything right?

ChatGPT can chat! 😀

As I stated previously, AI is not a wizard, and neither is ChatGPT. You can use ChatGPT and its underlying OpenAI models mostly for natural language processing tasks and for code generation and understanding (keep in mind that it’s not going to be 100% accurate). If your use case is related to NLP, then GPT-3 models may be an option.

When to use generative AI?

Variational Autoencoders, Generative Adversarial Networks and many more architectures have risen and continue to advance the field of AI. That has given a huge boost to the domain of generative AI. These models are capable of generating new examples that resemble the examples in a given dataset.

It is being used in diverse fields such as natural language processing (GPT-3, GPT-4), computer vision (DALL-E), speech recognition and many more.

OpenAI is the leading generative AI service provider in the domain right now, and Microsoft Azure offers Azure OpenAI, an enterprise-grade serving of OpenAI services with additional advantages like security and governance from the Azure cloud.

If you are thinking about using generative AI in your business use case, I strongly recommend going through the following considerations.

If you have said yes to all of the above, then OpenAI may be the right cognitive service to go with.

If that’s not the case, you have to look at other machine learning paradigms and off-the-shelf ML models, such as cognitive services, which may cater better to the scenario.

Ok! What should be the first step?

Take a deep dive into your business processes and identify the gaps in digital transformation. After addressing those ground-level issues, look at the data landscape and analyse the possibilities of performing analytical tasks with it. Start with a small, non-critical use case and then move on to more complex scenarios.

If you have any specific use cases in mind and want to see how AI/machine learning can help optimize those processes, I’m more than happy to have a discussion with you.

By the way, the image at the top was generated by DALL-E with the prompt “A cute image of a robot toy sitting near a lake – digital art”.

Managing Python Libraries on Azure Synapse Apache Spark Pools

Azure Synapse Analytics is Microsoft’s one-stop shop that integrates big data analytics and data warehousing. It features Apache Spark pools to provide scalable and cost-effective processing power for Spark workloads, enabling you to quickly and easily process and analyse large volumes of data at scale.

I have been working with the Spark environment on Azure Synapse for a while and thought of documenting my experience with installing external Python libraries for Spark pools on Azure. This guideline may come in handy if you are performing your big data analytics experiments with specific Python libraries.

Apache Spark pools on Azure Synapse workspaces come with the Anaconda Python distribution and pip as a package manager. Most of the common Python libraries used in the data analytics space are already installed. If you need additional packages to be installed and used within your scripts, there are three ways you can do it on Synapse.

  • Use magic commands in notebooks to install packages at the session level.
  • Upload the Python package as a workspace package and install it on the Spark pool.
  • Install packages using a pip or conda input file.

01. Use magic commands in notebooks to install packages at the session level.

Install packages for session level on Notebooks

This is the simplest and most straightforward way of installing a Python package at the Spark session level. You just use the magic command followed by the usual pythonic way of installing the package through pip or conda. Though this is easy for prototyping and quick experiments, it means reinstalling the package every time you start a new Spark session. It’s good for rapid prototyping but better avoided in production environments.
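As a minimal sketch, a session-scoped install in a Synapse notebook cell could look like this (the package and version are only examples):

# Session-scoped install: runs in a Synapse notebook cell and only lasts
# for the lifetime of the current Spark session
%pip install sentence-transformers==2.2.2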

02. Upload the Python package as a workspace package and install it on the Spark pool.

Upload python libraries as workspace packages

An Azure Synapse workspace allows you to upload workspace packages and install them on Spark pools. It accepts Python wheels (.whl files), JAR files, or tar.gz archives as packages.

After uploading the packages, go to the specific Apache Spark pool and select the packages you want to install on it. The initial installation may take a few minutes. (In my case, it took around 20 minutes to install 3 packages.)

From my experience with different Python packages, I would stick with Python wheels from pip or JARs from official package distributions. I tried the sentence-transformers tar.gz file from PyPI (https://pypi.org/project/sentence-transformers/). It gave me an error during the installation process mentioning a package dependency on R (which is confusing).

03. Install packages using a pip or conda input file.

Upload the requirement files

If you are familiar with building conda environments or Docker configurations, having a package list as a config file should not be new to you. You can specify either a .txt file or a .yml file with the desired package versions to be installed on the Spark cluster. If you want to specify a particular channel to get libraries from, you should use a .yml file.

For example, if you need to install the sentence-transformers and spark-nlp Python packages (both used for NLP on Spark), you should add these two lines to a .txt file and upload it as the input file for the workspace.

sentence-transformers==2.2.2
spark-nlp==4.4.1

I find this option the most robust way of installing Python packages to a Spark pool since it gives you the flexibility to select the desired package version and the specific channel to use during installation. It also saves you from installing the packages each time the session gets refreshed.

Let me know your experience and the lessons learned during experiments with Apache Spark pools on Azure Synapse Analytics.

Here’s the Azure documentation on this for further reference.

AI Summer is Here!

Undoubtedly, we are going through another AI summer where we see rapid growth in Artificial Intelligence-based interventions. Everyone is keen on using applications like ChatGPT and wants to know what’s the next big thing. In my last post, I discussed the underlying technicalities of large language models (LLMs), which are the foundation of ChatGPT.

Through this post, I’m trying to give my opinion on some of the questions raised by my colleagues and peers recently. These are my own opinions, and there can definitely be different interpretations.

What’s happening with AI right now?

Yes, AI is having a good time! 😀 Recently, almost all the mainstream media started talking about AI and its good and bad sides. Even people who are not very interested in technology started their own research on AI. In my point of view, it all happened because major tech companies and research institutes came out with AI-based products that the general public can use not only for business operations but also for entertainment purposes. For example, AI-based image generation tools (mostly based on neural models such as DALL-E) started getting popular on social media. ANN-based image filters and chat applications conquered social media.

All this fun stuff didn’t come overnight. Those products have been under research and development for years. There has been even more interesting work happening in the research world; the problem was making those capabilities easily accessible to the general public.

Right now, we can see a clear motivation from enterprises to make AI more accessible and user-friendly. The barrier of needing an extensive understanding of mathematics and related theories to use AI and machine learning is being reduced day by day.

Yes. I see that as a good trend.

ChatGPT is the only big thing that happened, right?

No, it’s not! ChatGPT is an application built upon the capabilities of GPT-3, which is an LLM. OpenAI has announced they are coming up with GPT-4 soon, which is going to be much more powerful in understanding human language and images.

With the large computation capabilities opening up through technological advancements like the cloud, we can observe research moving towards training massive neural networks to understand non-linear data such as natural language, images and videos. The recent advancement of architectures like the Transformer (introduced in 2017) has been making very big changes in computer vision too.

Applications such as GitHub Copilot, an AI pair-programming application, give software developers the ability to code faster and produce more reliable outcomes efficiently.

Microsoft announced that Copilot is going to be a part of their Office 365 suite, which is widely used in many industries. It’s going to be a definite big change in the way people work with computers today.

I’m a tech person. How is AI going to affect my career?

I work in the AI domain. The advancement of AI directly affects my career, and I always keep myself busy reading and grabbing new trends as much as I can (which is impossible at this rate!).

AI developers and machine learning engineers are not the only groups directly affected by these advancements. Remember how cloud technologies changed the way we operate in enterprises, largely eliminating the need for cumbersome networking devices and in-house servers. I feel something similar is going to happen with AI too. IT folks may have to adapt to using AI as a Service in the application development life cycle. Don’t be a dinosaur. Let’s get updated!

I’m a non-tech person. How is AI going to affect me?

It’s not only for tech-savvy folks. Since the trend is all about making AI more accessible, AI-based interventions are going to be there in almost all industries. From retail businesses to knowledge-focused industries such as legal and education, the ways of working are going to be affected.

If you are a non-tech person, you should definitely research ways of using AI and machine learning in your career.

Can AI displace people from their jobs?

Yes and no! According to a McKinsey study, artificial intelligence could displace roughly 15% of workers, or 400 million people, worldwide between 2016 and 2030! Most of the monotonous jobs will be replaced by robots and AI applications. For example, the manual labour needed for heavy lifting in factories will be replaced by computer-vision-assisted robots. Some monotonous tasks in domains like accounting will be replaced by automated applications powered by AI.

In the meantime, new jobs will be generated to assist AI-powered operations. The term data scientist was not widely used even a decade ago; now it has become one of the most in-demand professions. It’s just one example.

Isn’t excessive use of AI harmful?

Excessive use of anything is harmful! It’s the same with AI. There are many discussions going on related to regulating human-competitive machine intelligence. Enterprises are working towards concepts such as responsible AI to make sure AI systems do not harm mankind and nature.

To be frank, one can argue that AI-based systems are reducing the critical thinking ability and creative power of humans. That can be true in some contexts. We should find ways to keep AI as only a tool and not totally rely on it.

Recently, an open letter titled “Pause Giant AI Experiments” was published emphasising the need for proper rule sets and legislation around AI systems. It was signed by prominent scientists including Yoshua Bengio and well-known tech figures such as Elon Musk.

Personally, I strongly believe a framework regulating the usage of AI interventions should exist, and the know-how on such applications should be made widely available to the general public.

That’s not something that can be done overnight. Global organisations and governments should start working together as soon as possible to make it happen.

What would the future look like with AI?

It’s something we actually can’t predict. There will be more AI applications which may have human-like abilities, and even go beyond them in some domains. The way people work will change drastically with the wide accessibility of AI. Just as we use electricity today to make our lives easy, well-managed AI systems will be assisting us with our daily chaos. We definitely have to keep our eyes open!

Streamline Your Machine Learning Workflow with Docker: A Beginner’s Guide

Docker containerization

Docker has revolutionized the way applications are developed and deployed by making it easier to build, package, and distribute applications as self-contained units that can run consistently across different environments.

I’ve been using Docker as a primary tool in my machine learning experiments for a while. If you’re interested in reading about how sweet the Docker + machine learning combo is, you can hop over to my previous blog post here.

Docker is important in machine learning for several reasons:

  • Reproducibility: In machine learning, reproducibility is important to ensure that the results obtained in one environment can be replicated in another environment. Docker makes it easy to create a consistent, reproducible environment by packaging all the necessary software and dependencies in a container. This ensures that the code and dependencies used in development, testing, and deployment are the same, which reduces the chances of errors and improves the reliability of the model.
  • Portability: Docker containers are lightweight and can be easily moved between different environments, such as between a developer’s laptop and a production server. This makes it easier to deploy machine learning models in production, without worrying about differences in the underlying infrastructure.
  • Scalability: Docker containers can be easily scaled up or down to handle changes in the demand for machine learning services. This allows machine learning applications to handle large amounts of data and processing requirements without requiring a lot of infrastructure.
  • Collaboration: Docker makes it easy to share machine learning models and code with other researchers and developers. Docker images can be easily shared and used by others, which promotes collaboration and reduces the amount of time needed to set up a new development environment.

Overall, Docker simplifies the process of building, deploying, and managing machine learning applications, making it an important tool for data scientists and machine learning engineers.

Alright… Docker is important. How can we get started?

I’ll share the procedure I normally follow with my ML experiments, then demonstrate the way we can containerize a simple ML experiment using Docker. You can use it as a template for your ML workloads with some tweaks.

I use Python as my main programming language, and for deep learning experiments I use the PyTorch framework. That means a lot of work with CUDA and with configuring the correct environment for model development and training.

Since most of us use Anaconda as the dev framework, you may have experience managing different virtual environments for different experiments with different package versions. Yes, that’s one option, but it is not that easy when we are dealing with GPUs and different hardware configurations.

In order to make my life easy with ML experiments, I always use Docker and containerize my experiments. It’s clean and leaves no unwanted platform conflicts on my development rig or even on my training cluster.

Yes! I use Ubuntu as my OS. I’m pretty sure you can do this on your Mac and also on a Windows PC (with a bit of workarounds, I guess).

All you need to get started is installing the Docker runtime on your workstation. Then start containerizing!

Here are the steps I follow:

  1. I always try to use the latest (but stable) package versions in my experiments.
  2. After making sure I know all the packages I’m going to use within the experiment, I start listing them in the environment.yml file. (I use mamba as the package manager, which is similar to conda but a bit faster.)
  3. Keep all the package listings in the environment.yml file (this makes it a lot easier to manage).
  4. Keep my data sources local (including them in the Docker image itself makes it bulky and hard to manage).
  5. Configure my experiments to write their logs/results to a local directory (in the shared template, that’s the results directory).
  6. Mount the data and results directories into the Docker container (this allows me to access the results and data even after killing the container).
  7. Use a bash script to build and run the Docker container with the required arguments (in my case I like to keep it as a .sh file in the experiment directory itself).

In the example I’ve shared here, a simple MNIST classification experiment has been containerized and run on a GPU based environment.

Github repo : https://github.com/haritha91/mnist-docker-example

I used an Ubuntu 20.04 base image from NVIDIA with the CUDA 11.1 runtime. The package manager used here is mamba with Python 3.8.

FROM nvidia/cuda:11.1.1-base-ubuntu20.04

# Remove any third-party apt sources to avoid issues with expiring keys.
RUN rm -f /etc/apt/sources.list.d/*.list

# Setup timezone (for tzdata dependency install)
ENV TZ=Australia/Melbourne
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime

# Install some basic utilities and dependencies.
RUN apt-get update && apt-get install -y \
    curl \
    ca-certificates \
    sudo \
    git \
    bzip2 \
    libx11-6 \
    libgl1 libsm6 libxext6 libglib2.0-0 \
 && rm -rf /var/lib/apt/lists/*

# Create a working directory.
RUN mkdir /app
WORKDIR /app

# Create a non-root user and switch to it.
RUN adduser --disabled-password --gecos '' --shell /bin/bash user \
 && chown -R user:user /app
RUN echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-user
USER user

# All users can use /home/user as their home directory.
ENV HOME=/home/user
RUN chmod 777 /home/user

# Create data directory
RUN sudo mkdir /app/data && sudo chown user:user /app/data

# Create results directory
RUN sudo mkdir /app/results && sudo chown user:user /app/results


# Install Mambaforge and Python 3.8.
ENV CONDA_AUTO_UPDATE_CONDA=false
ENV PATH=/home/user/mambaforge/bin:$PATH
RUN curl -sLo ~/mambaforge.sh https://github.com/conda-forge/miniforge/releases/download/4.9.2-7/Mambaforge-4.9.2-7-Linux-x86_64.sh \
 && chmod +x ~/mambaforge.sh \
 && ~/mambaforge.sh -b -p ~/mambaforge \
 && rm ~/mambaforge.sh \
 && mamba clean -ya

# Install project requirements.
COPY --chown=user:user environment.yml /app/environment.yml
RUN mamba env update -n base -f environment.yml \
 && mamba clean -ya

# Copy source code into the image
COPY --chown=user:user . /app

# Set the default command to python3.
CMD ["python3"]

In the beginning you may see this as an extra burden. Trust me, when your experiments get complex and you start working on different ML projects in parallel, Docker is the lifesaver that will save you a lot of the unnecessary time you would otherwise waste on ground-level configuration.

Happy coding!

Unlocking the Power of Language with GPT-3

Yes, this is all about the hype of ChatGPT. It’s obvious that most of us are obsessed with it, spending a lot of time with that amazing tool, even as a regular search engine! (Is that a bye-bye to Google? 😀)

I thought of discussing the underlying mechanics of ChatGPT, Large Language Models (LLMs), and the applicability of these giants in intelligent application development.

What actually is ChatGPT?

ChatGPT is a conversational AI model developed by OpenAI. It uses the GPT-3 architecture, which is based on Transformer neural networks. GPT-3 is a large language model with about 175 billion parameters. The model has been trained on a huge corpus of text data (about 45 TB) to generate human-like responses to text inputs. Most of the training data was harvested from the public internet. ChatGPT can perform a variety of language tasks such as answering questions, generating text, translating languages, and more.

ChatGPT is only a single use case of a massive research effort. The underlying power is the ANN architecture GPT-3. Let’s dig in step by step while discussing the following points.

What are Large Language Models (LLMs)?

LLMs are deep learning algorithms that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. As the name suggests, these language models are trained with massive amounts of textual data using unsupervised learning. (Yes, there’s no data labelling involved with this). BLOOM from Hugging Face, ESMFold from Meta AI, Gato by DeepMind, BERT from Google, MT-NLG from Nvidia & Microsoft, GPT-3 from OpenAI are some of the LLMs in the AI space.

Large language models are among the most successful applications of transformer models. They aren’t just for teaching machines human languages, but for understanding proteins, writing software code and much more.

What are Transformers?

The encoder-decoder structure of the Transformer architecture (taken from “Attention Is All You Need”)

Transformers? Are we going to talk about bumblebee here? Actually not!

Transformers are a type of neural network architecture (similar as Convolutional Neural Networks, Recurrent Neural Networks etc.) designed for processing sequential data such as text, speech, or time-series data. They were introduced in the 2017 research paper “Attention is All You Need“. Transformers use self-attention mechanisms to process the input sequence and compute a weighted sum of the features at each position, allowing the model to efficiently process sequences of varying length and capture long-range dependencies. They have been successful in many natural language processing tasks such as machine translation and have become a popular choice in recent years.
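At the core of that self-attention mechanism is the scaled dot-product attention from the paper, which scores every position in the sequence against every other position:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where Q, K and V are the query, key and value matrices and d_k is the dimensionality of the keys.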

For a deep learning enthusiast, this may sound similar to the RNN architectures which are mostly used for learning sequential tasks. Unlike RNNs, transformers are capable of capturing long-term dependencies, which makes them so capable at complex natural language processing tasks.

GPT stands for “Generative Pre-trained Transformer”. As the name implies it’s built with the blessing of transformers.

Alright… now GPT-3 is the hero here! So, what’s cool about GPT-3?

  • GPT-3 is one successful innovation in the LLMs (It’s not the only LLM in the world)
  • The GPT-3 model itself has no knowledge; its strength lies in its ability to predict the next word(s) in a sequence. It is not designed to store or retrieve facts.
  • It’s a pretrained machine learning model. You cannot download or retrain the model since it’s massive! (fine-tuning with our own data is possible).
  • GPT-3 is accessible through a closed API, which requires an API key.
  • GPT-3 is good mostly for English language tasks.
  • A bit of a downside: the outputs can be biased and abusive, since the model learned from data fetched from the public internet.

If you are really interested in learning the science behind GPT-3, I would recommend taking a look at the paper: Language Models are Few-Shot Learners

What’s OpenAI?

OpenAI, a research organisation founded in 2015, is the creator of the GPT-3 architecture. GPT-3 is not the only interesting innovation from OpenAI. If you have seen AI-generated art created from natural language phrases as input, it’s most probably from the DALL-E 2 neural network, which is also from OpenAI.

OpenAI has its own set of APIs, which developers can easily adopt in their intelligent application development tasks.

Check the OpenAI APIs here: https://beta.openai.com/overview    
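As a quick illustration only (based on the pre-1.0 openai Python package that was current at the time of writing; the model name and parameters are just examples), a GPT-3 completion call looks roughly like this:

import openai

openai.api_key = "YOUR_API_KEY"  # obtained from the OpenAI dashboard

# Ask a GPT-3 family model to complete a prompt
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain in one sentence why data partitioning helps Spark queries.",
    max_tokens=60,
    temperature=0.2,
)

print(response["choices"][0]["text"].strip())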

What can be the use cases of GPT-3?

We all know ChatGPT is ground-breaking. Our focus should be on exploring the approaches by which we can use its underlying architecture (GPT-3) in application development.

Since the beginning of deep neural networks, there has been a lot of research and innovation in the computer vision space. Networks like ResNet were ground-breaking and even surpassed human accuracy in tasks like image classification on the ImageNet dataset. We got the advantage of having pre-trained state-of-the-art networks for computer vision tasks without worrying about large training datasets.

LLMs like GPT-3 address the lack of such networks for natural language analysis tasks. Simply put, it’s a massive pre-trained knowledge base that can understand language.

There are many interesting use cases of GPT-3 as a language model, including but not limited to:

  • Dynamic chatbots for customer service use cases which provide more human-like interaction with users.
  • Intelligent document management through smart tagging, paraphrasing, and summarizing of textual documents.
  • Content generation for websites, news articles, educational materials etc.
  • Advanced text classification tasks
  • Sentiment analysis
  • Semantic search capabilities which provide natural language query capability.
  • Text translation, keyword identification etc.
  • Programming code generation and code optimisation

Since GPT-3 can be fine-tuned with a given set of training data, the possibilities are limitless thanks to its natural language understanding capability. You can be creative and come up with the next big idea that improves the productivity of your business.

What is Azure OpenAI?

Azure OpenAI is a collaboration between Microsoft’s Azure cloud platform and OpenAI, aimed at providing cloud-based access to OpenAI’s cutting-edge AI models and tools. The partnership provides a seamless platform for developers and organizations to build, deploy, and scale AI applications and services, leveraging the computing resources and technology of the Azure cloud.

Users can access the service through REST APIs, the Python SDK, or the Azure OpenAI Studio, the web-based interface dedicated to OpenAI services.
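For illustration (again using the pre-1.0 openai Python package; the resource name, deployment name and API version below are placeholders, not real values), pointing the SDK at an Azure OpenAI resource mostly means swapping the endpoint configuration:

import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_AZURE_OPENAI_KEY"

# "engine" refers to the deployment name you created in the Azure portal
response = openai.Completion.create(
    engine="my-gpt35-deployment",
    prompt="Write a one-line welcome message for a hotel booking chatbot.",
    max_tokens=40,
)
print(response["choices"][0]["text"].strip())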

In enterprise application development scenarios, using OpenAI services through Azure makes it much easier for integration.

Azure OpenAI opened for general availability very recently, and I’m pretty sure there’ll be vast improvements to the product in the coming days.

Let’s keep our eyes open and start innovating on ways which we can use this super-power wisely.

What’s Best for Me? – 5 Data Analytics Service Selection Scenarios Explained

With the extensive usage of cloud-based technologies to perform machine learning and data science experiments, choosing the right toolset/platform for the operations is a key part of project success.

Since selecting the perfect toolset for your ML workloads may be a bit tricky, I thought of sharing my thoughts through a couple of generic use cases. Please keep in mind that the use cases I have chosen and the decisions I’m suggesting are totally my own view on the scenarios, and they may differ based on different factors (amount of data, time frame, allocated budget, ability of the developer, etc.) in your project. Plus, the suggestions I’m pointing out here are based on the services that come with the Microsoft Azure cloud; they may be easily adjusted for other cloud providers too.

Scenario 1:

We are a medium-scale microfinancing company with our data stored on Microsoft Azure. We plan to build a data lake and use it for analytical and reporting tasks. We have a diverse data team with abilities in Python, Scala and SQL (most of the data engineers are only familiar with SQL). We need to build a couple of machine learning models for predictions. What would be the best platform to go forward with: Azure Databricks or Azure ML Studio?

Suggestion: Azure Databricks

Reasons:

  • Databricks is more flexible for ETL and data lake related data operations compared to Azure ML Studio.
  • You can perform data curation and machine learning within a single product with Azure Databricks.
  • Databricks can connect with Azure Data Factory pipelines to handle data flow and data curation tasks within the datalake.
  • Since the data engineers are more familiar with SQL, they’ll easily adapt with SparkSQL on Databricks.
  • Data team can develop their machine learning experiments with any language of their choice with Databricks notebooks.
  • Databricks notebooks can be used for analytical and reporting tasks even with a combination of PowerBI.
  • Given that the company is planning to build a data lake, Databricks is far more flexible for ETL tasks. You can use Azure Data Factory pipelines with Databricks to control the data flow of the data lake.

Scenario 2:

I’m a computer science undergrad doing a software project to identify several types of wildflowers from images captured with a mobile phone. I’m planning to build my computer vision model using TensorFlow and Keras and expose the service as a REST API. Since I don’t have the infrastructure to train the ML models, I’m planning to use Azure for that. Which tool on Azure should I choose?

Suggestion: Azure ML Studio

Reasons:

  • AzureML provides a complete toolset to train, test and deploy a deep learning model using any open-source framework of your choice.
  • You can use the GPU training clusters on AzureML to train your models.
  • It’s easy to log your model training and experiments using the AzureML Python SDK.
  • AzureML gives you the ability for model management and exposing the trained model as a REST API.
  • Small learning curve and adaptability.

Scenario 3:

I’m the CEO of a retail company. I don’t have vast experience with computing or programming, but I have a background in maths and statistics. I have a plan to use machine learning to perform predictive analysis on the data we currently have in the company. Most of the data is still in Excel! Someone suggested I use Azure. Which product on Azure should I choose?

Suggestion: Azure ML Studio

Reasons:

  • For a beginner in machine learning and data science, Azure ML Studio is a good start.
  • AzureML Studio provides no-code environments (Azure ML designer and AutoML) to develop ML models.   
  • Since you are mostly in the experimental stage and not using large datasets, using Databricks would be overkill.
  • You can easily import your prevailing data and start experimenting and playing around with them without any local environmental setup.

Scenario 4:

I’m the IT manager of a large enterprise that relies heavily on data assets in its decision-making process. We have to run iterative jobs daily to retrieve data from different external sources and internal systems. Currently we have an on-prem SQL database acting as the data warehouse. The company has decided to move to the cloud. Can Azure serve our needs?

Suggestion: Yes. Azure can serve your need with different tools in the data & AI domain.

Reasons:

  • You can use Azure Synapse Analytics or Azure Data Factory to build data pipelines and perform ETL operations.
  • The local data warehouse can be easily migrated to Azure cloud.
  • You can use Azure Databricks to perform analytics tasks.
  • Since the enterprise is large and scaling, using Databricks would be better thanks to its Spark-based computation abilities.

Scenario 5:

We are an agricultural company moving forward with adopting modern agri-tech into the business. We collect numerous data values from our plantations and store them in our cloud databases. We have a set of data scientists working on data modelling and building predictive models related to crop fertilising and harvesting. They are currently using their own laptops to perform analysis, and it’s troublesome given the data volume, platform configurations and security. Will Azure ML come in handy in our case?

Suggestion: Yes. Azure ML Studio would be a good choice.

Reasons:

  • AzureML can easily be adopted as an analytical platform.
  • The cloud databases can be connected to AzureML, and data scientists can straight-up start working on the data assets.
  • AzureML is relatively cheap compared to Databricks (given the data amount is manageable on a single computer).
  • It’s easy to perform prototyping of models using AutoML/ AzureML Designer and then implement the models within a short time frame.  

Generally, these are the factors I would keep in mind when selecting services for ML/data-related implementations on Azure.

  • Azure ML Studio is good when you are training with limited data; though Azure ML provides training clusters, the data distribution among the nodes has to be handled in your code.
  • AzureML Studio comes in handy for prototyping with the AzureML Designer and Automated ML.
  • Azure Databricks, with its RDDs, is designed to handle data distributed across multiple nodes, which is advantageous when you have big datasets.
  • When your data size is small and can fit in a scaled-up single machine, or you are using a pandas DataFrame, then using Azure Databricks is overkill.
  • Services like Azure Data Factory and Datalake storage can be easily interconnected for building  

Let me know your thoughts on these scenarios as well. Add your queries in the comments too. I’ll try my best to provide my suggestions for those use cases.