Why Do We Need GPUs for Deep Learning?

“If you wanna do deep learning experiments you should have a GPU”

This is one of the most common statements you may have heard over the years, even from me. Do we really need a GPU for these experiments? If so, why? Let's dig in..

As we all know, deep learning is all about deep neural networks. Training a deep neural network (DNN) is one of the most computationally expensive tasks in computer science. Since a DNN is made up of huge numerical representations, the inputs as well as the weights and biases inside the network, it can mathematically be seen as a set of multidimensional matrices.

[Figure: A simplified illustration of an ANN]
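To make that concrete, here's a minimal sketch (the layer sizes are made up for illustration) showing that a single fully connected layer is nothing more than a matrix-vector product plus a bias:

```python
import numpy as np

# A single fully connected layer with 4 inputs and 3 neurons
# (sizes are made up purely for illustration).
x = np.array([0.5, -1.2, 3.0, 0.7])   # input vector
W = np.random.randn(3, 4)             # weight matrix (3 neurons x 4 inputs)
b = np.random.randn(3)                # bias vector

z = W @ x + b                         # pre-activation: a matrix-vector product
a = np.maximum(0, z)                  # ReLU activation
print(a)                              # output of this layer, fed to the next one
```

A real DNN just stacks many of these layers, so the whole network ends up as a long chain of matrix operations.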

Now we have a set of huge multidimensional matrices.. next? In order to train a neural network to perform a particular task (say, classifying a bunch of images), we commonly use the backpropagation algorithm, which adjusts the weights and biases of the DNN according to the training set we provide. In the forward pass, the input is passed through the network and, after processing, an output is generated. In the backward pass, we update the weights of the network based on the error from the forward pass. These passes involve a huge number of matrix calculations (multiplications and summations). Any single mathematical operation inside these passes is as simple as multiplying two numbers. But a DNN such as VGG16, a convolutional neural network with 16 weight layers designed mainly for computer vision tasks, contains about 140 million parameters, i.e. weights and biases! So the number of calculations needed to adjust all of these weights and biases is tremendous!
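You can verify that parameter count yourself. Here's a small sketch using PyTorch and torchvision (assuming both are installed; any framework that ships VGG16 would do):

```python
import torch
from torchvision import models

# Build VGG16 with randomly initialized weights (no download needed)
model = models.vgg16()

# Count every trainable parameter (all the weights and biases)
num_params = sum(p.numel() for p in model.parameters())
print(f"VGG16 has {num_params:,} parameters")  # roughly 138 million
```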

Alright.. we now have millions and millions of calculations to be done.. But we have very fast computers, so what's the issue? There's no real issue, actually. The problem is with the definition of "speed" we have been using for our daily computations. The Central Processing Unit (CPU) is the key component that performs computations in our computers. A typical modern high-end CPU has 4-8 physical cores (an Intel Core i7 10th gen processor has 8 physical cores). If you remember your early computer science lessons, a CPU core can do only one task at a time. So performing those millions and millions of calculations is going to take ages! (Literally, it may take years to train a complex DNN!) You may think of building a cluster with thousands of CPUs working in parallel. Yes, it's possible, but it would cost millions and consume a huge amount of power.

Since each individual computation in DNN training is not complex, Graphics Processing Units (GPUs), which have hundreds or thousands of simple cores, are a perfect match for this task. (For example, the Nvidia 940MX GPU that sits in many laptops has 384 CUDA cores.)
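You can see the difference with a quick, rough benchmark. The sketch below multiplies two large matrices on the CPU and then on the GPU using PyTorch; it assumes you have a CUDA-capable GPU and a CUDA build of PyTorch, and the exact numbers will vary wildly with your hardware:

```python
import time
import torch

n = 4000
a = torch.randn(n, n)
b = torch.randn(n, n)

# Matrix multiplication on the CPU
start = time.time()
c_cpu = a @ b
print(f"CPU: {time.time() - start:.3f} s")

# The same multiplication on the GPU
# (requires a CUDA-capable GPU and a CUDA build of PyTorch)
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()          # finish moving data before timing
    start = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()          # wait for the GPU to actually finish
    print(f"GPU: {time.time() - start:.3f} s")
```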

The comparison below (see the source link) will give you a better idea of GPU vs CPU. Also, watch the Mythbusters video that demonstrates the power of thousands of small processors.

Source : https://www.slideshare.net/AlessioVillardita/ca-1st-presentation-final-published

Though the GPU hype feels recent, the use of general-purpose GPUs for scientific computing started in the early 2000s. In 2006 Nvidia came out with CUDA, a platform that allows programming GPUs using high-level languages. Now all the major deep learning frameworks can access the GPU with just a few lines of code. You can train a DNN model on your laptop's GPU within minutes, compared to the hours or days it may take on the CPU!
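Those "few lines of code" really are few. Here's a minimal sketch in PyTorch (TensorFlow and others have equivalents); the tiny linear model and batch here are just placeholders to show the pattern:

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy model and a toy batch, just to show the pattern
model = nn.Linear(100, 10).to(device)    # move the model's weights to the GPU
batch = torch.randn(32, 100).to(device)  # move the data to the GPU

output = model(batch)                    # the forward pass now runs on the GPU
print(output.device)
```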

Alright.. alright… Now we need a GPU!

There are two main ways of leveraging GPU power for your computations:

1. Use a physical GPU fitted to your workstation/laptop

This is quite straightforward. I would strongly recommend an Nvidia GPU, since the CUDA and cuDNN packages have great support and good documentation for most deep learning libraries. AMD also has ROCm, which is similar to CUDA and based on OpenCL (https://rocmdocs.amd.com/en/latest/Current_Release_Notes/Current-Release-Notes.html), but I haven't used it myself. GPUs can be a bit expensive, so plan ahead when you are choosing a GPU for your workstation.
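Once the card and drivers are in place, a quick sanity check saves a lot of head-scratching. The sketch below assumes a CUDA build of PyTorch; running `nvidia-smi` in a terminal is another common way to confirm the driver sees the card:

```python
import torch

# Quick check that PyTorch can actually see your Nvidia GPU
print(torch.cuda.is_available())          # True if CUDA and the driver are set up
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the name of your GPU
    print(torch.version.cuda)             # CUDA version PyTorch was built with
```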

2. Use a GPU on the cloud

This is pretty cool. If you don't have a GPU on your laptop but still want to train DNNs, you can easily spin up a GPU instance on the cloud and use it for your computations.

Almost all cloud providers (Google Cloud Platform, AWS, Microsoft Azure) provide GPU-based computing services. Some of these services are free to some extent, but you may have to pay if you are going to run your training for a long time.

Kaggle, one of the main data science platforms, provides free GPU access with a usage limit (https://www.kaggle.com/dansbecker/running-kaggle-kernels-with-a-gpu). Take a look; it may fit your needs too.

So, that's it! Don't wait for the CPU to do the computation. Let the GPU train your model.
