
Introduction to Self-Supervised Learning in NLP

Self-supervised learning in NLP.

Introduction

Deep learning is a field of machine learning that learns from data through neural networks. Neural networks are algorithms used to train models in various domains, including vision, text, audio, video, and more. When deep neural networks are provided with data, they try to figure out the patterns within the data and extract relevant features. These features are then used to make decisions, such as classifying objects (image classification), predicting a number (regression), or generating captions (caption generation), and more.

But training deep neural networks from scratch every time is not an easy task. For neural networks to perform well, they need lots and lots of data, and training such models requires high compute power like GPUs and TPUs (GPUs can train in minutes models that would take hours or days on CPUs). Some models dealing with image and text datasets require huge amounts of computing power to understand the data and find patterns.

In this blog, let's discuss some techniques that let us train state-of-the-art deep learning models with less time and fewer resources.

Transfer learning

In recent years, there has been a boost in deep learning, thanks to the availability of large amounts of data and sufficient compute/processing power. This has led many companies and research labs to train larger deep learning models on larger datasets, particularly for computer vision and natural language processing tasks. The resulting models, which have achieved state-of-the-art results on standard benchmarks, have been made available as pre-trained models that can be fine-tuned on custom datasets. This technique forms the basis of transfer learning.

Transfer learning is the process of taking the knowledge a model gains while learning one task and applying it to a similar task with small modifications. We can also use pre-trained models like ResNet50, EfficientNet, and more. These models have been trained on millions of images (like the ImageNet dataset), and we fine-tune them on our own data.

Let's consider an example: training an image classification model for Cats vs Dogs (i.e., the Pets dataset). We can use a ResNet50 model pre-trained on the ImageNet dataset with 1,000 classes and fine-tune it on the Pets dataset, which contains images of cats and dogs. Since we are using a pre-trained model, we are not starting the training from scratch, i.e., the model already has some knowledge about what a cat or a dog looks like. We can then fine-tune the pre-trained model on our dataset and achieve good results in less time compared to training a model from scratch.
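As a rough illustration, here is a minimal PyTorch/torchvision sketch of this fine-tuning step. The dataset folder, batch size, and number of epochs are placeholders, and a real project would also add validation and data augmentation.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Standard ImageNet preprocessing so inputs match what ResNet50 was trained on
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    # Hypothetical folder layout: data/train/cat/*.jpg and data/train/dog/*.jpg
    train_ds = datasets.ImageFolder("data/train", transform=preprocess)
    train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

    # Load ResNet50 pre-trained on ImageNet and freeze its backbone
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False

    # Replace the 1,000-class ImageNet head with a 2-class head (cat vs dog)
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Fine-tune only the new head for a few epochs
    model.train()
    for epoch in range(3):
        for images, labels in train_dl:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

Because only the small new head is trained while the pre-trained backbone is reused, this typically converges far faster than training the whole network from scratch.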

But we can't use transfer learning when no pre-trained models are available. Here, we can use the self-supervised learning technique to train models that produce state-of-the-art results with less time and fewer resources.

Self-supervised learning

Machine learning algorithms are broadly classified as supervised, unsupervised, and reinforcement learning. Let's see what each of these means in brief:

  • Supervised learning: In this technique, a machine learning model is provided with inputs and corresponding labels to learn. Example: Image classification (Cats vs Dogs: Inputs are images of cats and dogs, while outputs are labels representing whether it is a cat or a dog), regression, and more
  • Unsupervised learning: In this technique, a machine learning model is provided with only inputs and the model finds the patterns in it. Example: K-means clustering, Principal component analysis, and more
  • Reinforcement learning: In this technique, a machine learning model learns from actions and rewards. It depends on how an agent takes actions in an environment to maximize the reward received. Example: path planning, chess engines, and more.

Self-supervised learning sits between supervised and unsupervised learning. It is a technique for training models in which the output labels are derived from the input data itself rather than requiring separate output labels. A classic example of this is language models.

A language model is a model trained to predict the next word after seeing/reading the previous words of the input. This is a self-supervised learning task because we are not defining any separate output labels. Instead, we provide the text as both inputs and outputs in a specific way so that the model learns the fundamentals and style of the language used in the dataset.

Self-supervised learning combined with transfer learning in NLP

When we don't have a pre-trained model available for our dataset, we can create one using self-supervised learning. We can train a language model using the text corpus available in the train and test datasets. We train the language model by providing a text sequence of a specific length as the independent variable and the same sequence shifted by one word (i.e., with the next word appended) as the output label. This works when we provide the model with lots of text, so that it can easily find patterns and learn the basic style of the text.

In this way, the model learns to predict the next word given the input sequence.
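To make the idea concrete, here is a small, self-contained Python sketch (with a toy whitespace tokenizer, purely for illustration) showing how the input/label pairs for next-word prediction come from the raw text itself, with no manual labeling required.

    # Toy corpus; in practice this would be the full text of the train and test sets
    corpus = "self supervised learning builds labels from the data itself"

    # Naive whitespace "tokenizer", just for illustration
    tokens = corpus.split()
    vocab = {word: idx for idx, word in enumerate(sorted(set(tokens)))}
    ids = [vocab[word] for word in tokens]

    # Build (input, label) pairs: the label is simply the same window shifted by
    # one word, so the supervision signal is extracted from the data itself
    seq_len = 4
    pairs = []
    for i in range(len(ids) - seq_len):
        x = ids[i : i + seq_len]          # tokens the model reads
        y = ids[i + 1 : i + seq_len + 1]  # next word at each position
        pairs.append((x, y))

    for x, y in pairs[:2]:
        print("input:", x, "-> labels:", y)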

Usually, this language model is not used for any task directly, i.e., it can generate text but is not useful for a downstream task until it is fine-tuned. It acts as a pre-trained model that we can use for transfer learning, i.e., fine-tuning on the same dataset or different datasets for downstream tasks like text classification, sentiment analysis, and more.

One of the best resources for finding language models is Hugging Face (https://huggingface.co). It hosts many models trained on text corpora of different styles and languages.
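As a rough sketch of that fine-tuning step using the Hugging Face transformers and datasets libraries (the model name, the IMDB dataset, and the training settings here are just example choices, not a prescription):

    from datasets import load_dataset
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    # Example: fine-tune a small pre-trained language model for sentiment analysis
    model_name = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # IMDB movie reviews as an example downstream dataset
    dataset = load_dataset("imdb")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=256)

    tokenized = dataset.map(tokenize, batched=True)

    args = TrainingArguments(output_dir="out", num_train_epochs=1,
                             per_device_train_batch_size=16)

    trainer = Trainer(model=model, args=args,
                      train_dataset=tokenized["train"],
                      eval_dataset=tokenized["test"])
    trainer.train()

Here the pre-trained language model supplies the general understanding of the language, and only a small classification head plus a short fine-tuning run is needed for the downstream task.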

Conclusion

By using techniques like transfer learning and self-supervised learning, we can train deep learning models even when we don't have enough data or resources. These techniques also provide a way to train deep learning models that are not tied to a single task but can support multiple tasks/functionalities through fine-tuning.
