
# Machine Learning: An Introduction to Supervised, Unsupervised and Semi-Supervised Learning

## Introduction

Machine Learning (ML) is an exciting field that involves training machines to learn from data and make predictions or decisions based on that learning. There are three main types of ML algorithms: supervised learning, unsupervised learning, and semi-supervised learning. In this blog, we'll introduce these three types of learning and give examples of each.

## Supervised Learning

Supervised learning involves training a machine learning algorithm on a dataset that has labeled examples. Each example in the dataset has both input data and an associated output or label. The goal of the algorithm is to learn a mapping between the input data and the output labels so that it can make accurate predictions on new, unseen data.

For example, let's say we have a dataset of images of dogs and cats. Each image is labeled as either a dog or a cat. We can use supervised learning to train an algorithm to recognize the difference between a dog and a cat. We would feed the algorithm the labeled images as input data and the associated labels (dog or cat) as output data. The algorithm would learn from this training data and then be able to make predictions on new, unseen images of dogs and cats.
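As a minimal sketch of this idea using scikit-learn: the synthetic numeric features below stand in for image features (real image classification would extract pixel-based features first), and the 0/1 labels play the role of "cat" and "dog". All data and names here are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for image features: 200 examples, 20 numeric
# features each, with label 0 = "cat" and 1 = "dog".
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Hold out part of the data to simulate "new, unseen" images.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a classifier on the labeled training examples.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predict labels for the unseen examples and measure accuracy.
preds = clf.predict(X_test)
accuracy = clf.score(X_test, y_test)
```

The key pattern is the same regardless of the model: fit on labeled (input, label) pairs, then predict on inputs the model has never seen.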

Another example of supervised learning is predicting housing prices. Suppose we have a dataset of housing prices in a particular area, along with features such as the number of bedrooms, square footage, and location. We can use supervised learning to train an algorithm to predict the price of a new house based on its features. We would feed the algorithm the labeled data as input (features of the house) and output (price of the house) and it would learn from this data to make predictions on new, unseen houses.
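A tiny regression sketch of the housing example might look like the following. The feature values and prices are made up for illustration; a real dataset would also encode location and have far more rows.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: each row is [bedrooms, square footage],
# and y holds the corresponding sale price.
X = np.array([[2, 800], [3, 1200], [3, 1500], [4, 2000], [5, 2500]])
y = np.array([150_000, 200_000, 230_000, 310_000, 400_000])

# Learn a mapping from house features to price.
model = LinearRegression().fit(X, y)

# Predict the price of a new, unseen house: 4 bedrooms, 1800 sq ft.
predicted_price = model.predict([[4, 1800]])[0]
```

Classification (dog vs. cat) and regression (price) are both supervised learning; the only difference is whether the output label is a category or a number.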

## Unsupervised Learning

Unsupervised learning involves training a machine learning algorithm on a dataset that does not have labeled examples. The goal of the algorithm is to find patterns and structure in the data without being given explicit output labels.

For example, let's say we have a dataset of customer transactions for a retail store. Each transaction has information such as the items purchased, the time of purchase, and the amount spent. We can use unsupervised learning to group similar transactions together based on patterns in the data. The algorithm would learn from the data and then group together transactions that have similar patterns of purchase.
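One common way to do this grouping is k-means clustering. Here is a minimal sketch with made-up transaction data, where each transaction is reduced to two numeric features (hour of purchase and amount spent); note that no labels are given to the algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical transactions: each row is [hour of purchase, amount spent].
transactions = np.array([
    [9, 12.5], [10, 15.0], [9, 10.0],      # small morning purchases
    [19, 120.0], [20, 135.0], [18, 110.0], # large evening purchases
])

# Ask for two clusters; the algorithm discovers the grouping on its own.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(transactions)

# labels_ assigns each transaction a cluster id (0 or 1) -- learned
# purely from patterns in the data, with no labels provided.
labels = kmeans.labels_
```

With well-separated data like this, the algorithm recovers the "morning" and "evening" groups even though it was never told they exist.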

Another example of unsupervised learning is image clustering. Suppose we have a dataset of images of different animals. We can use unsupervised learning to group together similar images of animals, without being given any explicit labels. The algorithm would learn from the data and then group together images that have similar features, such as color, shape, and texture.

## Semi-Supervised Learning

Semi-supervised learning is a type of machine learning that falls between supervised learning and unsupervised learning. In supervised learning, we have a labeled dataset and we train a model to make predictions on new, unseen data. In unsupervised learning, we have an unlabeled dataset and we aim to discover structure or patterns in the data. In semi-supervised learning, we have both labeled and unlabeled data and we aim to use the information in the labeled data to improve the performance of our model on the unlabeled data.

The basic idea behind semi-supervised learning is to use the small amount of labeled data that we have to guide the learning process for the much larger amount of unlabeled data. This is particularly useful when obtaining labeled data is expensive or time-consuming, but unlabeled data is readily available. By combining the labeled and unlabeled data, we can leverage the large amount of unlabeled data to improve the accuracy of our predictions.

One example of semi-supervised learning is image classification. Let's say we want to classify images of cats and dogs, but we only have a small labeled dataset of 100 images. We also have a large unlabeled dataset of 10,000 images. We could use the self-training algorithm for semi-supervised learning. We start by training a model on the labeled data of 100 images. We then use this model to make predictions on the unlabeled data of 10,000 images. The predictions with the highest confidence scores are added to the labeled data, so we now have 200 labeled images. We then train a new model on the expanded dataset of 200 labeled images and use this model to make predictions on the unlabeled data again. This process is repeated multiple times, each time adding the most confident predictions to the labeled data, until the accuracy on the unlabeled data no longer improves.
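The self-training loop described above can be sketched in a few lines. This is a simplified illustration on synthetic numeric data (standing in for image features), with an arbitrary 0.95 confidence threshold and a fixed number of rounds rather than a convergence check.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: a small labeled pool and a large unlabeled pool.
X, y = make_classification(n_samples=1100, n_features=10, random_state=0)
X_lab, y_lab = X[:100], y[:100]  # 100 labeled examples
X_unlab = X[100:]                # 1000 unlabeled examples

for _ in range(3):  # a few self-training rounds
    # 1. Train on the current labeled pool.
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

    # 2. Predict on the unlabeled pool with confidence scores.
    probs = clf.predict_proba(X_unlab)
    confidence = probs.max(axis=1)
    confident = confidence > 0.95  # keep only high-confidence predictions
    if not confident.any():
        break

    # 3. Move confident pseudo-labeled examples into the labeled pool.
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, probs.argmax(axis=1)[confident]])
    X_unlab = X_unlab[~confident]
```

In practice, scikit-learn ships a ready-made version of this idea as `sklearn.semi_supervised.SelfTrainingClassifier`, which wraps any classifier that exposes `predict_proba`.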

By leveraging the information in the unlabeled data, we can improve the accuracy of our model compared to using only the 100 labeled images. Semi-supervised learning has been successfully applied in a variety of domains, including natural language processing, computer vision, and speech recognition, and has been shown to improve performance over models trained on labeled data alone.

## Conclusion

All three approaches have their own strengths and weaknesses, and the choice of the appropriate approach depends on the specific problem at hand and the availability of data. Moreover, there are many variations and extensions of these basic paradigms, including reinforcement learning, transfer learning, and active learning, among others. In summary, supervised, unsupervised, and semi-supervised learning are powerful techniques that are widely used in many fields of machine learning, artificial intelligence, and data science. By understanding the strengths and limitations of each approach, we can make informed decisions about which approach to use for a particular problem and how to design effective learning algorithms.
