

Different Types of Cross-Validations in Machine Learning and Their Explanations


Machine learning and proper training go hand-in-hand. You can’t directly use or fit the model on a set of training data and say ‘Yes, this will work.’ To ensure that the model is correctly trained on the data provided without much noise, you need to use cross-validation techniques. These are statistical methods used to estimate the performance of machine learning models.

This article will introduce you to the different types of cross-validation techniques, supported with detailed explanations and code.

Types of cross-validation

  1. K-fold cross-validation
  2. Hold-out cross-validation
  3. Stratified k-fold cross-validation
  4. Leave-p-out cross-validation
  5. Leave-one-out cross-validation
  6. Monte Carlo (shuffle-split)
  7. Time series (rolling cross-validation)

K-fold cross-validation

In this technique, the whole dataset is partitioned into k parts of equal size, and each partition is called a fold. It’s known as k-fold since there are k parts, where k can be any integer: 3, 4, 5, and so on.

One fold is used for validation and the other k-1 folds are used for training the model. This process is repeated k times so that each fold serves as the validation set exactly once, with the remaining folds forming the training set.

[Image: k-fold cross-validation with 5 folds. Source: sqlrelease.com]

The image above shows 5 folds and hence, 5 iterations. In each iteration, one fold is the validation/test set and the other k-1 folds (4 folds) form the training set. To get the final accuracy, average the validation accuracies of the k models.

This validation technique is not considered suitable for imbalanced datasets, because the folds may not preserve the ratio of each class, so the model may not see a representative proportion of each class’s data during training.

Here’s an example of how to perform k-fold cross-validation using Python.

Code:
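A minimal sketch using scikit-learn’s KFold, assuming the Iris dataset and a logistic regression classifier purely for illustration:

# K-fold cross-validation with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5 folds: each fold is the validation set once, the other 4 form the training set.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=kfold)

print("Accuracy per fold:", scores)
print("Mean accuracy:", scores.mean())

Running this prints one accuracy per fold and their mean, which is the cross-validated estimate of model performance.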


Holdout cross-validation

Also called a train-test split, holdout cross-validation randomly partitions the entire dataset into a training set and a validation set. A common rule of thumb is to use roughly 70% of the dataset as the training set and the remaining 30% as the validation set. Since the dataset is split into only two sets, the model is trained just once on the training set, which makes this technique fast to execute.

[Image: holdout split into training and test sets. Source: datavedas.com]

In the image above, the dataset is split into a training set and a test set. You can train the model on the training set and evaluate it on the test set. However, if you want to tune hyperparameters or select the best model, you can carve out an additional validation set, as shown below.

[Image: holdout split into training, validation, and test sets. Source: datavedas.com]

Code:
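A minimal sketch of a 70-30 holdout split with scikit-learn, again assuming the Iris dataset and a logistic regression model for illustration:

# Holdout validation: a single 70-30 train-test split.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the model is trained only once
print("Holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

Because the model is fit only once, holdout is the fastest of these techniques, but the estimate depends heavily on which samples happen to land in the test set.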


Stratified k-fold cross-validation

As seen above, plain k-fold validation is not well suited to imbalanced datasets, because the data is split into k folds at random, without regard to class proportions. Not so with stratified k-fold, an enhanced version of the k-fold cross-validation technique. Although it too splits the dataset into k equal folds, each fold preserves the same ratio of target-variable instances as the complete dataset. This makes it work well for imbalanced datasets, though not for time-series data.

[Image: stratified k-fold preserving the class ratio in every fold. Source: dataaspirant.com]

In the example above, the original dataset contains far fewer females than males, so the target variable distribution is imbalanced. In the stratified k-fold cross-validation technique, this ratio of target-variable instances is maintained in every fold.

Code:
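A minimal sketch using scikit-learn’s StratifiedKFold, assuming the Iris dataset and a logistic regression model for illustration:

# Stratified k-fold: every fold keeps the class ratio of the full dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=skf)

print("Accuracy per fold:", scores)
print("Mean accuracy:", scores.mean())

Note that Iris is actually a balanced dataset, so it only demonstrates the API here; stratification matters most when the class distribution is skewed.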


Leave-p-out cross-validation

An exhaustive cross-validation technique: if a dataset has n samples, p samples are used as the validation set and n-p samples are used as the training set. The process is repeated for every possible combination of p samples, so each such subset is used as a validation set exactly once.

The technique produces good estimates but has a high computation time, and for larger datasets it quickly becomes computationally infeasible. It’s also not considered ideal for imbalanced datasets: if a split leaves the training set dominated by samples of one class, the model will not be able to generalize properly and will become biased toward that class.

Code:
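A minimal sketch using scikit-learn’s LeavePOut with p = 2, assuming a tiny toy dataset because the number of splits grows combinatorially with dataset size:

# Leave-p-out with p = 2 on a toy dataset of 4 samples.
import numpy as np
from sklearn.model_selection import LeavePOut

X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
y = np.array([0, 0, 1, 1])

lpo = LeavePOut(p=2)
print("Number of splits:", lpo.get_n_splits(X))  # C(4, 2) = 6 splits

for train_index, val_index in lpo.split(X):
    print("Train:", train_index, "Validation:", val_index)

With only 4 samples and p = 2 there are already 6 splits; in general the count is C(n, p), which is why the method becomes unfeasible for realistic dataset sizes.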


Leave-one-out cross-validation

In this technique, only 1 sample is used as the validation set and the remaining n-1 samples are used as the training set. Think of it as a special case of the leave-p-out cross-validation technique with p = 1.

To understand this better, consider this example:
There are 1000 instances in your dataset. In each iteration, 1 instance will be used for the validation set and the remaining 999 instances will be used as the training set. The process repeats itself until every instance from the dataset is used as a validation sample.

[Image: leave-one-out cross-validation]
The leave-one-out cross-validation method is computationally expensive to perform and shouldn’t be used with very large datasets. The good news is that the technique is very simple and requires no configuration. It also provides a reliable, largely unbiased estimate of your model’s performance.

Code:
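A minimal sketch using scikit-learn’s LeaveOneOut, assuming the Iris dataset and a logistic regression model for illustration:

# Leave-one-out: one sample is held out for validation in every iteration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

loo = LeaveOneOut()
scores = cross_val_score(model, X, y, cv=loo)  # one score per sample

print("Number of iterations:", len(scores))
print("Mean accuracy:", scores.mean())

Even on the 150-sample Iris dataset this means 150 model fits, which illustrates why the method doesn’t scale to large datasets.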


Monte Carlo cross-validation

Also known as shuffle-split cross-validation and repeated random subsampling cross-validation, the Monte Carlo technique repeatedly splits the whole dataset into training data and test data at random. The split can be 70-30%, 60-40%, or any proportion you prefer; what changes from iteration to iteration is which samples land in the training and test sets, since a fresh random split is drawn every time.

The next step is to fit the model on that iteration’s training set and calculate its accuracy (or error) on the test set. Repeat these iterations many times - 100, 400, 500, or even more - and take the average of all the test errors to conclude how well your model performs.

For a 100-iteration run, the model training will look like this:

[Image: Monte Carlo cross-validation over 100 iterations. Source: medium.com]

You can see that in each iteration, different samples end up in the training and test sets. The test errors are then averaged across all iterations to get the final estimate.

Code:
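A minimal sketch using scikit-learn’s ShuffleSplit with 100 random 70-30 splits, assuming the Iris dataset and a logistic regression model for illustration:

# Monte Carlo (shuffle-split): 100 independent random 70-30 splits.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

shuffle_split = ShuffleSplit(n_splits=100, test_size=0.3, random_state=42)
scores = cross_val_score(model, X, y, cv=shuffle_split)

print("Mean accuracy over 100 random splits:", scores.mean())

Unlike k-fold, a given sample may appear in the test set of several iterations or in none at all, since each split is drawn independently.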


Time series (rolling cross-validation / forward chaining method)

Before going into the details of the rolling cross-validation technique, it’s important to understand what time-series data is.

Time series is data collected at successive points in time. This kind of data allows you to understand what factors influence certain variables from period to period. Some examples of time-series data are weather records, economic indicators, etc.

In the case of time-series datasets, cross-validation is not so trivial. You can’t choose data instances at random and assign them to the test set or the training set, because that would let the model train on future observations to predict past ones. Hence, this technique is used to perform cross-validation on time-series data with time as the important factor.

Since the order of data is very important for time series-related problems, the dataset is split into training and validation sets according to time. Therefore, it’s also called the forward chaining method or rolling cross-validation.

To begin:
Start training with a small subset of the data. Forecast the later data points and check the forecasts’ accuracy. Those data points are then included in the next, larger training set, and the subsequent data points are forecasted. The process continues in this rolling fashion.

The image below shows the method.

[Image: rolling (forward-chaining) cross-validation on time-series data. Source: medium.com]

Code:
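A minimal sketch using scikit-learn’s TimeSeriesSplit on a small synthetic series, purely to illustrate how the training window grows forward in time:

# Rolling (forward-chaining) cross-validation: the training window grows over time.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(12).reshape(-1, 1)  # 12 time-ordered observations
y = np.arange(12)

tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_index, val_index) in enumerate(tscv.split(X), start=1):
    print(f"Fold {fold}: train={train_index}, validation={val_index}")

Notice that the validation indices always come after the training indices, so the model never trains on data from the future.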


Try your hand at these snippets and play around with them to get the hang of how cross-validation is done using these seven techniques.
Happy coding!
