What K-Fold Cross Validation really is in Machine Learning in simple terms

November 23rd, 2021

The basics of cross validation are as follows: the model is trained on a training set and then evaluated once per validation set, and the evaluations are averaged out. This method yields a more reliable estimate of model performance than a single split. However, because the training time is multiplied by the number of validation sets, it is more computationally expensive.

To start with, some popular types of cross-validation techniques are:

  1. K-fold cross-validation
  2. Stratified K-fold cross-validation
  3. Hold-out based cross-validation
  4. Leave-one-out cross-validation
  5. Group K-fold cross-validation

Out of these, in this post I will ONLY discuss k-fold cross-validation.

To estimate how the machine learning model will perform on never-before-seen examples (the test set), it is best practice to further split the training set into a training set and a validation set.

For example, if we split the training set into fifths, we can train on four-fifths of the original training set and evaluate the newly trained model by making predictions on the remaining fifth, known as the validation set.

It is possible to train and evaluate like this five times — leaving aside a different fifth slice as the validation set each time. This is known as k-fold cross-validation, where k in this case is five. With this approach, we will have not one estimate but five estimates for the generalization error.
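To make this concrete, here is a minimal sketch of the five-fold procedure using scikit-learn's KFold. The logistic-regression model and the synthetic dataset are placeholder assumptions, not part of the original post; substitute whichever estimator and training set you are actually working with.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Placeholder data and model (assumptions for illustration only).
X, y = make_classification(n_samples=500, random_state=42)
model = LogisticRegression(max_iter=1000)

kf = KFold(n_splits=5, shuffle=True, random_state=42)
fold_scores = []
for train_idx, val_idx in kf.split(X):
    # Train on four-fifths, evaluate on the held-out fifth.
    model.fit(X[train_idx], y[train_idx])
    fold_scores.append(model.score(X[val_idx], y[val_idx]))

print(fold_scores)           # five estimates of the generalization error
print(np.mean(fold_scores))  # their average
```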

We will store the training score and the cross-validation score for each of the five runs, and we will store the cross-validation predictions each time. After all five runs are complete, we will have cross-validation predictions for the entire dataset. This will be the best all-in estimate of the performance on the test set.
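scikit-learn also ships helpers that do this bookkeeping for us. The sketch below, reusing the same placeholder model and data as above, stores the training and cross-validation score for each run via cross_validate and collects out-of-fold predictions for the entire dataset via cross_val_predict.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate, cross_val_predict

# Same placeholder model and data as in the previous sketch.
X, y = make_classification(n_samples=500, random_state=42)
model = LogisticRegression(max_iter=1000)

# Training score and cross-validation score for each of the five runs.
scores = cross_validate(model, X, y, cv=5, return_train_score=True)
print(scores["train_score"])
print(scores["test_score"])

# Out-of-fold predictions: every sample is predicted exactly once,
# by a model that never saw it during training.
oof_preds = cross_val_predict(model, X, y, cv=5)
print((oof_preds == y).mean())  # all-in estimate of test-set performance
```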

An example of k-fold cross-validation is depicted in the figure below, where k = 6.

In this six-fold cross-validation, the data is split into six folds: one fold is held out for evaluation and the remaining five are used for training, rotating the held-out fold each run. The accuracy column shows fold accuracies varying between 80% and 92%. The average accuracy is given by

$$\text{average accuracy} = \frac{1}{k}\sum_{i=1}^{k}\text{accuracy}_i, \qquad k = 6.$$

Cross-validation can be run for different algorithms on the same dataset, and each algorithm will produce its own range of accuracies. Whichever algorithm offers an acceptable accuracy range can then be chosen for that dataset. That said, cross-validation is not primarily a tool for crowning the single best-performing model; it is used more to validate or understand a particular model's performance on a given dataset and on new test data. The usual approach in practice is to tune a model by measuring its cross-validation performance as its hyperparameters are varied.
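As a quick illustration of that comparison, the sketch below runs the same five-fold cross-validation for two candidate algorithms on one dataset and reports the range of fold accuracies for each. The two estimators and the synthetic data are again placeholder choices, not a prescription.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder dataset; the two candidate algorithms are illustrative.
X, y = make_classification(n_samples=500, random_state=42)
candidates = [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(random_state=42)),
]

for name, model in candidates:
    scores = cross_val_score(model, X, y, cv=5)
    # Report the range of fold accuracies as well as the mean.
    print(f"{name}: {scores.min():.3f} to {scores.max():.3f} "
          f"(mean {scores.mean():.3f})")
```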

The advantages of cross-validation are that it offers a systematic way to test a new model against data, and that it can flag models with high variance or overfitting issues.

Drawback of K-Fold Cross Validation

Essentially it is a computational issue: with k folds, the model must be trained k times. If a single training run already takes too long (e.g., an enormous neural network), a single train/validation split is used instead. Equally, if training is slow simply because there is so much data, a single split will still be reliable.
