Machine Learning Archive
Principal Components Analysis Explained for Dummies
On January 17, 2022 In Classical Machine Learning, Machine Learning
In this post, we will take an in-depth look at principal components analysis (PCA). We start with a simple explanation to build an intuitive understanding of PCA. In the second part, we will look at a more mathematical definition of principal components analysis. Lastly, we learn how to perform PCA in Python.
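As a preview, here is a minimal sketch of PCA with scikit-learn; the feature matrix `X` is made-up stand-in data, not an example from the post:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 100 samples with 5 features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Project the data onto its first two principal components
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                 # (100, 2)
print(pca.explained_variance_ratio_)   # variance captured per component
```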
Building a Neural Network with Training Pipeline in TensorFlow and Python for the Titanic Dataset: A Step-by-Step Example
On January 8, 2022 In Deep Learning, Machine Learning
In this post, we will cover how to build a simple neural network in TensorFlow for a spreadsheet dataset. In addition to constructing a model, we will build a complete preprocessing and training pipeline in TensorFlow that will take the original dataset as an input and automatically transform it into the format necessary for training.
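A minimal sketch of the kind of model the post builds, assuming the Titanic features have already been converted to a numeric matrix (the column count of 10 is a placeholder, not the post's actual feature count):

```python
import tensorflow as tf

# Hypothetical: numeric feature matrix with 10 columns after preprocessing
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # survived / not survived
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```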
An Introduction to Convolutional Neural Network Architecture
On December 12, 2021 In Computer Vision, Deep Learning, Machine Learning
In this post, we cover the basic building blocks of convolutional neural networks and how they are combined to form powerful neural network architectures for computer vision. We start by looking at convolutional layers, pooling layers, and fully connected layers. Then, we take a step-by-step walkthrough of a simple CNN architecture.
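As an illustrative sketch (not the exact architecture from the post), the three layer types combine like this in Keras; the 28x28 grayscale input and 10-class output are assumptions:

```python
import tensorflow as tf

# A toy CNN: convolution -> pooling -> fully connected classifier
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),            # e.g. grayscale images
    tf.keras.layers.Conv2D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),      # 10-class output
])
model.summary()
```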
What is Pooling in a Convolutional Neural Network (CNN): Pooling Layers Explained
On December 5, 2021 In Computer Vision, Deep Learning, Machine Learning
Pooling in convolutional neural networks is a technique for generalizing features extracted by convolutional filters and helping the network recognize features independently of their location in the image. Why Do We Need Pooling in a CNN? Convolutional layers are the basic building blocks of a convolutional neural network used for computer vision applications.
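To make the operation concrete, here is a small max-pooling example; the 4x4 feature map is made up for illustration:

```python
import numpy as np
import tensorflow as tf

# A made-up 4x4 single-channel feature map, shaped (batch, height, width, channels)
x = np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1)

# 2x2 max pooling with stride 2 keeps the largest value in each 2x2 window,
# halving the spatial resolution
pooled = tf.keras.layers.MaxPooling2D(pool_size=2)(x)
print(pooled.numpy().reshape(2, 2))
# [[ 5.  7.]
#  [13. 15.]]
```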
Understanding Padding and Stride in Convolutional Neural Networks
On December 3, 2021 In Computer Vision, Deep Learning, Machine Learning
Padding describes the addition of empty pixels around the edges of an image. The purpose of padding is to preserve the original size of an image when applying a convolutional filter and to enable the filter to perform full convolutions on the edge pixels. Stride, in the context of convolutional neural networks, describes the step size with which a convolutional filter moves across the image.
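A quick sketch of how padding and stride affect output size in Keras; the 32x32 input shape is an arbitrary assumption:

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(32, 32, 1))  # arbitrary image size

# 'same' padding pads the edges so a stride-1 convolution preserves 32x32
same = tf.keras.layers.Conv2D(8, kernel_size=3, padding="same")(inputs)

# 'valid' (no) padding with stride 2 shrinks the output to 15x15
strided = tf.keras.layers.Conv2D(8, kernel_size=3, padding="valid", strides=2)(inputs)

print(same.shape)     # (None, 32, 32, 8)
print(strided.shape)  # (None, 15, 15, 8)
```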
Understanding Convolutional Filters and Convolutional Kernels
On November 29, 2021 In Computer Vision, Deep Learning, Machine Learning
This post will introduce convolutional kernels and discuss how they are used to perform 2D and 3D convolution operations. We also look at the most common kernel operations, including edge detection, blurring, and sharpening. A convolutional filter is a small matrix of weights applied to an image to manipulate it or to extract structures and features from it.
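For example, a classic sharpening kernel can be applied with SciPy; the tiny input image here is made up:

```python
import numpy as np
from scipy.signal import convolve2d

# A made-up 5x5 grayscale image
image = np.random.default_rng(0).uniform(0, 1, size=(5, 5))

# 3x3 sharpening kernel: boosts the center pixel relative to its neighbors
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])

sharpened = convolve2d(image, sharpen, mode="same")
print(sharpened.shape)  # (5, 5)
```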
What is a Convolution: Introducing the Convolution Operation Step by Step
On November 21, 2021 In Computer Vision, Deep Learning, Machine Learning
In this post, we build an intuitive step-by-step understanding of the convolution operation and develop the mathematical definition as we go. A convolution describes a mathematical operation that blends one function with another function, known as a kernel, to produce an output that is often more interpretable.
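A minimal sketch of the discrete 1D case, computing (f * g)[n] = sum over m of f[m] * g[n - m] by hand and checking the result against NumPy; the signal and kernel values are made up:

```python
import numpy as np

f = np.array([1, 2, 3], dtype=float)   # input signal
g = np.array([0, 1, 0.5])              # kernel

# (f * g)[n] = sum over m of f[m] * g[n - m]
out = np.zeros(len(f) + len(g) - 1)
for n in range(len(out)):
    for m in range(len(f)):
        if 0 <= n - m < len(g):
            out[n] += f[m] * g[n - m]

print(out)                # [0.  1.  2.5 4.  1.5]
print(np.convolve(f, g))  # same result
```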
What is Batch Normalization And How Does it Work?
On November 15, 2021 In Deep Learning, Machine Learning
Batch normalization is a technique for standardizing the inputs to layers in a neural network. Batch normalization was designed to address the problem of internal covariate shift, which arises as a consequence of updating the inputs to multiple layers simultaneously in deep neural networks. What is Internal Covariate Shift?
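The core computation can be sketched in a few lines of NumPy; the batch of activations is made up, and epsilon guards against division by zero:

```python
import numpy as np

# A made-up batch of 4 samples with 3 activations each
x = np.random.default_rng(0).normal(size=(4, 3))

# Standardize each activation across the batch dimension
eps = 1e-5
mean = x.mean(axis=0)
var = x.var(axis=0)
x_hat = (x - mean) / np.sqrt(var + eps)

# Learned scale (gamma) and shift (beta) restore representational power;
# initialized here to the identity transform
gamma, beta = np.ones(3), np.zeros(3)
y = gamma * x_hat + beta
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0 mean, ~1 std
```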
Deep Learning Optimization Techniques for Gradient Descent Convergence
On November 11, 2021 In Deep Learning, Machine Learning
In this post, we will introduce momentum, Nesterov momentum, AdaGrad, RMSProp, and Adam, the most common techniques that help gradient descent converge faster. Understanding Exponentially Weighted Moving Averages A core mechanism behind many of the following algorithms is called an exponentially weighted moving average. As the name implies, you calculate a moving average over past values in which recent values are weighted exponentially more heavily than older ones.
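A minimal sketch of the update rule v = beta * v + (1 - beta) * x that these optimizers share; the noisy input series is made up:

```python
import numpy as np

def ewma(values, beta=0.9):
    """Exponentially weighted moving average: v = beta * v + (1 - beta) * x."""
    v = 0.0
    averages = []
    for x in values:
        v = beta * v + (1 - beta) * x
        averages.append(v)
    return np.array(averages)

# Made-up noisy series; the EWMA smooths it, weighting recent values most
series = np.sin(np.linspace(0, 3, 30)) + np.random.default_rng(0).normal(0, 0.2, 30)
print(ewma(series)[:5].round(3))
```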
Stochastic Gradient Descent versus Mini Batch Gradient Descent versus Batch Gradient Descent
On November 6, 2021 In Deep Learning, Machine Learning
In this post, we will discuss the three main variants of gradient descent and their differences. We look at the advantages and disadvantages of each variant and how they are used in practice. Batch gradient descent uses the whole dataset, known as the batch, to compute the gradient. Utilizing the whole dataset yields an accurate estimate of the gradient.
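A sketch contrasting the three variants on a toy linear-regression objective: a batch size of 1 gives stochastic gradient descent, the full dataset gives batch gradient descent, and anything in between gives mini-batch gradient descent. All data and hyperparameters here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3 * X[:, 0] + rng.normal(0, 0.1, 100)  # made-up linear data, true weight 3

def gradient_descent(batch_size, lr=0.1, epochs=50):
    w = 0.0
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            batch = idx[start:start + batch_size]
            error = X[batch, 0] * w - y[batch]
            w -= lr * (error * X[batch, 0]).mean()  # gradient of mean squared error / 2
    return w

print(gradient_descent(batch_size=1))       # stochastic gradient descent
print(gradient_descent(batch_size=16))      # mini-batch gradient descent
print(gradient_descent(batch_size=len(X)))  # batch gradient descent
```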