Using DeOldify to Colorize and Restore Grayscale Images and Videos

Articles

Image colorization is an engaging topic in the field of image-to-image translation. Even though color photography was invented in 1907, it didn't become popular with the average person until the 1960s because it was expensive and inaccessible. Until then, nearly all photography and videography was done in black and white. Colorizing these images was once impossible, until the DeOldify deep learning model came to life.

Continue reading Using DeOldify to Colorize and Restore Grayscale Images and Videos

Transfer Learning with PyTorch

Articles

When we learn something in our daily lives, similar things become very easy to learn because we apply our existing knowledge to the new task. For example, when I learned how to ride a bicycle, it became very easy to learn how to ride a motorcycle: from riding the bicycle, I already knew how to sit and maintain balance, hold the handles firmly, and pedal to accelerate. Using that prior knowledge, I could easily adapt to a motorcycle's design and how it's driven. That is the general idea behind transfer learning.
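The bicycle-to-motorcycle analogy maps directly to code: reuse a pretrained network's layers as they are, and train only a new output layer for the new task. Below is a minimal PyTorch sketch of that idea; the tiny backbone is just a stand-in for a real pretrained model (such as a torchvision ResNet), and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

# A stand-in "pretrained" backbone; in practice you would load a real
# pretrained model, e.g. from torchvision.models.
backbone = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 8),
)

# Freeze the backbone so its learned weights are reused, not retrained.
for param in backbone.parameters():
    param.requires_grad = False

# Attach a new task-specific head; only this part will be trained.
head = nn.Linear(8, 3)
model = nn.Sequential(backbone, head)

# Only the head's weight and bias remain trainable.
trainable = [p for p in model.parameters() if p.requires_grad]
```

An optimizer would then be given only `trainable`, so gradient updates touch the new head while the frozen backbone keeps its prior knowledge.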

Continue reading Transfer Learning with PyTorch

Understanding the Mathematics behind Principal Component Analysis

Articles

In this post, we’re going to learn the foundations of a very famous and interesting dimensionality reduction technique known as principal component analysis (PCA).

Specifically, we’re going to learn what principal components are, how data is concentrated within them, and the orthogonality properties that make extracting important information easier.

In other words, PCA is a procedure for reducing the dimensionality of the variable space by representing it with a few orthogonal (uncorrelated) variables that capture most of its variability.
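The extraction described above can be sketched with plain NumPy: center the data, eigendecompose its covariance matrix, and project onto the top components. The correlated 2-D dataset below is synthetic, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D data: the second variable is mostly a copy of the first.
x = rng.normal(size=200)
data = np.column_stack([x, x + 0.1 * rng.normal(size=200)])

# Center the data, then eigendecompose its covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Sort components by explained variance, largest first.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of total variance captured by each principal component.
explained = eigvals / eigvals.sum()

# Project onto the first component: 2-D data reduced to 1-D.
projected = centered @ eigvecs[:, :1]
```

Because the two variables are strongly correlated, nearly all of the variance concentrates in the first component, which is exactly what lets PCA discard the rest with little information loss.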

Continue reading Understanding the Mathematics behind Principal Component Analysis

Understanding the Mathematics Behind Naive Bayes

Articles

In this post, we’re going to dive deep into one of the most popular and simple machine learning classification algorithms—the Naive Bayes algorithm, which is based on Bayes’ theorem for calculating probabilities and conditional probabilities.

Before we jump into the Naive Bayes classifier/algorithm, we need to know the fundamentals of Bayes’ theorem, on which it’s based.
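As a quick refresher, Bayes’ theorem states that P(A|B) = P(B|A) · P(A) / P(B). A minimal worked example in Python, using a hypothetical diagnostic test whose probabilities are made up purely for illustration:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# Hypothetical diagnostic test (all numbers are illustrative).
p_condition = 0.01          # prior P(A): 1% of people have the condition
p_pos_given_cond = 0.95     # likelihood P(B|A): test sensitivity
p_pos_given_no_cond = 0.05  # false positive rate

# Total probability of a positive result, P(B), by the law of
# total probability over "has condition" and "doesn't have it".
p_pos = (p_pos_given_cond * p_condition
         + p_pos_given_no_cond * (1 - p_condition))

# Posterior P(A|B): probability of the condition given a positive test.
p_cond_given_pos = p_pos_given_cond * p_condition / p_pos
```

Even with a fairly accurate test, the posterior stays low because the prior is so small, which is precisely the kind of reasoning Naive Bayes applies to class probabilities.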

Continue reading Understanding the Mathematics Behind Naive Bayes

Image Classification on Android using OpenCV

Articles

This tutorial uses the popular computer vision library OpenCV for building an image classifier that runs on Android devices.

The overall process looks like this: First, a color histogram of the hue channel from the HSV color space is extracted from each image in the dataset. Next, an artificial neural network (ANN) is built, trained on these features, and saved for later use in an Android app. An Android Studio project is then created, which imports the Android release of OpenCV. Once the import succeeds, the saved, trained ANN is loaded for making predictions.

Continue reading Image Classification on Android using OpenCV

TensorFlow MLIR: An Introduction

Articles

Currently, different domains of machine learning software and hardware have different compiler infrastructures, a dynamic that poses a number of challenges.

MLIR seeks to address this software fragmentation by building a reusable and extensible compiler infrastructure. In this piece, we’ll look at a conceptual view of MLIR.

MLIR seeks to promote the design and implementation of code generators, optimizers, and translators at various stages of abstraction across different application domains. The need for MLIR arose from the realization that modern machine learning frameworks have different runtimes, compilers, and graph technologies. For example, TensorFlow itself has different compilers for different frameworks.

Continue reading TensorFlow MLIR: An Introduction