My Notes from Coursera’s Deep Learning Course

This blog is dedicated to sharing my notes from Coursera’s Deep Learning course. I’ll be posting summaries of each week’s lectures, as well as any interesting projects or papers I come across.

Introduction to Deep Learning

Deep learning is a branch of machine learning whose algorithms learn directly from data, including unstructured data such as images, audio, and text. It is a subset of artificial intelligence (AI), and its models are loosely inspired by the structure and function of the brain.

Deep learning algorithms automatically discover patterns in data. These patterns can then be used to make predictions about new data, or to detect anomalies and outliers.

Deep learning models are composed of multiple layers of interconnected processing nodes (neurons). Each layer transforms the data it receives from the previous layer until the final layer produces the desired output (prediction).

The number of layers, as well as the number of neurons in each layer, can vary depending on the problem that is being solved. Deep learning models can have dozens or even hundreds of layers.
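
To make the layered picture concrete, here is a minimal sketch of a forward pass through a small fully connected network in NumPy. The layer sizes and random weights are illustrative, not from the course:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (W, b) layers with ReLU activations."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)        # each layer transforms the previous layer's output
    W, b = layers[-1]
    return W @ x + b               # final layer: linear output (the prediction)

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 1]               # input -> two hidden layers -> output
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

y = forward(rng.standard_normal(4), layers)
print(y.shape)  # (1,)
```

Adding more entries to `sizes` is all it takes to make the network "deeper".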

Neural Networks

Neural networks are a class of machine learning models used to capture complex patterns in data. They are composed of a large number of interconnected processing nodes, or neurons, which together learn to recognize patterns in input data.

Convolutional Neural Networks

Convolutional neural networks (also known as ConvNets or CNNs) are a type of neural network particularly well suited to image processing tasks. They are similar to other neural networks, but they include convolutional layers, which slide learned filters across the input and share weights between spatial positions, letting them exploit spatial structure in the data.

Convolutional neural networks are typically used for tasks such as image classification, object detection, and face recognition.
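
The core operation of a convolutional layer can be sketched in a few lines of NumPy. This toy example (the image and filter are made up for illustration) applies a Sobel-style vertical-edge filter to a tiny image; note that deep learning libraries implement "convolution" as cross-correlation, as done here:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a tiny image
image = np.zeros((5, 5))
image[:, 2:] = 1.0                  # right half bright, left half dark
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)
edges = conv2d(image, sobel_x)
print(edges.shape)  # (3, 3)
```

In a real CNN the filter values are not hand-designed like `sobel_x`; they are learned from data, and many filters run in parallel per layer.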

Recurrent Neural Networks

A recurrent neural network (RNN) is a type of neural network where the output from the previous timestep is fed as input to the current timestep. This creates a “memory” which allows the RNN to model temporal/sequential data.

Two common variants of RNNs are:
- Unidirectional (forward) RNNs, where information flows in only one direction, from past to future
- Bidirectional recurrent neural networks (BRNNs), where the sequence is processed in both directions, so each output can use both past and future context
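
The "memory" idea above is just the hidden state being fed back in at each step. Here is a minimal sketch of a vanilla (unidirectional) RNN forward pass; the dimensions and random weights are illustrative:

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, b):
    """Run a vanilla RNN over a sequence; each step sees the previous hidden state."""
    h = np.zeros(Wh.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)   # new state mixes current input and "memory"
        hs.append(h)
    return np.array(hs)

rng = np.random.default_rng(1)
hidden, inp, steps = 6, 3, 5
Wx = rng.standard_normal((hidden, inp)) * 0.1
Wh = rng.standard_normal((hidden, hidden)) * 0.1
b = np.zeros(hidden)

hs = rnn_forward(rng.standard_normal((steps, inp)), Wx, Wh, b)
print(hs.shape)  # (5, 6): one hidden state per timestep
```

A bidirectional RNN would run a second pass over the reversed sequence and concatenate the two hidden states at each timestep.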

Autoencoders

An autoencoder is a neural network that learns to copy its input to its output. It is composed of an encoder and a decoder: the encoder compresses the input into a lower-dimensional code, and the decoder reconstructs the input from that code. The parameters of an autoencoder are trained so that the output is as close to the input as possible.

Autoencoders are used for dimensionality reduction and are useful for data compression. They can also be used for denoising, by training the network to reconstruct clean inputs from corrupted ones.
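
Structurally, a linear autoencoder is just two weight matrices and a reconstruction loss. The sketch below shows the shapes and the objective only; the weights are random stand-ins for what training would learn:

```python
import numpy as np

rng = np.random.default_rng(2)

# Untrained linear autoencoder: 10-dim input -> 3-dim code -> 10-dim reconstruction.
W_enc = rng.standard_normal((3, 10)) * 0.1   # encoder weights (would be learned)
W_dec = rng.standard_normal((10, 3)) * 0.1   # decoder weights (would be learned)

x = rng.standard_normal(10)
code = W_enc @ x                  # compressed representation (the "bottleneck")
x_hat = W_dec @ code              # reconstruction of the input
loss = np.mean((x - x_hat) ** 2)  # training minimizes this reconstruction error
print(code.shape, x_hat.shape)    # (3,) (10,)
```

Real autoencoders stack nonlinear layers on both sides of the bottleneck, but the encode/decode/reconstruction-loss pattern is the same.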

Deep Reinforcement Learning

Reinforcement learning is a subfield of machine learning, also described in terms of online learning or approximate dynamic programming. It concerns teaching agents (such as robots or computer programs) to take actions in an environment so as to maximize some notion of cumulative reward.

Deep reinforcement learning is a nascent but growing subfield of machine learning that combines deep learning methods (deep neural networks as function approximators) with reinforcement learning algorithms.

Deep reinforcement learning algorithms have been used to solve complex tasks such as playing Go and Atari games from raw pixels, and controlling robotic arms to manipulate objects.
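
The reinforcement learning idea of "maximizing cumulative reward" is easiest to see in the tabular Q-learning update, which deep RL methods like DQN approximate with a neural network. A minimal sketch, with made-up state/action sizes:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: nudge Q(s, a) toward the bootstrapped target."""
    target = r + gamma * np.max(Q[s_next])   # reward now + discounted best future value
    Q[s, a] += alpha * (target - Q[s, a])    # move a fraction alpha toward the target
    return Q

Q = np.zeros((4, 2))                         # 4 states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # 0.1  (moved 10% of the way toward the observed reward)
```

In deep Q-learning the table `Q` is replaced by a network `Q(s, a; θ)`, and the same target drives a gradient step on θ instead of a table update.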

Generative Models

In machine learning, a generative model is a model that captures the probability distribution of a dataset, typically so that new samples can be generated from it. The specific form of the model depends on the type of data being generated. For example, images can be modeled as two-dimensional arrays of pixel values, and thus a generative model for images might capture the joint probability distribution of all pixel values in all possible images. A particularly popular type of generative model is the generative adversarial network (GAN), which consists of two neural networks: a generator network that generates new samples, and a discriminator network that tries to distinguish the generator’s samples from real data. Training the two networks against each other pushes the generator toward producing more realistic samples.

Unsupervised Learning

Unsupervised learning is a type of machine learning that looks for previously undetected patterns in a data set without the aid of known output labels. It is used to separate similar objects into groups and to find hidden structure within the data. Clustering and dimensionality reduction are the two most common unsupervised learning tasks.

Common unsupervised learning algorithms include:
- k-means clustering
- Hierarchical clustering
- OPTICS algorithm
- Gaussian mixture models (GMMs)
- Apriori algorithm
- Markov models
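
As a concrete example of the first algorithm in the list, here is a bare-bones k-means implementation in NumPy, run on two made-up, well-separated blobs of points:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: alternate assigning points and recomputing centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]      # init from data points
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)                            # assign to nearest centroid
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)   # recompute centroid
    return labels, centroids

# Two well-separated blobs; k-means should recover the grouping
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
labels, centroids = kmeans(X, k=2)
print(labels.shape)  # (40,)
```

Note that k-means needs no labels at all: the grouping emerges purely from distances in the data, which is what makes it unsupervised.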

Dimensionality Reduction

Deep learning models tend to be very powerful, but also very slow to train. One way to speed up training is to use dimensionality reduction – that is, to reduce the number of input features you are using. This can be done in a number of ways, including:

– t-distributed stochastic neighbor embedding (t-SNE): This method models pairwise similarities between points, using a Student-t distribution in the low-dimensional space. It is mainly used to visualize high-dimensional data in two or three dimensions, rather than to produce input features for a downstream model.

– principal component analysis (PCA): This is a linear-algebra-based method of dimensionality reduction. PCA finds the “principal components” of your data, which are the orthogonal directions along which your data varies the most. These principal components can then be used as the new input features for your model.

– independent component analysis (ICA): This is a statistical method of dimensionality reduction. ICA finds the “independent components” of your data, which are linear combinations of your input features that are as statistically independent as possible. These independent components can then be used as the new input features for your model.
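
PCA is short enough to implement directly. A minimal sketch using NumPy’s SVD, on made-up random data:

```python
import numpy as np

def pca(X, n_components):
    """Project data onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # new, lower-dimensional features

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 10))                 # 100 samples, 10 features
Z = pca(X, n_components=3)
print(Z.shape)  # (100, 3)
```

Training a model on `Z` instead of `X` uses 3 input features instead of 10, which is exactly the speedup this section is describing.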

Applications of Deep Learning

Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. Neural networks are composed of interconnected layers of nodes (neurons) that apply mathematical functions to their inputs, and can be used to identify patterns in data. Deep learning algorithms learn these patterns by “tuning” the connection weights between neurons (i.e., adjusting how much each neuron contributes to the output of the network).
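
The weight-"tuning" described above is gradient descent. The toy example below (a single linear neuron on made-up data, not a full network) shows the mechanism: repeatedly nudge the weight opposite the gradient of the loss until the neuron fits the data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Learn w for y = 2*x with one linear "neuron" and squared-error loss.
x = rng.standard_normal(50)
y = 2.0 * x

w, lr = 0.0, 0.1
for _ in range(100):
    y_hat = w * x
    grad = np.mean(2 * (y_hat - y) * x)  # d(loss)/dw for mean squared error
    w -= lr * grad                       # "tune" the connection weight
print(round(w, 3))  # 2.0
```

In a real deep network the same idea applies to millions of weights at once, with the per-weight gradients computed by backpropagation.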

Deep learning is often used for applications such as image recognition, speech recognition, and natural language processing.
