Continual Deep Learning by Functional Regularisation of Memorable Past

We all know that learning is a continual process: new information keeps arriving, and what we learned before must stay accurate and up-to-date. Neural networks face the same problem.

One way to address it is something called functional regularisation. The idea is to keep a set of memorable past examples and encourage the model to keep behaving the same way on them, even as it learns from new data. This article looks at how that idea works and why it matters for deep learning.

Continual deep learning presents a unique challenge to neural networks: they must not only learn from new data, but also avoid forgetting what was learned on previous tasks. A number of methods have been proposed to address this challenge, including those that focus on regularisation of the memorable past. In this paper, a method for functional regularisation of the memorable past is proposed that is well suited to handling long-term dependencies in data. The method is based on a recurrent neural network that uses an additional memory unit to store information about previously seen data. The memory unit is updated at each time step using a gating function that allows the network to control how much information from the past is forgotten or remembered. The method can be used to learn a variety of tasks incrementally, without forgetting previous tasks.
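To make the gating idea above concrete, here is a minimal sketch of a gated memory update. Everything is reduced to scalars for clarity (a real network would use vectors and learned weight matrices), and the names `update_memory`, `w_gate`, and `b_gate` are illustrative, not from the paper.

```python
import math

def sigmoid(x):
    # Squash any real number into (0, 1), suitable for a gate value.
    return 1.0 / (1.0 + math.exp(-x))

def update_memory(memory, observation, w_gate, b_gate):
    """Gated memory update: the gate decides how much of the old
    memory to keep versus how much of the new observation to write.
    Scalar simplification of the vector-valued gating described above."""
    gate = sigmoid(w_gate * observation + b_gate)  # in (0, 1)
    return gate * memory + (1.0 - gate) * observation
```

With `w_gate = 0` and a large positive `b_gate`, the gate saturates near 1 and the memory is almost entirely preserved; with a large negative bias it is almost entirely overwritten by the new observation.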

What is Deep Learning?

Deep learning is a subset of machine learning in artificial intelligence (AI) that has networks capable of learning unsupervised from data that is unstructured or unlabeled. Also known as deep neural learning or deep neural networks (DNNs), deep learning models are neural networks (algorithms used to simulate the workings of the human brain) that are composed of many layers.

What is Functional Regularisation?

Functional regularisation is a term used in machine learning to describe a process whereby a model is trained not only on new data but also on data from the past. This process can help improve the performance of the model by making it more resistant to overfitting and by helping it to better generalise to new data.
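One simple way to "train on data from the past as well as new data", as described above, is to mix replayed past examples into each training batch. The sketch below is illustrative (the function name and `replay_fraction` parameter are invented for this example), not the specific mechanism from the paper.

```python
import random

def make_replay_batch(new_data, memory_buffer, batch_size, replay_fraction=0.5):
    """Build a training batch that mixes fresh examples with examples
    stored from earlier tasks, so the model keeps seeing the past
    while it learns the present."""
    n_replay = int(batch_size * replay_fraction) if memory_buffer else 0
    n_new = batch_size - n_replay
    batch = random.sample(new_data, min(n_new, len(new_data)))
    if n_replay:
        batch += random.sample(memory_buffer, min(n_replay, len(memory_buffer)))
    random.shuffle(batch)  # avoid any ordering effect between old and new
    return batch
```

`replay_fraction` controls the trade-off: higher values protect old knowledge more strongly but slow down learning of the new task.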

How can Functional Regularisation be used to improve Deep Learning?

Deep Learning (DL) models have achieved great success on many challenging tasks in recent years. However, these models are often prone to forgetting, which can occur when the model is trained on a new task that is different from the original task (e.g. a change in dataset or type of data). This phenomenon is known as catastrophic forgetting and can severely limit the applicability of DL models.

One possible solution to catastrophic forgetting is Functional Regularisation, which encourages the model to retain information about the original task by penalising changes to the function that represents it. This can be done by constraining the weights of the model so that they do not change too much when the model is trained on the new task. This approach has been shown to be effective in preventing forgetting and can improve the performance of DL models on new tasks.
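The penalty described above can be sketched in function space rather than weight space: record the old model's predictions on a set of memorable past inputs, then penalise the new model for drifting away from them. This is a minimal illustration (a model here is just a callable, and all names are hypothetical), not the paper's exact objective.

```python
def functional_penalty(model, old_predictions, memorable_inputs, strength=1.0):
    """Mean squared difference between the current model's outputs and
    the outputs the previous model produced on memorable past inputs.
    Added to the new-task loss, this discourages the *function* (not
    just the weights) from changing on those points."""
    penalty = 0.0
    for x, y_old in zip(memorable_inputs, old_predictions):
        y_new = model(x)
        penalty += (y_new - y_old) ** 2
    return strength * penalty / max(len(memorable_inputs), 1)
```

If the new model still reproduces the old predictions exactly, the penalty is zero; the more its outputs drift on the memorable points, the larger the term added to the training loss.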

What are the benefits of using Functional Regularisation in Deep Learning?

There are many benefits to using Functional Regularisation in Deep Learning. One of the key benefits is that it can help reduce overfitting. Overfitting is a major issue in Deep Learning, and can occur when the model gets too caught up in memorising the training data, and fails to generalise well to new data. Functional Regularisation can help prevent this by encouraging the model to focus on the most important features of the data, and to not get bogged down in memorising unnecessary details. This can lead to improved performance on unseen data, as well as improved robustness and generalisation. Additionally, Functional Regularisation can also help improve the interpretability of Deep Learning models, by providing more insights into what features the model is learning and how it is making predictions.

How does Functional Regularisation help to improve Deep Learning?

When a model is trained on a dataset, the aim is to generalise from the data, i.e. to learn something about the wider world the data is drawn from. In order to do this, the model needs to identify and learn the underlying patterns and structure in the data. This can be thought of as learning a function that maps input data (e.g. images) to output labels (e.g. classifications).

However, when we train deep learning models on large datasets, they often tend to overfit, i.e. they learn patterns that are specific to the training data but don't generalise well to unseen data. This is because deep learning models have a lot of capacity: they can potentially represent very complex functions. If we allow them to, they will simply fit their weights to reproduce the training labels exactly, without learning any useful underlying patterns.

Functional regularisation is one way of dealing with this issue of overfitting in deep learning models. It works by encouraging the model to find functions that are simple and easy to understand, even if that means sacrificing some accuracy on the training data. The hope is that by finding these simpler functions, the model will be able to generalise better to new data.

There are many different ways of implementing this kind of regularisation, and one simple, related idea is early stopping. This involves halting training once the error on a validation set starts to increase (i.e. before overfitting sets in), and then using the weights from that point as the final model.
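The early-stopping loop just described can be sketched as follows. `train_step` and `validate` stand in for whatever training and validation routines a project uses; `patience` (how many non-improving epochs to tolerate) is a common convention, not something prescribed by the source.

```python
def train_with_early_stopping(train_step, validate, max_epochs=100, patience=3):
    """Run training epochs while tracking validation error; stop once
    the error has failed to improve for `patience` epochs in a row,
    and report the epoch whose weights we would keep."""
    best_err = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step(epoch)
        err = validate(epoch)
        if err < best_err:
            best_err, best_epoch = err, epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # validation error is rising: stop before overfitting worsens
    return best_epoch, best_err
```

In practice the model's weights would be checkpointed at each new best epoch, so the returned `best_epoch` identifies which checkpoint to restore.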

What are the challenges associated with Functional Regularisation?

There are several challenges associated with functional regularisation, including the risk of over-constraining the model, the need for adequate data, and the general difficulty of training deep neural networks. However, these challenges can be overcome with careful design and implementation.

How is Functional Regularisation applied in Deep Learning architectures?

In order to understand how functional regularisation can be used to improve deep learning, it is first necessary to understand what deep learning is and how it works. Deep learning is a subset of machine learning that deals with algorithms inspired by the structure and function of the brain. These algorithms are used to learn from data in a way that is similar to the way humans learn.

Deep learning achieves this by using a number of different layers, each of which extracts different features from the data. The first layer might extract simple features such as edges and curves, while later layers extract more complex features such as shapes and objects. The final layer combines the learned features in order to make predictions.
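The layered feature extraction described above amounts to composing simple transformations. Here is a bare-bones forward pass in plain Python (a real implementation would use a tensor library; the function names are illustrative).

```python
def relu(x):
    # Elementwise non-linearity: negative values are clamped to zero.
    return [max(0.0, v) for v in x]

def linear(weights, bias, x):
    """One dense layer: matrix-vector product plus bias."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def forward(layers, x):
    """Pass the input through each layer in turn; early layers compute
    simple features, later layers combine them into predictions.
    (For simplicity every layer here is followed by a ReLU.)"""
    for weights, bias in layers:
        x = relu(linear(weights, bias, x))
    return x
```

Each `(weights, bias)` pair is one layer; stacking more pairs in `layers` deepens the network without changing the loop.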

Functional regularisation is a technique that can be used to improve the performance of deep learning algorithms. It works by penalising the network for forgetting previously learned information. This encourages the network to retain information that it has already learned, which leads to improved performance on tasks that require the recall of previously learned information.

Functional regularisation can be used in conjunction with a number of different deep learning architectures, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs). It has been shown to improve the performance of CNNs on image classification tasks, and RNNs on language modelling tasks. In addition, functional regularisation has been shown to improve the performance of neural networks on a range of other tasks, including object detection and video recognition.


We have seen in this article how a deep learning algorithm can be regularised by constraining it to remember how it behaved on memorable aspects of its training data. This can improve the algorithm's performance on a held-out test set, and also make it more robust to changes in the distribution of the data. The technique is particularly effective when the data is high-dimensional and a large amount of training data is available.


