Below is a summary of the lecture topics and links to the lecture slides. I will try to make all slides available before each lecture begins. The order of the lecture topics may change (this is more likely for the later lectures), but the topics of Lectures 1-5 are fairly set.
State of AI Report 2020 by Nathan Benaich and Ian Hogarth, available at https://www.stateof.ai/. The document gives an overview of the current state of AI. I mention it in the lecture.
Lecture 2
Title: Learning Linear Binary & Linear Multi-class Classifiers from Labelled Training Data
The suggested readings from chapter 5 should be familiar to those who have already taken courses in ML. Lecture 2 should be more-or-less self-contained. But the reading material should flesh out some of the concepts referred to in the lecture.
I'm going to go into very explicit detail about the back-propagation algorithm. It was not my original intention to have such an involved description, but condensing the explanation made things less clear. My hope, though, is that everybody will have a good understanding of the theory and the mechanics of the algorithm after this lecture. I go into more specific (though less general) detail than the deep learning book. So my recommendation is that you read my lecture notes to get a good understanding of the concrete example(s) I explain, and then read the deep learning book for a broader description. Note that Section 6.5 also assumes you know about networks with more than 1 layer! So it may be better to hold off reading it until after lecture 4 (where I will talk about k-layer networks, activation functions, etc.).
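To make the flavour of the lecture concrete, here is a minimal NumPy sketch of the forward pass and the analytic gradients for a linear multi-class classifier trained with softmax + cross-entropy and an L2 penalty. The function names, variable shapes and the regularisation parameter lam are my own choices for illustration, not the lecture's notation.

```python
import numpy as np

def softmax(s):
    # Column-wise softmax; subtract the max for numerical stability.
    e = np.exp(s - s.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def forward(X, W, b):
    # X: (d, n) data matrix, W: (K, d) weights, b: (K, 1) biases.
    return softmax(W @ X + b)            # P: (K, n) class probabilities

def backward(X, Y, P, W, lam):
    # Y: (K, n) one-hot labels. Gradients of the cross-entropy loss
    # plus the L2 penalty lam * ||W||^2, averaged over the n samples.
    n = X.shape[1]
    G = P - Y                            # dL/ds for softmax + cross-entropy
    grad_W = G @ X.T / n + 2 * lam * W
    grad_b = G.sum(axis=1, keepdims=True) / n
    return grad_W, grad_b
```

A numerical gradient check (perturb each parameter by a small h and compare the finite-difference estimate with the analytic gradient) is the standard sanity test before trusting gradients like these.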
Lecture 4
Title: k-layer Neural Networks
Date & Time: Monday, March 29, 13:00-15:00
More details and material
Topics covered:
k-layer Neural Networks.
Activation functions.
Backprop for k-layer neural networks.
Problem of vanishing and exploding gradients.
Importance of careful initialization of network's weight parameters.
Batch normalisation + backprop with batch normalisation.
Section 8.7.1 of the deep learning book has a more nuanced description of the benefits of batch normalisation and why it works.
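For reference, a minimal NumPy sketch of the training-time batch-normalisation transform applied to one layer's pre-activations is given below. The variable names and the epsilon value are assumptions, not taken from the lecture notes.

```python
import numpy as np

def batch_norm_forward(S, gamma, beta, eps=1e-8):
    # S: (m, n) pre-activations for one layer, n samples in the batch.
    # gamma, beta: (m, 1) learned scale and shift parameters.
    mu = S.mean(axis=1, keepdims=True)       # per-feature batch mean
    var = S.var(axis=1, keepdims=True)       # per-feature batch variance
    S_hat = (S - mu) / np.sqrt(var + eps)    # normalised pre-activations
    return gamma * S_hat + beta, (S_hat, mu, var)
```

At test time the batch statistics are replaced by running averages collected during training, and the backward pass must propagate gradients through mu and var as well as through S_hat (the "backprop with batch normalisation" item above).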
Interesting extra reading material:
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks by Mingxing Tan and Quoc V. Le, published at ICML 2019, available at https://arxiv.org/pdf/1905.11946.pdf. The paper discusses how to increase the size of your network w.r.t. width, depth and resolution to improve performance. Or to quote the abstract: In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance.
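As a small illustration of the paper's compound-scaling idea: depth, width and resolution are grown jointly by a single coefficient phi. The sketch below uses the alpha, beta, gamma constants reported for the EfficientNet-B0 baseline; treat the constants, the baseline numbers and the rounding as illustrative rather than a faithful re-implementation.

```python
def compound_scale(phi, base_depth, base_width, base_resolution,
                   alpha=1.2, beta=1.1, gamma=1.15):
    # Scale network depth, width and input resolution together by phi,
    # following the compound-scaling rule d = alpha^phi, w = beta^phi,
    # r = gamma^phi from the EfficientNet paper.
    return (round(base_depth * alpha ** phi),
            round(base_width * beta ** phi),
            round(base_resolution * gamma ** phi))

# E.g. phi = 1 grows a (depth=18, width=64, resolution=224) baseline
# to roughly (22, 70, 258).
print(compound_scale(1, 18, 64, 224))
```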
Lecture 5
Title: Training & Regularization of Neural Networks
Sections 9.1 and 9.2 (motivate the benefit of convolutional layers vs. fully connected layers) and 9.10 (if you are interested in the neuro-scientific basis for ConvNets). Section 9.3 discusses the pooling operation.
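To make the pooling operation from Section 9.3 concrete, here is a minimal NumPy sketch of 2x2 max pooling with stride 2 on a single-channel feature map; the shapes and helper name are my own, not the book's.

```python
import numpy as np

def max_pool_2x2(X):
    # X: (H, W) single-channel feature map with even H and W.
    H, W = X.shape
    # Group the map into non-overlapping 2x2 blocks and take the max of each.
    return X.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

X = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(X))   # 2x2 output: the max of each 2x2 block
```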
Clip from lecture 21 of Robert Sapolsky's course on behavioural biology:
Lecture 7
Title: Training & Designing ConvNets
Date & Time: Tuesday, April 13, 08:00-10:00
More details and material
Topics covered:
Review of the modern top-performing deep ConvNets - AlexNet, VGGNet, GoogLeNet, ResNet.
Practicalities of training deep neural networks - data augmentation, transfer learning and stacking convolutional filters.
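On the "stacking convolutional filters" point: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution, but with fewer parameters and an extra non-linearity in between. A back-of-the-envelope comparison, assuming C input and C output channels and ignoring biases, is sketched below; the helper is purely illustrative.

```python
def conv_params(kernel, channels):
    # Parameters of one conv layer with `channels` input and output
    # channels, a kernel x kernel filter, and no bias term.
    return kernel * kernel * channels * channels

C = 64
stacked = 2 * conv_params(3, C)   # two 3x3 layers: 2 * 9 * C^2 = 73728
single = conv_params(5, C)        # one 5x5 layer:     25 * C^2 = 102400
print(stacked, single)            # the stacked pair uses ~28% fewer weights
```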