Prerequisite courses: Probability; Data Structures and Algorithms

Learning Objectives

Deep learning covers the theory and practice of a large family of highly effective techniques in machine learning today. The objective of the course is to familiarize students with this area and to equip them with the knowledge needed to apply these techniques to real-world problems.

Learning Outcomes

At the end of the course, students should be able to precisely state the classical algorithms, models, and theories in the area; identify an appropriate algorithm for a given practical task; and implement and solve such tasks using deep learning techniques.

Course content

  1. Revision of linear algebra and probability. Machine learning basics. [2 weeks]
  2. Perceptron, neural networks, deep feedforward networks. Optimization techniques for deep networks: backpropagation, gradient descent, sampling techniques. Regularization, dropout. [3 weeks]
  3. Case studies using TensorFlow and PyTorch (spread across the semester). [2 weeks]
  4. Convolutional networks with application in computer vision. [2 weeks]
  5. Recurrent networks, Long Short-Term Memory (LSTM) networks with application in natural language processing. [2 weeks]
  6. Autoencoders, variational autoencoders with application in representation learning. [1 week]
  7. Generative adversarial networks. [1 week]
  8. Bayesian deep learning. [1 week]
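As a taste of topic 2, the perceptron and its learning rule can be sketched in a few lines of plain Python. This is an illustrative toy (function names and the AND-gate data are our own choices, not course material): it trains a single perceptron with the classic update w ← w + lr·(t − y)·x on a linearly separable problem, where convergence is guaranteed.

```python
# Toy perceptron sketch for topic 2 (illustrative only, pure Python).

def step(z):
    """Heaviside step activation: fires 1 when the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def train_perceptron(samples, targets, lr=0.1, epochs=20):
    """Train weights and bias with the perceptron rule: w <- w + lr*(t - y)*x."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = t - y                      # 0 when the prediction is correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND gate: linearly separable, so the perceptron converges.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]
w, b = train_perceptron(X, T)
preds = [step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in X]
print(preds)  # -> [0, 0, 0, 1]
```

The same model fails on XOR, which is not linearly separable; overcoming this limitation is exactly what motivates the multi-layer networks and backpropagation covered in the rest of topic 2.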

Textbooks

  1. Deep Learning. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. MIT Press, 2016. ISBN-13: 978-0262035613.


Reference books
  1. Pattern Recognition and Machine Learning. Christopher Bishop. Springer, 2006. ISBN-13: 978-0-387-31073-2.
  2. Deep Learning with Python. Francois Chollet. Manning Publications, 1st edition. ISBN-13: 978-1617294433.
  3. Hands-On Machine Learning with Scikit-Learn and TensorFlow. Aurélien Géron. O’Reilly Media, 1st edition. ISBN-13: 978-1491962299.