# Graduate Course "CE7454: Deep Learning for Data Science"

NTU, Singapore

Semester 1, 2019/20

Xavier Bresson

Introduction

Artificial intelligence (AI) plays a central role in the 4th industrial revolution. An important part of AI is extracting meaningful knowledge from data to solve real-world problems. In the last few years, artificial neural networks, re-born as deep learning, have outperformed all other techniques in competitions to analyze large-scale datasets, a.k.a. big data, and to solve complex tasks as well as humans do.

Deep learning technology has enabled the emergence of self-driving vehicles and, soon, autonomous robots. It also powered the program that defeated the world champions of Go for the first time in history. Deep learning has significantly improved speech recognition, which is about to reach human-level performance. Machine translation has also been significantly enhanced by deep learning, and chatbots are now powerful systems that can understand questions and offer meaningful answers. In healthcare, this technology offers great potential to advance our understanding of neuro-degenerative brain diseases like Alzheimer's and to produce new patient-personalized drugs.

Major IT companies like Google, Facebook, Amazon, and Apple have started their transition to become AI companies. Soon, most of their products will be powered by deep learning techniques.

Overview

Deep learning is a fascinating paradox. On the one hand, these techniques can solve very complex tasks by extracting highly abstract concepts from data. On the other hand, deep learning algorithms do not require a highly technical background and are accessible to virtually anyone. In other words, the mathematics behind deep learning is simple to master, and coding these algorithms is straightforward in Python.

The purpose of this course is to give participants a clear introduction to, an intuitive understanding of, and a smooth Python implementation of the most successful deep learning techniques. The teaching approach provides a good balance of theory and practice. The theory of deep neural networks relies on simple linear operations and basic gradient descent optimization. Practical exercises on deep learning applications focus on PyTorch, the library developed by Facebook AI. Each lecture presents the fundamental concepts and translates them into PyTorch implementations.
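To make this concrete, here is a minimal sketch of the two ingredients mentioned above, a simple linear operation trained with basic gradient descent, written in PyTorch as in the course exercises. The toy data and hyperparameters (learning rate, number of steps) are illustrative choices, not taken from the course material:

```python
import torch

# Toy data: y = 2x + 1 with a little noise (illustrative, not from the course)
torch.manual_seed(0)
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.05 * torch.randn_like(x)

# A single linear layer is the simplest "neural network"
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

# Basic gradient descent: compute the loss, backpropagate, update the weights
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

w = model.weight.item()  # should approach 2
b = model.bias.item()    # should approach 1
```

The same three-line training loop (zero gradients, backpropagate, step) reappears essentially unchanged in every PyTorch model, from this one-parameter example up to deep networks.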

Prerequisite

Basic knowledge of linear algebra (e.g. matrix multiplication) and scripting (e.g. Python, Matlab, R) is needed. The coding will be done in Python. Participants must bring their laptops to run the Python notebooks.
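As a quick self-check of the expected level, the prerequisite amounts to being comfortable with operations like the following matrix multiplication in Python (shown here with NumPy, a common choice; the specific matrices are just an example):

```python
import numpy as np

# Two 2x2 matrices (arbitrary example values)
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Matrix product: C[i, j] = sum_k A[i, k] * B[k, j]
C = A @ B
# C is [[19, 22], [43, 50]]
```

If this snippet and the formula in the comment feel familiar, the linear algebra and scripting background for the course is in place.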

Coding exercises

All Python code is available in the course's GitHub folder, here.

Lecture Slides

Lecture 1 - Introduction to Deep Learning

Material: Slides

Lecture 2 - Linear algebra review

Material: Slides

Optional Lecture - Calculus Review

Material: Slides

Lecture 4 - Vanilla Neural Networks - Loss and Optimization

Material: Slides

Lecture 5 - Multi-Layer Perceptron - Inference and Learning

Material: Slides

Lecture 6 - Multi-Layer Perceptron - Depth

Material: Slides

Lecture 7 - Convolutional Neural Networks - Introduction

Material: Slides

Lecture 8 - Convolutional Neural Networks - Implementation

Material: Slides

Optional Lecture - Visualizing Convolutional Neural Networks

Material: Slides

Lecture 9 - Recurrent Neural Networks - Introduction

Material: Slides

Lecture 10 - Recurrent Neural Networks - Implementation

Material: Slides

Lecture 11 - Recurrent Neural Networks - Applications

Material: Slides

Lecture 12 - Deep Learning Project

Material: Slides

Lecture 13 - Attention Neural Networks

Material: Slides

Lecture 14 - Graph Neural Networks

Material: Slides

Optional Lecture - Introduction to Graph Science

Material: Slides

Lecture 15 - Deep Reinforcement Learning

Material: Slides

Lecture 16 - Good Practices

Material: Slides

Setting Up GPU Cloud Computing

Material: Slides