Introduction to Deep Learning and Self-Driving Cars from MIT

Whether you are a beginner in machine learning or an advanced researcher working on deep learning methods and their applications, anyone can benefit from Lex Fridman’s course on Deep Learning for Self-Driving Cars.

If you are interested in this course, you can register an account on the course site to stay up-to-date. The material for the course is free and open to the public.

In order to reach more people interested in deep learning, machine learning, and artificial intelligence, I would like to share this course material on my website.

What you will learn from this course:

* An overview of deep learning methods: Deep Reinforcement Learning, Convolutional Neural Networks, Recurrent Neural Networks

* How deep learning can help improve each component of autonomous driving: perception, localization, mapping, control, planning, driver state

Artificial intelligence helps us with:

Formal tasks: Playing board games, card games. Solving puzzles, mathematical and logic problems.

Expert tasks: Medical diagnosis, engineering, scheduling, computer hardware design.

Mundane tasks: Everyday speech, written language, perception, walking, object manipulation.

Is driving closer to chess or to everyday conversation?


Self-Driving Car Tasks

Localization and Mapping: Where am I?

Scene Understanding: Where is everyone else?

Movement Planning: How do I get from A to B?

Driver State: What’s the driver up to?

How Hard is it to Pass the Turing Test?

1. Natural language processing to enable it to communicate successfully

2. Knowledge representation to store information provided before or during the interrogation

3. Automated reasoning to use the stored information to answer questions and to draw new conclusions

Human brains have roughly 10,000 times the computational power of current computer “brains.”

Current Drawbacks

* Lacks reasoning: humans only need simple instructions: “You’re in control of a paddle and you can move it up and down, and your task is to bounce the ball past the other player controlled by AI.”

* Requires big data: inefficient at learning from data.

* Requires supervised data: costly to annotate real-world data.

* Network structure must be selected manually.

* Needs hyperparameter tuning for training: learning rate, loss function, mini-batch size, number of training iterations, momentum (gradient update smoothing), optimizer selection.

* Defining a good reward function is difficult.
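As a rough illustration of where those training hyperparameters show up, here is a minimal NumPy sketch of mini-batch SGD with momentum on a toy linear-regression problem. All values and variable names are illustrative, not recommendations from the course.

```python
import numpy as np

# Illustrative hyperparameter choices (placeholders, not recommendations)
learning_rate = 0.05
momentum = 0.9           # gradient update smoothing
batch_size = 4           # mini-batch size
num_iterations = 1000    # number of training iterations

# Toy dataset: targets are an exact linear function of the inputs
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
velocity = np.zeros(3)

for step in range(num_iterations):
    # Sample a mini-batch
    idx = rng.integers(0, len(X), size=batch_size)
    xb, yb = X[idx], y[idx]
    # Loss function: mean squared error; this is its gradient w.r.t. w
    grad = 2.0 * xb.T @ (xb @ w - yb) / batch_size
    # Momentum smooths successive gradient updates
    velocity = momentum * velocity - learning_rate * grad
    w += velocity

mse = float(np.mean((X @ w - y) ** 2))
```

Changing any single knob (for example, raising the learning rate to 1.0) can make the same loop diverge, which is exactly the tuning burden the slide refers to.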

Lex Fridman is a postdoc at MIT.  His research interests are in developing and applying deep neural networks in the context of driver state sensing, scene perception, and shared control of semi-autonomous vehicles.

You can find the slides here.

Useful Deep Learning Terms

Basic terms:
Deep Learning = Neural Networks
Deep Learning is a subset of Machine Learning

Terms for neural networks: 
MLP: Multilayer Perceptron
DNN: Deep neural networks
RNN: Recurrent neural networks
LSTM: Long Short-Term Memory
CNN or ConvNet: Convolutional neural networks
DBN: Deep Belief Networks
Neural network operations: Convolution, Pooling, Activation function, Backpropagation
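To make those operations concrete, here is a minimal NumPy sketch of a single convolution → activation → pooling pass. The input and kernel are toy values; backpropagation, which computes gradients for these same operations via the chain rule, is omitted for brevity.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most deep learning libraries)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Activation function: rectified linear unit."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 4x4 "image" and a simple horizontal-edge kernel
image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[-1.0, 1.0]])
features = max_pool(relu(conv2d(image, edge_kernel)))
```

In a real convolutional network these three steps are stacked many times, with the kernels learned by backpropagation rather than hand-picked.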


Walking is Hard. How Hard is Driving?

What’s Next for Deep Learning? (5-Year Vision)

Ilya Sutskever, Research Director of OpenAI: Deeper models, models that need fewer examples for training. 
Christian Szegedy, Senior Research Scientist at Google: Become so efficient that they will be able to run on cheap mobile devices. 
Pieter Abbeel, Associate Professor in Computer Science at UC Berkeley: Significant advances in deep unsupervised learning and deep reinforcement learning. 
Ian Goodfellow, Senior Research Scientist at Google: Neural networks that can summarize what happens in a video clip and generate short videos; neural networks that model the behavior of genes, drugs, and proteins, and can then be used to design new medicines.
Koray Kavukcuoglu & Alex Graves, Research Scientists at Google DeepMind: An increase in multimodal learning, and a stronger focus on learning that persists beyond individual datasets. 
Charlie Tang, Machine Learning group, University of Toronto: Deep learning algorithms ported to commercial products, much like how the face detector was incorporated into consumer cameras in the past 10 years.
