
Deep Learning and Artificial Intelligence
Enrollment in this course is by invitation only

This course uses real examples and projects to show how Deep Learning can be used to develop Artificial Intelligence applications such as image, text, and speech processing, reinforcement learning, anomaly detection, and generative models.

Course Description

This course is a continuation of the more traditional subject of Machine Learning. It covers recently developed methods based on Deep Neural Networks. Using real examples, we show how Deep Learning can be used to develop Artificial Intelligence applications such as image, text, and speech processing, reinforcement learning, anomaly detection, and generative models.

Key features of this course are the lecturer's fundamental academic background combined with long teaching experience in multiple subjects related to data analysis and to solving real-life problems. On the one hand, the course presents deep conceptual content in an accessible form with a large number of interactive tools and examples; on the other hand, it includes a large number of workshops and assignment projects that help students acquire hands-on experience.

Course Contents

Important Note: Changes may occur to the syllabus at the instructor's discretion. When changes are made, students will be notified via email and in-class announcement.

Week 1

  • Deep learning principles
  • TensorFlow and Keras
  • Convolutional neural networks: principles and applications
  • Project: Satellite Image Segmentation
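
As a point of reference, below is a minimal sketch of the kind of convolutional network built with Keras in Week 1. It is a generic image classifier rather than the satellite segmentation project itself; the input shape, layer sizes, and number of classes are assumptions made only for illustration.

    # Minimal Keras CNN sketch: the input shape (128x128 RGB) and the 10 output
    # classes are placeholder assumptions, not the Week 1 project settings.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Conv2D(32, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(10, activation='softmax'),  # one probability per class
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()
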
Week 2

  • Reinforcement learning: History and problem statement.
  • Markov Decision Process, Dynamic Programming, Bellman Equations. Examples and applications
  • Project: Training Agent to play Pacman using Reinforcement Learning
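
To make the Bellman equation concrete, here is a toy value-iteration loop for a two-state, two-action Markov Decision Process. The transition table is invented purely for illustration and is not course material.

    # Toy value iteration: V(s) <- max_a sum_{s'} P(s'|s,a) * (r + gamma * V(s')).
    # The MDP below (2 states, 2 actions) is a made-up example.
    gamma = 0.9
    # P[s][a] is a list of (probability, next_state, reward) triples
    P = {
        0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
        1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
    }
    V = {s: 0.0 for s in P}
    for _ in range(100):
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                    for a in P[s])
             for s in P}
    print(V)  # converges to approximately {0: 19.0, 1: 20.0}
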
Week 3

  • Reinforcement learning: Incomplete Markov Process, Method of Temporal Differences and Q-learning.
  • Application to Yield Management.
  • Gym, a toolkit for developing and comparing reinforcement learning algorithms: review and examples.
  • Mini-Pacman environment.
  • Project: Training Agent to play Pacman using Reinforcement Learning
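
The temporal-difference update behind Q-learning can be sketched in a few lines. The example below uses FrozenLake from Gym as a stand-in for the course's mini-Pacman environment, is written against the classic Gym API (newer Gym/Gymnasium releases return an extra value from reset() and split done into terminated/truncated), and uses made-up hyperparameters.

    # Tabular Q-learning: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    import random
    import gym
    import numpy as np

    env = gym.make("FrozenLake-v1")          # stand-in environment, not mini-Pacman
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    alpha, gamma, epsilon = 0.1, 0.99, 0.1   # illustrative hyperparameters

    for episode in range(5000):
        state = env.reset()                  # classic Gym API: returns the observation only
        done = False
        while not done:
            # epsilon-greedy exploration
            if random.random() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done, info = env.step(action)
            # temporal-difference update
            Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
            state = next_state
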
Week 4

  • Reinforcement learning: Environments with very large or infinite dimensions of state-space, partially observed environments.
  • DeepMind's Deep Q-Learning (DQN) method; application of the DQN method to the Gym environment for Ms. Pacman.
  • Project: Training Agent to play Pacman using Reinforcement Learning
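
In the DQN method, the Q-function is represented by a neural network that maps stacked game frames to one value per action. The sketch below shows only that network (loosely following the architecture of the original DeepMind DQN paper); the frame size, stack depth, and action count are assumptions, and the replay buffer, target network, and training loop are omitted.

    # DQN-style Q-network sketch: stacked 84x84 grayscale frames in, Q-values out.
    from tensorflow.keras import layers, models, optimizers

    n_actions = 9  # discrete actions for Ms. Pacman in Gym's Atari environment
                   # (check env.action_space.n for the environment actually used)
    q_net = models.Sequential([
        layers.Input(shape=(84, 84, 4)),
        layers.Conv2D(32, 8, strides=4, activation='relu'),
        layers.Conv2D(64, 4, strides=2, activation='relu'),
        layers.Conv2D(64, 3, strides=1, activation='relu'),
        layers.Flatten(),
        layers.Dense(512, activation='relu'),
        layers.Dense(n_actions),             # one Q-value per action
    ])
    q_net.compile(optimizer=optimizers.Adam(1e-4), loss='mse')
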
Week 5

  • Speech Recognition: Introduction to analysis of audio data.
  • Review of physics and physiology of verbal communication: speech chain, propagation of sound.
  • Basics of working with sound waves: Spectral Analysis, Fourier Transform, Discrete Fourier Transform, Fast Fourier Transform.
  • Project: Speech Recognition
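
A first step in audio analysis is looking at a signal's spectrum. The snippet below generates a synthetic 440 Hz tone and recovers its dominant frequency with NumPy's FFT; the sampling rate and tone are chosen only for illustration.

    # Discrete Fourier transform of a synthetic tone via the fast Fourier transform.
    import numpy as np

    sr = 16000                               # sampling rate, Hz
    t = np.arange(0, 1.0, 1.0 / sr)          # one second of samples
    signal = np.sin(2 * np.pi * 440 * t)     # pure 440 Hz tone

    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    peak = freqs[np.argmax(np.abs(spectrum))]
    print(f"dominant frequency: {peak:.1f} Hz")   # ~440 Hz
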
Week 6

  • Speech Recognition: Nonlinearities of sound perception, Mel scaling, Filterbank aggregation.
  • Cepstrum and its application to the analysis of echo delay.
  • Extraction of speech features: Mel-frequency cepstral coefficients (MFCC) computed from the Mel filterbank.
  • Recognition of words in a given vocabulary: Hidden Markov Model.
  • In-class project: coding and analyzing music.
  • Project: Speech Recognition
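
MFCC extraction is available off the shelf; the sketch below uses the librosa library (an assumption, since the syllabus does not name a specific package here), and the file name is a placeholder rather than course data.

    # MFCC feature extraction with librosa; 'speech.wav' is a placeholder file name.
    import librosa

    y, sr = librosa.load("speech.wav", sr=None)         # waveform and native sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 cepstral coefficients per frame
    print(mfcc.shape)                                   # (13, number_of_frames)
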
Week 7

  • Anomaly Detection with Autoencoders: Review of Autoencoders and their applications.
  • General architecture: shallow and stacked autoencoders.
  • Types of autoencoders: undercomplete (bottleneck), sparse, and denoising autoencoders
  • Project: Network Intrusion Detector using Anomaly Detection with Deep Autoencoders
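
A minimal bottleneck autoencoder in Keras looks like the sketch below; the 784-dimensional input and 32-unit code are illustrative assumptions, not the intrusion-detection project's settings. For anomaly detection, the model is trained on normal data only, and inputs with unusually large reconstruction error are flagged.

    # Undercomplete (bottleneck) autoencoder sketch with illustrative sizes.
    from tensorflow.keras import layers, models

    input_dim, code_dim = 784, 32
    autoencoder = models.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(128, activation='relu'),
        layers.Dense(code_dim, activation='relu'),      # bottleneck code
        layers.Dense(128, activation='relu'),
        layers.Dense(input_dim, activation='sigmoid'),  # reconstruction
    ])
    autoencoder.compile(optimizer='adam', loss='mse')
    # Anomaly score for samples x: mean squared reconstruction error, e.g.
    # np.mean((autoencoder.predict(x) - x) ** 2, axis=1)
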
Week 8

  • Applications of Autoencoders: KL Divergence and its use in Sparse Autoencoders; Variational Autoencoders as Generative Models
  • Credit Card Fraud Detection with Autoencoders.
  • Project: Network Intrusion Detector using Anomaly Detection with Deep Autoencoders
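
The KL divergence that appears in sparse and variational autoencoders can be illustrated on two small discrete distributions; the numbers below are toy values.

    # D_KL(p || q) = sum_i p_i * log(p_i / q_i), computed for toy distributions.
    import numpy as np

    p = np.array([0.1, 0.4, 0.5])
    q = np.array([0.2, 0.3, 0.5])
    kl = np.sum(p * np.log(p / q))
    print(kl)   # small positive number; zero only when p == q
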
Week 9

  • Pacman Competition between mixed Human-Machine Teams
  • Fitting and tuning models; using models for prediction
Week 10

  • Analysis of Natural Language with Recurrent Neural Networks
  • Review of RNNs, mechanics of LSTM layer.
  • Extraction of features from text data
  • Analysis of Wine Reviews.
  • Project: Detecting Toxic Comments in Online Conversations using Recurrent Neural Networks
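
A recurrent text classifier of the kind used for toxic-comment detection can be sketched as follows; the vocabulary size, embedding width, and LSTM size are assumptions, and the tokenization step is omitted.

    # LSTM text classifier sketch: integer-encoded token sequences in,
    # probability of the 'toxic' label out.
    from tensorflow.keras import layers, models

    vocab_size = 20000                       # assumed vocabulary size
    model = models.Sequential([
        layers.Embedding(vocab_size, 128),   # token ids -> dense vectors
        layers.LSTM(64),                     # sequence -> fixed-size representation
        layers.Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
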
Requirements
  • Knowledge of Machine Learning techniques
  • Familiarity with Python
Recommended Books

Hands-On Machine Learning with Scikit-Learn and TensorFlow, by Aurélien Géron, 2017, O’Reilly Media, Inc.

Software and Hardware

This course is taught using Python (https://www.python.org/). It is recommended that students bring a laptop with Python installed to all sessions.

Course Staff

Yuri Balasanov

Yuri Balasanov has been a faculty member at the University of Chicago since 1997. He teaches in the Graduate Program on Financial Mathematics (MSFM) and the Graduate Program on Analytics (MScA). He is also the founder and President of Research Software International, Inc. (since 1991) and of iLykei Teaching Tech Corp (since 2015). Dr. Balasanov earned his Master’s degree in Applied Mathematics and his Ph.D. in Probability Theory and Mathematical Statistics from Lomonosov Moscow State University, Russia, where he studied under Andrey Kolmogorov and leading members of his school. His primary expertise and research interests are in stochastic modeling and advanced data analysis, with applications in fields including trading, risk management, finance and economics, business analytics, marketing, biology, and medical studies. Dr. Balasanov has been a financial industry practitioner for more than 20 years, working at leading financial institutions as a head quant, quantitative trader, and risk manager.

Effort Required

6-8 hours per week
