Deep Learning

Level:

Intermediate

Duration:

5 days

Dates & Duration

See the available dates for this course!
If no date and time is currently listed for the training you are interested in, please contact us.

See dates  >>

 

This course takes 5 days, with 8 hours per day on site or 4 hours per day online.
The only difference is that the online format includes fewer practical exercises during class time. We will still provide all of the exercises to you, so you can work through them on your own and ask our consultants for feedback or help if needed.

Course Description

This course is intended for developers who are considering a transition to Data Science, and for technical analysts who want to take their first steps in advanced Machine Learning.

The course combines theoretical explanations with hands-on activities. The main goal is to introduce participants to complex Deep Learning algorithms and to help them implement state-of-the-art solutions in different scenarios, guided by the instructor.

Audience

Analysts and developers with at least 1 year of programming experience (ideally in Python) and intermediate Statistics/Analytics knowledge.

Available Languages

This course can be held in English and Spanish.

Course Outline

Day 1

Course Introduction:

  • Presentation
  • Outline
  • Course Dynamic

What is Deep Learning?

  • Introduction and main concepts
  • History recap
  • Machine Learning vs Deep Learning
  • Applications

Bias-variance trade-off

  • Underfitting
  • Overfitting
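
As a tiny NumPy illustration of the trade-off (the quadratic example and the chosen degrees are illustrative, not a course exercise), fitting polynomials of different degree to noisy data shows how a model that is too simple underfits while one that is too flexible chases the noise:

    import numpy as np

    # Noisy quadratic training data plus clean held-out points from the same curve.
    rng = np.random.default_rng(0)
    x_train = np.linspace(-1, 1, 30)
    y_train = x_train**2 + 0.1 * rng.normal(size=x_train.size)
    x_test = np.linspace(-1, 1, 100)
    y_test = x_test**2

    # A low degree is too simple (underfits); a very high degree can fit the noise (overfits).
    for degree in (1, 2, 15):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")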

Overview of main Python libraries to be used

  • NumPy
  • Pandas
  • TensorFlow + Keras
  • PyTorch

About Python/Jupyter

  • Possibilities to run scripts online
  • Google Colab
  • GPU vs CPU
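
For orientation, here is a minimal sketch of the kind of environment check discussed in this module, assuming TensorFlow and PyTorch are installed (as they are on Google Colab); it simply reports whether a GPU is visible to each library:

    import tensorflow as tf
    import torch

    # List GPUs visible to TensorFlow and check CUDA availability for PyTorch.
    print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
    print("PyTorch CUDA available:", torch.cuda.is_available())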

Perceptrons and Single Layer Network: Theory

  • 1-D Perceptron
  • Multidimensional input
  • Weights and biases in matrix notation

Perceptrons and Single Layer Network: Praxis

  • Classification Problem
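
As a rough preview of this exercise (the toy data and hyperparameters below are illustrative, not the exact classification problem used in class), a single perceptron can be trained with the classic perceptron rule in NumPy, using the weights-and-bias matrix notation from the theory block:

    import numpy as np

    # Toy 2-D data: two linearly separable clusters labelled 0 and 1.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    # Weights and bias in matrix notation: prediction = step(X @ w + b).
    w = np.zeros(2)
    b = 0.0
    lr = 0.1

    # Classic perceptron learning rule: update only when a sample is misclassified.
    for epoch in range(20):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)

    preds = (X @ w + b > 0).astype(int)
    print("Training accuracy:", (preds == y).mean())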

Day 2

Artificial Neural Networks

  • Connecting Perceptrons
  • Network Architectures
  • Forward Propagation and Batch Normalization

Gradient Descent and Backpropagation

  • Loss function
  • SGD vs ADAM
  • Dropout and Early Stopping for Regularization
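
A minimal Keras sketch tying these pieces together, with Adam as the optimizer, dropout layers and an early-stopping callback; the synthetic data and layer sizes are placeholders, not the course's exact setup:

    import numpy as np
    from tensorflow import keras

    # Synthetic binary-classification data, just so the model has something to fit.
    X = np.random.rand(1000, 20).astype("float32")
    y = (X.sum(axis=1) > 10).astype("float32")

    # Small fully connected network with dropout for regularization.
    model = keras.Sequential([
        keras.layers.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Adam here; swapping in keras.optimizers.SGD() makes the SGD-vs-Adam comparison concrete.
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Early stopping halts training once the validation loss stops improving.
    early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                               restore_best_weights=True)
    model.fit(X, y, validation_split=0.2, epochs=50, batch_size=32,
              callbacks=[early_stop], verbose=0)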

Data augmentation

  • Rotations
  • Translation
  • Noise
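
A small Keras sketch of such an augmentation pipeline (the factors and image shape are arbitrary examples):

    import tensorflow as tf
    from tensorflow import keras

    # Augmentation pipeline: random rotations, shifts and additive noise.
    augment = keras.Sequential([
        keras.layers.RandomRotation(0.1),          # rotations up to +/- 10% of a full turn
        keras.layers.RandomTranslation(0.1, 0.1),  # random height/width shifts
        keras.layers.GaussianNoise(0.05),          # additive noise, active only in training
    ])

    # Apply it to a dummy batch of eight 28x28 grayscale images.
    images = tf.random.uniform((8, 28, 28, 1))
    augmented = augment(images, training=True)
    print(augmented.shape)  # (8, 28, 28, 1)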

Convolutional Neural Networks: Theory

  • Convolution as an operator

Convolutional Neural Networks: Praxis

  • Using CNNs for handwritten digit recognition
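
A compact Keras version of this kind of exercise on the MNIST handwritten digit dataset; the exact architecture used in class may differ:

    from tensorflow import keras

    # Load MNIST digits and scale pixel values to [0, 1].
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train = x_train[..., None].astype("float32") / 255.0
    x_test = x_test[..., None].astype("float32") / 255.0

    # Two convolution/pooling blocks followed by a dense softmax classifier.
    model = keras.Sequential([
        keras.layers.Input(shape=(28, 28, 1)),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))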

Day 3

Reinforcement Learning

  • Supervised vs. Unsupervised vs. Reinforcement Learning
  • Agents, Environment, State, Actions, Rewards
  • Model Constraints
  • Markov Chains
  • Case Study: Traveling Salesman Problem

Deep Reinforcement Learning Algorithms

  • Policy Learning
  • Q-learning
  • Exploration vs. Exploitation

Deep Q-Learning: Theory

Deep Q-Learning: Praxis
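
To keep the idea concrete, here is a tabular Q-learning sketch on a toy corridor environment; in the deep variant covered in class, the Q-table is replaced by a neural network. The environment, rewards and hyperparameters are illustrative only:

    import numpy as np

    # Toy corridor: states 0..4, actions 0 = left / 1 = right, reward 1 for reaching state 4.
    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.9, 0.2
    rng = np.random.default_rng(0)

    for episode in range(500):
        state = int(rng.integers(0, 4))       # random start state ("exploring starts")
        while state != 4:
            # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))
            else:
                action = int(np.argmax(Q[state]))
            next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
            reward = 1.0 if next_state == 4 else 0.0
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print(Q)  # the learned values should come to favour "right" in every state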

 

Day 4

Natural Language Processing

  • Use cases
  • Lemmatisation
  • Tokenisation
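
A small spaCy illustration of both operations, assuming the en_core_web_sm model has been installed (python -m spacy download en_core_web_sm); the course itself may use a different NLP toolkit:

    import spacy

    # Tokenise and lemmatise a short sentence.
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The cats were sitting on the mats.")

    for token in doc:
        print(token.text, "->", token.lemma_)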

Tuning big Neural Networks

  • Learning Rate
  • Batch Size
  • Gradient checkpointing
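
A brief Keras snippet illustrating the first two knobs; the schedule values are arbitrary examples. Gradient checkpointing, which trades extra compute for lower memory use, is typically enabled through the framework itself (e.g. torch.utils.checkpoint in PyTorch).

    from tensorflow import keras

    # Explicit learning rate with an exponential decay schedule instead of the default constant.
    schedule = keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)
    optimizer = keras.optimizers.Adam(learning_rate=schedule)

    # The batch size is then set when training, e.g. model.fit(..., batch_size=64).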

Sentiment Analysis

  • Case study
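
As a quick taste of the topic (the in-class case study may instead train a model on a labelled dataset), a pretrained sentiment classifier from the Hugging Face transformers library can be used out of the box, assuming that package is installed:

    from transformers import pipeline

    # Downloads a default pretrained English sentiment model on first use.
    classifier = pipeline("sentiment-analysis")
    print(classifier("The instructor explained backpropagation really clearly."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]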

Text Generation: Theory

  • Transformer networks

Text Generation: Praxis

  • Training and using GPT-2
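
A minimal generation sketch with the Hugging Face transformers library, assuming it is installed; the training part of the exercise builds on the same library objects:

    from transformers import pipeline

    # Text generation with a pretrained GPT-2 model.
    generator = pipeline("text-generation", model="gpt2")
    result = generator("Deep learning is", max_new_tokens=25, num_return_sequences=1)
    print(result[0]["generated_text"])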

 

Day 5

Recurrent Neural Networks

  • Network Structure
  • Sequential Data
  • Time series
  • Vanishing / Exploding gradient problem

Long Short-Term Memory (LSTM)

  • Hidden states
  • Cell functioning
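
A compact Keras sketch covering both of these modules: an LSTM layer predicting the next value of a noisy sine wave from a short window. The series, window length and layer size are illustrative:

    import numpy as np
    from tensorflow import keras

    # Toy time series: a noisy sine wave; predict the next value from the previous 20 steps.
    rng = np.random.default_rng(0)
    t = np.arange(0, 200, 0.1)
    series = np.sin(t) + 0.1 * rng.normal(size=t.size)
    window = 20
    X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
    y = series[window:]

    # The LSTM's hidden and cell states carry information along the sequence,
    # which mitigates the vanishing/exploding gradient problem of plain RNNs.
    model = keras.Sequential([
        keras.layers.Input(shape=(window, 1)),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)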

Course wrap-up: questions, suggestions, etc.

 

Contact us to sign up for this course >>