Enterprise ML - Why putting your model in production takes longer than building it
A Gentle Guide to Enterprise ML, in Plain English

Batch Norm Explained Visually - Why does it work?
In Neural, tutorial, May 27, 2021

Transformers Explained Visually - Multi-head Attention, deep dive
A Gentle Guide to the inner workings of Self-Attention, Encoder-Decoder Attention, Attention Score and Masking, in Plain English.

All Stories
Enterprise ML - Why putting your model in production takes longer than building it
A Gentle Guide to Enterprise ML, in Plain English
In Neural, tutorial, Jun 28, 2021

Enterprise ML - Why building and training a "real-world" model is hard
A Gentle Guide to the lifecycle of a Machine Learning project in the Enterprise, the roles involved and the challenges of building models, in Plain English
In Enterprise, Jun 16, 2021

Transformers Explained Visually - Not just how, but Why they work so well
A Gentle Guide to how the Attention Score calculations capture relationships between words in a sequence, in Plain English.
In Transformers, tutorial, Jun 02, 2021

Batch Norm Explained Visually - Why does it work?
A Gentle Guide to the reasons for the Batch Norm layer's success in making training converge faster, in Plain English
In Neural, tutorial, May 27, 2021

Differential and Adaptive Learning Rates - Neural Network Optimizers and Schedulers demystified
A Gentle Guide to boosting model training and hyperparameter tuning with Optimizers and Schedulers, in Plain English
In Neural, tutorial, May 22, 2021

Batch Norm Explained Visually — How it works, and why neural networks need it
A Gentle Guide to an all-important Deep Learning layer, in Plain English
In Neural, tutorial, May 10, 2021

Foundations of NLP Explained — Bleu Score and WER Metrics
A Gentle Guide to two essential metrics (Bleu Score and Word Error Rate) for NLP models, in Plain English
In NLP, tutorial, May 07, 2021

Image Captions with Attention in Tensorflow, Step-by-step
An end-to-end example using Encoder-Decoder with Attention in Keras and Tensorflow 2.0, in Plain English
In Vision, tutorial, Apr 27, 2021

Image Captions with Deep Learning - State-of-the-Art Architectures
A Gentle Guide to Image Feature Encoders, Sequence Decoders, Attention, and Multimodal Architectures, in Plain English
In Vision, tutorial, Apr 20, 2021

Leveraging GeoLocation Data with Machine Learning - Essential Techniques
A Gentle Guide to Feature Engineering and Visualization with Geolocation Data, in Plain English
In GeoLocation, tutorial, Apr 11, 2021

Featured
Enterprise ML - Why putting your model in production takes longer than building it
In Neural, tutorial

Enterprise ML - Why building and training a "real-world" model is hard
In Enterprise

Transformers Explained Visually - Not just how, but Why they work so well
In Transformers, tutorial

Differential and Adaptive Learning Rates - Neural Network Optimizers and Schedulers demystified
In Neural, tutorial

Foundations of NLP Explained — Bleu Score and WER Metrics
In NLP, tutorial

Image Captions with Attention in Tensorflow, Step-by-step
In Vision, tutorial

Leveraging GeoLocation Data with Machine Learning - Essential Techniques
In GeoLocation, tutorial

Foundations of NLP Explained Visually - Beam Search, How It Works
In NLP, tutorial

Audio Deep Learning Made Simple - Automatic Speech Recognition (ASR), How it Works
In Audio, tutorial

Audio Deep Learning Made Simple - Sound Classification, Step-by-Step
In Audio, tutorial

Audio Deep Learning Made Simple - Why Mel Spectrograms perform better
In Audio, tutorial

Audio Deep Learning Made Simple - State-of-the-Art Techniques
In Audio, tutorial

Transformers Explained Visually - How it works, step-by-step
In Transformers, tutorial

Transformers Explained Visually - Overview of Functionality
In Transformers, tutorial

Reinforcement Learning Made Simple - Intro to Basic Concepts and Terminology
In Reinforcement Learning, tutorial