Hello, this is
My Interpretation of Andrej Karpathy's Neural Networks: Zero to Hero
This lecture series progressively builds up to recreating a mini-GPT model, providing a hands-on understanding of deep learning fundamentals.
- The autograd engine demystifies the inner workings of PyTorch by implementing reverse-mode automatic differentiation over a scalar computation graph (a minimal sketch follows below).
- The bigram and MLP models iteratively improve a character-level name generator, showcasing the transition from simple statistical approaches to deeper neural networks (the bigram idea is sketched below).
- The final GPT model is a decoder-only Transformer trained on roughly 1 MB of Shakespearean text, capable of generating stylistically consistent passages (see the attention-head sketch below).
Click into each project to learn more about its creation, goals, and current version.
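
For a taste of what the autograd project involves, here is a minimal sketch of a micrograd-style scalar `Value` class. The class and method names are illustrative, not necessarily those used in the project:

```python
# Minimal sketch of a micrograd-style autograd engine (illustrative names).
class Value:
    """A scalar that records how it was computed, so gradients can flow back."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # how to propagate grad to parents
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# Usage: d(loss)/dx where loss = x*y + x
x, y = Value(2.0), Value(3.0)
loss = x * y + x
loss.backward()
print(x.grad)  # 4.0  (= y + 1)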
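
The bigram model's core idea, counting character transitions and sampling from them, fits in a few lines. The toy `names` list below is a stand-in for the real dataset:

```python
# Minimal sketch of a count-based bigram name generator.
# '.' marks the start and end of a name; `names` is a toy stand-in dataset.
import random
from collections import defaultdict

names = ["emma", "olivia", "ava", "isabella", "sophia"]

# Count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for name in names:
    chars = ["."] + list(name) + ["."]
    for a, b in zip(chars, chars[1:]):
        counts[a][b] += 1

def sample_name():
    """Walk the bigram table, sampling each next char given the current one."""
    out, ch = [], "."
    while True:
        next_chars = list(counts[ch].keys())
        weights = list(counts[ch].values())
        ch = random.choices(next_chars, weights=weights)[0]
        if ch == ".":
            return "".join(out)
        out.append(ch)

print(sample_name())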
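
The heart of the final GPT is causally masked self-attention. The sketch below shows a single attention head in PyTorch, following the standard formulation from the lecture; the dimensions in the usage example are illustrative:

```python
# Minimal sketch of one causally-masked self-attention head,
# the core building block of a decoder-only Transformer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Head(nn.Module):
    def __init__(self, n_embd, head_size, block_size):
        super().__init__()
        self.key = nn.Linear(n_embd, head_size, bias=False)
        self.query = nn.Linear(n_embd, head_size, bias=False)
        self.value = nn.Linear(n_embd, head_size, bias=False)
        # Lower-triangular mask: position t may only attend to positions <= t.
        self.register_buffer("tril", torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):                      # x: (batch, time, n_embd)
        B, T, C = x.shape
        k, q, v = self.key(x), self.query(x), self.value(x)
        # Scaled dot-product attention scores.
        wei = q @ k.transpose(-2, -1) * k.shape[-1] ** -0.5   # (B, T, T)
        wei = wei.masked_fill(self.tril[:T, :T] == 0, float("-inf"))
        wei = F.softmax(wei, dim=-1)
        return wei @ v                         # (B, T, head_size)

# Usage on random embeddings: 4 sequences of 8 tokens, 32-dim embeddings.
head = Head(n_embd=32, head_size=16, block_size=8)
print(head(torch.randn(4, 8, 32)).shape)  # torch.Size([4, 8, 16])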