This is a living document and will be continually updated.
Today, the world is abuzz with LLMs (short for Large Language Models). Not a day goes by without a new language model being announced, fueling the fear of missing out on opportunities in the AI field. Yet many people still struggle with the basic concepts behind LLMs, which makes it difficult to keep up with the advances. This article is for anyone who wants to delve into the inner workings of these AI models and gain a solid understanding of the subject. With this in mind, here are some tools and articles that will help you strengthen your LLM concepts and break them down for easier understanding.
· 1. The Illustrated Transformer by Jay Alammar
· 2. The Illustrated GPT-2 by Jay Alammar
· 3. LLM Visualization by Brendan Bycroft
· 4. Generative AI exists thanks to Transformers — Financial Times
· 5. Tokenizer tool by OpenAI
· 6. About GPT Tokenizers by Simon Willison
· 7. Chunkviz by Greg Kamradt
· 8. Do machine learning models memorize or generalize? An explorable by PAIR
· 9. Generating color-coded text
· Conclusion
Many of you may already know this iconic article. Jay was one of the early pioneers of technical writing built around powerful visualizations; a quick read of his blog will show you what I mean. Over the years he has inspired many writers to follow suit, and tutorials have shifted from plain text and code to immersive visualizations. Anyway, back to The Illustrated Transformer. The architecture of the Transformer is…
