
Linklog

A curated collection of links and resources I have found over time.

Tags: #algorithms (8) #best-practices (9) #cli (4) #data (10) #data-science (7) #deep-learning (8) #diffusion (3) #library (11) #llm (10) #LLM (1) #markdown (2) #optimization (6) #physics (3) #python (27) #rust (7) #SQL (2) #statistics (4) #tools (7) #vcs (6) #web-dev (4)

March 2025

Week 12
  • A Visual Guide to LLM Agents (newsletter.maartengrootendorst.com) - #llm

    Possibly one of the best summaries out there of how LLM agents function, broken down into high-level components (memory, tools), and well illustrated.

Week 10
  • Understanding Attention in LLMs (bartoszmilewski.com) - #llm

    A good example that even when you understand the math behind a concept, there's nothing like good storytelling. I knew how attention worked, but this post brilliantly summarized it and clarified some steps for me. A great read!

February 2025

Week 8
  • Deep dive into LLMs like ChatGPT by Andrej Karpathy (TL;DR) (anfalmushtaq.com) - #llm

    A TL;DR version of Andrej Karpathy's "Deep dive into LLMs like ChatGPT" video. It keeps the essentials while presenting them in clear, digestible chunks.

Week 7
  • Binary vector embeddings are so cool (emschwartz.me) - #llm #deep-learning #data

    A description of the effect of binary quantization on embeddings. By restricting the dtype of embedding vectors, you trade accuracy in latent space for embedding size. A binary dtype seems to preserve a surprisingly large share of the original information content (about 97%) while yielding enormous space savings (about 97% as well, here).
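    The core trick is simple enough to sketch. Here is a minimal illustration (my own toy code, not the post's) of quantizing a float32 embedding down to one sign bit per dimension and comparing the packed vectors with a Hamming-style similarity:

    ```python
    import numpy as np

    def binarize(embedding: np.ndarray) -> np.ndarray:
        """Quantize a float embedding to a packed bit vector of sign bits."""
        bits = (embedding > 0).astype(np.uint8)
        return np.packbits(bits)  # 8 dimensions per byte

    def hamming_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Fraction of matching bits between two packed bit vectors."""
        differing = np.unpackbits(a ^ b).sum()
        total_bits = a.size * 8
        return 1.0 - differing / total_bits

    rng = np.random.default_rng(0)
    v = rng.standard_normal(1024).astype(np.float32)                # 4096 bytes
    w = v + 0.1 * rng.standard_normal(1024).astype(np.float32)      # a near-duplicate

    bv, bw = binarize(v), binarize(w)
    print(bv.nbytes)                   # 128 bytes: a 32x reduction over float32
    print(hamming_similarity(bv, bw))  # close to 1.0 for similar vectors
    ```

    A float32 dimension becomes a single bit, which is exactly where the ~97% size reduction (31/32) comes from; the surprise in the post is how little retrieval quality this costs.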

  • How I program with LLMs (crawshaw.io) - #llm

    Some interesting reflections on how to use LLMs in daily development work. I personally adhere mostly to the "autocomplete" part with GitHub Copilot, and I'm getting used to the "search" part, where the LLM helps me find information about a language or coding paradigm faster than I could search for it myself. I'm not yet on board with "chat-driven programming".

  • How to fine-tune open LLMs in 2025 with Hugging Face (www.philschmid.de) - #llm

    An in-depth example of how to fine-tune an LLM using the Hugging Face ecosystem.

  • LoRA (jaketae.github.io) - #llm

    An explanation of LoRA (low-rank adaptation) methods for fine-tuning LLMs.
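    The idea fits in a few lines. A minimal sketch (my own, not the post's code): freeze the pretrained weight matrix W and learn only a low-rank update BA, so the number of trainable parameters drops from d*k to r*(d+k) with rank r much smaller than d and k:

    ```python
    import numpy as np

    d, k, r = 512, 512, 8
    rng = np.random.default_rng(0)

    W = rng.standard_normal((d, k))          # frozen pretrained weights
    A = rng.standard_normal((r, k)) * 0.01   # trainable, rank r
    B = np.zeros((d, r))                     # trainable, initialized to zero

    x = rng.standard_normal(k)

    # Forward pass: base output plus the low-rank adaptation.
    y = W @ x + B @ (A @ x)

    full_params = d * k
    lora_params = r * (d + k)
    print(lora_params / full_params)  # 0.03125: ~3% of the full weight count
    ```

    Initializing B to zero means the adapted model starts out identical to the base model, so fine-tuning begins from the pretrained behavior rather than from noise.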

January 2025

Week 2
  • Building effective agents (www.anthropic.com) - #llm

    Advice on agentic workflow for practical applications from Anthropic. A good read to better understand what structure you should use when establishing your project.

October 2024

Week 44
  • Transformers From Scratch (blog.matdmiller.com) - #deep-learning #llm

    A thorough explanation of the Transformer model. If, like me, you've been confused about what's so special about transformers compared to RNNs or LSTMs, this might help.

September 2024

Week 38
  • dleemiller/WordLlama (github.com) - #library #llm

    A natural language processing toolkit optimized for CPU hardware. I haven't tested it yet, but it looks really useful for quick clustering, deduplication, similarity search, etc.

You can follow me via RSS.
© 2025 Nicolas Chagnet. All rights reserved.