Category: ai
44 articles across 26 sub-topics

Types of LLM Quantization: By Timing, Scope, and Mapping

TLDR: There is no single "best" LLM quantization. You classify and choose quantization along three axes: when you quantize (timing), what you quantize (scope), and how values are encoded (mapping). In practice, most teams start with weight quantizati...

14 min read
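The "mapping" axis from the teaser above can be sketched in a few lines. This is a minimal, illustrative symmetric per-tensor INT8 round-trip in NumPy (the function names `quantize_symmetric` and `dequantize` are my own, not from the article):

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 8):
    """Symmetric per-tensor quantization: FP32 values are scaled so that
    zero maps exactly to zero and the largest magnitude maps near the
    integer limit (127 for INT8)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original FP32 values."""
    return q.astype(np.float32) * scale

w = np.array([-0.9, -0.1, 0.0, 0.4, 1.2], dtype=np.float32)
q, scale = quantize_symmetric(w)
w_hat = dequantize(q, scale)   # close to w, within about scale/2 per element
```

Asymmetric mapping differs only in adding a zero-point offset so the integer range covers a skewed value distribution more tightly.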
LoRA Explained: How to Fine-Tune LLMs on a Budget

TLDR: Fine-tuning a 7B-parameter LLM updates billions of weights and requires expensive GPUs. LoRA (Low-Rank Adaptation) freezes the original weights and trains only tiny adapter matrices that are added on top. 90%+ memory reduction; zero inference l...

13 min read
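The LoRA idea in the teaser (frozen base weights plus tiny trainable adapters) can be sketched as a NumPy forward pass. A toy illustration under assumed shapes, not the paper's or any library's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16   # r is the low rank; alpha a scaling knob

W = rng.normal(size=(d_in, d_out))           # pretrained weight, frozen
A = rng.normal(scale=0.01, size=(d_in, r))   # trainable, small random init
B = np.zeros((r, d_out))                     # trainable, zero init: adapter
                                             # starts as a no-op

def lora_forward(x: np.ndarray) -> np.ndarray:
    """y = xW + (alpha/r) * xAB; only A and B would receive gradients."""
    return x @ W + (alpha / r) * (x @ A @ B)

x = rng.normal(size=(1, d_in))
# At initialization the adapter contributes nothing:
assert np.allclose(lora_forward(x), x @ W)
```

Here the adapter trains d_in*r + r*d_out = 1,024 parameters instead of the 4,096 in W; at LLM scale that same ratio is where the 90%+ memory reduction comes from, and A@B can be merged into W after training, which is why inference latency is unchanged.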
LLM Model Quantization: Why, When, and How to Deploy Smaller, Faster Models

TLDR: Quantization converts high-precision model weights and activations (FP16/FP32) into lower-precision formats (INT8 or INT4) so LLMs run with less memory, lower latency, and lower cost. The key is choosing the right quantization method for your a...

13 min read
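The memory claim above is simple arithmetic worth making concrete. A back-of-the-envelope weight-memory estimate (my own helper, ignoring activations and KV cache):

```python
def model_memory_gb(n_params: float, bits: int) -> float:
    """Approximate weight memory: parameters * bits-per-parameter, in GB."""
    return n_params * bits / 8 / 1e9

n = 7e9                           # a 7B-parameter model
fp16 = model_memory_gb(n, 16)     # 14 GB of weights
int8 = model_memory_gb(n, 8)      # 7 GB
int4 = model_memory_gb(n, 4)      # 3.5 GB
```

So moving from FP16 to INT4 shrinks weight memory 4x, which is often the difference between needing a data-center GPU and fitting on a single consumer card.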
Variational Autoencoders (VAE): The Art of Compression and Creation

TLDR: A VAE learns to compress data into a smooth probabilistic latent space, then generate new samples by decoding random points from that space. The reparameterization trick is what makes it trainable end-to-end. Reconstruction + KL divergence loss...

13 min read
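The reparameterization trick mentioned in the teaser fits in two functions. A NumPy sketch (framework-free, so no actual gradients flow here; it only shows the sampling and the closed-form KL term for a diagonal Gaussian):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu: np.ndarray, log_var: np.ndarray) -> np.ndarray:
    """z = mu + sigma * eps. The randomness is isolated in eps ~ N(0, I),
    so a framework could backpropagate through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu: np.ndarray, log_var: np.ndarray) -> np.ndarray:
    """KL(q(z|x) || N(0, I)) per sample, for a diagonal Gaussian encoder."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)

mu, log_var = np.zeros((1, 4)), np.zeros((1, 4))
z = reparameterize(mu, log_var)          # a sample from the latent space
```

The training loss is then reconstruction error plus this KL term, which is what pulls the latent space toward a smooth standard normal you can sample from.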
Deep Learning Architectures: CNNs, RNNs, and Transformers

TLDR: CNNs, RNNs, and Transformers solve different kinds of pattern problems. CNNs are great for spatial data like images, RNNs handle ordered sequences, and Transformers shine when long-range context matters. Choosing the right architecture often ma...

12 min read
Neural Networks Explained: From Neurons to Deep Learning

TLDR: A neural network is a stack of simple "neurons" that turn raw inputs into predictions by learning the right weights and biases. Training means repeatedly nudging those numbers via back-propagation until the error shrinks. Master the basics and ...

12 min read
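The "nudging weights until the error shrinks" idea from that teaser can be shown with a single neuron. A minimal sketch of one forward pass and one hand-derived gradient step (all values here are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One neuron: a weighted sum of inputs plus a bias, squashed by an activation.
x = np.array([0.5, -1.0])
w = np.array([0.8, 0.2])   # weights (learned)
b = 0.1                    # bias (learned)

y = sigmoid(w @ x + b)     # the neuron's prediction

# One gradient-descent nudge toward a target, via the chain rule
# (this is backpropagation in miniature, for loss L = 0.5 * (y - target)**2):
target, lr = 1.0, 0.5
error = y - target
grad = error * y * (1 - y)   # dL/d(pre-activation)
w = w - lr * grad * x        # dL/dw = grad * x
b = b - lr * grad            # dL/db = grad
```

After this single step the prediction sits closer to the target; a deep network just repeats the same chain-rule bookkeeping across many layers of such neurons.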