LFM2-8B-A1B – MLX 8-bit (Apple Silicon)
Maintainer / Publisher: Susant Achary
Upstream model: LiquidAI/LFM2-8B-A1B
This repo (MLX 8-bit): mlx-community/LFM2-8B-A1B-8bit-MLX
This repository provides an Apple-Silicon-optimized MLX build of LFM2-8B-A1B at 8-bit quantization for fast, on-device inference.
What is LFM2-8B-A1B?
- Architecture: Mixture-of-Experts (MoE) Transformer.
- Size: 8B total parameters with **1B active** per token (the "A1B" suffix commonly denotes ~1B active params).
- Why MoE? During generation, only a subset of experts is activated per token, reducing compute per token while keeping a larger total parameter pool for expressivity.
Important memory note (single-device inference):
Although compute per token benefits from MoE (fewer active parameters), the full set of experts still resides in memory for typical single-GPU/CPU deployments. In practice this means RAM usage scales with total parameters, not with the smaller active count.
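As a rough, back-of-the-envelope illustration of this point (weights only; runtime overhead and KV cache are covered in the RAM planning section below):

```python
# Weight footprint at 8-bit quantization (~1 byte per parameter).
# MoE lowers per-token compute, but every expert stays resident in RAM.
TOTAL_PARAMS = 8e9   # ~8B total parameters -> what must fit in memory
ACTIVE_PARAMS = 1e9  # ~1B active per token -> what drives per-token compute

bytes_per_param = 1.0  # 8-bit
print(f"Resident weights: ~{TOTAL_PARAMS * bytes_per_param / 1e9:.1f} GB")   # ~8.0 GB
print(f"Active per token: ~{ACTIVE_PARAMS * bytes_per_param / 1e9:.1f} GB")  # ~1.0 GB of weights touched
```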
What's in this MLX build
- `config.json` (MLX), `mlx_model*.safetensors` (8-bit shards)
- Tokenizer files: `tokenizer.json`, `tokenizer_config.json`
- Model metadata (e.g., `model_index.json`)
Target platform: macOS on Apple Silicon (M-series) using Metal/MPS.
Intended use
- General instruction-following, chat, and summarization
- RAG back-ends and long-context workflows on device
- Function-calling / structured outputs with schema-style prompts
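For the function-calling / structured-output use case above, here is a minimal sketch using the mlx_lm Python API; the schema and prompts are illustrative assumptions, not part of this card:

```python
# Sketch: schema-style structured output with mlx_lm (schema/prompt are illustrative).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/LFM2-8B-A1B-8bit-MLX")

schema = '{"vendor": "string", "date": "YYYY-MM-DD", "total_usd": "number"}'
messages = [
    {"role": "system", "content": f"Answer only with JSON that matches this schema: {schema}"},
    {"role": "user", "content": "Invoice from Acme Corp dated 2024-03-05, total $1,250."},
]

# Render the chat template to a plain string and generate.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```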
Limitations
- Even at 8-bit, long contexts (KV cache) can dominate memory at high `max_tokens` or large batch sizes.
- As with any quantization, small regressions vs. FP16 can appear on intricate math/code or edge-case formatting.
RAM planning (8-bit, MoE, MLX)
In the absence of measured figures, below are practical planning numbers derived from first principles and experience with MLX and similar MoE models. Treat them as starting points and validate on your hardware.
Rule-of-thumb components
- Weights: ~ total_params × 1 byte (8-bit). For 8B params, ~8.0 GB baseline.
- Runtime overhead: MLX graph + tensors + metadata, typically ~0.5–1.0 GB.
- KV cache: grows with context_length × layers × heads × dtype; often 1–3+ GB for long contexts.
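A small sketch that combines these components into a planning estimate; the layer/head/dimension values are placeholders (read the real ones from this repo's `config.json`), so treat the output as a ballpark only:

```python
# Ballpark peak-RAM estimate from the rule-of-thumb components above.
# n_layers / n_kv_heads / head_dim are placeholders -- use the values in config.json.
def estimate_peak_ram_gb(
    total_params: float = 8e9,   # all experts resident (see MoE caveat below)
    weight_bytes: float = 1.0,   # 8-bit weights ~ 1 byte/param
    overhead_gb: float = 0.75,   # MLX graph + tensors + metadata (~0.5-1.0 GB)
    context_len: int = 8192,
    n_layers: int = 32,          # placeholder
    n_kv_heads: int = 8,         # placeholder
    head_dim: int = 128,         # placeholder
    kv_bytes: int = 2,           # fp16 KV-cache entries
) -> float:
    weights_gb = total_params * weight_bytes / 1e9
    # K and V per layer: 2 * tokens * kv_heads * head_dim * bytes
    kv_gb = 2 * context_len * n_layers * n_kv_heads * head_dim * kv_bytes / 1e9
    return weights_gb + overhead_gb + kv_gb

print(f"~{estimate_peak_ram_gb():.1f} GB estimated peak at 8k context")
```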
Indicative peak RAM (single text prompt, batch = 1)

| Context window | Estimated peak RAM |
|---|---|
| 4k tokens | ~9.5–10.5 GB |
| 8k tokens | ~10.5–11.8 GB |
| 16k tokens | ~12.0–14.0 GB |
These ranges assume 8-bit weights, A1B MoE (all experts resident), batch size = 1, and standard generation settings.
On lower windows (≤2k), you may see ~9–10 GB. Larger windows or batches will increase KV-cache usage and peak RAM.
Choosing precision for LFM2-8B-A1B
While this card is 8-bit, teams often want a consistent lineup. If you later produce 6/5/4/3/2-bit MLX builds, here's a practical guide (RAM figures are indicative for an 8B MoE LM; your results depend on context/batch):
| Variant | Typical Peak RAM | Relative Speed | Typical Behavior | When to choose |
|---|---|---|---|---|
| 4-bit | ~7–8 GB | 🔥🔥🔥 | Better detail retention | If 3-bit drops too much fidelity |
| 6-bit | ~9–10.5 GB | 🔥🔥 | Near-max MLX quality | If you want accuracy under quant |
| 8-bit (this repo) | ~9.5–12+ GB | 🔥🔥 | Highest quality among quant tiers | When RAM allows and you want the most faithful outputs |
MoE caveat: MoE reduces compute per token, but unless experts are paged/partitioned across devices and loaded on demand, memory still follows total parameters. On a single Mac, plan RAM as if the whole 8B parameter set is resident.
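If you later produce the lower-bit variants mentioned above, a conversion sketch with `mlx_lm` might look like the following; the bit-width, group size, and output path are examples, and the exact `convert` options should be checked against your installed mlx_lm version:

```python
# Sketch: build a lower-bit MLX variant from the upstream weights.
# q_bits / q_group_size / paths are examples -- verify against your mlx_lm version.
from mlx_lm import convert

convert(
    hf_path="LiquidAI/LFM2-8B-A1B",
    mlx_path="LFM2-8B-A1B-4bit-MLX",  # local output directory
    quantize=True,
    q_bits=4,          # 4-bit; use 6 or 8 for the other tiers
    q_group_size=64,
)
```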
Quickstart (CLI – MLX)
Deterministic generation
```bash
# mlx_lm targets Apple's Metal backend automatically; no device flag is needed.
python -m mlx_lm.generate \
  --model mlx-community/LFM2-8B-A1B-8bit-MLX \
  --prompt "Summarize the following in 5 bullet points:\n<your text>" \
  --max-tokens 256 \
  --temp 0.0 \
  --seed 0
```