LFM2-8B-A1B – MLX 8-bit (Apple Silicon)

Maintainer / Publisher: Susant Achary
Upstream model: LiquidAI/LFM2-8B-A1B
This repo (MLX 8-bit): mlx-community/LFM2-8B-A1B-8bit-MLX

This repository provides an Apple-Silicon-optimized MLX build of LFM2-8B-A1B at 8-bit quantization for fast, on-device inference.


🔎 What is LFM2-8B-A1B?

  • Architecture: Mixture-of-Experts (MoE) Transformer.
  • Size: 8B total parameters with **1B active** per token (the "A1B" suffix denotes ~1B active parameters).
  • Why MoE? During generation, only a subset of experts is activated per token, reducing compute per token while keeping a larger total parameter pool for expressivity.

Important memory note (single-device inference):
Although compute per token benefits from MoE (fewer active parameters), the full set of experts still resides in memory for typical single-GPU/CPU deployments. In practice this means RAM usage scales with total parameters, not with the smaller active count.
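
A quick back-of-the-envelope check makes this concrete (assuming ~1 byte per parameter at 8-bit and ignoring quantization scales and runtime overhead):

```python
# Why RAM planning follows total (not active) parameters on a single device.
# Assumption: ~1 byte per parameter at 8-bit, ignoring scales/biases and overhead.
total_params  = 8e9   # all experts stay resident in memory
active_params = 1e9   # parameters actually exercised per token (compute only)

print(f"weights resident in RAM:   ~{total_params / 1e9:.0f} GB")   # ~8 GB
print(f"weights touched per token: ~{active_params / 1e9:.0f} GB")  # ~1 GB
```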


📦 What's in this MLX build

  • config.json (MLX), mlx_model*.safetensors (8-bit shards)
  • Tokenizer files: tokenizer.json, tokenizer_config.json
  • Model metadata (e.g., model_index.json)

Target platform: macOS on Apple Silicon (M-series) using Metal/MPS.


✅ Intended use

  • General instruction-following, chat, and summarization
  • RAG back-ends and long-context workflows on device
  • Function-calling / structured outputs with schema-style prompts
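
As an illustrative sketch of the last item, a schema-style prompt through the mlx_lm Python API could look like the following (the JSON schema and field names are made up for the example, and the model is not guaranteed to emit valid JSON without additional constraints):

```python
# Minimal sketch: schema-style prompt for structured output (illustrative schema).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/LFM2-8B-A1B-8bit-MLX")

messages = [
    {"role": "system",
     "content": "Reply with JSON only, matching this shape: "
                "{\"title\": str, \"tags\": [str], \"summary\": str}"},
    {"role": "user",
     "content": "Describe the LFM2-8B-A1B model in one sentence."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=False)
print(text)
```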

⚠️ Limitations

  • Even at 8-bit, the KV cache can dominate memory at long contexts, high max_tokens, or large batch sizes.
  • As with any quantization, small regressions versus FP16 can appear on intricate math/code or edge cases in formatting.

🔒 RAM planning (8-bit, MoE, MLX)

In the absence of measured numbers for this build, the figures below are practical planning estimates derived from first principles and experience with MLX and similar MoE models. Treat them as starting points and validate on your hardware.

Rule-of-thumb components

  • Weights: ~ total_params × 1 byte (8-bit). For 8B params → ~8.0 GB baseline.
  • Runtime overhead: MLX graph + tensors + metadata → ~0.5–1.0 GB typical.
  • KV cache: grows with context length × layers × KV heads × head dim × bytes per element; often 1–3+ GB for long contexts.
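
A small estimator that combines these rules of thumb (the layer/head/dimension values are placeholders, not the published LFM2-8B-A1B configuration; substitute the real values from config.json before relying on the output):

```python
# Rough peak-RAM estimator for an 8-bit MoE model on a single device.
# NOTE: n_layers / n_kv_heads / head_dim below are placeholders, not the
# actual LFM2-8B-A1B configuration; read the real values from config.json.

def estimate_peak_ram_gb(total_params=8e9, context_len=8192,
                         n_layers=32, n_kv_heads=8, head_dim=128,
                         kv_bytes=2,          # fp16 KV cache
                         overhead_gb=0.75):   # MLX graph/tensors/metadata
    weights_gb = total_params * 1 / 1e9                    # ~1 byte/param at 8-bit
    kv_gb = (2 * n_layers * n_kv_heads * head_dim          # K and V per token
             * context_len * kv_bytes) / 1e9
    return weights_gb + kv_gb + overhead_gb

for ctx in (4096, 8192, 16384):
    print(f"{ctx:>6} tokens -> ~{estimate_peak_ram_gb(context_len=ctx):.1f} GB")
```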

Indicative peak RAM (single prompt, batch = 1)

| Context window | Estimated peak RAM |
|---|---|
| 4k tokens | ~9.5–10.5 GB |
| 8k tokens | ~10.5–11.8 GB |
| 16k tokens | ~12.0–14.0 GB |

These ranges assume 8-bit weights, A1B MoE (all experts resident), batch size = 1, and standard generation settings.
On lower windows (≤2k), you may see ~9–10 GB. Larger windows or batches will increase KV-cache and peak RAM.


🧭 Choosing precision for LFM2-8B-A1B

While this card is 8-bit, teams often want a consistent lineup. If you later produce 6/5/4/3/2-bit MLX builds, here’s a practical guide (RAM figures are indicative for an 8B MoE LM; your results depend on context/batch):

| Variant | Typical Peak RAM | Relative Speed | Typical Behavior | When to choose |
|---|---|---|---|---|
| 4-bit | ~7–8 GB | 🔥🔥🔥 | Better detail retention | If 3-bit drops too much fidelity |
| 6-bit | ~9–10.5 GB | 🔥🔥 | Near-max MLX quality | If you want accuracy under quantization |
| 8-bit (this repo) | ~9.5–12+ GB | 🔥🔥 | Highest quality among quant tiers | When RAM allows and you want the most faithful outputs |

MoE caveat: MoE reduces compute per token, but unless experts are paged/partitioned across devices and loaded on demand, memory still follows total parameters. On a single Mac, plan RAM as if the whole 8B parameter set is resident.
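
If you want to generate the lower-bit tiers yourself, mlx-lm's conversion utility can quantize the upstream checkpoint directly. A minimal sketch follows; the output directory names are examples, and the keyword names follow recent mlx-lm releases, so check your installed version:

```python
# Sketch: producing additional quantization tiers from the upstream checkpoint.
from mlx_lm import convert

for bits in (6, 4):
    convert(
        "LiquidAI/LFM2-8B-A1B",                 # upstream checkpoint
        mlx_path=f"LFM2-8B-A1B-{bits}bit-MLX",  # example output directory
        quantize=True,
        q_bits=bits,
        q_group_size=64,
    )
```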


🚀 Quickstart (MLX CLI)

Deterministic generation

python -m mlx_lm.generate \
  --model mlx-community/LFM2-8B-A1B-8bit-MLX \
  --prompt "Summarize the following in 5 bullet points:\n<your text>" \
  --max-tokens 256 \
  --temp 0.0 \
  --seed 0

MLX selects the Metal backend automatically on Apple Silicon, so no device flag is needed.
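
The equivalent from Python, for programmatic use (a sketch; recent mlx-lm versions default to greedy decoding when no sampler is supplied, and mx.random.seed mirrors --seed):

```python
# Python counterpart of the CLI call above (a sketch; API per recent mlx-lm).
import mlx.core as mx
from mlx_lm import load, generate

mx.random.seed(0)  # mirrors --seed 0

model, tokenizer = load("mlx-community/LFM2-8B-A1B-8bit-MLX")

# For chat-style prompts, wrap the text with tokenizer.apply_chat_template
# as shown in the structured-output example earlier on this card.
prompt = "Summarize the following in 5 bullet points:\n<your text>"
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```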