OpenLLaMA

An open-source reproduction of Meta AI’s LLaMA models, offering permissively licensed weights in 3B, 7B, and 13B parameter sizes compatible with both PyTorch and JAX.

OpenLLaMA is a permissively licensed, open-source reproduction of Meta's LLaMA, developed by OpenLM Research and trained on over 1 trillion tokens from the RedPajama dataset. It offers drop-in PyTorch and JAX weights in two releases (v1 and the improved v2, which delivers notably better performance) for 3B, 7B, and 13B models. As an ungated alternative released under the Apache‑2.0 license, OpenLLaMA is fully usable in both research and commercial applications. With full training code, evaluation pipelines, and checkpoints made available, it positions itself as a transparent, accessible alternative to proprietary systems like GPT‑4 and Claude, and stands alongside other open models such as Meta's Llama, Mistral, Falcon, and Google's Gemma.

Key features include:

  • Model sizes: Available in 3B, 7B, and 13B parameters, with efficient v2 options
  • Dual-framework support: Compatible with both PyTorch (Hugging Face) and JAX (EasyLM); a loading sketch follows this list
  • Apache‑2.0 licensing: Fully permissive for commercial and research use
  • Competitive performance: Matches or exceeds the original LLaMA and GPT‑J on common benchmarks
  • Quantization support: Enables efficient deployment even on limited hardware; see the 4-bit sketch after this list
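
Because the weights are drop-in compatible with the Hugging Face LLaMA implementation, loading works like any other causal LM. Below is a minimal sketch, assuming transformers and accelerate are installed; the model ID shown follows OpenLM Research's Hugging Face naming, and you can swap in the 7B or 13B variants.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Model ID following OpenLM Research's Hugging Face naming;
# swap in open_llama_7b_v2 or open_llama_13b as needed.
model_id = "openlm-research/open_llama_3b_v2"

# use_fast=False: the upstream README recommends the slow tokenizer,
# since the fast LLaMA tokenizer has produced incorrect tokenizations.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Q: What is the largest animal?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```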

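For the quantization path, one common route is bitsandbytes via the transformers integration. A minimal 4-bit sketch, assuming bitsandbytes is installed and a CUDA device is available:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "openlm-research/open_llama_3b_v2"  # assumed hub ID, as above

# 4-bit NF4 quantization; roughly quarters memory use versus fp16,
# at a modest cost in generation quality.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```
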
Use cases include:

  • Training or fine-tuning open LLMs for domain-specific tasks (see the LoRA sketch after this list)
  • Replacing proprietary LLMs in production with fully open, auditable alternatives
  • Benchmarking against state-of-the-art models with transparent evaluation
  • Embedding into research pipelines, enterprise systems, or edge deployments
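
For the fine-tuning use case, parameter-efficient methods such as LoRA keep memory requirements modest. A minimal sketch using the peft library; the target module names assume the standard Hugging Face LLaMA architecture.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b_v2")

# Attach low-rank adapters to the attention projections; only the
# adapter weights (a fraction of a percent of the model) are trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train with transformers.Trainer or a custom loop
# on your domain-specific dataset.
```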
