An open-source reproduction of Meta AI’s LLaMA models, offering permissively licensed weights in 3B, 7B, and 13B parameter sizes compatible with both PyTorch and JAX.
OpenLLaMA is a permissively licensed, open-source reproduction of Meta's LLaMA, developed by OpenLM Research and trained on more than 1 trillion tokens from the RedPajama dataset. It offers drop-in PyTorch and JAX weights in 3B, 7B, and 13B parameter sizes, across two releases (v1 and an improved v2, with v2 models performing notably better). Released under the Apache-2.0 license, OpenLLaMA is fully usable in both research and commercial applications. As a transparent, accessible open model with full training code, evaluation pipelines, and checkpoints publicly available, it competes with proprietary systems like GPT-4 and Claude, as well as other open models such as Meta's Llama, Mistral, Falcon, and Gemma.
Key features include:

- Permissive Apache-2.0 licensing, covering both research and commercial use
- 3B, 7B, and 13B parameter checkpoints with drop-in PyTorch and JAX weights
- Training on more than 1 trillion tokens from the open RedPajama dataset
- Two releases (v1 and an improved v2), with full training code, evaluation pipelines, and checkpoints published
Use cases include:

- Language model research without restrictive licensing terms
- Commercial applications built on openly licensed weights
- Drop-in replacement for LLaMA weights in existing PyTorch or JAX pipelines