Alibaba's Apache 2.0-licensed LLM family, in sizes from small to frontier scale
Permissively licensed open reproduction of LLaMA at 3B, 7B, and 13B parameters
Allen AI research LLM with public weights and open training code
Compact 1.1B-parameter LLaMA-architecture model trained on 3 trillion tokens
Fully transparent LLM release: open weights, training code, and training data (see the loading sketch after this list)
EleutherAI's framework for training LLMs at research scale
Apache 2.0-licensed toolkit for fine-tuning Llama and Mistral models on GPU (see the LoRA sketch after this list)
TII's Apache 2.0-licensed LLM family, ranging from 1B to 180B parameters
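Several of the entries above publish open weights, which are typically hosted on the Hugging Face Hub and loaded the same way. A minimal sketch using the `transformers` library; the model ID is a placeholder (substitute any open-weight model from the list), and `device_map="auto"` assumes the `accelerate` package is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-name"  # placeholder: substitute any open-weight model above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype the checkpoint was saved in
    device_map="auto",    # spread layers across available GPUs (requires `accelerate`)
)

inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```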
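For the fine-tuning toolkit entry, the usual GPU-friendly workflow is LoRA-style adapter training. This sketch uses the generic `peft` library rather than any specific toolkit's API; the base model ID is a placeholder, and the target modules are a common assumption for Llama- and Mistral-style attention layers:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any Llama- or Mistral-style causal LM works the same way.
base = AutoModelForCausalLM.from_pretrained("org/llama-style-7b")

# Attach low-rank adapters to the attention projections; only the small
# adapter matrices are trained, which is what makes single-GPU fine-tuning feasible.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```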