An open-source family of large language models developed by Alibaba Cloud, offered in a range of parameter sizes, with most versions released under permissive licenses such as Apache 2.0.
An open-source reproduction of Meta AI’s LLaMA models, offering permissively licensed weights in 3B, 7B, and 13B parameter sizes, with checkpoints compatible with both PyTorch and JAX.
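As a minimal sketch of the PyTorch path, the weights load through the standard Hugging Face transformers classes; the openlm-research/open_llama_3b checkpoint id reflects the project's published releases, but ids and tokenizer details may vary by version.

```python
# Minimal sketch: loading OpenLLaMA weights in PyTorch via Hugging Face
# transformers, assuming the published openlm-research/open_llama_3b checkpoint.
from transformers import LlamaForCausalLM, LlamaTokenizer

# The slow LlamaTokenizer is used here; the project advised against the
# fast tokenizer for these checkpoints.
tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_3b")
model = LlamaForCausalLM.from_pretrained("openlm-research/open_llama_3b")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```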
A fully open-source LLM framework from the Allen Institute for AI, providing model weights, training code, data, and evaluation tools in a transparent package.
A fully open-source, 1B-parameter Chinese-centric language model trained from scratch, with complete access to code, data, checkpoints, and logs under the Apache 2.0 license.
A compact, open-source LLaMA-based language model (~1.1B parameters) pretrained on up to 3 trillion tokens under the Apache 2.0 license.
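Since the model fits on a single consumer GPU or even a CPU, a quick chat-style generation sketch is feasible; the TinyLlama/TinyLlama-1.1B-Chat-v1.0 id is the project's published chat variant, assumed here.

```python
# Minimal sketch: chatting with the TinyLlama chat checkpoint via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # published chat variant (assumed id)
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

messages = [{"role": "user", "content": "Explain the Apache 2.0 license in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```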
A fully open-source large language model suite offering reproducible training, open weights, and instruction-tuned variants under the Apache 2.0 license.
A 176B-parameter open-access multilingual language model for text generation and research, supporting 46 natural languages and 13 programming languages.
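As a hedged illustration of its multilingual generation, the small bigscience/bloom-560m sibling checkpoint stands in below for the full 176B model, which requires multi-GPU serving.

```python
# Minimal sketch: multilingual text generation with a small BLOOM checkpoint.
# bigscience/bloom-560m stands in for the full 176B model here.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

# The model was trained on 46 natural languages, so non-English prompts work too.
print(generator("La capitale de la France est", max_new_tokens=10)[0]["generated_text"])
```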
An open-source, large-scale language model training library and model family developed by EleutherAI, used to train and deploy large autoregressive models such as GPT-NeoX-20B.
An open-source framework by MosaicML for training, fine-tuning, and deploying large language models across GPUs at scale.
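A rough fine-tuning sketch built on Composer, MosaicML's open-source trainer: gpt2 stands in as a tiny model, the toy dataset is invented for illustration, and the wrapper APIs shown may differ across Composer versions.

```python
# Rough sketch: fine-tuning a small Hugging Face model with MosaicML's Composer
# trainer. gpt2 and the two-sentence dataset are stand-ins for illustration.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer
from composer import Trainer
from composer.models import HuggingFaceModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = HuggingFaceModel(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    tokenizer=tokenizer,
    use_logits=True,
)

# Tiny in-memory dataset: for causal LM training, labels are the input ids.
texts = ["Composer wraps the training loop.", "MosaicML open-sourced this stack."]
batch = tokenizer(texts, padding=True, return_tensors="pt")
batch["labels"] = batch["input_ids"].clone()
dataset = [{k: v[i] for k, v in batch.items()} for i in range(len(texts))]
loader = DataLoader(dataset, batch_size=2)

trainer = Trainer(
    model=model,
    train_dataloader=loader,
    max_duration="1ep",  # one pass over the toy dataset
    device="gpu" if torch.cuda.is_available() else "cpu",
)
trainer.fit()
```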
An open-source family of large language models by the UAE's Technology Innovation Institute, available in a range of parameter sizes with multilingual and, in some variants, multimodal capabilities.
An AI developer assistant that helps teams build software faster, handling code reviews, bug fixes, and feature implementations through natural-language instructions.