Moxin-LLM is an open-source LLM family developed by the Moxin team, designed to prioritize full transparency, reproducibility, and broad accessibility. Trained on public datasets and released under the Apache 2.0 license, Moxin spans multiple model types (Base, Instruct, Reasoning, and VLM, a vision-language model), all with open weights, code, data, and training logs.

Positioned as a transparent and reproducible alternative to closed models such as GPT-4, Claude, and Gemini, Moxin-LLM also rivals open-source leaders such as LLaMA 3.1, Mistral, and Qwen. It is one of the few models meeting all criteria of the Model Openness Framework (MOF), ensuring a verifiable and open training pipeline.
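Because the weights ship with open code, a checkpoint can be loaded like any Hugging Face causal LM. Below is a minimal sketch; the repository ID `moxin-org/Moxin-7B-Chat` is an assumption and should be replaced with the actual published name from the release page.

```python
# Minimal sketch of loading a Moxin checkpoint with Hugging Face transformers.
# The model ID below is a hypothetical placeholder, not a confirmed repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/Moxin-7B-Chat"  # hypothetical ID; check the release page

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so a 7B model fits on one GPU
    device_map="auto",
)

prompt = "Explain what the Model Openness Framework measures."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```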
Key features include:
- Model Variants: Moxin-7B Base, Instruct, Reasoning, and Vision-Language models
- Full Reproducibility: Open training code, configs, datasets, checkpoints, and logs
- Open Evaluation: Benchmark results on MMLU, HellaSwag, ARC, TruthfulQA, and GSM8K (see the evaluation sketch after this list)
- RL Tuning: Reinforcement learning post-training with GRPO (Group Relative Policy Optimization), fully released (a minimal GRPO sketch also follows the list)
- Apache 2.0 License: Fully permissive for commercial and academic use
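Since the evaluation setup is open, the reported benchmarks can be re-run locally. The sketch below assumes EleutherAI's lm-evaluation-harness (`pip install lm-eval`); the task names follow the harness's v0.4 conventions and may differ in other versions, and the model ID is the same hypothetical placeholder used above.

```python
# Sketch: re-running the listed benchmarks with lm-evaluation-harness.
# Task names and the model ID are assumptions; adjust to your setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=moxin-org/Moxin-7B-Chat,dtype=bfloat16",
    tasks=["mmlu", "hellaswag", "arc_challenge", "truthfulqa_mc2", "gsm8k"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```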
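For the RL tuning item, the core idea of GRPO is to score a group of sampled completions per prompt and normalize rewards within the group, so no learned value function is needed. The snippet below is a generic sketch of that group-relative advantage, not Moxin's released training code.

```python
# Sketch of the group-relative advantage at the heart of GRPO:
# sample a group of completions per prompt, reward each, and
# z-score the rewards within the group.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards per completion."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)  # normalize within each group

# Example: one prompt with a group of 4 sampled completions.
rewards = torch.tensor([[1.0, 0.0, 0.5, 1.0]])
print(group_relative_advantages(rewards))
# Completions above the group mean get positive advantage, below get negative.
```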
Use cases include:
- Academic research on LLM training and evaluation pipelines
- Domain-specific instruction fine-tuning for industry use (see the fine-tuning sketch after this list)
- Benchmarking and comparison in open-source LLM evaluations
- Building transparent AI systems with verifiable lineage and control
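For the fine-tuning use case, one common approach is to attach LoRA adapters and train only those. The sketch below uses the `peft` library; the model ID, target module names (Mistral-style attention projections), and hyperparameters are illustrative assumptions, not Moxin's recipe.

```python
# Sketch: domain-specific instruction tuning of a Moxin checkpoint with
# LoRA adapters via peft. All names and hyperparameters are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("moxin-org/Moxin-7B-Chat")  # hypothetical ID

lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed naming)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters train; base weights stay frozen
# From here, train with transformers.Trainer or trl's SFTTrainer on an
# instruction dataset, then merge or distribute the adapter weights.
```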

