Void is an open-source AI code editor built on the VS Code codebase. It sends AI completion and chat requests directly to your chosen LLM provider, so your code and prompts never pass through managed servers like Cursor's or Windsurf's.
The Problem
AI code editors like Cursor and Windsurf route all code context and completions through their own cloud infrastructure. Every file snippet, chat message, and completion request passes through a managed backend. For developers working on proprietary codebases, on security-sensitive projects, or in regulated industries, this creates data-exposure risk with no opt-out path.
How Void Solves It
Void is a fork of the VS Code codebase with a native AI integration layer that connects directly to the LLM API of your choice: Anthropic, OpenAI, Ollama for local models, or any compatible provider. No Void-managed backend sees your code. The editing experience mirrors Cursor's inline editing and chat panel, but the network path runs directly to the provider you configure. Void is released under the Apache-2.0 license.
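Void's own configuration isn't shown here, but the direct-path idea can be sketched with a plain OpenAI-compatible chat-completions request aimed at a local Ollama server (Ollama exposes an OpenAI-compatible API under `/v1`). The base URL and the `llama3` model tag below are illustrative assumptions, not Void defaults:

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible chat-completions request.

    Pointing base_url at a local Ollama instance keeps the whole
    round trip on your machine -- no managed backend in the path.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local setup: Ollama serving a model on its default port.
req = build_chat_request("http://localhost:11434", "llama3", "Explain this diff.")
print(req.full_url)  # http://localhost:11434/v1/chat/completions
```

Swapping the base URL for a hosted provider's endpoint (plus an API key header) is the only change needed; the request never touches an intermediary.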
Key Features
- Direct LLM connections: code and prompts route to your provider, not a Void proxy
- Compatible with Anthropic, OpenAI, Ollama, and any OpenAI-compatible API
- Inline AI editing, chat panel, and multi-file context support
- VS Code extension compatibility: use your existing extension ecosystem
- Local model support via Ollama for fully offline AI assistance on sensitive code
Who It's For
Void is best for developers and engineering teams working on proprietary or security-sensitive codebases who want AI-assisted coding without their code passing through a managed AI-editor cloud.
Compared to Cursor
Unlike Cursor, which routes code context through Cursor's own cloud infrastructure, Void sends AI requests directly to the LLM provider you configure. Cursor has more polished agentic features and active commercial development; Void trades some polish for a clear data path and an open, auditable codebase.

