For simplified tool guides, see User Tooling →
These tools help you implement the VIBES ecosystem — from generating .ai-audit/ data during development, to attesting integrity with VERIFY, to scoring risk with PRISM. Whether you're adding transparency to a personal project or rolling out AI provenance tracking across an organization, these tools make adoption practical.
The ecosystem is open — if your favorite AI coding tool isn't listed here, consider building an integration. The Implementors Guide has everything you need to get started, and every new tool strengthens the standard for everyone.
A Rust CLI tool spanning the ecosystem's standards — validates .ai-audit/ data (VIBES), signs cryptographic attestations (VERIFY), and computes risk scores (PRISM). The reference implementation for the VIBES ecosystem.
A development-only tool for generating synthetic .ai-audit/ directories and validating v1.1 spec conformance. Designed for tool implementors, spec maintainers, and CI pipelines — not for end users. Generates controlled test scenarios including rebase flows, PRISM severity spectrums, concurrency stress, and intentionally malformed data for negative testing.
A cross-platform desktop application for orchestrating multiple AI coding agents simultaneously. Maestro supports Claude Code, OpenAI Codex, OpenCode, and Factory Droid with parallel agent management, git worktree integration, auto-run playbooks, group chat coordination, and a mobile remote access interface. Designed for power users who run lengthy unattended automation sessions across parallel projects.
A VIBES-compliant hook for Claude Code that automatically generates .ai-audit/ data as you work. Captures environment context, prompt hashes, session boundaries, and line-level annotations in real time — no workflow changes required. Configurable assurance level from Low to High with optional chain-of-thought capture. Generated audit data is compatible with VERIFY attestation and PRISM risk scoring.
A VIBES integration for Google Gemini CLI that emits standards-compliant audit data during interactive and scripted sessions. Tracks Gemini model variants, tool use, and file modifications with automatic annotation generation. Supports all three assurance levels with configurable output to .ai-audit/. Generated audit data is compatible with VERIFY attestation and PRISM risk scoring.
A VIBES plugin for OpenAI Codex CLI that records AI provenance data for every code generation and edit session. Captures OpenAI model metadata, prompt context, and file-level change tracking with automatic annotation output. Integrates with Codex's sandbox execution model to record tool invocations and shell commands alongside code changes. Generated audit data is compatible with VERIFY attestation and PRISM risk scoring.
Source available at launch.

Integrate VIBES tooling into your development workflow for maximum value:
- vibecheck verify to catch schema violations before they enter history
- vibecheck anchors to detect and remap drifted line annotations
- vibecheck risk --ci to gate merges on PRISM thresholds
- vibecheck risk --format json to inject risk summaries into PR descriptions
- vibecheck attest to sign and submit a cryptographic attestation to the public registry

See the Implementors Guide for integration patterns, the VIBES standard for data format details, VERIFY for attestation integration, and PRISM for risk scoring setup.
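The commands above might be wired into a CI gate roughly like this — a sketch only, assuming vibecheck is installed on the runner and using just the flags listed here:

```shell
# CI gate sketch -- assumes vibecheck is on the runner's PATH.
set -euo pipefail

vibecheck verify                            # fail fast on schema violations
vibecheck risk --ci                         # exit nonzero above the PRISM threshold
vibecheck risk --format json > risk.json    # artifact to inject into the PR description
vibecheck attest                            # sign and submit the attestation
```

Because both verify and risk --ci exit nonzero on failure, placing them before the build step keeps bad provenance data out of history entirely.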
Whether you're building a standalone CLI, an IDE extension, or integrating VIBES into an existing AI coding agent — here's the architecture to follow.
.ai-audit/ Data

Your tool should write annotation files conforming to the VIBES v1.1 spec. Each annotation captures: file path, line range, model identifier, prompt hash, confidence signal, and timestamp. Use the vibes-test-harness to validate your output against the schema.
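An emitter for those fields could look like the sketch below. The field names, the annotations.jsonl filename, and the JSON-lines layout are illustrative assumptions — the VIBES v1.1 schema is normative, and the harness should be the final arbiter:

```python
import hashlib
import json
import time
from pathlib import Path

def write_annotation(audit_dir, file_path, start_line, end_line,
                     model, prompt, confidence):
    """Append one annotation record to the audit directory.

    Field names here are illustrative; consult the VIBES v1.1 spec
    for the normative schema.
    """
    annotation = {
        "file": file_path,
        "lines": {"start": start_line, "end": end_line},
        "model": model,
        # Store a hash rather than the prompt itself, so the record
        # carries provenance without leaking prompt contents.
        "prompt_hash": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "confidence": confidence,
        "timestamp": int(time.time()),
    }
    out = Path(audit_dir) / "annotations.jsonl"  # hypothetical filename
    out.parent.mkdir(parents=True, exist_ok=True)
    with out.open("a", encoding="utf-8") as f:
        f.write(json.dumps(annotation) + "\n")
    return annotation
```

Appending one JSON object per line keeps writes atomic enough for concurrent agent sessions without a locking scheme.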
Let users configure Low, Medium, or High assurance. Low captures basic provenance (model + timestamp). Medium adds prompt hashes and session context. High includes chain-of-thought and full prompt capture. Default to Medium for the best transparency-to-privacy balance.
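One way to enforce those tiers is a single field allow-list consulted at write time. The per-level field sets below restate the description above; the exact sets are defined by the spec, not this sketch:

```python
# Illustrative mapping of assurance levels to captured fields;
# the normative field set per level comes from the VIBES spec.
ASSURANCE_FIELDS = {
    "low": {"model", "timestamp"},
    "medium": {"model", "timestamp", "prompt_hash", "session_id"},
    "high": {"model", "timestamp", "prompt_hash", "session_id",
             "prompt_text", "chain_of_thought"},
}

def filter_record(record, level="medium"):
    """Strip a full capture down to what the configured level allows."""
    allowed = ASSURANCE_FIELDS[level]
    return {k: v for k, v in record.items() if k in allowed}
```

Filtering at the last moment before serialization means sensitive data (full prompts, chain-of-thought) never reaches disk unless the user opted into High.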
Integrate with VERIFY for cryptographic attestation. Generate DSSE envelopes, support Ed25519 signing, and optionally implement tool-provider cosigning so your tool can vouch for the provenance data it creates.
Feed your annotations into PRISM for automatic risk scoring. Emit the severity signals PRISM needs — autonomy level, scope of change, model confidence — and let consumers gate their CI/CD pipelines on the result.
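As a shape for that hand-off, the sketch below combines the three named signals into a single score. The weights and the linear model are purely illustrative assumptions — PRISM defines the real scoring model:

```python
# Hypothetical weights -- PRISM's actual model is normative, not this.
WEIGHTS = {"autonomy": 0.5, "scope": 0.3, "confidence": 0.2}

def risk_score(autonomy: float, scope: float, confidence: float) -> float:
    """Combine severity signals (each normalized to 0..1) into one score.

    Higher autonomy and wider scope raise risk; higher model
    confidence lowers it, hence the (1 - confidence) term.
    """
    raw = (WEIGHTS["autonomy"] * autonomy
           + WEIGHTS["scope"] * scope
           + WEIGHTS["confidence"] * (1.0 - confidence))
    return round(raw, 3)
```

Emitting the raw signals alongside any derived score lets consumers re-weight for their own CI/CD gates instead of trusting your tool's defaults.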
Use the vibes-test-harness to run your tool against the full v1.1 conformance suite: baseline fixtures, rebase scenarios, PRISM severity spectrums, and intentionally malformed data. Submit passing results with your pull request to the ecosystem.
Ready to start? The Implementors Guide has code examples, data format references, and agent prompt templates to get you shipping in hours.
Whether you want to join the community, build new tooling, or champion adoption — there's a path for you.
Contribute to the VIBES specification, propose new tool integrations, and help shape the ecosystem. Report issues, submit PRs, and collaborate with fellow toolsmiths.
Get involved →

Write a VIBES integration for your favorite AI coding tool. Every new emitter, validator, or pipeline plugin makes the standard more useful for the entire ecosystem.
Implementation guide →

Advocate for VIBES adoption in your organization. Make the case for AI code transparency tooling — from individual developer workflows to enterprise-wide deployment.
Resources →