At itsavibe.ai, we're a community-driven initiative focused on establishing best practices, standards, and frameworks for AI-generated code. Our mission is to create and catalog best practices for AI-driven code development while identifying and mitigating the risks inherent in this practice.
We believe in the power of AI to transform software development, but we also recognize the importance of responsible adoption and transparent practices.
The ecosystem comprises four complementary standards: VIBES for audit data, VERIFY for cryptographic attestation, PRISM for risk scoring, and EVOLVE for agent learning and governance.
Creator & Lead Developer
Andre Ludwig is a globally recognized cybersecurity and AI security leader with over two decades of experience delivering mission-critical outcomes for Fortune 500 companies, government agencies, and critical infrastructure providers. As Global Practice Leader for Cybersecurity, Privacy, and AI Security at Ankura, he leads engagements across the Americas, EMEA, APAC, and India, helping clients navigate AI risk governance, threat intelligence, privacy compliance, and incident response. Andre combines deep technical expertise — including malware analysis, distributed system design, and applied machine learning — with executive leadership in scaling secure, privacy-first solutions. He has built and led high-performing engineering and security teams at Capital One, InQuest, Bricata, Neustar, and QOMPLX, driving product innovation, revenue growth, and operational transformation.
Andre's experience includes prior consulting work with the Defense Advanced Research Projects Agency (DARPA), where he supported advanced security research and development initiatives, as well as advisory roles across commercial sectors including finance, telecommunications, and large-scale enterprises. He co-founded the Conficker Working Group, led global cyber takedowns such as Operation SMN and Operation Blockbuster, and launched Quad9, a nonprofit DNS security platform now protecting over 150 million users. He continues to advise on national security issues, law enforcement platforms, and cyber startup acceleration through roles with Pryme Infil, the National Security Institute, and MACH37.
Security Advisor
Pedram Amini is a veteran security researcher, bug bounty pioneer, and founder with extensive experience in cybersecurity. As CTO at InQuest (acquired by OPSWAT in 2024), he developed Deep File Inspection (DFI) technology for real-time threat detection. Previously, he founded the Zero Day Initiative at TippingPoint and led Jumpshot before its acquisition by Avast. At itsavibe.ai, Pedram advises on security practices for AI-generated code and leads initiatives to identify and mitigate potential vulnerabilities in AI systems.
AI and Machine Learning Lead
Zachary Hanif is a seasoned AI and cybersecurity leader currently serving as Head of AI, ML & Data and Vice President of Traffic Intelligence at Twilio, where he leads the development and deployment of AI capabilities at internet scale. Previously, he served as Head of AI, Machine Learning & Software Development Platforms at Capital One, CTO of Eastern Foundry, Director of Applied Data Science at Novetta, and was an early employee at Endgame where he pioneered the application of machine learning to network traffic analysis for malware detection. Zachary is a founding member of a chapter of The Honeynet Project and a published researcher whose work includes seminal studies on PE32 Rich Header malware triage across nearly one million samples and the SKALD architecture for scalable malware analysis. He has presented at Black Hat USA, RSA Conference, O'Reilly Strata Data, MLconf, and NVIDIA GTC among others. He holds a Master's degree in Computer Science from the University of Maryland and a Bachelor's from Georgia Institute of Technology. Zachary previously served as an advisor to InQuest and brings deep expertise in operationalizing machine learning at scale to the itsavibe.ai initiative.
Help establish AI code transparency as the norm. Whether you're presenting to leadership, evaluating vendor practices, or shaping policy — these resources will help you make the case.
AI coding assistants are already writing production code across your engineering teams. Without structured audit data, you have no way to track what was AI-generated, verify its provenance, or assess its risk. VIBES provides the missing accountability layer.
Use these when presenting VIBES to management, security teams, or compliance officers.
There is no widely adopted standard for AI code audit data. Here's how existing approaches fall short.
| Approach | Scope | Limitations |
|---|---|---|
| Git commit messages | Ad hoc labeling | Unstructured, no model/prompt data, easily omitted |
| Vendor telemetry | Usage analytics | Proprietary, not portable, no code-level mapping |
| SBOM / SLSA | Supply chain provenance | Designed for dependency tracking, not AI generation context |
| Internal policies | Process controls | Manual enforcement, no machine-readable format, org-specific |
| VIBES Ecosystem | Full AI code lifecycle | Open standard, structured data, attestation, risk scoring, agent governance |
Industry research and regulatory trends that support the case for AI code transparency.
Print or share this summary with stakeholders.
Base data standard for AI code audit metadata. Records model, prompt, session, and human review data in structured JSON.
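To make the idea concrete, here is a minimal sketch of what such a record could look like. The field names below are illustrative assumptions, not the actual VIBES schema; consult the specification for the real format.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical audit record: every field name here is an assumption
# for illustration, not the normative VIBES schema.
prompt = "Refactor the auth middleware to use async handlers."
record = {
    "vibes_version": "1.0",                              # schema version
    "model": "example-model-v1",                         # generating model id
    "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # hash, not raw prompt
    "session_id": "a1b2c3d4",                            # groups related generations
    "files": ["src/auth/middleware.py"],                 # code the generation touched
    "human_review": {"reviewed": True, "reviewer": "jdoe"},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
print(json.dumps(record, indent=2))
```

Hashing the prompt rather than storing it verbatim is one way to keep audit data shareable without leaking proprietary context.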
Cryptographic attestation layer. Uses DSSE envelopes and Ed25519 signatures to create tamper-proof audit trails.
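As a rough sketch of the mechanics, the snippet below signs a payload with Ed25519 and wraps it in a DSSE-style envelope using DSSE's pre-authentication encoding (PAE). It assumes the third-party `cryptography` package, and the payload type and envelope field names are illustrative rather than the VERIFY specification.

```python
import base64
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE pre-authentication encoding: what actually gets signed."""
    return b" ".join([
        b"DSSEv1",
        str(len(payload_type)).encode(), payload_type.encode(),
        str(len(payload)).encode(), payload,
    ])

def sign_envelope(key: Ed25519PrivateKey, payload_type: str, payload: bytes) -> dict:
    # Sign the PAE, not the raw payload, so type and body are both bound.
    sig = key.sign(pae(payload_type, payload))
    return {
        "payloadType": payload_type,
        "payload": base64.b64encode(payload).decode(),
        "signatures": [{"sig": base64.b64encode(sig).decode()}],
    }

key = Ed25519PrivateKey.generate()
envelope = sign_envelope(key, "application/vnd.vibes+json", b'{"model": "example"}')

# Verification: recompute the PAE and check it against the signature.
# Raises InvalidSignature if the envelope has been tampered with.
key.public_key().verify(
    base64.b64decode(envelope["signatures"][0]["sig"]),
    pae(envelope["payloadType"], base64.b64decode(envelope["payload"])),
)
```

Because the signature covers the encoded payload type as well as the body, an attacker cannot re-label a signed payload as a different kind of attestation.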
Risk scoring extension. Automated severity bands and policy-driven CI/CD gating for AI-generated code changes.
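A severity-band gate of this kind might look like the sketch below. The thresholds, band names, and gating policy are illustrative assumptions, not values defined by PRISM.

```python
# Hypothetical severity banding and CI gate; thresholds and band
# names are illustrative, not the PRISM specification.
BANDS = ["low", "medium", "high", "critical"]

def severity_band(score: float) -> str:
    """Map a normalized risk score in [0, 1] to a severity band."""
    if score < 0.25:
        return "low"
    if score < 0.5:
        return "medium"
    if score < 0.75:
        return "high"
    return "critical"

def ci_gate(score: float, max_band: str = "medium") -> bool:
    """Return True if the change may merge under the given policy."""
    return BANDS.index(severity_band(score)) <= BANDS.index(max_band)

assert ci_gate(0.3)       # medium-risk change passes a medium policy
assert not ci_gate(0.8)   # critical-risk change is blocked
```

In a pipeline, the gate's boolean result would typically drive the exit code of a CI step, so that high-risk AI-generated changes fail the build until reviewed.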
Agent governance framework. Decision records, feedback loops, and guardrails for autonomous AI agent operations.
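One way to picture a decision record with a guardrail check is sketched below. The record structure and the allow-list guardrail are hypothetical illustrations, not the EVOLVE specification.

```python
from dataclasses import dataclass

# Hypothetical decision record: fields are illustrative, not the
# normative EVOLVE format.
@dataclass
class DecisionRecord:
    agent_id: str
    action: str
    rationale: str
    approved: bool = False

def guardrail(record: DecisionRecord, allowed_actions: set[str]) -> DecisionRecord:
    """Approve the decision only if the action is on the allow-list."""
    record.approved = record.action in allowed_actions
    return record

rec = guardrail(
    DecisionRecord("agent-7", "open_pr", "tests pass and diff is small"),
    allowed_actions={"open_pr", "comment"},
)
blocked = guardrail(
    DecisionRecord("agent-7", "force_push", "history cleanup"),
    allowed_actions={"open_pr", "comment"},
)
```

Persisting these records gives the feedback loop something concrete to learn from: which actions were proposed, why, and whether the guardrails allowed them.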
Open & Vendor-Neutral — No lock-in. Adopt at your pace. Start with VIBES Low assurance and grow as your needs evolve.
Learn more at itsavibe.ai • Join the community on SimpleX Chat • Contribute on GitHub
Start by introducing the standards to your team. Present the talking points above, share the one-pager, and begin with a pilot project using VIBES Low assurance.
We're building a diverse community of developers, researchers, and industry professionals passionate about the future of AI-generated code. Join the conversation, share ideas, ask questions, and help shape the future of the VIBES ecosystem.
We use SimpleX Chat for our community — a private, secure messenger with no user IDs or tracking. Discuss the standards, get implementation help, share what you're building, and connect with the team directly.
New to SimpleX? Download the app first (iOS, Android, desktop), then tap the group link to join.