A six-step loop that turns vibe-coded security debt into concrete fixes — using the same AI tools you already rely on.
The assessment covers 13 security domains — from OWASP Top 10 classics to 2026-era threats like Agentic AI & MCP Security and AI Governance & Compliance. Work through all of them or jump straight to the ones most relevant to your stack.
Each question maps to a real vulnerability class. Every question has a plain-English hint explaining why it matters and what attackers exploit when it's missing. No security degree required.
Questions are weighted 1–3 based on exploitability. Critical issues (weight 3) like injection, broken access control, and unsanctioned AI agent access count more than best practices. Your score reflects real-world risk, not a checkbox count.
Every failing category ships with a ready-made prompt you can paste directly into Cursor, Windsurf, Claude Code, or any other AI coding tool. Each prompt is scoped to the specific vulnerability class so the AI knows exactly what to fix.
Export a PDF that includes your full score breakdown, per-category results, all the AI fix prompts, and a detailed calculation appendix. Share it with your team, a CISO, or a client.
Security is a cycle, not a checkbox. After using the prompts to patch vulnerabilities, retake the assessment to verify your score improved. Track progress over time as your app grows.
Your score is calculated as the sum of weights for "Yes" answers divided by the total weight of all answered questions, expressed as a percentage. Only questions you answer count toward your score — skipping a question doesn't penalise you.
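That formula can be sketched in a few lines of Python (the question weights and answers below are hypothetical; `None` marks a skipped question):

```python
def security_score(answers):
    """Score = weight of 'Yes' answers / total weight of answered
    questions, as a percentage. Skipped questions (answer is None)
    are excluded entirely, so they neither help nor hurt."""
    answered = [(weight, ans) for weight, ans in answers if ans is not None]
    if not answered:
        return 0.0
    earned = sum(weight for weight, ans in answered if ans)  # "Yes" weight
    possible = sum(weight for weight, _ in answered)         # answered weight
    return 100 * earned / possible

# Hypothetical run: two weight-3 passes, one weight-2 fail,
# one weight-1 question skipped -> 100 * 6 / 8 = 75.0
score = security_score([(3, True), (3, True), (2, False), (1, None)])
```

Note that skipping the weight-1 question changes the denominator, not the numerator, which is why unanswered questions carry no penalty.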
Weight 3 (critical): Immediate compromise risk if missing — injection, broken access control, exposed secrets, unsanctioned agent access.
Weight 2: Significant risk but typically needs attacker setup — session management, MFA, log tampering.
Weight 1: Reduces attack surface without directly blocking exploits — SBOMs, AI code inventory, key rotation cadence.
Strong baseline. Maintain it with regular re-assessments.
Exploitable gaps exist. Prioritise weight-3 failures first.
Critical issues need immediate attention before shipping.
Covering the OWASP Top 10 (2021), OWASP LLM Top 10 v2025, NIST AI RMF, MITRE ATLAS v5.1.0, and EU AI Act obligations.
Do I need to complete all 13 categories?
No. You can answer as few or as many questions as you like — the score is calculated only from the questions you answered. Even a partial assessment surfaces gaps worth fixing.
How accurate is the score?
The assessment is self-reported, so accuracy depends on your honest answers. It's designed as an awareness and prioritisation tool, not a penetration test. A score of 80%+ means you've addressed the most common attack surfaces — it doesn't mean you're breach-proof.
The AI prompts reference my specific code — what do I replace?
Each prompt contains placeholder sections like [DESCRIBE YOUR AUTH SETUP HERE] or [PASTE package.json HERE]. Replace those with the relevant code or description from your project before sending to your AI tool.
Is my data stored anywhere?
No. The entire assessment runs in your browser. No answers, scores, or code snippets are sent to any server. When you close the tab, everything is gone.
What is the Agentic AI & MCP Security category?
As of 2026, most "vibe coding" workflows involve agentic tools (Cursor, Windsurf, Claude Code) with MCP server integrations that have persistent filesystem, database, and API access. This category covers prompt injection via untrusted content, over-privileged agents, unsanctioned autonomous production actions, and secrets exposure through agent tool calls — attack techniques tracked in MITRE ATLAS v5.1.0.
What is AI Governance & Compliance and why does it matter now?
The EU AI Act reached full enforcement on August 2, 2026, requiring risk-tier classification, technical documentation, and human oversight for high-risk AI systems. This new category also covers NIST AI RMF mapping, AI Bill of Materials (AI-BOM), user-facing transparency disclosures (Article 50), AI incident response planning, and alignment with ISO/IEC 42001:2023. If your product uses AI and has EU users, this is no longer optional.
What is slopsquatting?
Slopsquatting is an attack where malicious actors pre-register package names that AI coding tools commonly hallucinate. Studies show ~20% of AI-generated code references packages that don't exist in official registries, and 58% of those hallucinations are consistent — making it easy for attackers to predict and pre-register them. The Supply Chain category now includes a question specifically about verifying AI-suggested package names before installing.
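The verification step above can be partially automated: before installing an AI-suggested package, confirm the name actually resolves in the official registry. A minimal sketch against PyPI's public JSON API (the npm registry exposes a similar endpoint; function names here are illustrative):

```python
import urllib.request
import urllib.error

PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def registry_url(name: str) -> str:
    """Build the PyPI JSON API URL for a package name."""
    return PYPI_JSON.format(name=name)

def exists_on_pypi(name: str, timeout: float = 10.0) -> bool:
    """Return True if `name` is a registered project on PyPI.
    A 404 means the name is unregistered: possibly an AI
    hallucination, and a candidate for slopsquatting."""
    try:
        with urllib.request.urlopen(registry_url(name), timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

Existence alone isn't proof of safety (an attacker may already have pre-registered a hallucinated name), so also check the project's age, maintainers, and download history before trusting it.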
What frameworks does this assessment cover?
The 13 categories map to: OWASP Top 10 (2021), OWASP LLM Top 10 v2025 (LLM01–LLM10), NIST AI RMF, NIST AI 600-1 (generative AI profile), NIST CSF 2.0, NIST SP 800-218A (secure AI development), MITRE ATLAS v5.1.0 (AI adversarial techniques), CSA AI Safety Initiative, EU AI Act (full enforcement August 2026), EU Cyber Resilience Act (September 2026), and ISO/IEC 42001:2023.
Every question traces back to an authoritative source. Here are the standards we pull from, with direct links.
OWASP Top 10 (2021): Backbone of 8 of 13 categories
OWASP LLM Top 10 v2025: LLM01–LLM10 mapped to AI categories
NIST AI Risk Management Framework: GOVERN / MAP / MEASURE / MANAGE
NIST AI 600-1 (GenAI Profile): Generative AI-specific guidance, July 2024
NIST CSF 2.0: Adds GOVERN function to core CSF
NIST SP 800-218A: Secure software development for AI/ML
MITRE ATLAS v5.1.0: AI adversarial TTPs, updated Nov 2025
CSA AI Safety Initiative: Cloud AI security & governance guidance
EU AI Act: Full enforcement August 2, 2026
EU Cyber Resilience Act: Partial enforcement September 11, 2026
ISO/IEC 42001:2023: AI management system standard
See a framework missing? Visit the About page or open an issue on GitHub.
Takes 10–15 minutes. Runs entirely in your browser. No sign-up required.
Start the assessment