Take the Assessment
14 questions. 3 minutes. Discover your AI-native engineering archetype.
The 5 Archetypes
The Explorer (0-20)
“The AI revolution hasn’t hit your workflow yet”

You’re aware of AI coding tools and have started experimenting, but AI isn’t yet a consistent part of how you work. You’re in discovery mode — trying things out, seeing what sticks.

Growth focus: Pick one task you do daily and try using AI for it consistently for two weeks. Build the habit before expanding scope.
The Adopter (21-40)
“AI assists you, but you’re still in the driver’s seat”

AI has found a place in your workflow — you use it for specific tasks and it genuinely helps. But you’re still doing most of the orchestration, context management, and quality control manually.

Growth focus: Start writing structured specifications before coding. Give AI more context upfront and let it do more of the heavy lifting on implementation.
The Integrator (41-60)
“AI is embedded in your workflow — you’d feel the loss”

AI is a core part of how you build software. You have established practices, your tools work together, and you’d feel the impact significantly if AI were removed from your workflow.

Growth focus: Connect your tools and build feedback loops. Invest in context management so AI can do more with less manual setup.
The Conductor (61-80)
“Your agents do the work, you set the direction”

You’ve moved beyond using AI as an assistant — you’re orchestrating AI workflows, maintaining rich context, and focusing your energy on direction-setting and quality oversight.

Growth focus: Build automated feedback loops and invest in observability. Let your systems learn from outcomes and self-improve.
The Architect (81-100)
“You’re defining how the industry builds with AI”

Your engineering practice is fully AI-native. Specs drive delivery, context flows seamlessly, feedback loops close automatically, and your systems continuously evolve. You’re not just using AI — you’re pioneering how it’s used.

Growth focus: Share what you’ve built. Write about your workflows, contribute to open-source tooling, and help others reach this level.
How It Works
7 Capabilities
The assessment measures your maturity across 7 capabilities of AI-native engineering, drawn directly from the AI-Native Engineering Maturity Model:

| Capability | What It Measures |
|---|---|
| Spec-Driven Development | How you define, structure, and maintain specifications that drive AI-assisted development |
| Context Management | How your AI tools access, retain, and evolve project knowledge |
| Agent Collaboration | How AI agents participate in and coordinate across your development workflow |
| Observability & Feedback | How you measure, track, and learn from AI performance |
| Governance & Trust | How you ensure quality, safety, and trustworthiness of AI outputs |
| Continuous Delivery | How connected and automated the path is from requirements to production |
| Organizational Adaptation | How your team has evolved its structure and practices around AI |
Situational Judgment Format
Every question presents 5 behavioral descriptions at different maturity levels (L1-L5). You pick the one closest to your current practice — no “strongly agree/disagree” scales, no aspirational bias. Each answer IS a maturity level. Answers are shuffled for each question so there’s no pattern to game.

Scoring

Your responses produce:
- A score per capability (0-100)
- An overall maturity score (0-100, equal-weight average of 7 capabilities)
- An archetype based on your overall score
- A 7-spoke radar chart showing your profile across all capabilities
- Growth recommendations for your lowest-scoring capabilities
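The scoring pipeline above can be sketched in a few lines of Python. The archetype bands and the equal-weight average over 7 capabilities come from this page; the linear mapping of a chosen level L1-L5 onto 0-100, and all function names, are illustrative assumptions rather than the published scoring spec.

```python
import random

# Capability names from the maturity model.
CAPABILITIES = [
    "Spec-Driven Development", "Context Management", "Agent Collaboration",
    "Observability & Feedback", "Governance & Trust", "Continuous Delivery",
    "Organizational Adaptation",
]

# Archetype bands keyed by the inclusive upper bound of the overall score.
ARCHETYPES = [
    (20, "The Explorer"), (40, "The Adopter"), (60, "The Integrator"),
    (80, "The Conductor"), (100, "The Architect"),
]

def shuffled_options(options: dict) -> list:
    """Shuffle the five L1-L5 answer options so their order carries no pattern."""
    items = list(options.items())
    random.shuffle(items)
    return items

def level_to_score(level: int) -> float:
    """Map a chosen maturity level (1-5) to 0-100. Linear spacing is assumed."""
    return (level - 1) * 25.0

def overall(capability_scores: dict) -> float:
    """Equal-weight average of the seven capability scores."""
    return sum(capability_scores.values()) / len(capability_scores)

def archetype(score: float) -> str:
    """Find the archetype band containing the overall score."""
    for upper, name in ARCHETYPES:
        if score <= upper:
            return name
    raise ValueError("score must be 0-100")

# A respondent answering L3 everywhere scores 50 overall: The Integrator.
scores = {c: level_to_score(3) for c in CAPABILITIES}
print(archetype(overall(scores)))
```

Under these assumptions, the lowest entries in `scores` would also be the natural targets for growth recommendations.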
Methodology
The assessment is grounded in the AI-Native Engineering Maturity Model, which defines 7 capabilities across 6 maturity levels (L0-L5). Each question’s answer options trace to specific behavioral descriptions in the capability matrix. The question design was validated against 6 academic papers from arxiv.org and cross-referenced with EPAM’s AI adoption survey (273 responses). No competing “developer practice maturity” assessment exists — this is the first to combine individual practice maturity with behavioral self-assessment. The full question bank, scoring algorithm, and design rationale are open-source:

Assessment Design on GitHub
Questions, scoring spec, and design decisions — all open for review and contribution.