When I start a new fractional CTO engagement, the first thing I do is assess the engineering organization. Not by reading code — by scoring six dimensions that predict whether this team can scale or whether they're going to hit a wall.
I call this the NEOF (Nebari Engineering Operating Framework) maturity assessment, and I've run it across 20+ companies ranging from seed-stage startups to PE-backed portfolio companies. The scores vary wildly. The pillar structure never does — these are the six things that matter everywhere.
The Six Pillars
1. Delivery (Can you ship reliably?)
This isn't "how fast do you ship" — it's "how predictably and safely do you ship." A team that deploys every two weeks with zero incidents scores higher than a team that deploys daily and breaks production twice a month.
Score 1 (Chaos): Deployments are manual, stressful, multi-hour affairs. Only one person knows how to do it. Releases are unpredictable.
Score 3 (Managed): CI/CD is in place. Deployments take minutes. The team has a defined release cadence and mostly hits it. Rollbacks are possible but not automated.
Score 5 (Strategic): Deployments are automated, zero-downtime, and boring. Feature flags control rollout. The team can ship multiple times per day without anyone's heart rate increasing.
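The feature-flag rollout mentioned above is worth making concrete. A minimal sketch of percentage-based rollout, assuming a deterministic hash of the flag name plus user ID (the function and flag names here are illustrative, not from any particular flag product):

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing (flag + user_id) gives each user a stable bucket per flag,
    so widening a rollout from 10% to 50% only adds users -- it never
    flips anyone who already had the feature back off.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable value in 0..99
    return bucket < percent
```

The stability property is the point: it's what lets a team ship multiple times a day and ramp exposure gradually instead of betting production on a single big-bang release.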
2. Reliability (Do you find problems before your customers do?)
The question isn't whether your systems fail — they will. The question is how quickly you detect, respond, and recover.
Score 1 (Chaos): No monitoring. You find out about outages because a customer emails you. The last outage lasted 4+ hours before anyone noticed.
Score 3 (Managed): Monitoring covers core services. Alerts page the right people. Mean time to detection is under 15 minutes. Post-incident reviews happen but aren't consistently actioned.
Score 5 (Strategic): Full observability stack. SLOs defined and tracked. Error budgets drive prioritization. Incidents are detected in seconds, and the team has practiced runbooks for common failure modes.
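The error-budget arithmetic behind a Score-5 team is simple enough to show. A sketch, assuming an availability SLO measured over a rolling window (the function name is mine, not from any SRE tooling):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime for a given availability SLO over a window.

    The budget is just the complement of the SLO applied to the window:
    a 99.9% target over 30 days leaves roughly 43 minutes of downtime.
    """
    total_minutes = window_days * 24 * 60
    return (1 - slo) * total_minutes
```

When the team has burned most of that budget, reliability work jumps the queue ahead of features; when the budget is healthy, they can ship more aggressively. That's what "error budgets drive prioritization" means in practice.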
3. Security (Are you a breach waiting to happen?)
Most scaling companies are one shared admin password away from a headline. Security maturity isn't about having a CISO — it's about basic hygiene.
Score 1 (Chaos/Danger): Admin keys are shared. No MFA on critical infrastructure. Secrets are hardcoded or in plaintext config files. Nobody has done a security review in the last year.
Score 3 (Managed): MFA enforced everywhere. Secrets management in place (Vault, AWS Secrets Manager). Access follows least-privilege principles. Dependencies are scanned for known vulnerabilities.
Score 5 (Strategic): SOC 2 certified or equivalent. Security is embedded in the development process (SAST/DAST in CI). Penetration testing is annual. Incident response plan is documented and tested.
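The "secrets hardcoded in config files" failure mode is cheap to catch in CI. A toy version of that check, assuming a couple of well-known patterns (real scanners like gitleaks or trufflehog ship hundreds of rules; this is only a sketch of the idea):

```python
import re

# Illustrative patterns only. AKIA is the documented prefix for AWS
# access key IDs; the PEM header catches committed private keys.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret patterns found in a blob of text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```

Wiring a check like this into CI (and failing the build on a hit) is the difference between finding a leaked key in review and finding it in an incident report.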
4. Cost Efficiency (Are you spending wisely?)
Not "are you spending as little as possible" — but "are you getting proportional value for your infrastructure and tooling spend?"
Score 1 (Chaos): Nobody knows what the cloud bill is. Resources are over-provisioned because someone was too busy to right-size. Dev environments are as expensive as production.
Score 3 (Managed): Cloud spend is tracked monthly. Someone reviews the bill. Non-production resources are appropriately sized. Cost anomaly alerts are in place.
Score 5 (Strategic): Unit economics tracked (cost per customer, cost per transaction). Reserved instances and savings plans are optimized. Cost is a factor in architecture decisions. FinOps practices are embedded.
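The unit economics a Score-5 team tracks reduce to a small rollup. A sketch with made-up numbers (the figures and field names here are illustrative, not from any client):

```python
def unit_costs(monthly_cloud_spend: float,
               customers: int,
               transactions: int) -> dict[str, float]:
    """Illustrative unit-economics rollup for a monthly cloud bill."""
    return {
        "cost_per_customer": monthly_cloud_spend / customers,
        "cost_per_transaction": monthly_cloud_spend / transactions,
    }

# e.g. $42,000/month across 1,200 customers and 3M transactions
# works out to $35.00 per customer and $0.014 per transaction
```

The value isn't the arithmetic; it's the trend. A bill that grows 40% while cost per customer stays flat is healthy scaling. A flat bill with cost per customer climbing is a problem hiding in a good-looking total.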
5. Strategic Alignment (Are you building the right things?)
The most common maturity gap I see. The engineering team is executing well — on the wrong things. Alignment means the engineering roadmap connects to business outcomes, not just feature requests.
Score 1 (Chaos): The roadmap is whatever the loudest stakeholder asked for last. Engineers don't know why they're building what they're building. Product and engineering are adversarial.
Score 3 (Managed): A roadmap exists and is shared. Product and engineering collaborate on prioritization. Business goals inform sprint planning, at least loosely.
Score 5 (Strategic): Engineering roadmap explicitly maps to business OKRs. Engineers understand customer problems, not just ticket descriptions. Technology investments are framed in business terms and reviewed quarterly.
6. Team Health (Will your people stay?)
Culture, retention, psychological safety, and sustainable pace. A team that scores 5 everywhere else but 1 on Team Health is a ticking time bomb — one key departure away from collapse.
Score 1 (Chaos): High turnover. Hero culture where a few people carry everything. No growth paths. Engineers are burned out and interviewing elsewhere.
Score 3 (Managed): Reasonable turnover. Regular 1:1s happen. Engineers have clarity on expectations. The team is busy but not burning out. Some growth opportunities exist.
Score 5 (Strategic): Voluntary attrition is well below industry average. Engineers have clear growth paths. The team runs sustainable sprints. Psychological safety is high — people flag problems early and without fear.
Running the Assessment
The assessment takes 1-2 weeks. It involves reviewing systems and documentation (not just asking people how they feel), interviewing team members individually, observing actual workflows (deployment process, incident response, sprint ceremonies), and reviewing the last 3-6 months of incidents, deployments, and roadmap changes.
I score each pillar independently, then calculate an overall maturity score. The spider chart — showing all six pillars visually — is usually the slide that gets the most reaction in the executive presentation. When a CEO sees that their team scores 4 on Team Health but 1 on Reliability, the investment case writes itself.
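The aggregation step can be sketched in a few lines. I'm assuming a simple mean across the six pillars here, which is one natural way to produce overall scores like the 2.1 and 3.4 mentioned below; the example scores are invented for illustration:

```python
PILLARS = ["Delivery", "Reliability", "Security",
           "Cost Efficiency", "Strategic Alignment", "Team Health"]

def overall_score(scores: dict[str, int]) -> float:
    """Mean of the six pillar scores, rounded to one decimal.

    A simple mean is one possible aggregation; the per-pillar spread
    matters more than the single number it collapses into.
    """
    return round(sum(scores[p] for p in PILLARS) / len(PILLARS), 1)

example = {"Delivery": 2, "Reliability": 1, "Security": 2,
           "Cost Efficiency": 3, "Strategic Alignment": 2, "Team Health": 4}
# Reliability at 1 sitting next to Team Health at 4 is exactly the
# kind of gap that jumps off the spider chart.
```

The overall number gives executives a single trendline to watch between assessments, but the per-pillar deltas are what the improvement roadmap is actually built from.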
What to Do With the Scores
The scores aren't the point. The gap analysis is.
A company scoring 2.1 overall doesn't need to improve everything at once. They need to fix the pillars where a failure would be most damaging — usually Security and Reliability first (because those are existential risks), then Delivery (because that's how you ship the fixes for everything else).
I typically build a 6-month improvement roadmap: Month 1 stabilizes the critical risks, Months 2-3 build the automation foundation, and Months 4-6 optimize and scale the improvements.
The re-assessment at month 6 is the proof. When a CEO sees their overall score go from 2.1 to 3.4, they understand the value of the investment in a way that no burndown chart could communicate.
Related: Signs Your Engineering Team Needs Outside Help, Engineering Metrics That Matter, What a Fractional CTO Actually Does