An AI-native operating model for product teams

The Tenhaw Way

Agile was tuned for a slower world. AI now does the research, drafting, scaffolding and review work that used to take a sprint each, and the operating model has to keep up. The Tenhaw Way is how a modern product team works when AI is in the room: value priced in currency, outcomes that ladder cleanly to epics and stories, a quarterly timebox that has nowhere for tech debt to hide, and a specialist AI agent at every phase of the workflow. This is the playbook. The Tenhaw product is where it runs.

Why this exists

Most teams we meet have the same problem. They are busy. Boards are full, meetings run on time, retros fill the whiteboard. And yet the business still doesn't trust the roadmap, features ship without anyone measuring whether they worked, and nobody can answer a simple question: "how much value did this quarter produce?"

The issue isn't effort. The operating model was designed for a world before AI could do research, scaffold code, write tests, critique designs and scrape the competition on demand. Agile was tuned for a slower, quieter environment. The Tenhaw Way keeps what still works (flow, feedback, honest measurement) and adds the four things that actually matter now: currency on every outcome, a workflow with real approval gates, a per-phase AI agent on every ticket, and a quarterly timebox that nothing escapes from without being accounted for.

The rest of this page is the blueprint. If you're nodding along, try Tenhaw. The product enforces every rule on this page so you don't have to police it yourself.

The four values

Not wall posters. Operating decisions. Each of the four is enforced in the tool and visible on every ticket you open.

1

Radical transparency

Everyone in the room should know why they're doing the work, how it ladders to an outcome, and what value sits at the top of the ladder. Open any ticket (story, chapter, bug, tech debt) and you see the parent epic, the outcome it rolls up to, and the £ target behind that outcome. Nobody is ever more than one click from knowing why their work matters.

2

Measure value in currency

If value isn't priced, technology is a cost centre. "Increase checkout by 1%" isn't enough. "Increase checkout by 1%, lifting revenue by £1m" gives you a number you can plan against, report on and close the loop on. Work is prioritised to maximise £ delivered, not tickets closed.

Every outcome carries a target value in currency. Every epic carries a planned contribution to that target. When the sum of the epics doesn't cover the target, Tenhaw flags the gap. And if your org historically realises 70% of what it plans, Tenhaw tells you to plan 43% headroom. Calibration grounded in your own delivery data, not optimism.

3

Predictable delivery

You can't plan or prioritise honestly if delivery doesn't land when you said it would. That means story-pointing the work, simulating against your own historical throughput, and letting an optimisation algorithm schedule the portfolio so the whole set of commitments lands on time, not just the loudest one.

No single-point estimates. Tenhaw produces p50/p85 confidence intervals from your flow data and surfaces slips before the deadline arrives. A missed gate surfaces early; you react early.
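
The page doesn't specify the forecasting algorithm behind those p50/p85 intervals; a common way to derive them from flow data is Monte Carlo simulation over the team's own historical throughput. A minimal sketch (the function name and inputs are illustrative, not Tenhaw's API):

```python
import random

def forecast_weeks(backlog_points, weekly_history, trials=10_000, seed=42):
    """Monte Carlo forecast: how many weeks to burn down `backlog_points`,
    resampling from the team's actual weekly throughput history."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        remaining, weeks = backlog_points, 0
        while remaining > 0:
            remaining -= rng.choice(weekly_history)  # draw a plausible week
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return outcomes[int(trials * 0.50)], outcomes[int(trials * 0.85)]

# e.g. 120 points against the last eight weeks of actual throughput
p50, p85 = forecast_weeks(120, [14, 9, 18, 11, 16, 7, 13, 15])
```

The p50 is the "coin flip" date; the p85 is the one you commit to externally, which is what makes slips visible before they happen.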

4

Embrace feedback loops

Reviewing what happened, why and how to improve runs from outcome validation all the way down to team psychological safety. Retros every two weeks. Health checks every month. Outcome validation every month on every live outcome. Demos as often as you can. Daily is fine. The team that talks about its own output honestly, weekly, is the team that compounds.

How work breaks down

Four levels, one direction. Nothing exists in the system that doesn't trace to an outcome.

L1

Outcomes

A business outcome with a currency target. Can span multiple quarters. Often more than one active at once.

L2

Epics

Each directly linked to one outcome, each carrying a planned £ value. The sum of an outcome's epics should at minimum cover its target, ideally with calibration headroom on top.

L3

Stories

Each linked to one epic. A story can't exist without a parent epic, and it can't leave the backlog until that epic has been product-approved.

L4

Chapters

Optional. Created by developers when a story needs to be broken down further during build. Always belong to exactly one story.
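
The four levels and their one-direction links can be expressed as a small data model. A hypothetical sketch (class and field names are illustrative, not Tenhaw's schema); note that a story structurally cannot exist without a parent epic, and a chapter without a parent story:

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    name: str
    target_gbp: int                        # currency target, e.g. 1_000_000
    epics: list = field(default_factory=list)

@dataclass
class Epic:
    name: str
    planned_gbp: int                       # planned contribution to the outcome
    outcome: Outcome                       # exactly one parent outcome

    def __post_init__(self):
        self.outcome.epics.append(self)

@dataclass
class Story:
    name: str
    epic: Epic                             # can't exist without a parent epic

@dataclass
class Chapter:
    name: str
    story: Story                           # optional breakdown; exactly one parent story

checkout = Outcome("Lift checkout conversion", 1_000_000)
epic = Epic("One-click pay", 250_000, checkout)
story = Story("Save card on file", epic)
chapter = Chapter("Tokenise card API", story)
```

Making the parent a required constructor argument is what enforces "nothing exists that doesn't trace to an outcome": the trace is a chain of mandatory references, not a convention.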

Roadmaps are quarterly timeboxes

A roadmap is exactly one quarter. Outcomes can span quarters; epics can't. Each epic belongs to one roadmap (the quarter it ships in). That single constraint is what makes the portfolio predictable.

Every roadmap is created with two dedicated epics already on it:

  • Tech Debt epic: any tech debt raised this quarter is auto-linked here. Debt reduction stays first-class rather than waiting for "when there's time" (there never is).
  • Bug Budget epic: bugs found this quarter are auto-linked here. Burn rate over time tells you whether quality is improving or rotting.

The workflow end-to-end

How a piece of work travels from "we think this is valuable" to "we know this delivered value."

1. Outcome shaping

The business wants a series of outcomes with tangible value attached. An outcome's target is a currency number, not a vibe. Outcomes move through a clear lifecycle (Idea → Exploring → Ready for Review → Committed → Working On → Value Monitoring → Closed), so it's always obvious which ones the team is actually working on.

2. Breaking outcomes into epics

Once an outcome is committed, it's broken down into epics. One epic, ten, a hundred, pick what fits, but every outcome needs at least one. Each epic carries a planned value: its share of the outcome's target.

If an outcome has a £1m target and you plan four epics at £200k each, that's £800k, a £200k hole. Tenhaw flags it. If your historical data shows the team realises 75% of planned value, Tenhaw nudges you to plan £1.3m of epics to land £1m. Transparency, calibrated.
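
The gap check and calibration described above reduce to simple arithmetic. A sketch (the function is illustrative; the realisation rate is the share of planned value your org has historically delivered, measured from past quarters):

```python
def plan_check(target, planned_epics, realisation_rate):
    """Flag the planning gap against `target`, and the calibrated amount
    you'd need to plan to actually land it."""
    planned = sum(planned_epics)
    gap = max(0, target - planned)          # uncovered share of the target
    calibrated = target / realisation_rate  # plan this much to land `target`
    return planned, gap, round(calibrated)

# Four epics at £200k against a £1m target, 75% historical realisation
planned, gap, calibrated = plan_check(1_000_000, [200_000] * 4, 0.75)
# planned = 800_000, gap = 200_000, calibrated = 1_333_333
```

The same arithmetic explains the earlier 70% example: 1 / 0.7 ≈ 1.43, hence the 43% headroom.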

3. Approval gates, the bit agile usually skips

Epics travel through phases 1–4 (product: idea → research → design → ready-for-dev), getting researched, validated and broken into stories along the way. Two hard gates that cannot be bypassed:

  • An epic cannot leave Ready for Dev without product approval, engineering approval, and at least one product-approved story attached.
  • A story cannot leave the backlog until its parent epic is product-approved. You can stage future stories in the backlog, but they can't start.
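
Both gates are mechanical predicates, which is what makes them enforceable in software rather than by discipline. A hypothetical sketch (field names are illustrative, not Tenhaw's data model):

```python
def epic_can_leave_ready_for_dev(epic):
    """Gate 1: product approval, engineering approval, and at least
    one product-approved story attached."""
    return (epic["product_approved"]
            and epic["engineering_approved"]
            and any(s["product_approved"] for s in epic["stories"]))

def story_can_leave_backlog(story, parent_epic):
    """Gate 2: a story stays in the backlog until its parent epic
    is product-approved. Staging future stories is fine; starting isn't."""
    return parent_epic["product_approved"]

epic = {"product_approved": True, "engineering_approved": False,
        "stories": [{"product_approved": True}]}
epic_can_leave_ready_for_dev(epic)   # False: engineering hasn't signed off
```

Because the checks are pure functions of ticket state, there is no "just this once" path around them.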

4. Development

Stories flow through phases 5–8 (todo → in-progress → in-review → done). When a story turns out to be too big mid-build, the developer splits it into chapters. Tech debt and bugs flow through the same phases, and when they do, they're auto-linked to the current quarter's Tech Debt or Bug Budget epic. Debt never floats unbucketed.

5. Release

A done story is reviewed and product-approved, then enters phase 9 (release scheduled). Stories from the same epic can release on different days; that's fine. The Release Planner agent produces the user notes, dev changelog, exec one-pager, rollout plan and comms in three channels.

6. Live monitoring & value monitoring, epic-only

Once shipped, a story enters live monitoring for post-release triage: customer impact, FAQ, support macro, and the continue/watch/rollback call. After that, only epics enter value monitoring. The epic sits there until the value has been confirmed by outcome validation, or until it's deliberately closed with a note that the expected value didn't land. A £200k epic might generate £40k/month, so it takes a few months to validate. That's fine; the whole point is to close the loop honestly.

7. Outcome closure

When every epic under an outcome is closed, the outcome auto-closes with a clear reason: all linked epics done, value realised, or accepted as not realised. Outcomes are reviewed on a rolling basis to confirm they're on track, and can be closed manually whenever it's time to call it.

AI-assisted workflows

A specialist AI agent at every phase of every ticket. Grounded in real data, run in a real sandbox. Not a replacement for Claude Code, Cursor or VS Code, but an accelerant that sits alongside them.

Every ticket, at every phase, has a specialist AI agent attached. Different agents for different phases: a Backlog Curator for idea shaping, a Competitor Scout for market research, a Visual Designer for mockups and screenshot critiques, a Story Architect for breakdown, a Code Reviewer for technical review, a QA Engineer for test plans, a Release Planner for the ship kit, a Live Support agent for post-release triage. Seventeen in total. Each one knows its phase, sees the full hierarchy + outcome value, and calls read tools to verify before it acts.

Per-ticket, per-phase agent

Open any ticket, switch to the AI tab, and the right specialist for the current phase is ready. It sees the outcome, the epic, sibling tickets, the team's recent retro themes and prior agent runs: no context-copying, no hand-holding.

E2B sandbox execution

Agents that need to run something (compile-check code, render a PDF, capture a headless screenshot, scrape competitor pages) do it inside an isolated Firecracker VM. No leaked credentials, no local environment problems, real execution results feeding the analysis.

AI Boost

One click on an outcome. AI Boost burns a larger credit pool and autonomously breaks the outcome into 4–6 epics, 20+ stories at the Idea stage, and a starter PR scaffold per story. A full phase boost when you need the skeleton in place fast.

Outcome Coach + Stakeholder Updates

Mondays at 7am, every active outcome gets a chief-of-staff briefing (1 win, 1 risk, 1 decision needed, 1 action) emailed to the owner. Weekly exec, monthly board and quarterly investor reports auto-draft from the same data.

Proactive alerts

AI watches your delivery data and surfaces high-severity signals automatically: "This epic is 32pp behind schedule." "This team's health has dropped 0.4 vs the prior two months." "This outcome hasn't been validated in 34 days." No hunting for the red flags.

Cross-team dependency map

Most dependency maps die because they're declared once and rot. Tenhaw detects them semantically from your live story descriptions, every night. You confirm or reject. The Coach surfaces unresolved blockers as the leader's "decision needed", so dependencies can't quietly go stale.
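
The page doesn't say how the semantic detection works; a common implementation is comparing story descriptions by embedding similarity. The sketch below uses bag-of-words cosine similarity as a cheap stand-in for real embeddings (all names and the threshold are illustrative):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words text vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def candidate_dependencies(stories, threshold=0.4):
    """Nightly pass: pair stories from *different* teams whose descriptions
    overlap; humans then confirm or reject each candidate."""
    hits = []
    for i, a in enumerate(stories):
        for b in stories[i + 1:]:
            if a["team"] != b["team"] and cosine(a["desc"], b["desc"]) >= threshold:
                hits.append((a["id"], b["id"]))
    return hits

stories = [
    {"id": "S-1", "team": "payments", "desc": "migrate checkout payment tokens to new vault"},
    {"id": "S-2", "team": "platform", "desc": "new vault service for payment tokens"},
    {"id": "S-3", "team": "growth",   "desc": "homepage hero redesign"},
]
candidate_dependencies(stories)   # → [("S-1", "S-2")]
```

The human confirm/reject step matters: lexical or embedding overlap produces candidates, not facts, and the nightly re-run is what keeps the map from rotting.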

Discovery Workbench

Paste an interview transcript or upload audio (Whisper transcribes inside the sandbox). The AI extracts verbatim insights and dedupes them into your existing themes, so research stops dying in a deck and themes accumulate evidence over time. Each insight links back to the outcome it informs.

The goal isn't to replace the developer, the designer or the product manager. It's to collapse the research, drafting, QA and review work that AI is now obviously better suited for, and give humans back the time for the parts that still need them: judgement, relationships, and the call on what ships and what doesn't.

The cadence

Principles don't work without rhythm. Seven rituals, each owned by a specific role, each running on a specific clock.

Every day

Daily dashboarding

Proactive lookahead + real-time delivery metrics. AI surfaces what's off track before the daily stand-up. Five minutes, not a ceremony.

Every 2 weeks

Refinement & story pointing

Per team. Size new work coming in, break down anything that's too big, re-point anything that's shifted in understanding. Done in-product so the data feeds the simulations.

Every 2 weeks

Retrospectives

Per team, completed in the product, reviewed by the team. Actions tracked and revisited next retro. AI summarises recurring themes across quarters so you see the patterns.

Monthly

Health checks

Per team, completed in-product, reviewed by both the team and management. Trends matter more than any single month.

Monthly

Outcome validation

For every live outcome, product answers a questionnaire about what value has landed and what hasn't. It continues until each epic's value is validated or explicitly written off.

Per quarter

Tech Debt management

Budgeted per quarter via the dedicated Tech Debt epic on the roadmap. Items are monitored, scored by ROI, and pulled into the quarter's flow rather than squeezed in "when we have a quiet week."

Continuous

RAID — always-on

AI watches your delivery data continuously and auto-opens risks when it spots the patterns (outcomes stalling, benefits slipping, team-health dropping, off-track epics, value-at-risk). The Outcome Coach surfaces unresolved high-severity items in the weekly briefing as the leader's "decision needed". Humans confirm, mitigate, close.

By role

One operating model, three primary lenses. Each role has a primary focus and a dashboard tuned to support it.

Product

Maximise value delivered

Product's job is to make sure every committed piece of work maps to value that lands in the business. Currency on every outcome. Outcome validation every month until the value is confirmed. Kill the ones that aren't landing and reinvest.

  • Value dashboard
  • Every ticket links directly to an outcome
  • Product phases 1–4 visible on every board
  • Outcome Validation each month

Portfolio / Delivery

Deliver when we said we would

Delivery owns predictability across the whole portfolio. Effort simulations, capacity shapes, quarterly RAID budgets, tech-debt paydown plans, all the machinery that keeps commitments honest at scale.

  • Operations dashboard
  • Delivery dashboard
  • Effort optimisation simulations
  • RAID logs (budgeted per quarter)
  • Tech Debt management (budgeted per quarter)

Engineering

Maximise output delivered

Engineering's job is flow: get the approved work through phases 5–9 without drama. Proactive signals when something's slipping, healthy teams because unhealthy teams don't ship, and AI agents doing the research, drafting, and QA work that shouldn't have humans doing it in the first place.

  • Delivery dashboard
  • Development boards
  • Proactive lookahead alerts
  • Retros & health checks, address issues quickly
  • AI-assisted workflows in every ticket

The Tenhaw Way.

Speed, value and honest feedback loops, built for a world where AI does the research, drafting and QA work. Agile is too slow to keep up with what's now possible. The Tenhaw Way is the operating model. The Tenhaw product is where it runs.

If this resonates, try Tenhaw. Every rule on this page is enforced in the product, so the methodology isn't a poster on your wall, it's how the board behaves.

Professional services

Want help running this in your org?

Most teams adopt the Tenhaw Way faster with a hand on the wheel for the first quarter. Our professional services team will roll out the methodology with your product, delivery and engineering leads, calibrate the value model against your historical data, configure Tenhaw to enforce your team's reality, and stay close while the first quarter runs.

  • Methodology rollout: onboard your product, delivery and engineering leads to the four values, the work breakdown, and the workflow gates.
  • Value calibration: work through historical delivery data to set the realistic headroom for your org.
  • Tooling configuration: set up roadmaps, Tech Debt + Bug Budget defaults, refinement cadence, retros, health checks, and the AI agent permissions.
  • First-quarter shadow: weekly check-ins through Q1 to make sure the gates hold, the value rollups stay honest and the team doesn't quietly slip back to old habits.

Pick a time below: 30 minutes with the founder. No sales pitch, just a working session to see whether we're a fit.