The Tenhaw Way

A guide for teams who ship

We've spent years helping teams go from chaos to clarity. This is what we've learned — distilled into six principles that actually work. No buzzwords. No frameworks-for-the-sake-of-frameworks. Just honest, opinionated thinking about how modern software teams should operate.

Why this exists

Most teams we work with have the same problem. They're busy — sprints are full, standups happen on time, retros fill up whiteboards — but somehow the business still doesn't trust the roadmap, features ship without anyone measuring whether they worked, and the team can't answer a simple question: "When will this be done?"

The issue isn't effort. It's that the entire operating model is optimised for activity, not outcomes. Teams track velocity instead of value. They estimate in story points but report in vibes. They run retros but never action the takeaways.

We built Tenhaw because we kept seeing the same gaps in every team we worked with. And this guide — The Tenhaw Way — is the philosophy behind the product. It's how we think teams should work, regardless of whether they use our software.

That said, Tenhaw is designed to make all of this significantly easier. If you're reading this and nodding, you should probably try it.

Six principles

These aren't abstract values we put on a wall. They're concrete operating decisions that change how your team spends its time every single day.

1. Start with outcomes, not features

Before you write a line of code, ask: "What business outcome does this serve?" If the answer is "well, the client asked for it" or "it was on the roadmap" — that's not good enough. Every piece of work should trace to a measurable result: revenue, retention, efficiency, customer satisfaction. Something you can look at in six weeks and say "that worked" or "that didn't."

This means replacing your feature backlog with an outcome backlog. Instead of "Build export to CSV," you have "Reduce time-to-first-insight for enterprise users by 40%." The CSV export might be part of the solution — but it's not the goal. The goal is the outcome. Everything else is a hypothesis.

Score every epic for business value. Not in abstract t-shirt sizes — in actual estimated impact. Then use that score to prioritise ruthlessly. The moment you stop letting the loudest voice in the room set priorities, everything gets clearer.
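Scoring can be as simple as risk-adjusted impact per unit of effort. A minimal sketch of one such model (illustrative only — the epic names and numbers are invented, and this is not a prescribed Tenhaw formula):

```python
def priority_score(estimated_impact, confidence, effort_weeks):
    """Risk-adjusted impact per week of effort. Higher scores first.

    estimated_impact: expected business value in any consistent unit
    confidence: 0..1 probability the impact actually materialises
    effort_weeks: rough delivery cost
    """
    return (estimated_impact * confidence) / effort_weeks

# Hypothetical epics: (name, impact, confidence, effort in weeks)
epics = [
    ("Self-serve onboarding", 500_000, 0.6, 8),
    ("CSV export", 50_000, 0.9, 2),
    ("SSO for enterprise", 300_000, 0.8, 6),
]

# Prioritise ruthlessly: highest risk-adjusted impact per week first.
ranked = sorted(epics, key=lambda e: priority_score(*e[1:]), reverse=True)
```

Even a crude model like this beats t-shirt sizes, because the inputs are arguable in concrete terms: someone can challenge the impact estimate or the confidence, and the conversation gets sharper.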

2. Make delivery predictable

Your stakeholders don't care about your velocity. They really don't. They care about one thing: can I trust your commitments? When you say something will ship in March, does it actually ship in March?

Stop giving people single-point estimates. Nobody knows exactly when something will be done — but you can know the probability distribution. Run Monte Carlo simulations against your historical throughput and give stakeholders confidence intervals: "There's an 85% chance we ship by March 15th." That's honest. That's useful. That builds trust.
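The simulation itself is small. A sketch, assuming you track weekly throughput (items completed per week) — the history here is made up:

```python
import random

def forecast_weeks(remaining_items, weekly_throughput, percentile=0.85,
                   trials=10_000, seed=1):
    """Monte Carlo delivery forecast.

    Resample historical weekly throughput until the remaining backlog is
    cleared, record how many weeks each simulated future took, and return
    the week by which `percentile` of the futures had finished.
    """
    if not any(weekly_throughput):
        raise ValueError("need at least one non-zero throughput week")
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_items:
            done += rng.choice(weekly_throughput)  # sample a plausible week
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return outcomes[int(trials * percentile)]

# Hypothetical history: items completed in each of the last eight weeks.
history = [4, 6, 3, 5, 7, 4, 2, 5]
weeks = forecast_weeks(40, history)
print(f"85% chance of clearing 40 items within {weeks} weeks")
```

Multiply the answer by your week length and you have a date with an honest confidence level attached, instead of a single point that's wrong the moment anything slips.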

Watch your flow metrics: cycle time, throughput, work in progress. These tell you the real story. If your cycle time is creeping up, something is wrong — and you'll catch it before it becomes a missed deadline. Limit WIP aggressively. Finishing work is always more valuable than starting new work.
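All three metrics fall straight out of timestamps you already have. A sketch, assuming completed items carry started/finished dates (the dates below are invented):

```python
from datetime import date

def flow_metrics(completed, in_progress_count):
    """completed: list of (started, finished) date pairs for the period.

    Returns (average cycle time in days, throughput, current WIP).
    """
    cycle_times = [(finish - start).days for start, finish in completed]
    avg_cycle_days = sum(cycle_times) / len(cycle_times)
    return avg_cycle_days, len(completed), in_progress_count

completed = [
    (date(2024, 3, 1), date(2024, 3, 4)),   # 3 days
    (date(2024, 3, 2), date(2024, 3, 9)),   # 7 days
    (date(2024, 3, 5), date(2024, 3, 10)),  # 5 days
]
avg, throughput, wip = flow_metrics(completed, in_progress_count=4)
```

Compute this weekly and plot the series: the trend line is what warns you, not any single number.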

3. Measure value, not velocity

Velocity tells you how fast you're running. It tells you absolutely nothing about whether you're running in the right direction.

After you ship something, go back and check: did it work? Did that onboarding redesign increase activation? Did the performance work reduce churn? If you're not closing the loop, you're just shipping features into the void and hoping for the best.

Set targets for each initiative before you start — "We expect this to improve conversion by 15%" — and then track whether you hit them. Hold the team accountable for outcomes, not outputs. Nobody should be celebrating "we shipped 47 stories this sprint" if none of them moved a business metric.

This is the hardest principle to adopt because it requires honesty. It means admitting when something you shipped didn't work. But that honesty is what separates great teams from teams that are just busy.

4. Treat operational health as non-negotiable

Healthy teams ship better software. This isn't fluffy HR talk — it's observable. Teams that skip retros accumulate process debt. Teams that ignore tech debt slow down every sprint. Teams that don't track risks get blindsided by them.

Run regular health checks — not as a box-ticking exercise, but as an honest pulse check. Are people happy with the tooling? Is collaboration working? Are we spending too much time fire-fighting? Track the trends, not just the snapshots. A team that scores 7/10 but has been declining for three months is in worse shape than a team that scores 5/10 and is improving.
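The "trend, not snapshot" check is easy to automate. A sketch with made-up scores, flagging a team whose health score has declined across three consecutive checks:

```python
def declining(scores, checks=3):
    """True if the last `checks` health scores are strictly decreasing."""
    recent = scores[-checks:]
    return len(recent) == checks and all(b < a for a, b in zip(recent, recent[1:]))

assert declining([8, 7.5, 7, 6.5])   # high score, but trending down: flag it
assert not declining([5, 5.5, 6])    # lower score, but improving: leave it
```

The point of the two examples: the 6.5 team gets flagged and the 6 team doesn't, which is exactly the inversion of what a snapshot-only view would tell you.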

Do retros properly. Actually action the items. Check last sprint's actions at the start of this sprint's retro. Generate summaries so you can spot recurring themes — the problems that keep coming back are the ones worth investing in.

Catalogue your tech debt and score it by ROI. Some debt is fine to live with. Some is actively slowing you down. Know the difference, and make paying it down a first-class part of your planning — not something you squeeze in when there's a quiet sprint (there never is).
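ROI scoring doesn't need to be fancy: hours the debt costs you over a planning horizon, divided by hours to fix it. A sketch (illustrative numbers, not a prescribed formula):

```python
def debt_roi(hours_lost_per_cycle, cycles_horizon, hours_to_fix):
    """Hours saved over the horizon per hour invested in the fix.

    Above 1.0, paying the debt down is cheaper than living with it.
    """
    return (hours_lost_per_cycle * cycles_horizon) / hours_to_fix

# Flaky CI pipeline: costs ~4 hours per cycle, ~16 hours to fix outright.
# Over a 12-cycle horizon, the fix returns 3x the investment.
roi = debt_roi(hours_lost_per_cycle=4, cycles_horizon=12, hours_to_fix=16)
```

Debt that scores well below 1.0 over any horizon you care about is the debt that's fine to live with — which is the distinction the principle asks you to make.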

5. Use AI to see what you can't

Your team generates an enormous amount of delivery data every week — commits, cycle times, retro feedback, incident logs, estimation accuracy, throughput trends. No human can synthesise all of that into actionable insight. But AI can.

The point isn't to let AI make decisions for you. It's to let it surface signals you'd otherwise miss. Things like: "Code review is becoming a bottleneck — average wait time has doubled this month." Or: "The auth module has a 78% regression probability based on recent commit patterns." These are signals that a human would catch eventually — but usually too late.

AI-generated status reports that pull from real delivery data instead of someone's Friday-afternoon memory. Natural language queries against your entire delivery history. Summaries of recurring retro themes across quarters. This is where AI genuinely helps — not replacing your team's judgement, but making sure they have the right information when they need it.

6. Roadmap with honesty

A roadmap is a communication tool, not a contract. And definitely not a Gantt chart pretending to be truth.

Near-term items should be committed — backed by data, high confidence, the team has capacity and the work is well-understood. Mid-term items are planned — estimated, moderate confidence, subject to change. Far-term items are exploring — directional, low confidence, this is where we think we're heading.

Make this distinction explicit. When stakeholders understand that "Q4" means "we're exploring this direction" rather than "this will ship in Q4," everyone's happier. Update the roadmap when reality changes — which it always does. A roadmap that never changes isn't a roadmap. It's fiction.

And critically: every item on the roadmap should link to an outcome. If it doesn't serve a strategic goal, it shouldn't be taking up space.

The cadence

Principles are useless without rhythm. Here's the cadence that makes all of this work in practice.

Every day

Glance at your AI-surfaced signals. Check the board — is WIP under control? Spot bottlenecks early. This takes five minutes, not a ceremony.

Every week

Generate a status report from real data (stop writing them from memory). Review lookahead predictions with the team. Update your risk log. Check whether recently shipped features are moving the needle.

Every cycle

Run a health check — track the trend, not just the score. Hold a proper retro, action the items, and check last cycle's actions. Re-prioritise the outcome backlog based on what you've learned.

Every quarter

Update roadmaps with fresh simulation data. Review tech debt ROI — plan reduction work. Re-score outcome priorities against strategy. Step back and ask: are we working on the right things?

The monthly blueprint

Theory is nice. Here's what a Tenhaw month actually looks like, day by day. This assumes two-week cycles — adjust as needed, but keep the rhythm tight.

Week 1: Cycle kick-off & build
Monday

Sprint Planning (60 min). Pull from the outcome backlog, not a feature list. For each item, articulate the outcome it serves and how you'll know it worked. Set WIP limits for the cycle. If you can't explain why something is in the sprint, it shouldn't be.

Tuesday – Thursday

Build. Async standup in the morning — each person posts what they're working on and whether anything's blocked. Keep it written, keep it short. No 30-minute ceremonies. Check the board once mid-afternoon — is WIP creeping up? Pull, don't push. Review AI-surfaced signals for anything unusual (cycle time spikes, blocked items).

Friday

Refinement (45 min). Look ahead to what's coming next cycle. Break down upcoming work, flag unknowns, estimate effort. Weekly status report — generate it from Tenhaw's delivery data, not from memory. Send to stakeholders. Update the RAID log — any new risks or issues surfaced this week?

Week 2: Build & close
Monday – Wednesday

Continue building. Same daily rhythm — async standups, board checks, signal monitoring. Mid-week checkpoint: are we on track to finish what we committed to? If not, cut scope now — don't wait until Friday to discover you're behind. Run the Lookahead to check delivery probability against your commitments.

Thursday

Demo prep & completion push. Get everything into a demo-ready state. No starting new work — focus on finishing. If something isn't going to make it, acknowledge it cleanly and move it to next cycle. Review benefit metrics for recently shipped features — are they tracking?

Friday

Demo (30 min). Show what shipped and why it matters. Don't just demo features — show the outcome each one targets. Stakeholders should leave knowing what moved, not just what was built.

Retrospective (45 min). What went well. What didn't. What we'll change. Check last retro's action items — did we actually do them? Generate a summary for the record. Track health trends over time.

Weekly status report — end-of-cycle edition. Include throughput, cycle time, WIP trends, completion rate.

Week 3: New cycle & strategic check-in
Monday

Sprint Planning (60 min). Same as Week 1 — pull from outcome backlog, set WIP limits. But this time, also review the retro actions from last Friday. Make sure at least one improvement makes it into this cycle's plan.

Health Check (20 min). Quick pulse on team health — tooling, collaboration, pace, clarity. Compare to previous scores. If anything is trending down, address it now, not next month.

Tuesday – Thursday

Build. Same daily rhythm. Mid-week: tech debt review (30 min). Look at the catalogue, score items by ROI, decide if any debt-reduction work goes into this cycle or next. This isn't optional — it's how you stay fast.

Friday

Refinement + Outcome Review (60 min). Refine upcoming work as usual. But also: review benefit realisation for features shipped 2–4 weeks ago. Did the onboarding changes increase activation? Did the performance fix reduce churn complaints? Close the feedback loop. Update the outcome backlog priorities based on what you've learned.

Weekly status report. Update RAID log.

Week 4: Build, close & look ahead
Monday – Wednesday

Build and close. Same rhythm. Focus on finishing work in progress. Run Monte Carlo forecasting against the roadmap — update confidence intervals for upcoming milestones. Share with stakeholders so there are no surprises.

Thursday

Demo + Retro — same format as Week 2 Friday. Demo outcomes, review retro actions, generate summaries.

Monthly Roadmap Review (45 min). Step back from the day-to-day. Review the roadmap against strategic goals. Are committed items still on track? Do planned items still make sense? Update confidence levels with fresh simulation data. Re-score outcome priorities. This is where leadership and the team align on what's next.

Friday

Delivery Review (30 min). Look at the month as a whole. Throughput trends, cycle time, delivery predictability, benefit realisation across all shipped features. Generate an AI-powered monthly summary. What patterns are emerging? What's improving? What needs attention?

End-of-month status report. This is the one stakeholders use for board meetings and investor updates. Make it data-driven, not narrative-driven.

This blueprint is a starting point, not a straitjacket. Some teams run three-week cycles. Some do async retros. Some skip the mid-week tech debt review and do it monthly instead. The important thing is that every element shows up somewhere in your rhythm. If you're not doing retros, health checks, benefit reviews, and roadmap updates at a regular cadence — you're not doing the Tenhaw Way.

That's the Tenhaw Way.

It's not a framework. It's not a certification. It's a set of beliefs about how software teams should work — informed by years of seeing what actually makes the difference between teams that deliver and teams that are just busy.

If this resonates, try Tenhaw. We built it to make all of this easier.