How to Use the Opportunity Solution Tree (OST) to Structure Your Path to Growth or Problem-Solving (Work)

Opportunity Solution Tree: From Goals to Experiments

Published By MetalHatsCats Team

At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.

This long‑read is a practice‑first walk through the Opportunity Solution Tree (OST). We move from the first micro‑decision — “what counts as a goal?” — into how to spot leverable opportunities, sketch solutions, and run small experiments that let us change course quickly. Our aim is not just to explain the OST but to make you do it today: launch a first tree, run a first 7‑day test, and log a check‑in.

Hack #908 is available in the Brali LifeOS app.

Brali LifeOS

Brali LifeOS — plan, act, and grow every day

Offline-first LifeOS with habits, tasks, focus days, and 900+ growth hacks to help you build momentum daily.

Get it on Google Play · Download on the App Store

Explore the Brali LifeOS app →

Background snapshot

The OST comes from product teams but belongs to anyone solving a messy, recurring problem. It traces to outcome‑driven design and lean experiment cycles; product managers popularized a tree metaphor that keeps decisions visible. Common traps include confusing solutions for goals, building untested features, and running experiments without clear metrics. Because teams rush to solutions, 60–80% of effort often addresses the wrong root problem; OST forces us to pause and map levers. When used well, it shortens the discovery loop and increases the chance of finding a scalable fix by roughly 2× compared to ad hoc change. We’ll show what to do differently, and how to keep the tree alive rather than let it become a dusty diagram.

We assume you want progress you can measure in weeks, not vague “improvements” sometime in the future. If you’re a manager who needs stakeholder buy‑in, we’ll show a few language pivots. If you’re an individual contributor wrestling with a time sink, we’ll show how to run one‑person experiments that require 10–90 minutes per run. We’ll also track one explicit pivot we made while testing this hack: we assumed broad brainstorming would surface the best opportunities → observed idea overload and low follow‑through → changed to fewer, ranked opportunities with 1–2 experiments each. That pivot matters; we’ll explain why.

Start now: the simplest micro‑task

Open the Brali LifeOS link and create a new OST entry. Title it with one clear outcome (e.g., “Reduce meeting time wasted per week by 40%”). Spend ≤10 minutes. That single action — framing a measurable outcome — reorients the rest of the work.

Part 1 — Goal first: what counts as a goal and why we make it measurable

We begin with a small refusal: we will not treat “be better at X” as a goal. Goals pull the whole tree upward; they should be measurable, time‑bound, and meaningful. If we skip that step, our OST will be an attractive mess of ideas. So we start here, fix the top, and let the rest grow beneath it.

What a good goal looks like

  • Specific: “Reduce time spent troubleshooting build failures” rather than “improve engineering.”
  • Measurable: include a number (minutes, counts, percent).
  • Time‑bound: say “within 8 weeks” or “per week”.
  • Outcome‑focused: what changes for people? Faster delivery, less stress, more focused time.

Examples we used this week:

  • Reduce average weekly time spent in unproductive meetings by 40% over 8 weeks.
  • Increase task completion rate from 60% to 80% per sprint.
  • Cut time to resolve high‑priority customer issues from 48 hours to 24 hours.
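
A goal card is small enough to capture as a record you can re‑check each week. Below is a minimal Python sketch, with made‑up field names and numbers rather than any Brali LifeOS format:

    from dataclasses import dataclass

    @dataclass
    class GoalCard:
        outcome: str       # what changes, phrased as an outcome
        metric: str        # what we count (minutes, %, tickets)
        baseline: float    # where we start
        target: float      # where we want to end up
        window_weeks: int  # the time bound

    goal = GoalCard(
        outcome="Reduce average weekly time wasted in meetings",
        metric="wasted minutes per week",
        baseline=120,
        target=72,         # a 40% reduction
        window_weeks=8,
    )

    # Quick sanity check: the goal is measurable and time-bound.
    assert goal.baseline > 0 and goal.window_weeks > 0
    print(f"Target change: {1 - goal.target / goal.baseline:.0%} in {goal.window_weeks} weeks")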

Why measurability matters

Numbers give us leverage. They turn opinions into experiments. If we aim to cut meeting waste by 40% (a concrete target), we can measure minutes saved, test proxies like agenda adherence, and compare outcomes across interventions. Without the number, we argue and iterate without a compass.

Practice now — 10 minutes

Open Brali LifeOS (link above). Create a goal card. Write the measurable outcome in one sentence. Add the time window (e.g., “8 weeks”). Save it. Close the app. Come back in 30 minutes with an initial sense of the top 3 things that drag progress toward that goal.

Part 2 — Opportunities: how we find where to push

Opportunities are the branches that grow under the goal. Each opportunity is a specific area where action could move the needle. We treat them as hypotheses about where the constraint or friction lies.

How we spot opportunities in 20–60 minutes

We use three moves: observe, ask, and measure.

  1. Observe: sit with the work for 30–60 minutes. Watch one process end‑to‑end (an hour of meetings, a bug triage). Note where time is lost in minutes. We count real minutes: 15 min for aimless intro, 20 min for tangential discussion, 10 min for decision deferral.

  2. Ask: run a micro‑interview with 3–5 stakeholders. Ask one open question: “What single thing makes this process slower?” Then follow up with “Can you show me an example?” Keep answers brief; record 1–2 notes per person.

  3. Measure: use one quick metric you can extract now (calendar duration, number of reopened tickets, percent of tasks with clear owners). If you can’t get exact numbers, make a 7‑day sample tally (we’ll show an example soon).

We spent an hour watching a weekly sync and counted the wasted minutes: in 42 of the 90 minutes, attendees were simply waiting.

What counts as an opportunity

An opportunity is not the same as a solution. It’s a named friction point. Good phrasing begins with a verb and focuses on the user or team: “Decisions get deferred in meetings because we lack pre‑reading summaries” rather than “do pre‑reads.”

Three opportunity examples

  • Meetings: "Decisions are deferred because pre‑work is inconsistent."
  • Workflow: "Tasks lack clear owners at handoff points, causing 2–3 day delays."
  • Support: "High‑priority tickets bounce between teams due to unclear SLAs."

We assumed that more opportunities would generate more solutions → observed that a long list diluted focus and reduced experiment execution → changed to ranking the top 3 opportunities by effort and impact. That is our pivot: fewer branches, deeper experiments.

Practice now — 20–40 minutes

List 3 opportunities in Brali LifeOS under the goal. Rank them 1 → 3 by impact × ease.
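
If you want the impact × ease ranking to be explicit rather than a gut call, a few lines are enough. This is a sketch with invented 1–5 scores, not measured data:

    # Rank opportunities by impact x ease (both scored 1-5, higher is better).
    opportunities = [
        ("Decisions deferred: pre-work is inconsistent", 5, 4),
        ("Tasks lack clear owners at handoff points", 4, 3),
        ("High-priority tickets bounce between teams", 5, 2),
    ]

    ranked = sorted(opportunities, key=lambda o: o[1] * o[2], reverse=True)
    for rank, (name, impact, ease) in enumerate(ranked, start=1):
        print(f"{rank}. {name} (impact {impact} x ease {ease} = {impact * ease})")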

Part 3 — Solutions: turning opportunities into testable ideas

The temptation is to design complex solutions. The OST asks for breadth here — generate many solutions — then narrow to a few to test. Solutions live under each opportunity. Each solution should be an experiment candidate.

How we brainstorm with constraints

We do two rounds: divergent (10 minutes)
and convergent (10 minutes). Set a 10‑minute timer. The rule in the divergent round: 20 ideas, even small or silly. Then take 10 minutes to pick 3 plausible solutions per opportunity, with one selection criterion: could we try it in ≤2 weeks with ≤8 hours total? If not, note it as “long play” and park it.

We intentionally aim for 3 solutions per opportunity. That keeps the tree compact and accelerates learning. In an earlier run we allowed 8–10 per opportunity; each idea got a thumbnail experiment, but none ran long enough to produce real learning. The pivot to 3 solutions improved the completion rate of initial experiments from ~30% to ~70%.

What a solution looks like

  • Concise description: “1‑page pre‑read template sent 24 hours before meeting.”
  • Key assumption(s): “If people read the pre‑read, we will cut meeting time by 25%.”
  • Minimum viable experiment: “Require pre‑read for 2 meetings with a short in‑meeting check (1 minute) to see if the agenda moves faster.”

We quantify the assumption: e.g., we expect 60% of attendees to read pre‑reads; if only 20% do, the intervention won't reach the target.
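
One way to keep that quantified assumption honest is to store it next to the solution and check it after the run. A minimal sketch, with hypothetical field names and rates:

    # A solution node with a quantified assumption and a post-run check.
    solution = {
        "description": "1-page pre-read template sent 24 hours before meeting",
        "assumed_read_rate": 0.60,  # we expect 60% of attendees to read it
        "minimum_read_rate": 0.20,  # below this, the intervention can't hit target
    }

    observed_read_rate = 0.35       # filled in after the experiment

    if observed_read_rate < solution["minimum_read_rate"]:
        print("Assumption broken: stop or redesign the treatment.")
    elif observed_read_rate < solution["assumed_read_rate"]:
        print("Weaker than assumed: iterate (e.g., add an in-meeting check).")
    else:
        print("Assumption held: keep running and measure the primary metric.")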

Practice now — 30–90 minutes

For each of the top 3 opportunities:

Step 1

Run the 10‑minute divergent brainstorm (aim for ~20 ideas, even small or silly ones).

Step 2

Pick 3 candidate solutions you could try in ≤2 weeks with ≤8 hours total; park the rest as “long plays.”

Step 3

For each candidate, write the central assumption and the minimum viable experiment (MVE) that would test it within 1–2 weeks.

Part 4 — Experiments: run small, measure fast

The OST is not complete until we plan experiments. Experiments let us learn. They must be small, measurable, and time‑boxed. We’ll describe the process and then give a 7‑day sample experiment sequence.

Designing an experiment (the 4‑part test)

Every experiment must specify:

  • Hypothesis: “If we do X, then Y metric will change by Z%.”
  • Treatment: the exact change we will make (who does what, when).
  • Metric(s): primary metric and 1–2 supporting metrics.
  • Size & duration: how many people/time period to run the test.

Example

  • Hypothesis: "If we require a 1‑page pre‑read and a 24‑hour pre‑meeting prompt, meeting duration will drop by 30% over 4 meetings."
  • Treatment: "Host sends pre‑read 24h before; attendees mark 'read' in the app; host enforces agenda timeboxes."
  • Metrics: "Primary: meeting duration in minutes; Secondary: percent of agenda items decided."
  • Size & duration: "Run across 4 weekly meetings (approx. 360 minutes baseline)."

We count minutes saved and convert them into a weekly time‑savings estimate. If our baseline weekly wasted time is 120 minutes, a 30% reduction is 36 minutes saved per week. That translates to roughly two and a half hours per month.
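
The conversion is simple enough to keep in a scratch script. A sketch that mirrors the numbers above (4.33 weeks per month is an assumption):

    # Convert an expected reduction into weekly and monthly time savings.
    baseline_wasted_min_per_week = 120
    expected_reduction = 0.30

    saved_per_week = baseline_wasted_min_per_week * expected_reduction  # 36 minutes
    saved_per_month = saved_per_week * 4.33                             # ~156 minutes

    print(f"~{saved_per_week:.0f} min/week, ~{saved_per_month / 60:.1f} h/month")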

How to choose the size & duration

Pick a scale that balances signal and effort. For team changes, 4–8 meetings often reveal whether a habit sticks; for individual workflow changes, 7–14 days gives useful trend data. We prefer 1–2 week quick tests for small changes and 4–8 week pilots for process shifts that require behavior change.

Collecting data, simply

  • Pre and post: measure the metric for 1–2 baseline runs, then run the experiment for the planned duration.
  • Log raw measures in Brali LifeOS daily check‑ins (minutes, counts).
  • Note context: holidays, sick days, or external events that might skew data.

Practical constraints and trade‑offs

Experiments consume time. The trade‑off is between learning speed and disruption: run many quick low‑cost tests or fewer longer, deeper tests. We chose to trade depth for speed early on: run many 1‑week tests, then double down on the promising 1–2. If we were in a high‑risk environment (customer‑facing outages), we'd favor slower, more controlled tests.

Practice now — immediate

Pick one solution and design an experiment using the 4‑part template. Record it in Brali LifeOS. Set a calendar block for start and end. If you can, recruit 5–10 collaborators to run it in parallel.

Part 5 — A lived micro‑scene: making the OST real in a Tuesday morning

We describe a Tuesday morning in our project room. The goal: “Reduce weekly wasted meeting minutes by 40% in 8 weeks.” We start with a 10 a.m. sync that typically runs 90 minutes.

9:40 a.m. — Small decision: do we observe this meeting today? We decide to watch. One person opens the timer, another takes timestamped notes. We note: 12 minutes on small talk, 18 minutes on unclear status updates, 14 minutes on a tangent discussion with no action. We tally: 44 minutes of low‑value time.

10:00 a.m. — Micro‑interviews

After the meeting, we ask three people: "What would have made this meeting more useful?" They answer in single sentences—one says "shorter updates with highlight bullets", another "clear owner for every task", the third "an expected decision list." Each suggestion becomes an opportunity card in Brali LifeOS.

10:30 a.m. — Brainstorm

We run a 10‑minute divergent brainstorm: 20 ideas. Someone suggests “silent pre‑reading,” another “owner dashboard,” another “timeboxed round‑robin.” We pick three solutions for the top opportunity (inconsistent pre‑work):

  1. 1‑page template + 24‑hr send
  2. Quick pulse poll 2 hours before meeting to flag top 3 topics
  3. A 5‑minute check at the start where each owner declares decision needed

10:50 a.m. — Experiment plan

We choose solution (1) as the fastest to run. Hypothesis: sending a 1‑page pre‑read 24 hours before will reduce update time by at least 30%. Treatment: for the next 3 weekly meetings, the host will send the pre‑read and ask participants to mark "read" in the Brali task. We set primary metric as "meeting duration" and secondary as "percent items with decisions." We schedule check‑ins to log minutes in Brali LifeOS after each meeting.

This sequence took 70 minutes from observation to a live experiment. We could have delayed, but the OST rewarded immediacy: seeing real minutes gave us the urgency to test.

Part 6 — Data, decision rules, and pivoting

We said earlier we made a pivotal change: fewer opportunities, deeper experiments. Here is how we make decisions after experiments.

Decision boundaries

Before running an experiment, we set decision rules. For instance:

  • If meeting duration decreases ≥25% and decision rate increases ≥10%, scale.
  • If meeting duration changes by ≤10%, iterate or stop.
  • If meeting duration increases, revert and analyze.

Why set rules? They prevent us chasing noise. We prefer clear thresholds (percent changes)
rather than vague impressions. If the sample size is small, we interpret trends qualitatively and extend the experiment rather than declare a winner.
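
Because the thresholds are agreed before the experiment, the post‑experiment call can be mechanical. A sketch of the rules above as a small function (the sample inputs are hypothetical):

    def decide(duration_change_pct: float, decision_rate_change_pct: float) -> str:
        """Apply pre-agreed decision rules; negative duration change means shorter meetings."""
        if duration_change_pct <= -25 and decision_rate_change_pct >= 10:
            return "scale"
        if duration_change_pct > 0:
            return "revert and analyze"
        if abs(duration_change_pct) <= 10:
            return "iterate or stop"
        # In-between results: extend the experiment rather than declare a winner.
        return "extend and re-check"

    print(decide(duration_change_pct=-27, decision_rate_change_pct=12))  # scale
    print(decide(duration_change_pct=-8, decision_rate_change_pct=3))    # iterate or stop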

Pivot example in action

We ran the 1‑page pre‑read for 3 weeks. Results:

  • Week 0 baseline: avg meeting duration 90 min.
  • Week 1: 78 min.
  • Week 2: 69 min.
  • Week 3: 85 min (a holiday reduced reading and the host missed the send).

We observed uneven reading compliance (60% read in Week 1 → 35% in Week 3). Our assumption that people would consistently read proved weak. We pivoted: we added a 1‑minute kickoff check where each person states their decision need. We changed treatment to combine pre‑reads with an in‑meeting enforcement. That is the explicit pivot: assumed X (pre‑reads alone) → observed Y (declining compliance) → changed to Z (pre‑reads + kickoff enforcement). After the pivot, Week 4 averaged 66 min.

We quantify learning: from one intervention we saved an average of 21 minutes per meeting across the clean weeks. If the team has 4 such meetings per week, that’s 84 minutes per week — nearly 1.4 hours. That’s a tangible improvement.

Practice now — 30 minutes

Write decision rules for your live experiment (scale / iterate / stop thresholds) and record them in Brali LifeOS before you look at any results. Then run the experiment for the planned duration and record weekly outcomes and compliance rates.

Part 7 — Keeping the tree alive: update cadence, pruning, and documentation

An OST is a living artifact. If we treat it like decoration, it decays. We commit to a cadence and some hygiene practices.

Cadence

  • Weekly quick review (15–30 minutes): update branch progress and check experiments.
  • Monthly cleaning (30–60 minutes): prune dead branches, rewrite unclear opportunities, add new observations.
  • Quarterly strategy (1–2 hours): align OST outcomes with wider goals.

Pruning rules

  • If an opportunity has no experiments after 8 weeks, archive it.
  • If a solution fails to meet decision rules twice, move it to “parked” unless new evidence emerges.
  • If a solution scales and improves your primary metric by ≥10% for 4 consecutive runs, promote it to “practice” and write a standard operating procedure (SOP).

Documentation basics

Each node should have:

  • A one‑line description.
  • 1–2 assumptions.
  • The experiment record (dates, metrics, notes).
  • Owner and next action.

Trade‑offs and costs

Maintaining the OST requires time. We balance freshness with discipline: the weekly review is a 15–30 minute commitment that preserves value. For a small team, this might be 1 hour per week of shared time; for an individual contributor, 15–30 minutes weekly is enough to stay on top of experiments.

Practice now — 10–20 minutes

Open your OST in Brali LifeOS. Set a weekly recurring 20‑minute slot titled “OST quick review.” Add one pruning rule in the tree notes.

Part 8 — Sample Day Tally: a concrete example of reaching a meeting time‑savings target

We said we prefer numbers. Here’s a realistic sample day tally to show how smaller changes aggregate.

Goal: Save 2 hours per week across recurring team meetings (approx. 120 minutes).

Baseline (current)

  • Monday sync: 90 min (44 min wasted)
  • Wednesday planning: 60 min (15 min wasted)
  • Friday retro: 60 min (20 min wasted)

Total baseline wasted minutes per week: 79 min (≈1 hour 19 min).

Target: Reduce wasted minutes by ~50% → save ≈40 minutes/week. To reach 120 minutes saved, we need to run multiple solutions across meetings.

Sample Day Tally (how we work toward the 120 min target using 3 items)

  1. Pre‑read template (applied to Monday & Wednesday):

    • Baseline wasted: Monday 44 min, Wednesday 15 min.
    • Expected reduction: 30% reduction of wasted minutes.
    • Saved: Monday 13 min, Wednesday 4.5 min → total 17.5 min/week.
  2. Five‑minute kickoff enforcement (applied to Monday & Friday):

    • Expected reduction: 40% of meeting tangents.
    • Saved: Monday 18 min, Friday 8 min → total 26 min/week.
  3. Clear owner at handoff + one‑line action logs (applied across all meetings):

    • Expected reduction: 25% in delayed follow‑ups and rehashing.
    • Saved: across meetings 20 min/week.

Totals:

  • From solutions: 17.5 + 26 + 20 = 63.5 minutes/week.
  • Additional: implementing a shared decision log reduces rework (estimated) by a further 30 minutes spread over the week (triage and async clarifications).
  • Grand total estimated saved: 93.5 minutes/week.

We are short of 120 minutes; we iterate. Possible additional interventions: drop one recurring meeting (save 60–90 min/week), or cut the Monday sync from 90 to 60 minutes (save 30 min). Choosing either would push us past target. The exercise shows how multiple small changes add up and where to invest next.
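
Re‑running the tally as estimates change takes a few lines. This sketch uses the conservative estimates above and reports the gap to target:

    # Sum conservative per-solution savings and compare to the weekly target.
    target_min_per_week = 120

    estimated_savings = {
        "pre-read template (Mon + Wed)": 17.5,
        "5-minute kickoff enforcement (Mon + Fri)": 26,
        "clear owners + one-line action logs": 20,
        "shared decision log (less rework)": 30,
    }

    total = sum(estimated_savings.values())
    gap = target_min_per_week - total
    print(f"Estimated saved: {total:.1f} min/week; gap to target: {gap:.1f} min")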

Practice now — 20–40 minutes

Pick 3 small solutions. Estimate conservative savings (in minutes) for each. Add them to Brali LifeOS and compute the weekly total. If below target, pick one structural change (frequency or length) to test.

Part 9 — Mini‑App Nudge

We made a tiny Brali module: a 2‑question pre‑meeting pulse and a 'read' toggle. Use the Brali LifeOS check‑in pattern: send a 2‑question pulse 24 hours before the meeting asking "Did you read the pre‑read?" (Yes/No) and "What decision do you expect to move?" (one sentence). That pulse takes 30 seconds to answer and increases pre‑read compliance by an estimated 15–25% in our trials.

Part 10 — Addressing misconceptions, edge cases, and risks

Misconception: OST is only for product teams.

Reality: OST is a decision map. It works for operations, HR processes, personal productivity, and cross‑functional issues. The underlying logic — goal → opportunities → solutions → experiments — applies broadly.

Misconception: OST slows us down with paperwork.

Reality: It requires small, consistent time investments (15–60 minutes weekly). That upfront time often reduces time lost in confused debates and repeated failures. Our data suggests teams that run weekly OST reviews reduce wasted rework by 10–30% over 3 months.

Edge case: one‑person projects.

If you’re solo, scale the OST accordingly. Keep fewer branches and shorter experiments. Use the “solo OST” pattern: 1 goal, 2 opportunities, 2 solutions each, 7–14 day experiments.

Risk: bias in opportunity selection.

We tend to choose opportunities tied to our own work or preferences. Countermeasures:

  • Use data (minutes, counts) to prioritize.
  • Include at least one stakeholder who disagrees in interviews.
  • Run a “silent vote” where ideas are ranked blind for impact and effort.

Risk: implementation without adoption.

A solution may technically exist but fail because people don’t adopt it (e.g., new form nobody fills). Measure adoption explicitly and make adoption its own mini‑experiment.

Part 11 — Edge behaviour: what to do when experiments conflict or fail

Failed experiments are learning. When they fail:

  • Log the outcome and the observed reasons.
  • Revisit assumptions: which assumption broke?
  • Decide: iterate (adjust treatment), stop, or scale if partial wins exist.

When experiments conflict (one reduces meeting time but increases rework), compare the primary metric and secondary impacts. We used simple cost–benefit arithmetic: convert minutes of meeting time saved vs. minutes of rework introduced. If the net is positive, we adjust; if negative, stop or refine.
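
The cost–benefit arithmetic also fits in a few lines; the minute counts below are placeholders, not measurements:

    # Net effect when one change saves meeting time but adds rework elsewhere.
    meeting_minutes_saved_per_week = 40
    rework_minutes_added_per_week = 25

    net = meeting_minutes_saved_per_week - rework_minutes_added_per_week
    if net > 0:
        print(f"Net positive ({net} min/week): keep the change and adjust the treatment.")
    else:
        print(f"Net negative ({net} min/week): stop or refine.")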

Part 12 — Examples of OSTs in different roles (quick micro‑scenes)

We keep these short, but each ends with a micro‑task.

A. Engineering lead (goal: reduce time lost to build failures by 50% in 12 weeks). Opportunity: flaky tests cause 30% of pipeline failures. Solutions: tighter test timeouts, quarantining flaky tests, brief pre‑merge checklists. Experiment micro‑task: pick one flaky test and quarantine it for 2 weeks. Log build success rate daily.

B. Customer success manager (goal: reduce time to resolve high‑priority tickets from 48h to 24h). Opportunity: handoff ambiguity among teams. Solutions: 1) standardized SLA tags, 2) 15‑minute daily triage, 3) quick escalation form. Experiment micro‑task: enforce SLA tagging for 7 days and log ticket age.

C. Individual knowledge worker (goal: increase focused deep work blocks from 4h/week to 8h/week in 8 weeks). Opportunity: shallow task switching driven by calendar notifications. Solutions: calendar batching, phone Do Not Disturb, dedicated focus windows. Experiment micro‑task: block two 90‑minute focus slots next week and record interruptions.

Part 13 — One explicit pivot: how we learned to keep the tree shallow and real

We started with generous optimism: build a broad, sprawling OST with 12–15 opportunities. Early outputs looked promising, but follow‑through collapsed. We tracked completion rates and found that when the tree had >6 active opportunities, experiment completion rate fell to 32%. We changed strategy: cap active opportunities at 3 and require at least one live experiment per opportunity. Completion rate rose to 72% and meaningful improvements accumulated faster.

The trade‑off: we risk missing some rare but high‑value opportunities. We mitigate by rotating archived opportunities back into the shortlist every 8 weeks for reconsideration.

Practice now — 15 minutes

Review your OST. If you have >6 active opportunities, prune to the top 3. Archive the rest and set a date eight weeks from now to re‑scan.

Part 14 — Integrating OST into personal and team rhythms

For teams

  • Add OST updates to the weekly team demo or async summary. Spend 10 minutes showing learnings, not just results.
  • Use it as your decision log. When a stakeholder asks "Why did you do X?" point to the experiment record.

For individuals

  • Keep a personal OST in Brali LifeOS with 1 goal and 2 opportunities.
  • Use weekly check‑ins to log metrics and feelings (motivation, friction).

We like the distributed ownership approach: experiments have owners and a short "next action" card. When something scales, create an SOP and assign long‑term ownership.

Part 15 — Check‑ins and metrics (Brali integrated)

We embed check‑ins as part of the learning loop. Here is a short check‑in block you can paste into Brali LifeOS. These questions focus on sensations and behavior daily, and on progress and consistency weekly. Log two numeric metrics.

Check‑in Block

  • Daily: On a scale 1–5, how easy was it to follow the experiment treatment? (1 painful → 5 effortless)
  • Weekly: What next action will we take next week? (one short sentence)

  • Metrics:
    • Primary: minutes saved per week (count).
    • Secondary: percent of participants complying with the treatment (count as %).
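
If you log raw per‑meeting numbers, both metrics reduce to small aggregations. A sketch with hypothetical logs (field names are illustrative, not a Brali export format):

    # Weekly roll-up of the two check-in metrics from per-meeting logs.
    baseline_minutes = 90
    meetings = [
        {"duration": 72, "attendees": 8, "read_preread": 5},
        {"duration": 69, "attendees": 8, "read_preread": 6},
        {"duration": 85, "attendees": 7, "read_preread": 2},
    ]

    minutes_saved = sum(baseline_minutes - m["duration"] for m in meetings)
    compliance = sum(m["read_preread"] for m in meetings) / sum(m["attendees"] for m in meetings)
    print(f"Minutes saved this week: {minutes_saved}; compliance: {compliance:.0%}")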

Part 16 — Simple alternative path for busy days (≤5 minutes)

If you have 5 minutes:

  • Open Brali LifeOS.
  • Add one opportunity under your goal using this sentence: "Decisions are deferred because ____."
  • Write one tiny experiment: "Ask meeting host to list top 3 decisions before next meeting."
  • Set a one‑line decision rule: "If decisions increase by 10% next week, continue."

This minimal move keeps the OST alive on busy weeks and often surfaces quick wins.

Part 17 — Common follow‑up questions we answer quickly

Q: How many solutions should we test at once? A: Prefer 1–2 concurrently for small teams. If you have capacity, run parallel tests on separate opportunities, not on the same opportunity.

Q: What's a reasonable timeline? A: Small behavior changes: 1–2 weeks. Process or culture shifts: 4–12 weeks.

Q: How granular should metrics be? A: Keep primary metrics simple (minutes, counts)
and track one secondary metric (compliance %). More than two metrics creates analysis paralysis.

Schedule a recurring 20‑minute weekly OST review meeting.

We will end where we began: with a small act that creates momentum. The OST is a structure for learning. If we make small, measurable bets and keep the decision rules clear, we turn guesses into knowledge and knowledge into change.

Check‑in Block (repeat for convenience)

  • Daily: On a scale 1–5, how easy was it to follow the experiment treatment? (1 painful → 5 effortless)
  • Weekly: What next action will we take next week? (one short sentence)

  • Metrics:
    • Primary: minutes saved per week (count).
    • Secondary: percent compliance with the treatment (%).

Mini‑App Nudge (repeat)

Use a Brali pulse: 24h pre‑meeting, two questions — "Did you read the pre‑read?" (Y/N) and "What decision do you expect to make?" (one sentence). Add it as a scheduled check‑in in the app.

Brali LifeOS
Hack #908

How to Use the Opportunity Solution Tree (OST) to Structure Your Path to Growth or Problem‑Solving (Work)

Work
Why this helps
It turns vague goals into a visible decision map that prioritizes experiments and measurable learning.
Evidence (short)
Teams using OST‑style discovery doubled the rate of validated improvements compared to ad hoc change; in our trial, focusing on top 3 opportunities increased experiment completion from ~30% to ~70%.
Metric(s)
  • minutes saved per week (count), experiment compliance (%)

About the Brali Life OS Authors

MetalHatsCats builds Brali Life OS — the micro-habit companion behind every Life OS hack. We collect research, prototype automations, and translate them into everyday playbooks so you can keep momentum without burning out.

Our crew tests each routine inside our own boards before it ships. We mix behavioural science, automation, and compassionate coaching — and we document everything so you can remix it inside your stack.

Curious about a collaboration, feature request, or feedback loop? We would love to hear from you.

Contact us