How to Avoid Always Sticking to the Same Approach (Game Theory)
Mixed Strategy: Stay Flexible
Quick Overview
Don’t always stick to the same approach. Be flexible, try different strategies, and change your game plan depending on the situation. Flexibility is your friend.
At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.
Use the Brali LifeOS app for this hack. It's where tasks, check‑ins, and your journal live. App link: https://metalhatscats.com/life-os/mixed-strategy-coach
We sit down and admit something simple: when a plan works once, we cling to it. When it fails, we either double down out of inertia or switch wildly until luck returns. Mixed strategy thinking borrows from game theory to make those switches purposeful. This is not abstract chess commentary; it's a hands‑on way to change what we do, when we do it, and by how much, so the next small decision today nudges future decisions toward better outcomes.
Background snapshot
Game theory started with math and economics in the mid‑20th century; its applied offspring include negotiation, auction design, and behavioural strategies in daily choices. A common trap is overfitting: we learn one solution that worked in a narrow context and treat it like universal truth. Another trap is random switching without measurement, which wastes effort. What changes outcomes is structured variability: measured, intentional changes that preserve core constraints while exploring alternatives. Mixed strategies give us a principled method to balance exploitation (use what works) and exploration (try something new), often improving results by 5–30% in iterative tasks and decisions when we track outcomes.
A micro‑scene to set the tone: we sit at a kitchen table with our laptop, a half‑drunk coffee (120–150 mg caffeine left in the cup), and a list of three regular choices we make each day: the morning email template we use (same subject line, same order of priorities), the route to work (same street, same speed), and the way we respond in meetings (fast agreement). Each has a cost and a benefit. If we treat each as a pure strategy—always the same move—opponents (real or environmental constraints) adapt. If we build in flexible probabilities—say, use the alternative approach 30% of the time—we gather information and prevent predictable failure.
If we want to do this today, we must start with a decision: pick one routine where swapping a move matters. Don’t choose everything—choose one. This practice‑first framing keeps the distance from concept to action under 10 minutes.
What we mean by “don’t always stick to the same approach”
We mean: intentionally vary our actions across repeated situations in a measured way. Practically, that looks like picking two to four viable options for a recurring decision and assigning them probabilities or simple rules so that the chosen option is not fixed. On Monday we try a new opening line in email; on Wednesday we revert. We keep track for two weeks. The goal is to reduce predictable failure, discover higher‑yield moves, and protect from bias and overconfidence.
We assumed X → observed Y → changed to Z
We assumed a simple rule: swap a behavior every third time. We tried that in email subject lines and observed Y: our open rates were the same but reply patterns changed—some subjects generated faster replies but fewer long conversations. We changed to Z: keep the fastest‑reply subject 50% of the time, use an exploratory subject 30% of the time, and reserve a bold unconventional subject 20% of the time. That blend gave us better overall response quality. This pivot captures the essence: pure alternation found patterns but mixed probabilities exploited them.
Practice‑first step for today (≤10 minutes)
- Pick one recurring decision (email subject, commute route, morning prep, meeting stance). Write it down.
- List 2–4 real options you can use for that decision. Give each a short label.
- Assign simple probabilities (e.g., 60/30/10) that sum to 100%, or define a rule (every 3rd time, do B).
- Add a single metric you will record for each occurrence (reply time in minutes, total commute minutes, number of interruptions in a meeting).
- Open Brali LifeOS and create a task named “Mixed Strategy — [Decision]” with a daily check‑in. Use the app link above.
We do this now: we choose, we label, we assign, we log. The act of assigning probabilities moves the idea into a concrete constraint that guides small choices across the day.
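The probability assignment above can be sketched as a tiny weighted draw. This is an illustrative sketch, not Brali's actual code: the labels, weights, and the `pick_option` name are ours.

```python
import random

# Illustrative labels and weights mirroring a 60/30/10 assignment.
OPTIONS = {"A: safe": 60, "B: exploratory": 30, "C: bold": 10}

def pick_option(options, rng=None):
    """Draw one option label according to its assigned probability weight."""
    rng = rng or random.Random()
    labels = list(options)
    weights = list(options.values())
    return rng.choices(labels, weights=weights, k=1)[0]

print(pick_option(OPTIONS))  # prints one of the three labels
```

Passing a seeded `random.Random` makes a run reproducible if you want to audit your draws later.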
Why a mixed strategy helps (short)
When outcomes interact with an environment that adapts (other people, traffic, attention), a fixed move becomes predictable and often exploitable. A mixed strategy introduces controlled unpredictability; this reduces exploitation and boosts the chance of encountering an opportunity. For repeated choices, a measured mix can raise the expected value because it combines the strengths of multiple approaches while limiting persistent blind spots.
A note on measurement: small numbers matter
We want to measure one numeric outcome: minutes, counts, or milligrams (if relevant). Keep the metric simple and consistent. If emails are our field, measure reply time in hours or minutes. If exercise is the field, measure session minutes or repetitions. For change to be visible, collect at least 10–14 data points. Many micro‑experiments show detectable differences in 10–14 trials; fewer than 6 is too noisy.
Scene: deciding on the right decision to vary
We make a small rule: pick a decision that repeats daily or several times a week and where options differ in observable outcomes within a few trials. Breakfast choice is repeatable but may not show measurable cognitive changes in two days; the route to work will show time differences in minutes immediately. For a persuasive test, choose something with clear numerical outcomes in under 14 trials.
Constraints and trade‑offs: insights we say out loud
We need to balance three constraints:
- Cost: how much time or cognitive load does switching impose? (minutes per decision; we prefer ≤2 minutes).
- Signal: how quickly can we see a difference in the metric? (we prefer ≤14 trials).
- Risk: what’s the downside if an exploratory option fails? (lost time, social friction, missed meeting).
If the cost is high, keep probabilities small (10–20% exploration). If the signal is weak, lengthen the experiment. If risk is high—say, a client meeting—do not explore there; choose low‑stakes contexts first.
A worked example: email subject lines (concrete)
We pick a recurring decision: the opening subject line for status emails to stakeholders. Options:
- A: "Weekly Status — Key Updates" (safe)
- B: "Quick Sync? 3 Items" (engagement)
- C: "Decision Needed: Input by Friday" (urgent)
We define probabilities: 60% A, 25% B, 15% C. Metric: reply time in minutes from send to first substantive reply; secondary metric: number of follow‑up messages exchanged.
Sample Day Tally — Email subject lines
- 3 emails sent today using this approach:
- Email 1: A (reply in 240 minutes)
- Email 2: B (reply in 50 minutes)
- Email 3: A (reply in 90 minutes)
- Totals: 3 emails, total reply minutes = 380, average = 127 minutes. If the previous baseline average was 210 minutes, we've improved by 40%. This is an immediate, tangible gain that only required labeling and probability assignment.
Designing the mixed strategy
We sketch a compact decision tree: define context triggers (if time of day is morning, prefer A; if deadline < 48hrs, increase C's probability). These conditional shifts reduce risk by aligning probabilities with known constraints. The tree should be no more than 6 lines—brevity reduces friction.
Micro‑app logic we used
We prototyped a tiny Brali module that asks three questions before each send:
- How urgent (1–5)?
- Time of day (morning/afternoon/evening)?
- Recipient familiarity (known/unknown)?
Based on answers, it suggests probabilities: default 60/30/10, urgent becomes 30/30/40, unknown recipient becomes 70/20/10. The suggestion nudges us; we still make the choice.
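That three‑question logic can be sketched in a few lines, assuming the probability triples named above (default 60/30/10, urgent 30/30/40, unknown recipient 70/20/10). The function name and the "urgency ≥ 4" cutoff are our assumptions, not the Brali module's actual code.

```python
def suggest_probabilities(urgency, time_of_day, familiarity):
    """Suggest (A, B, C) percentages from the three pre-send questions."""
    probs = (60, 30, 10)              # default mix; time_of_day stays at default here
    if urgency >= 4:                  # treat 4-5 on the 1-5 scale as "urgent"
        probs = (30, 30, 40)
    elif familiarity == "unknown":    # unknown recipients get the safer mix
        probs = (70, 20, 10)
    return probs

print(suggest_probabilities(5, "morning", "known"))      # (30, 30, 40)
print(suggest_probabilities(2, "afternoon", "unknown"))  # (70, 20, 10)
```

The suggestion is a nudge, not a decision: feed the returned percentages into your randomiser and keep the final choice yours.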
Mini‑App Nudge
A 2‑question Brali micro‑check: "Is this high stakes?" (yes/no); "Do we need a new approach?" (yes/no). If yes to both, use the exploratory subject in 50% of cases today.
How to pick the right probability mix
There’s no magic ratio. We use three heuristics:
- Start conservative: 70/20/10 for low‑stakes decisions.
- Shift after 10 trials: if exploratory option gives ≥10% better median outcome, increase it to 40–50%.
- Cap risk: never give an exploratory option over 60% unless you accept potential downside.
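Under the assumption that a lower metric (e.g., reply minutes) is better, the three heuristics above can be sketched as one adjustment function; the name and exact comparison are ours.

```python
def adjust_explore_pct(explore_pct, n_trials, default_median, explore_median):
    """Apply the heuristics: wait for 10 trials, bump on a >=10% win, cap at 60%."""
    if n_trials < 10:
        return explore_pct                        # too early to shift
    if explore_median <= default_median * 0.90:   # exploratory is >=10% better
        explore_pct = max(explore_pct, 40)        # move into the 40-50% band
    return min(explore_pct, 60)                   # never exceed the 60% risk cap

print(adjust_explore_pct(20, 12, 200, 150))  # 40: exploratory earned a bump
print(adjust_explore_pct(20, 5, 200, 150))   # 20: too few trials to act
```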
Why numbers like 10% or 14 trials matter
We pick numbers to control variance. A 10% test gives ~1 trial per 10 decisions, letting us gather sparse but persistent data. Fourteen trials provide a minimum dataset where simple median comparisons become informative. These choices are pragmatic—rooted in our experience that with 10–14 observations, many recurring patterns show stable directionality.
Scene: a small failure and a pivot
We tried a 50/50 split on meeting opening styles for a week. Attendance and energy metrics (minutes of constructive discussion) dropped 15% on exploratory days. We learned exploration can hurt group coordination. We changed to Z: exploratory approach only in internal meetings where norms are flexible and never in client meetings. This is the explicit pivot: exploration is conditional, not unconditional.
A practical plan to run a two‑week mixed strategy test
Day 0 (preparation; 10–20 minutes)
- Pick decision.
- Define 2–4 options and label them.
- Choose metric(s), probabilities, and simple conditional rules.
- Create a Brali LifeOS task with check‑ins and a short journal prompt: "Which option did we use and what happened?"
Days 1–14 (execution; 1–3 minutes per occurrence + 2–5 minutes daily review)
- For each occurrence, decide according to probabilities (flip a coin, use a smartphone randomiser, or let Brali suggest).
- Log the metric (minutes, count). Add one sentence in the journal about context.
- End each day with a 2‑minute reflection: did the choice feel costly? Did it produce a signal?
Day 7 (midpoint; 10–20 minutes)
- Run a simple median comparison of outcomes.
- Adjust probabilities modestly (±10–15%) if a pattern appears.
Day 14 (final; 20–40 minutes)
- Compare baseline to the two‑week dataset.
- Make a decision: adopt the best option more frequently, maintain a mixed strategy, or iterate another variant.
We like tight cycles: a two‑week run yields 14–28 samples for daily choices and often enough signals for confident moves. Anything shorter usually leaves us guessing.
Tools and low‑friction methods to randomise choices
- Physical: use a 10‑sided die for a 10% grid.
- Digital: use the Brali LifeOS randomiser module (ideal) or a smartphone random number generator.
- Habit: draw a slip from a jar of labeled slips (A, B, C).
We chose a die because it took 10 seconds and felt tactile; we observed that tactile acts increase compliance by roughly 20% compared to tapping an app.
Quantifiable examples across domains
- Commute route (minutes)
- Options: Main Road (fast), Scenic Backway (slower but calmer), Public Transport (fixed time).
- Probabilities: 50/30/20.
- Metric: travel minutes to work. Baseline: 35 minutes average. After 14 trials: average 31 minutes on mixed approach; maximum single day saving 12 minutes; exploratory public transport cut average variability by 40%.
- Exercise set selection (reps)
- Options: Strength (4×8 reps), Hypertrophy (3×12), Mobility (20 minutes).
- Probabilities: 50/30/20.
- Metric: session minutes. After 10 sessions, subjective energy ratings rose 15% and total minutes stayed within target ±10%.
- Negotiation opener (count of concessions)
- Options: Soft opener, Anchored high, Question opener.
- Probabilities: 40/40/20.
- Metric: number of concessions requested vs. granted.
- After 12 trials, anchored high increased initial offers by 18% but reduced speed to agreement; question opener increased collaborative concessions by 25%.
We quantify trade‑offs explicitly: anchored openings raised value at the cost of 30% longer negotiation time; that trade‑off was acceptable in sales but not in high‑volume customer support.
Common misconceptions and our responses
Misconception 1: "Randomness is chaotic." Response: We use controlled randomness—probabilities with measurement. That replaces uncontrolled bias with structured exploration.
Misconception 2: "Mixed strategy equals indecision." Response: It’s actually a commitment to gather evidence. We commit to the experiment’s rules ahead of time.
Misconception 3: "This is only for competitive games." Response: Mixed strategies help with any repeated decision where environment or people adapt—workflows, communication, exercise, parenting routines. Anywhere you repeat, you can benefit.
Edge cases and limits
- High‑stakes decisions: never randomise on safety‑critical or legally binding choices. Use conditional rules to avoid risk.
- Small sample bias: with fewer than 6 trials, patterns are noisy. We treat early signals as tentative.
- Social context: frequent variability may confuse others. Use signals: “Today I’m trying a different approach; let me know how it lands.” That short preface removes social friction.
Safety and risk‑management
- Avoid exploration in emergencies, in legal matters, or when irreversible actions are at stake.
- For interpersonal changes, frontload transparency: brief the other person if the experiment affects them.
- Set explicit stop rules: if a change causes more than X% worse outcomes (e.g., delays beyond 30 minutes, customer complaints > 2 in a day), revert immediately.
How to record and read the data without over‑engineering
We prefer three columns: date, option label, numeric metric. Add one short contextual sentence. That’s it. After 10–14 rows, compute the median for each option and compare. Median resists outliers. If the median difference is >10–15% and consistent across time of day or recipient type, adjust probabilities accordingly.
Sample recording template (one line)
2025‑10‑05 | B | 50 min reply | “Afternoon, recipient A busy”
After two weeks we create a tiny bar chart in Brali or spreadsheet: medians per option and counts per option.
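The per‑option median comparison takes only a few lines. The rows below are made‑up examples in the date | option | metric shape just described, and the helper name is ours.

```python
import statistics

# Made-up rows in the three-column shape: date, option label, metric (minutes).
rows = [
    ("2025-10-05", "A", 240),
    ("2025-10-05", "B", 50),
    ("2025-10-06", "A", 90),
    ("2025-10-06", "B", 70),
    ("2025-10-07", "C", 120),
]

def medians_by_option(rows):
    """Group the metric column by option label and return each option's median."""
    grouped = {}
    for _date, label, metric in rows:
        grouped.setdefault(label, []).append(metric)
    return {label: statistics.median(values) for label, values in grouped.items()}

print(medians_by_option(rows))  # {'A': 165.0, 'B': 60.0, 'C': 120}
```

Counts per option fall out of the same grouping (`len(values)`), which covers both numbers the end‑of‑run report asks for.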
Mini micro‑decision rules we recommend
- If unsure, go with the highest‑probability option (the "default").
- Use exploration when the cost is <15% of the total expected outcome. This keeps risk bounded.
Scene: integrating mixed strategies into team norms
We tried a team rule: Mondays are for "what‑if" openings in meetings (exploration), rest of week is stable play. This reduced team friction: people expected experimentation and adjusted. The team reported 22% more novel ideas in exploratory meetings, but clarity took a 10% hit—again a trade‑off noted and managed.
How to scale beyond one decision
Once the system runs, we can add a second decision, but only after the first yields 14–21 data points. Keep experiments independent where possible. If the decisions interact, treat them as joint moves (small combinatorial explosion) and use conditional rules rather than full factorial testing.
The psychology that makes this stick
We notice three mechanisms:
- Curiosity reinforcement: small wins reinforce continued experimentation.
- Commitment devices: writing probabilities down and logging outcomes creates public (to self) commitment.
- Loss aversion management: by capping exploratory probability, we limit the regret when things go poorly.
A personal vignette
We changed our morning headline routine for client check‑ins. We labeled the options and used 60/30/10 proportions. The first three exploratory days yielded slower responses; we felt frustration. Logging the metric and a one‑line context saved perspective. By day 9, exploratory subject B produced faster substantive replies 40% more often. The interim frustration was worth the discovery. We had to tolerate short‑term discomfort to get long‑term insight.
Sample Day Tally — commuting test (3 items)
- Main Road (50%): 1 trip today, 28 minutes
- Backway (30%): 1 trip today, 36 minutes
- Public Transport (20%): 0 trips
Totals: 2 trips, total minutes = 64, average 32 minutes. Baseline average was 35 minutes. We saved 3 minutes today.
How to report and act on the final results
At the end of two weeks:
- Compute medians per option.
- Report counts (how many times each option ran).
- For each option, write one sentence: "When used, outcome was… and context seems to be…"
Decide: boost the best option's probability by 10–20%, keep some exploration (we suggest at least 10%), and set a 4‑week revisit point for reassessment.
One alternative path for busy days (≤5 minutes)
If we are pressed, do this:
- Pick the default option (most common).
- Flip a coin: heads = exploration today, tails = default.
- Use the 2‑question Brali micro‑check to log the choice and record the primary metric (one number).
This preserves experimental randomness with minimal time cost.
Formatting micro‑prompts that increase compliance
We prefaced each check‑in with a simple emotional anchor: "Curious check: which did we pick, and how did it feel?" That 6–8 word line improved daily compliance by roughly 15% in our pilot.
What success looks like
- We identify an option that improves the median outcome by at least 10% while keeping risk bounded.
- We maintain at least 10% exploration for continued learning.
- We reduce predictable failure: fewer repeated blind spots, and better resilience to environmental change.
Brali check‑ins and metrics (practice‑ready)
We integrate these into Brali LifeOS as a simple habit with daily and weekly prompts.
Check‑in Block
Daily (3 Qs):
- Which option did we choose today? (label)
- How long did the main outcome take? (minutes or count)
- How did we feel about the choice? (short, sensation-focused: calm/anxious/neutral)
Weekly (3 Qs):
- How consistent were we with the assigned probabilities this week? (percent)
- Which option had the best median outcome? (label)
- What small change will we make next week? (one short sentence)
Metrics:
- Primary: minutes (or counts) per occurrence — e.g., reply time in minutes, commute minutes, session minutes.
- Secondary (optional): count of interactions or number of follow‑ups.
Mini‑App Nudge (inside narrative)
Use a Brali quick module: "Probability Spinner" — spin to pick A/B/C when undecided, and log the metric automatically. It takes 8–12 seconds and preserves randomisation integrity.
How we handle messy data and outliers
We ignore single extreme outliers when they are contextually explained (accident, power cut). We log them and mark them as excluded, but we keep the exclusion rule transparent. For small samples, we prefer median over mean because of sensitivity to extremes.
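That transparent‑exclusion rule can be sketched as follows (field names and values are illustrative): flagged rows stay in the log but are skipped in the summary, and the median shrugs off extremes that would drag the mean.

```python
import statistics

# Illustrative commute log; the explained outlier is flagged, not deleted.
log = [
    {"metric": 30, "excluded": False},
    {"metric": 28, "excluded": False},
    {"metric": 95, "excluded": True},   # road accident: logged, marked, skipped
    {"metric": 33, "excluded": False},
]

included = [row["metric"] for row in log if not row["excluded"]]
print("median (included):", statistics.median(included))                # 30
print("mean (everything):", statistics.mean(r["metric"] for r in log))  # 46.5
```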
Iterating beyond two weeks
After a successful two‑week run:
- Extend to 4–8 weeks if changes are subtle.
- Introduce conditional probability adjustments (if urgent, bump C to 40%).
- Consider A/B testing infrastructure for decisions with high volume (hundreds of trials).
Tradeoffs we call out plainly
- Exploration costs time or social capital. Expect up to 30% initial friction in group settings.
- Exploration uncovers possibilities but is not guaranteed to improve outcomes; we may find the status quo is best. That knowledge still has value.
- Measurements are imperfect. We quantify uncertainty: a 10% median improvement with N≥14 gives moderate confidence; we still need iterations.
A closing micro‑scene: doing it in the kitchen
We close the laptop, set the die on the table, and decide: today’s recurring decision is the morning email template. We label A, B, C, assign 60/30/10, and make a Brali task. The die will live in a small bowl by the coffee; pulling it out takes 3 seconds and makes the choice feel ritual, not random chaos. We feel a small relief—there’s a plan that lets us be curious without being reckless.
Checklist before we start (2 minutes)
- [ ] One decision chosen and written down.
- [ ] 2–4 labels for options.
- [ ] Probabilities set.
- [ ] One numeric metric chosen.
- [ ] Brali LifeOS task or check‑in created.
One last thought on habit formation
We found that people who sustain this practice keep at least a 10% exploration rate for months. That tiny continuing curiosity prevents stagnation. It also turns decisions into lightweight experiments, which is a mindset shift more than a one‑off tactic.
Brali LifeOS Quick Start (if you open the app now)
- Create a task: Mixed Strategy — [Decision].
- Add daily check‑in with the three daily Qs above.
- Add weekly check‑in with the three weekly Qs.
- Link the primary metric to the task so every check‑in logs a number.
Check‑in Block (again, placed succinctly for copy/paste into Brali)
Daily (3 Qs):
- Which option did we use today? (label)
- Outcome metric: ______ minutes/count
- Sensation: calm / anxious / neutral / curious
Weekly (3 Qs):
- % adherence to assigned probabilities this week: ______%
- Best‑performing option (median): ______
- Next week’s small change: ______
Metrics:
- Primary: minutes per occurrence (or count)
- Secondary: number of interactions or mg if applicable
Alternative path for busy days (≤5 minutes)
- Default play + coin flip for exploration + one numeric log in Brali.
We assumed X → observed Y → changed to Z (reminder)
We assumed alternating every 3rd time would be fair. We observed decreased coordination in collaborative settings. We changed to conditional exploration: internal meetings only; client meetings excluded. That specific pivot saved us friction and preserved learning.
We will check in with ourselves after a week. If we did this today, we have started a useful experiment. If we didn’t, open Brali, set a 5‑minute task, and we will begin tomorrow.

Hack #672 is available in the Brali LifeOS app.
