How to Use the TRIZ Contradiction Matrix to Find Principles That Can Resolve Your Specific Conflict (TRIZ)
Apply the Contradiction Matrix
How to Use the TRIZ Contradiction Matrix to Find Principles That Can Resolve Your Specific Conflict — Hack №432
At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it.
We approach this as a practice, not a puzzle. TRIZ — the Theory of Inventive Problem Solving — gives us patterns that appear across thousands of patents and solutions. The Contradiction Matrix is a tool within TRIZ that maps a specific conflict (for example: we want faster performance without losing quality) to a short list of inventive principles that have solved similar conflicts elsewhere. Our goal in this long‑read is simple: by the end of one session today, we should have a clearly stated contradiction, three candidate principles from the matrix, an immediate micro‑experiment to test one principle for 10–60 minutes, and a quick way to record what happened.
Hack #432 is available in the Brali LifeOS app.

Background snapshot
TRIZ originated in the Soviet Union in the 1940s–50s from systematic studies of patents. Researchers looked for repeatable patterns in inventive solutions across domains. The Contradiction Matrix distills trade‑offs (improve X, worsen Y) into recommended principles. Common traps: we pick principles that sound clever but are fuzzy, we fail to translate them to our context, and we try to apply many principles at once. Many attempts fail because people don't test a single principle quickly; they overdesign. What changes outcomes: being precise about the contradiction, committing to one micro‑experiment for 10–60 minutes, and logging measurable change.
Why this helps: The matrix focuses our creativity toward proven patterns, reducing wasted ideation by about 30–70% compared with unguided brainstorming in engineering studies. If we treat it as a micro‑experiment engine, not a magic map, we gain usable directions quickly.
We will move through a thinking process that reads like a short workshop we do together. We will narrate small choices, trade‑offs, constraints, and one explicit pivot: We assumed X → observed Y → changed to Z. Keep a pen, open Brali LifeOS to the TRIZ module, and plan to commit 30–90 minutes today.
Part 1 — Starting with the right contradiction (15–30 minutes)
We begin by making a factual statement about the trade‑off, not a wish: “We want A to be better, but B gets worse.” The most common mistake is being vague. “I want faster” is a wish. “I want build time reduced from 40 minutes to 15 minutes, without increasing post‑build defects beyond 1 per 1000 lines” is precise.
Step 1: Pick the system level
We choose one level: component, subsystem, product, process, or organization. If we pick the wrong level, we will get principles that don't fit. For example, if our friction is team meetings, a product-level contradiction (speed vs. quality of a device) will produce irrelevant engineering principles. We prefer the process level for workflows and the subsystem level for product features.
Decision micro‑scene: We read the calendar and see two half‑days blocked. We choose "process" for today's example because the issue comes from a build pipeline. This choice matters because many matrix entries suggest physical manipulation or separation, which fits physical products better; if our level is abstract, we translate the principles metaphorically. We assumed physical interpretations would map easily → observed that the metaphors became vague → changed to applying physical principles as process metaphors (segmentation as "split the pipeline into stages").
Step 2: State the parameters as numbers
TRIZ uses 39 technical parameters (e.g., weight, speed, temperature). We map our A and B to the closest parameters and, where possible, add numbers. We resist the temptation to invent parameters. If none of the 39 fit exactly, pick the nearest and note the gap.
Example mapping:
- Want faster build time: "Speed of operation" (Parameter 9) — current 40 min → target 15 min.
- Don't want more defects: "Reliability" (Parameter 27) — current defect rate 2 per 1000 lines → target ≤1 per 1000.
If numbers are unknown, measure roughly: run a build today and time it; count defects from the last 10 builds and compute rate. The precision need not be scientific; ±20% is fine.
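To keep the mapping honest, it helps to write the contradiction down with its numbers attached. Here is a minimal sketch in Python; the class and field names are ours, purely for illustration, and the parameter numbers follow the classical 39-parameter list.

```python
from dataclasses import dataclass

@dataclass
class ParameterTarget:
    """One TRIZ parameter with a measured baseline and a target value."""
    triz_number: int   # index in the classical 39-parameter list
    name: str
    baseline: float
    target: float
    unit: str

@dataclass
class Contradiction:
    """Improve `improving` without worsening `preserving`."""
    improving: ParameterTarget
    preserving: ParameterTarget

# Our build-pipeline example, stated with rough numbers (plus or minus 20% is fine).
contradiction = Contradiction(
    improving=ParameterTarget(9, "Speed of operation", baseline=40, target=15, unit="min"),
    preserving=ParameterTarget(27, "Reliability", baseline=2.0, target=1.0, unit="defects per 1000 lines"),
)
print(f"Improve #{contradiction.improving.triz_number} ({contradiction.improving.name}) "
      f"without worsening #{contradiction.preserving.triz_number} ({contradiction.preserving.name})")
```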
Step 3: Choose the contradiction direction
We must specify whether we improve A at the cost of B, or vice versa; the matrix recommends principles for each direction separately. For our example: improve Speed (A) while not worsening Reliability (B). In TRIZ language: "Improving parameter 9 (Speed) while preventing deterioration of parameter 27 (Reliability)."
Decision micro‑scene: We had earlier written "improve speed and quality"; then we realized TRIZ expects a conflict statement. We reframe: "We want it faster without making it less reliable." Reframing clarifies what we will search for.
Why precision matters: In practice, selecting the wrong pair yields irrelevant suggestions. In one project we assumed "reduce weight vs. durability" but the real conflict was "reduce cost vs. durability." The matrix entries diverged and suggested different principles; the team wasted a day exploring segmentation when cost‑reduction measures were cheaper.
Part 2 — Using the Contradiction Matrix (10–20 minutes)
We open the matrix (the Brali module includes a digital version, a chooser for the 39 parameters, and quick descriptions). If you use a printed copy, keep a highlighter.
Step 4: Locate the A parameter row and B parameter column
The matrix is a 39×39 grid. Where the A row intersects the B column, we get up to four recommended inventive principles (numbered 1–40). Digital tools usually return 3–5 principles with a ranking.
Example: For Speed (9) vs. Reliability (27), the matrix may suggest principles such as 1 (Segmentation), 10 (Prior Action), 13 (The Other Way Round), and 35 (Parameter Changes); the exact numbers vary between matrix editions. We record the principle names and numbers.
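Mechanically, the lookup is just a table access once the two parameters are fixed. A minimal sketch, assuming a hypothetical `MATRIX` dictionary; the entry shown mirrors the illustrative principles above, not a specific published edition.

```python
# Minimal lookup sketch. MATRIX is a hypothetical excerpt; real matrices
# are 39x39 and entries differ slightly between published editions.
PRINCIPLE_NAMES = {
    1: "Segmentation",
    10: "Prior Action",
    13: "The Other Way Round",
    35: "Parameter Changes",
}

MATRIX = {
    # (improving parameter, parameter not to worsen) -> recommended principles
    (9, 27): [1, 10, 13, 35],  # Speed vs Reliability (illustrative entry)
}

def recommended_principles(improving: int, preserving: int) -> list[str]:
    """Return principle names for a contradiction, or an empty list if no entry."""
    return [PRINCIPLE_NAMES.get(p, f"Principle {p}") for p in MATRIX.get((improving, preserving), [])]

print(recommended_principles(9, 27))
# ['Segmentation', 'Prior Action', 'The Other Way Round', 'Parameter Changes']
```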
Small translation task: For each principle, write one sentence about how it might apply to our system level. This forces translation from abstract to concrete. If we can't write one sentence, that principle might be too abstract or misaligned.
We list them quickly:
- Segmentation: Split the build into smaller independent stages that can run in parallel or be cached.
- Prior Action: Do preparatory steps before the critical path (precompile, prefetch) so the build spends less time on them.
- The Other Way Round: Reverse order of some steps; perform integration checks later but in a sandbox.
- Parameter Changes: Alter the conditions under which we build (e.g., use warmed caches, increase memory) to reduce build time without changing code.
After listing, we pause. Translation clarifies whether each is plausible. For us, Segmentation and Prior Action look practical. The Other Way Round sounds risky for reliability, so it's lower priority.
Part 3 — Choosing a principle to test (5–10 minutes)
We only pick one to test immediately. Why? Because rapid iteration beats multipronged attempts that dilute learning.
Criteria to choose:
- Ease of implementation within 10–60 minutes.
- Potential measurable impact (reduce build time by at least 10–30%).
- Low risk to the metric we want to protect.
We check trade‑offs: Segmentation (creating smaller build units) may require reconfiguration that takes more than 60 minutes. Prior Action (pre‑compiling headers, warming caches) can be done in 10–30 minutes by adding a prefetch step or a scheduled job. We pick Prior Action.
Decision micro‑scene: We notice the team already has nightly builds; we add a small script to prefetch dependencies and precompile heavy files in the morning. We assumed segmentation would be fastest to show benefit → observed that restructuring pipelines would take ~4 hours → changed to Prior Action because it's a quicker micro‑experiment.
Part 4 — Designing the micro‑experiment (10–20 minutes)
We design an experiment that isolates the effect of the principle. A good experiment has:
- A clear action (what we do).
- A measurement plan (what we measure and how).
- A control or baseline (what we compare to).
Action: Add a morning precompile job that runs for 15 minutes at 08:00 and warms the build cache for the main pipeline.
Measurement plan:
- Measure build time (minutes, to nearest second) for the next 10 builds triggered after 08:15.
- Measure the post‑build defect count: count failing tests plus any regressions discovered in integration within the next 24 hours; we use "failing tests" as the proxy metric (a count).
- Log CPU and memory usage (optional) but note if build servers spike.
Baselines:
- Record the previous 10 builds' average time: 40 min (we measured earlier).
- Record average failing tests per build: 5 failing tests per build (we'll convert to defect rate later).
Operational constraints:
- The precompile job must not run during peak hours that block team work; we schedule it before 09:00.
- The job should be reversible; we keep the script in a branch and can disable it.
We set thresholds:
- Success if median build time among the next 10 builds falls by ≥20% (i.e., from 40 min → ≤32 min).
- Safety check: failing tests should not increase by more than 10% (from a baseline of 5 per build; since counts are integers, we treat ≤6 as within tolerance).
We note that 10 builds give us some sensitivity but not statistical certainty. This is a pragmatic test.
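The thresholds are easy to encode so that interpretation after the 10th build takes seconds rather than a debate. A minimal sketch, assuming we paste the measured build times (minutes) and failing-test counts in by hand.

```python
from statistics import median

BASELINE_MEDIAN_MIN = 40.0     # previous 10 builds
BASELINE_FAILING_TESTS = 5.0   # average failing tests per build

def evaluate(build_minutes: list[float], failing_tests: list[int]) -> str:
    """Apply the pre-agreed thresholds: at least 20% faster, failing tests up by at most 10%."""
    speedup = 1 - median(build_minutes) / BASELINE_MEDIAN_MIN
    test_growth = (median(failing_tests) / BASELINE_FAILING_TESTS) - 1
    if test_growth > 0.10:
        return "safety check failed: revert and investigate flakiness"
    if speedup >= 0.20:
        return f"success: median build time down {speedup:.0%}"
    return f"no clear effect ({speedup:.0%}): try the next principle"

# Example with made-up numbers for the 10 post-change builds.
print(evaluate([31, 33, 30, 32, 29, 34, 31, 30, 33, 32], [5, 5, 6, 5, 4, 5, 5, 6, 5, 5]))
```

If the safety check fails, we revert first and only then look at the speed numbers.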
Part 5 — Running and journaling the micro‑experiment (10–60 minutes for setup, then passive observation)
We implement the precompile script. In practice this might be 10–30 minutes: write a short script that pulls dependencies, precompiles heavy modules, and warms the build cache. We commit and run it in a controlled way.
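The exact commands depend entirely on the toolchain, so the sketch below only shows the shape of such a warm-up job; every step is a placeholder to be swapped for real prefetch, precompile, and cache-warming calls.

```python
#!/usr/bin/env python3
"""Morning warm-up job (sketch). The steps below are placeholders; substitute
whatever your toolchain actually uses to prefetch dependencies, precompile
heavy modules, and warm the build cache."""
import subprocess
import sys
import time

# Placeholder commands so the sketch runs anywhere; replace with real tool calls.
WARMUP_STEPS = [
    [sys.executable, "-c", "print('prefetching dependencies...')"],
    [sys.executable, "-c", "print('precompiling heavy modules...')"],
    [sys.executable, "-c", "print('warming build cache...')"],
]

def main() -> int:
    start = time.time()
    for step in WARMUP_STEPS:
        result = subprocess.run(step)
        if result.returncode != 0:
            # Fail loudly so the morning check-in notices a broken warm-up.
            print(f"warm-up step failed: {step}", file=sys.stderr)
            return result.returncode
    print(f"warm-up finished in {time.time() - start:.0f}s")
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
```

Keeping the job this small is the point: it is reversible, lives in a branch, and can be disabled with one change.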
Micro‑scene
We sit with a coffee, open Brali LifeOS and set a task: "Run morning precompile and record build times for next 10 builds." We set a Brali check‑in to trigger after the 10th build.
What we log in the moment:
- Time at which precompile ran.
- Build job IDs and timestamps.
- Immediate errors from precompile.
- Approximate server load.
We avoid over‑instrumenting; the point is to detect an effect, not to gather every metric.
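For the log itself, a plain append-only file is enough. This sketch writes one JSON line per observation to a local file of our own choosing; the filename and field names are ours, not a Brali export format.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("triz_experiment_log.jsonl")  # local file; not a Brali format

def log_observation(**fields) -> None:
    """Append one timestamped observation as a JSON line."""
    entry = {"ts": datetime.now(timezone.utc).isoformat(), **fields}
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example entries; the build ID is hypothetical.
log_observation(event="precompile_ran", duration_min=14, errors=0, server_load="normal")
log_observation(event="build_finished", build_id="hypothetical-1234", wall_minutes=31.5, failing_tests=5)
```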
Part 6 — Interpreting results and the explicit pivot (5–20 minutes)
After we collect 10 builds, we look. If the median build time dropped by ≥20% and failing tests stayed stable, we mark it as a success and plan a follow‑up: scale it to the team. If not, we analyze.
Example outcomes and our pivot:
- Outcome A: Median build time reduced from 40 to 30 minutes (25% reduction), failing tests unchanged (5 → 5). Interpretation: Prior Action worked. Pivot: we now plan to automate the precompile and measure across different times (we assumed morning warming is best → observe it's effective at all times → change to scheduled warming per commit).
- Outcome B: Build time reduced by 5% only, failing tests increased by 20%. Interpretation: Precompiling introduced timing or environment differences causing flaky tests. Pivot: revert precompile, investigate test flakiness; try Segmentation or Parameter Change next.
- Outcome C: No change. Interpretation: Cache warming isn't bottleneck. Pivot: try Segmentation in a 60–120 minute slot.
We narrate one explicit pivot: We assumed Prior Action would help because builds spent time downloading dependencies. After the experiment we observed that peak time network delays were the real cause; we changed to running dependency downloads from a mirrored internal server — a Parameter Change principle (principle 35) — and saw an additional 15% reduction.
Part 7 — Translating other principles into micro‑experiments (10–30 minutes)
We don't abandon the other principles. For each remaining principle from the matrix, we sketch a 10–60 minute micro‑experiment and record whether it looks feasible in our constraints.
Segmentation micro‑experiment (30–120 minutes):
- Action: Split a module's build into two separate jobs and run them in parallel.
- Measure: Wall clock time reduced for that module's part; integration tests count.
- Feasibility: Requires branching and job changes; feasible as a 60–120 minute task.
The Other Way Round micro‑experiment (10–30 minutes):
- Action: Swap sequence for two low‑dependency steps and run builds.
- Risk: May increase integration regressions.
- We decide to defer because it fails our low‑risk criterion.
Parameter Change micro‑experiment (10–60 minutes):
- Action: Increase memory allocation for the build container from 4 GB → 8 GB, and increase concurrency options.
- Measure: Build time and CPU usage.
- Feasibility: Quick change if infrastructure allows; likely safe.
We rank them by Ease × Expected Impact and choose the next one to try in 1–3 days.
Reflective sentence: Each principle becomes a focused experiment; the matrix converted abstract ideas into testable actions. We prefer to do two small experiments in a week rather than five poorly executed ones.
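To make the Ease × Expected Impact ranking explicit rather than a gut call, we can score each remaining principle from 1 to 5 on both axes. The scores below are our own rough guesses, not anything the matrix provides.

```python
# Ease and impact scored 1 (low) to 5 (high); the scores are our own rough guesses.
candidates = {
    "Segmentation":        {"ease": 2, "impact": 4},
    "The Other Way Round": {"ease": 3, "impact": 2},
    "Parameter Changes":   {"ease": 4, "impact": 3},
}

ranked = sorted(candidates.items(), key=lambda kv: kv[1]["ease"] * kv[1]["impact"], reverse=True)
for name, score in ranked:
    print(f"{name}: ease x impact = {score['ease'] * score['impact']}")
# Parameter Changes: 12, Segmentation: 8, The Other Way Round: 6
```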
Part 8 — Sample Day Tally (how to reach the target using 3–5 items)
We give a concrete sample day plan (the numbers are actionable). Suppose our target is to reduce build time from 40 min to ≤20 min across the main pipeline while keeping failing tests ≤6.
Sample Day Tally (one realistic sequence)
- 08:00–08:15 — Run Prior Action: precompile heavy files and warm cache (15 min).
- 09:00 — Trigger the primary build (after warming). Build wall time: 30 min (we record it).
- 10:00–11:00 — Run Parameter Change: increase container memory & CPU for one build (60 min to run and observe).
- Observed build time after memory bump: 24 min.
- Afternoon: Implement a partial Segmentation for the largest module (60–120 min work).
- After segmentation: module builds in parallel, observed wall time drop to 18 min on next build.
Totals for the day:
- Time invested in experiments: 15 + 60 + 90 = 165 minutes (2.75 hours).
- Measured build time reductions: 40 → 30 → 24 → 18 minutes.
- Failing tests: baseline 5 per build → observed 5–6 per build (within tolerance).
We make one quantification: with two interventions (Prior Action + Parameter Change), we achieved a 40% reduction (40→24). Adding Segmentation yielded a 55% reduction overall (40→18). These numbers are concrete and show cumulative, not additive, effects.
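The percentages are worth re-deriving; each figure compares the latest time with the original 40-minute baseline, not with the previous step.

```python
# Each reduction is measured against the original baseline, not the previous step.
baseline = 40
for label, minutes in [("Prior Action", 30), ("+ Parameter Change", 24), ("+ Segmentation", 18)]:
    print(f"{label}: {minutes} min -> {1 - minutes / baseline:.0%} below baseline")
# Prior Action: 25%, + Parameter Change: 40%, + Segmentation: 55%
```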
Part 9 — Mini‑App Nudge
Mini‑App Nudge: In Brali LifeOS, create a "TRIZ micro‑experiment" module that schedules a 15–60 minute task, tags the principle used, and sets a check‑in for after N builds. This gives us disciplined iteration.
Part 10 — Common misconceptions, edge cases, and risks
Misconception 1: The matrix gives ready‑made solutions. Reality: It suggests principles — we must translate them. Treat them as hypothesis generators with about 30–70% hit rate when translated well.
Misconception 2: The matrix only fits physical products. Reality: The principles are abstract patterns; they map to software and processes if we do careful translation. For example, "Segmentation" becomes modularization; "Prior Action" becomes precomputation or warming caches.
Edge case 1: If the conflict is qualitative (customer satisfaction vs. speed), we must find measurable proxies (NPS score, time on task). TRIZ expects numbers; choose reasonable proxies.
Edge case 2: When multiple parameters are implicated (speed, cost, reliability), prioritize one contradiction at a time. Trying to resolve three at once typically yields no clear experiment.
Risk 1: Applying a principle can introduce hidden trade‑offs. For example, Prior Action might use more compute and cost more. We quantify costs: if a precompile job uses an extra 1.5 CPU hours per day at $0.10 per CPU‑hour, that's $0.15 per day, or roughly $55 per year if it runs daily (about $45 across ~300 working days); likely acceptable. Always list these costs.
Risk 2: Changing a critical pipeline can break production. Always run experiments in branches or isolated environments first.
Part 11 — Iteration: turning one success into a repeatable pattern
If a principle works in one context, test it across contexts. We make two choices:
- If effect is large and low‑risk, standardize (automate in main pipeline, document steps).
- If effect is contextual, generalize carefully: make a template that teams can adapt.
We track learning in Brali LifeOS: each experiment gets a short template — principle used, timeframe, slots changed, numeric effect, cost, and notes. Over months, these entries create a pattern library.
Pattern library micro‑scene: We open Brali and search "Prior Action" and see three entries with median improvement 22% and one with negative effect due to flaky tests. We add an entry: “Avoid warming when tests are long‑running integration tests; prefer segmented warming.”
Part 12 — Decision rules for scaling
We use simple decision rules to standardize scaling:
- Rule 1: If the median improvement is ≥20% and the cost increase is ≤10%, automate the roll‑out.
- Rule 2: If the improvement is 5–20% and the risk is moderate, run a 2‑week pilot with monitoring.
- Rule 3: If the improvement is <5% or the risk is high, shelve the principle and document why.
We quantify thresholds because they avoid endless debate.
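The three rules translate directly into a small decision helper. Combinations the rules do not cover (for example, a big improvement with a large cost increase) default to the pilot in this sketch.

```python
def scaling_decision(median_improvement: float, cost_increase: float, risk: str) -> str:
    """Encode the three scaling rules; improvement and cost are fractions (0.20 == 20%)."""
    if median_improvement >= 0.20 and cost_increase <= 0.10:
        return "automated roll-out"
    if median_improvement >= 0.05 and risk != "high":
        # Covers Rule 2 plus gray zones the rules leave open.
        return "2-week pilot with monitoring"
    return "shelve and document"

print(scaling_decision(0.25, 0.02, risk="low"))       # automated roll-out
print(scaling_decision(0.12, 0.05, risk="moderate"))  # 2-week pilot with monitoring
print(scaling_decision(0.03, 0.01, risk="low"))       # shelve and document
```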
Part 13 — A short how‑we‑did narrative (an example case study)
We describe a realistic case to show the method in action.
We had a product team that complained: “Builds take too long; we can’t iterate.” We mapped this to TRIZ parameters: improving Speed (9) vs not worsening Reliability (27). The matrix suggested Segmentation, Prior Action, and Parameter Change.
We chose Prior Action as the first micro‑experiment because we could implement it quickly. We wrote a 12‑line script that pre‑downloaded dependencies and precompiled 3 heavy files. The script took 12 minutes to run and was scheduled at 07:45; builds were triggered after 08:00. We recorded 10 builds before and 10 builds after.
Observations:
- Baseline median build time (10 builds): 40:12.
- After precompile median: 32:05 (20% reduction).
- Baseline failing tests: mean 4.8 per build.
- After precompile failing tests: mean 5.0 per build (within 5% change).
We assumed warming caches would be sufficient. We observed that the network fetches still caused variability during peak hours; downloads from an external registry were the secondary bottleneck. We pivoted to mirror the registry internally (Parameter Change). That change took longer (a day) but decreased variability and gave another 12–15% drop.
We then prototyped Segmentation for the largest module. We created two parallel jobs for that module, which took 90 minutes to configure. After segmentation, that path's build time dropped from 16:00 to 6:40, which, together with the registry mirror, brought the overall pipeline from 32:05 down to 19:12.
Trade‑offs we logged:
- Additional compute cost: +0.2 CPU hours per main build, ~+$0.02 per build.
- Complexity: two more CI jobs to maintain, added to the maintenance backlog.
- Benefit: median iteration time roughly halved, the team reports more frequent commits, and regressions surface earlier.
What we learned:
- The matrix pointed us to a small set of promising directions. The “Prior Action” principle gave a cheap test that revealed the real bottleneck was external registry latency. We had assumed X (code compilation) → observed Y (network delays) → changed to Z (mirror registry + precompile). The sequence exemplified a pivot: small experiments revealed different underlying causes and guided the next principle.
Part 14 — Practical templates (what to put in Brali LifeOS now)
We give ready‑to‑copy short templates for the Brali task and check‑in.
Template: TRIZ micro‑experiment task (10–60 min)
- Title: TRIZ: [Principle name] test for [System area]
- Description: State the precise contradiction (A improves / B shouldn't worsen). Add numbers.
- Steps: 1) Implement quick action (script, config change) 2) Trigger N builds 3) Record times 4) Note failing tests
- Timebox: 30–90 min
- Tags: TRIZ, [principle], experiment
Template: TRIZ micro‑experiment journal entry
- Baseline: median build time (N builds), failing tests
- Action: what changed (include commit ID)
- Result: median build time (N builds), failing tests
- Cost estimate: compute, maintenance
- Next step: scale / pivot / shelve
We suggest making these templates inside Brali LifeOS for reproducibility.
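If we also want the journal outside Brali, the same template keeps well as structured data. A minimal sketch; the field names are our own choice, not a Brali schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TrizJournalEntry:
    """Mirror of the journal template; field names are ours, not a Brali schema."""
    principle: str
    baseline_median_min: float
    baseline_failing_tests: float
    action: str                # what changed, including a commit ID if any
    result_median_min: float
    result_failing_tests: float
    cost_note: str
    next_step: str             # "scale" | "pivot" | "shelve"

entry = TrizJournalEntry(
    principle="Prior Action",
    baseline_median_min=40.2, baseline_failing_tests=4.8,
    action="Morning precompile job (hypothetical commit abc123)",
    result_median_min=32.1, result_failing_tests=5.0,
    cost_note="+0.2 CPU hours per build (~$0.02)",
    next_step="scale",
)
print(json.dumps(asdict(entry), indent=2))
```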
Part 15 — Check‑in Block (use this in Brali or copy)
We include a structured check‑in block to place in Brali LifeOS. Place this near the end of your task or as a recurring check.
Check‑in Block
- Daily (3 Qs) — sensation/behavior focused
  - What immediate effect did we notice? (minutes saved, failing‑test count)
- Weekly (3 Qs) — progress/consistency focused
  - What will we scale next week? (one decision)
- Metrics: numeric measures to log
  - Build time (median minutes)
  - Failing tests (count per build)
Part 16 — One simple alternative path for busy days (≤5 minutes)
If we have ≤5 minutes, we can still use TRIZ in a minimal way:
- Write a one‑sentence contradiction: "Reduce build time from ~40 min to ≤30 min without failing tests exceeding 5."
- Open the Brali TRIZ quick chooser; map parameters (Speed vs Reliability) and capture the top 1–2 principles suggested.
- Create a 5‑minute task: set up a single prefetch command or toggle a build cache flag and schedule it.
- Add a Brali quick check: after next build, record build time and failing tests.
This keeps momentum and builds a habit of hypothesis → quick test.
Part 17 — Risks, limitations, and ethical considerations
Maintenance debt
Many principles that speed things up add complexity. We always log additional maintenance efforts as a soft cost; if the maintenance is likely to exceed the benefit within 6–12 months, we set a sunset plan.
Part 18 — What success looks like after one month
If we run 4–8 micro‑experiments in a month, we expect:
- 1–2 principles that consistently reduce our primary metric by ≥15–25%.
- A checklist and automation for those principles.
- A documented trade‑off log for at least one principle (costs and maintenance).
- A mini pattern library entry in Brali LifeOS with the principle, micro‑experiment steps, and quantitative effect.
This is pragmatic progress: measurable improvement and institutional memory.
Part 19 — Wrap up: how we do this again tomorrow
We end with a short plan for continuing the habit:
- Each day, spend 10–30 minutes on one TRIZ micro‑experiment. Use Brali to schedule the task and set the check‑in.
- After 10 experiments, do a weekly synthesis: which principles scaled, which failed.
- Keep the rule set (≥20% for auto‑rollout) to avoid analysis paralysis.
We keep a humane pace: two experiments a week may be sufficient to change behavior without overloading the team.
We close with a small encouragement: if we do one precise micro‑experiment now, we will have real data by the end of the day. Small pivots beat perfect plans.
