Phase 5 · W45–W48

W45–W48: Case Studies (problem → approach → results)

Turn each project into a clear, evidence-backed case study that shows problem, approach, and measurable results.

Suggested time: 4–6 hours/week

Outcomes

  • A case study page for each project.
  • A clear problem statement (real AMS pain).
  • Architecture explained in plain language.
  • A concrete, system-level description of what you built, not vague AI claims.
  • Results are backed by numbers and proof artifacts.
  • Lessons learned and next improvements are documented.

Deliverables

  • 3 case studies (one per project) using a consistent template.
  • Proof artifacts included (screenshots/diagrams + sample outputs).
  • Metrics section with at least 3 numbers per case study and honest context.
  • Case studies published and linked from your site.

Prerequisites

  • W41–W44: Hardening & Documentation (README, diagrams, demos)

W45–W48: Case Studies (problem → approach → results)

What you’re doing

You stop hoping people will “get it” from your repo.

Hiring managers and clients don’t read code first.
They read:

  • the story
  • the outcome
  • the proof

Case studies turn your work into something understandable and persuasive.

Time: 4–6 hours/week
Output: 3 case studies (one per project) that explain the problem, your approach, and measurable results


The promise (what you’ll have by the end)

By the end of W48 you will have:

  • A case study page for each project
  • A clear problem statement (real AMS pain)
  • Architecture explained in plain language
  • What you built (an actual system, not just “I used AI”)
  • Results with numbers (small is fine, as long as they’re real)
  • Lessons learned + next improvements

The rule: show evidence, not vibes

Your case study must include:

  • screenshots
  • sample outputs
  • metrics
  • before/after comparison

Otherwise it reads like marketing.


Case study template (use this)

Keep every case study consistent.

1) The pain (problem)

  • What was broken / slow / expensive?
  • Who suffered (support, business, users)?
  • What did “bad” look like?

2) The goal

  • What “better” means
  • What success metric you target (even a proxy)

3) The approach (architecture)

  • diagram
  • data flow
  • key decisions (why Postgres, why rules, why eval)

4) The build (what you actually shipped)

  • major components
  • APIs/scripts
  • how it runs
  • how it fails safely
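
“How it fails safely” is worth showing concretely in the build section. A minimal sketch of one common pattern, fallback-on-error plus a confidence threshold; `classify_with_llm`, the labels, and the threshold are hypothetical stand-ins, not part of any of the three projects:

```python
# Sketch of "fails safely": if the AI step errors or is unsure,
# fall back to a deterministic default instead of crashing or guessing.
# classify_with_llm and the label names are hypothetical stand-ins.

def classify_with_llm(ticket_text: str) -> tuple[str, float]:
    # Stand-in for a real model call; returns (label, confidence).
    # Here it always fails, to demonstrate the fallback path.
    raise TimeoutError("model endpoint unavailable")

def triage(ticket_text: str, threshold: float = 0.7) -> str:
    try:
        label, confidence = classify_with_llm(ticket_text)
    except Exception:
        return "needs-human-review"   # model failure: never drop the ticket
    if confidence < threshold:
        return "needs-human-review"   # low confidence: escalate, don't guess
    return label

print(triage("Invoice IDoc stuck in status 51"))  # needs-human-review
```

In a case study, one short snippet like this plus a sentence on the fallback queue says more than a paragraph of “robust and production-ready” claims.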

5) Results (numbers)

Examples:

  • reduced manual triage time by X% (even if estimated from samples)
  • top-k retrieval hit rate improved from A to B
  • DQ errors detected automatically per run (count)
  • recurring issues report found top clusters (count)
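
Metrics like these can be computed honestly from small hand-labeled samples. A minimal sketch, with a hypothetical `retrieve` stand-in and made-up benchmark data (everything here is illustrative, not real project numbers):

```python
# Sketch: computing two of the metrics above from small samples.
# retrieve(), the benchmark, and the timings are hypothetical stand-ins.

def retrieve(query: str, k: int = 5) -> list[str]:
    # Stand-in for your real retrieval call; returns top-k doc IDs.
    fake_index = {
        "vat code missing": ["kb-12", "kb-07", "kb-31", "kb-02", "kb-19"],
        "idoc stuck in 51": ["kb-44", "kb-12", "kb-03", "kb-08", "kb-21"],
        "duplicate invoice": ["kb-02", "kb-55", "kb-12", "kb-09", "kb-30"],
    }
    return fake_index.get(query, [])[:k]

# Hand-labeled benchmark: query -> the doc that should appear in the top k.
benchmark = {
    "vat code missing": "kb-07",
    "idoc stuck in 51": "kb-44",
    "duplicate invoice": "kb-99",  # a known miss keeps the metric honest
}

hits = sum(1 for q, doc in benchmark.items() if doc in retrieve(q, k=5))
hit_rate = hits / len(benchmark)
print(f"top-5 hit rate: {hit_rate:.0%} ({hits}/{len(benchmark)})")  # 67% (2/3)

# Estimated triage-time reduction from timed samples (minutes per ticket).
manual_minutes = [14, 11, 18, 9, 13]   # before: manual triage
assisted_minutes = [6, 4, 7, 5, 6]     # after: with the analyzer
before = sum(manual_minutes) / len(manual_minutes)
after = sum(assisted_minutes) / len(assisted_minutes)
reduction = (before - after) / before
print(f"estimated triage time reduction: {reduction:.0%} "
      f"(n={len(manual_minutes)})")  # 57% (n=5)
```

Reporting the sample size alongside the number (as the `n=` above does) is exactly the “honest context” the deliverables ask for.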

If you don’t have “real prod numbers”, use:

  • benchmark numbers
  • sample dataset metrics

But be honest about what they represent.
6) What I learned

  • 3 lessons
  • 3 things you’d improve next

7) Links

  • repo link
  • demo link (if any)
  • screenshots

What to write (3 case studies)

  1. AI Ticket Analyzer — triage + routing + clusters + eval
  2. SAP Data Pipeline — extraction + DQ + mapping + storage + scheduling
  3. Knowledge Base + RAG — sources + chunking + retrieval + governance + gates

Deliverables (you must ship these)

Deliverable A — 3 case studies

  • one page per project
  • consistent template

Deliverable B — Proof artifacts

  • screenshots/diagrams included
  • sample outputs included

Deliverable C — Metrics section

  • at least 3 numbers per case study
  • honest explanation of what they represent

Deliverable D — Publishing on your site

  • case studies accessible from your website
  • linked from /program or your portfolio hub

Common traps (don’t do this)

  • Trap 1: “My repo is the case study.”
    No. People need narrative.

  • Trap 2: “No numbers.”
    Without numbers, it’s not convincing.

  • Trap 3: “Too long.”
    Keep it readable. Short + evidence beats long + fluff.

Quick self-check (2 minutes)

Answer yes/no:

  • Does each case study explain the pain and the goal clearly?
  • Is architecture explained with a diagram?
  • Do I show proof artifacts (screenshots/outputs)?
  • Do I include real metrics (even benchmark metrics)?
  • Can someone understand it in 5 minutes?

If any “no” — fix it before moving on.