Evidence & Experiments

Case Studies

MetalHatsCats builds workflow systems, structured knowledge assets, and AI-ready products for complex work.

Evidence snapshots of how we build workflow systems, ship structured data, and turn delivery work into reusable assets. Each study links to live surfaces, supporting datasets, or implementation artifacts.

Why these matter

These pages are not testimonials. They are proof objects that show how specific systems decisions turned into reusable surfaces, visibility, or operational leverage.

Best-fit reader

Operators, delivery leads, product owners, and buyers who want to see what kind of artifacts and outcomes MetalHatsCats actually produces.

Where to go next

Use these studies to branch into services, enterprise pages, datasets, or product surfaces that match your problem shape.

Bonihua — Dataset-driven SEO/GEO on Two Domains

Challenge

Ship a dataset-first learning platform that lives on two production domains (.ru and .by) without canonical/OG/sitemap drift — while staying static-export friendly and legible to AI crawlers.

Approach

  • Centralized base URL resolution (env + request headers) so canonicals/OG/hreflang never mix domains.
  • Rendered indexable dataset hubs + entity pages from JSONL (validated with Zod), keeping filter states non-indexed by default.
  • Added explicit AI discovery entry points (llms.txt, catalog endpoints, feeds) and a static /public/ai catalog for crawlers.
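The first bullet above can be sketched in TypeScript. This is a minimal illustration, not the project's actual code: the domain names, `SITE_HOST` variable, and function names are placeholders standing in for the real .ru/.by hosts and env configuration.

```typescript
// Placeholder hosts standing in for the two real production domains.
const KNOWN_HOSTS = new Set(["example.ru", "example.by"]);
const DEFAULT_HOST = "example.ru";

// Resolve one base URL per request from the Host header, falling back to an
// env-configured default for static-export builds where no header exists.
function resolveBaseUrl(hostHeader?: string): string {
  const host = hostHeader?.toLowerCase().split(":")[0];
  const resolved =
    host && KNOWN_HOSTS.has(host) ? host : process.env.SITE_HOST ?? DEFAULT_HOST;
  return `https://${resolved}`;
}

// Canonical and hreflang alternates derive from the same resolved base, so a
// page rendered on the .by domain never emits a .ru canonical (and vice versa).
function canonicalFor(path: string, hostHeader?: string) {
  const base = resolveBaseUrl(hostHeader);
  return {
    canonical: `${base}${path}`,
    alternates: {
      "ru-RU": `https://example.ru${path}`,
      "ru-BY": `https://example.by${path}`,
    },
  };
}
```

The point of centralizing this in one function is that every metadata surface (canonicals, OG tags, sitemaps) calls the same resolver, so the two domains cannot drift apart.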

Outcome

A single codebase now produces a crawlable, schema-rich site across two domains, with deterministic sitemaps/robots and an AI-facing discovery layer.

  • 2 production domains, 1 codebase
  • 513 app-router pages (project snapshot)
  • 50 JSONL datasets (project snapshot)
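The JSONL validation step mentioned in the approach uses Zod in the real project; the dependency-free stand-in below (with hypothetical field names) shows the shape of the check: each line must parse as JSON and carry the fields an entity page needs before it is rendered.

```typescript
// Hypothetical minimal entity row; the real schemas are richer.
interface EntityRow {
  id: string;
  title: string;
  language: string;
}

// Type guard standing in for a Zod schema's safeParse.
function isEntityRow(value: unknown): value is EntityRow {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.title === "string" &&
    typeof v.language === "string"
  );
}

// Parse a JSONL payload; invalid rows are collected with line numbers instead
// of being silently dropped, so dataset drift surfaces at build time.
function parseJsonl(raw: string): { rows: EntityRow[]; errors: string[] } {
  const rows: EntityRow[] = [];
  const errors: string[] = [];
  raw.split("\n").forEach((line, i) => {
    if (!line.trim()) return;
    try {
      const parsed: unknown = JSON.parse(line);
      if (isEntityRow(parsed)) rows.push(parsed);
      else errors.push(`line ${i + 1}: missing required fields`);
    } catch {
      errors.push(`line ${i + 1}: invalid JSON`);
    }
  });
  return { rows, errors };
}
```

Failing the build on `errors.length > 0` is what keeps the rendered dataset hubs deterministic.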

PPA — Scene Protocol Adoption Sprint

Challenge

Help consultants stick to deep-work rituals by forcing clarity on objectives, kill-switch decisions, and tangible outputs.

Approach

  • Bundled Anti-Waste Gate checkpoints into a single scene builder with required KPI, client, and artifact fields.
  • Introduced dual kill-switch timers with keep/pivot/stop evidence logging to reduce wasted deep-work reps.
  • Published Service datasets to document the protocol methodology and ensure consistency across team deliveries.
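The scene builder and kill-switch logging described above can be sketched as a data model. All field and function names here are illustrative, inferred from the description, not PPA's actual schema.

```typescript
// Each checkpoint records an explicit keep/pivot/stop call with evidence.
type KillSwitchDecision = "keep" | "pivot" | "stop";

interface Scene {
  objective: string;
  kpi: string;
  client: string;
  plannedArtifact: string;
  decisions: { atMinute: number; decision: KillSwitchDecision; evidence: string }[];
}

// Anti-Waste Gate: return the names of required fields that are still empty;
// a scene with a non-empty result is not allowed to start.
function validateScene(scene: Scene): string[] {
  const required: [string, string][] = [
    ["objective", scene.objective],
    ["kpi", scene.kpi],
    ["client", scene.client],
    ["plannedArtifact", scene.plannedArtifact],
  ];
  return required.filter(([, value]) => value.trim() === "").map(([name]) => name);
}

// Log a kill-switch decision with its supporting evidence, building the
// on-record trail that makes abandoned sessions visible.
function logDecision(
  scene: Scene,
  atMinute: number,
  decision: KillSwitchDecision,
  evidence: string,
): void {
  scene.decisions.push({ atMinute, decision, evidence });
}
```

The design choice is that the gate runs before the timer starts: clarity on objective, KPI, client, and artifact is a precondition, not a retrospective.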

Outcome

Teams running PPA scenes reported a 29% reduction in abandoned deep-work sessions and shipped on-record artifacts 91% of the time.

  • 29% reduction in abandoned sessions
  • 91% of sessions ship an artifact
  • 24h value check-ins logged 78% of the time

Metkagram — Annotated Text Corpus and NLP-Ready Learning Data

Challenge

Turn language-learning content into a reusable annotated-text system that supports product UX, public previews, structured exports, and search-visible dataset pages.

Approach

  • Modeled Metkagram documents as structured learning records with language, collection, public preview links, timestamps, and annotation counts.
  • Maintained annotated texts, grammar patterns, and dialogue-like learning units so the corpus could power both app experiences and crawlable dataset surfaces.
  • Published the Metkagram Library as an open dataset so the annotated corpus can be cited, cataloged, and reused beyond the app interface.
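The document model in the first bullet can be sketched as a record type plus the summary a dataset landing page needs. Field names below are illustrative stand-ins for Metkagram's actual schema.

```typescript
// An annotation-aware learning record: one annotated text in the library.
interface AnnotatedDocument {
  id: string;
  language: string;          // e.g. "de", "fr"
  collection: string;        // which library collection the text belongs to
  previewUrl?: string;       // public preview link, when the text is published
  createdAt: string;         // ISO-8601 timestamp
  annotations: { span: [number, number]; label: string }[];
}

// Derive the summary fields a crawlable dataset page needs: totals,
// annotation count, and a sorted per-collection index.
function summarize(docs: AnnotatedDocument[]) {
  const byCollection = new Map<string, number>();
  for (const doc of docs) {
    byCollection.set(doc.collection, (byCollection.get(doc.collection) ?? 0) + 1);
  }
  return {
    total: docs.length,
    annotationCount: docs.reduce((n, d) => n + d.annotations.length, 0),
    collections: [...byCollection.keys()].sort(),
  };
}
```

Keeping annotations as labeled spans over the source text is what lets the same corpus power both in-app UX and NLP-style structured exports.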

Outcome

Metkagram now demonstrates practical experience with annotated text, language-learning corpora, and NLP-adjacent data modeling rather than only app UI delivery.

  • multilingual annotated library
  • public dataset landing page
  • annotation-aware document model

Common signals across the case studies

  • Structured content beats vague publishing when the goal is citation, reuse, and discovery.
  • Internal linking and stable entity pages are part of the product, not a marketing afterthought.
  • Machine-readable assets become more valuable when paired with human-readable landing pages.
  • The strongest outcomes come from turning one solved problem into a reusable operating surface.