Catalog

Battle-tested prompts

A curated library of prompts that ship results: product copy, SEO briefs, support macros, ad creative, and beyond. Fork them into your skill library, remix them to your tone, and contribute back when you have a winner.

Library

  • Prompts in catalog: 240+

  • Authors: 85

  • Hit rate: 92% (median quality score after fork)

  • To use: Free

Categories

What the library covers

  • Catalog SEO

    Title rewrites, meta descriptions, alt-text, structured data — tuned to Google + GEO (ChatGPT, Perplexity, Claude search).

  • Conversion + AOV

    PDP rewrites, hero copy, post-purchase upsell scripts, exit-intent variants. Each one ships with the success metric the author measured.

  • Ads + creative

    Meta / TikTok / Google ad variant generators, hook libraries, product-led copy. Tagged by audience and ad format.

  • Lifecycle + CRM

    Welcome flows, abandon-cart, win-back, VIP. Drop into Klaviyo with the segment definitions included.

  • Support + reviews

    Macros, public review responses, refund scripts. Tone-tuned to brand voices the author has shipped on real stores.

  • Operator chat

    Daily standups for solo founders, weekly digest prompts for @iRen, KPI scorecard generators.

Quality bar

No filler

Every prompt ships with: the model it was tuned on, the use case, expected inputs, a sample output, and the success metric the author reported. We sunset prompts that drift in quality after model upgrades — when Claude Opus 4.7 lands, every prompt gets re-evaluated within a week.
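
For contributors who think in data terms, here is a rough sketch of that required metadata as a TypeScript shape. The field names and the example entry are our own illustration under assumption, not the catalog's actual submission schema:

```ts
// Illustrative sketch only: field names are assumptions, not the catalog's real schema.
interface PromptEntry {
  model: string;            // the model the prompt was tuned on
  useCase: string;          // what the prompt is for
  expectedInputs: string[]; // inputs the prompt expects
  sampleOutput: string;     // one representative output
  successMetric: string;    // the metric the author reported
  license: string;          // "MIT" unless the author tags otherwise
  status: "live" | "needs-tuning"; // badge applied when quality drifts after a model upgrade
}

// Hypothetical example entry (values invented for illustration).
const example: PromptEntry = {
  model: "claude-opus-4",
  useCase: "PDP rewrite for AOV lift",
  expectedInputs: ["product title", "feature list", "brand voice notes"],
  sampleOutput: "…",
  successMetric: "+12% add-to-cart rate over control",
  license: "MIT",
  status: "live",
};
```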

Live catalog

Browse the library


Common questions

Frequently asked

Is it free to use prompts?
Yes. Free to fork, remix, and ship. Authors can opt into a tip jar, where users send a one-off tip if a prompt moves their numbers, but access is never gated.
What's the licensing?
All prompts are MIT-licensed unless the author tags otherwise. Attribution is encouraged but not required. Forking is the default mode of contribution.
Will my prompt break when models update?
We re-evaluate every prompt in the catalog when a major model ships (Opus 4.7, GPT-5.4, etc.). Prompts that drift get a `needs-tuning` badge until the author or community reships them.
How do I contribute?
Hit "Submit a prompt" above. Include the model, use case, expected inputs, sample output and one metric you measured. Reviewers read every submission — no third parties.

Got a prompt that ships results?

Submit it to the catalog. Free to publish, MIT by default, reviewed within 48h.