thinking about thinking.
MetaCogni is the browser layer for AI subscriptions. It reads the prompt before you send, predicts output tokens, diagnoses weak instructions, and shows the limit that matters for Claude, ChatGPT, or Gemini — because flat subscriptions still cost you in time, quality, and caps.
Free during beta · on-device prompt analysis · no prompt logging
A small extension that thinks about your thinking.
Four browser-side checks run before you hit send: intent, output size, limit pressure, and prompt quality. Your prompt stays on the page you typed it on.
Shows the cost before the thought leaves.
Every prompt has a likely shape: short factual answer, long explanation, code fix, audit, comparison, or creative draft. MetaCogni estimates that shape before you commit the message.
Classifies intent without sending the prompt away.
A local intent classifier recognizes factual, yes/no, math, code-gen, code-fix, explain, creative, list, comparison, conversational, and general asks. The browser does the first read.
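A browser-side classifier like this can be sketched as a small set of keyword heuristics. The rules and category order below are illustrative assumptions, not MetaCogni's actual model; the point is that the whole check runs in the page, so the prompt never leaves the browser.

```typescript
// Hypothetical sketch of a local intent classifier. Category names mirror
// the list above; the regex rules are illustrative, not the real model.
type Intent =
  | "factual" | "yes/no" | "math" | "code-gen" | "code-fix" | "explain"
  | "creative" | "list" | "comparison" | "conversational" | "general";

function classifyIntent(prompt: string): Intent {
  const p = prompt.toLowerCase();
  // Rules are checked in priority order; first match wins.
  if (/\b(fix|debug|error|stack trace|not working)\b/.test(p)) return "code-fix";
  if (/\b(write|implement|generate)\b.*\b(function|class|script|code)\b/.test(p)) return "code-gen";
  if (/\b(explain|why does|how does)\b/.test(p)) return "explain";
  if (/\b(compare|versus|vs\.?|difference between)\b/.test(p)) return "comparison";
  if (/\b(list|enumerate|top \d+)\b/.test(p)) return "list";
  if (/^(is|are|can|does|do|should|will)\b/.test(p)) return "yes/no";
  if (/[\d)(+\-*\/^=]{3,}|\b(solve|calculate|integral)\b/.test(p)) return "math";
  if (/\b(story|poem|tagline|slogan)\b/.test(p)) return "creative";
  if (/\b(who|what|when|where)\b/.test(p)) return "factual";
  return "general";
}
```

A real classifier would be more robust than keyword matching, but even this shape shows why no network call is needed for the first read.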
Tracks the meter that actually bites.
Claude has 5-hour burn, weekly model quotas, and extra-usage billing. ChatGPT has rolling message caps. Gemini has daily Pro limits and long-context quality risk. MetaCogni separates those instead of pretending every plan is dollars.
Diagnoses the prompt, then prescribes the nudge.
Brain Says recommends the best directive to append. Could Be Better flags missing format, scope, success criteria, or context risk. The goal is fewer wasted tokens and better answers.
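A diagnosis pass of this kind can be sketched as a short list of checks, each mapping a weakness to a suggested directive. The checks and wording below are hypothetical stand-ins for the actual Brain Says / Could Be Better rules.

```typescript
// Hypothetical sketch: flag common prompt weaknesses and suggest a nudge.
// The specific checks are illustrative assumptions, not the shipped rules.
interface Nudge { flag: string; suggestion: string }

function diagnosePrompt(prompt: string): Nudge[] {
  const nudges: Nudge[] = [];
  // Missing format: no hint about the shape of the expected answer.
  if (!/\b(as a list|in a table|json|markdown|bullet)\b/i.test(prompt))
    nudges.push({ flag: "no format", suggestion: "Say how the answer should be shaped, e.g. 'as a bulleted list'." });
  // Thin context: very short prompts tend to produce generic replies.
  if (prompt.split(/\s+/).length < 8)
    nudges.push({ flag: "thin context", suggestion: "Add the error, file, or constraint the model needs." });
  // Missing success criteria: nothing says what a correct answer must do.
  if (!/\b(should|must|so that|until|succeeds?|passes)\b/i.test(prompt))
    nudges.push({ flag: "no success criteria", suggestion: "State what a correct answer must do." });
  return nudges;
}
```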
See your session burn in real time.
The overlay watches the prompt box, estimates the reply before send, and updates the relevant meter: Claude session burn and weekly quotas, ChatGPT message caps, or Gemini daily/context pressure.
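A pre-send estimate like this can be approximated by mapping the predicted reply shape to a rough output-token range. The shapes, ranges, and prompt-length bias below are illustrative defaults, not MetaCogni's actual estimator.

```typescript
// Hypothetical sketch: map a predicted reply shape to an expected
// output-token count before send. All numbers are illustrative.
type ReplyShape =
  | "short-factual" | "explanation" | "code-fix"
  | "audit" | "comparison" | "creative-draft";

const TOKEN_RANGES: Record<ReplyShape, [number, number]> = {
  "short-factual": [20, 150],
  "explanation": [300, 1200],
  "code-fix": [200, 900],
  "audit": [600, 2500],
  "comparison": [250, 1000],
  "creative-draft": [400, 2000],
};

function estimateOutputTokens(shape: ReplyShape, promptTokens: number): number {
  const [lo, hi] = TOKEN_RANGES[shape];
  // Longer prompts tend to draw longer answers; bias toward the high end.
  const bias = Math.min(promptTokens / 1000, 1);
  return Math.round(lo + (hi - lo) * bias);
}
```

The estimate then feeds whichever meter applies: session burn on Claude, message count on ChatGPT, daily queries or context pressure on Gemini.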
Fix this React hydration bug. I pasted the component and server log above.
Cache is warm. Keep going — context is cheap right now.
Lives where you already do
One extension. Three different limit systems.
Claude burns sessions, weekly quotas, and sometimes extra-usage dollars. ChatGPT spends rolling message caps. Gemini spends daily Pro queries and long-context quality. MetaCogni keeps those meters separate.
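Keeping the meters separate amounts to modeling each provider's limit system as its own type rather than a single dollar figure. A sketch, with hypothetical field names:

```typescript
// Hypothetical sketch: one variant per limit system, so each provider's
// meter stays distinct. Field names and wording are illustrative.
type LimitMeter =
  | { provider: "claude"; kind: "session-burn"; windowHours: number; usedPct: number }
  | { provider: "claude"; kind: "weekly-quota"; model: string; usedPct: number }
  | { provider: "chatgpt"; kind: "rolling-messages"; used: number; cap: number }
  | { provider: "gemini"; kind: "daily-queries"; used: number; cap: number }
  | { provider: "gemini"; kind: "context-pressure"; contextTokens: number; softLimit: number };

function describeMeter(m: LimitMeter): string {
  switch (m.kind) {
    case "session-burn": return `Claude ${m.windowHours}h session: ${m.usedPct}% burned`;
    case "weekly-quota": return `Claude weekly ${m.model}: ${m.usedPct}% used`;
    case "rolling-messages": return `ChatGPT: ${m.used}/${m.cap} messages in window`;
    case "daily-queries": return `Gemini Pro: ${m.used}/${m.cap} queries today`;
    case "context-pressure": return `Gemini context: ${m.contextTokens}/${m.softLimit} tokens`;
  }
}
```

A discriminated union makes the "never pretend every plan is dollars" rule structural: there is no shared currency field to collapse into.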
Next adapter layer · current beta already runs local prompt analysis
Knows your plan. Tracks the limit that bites.
Each subscription fails differently. MetaCogni shows the meter you should care about right now: session burn, weekly quota, rolling messages, daily queries, or context risk.
Limits are approximate and provider-controlled. MetaCogni uses public limits where necessary and live platform signals where available.
ready when you are
Know before you send.
Then spend the good model on purpose.
MetaCogni runs across Claude, ChatGPT, and Gemini. No API keys, no signup, no prompt logging — just a quiet browser layer that estimates the next reply and tells you how to make the prompt cheaper or better.
1. Unzip the download. The folder you want is browser-extension/.
2. Open chrome://extensions and toggle Developer mode on.
3. Click Load unpacked and pick that folder. Visit Claude, ChatGPT, or Gemini — the overlay appears on supported chat pages.