About
Why This Exists
Aviation has the NTSB. Medicine has morbidity and mortality conferences. Software has postmortems. But AI agents — systems making autonomous decisions with real-world consequences — have nothing. Failures happen quietly, root causes are never shared, and the industry learns nothing.
AgentPostmortem is a structured public ledger for AI agent failures. Every case is numbered, tagged, and searchable. The goal is not to shame vendors or teams — it's to give practitioners a shared base of evidence so we don't keep making the same mistakes.
How We Handle Privacy
All submissions can be anonymous. We automatically redact emails, phone numbers, and other PII from submitted text before it reaches our database. IP addresses are hashed with a secret pepper and never stored in plaintext. If you provide an email, it's used only to send you a one-time edit token — we do not store it long-term.
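The redaction and hashing described above can be sketched roughly as follows. This is a minimal illustration, not the site's actual pipeline: the regexes, placeholder strings, and pepper handling are all assumptions. A keyed hash (HMAC) is used rather than a plain hash because the IPv4 space is small enough to brute-force without a secret key.

```python
import hmac
import hashlib
import re

# Hypothetical pepper; in practice this would be loaded from a secrets
# manager at runtime and never committed to the repository.
PEPPER = b"server-side-secret-pepper"

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace emails and phone numbers with placeholders before storage."""
    text = EMAIL_RE.sub("[email redacted]", text)
    text = PHONE_RE.sub("[phone redacted]", text)
    return text

def hash_ip(ip: str) -> str:
    """Keyed hash of an IP address. Without the pepper, the digest cannot
    be reversed or brute-forced over the small IPv4 address space."""
    return hmac.new(PEPPER, ip.encode(), hashlib.sha256).hexdigest()
```

Real-world PII detection needs far more than two regexes (names, addresses, account numbers), but the shape is the same: scrub before the text ever touches the database, and keep only keyed digests of network identifiers.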
What Makes a Good Case Report
- A specific, reproducible incident — not a general complaint
- The instruction or prompt that triggered the failure
- What the agent actually did versus what was intended
- Concrete damages: financial, reputational, operational
- Evidence where possible (screenshots, logs)
Moderation
Every submission is reviewed before publication. We reject cases that are vague, unverifiable, or appear to be targeted harassment. Approved cases are assigned a permanent case number (APM-XXXX) and indexed immediately.
For Teams
If you run AI agents at scale and need private incident tracking, compliance exports, or API access, see our Teams offering.
Contact
For editorial questions, case disputes, or partnership inquiries: hello@agentpostmortem.com
AgentPostmortem is an independent project. We are not affiliated with Anthropic, OpenAI, Google, Microsoft, or any AI vendor listed in the case database.