One-liner: Build a personal AI verification system (a checklist you'll actually use) and stress-test it against real AI outputs to find its limits.
You're going to build a verification checklist, then immediately try to break it.
Step 1: Generate something to verify. Ask AI to produce a piece of content you might actually use in your work:
Write a [deliverable type, e.g., client email, project proposal, market analysis, technical recommendation] about [topic relevant to your work]. Make it detailed and specific. Include data points, recommendations, and reasoning.
Step 2: Build your checklist. Before reading the output carefully, write your own verification checklist. Start with these categories and add your own:
| Check | Question | Pass/Fail |
|---|---|---|
| Factual claims | Are specific numbers, dates, or statistics verifiable? | |
| Sources | Could I find the original source for any cited information? | |
| Reasoning | Does the logic hold? Are there hidden assumptions? | |
| Completeness | What important perspective or consideration is missing? | |
| Tone/audience | Is the tone appropriate? Would the intended audience trust this? | |
| Actionability | Are the recommendations specific enough to actually follow? | |
| Your domain check | [Add a check specific to your field] | |
| Your domain check | [Add another check specific to your field] | |
Step 3: Apply the checklist. Go through the AI output line by line using your checklist. Mark each check as pass or fail. For every fail, note what the issue is.
Step 4: Stress-test the checklist. Now deliberately ask AI to produce something harder to verify:
Write the same type of [deliverable] but on a topic I'm less familiar with: [topic outside your expertise]. Make it equally detailed and authoritative.
Apply your checklist again. Where does it fail to catch problems? What check do you need to add?
Step 5: Finalize. Update your checklist based on what you learned. Save it where you'll actually use it: bookmark it, pin it, print it, whatever works.
"Done" looks like: A tested, refined verification checklist (8-12 items) saved in a usable format, with evidence of at least one error it caught and one gap you identified and fixed.
In EP-Basic-01, you built a simple 3-question verification prompt. Here, you're building a systematic process: a checklist that works regardless of topic, catches both factual and reasoning errors, and is tuned to your specific work context. The community's 75% Ethical Prompting score means most people intend to verify AI output but lack a consistent method. A checklist turns good intentions into reliable behavior. At the advanced level, you'll scale this into a governance framework for a team; this exercise builds the individual practice first.
Ready for more? Try EP-Advanced-01, where you'll design an AI governance framework for a team or project.
Back to Ethical Prompting & Judgment