One-liner: Design a practical AI governance framework for a team or project, covering when to use AI, how to verify outputs, and what requires human judgment.
Pick a real team, project, or organization you work with. You're going to design an AI usage framework they could actually adopt.
Step 1 – Map the AI touchpoints. Send this prompt:
I'm designing an AI governance framework for a [team type/project type] that does [describe the work]. Map out all the places where team members might use AI in their workflow. For each touchpoint, classify the risk level:
- Low risk: AI errors are easily caught and consequences are minor (e.g., drafting internal emails, brainstorming)
- Medium risk: AI errors could waste significant time or create confusion (e.g., research summaries, data analysis, first drafts of client deliverables)
- High risk: AI errors could cause reputational, legal, or financial harm (e.g., published content, financial recommendations, legal language, customer-facing decisions)
Present this as a table with: Touchpoint | Description | Risk Level | Why
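The table is the deliverable, but if you expect the framework to evolve, it can also help to keep the risk map in a machine-readable form that later steps (and any tooling) can reference. Here's a minimal sketch in Python; the touchpoint names, descriptions, and rationales are illustrative examples of what the prompt might produce, not prescribed entries:

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"        # errors are easily caught; consequences are minor
    MEDIUM = "medium"  # errors could waste significant time or create confusion
    HIGH = "high"      # errors risk reputational, legal, or financial harm

@dataclass
class Touchpoint:
    name: str
    description: str
    risk: Risk
    rationale: str  # the "Why" column from the table

# Hypothetical entries for a content-marketing team; substitute your own risk map.
RISK_MAP = [
    Touchpoint("internal_drafts", "Drafting internal emails and notes", Risk.LOW,
               "Errors are caught by the author before anything leaves the team"),
    Touchpoint("research_summaries", "Summarizing sources for project briefs", Risk.MEDIUM,
               "A missed or invented citation can quietly mislead later work"),
    Touchpoint("published_content", "Copy that ships to customers", Risk.HIGH,
               "Errors are public and carry reputational or legal exposure"),
]
```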
Step 2 – Design the verification tiers. Based on the risk map, create a tiered verification system:
Based on the risk map above, design a 3-tier verification system:
Tier 1 (Low risk): What's the minimum verification needed? What can proceed without review?
Tier 2 (Medium risk): What checks are required? Who reviews? What's the turnaround expectation?
Tier 3 (High risk): What's the full review process? Who signs off? What documentation is needed?
For each tier, specify:
- Verification steps (checklist)
- Who is responsible
- What "approved" looks like
- What happens when issues are found
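If you captured the risk map in structured form, the tier system can hang off it directly, so "which checks apply?" has exactly one answer per touchpoint. A sketch continuing the earlier snippet (it assumes the `Risk` and `Touchpoint` definitions above; the tier contents are placeholders, not the checks your team will actually agree on):

```python
from dataclasses import dataclass

@dataclass
class Tier:
    checklist: list[str]   # verification steps
    reviewer: str          # who is responsible
    approval: str          # what "approved" looks like
    on_issue: str          # what happens when issues are found

# Placeholder tier definitions; replace with your team's real Step 2 output.
VERIFICATION_TIERS = {
    Risk.LOW: Tier(
        checklist=["Author skims the output for obvious errors"],
        reviewer="Author (self-review)",
        approval="Author is satisfied; no sign-off required",
        on_issue="Fix it and move on",
    ),
    Risk.MEDIUM: Tier(
        checklist=["Spot-check claims against sources",
                   "Second reader skims for gaps or confusion"],
        reviewer="One peer, within a working day",
        approval="Peer replies 'checked' on the thread",
        on_issue="Return to the author with specific flags; re-review the fix",
    ),
    Risk.HIGH: Tier(
        checklist=["Line-by-line verification against primary sources",
                   "Domain owner review (legal, finance, brand)",
                   "Record what was AI-generated and what was verified"],
        reviewer="Named approver for the domain",
        approval="Written sign-off attached to the deliverable",
        on_issue="Block release until resolved; log the miss for the next revision",
    ),
}

def required_tier(touchpoint: Touchpoint) -> Tier:
    """Look up the verification tier a touchpoint falls into via its risk level."""
    return VERIFICATION_TIERS[touchpoint.risk]
```

Keying the tiers on the risk level rather than on individual touchpoints means reclassifying a touchpoint automatically changes its required checks, which keeps the policy consistent as the risk map grows.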
Step 3 – Write the team guidelines. Now produce the actual document:
Write a 1-page "AI Usage Guidelines" document for this team. It should be practical, not corporate. Include:
- When to use AI – Green light scenarios
- When to be careful – Yellow light scenarios with required verification
- When NOT to use AI – Red light scenarios or scenarios requiring explicit approval
- Verification standards – The tier system from above, simplified
- Attribution – When and how to disclose AI usage
- Escalation – What to do when you're unsure whether AI use is appropriate
Write it in the tone of a senior colleague giving practical advice, not a legal department issuing mandates.
Step 4 – Red-team the framework. Test it:
Now role-play as a team member who wants to use AI in a gray area. Come up with 3 realistic scenarios where the guidelines are ambiguous or where a reasonable person might interpret them differently. For each scenario, suggest how to clarify the guideline.
Revise the guidelines based on the edge cases.
"Done" looks like: A complete, practical AI governance framework (risk map + tiered verification + 1-page guidelines) that you could present to your team, plus documentation of edge cases you tested against.
Individual verification habits (from EP-Intermediate-01) don't scale to teams. When five people use AI differently with different standards, the team's AI output quality is only as good as the weakest link. A governance framework creates shared standards without bureaucracy – it tells people what's safe to do quickly and what requires care, without making every AI interaction feel like a compliance exercise. This is also the document organizations will pay for: a practical, calibrated AI usage policy that actually gets followed.
You've reached the advanced level for Ethical Prompting & Judgment.