Module 2

Rough vs Sharp Decisions

Where AI Helps — and Where It Hurts

Beginner · 12-15 min

This module builds on Module 1. If you have not yet read "How AI Actually Behaves," we recommend starting there.

What happens if this is wrong?

Learn to evaluate risk before using AI. Know when AI is safe to use — and when it is not worth the gamble.

What you'll learn

  • You will stop using AI for high-stakes, irreversible decisions — and start using it where it is genuinely safe and useful.
  • Not All Decisions Are Created Equal
  • AI Shines in Exploration, Not Final Calls
  • Human Accountability Cannot Be Delegated
  • Small Errors Compound in Workflows

Lesson Outline

8 lessons · 12-15 min

Lesson 1

Introduction

A team lead uses AI to draft a performance review.

Lesson 2

Core Ideas

Not All Decisions Are Created Equal · AI Shines in Exploration, Not Final Calls · Human Accountability Cannot Be Delegated · Small Errors Compound in Workflows

Lesson 3

Visual Framework

Interactive diagram: Rough vs Sharp Matrix

Lesson 4

Real-World Examples

See how this applies with ChatGPT, Claude, and Gemini

Lesson 5

Self-Assessment

3 scenario-based questions to test your understanding

Lesson 6

Myth vs. Reality

3 common misconceptions examined

Lesson 7

Key Takeaway

The question is not "Can AI do this?" — it is "What happens if this is wrong?"

Lesson 8

Next Step

Explore the Decision Classifier

Frequently Asked Questions

Is it true that AI saves time on everything?

AI saves time on preparation and exploration. On sharp-edged decisions, rushing with AI costs more time later — in corrections, trust repair, and damage control. The time you save upfront, you pay back with interest when something goes wrong.

Is it true that if I review the output, it is safe?

Review quality depends on your expertise in the subject. AI can generate errors you lack the knowledge to catch. A well-formatted legal clause with a jurisdictional flaw looks fine to a non-lawyer. Reviewing is not enough if you cannot evaluate what you are reading.

Is it true that AI cannot cause real harm in everyday work?

A misclassified expense, a wrong date in a contract, an overlooked compliance term, a poorly worded performance review — everyday AI errors create real consequences for real people. Harm does not require dramatic failure. It accumulates through small, unchecked mistakes.

Where does the risk live?

Risk lives in the consequences, not the task. Take a board presentation: it drives real decisions. If the AI fabricates or misrepresents competitor information — revenue figures, market share, product features — the board may make strategic choices based on data that does not exist. That is a sharp-edged consequence. Formatting and length are fixable. Acting on hallucinated data is not.

What is the best approach to checking AI work at scale?

Calibrating the AI against your own judgment on a small sample reveals systematic errors before they scale. If the AI consistently classifies items differently from you on a 20-item sample — for example, downgrading complaints you would mark as critical — you know the full set needs closer attention. This approach balances efficiency with quality control, and the check itself is simple, as the sketch below shows.
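In code terms, the sample check is just a comparison of two label lists. The following is a minimal Python sketch, assuming you have already labeled a 20-item sample yourself and collected the AI's labels for the same items; the example data, the "critical"/"minor" categories, and the 90% agreement threshold are illustrative assumptions, not part of the module.

```python
# Minimal calibration sketch: compare your own labels against the AI's
# labels on a small sample before trusting the AI with the full set.

# Your judgment on a 20-item sample of customer complaints (assumed data).
human_labels = ["critical", "minor", "critical", "minor", "minor",
                "critical", "minor", "minor", "critical", "minor",
                "minor", "critical", "minor", "minor", "minor",
                "critical", "minor", "minor", "critical", "minor"]

# The AI's labels for the same 20 items (assumed data).
ai_labels = ["minor", "minor", "critical", "minor", "minor",
             "minor", "minor", "minor", "critical", "minor",
             "minor", "minor", "minor", "minor", "minor",
             "critical", "minor", "minor", "minor", "minor"]

# Overall agreement rate across the sample.
matches = sum(h == a for h, a in zip(human_labels, ai_labels))
agreement = matches / len(human_labels)

# Look for a systematic pattern, not just random noise:
# how often does the AI downgrade items you marked as critical?
downgrades = sum(1 for h, a in zip(human_labels, ai_labels)
                 if h == "critical" and a != "critical")
criticals = sum(1 for h in human_labels if h == "critical")

print(f"Agreement: {agreement:.0%}")
print(f"Downgraded criticals: {downgrades}/{criticals}")

# A simple decision rule; the threshold is an assumption, tune it to your risk.
if agreement < 0.9 or downgrades > 0:
    print("Systematic drift detected: review the full set manually.")
else:
    print("Calibration looks acceptable: spot-check the rest.")
```

Running this on the sample above reports 80% agreement and four downgraded criticals, which is exactly the kind of systematic drift the module warns about: the sample tells you the full set needs human review before anything ships.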