Enterprise Leadership • Module 23

Autonomous Decision Making

When AI acts without human approval

"Dear Marilyn,

How much autonomy should we give AI testing systems? I'm nervous about letting machines make decisions.

— Control Conscious"

Marilyn Responds:

Your nervousness is appropriate. Autonomy without accountability is dangerous.

The key is graduated autonomy: the AI makes low-risk decisions automatically and escalates high-risk ones to humans. As trust builds over time, the autonomy boundary can expand.

Full autonomy is a destination, not a starting point.

Autonomy Levels

Graduated autonomy framework:

  • Level 0 — AI suggests, human decides everything
  • Level 1 — AI decides routine matters, human approves exceptions
  • Level 2 — AI acts autonomously within defined boundaries
  • Level 3 — AI expands its own boundaries based on performance
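The levels above can be sketched as an escalation gate. This is a minimal illustration, not a prescribed implementation: the level names, the `Decision` fields, and the 0.3 risk boundary are all assumptions chosen for the example.

```python
from dataclasses import dataclass
from enum import IntEnum

class AutonomyLevel(IntEnum):
    SUGGEST_ONLY = 0    # Level 0: AI suggests, human decides everything
    ROUTINE_AUTO = 1    # Level 1: AI decides routine matters, exceptions escalate
    BOUNDED_AUTO = 2    # Level 2: AI acts autonomously within defined boundaries
    SELF_EXPANDING = 3  # Level 3: AI expands its own boundaries over time

@dataclass
class Decision:
    risk: float     # assumed precomputed: 0.0 (trivial) .. 1.0 (critical)
    routine: bool   # is this a routine, well-understood action?

def requires_human(level: AutonomyLevel, d: Decision,
                   risk_boundary: float = 0.3) -> bool:
    """Return True if this decision must be escalated to a human."""
    if level == AutonomyLevel.SUGGEST_ONLY:
        return True                # humans decide everything
    if level == AutonomyLevel.ROUTINE_AUTO:
        return not d.routine       # only exceptions escalate
    # Levels 2 and 3: act autonomously while risk stays inside the boundary
    return d.risk > risk_boundary
```

For example, `requires_human(AutonomyLevel.ROUTINE_AUTO, Decision(risk=0.1, routine=True))` returns `False` (the AI acts alone), while the same decision at `SUGGEST_ONLY` escalates. Widening `risk_boundary` as trust builds is the "expand gradually" step Marilyn describes.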

Quick Check: Module 23

Question: What's the safest approach to AI autonomy?

a) Full autonomy from the start

b) No autonomy ever

c) Graduated autonomy based on risk and trust

d) Random autonomy levels

(Answer: c — Start small, build trust, expand gradually.)