humancode.us

A good way to think about automation and AI

October 2, 2025

When considering whether an automation is fit for a task, it’s worth thinking about it this way:

  • How often does it successfully do what it’s supposed to do, and how much convenience does it bring when it succeeds?
  • How often does it fail to do what it’s supposed to do, and how bad is the consequence of its failure?

The worse the consequence of failure, the lower the probability of failure has to be. When failures cost lives, even a one-in-a-billion failure rate means roughly 8 people will die if you run the automation once for every living person in the world (about 8 billion runs).
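The arithmetic behind that claim is just expected value: multiply the number of runs by the per-run failure probability. A minimal sketch (the 8-billion population figure is an approximation):

```python
def expected_failures(runs: float, failure_rate: float) -> float:
    """Expected number of failures across independent runs of an automation."""
    return runs * failure_rate

# One run per living person (~8 billion, an approximate figure),
# at a one-in-a-billion per-run failure rate:
print(expected_failures(8e9, 1e-9))  # roughly 8 expected failures
```

The same function makes the trade-off concrete in the other direction: fix the number of expected failures you can tolerate, and it tells you how low the failure rate must be for a given scale of deployment.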

AI fails to do its job at a remarkably high rate, so it should only be used in cases where its failures are merely disappointing but otherwise inconsequential.

Putting AI into systems where its failures have catastrophic consequences, such as targeting people for pre-crime surveillance or denying them financial or civil benefits, is a gross misapplication of the technology.
