Why Do You Want More
In the world of artificial intelligence, there’s a famous thought experiment called “The Paperclip Maximizer.” It predicts death, not by machine uprising (à la Terminator), but by paperclips.
 
It goes like this:
 
Imagine a machine, an intelligent machine, that has the goal of making paperclips. It would work to increase its own intelligence, not because it values intelligence intrinsically, but because the smarter it becomes, the more resourceful it becomes, and the better able it is to devise new methods of production. It would start small, bending pliable metal into loops, but over time, it would consume the Earth’s natural resources, then the sun’s, and eventually, the universe’s, all in an effort to make…paperclips.
 
Obviously—painfully so—no one needs a whole universe of paperclips.
 
To quote the theory:

“This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The [Paperclip Maximizer] won’t revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals.”

But are we so different?
 
“Love” and “joy” are vague objectives. And unlike a paperclip, there is no A + B = C formula for producing them. Instead, we devise proxies to achieve those ends, like losing weight, making more money, and buying bigger houses. We believe those things will bring us joy or love, but they don’t.
 
And when they don’t, rather than asking if our current goal is worth pursuing, we double down. We lose more weight. We buy even bigger houses and more expensive cars. We try to make more money. We want more.
 
Your goals are only as good as the intentions behind them—and more isn’t always the answer.
 
The Paperclip Maximizer illustrates a hidden danger of an innocuous AI, but it also illustrates the negative side effects of never stopping to ask “why.” Seemingly innocent objectives can lead to sinister outcomes.
