Rabbi Marc Kraus

The End of Human Arrogance

We’ve created a world in which our value systems are based upon human significance and human experience. We’ve established systems where human lives, without exception, are safeguarded by all manner of laws and rights. At the same time, non-humans are treated in ways so appalling that we prefer to close our eyes rather than take in the full horror of what we do.

When we look to the future, one fact becomes inescapable. Our technology is outpacing the myth that we are the pinnacle of creation. We are already manufacturing prosthetic limbs that can be controlled by the mind. We already have artificial human organs. We can already (illegally) tamper with DNA and clone human beings. How long before a race of superhumans comes along and begins to notice our inferiority?

I’ve been reading Homo Deus, the sequel to Yuval Noah Harari’s bestselling book Sapiens. In the first book he covered the history of our species; in the second, he looks towards our future. Harari asks:

You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins. It’s not a perfect analogy, of course, but it is the best archetype we can actually observe rather than just imagine. [1]

At the same time, software that teaches itself already runs much of our financial markets, finds in seconds answers that have taken humans years to develop, and beats the best human players at esports. Google, Alexa, Siri and Facebook - all now part of our lives - are racing to best predict what we want next, and they are getting better and better at it. How long before these computer algorithms “wake up?” The scary answer is that many experts predict computer consciousness or self-awareness within the next forty years.

Tesla CEO Elon Musk has said in interviews:

“Most people don’t understand just how quickly machine intelligence is advancing, it’s much faster than almost anyone realized, even within Silicon Valley…”
“If there is a superintelligence whose utility function is something that’s detrimental to humanity, then it will have a very bad effect… it could be something like getting rid of spam email… well the best way to get rid of spam is to get rid of… humans.” [2]

Physicist Stephen Hawking has spoken at length about this in several interviews:

“Artificial intelligence could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”
“I fear that artificial intelligence may replace humans altogether… if people design computer viruses, someone will design artificial intelligence that improves and replicates itself. This will be a new form of life that outperforms humans.” [3]

To make clear just how real this threat is, last August more than one hundred technology leaders signed an open letter to the United Nations, calling on it to ban the development and use of artificially intelligent weaponry. [4] Doomsday scenarios are not inevitable - it’s possible that with careful regulation and oversight, none of these scenarios will come to be - yet the threat is very real.

In our own lives, these predictions should also give us pause. Perhaps we should reconsider our own relationship with planet Earth and the other species with which we share it, because the truth is we might not always be around. We are just another blip in the history of our planet, or as Abraham once put it, “simply dust and ashes.”

[1] Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow
