If SV really believes AI is an existential threat…

Paradoxically, all these scenarios, apocalyptic or preventive, come from researchers and industry leaders involved in developing the very technology they are mobilising against.

Source: Silicon Valley funds our helpless future, by Charles Perragin & Guillaume Renouard (Le Monde diplomatique – English edition, August 2018)

If Silicon Valley actually believes AI is an existential risk, they should stop developing it. If we believe they believe it, we should shut them down for continuing to develop something they think will destroy us all.

Comments

  1. Mike says:

I believe AI is a possible existential threat, but only over a millennium or so. I'd put the risk of runaway or "evil" AI at about 10% over the next thousand years, weighted heavily toward the end of that period, if I had to guess.

    That makes it a minimal risk compared to the immediate and high risks of climate change and the presence of thousands of nuclear weapons on earth, and roughly equal to the threat of a civilization-destroying asteroid strike.

    So it’s not something we should think about that much, other than for funsies. Climate change is the real existential risk worth fighting in our actual lifetimes. The rest is just a distraction, though an occasionally entertaining one.

