Technology is bestowing wonderful opportunities and benefits on the world, but the accelerating pace of development, combined with the lack of global regulatory control, represents the biggest threat going forward.
Cool toys, fancy devices and health-care cures are positive developments.
But less benign is the development, without guardrails, of artificial intelligence that matches human capability by 2029. Worse yet, this will be followed by the spectre of what’s known as general AI: machines capable of designing machines.
Other worrisome fields include synthetic biology, genetic engineering and the propagation of androids, or AIs on two legs with personalities.
Mankind has faced similar technological challenges before, notably nuclear weapons, but the famed physicist J. Robert Oppenheimer rose to that challenge.
He led the Manhattan Project that developed the atomic bomb, realized its danger, then spent decades lobbying leaders to create the Nuclear Non-Proliferation Treaty, the cornerstone of nuclear arms control, which took effect in 1970.
Oppenheimer is a big reason why humanity hasn’t blown itself to bits, but today there is no scientist of his stature devoting a lifetime to ensuring that governments bridle the transformative technologies now under development.
And the threat is greater. Bombs, after all, are controlled by human beings, not the other way around. But if AI becomes smarter than humans, then all bets are off.
The task of imposing ethics and restraints on science, technology and engineering is greater now.
Nuclear capability requires massive amounts of scarce materials, capital and infrastructure, all of which can be detected or impeded.
But when it comes to exponential tech, simply organizing governments or big corporations won’t do the trick because the internet has distributed knowledge and research capability across the globe.
This means the next pandemic-causing pathogen, hazardous algorithm or immoral human biological experiment can be produced in a proverbial “garage” or in a rogue state.
The late, legendary physicist Stephen Hawking warned in 2017: “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst.
We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it.”
Tesla CEO Elon Musk and others have been vocal about this risk, but international action is needed.
To date, these fears and ethical concerns have been addressed only in petitions and open letters signed by prominent scientists, but these have neither captured global attention nor provoked a political movement.
In 1975, the Asilomar Conference on Recombinant DNA led to biosafety guidelines, including a halt to the riskiest experiments combining DNA from different organisms.
Then, in 2015, an open letter warning about autonomous weapons powered by AI was signed by more than 1,000 luminaries, including Apple co-founder Steve Wozniak, Hawking and Musk.
The letter called for a ban on AI warfare and autonomous weapons, and eventually led to a United Nations initiative.
But four years later, the UN Secretary-General was still urging all member nations to agree to the ban; only 125 had signed.

Without robust ethical and legal frameworks, there will be proliferation and lapses. In November 2018, for instance, a rogue Chinese geneticist, He Jiankui, broke long-standing biotech guidelines and altered the embryonic genes of twin girls, ostensibly to protect them from HIV.
He was fired from his research post in China because he had intentionally dodged oversight committees and used potentially unsafe techniques.
Since then, he has disappeared from public view.
There’s little question that, as U.S. entrepreneur and engineer Peter Diamandis has said, “we live in extraordinary times.”
There is much reason for optimism, but also for pessimism.