On March 1st, 1954, the United States conducted a thermonuclear test that went horribly wrong. Castle Bravo1 produced an explosion with the power of 15 megatons of TNT, 2.5 times more powerful than expected and roughly 1,000 times greater than either of the bombs dropped on Hiroshima and Nagasaki. The Japanese would call it a “Second Hiroshima”, and the fallout from the blast, combined with an unexpected shift in the winds, put it on a path toward more than 15,000 people.
The United States covered up the extent of the explosion, denying that the fallout was outside of acceptable ranges. That was until Sir Joseph Rotblat began publishing a series of papers detailing how powerful the bomb actually was. He had the credentials as well, being a former member of the Manhattan Project. He had left the project once it became clear that Germany was no longer developing nuclear weapons, on grounds of conscience, and would spend the rest of his life advocating against the use of nuclear bombs. This work would ultimately lead to him receiving the Nobel Peace Prize in 19952.
Nine years after Castle Bravo, the Partial Test Ban Treaty (PTBT) was ratified, banning military and civilian nuclear tests in the atmosphere, in space, and underwater. It did not ban tests underground. The Comprehensive Nuclear-Test-Ban Treaty3 would follow in 1996, adopted by the United Nations, though it has yet to enter into force.
According to the Atomic Archive4, no country other than North Korea has conducted a nuclear test in almost a quarter century. In 2023, everyone is well aware of the ramifications of nuclear war, and while not all superpowers have dismantled their arsenals, the circumstances under which those weapons would be used are so grave that no one would be around afterward to protest their use.
Enter Artificial Intelligence 🤖
Artificial intelligence is not a nuclear bomb, but the impacts it could have on human life are no less profound. Artificial intelligence and machine learning already power most of our tech5. And with AI compute engines becoming more common in hardware like phones and laptops, it’s becoming even easier to accelerate those workloads. With the rise of deepfakes, AI-generated art, and large language models, the public is finally becoming aware of how pervasive the technology is, and some have begun to raise concerns. In 2018, Elon Musk was quoted as saying:
The danger of AI is much greater than the danger of nuclear warheads… By a lot.
This is ironic considering he helped fund OpenAI, the company whose ChatGPT product has people concerned about the tool’s more dangerous capabilities. The arms race between generating artificial content and detecting it has begun, and currently the generators are winning. Schools and other academic institutions are already banning ChatGPT due to the threat it poses to learning. Not only does AI have the potential to shake up many industries, as it already has, it also has the potential to put a lot of people out of work.
Many people believed that the first jobs to be automated would be the menial ones: whole swaths of factory workers put out of work by robots that could do the same tasks without pay or rest. It turns out that the effortless way humans operate in the physical world still far exceeds what the most advanced general-purpose robot can achieve. Boston Dynamics has been around since 1992, and while the demos of Atlas are impressive6, the high-pitched whirring of the fans keeping the whole unit cool for its one-minute stunt betrays the shortcomings.
But give AI the near-infinite power source of a wall outlet, the deep pockets of a large organization, thousands of expensive GPUs, and a task that doesn’t require legs, and, baby, you’ve got gold. This has the potential to displace a lot of workers really fast7, not just in the art space but also in the software space. Microsoft seems to think so8, investing a further $10 billion into OpenAI while simultaneously laying off 10,000 people9. So the question seems to be:
Who is really going to benefit from these advances in AI?
While AI can definitely make our lives easier, the more effective it is, the greater its potential for harm if used improperly. And I’m not talking about Skynet. Humans are fascinated with the concept of something else ending our existence: sentient robots, aliens, an asteroid, giant kaiju. It’s all so big and obvious in the movies. But that isn’t what it’s really like; the truth is far more insidious.
Enter the Military 🪖
The reality is that, unlike the weapons of the past, advances in the civilian field of AI are much easier to transfer to a military context. Facial recognition, coordinated New Year’s drone displays, object identification, and many other technologies currently being developed have obvious military applications. The more effective and general the tools become, the easier it is to apply them to domains outside their original context, including war. Military contracts usually come with huge paydays, and if a government approaches companies wanting to license the technology, it only takes one saying yes to open the floodgates.
In fact, this has already happened. In June of 2019, the Israeli defense company Rafael fitted their alluringly named SPICE bombs (Smart, Precise Impact, Cost-Effective) with automatic target recognition10. The press release states:
The newly-unveiled ATR feature is a technological breakthrough, enabling SPICE-250 to effectively learn the specific target characteristics ahead of the strike, using advanced AI and deep-learning technologies.
There have been multiple agreements among the majority of countries on the regulation of nuclear bombs. But in a world that is becoming more divided every day, it seems unlikely that we could get the majority of countries to reach consensus about AI weapons. And as with climate change, if everyone isn’t on board, it doesn’t work. In fact, in 2016 the Future of Life Institute presented an open letter at the International Joint Conference on Artificial Intelligence (IJCAI)11, urging a ban on the use of autonomous weapons. To date, all of the world’s superpowers have rejected it. But laws around AI will require the work of many AI ethics specialists, many of whom are hard at work thinking about how we can use AI safely.
The inconvenience of ethics ⚠️🚧
Timnit Gebru was co-lead of the Ethical Artificial Intelligence team at Google, along with Margaret Mitchell. After two years at Google, she left the company. Google said she resigned; she said she was fired. Regardless of the reason, the circumstances that set her departure in motion came from a paper she co-authored called On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?12.
The paper documents the problems with large language models, a subject even more popular now than it was two years ago. It highlights many problems with LLMs, some of which I’ve summarized below.
The Environmental Impact. Larger models have carbon footprints the size of a trans-American flight.
Large undocumented training sets. Most training sets are filtered, but the filtering is usually done using a machine learning model trained to filter out negative rhetoric. They are too large to be reviewed by human beings, so we just hope they are working as intended.
Encoding Bias. Language changes over time and has different connotations depending on the context. It has been shown that large language models can encode bias and amplify it to degrees disproportionate to the amount represented in the dataset.
Too costly. Large language models require access to large quantities of compute and data. This makes them cost-prohibitive for smaller companies, further cementing the dominance of large software companies.
Disinformation. The output of these models is highly convincing, even when incorrect. This could lead to misinformation propagating quickly.
The concerns of the paper can be neatly summarized by the quote:
Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is fantasy
But that is not to say it is all doom and gloom. Many of the issues presented in the paper come with proposed alternatives or solutions. There is also hope that, even if the laws haven’t caught up to them, companies working on AI could implement some of these suggestions, creating AI technologies that are more ethical.
It remains to be seen how the proliferation of AI will impact our lives. Google13 and Microsoft14 are currently locked in a battle for search engine AI dominance, pushing the field forward at a rapid pace. Whether laws will proactively clamp down on AI before it gets out of hand, or whether they will end up being “written in blood” as they have been in other fields, is an open question. Regardless, humans have been able to make rapid changes in a short amount of time when under high duress, and while I hope we can be more proactive than that, I’m confident we will be able to effect meaningful change in this space before it’s too late. The consequences of failing are just too high.
Call To Action 📣
If you made it this far, thanks for reading! I’m still new here and trying to find my voice. If you liked this article, please consider liking and subscribing. And if you haven’t, why not check out another article of mine!
The real weapon is the nudge, and AI is a wonderful steering wheel. Watch in awe as a large school of fish moves about while a predator swims through them. The nudge can reduce human action to large herd movement. Style, politics, war? You name it.