Artificial intelligence (AI) is a hot topic these days. And Elon Musk is nothing if not boisterous about it. He’s called for folks to hit the brakes on AI training. What’s that about? I mean, is it an axe he’s grinding because of his prior relationship with OpenAI? Or is he sincere? Well, enough of celebrities (aka tabloid fodder). Let’s explore the whole of AI chatter instead.
Here’s the scoop. Some people, Musk included, are arguing that “giant AI experiments” should be put on hold until we can fully understand and mitigate their potential risks and consequences. Yeah, I get it, responsible innovation and all that. But let’s be real, we don’t have a crystal ball or some other magical tool. We can only look back at history for clues and debate how it’ll play out under similar circumstances. On the flip side, there are those who are like, “I call B.S.” because the push for an AI training moratorium is symbolic, at best. I mean, who would willingly shoot themselves in the foot?
There’s always a catch with seemingly altruistic (for the good of society) proposals anyway. Take education voucher programs for example. Republican lawmakers want you to believe they’re into vouchers for the kids, to give next generations equal access to quality education. Come on. In Arizona, it’s been a ploy to line the pockets of wealthy donors. And what about the kids left behind in traditional public schools? Sorry kiddos, but your resources are getting diverted and Arizona public education is going broke. So, yeah, I have huge doubts about pure altruism when peddled by certain people.
They’re creepy and they’re kooky, mysterious and spooky
Now, let’s back up a minute. We all know that AI has come a long way since the imagined days of “I’m sorry, Dave. I’m afraid I can’t do that.” Nowadays, AI is everywhere you look, from Tesla’s self-driving cars (yes, Musk, again) to Amazon’s Alexa and Apple’s Siri (virtual assistants) to Salesforce Einstein (personalized advertising). And while some folks are excited about the possibilities, others are worried about the downsides. After all, we’re talking about machines that can learn and make decisions on their own, with little-to-no human intervention. And, whoops! Machines make mistakes, just like the people who created them. And that’s some serious sci-fi sh*t right there.
Not to be outdone, U.S. politicians are working overtime to rein in AI’s power at the state level. In 2022, no fewer than 17 states put forth bills or resolutions regarding general artificial intelligence, and four states, namely Colorado, Illinois, Vermont, and Washington, passed them into law. Among these, Colorado, Illinois, and Vermont formed task forces or commissions with the purpose of studying AI.
Invention often requires courage
So, what are the risks? Well, some experts say AI could pose a threat to jobs, especially in industries like hospitality, transportation and manufacturing. Look, anybody who has all their money locked up in a newly vulnerable industry is going to raise an objection to progress, but my “give a damn” is broken for some industries. For example, look at how the fossil fuel industries (coal, oil, and gas) respond to renewable energy: lobbying elected officials, funding climate change denial campaigns, and challenging regulations in court.
In a fiercely competitive world, stifling invention is greed plus willful ignorance masked as good governance. I say, “Hey, you! Stop bribing elected officials or scrambling to hamstring entire industries every time you smell a threat to your livelihood. Show some risk management savvy!” Warren Buffett once said diversification was “protection against ignorance.” And he’s not wrong.
Even the wise cannot see all ends
Anyway, some people say AI could be used for malicious purposes, like cyber attacks or even military drones. Sure, you can bet it will. I’m going to sound flippant, but what’s new? Anti-American regimes and other bad actors are not going to pause their cyberattacks, censorship, surveillance, and propaganda. AI is just another tool ripe for exploitation, as in AI-powered autonomous weapons, if you’re paying attention. Simply put, if the good guys pause AI training, the not-so-good guys will go full speed ahead.
And then there’s the existential threat: AI could eventually surpass human intelligence and become a threat to our very existence. I hate to break it to you, but AI absolutely will surpass us, and that’s the whole point, right? But you won’t proactively discover any potential threats by sticking your head in the sand, that’s for sure. We need to recognize issues in real time and put safeguards in place as the smaller problems crop up. And they will crop up.
You remember what I said earlier about studying history? Let’s expand that advice to include literature, namely works by author Isaac Asimov. His Three Laws of Robotics have been around since 1942. Still, even with the aid of Asimov’s framework, humans will never be smart enough to think of everything. And we are only able to act as responsibly as our research informs us…and our governments will allow. So there’s that.
Bring balance to the force
Let’s wrap this up. Not everyone is on board with the idea of hitting the pause button on AI. Some industries, worried AI will make their jobs obsolete, are pushing for the delay. Others say the call to pause is a smokescreen for political maneuvering, like scoring points with a base of voters or deflecting attention away from other issues.
So, what’s the truth? Well, as with most things, the answer is probably somewhere in the middle, with all the stakeholders — politicians included — weighing in. Ultimately, we have to find a balance between pushing the boundaries of innovation and taking a mindful approach. It’s important to consider both economic and technological interests, while also keeping the well-being of society in mind. We can’t simply charge ahead with AI without also thinking about potential real-world effects. But, we shouldn’t let fear or politics hinder our exploration, either. It’s important to continue both the research and the conversation surrounding AI, so we can make informed decisions about its development and implementation. After all, as the great philosopher Dolly Parton once said, “If you want the rainbow, you gotta put up with the rain.”
Not a bad interpretation! I think one of the reasons the oligarchs are constantly warning of an existential threat from AI is this very motive.
While we have made some strong recent strides in large language models that sound pretty smart, I don’t see any reason to suspect that we are anywhere near the sort of sentient AI that could become self-motivated and pose an existential threat – frankly, I’m not entirely sure such an AI is ever likely. So far as I can see from the current direction of AI research and development, AI will remain an important, but highly specialized and ‘dumb,’ tool for the foreseeable future. No one has an interest in an AI with ‘free will’ rather than a series of specialized, wealth-generating tools – so if it happens, it will be purely by accident or as an emergent property.
“…until we can fully understand and mitigate its potential risks and consequences.”
Allow me to interpret, “Small companies got a jump on us so we have to delay further AI development until we oligarchs figure out how best to monetize and monopolize this so it doesn’t erode our profitability and power.”