AI Goes Boldly To The Next Frontier

Welcome to a new era. You may have noticed colleagues leveraging Artificial Intelligence (AI) tools for fun and profit. And every week, a dozen or so AI projects trumpet their debut. Everybody loves a good productivity hack and AI is delivering. Pretty cool, right? 

At the same time, the Israel Defense Forces (IDF) recently deployed AI systems to handle its deadly military operations. So, yeah. The perils of AI are no longer some fictional plot line in a sci-fi show, where you can hit pause and go use the bathroom. Military officials are rubbing their hands together like Romulan warlords, ready to exploit AI’s darker side. It’s a wild spin in the wormhole of power dynamics with opportunists rethinking the entire warfare playbook. 

Starfleet regulations?! That’s outrageous!

“At present, the regulation of AI in the United States is still in its early stages, and there is no comprehensive federal legislation dedicated solely to AI regulation,” writes Victor Li for the American Bar Association. The Biden Admin has issued its fact sheet on the web, and Congress has had a hearing or two to explore generative AI. Lots of folks at the fed level are all in for studying AI, and a handful of bipartisan bills are in the congressional queue. Individual states have enacted a variety of laws addressing AI, but there’s no national consensus on AI laws…yet.

And while Democrats expect a repeat of online disinformation campaigns in 2024, now driven more efficiently by AI, any meaningful legislation would no doubt be slow-walked by Republicans who seemingly hope to deploy AI bots and deepfake videos of their own. Anyway, Republicans would rather look at Hunter Biden’s ʞɔᴉp pics. It sure feels like the shields are down indefinitely.

As for the news cycle, the mainstream media is focused on the impact of AI on the entertainment industry since it’s undoubtedly the most bankable story. Outside of Hollywood, the SAG-AFTRA strike might feel like another labor dispute rather than a microcosm of AI danger to all our livelihoods. Did anyone bother to mention AI’s existential threat and the impending apocalypse? Yes, I know. We still haven’t solved our horrific domestic situation of gun nuts who get sprung firing an automatic weapon. 

Red alert! All crew to battle stations!

Now about those ethical dilemmas that new technology often serves up. Would you trust an AI deciding your fate with the same opaque functionality as a Star Trek replicator? Sure, who doesn’t love a good gaffe, dripping with tech bro biases and poor outcomes for marginalized communities? The National Institute of Standards and Technology did publish a standard for recognizing and managing AI bias. Meanwhile, a few other efforts aimed at promoting ethical AI have reportedly faced setbacks, primarily due to the absence of diversity. But, gosh, who needs diverse perspectives in AI dev anyway? It’s not like AI will become self-aware and judge us all based on our Instagram selfies, right?

And the right to privacy? Good luck with that! Combine a bunch of MAGAt legislators with AI’s unfettered ability to scrape data, and personal privacy becomes so last season, as if it hasn’t already. Golly, what young women wouldn’t enjoy living life under the watchful eyes of an overzealous Republican governor or state Attorney General? Anti-choice oppressors would crave the AI-powered opportunity to spy on someone’s reproductive healthcare, e.g., her last period and upcoming travel plans, all without having to bother with HIPAA or constitutional rights. Talk about weaponizing AI, why dontcha?

It’s not just a set of rules, Beverly!

But hold on, because now we’re circling back to the biggest reason to legislate at the federal level and not just politely chat about a Prime Directive. On Friday, July 21, President Biden gathered the AI big shots—from Amazon, Google, Microsoft, et al.—to sign a non-binding agreement on how to handle artificial intelligence. Yes, you read that right: non-binding!

Predictably, industry groups gave their solemn nod of approval, because nothing says “trustworthy” and “responsible” like a voluntary code of conduct in the wild, wild world of tech. If past practices are any indication, relying on tech giants to err on the side of honor over greed is as naive as expecting white Christian Nationalists to stop behaving like the Borg Collective. 

The White House’s AI Virtue-Signaling Summit seems promising and full of potential, just like when Q offers a helping hand to the Enterprise crew (wink wink). I mean, as recently as June 2023, the FTC slapped Amazon on the wrist for its “deceptive user-interface designs,” so there you go. But for what it’s worth, check out the grandiosity of a few official statements by those in attendance at Biden’s AI Meetup:

• Google: Our commitment to advancing bold and responsible AI, together
• Inflection: The precautionary principle: partnering with the White House on AI safety
• OpenAI: Moving AI governance forward
• White House: FACT SHEET: Biden-⁠Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI

Basically, we are dealing with advanced technology that demands immediate regulatory guardrails, alongside historically rogue U.S. companies, industries, and institutions that demand enforceable accountability. Putting federal legislation for AI at the top of our country’s to-do list is imperative. If any of the previous concerns resonate with you, get involved. Use your computer or communicator (smartphone) to hail your elected officials, and let’s go maximum warp toward a safer future. Engage!

6 thoughts on “AI Goes Boldly To The Next Frontier”

  1. I agree that AI misused has the potential to be an ominous presence in our world. My use of the word imagination was intended to convey my notion that humankind operates in a continuum of constraints. I sense that the most basic level of constraint is a rule/regulation. On a scale from 1 to 10, I put a regulation in the 1 to 2 range. We are accustomed to regulations for our general welfare. Say, traffic lights, which hopefully keep us safe on the roads. Next is the nubbin of how to regulate something as elusive as AI. I can envision an AI developer claiming their source code/algorithm is proprietary, and no regulatory body is going to look inside their AI system. In those instances, I really don’t see how regulation would work. Maybe so; however, as you said, the folks doing regulation at the federal level, in my estimation, couldn’t pour water out of a boot even with the directions written on the heel. Finally, I put imagination as a 9 or 10 on my constraint scale, and I sincerely hope that in trying to corral AI we use some imagination to think outside the box.

  2. True, a tool is a tool, but AI is no ordinary tool. Just like fire, which can either cook a meal or burn down a forest, AI’s potential impact is vast. Without regulation, we risk leaving this ‘tool’ in the hands of those who may prioritize profits over ethics. It’s not about stifling imagination, but rather channeling it responsibly to protect our future.

    • The cascading societal effect of AI-driven disinformation + skewed elections should have us all concerned.

  3. A tool is a tool. The only artificial thing about AI is the fools developing it have no scope of imagination.

    • True, a tool is a tool, but AI is no ordinary tool. Just like fire, which can either cook a meal or burn down the entire house, AI’s potential impact is vast. Without regulation, we risk leaving this ‘tool’ in the hands of those who may prioritize profits over ethics. And the brand names at the WH meeting have repeatedly demonstrated their willingness to do just that. AI regulation is not about stifling imagination. It’s about channeling it responsibly to protect our future.
