AI Regulations: In Preparation for the Rubicon
Balancing AI and humanity. Source: NPR.
OpenAI’s newly released Sora 2 is making waves. By the company’s own description, this “latest video generation model is more physically accurate, realistic, and more controllable than prior systems.” As tempting as it may be to stand in awe of the admittedly incredible advances artificial intelligence has made in just a few years, we, the general public, corporations, and governments alike, should approach these innovations with a reasonable degree of trepidation.

There is a reason Elon Musk is investing billions of dollars into AI startups; there is a reason for the Trump administration’s wholehearted embrace of notorious billionaire Peter Thiel’s brainchild, Palantir—an AI-based data analysis company. Could these investments be for the betterment of all humanity? Perhaps. Still, the reality is that man has a proclivity to corrupt even the most well-intentioned of ideas. Are these gargantuan investments likely an indicator of an AI bubble? Probably.

But the fact remains that we are barrelling toward—if not already at—the point of no return for artificial intelligence. Before humanity crosses that Rubicon, nations across the globe would be well advised to ensure that the technology benefits the many instead of the few. International cooperation is a must to avoid a nuclear-arms-esque race to the bottom in AI regulation: an “AI Non-Proliferation Treaty,” if you will. History has shown that whenever a technological advancement has the potential to cause harm, it often does. Recognizing the very real downsides of AI can help the world get ahead of the problem, maximize the merits of innovation, and mitigate the perils.
A QUALIFICATION
While such a massive leap in technological potency warrants some healthy skepticism, reflexive neo-Luddism should also be met with caution. The popular caricature misses the historical point anyway: “In common parlance, the term ‘Luddite’ means someone who is anti-technology … Historically, however, … The Luddites did not hate technology; they only channeled their anger toward machine-breaking because it had nowhere else to go.” For international regulations to be successful, they must encourage and incentivize further breakthroughs. Granted, global regulation of AI may be more difficult to accomplish than that of nuclear weapons, simply because AI is not as overtly violent. Regardless, the matter of artificial intelligence is too far-reaching to be cast aside any longer.
ETHICS
Upon Sora 2’s release, deepfakes of deceased celebrities flooded the internet. Put plainly, they were gross. Racist reimaginings of Martin Luther King Jr. and synthetic videos of Robin Williams were posted all over social media. To be fair, OpenAI has since addressed “concerns about … RL-sloptimized feeds,” taken substantial steps to mitigate those issues, and strengthened its guardrails against deepfakes. But in the long term, relying on corporate self-regulation is unwise. For a sustainable, healthy future with AI, there must be standardized legal ramifications for releasing AI programs with unethical capabilities. Currently, the laws governing ethical AI use are lagging behind the breakneck speed at which the technology is advancing. Beyond deepfakes, when the training of machine learning systems intersects with intellectual property rights, things get convoluted quickly. While the need for international talks remains, a governmental agency dedicated to keeping regulations abreast of the evolving AI landscape and holding corporations accountable would be enormously beneficial.
EDUCATION
Over-reliance on AI among the youth is on the rise, and when it is paired with AI’s expanding capacity to warp reality, the well-established misinformation crisis only worsens. One glaring example is Elon Musk’s Grok AI. In the name of “free speech,” Musk is on a quest to silence those who disagree with him—the woke. One instrument with which he hopes to excise “wokeness” from society is Grok, which has a history of espousing Holocaust denialism, along with many other falsehoods. Though this may be the most prominent example, on a quieter, subtler level, studies have shown that AI systems are susceptible to bias. Biased models generate biased outputs, which in turn produce biased human thinking in anyone who lacks the cerebral shield of analytical reasoning.
Returning to the pedagogical aspects of this issue, I must highlight AI’s relationship with critical thinking. During their developmental years, children must be taught how to think. School is, amongst other things, supposed to train students to identify biases and find credible sources so that they can avoid misinformation. Though the obvious complaint about AI’s link to schooling may pertain to cheating, Stanford research has shown that “long before ChatGPT hit the scene, some 60 to 70 percent of students have reported engaging in at least one ‘cheating’ behavior … That percentage has stayed about the same or even decreased slightly … when we added questions specific to … ChatGPT.” The true concern is more foundational: according to an MIT study, participants in groups tasked with using a large language model (a type of AI, e.g., ChatGPT) demonstrated “a likely decrease in learning skills” and “performed worse than their counterparts in the Brain-only group at all levels.” Because of AI, this generation’s ability to think critically is at risk of eroding, if it has not begun to already. While this may never be the subject of an international treaty, the discussion should be prioritized by school boards everywhere.
ECONOMICS
Naturally, the next question follows: “What happens to the labor force?” Beyond a future working-age population that struggles to distinguish fact from fiction, the jobs themselves are at risk: AI could automate nearly 100 million jobs over the next ten years. The CEO of Anthropic, specifically, predicted that “AI could wipe out half of all entry-level white-collar jobs.” Combine that with McKinsey’s projection that “[c]urrent generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time,” and the oligarchic bent of the AI push becomes apparent. Ultimately, similar or even improved production without any labor costs is a great deal for business owners. They are not gravely concerned about the mass unemployment that AI could unleash among the working and middle classes.
ENVIRONMENT
According to the MIT Technology Review, US-based data centers “used somewhere around 200 terawatt-hours of electricity in 2024, roughly what it takes to power Thailand for a year,” and “suck up billions of gallons of water for systems to keep all that computer hardware cool,” which affects local access to water.
The torrent of AI investments exemplifies a common shortcoming in economic decision-making: in the pursuit of explicit, monetary efficiency, investors disregard the implicit, negative externalities. A research paper published by MIT makes clear that “[p]olicies governing Gen-AI should be rooted in scientific evidence and sustainable growth strategies rather than being driven solely by economic ambitions.” If I may be so bold as to revise that statement, I would specify that strategies should not be driven solely by short-term economic ambitions. If recent history is any indication, when climate issues are pitted against “the economy,” they lose. To be effective, pro-environment policies must be framed as aligned with long-term economic growth (which, truthfully, they are). And if some climate researchers are right that the first environmental “tipping points” have already been reached, that “long term” may not be far off. It is important to remember that increasing efficiency is a means to the end of greater human prosperity; efficiency is not the end in itself.
THE OPTIONS
Governments could simply work domestically and set their AI regulations internally, thus avoiding the extensive transaction and conformity costs of international collaboration. Of course, that leaves the door open to a dog-eat-dog, downward spiral of regulation-cutting in which everyone, save corporate executives, is hung out to dry. Governments could also cover their eyes, plug their noses, cold-plunge into an AI-driven future, and hope that free-market theory works itself out (laissez-faire policies have historically gone swimmingly, after all). Or there is the preferable, albeit unlikely, outcome: an intergovernmental concord à la the Nuclear Non-Proliferation Treaty or the Paris Agreement. Obviously, both accords, the Paris Agreement in particular, leave something to be desired in terms of enforcement and remain vulnerable to political instability in member states. Needless to say, any functional AI treaty would need to improve upon those flaws.
On the individual level, citizens can continue to learn about, engage with, and become comfortable with AI. The way to tame the ever-evolving beast that is AI is not to remain oblivious to it. By informing ourselves about AI and how to control it, we can secure a future in which ordinary people and machines work in tandem. In the meantime, the US government should mandate greater employee influence within corporations, whether through stock holdings, board representation, or other methods that guarantee proletarian voices are heard in a rapidly transforming professional environment. The choice is not between pro-business innovation and onerous regulation; it is between a future that prioritizes ethical, educational, economic, and environmental growth and one that does not.