Who's Driving?
Welcome
Dear Reader,
I happen to be writing this column on a day when the US Senate is conducting hearings on artificial intelligence (AI) and, specifically, whether a need exists for greater regulation. One of the people testifying is Sam Altman, CEO of OpenAI, the company behind ChatGPT. CEOs of companies that are about to be the subject of regulation often come to Congress with dire warnings about how bad further regulation will be for their businesses. Is it refreshing, or is it alarming, that Altman is taking a different view and calling for more government oversight?
Altman says that his worst fear is that AI "could cause significant harm to the world," adding "If this technology goes wrong, it can go quite wrong" [1]. Who better to warn us about these potential consequences than an industry insider who is directly involved with developing and marketing the technology? And yet, Altman is not a whistle-blower resigning because of his misgivings. He is one of the guys making it happen, and he isn't saying he wants to stop. He is just saying he wants government to set up some rules.
It is commendable that a CEO would call for more regulation of his industry, yet I can't help feeling a little frustrated that all the onus is on the government and that individuals (as well as companies) working in this industry are not expected to exercise some self-restraint about building a technology that they themselves feel "could cause significant harm to the world." NYU professor Gary Marcus, who also testified, offered a more balanced perspective when he warned of AI becoming a "perfect storm of corporate irresponsibility, widespread deployment, lack of regulation, and inherent unreliability" [2].
The senators played to the cameras, looking for sound bites and attempting to appear august, but in this case, I can sympathize with the difficulty they seem to have in understanding this issue well enough to know how to regulate it. In the old days, people thought they could get computers to "think" like a person just by defining the right rules, but modern generative AI systems find their own way to the answer, leaving no clear path that anyone can follow later to show how they got there, other than that someone (who?) might know what data was used for training.
I have read several news stories and opinion columns on the importance of regulating AI, yet I have seen few details on what this regulation would look like. Rather than writing another one of those opinion columns, I'll tell you what I know. For Altman, regulation means setting requirements for testing to ensure that AI meets "safety requirements." His concept of safety encompasses several concerns, including privacy, accuracy, and the prevention of disinformation. In his opening statement before the Senate, he states, "it is vital that AI companies – especially those working on the most powerful models – adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements" [1].
Many computer scientists have also talked about the need for transparency in disclosing the dataset used to train an AI, so that others can check it and search for potential bias. This step seems essential for ensuring accurate and non-discriminatory AI systems, but we'll need to develop new methods for checking these datasets, which can sometimes include millions of pages of data.
The EU already has a proposed law on the table [3]. I am not a legal expert (or an EU expert), but part of the AI Act appears to regulate the behavior of the AI, as opposed to the development process, by prohibiting activities such as subliminal manipulation, social scoring, exploitation of children or the mentally disabled, and remote biometric identification by law enforcement. Beyond these prohibited activities, other uses are classified into three different risk categories with accompanying requirements for each category. The requirements address the need for training, testing, and documentation.
I applaud the EU for getting some proposed legislation out on the table. However, the act was written two years ago, and it already sounds a little anachronistic in the ChatGPT era. Things we are worrying about now weren't even imagined then, like what if an AI steals your copyright or deepfakes you into a porn movie?
Times are rapidly changing. We need to be careful, and governments need to be unified in addressing the problem. IBM Chief Privacy and Trust Officer Christina Montgomery, who also testified at the Senate hearing, put it best in summarizing the need for "clear, reasonable policy and guardrails." Montgomery warns that "The era of AI cannot be another era of move fast and break things" [4].
Infos
- Sam Altman's opening statement: https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20&%20Testimony%20-%20Altman.pdf
- Gary Marcus' opening statement: https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Marcus.pdf
- The AI Act: https://artificialintelligenceact.eu/
- Christina Montgomery's opening statement: https://www.ibm.com/policy/wp-content/uploads/2023/05/Christina-Montgomery-Senate-Judiciary-Testimony-5-16-23.pdf