11 Sep 2023
Does Schumer’s AI Plan Sidestep Responsibility?
On July 21st, the White House announced that seven of the world’s top AI companies had agreed to a voluntary pledge to put guardrails around their products. Microsoft president Brad Smith summarized their commitment to develop responsible artificial intelligence systems in three words: “safe, secure, trustworthy.”
Unfortunately, Americans don’t seem to have much faith in their ability to deliver. According to a recent poll by The AI Policy Institute, 82 percent of American voters don’t trust AI companies to self-regulate. History suggests that such skepticism is not misplaced. From Enron and Lehman Brothers to Facebook and Purdue Pharma, companies have a less than stellar track record when it comes to policing themselves and their industries.
Keep in mind, too, that the companies promising to mitigate the risks inherent in this revolutionary technology already have AI products on the market. And there’s more coming, with Goldman Sachs forecasting that global AI investment will reach $200 billion by 2025.
Meanwhile, the White House and Congress are struggling to develop a regulatory framework to keep AI from going off the rails. Since ChatGPT burst onto the scene less than 10 months ago, lawmakers have introduced a number of bills – three in June alone – to protect us from technology even its creators fear. “If this technology goes wrong,” OpenAI CEO Sam Altman warned Congress, “it can go quite wrong.”
On Sept. 13th, Sen. Charles Schumer is gathering tech leaders in the nation’s capital for what he calls an Insight Forum. Altman, Elon Musk, Mark Zuckerberg, and half a dozen or so other tech CEOs have been invited to educate lawmakers on AI. But why stop at a briefing?
The message from Schumer to the leaders should be firm: come with a plan or commit to a tight deadline to come up with one. Companies are perfectly capable of proposing regulations for themselves, especially with the stakes as high as they are for AI. We don’t have the luxury to wait for lawmakers to get up to speed. We don’t need a photo op for technology and government leaders to signal civic virtue. And we don’t need a bunch of media stories breathlessly reporting whatever Elon Musk says or does. We need AI CEOs to present specific, actionable solutions and commit their companies to putting them in place.
From my years of experience in corporate responsibility, I know it’s possible for companies to deliver a first draft of an effective regulatory framework on a deadline. Right after the industry’s top executives and the White House announced their voluntary guidelines, I called on AI companies to produce such a draft in 90 days, which in my experience inside the C-suite is realistic and doable. A month and a half later, the clock is ticking with no actionable plan in sight.
I propose that Schumer ask five questions that can become the basis for a skeleton plan to keep us safe:
What are the potential unanticipated consequences of AI? A grid that frames all the risks – from deepfakes to human extinction – will help regulators better understand the landscape. We need the dangers well-outlined and in one place. A small working group from the Big Seven AI companies can get this done in a week. It is simply a matter of framing the problem.
How are you addressing each identified risk? A second column can list current solutions or safeguards, with a third column showing what else is being considered. This can be done in week 2.
What measures can regulators use to monitor companies and hold them accountable? The fourth column should include monitoring and reporting functions companies would follow. This is the work of week 3.
What resources do regulators need to carry out this task? This next column would need to be filled out collectively, so it’ll take weeks 4 and 5 to get this one nailed down.
How will we know that the guardrails are working? This final column can be completed in the last 10 days.
That’s a draft 45-day plan with an actionable grid. Luckily for us, the tech leaders of 2023 and their teams are extraordinary at what they do and have powerful resources to get this work done. And they can come up with a much better plan than the above if they devote the resources, put their best minds to it, and commit to a deadline.
The answers might be partial this time around – it’s a fast-moving situation – but they should at least provide a solid basis for a shared framework. AI executives who are filling the media with warnings of doom about the products they are mainstreaming can surely find the time to answer these questions promptly once the government decides to hold them to account.
We can’t uninvent AI. And we wouldn’t want to. Artificial intelligence is already diagnosing life-threatening health problems like cancer and sepsis, and helping tackle climate change, among other exciting advances. The risks, however, must be balanced against the rewards.
Air travel’s immeasurable benefits to businesses and travelers alike wouldn’t justify its dangers without the rigorous, heavily regulated safety checks we have today, because our safety is paramount. It is time for AI giants and our lawmakers to stop talking and step forward with a real action plan that clearly, simply, and convincingly protects us from harm. We can’t wait for a crash before we figure out how to make artificial intelligence flightworthy. The clock is ticking.