Microsoft joins plea for government regulation of AI tools like ChatGPT
Microsoft President Brad Smith went to the other Washington this week to ask government officials to put guardrails up around artificial intelligence.
At the D.C. event, Smith made the case for regulating the technology behind tools like ChatGPT.
“I think we need to think about two things here in the United States,” Smith said. “First is to enable innovation to go forward with guardrails that both provide the assurance that the American public wants and deserves, but frankly, shows the world that there is a path where we can innovate quickly and responsibly.”
Microsoft is positioned to seize a huge opportunity in generative AI as a major investor in and partner of OpenAI, the company behind ChatGPT. Microsoft has integrated the technology into its Bing search engine, thrusting the company ahead of longtime search engine king Google in the AI arms race. The Redmond, Wash., software giant is also integrating AI into its enterprise tools such as Office 365.
Last week, OpenAI CEO Sam Altman testified about the need for regulation of the ascendant technology before Congress. Smith reportedly advised Altman ahead of the hearing.
As part of the event in D.C. Thursday, Microsoft released a “blueprint for governing AI” that it hopes regulators will adopt. The recommendations include adapting existing law to new technologies, requiring companies that build AI systems for critical infrastructure to add “safety brakes,” and promoting transparency in the development of AI.
Smith also called for an executive order stipulating that the federal government will procure AI technology only from companies that meet those standards. He said his most pressing concern is deepfake media: AI-generated videos, images, and audio that look deceptively real.
“Our goal is to ensure that AI serves humanity, and is controlled by humans, with an approach to policy and law and regulation,” Smith said.
Though it might seem unusual for a tech company to invite regulation of itself, this is not a new strategy for Smith or Microsoft. The company has set itself apart from other tech giants in recent years by calling for regulation of social media, facial recognition, and other technologies. It is worth noting that in many cases, those regulations wouldn’t target core parts of Microsoft’s business.
There is another reason tech companies might invite government oversight: it creates a level playing field with competitors. Throughout tech’s history, and particularly in the contest to bring AI technology to market first, companies have shirked responsibility with an old adage: “If we don’t build it, someone else will.” Government rules can change that equation.
Despite the growing chorus of tech companies calling for federal regulation of AI, it’s unlikely to come from the current, divided government. It's more likely that a patchwork of state and international regulations will create a de facto standard, similar to what's happened with privacy regulations across Washington state, California, and Europe.
“Pace matters,” Smith said at the event Thursday. “If you go too slow, the U.S. will fall behind.”