
Congress, AI, and a massive Microsoft bet

caption: The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, Tuesday, March 21, 2023, in Boston.

Redmond-based tech giant Microsoft has invested $13 billion in the company OpenAI. Today, OpenAI CEO Sam Altman was questioned by members of Congress about rules and safeguards for artificial intelligence. What could congressional scrutiny mean for Microsoft? KUOW’s Kim Malcolm checked in with tech journalist and GeekWire co-founder Todd Bishop for his analysis.

This interview has been edited for clarity.

Kim Malcolm: What are some of the highlights from today's hearing?

Todd Bishop: The senators in this hearing were intent on not repeating the mistakes they made with social media as they turn to artificial intelligence. They're concerned about jobs and the impact on society. They want to make sure that the country and potentially the globe have safeguards in place to address some of the larger concerns about AI running amok and causing problems throughout society.

And how did Sam Altman respond to their questions?

This is a new twist: a tech CEO appearing before Congress saying, “Regulate me.” It was very unusual in the context of Senate hearings historically, although not unexpected given Altman's approach in general. It was not a surprise that he was calling for some kind of regulatory oversight to ensure that AI companies, including his own and Microsoft, keep things under control and have guardrails to follow.

What kind of impact do you think this scrutiny from Congress and any potential regulations could have on Microsoft?

Microsoft seems to be open to this kind of regulation in much the same way that OpenAI is. I think that's in part because Microsoft is learning the lessons of the past. The legendary antitrust fight they got into with the U.S. Justice Department ultimately was not productive for the company. I think you're seeing these tech companies saying they know they're going to succeed in this realm, and they know there are huge risks. And so Microsoft, Google, and others are all saying to regulators and legislators in the U.S. and elsewhere, “Tell us what the rules are.” That's really what today's hearing was about.

We don't know if Congress is going to do anything to regulate AI at this point. But we are seeing Microsoft go all in on it. It's competing with other major companies. Where is all of this headed?

What you're going to see from Microsoft in the months and years ahead is the constant addition of what they call copilots to their technology. As an example, you'll be able to go into Microsoft Word or Outlook and say, “Draft a memo for me.” Google is doing something very similar with Google Workspace and Gmail. What you're going to see overall is a series of features and products from these big tech companies that let you use AI as a natural part of creating content.

I use it, just as an example, when I'm writing a story. I'll put the draft into ChatGPT and say, “Proofread this for me. Tell me if this matches Associated Press style, whether I've misspelled any words, or whether I could improve the grammar.” It's remarkable what you get back from ChatGPT in terms of polishing.

I just had a conversation with some teenagers the other day. They said that by the time kids their age become leaders in the world, we won't be able to trust anything people do or say, because it's all going to be a big fake. How are the companies going to be able to address that?

That is a huge concern. First off, I don't know that it's a solvable problem, and it may be a new reality that all of us are facing. However, there are standards that the companies are calling for, and frankly, this could be a subject of regulation. It was discussed during the hearing today. There could be a rule or an ethical guideline, for example, that any photograph or image altered or created by AI would need to be labeled as such, not only on the surface of the image but also in the underlying metadata. Microsoft and OpenAI are on board with doing that.

That's just one example, and that issue of trust is one that comes up constantly. At the very least, I know it's very much on the minds of these legislators and these tech executives.

