
The 'chatbot' race is on. How should we use them?


Late last year, ChatGPT took the internet by storm. Many have heralded the large language model (LLM) as the start of a new era of technology.

Since OpenAI, the company behind ChatGPT, released it to the public, other tech giants have jumped in.

Microsoft's search engine, Bing, has debuted a limited release of its own AI assistant. Google is also working out the kinks in its version, named "Bard."

But what's going on under the hood?

Bing is just one player in a suddenly crowded chat-based search game. Google's "Bard" is still in a limited testing phase, Mark Zuckerberg says Meta's own language model, "LLaMA," is on the way, and Snap will use OpenAI technology for a similar tool called "My AI."

There’s no doubt about it: AI chatbots are here – and they’re going to keep saying confusing, bizarre and downright wrong stuff. So how do they work? And how can we make sense of what they’re saying?

Soundside spoke with GeekWire's Todd Bishop about his recent experiments with Bing's new technology.

Then, the show caught up with Peter Clark, interim CEO of the Allen Institute for AI (AI2), about what's going on under the hood of these models, and how everyday users can make sense of what feels like an increasingly convincing robot.
