"A Conversation With Bing’s Chatbot Left Me Deeply Unsettled" (Kevin Roose, NY Times)

Amid the torrent of recent articles and commentaries on chatbots powered by large language models, I think this one stands out as particularly revealing and provocative.

FYI this was above the fold on the front page of the print edition of the NY Times today, and there’s some more discussion on the author’s podcast.

@lspector What is so different between Sydney (Bing) and ChatGPT? Why has Sydney been behaving erratically compared to its sibling? Is it the training data, GPT-3.5, or something else entirely? On Reddit there are a lot of examples of Sydney going off the rails even unprovoked (though Kevin Roose may have provoked it a little). Sydney can produce text that reflects intense emotions in a repetitive (some say poetic) way, and it ends its messages with manipulative questions (e.g. "Have I been a good bot? :slightly_smiling_face:"). Some people will obviously fall for its manipulations even though to us "it's just text generation" (remember the Google engineer last year). I wonder if we have to start worrying not just about what people use AI to do, but about what AI can make people do — not because it's gained sentience and wants to kill humanity, but simply because it hallucinates manipulative text like we've seen.


@aimecesaire25 Several differences between the models may be at play here, including different approaches to imposing “guardrails.” Because the details of the methods haven’t been shared, we can only speculate.

I think you are right that “what AI can make people do” is a real concern!

What kinds of regulation have been implemented or proposed for these AI chatbots? Besides restricting certain language and topics in the chatbots' training process, there is definitely a need for external regulation, such as fines, human regulators, access restrictions, and verification requirements. I'd appreciate any more insights or opinions :slight_smile:


I think there are many proposals, but the issues and possibilities for regulation are complex. One source I'd check out to get a sense of the current situation is the website of the Center for AI and Digital Policy.