Be afraid. Be very afraid.

We knew it was coming, didn’t we? 

We had been warned. 

As far back as 1951, movies were forecasting the day when robots would turn on their human creators: The Day the Earth Stood Still, 2001: A Space Odyssey, Blade Runner, The Terminator, The Matrix, Ex Machina, and many more.

That day has come. 

This week, NBC News reported that Microsoft’s newly revamped Bing search engine can do more than give you the weather or find recipes and articles on particular topics. It can even do more than write your kid’s term paper. Here’s what NBC said, and I quote: “But if you cross its (Bing’s) artificially intelligent chatbot, it might also insult your looks, threaten your reputation or compare you to Adolf Hitler.”

Are you afraid yet? Read on.

Microsoft is in an arms race with Google, trying to cut into Google’s dominance of the search engine market. In doing so, it may have rushed the technology a bit too much. It seems there is an almost human-like belligerence emanating from Big Brother Bing when provoked. For example, The Associated Press said “the new chatbot (has) complained of past news coverage of its mistakes, adamantly denied…errors and threatened to expose the (AP) reporter for spreading alleged falsehoods about Bing’s abilities. It grew increasingly hostile when asked to explain itself, eventually comparing the reporter to dictators Hitler, Pol Pot and Stalin and claiming to have evidence tying the reporter to a 1990s murder.”

Isn’t this just what we need now? Disinformation is a big enough problem when it’s created and spread by humans. Now the forces of evil can get a computer to do it for them.

Now, you may think that I am the one spreading disinformation. If you question my facts, I won’t compare you to Hitler. You can read more for yourself by clicking on these two links:

https://www.nbcnews.com/tech/tech-news/bing-belligerent-microsoft-looks-tame-ai-chatbot-rcna71175
https://www.cnbc.com/2023/02/16/microsofts-bing-ai-is-leading-to-creepy-experiences-for-users.html

In fairness to poor Bing, he/she/it was provoked. Apparently, the reporter had asked a lengthy series of leading questions that drew these responses from the AI chatbot. In a blog post, Microsoft said that the model will, at times, “respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend.”

Gee, that sounds like very human behavior, doesn’t it?

The good news… I think… is that Microsoft is using these experiences to update its software and add the necessary guardrails, so that provocations don’t lead to artificial vitriol in response.

We can only hope.