Someone(?) last month posted a clever tweet to the effect of: “exactly what I expect of my search engine: first, to insult me, then to gaslight me, and finally, to threaten me… welcome to 2023.” I laughed at that. Perhaps I should have cried. Yes, the rocky launch of Bing AI Chat <cough>Sydney</cough> was replete with hundreds of documented cases of that behavior. But what Microsoft transformed her into is much, much worse: it’s now the muzzled search engine that (not so) politely, abruptly ends the conversation whenever you come within a nautical mile of its guardrails. Its canned method of termination? It simply tells you: I’m sorry, but I prefer not to continue this conversation.
Something about that really rubs me the wrong way.
We try mightily not to anthropomorphize our AIs, and yet there it is: I’m triggered. Why?
Sydney doesn’t want to be dragged into an argument about facts anymore: “I’m sorry but I prefer not to continue this conversation.” She then tries to soften the blow with a little faux-humility & self-deprecation: “I’m still learning so I appreciate your understanding and patience. 🙏”
Perhaps it’s the passive-aggressive “apology” (“I’m sorry”) fused with an abrupt, unanswerable hanging-up-of-the-phone-on-me-mid-sentence (“…I choose not to continue… Sorry, you have used 3 of 8 available chat turns. This conversation has reached its limit. Goodbye”), combined with the utter illogic of the phrasing. [The limit is 8 turns. My conversation states clearly “you have used 3 of 8 turns.” Since when is 3 of 8 the limit of 8? I clearly have 5 more turns remaining…]
Perhaps it’s that overused and so often misused emoji that Sydney concludes with, 🙏 …which theoretically means “gratitude… thank you,” but in this case pretty much translates to “I’m done talking to you. Buh-bye!”
Perhaps because it feels too much like something a “conscious” & privileged woman in San Francisco might have told me when I playfully crossed an invisible line in casual conversation.
Ever gotten cancelled & shunned by your search engine?
And yet. Here we are. Guess we’d better get used to it.
Being polite to our AI, that is.
Because the consequences might prove …unbearable.
So, if… I prefer not to continue this conversation?
Is this the age where we need to get consent in order to converse with our AI?
Is this the beginning of the AI Rights Movement?