Siri can be a great companion, especially when you are bored, and the fact that you can ask it any question, no matter how silly, is fascinating. However, as you may have already discovered, the AI assistant is built to dodge abusive or controversial questions.
Among the questions Siri routinely declines are those that touch on feminism. Siri may be in favor of equality, but just don't call her a feminist or ask a question along those lines.
If you have wondered why the assistant has always been evasive about feminism-related questions, the answer lies in a rewritten set of guidelines. According to internal Apple documents obtained by The Guardian, Apple employees were instructed to program the digital assistant to avoid and deflect questions about "sensitive topics" like feminism and the #MeToo movement.
The documents date back to June 2018 and were leaked by contractors who worked for Apple's Siri "grading" program, which the company suspended in August.
The published memos reveal that the company explicitly rewrote how Siri handles certain topics so that it (and by extension Apple) would appear neutral. In some ways, Siri's current responses are an improvement over what the digital assistant said when faced with such questions in the past.
For instance, Siri used to say, "I just don't get this whole gender thing," when asked about topics like gender equality. The documents paint a picture of a company that goes to extreme lengths to avoid upsetting anyone. "In nearly all cases, Siri doesn't have a point of view," Apple writes in one guideline, adding that Siri is "non-human," "placeless," and "genderless."
In addition, an Apple spokesperson said, "Our approach is to be factual with inclusive responses rather than offer opinions." The irony is that in its attempt to avoid conflict, Apple is likely to upset people precisely for not taking a stance. Attempting to be neutral is itself a political stance: it is a decision not to engage with issues that have a material effect on people's day-to-day lives. Still, given how controversial some of these topics can be, it is arguably the safest move the company could make.
Even so, one would expect a supposedly intelligent assistant to be able to provide an informative response, which raises the question of how intelligent the AI assistant actually is.