WeChat Apologizes for Mistranslation; Foreign Media Says AI Should Be Taught to Recognize "Bad Things"

(Original title: WeChat Apologizes for Improper Translation; Foreign Media: AI Should Be Taught to Recognize "Bad Things")

According to a report from foreign media outlet Gizmodo, WeChat recently issued an apology after its automatic translation mistakenly rendered the term "black foreigner" as "black nigger." This incident has sparked broader discussions within the tech industry about how to train artificial intelligence to recognize and avoid racist language.

The issue came to light when Ann James, a Black film director, posted a message to a colleague on WeChat saying that she was running late. When the colleague responded in Chinese, WeChat's AI automatically translated the reply as "The nigger is late." James later said she had noticed similar mistranslations before, with the same slur occasionally surfacing in translated messages.

Photo: WeChat apologizes for translating "black people" into "black ghosts"

Note: "Nigger" is a derogatory term typically used to refer to Black people, often implying low social status. It is considered offensive and should be avoided. The word originates from the Latin "niger" (meaning black) and "nigra," which can have both positive and negative connotations. Over time, it evolved into a racial slur in English.

A few days ago, WeChat officially apologized for the incident, stating, "We are very sorry for the inappropriate translation. After receiving user feedback, we immediately addressed the issue." The company also explained that its translation system relies on artificial intelligence trained on large volumes of text; without proper contextual understanding, such systems can misinterpret input or produce offensive output.
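To make that failure mode concrete, here is a minimal Python sketch of why corpus statistics matter. It uses an invented toy "parallel corpus" (the phrase "hei laowai" and every pair below are hypothetical placeholders, not WeChat's data): a system that simply picks the most frequent translation seen in training will reproduce whatever the data contains, slurs included.

```python
from collections import Counter

# Hypothetical parallel examples: (source phrase, observed translation).
# If biased text dominates the corpus, the model learns the bias.
training_pairs = [
    ("hei laowai", "black foreigner"),
    ("hei laowai", "black foreigner"),
    ("hei laowai", "<slur>"),
    ("hei laowai", "<slur>"),
    ("hei laowai", "<slur>"),
]

# Most-frequent-translation rule: no understanding, just counts.
counts = Counter(target for source, target in training_pairs
                 if source == "hei laowai")
print(counts.most_common(1)[0][0])  # -> "<slur>", because it dominates the toy data
```

Real neural systems are far more sophisticated than this frequency table, but the underlying dependence on what the training text contains is the same.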

Language AI works by learning statistical patterns in words and phrases. In 2016, researchers testing a Google text algorithm found troubling associations: the name "Emily" was linked to "pancakes" while "Ebony" was linked to "fried chicken," and the model completed analogies that paired "man" with "doctor" but "woman" with "nurse." These examples highlight the risks of training AI on biased or unfiltered data.
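Associations of this kind are straightforward to probe. The sketch below is a minimal example assuming the gensim library and its downloadable "glove-wiki-gigaword-100" pretrained vectors (not the exact model the researchers tested); exact rankings will vary by model and corpus, and the single-word similarity check is only a crude proxy for the studies' methodology.

```python
import gensim.downloader as api

# Load pretrained GloVe word vectors (assumption: this gensim-hosted model).
model = api.load("glove-wiki-gigaword-100")

# Analogy probe: "man is to doctor as woman is to ?"
# Embeddings trained on biased text often rank "nurse" near the top.
for word, score in model.most_similar(positive=["woman", "doctor"],
                                      negative=["man"], topn=5):
    print(f"{word}\t{score:.3f}")

# Name-association probe: how close does each name sit to a stereotyped term?
# ("chicken" stands in for "fried chicken", since these vectors are single words.)
for name in ("emily", "ebony"):
    print(name,
          round(float(model.similarity(name, "pancakes")), 3),
          round(float(model.similarity(name, "chicken")), 3))
```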

As one foreign media outlet pointed out, "AI is like a mirror of human behavior. Unfortunately, it learns the worst aspects of our society." This raises an important question: How can we teach AI to recognize and reject racism? The incident underscores the need for more responsible development and ethical oversight in AI systems, especially those used in public platforms like WeChat.
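One immediate, if blunt, countermeasure is to screen translation output before it reaches users. The Python sketch below is a hypothetical illustration of that idea, not WeChat's actual fix; SLUR_DENYLIST and the fallback message are placeholder names.

```python
import re

# In practice this would be a maintained lexicon, not two placeholders.
SLUR_DENYLIST = {"slur1", "slur2"}

def screen_translation(translated_text: str) -> str:
    """Return the translation, or a neutral fallback if it contains a listed slur."""
    tokens = re.findall(r"[a-z']+", translated_text.lower())
    if any(token in SLUR_DENYLIST for token in tokens):
        # Withhold the output rather than surface the slur to the user.
        return "[translation withheld: flagged as potentially offensive]"
    return translated_text
```

A denylist catches known slurs but not novel or context-dependent ones, which is why the deeper question of bias in the training data itself remains open.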
