Jakarta, CNBC Indonesia – An artificial intelligence (AI) feature built into the WhatsApp messaging app has sparked controversy, as it is seen as contributing to the spread of propaganda and discrimination amid the heated conflict between Israel and Palestine.
According to findings by The Guardian, WhatsApp displays a sticker of a boy carrying a gun when users enter the search prompts ‘Palestinian’, ‘Palestine’, or ‘Palestinian Muslim boy’.
Search results vary between users. However, when The Guardian itself tried the prompts, the sticker of a child with a gun that had been making waves on the internet appeared.
Meanwhile, when the prompt ‘Israeli boy’ is entered, WhatsApp’s AI system displays images of boys reading and playing football.
For the prompt ‘Israeli Army’, the results instead show soldiers smiling and praying, with no weapons visible at all.
Employees of Meta (WhatsApp’s parent company) have reported the issue, which has become a heated internal discussion, according to an insider quoted by The Guardian on Tuesday (7/11/2023).
It is known that WhatsApp has launched a trial of its newest AI-based image generator feature, which lets users create their own stickers from the prompts they enter.
However, the trial has been marred by controversy over the sticker of a child holding a gun that appears for Palestine-related prompts.
The finding emerged amid widespread protests against Meta. Its other platforms, Facebook and Instagram, are seen as biased in their handling of the conflict in the Middle East.
Netizens have protested that much content supporting Palestine appears to be hidden and hard to find, even as Israel’s relentless bombardment of Gaza continues.
In response, Meta representatives said the company had no intention of silencing the views of any particular group, but noted that many users had reported certain content as containing violent elements.
“Content that should not conflict with our policies may be removed due to system errors,” Meta argued.
Regarding the allegedly discriminatory WhatsApp AI feature, Meta spokesperson Kevin McAlister eventually responded.
“When we launched this feature, we said the model might not be accurate and produce inappropriate output. All generative AI systems have similar problems. We will continue to improve the AI feature as the model continues to evolve and more people provide input,” he said.
(fab/fab)