Social media giant Facebook claims that its AI (artificial intelligence) can identify and remove posts containing hate speech and violence. However, according to internal documents reviewed by the Wall Street Journal, the technology doesn't work as advertised.
According to a credible source, senior engineers at Facebook have said that the company's automated system removes only posts that generate 2% of the hate speech seen on the platform.
Meanwhile, other Facebook employees came to a similar conclusion, stating that Facebook's AI removes only posts that account for 3% to 5% of hate speech on the platform and 0.6% of the content that violates Facebook's rules on violence.
A recent report states that Facebook does not care about its impact on issues ranging from young girls' mental health on Instagram to human trafficking, misinformation, and gang violence on the site. However, the company has dismissed the report as containing "mischaracterizations."
Mark Zuckerberg, Facebook's CEO, had said that the company's AI would remove the majority of inappropriate content before 2020.
Apparently, Facebook stands by its claim that most of the hate speech and violent content on the platform is removed by its "super-efficient" artificial intelligence before users even see it. Facebook's own reports also claimed that the detection rate was above 97% in February this year.
According to sources, certain groups, such as civil rights organizations and academics, are suspicious of Facebook's figures, since the platform's statistics don't match external research.
Additionally, the latest findings come after Facebook whistleblower Frances Haugen met with Congress to disclose how the platform heavily depends on AI and algorithms.
Haugen claims that Facebook uses algorithms to identify the content users engage with most and then pushes that content to users. Such content is usually divisive, angry, and sensationalistic, and often contains misinformation.