Artificial Intelligence at Meta (Facebook) – Two Current Use-Cases

Meta Platforms, Inc. (herein, “Meta”) was known as Facebook until 2021. Mark Zuckerberg states that the new brand embodies his strategic plan to create a “metaverse” for its customers using AI and VR technology.

In 2021, Meta reported a net income of approximately $39 billion on revenues of just under $118 billion. Currently, Meta is traded on the NASDAQ (symbol: FB) and has a market cap of approximately $497 billion. Sources vary somewhat regarding the number of employees at Meta, though current estimates from reliable sources place the figure at roughly 70,000 to 77,000 workers.

In this article, we’ll look at how Meta has implemented AI applications for its business through two unique use cases:

  • Removing Offensive Content – Meta uses machine learning and natural language processing to screen for and remove offensive material such as sexual and violent content.
  • VR Immersion – Meta claims to use computer vision algorithms to track user movement in real time for its Oculus product.

We’ll begin by examining how Facebook has purportedly focused on using AI to remove offensive content.

Use Case #1 – Removal of Offensive Content

Ever since the Facebook-Cambridge Analytica scandal broke in 2018, Facebook has been under heavy scrutiny from federal regulators, privacy and security advocates, and other concerned parties.

But it isn’t just user privacy that is of concern. Jerome Pesenti, Meta’s VP of artificial intelligence, says that removing what the company calls “harmful content,” such as sexual, violent, and otherwise inappropriate material, is also a priority.

In a podcast interview, Pesenti discussed some of the purported uses of AI in this regard. He claims that Meta analyzes text and image data for harmful content using what appear to be natural language understanding (NLU) and computer vision models. One method by which Meta supposedly applies NLU and computer vision to combat harmful content is a machine learning technique called “few-shot learning,” or FSL.

(A video describing Meta’s FSL model is available in the source article.)

Meta claims that its FSL solution can be trained with limited labeled training data, or in some cases none at all. The company states that it uses multimodal data retrieved from past users. The model is trained in sequence on three types of data: billions of generic and open-source language examples, data that Meta has labeled harmful in the past (please see the disclaimer below), and condensed text that explains company content policy.
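To make that staged training sequence concrete, below is a minimal, hypothetical sketch. It is not Meta’s FSL system: it stands in a simple linear text classifier (scikit-learn’s SGDClassifier over hashed bag-of-words features, assuming scikit-learn 1.1 or later) and toy example data for each of the three stages described above.

```python
# Hypothetical sketch of "staged" training in the order the article describes:
# generic language data, then historical policy labels, then condensed policy
# text. Toy stand-in, NOT Meta's actual FSL pipeline.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
model = SGDClassifier(loss="log_loss", random_state=0)

def train_stage(texts, labels):
    """Incrementally fit the classifier on one stage of data."""
    X = vectorizer.transform(texts)
    model.partial_fit(X, labels, classes=[0, 1])  # 1 = policy-violating

# Stage 1: generic language examples (toy stand-ins for web-scale corpora).
train_stage(["have a great day", "free crypto giveaway click now"], [0, 1])

# Stage 2: data previously labeled under existing policies.
train_stage(["graphic violence in clip", "cute cat photo"], [1, 0])

# Stage 3: condensed policy text describing a new category of violation.
train_stage(["posts selling counterfeit medical products are prohibited"], [1])

# The hope: the staged model now flags content under the new policy.
X_new = vectorizer.transform(["buy counterfeit pills here"])
print(model.predict(X_new))
```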

The company claims that its FSL models “adapt more easily” to rapidly evolving content and take the appropriate corrective measures, e.g., content removal. The company claims it does this by “starting with a general understanding of a topic” and then using “fewer labeled examples to learn new tasks.”
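The quoted claims describe the core few-shot idea: start from a general understanding, then learn a new task from only a handful of labeled examples. A minimal sketch of one common way to realize this is a prototype (nearest-centroid) classifier over frozen embeddings. The embed() function below is a hypothetical toy stand-in for a pretrained encoder and is not anything Meta has described.

```python
# Few-shot classification via class prototypes: average a few labeled
# embeddings per class, then assign new text to the nearest centroid.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding from hashed character trigrams. A real system would
    use a frozen pretrained encoder (e.g., a transformer) here."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def fit_prototypes(examples: dict) -> dict:
    """One centroid per class, averaged from just a few labeled examples."""
    return {label: np.mean([embed(t) for t in texts], axis=0)
            for label, texts in examples.items()}

def classify(text: str, prototypes: dict) -> str:
    """Return the class whose centroid has the largest dot product with
    the normalized embedding of the input text."""
    e = embed(text)
    return max(prototypes, key=lambda label: float(e @ prototypes[label]))

# "Fewer labeled examples to learn new tasks": three examples per class.
prototypes = fit_prototypes({
    "violating": ["graphic violence", "explicit sexual content", "gore video"],
    "benign": ["birthday party photos", "recipe for soup", "vacation pics"],
})
print(classify("extremely violent video", prototypes))
```

The design choice worth noting is that nothing is retrained when a new policy category appears: adding a class only requires embedding a few new labeled examples and computing one more centroid, which is one plausible reading of “adapt more easily.”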

Please note: The information contained in the next two paragraphs is merely to relay facts to our audience, not to decry Facebook for its labor practices.

Before proceeding, it must be mentioned how Facebook likely continues to label data. In a 2019 article in the Washington Post, …….

Source: https://emerj.com/ai-sector-overviews/ai-at-meta/
