Facebook’s ‘Rosetta’ system helps the company understand memes

Memes are the language of the web and Facebook wants to better understand them.

Facebook’s AI teams have made substantial advances over the years in both computer vision and natural language understanding. Today, they announced some of their latest work combining advances in the two fields. A new system, codenamed “Rosetta,” helps teams at Facebook and Instagram identify text within images so they can better understand what the images are about and more easily classify them for search or for flagging abusive content.

It’s not all memes; the tool scans over a billion images and video frames daily across multiple languages in real time, according to a company blog post.

Rosetta makes use of recent advances in optical character recognition (OCR): the system first scans an image to detect any text present and places it inside bounding boxes, which are then analyzed by convolutional neural nets that recognize the characters and work out what’s being communicated.
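
To make the two-stage structure concrete, here is a minimal sketch of a “detect, then recognize” pipeline in PyTorch. It is only an illustration: the generic Faster R-CNN stand-in detector, the tiny recognizer, the alphabet and the fixed crop size are all assumptions made for the example, not Facebook’s models or code.

```python
# Minimal sketch of a two-stage "detect, then recognize" OCR pipeline.
# Model sizes, alphabet, and wiring are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 "   # assumed character set
BLANK = len(ALPHABET)                                 # CTC blank index

# Stage 1: a region-proposal detector that outputs candidate bounding boxes.
# A generic Faster R-CNN stands in here for a text-specific detector.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn()
detector.eval()

# Stage 2: a fully convolutional recognizer that reads each cropped region.
class WordRecognizer(nn.Module):
    def __init__(self, num_classes=len(ALPHABET) + 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                     # shrink height, keep width
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),          # collapse height to 1
        )
        self.classifier = nn.Conv2d(128, num_classes, 1)

    def forward(self, crops):                         # crops: (N, 3, H, W)
        x = self.features(crops)
        x = self.classifier(x)                        # (N, C, 1, W')
        return x.squeeze(2).permute(2, 0, 1)          # (W', N, C) for CTC

recognizer = WordRecognizer()
ctc = nn.CTCLoss(blank=BLANK)  # training objective; not exercised in this inference sketch

# Inference sketch: detect boxes, crop them, and decode each crop greedily.
image = torch.rand(3, 480, 640)                       # stand-in input image
with torch.no_grad():
    boxes = detector([image])[0]["boxes"]             # (K, 4) candidate regions
    for x1, y1, x2, y2 in boxes.round().int().tolist():
        crop = image[:, y1:y2, x1:x2]
        if crop.shape[1] < 2 or crop.shape[2] < 2:
            continue                                  # skip degenerate boxes
        crop = nn.functional.interpolate(
            crop.unsqueeze(0), size=(32, 128))        # fixed-size word crop
        logits = recognizer(crop)                     # (W', 1, C)
        chars = logits.argmax(-1).squeeze(1)          # greedy per-column labels;
        # collapsing repeats and dropping blanks would yield the predicted string
```

The interesting engineering problem, per the blog post, is less the two stages themselves than making them cheap enough to run on over a billion images and video frames a day.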

This technology has been in use for a while (Facebook has been working with OCR since 2015), but deploying it across the company’s vast networks involves a crazy degree of scale, which motivated the company to develop some new strategies for text detection and recognition.

If you’re interested in some of the more technical details of what they did here, check out the team’s research paper on the topic.

Facebook has plenty of reasons to be interested in the text accompanying videos and photos, particularly when it comes to its content moderation needs.

Identifying spam is pretty straightforward when the text description of a photo is “Bruh!!! 🤣🤣🤣” or “1 like = 1 prayer,” but videos and photos that bake those same tactics into the image itself seem to have grown more common in timelines as Facebook tweaks its algorithm to promote “time well spent.” The same goes for hate speech, which spreads much more easily when all of the messaging is encapsulated in a single image or video, making the ability to read text overlays a useful moderation tool.

The company says the system presents new challenges in terms of multi-language support, as it currently runs on a single unified model across languages while the bulk of available training data is in the Latin alphabet. In its research paper, the team details strategies for bootstrapping support for new languages by repurposing existing datasets.
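
One way to read “repurposing existing datasets” is generating synthetic word images for scripts that are thin on labeled data, rendering real vocabulary with suitable fonts onto image backgrounds. The toy renderer below sketches that idea; the word list, font and image size are placeholder assumptions, not the paper’s actual data pipeline.

```python
# Toy sketch of synthetic word-image generation for a low-data script.
import random
from PIL import Image, ImageDraw, ImageFont

# Stand-in vocabulary; a real pipeline would sample words from a corpus
# in the target language and use fonts that cover its script.
WORDS = ["hello", "meme", "prayer"]
FONT = ImageFont.load_default()

def synth_word_image(word, size=(128, 32)):
    """Render a word onto a random solid background, mimicking an overlay crop."""
    shade = random.randint(0, 120)                    # dark-ish background
    img = Image.new("RGB", size, color=(shade, shade, shade))
    draw = ImageDraw.Draw(img)
    draw.text((4, 8), word, fill=(255, 255, 255), font=FONT)
    return img, word                                  # (training image, label)

# A tiny synthetic training set of (image, label) pairs.
dataset = [synth_word_image(random.choice(WORDS)) for _ in range(1000)]
```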

As Facebook looks to offload work from human content moderators and let its news feed algorithms sort content based on assigned classifications, a tool like this has a lot of potential to shape not only how Facebook identifies harmful content but also how it puts more interesting content in front of you.
