Facebook is working on ad topic management

Facebook is developing tools to help advertisers keep their ad placements away from certain topics in the news feed.

The company said it will begin testing topic exclusion controls with a small group of advertisers. For example, a children’s toy company could avoid content related to “crime and tragedy” if it wanted to. Other exclusion topics include “news and politics” and “social issues.”

The company said developing and testing the tools would take “much of the year.”

Facebook has partnered with players like Google’s YouTube and Twitter, along with marketers and agencies, through a group called the Global Alliance for Responsible Media, or GARM, to develop standards in this area. Together they have worked on measures to improve “consumer and advertiser safety,” including definitions of harmful content, reporting standards, independent oversight, and agreement on how to better manage tools that control ad adjacency.

Facebook’s news feed controls build on tools that already run on other parts of the platform, such as in-stream video or the Audience Network, which lets mobile software developers serve in-app ads targeted using Facebook data.

The concept of “brand safety” matters to any advertiser who wants to make sure its ads don’t appear near certain topics. But there is also a growing push from the ad industry to make platforms like Facebook safer overall, not just in the spots adjacent to their ad placements.

The CEO of the World Federation of Advertisers, which founded GARM, told CNBC last summer that this marked a shift from “brand safety” toward a broader focus on “societal safety.” The crux is that even if ads don’t appear in or next to specific videos, many platforms are funded substantially by ad dollars. In other words, ad-supported content helps subsidize the ad-free content as well. And many advertisers say they feel responsible for what happens on the ad-supported internet.

That was made abundantly clear last summer, when a slew of advertisers temporarily pulled their advertising dollars from Facebook and demanded it take stricter measures to stop the spread of hate speech and disinformation on its platform. Some of those advertisers didn’t just want their ads kept away from hateful or discriminatory content; they wanted a plan to get that content off the platform entirely.

Twitter is working on its own in-feed brand safety tools, it said in December.
