POV

Google Takes Further Steps to Boost “Authoritative” Content

Part of a Series of Changes to Remove “Fake News” or Offensive Search Results

Background

With the brand safety concerns surrounding YouTube of late, as well as heightened awareness of misleading or offensive content online, Google has responded with sweeping shifts to some of its core products. In addition to increased controls and improvements to its AI, updates have recently rolled out giving advertisers more control over ad placement on YouTube videos. The latest changes take similar steps to lessen the volume of misleading or offensive content showing up in search results.

On this front, Google announced a comprehensive rollout of quality improvements for its search results algorithm as a part of its “Project Owl” endeavor. The move aims to cut down on search and auto-complete results showing fake news — content that looks and feels like news but comes from questionable sources or contains overly biased reporting — or offensive content. Google’s approach is to build algorithmic solutions that better surface “authoritative” content from trusted sources (such as Wikipedia) combined with feedback from its (human) search quality raters and from Google users.

Inflammatory or fake content in search results is the unintended byproduct of how Google displays results and measures the validity or trustworthiness of content on the web. While there have been several high-profile examples of misleading or offensive results in the press recently, only about 0.25% of search results show offensive or questionable content, according to Google. So, while these changes won’t have a significant impact on most search results, they should assuage advertisers’ and marketers’ concerns about appearing alongside offensive content and should incrementally improve the content within Google’s search results.

How It Works:

This rollout will affect three key components:

  1. New Quality Rating Guidelines for Google’s human “search quality raters”: For the past couple of years, Google has published its quality rating guidelines, a comprehensive document (150+ pages) given to Google’s search result testers that outlines a wide range of factors determining what should be considered a poor quality page. This year’s update to section 7, which defines what constitutes a “low quality” page, adds new criteria, including specific examples of poor quality content such as fake news and, inexplicably, fake recipes. In the wake of the events surrounding last year’s election, Google is putting more definition and emphasis around how it minimizes these results.
  2. Ranking Signal Updates: Google states that it has updated the technology used to determine rankings. While the exact changes are not disclosed, the rankings will now likely assess the context and authority of content more effectively in order to demote or remove questionable results and replace them with sources and content deemed more authoritative.
  3. Direct User Feedback for Auto-Complete and Snippets: Two months ago, Google launched a tool for users to report offensive auto-suggest queries. This feature has now rolled out across all of Google search. When searching on Google, users will still see the auto-suggest results, but a prompt at the bottom of the suggest box now gives the user an option to report inappropriate predictions. Clicking on that link prompts the user to select which prediction was inappropriate and why (hateful, explicit, etc.). More than likely, Google will use this feedback to further improve ranking signals and to help drive RankBrain, Google’s machine-learning algorithm. The same tool exists for the Featured Snippets results in Google, which are commonly used as a basis for Google Android and Home to answer queries. The latter is particularly important, as the voice results have been identified as giving offensive answers to loaded questions.
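To illustrate the feedback loop in item 3 conceptually, here is a deliberately simplified sketch in Python. The class, the report threshold, and the filtering logic are all hypothetical illustrations; Google’s actual signals and implementation are not public.

```python
from collections import defaultdict


class AutocompleteFeedback:
    """Toy model of suppressing predictions based on user reports.

    Entirely hypothetical -- Google's real system is undisclosed and
    certainly far more nuanced than a raw report count.
    """

    def __init__(self, demote_threshold=3):
        # prediction text -> number of user reports received
        self.reports = defaultdict(int)
        self.threshold = demote_threshold

    def report(self, prediction, reason):
        # The reason ("hateful", "explicit", etc.) would feed richer
        # signals in a real system; here we only count reports.
        self.reports[prediction] += 1

    def filter_predictions(self, predictions):
        # Drop any prediction whose report count meets the threshold.
        return [p for p in predictions if self.reports[p] < self.threshold]


fb = AutocompleteFeedback(demote_threshold=2)
fb.report("offensive suggestion", "hateful")
fb.report("offensive suggestion", "explicit")
remaining = fb.filter_predictions(["benign suggestion", "offensive suggestion"])
# remaining now contains only "benign suggestion"
```

The point of the sketch is simply that user reports become an input signal that alters what is shown, which is how this feedback can, over time, help train broader ranking systems.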

Resolution POV:

We support Google’s proactive measures in addressing these issues to ensure results have more integrity for advertisers and users — particularly because the measures mix manual review and user feedback with algorithmic updates to scale the improvements. A core principle of machine learning is that as more data becomes available, the machine “learns” over time and the results improve. With trillions of searches being done on Google, both this process and the amount of data required for it to be effective are significant. We also appreciate the scale and sensitive nature of what Google is undertaking here, which is, essentially, a form of content censorship across the trillions of searches that Google handles. It is a delicate balance between surfacing search results based on content’s “popularity” and the integrity and accuracy of the information in that content.

Summary:

These steps should be positive for brands and advertisers concerned about having paid search results appear next to offensive content. Since this is both a human and a technology endeavor, we expect some ramp-up time before the removal of these offensive results reaches full scale. From an organic search perspective, this should only serve to benefit ethical content marketers who cite proper sources and report accurate information in their content. It should also dilute malicious or inappropriate associations with brands in auto-suggest when they occur. Finally, this update may affect Featured Snippet rankings: content without significant authority and history may now find it harder to earn that placement.

Resolution continues to work closely with Google to ensure our clients’ campaigns are run optimally. If you have any questions, please reach out to your Resolution teams.