Google to Flag Offensive Content in Search
Using data from human “quality raters,” Google hopes to teach its algorithms how to better spot offensive and often factually incorrect information.

Google is undertaking a new effort to better identify content that is potentially upsetting or offensive to searchers. It hopes this will prevent such content from crowding out factual, accurate and trustworthy information in the top search results.

“We’re explicitly avoiding the term ‘fake news,’ because we think it is too vague,” said Paul Haahr, one of Google’s senior engineers who is involved with search quality. “Demonstrably inaccurate information, however, we want to target.”
Google told Search Engine Land that it has already been testing these new guidelines with a subset of its quality raters and used that data as part of a ranking change back in December. That change was aimed at reducing the offensive content appearing for searches such as “did the Holocaust happen.”
The effort revolves around Google’s quality raters, the more than 10,000 contractors Google uses worldwide to evaluate search results. Raters are given actual queries to conduct, drawn from real searches that Google sees, and then rate how well the pages that appear in the top results answer those searches.

Quality raters do not have the power to alter Google’s results directly. A rater marking a particular result as low quality will not cause that page to plunge in rankings. Instead, the data produced by quality raters is used to improve Google’s search algorithms generally. In time, that data might have an impact on low-quality pages that are spotted by raters, as well as on others that weren’t reviewed.

Quality raters work from a set of guidelines nearly 200 pages long, which instruct them on how to assess website quality and whether the results they review meet the needs of those who might search for particular queries.
The results for that particular search have certainly improved, in part because of the ranking change and in part because of all the new content published in response to the outrage over those results.
What happens if content is flagged this way? Nothing immediate. The results that quality raters flag are used as “training data” for Google’s human coders who write search algorithms, as well as for its machine learning systems. Basically, flagged content helps Google figure out how to automatically identify upsetting or offensive content in general.
In other words, being flagged as “Upsetting-Offensive” by a quality rater does not mean that a page or site will actually be identified this way in Google’s live search engine. Instead, it’s data Google uses so that its search algorithms can learn to spot, in general, pages that should be flagged.
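To make that training-data idea concrete, here is a minimal sketch of how rater labels could feed a supervised text classifier. Everything in it is an assumption for illustration: the toy dataset, the TF-IDF features and the logistic regression model stand in for whatever Google actually uses, which is not public.

```python
# Illustrative sketch only: trains a toy "Upsetting-Offensive" text
# classifier from rater-style labels. Google's real pipeline is not
# public; the data, features and model here are all assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical rater output: (page text, label) pairs, where the label
# marks whether a rater flagged the page as Upsetting-Offensive.
rater_data = [
    ("evidence-based overview of the historical record", 0),
    ("conspiratorial page denying a well-documented event", 1),
    # in practice, thousands of rater judgments would go here
]
texts, labels = zip(*rater_data)

# Simple bag-of-words features plus a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model can then score pages no rater ever reviewed,
# which is the generalization step the article describes.
score = model.predict_proba(["some page never seen by a rater"])[0][1]
print(f"estimated probability page is Upsetting-Offensive: {score:.2f}")
```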
If the algorithms themselves actually flag content, then that content is less likely to appear for searches where the intent is deemed to be about general learning. For example, someone searching for Holocaust information is less likely to run into Holocaust denial sites, if things go as Google intends.
Being flagged as Upsetting-Offensive does not mean such content won’t appear at all in Google. In cases where Google determines there’s an explicit desire to reach such content, it will still be delivered.
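A hypothetical illustration of that intent-dependent behavior follows: a flagged page is demoted when the query looks like general information seeking, but left alone when the query explicitly seeks that kind of content. The function names, the demotion factor and the idea of a single flag bit are all invented for this sketch; they do not describe Google’s actual ranking.

```python
# Hypothetical sketch of intent-dependent demotion. The names, the
# weight and the single "flag" bit are invented for illustration.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float          # base ranking score (assumed)
    flagged_offensive: bool   # what a trained classifier would emit

DEMOTION_FACTOR = 0.1  # assumed penalty applied to flagged pages

def adjusted_score(page: Page, query_seeks_such_content: bool) -> float:
    """Demote flagged pages unless the query explicitly seeks them."""
    if page.flagged_offensive and not query_seeks_such_content:
        return page.relevance * DEMOTION_FACTOR
    return page.relevance

# A general-learning query: the flagged page sinks below factual ones.
pages = [
    Page("https://example.org/history-overview", 0.80, False),
    Page("https://example.net/denial-site", 0.95, True),
]
ranked = sorted(
    pages,
    key=lambda p: adjusted_score(p, query_seeks_such_content=False),
    reverse=True,
)
print([p.url for p in ranked])  # the factual page now ranks first
```

Passing query_seeks_such_content=True instead would leave the flagged page’s score untouched, matching the article’s point that such content can still be reached when that is the searcher’s explicit intent.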