Each year, Google’s algorithms become more refined — enhancing the everyday searcher’s experience while handing search marketers new signals to work with.
By and large, Google’s ranking algorithms steer the process, but Google’s Quality Raters play a quieter, essential role: supplying the human feedback that keeps organic search results useful and trustworthy.
If you want to understand the guidelines these raters follow and how the rating system works, keep reading.
What Are Google Quality Raters?
Many have heard of Google’s Search Quality Raters Guidelines, but the details remain hazy for most.
Whether you’re a fledgling business owner, an entrepreneur, or even an established industry pro, knowing the ins and outs of Google’s search metrics, and how they’ll affect your site’s visibility, is pretty important.
Interestingly enough, Google’s Search Quality Raters aren’t digital tools utilized for automated website ranking.
Google’s SEO judgment algorithms are indeed intricate — and they’re only getting better at identifying, and increasing the visibility of, organic content.
The approach Google takes via its Quality Raters, however, is quite different in how it perceives and gauges the usefulness of a webpage: It’s based on human feedback.
How Does The Rating System Work?
Google’s Quality Raters Guidelines hold the official details on Google’s approach to gauging website quality — and, thus, its standing within the search results.
These guidelines, essentially, outline the core conditions, performance benchmarks, and webpage elements that best determine overall quality.
Google contracts thousands of raters — paid evaluators, not volunteers — to engage with, and subsequently rate, numerous sites around the web.
After visiting, they rate each website within multiple categories. It’s important to note that the ratings Quality Raters provide, whether positive or negative, don’t impact a website’s position within Google search results. Not directly, at least.
In essence, Quality Rater feedback is given on a sliding scale of nine values, from “Lowest” to “Highest,” for each category rated.
The overall ratings are combined, averaged, and later processed by Google’s machine learning algorithms.
Via this process, Google can then further refine the algorithms it already uses: better gauging the quality of website content, UI navigation, and overall ease of use.
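The aggregation step can be sketched in miniature. This is purely illustrative: the nine labels mirror the sliding scale described above, but the arithmetic-mean aggregation is an assumption for demonstration, since Google’s actual pipeline is not public.

```python
# Illustrative sketch only: Google's real aggregation pipeline is not public.
# The nine labels mirror the "Lowest" to "Highest" sliding scale raters use.
SCALE = [
    "Lowest", "Lowest+", "Low", "Low+", "Medium",
    "Medium+", "High", "High+", "Highest",
]

def average_rating(labels):
    """Map each rater's label to its position on the scale (0-8), then average."""
    positions = [SCALE.index(label) for label in labels]
    return sum(positions) / len(positions)

# Four raters score the same page; their label positions are 4, 6, 5, and 7.
average_rating(["Medium", "High", "Medium+", "High+"])  # 5.5
```

In a real system, scores like this would be one feature among many fed into the training process, not a ranking signal on their own.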
The Quality Score At Large: E-A-T
Because Google ranks webpages according to their algorithmically determined quality, a page’s quality ultimately defines its standing as a search result.
Page Quality Rating, or “PQ,” for short, is the overall grade determined by Google’s Quality Raters.
Quality Ratings, more or less, define how well a website achieves its purpose.
“Purpose,” of course, differs from website to website: While an e-commerce site’s purpose might be dictated by ease of navigation and product identification, an information hub’s purpose might be determined by multimedia quality, content organization, or even indexing.
The purpose assessment, in any event, also draws on a page’s quality of hyperlinks, creator expertise, and brand citations.
Understandably, a score indicative of a webpage’s ability to serve its purpose can be pretty impactful.
Naturally, Google places a lot of weight on a webpage’s digital safety. Any encounters with malicious code, deceptive link redirects, or excessive and invasive ads can reduce a webpage’s perceived quality.
This is a standard Google practice, of course, and to be expected in any rating process Google employs.
User Search Response
Another primary metric Google looks at is a webpage’s responsiveness to user queries.
Google utilizes semantic search algorithms to understand not only the literal content of a page’s text but also the “meaning” behind it.
It can then gauge the quality of a visitor’s time on the page against day-to-day human behavior, as opposed to mechanical standards.
In Google’s search results, this is also the case: To direct queries, Google presents answers to web-surfers based upon perceived semantic intent.
So, if your webpage matches a user’s real intent, it’ll generally gain a couple of rungs within the search results.
Because every website serves a separate purpose — or even multiple purposes — Google additionally considers a webpage’s overall semantic value, as well as other pages linked through said webpage’s UI.
Overall E-A-T Value
The semantic value of a website’s content brings us to Google’s most powerful rating system, one which gauges value based upon how reliable a website’s contents are, in general.
Collectively known as “E-A-T,” this measurement standard is guided by three metrics: Expertise, Authoritativeness, and Trustworthiness.
To fully understand why E-A-T is Google’s major rating system, we’ll need to first take a closer look at each metric, in its own right.
Firstly, Google takes a closer look at the website’s content creators. An e-commerce website for clothing, for example, would be analyzed for its owner’s listed expertise within the industry.
A higher frequency of listed credentials results in an E-A-T ratings boost — and this impact grows with the number of verifiable sources backing these credentials.
And so we have E-A-T’s next metric: Authoritativeness.
When a website contains published content, Google takes a closer look at its author. A website covering exercise science intricacies, for example, would be examined for verifiable credentials within the fitness world.
As a website’s content increases in complexity, its analysis for authoritativeness thus scales: A case study covering scientific discoveries, for example, would carry a higher rating when published through the NASA website, as opposed to a local scientific news magazine.
Returning to Google’s prioritization of visitor safety, we have the final metric of E-A-T, which is a website’s overall “trustworthiness” not only digitally but also conceptually.
In the above-mentioned example, Google would compare pre-existing expertise scores with additional author credentials.
Articles published through peer-reviewed journals, for instance, carry a higher trustworthiness value.
The Amount Matters: Leveraging Content Quality
Understandably, Google also examines a website’s content for content’s sake. This metric is called “Main Content,” or “MC,” for short.
This major criterion is the underlying calculation driving a significant portion of a webpage’s PQ rating.
This is, by and large, because the material visitors consume while on the page is the vehicle that delivers the webpage experience.
The First Standard: Operation
To measure a website’s MC quality, Google targets and analyzes several key qualities.
Initially, it checks for easily identifiable quality indicators, like UI availability, content errors, and scalability when viewed on mobile devices.
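One of the simplest machine-checkable indicators in this category is whether a page declares a responsive viewport. The check below is a toy example of that kind of signal, not Google’s actual test:

```python
import re

def has_mobile_viewport(html: str) -> bool:
    """Return True if the page declares a viewport meta tag, a basic
    responsive-design signal any crawler can check for."""
    return re.search(r'<meta[^>]+name=["\']viewport["\']', html, re.IGNORECASE) is not None

page = '<head><meta name="viewport" content="width=device-width, initial-scale=1"></head>'
has_mobile_viewport(page)             # True
has_mobile_viewport("<head></head>")  # False
```

Real mobile-friendliness evaluation goes much further (tap-target sizing, font scaling, layout shifts), but it starts with simple declarations like this one.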
The Second Standard: Clarity
Next, Google takes a webpage’s ease of consumption into consideration. A page’s content, for example, would be measured for clarity.
Because every website’s purpose is different, however, this metric is augmented to fit its environment.
To manage this, Google analyzes a webpage’s overall comprehensiveness: Pages with more detail, in most cases, will rate higher. For this reason, long-form content tends to fare better when analyzed.
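A crude proxy for comprehensiveness is raw word count. The 1,200-word threshold below is an arbitrary assumption for illustration; Google publishes no such number:

```python
def content_depth(text: str, long_form_threshold: int = 1200) -> str:
    """Classify a page's main content by word count.
    The 1200-word threshold is an assumed cutoff, not a Google figure."""
    word_count = len(text.split())
    return "long-form" if word_count >= long_form_threshold else "short-form"

content_depth("A short product blurb.")  # "short-form"
```

Word count alone says nothing about clarity, of course; it only hints at how much ground a page has room to cover.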
The Third Standard: Presentation
Similar to a webpage’s perceptual clarity, its spatial clarity gets a close examination.
Google often refers to this as a website’s “presentation.” Generally, presentation covers content navigation: clear headings, effective menu item spacing, and easy product identification.
Google’s content quality standards are surprisingly robust, but they’re not examined in isolation.
Google takes a comparative snapshot of a website’s core functionality, section by section. Here, things like plugins and runtimes are analyzed for functionality.
While poor code design and technical errors reduce a page’s score, quick, navigable, and useful technical designs receive a point boost.
It’s important to note that Google’s E-A-T score is a constant consideration throughout each analysis. As such, a website’s technical design is better gauged in relation to the creator’s stated expertise.
From Internal Analysis to External Views: Reputation
Even though Google’s website analysis prioritizes an inside look at quality, it does take external opinion into account. From beyond its Quality Raters, that is.
Google seeks out any and all identifiers of a website’s reputation, then compares them to its pre-established “internal score.”
Primarily, it does this by locating direct references to said website and applying its semantic analytics, producing a comprehensive, human-minded snapshot of the website’s perceived value in the eyes of everyday Internet-goers.
Indeed, this means that a website’s score on Yelp, while important, isn’t everything.
As another example: An online store’s product reviews, as well as its service reviews, are examined beyond any denotative, numerical scores initially crawled by Google’s examination.
If Google’s placed importance on internal navigation tells us anything, it’s that the visitor’s comfort is greatly considered.
As such, Google not only measures how navigable a website’s pages are but also how navigable the digital road to its “front door” is.
At first glance, this would appear to be a comprehensive take on link availability and website entry load times — from Google’s featured product listings, for example.
Yet Google’s approach to visitor access matches the thoroughness it displays with E-A-T.
A website’s listed contact information is examined for accuracy, as is the technical availability of any interactive maps.
Then, from the “outside,” Google looks at the same information’s availability on other websites it engages: While LinkedIn listings are scanned for matching contact information, affiliate posts are also checked, semantically, for matching authenticity, authority, and accuracy.
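Consistency checks like this amount to comparing a site’s own contact listing against external copies after normalization. The sketch below is a simplified illustration; the field names and listings are hypothetical, not any real API:

```python
def normalize_phone(raw: str) -> str:
    """Strip everything but digits so "(555) 123-4567" and "555.123.4567"
    compare as equal despite formatting differences."""
    return "".join(ch for ch in raw if ch.isdigit())

def contact_matches(onsite: dict, external: dict) -> bool:
    """Compare a site's own listing against an external one (e.g. a
    LinkedIn profile) after normalizing case, whitespace, and phone format."""
    return (
        onsite["name"].strip().lower() == external["name"].strip().lower()
        and normalize_phone(onsite["phone"]) == normalize_phone(external["phone"])
    )

site = {"name": "Acme Outdoors", "phone": "(555) 123-4567"}
linkedin = {"name": "acme outdoors ", "phone": "555.123.4567"}
contact_matches(site, linkedin)  # True
```

The point of normalizing first is to avoid punishing a business for cosmetic formatting differences across directories.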
More Reputation Identifiers
Google’s take on external website reputation is pretty thorough, which makes sense.
For this reason, any affiliates of a website’s overarching brand are examined in conjunction with E-A-T analysis.
Notable recognition, awards, and reputable network interaction are similarly valued, as is the frequency of content shares and product promotions by everyday shoppers.
Reputation And The Quality Raters’ Guidelines
When undergoing this examination, a website’s external reputation values are compared with interpretations gleaned from Quality Raters.
At first glance, this simply increases the comprehensiveness of Google’s human-based outlook on any given corner of digital real estate. Yet this comparison is useful for another reason: relevancy.
Historical reputation matters, but immediate perception matters, too. How these two metrics compare and contrast is also considered, so as to give Google a better understanding of first impressions, long-term impressions, and everything in between.
This tends to be particularly impactful for online businesses which carry specialized product lines, as their consumer segments are similarly particular.
As such, short-term impressions, when compared to long-term engagement, ratings and reviews, can reveal a staggering degree of insight into a brand’s quality, overall.
The Special Considerations: Your Money or Your Life Pages
Yes, this is the metric’s official, Google-given name.
“Your Money or Your Life” pages, abbreviated “YMYL” pages, are easy enough to define: They’re pages that could significantly impact a visitor’s “future happiness, health, financial stability, or safety.”
YMYL is a particularly important part of the latest Google Search Quality Evaluator Guidelines, and its weight of importance has only increased since July 2019.
Additionally, YMYL’s importance when compared to E-A-T has become a Google priority, lately.
Determining YMYL Qualification
So, what makes a website potentially life-threatening?
As per Google’s guidelines, YMYL pages are identified by their topic: content whose claims could influence a reader’s health, finances, safety, or major life decisions.
Specifically, such content (also measured for authority, reputation, and semantic approachability) is held to stricter standards because it is capable of impacting one’s actions after content consumption.
Interestingly, webpages containing opinion articles that simply reference verifiable, fact-based resources, however valid, are not often considered to be YMYL.
Here, the underlying decision rests on how heavily the content leans on primary resources. That said, YMYL isn’t always an objective call. This isn’t due to Google’s measurement of verifiable, authoritative facts and primary resources, however.
It’s because, sometimes, pages can be unexpectedly impactful in a very YMYL way.
A commonly overlooked example would be a camping website that publishes safety guidance: advice on backcountry hazards can directly affect a reader’s physical safety, which makes it YMYL.
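Topic-based classification like this can be sketched with a toy keyword flagger. To be clear, real YMYL judgments are made by human raters reading the guidelines, and the keyword lists below are invented purely for illustration:

```python
# Toy topic flagger. Real YMYL assessment is made by human raters following
# the guidelines; these keyword lists are invented for illustration only.
YMYL_TOPICS = {
    "health": {"symptom", "dosage", "treatment", "diagnosis"},
    "finance": {"loan", "mortgage", "invest", "tax"},
    "safety": {"first aid", "hazard", "evacuation"},
}

def flag_ymyl_topics(text: str) -> set:
    """Return the YMYL topic areas whose keywords appear in the text."""
    lowered = text.lower()
    return {
        topic
        for topic, keywords in YMYL_TOPICS.items()
        if any(keyword in lowered for keyword in keywords)
    }

flag_ymyl_topics("Pack a first aid kit and learn the trail hazards.")  # {'safety'}
```

Note how the camping example above trips the “safety” flag even though the site’s overall subject seems harmless; that’s exactly the kind of overlooked YMYL case the guidelines describe.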
Google’s Final YMYL Word
Fortunately, there aren’t many surprises if, or when, a website’s content is deemed to be YMYL.
Google doesn’t notify owners or attach visible labels; the classification is internal. YMYL pages are simply held to the highest Page Quality standards, with E-A-T weighted especially heavily.
Mobile Matters: UI Difference and Perceived Quality
Above, we touched on mobile UI responsiveness. This tends to sit lower on Google’s priority list when its Quality Raters get down to business, but it does indeed matter.
Google does, of course, pay full attention to every metric, and mobile design, usability, navigability, and responsiveness all receive an ample serving of scrutiny from the Internet titan.
Google moved to mobile-first indexing back in March 2018. This means that it began giving mobile pages priority over desktop pages, when index analysis enters the mix. But why so?
It’s because mobile search frequency has officially overtaken desktop search frequency — and then some.
In the past, mobile accessibility was considered more of a “glitz and glam” metric. Now, that’s anything but the case, and the Quality Raters’ Guidelines do this paradigm shift justice in the depth of mobile metrics they cover.
In the current guidelines, Quality Raters are instructed to complete most rating tasks from a mobile perspective, evaluating pages as a mobile user would unless told otherwise.
Desktop hasn’t disappeared, though: Google still appears to treat desktop accessibility as an underlying standard of comparison.
It’s uncertain when this, too, will change, but it’s fair to assume that it will.
Google’s Overall Interpretation, SEO And Content Marketing
Google’s final elements of overall website analysis give us a pretty comprehensive outlook on where digital marketers should take extra care.
But its metrics, taken as a whole, are incredibly valuable resources for gauging the long-term effectiveness of any given website across the web.
In any event, a good rule of thumb is to consider Google’s more objective guidelines, first. E-A-T is certainly an impactful set of metrics to abide by, but website responsiveness, and definitely a bug-free user experience, should be the first thing you check.
After this, Google’s E-A-T guidelines are a good foundation to base your quality audits on.
They can seem a little subjective at times but, when compared to a pre-existing lead generation strategy, they tend to reveal a lot about overall website capability.
Once you’ve had your fill, take a closer look at the other metrics Google has instated. They’ll likely be updated many times, but their overall value as quality indicators is unlikely to change.
In terms of SEO and Content Marketing, one could do much worse than make the most out of Google’s Quality Score Guidelines. They’re incredibly important to a website’s impact across the Internet, and they’re integral to a webpage’s visibility on the Google listings.
If this isn’t your first Digital Marketing rodeo, we’re here to talk strategy regardless: At every experience level, we believe there’s always more to learn.
If you want to take the next step, and leverage your strategy, take our SEO Maturity Assessment and identify where to focus efforts to evolve to the next stage on the maturity curve!