From irrelevant, off-topic backlinks to cookie-cutter anchor text, there are more than a few clues hidden in your backlink profile that something spammy is going on. Alone they might not be something to worry about, but in conjunction, common red flags can spell trouble when you’re performing an audit on your backlink profile. In this week’s Whiteboard Friday, Kameron Jenkins shares her best advice from years working with clients on what to watch out for in a link profile audit.
Hey, guys. Welcome to this week’s edition of Whiteboard Friday. My name is Kameron Jenkins, and I work here at Moz. Today we’re going to be talking about auditing your backlink profile, why you might want to do it, when you should do it, and then how to do it. So let’s just dive right in.
It might be kind of confusing to be talking about auditing your backlink profile. When I say auditing your backlink profile, I'm specifically talking about trying to diagnose if there's anything funky or manipulative going on. There's been quite a bit of debate among SEOs about this: in a post-Penguin 4.0 world, if Google can ignore spammy and low-quality backlinks, why would we also need to disavow, which essentially tells Google the same thing: "Just ignore these links"?
Here are three reasons why we might still want to consider this in some situations.
Disavow is still an option. You can go to Google's disavow tool and submit a disavow file right now if you wanted to.
Google still has guidelines that outline all of the link schemes and types of link manipulation. If you violate those, you could get a manual penalty. In Google Search Console, it will say something like "Unnatural links to your site" and indicate whether the action is total or partial. You can still get those. That's another reason I would say the disavow is still something you could consider doing.
I know there’s like a little bit of back-and-forth about this, but technically Google has said, “Our stance hasn’t changed. Still use the disavow file carefully and when it’s appropriate.” So we’ll talk about when it might be appropriate, but that’s why we consider that this is still a legitimate activity that you could do.
I would say that, in today's climate, it's probably best just to do this when you see overt signs of a link scheme or link manipulation, something that looks very wrong or very concerning. Because Google is so much better at uncovering manipulative links and simply ignoring them rather than penalizing a whole site for them, it's not as important to be as aggressive as we maybe used to be. But if you inherit a client, look at their link profile for the first time, and notice something sketchy in there, you might want to consider doing it. You're an SEO. You can detect the signs of a link scheme.
But if you're not quite sure how to diagnose that, check for red flags in Moz Link Explorer, and that's the second part of this. We're going to go through some red flags that I have noticed. But huge disclaimer: these are seven possible red flags. Please don't just take one of these and say, "Oh, I found this," and immediately disavow.
These are just things that I have noticed over time. I started in SEO in 2012, right around the time of Penguin, and so I did a lot of cleanup of spammy links. I kind of just saw patterns, and this is the result of that. I think that's stayed true over the last couple of years for links that haven't been cleaned up. Some people are still doing these kinds of low-quality link building techniques, which actually could get you penalized.
These are some things that I have noticed. They should just pique your interest. If you see something like this, if you detect one of these red flags, it should prompt you to look into it further, not immediately write off those links as “those are bad.” They’re just things to spark your interest so that you can explore further on your own. So with that big disclaimer, let’s dive into the red flags.
A couple of examples of this. Maybe you are working with a client. They are US-based, all of their locations are in the US, and their entire audience is US-based. But you get a quick glimpse of the inbound links. Maybe you're in Link Explorer's inbound links report and you see a bunch of .ru and .pl domains linking to you, and that's kind of confusing. Why is my site getting a huge volume of links from countries we don't serve, when we don't have any content in Russian or Polish? That might spark your interest to look into it further. It could be a sign of something.
Another thing is off-topic. My favorite example, just because it was so ridiculous, was I was working with an Atlanta DUI attorney, and he had a huge chunk of backlinks that were from party planning, like low-quality party planning directories, and they didn’t make any sense. I clicked on them just to see what it was. You can go to it and see okay, yes, there really is no reason they should be linking to each other. It was clear he just went to Fiverr and was like, “$5, here build me links,” and he didn’t care where they came from. So you might notice a lot of totally off-topic, irrelevant stuff.
But obviously a disclaimer, it might look irrelevant, but then when you dive in further, they are in the same market and they kind of have a co-marketing relationship going on. Just be careful with that. But it could be a sign that there is some link manipulation going on if you have totally off-topic links in there.
The second red flag is anchor text. Again, this is another cool report in Moz Link Explorer. You can go in there and see the anchor text report. When I notice that there’s link manipulation going on, usually what I see is that there is a huge majority of their backlinks coming with the same exact anchor text, and usually it’s the exact match keyword that they want to rank for. That’s usually a huge earmark of, hey, they’ve been doing some shady linking.
The example I like to use for this and why that is concerning — and there’s no percentage that’s like, whoa, that’s manipulative. But if you see a really disproportionate percentage of links coming with the same exact anchor text, it might prompt you to look into it further. The example I like to use is, say you meet with five different friends throughout the course of your day, different occasions. They’re not all in the same room with you. You talk to each of them and they all say, “Hey, yeah, my weekend was great, but like I broke my foot.” You would be suspicious: “What, they all broke their foot? This is weird. What’s going on?”
Same thing with anchor text. If you’re earning links naturally, they’re not all going to look the same and mechanical. Something suspicious is probably going on if they’re all linking with the exact same anchor text. So that’s that.
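To make that "disproportionate percentage" idea concrete, here's a minimal sketch of checking an anchor text distribution. The anchors, counts, and the 50% threshold are all made up for illustration; there is no official cutoff that signals manipulation.

```python
from collections import Counter

def anchor_text_shares(anchors):
    """Return each anchor text's share of the total backlink count, most common first."""
    counts = Counter(anchors)
    total = sum(counts.values())
    return {text: n / total for text, n in counts.most_common()}

# Illustrative data: an exact-match keyword dominating the profile.
anchors = (
    ["best dui attorney"] * 80
    + ["Acme Law"] * 10
    + ["acmelaw.com"] * 7
    + ["click here"] * 3
)
shares = anchor_text_shares(anchors)

# A single anchor accounting for a large majority of links (the threshold
# here is arbitrary, not an official cutoff) is worth a closer look.
SUSPICION_THRESHOLD = 0.5
top_anchor, top_share = next(iter(shares.items()))
if top_share > SUSPICION_THRESHOLD:
    print(f"Look closer: {top_share:.0%} of links use the anchor {top_anchor!r}")
```

A flagged anchor isn't proof of a scheme; it's just a prompt to inspect those linking pages by hand, as described above.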
Nofollow to follow is another one. Please don't use this as a sweeping rule, because even Russ Jones has said that at mass scale it's not a good predictor of spamminess. But what I have tended to see is that if a profile also has spammy anchor text and irrelevant links, there's usually a really disproportionate ratio of nofollow to follow as well. Use these red flags in conjunction with each other. When they start to pile on, it's even more of a sign to me that something fishy is going on.
Nofollow to follow, you might see something ridiculous. Again, it’s something you can see in Link Explorer. Maybe like 99% of all of their backlinks are follow, which are the ones that pass PageRank. If you’re going to do a link scheme, you’re going to go out and get the ones that you think are going to pass PageRank to your site. Then one percent or no percent is nofollow. It may be something to look into.
Same thing with links to domains. Again, not an overt sign of spamminess. There’s no magic ratio here. But sometimes when I notice all of these other things, I will also notice that there’s a really disproportionate ratio of, say, they have 10,000 inbound links, but they’re coming from only 5 domains. Sometimes this happens. An example of this: I was auditing a client’s backlink profile, and they had set up five different websites, and on those websites they had put site-wide links to all of their other websites. They had created their own little network. By linking to each other, they were hoping to bolster all of their sites’ authority. Obviously, be careful with something like that. It could indicate that you’re self-creating follow links, which is a no-no.
This one is just kind of the eyeball test, which I'll get to later. If you go to your inbound links, you can start to notice domain names that just look weird, and they'll start to look off the more you look into stuff like this. When I was doing a lot of backlink auditing, what I noticed was that a lot of these spammier links came from low-quality directory submission sites. A lot of those tend to have "directory" or "DIR" in the domain name, so like bestlinkdir.co, whatever. When they have naming conventions like that, I have noticed that those tend to be low-quality directory submission sites. You could even eyeball or scan to see if there are any "DIR" directory-type links.
Same thing with articles. Back in the day, people used to submit to EzineArticles or ArticleBase or something like that, so if a domain name has the word "article" in it, it might be something to look into. Maybe they were doing some low-quality article submission with backlinks to their site.
Then if you tend to see a lot of links in their backlink profile that have like SEO link type naming conventions, unless you’re working on a site that is in the SEO space, they shouldn’t have a bunch of links that say like bestSEOsite.com or bestlinksforyou.com. I’ve seen a lot of that. It’s just something that I have noticed. It’s something to maybe watch out for.
Tool metrics can be super helpful here. If you see a really high Spam Score, it's something to look into. Moz's Help Hub lists all 27 criteria that Spam Score evaluates, which is helpful for understanding how it assesses a site's spamminess.
A note on DA and PA (Domain Authority and Page Authority): if you see links coming from low-DA or low-PA URLs, make sure you don't write those off right off the bat. It could just be that those domains are very new, or that they haven't engaged in much marketing yet. It doesn't necessarily mean they're spammy; it just means they haven't done much to earn any authority. So watch out for writing off links as spammy just because they have a low DA or PA. It's something to consider.
Then finally we have the eyeball test. Like I said, the more you do this (and it's not something you should be engaging in constantly nowadays), you'll start to notice patterns if you are working on clients with spammier link profiles. These kinds of low-quality sites tend to have the same template. You'll have 100 sites that are all blue, with the exact same navigation and the exact same logo. They're all on the same network. You'll start to notice themes like that. A lot of times they don't have any contact information, because no one maintains these things. They're just up for the purpose of links. They don't care about them, so no phone number, no contact information, no email address, nothing. Also a telltale sign, which I tend to notice on these self-submission type of link sites, is they'll have a big PayPal button at the top that says, "Pay to Submit Links," or even worse, "Use this PayPal to get your links removed from this site," because they know it's low-quality and people ask them all the time. Just something to consider on the eyeball test front.
I hope this was helpful. Hopefully it helped you understand when you might want to do this, when you might not want to do this, and then if you do try to engage in some kind of link audit, some things to watch out for. So I hope that was helpful. If you have any tips for this, if you’ve noticed anything else that you think would be helpful for other SEOs to know, drop it in the comments.
That’s it for this week’s Whiteboard Friday. Come back again next week for another one.
It’s finally here, for your review and feedback: Chapter 7 of the new Beginner’s Guide to SEO, the last chapter. We cap off the guide with advice on how to measure, prioritize, and execute on your SEO. And if you missed them, check out the drafts of our outline, Chapter One, Chapter Two, Chapter Three, Chapter Four, Chapter Five, and Chapter Six for your reading pleasure. As always, let us know what you think of Chapter 7 in the comments!
They say if you can measure something, you can improve it.
In SEO, it’s no different. Professional SEOs track everything from rankings and conversions to lost links and more to help prove the value of SEO. Measuring the impact of your work and ongoing refinement is critical to your SEO success, client retention, and perceived value.
It also helps you pivot your priorities when something isn’t working.
While it’s common to have multiple goals (both macro and micro), establishing one specific primary end goal is essential.
The only way to know what a website’s primary end goal should be is to have a strong understanding of the website’s goals and/or client needs. Good client questions are not only helpful in strategically directing your efforts, but they also show that you care.
Client question examples:
Keep the following tips in mind while establishing a website’s primary goal, additional goals, and benchmarks:
Now that you’ve set your primary goal, evaluate which additional metrics could help support your site in reaching its end goal. Measuring additional (applicable) benchmarks can help you keep a better pulse on current site health and progress.
How are people behaving once they reach your site? That’s the question that engagement metrics seek to answer. Some of the most popular metrics for measuring how people engage with your content include:
Conversion rate – The number of conversions (for a single desired action/goal) divided by the number of unique visits. A conversion rate can be applied to anything, from an email signup to a purchase to account creation. Knowing your conversion rate can help you gauge the return on investment (ROI) your website traffic might deliver.
In Google Analytics, you can set up goals to measure how well your site accomplishes its objectives. If your objective for a page is a form fill, you can set that up as a goal. When site visitors accomplish the task, you’ll be able to see it in your reports.
Time on page – How long did people spend on your page? If you have a 2,000-word blog post that visitors are only spending an average of 10 seconds on, the chances are slim that this content is being consumed (unless they’re a mega-speed reader). However, if a URL has a low time on page, that’s not necessarily bad either. Consider the intent of the page. For example, it’s normal for “Contact Us” pages to have a low average time on page.
Pages per visit – Was the goal of your page to keep readers engaged and take them to a next step? If so, then pages per visit can be a valuable engagement metric. If the goal of your page is independent of other pages on your site (ex: visitor came, got what they needed, then left), then low pages per visit are okay.
Bounce rate – “Bounced” sessions indicate that a searcher visited the page and left without browsing your site any further. Many people try to lower this metric because they believe it’s tied to website quality, but it actually tells us very little about a user’s experience. We’ve seen cases of bounce rate spiking for redesigned restaurant websites that are doing better than ever. Further investigation discovered that people were simply coming to find business hours, menus, or an address, then bouncing with the intention of visiting the restaurant in person. A better metric to gauge page/site quality is scroll depth.
Scroll depth – This measures how far visitors scroll down individual webpages. Are visitors reaching your important content? If not, test different ways of providing the most important content higher up on your page, such as multimedia, contact forms, and so on. Also consider the quality of your content. Are you omitting needless words? Is it enticing for the visitor to continue down the page? Scroll depth tracking can be set up in your Google Analytics.
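As a quick illustration of the conversion-rate arithmetic described above, here's a minimal sketch. The goal name and the numbers are made up, not real analytics data.

```python
def conversion_rate(conversions, unique_visits):
    """Conversions for a single desired action/goal divided by unique visits."""
    if unique_visits == 0:
        return 0.0
    return conversions / unique_visits

# Illustrative numbers: completed email-signup goals vs. unique visits
# over the same period.
signups = 38
unique_visits = 1900

rate = conversion_rate(signups, unique_visits)
print(f"Email-signup conversion rate: {rate:.1%}")  # 2.0%
```

The same division works for any goal, whether it's a purchase, an account creation, or a form fill tracked as a Google Analytics goal.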
Ranking is a valuable SEO metric, but measuring your site’s organic performance can’t stop there. The goal of showing up in search is to be chosen by searchers as the answer to their query. If you’re ranking but not getting any traffic, you have a problem.
But how do you even determine how much traffic your site is getting from search? One of the most precise ways to do this is with Google Analytics.
Google Analytics (GA) is bursting at the seams with data — so much so that it can be overwhelming if you don’t know where to look. This is not an exhaustive list, but rather a general guide to some of the traffic data you can glean from this free tool.
Isolate organic traffic – GA allows you to view traffic to your site by channel. This will mitigate any scares caused by changes to another channel (ex: total traffic dropped because a paid campaign was halted, but organic traffic remained steady).
Traffic to your site over time – GA allows you to view total sessions/users/pageviews to your site over a specified date range, as well as compare two separate ranges.
How many visits a particular page has received – Site Content reports in GA are great for evaluating the performance of a particular page — for example, how many unique visitors it received within a given date range.
Traffic from a specified campaign – You can use UTM (Urchin Tracking Module) codes for better attribution. Designate the source, medium, and campaign, then append the codes to the end of your URLs. When people start clicking on your UTM-coded links, that data will start to populate in GA's "campaigns" report.
Click-through rate (CTR) – Your CTR from search results to a particular page (meaning the percent of people that clicked your page from search results) can provide insights on how well you’ve optimized your page title and meta description. You can find this data in Google Search Console, a free Google tool.
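The UTM tagging described above can be sketched in a few lines with Python's standard library. The source, medium, and campaign values here are hypothetical.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def add_utm(url, source, medium, campaign):
    """Append utm_source/utm_medium/utm_campaign, preserving any existing params."""
    parts = urlsplit(url)
    query = parse_qsl(parts.query)
    query += [
        ("utm_source", source),
        ("utm_medium", medium),
        ("utm_campaign", campaign),
    ]
    return urlunsplit(
        (parts.scheme, parts.netloc, parts.path, urlencode(query), parts.fragment)
    )

# Hypothetical campaign: a link shared in an email newsletter.
tagged = add_utm("https://example.com/guide", "newsletter", "email", "spring-launch")
print(tagged)
# https://example.com/guide?utm_source=newsletter&utm_medium=email&utm_campaign=spring-launch
```

Generating tagged links programmatically keeps the source/medium/campaign naming consistent, which makes GA's campaign report much easier to read later.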
In addition, Google Tag Manager is a free tool that allows you to manage and deploy tracking pixels to your website without having to modify the code. This makes it much easier to track specific triggers or activity on a website.
There are lots of different tools available for keeping track of your site’s position in SERPs, site crawl health, SERP features, and link metrics, such as Moz Pro and STAT.
The Moz and STAT APIs (among other tools) can also be pulled into Google Sheets or other customizable dashboard platforms for clients and quick at-a-glance SEO check-ins. This also allows you to provide more refined views of only the metrics you care about.
Dashboard tools like Data Studio, Tableau, and PowerBI can also help to create interactive data visualizations.
By having an understanding of certain aspects of your website — its current position in search, how searchers are interacting with it, how it’s performing, the quality of its content, its overall structure, and so on — you’ll be able to better uncover SEO opportunities. Leveraging the search engines’ own tools can help surface those opportunities, as well as potential issues:
While we don’t have room to cover every SEO audit check you should perform in this guide, we do offer an in-depth Technical SEO Site Audit course for more info. When auditing your site, keep the following in mind:
Crawlability: Are your primary web pages crawlable by search engines, or are you accidentally blocking Googlebot or Bingbot via your robots.txt file? Does the website have an accurate sitemap.xml file in place to help direct crawlers to your primary pages?
Indexed pages: Can your primary pages be found using Google? Doing a site:yoursite.com OR site:yoursite.com/specific-page check in Google can help answer this question. If you notice some are missing, check to make sure a meta robots=noindex tag isn’t excluding pages that should be indexed and found in search results.
Check page titles & meta descriptions: Do your titles and meta descriptions do a good job of summarizing the content of each page? How are their CTRs in search results, according to Google Search Console? Are they written in a way that entices searchers to click your result over the other ranking URLs? Which pages could be improved? Site-wide crawls are essential for discovering on-page and technical SEO opportunities.
Page speed: How does your website perform on mobile devices and in Lighthouse? Which images could be compressed to improve load time?
Content quality: How well does the current content of the website meet the target market’s needs? Is the content 10X better than other ranking websites’ content? If not, what could you do better? Think about things like richer content, multimedia, PDFs, guides, audio content, and more.
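As one example of the crawlability spot-check from the list above, Python's standard library can parse a robots.txt file and tell you whether a given crawler may fetch a given URL. The robots.txt contents, domain, and paths here are made up for illustration; in practice you'd fetch your site's real file.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, parsed offline for illustration.
robots_txt = """
User-agent: *
Disallow: /admin/

User-agent: Googlebot
Disallow: /drafts/
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# Spot-check whether primary pages are crawlable by Googlebot. Note that a
# crawler with its own group (Googlebot here) follows only that group, not
# the * rules, so /admin/ is not blocked for Googlebot in this example.
for path in ["/", "/products/widget", "/drafts/new-post", "/admin/login"]:
    ok = parser.can_fetch("Googlebot", f"https://yoursite.com{path}")
    print(f"{path}: {'crawlable' if ok else 'blocked'}")
```

Running a check like this over your primary pages is a quick way to catch an accidental Disallow rule before it costs you indexed pages.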
Pro tip: Website pruning!
Removing thin, old, low-quality, or rarely visited pages from your site can help improve your website’s perceived quality. Performing a content audit will help you discover these pruning opportunities. Three primary ways to prune pages include:
Keyword research and competitive website analysis (performing audits on your competitors’ websites) can also provide rich insights on opportunities for your own website.
Discovering website content and performance opportunities will help devise a more data-driven SEO plan of attack! Keep an ongoing list in order to prioritize your tasks effectively.
In order to prioritize SEO fixes effectively, it’s essential to first have specific, agreed-upon goals established between you and your client.
While there are a million different ways you could prioritize SEO, we suggest you rank them in terms of importance and urgency. Which fixes could provide the most ROI for a website and help support your agreed-upon goals?
Stephen Covey, author of The 7 Habits of Highly Effective People, developed a handy time management grid that can ease the burden of prioritization:
Putting out small, urgent SEO fires might feel most effective in the short term, but this often leads to neglecting non-urgent important fixes. The not urgent & important items are ultimately what often move the needle for a website’s SEO. Don’t put these off.
“Without strategy, execution is aimless. Without execution, strategy is useless.”
– Morris Chang
Much of your success depends on effectively mapping out and scheduling your SEO tasks. You can use free tools like Google Sheets to plan out your SEO execution (we have a free template here), but you can use whatever method works best for you. Some people prefer to schedule out their SEO tasks in their Google Calendar, in a kanban or scrum board, or in a daily planner.
Use what works for you and stick to it.
Measuring your progress along the way via the metrics mentioned above will help you monitor your effectiveness and allow you to pivot your SEO efforts when something isn’t working. Say, for example, you changed a primary page’s title and meta description, only to notice that the CTR for that page decreased. Perhaps you changed it to something too vague or strayed too far from the on-page topic — it might be good to try a different approach. Keeping an eye on drops in rankings, CTRs, organic traffic, and conversions can help you manage hiccups like this early, before they become a bigger problem.
Many SEO fixes are implemented without being noticeable to a client (or user). This is why it’s essential to employ good communication skills around your SEO plan, the time frame in which you’re working, and your benchmark metrics, as well as frequent check-ins and reports.
Late last week (Feb 28 – Mar 1), we saw a sizable two-day spike in Google rankings flux, as measured by MozCast. Temperatures on Friday reached 108°F. The original temperature on Thursday was 105°F, but that was corrected down to 99°F (more on that later).
Digging in on Friday (March 1st), we saw a number of metrics shift, but most notable was a spike in page-one Google SERPs with more than 10 organic results. Across the 10,000 keywords in MozCast, here's what we observed at the high end:
Counting “organic” results in 2019 is challenging — some elements, like expanded site-links (in the #1 position), Top Stories, and image results can occupy an organic position. In-depth Articles are particularly challenging (more on that in a moment), and the resulting math usually leaves us with page-one SERPs with counts from 4 to 12. Friday’s numbers were completely beyond anything we’ve seen historically, though, with organic counts up to 19 results.
Across 10K keywords, we saw 9 SERPs with 19 results. Below is one of the most straightforward (in terms of counting). There was a Featured Snippet in the #0 position, followed by 19 results that appear organic. This is a direct screenshot from a result for “pumpkin pie recipe” on Google.com/US:
Pardon the long scroll, but I wanted you to get the full effect. There’s no clear marker here to suggest that part of this SERP is a non-organic feature or in some way different. You’ll notice, though, that we transition from more traditional recipe results (with thumbnails) to what appear to be a mix of magazine and newspaper articles. We’ve seen something like this before …
You may not think much about In-depth Articles these days. That’s in large part because they’re almost completely hidden within regular, organic results. We know they still exist, though, because of deep source-code markers and a mismatch in page-one counts. Here, for example, are the last 6 results from today (March 4th) on a search for “sneakers”:
Nestled in the more traditional, e-commerce results at the end of page one (like Macy’s), you can see articles from FiveThirtyEight, Wired, and The Verge. It’s hard to tell from the layout, but this is a 3-pack of In-depth Articles, which takes the place of a single organic position. So, this SERP appears to have 12 page-one results. Digging into the results on March 1st, we saw a similar pattern, but those 3-packs had expanded to as many as 10 articles.
We retooled the parser to more flexibly detect In-depth Articles (allowing for packs with more than 3 results), and here’s what we saw for prevalence of In-depth Articles over the past two weeks:
Just under 23% of MozCast SERPs on the morning of March 1st had something similar to In-depth Articles, an almost 4X increase from the day before. This number returned to normal (even slightly lower) the next day. It’s possible that our new definition is too broad, and these aren’t really traditional “In-depth” packs, but then we would expect the number to stay elevated. We also saw a large spike in SERP “real-estate” shares for major publications, like the New York Times, which typically dominate In-depth Articles. Something definitely happened around March 1st.
By the new method (removing these results from organic consideration), the temperature for 2/28 dropped from 105°F to 99°F, as some of the unusual results were treated as In-depth Articles and removed from the weather report.
Note that the MozCast temperatures are back-dated, since they represent the change over a 24-hour period. So, the prevalence of In-depth articles on the morning of March 1st is called “3/1” in the graph, but the day-over-day temperature recorded that morning is labeled “2/28” in the graph at the beginning of this post.
Is this a sign of things to come? It’s really tough to say. On March 1st, I reached out to Twitter to see if people could replicate the 19-result SERPs and many people were able to, both on desktop and mobile:
This did not appear to be a normal test (which we see roll out to something like 1% or less of searchers, typically). It’s possible this was a glitch on Google’s end, but Google doesn’t typically publicize temporary glitches, so it’s hard to tell.
It appears that the 108°F was, in part, a reversal of these strange results. On the other hand, it’s odd that the reversal was larger than the original rankings flux. At the same time, we saw some other signals in play, such as a drop in image results on page one (about 10.5% day-over-day, which did not recover the next day). It’s possible that an algorithm update rolled out, but there was a glitch in that update.
If you're a traditional publisher or someone who generally benefits from In-depth Articles, I'd recommend keeping your eyes open. This could be a sign of future intent by Google, or it could simply be a mistake. For the rest of us, we'll have to wait and see. Fortunately, these results appeared mostly at the end of page one, so top rankings were less impacted, but a 19-result page one would certainly shake up our assumptions about organic positioning and CTR.
Moz’s Domain Authority is requested over 1,000,000,000 times per year, it’s referenced millions of times on the web, and it has become a veritable household name among search engine optimizers for a variety of use cases, from determining the success of a link building campaign to qualifying domains for purchase. With the launch of Moz’s entirely new, improved, and much larger link index, we recognized the opportunity to revisit Domain Authority with the same rigor as we did keyword volume years ago (which ushered in the era of clickstream-modeled keyword data).
What follows is a rigorous treatment of the new Domain Authority metric. What I will not do in this piece is rehash the debate over whether Domain Authority matters or what its proper use cases are. I have and will address those at length in a later post. Rather, I intend to spend the following paragraphs addressing the new Domain Authority metric from multiple directions.
The most important component of Domain Authority is how well it correlates with search results. But first, let’s get the correlation-versus-causation objection out of the way: Domain Authority does not cause search rankings. It is not a ranking factor. Domain Authority predicts the likelihood that one domain will outrank another. That being said, its usefulness as a metric is tied in large part to this value. The stronger the correlation, the more valuable Domain Authority is for predicting rankings.
Determining the “correlation” between a metric and SERP rankings has been accomplished in many different ways over the years. Should we compare against the “true first page,” top 10, top 20, top 50 or top 100? How many SERPs do we need to collect in order for our results to be statistically significant? It’s important that I outline the methodology for reproducibility and for any comments or concerns on the techniques used. For the purposes of this study, I chose to use the “true first page.” This means that the SERPs were collected using only the keyword with no additional parameters. I chose to use this particular data set for a number of reasons:
I randomly selected 16,000 keywords from the United States keyword corpus for Keyword Explorer. I then collected the true first page for all of these keywords (completely different from those used in the training set). I extracted the URLs, but I also chose to remove duplicate domains (i.e., if the same domain occurred one after another). For a time, Google clustered domains together in the SERPs under certain circumstances. It was easy to spot these clusters, as the second and later listings were indented. No such indentations are present any longer, but we can't be certain that Google never groups domains. If they do group domains, it would throw off the correlation, because it's the grouping and not the traditional link-based algorithm doing the work.
I collected the Domain Authority (Moz), Citation Flow and Trust Flow (Majestic), and Domain Rank (Ahrefs) for each domain and calculated the mean Spearman correlation coefficient for each SERP. I then averaged the coefficients for each metric.
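The correlation step can be sketched as follows. This is an illustrative reconstruction with toy numbers, not Moz’s actual code: it computes one Spearman coefficient per SERP (result position vs. metric score) and then averages the coefficients.

```python
from statistics import mean

def spearman(x, y):
    """Spearman rank correlation for two equal-length lists (no-ties case)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def mean_serp_correlation(serps):
    """One coefficient per SERP (position vs. metric score), then average."""
    coeffs = []
    for scores in serps:  # metric scores listed in ranking order
        positions = list(range(1, len(scores) + 1))
        coeffs.append(spearman(positions, scores))
    return mean(coeffs)

# Toy data: stronger domains mostly rank higher, so the coefficients
# come out negative (position 1 pairs with the highest score).
serps = [
    [92, 81, 77, 60, 55],   # perfectly ordered -> rho = -1.0
    [88, 90, 71, 64, 50],   # one swap          -> rho = -0.9
]
print(round(mean_serp_correlation(serps), 2))  # -0.95
```

This also makes the sign convention in the next paragraph concrete: because position 1 is the *best* rank, a metric that predicts rankings well yields coefficients near -1.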
Moz’s new Domain Authority has the strongest correlations with SERPs of the competing strength-of-domain link-based metrics in the industry. The sign (-/+) has been inverted in the graph for readability, although the actual coefficients are negative (and should be).
Moz’s Domain Authority scored a ~.12, or roughly 6% stronger than the next-best competitor (Domain Rank by Ahrefs). Domain Authority performed 35% better than Citation Flow and 18% better than Trust Flow. This isn’t surprising, in that Domain Authority is trained to predict rankings while our competitors’ strength-of-domain metrics are not. It shouldn’t be taken as a negative that our competitors’ strength-of-domain metrics don’t correlate as strongly as Moz’s Domain Authority; rather, it’s simply a reflection of the intrinsic differences between the metrics. That being said, if you want a metric that best predicts rankings at the domain level, Domain Authority is that metric.
Note: At first blush, Domain Authority’s improvements over the competition are, frankly, underwhelming. The truth is that we could quite easily increase the correlation further, but doing so would risk over-fitting and compromising a secondary goal of Domain Authority…
Historically, Domain Authority has focused on a single goal: maximizing the predictive capacity of the metric. All we wanted were the highest correlations. However, Domain Authority has become, for better or worse, synonymous with “domain value” in many sectors, such as among link buyers and domainers. Subsequently, as bizarre as it may sound, Domain Authority has itself been targeted with spam in order to bolster the score and sell domains at a higher price. While these crude link manipulation techniques didn’t work so well in Google, they were sufficient to increase Domain Authority. We decided to rein that in.
The first thing we did was compile a series of data sets that corresponded with industries we wished to impact, knowing that Domain Authority was regularly manipulated in these circles.
While it would be my preference to release all the data sets, I’ve chosen not to in order to not “out” any website in particular. Instead, I opted to provide these data sets to a number of search engine marketers for validation. The only data set not offered for outside validation was Moz customers, for obvious reasons.
For each of the above data sets, I collected both the old and new Domain Authority scores. This was all conducted on February 28th in order to have parity across all tests. I then calculated the relative difference between the old DA and new DA within each group. Finally, I compared the various data set results against one another to confirm that the model addresses the various methods of inflating Domain Authority.
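The group-level comparison boils down to a simple relative-difference calculation. The scores below are invented for illustration only, not the study’s actual data:

```python
def relative_change(old_scores, new_scores):
    """Percent change between a group's mean old DA and mean new DA."""
    old_avg = sum(old_scores) / len(old_scores)
    new_avg = sum(new_scores) / len(new_scores)
    return (new_avg - old_avg) / old_avg * 100

# Invented old/new DA scores for two hypothetical groups.
groups = {
    "random domains": ([40, 55, 62], [38, 51, 58]),
    "link sellers":   ([70, 65, 80], [30, 28, 36]),
}
for name, (old, new) in groups.items():
    print(f"{name}: {relative_change(old, new):+.1f}%")
```

Comparing such percentages across groups (rather than raw point drops) is what makes the re-centering of the whole scale distinguishable from manipulation-specific penalties.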
In the above graph, blue represents the Old Average Domain Authority for that data set and orange represents the New Average Domain Authority for that same data set. One immediately noticeable feature is that every category drops, even the random-domains set. This is a re-centering of the Domain Authority score and should cause no alarm to webmasters. There is, on average, a 6% reduction in Domain Authority for randomly selected domains from the web. Thus, if your Domain Authority drops a few points, you are well within the range of normal. Now, let’s look at the various data sets individually.
Using the same methodology for finding random domains that we use for collecting comparative link statistics, I selected 1,000 domains and determined that there is, on average, a 6.1% drop in Domain Authority. It’s important that webmasters recognize this, as the shift is likely to affect most sites and is nothing to worry about.
Of immediate interest to Moz is how our own customers perform in relation to the random set of domains. On average, the Domain Authority of Moz customers lowered by 7.4%. This is very close to the random set of URLs and indicates that most Moz customers are likely not using techniques to manipulate DA to any large degree.
Surprisingly, link buyers only lost 15.9% of their Domain Authority. In retrospect, this seems reasonable. First, we looked specifically at link buyers from blog networks, which aren’t as spammy as many other techniques. Second, most of the sites paying for links are also optimizing their site’s content, which means the sites do rank, sometimes quite well, in Google. Because Domain Authority trains against actual rankings, it’s reasonable to expect that the link buyers data set would not be impacted as heavily as other techniques, because the neural network learns that some link-buying patterns actually work.
Here’s where the fun starts. The neural network behind Domain Authority was able to drop comment spammers’ average DA by 34%. I was particularly pleased with this one because of all the types of link manipulation addressed by Domain Authority, comment spam is, in my honest opinion, no better than vandalism. Hopefully this will have a positive impact on decreasing comment spam — every little bit counts.
I was actually quite surprised, at first, that link sellers on average dropped 56% in Domain Authority. I knew that link sellers often participated in link schemes (normally interlinking their own blog networks to build up DA) so that they can charge higher prices. However, it didn’t occur to me that link sellers would be easier to pick out because they explicitly do not optimize their own sites beyond links. Subsequently, link sellers tend to have inflated, bogus link profiles and flimsy content, which means they tend to not rank in Google. If they don’t rank, then the neural network behind Domain Authority is likely to pick up on the trend. It will be interesting to see how the market responds to such a dramatic change in Domain Authority.
One of the features that I’m most proud of in regards to Domain Authority is that it effectively addressed link manipulation in order of our intuition regarding quality. I created three different data sets out of one larger data set (auction domains), where I used certain qualifiers like price, TLD, and archive.org status to label each domain as high-quality, mid-quality, and low-quality. In theory, if the neural network does its job correctly, we should see the high-quality domains impacted the least and the low-quality domains impacted the most. This is the exact pattern which was rendered by the new model. High-quality auction domains dropped an average of 61% in Domain Authority. That seems really high for “high-quality” auction domains, but even a cursory glance at the backlink profiles of domains that are up for sale in the $10K+ range shows clear link manipulation. The domainer industry, especially the domainer-for-SEO industry, is rife with spam.
There is one network on the web that troubles me more than any other. I won’t name it, but it’s particularly pernicious because the sites in this network all link to the top 1,000,000 sites on the web. If your site is in the top 1,000,000 on the web, you’ll likely see hundreds of root linking domains from this network no matter which link index you look at (Moz, Majestic, or Ahrefs). You can imagine my delight to see that it drops roughly 79% in Domain Authority, and rightfully so, as the vast majority of these sites have been banned by Google.
Continuing with the pattern regarding the quality of auction domains, you can see that “mid-quality” auction domains dropped nearly 95% in Domain Authority. This is huge. Bear in mind that these drastic drops are not combined with losses in correlation with SERPs; rather, the neural network is learning to distinguish between backlink profiles far more effectively, separating the wheat from the chaff.
If you spend any time looking at dropped domains, you have probably come upon a domainer network where there are a series of sites enumerated and all linking to one another. For example, the first site might be sbt001.com, then sbt002.com, and so on and so forth for thousands of domains. While it’s obvious for humans to look at this and see a pattern, Domain Authority needed to learn that these techniques do not correlate with rankings. The new Domain Authority does just that, dropping the domainer networks we analyzed on average by 97%.
Finally, the worst offenders — low-quality auction domains — dropped 98% on average. Domain Authority just can’t be fooled in the way it has in the past. You have to acquire good links in the right proportions (in accordance with a natural model and sites that already rank) if you wish to have a strong Domain Authority score.
For most webmasters, this means very little. Your Domain Authority might drop a little bit, but so will your competitors’. For search engine optimizers, especially consultants and agencies, it means quite a bit. The inventories of known link sellers will probably diminish dramatically overnight. High DA links will become far more rare. The same is true of those trying to construct private blog networks (PBNs). Of course, Domain Authority doesn’t cause rankings so it won’t impact your current rank, but it should give consultants and agencies a much smarter metric for assessing quality.
We aren’t going to rest. An important philosophical shift has taken place at Moz with regards to Domain Authority. In the past, we believed it was best to keep Domain Authority static, rarely updating the model, in order to give users an apples-to-apples comparison. Over time, though, this meant that Domain Authority would become less relevant. Given the rapidity with which Google updates its results and algorithms, the new Domain Authority will be far more agile as we give it new features, retrain it more frequently, and respond to algorithmic changes from Google. We hope you like it.
Be sure to join us on Thursday, March 14th at 10am PT for our upcoming webinar discussing strategies and use cases for the new Domain Authority.
Not everyone has time to watch C-SPAN for five-and-a-half hours in the middle of the week. Not even to watch President Trump’s former lawyer and fixer Michael Cohen call Trump a “racist,” “con man” and “cheat,” as happened on Wednesday. And not even to watch Cohen be forcefully questioned by Republicans in response.
As such, we rely on the news media to watch for us. But the media is not a monolith. How an outlet condenses a big event like the Cohen hearings can shade how its audience interprets the events. And when it came to cable news, the networks differed in their coverage of the hearing’s aftermath, as you might expect. But an analysis of how the words used by each network differed is a window into how they’re framing the threats to Trump’s presidency. MSNBC, for example, appeared particularly focused on the legal implications of the hearing — on Robert Mueller and prosecutors. CNN was heavy on issues of credibility, money and payments, and the claim by Cohen that Trump is a “racist.” And Fox News was especially focused on other news altogether, namely what was happening thousands of miles away, where Trump was sitting down with Kim Jong Un.
Certain specific words gave the Cohen hearing these flavors on each of the three cable networks. Using data from the Internet Archive’s Television News Archive, processed by the GDELT Project, we analyzed the coverage of Cohen on CNN, Fox News and MSNBC from 5 p.m. to midnight the day of the testimony. To suss out any differences in the networks’ coverage, we first looked at when “Cohen” was spoken and which other words were said within the same 15-second window. (That’s the size of the clips we can access from the data sources.) Then we looked at the 200 most-used Cohen-adjacent words across the three networks and isolated the 15 words that were most particular to each network. (By most particular, we mean the words that were used relatively more often in a network’s Cohen coverage.) You’ll see those words plotted on the chart below; we arranged each word by what percentage of clips that used that word came from each network. For example, of all the Cohen-related clips mentioning the word “summit,” 80 percent were on Fox News, 15 percent were on MSNBC and 5 percent were on CNN.
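The percentage-of-clips-per-network calculation can be sketched like this. The counts are invented for illustration; only the 80/15/5 “summit” split mirrors the article’s example:

```python
# Hypothetical clip counts: word -> number of Cohen-adjacent clips
# per network. Only the "summit" row mirrors the article's example;
# the other rows are invented.
counts = {
    "summit":      {"CNN": 5,  "Fox News": 80, "MSNBC": 15},
    "prosecutors": {"CNN": 20, "Fox News": 10, "MSNBC": 70},
    "news":        {"CNN": 33, "Fox News": 34, "MSNBC": 33},
}

def shares(word_counts):
    """Fraction of a word's Cohen-adjacent clips that aired on each network."""
    total = sum(word_counts.values())
    return {net: n / total for net, n in word_counts.items()}

for word, per_net in counts.items():
    s = shares(per_net)
    leader = max(s, key=s.get)   # network the word is most "particular" to
    print(word, {net: f"{frac:.0%}" for net, frac in s.items()}, "->", leader)
```

Words whose shares split roughly evenly (like “news” above) land in the middle of the chart’s triangle; words dominated by one network sit near that network’s corner.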
The words close to each network’s corner of the coverage triangle are the ones most specifically associated with its coverage. For CNN, “certainly,” “credibility” and “racist” stood out. Fox News was notable for its use of the word “summit” — presumably in reference to Trump’s meeting with North Korean leader Kim Jong Un, which happened around the same time as the Cohen hearing. And MSNBC’s coverage was distinguished by its talk about “prosecutors” and “Mueller.” (Words in the center, such as “news,” were used a lot but not especially favored by any network in particular.)
And the qualitative flavor of the coverage varied widely as well. CNN talking head Chris Cillizza baldly declared “winners” and “losers” from the hearing. The former included the performance during the hearing of U.S. Rep. Alexandria Ocasio-Cortez of New York — “and man, did she nail it.” Ocasio-Cortez’s interrogation of Cohen was praised elsewhere for being “well thought out.” The latter included Mark Meadows, the chairman of the House Freedom Caucus, who was “out for blood,” revealing little in his questions beyond his contempt for Cohen.
On the other hand, other coverage suggested that the hearing was merely a tool for Trump’s opponents and that given Cohen’s history of lying, the whole thing was something of a farce. The day after the hearing, the morning show Fox & Friends, for example, went meta, declaring that the media “misses the mark.” “This is what you get when you have partisan political operatives masquerading as journalists,” said Ned Ryun, a Republican strategist and the show’s guest. “They can barely control their glee.” He went on to call it “theater of the absurd” and a “total clown show.”
Fox & Friends then stepped hard on the FiveThirtyEight brand, comparing data on the amount of time the cable news networks had spent on Cohen versus the U.S.-North Korea summit during the run-up to the hearing, lamenting the fact that the other networks had given far more airtime to the former. “There’s a reason they call us fair and balanced,” a host said. (FiveThirtyEight has not independently verified those numbers.) The summit fell apart early, and no deal was reached.
There will be more political events in the weeks and months to come. The cable networks’ coverage surely won’t march in lockstep on those, either. We’ll be watching.