Short – THEORY on how to tune for RankBrain



Thanks for listening to SEO Fight Club. This isn’t a full episode, just a short, but it is a very important and time-sensitive topic so I wanted to share it right away.

On October 26, 2015, Jack Clark at Bloomberg Business wrote an article announcing RankBrain to the world.

I’ll put the link in the show notes.

Since the announcement I’ve heard dozens of SEOs try to explain RankBrain and fail, and none of them were able to tell you how to tune for it. It just so happens that I am a software engineer with some background in AI. I’m going to show you what RankBrain does in this short, and I will tell you exactly how to tune for it so you can start taking advantage.

In the article, Google says:

For the past few months, a “very large fraction” of the millions of queries a second that people type into the company’s search engine have been interpreted by an artificial intelligence system, nicknamed RankBrain, said Greg Corrado, a senior research scientist with the company

RankBrain uses artificial intelligence to embed vast amounts of written language into mathematical entities — called vectors

This is geek speak for Natural Language Processing (NLP): software that can understand the clauses and parts of speech within a sentence. NLP can identify the subject and predicate in a sentence, which verbs are actions of which nouns, which adjective phrases apply to which noun phrases, and so on. It can even map things across different sentences, and of course handle all the synonyms, equivalent phrasings, and word stemming too.
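The “vectors” mentioned in the quote can be illustrated with a toy sketch. This is entirely my own illustration with made-up numbers, not Google’s model: each word becomes a list of numbers, and cosine similarity measures how close two meanings are. Real embeddings are learned from huge corpora and have hundreds of dimensions.

```python
import math

# Hypothetical 3-dimensional vectors (made up for illustration).
vectors = {
    "predator": [0.9, 0.1, 0.3],
    "carnivore": [0.85, 0.15, 0.35],
    "spreadsheet": [0.05, 0.9, 0.1],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Words with similar meanings end up close together in vector space.
print(cosine(vectors["predator"], vectors["carnivore"]))   # close to 1.0
print(cosine(vectors["predator"], vectors["spreadsheet"])) # much lower
```

This is why a system like this can handle a never-before-seen query: an unfamiliar word still lands near familiar words with similar meaning.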

The article then says:

If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries.

Because this is NLP, it is no longer basic search term matching… the system mathematically understands what an answer to a question is based on language graphs. Because of this it can handle never-before-seen queries.

The system helps Google deal with the 15 percent of queries a day it gets which its systems have never seen before, he said.

For example, it’s adept at dealing with ambiguous queries, like,
“What’s the title of the consumer at the highest level of a food chain?”

RankBrain is breaking down the clauses in the question to find the criteria for the answer:

What’s the title (that’s the basic query)
of the consumer (criteria on which title)
at the highest level (criteria on which consumer)
of a food chain (criteria on which level)

Then RankBrain looks for sentences with clauses that meet those criteria.
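The breakdown above can be sketched as a toy matcher that scores candidate sentences by how many of the question’s criteria they satisfy. This is my own illustration of the idea, not Google’s code:

```python
# Criteria extracted from the clauses of the question.
criteria = ["title", "consumer", "highest level", "food chain"]

candidates = [
    "An apex predator is the consumer at the highest level of a food chain.",
    "A food chain shows how energy moves between organisms.",
]

def score(sentence):
    # Count how many criteria the sentence satisfies (simple substring check;
    # a real system would match synonyms and paraphrases too).
    text = sentence.lower()
    return sum(1 for c in criteria if c in text)

best = max(candidates, key=score)
print(best)  # the sentence satisfying the most criteria
```

The first sentence satisfies three of the four criteria, so it wins; a real system would also credit equivalent phrasings instead of exact substrings.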

Acceptable Answers to the question are:

tertiary consumer
quaternary consumer
apex predators

The most common accepted answer appears to be “predators”

If you look at the results for the example they provide one of the page one results is OFF TOPIC… but it demonstrates the value of a high PR page with an exact match of the question in the main content but no answer:

Above that in the #2 spot is a result with the most common answer.

Google knew to make “predators” bold. They know to treat it like a keyword match.

This is likely what RankBrain is… on a question query it folds the likely answer to the question into the search term matches, so the answer counts as a hit.
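Here is a toy sketch of that theory. This is an assumption on my part, not confirmed Google behavior: the inferred answer terms get added to the set of terms that count as hits when scoring a page.

```python
# Terms from the query itself, plus answers the engine infers for the question.
query_terms = {"consumer", "highest", "level", "food", "chain"}
inferred_answers = {"predators", "apex predator"}

def hits(page_text, terms):
    # Count total occurrences of all terms (crude, for illustration only).
    text = page_text.lower()
    return sum(text.count(t) for t in terms)

question = "What's the consumer at the highest level of a food chain?"
page_a = question + " At the top of the levels are predators."
page_b = question + " Nobody knows."

# The page that also contains the answer scores hits the other one misses.
print(hits(page_a, query_terms | inferred_answers))
print(hits(page_b, query_terms | inferred_answers))
```

Under this model, two pages with the same question text are separated by whether the accepted answer is present, which matches what the results for this example show.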

The pages with the exact match question and no answer ranked lower than the pages with the generally accepted correct answer.

The second result clearly states: “At the top of the levels are predators” with predators put in bold. Google knowing to make predators bold IS the evidence that what I am describing is in fact happening in some fashion behind the scenes.

So if you want to rank well in question-and-answer format you need both the question and the answer on the page. You probably want to mark them up with structured data markup.
An optimized sentence answer to the question might perform better than just a simple answer like:

“Apex predators are the consumer at the highest level of a food chain.”
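Here is a sketch of what question-and-answer structured data could look like. The episode doesn’t name a specific format, so I’m assuming schema.org’s Question/Answer types as one plausible vocabulary:

```python
import json

# Hypothetical JSON-LD markup pairing the question with the optimized answer.
markup = {
    "@context": "https://schema.org",
    "@type": "Question",
    "name": "What's the title of the consumer at the highest level of a food chain?",
    "acceptedAnswer": {
        "@type": "Answer",
        "text": "Apex predators are the consumer at the highest level of a food chain.",
    },
}

# The serialized form would be embedded in a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```

The point is simply to make the question-answer pairing explicit to machines rather than leaving it implicit in the prose.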

Specificity probably helps your answer score, but I haven’t tested this. I’m just drawing inferences from the observations.

You probably want to make sure that your answer is in agreement with the generally accepted answer.

Google’s AI is probably finding all the variations of the same question and all the variations of the same answer, and using the ones with the most agreement as the “correct” answer.
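That “most agreement wins” idea can be sketched as a simple majority vote over normalized answer variants. Again, this is my assumption about the mechanism, not something Google has described:

```python
from collections import Counter

# Answer variants collected from many pages (made-up sample).
answers = [
    "apex predators", "Apex Predators", "predators",
    "tertiary consumer", "predators", "PREDATORS",
]

# Normalize, then pick the variant with the most agreement.
normalized = [a.lower() for a in answers]
most_common, count = Counter(normalized).most_common(1)[0]
print(most_common)  # "predators"
```

A real system would also collapse synonyms and paraphrases into one bucket before voting, which simple lowercasing obviously doesn’t do.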

There is more evidence that what I am describing is happening. Midway down the page Google displays a “People Also Ask” block of similar questions:

Google has to understand what makes questions similar and that takes NLP.

The article goes on to say “In the few months it has been deployed, RankBrain has become the third-most important signal contributing to the result of a search query”

“Google search engineers, who spend their days crafting the algorithms that underpin the search software, were asked to eyeball some pages and guess which they thought Google’s search engine technology would rank on top.”

“While the humans guessed correctly 70 percent of the time, RankBrain had an 80 percent success rate.”

“Typical Google users agree. In experiments, the company found that turning off this feature “would be as damaging to users as forgetting to serve half the pages on Wikipedia,”

This makes sense because pages with the correct answer to your question are typically much more useful than pages without the correct answer.

Episode 4 – Real Negative SEO

Get it on iTunes

Hi and thank you for listening to SEO Fight Club. I’m Ted Kubaitis and I have 22 years of web development and SEO experience. I have patented web technologies and started online businesses. I am both an engineer and a marketer. My goal is to help you win your SEO fights.

This episode’s FREEBIE!

With every episode I love to give something away of high value. This episode’s SEO freebie is my own personal 40 point negative SEO checklist. If you are concerned about negative SEO then this checklist will help you conduct periodic audits for suspicious activity. This free download will save you hours on checking your websites for attacks.

You can download the freebie at:

Negative SEO refers to the practice of using techniques to ruin a competitor’s rankings in search engines, but really just in Google.

Most people think of Negative SEO as unwanted spammy backlinks. To put it simply, those people have a failure of imagination. To understand the dozens of tactics of negative SEO you first have to understand its numerous motivations and intentions.

Here are some of those intentions:

  • Get a site banned from search results.
  • Get a page demoted in the rankings.
  • Steal a website’s customers.
  • Steal a website’s content.
  • Steal a website’s viewers or contributors.
  • Change the topics a webpage ranks for.
  • To hurt the reputation of the website.
  • To hurt the reputation of the author.
  • To hurt the reputation of a product.
  • To disrupt a website’s ad revenue.
  • To squander a website’s ad spend.
  • To ruin a website’s content or data.
  • To disrupt the operation of the website.
  • To cause financial harm to a website.
  • To cause confusion about products and services or to blur the lines differentiating them.
  • To slander or make false claims about a website, business, or person.
  • To bully, harass, or otherwise intimidate a person, website, or business.

I am certain there are more I’m not thinking of. Very quickly people can start to sense how they or people they know have already been hurt by negative SEO. I’m sure you are already sensing the kind of uneasy ground we all stand on now. When a business gets hit by these it can be crippling. It is absolutely devastating when you are hit by multiple of these at the same time.

There are an estimated 40 kinds of negative SEO tactics that I know of and the list seems to grow every year as Google adds new things to punish that can be weaponized by negative SEO practitioners.

  • spammy links
  • false ratings and reviews (worst product ever)
  • rating and review spam (this is ok but my product over here is better)
  • content spam
  • content theft
  • GoogleBot interruption
  • False canonicalization
  • False authorship
  • toxic domain redirection
  • denial of service
  • crippling the site’s speed
  • fraudulent DMCA takedown
  • Cross site scripting
  • Hacking the site
  • Keyword bombing posts and comments
  • Website or Social Identity Theft
  • Fake complaints
  • Injecting Malware
  • Click-spamming ads
  • CTR and other false signal bots
  • faking Email spam to get competitor publicly blacklisted
  • Fake bots that claim to be the competitor and behave badly (again, to get publicly blacklisted)
  • Submitting alternative URLs or hosts to exploit missing canonical tags
  • Link building a page into a new keyword context
  • Link building incorrect pages into the keyword context for bad experience
  • Flooding a web database with bogus data
  • Posting adult or damaging content
  • Linking from adult sites or other toxic locations
  • Disavow misuse to declare a competitor as spam to google
  • Misuse of Google’s spam report form
  • Inserting grammar, spelling, and content encoding errors
  • Unwanted bot and directory submission
  • Redirecting many domains’ bot activity onto a target site all at once.
  • Negative and false Press
  • Inclusion in blog networks, link wheels, other linking schemes
  • Content overflow… keep posting to a one page thread until the page is too big
  • Topic Flooding… flood the forum with so many crappy posts the forum becomes unusable
  • Keep replying to bad or outdated posts so it keeps fresher or better content off the main indexes
  • Pretend to be a competitor and ask for link removals
  • Flooding junk traffic to a site so Google gets the wrong idea about the site’s audience location or demographics
  • Domain squatting and hijacking

I’m not sure how many of these are still effective, but I want to tell you about my experience with one of them that was extremely devastating: the GoogleBot interruption attack.

I used to say “negative SEO isn’t real”. My desk is in the engineering bullpen. There are no cubes or offices. This allows everyone to overhear all of the issues of the day. I heard the network admin complaining about very weak denial of service attacks on our websites.

The specific type of denial of service attack my network administrator was battling is called “slow loris”.

Slow Loris Defined

Slowloris is a piece of software written by Robert “RSnake” Hansen which allows a single machine to take down another machine’s web server with minimal bandwidth and side effects on unrelated services and ports.

Slowloris tries to keep many connections to the target web server open and hold them open as long as possible. It accomplishes this by opening connections to the target web server and sending a partial request. Periodically, it will send subsequent HTTP headers, adding to—but never completing—the request. Affected servers will keep these connections open, filling their maximum concurrent connection pool, eventually denying additional connection attempts from clients.

Source: Wikipedia

The attacks didn’t make much sense. We could detect and block them within minutes, and yet they kept reappearing every two to three weeks. This went on for months, possibly years. We are not entirely sure when the attacks started.

We didn’t understand the motivation behind these attacks. We could easily counter them. Why was someone working so hard at something we could stop so quickly?

Several months went by and I was in a meeting trying to explain the unusual volatility in SEO revenue. While in that meeting I got chills down my spine. I had a thought I just couldn’t shake. Later, I put the chart of the slow loris attacks on top of the chart for SEO revenue, and every drop in SEO followed a slow loris attack. From then on I knew negative SEO was very real and very different from what everyone thought negative SEO was. This was a very effective negative SEO attack. It had absolutely nothing to do with backlinks.

I spent the next few weeks learning what I could about this attack. I learned how it worked. Basically, the attacker was waiting for an indication that Googlebot was crawling our site, then they would launch the attack so our web server would return 500 errors to Googlebot. Googlebot would remove the pages that returned a 500 error from the search results, and it would not retest those pages for days. (That is not the case today; Google now retests pages within hours, but it was the case at the time.) To make things even worse, once Googlebot found working pages again, they would reappear several places lower in the results for about two or three weeks before recovering to their original positions.
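Detecting this pattern in your own logs is straightforward in principle. A minimal sketch, assuming a heavily simplified log format (real access logs need real parsing, and Googlebot claims should be verified by reverse DNS):

```python
# Simplified log lines: client IP, user agent, method, path, status.
log_lines = [
    '66.249.66.1 "Googlebot" GET /page1 500',
    '66.249.66.1 "Googlebot" GET /page2 500',
    '10.0.0.5 "Mozilla" GET /page1 200',
    '66.249.66.1 "Googlebot" GET /page3 500',
]

# Count 500 errors served specifically to Googlebot.
googlebot_errors = sum(
    1 for line in log_lines if "Googlebot" in line and line.endswith("500")
)

# Many 500s concentrated on Googlebot's visits is this attack's signature.
if googlebot_errors >= 3:
    print("possible GoogleBot interruption attack")
```

The threshold of 3 is arbitrary here; in practice you would compare the rate of Googlebot 500s against your normal baseline.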

These attacks that we assumed were unsuccessful and weak were totally the opposite. They were both successful and devastating. They had lasting effects and the timing of the attacks was keeping us pinned down in the rankings.

If you were only watching rankings then this would just look like the normal everyday Google dance. No one cares that one week a page went down 6 spots and then two weeks later came back up. We have thousands of pages across 20 websites, and most of those websites are on shared servers. Google Search Console doesn’t let you see combined impact across sites. If I wasn’t reporting on SEO revenue, which many SEOs object to, this would have continued undetected.

So now I knew the attack was real, and I knew how it worked. So how do I stop them?

For the interruption attack to be effective the attacker needs to time his attack to coincide with Googlebot’s visit to the website. How can they do this? There are five ways I can think of:

1. Monitor the Google cache; when a cache date is updated you know Googlebot is crawling.
2. Analyze the cache dates and estimate when Googlebot will come back.
3. Cross-site scripting to see visiting user agents.
4. Attack often and hope for the best.
5. Hack the system and access the logs.

I believed the attacker was probably doing #1.

I put a NOARCHIVE tag on all of our pages. This prevents Google from showing the Cached link for a page. This would stop the attacker from easily monitoring our cache dates.

The attacks stopped for about 4 months following that change. I thought we had won, but I was wrong.

Late in the third quarter of 2014 we were hit hard by an extremely precise attack. Then our attacker went dormant. The attacker had his attack capability back, and he knew we were on to him. We suspected he was picking his timing more carefully now. It was the second time I got chills down my spine. Like most online stores we do most of our sales in the fourth quarter. My suspicion was that the attacker was lying in wait for Black Friday. One of these attacks the week before Black Friday would cripple our top-performing weeks of the year.

We scrambled to figure out how they were timing the attacks with GoogleBot. We failed. The week before Black Friday we were hit harder than we were ever hit before. We lost seventy percent of our SEO revenue for the month. I was devastated.

The company accepted that the attacks were amounting to significant losses. We knew the attacks were going to continue. We invested hundreds of thousands of dollars in security appliances that detect and block hundreds of different attacks.

It took six months to get all the websites protected by the new firewalls and to get all the URLs properly remapped. We had finally stopped the onslaught but at a pretty heavy cost in time and money. This year we have seen high double-digit growth in SEO. It is due to stopping the negative SEO attacks.

The attack I just described I call a “GoogleBot Interruption Attack”. Negative SEO is so new these attacks probably don’t have official names yet.

I have seen a number of other attacks too, but none were as crippling as the GoogleBot interruption attack.

Another attack I have encountered is when a black hat takes a toxic domain name that has been penalized into the ground and then points the DNS to your website. Some of those penalties appear to carry over at least for a short while. The worst is when a lot of these toxic domains are pointed all at once at your website.

Another similar attack to that is when an attacker redirects the URLs from the toxic sites to your URLs. This has the effect of giving your website a surplus of bad backlinks. What is scary about this is the attack can recycle those toxic backlinks and change targets over and over again.

Another attack is when the attacker targets a page that is missing a canonical tag by submitting a version of the URL that works but that Google has never seen before. This is done by adding things like a bogus URL parameter or anchor. Then they link build to the bogus URL until it outranks the original. The original falls out of the results as a lower-PR duplicate. Then they pull the backlinks to the bogus URL, and they have effectively taken a page out of the Google index until Google recalculates PR again. Just put a canonical tag on every page and you can protect yourself from this one.
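A defensive check for this attack can be sketched with the standard library: confirm every page declares a canonical URL, so bogus variants like `?fake=1` collapse back to the original. The page and URL below are made up for illustration:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Extracts the canonical URL declared in a page's <head>, if any."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

page = '<html><head><link rel="canonical" href="https://example.com/widgets"></head></html>'
finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)  # the URL all variants should collapse to
```

If `canonical` comes back `None` for any page in a crawl of your own site, that page is exposed to the bogus-URL attack described above.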

Another attack just requires a lot of domains and they don’t have to be toxic. The requirement is that they are indexed and visited by a LOT of bots. The attacker in many cases will point hundreds of these domains at a single website and use the collective of bot activity as a denial of service against the website.

I’m certain there are more kinds of attacks out there. It is limited to the creativity of the attackers, and the bad guys can be pretty creative. I am constantly afraid of what I might find next having run the gauntlet and paid the price already.

Your only hope is to accurately attribute SEO revenue and monitor it regularly. Conversions are good, but if you’re looking for a strong signal on the health of your SEO then revenue is better. Revenue implies good indexation, rankings, traffic, and conversions all in one very sensitive gauge. With conversions alone the needle doesn’t move as much, making it harder to see the signals.

Secondly… sit next to your engineers. The issues they encounter are directly relevant to the effectiveness of your SEO. The frantic firefighting of the network administrator is one of the best indicators. Log serious events and plot them with revenue and other KPIs.

Third… logs. The crawl error logs in Google Search Console and your web server logs tell you about the issues googlebot encounters and the attempts made on your server.

  • Lots of 500 errors might signal a GoogleBot interruption attack.
  • Lots of 404 errors might be toxic domain redirection.
  • Lots of URLs in errors or duplicate content that make no sense for your website might signal canonical misuse.
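Triaging those three patterns from your logs can be sketched like this. The log records are simplified to (status, referrer) pairs; real logs need real parsing:

```python
from collections import Counter

# Simplified records: (HTTP status, referring domain). "-" means no referrer.
records = [
    (500, "-"), (500, "-"), (500, "-"),
    (404, "toxic-domain.example"), (404, "toxic-domain.example"),
    (404, "toxic-domain.example"), (404, "blog.example"),
]

by_status = Counter(status for status, _ in records)
refs_404 = Counter(ref for status, ref in records if status == 404)

print(by_status[500])               # a spike may signal GoogleBot interruption
print(refs_404.most_common(1)[0])   # one domain sending many 404s suggests redirection
```

The absolute numbers mean little on their own; what matters is comparing each counter against your site’s normal baseline.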

Following each and every soul-crushing SEO revenue event I had to pore over the logs and the testimony of everyone involved to try and make sense of things. Not every SEO event was an attack. In many cases the events were caused by errors deployed on our websites. Or the marketing team installed a new problematic tracking pixel service. Several times the owners bought domains and pointed them at our sites, not knowing that the previous owners had made them permanently toxic. As an SEO, you need to detect and address these as well. Revenue, logs, and general awareness of daily events were critical to early detection.

When I went to the reader base of many popular SEO forums and blogs, I was ridiculed and called a liar for asking for help with a problem most SEOs had never seen or heard of before. All too commonly, the peanut gallery of SEO professionals criticized me for not having links and kept saying I had the burden of proof. These were supposedly members of the professional SEO community, but it was just a political flame war. The black hat community actually helped me research the kinds of attacks I was facing, explained how they worked, and suggested ideas for countering them. Google and the SEO community in general were very unsupportive. I’m going to remember that for a very long time.

For some reason we are a big target. It is probably because so many of our products compete with similar products that are often affiliate offerings. If you are an online retailer that does a lot of sales seasonally, you need to be on the lookout. The big threat is solved for my sites for now, but the vast majority of retail sites are unprotected, and many of them aren’t in a position to solve the issue the way we did.

Over the years I’d say we’ve been attacked hundreds of times but it wasn’t until 2014 that we became aware of it, and there were a lot of random events that helped that happen. There is “security by obscurity” for most websites. You have to be a worthy enough target to get this kind of attention.

Detection is paramount. You can’t mitigate problems if you are unaware of them. For false parameters specifically there are several options… you can use canonical tags on every page, which I highly recommend. You can also use URL rewriting to enforce very strict URL formatting. But if you aren’t looking at the logs and at your search result URLs closely, then you won’t even know about the issue.

Detailed revenue attribution is the big one. Seeing that the losses only come from Google is an important signal. For me, SEO revenue comes from dozens of sources: search engines like Google, Bing, Excite, AOL, and Yahoo; syndicated search like laptop and ISP start pages and meta search engines; safe search like AVG and McAfee; and finally my SEO experiments.

Having the revenue attribution lets me know the revenue loss only occurred on Google this time, so it can’t be consumer behavior like Spring Break; if consumers had just gone on holiday, the drop would have been across the board.
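That attribution check can be sketched as follows. The revenue numbers and the 25% threshold are made up for illustration:

```python
# Weekly SEO revenue by traffic source (hypothetical figures).
last_week = {"google": 10000, "bing": 1500, "syndicated": 800}
this_week = {"google": 4000, "bing": 1450, "syndicated": 820}

# Fractional drop per source.
drops = {
    source: (last_week[source] - this_week[source]) / last_week[source]
    for source in last_week
}

# A big drop isolated to one source points away from across-the-board causes
# like holidays, and toward something Google-specific: a penalty or an attack.
big_drops = [s for s, d in drops.items() if d > 0.25]
print(big_drops)  # ["google"]
```

If every source had dropped by a similar fraction, the holiday explanation would survive; a drop isolated to Google rules it out.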

Also keep an eye on your errors, search results, and logs. Also keep an eye on your network administrator’s “Frustration Meter”.

Here are a few specific things to check when looking for negative SEO attacks:

In Google Search Console:

  • Check GSC Messages for penalties.
  • Check GSC Messages for outages.
  • Check GSC Messages for crawl errors.
  • Check Server Errors Tab for Desktop and Mobile
  • Check Not Found Errors for Desktop and Mobile
  • If errors look suspicious then Download and archive them.
  • If errors look minor and are no longer reproducible then mark them as fixed so you only see new errors next time.
  • Check the Index Status page and make sure your total number of pages looks correct.
  • Check your content keywords and make sure nothing looks spammy or out of place there.
  • Check Who Links to your site the most under search traffic
  • Make sure your link count hasn’t abnormally grown since last check. Update your link count spreadsheet
  • Check your Manual Actions
  • In search analytics check countries and make sure you’re not suddenly popular in Russia
  • In search analytics check CTR and Position and make sure the chart looks ok… no drastic events
  • In Search Appearance investigate your duplicate title and meta description pages. Check the URLs to make sure they aren’t bogus
  • Check Security Issues

In your web server logs:

  • Check Server Logs: SQL injection
  • Check Server Logs: Vulnerability Testing
  • Check Server Logs: 500-503 Errors
  • Check Server Logs: Outrageous requests per second
  • Check Server Logs: Bad Bots
  • Check Server Logs: Large volume 404 errors from 1 referring domain

In the Google Search Results:

  • Check For Bizarre URLs
  • Check For Domains matching your content
  • Check For Unusual Sub-domains
  • Check For Odd URL parameters and URL anchors
  • Check For Negative mentions of domain or products

On your website and servers:

  • Check Ratings and Reviews
  • Check For Comment or Post Spam
  • Check Content Indexes For Over-abundance of old or bad topics
  • Check for profile spam
  • Actively run anti-virus on server
  • Routinely back up server
  • Periodically run vulnerability testing on your site to close security vulnerabilities
  • Patch your server regularly
  • Update your web platform and plugins regularly
  • Double check your WordPress Security plugin for any loose ends if applicable.
  • Periodically change your admin passwords and account names
  • Use strong passwords
  • Don’t share accounts or email credentials
  • Use a version control system and check the update status of deployed code for changes regularly.
  • Check your domain name expiration date

There is a lot to consider in this episode. Please download the FREEBIE which is my own personal 40 point negative SEO checklist. If you are concerned about negative SEO then this checklist will help you conduct periodic audits for suspicious activity. This free download will save you hours on checking your websites for attacks.

Episode 4 – Real Negative SEO

Please subscribe and come back for our next episode where we will be “Setting SEO Guidelines For A Single Page” and I will continue the lesson by teaching my most powerful methods and secrets to content tuning a single page that targets a single keyword.

Thanks again, see you next time and always remember the first rule of SEO Fight Club: Subscribe to SEO Fight Club!

Episode 3 – The Truth About Keyword Stuffing

Get it on iTunes

Hi and thank you for listening to SEO Fight Club. I’m Ted Kubaitis and I have 22 years of web development and SEO experience. I have patented web technologies and started online businesses. I am both an engineer and a marketer. My goal is to help you win your SEO fights.

Maybe you are new to SEO or maybe you have been doing SEO for as long as I have. This SEO primer will reboot your effectiveness and point you in a direction for achieving positive results that continue to build on new learning. Most SEOs fail in the “O” part of SEO. The optimization part of SEO demands iterative learning based on empirical measurements. In this and future episodes I’m going to show you how to fight the good SEO fight.

This episode’s FREEBIE

With every episode I love to give something away of high value. This episode’s SEO freebie is all the data I collected on DUI lawyers in 10 major US cities to present my case. It’s not just the keyword stuffing data but over 300 factors for every page 1 result for each search in the sample. Additionally, I include the aggregate analysis across the whole sample set. If you are an SEO data junkie like me then hang on. This data is GOLD!

You can download the freebie at:

First my disclaimer. I have discussed the ethics of talking about the truth of keyword stuffing at length with SEO Expert and Search Ethicist Josh Bachynski. I understand the reasons why he would never share such information with a client. The risks are simply too great. The temptation for beginners or desperate people might be too great to resist. I come from a different background and my opinion is that to be good at SEO you have to at least understand the forces that are working for and against you. I am not recommending you use keyword stuffing. I am explaining the controversy and my understanding of this particular search engine exploit. I believe that you can both know about bad behavior and still choose to do the right thing.

When you google “keyword stuffing” wikipedia is the #1 result. This is some of what wikipedia has to say about keyword stuffing:

“Keyword stuffing is a search engine optimization (SEO) technique, in which a web page is loaded with keywords in the meta tags or in content of a web page. Keyword stuffing may lead to a website being banned or penalized in search ranking on major search engines either temporarily or permanently.”

Wikipedia goes on to say “This method is outdated and adds no value to rankings today. In particular, Google no longer gives good rankings to pages employing this technique.”


I wonder if that’s true? Was it ever true?

Google’s webmaster guidelines defines keyword stuffing as “the practice of loading a webpage with keywords or numbers in an attempt to manipulate a site’s ranking in Google search results.”

Google goes on to say keyword stuffing “can harm your site’s ranking.“

Matt Cutts said in a video that there are diminishing returns, and at some point the more you stuff, the worse you rank.

If this is true then how is it that when I search for DUI lawyers in 10 different US cities the average page 1 result has more than 500 keyword matches in the HTML source and the maximum has 2550 matches in a single page!
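For clarity, here is roughly how I count matches. This is a sketch of the metric, not my actual measurement tool: case-insensitive substring counting over the raw HTML source, with a made-up page for illustration.

```python
# Hypothetical page fragment for a "DUI lawyer" search.
html = """
<title>DUI Lawyer Seattle</title>
<h1>Seattle DUI Lawyer</h1>
<p>Our DUI lawyer team defends DUI cases. Call a DUI lawyer today.</p>
"""

keyword = "dui lawyer"

# Count every occurrence of the keyword anywhere in the HTML source.
matches = html.lower().count(keyword)
print(matches)
```

Scale this crude counter up and the page-1 DUI results I measured average over 500 matches, with a maximum of 2550 on a single page.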

To understand this issue I think we need to look at the core of what google is doing. At a basic level Google is performing a full text search.

“In a full-text search, a search engine examines all of the words in every stored document as it tries to match search criteria.”

“In information science and information retrieval, relevance denotes how well a retrieved document or set of documents meets the information need of the user.”

“Nowadays, commercial web-page search engines combine hundreds of features to estimate relevance. The specific features and their mode of combination are kept secret to fight spammers and competitors. Nevertheless, the main types of features at use, as well as the methods for their combination, are publicly known and are the subject of scientific investigation.”

“Although web search engines do not disclose details about their textual relevance features, it is known that they use a wide variety of them, ranging from simple word counts to complex nonlinear functions of the match frequencies.”

That’s right, folks… at its most basic and fundamental core, search relevance is influenced by simple word counts and match frequencies. Obviously there is a lot more going on in the rankings, but at some basic level, when all else is equal, whoever says it more wins.

The best evidence that this is the case is that Google openly threatens to punish you if you exploit it. I’m not a great poker player, but even I can read that tell. But does this happen?

In fact Matt Cutts talks about the keyword stuffing manual action in a youtube video. Link in the show notes.

I have found a few short lists of sites that have been publicly penalized but literally none of them were penalized for keyword stuffing. I’ll put some links in the show notes. I searched for stories of people being penalized by google for keyword stuffing and I found the opposite… I found people trying experimentally to be penalized by google for keyword stuffing and failing to get any result. See links in the show notes.

Don’t get me wrong. I understand that the penalties for keyword stuffing might only be manual penalties but if that is the case then there is no way google can police this effectively which explains why in high competition niches where the cost per click can reach hundreds of dollars the keyword stuffing is running rampant.

In another video Matt Cutts says there is a point of diminishing returns and at that point having more matches starts to hurt you. But since the amount of keyword stuffing increases with the competition for the keyword and we are seeing today page 1 results having 2500+ matches per page we have to question if this is even true or possible. Ok the point of diminishing returns might be 3000 matches per page. Does that make sense to you?

Google could easily make a rule that if you have more than 500 matches per page then you are over-optimized but a rule like that would strip the first couple pages of results for every high traffic search term there is. Removing all the most relevant results would be a very bad user experience. So while they technically can auto-enforce a hard rule, it appears they are choosing to not do so.

Ok… so we know how it works, and we have an idea why Google isn’t automatically punishing it. How well does it work? Now we are at my favorite part of the episode. Let’s look at some measured data!

I used my software to measure over 300 SEO factors for every page 1 result for DUI lawyers in 10 different cities. The CPC for some of these searches is over $150 per click. These websites could not be more motivated to win free clicks.

The overall number of matches in the HTML source has NO CORRELATION with ranking position. It appears that the Google algorithm has accounted for keyword stuffing in a general kind of way. But but but but but… There is strong correlation for keyword stuffing in certain parts of the page.

Strong Correlation:
“Number of exact matches in web page H4 to H6 tags”
“Number of exact matches in web page H4 tags”
“Has leading matches in web page H2 tags”
“Number of leading matches in web page H1 to H3 tags”
“Number of exact matches in web page li tags”
“Number of matches in web page label tags”

Weak Correlation:
“Number of exact matches in web page option tags”
“Number of matches in Google result URL path”
“Has leading matches in web page H3 tags”
“Has leading matches in web page H1 to H6 tags”
“Number of matches in web page H1-H2 tags”

So it would seem that keyword stuffing in headings, lists, labels, form options, and URLs does correlate with rankings. So the secret to the keyword stuffing exploit is knowing where and how much to stuff.
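My measurement software isn’t public, but the core idea of counting matches inside specific tags is easy to sketch. Here is a minimal version in Python using only the standard library’s `html.parser`; the keyword and the tag list are just examples, not the exact factors my tool measures:

```python
from html.parser import HTMLParser

class TagMatchCounter(HTMLParser):
    """Count case-insensitive keyword matches inside specific HTML tags."""
    def __init__(self, keyword, target_tags):
        super().__init__()
        self.keyword = keyword.lower()
        self.target_tags = set(target_tags)
        self.stack = []  # tags we are currently inside, outermost first
        self.counts = {t: 0 for t in self.target_tags}

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # pop back to and including the matching open tag
            while self.stack and self.stack.pop() != tag:
                pass

    def handle_data(self, data):
        for tag in self.stack:
            if tag in self.target_tags:
                # substring count, like a raw source search; credited once
                # per text node even when nested inside several target tags
                self.counts[tag] += data.lower().count(self.keyword)
                break

html = "<h4>dui lawyer reviews</h4><ul><li>Best DUI lawyer</li><li>cheap attorney</li></ul>"
counter = TagMatchCounter("dui lawyer", ["h4", "h5", "h6", "li", "label"])
counter.feed(html)
print(counter.counts["h4"], counter.counts["li"])  # → 1 1
```

Run this over each page 1 result and you have one column of the correlation table above.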

The problem with stating how much is that the degree of keyword stuffing is relative. In high-competition niches, achieving competitive parity is going to be quite difficult and ranges from hundreds to potentially thousands of matches per page. In low-competition niches we might hit competitive parity with a dozen matches. The only way to know is to pull the competitive numbers for our niche.

The other thing we need to do is really investigate the outliers. The page that had 2550 matches had a long thread of comments on the topic, which is very understandable for a hot topic. Putting 2550 matches on our product page would scare away our customers. We have to understand the nature of the stuffing, which is probably why Google has to police this exploit manually. This is also why I could see there being a point where more stuffing stops helping, but I doubt there is a point where more stuffing starts automatically hurting our rankings. There would likely be too much collateral damage.

Until the data tells a different story, that’s how I see it. Don’t keyword stuff. Do understand how the technology works and sculpt your content to communicate effectively to both your audience of humans and to search engines. There is a difference, and it is often found in the intentions behind making the page. Have good intentions and apply integrity to everything you do.

There is a lot to consider in this episode. Please download the FREEBIE, which is all the data I collected on DUI lawyers to present my case. It’s not just the keyword stuffing data but over 300 factors for every page 1 result for each search in the sample, plus the aggregate analysis across the whole sample set. If you are an SEO data junkie like me then hang on. This data is GOLD!

Please subscribe and come back for our next episode where I will tell my story of “Real Negative SEO”. I will warn you now that it is not for the faint of heart. It was a brutal gauntlet for me that took hundreds of thousands of dollars and almost a year to solve.

Thanks again, see you next time and always remember the first rule of SEO Fight Club: Subscribe to SEO Fight Club

Episode 2 – Debunking Content Placement in SEO

Season 1, Episode 2, SEO Fight Club

Get it on iTunes

Hi, and thank you for listening to SEO Fight Club. I’m Ted Kubaitis, and I have 22 years of web development and SEO experience. I have patented web technologies and started online businesses. I am both an engineer and a marketer. My goal is to help you win your SEO fights.

Maybe you are new to SEO or maybe you have been doing SEO for as long as I have. This SEO primer will reboot your effectiveness and point you in a direction for achieving positive results that continue to build on new learning. Most SEOs fail at the “O” part of SEO. The optimization part of SEO demands iterative learning based on empirical measurements. In this and future episodes I’m going to show you how to fight the good SEO fight.

Columnist Chris Liversidge at SearchEngineLand posted an article on August 25, 2015 called Mega Menus & SEO.

In this article Chris aims to answer the question “What are the best practices surrounding mega menu navigation elements when it comes to SEO?”. Under the heading of “Best Practices In Mega Menu Implementation” he lists “Code order placement” and later states “my first piece of advice for mega menus is this: Try to include it in the HTML after the main body content.”

If I took Chris’s advice then I would go to my company’s engineering team and submit a work order to re-architect the pages across 20 different websites. The costs would be rather large. I’d have to get management and testers involved. The cost of even redeploying 20 large-scale websites is high too. With all the people and time involved, I can only make 2-3 of these kinds of requests each year, so I need to be sure that making this change is going to help my SEO results. If I make the company spend all this time to do all this work and the result is “no change” then I lose credibility, people will likely be mad at me for wasting their time, and there is an opportunity cost because I could have suggested other work that does matter. I need to test this idea that the mega menu “should” be below the main content.

This episode’s FREEBIE:

With every episode I love to give something away of high value. This episode’s SEO freebie is the data I collected to prove my case. This data comes from software I created. The software makes empirical measurements of on-page SEO factors. It also calculates the statistical correlation of each factor’s measurements versus the result’s ranking position in the search results. This gives me strong clues as to which factors are plausibly influencing my rankings and which are not. The software measures over 300 different factors, and today I am giving you the data for all the keywords I researched for this topic. So if you are an SEO data junkie like me then this freebie is a gold mine!

First, let’s spot-check a high-competition keyword and see what the winning sites are doing. Maybe we’ll get lucky and find some clues in the data.

I guess we should start by explaining how to see if the mega menu is above the main content. I use Chrome as my primary web browser. To see if the mega menu is above the main content, I right-click the mega menu and select “Inspect Element”. The mega menu is nearly always in a wrapping DIV tag with an ID like menu, nav, navigation, or even megamenu. When you hover over the div tags in the inspected source, Chrome highlights the contents of the div in the browser’s HTML view. By simply hovering over the divs below the menu’s wrapper div you can quickly see if the mega menu is above or below the main content.
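If you would rather script this check than hover divs by hand, here is a rough Python sketch. It assumes the wrapper divs use the common IDs mentioned above, which will not hold on every site, so treat it as a first pass, not a definitive classifier:

```python
import re

def menu_before_content(html, menu_ids=("menu", "nav", "navigation", "megamenu"),
                        content_ids=("content", "main")):
    """Return True if a menu wrapper occurs earlier in the source than the
    main content wrapper, False if later, None if either wasn't found."""
    def first_offset(ids):
        positions = [m.start() for i in ids
                     for m in re.finditer(r'id=["\']%s["\']' % re.escape(i), html, re.I)]
        return min(positions) if positions else None

    menu_pos, content_pos = first_offset(menu_ids), first_offset(content_ids)
    if menu_pos is None or content_pos is None:
        return None
    return menu_pos < content_pos

page = '<div id="nav">...</div><div id="main">...</div>'
print(menu_before_content(page))  # → True
```

Fetch each page 1 result and run this to tally the menu-above vs. menu-below split automatically.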

Search Term: toys

Odds are pretty good that most of these page one companies are pretty invested in professional SEO services. Given they are top ranking for a highly coveted search term I think it is safe to say they are successful at it too.

#1 Toys R Us mega menu above the MC
#2 mega menu above the MC
#3 mega menu above the MC
#4 mega menu above the MC
#5 mega menu above the MC
#6 mega menu above the MC
#7 no mega menu
#8 mega menu above the MC
#9 no mega menu
#10 IMDB page for movie “Toys” mega menu above the MC

So 80% of the page 1 results had a mega menu above the main content.
20% had no mega menu at all.
0% had a mega menu below the main content.

Not a great start for Chris’s hypothesis, but these initial results are actually inconclusive. Just because nobody is doing it doesn’t mean there isn’t advantage to placing the mega menu below the main content.

Clearly having your mega menu above the main content is a very common standard, and it looks like Google is accepting of the practice. So at a glance, having your mega menu at the top doesn’t appear to hurt you. This means our investigation can shift to examining a more basic premise behind the hypothesis.

Does having your keywords near the top help you rank better? The idea is that having your mega menu push your keyword content further down in the source code hurts your rankings. I think we need to measure this factor and see if it correlates at all with rank position in the search results. If it does, then Chris’s hypothesis is plausible, and if it doesn’t, then we can all send Chris our grumpy faces for gambling with our time and resources on his advice.

My software measures a factor called “Number of Bytes To The First Match In The HTML Source”. If Chris’s hypothesis is true then I would expect to see a correlation when comparing this measurement to ranking position. The result? No correlation. The values do not trend with rank position; they look like they are in a random order. Here they are, rounded to the nearest 100 bytes, going from #1 on Google to #10:
2700, 2000, 8000, 200, 20000, 300, 2100, 10000, 200
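The measurement itself is trivial to reproduce. A minimal Python sketch follows; it assumes a UTF-8 source, and lowercasing before encoding is a simplification that can shift byte offsets on non-ASCII pages:

```python
def bytes_to_first_match(html, keyword):
    """Byte offset of the first case-insensitive keyword match in the raw
    HTML source, or None if the keyword never appears."""
    idx = html.lower().encode("utf-8").find(keyword.lower().encode("utf-8"))
    return idx if idx >= 0 else None

page = "<html><head><title>Toys</title>"
print(bytes_to_first_match(page, "toys"))  # → 19
```

Collect this number for each ranked result, pair it with the rank position, and you have the data series being tested here.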

I know… I have only tested 1 keyword. What if a pattern emerges after 10 keywords? Let’s try that.

Search Terms: toys,toys for boys,toys for girls,toy sale,online toy store,discount toys,cheap toys,buy toys,buy toys online,best toys

The result? No correlation again. Let me share the values, but this time they are the average number of bytes to the first match for each rank position across all the search terms in our sample. So it is the average bytes to the first match for all the number 1 results, the average for all the number 2 results, and so on.

Again, going from #1 in Google to #10, the averages across the whole sample set:
700, 1MB, 7000, 1000, 7000, 700, 1MB, 3000, 2000, 2400

Pretty random, huh? Random means no correlation with rank position. Let me geek out on the math a little here. I promise to keep it short.

A Correlation Coefficient describes how well your data points fit to a trend line. Something that is a significant factor should trend as you approach number 1 on google. As you grow your sample size the strength of that ranking signal should improve.

There are two correlation coefficients that are frequently used: Spearman’s, which is typically used for things that have a curved trend line, and Pearson’s, which is good for things that fit a straight trend line. My software calculates both.

When we looked at toys all by itself the Spearman’s Correlation was -0.14 and the Pearson’s Correlation was 0.02. When the values are close to zero it means your measurements appear to be random with respect to rank position. The strongest correlation would have a value close to 1 or -1. So when I say the values look random and there is no correlation… that is not my opinion. That is a mathematical fact. And the statistics people listening will want to know my critical values and significance. For this study Spearman’s needs a critical value of 0.65 and Pearson’s needs 0.63 to be 95% certain that the distribution is non-random. Our correlation values are nowhere near that and that is the math telling us that these measures appear to be random.
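For anyone who wants to check the math themselves, both coefficients can be computed from scratch in a few lines of Python. As sample data I’m reusing the first five single-keyword byte measurements from above, purely for illustration:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ranks(values):
    """Rank values 1..n, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman correlation: Pearson computed on the ranked data."""
    return pearson(ranks(xs), ranks(ys))

positions = [1, 2, 3, 4, 5]                  # rank position in the SERP
measure = [2700, 2000, 8000, 200, 20000]     # bytes to first match
print(round(pearson(positions, measure), 2), round(spearman(positions, measure), 2))
```

A value near zero on either coefficient means the measurements look random with respect to rank position, exactly as described above.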

When we switched to a ten search sample the Spearman’s correlation became -0.02 and the Pearson’s correlation became -0.17. They stayed near zero even after adding more searches to the sample. That means they look random across our sample set too.

This is pretty compelling data that the number of bytes to the first keyword match is not a ranking factor for google at this point in time. It may have been last month and it might be next month, but right now it is not. And that is the big problem with relying on your personal experiences too much. To know the truth or fact of the matter you have to constantly challenge your own beliefs.

But we are not done yet. It could be that Google discounts certain things like javascript, inline CSS, and other HTML gobble-de-gook. All those things could make our bytes-to-the-first-match appear random, and boy do they appear random. So until we rule that out, our study is still inconclusive.

My software measures another factor. The number of matches in the first 100 words after stripping all the HTML out of the source.
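That second factor is also easy to sketch in stdlib-only Python. The tag stripping here is cruder than what a real HTML parser would do, so take it as an approximation:

```python
import re

def matches_in_first_100_words(html, keyword):
    """Strip script/style blocks and tags, then count case-insensitive
    keyword matches within the first 100 remaining words."""
    # drop script and style blocks entirely, then any remaining tags
    text = re.sub(r"(?is)<(script|style)\b.*?</\1>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", text)
    words = text.split()[:100]
    return " ".join(words).lower().count(keyword.lower())

page = "<script>var t='toys';</script><h1>Toys</h1><p>Discount toys and toy sales.</p>"
print(matches_in_first_100_words(page, "toys"))  # → 2
```

Note the `toys` inside the script block is not counted, which is the whole point of this second measurement.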

So for toys all by itself the Spearman’s Correlation was -0.33 and the Pearson’s correlation was -0.32. Certainly less random, but still too random to be significant. No Correlation.

Across the whole sample set the Spearman’s Correlation was 0.17 and the Pearson’s Correlation was 0.31. Still too random. The more positive these values get, the more it implies that having more of this factor hurts your rankings. But it is still too random, even when we expanded the sample size. No correlation.

So to sum this all up:

First, we saw that the websites ranking at the top are doing the opposite of Chris’s advice.

Second, we measured that there is no correlation between rankings and how close to the top of the source your keywords are.

Third, we measured that there is no correlation even when you remove HTML and scripting from the equation.

“Code Order Placement” is not a factor. I should not tell my engineers to move the mega menu below the main content because all the data I have seen says that this will have no impact on my rankings. I would have wasted weeks of development time implementing a change that would have done nothing for me.

I don’t mean to bully Chris. We have all been in his shoes at one point or another. I felt it was important to do this episode to demonstrate how dangerous it can be to blindly follow SEO advice, even expert SEO advice. I think Chris would agree that in the presence of more or better information even he would make different choices, but we don’t always have that luxury. Almost nobody has a software tool like mine, but SEO as an industry needs to evolve and become more data-driven and scientific. That won’t happen overnight, but I think we all want to see the best practices and toolsets reach a new, higher standard, and I think we can get there together as a community.

Please download the FREEBIE which is all of the data containing over 300 factor measurements for all the search terms we talked about in this episode.

Please subscribe and come back for our next episode, “The Truth About Keyword Stuffing”, where I will continue to kick your butt with data.

Thanks again, see you next time and always remember the first rule of SEO Fight Club: Subscribe to SEO Fight Club

Episode 1 – Where to begin with SEO

Season 1, Episode 1 of SEO Fight Club with Ted Kubaitis.

Get it on iTunes

The episode begins with the hair pulling and fight starting topic of defining what SEO is and what it should fundamentally encompass. Get ready to get your SEO fight on with this first episode in a new SEO podcast series.

Podcast Transcription:

Hi, and thank you for listening to SEO Fight Club. I’m Ted Kubaitis, and I have 22 years of web development and SEO experience. I have patented web technologies and started online businesses. I am both an engineer and a marketer. My goal is to help you win your SEO fights.

Maybe you are new to SEO or maybe you have been doing SEO for as long as I have. This SEO primer will reboot your effectiveness and point you in a direction for achieving positive results that continue to build on new learning. Most SEOs fail at the “O” part of SEO. The optimization part of SEO demands iterative learning based on measurements. In this and future episodes I’m going to show you how to fight the good SEO fight.

This episode’s FREEBIE:

With every episode I love to give something away of high value. This episode’s SEO freebie is my own personal excel template I use for tracking monthly SEO revenue. In this episode I will walk through why collecting this data is vital to SEO success. This free download will save you hours on setting up and archiving the data. The download also shows great examples and demonstrates why the advice in this episode is so important.

You can download the freebie at: 

SEO Freebie season 1 episode 1
SEO Freebie, season 1 episode 1, SEO Revenue Reporting Template

What is SEO?

Stay with me here. I know you just groaned at the question but this definition is so fundamental to how each of us conducts SEO that we can’t possibly compare one approach to another without considering how our definitions differ.

My definition is radical compared to most mainstream SEOs, but I want you to really consider it from all sides, especially from the viewpoint of establishing good business principles and doing “the right thing” for your business. The right thing for your business doesn’t always agree with what the right thing is for Google or what the right thing is in your own best interest. We have a word for when you can isolate those areas, talk about them openly, and act on them independently. It’s called integrity, and it’s what will make a business invest in you for the long haul through all the ups and downs.

Here is my definition of SEO: Search engine optimization is the optimization of revenue from search engine sources.

I define optimization as iterative cycles of tuning and learning from the last period’s data to influence next period’s outcomes.

I know. I really went out on a limb with that, but you will be amazed at how many career SEOs out there are absolutely against revenue reporting in SEO. Some of them are so-called experts who openly advise people not to do it.

Let me explain my position and hopefully I can convince some SEO revenue haters to see the advantages revenue attribution has.

Firstly, SEO revenue measures the health of your crawlability, indexing, keyword choices, content tuning, backlinks, rankings, relevance, and conversions all from one very sensitive gauge. No other measurement in SEO tells you so much in a single number. I know many SEOs will argue conversion counts are just as good. Conversions are great and they signal the same things, but with revenue the needle moves more and the signals you are looking for are easier to read.

Secondly, every large corporation has SEO or organic revenue as a line item on their gross sales reporting. Like it or not, SEO is a marketing channel that produces measurable revenue for large businesses, and smaller businesses hope to attain that. What is the business’s goal in optimizing SEO? To make that revenue go up! That’s what is best for the business. For SEO to be viable from a business perspective, its revenue needs to be meaningful. Otherwise, why should the business care about SEO? If it doesn’t yield a better business outcome then it is a waste of time and money, and the time and money should be spent on some other channel like social marketing or pay per click. Spending time and money for no benefit is very bad in business.

Thirdly, every bad SEO and a lot of the good ones refuse revenue accountability. They just won’t do it. By wanting the business to achieve a good financial outcome you are demonstrating a willingness to do what is in the best interest for the business even if it comes to the detriment of your own personal best interests. That’s integrity. A great SEO always puts the needs of the business first. To be a great SEO these days it takes integrity and it takes a lot of it.

I have told small businesses before that it will cost them too much to succeed in SEO in their niche. I’ve turned down their business because the margins were too thin and there was much easier revenue to be had in Pay Per Click or other channels.

Many people believe SEO is the “easy money” or “low hanging fruit”. This is simply not true. SEO is the bright shiny apple at the very top of the tree and it takes a very special and rare talent to collect those apples. Most SEOs use tactics that amount to getting a bucket and standing under their tree waiting for the apples to fall. When you use my definitions and my methods I am not only showing you how to reach out to grab the apples, but how to do it with integrity. I want you to focus on the best interests of the company. If you do this right then even the cases where you and a company part ways it still results in the business endorsing you and praising your integrity and wanting to work with you in the future if the situation changes.

Many SEOs don’t want to report on SEO revenue because it will point out that the revenue doesn’t justify the cost of SEO. All businesses start here. You need to get used to this. This is where personal interests conflict with what is best for the business. Hiding it will only last so long. Unless your sole purpose is to collect lots of retainers, you will be far better off discussing the value of direction setting so that one day your revenue can justify your cost.

Let me geek out for a moment here. SEO is a vector. That means it has both magnitude and direction. As far as tomorrow is concerned, direction always matters more. A smart business understands the need to invest in the future. By tracking revenue despite your personal interests you are demonstrating integrity and implementing a process that can actually achieve the business’s goal of growing revenue. If you don’t measure the revenue then you won’t know what is or isn’t working. You won’t know if today was any better than yesterday for the business. Get out of that situation! It will ultimately hit you where it hurts most: your integrity. SEO has ups and downs, and the downs can last months. Integrity can last a lifetime, and it is your best chance to weather the downs in SEO.

Please please please make direction setting, investing in the future, accountability and integrity your hallmarks. Make them the cornerstone of everything you do.

If you are new to SEO you may be asking what are the arguments against my definition?

I have gotten countless rants on why I am wrong. I read and consider them all. So far all of them have failed to convince me and I’m going to explain why.

“SEO Set Up” versus “Actually Conducting SEO”

Most of the complaints against my definition are because my definition doesn’t include the vast amounts of work that go into setting up a website and configuring it to appear in the search results. Yes this work is critical, but I view all that work as the “SEO set up”.

You can’t actually conduct the “iterative optimization” work until that set up work has been properly done. Some SEOs are good at the set up work and others are good at the iterative optimization. The set up work tends to be a lot more technical and involved with server configuration and web development. The iterative optimization tends to be more about applying marketing and business principles, tracking progress towards goals, crafting content and executing marketing plans.

It makes total sense that the technical SEO setup people would want to define SEO a different way that highlights their talents in a way that isn’t dependent on the revenue outcome, which they may not understand or have control over.

It makes total sense that the marketing optimization people would want to define SEO a different way that highlights the optimization process and progress towards goal.

Both sides aren’t wrong, but the way a large successful company sees it, the “SEO setup” is part of engineering. It is part of the proper web development and system configuration that is a cost of doing business anyway. So from the business’s perspective, SEO is and should be defined by the “O” in SEO. It’s the marketing optimization of the SEO channel that will make the channel succeed or fail in the eyes of the business. It is about the content development, traffic and revenue growth, understanding seasonality and year-over-year growth, and progress towards goals. It’s about defining and adjusting those goals as our evolving understanding of the landscape changes. It’s about using that understanding to make a plan to invest in a better tomorrow. If all you do is the SEO setup and nothing else, then you have abandoned the whole marketing channel and its revenue, which is the sole reason a business would invest in SEO.

I’m going to guess that right now you feel like you are on one side or the other. Might I suggest the crazy thought that you can be and should be on both sides at the same time. I am fortunate that I am both an engineer and a marketer. You should learn both sides at least to a point of basic understanding. Both sides need to exist, but recognize that the ultimate goal for the business is improving SEO revenue.

I know there are some SEOs saying “Ah ha!” right now. They think they have caught the flaw in my argument. They are saying things like “Conversion optimization is different than SEO. Don’t mix the two.” This is a ridiculous argument. If you went to the doctor for your checkup and the doctor just skipped taking your blood pressure and pulse, wouldn’t you ask about it? What if the doctor said “I don’t do hearts. Cardiologists do hearts”? Just because there are specialties in a discipline doesn’t mean you get to skip those things completely as a generalist. Just because there are conversion specialists doesn’t mean you get to write off that part of SEO. Just like the doctor who doesn’t do hearts: if you don’t see SEO all the way through to revenue then you are only doing the setup part of the job. You are just the webmaster taking care of the website configuration. You’re not an SEO if you aren’t doing the whole job of achieving the business’s goal in investing in SEO.

I’m speaking in general here. I know there are other cases like Dell Computers who use SEO to reduce costs for their customer service call centers. When their SEO does poorly they pay more in call center resources and support staffing. But cost reductions are practically the same thing to a business as revenue growth. The business likes them both for the same reasons. The principles are still the same. We are still putting what the business needs first and foremost.

Hopefully I have planted the seed of integrity in your brain and, if you aren’t already doing so, convinced you to seriously consider attributing and reporting on SEO revenue. I know in many cases it can be very difficult, like when you sell a product or service where a large number of your customers ultimately convert on the phone or walk into a brick and mortar store. Revenue attribution can be hard in many cases. Your goal should be to establish proper SEO revenue reporting. By achieving that goal you have also completed all of the necessary SEO setup. It is only when all that work is properly completed that the “O” in SEO can really begin. The good news is that my definition makes it easy to justify and explain the real and often significant time and investment required to do SEO properly, and it sets the expectations and milestones for accountability. By showing the true path to the business’s goal you are setting yourself up for success and building integrity. Best of all, you are building the foundation for an iterative approach to improvement and reproducible results.

Lets talk about how to do revenue attribution:

My sites use custom logging and Google Analytics. This is great because when the two don’t agree I can see what is missing in either reporting system, track it down, and fix whatever is wrong. It is very uncomfortable when you don’t have a way to test whether you are capturing all your data correctly. Usually a website will have a system of record, like its payment gateway or a database, that you can compare to your Google Analytics. When you set up your revenue attribution keep this in mind: How do you know you are collecting ALL of the data? How much are you missing? It is OK if your analytics is a little lossy. All pixel tracking systems are. Just be in a position to calculate how lossy the data is when compared with the actuals from something like your payment processor.
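Quantifying that lossiness is simple once you have both numbers. A sketch with made-up figures (the dollar amounts are purely illustrative):

```python
def tracking_loss_rate(analytics_revenue, gateway_revenue):
    """Percent of actual (system-of-record) revenue missing from analytics."""
    return 100.0 * (gateway_revenue - analytics_revenue) / gateway_revenue

# e.g. analytics reports $94,300 for a month where the payment
# gateway (the system of record) settled $100,000
print(round(tracking_loss_rate(94300, 100000), 1))  # → 5.7
```

Track this rate monthly; a sudden jump usually means a tracking pixel broke somewhere, not that SEO revenue actually fell.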

Google Analytics has an integration point for backfilling the missing data. There are APIs and apps that can do this, like:

Analytics Importer

Google has documentation on setting up commerce tracking for your website:

This is all part of that SEO setup that is a critical prerequisite to conducting the optimization. SEO doesn’t really begin until all this work is complete and you can actually measure the lay of the land. By setting up the revenue tracking you will know if your SEO methods are helping or hurting the business. You have to measure to know you are making good choices. With detailed revenue reporting you will be able to see exactly where your problems and opportunities are.

Make sure you are logging your order IDs (transaction IDs in GA), revenue, shipping costs, and item counts. You ultimately want to know things like whether Bing converts better and with a larger average cart. If you can report costs or landed costs then you will have an ideal scenario for reporting on costs and profits. But at a minimum you want to achieve gross revenue reporting.

I know I am understating how much work is involved here. Many businesses will hire a consulting company just to set up their analytics. If you can afford to do this then I highly recommend it. I have set up analytics myself before and it’s never easy. Expect it to take a few months before it is all properly dialed in. If it is your first time then expect longer, because there are a number of learning curves along the way. Your time and budget constraints should steer you toward the right path for your business.

Once your ecommerce reporting is set up you will want to make a couple custom reports in Google Analytics.


To create these custom reports you need to use some basic regular expressions. Regular expressions are a way of doing wildcard matches on text.

Remember to come back and read this tutorial on regular expressions.

In a nutshell regular expressions are a way to templatize patterns in text.

Regular Expressions Cheat Sheet

^ = start of the field
$ = end of the field
| = OR
() = match this group
.* = match any number of characters
.*? = match as few characters as possible (lazy), until the rest of the pattern matches

This will make more sense later… just move on for now.

The first custom report is your SEO Revenue Report. For the selected website and specified date range this report logs order count, item count, shipping totals, and revenue totals by source and medium.

This report lets you see and export your data by site.


SEO Revenue report config

SEO Revenue Report Filters

Include Source / Medium
^(google / organic)$|^(bing / organic)$|^(yahoo / organic)$

These are the source mediums that are recognized as SEO sources

The medium is like “organic”, “referral”, “pay per click”. The source is usually the referring host name or tracking code. The default reporting in Google Analytics isn’t exactly what you’ll need. You can build exactly what you need in the customization tab.
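Before pasting a filter into GA, it’s worth sanity-checking the pattern. Here is a quick Python smoke test using the include filter above; note that GA’s filter regex engine behaves similarly to, but not identically to, Python’s `re`, so this is only a rough check:

```python
import re

# The include filter from the SEO Revenue Report, built from the
# cheat-sheet pieces: ^...$ anchors the whole field, () groups each
# source / medium pair, and | means OR.
seo_filter = re.compile(
    r"^(google / organic)$|^(bing / organic)$|^(yahoo / organic)$"
)

for source_medium in ["google / organic", "google / cpc", "bing / organic"]:
    print(source_medium, bool(seo_filter.match(source_medium)))
```

Anything that prints True would be included by the report filter; paid traffic like “google / cpc” is correctly left out.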

If you see text saying that your report is based on a percentage of sessions, then sampling may be coming into play. Usually waiting a few days after the date range you are trying to look at will resolve the sampling issue. Sometimes Google needs extra time to finish processing all the data. If the option is available you can also export an unsampled report. This option might only be available to Analytics Premium customers.


The second report you need is the SEO Finder Report. This report makes it easier to find sources of SEO revenue.



SEO Finder Report Filters

Exclude Source / Medium
^(google / organic)$|^(bing / organic)$|^(yahoo / organic)$

These are the source mediums that are SEO sources. We exclude them so we don’t keep finding them.

Include Medium

These are the mediums we want to search for SEO sources within.

Exclude Source

These are just sources that aren’t SEO that we don’t want to keep searching through

### END Report DETAILS ###

The first custom report is your SEO Revenue Report. For the selected website and specified date range this report logs order count, item count, shipping totals, and revenue totals by source and medium. This report lets you see and export your SEO revenue data by website.

The second report you need is the SEO Finder Report. This report makes it easier to find sources of SEO revenue. In a nutshell the SEO Finder Report filters out all the known SEO, email, and social sources from the organic and referral mediums so you have a short list of sources to watch for new SEO sources.

A side benefit of setting this up for SEO means it will be easy to set up the same reports for email, social, and paid media. They are all the same report but filter for different sources and mediums.

The blog post in the show notes has screenshots and instructions for creating these reports in Google Analytics (this assumes you have already set up ecommerce tracking in GA).

When you find new SEO sources you add them to the SEO Revenue Report and filter them from the SEO Finder Report so you don’t keep finding them.

Some companies may want to build an Email revenue report at the same time because the SEO Finder report usually finds untracked email revenue as well.

Then at the end of each month you create a view of the SEO revenue for each site by source and medium. You total the rows to see SEO revenue by source and medium across all sites and you total the columns to see SEO revenue by site across all sources. It is a total view of the health of your entire SEO landscape. It is literally the best SEO resource you will ever have.
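The row and column totals described above amount to a simple pivot. Here is a minimal sketch of the month-end roll-up in Python, using made-up sites and revenue figures purely for illustration:

```python
# Hypothetical month-end data: revenue keyed by (site, source/medium)
revenue = {
    ("site-a.com", "google / organic"): 1200.0,
    ("site-a.com", "bing / organic"):    150.0,
    ("site-b.com", "google / organic"):  800.0,
}

by_source = {}  # column totals: revenue per source/medium across all sites
by_site = {}    # row totals: revenue per site across all sources

for (site, source), amount in revenue.items():
    by_source[source] = by_source.get(source, 0.0) + amount
    by_site[site] = by_site.get(site, 0.0) + amount

assert by_source["google / organic"] == 2000.0
assert by_site["site-a.com"] == 1350.0
assert sum(by_site.values()) == sum(by_source.values())  # grand totals must agree
```

The grand-total check at the end is the same sanity check you should do in the spreadsheet: the sum of the row totals and the sum of the column totals are the same number, your total SEO revenue for the month.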


There is a lot to consider in this episode. Please download the FREEBIE which is the excel template I use for tracking SEO revenue on my sites.

Download FREEBIE at:

Please subscribe and come back for our next episode where we will be "Debunking Content Placement in SEO" and I will continue the lesson by debunking an SEO myth about placing keywords near the top of your source code and demonstrating how dangerous even expert SEO advice can be when it isn't backed by hard data.

Thanks again, see you next time and always remember the first rule of SEO Fight Club: Subscribe to SEO Fight Club