Google Algorithm Problems
By Rodney Ringler (c) 2006.
Posted on this blog by Afzal Khan
Have you noticed anything different with Google lately? The
Webmaster community certainly has, and if recent talk on several
search engine optimization (SEO) forums is an indicator,
Webmasters are very frustrated. For approximately two years
Google has introduced a series of algorithm and filter changes
that have led to unpredictable search engine results, and many
clean (non-spam) websites have been dropped from the rankings.
Google updates used to be monthly, and then quarterly. Now with
so many servers, there seem to be several different sets of search
engine results rolling through the servers at any time during a
quarter. Part of this is the recent Big Daddy update, which is as much a
Google infrastructure update as an algorithm update. We
believe Big Daddy uses a 64-bit architecture. Pages seem to
go from a first page ranking to a spot on the 100th page, or
worse yet to the Supplemental index. Google algorithm changes
started in November 2003 with the Florida update, which now
ranks as a legendary event in the Webmaster community. Then came
updates named Austin, Brandy, Bourbon, and Jagger. Now we are
dealing with Big Daddy!
The algorithm problems seem to fall into four categories:
canonical issues, duplicate content issues, the Sandbox, and
supplemental page issues.
1. Canonical Issues: These occur when a search engine
treats www.yourdomain.com, yourdomain.com, and yourdomain.com/index.html
all as different websites. When Google does this, it then flags
the different copies as duplicate content and penalizes them.
Also, if the version left unpenalized is http://yourdomain.com, but
other websites link to you using www.yourdomain.com,
then the version remaining in the index gets none of that link
credit and will not rank. These
are basic issues that other major search engines, such as Yahoo
and MSN, have no problem dealing with. Google is possibly the
greatest search engine in the world (ranking themselves as a 10
on a scale of 1 to 10). They provide tremendous results for a
wide range of topics, and yet they cannot get some basic indexing
issues resolved.
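To make the canonical problem concrete, here is a small, purely
illustrative Python sketch of the kind of URL normalization a crawler
(or a Webmaster auditing incoming links) could apply so that the three
variants above resolve to a single address. The preference for the www
host and the list of index file names are assumptions made for this
example, not anything Google has published.

    from urllib.parse import urlsplit

    # The "prefer www" rule and these index file names are assumptions
    # for the illustration, not Google's published behavior.
    INDEX_FILES = {"index.html", "index.htm", "default.htm"}

    def canonicalize(url):
        parts = urlsplit(url.lower())
        host = parts.netloc
        if not host.startswith("www."):
            host = "www." + host            # collapse the non-www variant
        path = parts.path or "/"
        last = path.rsplit("/", 1)[-1]
        if last in INDEX_FILES:
            path = path[:-len(last)]        # strip the index file name
        return "http://" + host + path

    # All three variants from the paragraph above map to one canonical URL.
    for u in ("http://www.yourdomain.com",
              "http://yourdomain.com",
              "http://yourdomain.com/index.html"):
        print(canonicalize(u))              # -> http://www.yourdomain.com/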
2. The Sandbox: This has become one of the legends of
the search engine world. It appears that websites, or links to them,
are "sandboxed" for a period before they are given full rank in the
index, kind of like a maturing time. Some even think it is only
applied to a set of competitive keywords, because they were the
ones being manipulated the most. The Sandbox's existence is
debated, and Google has never officially confirmed it. The
hypothesis behind the Sandbox is that Google knows a legitimate
100,000-page website cannot be created overnight, so they have
implemented a type of time penalty for new links and sites
before they are fully counted in the index.
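One way to picture that hypothesis, and nothing more than that, is a
simple age-dampening multiplier on link value. The Python sketch below
uses an arbitrary nine-month ramp; Google has never confirmed any such
formula, so treat every number here as a made-up illustration.

    # Toy model of the Sandbox hypothesis: a link's value is scaled by
    # the age of the link (or site). The 270-day ramp is an arbitrary
    # example value, not a known Google setting.
    def age_dampening(age_in_days, maturity_days=270.0):
        """Multiplier between 0 and 1 applied to a link's raw value."""
        return min(1.0, max(0.0, age_in_days / maturity_days))

    def effective_link_value(raw_value, age_in_days):
        return raw_value * age_dampening(age_in_days)

    print(effective_link_value(10.0, 30))    # a month-old link counts for little
    print(effective_link_value(10.0, 400))   # a mature link counts in full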
3. Duplicate Content Issues: These have become a major
issue on the Internet. Because sheer page count can drive search engine rankings,
black hat SEOs (search engine optimizers) started duplicating
entire sites' content under their own domain names, thereby
instantly producing a ton of web pages (an example would
be downloading an encyclopedia onto your website). As a result
of this abuse, Google aggressively attacked duplicate content
abusers with their algorithm updates. But in the process they
knocked out many legitimate sites as collateral damage. One
example occurs when someone scrapes your website. Google sees
both sites and may determine the legitimate one to be the
duplicate. About the only thing a Webmaster can do is track down
these sites as they are scraped, and submit a spam report to
Google. Another big issue with duplicate content is that there
are a lot of legitimate uses of duplicate content. News feeds
are the most obvious example. A news story is covered by many
websites because it is content the viewers want. Any filter will
inevitably catch some legitimate uses.
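For a sense of how a duplicate content filter might work mechanically,
the sketch below uses word shingles and Jaccard similarity, a textbook
near-duplicate technique rather than Google's actual filter. It also
shows why a legitimately syndicated news story looks the same as a
scraped page from a filter's point of view: the text overlap is
identical either way. The shingle size and threshold are example
values only.

    # Near-duplicate detection with word shingles and Jaccard similarity.
    # A textbook technique, not Google's filter; k and the threshold are
    # arbitrary example values.
    def shingles(text, k=4):
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    original = "Google algorithm changes started in November 2003 with the Florida update"
    scraped = original + " and more"

    sim = jaccard(shingles(original), shingles(scraped))
    print("similarity =", round(sim, 2))
    if sim >= 0.7:                           # example threshold
        print("pages look like duplicates; one of them may get filtered")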
4. Supplemental Page Issues: Webmasters fondly refer to
this as Supplemental Hell. This issue has been reported on places like
WebmasterWorld for over a year, but a major shake-up around
February 23rd has led to a huge outcry from the Webmaster
community. This recent shake-up was part of the ongoing Big Daddy
rollout that should finish this month. The issue is still
unclear, but here is what we know. Google has two indexes: the
Main index that you get when you search, and the Supplemental
index that contains pages that are old, no longer active, have
received errors, etc. The Supplemental index is a type of
graveyard where web pages go when they are no longer deemed
active. No one disputes the need for a Supplemental index. The
problem, though, is that active, recent, and clean pages have
been showing up in the Supplemental index. Like a dungeon, once
they go in, they rarely come out. This issue has been reported
with a low noise level for over a year, but the recent February
upset has led to a lot of discussion around it. There is not a
lot we know about this issue, and no one can seem to find a
common cause leading to it.
Google updates were once fairly predictable, arriving monthly and
anticipated by Webmasters with both joy and angst.
Google followed a well-published algorithm that gave each
web page a Page Rank, a number
based on the number and rank of the other web pages pointing to it.
When someone searches on a term, all of the web pages deemed
relevant are then ordered by their Page Rank.
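Since the original PageRank idea was published, it is easy to sketch.
The short Python example below runs the standard power iteration on a
toy four-page link graph; the graph and the 0.85 damping factor are
textbook illustration values, not a claim about Google's production
settings.

    # Minimal power-iteration PageRank over a toy four-page link graph.
    # The graph and the 0.85 damping factor are textbook example values.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outgoing in links.items():
                if not outgoing:                      # dangling page: share evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outgoing)
                    for target in outgoing:
                        new_rank[target] += share
            rank = new_rank
        return rank

    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))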
Google uses a number of factors such as keyword density, page
titles, meta tags, and header tags to determine which pages are
relevant. This original algorithm favored incoming links and their
anchor text. The more links you got with a given anchor text,
the better you ranked for that keyword. As Google gained the
bulk of internet searches in the early part of the decade,
ranking well in their engine became highly coveted. Add to this
the release of Google's Adsense program, and it became very
lucrative. If a website could rank high for a popular keyword,
its owner could run Google ads under Adsense and split the revenue
with Google!
This combination led to an avalanche of SEO activity like the
Webmaster world had never seen. The whole nature of links between
websites changed. Websites used to link to one another because
the links pointed to good information for their visitors. But now a link to
another website could reduce your search engine rankings, and if
it was a link to a competitor, it might boost theirs. In Google's
algorithm, links coming into your website boost the site's Page
Rank (PR), while links from your web pages to other sites reduce
your PR. People started creating link farms, doing reciprocal
link partnerships, and buying/selling links. Webmasters started
linking to each other for mutual ranking help or money, instead
of quality content for their visitors. This also led to the
wholesale scraping of websites. Black hat SEOs will take the
whole content of a website, put Google's ad on it, get a few
high powered incoming links, and the next thing you know they
are ranking high in Google and generating revenue from Google's
Adsense without providing any unique website content.
Worse yet, as Google tries to go after this duplicate content,
they sometimes get the real company instead of the scraper. This
is all part of the cat and mouse game that has become the Google
algorithm. Once Google realized the manipulation that was
happening, they decided to aggressively alter their algorithms
to prevent it. After all, their goal is to find the most
relevant results for their searchers. At the same time, they
also faced huge growth with the internet explosion. This has led
to a period of unstable updates, causing many top ranking
websites to disappear while many spam and scraped websites
remain. In spite of Google's efforts, every change seems to
catch more quality websites. Many spam sites and websites that
violate Google's guidelines are caught, but there is an endless
tide of more spam websites taking their place.
Some people might believe that this is not a problem. Google is
there to provide the best relevant listings for what people are
searching on, and for the most part the end user has not noticed
an issue with Google's listings. If they only drop thousands of
listings out of millions, then the results are still very good.
These problems may not be affecting Google's bottom line now,
but a search engine that cannot evolve without
producing unintended results will hurt them over time in several
ways.
First, as the competition from MSN and Yahoo grows, having
the best results will no longer be a given, and these drops in
quality listings will hurt them. Next, to stay competitive
Google will need to continue to change their algorithms. This
will be harder if they cannot make changes without producing
unintended results. Finally, having the Webmaster community lose
faith in them will make them vulnerable to competition.
Webmasters provide Google with two things: they are the
word-of-mouth experts, and they run the websites that use Google's
Adsense program. Unlike with other monopolies, it is easy to switch
search engines. People might also criticize Webmasters for
relying on a business model that requires free search engine
traffic. Fluctuations in ranking are part of the internet
business, and most Webmasters realize this. Webmasters are
simply asking Google to fix bugs that cause unintended issues
with their sites.
Most Webmasters may blame ranking losses on Google and their
bugs. But the truth is that many Webmasters do violate some of
the guidelines that Google lays out. Most consider it harmless
to bend the rules a little, and assume this is not the reason
their websites have issues. In some cases, though, Google is
right and has just tweaked its algorithm in the right direction.
Here is an example: Google seems to be watching the
incoming links to your site to make sure they don't have the same anchor
text (this is the text used in the link on the website linking
to you). If too many links use the same anchor text, Google
discounts these links. Some people originally did this
to inflate their rankings; others did it simply because one
natural anchor text, such as the site name, makes sense. This is not really a black hat
SEO trick, and it is not called out in Google's guidelines, but
it has caused some websites to lose rank.
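As a rough illustration of the effect (and emphatically not Google's
actual method), the sketch below caps how much any single anchor
phrase can contribute to a site's link profile. The 60 percent cap and
the sample anchor texts are assumptions made up for the example.

    from collections import Counter

    # Cap how much one anchor phrase can contribute to a link profile.
    # The 60 percent cap and the sample anchors are made-up example values.
    def discounted_anchor_counts(anchors, max_share=0.6):
        counts = Counter(anchors)
        total = sum(counts.values())
        cap = int(max_share * total)
        return Counter({text: min(n, cap) for text, n in counts.items()})

    incoming = ["cheap web hosting"] * 95 + ["Hostchart", "hosting reviews",
                "web hosting guide", "hostchart.com", "a great hosting resource"]
    print(discounted_anchor_counts(incoming))
    # The over-used phrase is capped, so it no longer dominates the profile.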
Webmasters realize that Google needs to fight spam and black
hat SEO manipulation. And to Google's credit, there is a Google
engineer named Matt Cutts who runs a blog and participates
in SEO forums to assist Webmasters. But given the revenue impact
that Google rankings have on companies, Webmasters would like to
see even more communication around the known issues, and help
with identifying future algorithm issues. No one expects Google
to reveal their algorithm or what changes they are making. Rumor
on the forum boards speculates that Google is currently looking
at items like the age of the domain name, websites on the same
IP, and frequency of fresh content. It would be nice from a
Webmaster standpoint to be able to report potential bugs to
Google, and get a response. It is in Google's best interest to
have a bug free algorithm. This will in turn provide the best
search engine results for everyone.
==============================================
Rodney Ringler is President of Advantage1 Web Services, Inc.,
which owns a network of Web Hosting Informational Websites
including Hostchart.com (http://www.hostchart.com),
Resellerconnection.com (http://www.resellerconnection.com),
Foundhost.com (http://www.foundhost.com) and Resellerforums.com
(http://www.resellerforums.com).
==============================================