There has been a lot of talk about the latest Google algorithm update. Known as Panda (or Farmer, due to its apparent targeting of 'content farms'), the change has been implemented to improve overall search quality by weeding out low-quality websites. But what does this mean for SEO, and how can you use it to your advantage?
First, let's recap. Google changes its algorithm all the time, but most changes are so subtle that they are hardly noticed. Panda is different, though. In February, Panda went live in the US, affecting around 12% of search queries. Now, as of mid-April, the update has been rolled out to all English-language queries worldwide, and is expected to make a similar impact.
The idea behind the update is to help the searcher find information with real value to them, not just the information that has been best optimised for their search phrases. In particular, this means weeding out websites with large amounts of shallow, low-quality content, known derisively as 'content farms.' These sites mass-produce content that specifically targets popular search queries. Hundreds or even thousands of articles are written daily, but little effort is put into quality assurance. Usually, these articles are written by poorly-paid writers, often based in developing countries, so the standard is generally low, with poor English usage and shallow information.
The problem has been that often these sites rank higher than well-researched, well-written and authoritative information; in other words, content that would actually have much greater value to the searcher. Panda was designed to change this, rewarding sites with high-quality content and lowering the visibility of less useful content.
The effect on site rankings
Since the update was rolled out, the web has been buzzing about its implications. Many sites have noticed a considerable difference in their search rankings, some for better and some for worse. Content farms have suffered noticeable ranking drops, and scraper websites (sites that do not publish original content, but instead copy content from elsewhere on the internet) are also being punished. On the upside, some established sites with high-quality information have been rewarded with higher rankings. You can see some interesting data on the biggest winners and losers here.
Still, there have been complaints from site owners who believe their content should not have been marked as low quality. On that note, Google has said that tweaks are still being made as necessary, but that site owners should look closely at their content. A statement on the Google blog said: "Sites that believe they have been adversely impacted by the change should be sure to extensively evaluate their site quality. In particular, it's important to note that low quality pages on one part of a site can impact the overall ranking of that site. Publishers who believe they've been impacted can also post in our webmaster forums to let us know. We will consider feedback from publishers and the community as we continue to refine our algorithms."
The definition of 'low-quality'
The question we've all been asking is how Google determines what constitutes a 'low-quality' site. We don't know the exact formula, but Google's search-quality guru Amit Singhal and top search-spam fighter Matt Cutts gave some clues about their process in an interview with Wired. Basically, they needed to develop a mathematical definition of 'low-quality.' So, they conducted qualitative research to find out which of a sample of sites people considered to be low quality and why. From these results, they developed a list of factors that Google could measure.
There has been some discussion about whether information collected from the recently launched Personal Blocklist Chrome extension could have impacted results. Singhal and Cutts say that Google did not directly use the information collected from the extension to make decisions, but did compare that data with what the algorithm update considered low-quality sites, and used it as a confirmation of sorts. They explain: "If you take the top several dozen or so most-blocked domains from the Chrome extension, then this algorithmic change addresses 84% of them, which is strong independent confirmation of the user benefits."
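To put that 84% figure in concrete terms, the cross-check they describe boils down to simple set overlap: take the most-blocked domains from the extension and see how many of them the update also demoted. The snippet below is only a toy illustration of that arithmetic, with made-up domain lists rather than Google's actual data or method.

    # Toy illustration only: both domain lists are invented, not Google data.
    most_blocked = {"examplefarm.com", "shallowarticles.net",
                    "copypastehub.org", "spuncontent.biz"}
    demoted_by_update = {"examplefarm.com", "shallowarticles.net",
                         "copypastehub.org", "legitimate-site.com"}

    # How many of the most-blocked domains did the update also catch?
    addressed = most_blocked & demoted_by_update
    coverage = len(addressed) / len(most_blocked)
    print("Update addresses {:.0%} of the most-blocked domains".format(coverage))
    # Prints: Update addresses 75% of the most-blocked domains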
What does this mean for SEO?
Basically, the Panda update reiterates the importance of having quality, unique content. Although nobody knows for sure, we can make some educated guesses about the factors that Panda is probably taking into consideration, based on the things we do know.
So, some of the things that may cause a website to be considered 'low quality' include the following (a rough sketch of how you might check a couple of these yourself follows the list):
• A high percentage of duplicate content
• A large number of ads, particularly ones that aren't relevant to the site
• Page content and title tags that don't match the search queries the page ranks well for
• Poorly written English or unnatural language, including clumsy on-page SEO such as keyword stuffing
• A large number of very short articles making up the majority of your content
• A high bounce rate
• Short visit times
• A low percentage of returning visitors
• A low click-through rate from Google's results pages
• A high percentage of boilerplate content (the same on every page)
• Few or no quality inbound links
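Nobody outside Google knows how these signals are weighted, but some of them are things you can roughly check yourself. As one hedged example, the sketch below compares pages on a site pairwise and flags pairs whose text is largely identical, which is one crude way to spot duplicate or boilerplate-heavy content. The URLs, sample text and 80% threshold are all arbitrary placeholders, not anything Google has published.

    # A rough self-audit sketch: flag page pairs whose text is mostly identical.
    # The pages dict and the 0.8 threshold are arbitrary examples.
    from difflib import SequenceMatcher
    from itertools import combinations

    pages = {
        "/articles/fix-a-dripping-tap": "Fixing a dripping tap is a quick job...",
        "/articles/repair-a-dripping-tap": "Fixing a dripping tap is a quick job...",
        "/articles/choose-a-plumber": "A good plumber should be licensed and insured...",
    }

    THRESHOLD = 0.8  # flag pairs sharing roughly 80% or more of their text

    for (url_a, text_a), (url_b, text_b) in combinations(pages.items(), 2):
        similarity = SequenceMatcher(None, text_a, text_b).ratio()
        if similarity >= THRESHOLD:
            print("Possible duplicate: {} vs {} ({:.0%} similar)".format(
                url_a, url_b, similarity))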
Another important point is that low-quality or duplicated content on your site can cause the whole site to be deemed low quality, even if you also have original, high-quality content. Google's John Mu advised: "If you do have such high-quality, unique and compelling content, I'd recommend separating it from the auto-generated rest of the site, and making sure that the auto-generated part is blocked from crawling and indexing, so that search engines can focus on what makes your site unique and valuable to users world-wide."
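As a concrete illustration of that advice, the auto-generated section of a site can be kept out of search engines with a robots.txt rule or a noindex robots meta tag on the pages themselves. The /auto-generated/ path below is just a placeholder; substitute whatever directory actually holds that content.

    # robots.txt - ask crawlers not to fetch the auto-generated section
    User-agent: *
    Disallow: /auto-generated/

    <!-- or, in the <head> of each auto-generated page -->
    <meta name="robots" content="noindex, follow">

Note that a robots.txt block stops crawling, while a noindex tag only works if the page can still be crawled, so pick the approach that fits how those pages are generated and linked.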