Released on May 22nd 2013, Penguin 2.0 continues the work of the original Google Penguin algorithm, which launched on April 24th 2012, in tackling web spam and poor linking practices. When Penguin first launched, Google framed it as a way to penalise sites engaged in ‘black hat’ SEO practices, from buying links through web directories to stuffing pages and meta tags with keywords.
Penguin 2.0 builds on this approach and aligns with Google Panda in making it harder for sites to inflate their organic visibility unless they have quality content and natural links from high quality sites to back up their position. What, then, do we know about Penguin 2.0 so far, and what can sites do to recover their rankings if they’ve been hit by the algorithm change?
What Penguin 2.0 Does
It’s perhaps best to start by looking to Google’s Matt Cutts, the head of the company’s webspam team, to explain the major features of version 2.0. As with the original Penguin, 2.0 is all about tackling web spam, but it goes into more detail over what counts as a link from a low quality site. Google has also updated its Panda algorithm, which is now in a state of ‘everflux’, so gaming Google is harder than ever.
Cutts suggests that Penguin 2.0 will affect about 2.3 per cent of English-US search queries, and will also be rolled out to other languages. The algorithm works to promote good SEO practices from webmasters, which means generating relevant, well written content and building up a search profile based on links from high quality sites, rather than from pages that have been identified as participating in spam […]