Originally launched in February 2011, the Google Panda update was developed to keep low-quality content off the top pages of the Big G’s search results. Since then, Panda has gone through dozens of refreshes, rolling out virtually every month. Panda rollouts became so common that Google stopped making big announcements about them, and the SEO world simply learned to live with it.
Google is rolling out our Panda 4.0 update starting today.
— Matt Cutts (@mattcutts) May 20, 2014
That was until late last May, when Matt Cutts announced the deployment of Panda 4.0. Industry experts immediately knew something big was up because Google took the time to let people know it was coming. Sure enough, carnage followed as reports claimed that thousands of sites were affected. Ask.com, History.com and eBay were among the worst hit, suffering organic search traffic drops of 33% or more. If you feel your site might be at risk or might already have been nailed by Panda, this post is for you.
Who’s at Risk?
The short answer: websites that have an inordinate amount of low-quality content on their pages. Of course, “low quality” is a subjective description and it can mean a lot of different things depending on who you ask. Nevertheless, there are some universally accepted hallmarks of poor content quality. These are:
- Thin content – According to Copypress, a page’s content can be considered “thin” by Google if it’s less than 200 words in length. Some exceptions may apply. For instance, news sites that post brief but useful updates and pages with other media assets (images, audio, video) might get a pass.
- Spun content – Content spinning is by far one of my biggest pet peeves. Basically, it’s the act of creating “new” and “original” content by copying another site’s content, rewording it and claiming it’s yours in an attempt to use it for SEO. Prior to Panda, a lot of webmasters used spun content to build entire sites and to generate backlinks from content farms. It came as no surprise that Panda’s first victims were article directories and blog networks. This forced a lot of SEOs to abandon strategies that ran on content spinning.
If you’re wondering why spun content is bad for the Web, it’s because it usually offers a poor reading experience. When a genuinely good article is rephrased and its words are subbed out for synonyms, it loses its style and its context suffers. If you’re using spun content in any way, shape or form… I wish you luck.
- Poorly-written content – Just because anyone can publish content on the Web doesn’t mean everyone is a qualified writer. Grammar, style and flow still separate real writers from random dudes with computers. Google is getting better at understanding human language every day and you can bet that it recognizes and rewards good writing.
- Duplicate content – Content duplication within your domain can be taken by Google as a sign that a website is negligent, misleading or spammy. The last thing the search giant wants to do is display multiple search results that lead to identical content pieces. Duplication filters have long been a part of Google’s algorithms, and Panda took detection and action to a higher level.
- Scraped content – This is the act of stealing content from someone else’s site and posting it elsewhere on the Web without the author’s permission. People who do content scraping often use automated tools to collect content en masse. Search engines are not perfect, and in some cases they might filter out the original source in favor of the duplicating one. If you have no internal duplicate content and your rankings are suffering, you might want to check whether your content is being scraped.
- Excessive, intrusive ads – Some sites are built on information and they don’t sell products of their own. These sites make money by relying on ads that pay them on a cost-per-click (CPC), cost-per-thousand-impressions (CPM) or cost-per-acquisition (CPA) basis. Needless to say, it’s in the best financial interest of these sites to display ads to their visitors as much as they possibly can.
Google isn’t against making money and it embraces advertising as its main revenue model as well. However, it does understand that too many ads can annoy users and disrupt their desired activities. If a site displays too much advertising above the fold (the part of the page that splashes into view as soon as it loads), this could trigger Panda filters.
- Excessive use of doorway pages – One of the biggest stories related to Panda 4.0 was eBay’s fall from search engine grace. The online auction giant had the habit of using doorway pages to rank for thousands of niche keywords. Google defines doorway pages as “typically large sets of poor-quality pages where each page is optimized for a specific keyword or phrase.” In eBay’s case, it liked to optimize internal search result pages that didn’t immediately show Google users the information they expected when they clicked on an eBay listing. This meant that users had to go through the doorway page, hope to find what they were looking for in the internal search page and then click on a link only to be led to yet another page.
Needless to say, Google saw this as a source of user frustration and put its foot down on eBay’s methods.
- Low Click-Through Rates – Reaching Google’s first page for your target keywords is no guarantee that you’ll be there to stay. Google expects high-ranking search results to get a commensurate share of clicks from its users. If a listing consistently underperforms, it gives off the signal that users don’t find it enticing or relevant. Google then has the impetus to lower that listing’s rank and put better-performing listings ahead of it.
A search listing’s click-through rate (CTR) is the number of clicks it receives divided by the number of impressions it gets. Top-ranking listings usually get anywhere between 35% and 40% of the clicks on a page, while the second-placer gets around 17.5%-20%. If the CTRs fall well below these ballpark ranges on a lot of pages in your site, it may send a signal to Google that you’re doing something wrong. (A quick way to flag underperformers on this metric, and on bounce rate below, is sketched after this list.)
- High Bounce Rate – In web analytics, a “bounce” happens when a user lands on a page and leaves without any signs of engagement activity like in-depth reading, clicking on links or commenting. The bounce rate is the share of a page’s visits that bounced. Analytics guru Avinash Kaushik calls the bounce rate “the sexiest metric ever” because it’s simple, powerful and it tells you how well a page is serving its audience in one glance.
Having a lot of pages with bounce rates of 60% or higher is a red flag. It means that more than half of the people driven to your pages don’t find your content engaging. If this is a sitewide phenomenon for you, it could be a contributing factor in a Panda hit.
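To make those last two signals concrete, here’s a minimal sketch of how you might flag underperforming pages in bulk. It assumes you’ve merged a Search Console export with your analytics data into a single CSV; the filename and column names are hypothetical, so adapt them to your own exports:

```python
import pandas as pd

# Hypothetical merged export; these column names are assumptions,
# not a standard report format.
df = pd.read_csv("page_metrics.csv")  # url, impressions, clicks, sessions, bounces

df["ctr"] = df["clicks"] / df["impressions"]
df["bounce_rate"] = df["bounces"] / df["sessions"]

# The 5% CTR floor is a deliberately loose illustration; in practice,
# compare each page's CTR against the benchmark for its ranking position.
weak_ctr = df[df["ctr"] < 0.05]
high_bounce = df[df["bounce_rate"] >= 0.60]

print(f"{len(weak_ctr)} pages with a weak CTR")
print(f"{len(high_bounce)} pages bouncing at 60% or higher")
```

Anything the script flags still deserves a manual look before you act on it.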
Diagnosing a Panda Attack
Let’s get one thing straight: a Panda-related ranking drop is not a penalty. It’s an algorithm update that filters low-quality pages from the SERPs. Since it’s algorithmic, nobody in the Googleplex pushed a button to dip your site in hot water. You will not receive a manual action message in Webmaster Tools, but the effects can be just as devastating. People love to refer to Panda as a penalty, but that’s an incorrect label if you want to be technical about it.
That said, diagnosing a Panda attack can be very difficult. You won’t have a message telling you what you’ve done wrong and it could take days for you to realize that something went awry. Distilled wrote a post on determining Panda hits three years ago, but most of the lessons in that post still hold true today. When investigating a possible case of Panda filtering, watch out for the following signs:
- Abrupt ranking drops
- Sharply-diminished Google traffic
- Drop in number of pages receiving Google traffic
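One quick way to check that third sign is to count how many distinct landing pages received organic Google traffic before and after the suspected hit. Here’s a minimal sketch, assuming a daily per-page CSV export from your analytics package; the filename, column names and date are placeholders:

```python
import pandas as pd

# Hypothetical daily export of organic sessions per landing page.
df = pd.read_csv("organic_landing_pages.csv", parse_dates=["date"])
# Expected columns: date, landing_page, sessions

refresh_date = pd.Timestamp("2014-05-20")  # the suspected Panda rollout

before = df[df["date"] < refresh_date]
after = df[df["date"] >= refresh_date]

pages_before = before.loc[before["sessions"] > 0, "landing_page"].nunique()
pages_after = after.loc[after["sessions"] > 0, "landing_page"].nunique()

print(f"Landing pages with Google traffic: {pages_before} before, {pages_after} after")
```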
It’s helpful to keep your ear to the ground in the SEO community. Find out whether or not a Panda refresh happened around the same time you experienced diminished search engine visibility. Of course, checking all the things mentioned above is no guarantee that it was indeed Panda that got you. If you’ve been using other questionable tactics such as over-aggressive link acquisition, link selling or link network participation, you could have been hit by Penguin or another algorithm update.
Bottom line: you know your site best, so look into all the possible elements that might have triggered the filter and start addressing them right away.
Dealing with the Panda
I can’t lie: I haven’t encountered a Panda recovery job that I would call easy. Bouncing back involves a lot of time, effort, know-how and patience. If you think you’ve been hit by Panda, here are some steps that you’ll want to take on the road to recovery:
- Find and Fix Pages with Thin Content – I personally use the Screaming Frog SEO Spider tool to give me the data I need when I perform on-site audits. I highly suggest getting the paid version because it allows you to crawl all the links on all the pages in your site. Once you’ve completed the crawl, go to the “Internal” tab and filter out the non-HTML URLs. This lets you see your webpages along with a corresponding “Word Count” column.
Export the table to a CSV file and sort the word counts from smallest to largest. Examine all the pages that have 200 words or fewer on them. See if these pages can be beefed up. If not, deindexing them may be your best and quickest fix.
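If your crawl is large, the sorting and filtering is easy to script. Here’s a minimal sketch using pandas on the exported CSV; the “Address” and “Word Count” headers match recent Screaming Frog exports, but double-check your own file before running it:

```python
import pandas as pd

# Load the exported "Internal" tab from Screaming Frog.
crawl = pd.read_csv("internal_html.csv")

# Keep everything at or under the 200-word mark, thinnest pages first.
thin = crawl[crawl["Word Count"] <= 200].sort_values("Word Count")

# Save the candidates for manual review: beef them up, deindex them,
# or leave them alone if they're legitimate exceptions (news briefs,
# media-heavy pages and so on).
thin[["Address", "Word Count"]].to_csv("thin_content_candidates.csv", index=False)
print(f"{len(thin)} pages at or under 200 words")
```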
- Hire Real Writers – Take Joe Pulizzi’s advice in his book Epic Content Marketing and hire real, professional writers to create your content if you don’t have the time or energy to create it yourself. Personally, I prefer in-house writers because they tend to grow into subject matter experts over time. However, I’m not discounting the fact that there are some very good freelancers out there who can help you just as much. You’ll just have to be very careful when screening them. Grammar, punctuation, versatility and research tenacity are some of the things I consider when I hire a writer.
Also, don’t confuse copywriters with journalists. Copywriters specialize in writing persuasive copy that will entice readers to heed your calls to action. Journalists, by nature and training, are more fact-driven storytellers. There are some writers who are good enough to do both, but those people are rare and they don’t come cheap.
Bottom line: I’d suggest making great writers one of your biggest investments. SEO, marketing and technical concepts can be taught within days; writing ability is honed over years and can’t be taught that easily. Talented writers tend to take pride in their work. They’re not likely to plagiarize, spin or scrape other people’s content. Great writers love to hear praise for their work, so they tend to submit material that will require as little editing as possible from you and your editorial team.
- Find and Address Internal Duplicate Pages – There are a couple of quick ways to do this. The first is via Google Webmaster Tools. On the left sidebar menu, click on Search Appearance and then go to HTML Improvements. You’ll see a dashboard that lists duplicate title tags and duplicate meta descriptions, among other useful info. Download the data as a CSV and check out the URLs of the pages with title or description duplications. Then visit those pages and see whether it’s just the titles and meta data repeating across multiple pages or whether the entire content is the same.
If only the title tags and/or meta descriptions are duplicated but not the body of the content, simply rewrite them as something that better describes each page’s content. If the pages are absolute duplicates, check whether the duplicate pages need to exist from a usability standpoint. If they do, use a rel=canonical tag to tell Google which of the duplicate pages it should rank. If they don’t, you can simply delete them and 301 redirect their URLs to the original.
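If the report flags a long list of URLs, you can tell the two cases apart programmatically by hashing each page’s visible text: identical hashes mean full duplicates, while different hashes mean only the metadata repeats. A minimal sketch using requests and BeautifulSoup (the URLs are placeholders):

```python
import hashlib

import requests
from bs4 import BeautifulSoup

def body_fingerprint(url):
    """Fetch a page and hash its visible text, ignoring the markup."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    return hashlib.md5(text.encode("utf-8")).hexdigest()

# Placeholder pair flagged by the HTML Improvements report.
urls = ["https://example.com/page-a", "https://example.com/page-b"]
fingerprints = {url: body_fingerprint(url) for url in urls}

if len(set(fingerprints.values())) == 1:
    print("Full duplicates: canonicalize or 301 one to the other")
else:
    print("Only the metadata repeats: rewrite the titles and descriptions")
```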
- Hunt Down Content Scrapers – Your site might not be doing anything wrong itself, but content scrapers might be stealing your content and fooling Google into thinking you’re the copycat. There are two things I’d immediately do about it:
- Add a rel=canonical tag to each page that points to itself – Scraping operations are run by lazy people, and their tools often copy content down to the code level. That means if you plant a rel=canonical tag that points to your own page, they’ll take that too and unwittingly tell Google that you’re the original. It might take some time for Google to see what’s going on, but this has worked well every time I’ve tried it. (A quick way to verify that your pages actually carry these self-referencing tags is sketched after the next step.)
- Send a Cease and Desist Letter – If the scraping persists and it’s affecting your business, find out who owns the offending domain over at Whois and send that person an email from your legal team. The letter is called a “cease and desist” correspondence, which essentially tells the scraper to stop what he’s doing or face legal action. Most scrapers don’t have the desire or resources to fight lawsuits, and they will comply more often than not.
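As a sanity check on the first tactic, you can crawl your own pages and confirm each one actually declares itself as canonical. Here’s a minimal sketch using requests and BeautifulSoup; the URLs are placeholders, and the trailing-slash normalization is a simplification:

```python
import requests
from bs4 import BeautifulSoup

def has_self_canonical(url):
    """Check that a page declares itself as its own canonical version."""
    html = requests.get(url, timeout=10).text
    tag = BeautifulSoup(html, "html.parser").find("link", rel="canonical")
    return tag is not None and tag.get("href", "").rstrip("/") == url.rstrip("/")

# Placeholder URLs; swap in pages from your own crawl.
for url in ["https://example.com/", "https://example.com/blog/some-post"]:
    status = "OK" if has_self_canonical(url) else "missing self-referencing canonical"
    print(f"{url}: {status}")
```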
- Review Your Ad Placement Policy – If you have layouts that prioritize ad placement over usability, hold a meeting with all key stakeholders and discuss the matter with them. Be aware that ads drive revenue for your business and there could very well be internal resistance to moving or removing them. Find a compromise between giving users a much better experience on your pages and maintaining ad visibility in all the right places. Banner ads, sidebar ads and ads placed between blog entries are all acceptable as long as they’re properly labelled.
- Avoid Using Doorway Pages to Rank for Keywords – Google wants the best experience possible for its users. That means when they click on your listing, they should immediately find what you promised them. If you’ve set up doorway pages with keyword-laced copy and bombarded them with backlinks, you probably have a ticking timebomb on your hands.
Address this issue by 301 redirecting doorway pages directly to the pages that have the highest degree of contextual relevance to them. Keep in mind that Google wants to align the intent of the queries with the listings that it displays and the pages where users are taken. If the target keyword intent is satisfied by the page you redirect the doorway page to, Google is bound to notice, and it may help you avert Panda filters or recover if you’ve already been hit.
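How you implement those 301s depends on your stack; most sites handle them at the web server level, but to make the logic concrete, here’s a minimal sketch for a site running on a Python framework like Flask. The doorway-to-destination mapping is hypothetical:

```python
from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical mapping of retired doorway URLs to their most
# contextually relevant live pages.
DOORWAY_REDIRECTS = {
    "/cheap-blue-widgets": "/widgets/blue",
    "/best-widget-deals-online": "/widgets",
}

@app.route("/<path:slug>")
def retire_doorway(slug):
    # In a real app this catch-all would sit behind your normal routes.
    target = DOORWAY_REDIRECTS.get(f"/{slug}")
    if target:
        # A 301 tells Google the move is permanent and passes the old
        # page's equity to the destination.
        return redirect(target, code=301)
    return ("Not found", 404)
```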
- Use Canonical Tags for Ecommerce Sites with Lots of Similar Products – I’ve handled very large ecommerce sites, and some of them contain hundreds of thousands of products. In some cases, groups of products had very minor distinguishing characteristics and the resulting copy was practically the same for each of their pages. This caused content duplication issues that triggered the Panda filter, and we encountered sharp dips in rankings and traffic. After a week of deliberations with my POCs, we came up with a solution that ultimately led to full recovery. Here’s an example:
We had a retail site that sold safety signs for factories and warehouses. No Smoking signs came in different colors, but the materials used and the product dimensions were identical. Each color variant had a separate page, and all those pages had very similar product descriptions. No Smoking signs weren’t the only products with this issue; practically every one of our 400+ signs had color variants, and it would have been impractical to rewrite all that copy based on just a color angle.
We resolved this issue by identifying the best-selling color variant of each sign and pointing canonical tags to it from the pages of the other color variants. Sure, only one variant would be ranked by Google, but that’s better than being stuck in Panda’s clutches. Eventually, the pages we chose for Google to rank began to recover on the SERPs and sales went back to normal.
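The mechanical part of that fix is easy to script once you decide on the rule. Here’s a minimal sketch of deriving the canonical mapping from a product feed, assuming one row per color variant with a shared parent SKU; the filename and column names are hypothetical:

```python
import pandas as pd

# Hypothetical product feed: one row per color variant.
products = pd.read_csv("sign_variants.csv")
# Expected columns: parent_sku, variant_url, units_sold

# Pick each sign's best-selling variant as the canonical target.
best_sellers = (
    products.sort_values("units_sold", ascending=False)
    .groupby("parent_sku")
    .first()["variant_url"]
)

# Map every variant to the canonical URL its page should point at.
mapping = products.join(best_sellers.rename("canonical_url"), on="parent_sku")
mapping.to_csv("canonical_map.csv", index=False)
```

Each variant page then gets a rel=canonical tag pointing at its row’s canonical_url.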
Next Steps
After covering all your bases and applying fixes to every aspect of your site that might have incurred the wrath of the Panda, you’ll want to sit back and wait for another refresh to happen. As stated above, Panda refreshes roll out about once a month. Watch out for news on rollouts and see if your traffic and rankings bounce back within the next few days.
Keep in mind that there are a few cases where rankings don’t rebound even after you’ve addressed Panda-related issues and a refresh has rolled out. Take a closer look and make sure that there are no issues left to deal with. Wait for another refresh and see what the effects are. As long as originality, substance and usability are palpable in your pages, you should be able to rebound.
Hopefully this post gave you a better perspective of how Panda works and how to deal with it. If you have questions or insights based on Panda 4.0, let us know in the comments section below.