Google Panda – Search Engine Watch
https://searchenginewatch.com

The Panda anniversary and what we desperately must remember about search
https://searchenginewatch.com/2021/02/24/the-panda-anniversary-and-what-we-desperately-must-remember-about-search/
Wed, 24 Feb 2021

30-second summary:

  • This week marks the 10 year anniversary of Google’s landmark web quality algorithm Panda
  • It was a seminal moment for the SEO industry, with around 12% of US search queries affected as Google targeted poor quality and manipulative optimization practices
  • Despite removing much of the worst black-hat activity, SEO still hasn’t lived up to its experiential potential ten years on
  • Many clients and practitioners still use outdated language and practices to position the value of Search in this vastly more mature marketing landscape
  • To escape this pre-Panda legacy SEO needs to take the best of its constituent parts and shape a new customer-centric Search future once and for all

I was recently notified of a significant work anniversary which transported me back in time to the turbulent start of my SEO career just over 10 years ago. I was prompted to reflect on the industry I love, where it continues to fall short, and ultimately where I see it going. This professional milestone closely corresponded with what was a seminal event for the immature SEO business. On February 24th, 2011 the ‘Death Star’ took aim, and with a typically understated Tweet from the Head of Google’s Web Spam team, Matt Cutts confirmed it. Google had launched its landmark web quality algorithm that would forever be known as Panda. 

Source: Twitter

The day of reckoning had arrived for an industry that had tied its clients’ lucrative search fortunes to a house of cards built on the spammy and manipulative “best practices” that had become SEO’s calling card. Thin, duplicate and often stolen content was accompanied by on-site keyword stuffing and obvious over-optimization. This might have gamed the rankings for a time, but it provided little value to users, who bounced en masse, giving Google a solid signal that many sites deserved an algorithmic slapdown.

What exactly happened in 2011 with Panda?

In what was a relatively short rollout, around 12% of US search queries were affected. The target was poor quality sites relying far too heavily on content farms and directories to fabricate their popularity in search.

Shell-shocked webmasters stared at their Analytics dashboards like Wall Street traders on Black Monday, watching in disbelief as their search share plummeted and asked, “What do we do now?”

At the time, I was simply a fledgling Search Executive with a mere nine months’ industry experience under my belt, and the only thing protecting me from the fallout was the foresight of our agency’s founders. As a start-up, we were luckily free and clear of this mess because they had seen the writing on the wall long before.

SEO was dead, or so we thought, and a new age of experience was dawning. We looked on as Rome burned.

But, despite its obituary being cynically written every year since, SEO refused to die. At the time, practitioners paid lip service to profound change but were far too invested in their ways of operating, and clients, although badly burned, were addicted to the quick wins the hackers of the algorithm had peddled. And so, the dance went on.

Was Panda a missed opportunity for the industry?

Yes, Panda and its sister link-spam algorithm Penguin had a profound effect and removed the absolute worst of the black-hat practices. But a significant proportion of the industry simply did their best to clean up the mess they had created, often charging clients to take out their own trash, so to speak. And so the probing began for the new acceptable minimum you needed to meet in order to get your site ranking once again.

  • “Is 300 words enough now?” 
  • “How many keywords can I get away with using without angering Google?”
  • “How much content do I need to change for it to be considered unique, will 60% do it?”

This mentality of chasing the ever-evolving algorithmic goalposts is the continued failure of many in the industry who still largely prefer to please bots ahead of delivering real value for users. 

I don’t mean to preach: my own hands aren’t squeaky clean, and these tactics do have their uses, but the belief is gaining momentum that they should not be allowed to ride roughshod over both brand and UX. I was lucky enough to have been scared straight from the start, firmly putting my focus on how to drive real value for the consumer, building great experiences, authority, and trust.

Panda’s pain is still real

This is the Jekyll and Hyde reputation the industry has suffered through ever since. The straightest of strait-laced operators, who see search as a powerful and useful customer touchpoint, are tarred with the same brush as the sketchiest of the spammers and scammers who are still alive and well within the industry.

Their presence diminishes the overall value of search and can create a race-to-the-bottom mentality. Clients who are still sore with the industry ten years on sometimes expect “old-school” results without being willing to invest in long-term value, ironically because they’re terrified of being burned again by another update.

It’s crazy but it’s true: I’m still having these conversations more often than is reasonable, and it’s because the discipline is haunted by the original sins of its birth.

It goes without saying that I want to scream every time I hear the words:

  • “Can you do some quick SEO for me?”
  • “I’d love it if you could build us some cheap links?”
  • “Can you just get rid of this negative article from Google for me?”
  • “Just tell me what keywords I should use!”

All with the retort of, 

“… it will cost what?! I found a guy online who’ll do it for peanuts”.

The damage has been done and this is the cross that SEO has to bear, but is there a way to move out of the long shadow cast by a decade-old catastrophe? 

The answer is a resounding “yes!”, but we need to meet the revolutionary promise we made in 2011, and we desperately need to stop talking just about SEO and start repositioning the value of search.

What does our SEO past mean for our search future?

First, let’s start with the term itself, what it means to clients, and how it needs to be repositioned. SEO is a collection of data-driven tactics that clients often treat as a cure-all and a channel unto itself; it is neither.

Despite sitting at the critical crossroads of web development, content, and PR, SEO is far too often a siloed activity that does not play nicely with other marketing disciplines, even separated in mind and budget from its closest counterpart SEM. 

Instead, we need to be evaluating search, not SEO, as a valuable driver in a customer’s path to purchase and how it can facilitate discovery, consideration, and purchase, driving an overall brand experience.

Looking at SEO and how it operates on the Panda update anniversary

The reason SEO too often operates in a vacuum is that historically it has been far less complicated to manage and measure in isolation. But the impact and delivery of search should be more dynamic and incorporated across marketing departments and agencies, as the diagram above suggests, with the constituent tactics of SEO delivering more as part of a broader search practice.

It’s fair to say that the ripples Panda sent through SEO 10 years ago have not matured nearly as quickly as the dynamic marketing ecosystem that has grown up around it. Ten years ago, rich media, mobile, and social media weren’t yet huge drivers or mediums, and personalized email and marketing automation were still relatively new on the scene.

Google has evolved well beyond its blue-link roots, providing a valuable blended search experience featuring products, local results, answers, reviews, news, and video, powered by advanced AI that actually understands user intent and voice searches.

Search is no longer the one-dimensional digital bottleneck it once was and consumers hold the power to choose how they interact with brands and follow the path that’s most convenient for them, not one that’s engineered by SEO alone. 

Remember, people will always do what’s right for them.

Three considerations for how search should learn from SEO’s past

1. Put the customer first

A customer-centric approach is a given in most marketing disciplines but a lot of people in the SEO community did not seem to get the memo. 

Instead of talking about search share and obsessing over the ranking opportunities we need to focus on, try to refocus the lens on what the customer feels, wants, and needs as the foundation of an experiential strategy, with search being only one of the tactics that delivers on it.

Beyond the implied minimum of a technically sound site, we need to put a greater emphasis on analyzing search behavior, not just keywords, to provide the customer with the right information at the moments that matter in their journey. 

Marketing teams need to be asking themselves, “why?” more often and for search the answer needs to be, “because it’s what’s best for the customer”. 

2. Change the tone and vocabulary

These points all have one thing in common in that we need to try and move away from the acronyms, verbiage, and lingo that was coined in a non-customer-centric world and based on optimization rather than value. 

This will be one of the hardest things to move away from as so many veterans wear SEO as a badge of honor and clients will more than struggle to learn a new way of referring to a discipline they still don’t fully understand. 

Obviously, I don’t have all the answers here, so from a quick poll I ran on LinkedIn, I wanted to gauge other industry opinions on this divisive topic.

Poll on the state of SEO

As you can see, even from this small pool of 39 people in my marketing network, almost half also sense that there is a problem, but either feel the hill is too high to climb or that the problem can continue to be ignored. The conversation continues.

3. Create, don’t build

Just showing up in the right search results isn’t good enough, and we know that we need to move away from the mentality of building SEO-optimized content and links as simply a means to an end.

Search data should inform what kind of content people are looking for and also what they like to consume but owned branded content should not be the playground of optimization. There aren’t any shortcuts to creating great user experiences or content that is genuinely useful and deserving of press but you can use search data to make valuable decisions. 

Search as a collaborative marketing discipline will win the day.

The conclusion to all of this is that search holds enormous value, but the industry is still not living up to its full potential because of the ghosts of its pre-Panda past.

The long-term beneficiaries of SEO will be those who can effectively rip it apart and piece it back together in everything marketing teams do, which is no easy feat. 

If we educate the experience makers, everyone from the copywriter to the PR director, the developer to UX designer on the beneficial insights that search teams can provide then a new paradigm can be born. 

Then, and only then, can SEO finally be put out to stud and enjoy the retirement it so desperately deserves.

Kevin Mullaney is MarTech Lead at Nordic Morning’s Malmö office in Sweden. He has over 12 years’ experience working with large global brands at established digital consultancies. A veteran of the SEO industry, Kevin has spoken at BrightonSEO and other industry events and now leads the MarTech and Media team.

Are keywords still relevant to SEO in 2018?
https://searchenginewatch.com/2018/02/26/are-keywords-still-relevant-to-seo-in-2018/
Mon, 26 Feb 2018

What a useless article! Anyone worth their salt in the SEO industry knows that a blinkered focus on keywords in 2018 is a recipe for disaster.

Sure, I couldn’t agree with you more, but when you dive into the subject it uncovers some interesting issues.

If you work in the industry you will no doubt have had the conversation with someone who knows nothing about SEO, who subsequently says something along the lines of:

“SEO? That’s search engine optimization. It’s where you put your keywords on your website, right?”

Extended dramatic sigh. Potentially a hint of aloof eye rolling.

It is worth noting that when we mention ‘keywords’ we are referring to exact match keywords, usually of the short tail variety and often high-priority transactional keywords.

To set the scene, I thought it would be useful to sketch out a polarized situation:

Side one:

Include your target keyword as many times as possible in your content. Google loves the keywords*. Watch your website languish in mid-table obscurity and scratch your head wondering why it ain’t working; it all seemed so simple.

(*not really)

Side two:

You understand that Google is smarter than just counting the amount of keywords that exactly match a search. So you write for the user… creatively, with almost excessive flair. Your content is renowned for its cryptic and subconscious messaging.

It’s so subconscious that a machine doesn’t have a clue what you’re talking about. Replicate results for Side One. Cue similar head scratching.

Let’s start with side one. White Hat (and successful) SEO is not about ‘gaming’ Google, or other search engines for that matter. You have to give Doc Brown a call and hop in the DeLorean back to the early 2000s if that’s the environment you’re after.

Search engines are focused on providing the most relevant and valuable results for their users. As a by-product, they have been, and still are, actively shutting down opportunities for SEOs to manipulate the search results through underhanded tactics.

What are underhanded tactics? I define them as tactics that don’t provide value to the user and are employed only to manipulate the search results.

Here’s why purely focusing on keywords is outdated

Simply put, Google’s search algorithm does more than count the number of keyword matches on a page. It’s more advanced than assessing keyword density as well. Google’s voracious digital Panda was the first really famous update to signal to the industry that keyword stuffing would not be accepted.

Panda was the first, but certainly not the last. Since 2011 there have been multiple updates that have herded the industry away from the dark days of keyword stuffing to the concept of user-centric content.

I won’t go into heavy detail on each one, but have included links to more information if you so desire:

Hummingbird, Latent Semantic Indexing and Semantic Search

Google understands synonyms; that was relatively easy for them to do. They didn’t stop there, though. Hummingbird helps them to understand the real meaning behind a search term instead of the keywords or synonyms involved in the search.

RankBrain

Supposedly one of the three most important ranking factors for Google. RankBrain is machine learning that helps Google, once again, understand the true intent behind a search term.

All of the above factors have led to an industry that is focused more on the complete search term and satisfying the user intent behind the search term as opposed to focusing purely on the target keyword.

As a starting point, content should always be written for the user first. Focus on task completion for the user, or as Moz described in their White Board Friday ‘Search Task Accomplishment’. Keywords (or search terms) and associated phrases can be included later if necessary, more on this below.

Writing user-centric content pays homage to more than just the concept of ranking for keywords. For a lot of us, we want the user to complete an action, or at the very least return to our website in the future.

Even if keyword stuffing worked (it doesn’t), you might get more traffic but would struggle to convert your visitors due to the poor quality of your content.

So should we completely ignore keywords?

Well, no, and that’s not me backtracking. All of the above advice is legitimate. The problem is that it just isn’t that simple. The first point to make is that if your content is user centric, your keyword (and related phrases) will more than likely occur naturally.

You may have to play a bit of a balancing act to make sure that you don’t end up on ‘Side Two’ mentioned at the beginning of this article. Google is a very clever algorithm, but in the end it is still a machine.

If your content is a bit too weird and wonderful, it can have a negative impact on your ability to attract the appropriate traffic due to the fact that it is simply too complex for Google to understand which search terms to rank your website for.

This balancing act can take time and experience. You don’t want to include keywords for the sake of it, but you don’t want to make Google’s life overly hard. Experiment, analyse, iterate.

Other considerations for this more ‘cryptic’ content are how it is applied to your page and its effect on user experience. Let’s look at a couple of examples below:

Metadata

Sure, more clickbait-y titles and descriptions may help attract a higher CTR, but don’t underestimate the power of highlighted keywords in your metadata in SERPs.

If a user searches for a particular search term, on a basic level they are going to want to see this replicated in the SERPs.

Delivery to the user

In the same way that you don’t want to make Google’s life overly difficult, you also want to deliver your message as quickly as possible to the user.

If your website doesn’t display content relevant to the user’s search term, you run the risk of them bouncing. This, of course, can differ between industries and according to the layout/design of your page.

Keywords or no keywords?

To sum up, SEO is far more complex than keywords. Focusing on satisfying user intent will produce far greater results for your SEO in 2018, rather than a focus on keywords.

You need to pay homage to the ‘balancing act’, but if you follow the correct user-centric processes, this should be a relatively simple task.

Are keywords still relevant in 2018? They can be helpful in small doses and with strategic inclusion, but there are more powerful factors out there.

Duplicate content FAQ: What is it, and how should you deal with it?
https://searchenginewatch.com/2017/10/18/duplicate-content-faq-what-is-it-and-how-should-you-deal-with-it/
Wed, 18 Oct 2017

There are a few questions that have been confusing the SEO industry for many years. No matter how many times Google representatives try to clear the confusion, some myths persist.

One such question is the widely discussed issue of duplicate content. What is it, are you being penalized for it, and how can you avoid it?

Let’s try to clear up some of the confusion by answering some frequently-asked (or frequently-wondered) questions about duplicate content.

How can you diagnose a duplicate content penalty?

It’s funny how some of the readers of this article are rolling their eyes right now reading the first subheading. But let’s deal with this myth first thing.

There is no duplicate content penalty. None of Google’s representatives has ever confirmed the existence of such a penalty; there has never been an algorithmic update called “duplicate content”; and there can never be such a penalty because, in the overwhelming majority of cases, duplicate content is a natural thing with no evil intent behind it. We know that, and Google knows that.

Still, lots of SEO experts keep “diagnosing” a duplicate content “penalty” when they analyze every other website.

Duplicate content is often mentioned in conjunction with updates like Panda and Fred, but it is used to identify bigger issues, i.e. thin or spammy (“spun”, auto-generated, etc.) and stolen (scraped) content.

Unless you have the latter issue, a few instances of duplicate content throughout your site cannot cause an isolated penalty.

Google keeps urging website owners to focus on high-quality expert content, which is your safest bet when it comes to avoiding having your pages flagged as a result of thin content.

You do want to handle your article republishing strategy carefully, because you don’t want to confuse Google when it comes to finding the actual source of the content. You don’t want to have your site pages filtered when you republish your article on an authoritative blog. But if it does happen, chances are, it will not reflect on how Google treats your overall site.

In short, duplicate content is a filter, not a penalty, meaning that Google has to choose one of the URLs with non-original content and filter out the rest.

So should I just stop worrying about internal duplicate content then?

In short, no. It’s like a recurring headache you don’t want to ignore: the headache isn’t a disease on its own, but it may be a symptom of a more serious condition, so you want to rule those out or treat them if there are any.

Duplicate content may signal some structural issues within your site, preventing Google from understanding what they should rank and what matters most on your site. And generally, while Google is getting much better at understanding how to handle different instances of the same content within your site, you still don’t want to ever confuse Google.

Internal duplicate content may signal a lack of original content on your site too, which is another problem you’ll need to deal with.

Google wants original content in their SERPs for obvious reasons: They don’t want their users to land on the same content over and over again. That’s a bad user experience. So Google will have to figure out which non-unique pages they want to show to their users and which ones to hide.

That’s where a problem can occur: the more pages on your site that have original content, the more positions in Google’s results your site can potentially occupy across different search queries.

If you want to know whether your site has any internal duplicate content issues, try using tools like SE Ranking, which crawls your website and analyzes whether there are any URLs with duplicate content Google may be confused about:

SE Ranking
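
If you would rather run a quick homegrown check before reaching for a paid tool, a rough sketch along these lines can flag pages whose visible text is essentially identical. This is only an assumption-laden illustration: the URL list is a placeholder you would swap for your own crawl output, it uses the third-party requests and BeautifulSoup libraries, and exact-hash matching will miss the near-duplicates that dedicated crawlers catch.

```python
# Rough sketch: flag internal pages whose visible text is identical.
# Assumes the requests and beautifulsoup4 packages are installed; the URL
# list is a placeholder to replace with your own crawl output.
import hashlib

import requests
from bs4 import BeautifulSoup

URLS = [
    "https://www.example.com/",
    "https://www.example.com/?utm_source=newsletter",  # parameterized duplicate
    "https://www.example.com/about/",
]

def page_fingerprint(url):
    """Download a page, strip the markup, and hash the visible text."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    return hashlib.sha256(text.lower().encode("utf-8")).hexdigest()

seen = {}
for url in URLS:
    fingerprint = page_fingerprint(url)
    if fingerprint in seen:
        print(f"Possible internal duplicate: {url} matches {seen[fingerprint]}")
    else:
        seen[fingerprint] = url
```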

How does Google choose which non-original URLs to rank and which to filter out?

You’d think Google would want to choose the more authoritative post (based on various signals including backlinks), and they probably do.

But what they also do is choose the shorter URL when they find two or more pages with identical content:

Duplicate content

How about international websites? Can translated content pose a duplicate content issue?

This question was addressed by Matt Cutts back in 2011. In short, translated content doesn’t pose any duplicate content issues even if it’s translated very closely to the original.

There’s one word of warning, though: don’t publish automated translations using tools like Google Translate, because Google is very good at identifying those. If you do, you run the risk of having your content labeled as spammy.

Use real translators whom you can find using platforms like Fiverr, Upwork and Preply. You can find high-quality translators and native speakers there on a low budget.

Translation

Look for native speakers in your target language who can also understand your base language

You are also advised to use the hreflang attribute to point Google to the actual language you are using on a regional version of your website.
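
To make the hreflang advice a little more concrete, here is a minimal sketch that generates the reciprocal link elements each regional page should carry. The domains and language-region codes are hypothetical placeholders, and a real international setup also needs the tags mirrored on every variant (or declared in the sitemap).

```python
# Minimal sketch: emit reciprocal hreflang <link> elements for regional variants.
# The URLs and language-region codes below are hypothetical placeholders.
VARIANTS = {
    "en-us": "https://www.example.com/",
    "en-gb": "https://www.example.co.uk/",
    "sv-se": "https://www.example.se/",
}

def hreflang_links(variants):
    """Return the block of <link rel="alternate"> tags every variant should include."""
    lines = [
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in variants.items()
    ]
    # x-default tells Google which version to serve when no locale matches.
    lines.append(
        f'<link rel="alternate" hreflang="x-default" href="{variants["en-us"]}" />'
    )
    return "\n".join(lines)

print(hreflang_links(VARIANTS))
```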

How about different versions of the website across different localized domains?

This can be tricky, because it’s not easy to come up with completely different content when putting up two different websites with the same products for the US and the UK, for example. But you still don’t want Google to choose.

Two workarounds:

  • Focus on local traditions, jargon, history, etc. whenever possible
  • Choose the country you want to focus on from within Search Console for all localized domains except .com.

There’s another old video from Matt Cutts which explains this issue and the solution:

Are there any other duplicate-content-related questions you’d like to be covered? Please comment below!

Seven ways you might be losing out on search rankings
https://searchenginewatch.com/2015/10/05/seven-ways-you-might-be-losing-out-on-search-rankings/
Mon, 05 Oct 2015

SEO is becoming more closely aligned with user experience (UX) and content value as Google sharpens up its algorithms to retain its primacy as a search tool. Now, you can expect to be penalized more frequently by Google in the form of less traffic, and penalized by users in the form of more bounces, for the same issues: weak content, unfriendly user design, or attempts to game Google and feed people ads they don’t want.

If you feel like you’re doing everything right, but you’re still not ranking for the searches you’re targeting, maybe one of these is the problem?

1. Your Website Isn’t Optimized for Keyword Search

That’s optimized, not maximized. You want the ideal keywords and a number of keyword incidences, not the biggest collection of keywords. Stuff your website with keywords and Google will penalize you for it. Leave them out and you are neglecting one of the simplest ways to increase organic traffic.

Diagnosis:

One way you can end up in a keyword-free environment is if your headers and other key text are part of graphics because they’re not crawled as text. Aside from that, it’s a pure copywriting issue and a clear sign that you need a new copywriter. Keyword optimization should come as standard.

Treatment:

In order of importance, your target keywords should appear in:

  1. Page title
  2. H1 and H2 header tags
  3. Content
  4. Meta description

A Word About This: Google doesn’t care that much about keywords anymore. Rather, it cares about key meanings.

Let me elaborate. According to Jayson DeMers, “It doesn’t matter that you used the phrase ‘auto repair shop’ exactly several times throughout your website. You could use ‘auto repair shop,’ ‘car repair specialists,’ and ‘vehicle repair facility’ on different pages, and Google could theoretically put you in the exact same category.”

It’s the meaning that’s getting crawled. However, the higher up the table of importance you go, the more sense it makes to shoot for specific keywords.

Best advice? Single keyword use for your best keyword in the title and meta description. Consider using it or a close competitor in H1 and H2, and use synonyms in body content. That way, you’re getting all three sections: fat head, chunky middle, and long tail.
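
To make that order of importance easy to self-audit, here is a rough sketch that reports where a target phrase (and a synonym) actually appears on a page. The URL and phrases are placeholders, it relies on the requests and BeautifulSoup libraries, and presence alone obviously says nothing about content quality.

```python
# Rough self-audit: where does a target phrase appear on a page?
# Requires requests and beautifulsoup4; the URL and phrases are placeholders.
import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com/services/auto-repair/"
PHRASES = ["auto repair shop", "car repair specialists"]  # head term + synonym

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

locations = {
    "page title": soup.title.get_text() if soup.title else "",
    "h1/h2 headers": " ".join(h.get_text() for h in soup.find_all(["h1", "h2"])),
    "body content": soup.get_text(separator=" "),
    "meta description": (soup.find("meta", attrs={"name": "description"}) or {}).get("content", ""),
}

for phrase in PHRASES:
    for place, text in locations.items():
        found = "yes" if phrase.lower() in text.lower() else "no"
        print(f"{phrase!r} in {place}: {found}")
```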

2. You’re Repelling Spiders

When Google doesn’t crawl your site frequently, you slide down search rankings. And if you leave Google to its own devices, it might not crawl you for weeks. Therefore, your newly optimized site isn’t getting any more action because Google hasn’t noticed it yet.

Diagnosis:

Google uses the data that spiders report to rank pages in search. Not being crawled means search rank doesn’t get updated. Additionally, being crawled infrequently contributes to poor search rank because Google notices that you don’t update your site very often.

Treatment:

You want to entice the spider bots to crawl your site as often as possible. You can find out how often they already do it under Crawl Stats in your Google Search Console.

Here’s how to get spiders to your site (a rough self-check sketch follows the list):

  1. Check server function. Slow load times and unreliable servers incur SEO penalties and discourage frequent crawling.
  2. Update your site frequently. This is one function of your blog. It’s also a reason why your blog should be under yoursite.com/blog, not blog.yoursite.com. Google applies the SEO benefits of your blog to your entire site. You should also frequently update site copy if it’s appropriate.
  3. Get more inbound links. Beware, though. Quality counts more than quantity.
  4. Ask Google to crawl your site. Use Fetch as Google in Search Console’s Crawl menu. Put the URL to any of your pages in the box and Google will crawl it.
  5. Keep your sitemap updated and error-free
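
As a minimal illustration of the first and last items on that list, the sketch below times the server response and pulls the most recent <lastmod> date from an XML sitemap. The URLs are placeholders, it uses only the standard library, and a slow response can have many causes this check won’t diagnose.

```python
# Minimal sketch: time the homepage response and read the newest <lastmod>
# entry from the XML sitemap. The URLs are placeholders; stdlib only.
import time
import urllib.request
import xml.etree.ElementTree as ET

SITE = "https://www.example.com/"
SITEMAP = "https://www.example.com/sitemap.xml"

start = time.monotonic()
with urllib.request.urlopen(SITE, timeout=10) as response:
    response.read()
elapsed = time.monotonic() - start
print(f"Homepage responded in {elapsed:.2f}s (investigate anything consistently slow)")

namespace = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
with urllib.request.urlopen(SITEMAP, timeout=10) as response:
    root = ET.fromstring(response.read())
lastmods = [el.text for el in root.findall(".//sm:lastmod", namespace) if el.text]
print("Most recent sitemap <lastmod>:", max(lastmods) if lastmods else "none found")
```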

3. Pandas Are Devouring Your Content

Pandas are harmless in the wild. Online, they’re feared.

Websites lose serious amounts of organic traffic each time Google brings out a new Panda update. If your content is poorly written, too short, uninformative, or duplicated, Panda will chew you up. This affects not just low-quality pages, but the entire site.

Diagnosis:

You won’t be notified about algorithmic penalties. But if site-wide traffic falls around the time a Panda update is announced, Panda might be the reason.

Treatment:

Panda hunts weak content. So the first thing to do is go over your content. Clear weak blog posts and poorly written copy from your website. After this, all you can do is wait till the next refresh, as Panda rolls out at a very slow pace.

My suggestion is to preempt Panda. Rand Fishkin says, “If you can’t consistently say, ‘We’re the best result that a searcher could find in the search results,’ well, then guess what? You’re not going to have an opportunity to rank.”

Aim to have all your content as good as the top-ranking pages for your target searches. That’s the floor, not the ceiling.

4. Your Links Are Attracting Penguins

Google has a habit of naming hunter-killer algorithms which threaten large percentages of your traffic after cuddly animals. If Panda hunts weak content, Penguin attacks unnatural link profiles.

When you have lots of spammy or unnatural links, it’s Penguin that will penalize your site. Bad link quality or sudden spikes in a site’s link additions followed by a sudden lull will draw Penguin’s attention. Also, having too many links from the same source, such as links that are all from blog anchor text, will entice Penguin and lead to penalties.

Links are still useful. However, relevance and quality contribute to Domain Authority, which makes those links, as Google’s Matt Cutts said, “The best way we’ve found to discover how relevant or important someone is.”

Diagnosis:

As with Panda, you won’t get an email from Google; you’ll just get a whole lot less traffic. Penguin hits specific pages. If traffic to certain pages suddenly drops by more than half, it’s likely Penguin.

Treatment:

Clean up your links – and then wait. If you’re hit by Penguin, you have to wait until it comes around again to recover. Worse, the problem could be several things or a combination, so work on having a natural link profile now by avoiding the temptation to artificially link build. Google’s John Mueller recently cautioned that link building was best done naturally by making it easy to link to your content.

5. Your User Experience Is Top Heavy

Top Heavy is another Google algorithm that targets certain website configurations. Google has moved into targeting websites that offer poor user experience (UX), as well as spammy text or blacker-than-black-hat link profiles. The Top Heavy algorithm targets websites that keep their content under a huge array of banners, ads, and other non-user-oriented material. If your site requires users to scroll past ads or if you don’t have much content “above the fold,” Google thinks, “that’s not a very good user experience” and penalizes you accordingly.

Diagnosis:

How do you know? Probably only by correlating traffic drops with Top Heavy roll outs, which are infrequent, only happening once every couple of years. This is a site-wide penalty, so traffic to all pages will drop simultaneously if Top Heavy is the culprit.

Treatment:

Basic UX rules should keep you safe. Design a decent user experience, and you won’t even feel the tailwind from Top Heavy.

6. You’re Immobile

Google gets half of its traffic, as well as half the views on YouTube, from mobile. If your site isn’t good with mobile, it’s not good with Google. People want mobile sites and it’s in any site’s best interest to be mobile-friendly. Just like other algorithm penalties, this isn’t about pushing things in a certain direction; it is about reflecting user experience in search results.

Diagnosis:

To see if you’ve been penalized for poor mobile performance, you can use the Mobile Friendly Test tool. However, you should already know if your site is mobile-friendly and if it is not, mobile algorithm updates are the least of your worries. You’ll lose users when your site loads slowly, looks bad, and doesn’t work on their mobile devices.

Treatment:

Consider a mobile-first design, especially for landing pages. Mobile accounts for just under half of web use by organic search, and this amount is rapidly increasing. A mobile-first website can look great on a desktop, but the other way around doesn’t work as well.

7. Googlers Don’t Like You

Poor quality and thin content doesn’t just repel users and attract Pandas. It also attracts Google staff who will penalize you manually. Thin content is defined as:

  • Repetitive or spun content that provides little value to the user
  • Artificially-created content
  • Low-quality guest posts
  • Scraped articles

Diagnosis:

There’s no need for third-party tools or tactics. The Google Search Console will just tell you if you’ve been hit with one of these. Under Manual Actions in Search Traffic, you’ll see a notification alerting you that your site has “thin content with little or no added value.” Site-wide matches mean your whole website is being penalized, while partial matches mean only certain pages are affected.

Treatment:

Improve your content. Because it’s a manual penalty, you won’t have to wait until the algorithm updates. However, you will need to radically improve site content.

In Conclusion

If you’re not ranking for the searches you’re targeting, maybe you need an SEO overhaul. Perhaps the problem is design or links, or you might need to look at copy and content.

Whichever approach is needed – and it may be more than one – the best way to get good results is to build with a user-first focused approach with an emphasis on quality content and quality links. Make the user experience a priority. That way Google won’t penalize you, and you’ll reap the benefits of higher organic search rank and lower bounce.

Fallout From Panda Update Already Starting to Show
https://searchenginewatch.com/2015/07/24/fallout-from-panda-update-already-starting-to-show/
Fri, 24 Jul 2015

Last weekend’s Google Panda update is rolling out so slowly and sporadically that industry experts are having a hard time getting a sense of its impact on search.

Gary Illyes, Webmaster Trends Analyst at Google, announced last month at the SMX Advanced conference in Seattle that the latest Panda update, which is technically more of a refresh, would roll out in the coming months. This marks Panda’s 29th update, but the first since the September 2014 update that impacted many sites’ rankings.

Hallmark was one company that was hit particularly hard last time, losing 20 percent of its keywords, according to AJ Ghergich, founder of content marketing agency Ghergich & Co. When there’s a new update, Ghergich judges Panda’s rollouts based on the fluctuation of the sites that were most affected during the last update. He notes that in mid-July, before Illyes’ announcement, Hallmark went from ranking for 29,000 keywords to just 17,000.

“That’s a brutal hit,” Ghergich says. “It seems really weird to me that these sites are getting hit a little bit earlier so maybe [Google was] testing it on those sites. Google is probably not going to tell us, but there’s no doubt in my mind it’s related to this update.”

“I definitely think that we’re just going to have to monitor this over the course of a few weeks to really get the fallout and see the winners and losers,” he adds.

Steve Szeliga, an SEO specialist from upstate New York, agrees that it’s too early to tell, especially since many clients have come and gone between September and now. But unlike Ghergich, he didn’t see any evidence of Panda rolling out early.

Testing link-building techniques, Szeliga spammed his own affiliate sites with back links. The sudden disappearance of one site, which had survived previous updates despite its relatively thin content, is what clued him into Google’s latest change.

“Right around the same time this supposed update rolled out, I noticed the one page in particular – I was experimenting with just the homepage – had dropped out of the search engine,” Szeliga says. “It’s not reindexed because there are other pages that are still ranking, but it’s just kind of strange that it’s this one in particular. There wasn’t a massive amount of links: maybe 1,000 dripped over two weeks.”

Comparing the results with his other tests, Szeliga noticed some inconsistencies and deduced that this particular site disappeared because of its content. In addition, he’s seen four occasions of his clients’ sites that had multiple rankings dropping down to one.

“I don’t know that it’s been noticeable; I would say it’s been volatile,” he says. “Things are dancing around now quite a bit. For one of the clients I’m working with, the site is relatively new so that’s normal for them. But just the other sites they’re competing against, there’s a lot of shuffling around.”

Simple SEO Mistakes That Can Cause Damage
https://searchenginewatch.com/2015/07/07/simple-seo-mistakes-that-can-cause-damage/
Tue, 07 Jul 2015

For the average website owner, getting into trouble with Google’s organic search algorithm can happen accidentally. Between optimization, architecture, getting links and structuring data, it’s easy to make a misstep or three. Simple mistakes can unwittingly put sites at risk for ranking loss, manual penalties, and algorithmic filters.

With changes that consistently tighten Google’s “quality” belt, there are inevitably winners and losers. The slap-down nature of algorithm updates is intended to improve the quality of search results, but these updates have also created a landscape where it’s entirely possible to mess up, even while trying to do things right.

Structured Markup

Structured data penalties, invigorated after Google’s recent Quality Update, offer slightly newer opportunities to enter the danger zone (cue the Kenny Loggins).

Targeting marked-up pages that include invisible, misleading or irrelevant content makes sense on Google’s part, but the notion of relevance does create a gray area of subjectivity. Moreover, the guidelines surrounding ratings and reviews, which apply to individual products rather than lists or categories, are so precise that it would be easy to inadvertently implement the markup on disqualified pages.

While understanding the nuance of the guidelines is the first line of protection, a larger reflection on the purpose of structured markup provides an even clearer directive. Principally, consider how it adds to the user experience in a way that is more significant than the visual effect or SEO value.

Certain markups should always be about making it easier to navigate to a place or a product on the site directly from search. But that is only of value when it is the exact information a user intended to get. Reviews and ratings applied to categories do not provide the granular level of feedback a searcher may be expecting. In any situation, if the markup only makes it easier for a user to get to the wrong information, there is a problem.
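
As a hedged illustration of keeping review markup on the individual product page it describes, rather than on a category or listing page, here is a sketch that assembles a small JSON-LD block. The product details are invented, and the exact required and recommended properties should be checked against Google’s current structured data guidelines rather than taken from this example.

```python
# Illustrative sketch: build Product + AggregateRating JSON-LD for a single
# product page. The values are invented; verify required properties against
# Google's current structured data guidelines before relying on this shape.
import json

def product_review_jsonld(name, rating_value, review_count):
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": rating_value,
            "reviewCount": review_count,
        },
    }
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")

# Emit this only on the product's own page, never on category or listing pages.
print(product_review_jsonld("Example Widget", 4.6, 128))
```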

Individual Page Value

There are so many ways page creation and optimization can go wrong, putting a site at risk for Panda problems. New URLs created by search parameters, internal search results, and quick views can get cached, inflating a site’s index with inauthentic, low-value pages. The addition of new pages to provide destinations for searchers can be perceived as low quality if they are lacking in distinct user value.

Legitimate features like coupon codes, maps, listings and definitions can be considered thin by the succinct nature of the information presented. These innocuous and useful resources can be perceived as insubstantial, particularly when they comprise a considerable portion of the site’s entire composition. In all these cases, the offending behavior may be a result of small cracks in an SEO foundation, rather than a willful attempt at manipulation.

Link Building

Link building has become a minefield. Old, misbegotten links can fester. An overabundance of keyword anchor text, purposeful or not, can incur wrath. Directory placement, certain guest blogs, syndication and a number of once-popular – and unfortunately, still available – tactics can cause a site’s rankings to plummet. Then comes the difficult process of a reconsideration request in manual cases or the painful limbo of waiting between Penguin rollouts.

While these refreshes may eventually be integrated into the standard algorithm, for now they remain few and far between. Getting in bed with the wrong link provider or even failing to keep a closely monitored profile can result in bad links that can pop up like weeds in a garden. And just like weeds, if enough of them are allowed to invade, they can choke out the growth you’ve carefully cultivated.

Scrutiny is Safety

Even if current iterations of Panda and Penguin are causing less widespread devastation than in the past, continuing data refreshes can still hurt. Minor oversights in the areas of links, markup or content errors can become ticking time bombs.

If the core of your SEO strategy is users rather than search engines (creating quality content, building relationships, and leveraging multiple channels for brand visibility), you’re already on the right course.

But if good intentions can pave the way to hell, even the best SEO intentions can take a wrong turn. Having people on your team who monitor the evolution of search engine changes and can apply that insight to all areas of planning and implementation is crucial. When it comes to search, the big-picture perspective and the granular details are equally important for creating a safe and thriving strategy.

Beyond Links: Why Google Will Rank Facts in the Future
https://searchenginewatch.com/2015/06/02/beyond-links-why-google-will-rank-facts-in-the-future/
Tue, 02 Jun 2015

If Google researchers have their way, we may soon look back and laugh at the time when search engines ranked web pages based on link-driven popularity instead of factual content.

According to New Scientist, a team of Google researchers is currently working toward a future where search engines judge websites not on the number of other sites that trust them enough to link to them, but by the accuracy of their content.


Exogenous vs. Endogenous Credibility

Google researchers are brilliant, so they use words like “exogenous” to describe signals that come from outside a web page, such as hyperlink structure. They use words like “endogenous” to describe signals that come from within a web page, such as factual accuracy.

Since web pages can be littered with factual inaccuracies and still appear credible because of a high number of quality links, the Google team is pursuing a future where endogenous signals carry far more weight than exogenous signals.

In short, Google may soon be more concerned with the information your website contains than the level of trust people have in your website. New websites could immediately be ranked higher than established competitor sites just by hosting content that is more factually accurate than theirs.

“Knowledge Vault”: The Storage Room for Humanity’s Collective Information

So where do Google’s bots go to check the facts found in the web pages they crawl?

Google has quietly been building a database that contains the accumulated knowledge of the entire human race. This enormous cache of facts and information is readable by both machines and humans. Called the Knowledge Vault, this information super warehouse is locked in a cycle of self-perpetuating improvement – the more information it gathers, the more information it is able to collect. Bots scan text on web pages and then double-check what they find against the information stored in Knowledge Vault. Those same bots can then deposit new information that they “learn” from those web pages into the vault.

Researchers believe the very near future will include machines that recognize objects. When a person wearing a heads-up display looks at a bridge, for example, the device will recognize it and request information from Knowledge Vault. Knowledge Vault will instantly beam facts about the bridge back to the wearer.

For now, Knowledge Vault is just the world’s greatest fact checker – the brains behind Google’s pursuit of being able to judge web pages on their endogenous information, not their exogenous links.

Early Tests Provide Promising Results

Google tested 2.8 billion triples, which are facts discovered on and extracted from web pages. Using those triples, Google researchers were able to “reliably predict the trustworthiness of 119 million web pages and 5.6 million websites.”
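
To make the term concrete: a triple is simply a (subject, predicate, object) statement, and fact-checking a page amounts to comparing the triples extracted from it against a trusted store. The toy sketch below is purely illustrative, with invented inputs, and bears no resemblance to the scale or methods of Knowledge Vault itself.

```python
# Toy illustration of triples: compare statements "extracted" from a page
# against a small trusted store. Purely illustrative; not Google's method.
KNOWLEDGE_STORE = {
    ("Golden Gate Bridge", "opened", "1937"),
    ("Golden Gate Bridge", "spans", "Golden Gate strait"),
}

extracted_from_page = [
    ("Golden Gate Bridge", "opened", "1937"),  # agrees with the store
    ("Golden Gate Bridge", "opened", "1942"),  # contradicts the store
]

for subject, predicate, obj in extracted_from_page:
    known = {t for t in KNOWLEDGE_STORE if t[0] == subject and t[1] == predicate}
    if (subject, predicate, obj) in known:
        status = "supported"
    elif known:
        status = "contradicted"
    else:
        status = "unknown"
    print(f"({subject}, {predicate}, {obj}) -> {status}")
```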

Although this system is not yet ready to be applied Internet-wide, it could certainly supplement the signals that are currently used to evaluate a website’s quality.


In 2012, Penguin and Panda changed the relationship between SEO and search rankings. The impact felt by those algorithm updates, however, could be dwarfed by Google’s current quest to judge websites by their factual accuracy and truthfulness, as opposed to ranking pages based on links.

That future isn’t here yet, but it appears to be close. If Google’s early tests are accurate, web pages may soon be ranked by the facts they contain, not the links they receive.

Tracking the Evolution of Google Panda Updates – From Monthly to Tremors to Missing in Action
https://searchenginewatch.com/2015/03/18/tracking-the-evolution-of-google-panda-updates-from-monthly-to-tremors-to-missing-in-action/
Wed, 18 Mar 2015

At a recent industry event, Google’s Gary Illyes dropped a bombshell on the audience (and the SEO world). He explained that Panda was now real-time. And if that was the case, it would mean that if you’ve been impacted by Panda, then making the right changes would immediately be reflected in the search results (once Google recrawled and reprocessed your URLs). In other words, you can be hit, or recover, at any time. That was big news to say the least.

But here’s the problem. I had a hard time believing that was true from the second I heard it. And many others didn’t believe it was accurate, either.

I have access to a lot of Panda data across websites, categories, and countries. And based on having access to that data, I can typically see when Panda updates are released into the wild. That’s both confirmed updates like Panda 4.0 and 4.1, and unconfirmed updates like the sneaky 10/24/14 update, which I picked up while Penguin 3.0 was rolling out. By the way, that’s the Panda update John Mueller referenced during a recent webmaster hangout when speaking about the last time Google released Panda.

To be more specific about what I’ve witnessed Panda-wise, I haven’t seen any significant movement on sites impacted by Panda since the 10/24/14 update. I also haven’t seen fresh hits that resemble Panda attacks. In other words, large drops in traffic on websites susceptible to Google Panda. And again, many others who track Panda closely are saying the same thing.

John Mueller Confirms What We Thought – Panda Is Not Real-Time

In a webmaster hangout video from March 10, 2015, Barry Schwartz asked a question that many of us have been dying to know the answer to (especially since Gary dropped the real-time bombshell). He asked John if Panda was in fact real-time and if he could explain more about Gary’s comments.

John explained that the last Panda update was in October of 2014 and that he would have to check to see what Gary was referring to. It seems there are aspects of Panda that might be real-time, but you still need a Panda refresh or update in order to see the impact.

John speaks about Panda at 23:43 in the video:

The last point is incredibly important to understand, since many sites impacted by Panda are wondering when they can see recovery or partial recovery. With real-time comments being thrown around, some webmasters were left scratching their heads about why they haven’t seen any movement since October 2014. Well, if there hasn’t been a Panda refresh or update, they won’t see any movement… Instead, they need Panda to be released in order to see that impact.

Tracking the Evolution of How Google Rolls Out Panda

As the real-time situation unfolded, I started thinking about common questions I get from webmasters about Panda, how Google releases Panda updates and refreshes, when they occurred, etc. So below, I decided to provide a historical background of how Google releases Panda.

I’m not going to list all of the updates, but instead, I’ll explain when important changes occurred to how the algorithm rolls out. My hope is that the information below will clear up common misconceptions about Panda and how Google releases it into the wild. Then I’ll end this post with my thoughts about the future of Panda updates (and other major algorithms).

Panda 1.0 – The Cutssozoic Era

More than 15,000 years ago, when content farms roamed the Web, a new algorithm arrived called Farmer, I mean Panda. 🙂 When Panda first hit the scene in February of 2011, it rocked the industry. Many sites that traditionally received boatloads of traffic plummeted faster than a lead anvil in a pool. It targeted low-quality content, and content farms were the core focus of attention.


Rolling Updates Every Four to Six Weeks

After Panda 1.0, Google rolled out Panda every four to six weeks and would announce those updates. Ah, those were the days… SEOs were able to put a numbering system in place, know exactly when Panda rolled out, and companies would clearly (OK, mostly clearly) understand when they were hit and what hit them.


10-Day Rollout, But Will Not Be Confirmed – The Unconfirmazoic Era

In March 2013, Matt Cutts announced that Panda was being incorporated into Google’s normal indexing process, so they wouldn’t be announcing future Panda updates. He also said this:

“Rather than having some huge change that happens on a given day, you are more likely in the future to see Panda deployed gradually as we rebuild the index. So you are less likely to see these large scale sorts of changes.”

This led many to believe that Panda was approaching the real-time stage (which it wasn’t). And by the way, we absolutely saw large-scale impact from Panda after that date… so I’m not sure what Matt said was entirely accurate. More about that soon.

Then in June, Matt explained that Panda had matured to the point where Google trusted the algorithm more. Based on the maturation of Panda, Google would roll out Panda monthly, but it could take up to 10 days to fully roll out. He also reiterated that Google would not confirm future Panda updates because it was more a rolling update. That’s when I wrote a post about what this meant for Panda victims, and SEOs overall. I basically said that a new layer of complexity had arrived, and I was right.


So, we moved on and tracked Panda updates the best we could. As I mentioned earlier, I was able to track a number of the updates and tried to document them when possible. For example, here’s a post about the January 2014 update. I saw a number of Panda victims recover, while also getting calls from new Panda victims. The combination enables me to identify a specific date of the rollout. And here’s a post about the March 2014 update. You get the picture. So Panda was rolling out regularly, but Google just wasn’t confirming the updates. That’s until May of 2014, which I’ll cover next.

New Factors = Google Confirmation – The Hugozoic Era

Then May 19, 2014 arrived and my Panda Richter scale was moving so fast it almost set fire to my office. Panda 4.0 rolled out and it was HUGE. Google announced the update after many of us saw significant movement across sites impacted by Panda, while also seeing many fresh hits. And many of those hits were extreme.


So, what happened to the “we won’t confirm any more Panda updates” statement from Matt Cutts? Well, Panda 4.0 was so significant, and had so much impact, that they had to explain what was going on. For example, I had one company reach out to me that saw a 91 percent decrease in Google organic traffic after Panda 4.0 rolled through. Yes, 91 percent.


It seemed that major updates, with new factors added to the algorithm, would yield confirmation from Google that Panda did roll out. Well, at least we had that going for us…

Panda Goes Near-Real Time – “Panda Tremors” Emerge – The Tremorzoic Era

After Panda 4.0 rolled out, I noticed something very strange. Actually, it was fascinating to analyze. Each week after May 19, 2014, I noticed more and more movement on sites impacted by Panda. Basically, I noticed what looked like Panda refreshes almost weekly after Panda 4.0. I named them “Panda tremors,” and reached out to John Mueller for clarification.


John’s response was awesome to read. He explained that Google can, and will, tweak major algos and roll out those changes over time. So, you might see a major update like Panda 4.0, followed by smaller tweaks as they refine the algorithm. And that’s exactly what I was seeing in the months following Panda 4.0. I wrote about the near real-time Panda on my blog based on the overwhelming evidence of multiple Panda tremors following P4.0.


September 2014 – The Farozoic Era

Those tremors continued through the summer until September arrived. And then we had one of the most volatile months I can remember from a Panda standpoint. I picked up a major Panda update on 9/5 that impacted many companies. It was bigger than a tremor, as some companies saw full recovery from previous Panda updates, while new Panda victims lost significant amounts of traffic. For example, here’s a big recovery during the 9/5 update. Note, the 9/5 update was not confirmed by Google.

But Google wasn’t done yet… That was a foreshadowing of another major update that would arrive on 9/23/14. Panda 4.1 rolled out on that date and was also a huge update. I saw many recoveries, especially from Panda 4.0 victims that had completed a lot of remediation work. And mixed in, I saw some temporary recoveries roll back to lower levels.

Panda 4.1 was announced by Google’s Pierre Far, so you knew it was a significant update. So, we had two major Panda updates during September. Like I said earlier, it was a big month Panda-wise.

The Cloaked Panda Update on 10/24 – The Cloakazoic Era

October arrived, and between Panda rollouts galore and waiting for Penguin to finally roll out, many SEOs were going out of their minds. Then on October 17, 2014, the wait for Penguin was over. Google finally rolled out Penguin 3.0. And it was… a disaster. I won't go into detail here, since that's not the focus of this post, but let's just say the rollout was all over the place.

But something happened during the extended Penguin rollout that caught my attention (understatement of the year). I saw massive swings in rankings and traffic on 10/24/14 across sites impacted by Panda, not Penguin. And not only did I see this across the data I have access to, I also had many people reach out to me explaining they were seeing the same thing. Again, with sites impacted by Panda, not Penguin.

So, we had a big Panda update rolling out during an extended Penguin update. Holy cow, Google was really messing with us. 🙂 You can read more about the 10/24/14 Panda update on Moz, where I wrote an entire post about the situation. Needless to say, it was a sneaky update, since most webmasters would think they were hit by Penguin, when in fact they were impacted by Panda!

4.5 Months of Panda Silence – The Silencozoic Era

We saw so much volatility during the fall of 2014 that it was strange to see Panda activity screech to a halt. But that's exactly what happened after the 10/24 update. Now, Google will often hold off on releasing major algorithm updates during the holidays, so that didn't shock me too much. But then January arrived and all was still quiet on the Panda front.

Last year, Google rolled out Panda on 1/10/14, so my thought was that they would do something similar in 2015. But Panda did not roll out in January, or February, and it hasn't rolled out yet in March. That's a long time for Panda victims to sit in limbo. By the way, doesn't it remind you of another major algorithm update that didn't roll out for a long time? Cough, Penguin. That was an extreme situation, as we waited more than a year for Penguin to roll out. But this extended silence is extremely unusual for Panda, which, again, was running regularly in 2014. That's a good segue to my thoughts about the future of Panda.

My Thoughts About the Future of Panda (and Other Major Algorithms)

In my Moz post about the 10/24 update, I explained that we were approaching a time when major algorithms will go near-real time (or run in actual real time), which can cause massive confusion for webmasters. Unconfirmed updates that rock sites at any given time could cause serious problems for everyone involved (business owners, webmasters, SEOs trying to help, etc.).

Below, I’ll provide a bulleted list covering my thoughts about the future of Panda, what’s going on currently with the algorithm, and where I think we are headed:

  • The Previous Panda Update: The 10/24/14 Panda update was the last update we experienced. To me, and to others heavily involved in Panda work, that was the last date that sites impacted by Panda saw significant movement.
  • When Will Panda Finally Roll Out?: My gut feeling is that Panda will be released this month (before the mobile UX algo is pushed out on 4/21/15). John Mueller has explained that they are trying to get things moving a little quicker with Panda, so I expect an update or refresh soon.
  • Mobile UX Algo Pushing Panda to the Backburner: I do believe the impending mobile UX algorithm update on 4/21 has been taking up a lot of Google's time. It's going to be a huge update with significant impact on the smartphone search results, so I'm sure they have been busy testing and refining the algo. That has to be taking time away from Panda.
  • New Factors?: The Panda delay could also be based on new factors being baked into Panda. If that's the case, then expect Panda 5.0 (or whatever it's called) to be huge. It could be on the level of Panda 4.0 or 4.1 (both of which had significant impact).
  • Mobile UX Algo + Panda?: This is just a conspiracy theory, but Panda might be rolled out at the same time as the mobile UX algo, and possibly incorporate more mobile factors. Now wouldn't that be scary? It's entirely possible, based on the surge in mobile traffic over the past few years.
  • The Algo Trifecta: And for my last bullet, imagine if Google rolled out the mobile UX algo, Panda, and Penguin all at the same time. That would be the algo trifecta, and I fear the universe would implode. I don't know if Google would really do that, since it could cause problems for them, too… But it's entirely possible. Anyone remember the algo sandwich from April of 2012? That's when Google rolled out Panda, then Penguin 1.0, and then Panda again, all within 10 days. Talk about confusion…

Summary – In Search of Google Panda

I hope you found this post about the history of Panda updates both interesting and helpful. I know there is a lot of confusion about Panda in general, and with how Google releases the algorithm, so I hope this post cleared up some of that confusion. In closing, I do believe we’ll see Panda again soon. The big question is whether he’ll be accompanied by a new black and white animal focused on mobile UX. And let’s hope they keep their Penguin friend at home. One thing is for sure, the next four to eight weeks will yield significant activity from a Google algorithm standpoint.

This is literally the calm before the storm. Enjoy it while you can.

Spotlight On: Tribal Worldwide's Director of SEO, Steve Liu
https://searchenginewatch.com/2015/02/24/spotlight-on-tribal-worldwides-director-of-seo-steve-liu/ Tue, 24 Feb 2015 15:00:00 +0000

Tribal Worldwide's director of SEO, Steve Liu, established the agency's search practice when he joined in 2012. He now leads the department, working on anything and everything SEO, including keyword research, content strategy, link strategy, and social media.

Search Engine Watch (SEW): Some say that SEO is dead. What are your thoughts on that?

Steve Liu (SL): I think there are two schools of thought on SEO. One school relies on technical tactics: you use certain keywords, and you try to acquire links. The other tries to optimize the user experience. Instead of saying we need to start using these keywords in this content, we say that we want to understand our users and what they want to read. In that process we can use certain types of words.

The best way I can answer this is that if your definition of “SEO” is merely a collection of techniques and tricks meant to reverse-engineer and manipulate Google’s algorithm then yes, SEO died a long time ago.

On the other hand, if your definition of “SEO” is having a deep understanding of your users’ needs, understanding what words they use in searching for answers, understanding what kinds of content they want to consume to find answers, and understanding how this information is shared, then SEO is alive and well.

Put another way, up to a few years ago SEO was typically done in a silo — companies would develop their Web properties and then, almost as an afterthought, send them to an SEO consultant or agency to "do SEO." I think today, proper SEO should be approached less as a one-off, specialized task and more as something integrated into every part of building digital projects, serving as continual validation of a site's technical development, UX, content strategy, and outreach strategy.

SEW: In your opinion, which one is more important, keyword ranking or traffic?

SL: Honestly, at the end of the day, neither of these things means anything if you don't care about conversion or engagement on your site. I think keyword rank and traffic are both important, but they are almost secondary to helping users accomplish what they want to accomplish on your site.

I think different clients have different philosophies. From my perspective, both are important.

SEW: It appears that SEO isn’t just about keyword search anymore and has evolved into many things. This is evident in the areas that your team has been working on. What evolutionary changes have you seen in SEO in the past few years?

SL: It’s an interesting question, because I got into SEO almost 10 years ago, even before [people] had a word for it. The industry has evolved so much. When I started, it was mostly about user experience. Over the years, more people are jumping into SEO. Now it’s more about “manipulation,” or finding holes in Google’s algorithms. And that’s where you saw tactics such as keyword stuffing.

I remember one particular dark day in the mid-2000s. I was working for a major e-commerce company where we were dominating most of our keywords in organic search. Suddenly I noticed one of our competitors outranking us overnight for nearly every term. It didn’t take long to figure out how — they were getting thousands of artificial paid links. I remember I had a chance to talk about it with a Google engineer, who basically told me not to worry as “the algorithm would take care of it.” But soon these tactics were so successful that anyone who did SEO almost had to practice them. I think this is when SEO got a bad name in many people’s minds.

It took about five years, but Google finally introduced Panda, and then Penguin, and then Hummingbird. Yes, there was some collateral damage to certain sites, but overall I think they did a pretty good job at weeding out the worst offenders and promoting sites that did things right. So in a way we’ve come full circle — we’re at a point now where SEO is where it should have been all along — not trying to find holes in Google’s algorithm to exploit, but rather building the best experience for your users.

SEW: Google’s major local algorithm update “Pigeon” hit the U.S. last July. How were your clients affected by that change?

SL: In a very positive way. It's funny that local search has evolved in the same way organic search has. Three or four years ago, it was very easy to rank in Google's local listings by using a couple of simple tactics: claim and optimize your profile, build citations, and so on. What happened was people were exploiting the algorithm, because it was so simple to do so. With "Pigeon," Google started to get more sophisticated and started to apply to local search some of the signals that made its organic ranking algorithm so successful. In a world where mobile and local search is going to dominate, Google knew they had to get this one right quickly.

SEW: User experience is an indispensable part of mobile SEO and responsive design has since fallen into this. Is responsive design a one-size-fits-all strategy? Are there any other alternatives?

SL: I have been working on responsive design a lot in recent months. Years ago most companies were maintaining a separate mobile site and desktop site, which in most cases didn’t work out too well – most of them would focus on their desktop site and forget about their mobile site. So I see responsive design as a huge positive in terms of helping people manage that process better. They build a site once that can serve both desktop users and mobile users.

But I think lots of agencies stop here, which is a mistake. I think the next evolution is going to be tailoring the mobile experience for mobile users, and tailoring the desktop experience for desktop users. Technologically you can still use responsive technology or adaptive technology, but the most important thing from a user point of view is to understand the needs, the motivations, and the habits of users who are coming from a mobile device versus desktop device.

The winners in search aren’t going to be the ones who take a desktop experience and serve it up as-is on a mobile device, nor the ones who proclaim “mobile first” but end up serving their desktop users a limited experience. It’s the ones who will be smart enough to serve the right content to the right people in the right place. And since Google and Bing are getting smart enough to identify great mobile experiences, people who do this won’t be able to help but be successful.

SEW: If you were to give marketers advice on mobile and local search, what would it be?

SL: For local search, the biggest advice I can give is that big companies should never take their SEO for granted. While a lot of big companies will focus most of their energy on their corporate site and corporate branding, often they don’t realize that most consumers experience their brand at a local level. So for all the care and feeding they do for their corporate brand as far as content development, PR, and marketing, they need to do the same for their local presences as well.

Of the top 10 public companies in the world, eight have a local presence in the form of stores, stations, and dealers. And yet I'm still surprised whenever I see store locators that aren't optimized or even indexed, or local listings on Google, Yelp, and other local search sites that aren't properly claimed or populated with rich content.

For mobile search, getting a responsive site is great to get into the game. But to differentiate yourself, you really need to optimize your user experience for mobile users versus desktop users.

Another piece of advice on mobile search revolves around one of the biggest changes I see coming this year. This is the advent of natural language search. Natural language processing was largely a curiosity in 2011 when IBM’s Watson won Jeopardy. But then Apple released Siri, Google followed with Google Now, and this year we’ll see Amazon Echo and Microsoft Cortana try to stake claims in natural language processing. Why? Because unlike other interfaces such as keyboards, mice, and touchscreens, humans have already mastered how to use their voice by the time they’re three. And these companies know that the more intuitive the interface, the more people will use it.

From an SEO perspective, whether you’re asking Google Now for the movies playing in your area or asking your Apple Watch where the closest burger joint is, that’s a form of search. So mastery of SEO practices – understanding the long-tail of how your audience searches for things, creating authoritative and engaging content to answer those needs, and having content so great your users will want to share it – have been and always will be the keys to success.

Panda Remediation and CMS Limitations – 5 Problems You Might Encounter
https://searchenginewatch.com/2015/02/11/panda-remediation-and-cms-limitations-5-problems-you-might-encounter/ Wed, 11 Feb 2015 14:30:00 +0000

When you’ve been hit by Panda it’s extremely important to quickly identify the root causes of the attack. I typically jump into a deep crawl analysis of the site while performing an extensive audit through the lens of Panda. The result is a remediation plan covering a number of core website problems that need to be rectified sooner than later.

And for larger-scale websites, the remediation plan can be long and complex. It’s one of the reasons I tend to break up the results into smaller pieces as the analysis goes on. I don’t want to dump 25 pages of changes into the lap of a business owner or marketing team all at one time. That can take the wind out of their sails in a hurry.

But just because problems have been identified, and a remediation plan mapped out, that does not mean all is good in Panda-land. There may be times when serious problems cannot be easily resolved. And if you can't tackle low-quality content on a large-scale site hit by Panda, you might want to get used to demoted rankings and low traffic levels.
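
To give a feel for what surfacing low-quality content can look like in practice, here is a minimal sketch that fetches a handful of URLs and flags pages with very little visible text. The URLs and the 250-word cutoff are illustrative assumptions, not rules; thin content is ultimately about value to the user, not just word count.

    # Hypothetical sketch: a tiny "crawl through the lens of Panda" that flags
    # potentially thin pages by visible word count. The URL list and the
    # 250-word threshold are illustrative assumptions.
    import urllib.request
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        SKIP = {"script", "style", "noscript"}
        def __init__(self):
            super().__init__()
            self.skip_depth = 0
            self.words = 0
        def handle_starttag(self, tag, attrs):
            if tag in self.SKIP:
                self.skip_depth += 1
        def handle_endtag(self, tag):
            if tag in self.SKIP and self.skip_depth:
                self.skip_depth -= 1
        def handle_data(self, data):
            if not self.skip_depth:
                self.words += len(data.split())

    def word_count(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="ignore")
        parser = TextExtractor()
        parser.feed(html)
        return parser.words

    urls = ["https://www.example.com/page-1", "https://www.example.com/page-2"]
    for url in urls:
        count = word_count(url)
        flag = "  <-- possibly thin" if count < 250 else ""
        print(f"{count:6d} words  {url}{flag}")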

When Your CMS Is the Problem

One problem in particular that I’ve come across when dealing with Panda remediation is the dreaded content management system (CMS) obstacle. And I’m using “CMS” loosely here, since some internal systems are not actually content management systems. They simply provide a rudimentary mechanism for getting information onto a website. There’s a difference between that and a full-blown CMS. Regardless, the CMS being used can make Panda changes easy, or it can make them very hard. Each situation is different, but again, it’s something I’ve come across a number of times while helping clients.

When presenting the remediation plan to a client’s team, there are usually people representing various aspects of the business in the meeting. There might be people from marketing, sales, development, IT and engineering, and even C-level executives on the call. And that’s awesome. In my opinion, everyone needs to be on the same page when dealing with an issue as large as Panda.

But at times IT and engineering have the task of bringing a sense of reality to how effectively changes can be implemented. And I don't envy them for being in that position. There's a lot of traffic and revenue on the line, and nobody wants to be the person that says, "we can't do that."

For example, imagine you surfaced 300,000 pages of thin content after getting pummeled by Panda. The pages have been identified, including the directories that have been impacted, but the custom CMS will not enable you to easily handle that content. When the CMS limitations are explained, the room goes silent.

That’s just one real-world example I’ve come across while helping companies with Panda attacks. It’s not a comfortable situation, and absolutely needs to be addressed.

Trapped With Panda Problems

So what types of CMS obstacles could you run into when trying to recover from Panda? Unfortunately, there are many, and they can sometimes be specifically tied to your own custom CMS. Below, I’ll cover five problems I’ve seen first-hand while helping clients with Panda remediation. Note, I can’t cover all potential CMS problems that can inhibit Panda recovery, but I did focus on five core issues. Then I’ll cover some tips for overcoming those obstacles.

1. 404s and 410s

When you hunt down low-quality content, and have a giant list of URLs to nuke (remove from the site), you want to issue either 404 or 410 response codes for those URLs. So you approach your dev team and explain the situation. But unfortunately for some content management systems, it's not so easy to isolate specific URLs to remove. And if you cannot remove those specific low-quality URLs, you may never escape the Panda filter. It's a catch-22 with a Panda rub.

In my experience, I’ve seen CMS packages that could 404 pages, but only by major category. So you would be throwing the baby out with the bath water. When you nuke a category, you would be nuking high-quality content along with low-quality content. Not good, and defeats the purpose of what you are trying to accomplish with your Panda remediation.

I’ve also seen CMS platforms that could only remove select content from a specific date forward or backward. And that’s not good either. Again, you would be nuking good content with low-quality content, all based on date. The goal is to boost the percentage of high-quality content on your site, not to obliterate large sections of content that can include both high- and low-quality URLs.

2. Meta Robots Tag

Similar to what I listed above, if you need to noindex content (versus remove), then your CMS must enable you to dynamically provide the meta robots tag. For example, if you find 50,000 pages of content on the site that is valuable for users to access, but you don’t want the content indexed by Google, you could provide the meta robots tag on each page using “noindex, follow.” The pages won’t be indexed, but the links on the page would be followed. Or you could use “noindex, nofollow” where the pages wouldn’t be indexed and the links wouldn’t be followed. It depends on your specific situation.

But once again, the CMS could throw up obstacles to getting this implemented. I've seen situations where, once a meta robots tag is in the page's code, it's impossible to change. Or I've seen multiple meta robots tags used on the same page in an effort to noindex low-quality content.

And beyond that, there have been times where the meta robots tag isn’t even an option in the CMS. That’s right, you can’t issue the tag even if you wanted to. Or, similar to what I explained earlier, you can’t selectively use the tag. It’s a category or class-level directive that would force you to noindex high-quality content along with low-quality content. And we’ve already covered why that’s not good.

The meta robots tag can be a powerful piece of code in SEO, but you need to be able to use it correctly and selectively. If not, it can have serious ramifications.
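
Here is a minimal sketch of the kind of template helper that makes the meta robots tag selective: it emits noindex only for paths you have flagged, and nothing for everything else. The path prefixes and the directive choice are assumptions for illustration.

    # Hypothetical sketch: emit a meta robots tag only for pages flagged as
    # "keep for users, hide from the index". The NOINDEX_PATHS prefixes are
    # illustrative assumptions.
    NOINDEX_PATHS = {
        "/printer-friendly/",
        "/internal-search/",
    }

    def meta_robots_tag(path, follow_links=True):
        """Return a meta robots tag for pages that should stay out of the index,
        or an empty string for pages that should be indexed normally."""
        if not any(path.startswith(prefix) for prefix in NOINDEX_PATHS):
            return ""
        directive = "noindex, follow" if follow_links else "noindex, nofollow"
        return f'<meta name="robots" content="{directive}">'

    print(meta_robots_tag("/printer-friendly/widget-123"))  # noindex, follow
    print(meta_robots_tag("/products/widget-123"))          # "" (indexable)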

3. Nofollow

The proper use of nofollow cuts across algorithms, and I'm including it in this post because I've encountered nofollow problems during Panda projects. But it can help with link penalties and Penguin situations too. And let's hope your CMS cooperates when you need it to.

For example, I’ve helped some large affiliate websites that had a massive followed links problem. Affiliate links should be nofollowed and should not flow PageRank to destination websites (where there is a business relationship). But what if you have a situation where all, or most of, your affiliate links were followed? Let’s say your site has 2 million pages indexed and contains many followed affiliate links to e-commerce websites. The best way to handle this situation is to simply nofollow all affiliate links throughout the content, while leaving any natural links intact (followed). That should be easy, right? Not so fast…

What seems like a quick fix via a content management system could turn out to be a real headache. Some custom CMS platforms can only nofollow all links on the page, and that’s definitely not what you want to do. You only want to selectively nofollow affiliate links.

In other situations, I’ve seen CMS packages only be able to nofollow links from a certain date forward, as upgrades to the CMS finally enabled selective nofollows. But what about the 400,000 pages that were indexed before that date? You don’t want to leave those as-is if there are followed affiliate links. Again, a straightforward situation that suddenly becomes a challenge for business owners dealing with Panda.

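As an illustration of selective nofollow, here is a minimal sketch that rewrites stored HTML and adds rel="nofollow" only to links pointing at known affiliate hosts, leaving natural links untouched. It uses the third-party BeautifulSoup library, and the affiliate hostnames are placeholder assumptions.

    # Hypothetical sketch: selectively nofollow affiliate links when the CMS can
    # only nofollow everything or nothing. AFFILIATE_HOSTS entries are assumptions.
    from bs4 import BeautifulSoup
    from urllib.parse import urlparse

    AFFILIATE_HOSTS = {"affiliate-network.example.com", "partner-shop.example.com"}

    def nofollow_affiliate_links(html):
        soup = BeautifulSoup(html, "html.parser")
        for a in soup.find_all("a", href=True):
            host = urlparse(a["href"]).netloc.lower()
            if host in AFFILIATE_HOSTS:
                a["rel"] = "nofollow"      # only affiliate links are touched
        return str(soup)

    html = '<p><a href="https://affiliate-network.example.com/deal">Deal</a> ' \
           'and a <a href="https://en.wikipedia.org/wiki/Panda">natural link</a>.</p>'
    print(nofollow_affiliate_links(html))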

4. Rel Canonical

There are times that a URL gets replicated on a large-scale website (for multiple reasons). So that one URL turns into four or five URLs (or more). And on a site that houses millions of pages of content, the problem could quickly get out of control. You could end up with tens of thousands, hundreds of thousands, or even millions of duplicate URLs.

You would obviously want to fix the root problem of producing non-canonical versions of URLs, but I won’t go down that path for now. Let’s just say you wanted to use the canonical URL tag on each duplicate URL pointing to the canonical URL. That should be easy, right? Again, not always…

I’ve seen some older CMS packages not support rel canonical at all. Then you have similar situations to what I explained above with 404s and noindex. Basically, the CMS is incapable of selectively issuing the canonical URL tag. It can produce a self-referencing href (pointing to itself), but it can’t be customized. So all of the duplicate URLs might include the canonical URL tag, but they are self-referencing. That actually reinforces the duplicate content problem… instead of consolidating indexing properties to the canonical URL.

By the way, I wrote a post in December covering a number of dangerous rel canonical problems that can cause serious issues. I recommend reading that post if you aren’t familiar with how rel canonical can impact your SEO efforts.
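
Here is a minimal sketch of what correct, selective canonicalization looks like: duplicate URLs that differ only by tracking or session parameters all emit a canonical tag pointing at one clean URL. The list of parameters that genuinely select content is an assumption you would tailor to your own site.

    # Hypothetical sketch: build one canonical URL per page by stripping the
    # parameters that spawn duplicates, then emit a rel canonical tag pointing
    # at it (instead of a self-referencing one). CONTENT_PARAMS is an assumption.
    from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

    CONTENT_PARAMS = {"id", "page"}   # parameters that actually select content

    def canonical_url(url):
        parts = urlparse(url)
        kept = [(k, v) for k, v in parse_qsl(parts.query) if k in CONTENT_PARAMS]
        return urlunparse(parts._replace(query=urlencode(sorted(kept)), fragment=""))

    def canonical_tag(url):
        return f'<link rel="canonical" href="{canonical_url(url)}">'

    dupes = [
        "https://www.example.com/article?id=42&sessionid=abc123",
        "https://www.example.com/article?utm_source=newsletter&id=42",
    ]
    for url in dupes:
        print(canonical_tag(url))   # both point to .../article?id=42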

5. Robots.txt

And last, but not least, I’ve seen situations where a robots.txt file had more security and limitations around it than the President of the United States. For certain CMS packages or custom CMS platforms, there are times the robots.txt file can only contain certain directives, while only being implemented via the CMS itself (with no customization possible).

For example, maybe you can only disallow major directories on the site, but not specific files. Or maybe you can disallow certain files, but you can't use wildcards. By the way, the limitations might have been put in place by the CMS developers with good intentions. They understood the power of robots.txt, but didn't leave enough room for scalability. And they definitely didn't have Panda in mind, especially since some of the content management systems I've come across were developed before Panda hit the scene!

In other robots.txt situations, I’ve seen custom changes get wiped out nightly (or randomly) as the CMS pushes out the latest robots.txt file automatically. Talk about frustrating. Imagine customizing a robots.txt file only to see it revert back at midnight. It’s like a warped version of Cinderella, only this time it’s a bamboo slipper and the prince of darkness. Needless to say, it’s important to have control of your robots.txt file. It’s an essential mechanism for controlling how search bots crawl your website.

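One way to keep control of robots.txt when the CMS keeps overwriting it is to generate the file from a structured rule list and re-publish it after every deployment. Here is a minimal sketch; the directories and the wildcard pattern are illustrative assumptions, and support for wildcards varies by search engine.

    # Hypothetical sketch: generate robots.txt from a structured rule list so the
    # file can be re-published after each CMS release instead of being hand-edited
    # and lost. The directives below are illustrative assumptions.
    RULES = {
        "*": [
            "Disallow: /internal-search/",
            "Disallow: /printer-friendly/",
            "Disallow: /*?sessionid=",     # wildcard pattern for session IDs
        ],
    }

    def build_robots_txt(rules, sitemap=None):
        lines = []
        for agent, directives in rules.items():
            lines.append(f"User-agent: {agent}")
            lines.extend(directives)
            lines.append("")                # blank line between agent groups
        if sitemap:
            lines.append(f"Sitemap: {sitemap}")
        return "\n".join(lines) + "\n"

    with open("robots.txt", "w") as f:
        f.write(build_robots_txt(RULES, sitemap="https://www.example.com/sitemap.xml"))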

What Could You Do?

When you run into CMS limitations while working on Panda remediation, you have several options for moving forward. The path you choose to travel completely depends on your own situation, the organization you work for, budget limitations, and resources. Below, I’ll cover a few ways you can move forward, based on helping a number of clients with similar situations.

1. Modify the CMS

This is the most obvious choice when you run into CMS functionality problems. And if you have the development chops in-house, then this can be a viable way to go. You can identify all of the issues SEO-wise that the CMS is producing, map out a plan of attack, and develop what you need. Then you can thoroughly test in a staging environment and roll out the new and improved CMS over time.

By tackling the root problems (the CMS functionality itself), you can be sure that the site will be in much better shape SEO-wise, not only in the short-term, but over the long-term as well. And if developed with the future in mind, then the CMS will be open to additional modifications as more technical changes are needed.

The downside is you’ll need seasoned developers, a budget, the time to work on the modifications, test them, debug problems, etc. Some organizations are large enough to take on the challenge and the cost, while other smaller companies will not. In my experience, this has been a strong path to take when dealing with CMS limitations SEO-wise.

2. Migrate to a New CMS

I can hear you groaning about this one already. 🙂 In serious situations, where the CMS is so bad and so limiting, some companies choose to move to an entirely new CMS. Remember, Panda hits can sometimes suck the life out of a website, so grave SEO situations sometimes call for hard decisions. If the benefits of migrating to a new CMS far outweigh the potential pitfalls of the migration SEO-wise, then this could be a viable way to go for some companies.

But make no bones about it, you will now be dealing with a full-blown CMS migration. And that brings a number of serious risks with it. For example, you'll need to do a killer job of migrating the URLs, which includes a solid redirection plan. You'll need to ensure valuable inbound links don't get dropped along the way. You'll need to make sure the user experience doesn't suffer (across devices). And you'll have a host of other concerns and mini-projects that come along with a redesign or CMS migration.

For larger-scale sites, this is no easy feat. Actually, redesigns and CMS migrations are two of the top reasons I get calls about significant drops in organic search traffic. Just understand this before you pull the trigger on migrating to a new CMS. It’s not for the faint of heart.
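
If you do go the migration route, the redirect map deserves the same rigor as the content itself. Here is a minimal sketch that spot-checks a few mappings and confirms each old URL answers with a 301 to the expected new URL; the URL pairs are illustrative assumptions.

    # Hypothetical sketch: spot-check a redirect map after a CMS migration.
    # Uses HEAD requests so no page bodies are downloaded; REDIRECT_MAP entries
    # are illustrative assumptions.
    import http.client
    from urllib.parse import urlparse

    REDIRECT_MAP = {
        "https://www.example.com/old-category/old-page": "https://www.example.com/new-page",
    }

    for old, expected in REDIRECT_MAP.items():
        parts = urlparse(old)
        conn = http.client.HTTPSConnection(parts.netloc, timeout=10)
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()               # http.client never follows redirects
        location = resp.getheader("Location")
        status = "OK" if (resp.status == 301 and location == expected) else "CHECK"
        print(f"{status}: {old} -> {resp.status} {location}")
        conn.close()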

3. Project Frankenstein – Tackle What You Can, and When You Can

Panda is algorithmic, and algorithms are all about percentages. In a perfect world, you would tackle every possible Panda problem riddling your website. But in reality, some companies cannot do this. But you might be able to still recover from Panda without tackling every single problem. Don’t get me wrong, I’ve written before about band-aids not being a long-term solution for Panda recovery, but if you can tackle a good percentage of problems, then you might rid yourself of the Panda filter.

Let me emphasize that this is not the optimal path to take, but if you can’t take any other path, then do your best with Project Frankenstein.

For example, if you can’t make significant changes to your CMS (development-wise), and you can’t migrate to a new CMS, then maybe you can still knock out some tasks in the remediation plan that remove a good amount of thin and low-quality content. I’ve had a number of clients in this situation over the years, and this approach has worked for some of them.

As a quick example, one client focused on four big wins based on the remediation plan I mapped out. They were able to nuke 515,000 pages of thin and low-quality content from the site based on just one find from the crawl analysis and audit. Now, it’s a large site, but that’s still a huge find Panda-wise. And when you added the other three items they could tackle from the remediation plan, the total amount of low-quality content removed from the site topped 600,000 pages.

So although Frankenstein projects aren’t sexy or ultra-organized, they still have a chance of working from a Panda remediation standpoint. Just look for big wins that can be forced through. And try and knock out large chunks of low-quality content while publishing high-quality content on a regular basis. Again, it’s about percentages.

Summary – Don’t Let Your CMS Inhibit Panda Recovery

Panda remediation is tough, but it can be exponentially tougher when your content management system (CMS) gets in the way. When you've been hit by Panda, you need to work hard to improve content quality on your website (which means removing low-quality content while also creating or boosting high-quality content). Don't let your CMS inhibit a Panda recovery by placing obstacles in your way. Instead, understand the core limitations, meet with your dev and engineering teams to work through the problems, and figure out the best way to overcome those obstacles. That's a strong, long-term approach to ridding your site of Panda.
