24 Sep 2014

Google Webmaster Central: Office-Hours Hangout September 12th

The following is a complete transcript of a Hangout on Air on September 12th with Google’s John Mueller.  It has been edited for readability.

Secure and Sitemaps

Will Notifications to Webmasters about Algorithmic Penalties Improve?

Will PageRank pass to a site that has a 301 redirect but is blocked by robots.txt?

Can blocking duplicate pages prevent penalties and help a site rank better?

If you have pages with small variations, like size and color, how should you display these pages?

Does using meta description tags with detailed keywords help to rank for those keywords?

How long does it take for hreflang tags to take effect?

Is it better to add hreflang code in the HTML or XML Sitemap?

Is it okay to redirect visitors based on IP address to the correct language version?

Does using “Read More” buttons devalue a site?

Are Implied Links a Ranking factor?

If you move a Penguin Penalized site to HTTPS, do you start fresh with a new disavow?

Can a Penguin Penalized site recover its traffic once all the questionable links have been removed?

When you 301 Redirect a penalized site, do you lose rankings but keep 100% of the penalty?

Is Schema markup something that Google considers for rankings?

Do you lose link equity with chained redirects at every jump?

Is the number of posts on your blog a ranking factor? Is lots of content better?

Does Google give new sites a temporary boost in rankings to track user interaction?

Is Authorship used in rankings, or as a measure for quality of content?

Should I continue promoting a Penguin penalized site if PageRank has been devalued?

Does Google test algorithms in a live environment?

How long should you keep a 301 live for URLs that have changed?

Do you support schema.org with JSON-LD to show rich snippets?

Will Penguin 3.0 launch in 2014?

How do I fix Crawl Errors from a WordPress Blog when I can’t find the errors?

What can you do to recover traffic for a site after a manual penalty has been removed?

Does Google use server speed/capacity as a ranking signal?

John Mueller: Hi, welcome everyone to today’s Google Webmaster Central office-hours hangout. My name is John Mueller. I’m a webmaster trends analyst here at Google Switzerland and I’m here to help answer your webmaster and web search related questions. Looks like a lot of people are in here already, so to start off with, do any of you want to jump in and ask the first question?

Q: Secure and Sitemaps

Speaker 1: John, can I ask a question about secure and sitemaps?

John Mueller: Sure.

Speaker 1: Very recently we moved our domain to HTTPS, and we did that by enforcing 301 redirects, and within the sitemap as well. I think we did it wrong, because suddenly we’re getting errors from Webmaster Tools saying “you’ve submitted 1,500 URLs and only 12 have been indexed.” I think that’s because the sitemap immediately also 301ed over from non-secure to secure, so as soon as the bot got to the sitemap, it couldn’t find any URLs for the site it was supposed to be indexing. So the normal site should have a normal sitemap and the secure site should have a normal sitemap, we just let Google do its thing and index both, and then 301 on a page level. Is that right?

John Mueller: Yeah, so  basically you 301 redirect all of your old pages to the secure ones? Is that correct?

Speaker 1: Yes. We also did it on the sitemaps so webmaster tools on the normal one started to have a bit of an effect and say “I can’t find anything”.

John Mueller: Yeah. I mean there’s an option of temporarily submitting the old URLs in the sitemap file so that we can crawl those a little bit faster, and what you could do there is say that you changed these old URLs and set the last modification date for those. We’ll go and crawl those URLs, see the redirect to the secure version and follow that. Within sitemaps it’s likely you’ll see some kind of a warning there saying “You’re submitting URLs that are actually redirecting”, something like that, and that’s completely fine. That’s something you know you’re doing, it’s something you’re doing on purpose so that those redirects are found and followed, so that’s completely fine.

It would be a little bit different if you submitted those URLs and you expected the insecure versions to show up in the index because you submitted them in the sitemap; then that warning would be relevant. But if you’re moving from one version of a site to another version, then that warning essentially just tells you that they’re being processed normally.

Speaker 1: Right. And if I have two sitemaps, one for the secure site and one for the non-secure, that’s also fine?

John Mueller: That’s fine.

Speaker 1: And Google will choose one?

John Mueller: Yeah, that’s completely fine.

Speaker 1: That would be better.

John Mueller: I think in the long run you’d want to focus on just the version that you want to keep, but in the short term, when you’re doing a site move like this, that’s totally fine. Just make sure the sitemap for the secure version, where you have your content, is in there as well.

Speaker 1: Ok. So give it two or three weeks to let that all get fully re-indexed on secure.

John Mueller: Yeah, exactly.

Speaker 1: I’ll go to Webmaster Tools and see what happens.

John Mueller: That should work, yeah.
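As a concrete illustration of the approach John describes, a temporary sitemap listing the old HTTP URLs might look something like the sketch below; the domain, paths, and dates are placeholders, not details from the hangout.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Temporary sitemap listing the old HTTP URLs so the 301 redirects to HTTPS
     are recrawled quickly. Domain, paths, and dates are placeholders. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/products/widget</loc>
    <!-- Set lastmod to the date of the HTTPS migration so the URL is recrawled -->
    <lastmod>2014-09-10</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/about</loc>
    <lastmod>2014-09-10</lastmod>
  </url>
</urlset>
```

As John notes, Webmaster Tools may warn that these submitted URLs redirect; in this scenario that warning is expected.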

Speaker 2: Hi John, can I ask a question?

John Mueller: Yeah, sure, go for it.

Q: Will Notifications to Webmasters about Algorithmic penalties improve?

Speaker 2: Is there any possibility that in the future Webmaster Tools will show some information regarding algorithmic penalties such as Panda or Penguin? I assume you haven’t done so far because maybe if you did, you’d have to show what exactly you consider low quality links and things like that.

It would help us as SEOs, even a small notification like “we noticed an increasing amount of low quality links to your website, please check our guidelines” or something like that, without actually giving examples. That would also help webmasters to know when somebody on their team is doing a bad job, and that would help them research more and find out more about what the quality guidelines are.

It would also help SEOs when, for example, I take on clients who paid for a year with an SEO firm that brought them a lot of blackhat links and directories. They were very skeptical about removing or disavowing those links because they paid for them, and it’s just my word against the other SEO team’s word, so they were not convinced that they should remove those links without any proof that their site would be negatively affected by those links.

So even a small notification would help, just saying that there’s something going on; you don’t even have to tell them they’re penalized, just say that there are a bunch of low quality links that maybe you want to take a look at. That’s more of a suggestion than a question.

John Mueller: Yeah. It’s a tricky topic.

We bring this up with the engineering teams regularly to see what we can do to bring a little bit more of that algorithmic information into Webmaster Tools, so that webmasters have a little bit more insight into that as well. The main problem is really that our algorithms aren’t really made for showing information to the webmaster; they’re made to optimize the search results so that the good results bubble up. Because of that, we can’t really bring that information one-to-one into Webmaster Tools, because it would be more confusing to the webmasters.

So, sometimes for example we recognize that a site is ranking for the wrong terms: it’s ranking really well for this term, but not for this other term that we think is more relevant. Our algorithms might step in and say “well, your site isn’t so relevant for this term here, but it is for this one here”, and if we were to show that to the webmaster and say “we think your site is low quality because it doesn’t match what it was ranking for, what we were showing in our search results”, then the webmaster might say “oh, well, this is something they have to fix”.

But from our point of view it’s just moving things to the right queries and not necessarily saying “this is something that the webmaster is doing wrong”, it’s something we were doing wrong in our algorithms in the past, and this algorithm is fixing that.

So it’s really tricky. I understand that with links specifically it’s perhaps a little easier, but even there it’s sometimes really hard to bring this information out algorithmically, because even when we do this manually from the webspam team side, when we send example links for sites that were manually flagged for having webspam issues, we have enough problems there finding really good example links, so…

I’d be kind of worried if we did this completely algorithmically and just told the webmaster “oh, you did some spam, you should clean that up”, when the algorithms are maybe picking something up that isn’t really that critical, isn’t really that problematic, or is something a competitor or an ex-SEO put up that we’re ignoring already. It’s not something that they really have to worry about.

So, on one hand I would love to put more of this information into Webmaster Tools, but on the other hand we really need to make sure that it’s really actionable for the webmaster and really useful, not just exposing some internal details that won’t really make that much sense to the webmaster.

Speaker 2: Right, but Penguin isn’t refreshed that often, it’s not daily or anything like that. Maybe every few months, once the Penguin algorithm refreshes, just trigger some sort of message or something like that.

John Mueller: It’s really hard. You know, I mean, this is something we do bring up with the engineering team regularly because we know that when we look into our internal tools we sometimes have information that can be really useful to the webmaster and if the webmaster wants to fix the problem, we should help them fix the problem as much as possible.

On one hand we’d love to do that, on the other hand it’s not that trivial, so it’s not something that I would imagine you’d find in Webmaster Tools in the next month or so, but I wouldn’t be surprised if it showed up. I don’t know, I’m making a big guess, maybe in a year or something like that.

We’d love to bring more of that information into webmaster tools, but at the moment, especially with webmaster tools, the priorities are more on the really hardcore technical issues that you’re doing wrong, so if you have a site set up that’s completely wrong for mobile, that’s very black and white, we can test it, the webmaster can fix it, it’s done.

In all of these kind of grey areas where our algorithms are doing something a little tricky or finding some small problems, that’s very tricky to communicate to the webmaster. But we hear from you guys regularly, we do bring it up with the teams regularly, so it’s not something I’d say is completely out of the question, it’s just not trivial at all.

Speaker 2: Yes, as I said, this relates to my previous client’s website. Some notification of this sort would at least help me know where to target my efforts going forward. Nothing has worked so far, even though we have invested a lot of effort, so it would be very useful at least to know where to focus more than in other areas.

John Mueller: Yeah. We definitely hear you.

Q: Will PageRank pass to a site that has a 301 redirect but is blocked by robots.txt?

Speaker 3: I have a question. Let’s say we have a URL which is blocked by robots.txt, and it has a 301 redirect to another URL. Would it pass the PageRank or not? I just wanted to confirm.

John Mueller: It wouldn’t pass any PageRank, because with the robots.txt block we wouldn’t see the redirect at all. So if it’s blocked by robots.txt, then the URL that’s blocked collects the PageRank, but the redirect doesn’t forward any of that.

Speaker 3: Ok. And what about that particular destination URL, if it’s not being linked from anywhere else, so it is also effectively blocked?

John Mueller: You could submit a sitemap. That might help us, but if this is the only link to that URL, then that seems like something where our algorithms might think “maybe this is not such an important URL after all”, and we might not index it that well. So I’d just make sure that it’s easy for your users to recommend these URLs so that those links do show up more, and don’t hide everything behind a robots.txt-blocked redirect. Make it so that people can just naturally link to your website.
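To illustrate the situation in this question with hypothetical paths: if the redirecting URL itself is disallowed in robots.txt, Googlebot never fetches it and so never sees the 301.

```
# robots.txt (hypothetical example)
# /old-page 301-redirects to /new-page, but because it is disallowed here,
# Googlebot cannot fetch it, never sees the redirect, and passes no PageRank.
User-agent: *
Disallow: /old-page
```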

Q: Can blocking duplicate pages prevent penalties and help a site rank better?

Speaker 3: Thank you. I have one more question. We have an ecommerce site and it grabs the data directly from the manufacturer’s database, so there is a possibility that they will be selling the same database to others as well. Because the products are the same, there’s a problem of duplicates. The site is so exhaustive that it is very difficult to clean up all the data in a single stage. We have created our own unique content for some parts of the website, but the percentage of duplicates is still very high. So would it be the right choice to shut down some pages of the website so that the percentage of duplicates on the website is balanced out? In some sense we’re not eradicating the entire duplicate problem, but balancing it out so our keyword rankings won’t be affected?

John Mueller: So I think, first, we don’t penalize a website for duplicate content, so just because you use your manufacturer’s descriptions doesn’t mean that we’ll penalize your website. What will happen is, when someone searches for those individual pages or that piece of content, we’ll try to show something that’s relevant there. So that might be your page, that might be the manufacturer’s original page, that might be someone else selling the same product. It’d be hard to say.

But essentially it’s not something where we would penalize your site, so I wouldn’t say you need to remove it completely. But if you feel that this is lower quality content because it’s just the same as everyone else has, putting a noindex on there is a good idea, so that the content remains within your website, users can still find it by navigating through your website, but it’s removed from the search results. So that might be a possibility there.

Otherwise I’d just try to, where you can, make sure you have your unique content, your unique value that you’re adding. If you’re selling these products locally, then maybe your address is something relevant there, so people searching for the product and your city name would be a good match. But if you’re just selling it online and everyone else is selling it online in the same way, then it’s kind of hard to provide that extra value. But be creative! Maybe there is something that you can do that really adds value at some point.

Speaker 3: Ok. Thank you.
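A minimal sketch of the noindex option John mentions, assuming the duplicate pages are ordinary HTML pages: the tag goes in the <head> of each page you want kept out of the results but still reachable on the site.

```html
<!-- In the <head> of a duplicate product page: keep it out of Google's index,
     but still allow the links on the page to be followed. -->
<meta name="robots" content="noindex, follow">
```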

Q: If you have pages with small variations, like size and color, how should you display these pages?

Speaker 4: Just quickly on that one. With regards to unique content on e-commerce websites, we’ve got a couple of sites where, say for instance, they’re both selling, I don’t know, for example bottles of Coke, one’s 500ml, one’s 1L. The content on those pages is going to be very similar apart from the actual size and the actual quantity of Coke being sold. How would you go about displaying that sort of thing on the website?

John Mueller: It depends a bit on your website and how you want to go. There are essentially two main options:

One is to leave these variations on your website under individual URLs, so that they are available on your website, so if someone is searching specifically for this variation then they’ll be able to find it in search. The downside there is that we have to crawl a lot of pages, and if you have variations of variations, say you have T-shirts in different colors and different sizes for male and female, then there are a lot of variations we have to crawl and index separately. You could do that if you wanted to, so if someone’s searching for a blue T-shirt in this size, this type, then you have the perfect page for that.

Another idea could be to fold all these different variations into a single URL, where you say this is the main product URL and all these options are essentially variations of the product available on this page, but this page is about this product in general. The advantage of doing that is that you have a much stronger individual product page, because you only have one of these, so you don’t spread yourself thin within the website across all the different pages. It’s also something that will probably rank a little bit better for the more general queries, for someone who’s searching for a T-shirt with maybe a specific design on it: they don’t care about the color or the size, or they pick the color or the size when they come to your site.

So those are essentially the two main options there. It’s not like one is correct and the other one is not correct; you kind of have to work out how it works for your website. If you have T-shirts that are just in different sizes, then those variations probably aren’t relevant enough to make individual pages for them, but if you have different patterns on these T-shirts, then maybe it makes sense to have individual pages for each of those patterns.

So, you kind of have to look at the product that you are selling and consider “is this specific variation relevant and unique enough that I want to have it indexed separately, or is it just a part of the more general product that I want to have visible in search, where the user would pick the right variation that they want when they come to my site?”

Speaker 4: Thank you.
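John doesn’t name a specific mechanism for the second option, but one common way to consolidate variation URLs onto a single product page is a rel=canonical link; the URLs below are placeholders, so treat this as an illustrative sketch rather than his prescription.

```html
<!-- In the <head> of a variation page such as /t-shirt-blue-large,
     pointing at the general product page (placeholder URLs): -->
<link rel="canonical" href="https://www.example.com/t-shirt">
```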

Q: Does using meta description tags with detailed keywords help to rank for those keywords?

Speaker 2: John, related to that question, I usually advocate for the second option of going for a single URL where the variations of the product are available as attributes or something like that. Would it be useful to include, for example on a T-shirt site, the available sizes in the meta description, so we can also rank better for people who search for “black t-shirt size 10” or something like that?

John Mueller: We don’t use the meta description for ranking. We do show that in the snippet, but it’s not something we use for ranking. So from that point of view it would not help your ranking, but it might make the snippet a little bit easier to understand, so maybe you can do it like that.

Speaker 2: So you don’t take the actual text from the meta description and use it, at least for the visibility of the page in the rankings?

John Mueller: No, if it’s just the meta description tag we don’t use that for ranking. So we use that for the snippet but we wouldn’t use that for ranking the page.

Speaker 5: I have a question about the hreflang tag.

Q: How long does it take for hreflang tags to take effect?

Speaker 5: We have a big site with about…apps, and it’s in three languages: English, German and French. It was first indexed without any of these rel=”alternate” hreflang tags. If you search in Germany, you see the English version and vice versa. It’s really crazy. So we implemented the tag afterwards. That was four weeks ago, but it’s still really crazy: if you search in Germany, you find the English results, and if you search in the US, you find the German results. How long should we, or should my customer, be patient until those hreflang tags really show an effect?

John Mueller: On the one hand we need to recrawl both of those variations so that takes a bit of time. Depending on the size of the website and how much we crawl, which pages you are seeing in the search results, if they’re the important pages in the search results, then that’s something we’d probably crawl and index a lot faster. If they’re random pages on the website, then maybe we’ll recrawl them every couple of months.

So that’s something to keep in mind, but in general, I imagine you should see some kind of visible result after four weeks.

Q: Is it better to add hreflang code in the HTML or XML Sitemap?

Speaker 5: Is there any way to speed it up? And is it better to put it in the code or in the XML sitemap?

John Mueller: Both options work, and they work at the same speed. We have to recrawl those pages even if the hreflang is in the sitemap. What you can do to speed that up is submit a sitemap file with a new last modification date for those URLs.

Speaker 5: Ok.

John Mueller: That’s something we could process. With a site that size, I don’t think there’s any magic bullet you can use to speed everything up, but if you still see these problems after, I guess, four or five weeks, then we’d definitely want to see those examples so that we can figure out what we need to do better on our side. It sounds like you’ve waited a reasonable time to let that get updated, so it almost feels like something on our side is stuck on something that is wrong.

Speaker 5: In Webmaster Tools, where it should show that these tags are noticed, on the English site we see six thousand of them, and in the German version only six thousand out of millions. So there is something stuck there.
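For reference, the two placements John mentions look roughly like the sketches below, using placeholder URLs for an English and a German version; each version should list itself and all of its alternates.

```html
<!-- Option 1: hreflang annotations in the <head> of every language version -->
<link rel="alternate" hreflang="en" href="https://www.example.com/en/page" />
<link rel="alternate" hreflang="de" href="https://www.example.com/de/page" />
```

```xml
<!-- Option 2: the same annotations in the XML sitemap; updating <lastmod>
     encourages recrawling, as John suggests -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.example.com/en/page</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://www.example.com/en/page"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://www.example.com/de/page"/>
    <lastmod>2014-09-10</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/de/page</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://www.example.com/en/page"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://www.example.com/de/page"/>
    <lastmod>2014-09-10</lastmod>
  </url>
</urlset>
```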

Q: Is it okay to redirect visitors based on IP address to the correct language version?

Speaker 5: Another question related to that is, should we help the customer or the user out there? If they access the site, should we redirect them based on their IP address, in addition to that hreflang tag? Would that help? I mean, then the customer sees the correct version right away, and we would not redirect the Googlebot, of course, because that wouldn’t make sense.

John Mueller: Yeah. That’s the main problem, but we would almost see that as cloaking. If you were redirecting users in the US, but not Googlebot when it crawls from the US, then that would look kind of like cloaking.

Speaker 5: Sort of, but it would help the customer in this sense right?  It looks like cloaking, yes? But then if a quality rater looks at it, they would see that we don’t do anything bad, but we only try to help the customer.

John Mueller: I think that the bigger problem from our side, what we’d see there is that it’s confusing to Googlebot. When we don’t really know what exactly is happening when someone’s going to a page, that makes it really hard for us to understand how we should be treating this page.

What we generally recommend is showing a banner on top, so that’s something that the customer can see and they can make a choice on that because sometimes people in the US are searching in german and they want to find a german page, but of course if they’re searching in english, then finding the english page makes a lot more sense.

Speaker 5: Yeah, so you’re saying it’s best if we know they are coming from the US, put up a banner and say, “hey, why don’t you try the English version of this, you are based in the US”, and vice versa.

John Mueller: Yeah, yeah. Exactly.

Speaker 5: And then set a cookie so they come to the correct version from then on.

John Mueller: Yeah, that’s fine. That’s absolutely fine.

Speaker 5: And about this, I had some customers with banks, and they need the US customer to ONLY see the US version and vice versa. In that case, I mean, it sort of looks like cloaking, but is there any way to notify Google that there is a legal responsibility for the bank to do that?

John Mueller: Not really.  It might be interesting to have some of those sample URLs, if you have some that you can share with me because I know that the team is currently looking into what we can do with those kind of sites that must show something different to users in different countries, and that would be useful to think about what we can do there to improve that.

Speaker 5: Ok. Alright, thanks.
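A minimal sketch of the banner-plus-cookie approach discussed above, assuming the server has already geolocated the visitor and injected a hypothetical visitorCountry value into the page; the URLs, cookie name, and variable are placeholders, not anything Google prescribes. Because Googlebot receives the same HTML as users, this avoids the cloaking concern John raises.

```html
<!-- Suggestion banner instead of an automatic IP-based redirect (sketch) -->
<div id="lang-banner" style="display:none">
  It looks like you are in the US.
  <a href="https://www.example.com/en/">Switch to the English version</a>
  <button onclick="dismissBanner()">Stay here</button>
</div>
<script>
  // Hypothetical value injected server-side from IP geolocation.
  var visitorCountry = "US";
  var dismissed = document.cookie.indexOf("langBannerDismissed=1") !== -1;
  // Only show the banner if the visitor seems to be on the "wrong" version
  // and has not already dismissed it.
  if (visitorCountry === "US" && !dismissed) {
    document.getElementById("lang-banner").style.display = "block";
  }
  function dismissBanner() {
    // Remember the choice for a year so the banner is not shown again.
    document.cookie = "langBannerDismissed=1; path=/; max-age=" + 60 * 60 * 24 * 365;
    document.getElementById("lang-banner").style.display = "none";
  }
</script>
```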

John Mueller: Ok, let’s go through some of the submitted questions, because people seem to have voted on them and it would be a shame to let them go. Let’s see what we have here.

Q: Does using “Read More” buttons devalue a site?

Q: We’ve been updating our content throughout our e-commerce site and placing some of the text behind a READ MORE button for users and Google to see. Could a READ MORE button be devaluing the content that we have worked hard to produce?

John Mueller: To some extent, we do recognize when content is hidden and we try to ignore it, so if you’re putting a lot of unique value behind these READ MORE buttons, then it’s possible that our algorithms are thinking, “oh, it looks like it’s hidden and it’s probably not important.”

So I’d really recommend, as much as possible, putting the real content directly visible on the page. If you have kind of auxiliary content or additional value that you’re providing there, using READ MORE or a tabbed layout, something like that, is absolutely fine, but the primary content that you really want to rank for should be visible directly.

Q: Are Implied Links a Ranking factor?

Q: I noticed Google patented the ability to use implied links as a ranking factor, which I am guessing are brand mentions. Is this currently being used as a ranking factor, or will it be used in the near future? If so, when will it be used?

John Mueller: That’s a lot of questions. So in general, we do a lot of research, we do a lot of work to find out what might work, what kind of algorithms might work, what kind of algorithms we need to follow up on, and we might patent some of those or we might publish research papers on them. That doesn’t necessarily mean we use them at the moment for ranking, and it doesn’t mean that we will use them for ranking in the future, so I wouldn’t assume that just because there’s a research paper out there or a patent out there, that this is something that you guys need to focus on.

Similarly, if this is something that we are going to be putting into use in the future, in general we are not going to pre-announce that, and we might not even announce afterwards that we’re using a very specific type of algorithm, so that’s not something I’d really be able to help you with. At the same time, it’s not something I’d spend too much energy on and say, “Oh, there’s this really obscure patent from Google. If Google were to do this, then I’d have to add, I don’t know, five dollar signs to my pages and I could rank all the time.” That’s probably not going to happen that easily.

We do have a lot of really smart people doing cool stuff here, but just because it’s published doesn’t mean it’s live or going to be live soon.

Q: If you move a Penguin Penalized site to HTTPS, do you start fresh with a new disavow?

Q: When moving a Penguin-hit site to HTTPS, does a disavow file have to start from scratch again to nofollow disavowed domains? The disavow file is added to HTTP and HTTPS, but the HTTP file is one year old, whereas the HTTPS one is just a few weeks old.

John Mueller: So, if you’re moving from one site to another, whether that’s across domains, from www to non-www, or from HTTP to HTTPS, I’d just upload the same disavow file for both of those versions so that we have that connection there, but it doesn’t mean that it starts over.

So, essentially what happens here is that on a per-URL basis we’ll crawl your site, we’ll see the 301 redirect, we’ll switch to the other URL in our index, and we’ll try to forward as many of those signals as we can to that new URL. Those signals include all those URLs that were disavowed, that we’ve crawled in the meantime, that we’re treating kind of like nofollow links now. So all of those signals get forwarded to the new URLs, you don’t start over from scratch. It’s not that you’d have a big disadvantage from moving from one version of a URL to another one, so all of that does get forwarded as much as possible.
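For context, a disavow file is just a plain text file uploaded per verified site in Webmaster Tools, so after a move the same file can be uploaded for the HTTPS property as well; the entries below are placeholders.

```
# Disavow file sketch (placeholder entries). Upload the same file for both the
# HTTP and HTTPS properties in Webmaster Tools after the move.
# Disavow every link from an entire domain:
domain:spammy-directory.example
# Disavow a single linking page:
http://blog.example.org/paid-links-page.html
```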

Q: Can a Penguin Penalized site recover its traffic once all the questionable links have been removed?

Q: If a website is stuck in limbo due to a Penguin penalty from past SEO work, is it likely, if we have disavowed all the questionable backlinks, removed whatever we can, and have a top quality site, that when Penguin refreshes we will see a boost again in Google?

John Mueller: Yes. I mean, this is a very theoretical question where you are essentially saying you have turned your site around and made it the best of its kind. With an algorithm refresh we’ll try to take that into account as much as possible, so if you’ve really cleaned up all the webspam issues associated with your site, then the webspam algorithm will say “well, this is fine” the next time it runs.

So from that point of view that’s definitely possible. It’s certainly not the case that once an algorithm thinks your site is bad, it’ll be bad forever and you’ll be stuck forever, so that shouldn’t be happening there. In some cases where it looks like something is stuck forever, contacting us so that we can go to the engineers and ask about the details is always useful. But in general, in cases like this where you’ve seen a lot of webspam on your site, you’ve cleaned all that up and you are just waiting for a webspam algorithm to update again, that’s completely normal, and you’d almost certainly see a change after that algorithm runs again, after the data is refreshed or the algorithm is updated.

Q: When you 301 Redirect a penalized site, do you lose rankings but keep 100% of the penalty?

Speaker 1: John, can I just ask a question based off the previous question about moving a site’s disavow file?

A couple of weeks ago, when I asked about the 301 question, you said some of the authority passes  but not all of it, but we couldn’t get to the bottom of why.

Does it mean then, if you move a site and you move everything, when you are 301ing, that in theory some of the rankings follow you but all of the penalty follows you? Or do you apply the same kind of percentage of lost ranking as lost penalty? Because essentially you’ll just end up worse off if only some of the rankings follow you but all of the penalty does.

John Mueller: I know what you mean, yeah. so…. I don’t know, it’s very theoretical, let’s put it that way, I think-

Speaker 1: In our situation, it’s financial. It’s not theoretical at all. In real world scenarios, where people have their own business and are losing real money, it’s not theoretical in any sense. It’s a real decision that costs real people their jobs.

John Mueller:  When it comes to the algorithmic issues, that kind of dampening factor definitely happens there as well, so you’d also see the same kind of dampening with the forwarding.

When it comes to manual actions, I believe those will just be passed on one-to-one, because there is often no dampening available with manual actions. Either someone says your site is low quality or thin content and it’s removed from the index, or it’s not removed from the index. There isn’t something like “oh, your site is 10% removed from the index”, that kind of thing. So from a manual action point of view, that would be forwarded completely.

Speaker 1: So you would be worse off?

John Mueller: Yeah, but I wouldn’t assume that you could like chain ten of these redirects after each other, but it’s something that happens proportionally with all the rest, so it’s not that one side goes a little bit down and the other side stays as bad.

Speaker 1: I’m not suggesting that you should move a site to avoid a penalty anyway, because you are not really fixing the underlying problem. It was more because of what’s come to pass with secure: you are moving everything to a secure site, and therefore you’re losing some of the ranking but you’re gaining some back because it’s secure. If you move and lose rankings but still keep your penalty, you’d be worse off, but you wouldn’t know why unless you’d watched this discussion.

John Mueller: Yeah, but that’s not the case. If you don’t have a manual action and it’s really purely algorithmic, that kind of dampening happens regardless. I am pretty sure that if you change from HTTP to HTTPS, that’s even something where we wouldn’t have any kind of dampening, because that’s essentially such a small variation that we wouldn’t even say it’s a site move.

Speaker 1: Alright, carry on .

John Mueller: Let me just go through some more of these questions and we have more time for questions from you guys afterwards.

Q: Is Schema markup something that Google considers for rankings?

Q: I noticed that to receive the search box in the search results, you need to implement schema markup. Is schema markup something that Google is going to look at more over time?

John Mueller: Yes.  We are looking at schema.org markup more and more and we are always looking at the kind of markup that’s being used, and that helps us to decide which kind of markup we should also support.

Q: Is schema markup currently used as a ranking factor?

John Mueller: Not in the sense that if you have schema.org markup we rank your site better, but it definitely helps us to understand the content on your site a little bit better. So if you mark up your entities on your site and you clearly mark up what you’re talking about, then that makes it a lot easier for us to say, “oh, this page is really about this topic, because they confirmed this with the schema.org markup, or with other kinds of metadata markup on these pages, so we can really be sure that this page is about this topic.”

It’s not the case that just adding random schema.org markup to a page makes us say… “Oh, they use schema.org markup, it must be a better page than all these other ones.”
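As a small illustration of the kind of entity markup John describes (it clarifies what a page is about, it isn’t a ranking boost), a product page might use schema.org microdata along these lines; the values are placeholders.

```html
<!-- schema.org Product markup in microdata (placeholder values) -->
<div itemscope itemtype="http://schema.org/Product">
  <h1 itemprop="name">Example Cotton T-Shirt</h1>
  <p itemprop="description">A plain cotton T-shirt, available in several colors and sizes.</p>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <meta itemprop="priceCurrency" content="USD">
    $<span itemprop="price">19.99</span>
  </div>
</div>
```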

Q: Do you lose link equity with chained redirects at every jump?

Q: Can you clarify or confirm that with chained redirects, link equity is reduced at each jump? By how much is it reduced? And does it vary or remain consistent at each step?

John Mueller: This is mostly a theoretical question, in the sense that some part of the PageRank is essentially dropped with these redirects. If you chain a lot of redirects, then some of that will be reduced along those chains. This is one of the reasons why we also recommend going out and contacting the sites that are linking to your pages when you do a site move, so that those links go directly to your new pages and you essentially don’t have to worry about that drop.

If you had access to a very granular, very live PageRank meter, that’s something where you might see a tiny drop there, but in practice, as long as you’re not going overboard with the amount of redirects that you are chaining, you wouldn’t see any change in rankings there directly.

Q: Is the number of posts on your blog a ranking factor? Is lots of content better?

Q: Is the number of posts on blogs or social media, like buzz, a ranking factor? Is a big buzz better for Google?

John Mueller: We do pick up content that’s changing on your site and we try to show that in the search results. If it’s relevant content that matches new or current activities, then that could help us to say, “well, this page might be relevant for these new and upcoming things”, but it’s not the case that we’d say, “well, there is a lot of content coming in and out of these pages, therefore it must be good.” We really look at the content, try to figure out what we need to do with it, and try to react accordingly.

So if you just add RSS feeds to your pages, where you constantly add the newest news from whatever source, that doesn’t necessarily mean that these pages are suddenly much more valuable for us, or much more relevant for us in the search results. If you are doing a lot of work to update these regularly, we are probably crawling them fairly quickly because we see a lot of changes. But just because we crawl them regularly doesn’t mean that they are suddenly a lot better.

So I wouldn’t just focus on the quantity of buzz or the quantity of blog posts that you put out, but really make sure that the quality is the main thing, and that the quality and relevance of what you are putting out there matches what you are trying to achieve. So it doesn’t look like you’re just churning things out automatically, or that you have a squad of writers who are creating content like crazy.

If that content isn’t relevant or useful then that’s not going to help your site, so just because a lot of things are changing doesn’t mean that is better.

Q: Does Google give new sites a temporary boost in rankings to track user interaction?

Q: I’ve seen in the past websites that are relatively new showing well in the search results for a short period and then dropping back. Does Google temporarily rank a site well to see how users interact with it and then adjust its ranking based on that interaction?

John Mueller: I imagine a number of our algorithms try to find the right approach with new websites where we don’t have a lot of signals. Specifically, if the website is really new and we don’t have a lot of information on how useful this website is, how high quality this website might be, then our algorithms are going to make some estimations first, based on what they’ve seen so far, and based on that they might show it in the search results. Then over time, as we gather this data, those algorithms are going to adjust their estimations. That might mean they adjust them up, because they think, “oh, this website is actually a lot better than assumed.” It might be that they adjust them down and say, “well, we thought this was going to be great, but whoever created this website didn’t keep it up, and it’s not as great as we thought.”

So when our algorithms have to work with estimations, they might make estimations on the high side, they might make estimations a little bit on the low side, and it just takes a while for things to shake down and land at a reasonable level where we say, “all the signals that we know about this website have been confirmed, and we really trust that this is the right place to show this website in the search results.”

Q: Is Authorship used in rankings, or as a measure for quality of content?

Speaker 4: Hi John, can I ask you a question? There have been references to authorship as a way to measure the quality of content, and now that authorship has been removed entirely from the search results, does this mean that the quality of content based on authorship is being ignored by Google?

John Mueller: We actually never used authorship for rankings directly. You mentioned that there are some misconceptions around authorship being used for Panda, for example, and that’s something we really never did. We used authorship for the in-depth articles in web search, but at the moment we don’t use authorship at all. If you feel that your site has low quality content, just adding the authorship information wouldn’t have changed the low quality content; that’s something where you really have to step back and think about your website overall.

Q: Should I continue promoting a Penguin penalized site if PageRank has been devalued?

Q: My site is in limbo, affected by the Penguin update. I’ve removed and disavowed all known unnatural links, and I don’t know if I can now move forward with the site for fear that the work will be in vain due to an ongoing penalty. How do I know if just the PageRank is reduced, or if the site has an algorithmic penalty?

John Mueller: Ok, so if the PageRank was reduced, that’s essentially something that would have been done at a manual level; that would have been a manual action and visible in Webmaster Tools. So if you see this in Webmaster Tools, you kind of know: ok, my site has a lot of unnatural links, therefore the webspam team has decided to reduce my PageRank for that.

The thing to keep in mind with PageRank in general is that it’s not something we think is really an actionable metric. I believe we haven’t updated it in over a year now, and I imagine it’s something that probably won’t be updated in the future anymore, so I wouldn’t focus on PageRank.

I would look at the rest of the situation with your website. If you see that your website is strongly affected by the webspam algorithm like Penguin and you’ve really cleaned up all of those links, disavowed the ones that you can’t remove, those kind of things, then generally it’s a matter of waiting for that algorithm to update again.

I know it’s been a while now. I imagine this is something where you might see changes in the reasonably near future, but it’s not something where I’d say it’s going to happen today or tomorrow. I know the engineers are working on this, but I don’t have any hard dates where I can say on this date this algorithm is going to be updated again. Let’s see.

Q: Does Google test algorithms in a live environment?

Q: Does Google test algorithms like Penguin in a live environment before rolling them out? Could some webmasters have experienced changes attributed to Penguin 3.0 rolling out in the last few months?

John Mueller: So, yes, we do some live experiments with our algorithms. There’s a really good video from Matt, from I think March or April of this year, where he talks about how we test new algorithms. Primarily, what we do with new algorithms is test them based on the known data that we have, the kind of rated, existing URLs that we have, to see for which queries the results show better, higher-rated results and for which ones we show lower-rated ones.

Sometimes we’ll also do a new batch of rating, where we’ll say the search quality raters should rate search results A and B and tell us which one is better, and kind of go through the individual URLs there and review them to see which one matches what the user is looking for, which one better matches the information that the user might be interested in, might want to find there.

That’s something that we do, and as a last step, when we think our algorithms are really good, we will run live experiments on them. We’ll take maybe 1% of the traffic and say, “ok, they will go through this version of the algorithm, another percent will go through a different version of the algorithm”, and we will see how users react to those individual changes. That’s something that we do for ranking algorithms, and it’s something that we do for UI changes.

We generally have, I imagine, way over 100 experiments running at any given time. So almost always, when you do searches, you’ll be in some kind of an experiment, and we’ll be double-checking if we’re doing things right or if we’re doing things in a way that we could be doing better. What we do is this kind of A/B test, so sometimes that’s something that people notice, sometimes that’s something Barry notices and writes about and does screenshots of, those kinds of things.

Ranking experiments are obviously a little bit less visible, but we do run those all the time, and we do try to figure out which of our algorithms need to be tweaked and how we can best tweak them to make sure that they’re really reflecting what users now want to find in search results, how they are searching, and all of that.

But I would really recommend checking out that video from Matt from earlier this year, because he covers that with a little bit more detail.

Q: How long should you keep a 301 live for URLs that have changed?

Q: Does Google need a 301 redirect forever for every page on a site when the URL structure has changed multiple times over the years? Would Google pass ranking signals along to the newest generation, or would they revert to an older version if there’s no 301?

John Mueller: We generally wouldn’t revert to an older version, because we forward those signals to the newer ones, and all of the internal linking will also be reflected in pointing at the new one. So if you’ve changed your site structure and you’ve changed the internal links, then we’ll probably try to follow those internal links as well and focus on the newer ones.

Personally, I’d recommend making sure you have the 301 redirects in place for at least half a year. If you can keep them there longer, I would definitely do that. If you can track the usage of those redirects, like users actually following them, then you can probably make a little bit better decision there and say, “well, nobody is looking at these URLs anymore. No user is looking at them, no search engine is crawling them anymore, therefore I can turn them off” and simplify the .htaccess, those kinds of things.
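On an Apache server, keeping those redirects live might look like the .htaccess sketch below (the paths are placeholders); once logs show neither users nor crawlers requesting the old URLs, the rules can be retired to simplify the file, as John suggests.

```apacheconf
# .htaccess sketch: keep 301 redirects for changed URLs live (placeholder paths)
# A single page that moved:
Redirect 301 /old-page.html /new-page.html
# A whole section whose URL structure changed:
RedirectMatch 301 ^/old-category/(.*)$ /new-category/$1
```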

Speaker 5: John, regarding that: I’ve seen lately that 301 redirects are staying in the Google index for years. I have correctly redirected whole sites with 301 redirects, and they are still there literally after two years. I’ve seen that on several sites in the German Google index, and I see in Webmaster Tools that Google is trying over and over again to access those old URLs. I would just like to clean out the old index, but they just don’t disappear. Is this a new thing? Before, 301 redirects really were a very clean way to clear out the old sites, but now it looks like it really stays.

John Mueller:  In general, we’ll try to follow 301 redirects as much as possible and really focus on the new URL. What you will sometimes see is that we know that there are multiple URLs that essentially are equivalent and lead to the same content, and if you do something like a site: query for a domain that has been redirecting for a while, you’ll still see a lot of URLs there, just because we think that you’re specifically looking for those URLs and we know that we have some equivalent URLs that were also on those URLs, so we’ll show them there.

What you’ll see in general is if you look at the cache page, you’ll find that the new URL is actually shown on top of the cache page, but you’re kind of looking at the cache page for the old URL. So we will have actually indexed everything under the new URLs, but we know that for those new URLs, there are also these old URLs, and they’re equivalent. So if you explicitly look for those, we will say “well, you’re looking for these, we will show them to you.”

We sometimes see people searching for the old website, for example if they know the old domain, I don’t know, my old domain .de, and they search for that old domain. Then we want to show them what they’re looking for, and that’s something where what we’re trying to do for the users confuses webmasters, in the sense that if you do a site: query, you expect those numbers to go to 0 once everything is crawled. But from our point of view, we think, well, if you’re explicitly looking for that, maybe we should show you something, even if we have moved everything over in the meantime.

Speaker 5: So how do I know that everything has been crawled?

John Mueller: I would focus on Webmaster Tools, the indexed URL count in the sitemaps, for example.

Speaker 5: Well, there’s no webmaster tool for the old site, because it has been completely redirected.

John Mueller: Yeah.

Speaker 5: I cannot verify the old site, and actually, I have a customer with hundreds of old domains which we have all redirected, with a lot of work, and they still appear in the Google index if we search for them explicitly, of course, and there’s no way to check…because the webmasters ask us, “can we turn off those redirections now?” It’s a lot of work to keep them up, and I have to tell them, sorry, I just don’t know. Google has been trying to access those old URLs for ages, for really three years now.

John Mueller: Uhuh.

Speaker 5: Isn’t that strange that I cannot tell Google, hey, this domain is really gone. Take it off. I don’t want to see it anymore, and I don’t want any redirections there anymore.

John Mueller: Yeah.

Speaker 5: Wouldn’t that make you work easier?

John Mueller: To some extent, yes, as long as nobody else takes that domain.

Speaker 5: Yes, of course.

John Mueller: That’s always the tricky part. If you say “I don’t need this domain anymore” I’ll let it expire, and someone else picks it up and puts a website on there…

Speaker 5: That’s different, because there would be new content there, and there would be no redirection there.

John Mueller: But that’s something where we would have to crawl those again.

Speaker 5: Yes, of course.

John Mueller: But I do agree that… I’ve been seeing more of these questions around the web as well, where maybe we can do something better with those queries and say, “if we can recognize that these URLs have really moved, maybe we shouldn’t be showing them in the site: query either”, and only show them if someone is explicitly looking for the name, but not in the site: query. I don’t know how quickly we’ll be able to make changes like that. It depends a bit on what the search quality teams have been thinking about site: queries anyway, but I totally see how this makes diagnosing things a lot harder.

Q: Do you support schema.org with JSON-LD to show rich snippets?

Q: It seems rich snippets are not displayed when I use schema.org markup with JSON-LD. Do you support schema.org with JSON-LD to show rich snippets?

John Mueller: We only support JSON-LD for a very limited number of schema.org markup types. I think that’s only for events and for the new sitelinks search box. So for the other types of rich snippet markup, we currently don’t support JSON-LD. That might change in the future as we move our markup testing tools to JSON-LD as well, but at the moment we don’t support it for all the different types.
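For the sitelinks search box case John mentions, the JSON-LD markup Google documented around that time looks roughly like this; example.com and the search URL pattern are placeholders.

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```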

Q: Will Penguin 3.0 launch in 2014?

John Mueller: My guess is yes. But as always, there are  things that can happen in between. I’m pretty confident that we’ll have something in the reasonable future, but not today. We’ll definitely let you know when things are happening.

Q: How do I fix Crawl Errors from a WordPress Blog when I can’t find the errors?

Q: In Webmaster Tools, I get crawl errors for our WordPress blog. I’ve checked them, but found nothing. The crawl errors come from the tags that we have on the blog.

John Mueller: In general, if we can crawl those URLs, we will try to crawl them if we find a link somewhere and depending on what we find there, we might crawl them more frequently or less frequently. But in general if this is just a URL that returns a 404, then we will slow down our crawling over time. It’s not something that would negatively affect your website.

It’s essentially just, we found a link to those pages, and we think maybe we should check them out, because we kind of like your website and would love to find more content on there. So we’ll try those URLs. If they don’t work, that’s fine. But we’ll let you know that we tried and we found an error in webmaster tools, just like you would find that error also in your log files.

So it’s not a sign that anything is broken or that you need to fix anything. It’s essentially just us saying, we tried this URL, it didn’t work, and maybe that’s fine.

Q: What can you do to recover traffic for a site after a manual penalty has been removed?

Q: We have a site that we cleaned up after a manual penalty. The traffic has dropped significantly since the reconsideration request was approved. We are earning links via content marketing, but things get worse, not better. What can we do to recover?

John Mueller: If the manual action has been resolved, that’s essentially a good step, because that means at least from a web spam point of view, the team, when they manually reviewed your website, didn’t find what they found in the past. So that’s a sign that you’re going in the right direction.

If you are seeing negative or no changes in the search results during that time, what’s likely happening is that our algorithms are picking up other issues that maybe the webspam team isn’t picking up on. That could be everything from the quality of your content, it could also be webspam issues. It could be keyword stuffing, the usual issues where our algorithms might pick up on this, might respond accordingly, but the webspam team would say “this is technically not webspam so we’re not going to flag it as manual action.”

So what I’d recommend doing there is, first of all, making sure that the quality of your site overall is really the highest it could possibly be, so that if you compare it to other sites that are active in that area, it’s obviously clear that this is by far the best one of its kind. That’s essentially the first step I would take there. Obviously, that’s easier said than done, so there’s a lot of work involved in that sometimes.

If you’re earning links via content marketing, I’m not really sure what you mean with that. That sounds like you’re doing guest blogging and just adding links to those guest blogs, which wouldn’t necessarily be the best way to earn links. Because essentially, you’re creating this content, and you’re trading this content for a link, which makes this link kind of an unnatural link. So that’s not something where I would say this is particularly a good thing.

On the other hand, it might be that you mean something completely different by earning links via content marketing. Maybe you are creating great content on your website, and people are referring to it and think it’s great, and those would generally be good links. But if you’re placing links into content and other people are publishing your content with your links, then that seems like something you’d want to double-check, maybe with other people who have gone through this kind of process before, to make sure you’re not causing more of a problem for your site than the advantage you think you’re providing.

Speaker 6: John, can I ask a question please?

John Mueller: Sure.

Q: Does Google use server speed/capacity as a ranking signal?

Speaker 6: Ok. I have a question. Can Google read out server specifications and use this as a ranking signal?

John Mueller: You mean from the certificates?

Speaker 6: No, no. I mean, supposing that the server has limited bandwidth, will that impact our rankings, because maybe the site would not support a large amount of traffic?

John Mueller: No. That wouldn’t be…the bandwidth for the server is mostly a technical problem for us, because we might not be able to crawl the content as well.

Speaker 6: Yeah, sorry. So even the fact that the server is maybe a very big one, very powerful, would not influence our rankings?

John Mueller: No. That wouldn’t matter. The server speed is really primarily a problem for crawling and indexing.  We don’t make adjustments on how much traffic a server will be able to handle.

Speaker 6: Thank you very much.

John Mueller: And with that, we’re kind of over time. So thank you all for all of your questions and I hope you guys have a great weekend. And maybe we’ll see you guys tonight. Cheers.
