Rand Fishkin Interview at Moz's Headquarters


Hi! How's it going? Romuald Fons here, from RomuTV. Today we have a very special guest: Rand Fishkin!

Romuald: Rand, thank you for being here.

Well, this is your place, thank you for welcoming us here.

Rand: My pleasure.

CTR manipulation as a ranking technique

Romuald: I have a few questions I want to ask you. The first one is about CTR, OK? There's a trend in Spain where some SEO influencers are talking about manipulation of the CTR as a ranking technique. What are your thoughts?

Rand: I've seen a few of these organizations, and in fact I've done some testing on this myself and seen some of the results.

I can confirm that, sometimes, at least in the short run, if you have a large enough group of real users who are geographically similar, all searching for something in Google and all clicking a particular result, choosing it over all the other results, Google will frequently move that result up in the rankings. That usually lasts a really short period of time.

One or two days, in most of my experiments.

That being said, Google has said that over time, if many hundreds or thousands of users who are searching for a particular query start to prefer one result over another, chances are good that Google's learning algorithm will pick up on that, look for signals, and try to find ways to have that piece of content or that website automatically rewarded and moved up in the search results.

So, there are two ways to think about it.

From a manipulative angle: "How can I game the system?", "How can I get enough people to do this?", "How can I fake it with bots?"…

That feels like short-term thinking to me.

I mean, Google did a great job killing AdWords manipulation years and years ago; there are phenomenal algorithms for this, and you can get websites penalized. I unintentionally got seriouseats.com penalized.

It's a great website, and I feel terrible for getting them penalized.

Google, if you're listening, you should unpenalize them. I did it; don't penalize them.

And I think that's a short-term way of thinking. I wouldn't rely on this.

I would rely on "how can I make content, a brand, and a snippet that searchers would prefer to click on and stay on, rather than the rest of the competition in the search results?"

That's long-term thinking; that's aligning your interests with users' interests and with Google's interests. And I think that's the best way to think about SEO.

Romuald: So you’ve said 2 or 3 very important things here.

One is that you said real users. I recall one experiment you did where you asked people on Twitter to click on a website. Those were real people.

Rand: Real people on Twitter, I didn’t even make the link clickable. I had a little graphic, so they all had to go to Google in their own browsers and search for it.


Romuald: The second one is that this doesn't last.

Rand: That's right. None of my experiments have lasted more than a week.

Romuald: Than a week?

Rand: That’s the longest.

Romuald: OK then. If you want, don't believe me; believe Rand Fishkin: CTR manipulation is not a good thing.

Rand: I mean, it’s not going to work for you in the long run, right?

And the problem is, each time, what would happen is we would take a page that was ranking maybe between number 5 and number 10, we could move it to number 1, and then it would fall to page 2.

You're costing yourself a lot of clicks by falling from page 1 to page 2, and that can last for weeks. Or, if Google thinks you're actually responsible, as opposed to some guy out there on the web, it could last for months, and that's pretty dangerous.


User behavior and satisfaction as a ranking factor

Romuald: Another thing that I find really interesting right now is that user behavior, or satisfaction, is a ranking factor.

There's this thing where Google engineers have said they don't use the data they have from Analytics to rank.

Are we talking about pogo-sticking, bounce rate and time on site, something they get from outside Analytics, or are we talking about more things?

Rand: Both are true, we are talking about more things.

Google is definitely looking. They are not going to look just at your particular CTR, just at your time on site, your bounce rate… They are looking at you in relation to other sites in the search results, and across a broad set of search results.

So if there are, you know, thousands of keywords that you rank for and your competitors also rank for, Google is going to look broadly across those, and they are going to see that people are generally happier when they visit your site than when they visit your competitors' sites, and that's going to move you up in the rankings.

They are also not getting this from Analytics. I believe them when they say they don't use Google Analytics, but they have Google Chrome, they have Android, they have Google Wifi, they have Google Fiber, and they can buy clickstream data.

They have access to 80%-plus of all the clicks that happen on the web, and of course they know everything that happens on Google.com.

So even if a searcher is on none of those properties, they would at least know that this person came, searched in the search box, clicked on result number 3, and then, 16 seconds later, we saw them back on the Google results and they clicked on position 4, and then we never saw them come back to that query.

Position 4 satisfied that search, position 3 did not satisfy that search.

So Google gets this data from all kinds of places: they don't need Analytics, and they don't need access to your website to get it.
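To make that pogo-sticking signal concrete, here is a minimal Python sketch. It is purely a toy model of the behavior Rand describes, not anything Google has published: within one query session, a quick return to the results page reads as dissatisfaction, and the click that ends the session gets the credit.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Click:
    position: int                           # ranking position that was clicked
    seconds_until_return: Optional[float]   # None = never returned to the results page

def satisfying_position(session: list[Click], min_dwell: float = 60.0) -> Optional[int]:
    """Toy pogo-sticking heuristic for one query session: the click that ends
    the session (no return, or a long dwell) is treated as the satisfying one;
    a quick bounce back to the results page counts as dissatisfaction."""
    if not session:
        return None
    last = session[-1]
    if last.seconds_until_return is None or last.seconds_until_return >= min_dwell:
        return last.position
    return None  # even the final click bounced: nothing satisfied this search

# Rand's example: position 3 abandoned after 16 seconds, position 4 ends the session.
session = [Click(position=3, seconds_until_return=16.0),
           Click(position=4, seconds_until_return=None)]
print(satisfying_position(session))  # -> 4
```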

Romuald: But then we're not talking about what they do inside the website, no?

From the SEO perspective, is the only thing we have to take care of, in user behavior as a ranking factor, this bounce rate, this pogo-sticking…

Rand: Well, not necessarily…

Romuald: Or do we have to try to make the user go through our website, and all this stuff?

Rand: What we want to do is satisfy the searcher’s query, right?

Because Google is going to look at, you know, the searcher's IP address, and they are logged in, so Google knows their account. So if they search for…

If they perform a search query and they get to your website, maybe they don't bounce, maybe they stay there, maybe they go somewhere else, or they click on a bookmark or close their browser; but a couple of days later they come back to Google and search for something else, something similar, or they search for something and they don't click on you, they click on somebody else…

That is still telling Google you didn't do a good job of satisfying the user's entire query: they had to come back, they had to go somewhere else, right?

Now, for some searches, some search queries, like "buying a house", you do a lot of searches: maybe you click on the same sites many times, you click on different sites, you're doing a lot of research. Google knows that.

But there are some queries where the only time you would search again is if you're unhappy, and that is what Google is watching, and that's what we have to solve.

In the long run, I don't want to rely on where Google gets this information, or this bigger amount of information; I want to rely on the fact that my site solved the searcher's problem.

Romuald: Great. So we're saying they have this history of the things we do on Google, like Google My Activity, and they use this data to know…

Rand: Yes, that's right. The way they've talked about it is that they train their machine learning and deep learning systems against that click and query-resolution data, right? Essentially, they have billions of searchers performing billions of queries, making billions and billions of clicks, and then they train their models against that to produce results that correlate with serving the most searchers in the best way.

The ghost linking factor

Romuald: Another thing I want to talk about with you is this residual ranking power that links have once they don't exist anymore. Is that true or not?
Because I ran some tests, and I'll explain my tests to you, but I found that yes, something happens there, no?

Rand: We found the same thing.

We did the same tests. I think it sounds like what you're saying: we took a bunch of websites, we pointed the links somewhere, we removed all the links, and we waited for Google to reindex all the pages that had the links, so they clearly knew those links no longer existed. But the rankings didn't fall immediately. Eventually they did, but it took a while.

They call it the ghost of the link, the ghost linking factor.

Romuald: Yes. What I did was create a website and run this not-so-great backlink acquisition strategy (we didn't do that for clients, and for all of you considering it: do not try that at home, we're professionals…).

No, but I did make this website rank for certain good keywords, and then I removed the links. But the content was good, and the website stayed there, and it's still there a year later. That's why I was asking about user satisfaction, because I have no way to separate one thing from the other.

Rand: That seems like a very strong predictor, right?

Once you're ranking in the top 10 and you're satisfying the user, even if your links start to fall away, Google doesn't necessarily want to take you out of there.
They don't want to take you out of results where you're making searchers happy. That's bad for Google's business.

I mean, a lot of links may be how you get to the top, but not necessarily why you stay there. You stay there, you know, because you serve searchers well. And granted, you and I both know that in competitive spaces you still need the links, but…

The power of backlinks that don't drive traffic

Rand: Yes, of course.

Romuald: Well, that kind of pisses me off. I can say that, no?

It pisses me off. Rand, why do you think bad links still work so well? Those very black-hat links. I mean, getting an old domain with a lot of backlinks, putting some content there, and then pointing some bad links at another website, and it ranks with this… Google has enough tools to know these backlinks are not driving traffic, so why?

If they can discount them, why are they still so powerful?

Rand: I think we’ve seen a few things:

It's much harder than it used to be. Four years ago? Of course, super easy, right? Two years ago, a little bit easier than today. Today it's hard. Not impossible, but hard. Two years from now it's going to be a lot tougher. Four years from now? I don't even know if it's going to be possible.

I mean, we all can see Google is trending in a direction.

Yes, it takes them time. I think their engineers spent a lot of energy over the last few years working on content identification, on searcher satisfaction, on query recognition, on things like RankBrain and Hummingbird… and I think in the next few years we are going to see a lot more concentration on spam and manipulation and on links again.

We saw a very heavy effort on that in 2011/2012; I suspect in 2017/2018 we're going to see it again.

And you know, it's one of these things: if you only need to rank for a few days or a few weeks, even a couple of months, all right, you want to try some bad links. But if you are trying to build a business or a brand, you can't rely on that stuff, because even if you build phenomenal content and a great brand, if you build bad links… you can be penalized for years.

I am sure you've tried to do reconsideration requests to get back on Google, and it's a nightmare. And sometimes they don't even care; they don't even listen, they won't do anything. You can have the best site in the world and they'll do nothing. It's pretty dangerous.

Romuald: It's dangerous, but, because of what we talked about before, you can make a website rank by putting up those backlinks and then removing them, and it sticks there. It's like, wow.

Rand: Yes. I guess my question would be "how helpful and useful is that?", right?

Can you really get users to trust that website? Can you get them to convert at a high rate, the way they would on a brand website they recognize? Probably not.
Can you get it to stick around long enough that the effort you put into it is more worthwhile than building good links and good content on a good website that is going to rank for a long, long time?

It doesn’t make sense, you’re pouring hours and days down the drain and Google is catching up…

How do you sleep, right? You wake up in the middle of the night thinking, "Oh my god, they are going to ban me because I did this!"

What a nightmare!

Crawl Budget and Ranking Reputation

Romuald: Yes. There's something else I want to talk about: why would anybody want to noindex something and still dofollow it, if we don't want it in Google's index, and we're also spending some crawl budget there?

Rand: Yes, there are two things to think about here: crawl budget and ranking reputation.

So crawl budget is essentially just how much of the spider's bandwidth I am using up, and on how many pages.

But if you have a small or mid-sized website with some good links to it and mostly good content on it, you might say: "Hey, it's totally fine if Google comes and crawls all these pages. I'm not worried about crawl bandwidth. What I'm worried about is making sure that the pages that do get indexed by Google represent the best parts of the website, the pages that are going to serve searchers really well."

And it turns out I have a lot of pages on the site that are subcategory pages, or pages with very, very similar content; in an e-commerce platform, for example, the same product in different colors. I want Google to be able to see those pages and to crawl them, because people do link to them and I link to them, and I want Google to be able to crawl all the links out from them, but I don't want them separately in the index, right?

So in those cases you're not optimizing crawl budget; you're optimizing more for, you know, ranking reputation. And both of those can make sense.

There was a great post on Moz a few weeks ago by Everett Sizemore. He talked about kind of an iceberg model of a website: you can have this giant piece of ice under the surface that is essentially cruft, right? It's not content that is serving searchers well, even though it might be serving website visitors well. And those are two different things.

So you potentially want to cut that cruft out by using the noindex meta robots tag, as opposed to completely removing Google's ability to even see those pages.

The other problem is that Google can still show pages in the index even if you say, in robots.txt, "don't crawl that."

That sucks, right?

Because then people can still see these thousands of pages: those pages may show up in Google, but Google will say "because of the robots.txt rules we can't show descriptions for these files", and that's why the snippet just looks like a bare URL.

You see this on Google quite a bit, and it's not optimal.

If you really want those pages out of Google, but you don't want to remove them from your site, you can use the meta robots noindex tag. So there are cases where each of these different approaches makes sense.
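As a rough illustration of the two mechanisms Rand is contrasting, here is a small standard-library Python sketch that classifies a URL the way a crawler would see it; the example site and path are hypothetical.

```python
import re
import urllib.robotparser
from urllib.request import urlopen

def index_status(site: str, path: str) -> str:
    """Distinguish the two cases: a robots.txt Disallow stops crawling (but the
    bare URL can still be shown in results), while a meta robots noindex tag
    lets the page be crawled and its links followed, yet keeps it out of the index."""
    robots = urllib.robotparser.RobotFileParser(site + "/robots.txt")
    robots.read()
    url = site + path
    if not robots.can_fetch("Googlebot", url):
        return "disallowed in robots.txt: never crawled, may still appear as a bare URL"
    html = urlopen(url).read().decode("utf-8", errors="replace")
    meta = re.search(r'<meta[^>]*name=["\']robots["\'][^>]*content=["\']([^"\']*)',
                     html, re.IGNORECASE)
    if meta and "noindex" in meta.group(1).lower():
        return f"crawlable, links followed, but kept out of the index ({meta.group(1)})"
    return "crawlable and indexable"

# Hypothetical e-commerce example: a near-duplicate color-variant page.
# print(index_status("https://example.com", "/shirts/blue"))
```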

Does it make sense to spend time putting nofollow on internal links?

Romuald: In the case where we want to disallow spiders from accessing those URLs, should the internal links that point to those URLs be nofollow, or should we leave them dofollow? Because there's no clear answer to that.
 
Rand: Yes, I think there was a time in history when that made sense, but that time is mostly past.

I would not spend time or energy putting nofollow on internal links to pages you don't really care about.
The benefit is going to be almost zero.

I've seen a few cases on very, very large websites where that made sense, but it's pretty unusual.

I would say for the vast majority of sites it doesn't make sense. But what can make good sense is this: if you have, for example, a large number of internal links pointing to pages that you don't really want Google accessing, you might decide, "Hey, you know what? I might as well put that whole sidebar, which points to a bunch of pages that aren't for Google, into an embedded file that's disallowed in robots.txt, basically an iframe."
Then the iframe's source file is disallowed, so Google doesn't crawl it, and now you've got your links on the page for users, but Google doesn't follow those links.

Romuald: But I think Google does see iframes; it just doesn't take them into consideration, no?

Rand: Yes, but then they'll also see the robots.txt saying: don't go crawl the iframe.
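For illustration only, the pattern Rand sketches would look roughly like this; the paths and file names are hypothetical, and this reproduces the technique as he describes it rather than recommending it.

```python
from pathlib import Path

# 1) Disallow the directory that will hold the embedded link file.
Path("robots.txt").write_text(
    "User-agent: *\n"
    "Disallow: /widgets/\n"
)

# 2) The sidebar links live inside the disallowed file.
Path("widgets").mkdir(exist_ok=True)
Path("widgets/sidebar-links.html").write_text(
    '<a href="/shirts/blue">Blue shirts</a>\n'
    '<a href="/shirts/red">Red shirts</a>\n'
)

# 3) Every page embeds that file, so human visitors still see the links,
#    but a crawler honoring robots.txt never fetches the file containing them.
SIDEBAR_EMBED = '<iframe src="/widgets/sidebar-links.html" title="Browse by color"></iframe>'
print(SIDEBAR_EMBED)
```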

The future of Google’s RankBrain

Romuald: OK then, to finish, I just want to ask whether you think RankBrain is going to stop all this private-network link building and all this stuff. What I don't know is what RankBrain is here for, but I think Google really has a big problem (you were talking about this problem before) with their penalization process: this disavow thing, the reconsideration requests, all this stuff that is horrible, really horrible, for them, because it's a lot of work, and for all the webmasters too, no?

Do you think RankBrain will be used just to take away power from links that shouldn't have power, and all this stuff? Or what do you think RankBrain is here for?

Rand: So, RankBrain specifically is just a query interpretation model. Essentially, when I search for, you know, "what are the top big jeans brands" and then someone else searches for "best denim brands" or "best denim makers" or "companies making the finest quality blue jeans"… what RankBrain does is say: "These language models are indicating to us that these queries should all return the same, or very nearly the same, results, because they basically mean exactly the same thing." We are very close to that.

That’s RankBrain for right now.
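As a toy stand-in for that "different words, same meaning" idea, here is a sketch using an off-the-shelf sentence-embedding model. The model named below is a common public checkpoint with no connection to Google; the point is only that embedding similarity can group Rand's example queries together.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Public general-purpose embedding model; a stand-in, not what Google uses.
model = SentenceTransformer("all-MiniLM-L6-v2")

queries = [
    "what are the top big jeans brands",
    "best denim brands",
    "companies making the finest quality blue jeans",
    "how to patch a bicycle tire",   # unrelated control query
]
embeddings = model.encode(queries, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)

# The three jeans queries score close to each other and far from the control,
# the kind of cue a search engine could use to serve them near-identical results.
for i, query in enumerate(queries):
    scores = [round(float(similarity[i][j]), 2) for j in range(len(queries))]
    print(query, scores)
```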

I think what's more interesting is that underlying RankBrain is a deep machine learning model that is essentially using neural networks to interpret all of that.

That's where I think you can take that underlying stack of neural networks. Look, Google has said publicly that they are retraining their engineering corps and refocusing on machine learning at the base of all their ranking algorithms. Because of that, what they can start to do is say: "OK, new spam machine learning system: here's a million links that we never want to count, here's a million links that we do want to count. Train the model against each of these, and now go out to the rest of the web and count the right links and don't count the wrong links."
Penguin was like this a little bit, right? But it was a very early model of this.
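That "here's a million links we never want to count, here's a million we do" framing is a standard supervised-classification setup. Here is a minimal scikit-learn sketch with invented link features, purely to illustrate the shape of such a system.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Invented per-link features, for illustration only: linking domain age in
# years, whether the link sends real click traffic, and how exact-match the
# anchor text is.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 20, n),    # domain_age_years
    rng.integers(0, 2, n),    # drives_traffic (0/1)
    rng.uniform(0, 1, n),     # exact_match_anchor_ratio
])
# Toy labeling rule standing in for Google's human-labeled link sets:
# young domains with exact-match anchors and no traffic look like spam links.
y = ((X[:, 0] < 3) & (X[:, 1] == 0) & (X[:, 2] > 0.6)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# In Rand's framing: once trained on the labeled millions, the model scores
# every other link on the web, and links flagged as spam simply stop counting.
```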

Google is going to be doubling down on this.

If you think you can fool these deep learning systems long term, I think you’re in for a nasty surprise, right?

Google is just going to get better and better at this stuff, and it's going to get harder and harder… So yes, I think the technology underlying RankBrain and Penguin, and all these algorithms, is going to manifest itself more and more.

And that is going to be great.

Romuald: Absolutely great.

Rand: For some reason, and I don't know if you felt this way, right around 2009/2010 it just wasn't fun for me to spam Google anymore.
What was fun was building the real stuff, because then you can talk about it, you can earn those links and be so proud of them, and you never have to worry that Google is going to take them away.

The spam thing just lost its cachet for me.

It just wasn’t fun anymore, it was too nerve-racking and annoying… But building authentic stuff? Love that.


Romuald: Yes, love that, and it makes me proud when something works in the correct way.

Rand: Exactly! You don’t have to hide it, you know?

Romuald: You can say, "Hey, I can sleep very calmly; I will be OK tomorrow."

Rand: I think sleep is worth a certain amount of a paycheck, right?

Romuald: Yeah, sure man.

Rand: I would pay a certain amount of my paycheck to sleep better.

Romuald: And on this trip we've had a lot of sleep deprivation. Look at Josep; we slept three hours today.

Well Rand, thank you very much for having us here.

Rand: You're welcome. Thank you for coming to Moz.

Prediction: Next Google Penguin Update

Romuald: Just one last thing. I know you like predictions: when do you think the next Penguin rollout will be released?

Rand: I believe it will be between December… no, actually, I don't think it will be in December. I think it will be between January and April of 2017.

By Romuald Fons

CEO & Founder of BIGSEO
