When you are trying to rank a webpage, you want to give search engines every possible clue about what the page is about. One opportunity to do that, which many people fail to take advantage of, is using a proper URL structure, along with a sensible folder structure for HTML-based sites.

Not too long ago, exact match domains (EMDs) were all the rage. Search engines gave a homepage a ranking boost for the phrase contained in its domain (hint: Bing still does, to a large degree). For example, if you wanted to rank for how to make money online, the ideal domain to build your site on would be

www.howtomakemoneyonline.com

The next-best options were the .net and .org versions. From there, marketers started adding letters and words before or after the phrase, like

www.howtomakemoneyonlinefast.com

www.howtomakemoneyonlinefaq.com

www.ehowtomakemoneyonline.com

If all of those were exhausted, the next solution was hyphens.

www.how-to-make-money-online.com

From there it was .info, .org, .co, and .wtf.

People built small empires around these exact match domains. Some of us still are, in Bing. Google released an update a couple of years ago that turned the dial down on the boost it used to give EMDs, largely putting an end to the practice.

While they turned that dial down, there is still a noticeable ranking impact for URLs that contain exact match or partial match phrases. This boost has never been as automatic or as powerful as the old EMD boost, but it certainly provides a benefit. What I mean is, if you are targeting life insurance for smokers, using something like

www.mysite.com/life-insurance-for-smokers

will give you a ranking boost.

It doesn’t always have to be an exact match either. There can be words before or after the phrase such as

www.mysite.com/life-insurance-for-smokers-quotes

www.mysite.com/best-life-insurance-for-smokers

or the phrase can even be separated like

www.mysite.com/life-insurance-quotes-for-smokers

The important thing is that you want to have the words in the phrase show up in the correct order, so

www.mysite.com/best-place-for-smokers-life-insurance

is still beneficial because the keywords are in the URL, but nowhere near as impactful as having the words show up in the correct order for the phrase you are targeting.

With silos, you can take this a step further and give search engines even stronger clues as to what your webpage is about. Using the same target search term life insurance for smokers, the target page could be listed under a category of high-risk life insurance like so

www.mysite.com/high-risk-life-insurance/life-insurance-for-smokers

In this example, some people will tell you that having life insurance show up twice in the URL could be seen as spammy. I have seen no evidence of this in my testing, but it certainly could be something Google goes after in the future. To be on the safe side, you might want to just use a category of high risk, like this

www.mysite.com/high-risk/life-insurance-for-smokers

On a much bigger site, there might even be a subcategory like

www.mysite.com/insurance-types/high-risk/life-insurance-for-smokers

Let’s expand this a little bit so you can see the big picture.

An example of a site listing life insurance types might include

Term Life

Whole Life

Funeral

Mortgage

High Risk

These would all be categories under insurance types. Under the high-risk category we have been discussing, you might have

Life insurance for smokers

Senior life insurance

Life insurance with no medical exam

Life insurance for people with pre-existing conditions

By structuring your URLs this way, you are giving all kinds of relevance and LSI clues to the search engines. From just that one URL, search engines can easily infer that life insurance for smokers is a type of high-risk life insurance.

Not only does this increase the chance of your webpage ranking for life insurance for smokers, but it also increases its chances of ranking for many related terms. It will help your high-risk category page rank for terms like high-risk life insurance, and it will help your higher-level category page rank for phrases such as types of life insurance.

If you are building an HTML site, you will want your folder structure set up the same way. I have not tested this myself yet, but a lot of folks I trust believe it gives an additional signal.
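For an HTML site, the same silo could map directly onto folders, something like this sketch (the file names are hypothetical):

```text
insurance-types/
├── term-life/
├── whole-life/
├── funeral/
├── mortgage/
└── high-risk/
    ├── life-insurance-for-smokers.html
    ├── senior-life-insurance.html
    ├── life-insurance-with-no-medical-exam.html
    └── life-insurance-for-people-with-pre-existing-conditions.html
```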

The other place where this URL structure can help you rank web pages is when you are building links. Links from pages with a strong URL structure, like the ones discussed above, carry extra value. In a lot of cases, we will have no control over the URL. One of the only times we do is on a guest post, where we can select the title of the post, which is likely to end up in the URL of the page. Of course, on our own private network sites, we have complete control over the URL structure and can take advantage of this as well.

Although it is too often overlooked, putting some thought and planning into your URL structure can have a big impact on your rankings.

Next, let's talk about something you can no longer ignore: you have to change to HTTPS, and soon.

Google has been steadily ramping up the pressure on website owners and administrators who are not using HTTPS to make the change. They announced that HTTPS was being included as one of their 200+ ranking signals. Although no evidence has ever been presented showing it has a positive impact on rankings, many companies quickly made the change after Google’s announcement.

I have resisted making the change for my own sites or those of my clients, but a more recent announcement from Google has altered my view on whether or not you should change to HTTPS. You absolutely must make the change to HTTPS if you have not already.

What is HTTPS?

Hypertext Transfer Protocol Secure (HTTPS) is the “secure” version of Hypertext Transfer Protocol (HTTP). HTTP is the protocol over which data is transferred between your web browser and the website you are connecting to. With HTTPS, all communication between your browser and the website is encrypted. It is most commonly used to protect confidential online transactions, such as accessing your online banking or submitting credit card information to make a purchase.

Communications sent over regular HTTP connections are just plain text that can be read by any hacker with the skills to intercept the connection between your browser and the website. If the communication is an order form with credit card numbers, checking account details, or other personal and financial information, it presents an obvious danger and can leave you a victim of fraud.

With HTTPS, all communications are securely encrypted. Even if a hacker breaks into your connection, they would not be able to decrypt the data passed between you and the website.

HTTPS pages typically use SSL (Secure Sockets Layer) to encrypt communications. SSL uses an asymmetric Public Key Infrastructure (PKI) system. A PKI makes use of two ‘keys’ to encrypt communications: a public key and a private key. Information encrypted with the public key can only be decrypted with the private key, and vice versa.

The private key is kept on the web server. The public key is provided to anyone who needs to be able to decrypt information that was encrypted with the private key.
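To make the idea concrete, here is a minimal sketch of that round trip using Python's third-party cryptography package. It illustrates asymmetric encryption in general, not the exact steps your web server performs during a visit:

```python
# pip install cryptography -- an illustrative sketch of asymmetric encryption
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this half can be handed to anyone

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt...
ciphertext = public_key.encrypt(b"4111 1111 1111 1111", oaep)

# ...but only the holder of the private key can decrypt.
print(private_key.decrypt(ciphertext, oaep))  # b'4111 1111 1111 1111'
```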

When you request an HTTPS connection to a page, the website first sends its SSL certificate to your browser. The certificate contains the public key needed to begin the secure session. Your browser and the website then initiate what is referred to as an SSL handshake. All information passed between your browser and the website is now encrypted.
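If you want to see a certificate for yourself, Python's standard library can fetch one; www.example.com below is a stand-in for any HTTPS site:

```python
import socket
import ssl

hostname = "www.example.com"  # stand-in for any HTTPS site

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the SSL/TLS handshake described above
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
        print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
        print("Expires:  ", cert["notAfter"])
```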

Does HTTPS Make My Website Secure?

Many people believe that changing to HTTPS protects their website from hacking. This is not true. If you migrate your website to HTTPS, it is no more secure from hacking than it was before the migration. HTTPS only secures the connection between a website and its visitors.

Why Do I Need to Switch to HTTPS Now?

As I said in the beginning, I was not an advocate for websites migrating to HTTPS. Of course, if they were accepting orders or transmitting personal or financial information, they should have been using HTTPS already. For everyone else, there was no benefit to changing.

Despite Google saying that they were using it as a ranking signal, nobody has been able to run a test showing an increase in rankings or organic search traffic. In fact, many people who made the switch actually saw a drop in traffic. It is not just a matter of installing an SSL certificate; there are several things you must do to migrate properly and avoid losing traffic.

Until recently, SSL certificates were an added expense, and sometimes quite costly. If you were not transferring sensitive data, why incur the extra cost of an SSL certificate?

To me, the risk of something going wrong in the migration to HTTPS causing a drop in traffic along with the extra cost made the process not worthwhile.

So what changed? Why am I now telling you the change to HTTPS is a must?

On February 8th, 2018, Google posted this article to their Security Blog: A secure web is here to stay. In the post, they shared that beginning in July of 2018, with the release of Chrome 68, Chrome will begin marking ALL sites using HTTP as “not secure”. They shared this image detailing how Chrome displays HTTP URLs now and how it will display them starting in July:

Treatment of HTTP pages

I would expect Firefox to follow suit. In fact, they may even rush to make a similar change before July to beat Google to the punch. Both Chrome and Firefox currently display warnings if you try to sign in to a website that is not using HTTPS. Here is an example:

treatment of login to non-HTTPS WordPress site

With Google’s push for HTTPS, I will not be surprised if their “Not secure” notification in Chrome becomes more pronounced than what you see in the image above. I could see them making the text red and perhaps even having a popup warning display.

Whether you are running a local business, a national brand, or a small affiliate website, take a moment to think about the average visitor to your website. Are they as technologically savvy as you? Do they keep up to date with internet protocols and Google updates? For most of us, the answer to both questions is going to be a resounding no.

Now, think about what people constantly see and hear in the news about identity theft, stolen financial information, viruses, and the like, and how they must be careful and vigilant when visiting websites. When they visit your website and see a message that says Not secure, are they going to know the message simply refers to data transferred between their browser and your website, or are they going to worry that it means data could be stolen from their computer, or that they could pick up a virus or malware?

How much of your traffic could be scared away because of the Not secure warnings their browser is giving them? Could it be 5%? 10%? 20%? More? Is it even worth the risk?

How to Change to HTTPS

There are a few steps you are going to want to follow to migrate from HTTP to HTTPS. I’m listing the main ones here, but based on your website’s unique situation there might be additional steps you need to consider. For the majority of website owners, these steps will be enough to get you through the migration in one piece.

Crawl the Website

You might have a website with only a few pages or one with hundreds or thousands of pages. Either way, you are going to want to know what you are starting with and have something to compare the end result to. You also want to make note of any independent sections of your website that might need additional attention, or that might stop working when you migrate, such as payment gateways, external scripts, membership scripts, downloads, etc.
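Dedicated crawlers like Screaming Frog make this easy, but for a small site, even a short script will do. Here is a minimal sketch assuming the third-party requests and beautifulsoup4 packages, with www.mysite.com standing in for your domain:

```python
# pip install requests beautifulsoup4 -- a minimal single-site crawler sketch
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "http://www.mysite.com/"  # hypothetical starting URL
host = urlparse(START).netloc

seen, queue = set(), [START]
while queue:
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)
    resp = requests.get(url, timeout=10)
    print(resp.status_code, url)  # keep this list to compare after migration
    if "text/html" not in resp.headers.get("Content-Type", ""):
        continue
    for link in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
        target = urljoin(url, link["href"]).split("#")[0]
        if urlparse(target).netloc == host:  # stay on our own site
            queue.append(target)
```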

Check Your Rankings

You will never know every search phrase your site ranks for, but you are going to want a solid list giving a general overview of where your rankings stand, broken down by category. You are likely to see some ranking fluctuations as you make the change. That is unavoidable. Tracking your rankings may help you identify issues that were not resolved during the migration.

If rankings recover except for keywords in one category, that can help you know where to start looking for a potential problem. Without the rank tracking, you are just looking at a drop in traffic with no idea where to start.

Obtain an SSL Certificate

The next step is to obtain and configure an SSL certificate on your server. There are plenty of solid providers, and your web host might sell them or have recommendations for you.

Once you obtain it, follow the instructions they provide for deploying it on your server. Again, a web host will normally help with this if you ask.

For a really simple option, I recommend Let’s Encrypt. Let’s Encrypt offers free SSL certificates, and many web hosts support them directly through cPanel. It literally takes only a few button clicks to obtain and deploy your certificate on the server. I made a quick video in which I migrated this website using Let’s Encrypt in less than 10 minutes to show how easy it is to use.

If your web host does not support Let’s Encrypt, I would seriously consider switching web hosts. Let’s Encrypt certificates can be installed manually like any other SSL certificate without the cPanel integration, but the cPanel integration makes it mind-numbingly easy.

Implement 301 Redirects

Once your SSL certificate is installed on the server, it is time to officially make the change and migrate your site to HTTPS. You will want to set up server-side 301 redirects to the HTTPS versions of your URLs. It depends on your server, but for most websites this will be handled through your .htaccess file. If you are on a Windows server, the method is a little different. You can find tutorials online for both.

In addition to redirecting to HTTPS, you will want to make sure your site maintains its preferred WWW or non-WWW version through the redirects as well.
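On an Apache server with mod_rewrite enabled, the HTTPS redirect in .htaccess commonly looks something like this; treat it as a starting sketch and adapt it to your own setup:

```apache
# Send every HTTP request to its HTTPS equivalent with a 301 redirect
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

The WWW or non-WWW rule works the same way, with a condition on %{HTTP_HOST} instead of %{HTTPS}. Once the rules are in place, requesting any HTTP URL should return a 301 status with a Location header pointing at the HTTPS version.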

When you are finished, visitors to your site should not be able to access any HTTP versions of your URLs. If both versions are accessible, this can confuse the search engines, create duplicate content issues, and cause your HTTPS pages to not rank in search engines.

The 301 redirects not only make your HTTP pages unreachable, they also tell search engines to credit all authority, relevance, and link power to the HTTPS versions of the pages. If you want those to rank, the 301 redirects are vital.

After The Migration

After completing the switch to HTTPS, there are a few common problems you are going to want to look for and fix where appropriate.

The first is mixed content. Mixed content occurs when an HTTPS page loads an image, script, or some other resource over plain HTTP. Any unencrypted resource potentially gives hackers a way into the data being transferred between your website and its visitors. It happens most often with images and can easily be fixed by adjusting the URL the image is called from.

The most common sources of mixed content are internal media (images, videos, and audio, including those referenced inside JS and CSS files), iframes, JS and CSS files inside the HTML code, web fonts, and internal links.
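You can spot most of these by scanning the rendered HTML for anything still requested over plain HTTP. A minimal sketch, again assuming requests and beautifulsoup4, with a hypothetical page URL:

```python
# pip install requests beautifulsoup4 -- flag resources still loaded over HTTP
import requests
from bs4 import BeautifulSoup

page = "https://www.mysite.com/"  # hypothetical page to check
soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")

for tag, attr in (("img", "src"), ("script", "src"),
                  ("link", "href"), ("iframe", "src")):
    for el in soup.find_all(tag):
        url = el.get(attr, "")
        if url.startswith("http://"):
            print(f"Mixed content: <{tag}> loads {url}")
```

This only catches resources referenced directly in the HTML; URLs buried inside CSS and JS files need a separate pass, and your browser's developer console will also flag mixed content as the page loads.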

Next, you want to re-crawl the website to make sure all HTTPS versions of your URLs are present and returning the proper status codes.

Double check that all systems are working correctly, especially those you identified before the migration that might need some additional attention (payment gateways, membership scripts, downloads, etc.).

Now you need to let Google know that you have moved. You will need to set up the HTTPS version of your site in Search Console. Google treats these as separate properties. If you are using a Disavow File, do not forget to migrate it to the new property.

You Are Finished

That’s it. You are done. That wasn’t so bad, right?

Just be sure to continue testing the site and looking for any issues.

I want to start off by making one thing crystal clear. Bounce rate is not a ranking factor.

I am going to say that again. Google does not factor bounce rate into its rankings. They never have, and they never will. I’m going to both explain why that is and show you a real-life example of a page with great rankings that has a bounce rate over 90%. I know people usually don’t like to share their websites publicly, but I am going to share one just so we can put this debate to rest.

What is Bounce Rate?

Before we get into that, we need to explain, for those who do not know, what bounce rate is. A “bounce” occurs when someone enters your site through one of its pages and leaves without visiting any other page on the site.

If someone performed a search in Google, found one of your pages ranking at the top, clicked on it, and then closed their browser, typed a new URL into the address bar, hit one of their bookmarks, or in some other way exited the site without clicking through to another page, they “bounced”.

It does not matter how they found your site. I used a search in Google in the example above, but they could have entered your site from an AdWords ad, a link shared on Facebook or Twitter, a link sent to them in an email, or even just entered the URL directly into their browser. As long as they only visit that initial page without visiting any other pages on the site in that visit, the visit is registered as a bounce.

The percentage of visitors who do that is the bounce rate. You can have a bounce rate on a specific page or across an entire website.
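The arithmetic behind the metric is simple; the session counts in this sketch are made up for illustration:

```python
# Bounce rate = single-page sessions / total sessions
total_sessions = 4200         # hypothetical sessions for a page
single_page_sessions = 3900   # sessions that viewed only that page

bounce_rate = single_page_sessions / total_sessions
print(f"Bounce rate: {bounce_rate:.1%}")  # Bounce rate: 92.9%
```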

Is Bounce Rate A Search Engine Ranking Factor?

I am going to say this again so there is no misunderstanding. Bounce rate is not a ranking factor for search engines.

There are two very simple and very logical reasons why Google and other search engines do not factor bounce rate into their ranking algorithm.

The first reason is simply a matter of access. Google does not have access to bounce rate data for the majority of the webpages in existence. For some reason, many people seem to think of Google as an omniscient power that sees and knows all. That is just not the case.

The only way for Google, or any other search engine, to know the bounce rate of a given page is if you grant them access to that visitor data. They can see your visitors’ behavior if you are running Google Analytics on your site (most websites are not). They have never verified this to my knowledge, but a lot of people also suspect they can gather similar data if you are serving AdSense ads on your pages. Again, that has never been confirmed, but it would hardly be surprising.

If you want to go all conspiracy theorist, you can also say that they are collecting data on everything you do in your browser if you are using Google Chrome. Google Chrome is currently estimated to control about half of the web browser market share.

Even if I grant you that they are pulling bounce rate data from Chrome users, at most 10% of webpages are running Google Analytics, and much of that overlaps with Chrome’s roughly 50% share, so at best they might be capturing data on 50-55% of web visits.

That leaves a lot of missing data.

There are over 200 ranking signals in Google’s algorithm, all of which carry different weightings.

When a search query is made, Google is pulling data from its index and comparing all of the websites it has indexed based on those 200 signals.

If Google were using bounce rate data, how would the algorithm compare a webpage for which it has bounce rate data against a webpage for which it has none? Which one is performing “better” for that ranking signal?

Still not convinced?

Okay, let’s say you have read this far and you are still one of those believers that Google knows everything. They have the bounce rate data for every webpage because they are everywhere and into everything.

Fine.

Let’s look at the second reason that Google is not using bounce rate in its ranking algorithm.

Bounces are not always bad. They are not always a signal that there is something wrong with the page.

For some reason, many marketers have this notion stuck in their heads that all bounces are bad. They are not.

Let’s say you are running an emergency plumbing service and repair business. Someone in your community has a toilet that has suddenly started to overflow, and they cannot fix it. They search for a local plumber in Google and see your page ranking first. They click the search result, which brings them to the home page of your site. They like what they see and pick up the phone to call you (or your office) to find out how fast you can help with their problem.

They never visited another page on your site, so they will register as a bounce, but they did exactly what you wanted them to do, right? Your webpage converted them immediately into a phone call and a possible job.

That’s a good thing.

And why should Google see it differently or ding your site for that?

The same thing could be said if I am running an affiliate site. Usually an affiliate site is set up to drive traffic to a landing page and get visitors to click on an affiliate link. If they do not browse around on your site but do click on the link, they are going to register as a bounce. Again, there is nothing wrong with that. They did exactly what you were hoping they would do.

We obviously do not have access to their analytics to prove it, but look at a site like Wikipedia. I would venture a guess that their bounce rate is quite high. People generally end up at Wikipedia because they were looking for an answer to a specific query and one of Wikipedia’s pages came up. They visit the page and find the answer they were looking for. Some might click on an internal link on the page if they see something that interests them. The vast majority most likely do not and simply leave.

Yet, Wikipedia ranks for everything.

So How About an Example That We Can See?

I know that marketers usually hate to publicly share their websites. They fear that people will reverse engineer them, report them to Google for some obscure reason, or steal their niche. I understand all of those concerns, but I am going to share one publicly anyhow.

This is a site that has been largely torn down because parts of the project were abandoned. There is nothing special about the site. It is very basic. The majority of the content has been removed. It is in the opiate addiction niche.

Despite the changes made to the site, there is one page that is ranking for a moderately competitive group of keywords around the topic of The Thomas Recipe. The Thomas Recipe is a collection of minerals, herbs, vitamins, and prescription medication taken on a specific regimen that will hopefully help relieve some of the severe symptoms of opiate withdrawal.

The truth is, The Thomas Recipe really does not work and will only help someone who is going through the mildest of withdrawals. The article points that out, which is why I have left it up. Hopefully, it points people away from this old wives’ tale of a remedy and encourages them to seek out something else that will actually help them.

It is currently ranking #2 for both the Thomas Recipe and Thomas Recipe, just behind Drugs.com, which is an authoritative forum in the drug addiction niche.

Despite being #2, it currently holds the featured snippet above the SERP. This goes back and forth periodically with Drugs.com (and drastically changes the traffic, I might add).

SERP of search 'the thomas recipe'

I’m sharing a screenshot because I know that by sharing this publicly, all kinds of bad things could happen to those rankings.

It also ranks #1 or #2 for all kinds of terms like what is the Thomas Recipe, how to use the Thomas Recipe, does the Thomas Recipe work, what is the Thomas Recipe schedule, etc.

Again, I would characterize this as a moderately to lightly competitive keyword. There is not a lot of money in it unless you happen to manufacture a remedy for opiate withdrawal. I just tossed AdSense ads on the site a few months back. It is making about $100 per month off of the ads. Nothing big, but hey, it’s $100 a month for a page that I have not touched in almost a year.

Ranking just below the site is OpiateAddictionSupport.com, which is a pretty authoritative site in the drug addiction niche, and Withdrawal-Ease.com which is a supplement manufacturer that has been around since 2008.

So why am I sharing all of this?

Because I want to show you a webpage that is ranking very highly despite an astronomical bounce rate. And here is the proof.

Google Analytics screenshot of data for past 30 days

A bounce rate over 93%. You would almost have to try to get a bounce rate higher than that, right?

And just so nobody thinks I did something to skew the data over the past 30 days specifically for this article, here is roughly the last 6 months of Google Analytics data.

screenshot of Google Analytics for past 6 months

The bounce rate has held steady over 92%.

And here is the ranking over that time period.

ranking screenshot from RankWatch.com

If bounce rate were a ranking factor, it is highly doubtful that this page would be given the featured snippet above the search results or be ranking #2 for these search terms. If Google felt that bounce rate was in any way an indication of how visitors felt about a webpage, we probably would not see a page like this anywhere in the top 50 or top 100 (it is tough to get much higher than 92%!), much less the top 2.

Does That Mean Bounce Rate Data is Useless?

No. Not at all. Bounce rate data is useful for you. Not for search engines.

A high bounce rate could be indicative of a problem on a webpage. It really depends on what type of website you are running and what you are trying to get visitors to do.

If you are running an ecommerce site where a particular page is bringing in a lot of traffic, but visitors are leaving without browsing other products, adding anything to their cart, etc., there is something wrong with that page or with the traffic coming to it.

Even then, believe it or not, it may not be a bad thing. You always want to take a closer look. I’ve relayed this story before, but I will share it again here.

Before you go reworking a whole page or website, it is important to understand where the bounces are coming from. Who is bouncing, how did they find your site, and what pages are they bouncing from?

I was looking at a client’s website recently and noticed that the bounce rate across the site was 43%. Most of the pages fit around that number, but there was one page where the bounce rate was 89%. That was unusual. Average time on the site was over 6 minutes, but on this particular page it was under 30 seconds.

I took a closer look at the analytics, and found that search traffic was bouncing from that page at a much, much higher rate than traffic from other sources. Generally, if there is something wrong with the page, the bounce rate will be consistent among all sources of traffic. This was not the case.

Through some digging, we found that the page was not only ranking highly for our target keyword, but also for another keyword that was similar in wording but unrelated to the page. In other words, the words in the phrases were close, but the meanings were much different.

I cannot reveal the client’s site, but the difference in keyword phrases would be something like doggy style versus styles of dogs. The words are close, but have two completely different meanings.

The targeted phrase was searched about 500 times per month on average. The untargeted phrase was searched about 12,000 times per month. That’s why the percentage of bounces was so high.

In this situation, it was nothing to worry about. The bounces were coming from untargeted traffic.

This is a perfect example of why you really need to take a close look at what the bounces are actually telling you.
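If you export your session data to a CSV, segmenting bounce rate by traffic source takes only a few lines. Here is a sketch using pandas, with hypothetical column names:

```python
# pip install pandas -- segment bounce rate by traffic source
import pandas as pd

# Hypothetical export: one row per session, with 'source' and 'pageviews' columns
df = pd.read_csv("sessions_export.csv")
df["bounced"] = df["pageviews"] == 1

by_source = df.groupby("source")["bounced"].mean().mul(100).round(1)
print(by_source.sort_values(ascending=False))
# A source bouncing far above the others points to untargeted traffic,
# not a broken page.
```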

The next time someone tries to tell you that bounce rate is a ranking factor, share this post with them.

One question I get asked a lot is how to write a good meta description tag for a webpage. You will hear many “guru” internet marketers preach about writing an enticing description that will encourage searchers to click through to your website from the search engine results page instead of clicking on one of your competitors’.

A few years ago, that was good advice. Today, I’m going to recommend something for you that will make it much easier than struggling to come up with the perfect description.

Do not write meta description tags for your webpages.

There. Done. Easy.

I know, for some of you, this is going to be rather shocking. You have been told time and time again of how important a good meta description tag is. Years ago, many felt it helped your rankings.

Today many people think they are important because they help the click-through rates on your page once you do have some decent rankings.

Well, they no longer play a role in rankings. They are useless from that standpoint. Can they help your click-through rates? Sure.

However, having no description can actually help your click-through rates even more.

Some of you might be thinking, “What the hell are you talking about Mike? How is having no description going to actually get me more clicks?”

The first time this idea popped into my head was a few years ago, when I was studying some of the things that Wikipedia does on its pages. I noticed that they do not use the meta description tag, so I started digging into why. Wikipedia ranks for damn near everything, so they clearly know what they are doing.

The answer is actually pretty simple. Google has become pretty good over the past few years at understanding what a webpage is about.

In fact, even if you have a meta description that you wrote for your webpage, sometimes Google will override it and display one that they feel is more relevant to the search query.

Chances are, if you are doing the right things, eventually your webpage is going to rank for its targeted keywords, as well as other keywords you were not targeting. Sometimes even some pretty obscure words you never thought of.

The advantage of not having a predefined meta description is that Google will select one from the content of your page that it feels best matches the search query. Instead of one standard meta description for every searcher, you now have what I call a dynamic meta description, tailored to every search user who might find your page in the SERPs.
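For reference, the tag you would otherwise be hard-coding is a single line in the page’s head; the wording here is invented for illustration:

```html
<!-- A hand-written description. Omit the tag entirely and Google
     assembles a snippet from the page content to match each query. -->
<meta name="description" content="Compare life insurance quotes for smokers and find out which carriers offer the best rates.">
```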

The only exceptions I make to this tactic are for a webpage that is laser-targeting a specific search phrase (which is very rare) and for pages like contact or about pages. Those descriptions I will write myself.

Outside of those exceptions, I never use meta descriptions, and have not done so for several years now.

This is an interesting little test that Google is performing, and one well worth keeping your eye on. My question is, if it does go live, will the average smartphone user notice it or even know what it means? I’m going to guess probably not. If they do, though, you need to start making your content mobile-friendly right away.

You can read the article here.

Matt Cutts has confirmed that Googlebot can now read AJAX & JavaScript for indexing some dynamic content on webpages.

Statement from Matt Cutts about Google's ability to read AJAX and JavaScript

What does that mean? Many websites that accept comments have been using some kind of scripting software. Previously, Google could not read these comments. One common example that many people have implemented is the Facebook comments script. Google can now index these comments, so they become part of the content of the website, where previously they were ignored.