Switch Monkey

Here’s another one of our latest website designs, and one of our largest projects in quite some time. Switch Monkey has been one of our longest-running projects, with the back-end programming alone taking a number of months to complete.

It’s not all about design and marketing services for our clients; we also have great programmers to completely customize your website’s user experience and functionality!

Thank you to the entire development, programming, and design teams involved in this one, thank you to the client, and thank you to all of the current and new users of the website! We really hope you enjoy the site as it goes live in the near future, especially if you are a child or parent.

Switch Monkey Home Page | How Switch Monkey Works | Switch Monkey For Kids

Four SEO Ranking Factors – On-Page Optimization, Unique Linking Domains, Global Link Popularity, Niche Links

Earlier in the week, we looked at four ranking factors that website owners can analyze to determine how their competition is doing in the search engine results. If you missed it, those factors were the age of a website, the exact match domain bonus, anchor text used to link to pages on a site, and the growing field of social metrics. In this article, we’ll take a quick look at on-page optimization, and then look at three factors all relating to link popularity.

On-page optimization has lost some of its importance over the years: getting it wrong can hurt your site, but getting it right will only help so much on its own. Cover your bases like the title and meta tags, optimize your headings and footer links, use relevant and appropriate internal linking in the navigation and body copy, and write good body copy. Many of your closest competitors will be doing at least this much, as these have become standard best practices for designing and optimizing a website. If you don’t have them taken care of, your site’s rankings can suffer, but you won’t get a huge bonus just for doing the on-page optimization well.
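
As a rough illustration (the page title, description, and link below are placeholders, not taken from any real site), covering those on-page basics might look something like this:

<head>
<!-- A descriptive title tag and meta description cover the basic on-page tags -->
<title>Handmade Leather Wallets | Example Shop</title>
<meta name="description" content="Browse handmade leather wallets, crafted in small batches and shipped worldwide.">
</head>
<body>
<!-- A keyword-relevant heading and a descriptive internal link in the body copy -->
<h1>Handmade Leather Wallets</h1>
<p>Every wallet is cut and stitched by hand. See our <a href="/leather-care-guide">leather care guide</a> for tips on keeping yours looking new.</p>
</body>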

When looking at links, there are three different ways to segment them: unique linking domains, global link popularity, and local set links. We’ll look at all of these in more depth in a second, but it should be noted that sites that rank well usually do the best in all of these sectors. Out of a large number of links, there will be quite a few unique domains, a lot of links to and from popular pages, and a niche that interlinks with other related websites.

The first is unique linking domains. SEOBook used to have a Link Harvester tool, which is now retired, but there are other reputation and analysis tools out there that can tell you how many sites are linking to another, and what types of links those are (.gov, .edu, .com, etc.). The number of unique linking domains may be the most important factor in how well a website ranks, so this is a vital metric to track.

There is also the global link popularity of a website to consider. This is the total number of links to a website (not the total number of unique domains, like above), the PageRank or MozRank of those websites, and how authoritative the pages linking to a site are. Moz does a good job ranking websites in a relative sense with page and domain authority, and RavenTools is an excellent program to pull in lots of competitive intelligence data from Moz, MajesticSEO, and others.

Finally, let’s consider the niche-specific links that a site can get, also referred to as local set links by Todd Malicoat. These links are from sites within the same industry, broadly speaking, as your or your competitor’s website, and are extremely important depending on your industry or locality. For local sites, local citations are vital, and for competitive industries niche-specific links are incredibly useful for rankings. Hubfinder from SEOBook and Touchgraph are two tools that can be used to analyze the most relevant links for a website.

That wraps up our investigation into the eight most important ranking factors for websites. If yours isn’t doing as well as you would like, you can start analyzing your performance in these factors, and compare it to that of your competitors. You can also have us do that type of search optimization analysis for you by contacting us. Future articles will look at ways to improve your site in all of the factors we discussed, either on your own or with professional online marketing help.

Four SEO Ranking Factors – Age, Domain Match, Anchor Text, Social Metrics

Ranking a website for a particular keyword or phrase can take quite a bit of work. However, one way to get the drop on your competition is to do some competitive analysis, looking at various ranking factors and how the websites that currently rank on the first page of search engine results are doing in terms of these factors. Let’s take a quick look at four different factors and how you can research your competitors’ data and see how your site stacks up.

The first, and one of the least controllable, aspects of ranking is the age of a website. If a site was registered in 1996, and yours was registered in 2012, the older site has a much longer track record. Search engines will tend to rank that site better than yours simply on the basis of it being around for a longer period of time. This is one reason why auctions can be quite fierce on expired domains that had been online for several years before the owners let them expire. The easiest way to determine how long a site has been online is to check its history at the Web Archive.

Another ranking factor that is not controllable is the bonus that websites get from having an exact match domain. If the website’s domain name matches a search query exactly, it is likely to get a boost in its rankings, as long as there is some relevant content and at least minimal link building for the site. While Google has said that domain name matches are not as important as they once were, exact match domains continue to rank higher than other websites with similar content but a different domain name.

Third, the anchor text used for links pointing to a site is another important ranking factor. Essentially, the search engines are attempting to discover what other sites say about your site by looking at the words those sites use when linking to yours. Open Site Explorer does a good job of analyzing these links and giving a representation of the anchor text, and MajesticSEO performs the same type of analysis. The important point is to use relevant anchor text, but also to distribute it across several different relevant and related phrases.
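
As a quick illustration (the URL and page here are made up), descriptive anchor text tells the search engines far more about the destination than a generic phrase does:

<!-- Generic anchor text: says nothing about the page being linked to -->
<a href="http://www.example.com/leather-care-guide">click here</a>
<!-- Descriptive anchor text: the link words themselves describe the destination page -->
<a href="http://www.example.com/leather-care-guide">leather care guide</a>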

The final factor we’ll consider here is the newest one, social metrics. While social media links are almost always nofollow and do not pass PageRank to your site, large numbers of shares, likes, favorites, and so on do indicate an engaged audience and popular content. Social reviews are also taken into consideration for local search, but even Facebook Likes and Shares can be used when ranking content from sites that focus more on research. Followers of a social account may not help your main website, but links to and likes of a particular page on your site may make it more likely to be ranked higher, regardless of the nofollow status of the link.

In our next article later this week, we’ll look at a number of other ranking factors that primarily have to do with links and on-page optimization of your website. Check back soon for that second installment in our short series on ranking factors, and make sure to look at all of the internet marketing services that Traffic Motion offers, including search engine optimization for national, regional, and local websites.

Link Juice and Keeping Pages Out of the Search Engine Index

On any website there will be at least a handful of low-quality, unimportant pages from a search engine optimization perspective. What should you do with those pages? Some of them are important from a legal standpoint (like the privacy policy and terms of service pages), while others are nice-to-haves, like very small product add-on pages. They may provide value to people already on the site, but have little value if they show up in the search engines.

SEO link juice draining away
Don’t let your SEO efforts drain away on unimportant pages.

There are three main ways to deal with such pages, and we’ll look at each in a little more depth. The first is using the site’s robots.txt file to disallow entire directories of low-quality pages. The second is using the "noindex, nofollow" robots meta tag on a page-by-page basis. The third is adding a rel="nofollow" attribute to individual links.

Robots.txt Disallow

Disallowing pages from being indexed by search engines is the best solution when there are pages that are mostly unimportant to the search engines. Applying the concept of link juice or equity, the juice that flows into a page is then distributed to the outgoing (internal or external) links on that page. Disallowing pages such as privacy policies, terms of service, and other low-value pages helps ensure that the link equity flows to the more important pages.

The easiest way to disallow a page is by planning ahead, putting all of those low-value pages into their own directory, and using a robots.txt file to mark that directory as disallowed. Here is one way to disallow all robots from seeing two different directories:

User-agent: *
Disallow: /product-color-differences/
Disallow: /help-pages/

There are also other ways to keep link juice from flowing to certain pages, including rel="nofollow" on anchor links or <meta name="robots" content="noindex, nofollow"> in the <head> section of a webpage. These can be applied on a per-link or per-page basis, whereas robots.txt is preferable for entire directories. The link- and page-level methods are good for fully developed sites whose low-value pages already exist and are not in their own directory, while creating an /unimportant-pages/ directory and disallowing it in the robots.txt file is a great idea for new websites where the information architecture can be planned in advance.

NOINDEX, NOFOLLOW Meta Tags

For existing pages, there’s a meta tag that can be placed in the <head> section of the HTML: the robots meta tag with its content set to "noindex, nofollow".

<head>
<title>Traffic Motion Privacy Policy</title>
<meta name="robots" content="noindex, nofollow">
<meta name="description" content="This is the Traffic Motion website's privacy policy.">
</head>

The "noindex" value tells robots not to index the current page (so only put it on pages you don’t want indexed). The "nofollow" value tells them not to pass link popularity through any of the links on that page. Web designers and search optimizers can also use "noindex, follow" or "index, nofollow", or any other combination of index/noindex and follow/nofollow, though only some combinations make sense.
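
For example, "noindex, follow" is a sensible combination for a page you want kept out of the index while still letting popularity flow through its outgoing links (the page title below is just an illustration):

<head>
<title>Thank You for Your Order</title>
<!-- Keep this page out of the index, but still follow its links and pass popularity through them -->
<meta name="robots" content="noindex, follow">
</head>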

Rel="nofollow" Link Attribute

There is also a rel=”nofollow” attribute you can attach to links in the <body> section of the website, which will contain all of the navigation, headers, footers, and body content.

<a href="http://www.trafficmotion.com/privacy-policy" rel="nofollow">Traffic Motion's privacy policy</a>

That command will tell robots not to follow that particular link or pass any link popularity to it.

Which One to Choose?

So, if I had an entire directory I did not want link popularity to flow to, I’d put all the low-quality pages in that directory and use a robots.txt Disallow. If I had a single page that I didn’t want indexed, I’d use the noindex meta tag. If I had a page where none of the links should pass popularity, I’d use the nofollow meta tag. And if I had just one link, or a handful on a particular page, that I didn’t want to pass popularity, I’d use rel="nofollow" on those links.

These are three easy ways to make sure that your site’s link juice is flowing to the most important pages. Don’t waste that link juice unless you have a good reason or tons of authority and popularity to spare. For most small websites, doing even a little bit of link equity sculpting in this manner can make their external links that much more powerful when that link juice enters their site and flows through the links on the website itself.

Site vs. Search Structure for Websites

There are two considerations when planning the design and information architecture of a brand new (or even a redesigned) website: site structure and search engine structure. The old days of frame-based websites are over, but search engines still have a difficult time reading Flash or Java sites (though they are getting better). Even for sites built in HTML or using PHP, it is still important to plan ahead for both what the users will see and what the search engines can follow.

matrix green site architecture
Don’t let your poor architecture turn into a search engine nightmare.

There are several differences between the structure of a site for search engines and for users. Since search bots are fairly dumb and will only follow the code on the page, a site must be accessible without a mouse and kept relatively simple if it is to be spidered well, and its content needs to be described accurately, with a preference for text over images. This is also why alt attributes on images are strongly recommended, both from an SEO standpoint and a legal one (equal website access for the blind).
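
For example (the file name and product here are placeholders), a descriptive alt attribute tells both search engines and screen readers what an image contains:

<!-- The alt text describes the image for search engines and for visitors using screen readers -->
<img src="/images/red-canvas-backpack.jpg" alt="Red canvas backpack with leather straps">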

Rich internet applications and anything else that adds complexity may make the architecture more convenient for users while making it less accessible to search engines. User experiences that are generated dynamically, such as with Flash, can keep a more complex site’s architecture to a minimum (even keeping the entire user experience on one “page”), while making it difficult for search engines to determine what the webpages are about, not to mention how to rank them.

Site and search architecture can come into conflict depending on the keywords and brand names used on the site. A simple “Home” or “About Us” link may work well for directing users to important pages, but the search engines would rather those links use relevant keywords for better categorizing and ranking on results pages. Of course, turning your homepage link into the keyword you would like it to rank for could cause confusion as users look for a “Home” link that does not exist.
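
One common compromise (the pages below are only an illustration) is to keep a plain "Home" link for visitors while giving the deeper category pages keyword-rich anchor text:

<ul>
<!-- A plain "Home" link keeps the navigation familiar for visitors -->
<li><a href="/">Home</a></li>
<!-- Keyword-rich anchors help the search engines categorize the deeper pages -->
<li><a href="/handmade-leather-wallets">Handmade Leather Wallets</a></li>
<li><a href="/leather-care-guide">Leather Care Guide</a></li>
</ul>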

With simple sites, the site and search architecture may be relatively easy to map out beforehand or change later. The more complex a site gets, the more important it becomes to plan the architecture before content is created for the chosen keywords. Blogs and ecommerce websites are two examples where establishing the architecture up front can make it easier for both users and search engines to find the higher-level categories and individual pages.

Of course, planning all of this in advance is quite a bit easier than redesigning an existing site with a new structure. Search engines will eventually get used to the redesign of a site, but changing it all at once from a static experience to a dynamic one, or vice-versa, can easily cause a drop in rankings that may last weeks or months. Visitors have a tendency to adapt to changes much faster than search engine robots, despite users usually being more vocal about the changes.

Overall, a simple site structure is almost universally preferable to a complicated experience, at least from the search engines’ perspective. However, some sites just cannot keep things that simple, especially automobile manufacturers and other consumer goods companies where a large amount of customization is essential for an ideal customer experience. For most websites, though, simpler is better, and planning in advance can reduce headaches down the road.