16 DIY Magento SEO Tweaks

Over the past year, I’ve been learning more and more about optimizing a Magento store. Unfortunately, Magento is not search engine friendly, at least up to Community Edition 1.8 and Enterprise Edition 1.12.

Later releases of the platform may have already solved some of these problems, but a lot of Magento sites out there are still running these older versions. Even worse, upgrading from one version to the next can cost several tens of thousands of dollars, and there is little incentive to do that if the only benefit is some minor SEO improvements.

I’ve divided this post into three sections. The first covers speeding up Magento, the second covers on-site improvements (including a subsection on structured data), and the final section covers collecting more data from third-party tools like Google Analytics and the webmaster accounts from Google, Bing, and Yandex.

Speeding Up Magento

Server Speed – Hardware & Software

The first thing you should do is either read Magento Site Performance Optimization, or give a copy of the book to your IT team or Magento developers. Not every host will let you run HHVM, and each may offer a different PHP caching solution, but you’ll want to familiarize yourself with how the server’s hardware and software can improve your store’s speed.

Improve Speed from the Backend


There are a lot of little tricks and buttons to press in the Magento System Configuration panels, and some of them can deliver useful incremental gains in speed.

Disable These Four Caches

Oddly enough, disabling a handful of Magento’s caches may actually speed up your site. The reasoning is counterintuitive, but makes perfect sense once you realize the database is the bottleneck at which Magento can get stuck. Allowing Magento to generate these resources on the fly, rather than querying the database for cached versions that may not exist, can reduce (or at least stabilize) your server response time.

.htaccess Tweaks

Whether you’re on Nginx or Apache, there are a number of tweaks, mostly enabling compression and setting expiry headers, that can make your pages load faster for new and returning visitors. On Apache these go in your store’s .htaccess file; on Nginx the equivalent directives belong in the server configuration.
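As a rough illustration of the kind of directives involved, here is a minimal sketch for Apache, assuming mod_deflate and mod_expires are available; the MIME types and lifetimes are examples you would tune for your own store:

# Compress text-based responses before sending them (requires mod_deflate)
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css text/xml application/javascript application/x-javascript
</IfModule>

# Tell browsers to cache static assets so returning visitors skip the download (requires mod_expires)
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType image/gif "access plus 1 month"
    ExpiresByType text/css "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>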

On-Site Improvements for Better Crawling

Setting a Unique Home Page Title

While “best practice” for product pages is “Brand Product | Description | Site Brand,” the home page should have the site’s name first. But with how Magento handles title suffixes, you might have a home page title of “Site Brand | Site Brand” if you don’t use this very simple trick to set the home page’s title tag.

Turn Off Category Paths in URLs

This trick was confirmed by Everett Sizemore, and it’s something I’ve recommended as well. Instead of letting Magento build product URLs with all sorts of different category structures depending on how many categories a product is in, just turn category paths off. It’s an easy fix in the System > Configuration > Catalog > Search Engine Optimization tab. While you’re there, turn on the canonical tags.

Cache Breadcrumbs

Magento generates breadcrumbs the first time a user visits a page (and the breadcrumbs will differ depending on the path that user took to get there). Unfortunately, the breadcrumbs are not added to the page cache, so subsequent users and search engines will not have the benefit of seeing the breadcrumb navigation links. This seems like a strange bug, but it’s an easy fix.

NoIndex Filtered Navigation Pages

In order to keep your site from having tons of duplicate content indexed by Google due to all of the different filtered navigation, use this simple extension to set the robots meta tag to NOINDEX,FOLLOW. That way, you can focus the search robots on crawling and indexing the more important pages – categories and product pages.

NoIndex Internal Search Results

While we’re using the scalpel on our pages to keep them out of the search index, let’s get internal searches out of the index. This easy layout update will set the robots meta tag to NOINDEX,FOLLOW on any search results pages that Google or Bing may crawl.

Structured Data Implementation

Basic Schema.org Markup

I can’t do any better than the job Robert Kent did in his book Magento Search Engine Optimization, so just buy that book, follow the instructions for modifying some of your template files, and enjoy easy, valid structured data for your product pages, logo, and any contact info you have on the site.
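I won’t reproduce the book’s code here, but to give a rough idea of the target, the rendered markup on a product page ends up looking something like this; the product name, price, and image path are sample-data placeholders, not the book’s exact output:

<div itemscope itemtype="http://schema.org/Product">
    <h1 itemprop="name">Nokia 2610 Phone</h1>
    <img itemprop="image" src="/media/catalog/product/nokia-2610.jpg" alt="Nokia 2610 Phone" />
    <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
        <!-- meta and link elements carry machine-readable values without changing the visible page -->
        <meta itemprop="priceCurrency" content="USD" />
        $<span itemprop="price">149.99</span>
        <link itemprop="availability" href="http://schema.org/InStock" /> In stock
    </div>
</div>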

Extend Schema.org with ProductOntology

One thing I can help you add to your basic Schema.org markup is ProductOntology.org data, which provides external reference classes for each of your products. What’s the benefit? I don’t know. What’s the potential downside? None that I can think of.
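The implementation sits on top of the basic Schema.org markup: you add an additionalType property pointing at the ProductOntology class that describes what the product actually is. A minimal sketch, with Mobile_phone as an example class (ProductOntology class names mirror Wikipedia article titles, so you’d pick the one that matches each product):

<div itemscope itemtype="http://schema.org/Product">
    <!-- External reference class from ProductOntology.org -->
    <link itemprop="additionalType" href="http://www.productontology.org/id/Mobile_phone" />
    <span itemprop="name">Nokia 2610 Phone</span>
</div>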

Facebook/Pinterest Open Graph Cards

Facebook and Pinterest both use Open Graph markup to provide metadata that they can read and use in enhanced links and pins of your site’s content. Implementing this type of markup is actually very simple.
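A sketch of what the extra tags in the page head look like for a product page; every value here is a placeholder you would populate from your product and store data:

<meta property="og:type" content="product" />
<meta property="og:title" content="Nokia 2610 Phone" />
<meta property="og:description" content="Affordable entry-level phone with FM radio." />
<meta property="og:url" content="http://www.example.com/nokia-2610-phone.html" />
<meta property="og:image" content="http://www.example.com/media/catalog/product/nokia-2610.jpg" />
<meta property="og:site_name" content="Example Store" />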

Twitter Cards

Similarly, Twitter has its own set of meta tags that you can add to your site. The result is more information contained in tweets of your products, including images, prices, and references to your site’s official Twitter page. Again, the implementation is easy.
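Treat this as an illustrative sketch rather than a definitive template; the card types and price fields Twitter supports have changed over time, and the handle and values below are placeholders:

<meta name="twitter:card" content="product" />
<meta name="twitter:site" content="@examplestore" />
<meta name="twitter:title" content="Nokia 2610 Phone" />
<meta name="twitter:description" content="Affordable entry-level phone with FM radio." />
<meta name="twitter:image" content="http://www.example.com/media/catalog/product/nokia-2610.jpg" />
<meta name="twitter:label1" content="Price" />
<meta name="twitter:data1" content="$149.99" />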

Collect More Visitor and Search Engine Data

Increase Google Analytics Site Speed Sample Rate

By default, Google Analytics samples 1% of your site’s visitors for server response, page load, and other site speed data. For websites with 50,000 monthly page views, this is only 500 samples per month, which is definitely not enough data to determine if your store is fast or slow. This simple tutorial develops a custom extension to up that sample rate to 10%, which gives you a much better gauge of the performance of your website.
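I won’t repeat the tutorial here, but the heart of the change is one extra line in the tracking snippet that Magento outputs. If your store still uses the classic ga.js snippet, the addition looks roughly like this (the property ID is a placeholder):

var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXXX-1']);  // placeholder property ID
_gaq.push(['_setSiteSpeedSampleRate', 10]);  // sample 10% of pageviews instead of the default 1%
_gaq.push(['_trackPageview']);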

Add Webmaster Verification Meta Tags

Magento has a section in the backend to add miscellaneous tags and scripts to every page of the site. But what if you just want to add tags to your homepage and maintain them in a separate block? Google, Bing, and Yandex require a verification meta tag on the homepage, so why include it on all 10,000 other pages of your site? Let’s keep the HTML cleaner and implement the tags on just one page.
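One way to scope the tags to the homepage is to add them through the CMS home page’s Layout Update XML (on the page’s Design tab) instead of the site-wide miscellaneous scripts field. A sketch, with placeholder verification tokens you would swap for your own:

<reference name="head">
	<block type="core/text" name="webmaster.verification">
		<action method="setText">
			<text><![CDATA[
				<meta name="google-site-verification" content="YOUR_GOOGLE_TOKEN" />
				<meta name="msvalidate.01" content="YOUR_BING_TOKEN" />
				<meta name="yandex-verification" content="YOUR_YANDEX_TOKEN" />
			]]></text>
		</action>
	</block>
</reference>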

Google Removes Author Photos from Search Results

The big news in SEO this week was Google’s announcement that author photos will no longer appear in most search results. This is a curious move by Google, and signals the end of one of the big reasons bloggers and writers of all types flocked to Google Plus.


Going forward, instead of an author photo and Google Plus Circles data showing up in the search results for logged-out users, all that will appear in the SERPs is a by-line. Author photos can still show up for users logged in to their Google Plus account, depending on whether they have the individual author in their circles. In Google News, a smaller author photo may still appear.

So why were author photos dropped by Google? According to John Mueller, “we’re simplifying the way authorship is shown in mobile and desktop search results, removing the profile photo and circle count. (Our experiments indicate that click-through behavior on this new less-cluttered design is similar to the previous one.)”

Interestingly, Mueller doesn’t indicate whether click-through behavior is similar on the actual search results that showed authorship photos compared to results without a photo. Just that “click-through behavior” is similar from one to the other.

His explanation also flies in the face of research done by Google and others on eye tracking. Remember this eye tracking study from 2006? With 10 blue links, search behavior is F-shaped, while “chunking” is exhibited around images where rich search results are displayed. Google is now indicating that click behavior is the same on results with images as with only 10 blue links. But eye-tracking behavior has been proven to be different depending on the presence or absence of rich media in search results.

The elimination of widespread author photos in the search results is also curious, as it raises questions about Google’s intentions with its Google Plus social network. One of the big draws of Google Plus was the ability to have one’s photo show up in the search results. This was Google’s enticement for anonymous authors to come out from the cold, join their network, start building relationships, and dive deep into Google Plus.

With Google’s focus on authority, authenticity, and veracity in its semantic search strategy, verifying an account as a real person through Google Plus was a huge piece of the puzzle. According to David Amerland’s book Google Semantic Search, Google specifically gave people an incentive to join Google Plus, establish authorship markup on their articles, and link all of their other social networks together. The main ego-bait was the author photo showing up in SERPs.

Will a simple by-line, rather than the ego-bait of an actual photo, be enough to draw new users to Google Plus? I can't speak for anyone else, but my reaction has been a resounding "Meh." When I do research on Google (something I do increasingly less as I've taken a liking to DuckDuckGo's results), I've always been drawn to authorship photos. It's a lot easier to remember a face than a name, especially since I'm far more likely to skim over by-lines than photos.

Without the increased click-through rates I was seeing on properly verified articles due to the author photo showing on Google, I can’t think of much reason to spend a lot more time developing a presence on the social network. I barely use Facebook, LinkedIn, or Twitter, because there’s almost NO incentive to use them and I have more important things to do than establish a presence on those other networks.

It was Google Plus that I used the most, and the selfish incentive of building my "author rank" over time and having my image appear in more searches was a strong motivation to add people and companies to my circles and engage in discussions with them (even though I was often the sole person commenting on Google Plus posts for many companies).

For now, I’m sure I’ll keep my Google Plus and see where Google takes its authorship markup, but I’ll be far less likely to continue developing my own “contributor to/authorship” profiles around the web at the same pace and with the same enthusiasm at which I once did.

Internet Marketing Tools Collection by TrafficMotion

This list is an ongoing work in progress. The goal is to provide a useful and broad list of internet marketing links, ranging from web analytics and search engine optimization to social media, paid online advertising, conversion optimization, affiliate and email marketing, and more.


Stay tuned for weekly, if not daily, updates to this list of internet marketing tools.

Web Analytics

Competitive Intelligence

Call Tracking

Search Engine Optimization

Local SEO

Social Media

Paid Online Advertising

Keyword Research

Email Marketing

Conversion Optimization

Miscellaneous Tools

How to Noindex Internal Search Results in Magento

If you’re like me, you’re tired of seeing twice as many pages indexed by Google as you have actual pages and products on your ecommerce website. One way to combat this is to use the Robots.txt file to Disallow Google, Bing, and other web bots from crawling the content of your pages.

But that doesn’t actually keep the pages out of the index! It only keeps them from being crawled and having their content indexed and searchable. The page URLs themselves can (and most likely will) still appear in search results, with a message on Google saying that the page’s content is blocked by the site’s Robots.txt file.

So using the Disallow command in Robots.txt won’t reduce the number of pages indexed by the search engines. The only way to do that (besides tedious manual removal requests) is to include a NOINDEX meta tag on pages that you do not want indexed. We already took a look at Noindexing filtered navigation pages of product listings.

In this article, we’ll take a quick look at Noindexing the search results. Why is this important? Well, read it from the man himself, Matt Cutts, who wrote a “Search Results in Search Results” post on his blog all the way back in 2007. So yes, keeping Magento search results pages out of the Google index is important. Let’s look at how to do it!

Updating the local.xml Layout File

Navigate to the magentoroot/app/design/frontend/base/default/layout/local.xml file and open it (if you’re running your own theme, the local.xml in that theme’s layout folder is the better place for this). If it doesn’t exist, create it.

The code that we have to add to the file is this:

<?xml version="1.0" encoding="UTF-8"?>
<!-- other code -->
<layout>
<!-- other layout code -->
	<catalogsearch_result_index translate="label">
		<reference name="head">
			<action method="setRobots"><value>NOINDEX,FOLLOW</value></action>
		</reference>
	</catalogsearch_result_index>
</layout>

Wrapping Up Noindexing Internal Search Results

That’s it? We’re done? Well, yeah, that’s basically all there is to it. In some cases, you may also want to noindex the advanced search pages via the catalogsearch_advanced_index and catalogsearch_advanced_result layout handles; you can use the exact same code snippet as above, just swapping those handles in for catalogsearch_result_index, as in the sketch below.
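For completeness, here is what those two extra handles would look like inside the same <layout> node of local.xml (the handle names are the standard ones in Magento 1.x; double-check them against your version):

	<catalogsearch_advanced_index translate="label">
		<reference name="head">
			<action method="setRobots"><value>NOINDEX,FOLLOW</value></action>
		</reference>
	</catalogsearch_advanced_index>
	<catalogsearch_advanced_result translate="label">
		<reference name="head">
			<action method="setRobots"><value>NOINDEX,FOLLOW</value></action>
		</reference>
	</catalogsearch_advanced_result>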

This is a pretty simple fix, and can help Google keep a lot of your internal search pages out of their index, especially if you occasionally link to a search result as the most relevant and user-friendly method to find a product or list of related products.

If you followed along with the tutorial in the base default theme, you can do a search for anything, get the results, and check out the Page Source. You’ll see something like the following code, indicating that this simple fix has worked.

<title>Search results for: 'nokia'</title>
<meta name="description" content="Default Description" />
<meta name="keywords" content="Magento, Varien, E-commerce" />
<meta name="robots" content="NOINDEX,FOLLOW" />

Fixing Magento’s Breadcrumbs – Cache and Microdata

Magento has a nice, built-in breadcrumbs feature, where breadcrumbs can appear on any type of page: static CMS page, product listings, or catalog product pages.

I prefer chocolate over bread anyway. More paleo. So we’re going with chocolate crumbs in this article.

Frustratingly, though, Magento has not added RDFa breadcrumb markup by default, leaving it up to store owners whether or not to use this type of semantic markup.

There is also the problem of the breadcrumbs completely disappearing when the Full Page Cache is enabled! It would be especially annoying to add the microdata, but not have it be read by search engines when they crawl pages with the breadcrumbs completely missing.

So, in this article, we’ll dress up the default Magento breadcrumbs, adding RDFa markup and making sure the Full Page Cache remembers to grab and display them.

RDFa for Breadcrumbs

We’ll work with the base package and default theme in a stock Magento 1.8.1.0 installation; the file in question is the breadcrumbs template at app/design/frontend/base/default/template/page/html/breadcrumbs.phtml. If you have your own package and theme, though, it would make more sense to copy this base file into your own theme and modify it there. Here’s the file in its original state:

<?php if($crumbs && is_array($crumbs)): ?>
<div class="breadcrumbs">
    <ul>
        <?php foreach($crumbs as $_crumbName=>$_crumbInfo): ?>
            <li class="<?php echo $_crumbName ?>">
            <?php if($_crumbInfo['link']): ?>
                <a href="<?php echo $_crumbInfo['link'] ?>" title="<?php echo $this->escapeHtml($_crumbInfo['title']) ?>"><?php echo $this->escapeHtml($_crumbInfo['label']) ?></a>
            <?php elseif($_crumbInfo['last']): ?>
                <strong><?php echo $this->escapeHtml($_crumbInfo['label']) ?></strong>
            <?php else: ?>
                <?php echo $this->escapeHtml($_crumbInfo['label']) ?>
            <?php endif; ?>
            <?php if(!$_crumbInfo['last']): ?>
                <span>/ </span>
            <?php endif; ?>
            </li>
        <?php endforeach; ?>
    </ul>
</div>
<?php endif; ?>

And here’s all we have to do to make the breadcrumbs more friendly to Google and other search engine robots that visit our store. Check out the added attributes on the <ul> and <li> elements, as well as on the <a> and <strong> tags.

<?php if($crumbs && is_array($crumbs)): ?>
<div class="breadcrumbs">
    <ul xmlns:v="http://rdf.data-vocabulary.org/#">
        <?php foreach($crumbs as $_crumbName=>$_crumbInfo): ?>
            <li class="<?php echo $_crumbName ?>" typeof="v:Breadcrumb">
            <?php if($_crumbInfo['link']): ?>
                <a href="<?php echo $_crumbInfo['link'] ?>" title="<?php echo $this->escapeHtml($_crumbInfo['title']) ?>" rel="v:url" property="v:title"><?php echo $this->escapeHtml($_crumbInfo['label']) ?></a>
            <?php elseif($_crumbInfo['last']): ?>
                <strong rel="v:url" property="v:title"><?php echo $this->escapeHtml($_crumbInfo['label']) ?></strong>
            <?php else: ?>
                <?php echo $this->escapeHtml($_crumbInfo['label']) ?>
            <?php endif; ?>
            <?php if(!$_crumbInfo['last']): ?>
                <span>/ </span>
            <?php endif; ?>
            </li>
        <?php endforeach; ?>
    </ul>
</div>
<?php endif; ?>

That’s it for the breadcrumbs! Now, you can copy the code from a product page on your development site (or live site) into the Google Webmaster Tools Structured Data Testing Tool and see how your pages may appear in search results.

That is, IF Google can read the breadcrumbs. Which it can’t do if the Full Page Cache drops them from the page entirely.

Make Magento Cache the Breadcrumbs

Full credit for this fix goes to Luis Tineo of the “Memoir of a Magento Developer” website, who figured out how to fix the problem of Magento not caching breadcrumbs.

The fix is deceptively simple. In any of your modules (or create one if you need to), just add an etc/cache.xml file with the following code in it:

<?xml version="1.0" encoding="UTF-8"?>
<config>
	<placeholders>
		<catalog_breadcrumbs>
			<block>page/html_breadcrumbs</block>
			<name>breadcrumbs</name>
			<placeholder>CONTAINER_BREADCRUMBS</placeholder>
			<container>Enterprise_PageCache_Model_Container_Breadcrumbs</container>
			<cache_lifetime>86400</cache_lifetime>
		</catalog_breadcrumbs>
	</placeholders>
</config>

Read Luis’ site for the full details of why Magento fails to cache the breadcrumbs, but this fix rewrites the placeholder for the cache so that the breadcrumbs will be included on cached pages.

Wrapping Up Magento Breadcrumbs Debug & Upgrade

So with these two easy fixes, we now have Magento caching breadcrumbs, and giving Google more information on the structure of your store’s category and product pages.

NoIndexing Filtered Product Listings in Magento

If you run a Magento website, you’re probably used to seeing at least 10,000 to 20,000 more pages indexed by Google than you have products in your store. The main problem is pages being indexed when visitors use your filtered navigation on product listing pages.

You can see this problem in the default Magento installation with sample data. On the Mens – Shoes – Apparel page, filter by Shoe Type: Running, and check out the Page Source. The meta robots tag will still be set to “INDEX, FOLLOW,” telling Google, Bing, and other search engines to index the page.

We don’t want those pages indexed. We want category and product listing pages indexed, but not filtered views. Setting up your robots.txt file won’t help with this, because Disallowing the filters will only prevent them from being crawled, not from being indexed! Furthermore, Google Webmaster Tools’ URL Parameters tool is spotty at best at removing pages, in my experience.

So let’s get rid of these pages by setting the meta robots tag to NOINDEX, FOLLOW on product listings whenever a filter is active. Almost all of this code comes from Robert Kent’s book Magento Search Engine Optimization, but I’m going to take it a step further and explain exactly how to set up this amazingly simple custom module from scratch. If you want copy-and-paste code for this module, register your email at PacktPub and download the book’s code package for free.

We’ll be using the Namespace of Trafficmotion and Module name of Robots for this article.

Activating the Module

First, navigate to the magentoroot/app/etc/modules folder, and set up a file named Trafficmotion_Robots.xml with the following content:

<?xml version="1.0"?>
<config>
	<modules>
		<Trafficmotion_Robots>
			<active>true</active>
			<codePool>local</codePool>
		</Trafficmotion_Robots>
	</modules>
</config>

Working With the Module

Module Folder Structure

Next, we have to set up the module itself in the magentoroot/app/code/local/ folder. Make new folders for the following:

  • magentoroot/app/code/local/Trafficmotion/
  • magentoroot/app/code/local/Trafficmotion/Robots/
  • magentoroot/app/code/local/Trafficmotion/Robots/etc/
  • magentoroot/app/code/local/Trafficmotion/Robots/Model/

Module Config.xml File

Our next step is creating the config.xml file that goes in magentoroot/app/code/local/Trafficmotion/Robots/etc/. Add this code to the new file:

<?xml version="1.0" encoding="UTF-8"?>
<config>
	<modules>
		<Trafficmotion_Robots>
			<version>0.1.0</version>
		</Trafficmotion_Robots>
	</modules>
	<global>
		<models>
			<robots>
				<class>Trafficmotion_Robots_Model</class>
			</robots>
		</models>
	</global>
	<frontend>
		<events>
			<controller_action_layout_generate_xml_before>
				<observers>
					<noindex>
						<type>singleton</type>
						<class>robots/observer</class>
						<method>changeRobots</method>
					</noindex>
				</observers>
			</controller_action_layout_generate_xml_before>
		</events>
	</frontend>
</config>

This code both creates the module (the previous file activated it and told Magento it was there; this one actually defines it) and tells Magento what to do with it. In a nutshell, Magento will run the observer before generating the page layout, and if the observer finds the condition it’s looking for, it will make a change to the page.

Module Observer.php

Finally, the last file we need to set up. Create a file named Observer.php and put it in the magentoroot/app/code/local/Trafficmotion/Robots/Model/ folder. Use this code snippet in it:

<?php

class Trafficmotion_Robots_Model_Observer
{
    public function changeRobots($observer)
    {
        // Only act on category (product listing) pages
        if ($observer->getEvent()->getAction()->getFullActionName() == 'catalog_category_view') {
            $uri = $observer->getEvent()->getAction()->getRequest()->getRequestUri();
            // A "?" in the URL means a layered navigation filter (or another parameter) is active
            if (stristr($uri, "?")) {
                $layout = $observer->getEvent()->getLayout();
                // Inject a layout update that sets the robots meta tag on the head block
                $layout->getUpdate()->addUpdate('<reference name="head"><action method="setRobots"><value>NOINDEX,FOLLOW</value></action></reference>');
                $layout->generateXml();
            }
        }
        return $this;
    }
}

Wrapping Up the Module

That’s all! Now, do the filtered navigation again for Running Shoes, and check out the page source. The meta robots will be the following:

<meta name="robots" content="NOINDEX,FOLLOW" />

Four SEO Ranking Factors – On-Page Optimization, Unique Linking Domains, Global Link Popularity, Niche Links

Earlier in the week, we looked at four ranking factors that website owners can analyze to determine how their competition is doing in the search engine results. If you missed it, those factors were the age of a website, the exact match domain bonus, anchor text used to link to pages on a site, and the growing field of social metrics. In this article, we’ll take a quick look at on-page optimization, and then look at three factors all relating to link popularity.

In terms of on-page optimization, it has lost some of its importance over the years: over-optimizing can now hurt your site, while getting it right may not help it all that much. Cover your bases like the title and meta tags, optimize your headings and footer links, use relevant and appropriate internal linking in the navigation and body copy, and write good body copy. Many of your closest competitors will be doing at least this much, as these have become standard best practices for designing and optimizing a website. If you don’t have them taken care of, your site’s rankings can suffer, but you won’t get huge bonuses for doing the on-page optimization.

When looking at links, there are three different ways to segment them: unique linking domains, global link popularity, and local set links. We’ll look at all of these in more depth in a second, but it should be noted that sites that rank well usually do the best in all of these sectors. Out of a large number of links, there will be quite a few unique domains, a lot of links to and from popular pages, and a niche that interlinks with other related websites.

The first is unique linking domains. SEOBook used to have a Link Harvester tool, which is now retired, but there are other backlink analysis tools out there that can tell you how many sites are linking to a given site and what types of links those are (.gov, .edu, .com, etc.). The number of unique linking domains may be the most important factor in how well a website ranks, so this is a vital metric to track.

There is also the global link popularity of a website to consider. This is the total number of links to a website (not the total number of unique domains, like above), the PageRank or MozRank of those websites, and how authoritative the pages linking to a site are. Moz does a good job ranking websites in a relative sense with page and domain authority, and RavenTools is an excellent program to pull in lots of competitive intelligence data from Moz, MajesticSEO, and others.

Finally, let’s consider the niche-specific links that a site can get, also referred to as local set links by Todd Malicoat. These links are from sites within the same industry, broadly speaking, as your or your competitor’s website, and are extremely important depending on your industry or locality. For local sites, local citations are vital, and for competitive industries niche-specific links are incredibly useful for rankings. Hubfinder from SEOBook and Touchgraph are two tools that can be used to analyze the most relevant links for a website.

That wraps up our investigation into the eight most important ranking factors for websites. If yours isn’t doing as well as you would like, you can start analyzing your performance in these factors, and compare it to that of your competitors. You can also have us do that type of search optimization analysis for you by contacting us. Future articles will look at ways to improve your site in all of the factors we discussed, either on your own or with professional online marketing help.

Four SEO Ranking Factors – Age, Domain Match, Anchor Text, Social Metrics

Ranking a website for a particular keyword or phrase can take quite a bit of work. However, one way to get the drop on your competition is to do some competitive analysis, looking at various ranking factors and how the websites that currently rank on the first page of search engine results are doing in terms of these factors. Let’s take a quick look at four different factors and how you can research your competitors’ data and see how your site stacks up.

The first, and one of the least controllable, aspects of ranking is the age of a website. If a site was registered in 1996, and yours was registered in 2012, the older site has a much longer track record. Search engines will tend to rank that site better than yours simply on the basis of it being around for a longer period of time. This is one reason why auctions can be quite fierce on expired domains that had been online for several years before the owners let them expire. The easiest way to determine how long a site has been online is to check its history at the Web Archive.

Another ranking factor that is not controllable is the bonus that websites get from having an exact match domain. If the website’s domain name matches a search query exactly, it is likely to get a boost in its rankings, as long as the site has relevant content and some minimal link building. While Google has said that domain name matches are not as important as they once were, exact match domains continue to rank higher than other websites with similar content but a different domain name.

Third, the anchor text used for links pointing to a site is another important ranking factor. Essentially, the search engines are attempting to discover what other sites say about your site by looking at the words those sites use to link to yours. Open Site Explorer does a good job of analyzing these links and giving a representation of the anchor text. MajesticSEO also does this type of analysis. The important point is to use relevant anchor text, but also distribute it across several different relevant and related phrases.

The final factor we’ll consider here is the newest one, social metrics. While social media links are almost always nofollow and do not pass PageRank to your site, large numbers of shares, likes, favorites, and so on do indicate an engaged audience and popular content. Social reviews are also taken into consideration for local search, but even Facebook Likes and Shares can be used when ranking content from sites that focus more on research. Followers of a social account may not help your main website, but links to and likes of a particular page on your site may make it more likely to be ranked higher, regardless of the nofollow status of the link.

In our next article later this week, we’ll look at a number of other ranking factors that primarily have to do with links and on-page optimization of your website. Check back soon for that second installment in our short series on ranking factors, and make sure to look at all of the internet marketing services that Traffic Motion offers, including search engine optimization for national, regional, and local websites.

Link Juice and Keeping Pages Out of the Search Engine Index

On any website there will be at least a handful of low-quality, unimportant pages from a search engine optimization perspective. What should you do with those pages? Some of them are important from a legal standpoint (like the privacy policy and terms of service pages), while others are nice-to-haves, like very small product add-on pages. They may provide value to people already on the site, but have little value if they show up in the search engines.

Don’t let your SEO efforts drain away on unimportant pages.

There are three main ways to deal with such pages, and we’ll look at each in a little more depth. The first is using the site’s robots.txt file to disallow entire directories of low-quality pages. The second is using the "noindex, nofollow" robots meta tag on a page-by-page basis. The third is adding rel="nofollow" to individual links.

Robots.txt Disallow

Disallowing pages from being crawled by search engines is the best solution when there are entire directories of pages that are unimportant to the search engines. Applying the concept of link juice or equity, the juice that flows into a page is then distributed to the outgoing (internal or external) links on that page. Keeping pages such as privacy policies, terms of service, and other low-value pages out of that flow helps ensure the link equity goes to the more important pages.

The easiest way to disallow such pages is by planning ahead, putting all of those low-value pages into their own directory, and using a robots.txt file to mark that directory as disallowed. Here’s one way to block all robots from crawling two different directories:

User-agent: *
Disallow: /product-color-differences/
Disallow: /help-pages/

There are also other ways to keep link juice from flowing to certain pages, including rel="nofollow" on anchor links or <meta name="robots" content="noindex, nofollow"> in the <head> section of a webpage. These can be applied on a per-link or per-page basis, whereas robots.txt is preferable for entire directories. These other methods are good for fully developed sites whose pages are already live and not grouped in their own directory, whereas creating an /unimportant-pages/ directory and disallowing it in the robots.txt file is a great idea for new websites where the information architecture can be planned in advance.

NOINDEX, NOFOLLOW Meta Tags

For existing pages, there’s a meta tag that can be placed in the <head> section of the HTML. This is the “noindex, nofollow” attribute of a meta tag.

<head>
<title>Traffic Motion Privacy Policy</title>
<meta name="robots" content="noindex, nofollow">
<meta name="description" content="This is the Traffic Motion website's privacy policy.">
</head>

The “noindex” tells robots not to index the current page (so only put that on pages you don’t want indexed). The “nofollow” command tells them not to pass link popularity to any of the links on that page. Web designers and search optimizers can also use “noindex, follow” or “index, nofollow” or any combination of index/noindex/follow/nofollow, but only some combinations make sense.

Rel="nofollow" Link Attribute

There is also a rel=”nofollow” attribute you can attach to links in the <body> section of the website, which will contain all of the navigation, headers, footers, and body content.

<a href="http://www.trafficmotion.com/privacy-policy" rel="nofollow">Traffic Motion's privacy policy</a>

That command will tell robots not to follow that particular link or pass any link popularity to it.

Which One to Choose?

So, if I had an entire directory I did not want link popularity to flow to, I’d put all the low-quality pages in that directory and use a robots.txt Disallow. If I had a page that I didn’t want indexed, I’d use the noindex meta tag. If I had a page where I didn’t want any of its links to pass popularity, I’d use the nofollow meta tag. And if there were just one link or a handful on a particular page that I didn’t want to pass popularity, I’d add rel="nofollow" to those link tags.

These are three easy ways to make sure that your site’s link juice is flowing to the most important pages. Don’t waste that link juice unless you have a good reason or tons of authority and popularity to spare. For most small websites, doing even a little bit of link equity sculpting in this manner can make their external links that much more powerful when that link juice enters their site and flows through the links on the website itself.

Content Creation for SEO and Marketing

When it comes to marketing a website, content is still king, even in this era of social media, YouTube, and blended search results. If you want your site to show up for the searches people type in, you need content that the search engines can read. And if you want to turn those visitors into sales, you need content of high enough quality to convert.

Ideally, content marketing educates and empowers people to take action, preferably an action on your site that leads to a sale or a new follower. The goal is to get people to know about your brand, like your company, and trust you enough either to recommend you to others or to become customers themselves. Becoming an authority in your industry or locality is the ultimate goal of content marketing.

While content marketing includes all types of content, let’s look at some of its most popular forms. First up is article marketing, a tried and true method of getting content out both on your site and on external sites. You can customize articles depending on your target audience and whether you are publishing on your own site or blog or on an article distribution website.

The best tip for article marketing is to maintain an editorial tone throughout all of your content. You do not want to come off as too promotional. People like learning and they will be open to learning about your topic of expertise, but they do not want to be exposed to nothing but commercials. Save that for your television or radio campaigns where you can do more promoting and less educating. For your article marketing, focus on the information and education.

Social media is becoming a great way not only to share content, but to publish it as well. Longer posts of strictly informational or entertaining content can result in a larger number of Likes and Shares. One company that I follow on Facebook has recently begun adding exclusive articles on their official Facebook Page, giving social followers a reason to become more engaged with their brand.

Additionally, if you run a corporate blog, make sure people are able to share your content. You can do this with sharing buttons on each post so people can share your articles with their own friends and followers. Or, you can include links to your official Facebook, Twitter, and LinkedIn pages so people can follow you through the social networks. Both strategies are important for social marketing success.

In larger businesses, press releases, news articles, and other public relations material are often created and distributed. Take these pieces of content created by the marketing or PR department and publish them on your blog or the News section of your website, with appropriate internal and external linking. Anything newsworthy can be used as an opportunity to promote your target keywords for SEO purposes, if nothing else.

Finally, if you run an email newsletter, tie in your email marketing messages to your website content. Remember to keep emails compelling, though. People scan emails more than they read them, so use numbered lists or bullet points, and keep the long paragraphs to a minimum. Include some basic styling, along with links back to your site’s main content, and tie everything together on the landing page.

Content marketing will probably be the one constant throughout the entire life of the internet. People enjoy reading and learning, and writers enjoy writing and educating. The web is the most convenient place for both writers and the general public to connect, and turning readers and learners into customers and brand advocates starts with having the content they are looking for.