Category Archives: Search Engine

The Basics of Search Engine Friendly Design and Development

Search engines are limited in how they crawl the web and interpret content. A webpage doesn’t always look the same to you and me as it looks to a search engine. In this section, we’ll focus on specific technical aspects of building (or modifying) web pages so they are structured for both search engines and human visitors alike. Share this part of the guide with your programmers, information architects, and designers, so that all parties involved in a site’s construction are on the same page.

Indexable Content

To perform better in search engine listings, your most important content should be in HTML text format. Images, Flash files, Java applets, and other non-text content are often ignored or devalued by search engine crawlers, despite advances in crawling technology. The easiest way to ensure that the words and phrases you display to your visitors are visible to search engines is to place them in the HTML text on the page. However, more advanced methods are available for those who demand greater formatting or visual display styles:

1. Provide “alt” text for images. Assign images in GIF, JPG, or PNG format “alt attributes” in HTML to give search engines a text description of the visual content (see the example after this list).

2. Supplement search boxes with navigation and crawlable links.

3. Supplement Flash or Java plug-ins with text on the page.

4. Provide a transcript for video and audio content if the words and phrases used are meant to be indexed by the engines.
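For instance, item 1 might look like this in markup (the filename and description are illustrative):

    <img src="juggling-panda.jpg" alt="A panda juggling three bamboo stalks">

The alt text gives crawlers a readable description of content they can't otherwise see.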

Seeing your site as the search engines do

Many websites have significant problems with indexable content, so double-checking is worthwhile. By using tools like Google’s cache, SEO-browser.com, and the MozBar you can see what elements of your content are visible and indexable to the engines. Take a look at Google’s text cache of this page you are reading now. See how different it looks?

 

“I have a problem with getting found. I built a huge Flash site for juggling pandas and I’m not showing up anywhere on Google. What’s up?”

Whoa! That’s what we look like?

Using the Google cache feature, we can see that to a search engine, JugglingPandas.com’s homepage doesn’t contain all the rich information that we see. This makes it difficult for search engines to interpret relevancy.

Axe Battling Monkeys Comparison

Hey, where did the fun go?

Uh oh … via Google cache, we can see that the page is a barren wasteland. There’s not even text telling us that the page contains the Axe Battling Monkeys. The site is built entirely in Flash, but sadly, this means that search engines cannot index any of the text content, or even the links to the individual games. Without any HTML text, this page would have a very hard time ranking in search results.

It’s wise to not only check for text content but to also use SEO tools to double-check that the pages you’re building are visible to the engines. This applies to your images, and as we see below, to your links as well.

Crawlable Link Structures

Just as search engines need to see content in order to list pages in their massive keyword-based indexes, they also need to see links in order to find the content in the first place. A crawlable link structure—one that lets the crawlers browse the pathways of a website—is vital to them finding all of the pages on a website. Hundreds of thousands of sites make the critical mistake of structuring their navigation in ways that search engines cannot access, hindering their ability to get pages listed in the search engines’ indexes.

Below, we’ve illustrated how this problem can happen:

Index Diagram

 

In the example above, Google’s crawler has reached page A and sees links to pages B and E. However, even though C and D might be important pages on the site, the crawler has no way to reach them (or even know they exist), because no direct, crawlable links point to pages C and D. As far as Google can see, they don’t exist! Great content, good keyword targeting, and smart marketing won’t make any difference if the crawlers can’t reach your pages in the first place.

Link tags can contain images, text, or other objects, all of which provide a clickable area on the page that users can engage to move to another page. These links are the original navigational elements of the Internet – known as hyperlinks. The “<a” tag indicates the start of a link. The link referral location tells the browser (and the search engines) where the link points; in this example, the URL http://www.jonwye.com is referenced. Next comes the visible portion of the link, called anchor text in the SEO world, which describes the page the link points to. The linked-to page is about custom belts made by Jon Wye, thus the anchor text “Jon Wye’s Custom Designed Belts.” The “</a>” tag closes the link, constraining the linked text between the tags and preventing the link from encompassing other elements on the page.
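Assembled from those pieces, the complete link described above looks like this:

    <a href="http://www.jonwye.com">Jon Wye's Custom Designed Belts</a>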

Submission-required forms

If you require users to complete an online form before accessing certain content, chances are search engines will never see those protected pages. Forms can include a password-protected login or a full-blown survey. In either case, search crawlers generally will not attempt to submit forms, so any content or links that would be accessible via a form are invisible to the engines.

Links in unparseable JavaScript

If you use JavaScript for links, you may find that search engines either do not crawl or give very little weight to the links embedded within. Standard HTML links should replace JavaScript (or accompany it) on any page you’d like crawlers to crawl.
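As an illustration (the URLs and markup here are hypothetical), compare a script-only link with its crawlable HTML counterpart:

    <!-- Hard for crawlers: the destination exists only in JavaScript -->
    <span onclick="window.location='/panda-games'">Panda Games</span>

    <!-- Crawlable: a standard HTML link with a real href -->
    <a href="/panda-games">Panda Games</a>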

Links pointing to pages blocked by the Meta Robots tag or robots.txt

The Meta Robots tag and the robots.txt file both allow a site owner to restrict crawler access to a page. Just be warned that many a webmaster has unintentionally used these directives as an attempt to block access by rogue bots, only to discover that search engines cease their crawl.
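For example, a robots.txt file like the following (the path is hypothetical) tells all compliant crawlers to skip a directory, so any links pointing into it lead the engines to a dead end:

    User-agent: *
    Disallow: /private/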

Frames or iframes

Technically, links in both frames and iframes are crawlable, but both present structural issues for the engines in terms of organization and following. Unless you’re an advanced user with a good technical understanding of how search engines index and follow links in frames, it’s best to stay away from them.

Robots don’t use search forms

Although this relates directly to the above warning on forms, it’s such a common problem that it bears mentioning. Some webmasters believe if they place a search box on their site, then engines will be able to find everything that visitors search for. Unfortunately, crawlers don’t perform searches to find content, leaving millions of pages inaccessible and doomed to anonymity until a crawled page links to them.

Links in Flash, Java, and other plug-ins

The links embedded inside the Juggling Panda site (from our above example) are perfect illustrations of this phenomenon. Although dozens of pandas are listed and linked to on the page, no crawler can reach them through the site’s link structure, rendering them invisible to the engines and hidden from users’ search queries.

Links on pages with many hundreds or thousands of links

Search engines will only crawl so many links on a given page. This restriction is necessary to cut down on spam and conserve rankings. Pages with hundreds of links on them are at risk of not getting all of those links crawled and indexed.

If you avoid these pitfalls, you’ll have clean, spiderable HTML links that will allow the spiders easy access to your content pages.

Links can have lots of attributes. The engines ignore nearly all of them, with the important exception of the rel=”nofollow” attribute. In the example above, adding the rel=”nofollow” attribute to the link tag tells the search engines that the site owners do not want this link to be interpreted as an endorsement of the target page.

Nofollow, taken literally, instructs search engines to not follow a link (although some do). The nofollow tag came about as a method to help stop automated blog comment, guest book, and link injection spam, but has morphed over time into a way of telling the engines to discount any link value that would ordinarily be passed. Links tagged with nofollow are interpreted slightly differently by each of the engines, but it is clear they do not pass as much weight as normal links.
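In markup, a nofollowed link simply carries the extra rel attribute (the URL and anchor text are illustrative):

    <a href="http://www.example.com" rel="nofollow">Example Link</a>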

Are nofollow links bad?

Although they don’t pass as much value as their followed cousins, nofollowed links are a natural part of a diverse link profile. A website with lots of inbound links will accumulate many nofollowed links, and this isn’t a bad thing. In fact, Moz’s Ranking Factors showed that high-ranking sites tended to have a higher percentage of inbound nofollow links than lower-ranking sites.

Google

Google states that in most cases, they don’t follow nofollow links, nor do these links transfer PageRank or anchor text values. Essentially, using nofollow causes Google to drop the target links from their overall graph of the web. Nofollow links carry no weight and are interpreted as HTML text (as though the link did not exist). That said, many webmasters believe that even a nofollow link from a high authority site, such as Wikipedia, could be interpreted as a sign of trust.

Bing and Yahoo

Bing, which powers Yahoo search results, has also stated that they do not include nofollow links in the link graph, though their crawlers may still use nofollow links as a way to discover new pages. So while they may <em>follow</em> the links, they don’t use them in rankings calculations.

Keyword Usage and Targeting

Keywords are fundamental to the search process. They are the building blocks of language and of search. In fact, the entire science of information retrieval (including web-based search engines like Google) is based on keywords. As the engines crawl and index the contents of pages around the web, they keep track of those pages in keyword-based indexes rather than storing 25 billion web pages all in one database. Millions and millions of smaller databases, each centered on a particular keyword term or phrase, allow the engines to retrieve the data they need in a mere fraction of a second.

Obviously, if you want your page to have a chance of ranking in the search results for “dog,” it’s wise to make sure the word “dog” is part of the crawlable content of your document.

Keyword Domination

Keywords dominate how we communicate our search intent and interact with the engines. When we enter words to search for, the engine matches pages to retrieve based on the words we entered. The order of the words (“pandas juggling” vs. “juggling pandas”), spelling, punctuation, and capitalization provide additional information that the engines use to help retrieve the right pages and rank them.

Search engines measure how keywords are used on pages to help determine the relevance of a particular document to a query. One of the best ways to optimize a page’s rankings is to ensure that the keywords you want to rank for are prominently used in titles, text, and metadata.

Generally speaking, as you make your keywords more specific, you narrow the competition for search results and improve your chances of achieving a higher ranking. Compare the broad term “books” to the specific title Tale of Two Cities: while there are a lot of results for the broad term, there are considerably fewer results (and thus less competition) for the specific title.

Keyword Abuse

Since the dawn of online search, folks have abused keywords in a misguided effort to manipulate the engines. This involves “stuffing” keywords into text, URLs, meta tags, and links. Unfortunately, this tactic almost always does more harm than good for your site.

In the early days, search engines relied on keyword usage as a prime relevancy signal, regardless of how the keywords were actually used. Today, although search engines still can’t read and comprehend text as well as a human, the use of machine learning has allowed them to get closer to this ideal.

The best practice is to use your keywords naturally and strategically (more on this below). If your page targets the keyword phrase “Eiffel Tower” then you might naturally include content about the Eiffel Tower itself, the history of the tower, or even recommended Paris hotels. On the other hand, if you simply sprinkle the words “Eiffel Tower” onto a page with irrelevant content, such as a page about dog breeding, then your efforts to rank for “Eiffel Tower” will be a long, uphill battle. The point of using keywords is not to rank highly for all keywords, but to rank highly for the keywords that people are searching for when they want what your site provides.

On-Page Optimization

Keyword usage and targeting are still a part of the search engines’ ranking algorithms, and we can apply some effective techniques for keyword usage to help create pages that are well-optimized. Here at Moz, we engage in a lot of testing and get to see a huge number of search results and shifts based on keyword usage tactics. When working with one of your own sites, this is the process we recommend. Use the keyword phrase:

  • In the title tag at least once. Try to keep the keyword phrase as close to the beginning of the title tag as possible. More detail on title tags follows later in this section.
  • Once prominently near the top of the page.
  • At least two or three times, including variations, in the body copy on the page. Perhaps a few more times if there’s a lot of text content. You may find additional value in using the keyword or variations more than this, but in our experience adding more instances of a term or phrase tends to have little or no impact on rankings.
  • At least once in the alt attribute of an image on the page. This not only helps with web search, but also image search, which can occasionally bring valuable traffic.
  • Once in the URL. Additional rules for URLs and keywords are discussed later on in this section.
  • At least once in the meta description tag. Note that the meta description tag does not get used by the engines for rankings, but rather helps to attract clicks by searchers reading the results page, as the meta description becomes the snippet of text used by the search engines.

And you should generally not use your target keywords in link anchor text pointing to other pages on your site; this practice is known as keyword cannibalization.
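To make the list above concrete, here is a minimal page skeleton with the hypothetical keyword phrase “juggling pandas” placed in each recommended location (all names and URLs are illustrative):

    <!-- URL: http://www.example.com/juggling-pandas -->
    <html>
      <head>
        <title>Juggling Pandas | Example Brand</title>
        <meta name="description" content="Meet our juggling pandas and learn how panda juggling works.">
      </head>
      <body>
        <h1>Juggling Pandas</h1>
        <p>Our juggling pandas perform daily... (the phrase and its variations appear naturally in the body copy).</p>
        <img src="juggling-pandas.jpg" alt="Two juggling pandas tossing bamboo">
      </body>
    </html>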

Keyword Density Myth

Keyword density is not a part of modern ranking algorithms, as demonstrated by Dr. Edel Garcia in The Keyword Density of Non-Sense (http://www.e-marketing-news.co.uk/Mar05/garcia.html).

If two documents, D1 and D2, consist of 1000 terms (l = 1000) and repeat a term 20 times (tf = 20), then a keyword density analyzer will tell you that for both documents the keyword density is KD = 20/1000 = 0.020 (or 2%) for that term. Identical values are obtained when tf = 10 and l = 500. Evidently, a keyword density analyzer does not establish which document is more relevant. A density analysis or keyword density ratio tells us nothing about:

1. The relative distance between keywords in documents (proximity)
2. Where in a document the terms occur (distribution)
3. The co-citation frequency between terms (co-occurrence)
4. The main theme, topic, and sub-topics (on-topic issues) of the documents

The Conclusion:

Keyword density is divorced from content, quality, semantics, and relevance. What should optimal page density look like, then? You can read more information about on-page optimization in this post.

The title tag of any page appears at the top of Internet browsing software, and is often used as the title when your content is shared through social media or republished.

Using keywords in the title tag means that search engines will bold those terms in the search results when a user has performed a query with those terms. This helps garner greater visibility and a higher click-through rate.

The final important reason to create descriptive, keyword-laden title tags is for ranking at the search engines. In Moz’s biannual survey of SEO industry leaders, 94% of participants said that keyword use in the title tag was the most important place to use keywords to achieve high rankings.

Title Tags

The title element of a page is meant to be an accurate, concise description of a page’s content. It is critical to both user experience and search engine optimization.

As title tags are such an important part of search engine optimization, the following best practices for title tag creation make for terrific low-hanging SEO fruit. The recommendations below cover the critical steps to optimize title tags for search engines and for usability.

Be mindful of length

Search engines display only the first 65-75 characters of a title tag in the search results (after that, the engines show an ellipsis – “…” – to indicate when a title tag has been cut off). This is also the general limit allowed by most social media sites, so sticking to this limit is generally wise. However, if you’re targeting multiple keywords (or an especially long keyword phrase), and having them in the title tag is essential to ranking, it may be advisable to go longer.

Place important keywords close to the front

The closer to the start of the title tag your keywords are, the more helpful they’ll be for ranking, and the more likely a user will be to click them in the search results.

Include branding

At Moz, we love to end every title tag with a brand name mention, as these help to increase brand awareness and create a higher click-through rate for people who like and are familiar with a brand. Sometimes it makes sense to place your brand at the beginning of the title tag instead, such as on your homepage. Since words at the beginning of the title tag carry more weight, be mindful of what you are trying to rank for.
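Putting length, keyword placement, and branding together, a title tag might look like this (the keyword and brand are illustrative, borrowed from the earlier link example):

    <title>Custom Designed Belts | Jon Wye</title>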

Consider readability and emotional impact

Title tags should be descriptive and readable. The title tag is a new visitor’s first interaction with your brand and should convey the most positive impression possible. Creating a compelling title tag will help grab attention on the search results page, and attract more visitors to your site. This underscores that SEO is about not only optimization and strategic keyword usage, but the entire user experience.

Meta Tags

Meta tags were originally intended as a proxy for information about a website’s content. Several of the basic meta tags are listed below, along with a description of their use.

Meta Robots

The Meta Robots tag can be used to control search engine crawler activity (for all of the major engines) on a per-page level. There are several ways to use Meta Robots to control how search engines treat a page:

  • index/noindex tells the engines whether the page should be crawled and kept in the engines’ index for retrieval. If you opt to use “noindex,” the page will be excluded from the index. By default, search engines assume they can index all pages, so using the “index” value is generally unnecessary.
  • follow/nofollow tells the engines whether links on the page should be crawled. If you elect to employ “nofollow,” the engines will disregard the links on the page for discovery, ranking purposes, or both. By default, all pages are assumed to have the “follow” attribute.
    Example: <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
  • noarchive is used to restrict search engines from saving a cached copy of the page. By default, the engines will maintain visible copies of all pages they have indexed, accessible to searchers through the cached link in the search results.
  • nosnippet informs the engines that they should refrain from displaying a descriptive block of text next to the page’s title and URL in the search results.
  • noodp/noydir are specialized tags telling the engines not to grab a descriptive snippet about a page from the Open Directory Project (DMOZ) or the Yahoo! Directory for display in the search results.

The X-Robots-Tag HTTP header directive also accomplishes these same objectives. This technique works especially well for content within non-HTML files, like images.
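For instance, a server could send the following HTTP response header alongside a PDF or image to keep it out of the index (a sketch; how you configure this depends on your server):

    X-Robots-Tag: noindex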

Meta Description

The meta description tag exists as a short description of a page’s content. Search engines do not use the keywords or phrases in this tag for rankings, but meta descriptions are the primary source for the snippet of text displayed beneath a listing in the results.

The meta description tag serves the function of advertising copy, drawing readers to your site from the results. It is an extremely important part of search marketing. Crafting a readable, compelling description using important keywords (notice how Google bolds the searched keywords in the description) can draw a much higher click-through rate of searchers to your page.

Meta descriptions can be any length, but search engines generally truncate snippets longer than 160 characters, so it’s wise to stay within this limit.
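The tag itself is simple; this sketch keeps the copy hypothetical and under the 160-character guideline:

    <meta name="description" content="A short, compelling summary of the page, written for searchers rather than rankings.">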

In the absence of meta descriptions, search engines will create the search snippet from other elements of the page. For pages that target multiple keywords and topics, this is a perfectly valid tactic.

Not as important meta tags

Meta Keywords: The meta keywords tag had value at one time, but is no longer valuable or important to search engine optimization. For more on the history and a full account of why meta keywords has fallen into disuse, read Meta Keywords Tag 101 from SearchEngineLand.

Meta Refresh, Meta Revisit-after, Meta Content-type, and others: Although these tags can have uses for search engine optimization, they are less critical to the process, and so we’ll leave it to Google’s Search Console Help to discuss in greater detail.

Well, how do you like this offering?

Chuck Reynolds
Contributor

MarketHive

Why Search Engine Marketing is Necessary


An important aspect of SEO is making your website easy for both users and search engine robots to understand. Although search engines have become increasingly sophisticated, they still can’t see and understand a web page the same way a human can. SEO helps the engines figure out what each page is about, and how it may be useful for users.

A Common Argument Against SEO

We frequently hear statements like this:

“No smart engineer would ever build a search engine that requires websites to follow certain rules or principles in order to be ranked or indexed. Anyone with half a brain would want a system that can crawl through any architecture, parse any amount of complex or imperfect code, and still find a way to return the most relevant results, not the ones that have been ‘optimized’ by unlicensed search marketing experts.”

But Wait …

Imagine you posted online a picture of your family dog. A human might describe it as “a black, medium-sized dog, looks like a Lab, playing fetch in the park.” On the other hand, the best search engine in the world would struggle to understand the photo at anywhere near that level of sophistication. How do you make a search engine understand a photograph? Fortunately, SEO allows webmasters to provide clues that the engines can use to understand the content. In fact, adding proper structure to your content is essential to SEO.

Understanding both the abilities and limitations of search engines allows you to properly build, format, and annotate your web content in a way that search engines can digest. Without SEO, a website can be invisible to search engines.

The Limits of Search Engine Technology

The major search engines all operate on the same principles. Automated search bots crawl the web, follow links, and index content in massive databases. They accomplish this with dazzling artificial intelligence, but modern search technology is not all-powerful. There are numerous technical limitations that cause significant problems in both inclusion and rankings. We’ve listed the most common below:

Problems Crawling and Indexing

  • Online forms: Search engines aren’t good at completing online forms (such as a login), and thus any content contained behind them may remain hidden.
  • Duplicate pages: Websites using a CMS (Content Management System) often create duplicate versions of the same page; this is a major problem for search engines looking for completely original content.
  • Blocked in the code: Errors in a website’s crawling directives (robots.txt) may lead to blocking search engines entirely.
  • Poor link structures: If a website’s link structure isn’t understandable to the search engines, they may not reach all of a website’s content; or, if it is crawled, the minimally exposed content may be deemed unimportant by the engine’s index.
  • Non-text Content: Although the engines are getting better at reading non-HTML text, content in rich media format is still difficult for search engines to parse. This includes text in Flash files, images, photos, video, audio, and plug-in content.

Problems Matching Queries to Content

  • Uncommon terms: Text that is not written in the common terms that people use to search. For example, writing about “food cooling units” when people actually search for “refrigerators.”
  • Language and internationalization subtleties: For example, “color” vs. “colour.” When in doubt, check what people are searching for and use exact matches in your content.
  • Incongruous location targeting: Targeting content in Polish when the majority of the people who would visit your website are from Japan.
  • Mixed contextual signals: For example, the title of your blog post is “Mexico’s Best Coffee” but the post itself is about a vacation resort in Canada which happens to serve great coffee. These mixed messages send confusing signals to search engines.

Make sure your content gets seen

Getting the technical details of search engine-friendly web development correct is important, but once the basics are covered, you must also market your content. The engines by themselves have no formulas to gauge the quality of content on the web. Instead, search technology relies on the metrics of relevance and importance, and they measure those metrics by tracking what people do: what they discover, react to, comment on, and link to. So you can’t just build a perfect website and write great content; you also have to get that content shared and talked about.

The Competitive Nature of Search Engines

Take a look at any search results page and you’ll find the answer to why search marketing has a long, healthy life ahead. There are, on average, ten positions on the search results page. The pages that fill those positions are ordered by rank. The higher your page is on the search results page, the better your click-through rate and ability to attract searchers. Results in positions 1, 2, and 3 receive much more traffic than results down the page, and considerably more than results on deeper pages. The fact that so much attention goes to so few listings means that there will always be a financial incentive for search engine rankings. No matter how search may change in the future, websites and businesses will compete with one another for this attention, and for the user traffic and brand visibility it provides.

Constantly Changing SEO

When search marketing began in the mid-1990s, manual submission, the meta keywords tag, and keyword stuffing were all regular parts of the tactics necessary to rank well. In 2004, link bombing with anchor text, buying hordes of links from automated blog comment spam injectors, and the construction of inter-linking farms of websites could all be leveraged for traffic. In 2011, social media marketing and vertical search inclusion are mainstream methods for conducting search engine optimization. The search engines have refined their algorithms along with this evolution, so many of the tactics that worked in 2004 can hurt your SEO today.

The future is uncertain, but in the world of search, change is a constant. For this reason, search marketing will continue to be a priority for those who wish to remain competitive on the web. Some have claimed that SEO is dead, or that SEO amounts to spam. As we see it, there’s no need for a defense other than simple logic:

Websites compete for attention and placement in the search engines, and those with the knowledge and experience to improve their website’s ranking will receive the benefits of increased traffic and visibility.

Chuck Reynolds
Contributor

MarketHive

How People Interact With Search Engines

One of the most important elements to building an online marketing strategy around SEO is empathy for your audience. Once you grasp what your target market is looking for, you can more effectively reach and keep those users.


Build for users, not for search engines

We like to say, “Build for users, not for search engines.” There are three types of search queries people generally make:

  • “Do” Transactional Queries: I want to do something, such as buy a plane ticket or listen to a song.
  • “Know” Informational Queries: I need information, such as the name of a band or the best restaurant in New York City.
  • “Go” Navigation Queries: I want to go to a particular place on the Internet, such as Facebook or the homepage of the NFL.

When visitors type a query into a search box and land on your site, will they be satisfied with what they find? This is the primary question that search engines try to answer billions of times each day. The search engines’ primary responsibility is to serve relevant results to their users. So ask yourself what your target customers are looking for and make sure your site delivers it to them.

The True Power of Inbound Marketing with SEO

Why should you invest time, effort, and resources on SEO? When looking at the broad picture of search engine usage, fascinating data is available from several studies. We’ve extracted those that are recent, relevant, and valuable, not only for understanding how users search but to help present a compelling argument about the power of SEO.

A Broad Picture

Google leads the way in an October 2011 study by comScore:
  • Google led the U.S. core search market in April with 65.4 percent of the searches conducted, followed by Yahoo! with 17.2 percent, and Microsoft with 13.4 percent. (Microsoft powers Yahoo Search. In the real world, most webmasters see a much higher percentage of their traffic from Google than these numbers suggest.)
  • Americans alone conducted a staggering 20.3 billion searches in one month. Google accounted for 13.4 billion searches, followed by Yahoo! (3.3 billion), Microsoft (2.7 billion), Ask Network (518 million), and AOL LLC (277 million).
  • Total search powered by Google properties equaled 67.7 percent of all search queries, followed by Bing which powered 26.7 percent of all search.

Billions spent on online marketing from an August 2011 Forrester report:

  • Online marketing costs will approach $77 billion in 2016.
  • This amount will represent 26% of all advertising budgets combined.

Search is the new Yellow Pages from a Burke 2011 report:

  • 76% of respondents used search engines to find local business information vs. 24% who turned to print yellow pages.
  • 67% had used search engines in the past 30 days to find local information, and 23% responded that they had used online social networks as a local media source.

An August 2011 Pew Internet study revealed:

  • The percentage of Internet users who use search engines on a typical day has been steadily rising from about one-third of all users in 2002, to a new high of 59% of all adult Internet users.
  • With this increase, the number of those using a search engine on a typical day is pulling ever closer to the 61 percent of Internet users who use e-mail, arguably the Internet’s all-time killer app, on a typical day.

StatCounter Global Stats reports the top 5 search engines sending traffic worldwide:

  • Google sends 90.62% of traffic.
  • Yahoo! sends 3.78% of traffic.
  • Bing sends 3.72% of traffic.
  • Ask Jeeves sends 0.36% of traffic.
  • Baidu sends 0.35% of traffic.

A 2011 study by Slingshot SEO reveals click-through rates for top rankings:

  • A #1 position in Google’s search results receives 18.2% of all click-through traffic.
  • The second position receives 10.1%, the third 7.2%, the fourth 4.8%, and all others under 2%.
  • A #1 position in Bing’s search results averages a 9.66% click-through rate.
  • The total average click-through rate for the first ten results was 52.32% for Google and 26.32% for Bing.

That's Some Spicy Data You Got There

All of this impressive research data leads us to important conclusions about web search and marketing through search engines. In particular, we’re able to make the following statements:

  • Search is very, very popular. Growing strong at nearly 20% a year, it reaches nearly every online American, and billions of people around the world.
  • Search drives an incredible amount of both online and offline economic activity.
  • Higher rankings in the first few results are critical to visibility.
  • Being listed at the top of the results not only provides the greatest amount of traffic but also instills trust in consumers as to the worthiness and relative importance of the company or website.

Learning the foundations of SEO is a vital step in achieving these goals.

Chuck Reynolds
Contributor

MarketHive

How Search Engines Operate

Search engines have two major functions: crawling and building an index, and providing search users with a ranked list of the websites they’ve determined are the most relevant.

Crawling and Indexing

Imagine the World Wide Web as a network of stops in a big city subway system.

Each stop is a unique document (usually a web page, but sometimes a PDF, JPG, or other file). The search engines need a way to “crawl” the entire city and find all the stops along the way, so they use the best path available—links.

Those two major functions are:

  • Crawling and indexing the billions of documents, pages, files, news, videos, and media on the World Wide Web.
  • Providing answers to user queries, most frequently through lists of relevant pages that they’ve retrieved and ranked for relevancy.

The link structure of the web serves to bind all of the pages together.

Links allow the search engines’ automated robots, called “crawlers” or “spiders,” to reach the many billions of interconnected documents on the web.

Once the engines find these pages, they decipher the code from them and store selected pieces in massive databases, to be recalled later when needed for a search query. To accomplish the monumental task of holding billions of pages that can be accessed in a fraction of a second, the search engine companies have constructed datacenters all over the world.

These monstrous storage facilities hold thousands of machines processing large quantities of information very quickly. When a person performs a search at any of the major engines, they demand results instantaneously; even a one- or two-second delay can cause dissatisfaction, so the engines work hard to provide answers as fast as possible.

Search engines are answer machines. When a person performs an online search, the search engine scours its corpus of billions of documents and does two things: first, it returns only those results that are relevant or useful to the searcher’s query; second, it ranks those results according to the popularity of the websites serving the information. It is both relevance and popularity that the process of SEO is meant to influence.

How do search engines determine relevance and popularity?

To a search engine, relevance means more than finding a page with the right words. In the early days of the web, search engines didn’t go much further than this simplistic step, and search results were of limited value. Over the years, smart engineers have devised better ways to match results to searchers’ queries. Today, hundreds of factors influence relevance, and we’ll discuss the most important of these in this guide.

Search engines typically assume that the more popular a site, page, or document, the more valuable the information it contains must be. This assumption has proven fairly successful in terms of user satisfaction with search results.

Popularity and relevance aren’t determined manually. Instead, the engines employ mathematical equations (algorithms) to sort the wheat from the chaff (relevance), and then to rank the wheat in order of quality (popularity).

These algorithms often comprise hundreds of variables. In the search marketing field, we refer to them as “ranking factors.” Moz crafted a resource specifically on this subject: Search Engine Ranking Factors.

You can surmise that search engines believe that Ohio State is the most relevant and popular page for the query “Universities,” while the page for Harvard is less relevant/popular.

How Do I Get Some Success Rolling In?

Or, “how search marketers succeed”

The complicated algorithms of search engines may seem impenetrable. Indeed, the engines themselves provide little insight into how to achieve better results or garner more traffic. What they do provide concerning optimization and best practices is described below:

SEO Information from Google Webmaster Guidelines

Google recommends the following to get better rankings in their search engine:

Make pages primarily for users, not for search engines. Don’t deceive your users or present different content to search engines than you display to users, a practice commonly referred to as “cloaking.”

  • Make a site with a clear hierarchy and text links. Every page should be reachable from at least one static text link.
  • Create a useful, information-rich site, and write pages that clearly and accurately describe your content. Make sure that your <title> elements and ALT attributes are descriptive and accurate.
  • Use keywords to create descriptive, human-friendly URLs. Provide one version of a URL to reach a document, using 301 redirects or the rel=”canonical” attribute to address duplicate content.
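The rel=”canonical” attribute mentioned above is expressed as a link element in the page head (the URL is hypothetical):

    <link rel="canonical" href="http://www.example.com/preferred-version">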

SEO Information from Bing Webmaster Guidelines

Bing engineers at Microsoft recommend the following to get better rankings in their search engine:

Ensure a clean, keyword-rich URL structure is in place.

  • Make sure content is not buried inside rich media (Adobe Flash Player, JavaScript, Ajax) and verify that rich media doesn’t hide links from crawlers.
  • Create keyword-rich content and match keywords to what users are searching for. Produce fresh content regularly.
  • Don’t put the text that you wanted indexed inside images. For example, if you want your company name or address to be indexed, make sure it is not displayed inside a company logo.

Have No Fear, Fellow Search Marketer!

 

In addition to this freely-given advice, over the 15+ years that web search has existed, search marketers have found methods to extract information about how the search engines rank pages. SEOs and marketers use that data to help their sites and their clients achieve better positioning.

Surprisingly, the engines support many of these efforts, though the public visibility is frequently low. Conferences on search marketing, such as the Search Marketing Expo, Pubcon, Search Engine Strategies, Distilled, and Moz’s own MozCon attract engineers and representatives from all of the major engines. Search representatives also assist webmasters by occasionally participating online in blogs, forums, and groups.

 

 

There is perhaps no greater tool available to webmasters researching the activities of the engines than the freedom to use the search engines themselves to perform experiments, test hypotheses, and form opinions. It is through this iterative—sometimes painstaking—process that a considerable amount of knowledge about the functions of the engines has been gleaned. Some of the experiments we’ve tried go something like this:

  1. Register a new website with nonsense keywords (e.g., ishkabibbell.com).
  2. Create multiple pages on that website, all targeting a similarly ludicrous term (e.g., Yoo ew gally).
  3. Make the pages as close to identical as possible, then alter one variable at a time, experimenting with placement of text, formatting, use of keywords, link structures, etc.
  4. Point links at the domain from indexed, well-crawled pages on other domains.
  5. Record the rankings of the pages in search engines.
  6. Make small alterations to the pages and assess their impact on search results to determine what factors might push a result up or down against its peers.
  7. Record any results that appear to be effective, and re-test them on other domains or with other terms. If several tests consistently return the same results, chances are you’ve discovered a pattern that is used by the search engines.

An Example Test We Performed

In our test, we started with the hypothesis that a link earlier (higher up) on a page carries more weight than a link lower down on the page. We tested this by creating a nonsense domain whose home page linked to three remote pages, each containing the same nonsense word exactly once. After the search engines crawled the pages, we found that the page receiving the earliest link on the home page ranked first.

This process is useful but is not alone in helping to educate search marketers.

In addition to this kind of testing, search marketers can also glean competitive intelligence about how the search engines work through patent applications made by the major engines to the United States Patent Office. Perhaps the most famous among these is the system that gave rise to Google in the Stanford dormitories during the late 1990s, PageRank, documented as Patent #6285999: “Method for node ranking in a linked database.” The original paper on the subject – Anatomy of a Large-Scale Hypertextual Web Search Engine – has also been the subject of considerable study. But don’t worry; you don’t have to go back and take remedial calculus in order to practice SEO!

Through methods like patent analysis, experiments, and live testing, search marketers as a community have come to understand many of the basic operations of search engines and the critical components of creating websites and pages that earn high rankings and significant traffic.

Chuck Reynolds
Contributor

MarketHive

Personalization & Search Engine Rankings

Years ago, everyone saw exactly the same search results. Today, no one sees exactly the same search results, not on Google, not on Bing. Everyone gets a personalized experience to some degree, even in private browsing windows.

Of course, there’s still a lot of commonality. It’s not that everyone sees completely different results; rather, everyone sees many of the same “generic” listings. But some listings will also appear because of where someone is, whom they know, or how they surf the web.

Pc: Country

One of the easiest personalization ranking factors to understand is that people are shown results relevant to the country they’re in.

Someone in the US searching for “football” will get results about American football; someone in the UK will get results about the type of football that Americans would call soccer.

If your site isn’t deemed relevant to a particular country, then you’ve got less chance of showing up when country personalization happens. If you feel you should be relevant, then you’ll probably have to work on your international SEO.

Pl: Locality

Search engines don’t stop personalizing at the country level. They’ll tailor results to match the city or metropolitan area based on the user’s location.

As with country personalization, if you want to appear when someone gets city-specific results, you need to ensure your site is relevant to that city.

Ph: Personal History

What has someone been searching for and clicking on from their search results? What sites do they regularly visit? Have they “Liked” a site using Facebook, shared it via Twitter or perhaps +1’d it?

This type of personal history is used to varying degrees, and in different ways, by both Google and Bing to influence search results. Unlike country or city personalization, there’s no easy way to make yourself more relevant to an individual user.

Instead, it places more importance on first impressions and brand loyalty. When a user clicks on a “regular” search result, you want to ensure you’re presenting a great experience so they’ll come again. Over time, they may seek out your brand in search results, clicking on it despite it being below other listings.

This behavior reinforces your site as one that they should be shown more frequently to that user. Even better if they initiate a social gesture, such as a Like, +1 or Tweet that indicates a greater affinity for your site or brand.

History is even more important in new search interfaces such as Google Now, which proactively presents “cards” to users based on explicit preferences (i.e., which sports teams or stocks you track) and search history.

Ps: Social Connections

What do someone’s friends think about a website? This is one of the newer ranking factors to impact search results. Someone’s social connections can influence what they see on Google and Bing.

Those connections matter because search engines view them as a user’s personal set of advisors. Offline, you might trust your friends and ask them for advice on a restaurant or gardening.

Increasingly, search engines are trying to emulate that offline scenario. So if a user is connected to a friend, and that friend has reviewed a restaurant or shared an article on growing tomatoes, then that restaurant or article may rank higher for that user.

If someone can follow you, or easily share your content, that helps get your site into their circle of trust and increases the odds that others they know will find you. Nowhere is this more transformative than Google+, where circling a site’s Google+ Page will change the personalized search results for that user.

Chuck Reynolds
Contributor

 

MarketHive