Technical Search Engine Optimization

Websites employ several search engine optimization (SEO) development and design features to boost their ranking. Search engines use programs called spiders to crawl the web (and no, by webs I'm not referring to spiderwebs or anything close for that matter :) ). Unfortunately, spiders are limited in the ways they can crawl the web, so we shall discuss some basic do's and don'ts which are often misunderstood or completely unaccounted for when building a well-optimized website.

Every search engine has a unique way of processing content, but an important factor to note is that search engines do not visualize it as you and I would. Google, Bing, Ask.com and all the other search engines are just robots following a particular algorithm, and although they are improving day by day, they are not able to interpret certain media such as images, videos and other Flash or Java plug-ins. To give you a simple example: although for humans it comes naturally to understand and identify a painting of a black dog on a white background, for a computer this is a monumental task. Thus, when crawlers go through your website they skip any such media, and hence it is of extreme importance to supplement images, videos etc. with text in HTML format. That way crawlers can identify keywords in your text and improve your website's relevance and rating when comparing it to someone's search. To give a clear idea of what search engines see, Moz has an excellent illustration which should help you out a lot :)

Keep the following points in mind to give your website a better chance of ranking higher if it contains images and other media:

  • Search engines are robots and thus do not interpret websites like humans do
  • Supplement images, videos and audio with HTML text if the content is meant to be indexed
  • Use tools such as SEO-browser.com and Google Cache View to check how search engines see your website
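As a sketch of the second point above, an image can carry its message in plain HTML via the `alt` attribute and a caption (the file name and wording here are illustrative, not taken from any real site):

```html
<!-- A crawler cannot "see" the painting itself, but it can read the
     alt text and the caption, both of which are plain HTML text.
     File name and phrasing are purely illustrative. -->
<figure>
  <img src="black-dog.jpg" alt="Painting of a black dog on a white background">
  <figcaption>Painting of a black dog on a white background.</figcaption>
</figure>
```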

It is crucial to remember that if search engines can't see your content, they don't even know it exists, which unfortunately sets you at the bottom of the list.

Images, videos and similar media content are not the only features which harm your optimization. Links are essential for any website because their job is to connect pages to one another, and the only way spiders can crawl through your website is through links. If there is no way for a crawler to access an important page on your website, then to the crawler it doesn't even exist, even though it might have all the information the user asked for. And so you might be asking,

  • What makes a page inaccessible to crawlers?

Well, it all depends on the link placement. As already mentioned, search engines can't interpret images, videos, Flash and Java plug-ins, so if your link lives in any one of those, it is accessible to a user who is already on your website but not to crawlers, which means that for a search engine that link doesn't even exist. If that was the only link to a certain page, then that page doesn't exist to Google, Yahoo etc. Moreover, links in protected pages, such as behind password-protected forms, are not crawlable. Note that even if your website includes a search box and users can reach every page through it, crawl bots do not perform searches, so those pages are doomed until a crawlable link is added.
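To make the placement point concrete, here is a minimal sketch of a link a spider can follow next to one it likely cannot (the URL and markup are illustrative, not from any real site):

```html
<!-- Crawlable: a plain anchor with an href that a spider can follow. -->
<a href="/contact.html">Contact us</a>

<!-- Not reliably crawlable: the destination exists only inside
     JavaScript, so a spider may never discover /contact.html this way. -->
<span onclick="window.location='/contact.html'">Contact us</span>
```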

A couple of more technical factors affecting SEO: links pointing to pages blocked by the Meta Robots tag or Robots.txt are not crawled. However, note that it is suggested to block pages containing a lot of JavaScript, simply because links in JavaScript do not carry as much weight as links in HTML. Moreover, block any forms and iframes (even though I highly suggest you stay away from iframes in the first place). This can be done by listing /blocked-page.html in the Robots.txt, or by employing the meta noindex tag <meta name="robots" content="noindex"> in individual pages for Meta Robots. Moz explains the process in very good detail, and why Meta Robots is better than Robots.txt.
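For reference, the two blocking mechanisms just mentioned look like this (reusing the same illustrative /blocked-page.html page name from the text). First, the Robots.txt rule, which lives in a robots.txt file at the root of the site:

```
# robots.txt at the site root; blocks this one page for all crawlers
User-agent: *
Disallow: /blocked-page.html
```

And second, the Meta Robots alternative, placed in the <head> of the individual page you want kept out of the index:

```html
<!-- Tells compliant crawlers not to index this particular page -->
<meta name="robots" content="noindex">
```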

Last but not least, if a page already carries more than about 100 links, any link added below them tends to be ignored by search engines due to spam-reduction systems.

There are more things to consider for good SEO, such as keyword usage, duplicate content and much more, but I will not tell you all the good stuff in a single post :p. By now you should have a clear idea of what technical search engine optimization is and which elements to implement in your system. If you thought this was a bit technical, I'd suggest you check out Introduction to Search Engine Optimization to jog back some memory :) . Keep in touch for future posts on search engine optimization.
