Are Links From High Page Rank Really Important?


People focus too much on the PageRank (PR) of a site when looking to build links. It's a bad strategy. Why? Because the PageRank displayed for public viewing is not the real PageRank of that page. Google maintains the real PageRank of the web on its own servers, and that is what it uses for ranking purposes; the one we see is not the real thing. So why bother getting a link from a high-PR page when, in reality, it could even have been banned by the search engines? This was recently confirmed in an interview with an ex-member of Google's search quality team. This is exactly what he said:

Getting a link from a high PR page used to always be valuable, today it’s more the relevance of the site’s theme in regards to yours, relevance is the new PR.

It means that even if you get a link from a genuinely high-PR web page, it's useless if it's not relevant. If you are in the business of mobile application development, you should try to get links from websites that deal with mobiles, network providers, etc. A link from a carpet-cleaning website, even with a high PR, will be of no help. You get the point.

Why do you think Google has kept the real PR of websites confidential? It's because if people knew the real PageRank of a website or page, they would do whatever it takes to get a link from that page, and that would manipulate Google's results. Clearly Google does not want that. It was done to keep link spam off the Internet.

Good move, Google!!!

I would also like to dwell on the other important aspects of the interview, as they may be of help to you and give you some more insight into your SEO strategy. Here they are:

1. Google will come down very hard on websites that go against its webmaster guidelines or try to spam it in any way. Panda and Penguin are recent examples; more will follow.

2. Google looks for on-page signals like keyword stuffing (using a keyword too many times on a page for no reason), hiding content using CSS or hiding keywords by making them the same color as the page background, cloaking (presenting different content to the search engines and to visitors), etc. If these are present anywhere in a site, it may get banned.

3. Content quality is very important. You need to focus on the quality of your content. Not all of us are great writers, but you need to at least put in some effort. Lots of spelling errors, too many grammatical mistakes, and scraped or poorly written content can all downgrade the quality of your website.

4. Check your backlink profile. After the Penguin update, the quality of the websites that link to your site matters. Too many poor-quality links may signal spam. If you ever bought thousands of links for pennies, chances are they were all generated by software. Get rid of these links; they will harm your site. As mentioned earlier, get natural links from relevant websites.

5. On-page SEO is important. Use good, unique Title tags and write a meaningful Description meta tag. Do not bother with the Keywords tag, as it is more or less useless. Use H1 tags where relevant. Have a relevant domain name. Write good content and add to it often. I am not saying you should add an article every day, but it's good to add 5 or 6 articles every month.

6. To build links, build relationships with fellow webmasters. You can find them in forums or on social networking sites. Most of them will happily link back if your site is good and of value to their visitors. Also get links from authentic, human-reviewed directories.

7. Create a profile for your site on social networking sites such as Google+ and Facebook, and try to get people to follow it.
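As a rough sketch of the on-page basics from point 5 (the title, description, and heading text here are purely illustrative, not prescribed by the interview):

```html
<!-- Illustrative values only: each page gets its own unique title and description -->
<head>
  <title>Mobile App Development Services | Example Co.</title>
  <meta name="description" content="Custom mobile application development for iOS and Android.">
</head>
<body>
  <h1>Mobile App Development</h1>
  <!-- good, regularly added content goes here -->
</body>
```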

How To Recover from Panda Update


If your website was hit by the Google Panda update, you need not panic. Here is a step-by-step guide to help your site recover.

First, a word about the Panda update: it was a major update by Google, first released on 23 February 2011, and it impacted 11.8% of search queries. Thereafter Google released almost 25 Panda updates, and it now seems Panda has been incorporated into the normal indexing process. Google normally does not confirm small updates, so since Panda is now part of normal indexing, we may never know when further Panda updates are released.

Wikipedia has an in-depth article on the Panda update, so I will not dwell on it much. The purpose of this article is to help your site recover from Panda.

To know if your site was ever hit by Panda, you should know what type of sites were hit.

According to Wikipedia:

Google Panda is a change to Google’s search results ranking algorithm that was first released on 23 February 2011. The change aimed to lower the rank of “low-quality sites” or “thin sites”, and return higher-quality sites near the top of the search results.

It is clear that the Panda algorithm reduced the rankings of websites that it judged to be low quality.

Now what are “low-quality” or “thin sites”?

These are the types of websites that offer little or no value to their visitors: for example, a one-page website that says nothing unique and exists only to push an affiliate link or ask visitors to buy something. (By the way, you will rarely find a single-page website topping the search engine rankings.)

What about a website with hundreds or thousands of pages of scraped content? In Panda's view, both of these websites are low quality and a fair target. Post-Panda, these types of websites are gone from the top search engine rankings.

If your website's rankings dropped after 23 February 2011, chances are your website was hit by Panda. Check your website thoroughly before submitting a reconsideration request to Google.

Check for the following:

1. No duplicate content: Panda hates duplicate content. Content found on any other website SHOULD NOT appear on yours. Some lazy people have a habit of copy-pasting someone else's content into their website and hitting the Publish button. This gives them the joy of seeing the number of posts on their site grow, and they think it will earn them rankings. Unfortunately, it is counterproductive.

Remove any duplicate content from your site, and check your content against the rest of the web. If your website consists of 100% scraped content, it's better to abandon that domain, get a new one, and start from scratch. We know there are many people out there who used WordPress plugins that automatically scraped content from the Internet and posted it to their sites without them even logging in. They wasted money big time. These things do not work.

2. Remove content duplicated within your site itself: Sometimes, inadvertently, the same page can be reached through many URLs. For humans this is not a problem, but search engines may treat it as duplicate content. This is tricky because you yourself may not know how, or from where, the search engines are getting different URLs with the same content.

If you are finding it difficult to locate duplicate content on your site, look in your Google Webmaster Tools account. Under "Optimization" -> "HTML Improvements" you will find "Duplicate meta descriptions". Note them down and add a rel="canonical" link in the head section of the duplicated pages to help save your site from Panda. Note that Google has said many times to leave it to them to work out duplicate URLs, but it's better to take this step yourself to make sure duplicate pages are not indexed. The rel="canonical" tag is one of the best ways to tell the search engines which page is the original. More info about this tag can be found here.
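For instance, if the same article were reachable at two URLs (the URLs below are made up for illustration), each duplicate version could declare the preferred one in its head section:

```html
<!-- In the <head> of the duplicate page, point to the preferred URL -->
<link rel="canonical" href="http://www.example.com/article/7">
```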

3. Check your site for low-value content: Search engines don't like content that says nothing unique, or that merely repeats what is already somewhere on the Internet. That doesn't mean you cannot say what someone else has said, but if it's said the same way, with no new insight or research, it's called rehashed content. It's not copy-paste, but it creates no value. A website full of low-value content will almost certainly lose its top rankings.

Check every page of your website. Why every page? Because even a single low-value page can drag down the rankings of your entire website. Unfortunately, that is one of Panda's less pleasant traits. I personally think one or two pages can be of lower quality; none of us is an expert at everything. Yes, I totally agree that lifting 100% of the content from another website is very bad, even on a single page, but Google should have taken a slightly more lenient view when only a few pages were of low quality. As of now that's not the case, so make sure every page is of high quality. If you think one or two pages are of low quality, either delete them or make sure Google does not index them by putting the noindex code in those pages. That way they remain on your site but are invisible to the search engines.
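The noindex code mentioned above is a robots meta tag placed in the head section of the low-quality page; a minimal form (assuming you still want its links followed) looks like this:

```html
<!-- Keep the page on the site, but ask search engines not to index it -->
<meta name="robots" content="noindex, follow">
```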

4. Add great content in the future: Just removing the duplicate and low-value content is not sufficient. Remember, you have lost your rankings and you want them back. To do so you must do two things from now on: write great-quality content and earn great links to your site. If you do just these two things, no Panda or Penguin can ever damage your rankings. If your intent is good and you take no shortcuts, the search engines will surely reward you in the long run, and you won't have to worry about your site's rankings every now and then.

Once the above is done, you should also at least try to remove bad links pointing to your site. Yes, I agree that it's very hard to control who links to your site, but get those spammy links removed wherever possible. If you cannot, use the Disavow Links tool.

When everything looks fine, send a reconsideration request to Google. Be 100% sure before sending the request; you have nothing to lose. I am sure that if you follow the steps written above, your site may well regain its rankings. All the best!!!

Creating Robots.txt File and its Importance


Do you know the importance of a robots.txt file? Read on to find out.

The success of big companies lies in keeping their confidential data secret, hidden from all; this lets them execute their plans easily and change course as the situation demands. The job of a robots.txt file is similar: it can allow or disallow a search engine from visiting some or all of your web pages. A human visitor, of course, remains free to visit those pages, so to the search engines your website may look different from what a visitor sees. If you think one or more of your pages aren't meant for the search engines, you can keep them out.



Every search engine has a “robot” (a software program) that does the job of visiting websites. Its purpose is to gather a copy of each site and keep it in the search engine's database. If your site is not in that database, it never shows up in the search results.

Web robots are sometimes referred to as web crawlers or spiders, so the process of a robot visiting your website is called “spidering” or “crawling”. When somebody says “the search engines have spidered my website”, it means the search engine robots have visited it. Each robot is known by a name and has its own IP address. The IP address is of no importance to us, but knowing the names will help, since a robot's name is what we use when writing a robots.txt file. This is why the file is called “robots.txt”.

Given below are the robots of some very popular search engines (search engine: robot name):

Alexa/Internet Archive: ia_archiver
AltaVista: Scooter
FAST/AllTheWeb: FAST-WebCrawler
Excite: ArchitextSpider
Euroseek: Arachnoidea
Google: Googlebot
HotBot: Slurp (uses Inktomi's robot)
Inktomi: Slurp
Infoseek: UltraSeek
LookSmart: MantraAgent
Lycos: Lycos_Spider_(T-Rex)
National Directory: NationalDirectory-SuperSpider
UK Searcher: UK Searcher Spider

Writing Robots.txt:

Let's learn to write robots commands. There are two ways to write them: one is to include all the commands in a text file called “robots.txt”, and the other is to write the commands in a meta tag.

We will learn both ways of writing robots command.

Writing robots command in Meta tag:

There are 4 things you can tell a search engine robot when it visits your page:

1) Do not index this page – the search engines will not index the page.

2) Do not follow any links on this page – the search engines will not follow the links included in the page, i.e. they will not index any page that this page links to.

3) Do index this page – the search engines will index the page.

4) Do follow the links – the search engines will index the pages that this page links to.

Note that “indexing” is different from “spidering”. A search engine first spiders a page and then indexes it. Indexing assigns the page a certain importance based on its content, information, meta tags, and link popularity with respect to the searched keyword; all of this is decided at query time. When you tell the search engines not to index a page, they still know that the page exists, but they will not rank it. That is, a no-index page will never be shown in their search results. This does not mean a no-index page will get no visitors at all; it may still get them indirectly from pages that link to it. It just gets no direct visitors from the search engines.

Suppose you want the search engines to index a page and also follow (index) its linked pages; then include the following command in the meta tag:

<meta name="robots" content="index, follow">

Suppose you want the search engines to index a page but not follow its links then include the following command in the Meta Tag:

<meta name="robots" content="index, nofollow">

Suppose you do not want the search engines to index a page but follow its links then include the following command in the Meta Tag:

<meta name="robots" content="noindex, follow">

Suppose you do not want the search engines to either index or follow links of a particular page then include the following command in the Meta Tag:

<meta name="robots" content="noindex, nofollow">


Google keeps a “cached” copy of every page it spiders; it's a small snapshot of the page. Want to stop Google from doing so? Add the noarchive directive, for example:

<meta name="robots" content="noindex, nofollow, noarchive">

Like any meta tag, the tags written above should be placed in the HEAD section of an HTML page:

<title>your title</title>
<meta name="description" content="your description.">
<meta name="keywords" content="your keywords">
<meta name="robots" content="index, follow">

Creating robots.txt file:

A robots.txt file is an independent file and should be written in a plain-text editor like Notepad. Do not use MS Word or any other word processor to create robots.txt. The bottom line is that this file must be plain text with the extension “.txt”, or it will be useless.

Let’s begin. Open Notepad (it comes free with Microsoft Windows) and save the file with the name “robots.txt”. Make sure that the extension is .txt.

By the way, did you notice we did not use the name of any robot in the meta tag? What does that indicate? Simple: with a meta tag you direct all the search engines at once to do, or not do, something on a page. You have no control over any single search engine. The solution is robots.txt.

It can always happen that you do not want a particular search engine to index a page for some reason. In that case a robots.txt file will help, though I do not recommend such a thing: the search engines get you traffic, so why block them? Stop them from doing their job and they will return the favor. I repeat: keep your pages smart for the search engines and welcome them. Fine, then why take the trouble to learn robots.txt? Why include a robots.txt file at all?

Let's suppose yours is a dynamic, database-driven site containing information about your newsletter subscribers and customers: their addresses, phone numbers, etc. All this confidential information is kept in a separate directory called “admin”. (It is recommended to keep such information in a separate directory; handling the data will be easier for you, and it will be easy to keep the search engines away, as we will see shortly.) I am sure you would never want any unauthorized person to visit this area, let alone the search engines. It does not help the search engines either, since they have nothing to do with the data or files there. Here comes the role of a robots.txt file.

Write the following in the robots.txt file:

User-agent: *
Disallow: /admin/

This does not allow the spiders to index anything in the admin directory, including its sub-directories, if any.

The asterisk (*) indicates all search engines. So how do you stop one particular search engine from spidering your files or directories?

Suppose you want to stop Excite from spidering this directory:

User-agent: ArchitextSpider
Disallow: /admin/

Suppose you want to stop Excite and Google from spidering this directory:

User-agent: ArchitextSpider
Disallow: /admin/

User-agent: Googlebot
Disallow: /admin/

Files are no different. Suppose you want a file datafile.html not to be spidered by Excite:

User-Agent: ArchitextSpider
Disallow: /datafile.html

Similarly, suppose you do not want it to be spidered by Google either:

User-agent: ArchitextSpider
Disallow: /datafile.html

User-agent: Googlebot
Disallow: /datafile.html

Suppose you want two files datafile1.html and datafile2.html not to be spidered by Excite:

User-Agent: ArchitextSpider
Disallow: /datafile1.html
Disallow: /datafile2.html

Can you guess what the following means?

User-agent: ArchitextSpider
Disallow: /datafile1.html
Disallow: /datafile2.html

User-agent: Googlebot
Disallow: /datafile1.html

Excite will spider neither datafile1.html nor datafile2.html, while Google is blocked only from datafile1.html; it will still spider datafile2.html and the rest of the files in the directory.

Imagine you have a file in a sub-directory that you would not like to be spidered. What do you do? Let's suppose the sub-directory is “official” and the file is “confidential.html”:

User-agent: *
Disallow: /official/confidential.html

If the syntax of your robots.txt file is incorrect, the search engines will ignore the offending command, so double-check the file for errors before uploading it. Upload robots.txt to the ROOT directory of your server; the search engines look for it only there.
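Before uploading, one way to sanity-check your rules is Python's standard-library robots.txt parser; this sketch encodes two of the examples above (blocking everyone from /admin/, and blocking only Excite's robot from datafile.html):

```python
from urllib.robotparser import RobotFileParser

# Rules mirroring the examples above: ArchitextSpider (Excite) is blocked
# from /admin/ and from datafile.html; every other robot only from /admin/.
rules = [
    "User-agent: ArchitextSpider",
    "Disallow: /admin/",
    "Disallow: /datafile.html",
    "",
    "User-agent: *",
    "Disallow: /admin/",
]

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("Googlebot", "/admin/customers.html"))   # False: /admin/ is off-limits to all
print(parser.can_fetch("ArchitextSpider", "/datafile.html"))    # False: Excite's robot is barred
print(parser.can_fetch("Googlebot", "/datafile.html"))          # True: other robots may fetch it
```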


You should be able to see your robots.txt file if you type your domain name followed by /robots.txt into the address bar of your Internet browser.

Google itself has one; you can view it the same way at google.com/robots.txt.

All the major search engines follow robots.txt commands.

You can look in your web server's log files to see which search engine robots have visited. They all leave signatures that can be detected; these signatures are simply the names of their robots. For instance, if Google has spidered your site, its requests will appear in your access log under the user-agent name Googlebot. This is how you know which search engine has spidered your pages, and when!
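As a small illustration, a few lines of Python can scan access-log lines for these robot signatures (the log line and robot list below are made up for the example; real log formats vary by server):

```python
# A made-up line in a typical Apache-style access log.
log_line = ('66.249.66.1 - - [10/Oct/2012:13:55:36 +0000] '
            '"GET /index.html HTTP/1.1" 200 2326 "-" '
            '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"')

# Robot names taken from the list earlier in the article.
KNOWN_ROBOTS = ["Googlebot", "Slurp", "ArchitextSpider", "ia_archiver", "Scooter"]

def robots_in(line):
    """Return the known robot names whose signature appears in one log line."""
    return [name for name in KNOWN_ROBOTS if name in line]

print(robots_in(log_line))  # ['Googlebot']
```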

We are highly experienced in SEO/SEM/Pay Per Click Management. Contact us regarding any query you may have.

Meta Tags For High Search Engine Ranking


Meta tags (Keywords and Description) and the Title tag are absolutely essential for any page that wants top ranks in the search engines. In fact, they are an important criterion the search engines take into consideration when deciding how to rank a page for a keyword.


We discuss them one by one.

The Meta Keyword Tag:

The Meta Keyword tag is written like this:

<META NAME="Keywords" CONTENT="keyword1, keyword2, ..., keywordX">

where keyword1, keyword2, ..., keywordX are important keyword phrases for the site.

Note: The Meta Keywords tag is not displayed to the viewer while browsing. It's there for technical reasons; a full explanation is beyond the scope of this article, but know that it can help in search engine ranking, which is why we are discussing it here!

Some search engines ignore this tag; however, it doesn't take much effort on your part to include it, and the search engines that do give importance to the Meta Keywords tag will look for it.

Stuffing this tag with a lot of keywords reduces its weight, and the search engines might even ignore it altogether. Repeating the same keyword more than twice is also unnecessary. A better idea is to write the most important keyword in upper as well as lower case, for example: “search engine optimization, search engine, SEARCH ENGINE OPTIMIZATION”.

Suppose your site deals in online booking of hotel rooms in the US, and you have one page dedicated to hotels in Florida. You could write:

<META NAME="Keywords" CONTENT="hotels in florida, US Hotels, Florida accommodation, US accommodations, HOTELS IN FLORIDA">

Note that no keyword is repeated more than twice, yet we still manage to push the most important keywords. There is no need to stuff this tag with lots of keywords: even the search engines that do recognize the Meta Keywords tag will ignore the extras.

You can repeat some very popular and important keywords on different pages of your website; it does make your site stronger for those keywords. Just keep in mind not to include a keyword on any page that has absolutely nothing to do with it. The index page is the best example: your index page is the most important page for the search engines, and since it contains the general information about your business/site, you can include the most popular keywords there. It's recommended you include your top 10 keywords on the index page, then repeat each keyword on the page that targets that portion of your business. For example, the index page could contain the keyword “Hotels in Florida”, and the page dedicated to online booking and information for hotels in Florida could also contain it.

The bottom line: even if it is a not-so-important tag, do include the Meta Keywords tag in all of your web pages, and don't stuff it with keywords.

We can write Meta Tags for your website for up to 10 pages for $10 only. This is what we will do:

  • Research your site and come up with 10 highly effective keywords. These keywords will get you good traffic in the long run.
  • Write Meta Tags (Description and Keywords tags) and a Title Tag for these keywords that you can implement in 10 pages of your website.
  • For 10 dollars this service is by far cheaper than any Meta Tag writing service online. Please pay by clicking the button below, and do not forget to mention the URL of your website. We will send you the list of 10 keywords and ready-made Meta Tags for your pages within 1 business day.

The Meta Description Tag:

The Meta Description tag is written like this:

<META NAME="Description" CONTENT="Describe your site in a single line here.">

where in the CONTENT part you write a description of your website in general and the page in particular. Your description should not exceed 20 words; try to make it fewer. You can actually write a longer description, but the search engines are too busy to read much beyond that point, so anything after roughly 20 words is ignored.

Note: The Meta Description tag, too, is not displayed on the page itself, but it helps tremendously in search engine ranking. Most search engines display this tag in their search results, and surfers read this sentence before they click through to your site. Imagine the importance of this tag: a smart description can compel a surfer to click, and a weak one may drive your traffic away!

These 20-odd words carry some serious weight as far as search engine placement is concerned. Try to fit the most important keywords within the first 15 words. Your description should be a grammatically correct sentence with no spelling errors. Start it with an upper-case letter (just as you would any sentence) and end it with a period. The nearer your target keyword is to the beginning of the sentence, the better.

An example:
Let's take the same travel-agent site featuring online reservations for hotels in the USA. The page featuring accommodation in Florida could include the following description:

<META NAME="Description" CONTENT="Florida accommodation – online reservation available with huge discount.">

The first two words, “Florida accommodation”, are our important keyword, so the description begins with a popular keyword. “Online reservation” is a general keyword and helps even if somebody types “Florida accommodation online” or “online Florida accommodation”. Note that there is no other keyword ‘diluting’ the importance of “Florida accommodation”; since we assume this page is dedicated to Florida accommodation, there is no need to push any other keyword. Now read the sentence: any surfer interested in visiting Florida would click on it.

The index page can contain all the important keywords in its description, but as said earlier, do not exceed 20 words.

The bottom line: no more than 20 words in a description; a grammatically correct sentence; no spelling mistakes; try to start with an important keyword; only the first letter in upper case (proper nouns, of course, also begin with upper case).

