
Duplicate Content and SEO

Duplicate content always harms a website. The unfortunate part of the story is that even if you try to avoid duplicate content on your own site, you cannot always succeed: someone else may have copied or republished the same content, and search engine spiders cannot reliably tell which website is the 'original'.

Search engines penalize duplicated websites in several ways. They may push duplicated pages into the supplemental results of the Search Engine Results Page (SERP) while keeping only one page in the main results, usually choosing the page that was indexed earliest. Sometimes search engines demote all the sites that contain the duplicated content, and it is even possible for a duplicate site to be removed entirely from the search engine's index.

Search engines ignore duplicate content mainly for two reasons. First, they do not want to waste effort displaying many pages that carry similar content. Second, they regard duplication as an unethical practice adopted by webmasters to gain popularity for their sites.

Search engines have no fixed rule for exactly how much copied text counts as duplicate content; it may be one line, ten lines, or a whole paragraph. Duplicate content appears on websites in several ways.

• Description of a product: A manufacturer supplies the same product description to every website that sells that product.

• Different URLs for the same webpage: Search engines index pages by their URLs, so if they find different URLs serving the same page, they label it as duplicate. For example, http://www.essay.com, https://www.essay.com, and http://www.essay.com/index.htm may all lead to the same page, but search engine crawlers do not recognize them as a single URL, so they are treated as duplicate webpages.

• Sites with session IDs: Some websites embed a session ID in the URL to track visitors. As in the previous case, URLs that differ only in their session IDs are treated as duplicate links.

• Pages with similar structure: The layout and content of many similar sites can look alike; many e-commerce websites, for example, share a common structure. Crawlers may identify these as duplicates.
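The two URL-related cases above can be mitigated by normalizing URLs before comparing them. The sketch below is a minimal illustration, not a search engine's actual algorithm: it folds http/https and index pages together and strips session-ID query parameters (the parameter names in SESSION_PARAMS are assumptions for the example; real sites vary).

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical session-ID parameter names; real sites use many variants.
SESSION_PARAMS = {"sessionid", "sid", "phpsessid", "jsessionid"}

def normalize_url(url: str) -> str:
    """Reduce variant URLs of the same page to one canonical form."""
    scheme, netloc, path, query, _ = urlsplit(url)
    scheme = "https"                     # fold http and https together
    netloc = netloc.lower()
    # Treat a bare domain and /index.htm(l) as the same page.
    if path in ("", "/index.htm", "/index.html"):
        path = "/"
    # Drop session-tracking parameters from the query string.
    kept = [(k, v) for k, v in parse_qsl(query)
            if k.lower() not in SESSION_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(kept), ""))

urls = [
    "http://www.essay.com",
    "https://www.essay.com/index.htm",
    "http://www.essay.com/?sessionid=abc123",
]
print({normalize_url(u) for u in urls})  # all three collapse to one URL
```

A crawler applying a rule like this would index the page once instead of three times, which is why webmasters are also advised to pick one canonical URL form and redirect the others to it.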

The damage caused by duplicate content is considerable, and it is every website owner's responsibility to make sure their content is not duplicated. Various tools available on the Internet can check whether content has been copied, and the search engines publish guidelines that explain SEO practices for avoiding duplication. They recommend that pages containing duplicated text, such as boilerplate marketing materials or URLs with session IDs, be excluded from search engine indexing.
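One common way such tools detect copied text is to compare overlapping word windows ("shingles") between two documents and measure their Jaccard similarity. This is a minimal sketch of that technique, not the method any particular search engine or checker uses:

```python
def shingles(text: str, k: int = 3) -> set:
    """Set of k-word shingles (overlapping windows of k words)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the shingle sets of two texts."""
    sa, sb = shingles(a), shingles(b)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

original  = "quality content wins in search rankings over time"
copied    = "quality content wins in search rankings over time"
rewritten = "fresh unique articles perform better in search"

print(jaccard(original, copied))     # 1.0 means an exact duplicate
print(jaccard(original, rewritten))  # near 0.0 means mostly distinct
```

A score near 1.0 flags a likely duplicate, while a low score suggests independent text; real checkers add hashing and thresholds on top of this basic idea.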

Considering the damage that duplicate content causes to a website, the issue needs to be addressed appropriately.
