Are People Just Picking on Google?
So why do people write, talk, and discuss seemingly only Google's filter? Perhaps it's because no one can make any sense of how duplicate content is figured. And in all honesty, it doesn't make any sense. Why go to all that work to get rid of duplicate content when no effort is made on Google's part to determine what duplicate content really is, and which copy is the original? Google's algorithms have always been baffling, but when you see a filter that is designed to get rid of duplicate content fail to keep the ORIGINAL, it goes beyond baffling; it is infuriating. After all, why should someone else get the credit for content that you've worked so hard to create, simply because Google likes their site better? And while I'm positive Google hasn't purposely played favorites, run popularity contests, or hand-picked the sites it wants in its results pages, it is starting to appear that way to many hardworking webmasters. Is it fair? Not really.
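No one outside Google knows how its duplicate filter actually works, but a common textbook technique for spotting near-duplicate pages is "shingling": break each page into overlapping runs of words and measure how much the two sets overlap. The sketch below illustrates that general idea only; the function names, the shingle size, and any threshold you'd apply are my own illustrative choices, not Google's algorithm.

```python
def shingles(text, k=4):
    """Split text into a set of overlapping k-word shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard_similarity(a, b):
    """Jaccard overlap between two pages' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the quick brown fox jumps over the lazy dog near the river"
scraped = "the quick brown fox jumps over the lazy dog near the river"

# Identical text yields a similarity of 1.0; a filter might treat
# anything above some cutoff (say 0.9) as a duplicate.
print(jaccard_similarity(original, scraped))  # 1.0
```

Note that nothing in a similarity score like this says which page came first, which is exactly the complaint above: detecting a duplicate is easy compared with deciding who the original author is.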
Along the same line, it appears as though extra influence is given to sites that run Google's AdSense, which end up positioned better in the SERPs. Now we have commercially motivated results, the very thing Google vehemently denies in its mission statement. There is also some speculation as to whether Google treats duplicate content from cached links similarly. In a forum on the subject, one poster asks, "What would happen if another search engine that had duplicate content filtering were able to spider Google's cached links from SERPs but didn't obey the robots.txt file? Would the Google cached copy, which is technically zero-levels-deep on a site with enormous Link Popularity, cause your version to be filtered out as the lower ranked? Just playing devil's advocate to inspire some thinking here." He may have hit closer to home than anyone may realize, and it certainly makes my head spin to think about.
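The scenario that poster describes is speculative, but webmasters worried about a cached copy of their page floating around do have one real lever: the `noarchive` robots meta directive, which asks search engines not to keep a cached version of the page at all. A minimal example, placed in the page's `<head>`:

```html
<!-- Ask search engines not to store a cached copy of this page.
     "noarchive" is a standard robots meta value; combine it with
     other directives (e.g. "index, follow") as needed. -->
<meta name="robots" content="noarchive">
```

This doesn't change how any duplicate filter ranks you, but it removes the cached copy from the equation entirely, so there is no zero-levels-deep duplicate for a rogue spider to find.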