03-28-2007, 02:11 PM
StarkReality
Confirmed User
Join Date: May 2004
Location: 4 8 15 16 23 42
Posts: 4,444
Quote:
Originally Posted by sarettah
Why provide the RSS feed if you don't want someone to use it?

As far as duplicate content goes, I was yakking with Dravyk last night about this stuff, and he says the spiders can tell the difference between the original site and the sites using the RSS feeds, and that they don't count it as duplicate content.

I'm not sure if he's right, but he has done a hell of a lot of research on it and is fast becoming an SEO expert.
Well, in theory it should work like that, but Google tends to rely on domain authority/popularity/links when guessing which site is the original. I highly doubt Google can detect whether content was syndicated via RSS or just copied manually. It may be different for the new blog search, but not for the main Google index. Since they prefer sites with a certain "trust", it's only logical that they credit authorship to the most trusted site among those carrying the duplicate content... and that doesn't have to be the author's site.

You can watch a phenomenon backing up this theory when searching for paysite names, especially new sites using search-engine-friendly URLs: domain.com disappears from page one and a link like domain.com/affiliatecode/ shows up in the SERPs instead, because the URL carrying the affiliate code has more and stronger inbound links.
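A common defense against exactly this problem (not something described in the post, just a sketch of the usual fix) is to never serve real page content on affiliate-code URLs at all: record the code, then 301-redirect to the one canonical URL, so search engines only ever see a single version of the page. The path format and helper below are hypothetical:

```python
# Sketch, assuming affiliate links look like domain.com/AFF123/ --
# collapse them to the canonical homepage so they can't be indexed
# as duplicates that outrank the real site.
import re

# Hypothetical affiliate-code format: 4-12 alphanumeric characters.
AFF_PATTERN = re.compile(r"^/([A-Za-z0-9]{4,12})/?$")

def canonicalize(path):
    """Return (canonical_path, affiliate_code_or_None) for a request path.

    A real site would store the code in a cookie, then issue a
    301 redirect to the canonical path instead of serving content.
    """
    m = AFF_PATTERN.match(path)
    if m:
        return "/", m.group(1)
    return path, None
```

With this in front of the site, `/AFF123/` never appears in the index as a separate page; only `/` does, so the stronger-linked affiliate URL can't displace the homepage in the SERPs.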

That brings me back to the nph proxies (CGI proxies, often referred to as "anonymous surfing services"). Why are they kicking hundreds of thousands of sites out of Google, hijacking their listings and stealing all their SE traffic this way, filling their wallets with AdSense & co.? Because Google guesses!