Yeah, faking the User-Agent is no big deal; it can be done in Perl with LWP in a few minutes.
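To show just how trivial it is, here is a minimal sketch (in Python's stdlib `urllib` rather than Perl's LWP, but the idea is identical; with LWP::UserAgent it's just `$ua->agent("...")`). The URL is a placeholder, and no request is actually sent, this just demonstrates that the UA string the server logs is whatever the client claims:

```python
import urllib.request

# A scripted client can claim to be any browser just by setting one header.
# This is the same kind of Firefox UA string that shows up in my logs.
FAKE_UA = ("Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) "
           "Gecko/20041107 Firefox/1.0")

req = urllib.request.Request("http://example.com/",
                             headers={"User-Agent": FAKE_UA})

# The server (and therefore its access log) sees the faked string,
# not anything about the real client.
print(req.get_header("User-agent"))
```

So a User-Agent in the log proves nothing by itself.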
The big problem is proving it, since I have no one at business.com willing to work with me on this.
They're happy to keep charging me more, and they don't realize they'll lose my business, and everyone else's, if they don't fight on the customer's side.
About the images: as I said, here is what a 'valid' click looks like:
144.132.24.154 - - [12/Jan/2005:00:50:20 -0500] "GET / HTTP/1.1" 200 9309
144.132.24.154 - - [12/Jan/2005:00:50:20 -0500] "GET /gstyle.css HTTP/1.1" 200 2308
144.132.24.154 - - [12/Jan/2005:00:50:20 -0500] "GET /images/gtech_02.jpg HTTP/1.1" 200 32090
144.132.24.154 - - [12/Jan/2005:00:50:20 -0500] "GET /images/gtech_03.jpg HTTP/1.1" 200 4522
and so on...
A user fires up a browser, comes to my website, and gets index.html, which in turn tells the browser to fetch all my images, CSS, etc. That's what you see above.
In the clicks I complain about, like in the first post, '/' is retrieved several times, and each time it is retrieved in full, all the bytes. Normally, when a browser has already fetched a page and it hasn't changed, it won't download it again: it sends a conditional GET (If-Modified-Since), the server answers 304 Not Modified without resending the body, and the log line has no transferred size. It would look something like this:
209.28.22.13 - - [26/Jan/2005:13:43:44 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0"
instead of:
209.28.22.13 - - [26/Jan/2005:13:43:44 -0500] "GET / HTTP/1.1" 200 9883 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0"
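That difference is easy to check for mechanically. Here's a rough sketch of the kind of filter I mean (Python, stdlib only; the regex covers both the common and combined log formats above, and the sample lines and the `full_root_fetches` helper name are mine, not anything business.com provides):

```python
import re
from collections import Counter

# Minimal Apache log parser: client IP, request line, status, bytes sent.
# Apache logs "-" in the bytes field when no body was sent (e.g. a 304).
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)')

def full_root_fetches(lines):
    """Count, per IP, how often '/' was re-sent in full (non-empty body).
    A genuine repeat visitor usually produces a conditional GET / 304 instead."""
    hits = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, method, path, status, size = m.groups()
        if method == "GET" and path == "/" and size not in ("-", "0"):
            hits[ip] += 1
    return hits

log = [
    '144.132.24.154 - - [12/Jan/2005:00:50:20 -0500] "GET / HTTP/1.1" 200 9309',
    '209.28.22.13 - - [26/Jan/2005:13:43:44 -0500] "GET / HTTP/1.1" 200 9883',
    '209.28.22.13 - - [26/Jan/2005:13:44:01 -0500] "GET / HTTP/1.1" 200 9883',
    '209.28.22.13 - - [26/Jan/2005:13:45:10 -0500] "GET / HTTP/1.1" 304 -',
]

# IPs that pulled the full front page more than once are the suspicious ones.
suspects = {ip for ip, n in full_root_fetches(log).items() if n > 1}
print(suspects)
```

An IP that re-downloads every byte of '/' over and over, with no cached assets, behaves like a script, not a browser.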
In other instances, as I said in a previous post, these 'visitors' ask for /robots.txt too! And as funny as it may sound, business.com reps told me that that click, too, was valid and billable.
Yeah, I should probably switch providers, but so far nobody else can deliver specialized traffic the way they do. I'm just pissed that instead of getting 600 real clicks at $3.50 and being #1, I get 900 'shady' clicks at $2.50 and am #3.