Originally Posted by Gaybucks
I'd like to see the data on that. For one thing, it would be nearly impossible to collect, since DNS is by definition a distributed architecture, and there's no practical way to tell how many requests are dropped.
It is absolutely true that most ISPs have maybe one guy (if they are lucky) who really understands DNS. However, as I said, both Enom and ZoneEdit run very reliable, highly diverse DNS networks, with servers spread all over. In the case of Enom, DNS is free, while ZoneEdit charges only a few bucks a year, depending on traffic. And anyone running DNS who knows what they're doing will never reboot multiple DNS servers at the same time, so the restart issue is pretty much nonexistent. Even when a restart is required, it takes only a few seconds at most. As for the security of BIND, it's not perfect, but it does run about 90% of DNS on the Net, and when the occasional exploit is found, patches are generally issued almost instantly precisely because it's such a crucial app. But the bigger issue is below.
This sounds like the UltraDNS pitch, and it's only somewhat valid. If you have a highly trafficked site, the likelihood is that SOMEBODY on just about every dialup or broadband provider has accessed your site within the last 6-12 hours.
If so, your site's DNS info is ALREADY IN CACHE at the local ISP, and your DNS (or Ultra, if you use them) will never even get a DNS hit. Only if none of the subscribers to the broadband or dialup provider that Joe Pornviewer uses has recently requested DNS info will Joe's provider even make a request. Otherwise, it gets pulled from the local ISP's DNS cache, without ever hitting your site DNS.
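To make the caching point concrete, here's a toy sketch of an ISP's caching resolver. The class and the addresses are invented for illustration (the IP is an RFC 5737 documentation address), but the behavior matches what's described above: only the first lookup in a TTL window ever reaches the authoritative server, and everything after that is answered straight from the ISP's cache.

```python
# Toy model of an ISP caching resolver (illustrative only; names are made up).
# Only the first lookup in a TTL window goes upstream to the authoritative
# server; subsequent lookups are served from the local cache.

class CachingResolver:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.cache = {}            # domain -> (ip, expiry time)
        self.authoritative_hits = 0

    def resolve(self, domain, now):
        entry = self.cache.get(domain)
        if entry and now < entry[1]:
            return entry[0]        # cache hit: no upstream traffic at all
        ip = self.query_authoritative(domain)
        self.cache[domain] = (ip, now + self.ttl)
        return ip

    def query_authoritative(self, domain):
        self.authoritative_hits += 1
        return "192.0.2.1"         # placeholder answer (RFC 5737 test range)

resolver = CachingResolver(ttl_seconds=43200)   # 12-hour TTL
for second in range(0, 3600, 60):               # 60 lookups over one hour
    resolver.resolve("example.com", now=second)

print(resolver.authoritative_hits)              # prints 1
```

Sixty subscribers ask for the same domain; the authoritative (or UltraDNS) server hears from this ISP exactly once.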
And, assuming that no one else from the European ISP where the request originates has requested to view the site, this will yield an increase in response time of maybe 50-100ms... for the very first request for that domain name made in a 12-hour period. Otherwise, no one else on that ISP will ever touch UltraDNS at all.
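Spelled out as back-of-the-envelope arithmetic (the 500-lookup figure is purely an assumption for illustration, not from the post):

```python
# The 50-100ms penalty is paid once per TTL window per ISP resolver.
# If, say, 500 lookups for the domain pass through one resolver during
# that window (an assumed figure), the penalty amortizes to almost nothing.
extra_ms_first_lookup = 100        # worst case cited above
lookups_per_ttl_window = 500       # hypothetical traffic through one resolver
amortized_ms = extra_ms_first_lookup / lookups_per_ttl_window
print(amortized_ms)                # prints 0.2 -- 0.2ms per lookup on average
```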
But even if it's 100ms faster, the actual response of the site itself is what really matters. All of the arguments about network latency and traffic delays to DNS apply even more so to HTTP traffic. So what happens if you get lightning-fast DNS response and there's a logjam at one of the peering points? You lose the customer anyway, as they won't be able to see the site, even if they have the correct IP address for your server. So it's rather pointless.
Honestly, as an affiliate, I'd be far more concerned about the reliability of the network where the actual servers live than about the DNS. If the argument is reliability, I'd much rather see the money spent on UltraDNS go toward a geographically diverse load-balancing system, with servers in two different data centers, so that if one has catastrophic problems, the other can still handle traffic. Those sorts of problems are actually far more common -- a good DDoS attack can take an entire data center offline for hours -- and with servers spread among different data centers and proper failover routing and/or load balancing, the sites will still be live.
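The failover idea above can be sketched in a few lines. This is only an illustration of the routing decision, not a real health-check system; the function and both addresses (RFC 5737 documentation ranges) are invented for the example.

```python
# Sketch of two-data-center failover: send traffic to the primary site,
# and fall back to the secondary when the primary fails its health check.
# Hypothetical function and addresses, for illustration only.

def pick_server(primary_up, secondary_up,
                primary="198.51.100.10", secondary="203.0.113.10"):
    if primary_up:
        return primary
    if secondary_up:
        return secondary
    return None    # both data centers down -- site is unreachable

# Primary knocked offline (say, by a DDoS): traffic fails over.
print(pick_server(primary_up=False, secondary_up=True))  # prints 203.0.113.10
```

In practice this decision would live in a load balancer or in failover DNS records, but the logic is the same: as long as one data center is healthy, visitors still get a working address.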
Totally reliable DNS won't mean much if the data center has issues, and that is where far, far more failures occur -- at the server, not at DNS.