Quote:
|
Originally Posted by Quick Buck
DNS is one of the single largest points of failure on the internet and it is absolutely true that a significant number of requests for your domain name are dropped.
|
I'd like to see the data on that. For one thing, it would be nearly impossible to collect: DNS is by definition a distributed architecture, and there is no central place to count how many requests get dropped.
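To see why the numbers are so hard to pin down, consider what you can actually measure: the failure rate from your own vantage point to one resolver, nothing more. A minimal Python sketch, assuming the third-party dnspython library; the domain and the 8.8.8.8 resolver are just placeholders:
Code:
|
import dns.exception
import dns.resolver  # third-party: pip install dnspython

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]  # one resolver, one vantage point
resolver.lifetime = 2.0             # treat anything slower as "dropped"

ATTEMPTS = 100
dropped = 0
for _ in range(ATTEMPTS):
    try:
        resolver.resolve("example.com", "A")  # needs dnspython >= 2.0
    except (dns.exception.Timeout, dns.resolver.NoNameservers):
        dropped += 1

print(f"{dropped}/{ATTEMPTS} queries failed from THIS vantage point")
# That is all anyone can measure: one path to one resolver. There is
# no way to aggregate this across every resolver on the Net.
|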
Quote:
|
Previously we used our hosts' (several of them) DNS, but we found that a host's primary focus is not on DNS. Their DNS is typically run by BIND, which is very exploitable; typically they have a few boxes handling far too many domains, and every time a change is made for somebody else who uses your hosting service, the DNS/BIND daemon has to be restarted.
|
It is absolutely true that most ISPs have maybe one guy (if they're lucky) who really understands DNS. However, as I said, both Enom and ZoneEdit run very reliable, highly diverse DNS networks with servers spread all over; Enom's DNS is free, while ZoneEdit charges only a few bucks a year, depending on traffic.
And anyone running DNS who knows what they're doing will never restart multiple DNS servers at the same time, so the restart issue is pretty much nonexistent. Even when a restart is required, it takes only a few seconds at most.
As for the security of BIND: it's not perfect, but it runs about 90% of the DNS on the Net, and because it's such a crucial app, patches for the occasional exploit are generally issued almost immediately. But the bigger issue is below.
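For the curious, you can check how diverse any provider's DNS footprint actually is by pulling a domain's NS records and resolving each nameserver. A rough sketch, again assuming dnspython, with example.com standing in for whatever domain you want to inspect:
Code:
|
import dns.resolver  # third-party: pip install dnspython

# List a domain's authoritative nameservers and their addresses.
# If several of them share one subnet, odds are they share one rack too.
for ns in dns.resolver.resolve("example.com", "NS"):
    host = str(ns.target)
    addrs = [str(a) for a in dns.resolver.resolve(host, "A")]
    print(host, addrs)
|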
Quote:
|
More importantly however is the fact that at any given time there are routing issues on the web and a request from one ISP's dns to another's simply may fail. A repeated request may succeed, but in a competitive market place where you are simply one of 1000 links on a web page, why would you choose anything other than guaranteeing that your page loads first and fastest?
|
This sounds like the UltraDNS pitch, and it's only somewhat valid. If you have a highly trafficked site, the likelihood is that SOMEBODY on just about every dialup or broadband provider has accessed your site within the last 6-12 hours.
If so, your site's DNS info is ALREADY IN CACHE at the local ISP, and your DNS (or Ultra, if you use them) will never even see a DNS hit. Only if none of the subscribers to the broadband or dialup provider that Joe Pornviewer uses has recently requested DNS info for your domain will Joe's provider even make a request. Otherwise, the answer gets pulled from the local ISP's DNS cache without ever touching your site's DNS.
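You can watch this caching in action: ask your resolver for the same name twice and compare the response times and the reported TTL. A quick sketch with dnspython (if the name is already in cache, both queries will come back fast, which rather proves the point):
Code:
|
import time
import dns.resolver  # third-party: pip install dnspython

def timed_lookup(name):
    start = time.perf_counter()
    answer = dns.resolver.resolve(name, "A")
    ms = (time.perf_counter() - start) * 1000
    return ms, answer.rrset.ttl

# The first query may have to walk out to the authoritative servers...
print("cold: %.1f ms, TTL %d" % timed_lookup("example.com"))
# ...the second is answered straight from the resolver's cache, and
# the TTL it reports counts down between queries.
print("warm: %.1f ms, TTL %d" % timed_lookup("example.com"))
|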
Quote:
|
The other reason we use ultradns is that they have a distributed network much like a good content caching network. If you are in california you can bet that you're getting your dns requests served from a location that is near to you rather than a server located on the other side of the planet.
|
And, assuming that no one else at the European ISP where the request originates has asked for the site recently, this will yield an improvement in response time of maybe 50-100ms... for the very first request for that domain name made in a 12-hour period. Otherwise, no one else on that ISP will ever touch UltraDNS at all.
But even if it's 100ms faster, the response time of the site itself is what really matters. All of the arguments about network latency and traffic delays to DNS apply even more to HTTP traffic. So what happens if you get a lightning-fast DNS response and there's a logjam at one of the peering points? You lose the customer anyway, since they won't be able to see the site even with the correct IP address for your server. So it's rather pointless.
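The proportions are easy to demonstrate for yourself: time the DNS lookup and the HTTP fetch separately and see which one dominates. A standard-library-only Python sketch, with example.com as a placeholder:
Code:
|
import socket
import time
import urllib.request

HOST = "example.com"  # placeholder domain

t0 = time.perf_counter()
ip = socket.gethostbyname(HOST)  # the DNS part
t1 = time.perf_counter()
urllib.request.urlopen(f"http://{HOST}/", timeout=10).read()  # the HTTP part
t2 = time.perf_counter()

print(f"DNS:  {(t1 - t0) * 1000:.0f} ms")
print(f"HTTP: {(t2 - t1) * 1000:.0f} ms")
# Shaving 100ms off the DNS number buys nothing if the HTTP number
# blows up at a congested peering point.
|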
Quote:
|
Would you rather use a sponsor whose entire DNS system rests on the capabilities of one or two tired techs at 3am, or one who spends the extra money to take no chances?
See sig.
|
Honestly, as an affiliate, I'd be far more concerned about the reliability of the network where the actual servers live than the DNS. If the argument is reliability, I'd much rather see the money spent on UltraDNS go toward a geographically diverse load-balancing system, with servers in two different data centers, so that if one has catastrophic problems the other can still handle traffic. Those sorts of problems are actually far more common -- a good DDoS attack can take an entire data center offline for hours -- and with servers spread among different data centers and proper failover routing and/or load balancing, the sites will still be live.
Totally reliable DNS won't mean much if the data center has issues, and that is where far, far more failures occur -- at the server, not at DNS.
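And the failover logic that buys you is dead simple. A hypothetical Python sketch (the data-center hostnames are made up for illustration) that picks the first origin that answers:
Code:
|
import socket

# Hypothetical origins in two different data centers.
ORIGINS = ["www.dc-east.example.com", "www.dc-west.example.com"]

def first_live_origin(origins, port=80, timeout=3.0):
    """Return the first origin that accepts a TCP connection."""
    for host in origins:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host
        except OSError:
            continue  # this data center is down; try the next one
    return None

print(first_live_origin(ORIGINS))
# In practice the failover lives in a load balancer or low-TTL DNS,
# but the principle is the same: if one data center dies, traffic
# simply flows to the other.
|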