1) 10 single-CPU boxes behind a load balancer, or 2) 5 dual-CPU boxes behind a load balancer --- what's better?
What's better for performance? Any anecdotal experience is appreciated. Price is about the same and the CPUs are about the same. This would be strictly for heavy web serving of HTML and graphics, sustaining 50+ Mbps, and running Linux/Apache. |
I would go for one huge (Sun) server. It will save you all the hassles of load balancing....
less maintenance, less programming, less downtime |
unless that one server goes down.... regardless, we are stuck with a Linux/Apache platform,
|
10 Single Servers behind a load balancer will give you more scalability.
A second CPU in a server doesn't mean 2x the performance. Linear scalability can only be achieved.. well.. linearly. XFR Networks can build this for you. Just say the word. ICQ: 74335014 |
It's always better to scale horizontally rather than vertically when possible.
Putting one huge server in the place of a fault-tolerant cluster is asinine. It will end up costing more, and you will have a single point of failure. PC gear is cheap, and clustering is the way to go with web apps, since it's so easy to do. Single vs. dual is just up to your requirements. How much do you pay for space and power? I know we'd do all dual webservers, to save on (expensive) datacenter space.. But you may have a different situation. If space, power, and cooling were not a consideration, I'd definitely go with double the single-CPU machines. More fault tolerant (you can have more machines die w/o anyone noticing), and it will actually give you more performance than 5 duals. peace, -Phil |
I would go for the 10 single-CPU machines. But I wouldn't go load balanced, I would go with round robin. A load balancer waits for one machine to hit a certain CPU % and then throws traffic over to the second machine, and so on and so on.
With round robin, it sends one person to machine one, then the next to machine two, the next to machine three, and so on. I would prefer that, myself. JDog |
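The round-robin behavior described above can be sketched in a few lines: each new request simply goes to the next server in a fixed rotation, with no regard for current load or server health. This is a minimal illustration, not anyone's actual setup; the host names are hypothetical.

```python
from itertools import cycle

# Hypothetical pool of web servers in a fixed rotation.
servers = ["web1", "web2", "web3"]
rotation = cycle(servers)

def next_server():
    """Return the server that should take the next request."""
    return next(rotation)

# Six requests are spread evenly across the pool in order.
assignments = [next_server() for _ in range(6)]
```

Note that nothing here checks whether a server is up, which is exactly the weakness the later posts point out.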
We actually have a good amount of datacenter space, so that's not an issue, at least not yet. Regarding round robin vs. load balancers: we have had good experience so far using Foundry ServerIrons with DSR and various health checks to deal with failing boxes or outages. The only other consideration in the decision is that we are running SSL as well on these boxes.
|
I would go with what the other dude said and take the 10 machines over the single machine, or even the dual machines. Think of it like this: if you have 10 machines and one dies, you only lose 10% of your overall processing power. If you have 5 machines and one dies, you lose 20% of your processing power. If you have one single machine, you go down completely. Also, 10 machines served with round robin would quite possibly function quicker than a balancer, plus browsers are smart enough to detect that the hostname has multiple IPs and try a different one should the one it's attempting to use be down. |
Have not had much experience with round robin; what large sites are using that over load balancers?
|
Speaking of asinine.. Do NOT do round robin. Round robin is the poor man's load balancer.
1: It's not as effective at balancing load as a load balancer. 2: Should one of the servers in the round robin fail, you lose 10% of your traffic (assuming a 10-server cluster). The Foundry ServerIron is a great platform. So is the Nortel Networks 184E (formerly Alteon). We use Foundry here. Hardware load balancing works great with stateless applications and can accommodate stateful (database-driven) serving as well. Load balancers, like the ServerIron, can pull a server out of rotation if its state is determined to be down (HTTP pull, ICMP, etc). They also distribute load intelligently - you can set up the switch to distribute load in many ways, including least connections, max reverse proxy hits, etc.. none of which you will get with round robin. |
With 10 servers you could easily do much, much more than 50 Mbps sustained. You should be fine with 2-3 servers in a round-robin setup with one failover machine, unless you plan on growing substantially in the near future.
|
We were timing out with 3 dual-Xeon boxes in rotation at a 70 Mbps spike yesterday.
|
Good load balancers are expensive to buy and expensive to maintain. Also, with the amount of traffic we have in the adult world, not all load balancers are up to the task. |
Never mind, I'm retarded. I missed that you stated Linux + Apache in your first message.
jules: I would recommend you have someone who knows how to tune Linux and Apache look at your servers. Something tells me they're straight out of the box. You should be able to serve that amount of traffic without wasting money on more hardware. :) |
huh. i have individual dual-CPU FreeBSD servers doing >50Mbit. they're just doing images and html, no php, no cgi, etc. they could do more, but they just need more requests. :) i've seen 'em hit wirespeed without breaking a sweat (well, until they hit wirespeed, then things start slowing down. dual NICs are the answer to that.)
|
gfy at does-it-hurt.net |
i saw the most boner-inspiring setup a long while back. dude was working on a single clustered desktop... an xwindows box with 10 more linux machines strapped on.
i know you can do this with freebsd, i want to try it out with gigabit ethernet. my god that would be fast, think how fast you could run SETI or folding@home.. lol. 11 processors sharing the load your desktop normally does. *drool*. yeah, i would wonder about your setup. i've seen p133 laptops with freebsd & apache be able to handle _way_ more traffic than i ever thought they should. hell, i know of a porn site that ran on a 333 celeron w/ 128mb ram with freebsd, and it handled more than i thought it would. |
jules180 - Here are some points you can tune in Apache to improve performance. I can do these for you if you like. Just ICQ me: 74335014 - I won't try and sell you anything.
1) Turn KeepAlives off.
2) Depending on the amount of memory installed on your server, increase MaxClients to 256 (the max from a generic install). I tune MaxClients for a heavily loaded server to the number of MB of RAM I have. You simply have to change the HARD_LIMIT in httpd.h to reflect the new MaxClients and recompile. 256 should work, though. We only have 50 or 60 servers here that need a higher setting, and they are pushing 50 Mbps+ per server.
3) Set MaxRequestsPerChild to 0 (which is unlimited).
4) Change StartServers to something like: StartServers 50 or StartServers 75. StartServers is the number of httpd servers Apache keeps loaded at all times (number of connections + StartServers).
5) Change the following. Keeps Apache ahead of the game! MinSpareServers 64 MaxSpareServers 128
6) Also make sure to turn off logging or turn on log rotation. Log files larger than 2GB in Linux fuck Apache up. Check that you don't have a log file over 2GB by doing a du -h | grep G in the log directory. |
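Pulled together, the tuning above would look roughly like this in httpd.conf (Apache 1.3-era syntax; the values are this poster's suggestions, not universal defaults, and the rotatelogs path varies by install):

```apache
KeepAlive Off
# Going above 256 requires raising the hard limit in httpd.h
# and recompiling, as noted above.
MaxClients 256
MaxRequestsPerChild 0    # 0 = children are never recycled
StartServers 50
MinSpareServers 64
MaxSpareServers 128
# Rotate logs instead of letting a single file grow past 2 GB:
TransferLog "|/usr/sbin/rotatelogs /var/log/apache/access_log 86400"
```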
Um, have fun with round robin and AOL's massive cache. You will lose quite a bit of traffic if a box goes down or is too busy to answer.
Also, there are several Linux solutions for building load balancers for free if money is an issue. |
first off, stop using linux. it's a horrible waste of hardware. here's a link that will help you out:
here so anyway, has anyone used a clustered desktop? thinking about it is making me quite hot and bothered... |
a trade-off between reliability (the more servers the better) and administration hassles (the fewer servers the better)
in your case I'd go with the 5 servers, as admin time increases with the number of boxes. And 5 average servers will easily push 50 Mbit |
I don't think the issue is the load balancer.
I find it funny that people think that RR is better. Think, folks. Why do you think every major website pushing more than 100 Mbps uses a load balancer? Do you think that so many companies - Foundry, Extreme, Cisco, F5, Alteon, etc, etc.. made these things because people like the color? The blinking lights? |
FreeBSD and Linux are both very good for your application. IMHO.
|
no reason a RR DNS setup can't be accompanied by having each server hold a primary IP address and then a set of secondary IPs. when one server goes down, distribute its secondary IPs to the live servers. easy-peasy. :) |
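The scheme above amounts to a small piece of bookkeeping: every published IP has an owning server, and when a box dies its IPs get re-bound as aliases on the survivors so no address in the zone goes dark. A minimal sketch of that reassignment logic, with hypothetical server names and addresses (the actual aliasing would be done with ifconfig/ip on the boxes):

```python
def reassign(ip_owners, dead_server):
    """Return a new IP -> server map with the dead server's IPs
    spread across the remaining servers round-robin."""
    survivors = sorted({s for s in ip_owners.values() if s != dead_server})
    orphaned = [ip for ip, s in ip_owners.items() if s == dead_server]
    new_map = {ip: s for ip, s in ip_owners.items() if s != dead_server}
    for i, ip in enumerate(orphaned):
        new_map[ip] = survivors[i % len(survivors)]
    return new_map

# Hypothetical example: web2 dies, its IP moves to a survivor.
owners = {"10.0.0.1": "web1", "10.0.0.2": "web2", "10.0.0.3": "web3"}
after = reassign(owners, "web2")
```

Every IP in the zone stays answered, which is the "easy-peasy" failover the post describes.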
This guy is serving some HTML and images, only about 50 Mbps sustained traffic. He could easily run that on 3 servers, and if one goes down, just move the traffic over until the downed one is fixed. Do you figure in the costs of having someone maintain your switch, doing software upgrades, etc.? Another thing to think of: if you do have a load balancer, you really should have two. What if the one you have dies? You're screwed. |
Did I say everyone with a webserver needs it?
Of course not. Maintain a switch? You're kidding, right? But RR does not work as described above. Lose one server, you lose traffic. Plain and simple. Since when did RR check server status and pull servers whose pages didn't load out of a zone file? Wish it could.. but it doesn't. |
You might also want to look into using squid as an http accelerator. You'll probably end up with much more capacity and much lower cost.
http://www.squid-cache.org/Doc/FAQ/FAQ-20.html |
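For reference, a Squid-2-era accelerator setup is only a few lines of squid.conf (directive names from that generation of Squid; the back-end address and port are hypothetical, and the FAQ linked above covers the full details):

```conf
# Squid answers on port 80 and forwards to the real Apache back end.
http_port 80
httpd_accel_host 10.0.0.5     # hypothetical back-end address
httpd_accel_port 81           # Apache moved off port 80
httpd_accel_with_proxy off    # accelerator only, not a general proxy
```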
We use 2 load balancers for failover; maintenance has been a breeze on them. Cache-type solutions won't work for us because it is ad serving/tracking related. Just out of curiosity, does anyone think that having downloads on the cluster may be hurting web serving performance due to long connection times, etc.? All boxes are logging to a separate "log server".
|
I think if you have a cluster system you're better off with 10 servers with single processors, rather than 5 with dual.
A good load balancer would also be a plus, something like Foundry Networks' ServerIron. Laters |
OHHH Get a Cray www.cray.com
|
Also, any anecdotal evidence from those using gzip? Pros, cons, etc. on high-volume systems. BTW, regarding the serving requirements: content type may be the issue. We are serving up lots of small ads via this cluster, approx. 50 million impressions per day, so any thoughts are much appreciated.
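One consideration on the gzip question: with mostly small image ads, compression may not buy much, since GIF/JPEG data is already compressed; it mainly pays off for HTML and text. A hedged sketch of how that policy might look with the mod_gzip module for Apache 1.3 (values illustrative, not a tested production config):

```apache
mod_gzip_on Yes
# Compress HTML and plain text only:
mod_gzip_item_include mime ^text/html$
mod_gzip_item_include mime ^text/plain$
# Skip images - they are already compressed:
mod_gzip_item_exclude mime ^image/
```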
|
Contact me, you don't need to talk to anyone else. 408-209-8949. I have 2 $30,000 load balancers you can use for free.
|
ouch! sorry it was too easy, I couldn't resist!!! |
10 web servers and Cisco Localdirector
:thumbsup Works for me |
Never, ever underestimate blinky lights. I worked for a vendor for a while; we sold multi-million-dollar pieces of hardware to carriers, and the first question everyone asked was "where are the blinky lights?".... guess what, in the next gen of the line cards we had blinky lights :) |
Cluster that fucker... Apache isn't multi-threaded... just the kernel is going to be... feel free to correct me if I am wrong
|
A 70 Mbps spike is nothing... We had a PW leak once that sustained the maximum of the network card at 100 Mbps for a few hours before the PW was disabled.... on a single FreeBSD box... and it was, I believe, a dual 1000 MHz with 1 gig of RAM.... 72 GB SCSI. It's all about the tweaks, I guess... oh, we deal with movie member sites, so yes, it had big downloads... |
Powered by vBulletin® Version 3.8.8