GoFuckYourself.com - Adult Webmaster Forum (https://gfy.com/index.php)
-   Fucking Around & Business Discussion (https://gfy.com/forumdisplay.php?f=26)
-   -   1) 10 single cpu boxes behind a load balancer or or 2) 5 dual cpu boxes behind a load (https://gfy.com/showthread.php?t=122213)

jules180 04-03-2003 04:23 PM

1) 10 single cpu boxes behind a load balancer or 2) 5 dual cpu boxes behind a load
 
1) 10 single cpu boxes behind a load balancer or 2) 5 dual cpu boxes behind a load balancer --- what's better?

What's better for performance? Any anecdotal experience is appreciated. Price is about the same and CPUs are about the same. This would be strictly for heavy web serving, HTML and graphics, sustaining 50-plus megs, and running Linux/Apache.

FreeOnes 04-03-2003 04:27 PM

I would go for one huge (Sun) server. It will save you all the hassles of load balancing....

less maintenance, less programming, less downtime

jules180 04-03-2003 04:32 PM

unless that one server goes down.... regardless, we are stuck with a linux/apache platform

Smegma 04-03-2003 05:05 PM

10 Single Servers behind a load balancer will give you more scalability.

A second CPU in a server doesn't mean 2x the performance.

Linear scalability can only be achieved.. well.. linearly.

XFR Networks can build this for you. Just say the word.

ICQ: 74335014

Phil21 04-03-2003 05:18 PM

It's always better to scale horizontally rather than vertically when possible.

Putting one huge server in the place of a fault-tolerant cluster is asinine. It will end up costing more, and you will have a single point of failure.

PC gear is cheap; clustering is the way to go with web apps, since it's so easy to do.

Single vs. dual is just up to your requirements. How much do you pay for space and power? I know we'd do all dual webservers, to save on (expensive) datacenter space.. But you may have a different situation.

If space, power, and cooling were not a consideration I'd definitely go with double the single CPU machines. More fault tolerant (you can have more machines die w/o anyone noticing), and will actually give you more performance than 5 duals.

peace,

-Phil

JDog 04-03-2003 05:26 PM

I would go for the 10 single CPU machines. But I wouldn't go load balanced; I would go with round robin. A load balancer waits for one machine to hit a certain CPU %, then it throws traffic over to the second machine, and so on and so on.

With Round Robin, it sends one person to machine one, then the next to machine two, the next to machine three, so on and so on. I would prefer that, myself.
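For the record, DNS round robin is nothing more than several A records on the same name; the nameserver rotates the order it hands them out, so successive visitors land on different boxes. A zone file fragment for BIND (hypothetical name and IPs, not from this thread):

```
; BIND rotates the order of these answers per query by default,
; so roughly a third of lookups land on each box
www   IN  A   192.0.2.10
www   IN  A   192.0.2.11
www   IN  A   192.0.2.12
```

Note there is no health checking here: a dead box keeps receiving its share of lookups until you edit the zone.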

JDog

jules180 04-03-2003 05:51 PM

we actually have a good amount of datacenter space, so that's not an issue, at least not yet. regarding round robin vs load balancers, we have had good experience so far using Foundry ServerIrons w/ DSR and various health checks to deal with failing boxes or outages. the only other consideration in the decision is that we are running SSL as well on these boxes.

notjoe 04-03-2003 06:17 PM

Quote:

Originally posted by jules180
we actually have a good amount of datacenter space, so that's not an issue, at least not yet. regarding round robin vs load balancers, we have had good experience so far using Foundry ServerIrons w/ DSR and various health checks to deal with failing boxes or outages. the only other consideration in the decision is that we are running SSL as well on these boxes.
Are you looking for the servers to communicate back to one database server or will the content/websites be updated on each server?

I would go with what the other dude said and take the 10 machines over the single machine, or even the dual machines.

Think of it like this:

If you have 10 machines and one dies you only lose 10% of your overall processing power.

If you have 5 machines and one dies you lose 20% of your processing power.

If you have one single machine you go down completely.

Also, 10 machines served with round robin could quite possibly be quicker than a balancer; plus, browsers are smart enough to detect if the hostname has multiple IPs and try a different one if the one it's attempting to use is down.

jules180 04-03-2003 06:26 PM

Have not had much experience w/ round robin. What large sites are using that over load balancers?

Smegma 04-03-2003 06:47 PM

Speaking of asinine.. Do NOT do round robin. Round robin is the poor man's load balancer.

1: It's not as effective for balancing load as a load balancer.
2: Should one of the servers in the round-robin fail, you lose 10% of your traffic (assuming a 10 server cluster).

The Foundry ServerIron is a great platform. So is the Nortel Networks 184E (formerly Alteon). We use Foundry here.

Hardware load balancing works great with stateless applications and can accommodate stateful (database-driven) serving as well.

Load balancers, like the ServerIron, can pull a server out of rotation if its state is determined to be down (HTTP pull, ICMP, etc).

They also distribute load intelligently - you can set up the switch to distribute load in many ways, including least connections, max reverse proxy hits, etc.. none of which you will get with round robin.

vending_machine 04-03-2003 06:53 PM

With 10 servers you could easily do much, much more than 50Mbps sustained. You should be fine with 2-3 servers in a round robin setup with one failover machine, unless you plan on growing substantially in the near future.

jules180 04-03-2003 06:57 PM

we were timing out with 3 dual xeon boxes in rotation at a 70 meg spike yesterday

vending_machine 04-03-2003 06:57 PM

Quote:

Originally posted by Smegma
Speaking of asinine.. Do NOT do round robin. Round robin is the poor man's load balancer.

1: It's not as effective for balancing load as a load balancer.
2: Should one of the servers in the round-robin fail, you lose 10% of your traffic (assuming a 10 server cluster).

You will NOT lose 10% of your traffic in a Round Robin setup if one of the servers fails. Both Netscape and IE will go on to the next machine in the RR if the one they tried failed. Don't ask me how, but I've seen it happen many a time. Sure, you'll probably lose some surfers due to the delay it took to try the other server instead.

Good load balancers are expensive to buy and expensive to maintain. Also, with the amount of traffic we have in the adult world, not all load balancers are up to the task.

vending_machine 04-03-2003 07:02 PM

Quote:

Originally posted by jules180
we were timing out with 3 dual xeon boxes in rotation at a 70 meg spike yesterday
What OS are you running?

vending_machine 04-03-2003 07:06 PM

Never mind, I'm retarded. I missed that you stated Linux + Apache in your first message.

jules: I would recommend you have someone who knows how to tune Linux and Apache look at your servers. Something tells me they're straight out of the box. You should be able to serve that amount of traffic without wasting money on more hardware. :)

salsbury 04-03-2003 07:08 PM

huh. i have individual dual-CPU FreeBSD servers doing >50Mbit . they're just doing images and html, no php no cgi etc. they could do more but they just need more requests. :) i've seen 'em hit wirespeed without breaking a sweat (well, until they hit wirespeed, then things start slowing down. dual NIC is the answer to that.)

vending_machine 04-03-2003 07:12 PM

Quote:

Originally posted by jules180
we were timing out with 3 dual xeon boxes in rotation at a 70 meg spike yesterday
I know some top notch people that I can refer you to if you want to, hit me up at:
gfy at does-it-hurt.net

iroc409 04-03-2003 07:26 PM

i saw the most boner-inspiring setup a long while back. dude was working on a single clustered desktop... an X windows box with 10 more linux machines strapped on.

i know you can do this with freebsd, i want to try it out with gigabit ethernet. my god that would be fast, think how fast you could run SETI or folding@home.. lol. 11 processors sharing the load your desktop normally does. *drool*.


yeah, i would wonder about your setup. i've seen p133 laptops with freebsd & apache be able to handle _way_ more traffic than i ever thought it should. hell, i know of a porn site that ran on a 333 celeron w/ 128mb ram with freebsd, and it handled more than i thought it would.

Smegma 04-03-2003 07:27 PM

jules180 - Here are some points you can tune in Apache to improve performance. I can do these for you if you like. Just ICQ me: 74335014 - I won't try and sell you anything.

1) Turn KeepAlives off.

2) Depending on the amount of memory installed on your server, increase MaxClients to 256 (the maximum in a generic install).

I tune MaxClients on a heavily loaded server to the number of MB of RAM I have. You simply have to change HARD_SERVER_LIMIT in httpd.h to reflect the new MaxClients and recompile.

256 should work though. We only have 50 or 60 servers here that need a higher setting, and they are pushing 50mbps+ per server.

3) Set MaxRequestsPerChild to 0 (which is unlimited).

4) Change StartServers to something like:

StartServers 50
or
StartServers 75

StartServers is the number of httpd processes Apache starts out with.

5) Change the following. Keeps Apache ahead of the game!

MinSpareServers 64
MaxSpareServers 128

6) Also make sure to turn off logging or turn on log rotation. Log files over 2GB on Linux fuck Apache up.

Check to make sure you don't have a log file over 2GB by doing a du -h | grep G in the log directory.
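Taken together, those settings amount to an httpd.conf fragment like this (a sketch for Apache 1.3's preforking model; the numbers are the poster's suggestions for this workload, not universal values):

```apache
# Don't hold connections open for idle clients; frees children sooner
KeepAlive Off

# Cap on simultaneous httpd children; going past 256 also requires
# raising HARD_SERVER_LIMIT in httpd.h and recompiling
MaxClients 256

# 0 = children are never recycled (watch for memory leaks in modules)
MaxRequestsPerChild 0

# Children spawned at startup, plus the idle pool kept around after that
StartServers 50
MinSpareServers 64
MaxSpareServers 128
```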

msg 04-03-2003 07:29 PM

um, have fun with round robin and AOL's massive cache. You will lose quite a bit of traffic if a box goes down or is too busy to answer.

Also there are several linux solutions to create load balancers for free if money is an issue.

iroc409 04-03-2003 07:31 PM

first off, stop using linux. it's a horrible waste of hardware. here's a link that will help you out:

here





so anyway, has anyone used a clustered desktop? thinking about it is making me quite hot and bothered...

http 04-03-2003 07:33 PM

a trade-off between reliability (the more servers the better) and administration hassle (the fewer servers the better)

in your case I'd go with the 5 servers, as the admin time increases with the number of boxes. And 5 average servers will easily push 50 MBit

Smegma 04-03-2003 07:33 PM

I don't think the issue is the load balancer.

I find it funny that people think that RR is better.

Think folks. Why do you think every major website pushing more than 100Mbps uses a load balancer?

Do you think that so many companies - Foundry, Extreme, Cisco, F5, Alteon, etc, etc.. made these things because people like the color? the blinking lights?

Smegma 04-03-2003 07:34 PM

FreeBSD and Linux are both very good for your application. IMHO.

salsbury 04-03-2003 07:44 PM

Quote:

Originally posted by Smegma
I don't think the issue is the load balancer.

I find it funny that people think that RR is better.

Think folks. Why do you think every major website pushing more than 100Mbps uses a load balancer?

Do you think that so many companies - Foundry, Extreme, Cisco, F5, Alteon, etc, etc.. made these things because people like the color? the blinking lights?

why do so many companies use IIS for high-traffic web servers, requiring like 10x the systems of a linux/freebsd network? same answer here. they have more money than brains. ;)

no reason a RR DNS setup can't be accompanied by giving each server a primary IP address and then a set of secondary IPs. when one server goes down, distribute its secondary IPs to the live servers. easy-peasy. :)
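That secondary-IP takeover scheme can be sketched in a few lines of shell (the IP, interface name, and health-check URL are hypothetical examples, not from the thread; `DRY_RUN=1` only prints the takeover command, since the real `ip addr add` needs root):

```shell
#!/bin/sh
# Poor man's IP takeover: if a peer fails its HTTP check, adopt its service IP.
# PEER_IP and IFACE are hypothetical; substitute your own addresses.
DRY_RUN=${DRY_RUN:-1}
PEER_IP=${PEER_IP:-192.0.2.11}   # service IP normally held by the peer
IFACE=${IFACE:-eth0}

# run either executes a command or, in dry-run mode, just echoes it
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# Crude health check: did the peer answer an HTTP request within 3 seconds?
if curl -s -m 3 -o /dev/null "http://$PEER_IP/"; then
    echo "peer $PEER_IP is up"
else
    # Peer looks dead: alias its service IP onto this box (Linux iproute2)
    run ip addr add "$PEER_IP/32" dev "$IFACE"
fi
```

You would cron this on each live server (and tear the alias down again when the peer recovers); it is a sketch of the idea, not a production failover system.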

vending_machine 04-03-2003 07:46 PM

Quote:

Originally posted by Smegma
I don't think the issue is the load balancer.

I find it funny that people think that RR is better.

Think folks. Why do you think every major website pushing more than 100Mbps uses a load balancer?

Do you think that so many companies - Foundry, Extreme, Cisco, F5, Alteon, etc, etc.. made these things because people like the color? the blinking lights?

By people I'm sure you are also referring to me. Just because those companies make load balancing switches doesn't mean everyone with a webserver needs one. If you run your round robin right, it's a GOOD cheap solution compared to a much spendier boxed solution. Sure, in some cases you can never do better than a boxed load balancer, but you gotta think about whether you really need it.

This guy is serving some HTML and images, only about 50Mbps sustained traffic. He could easily run that on 3 servers, and if one goes down just move the traffic over until the third is fixed.

Do you figure in the costs of having someone maintain your switch, doing software upgrades, etc.?

Another thing to think of: if you do have a load balancer, you really should have two. What if the one you have dies? You're screwed.

Smegma 04-03-2003 07:59 PM

Did I say everyone with a webserver needs it?

Of course not.

Maintain a switch? You're kidding, right?

But RR does not work as described above. Lose one server, you lose traffic. Plain and simple. Since when did RR check server status and take servers whose pages didn't load out of the zone file?

Wish it could.. but it doesn't.

NetRodent 04-03-2003 08:01 PM

You might also want to look into using squid as an http accelerator. You'll probably end up with much more capacity and much lower cost.

http://www.squid-cache.org/Doc/FAQ/FAQ-20.html
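For Squid of that era (2.x), the accelerator setup described in that FAQ chapter boils down to a handful of squid.conf lines; a sketch, assuming Apache is moved to port 81 on the same box (port choice is an assumption for illustration):

```
# squid listens where Apache used to
http_port 80

# forward cache misses to the real Apache behind it
httpd_accel_host 127.0.0.1
httpd_accel_port 81

# accelerate a single backend, and don't act as a normal proxy
httpd_accel_single_host on
httpd_accel_with_proxy off
```

Static images and HTML then come out of Squid's cache, so far fewer requests ever tie up an Apache child.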

jules180 04-03-2003 08:21 PM

we use 2 load balancers for failover; maintenance has been a breeze on them. cache-type solutions won't work for us because it is ad serving/tracking related. just out of curiosity, does anyone think that having downloads on the cluster may be hurting web serving performance due to long connection times, etc.? all boxes are logging to a separate "log server"

Captain 04-03-2003 08:54 PM

I think if you have a cluster system you're better off with 10 servers with single processors rather than 5 with dual.

A good load balancer would also be a plus, something like Foundry Networks' ServerIron.

Laters

Smegma 04-03-2003 10:32 PM

Quote:

Originally posted by jules180
we use 2 load balancers for failover; maintenance has been a breeze on them. cache-type solutions won't work for us because it is ad serving/tracking related. just out of curiosity, does anyone think that having downloads on the cluster may be hurting web serving performance due to long connection times, etc.? all boxes are logging to a separate "log server"
Large file downloads will always kill performance. Even if it's a 14.4 modem downloading a large file, that's one http process tied up and unavailable to serve other data.

smut4all 04-03-2003 11:31 PM

Quote:

Originally posted by Smegma
10 Single Servers behind a load balancer will give you more scalability.

A second CPU in a server doesn't mean 2x the performance.

Linear scalability can only be achieved.. well.. linearly.

XFR Networks can build this for you. Just say the word.

ICQ: 74335014

right on the mark

oscer 04-03-2003 11:34 PM

OHHH Get a Cray www.cray.com

Theo 04-03-2003 11:38 PM

Quote:

Originally posted by oscer
OHHH Get a Cray www.cray.com
best answer so far



:thumbsup

jules180 04-04-2003 02:04 AM

also, any anecdotal evidence with gzip? pros, cons, etc. on high-volume systems. BTW, regarding the serving requirements: content type may be the issue. we are serving up lots of small ads via this cluster, approx 50 mil imps per day, so any thoughts much appreciated.
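As a sanity check on those numbers: 50 million impressions a day at roughly 5 KB per ad (the per-ad size is a guessed assumption, not a figure from the thread) averages out to about 23 Mbps, which squares with 50+ Mbps sustained and a 70 Mbps spike:

```shell
#!/bin/sh
# Back-of-envelope average bandwidth for the ad cluster.
# ASSUMPTION: ~5 KB (5000 bytes) per impression; adjust to your real ad sizes.
IMPS_PER_DAY=50000000
BYTES_PER_IMP=5000

awk -v imps="$IMPS_PER_DAY" -v bytes="$BYTES_PER_IMP" 'BEGIN {
    bits_per_day = imps * bytes * 8          # total bits served per day
    mbps = bits_per_day / 86400 / 1000000    # spread over 86400 seconds
    printf "average: %.1f Mbps\n", mbps
}'
```

Traffic is never flat over the day, so peak rates of two to three times that average are exactly what the thread describes.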

pimpshost 04-04-2003 02:48 AM

Contact me, you don't need to talk to anyone else. 408-209-8949. I have 2 $30,000 load balancers you can use for free.

KC 04-04-2003 03:57 AM

Quote:

Originally posted by pimpshost
Contact me, you dont need to talk to anyone else. 408-209-8949 I have 2 $30,000 load balancers you can use for free.
Are you sure an employee won't walk out of the data center with them???

ouch! sorry it was too easy, I couldn't resist!!!

blazin 04-04-2003 04:44 AM

10 web servers and Cisco Localdirector

:thumbsup Works for me

cyberpunk 04-04-2003 11:48 PM

Quote:

Originally posted by Smegma
I don't think the issue is the load balancer.

I find it funny that people think that RR is better.

Think folks. Why do you think every major website pushing more than 100Mbps uses a load balancer?

Do you think that so many companies - Foundry, Extreme, Cisco, F5, Alteon, etc, etc.. made these things because people like the color? the blinking lights?


Never, ever underestimate blinky lights. I worked for a vendor for a while; we sold multi-million-dollar pieces of hardware to carriers, and the first question everyone asked was "where are the blinky lights?"... guess what, in the next gen of the line cards we had blinky lights :)

Eve 04-05-2003 12:16 AM

Cluster that fucker...apache isn't multi-threaded...just the kernel is going to be...feel free to correct me if I am wrong

magnatique 04-05-2003 01:30 AM

Quote:

Originally posted by jules180
we were timing out with 3 dual xeon boxes in rotation at a 70 meg spike yesterday
my guess is it's your host....

70 meg spike is nothing...


We had a PW leak once that sustained the maximum of the network card at 100mbps for a few hours before the PW was disabled.... on a single FreeBSD box... and it was, I believe, a dual 1000MHz with 1 gig of RAM.... 72 gig SCSI

it's all about the tweaks I guess...

oh, we deal with movie member sites, so yes, it had big downloads...



Powered by vBulletin® Version 3.8.8
Copyright ©2000 - 2026, vBulletin Solutions, Inc.