hypothetical question for server gurus
i hope everyone has a good holiday and spends some time with those they love.
that said, i hope some of you really smart people would opine on this: let's say you want to design a server setup with the best reliability/redundancy on a 10mbps shared connection. i have 2 boxes on a connection like this, and my higher end box is bogging down, running out of connections, etc. there's lots of perl scripts running, paysite activity, lots of images, lots of java. the server currently has fast processors and big scsi drives, and we are bumping ram from 2 to 4 gig. the other server is just running tgp galleries on a big IDE drive and is running fine; it also has a big backup HD for both boxes.

someone told me today that google doesn't run "high end" hardware, but does its job with 15,000 $200 boxes and rollover load balancing. that got me to thinking---is that the way to go? i need ultimate reliability, fast loading of images and a few mpg files, and i want to stay under 2 grand a month---i need redundancy. given that budget, if you had to set up for best reliability, how would you do it? thanks for any advice!
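just so we're talking about the same thing, here's a tiny python sketch of what i understand "rollover" load balancing to mean (the hostnames and ports are made up, nothing from my real setup): traffic goes to the first box that passes a health check, and rolls over to the next box when that one stops answering.

[code]
import socket

# Hypothetical backends -- substitute your real hosts/ports.
BACKENDS = [("box1.example.com", 80), ("box2.example.com", 80)]

def is_alive(host, port, timeout=2.0):
    """Crude health check: can we open a TCP connection to the box?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend():
    """Rollover: use the first healthy box; fall through to the next on failure."""
    for host, port in BACKENDS:
        if is_alive(host, port):
            return (host, port)
    raise RuntimeError("all backends are down")

print("serving from:", pick_backend())
[/code]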
Are you running MySQL or some other database?
yes, mySQL is being used by autolinks pro 2.1
There isn't a one-size-fits-all answer. Scaling out to multiple servers can result in higher uptime (if they all do the same jobs in round robin or some other scheme) or worse uptime (for example, when 3 servers are all required to serve each user: database, apache, back end).
And multiple servers always add complexity, unless it's simply moving different parts to different servers -- too unreliable, though. My preference is multiple redundant servers, but again, you have to look at the particulars of each case.
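To put rough numbers on the uptime point, here's a quick python sketch with illustrative figures. It assumes each box is independently up 99% of the time, which real failures rarely are, so treat it as a bound on the idea, not a prediction:

[code]
# Illustrative only: assume each box is independently up 99% of the time.
per_box = 0.99

# Redundant scheme: 3 interchangeable boxes in round robin.
# The site is down only if ALL of them are down at once.
redundant = 1 - (1 - per_box) ** 3      # -> 0.999999

# Chained scheme: database + apache + back end, ALL required per request.
# The site is down if ANY one of them is down.
chained = per_box ** 3                  # -> 0.970299

print(f"3 redundant boxes: {redundant:.6f}")
print(f"3 chained boxes:   {chained:.6f}")
[/code]

Same hardware count, very different availability: redundancy multiplies the downtimes, chaining multiplies the uptimes.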