GoFuckYourself.com - Adult Webmaster Forum (https://gfy.com/index.php)
-   Fucking Around & Business Discussion (https://gfy.com/forumdisplay.php?f=26)
-   -   TGP-MGP OWNERS Using Multi-Category Gallery Scripts Please Contact me. (https://gfy.com/showthread.php?t=329749)

boneprone 07-25-2004 05:05 PM

Did this make your loads go down?

Again, how is simply hosting some thumbs elsewhere gonna give me back 2 gigs of RAM and get me off swap? And give me some idle CPU back?

I'll try it, but I'm hoping for some better suggestions to come along.

It's Sunday right now; I'm gonna figure out what to do about this on Monday. I don't want to be posting again by Friday about how nothing improved after all the hassle.

But so far you are the only one who has actually recommended a solution.


Right or wrong.

Babaganoosh 07-25-2004 05:08 PM

Quote:

Originally posted by sandman!
Thumbs on a separate server; tmanager stays on your domain.

Five minutes of work per site to do it and see if the load goes down.

That's not going to do shit. Serving images isn't the problem.

boneprone 07-25-2004 05:09 PM

Quote:

Originally posted by Armed & Hammered
That's not going to do shit. Serving images isn't the problem.
Yeah, didn't think so.

Fabuleux 07-25-2004 05:21 PM

What I also do is adjust the crontab so no two scripts run at the same time.

For example:
Run gallery spider
Stop gallery spider
Optimize database
Wait 10 minutes
Rebuild pages TGP 1
Wait 10 minutes
Rebuild pages TGP 2
Wait 10 minutes
etc.
Wait 10 minutes
etc.

When two scripts run at the same time, it takes more than twice as long to complete.
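
A minimal crontab sketch of that staggering idea (the paths and script names are hypothetical, not from the thread):

# Hypothetical staggered schedule: spider first, then the DB, then each
# TGP rebuild 10 minutes apart, so no two jobs ever overlap.
0  4 * * * /home/tgp/bin/spider.pl        # run gallery spider
30 4 * * * /home/tgp/bin/optimize_db.pl   # optimize database
40 4 * * * /home/tgp/bin/rebuild.pl tgp1  # rebuild pages TGP 1
50 4 * * * /home/tgp/bin/rebuild.pl tgp2  # rebuild pages TGP 2
0  5 * * * /home/tgp/bin/rebuild.pl tgp3  # and so on, 10 minutes apart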

boneprone 07-25-2004 06:46 PM

Quote:

Originally posted by Fabuleux
What I also do is adjust the crontab so no two scripts run at the same time.

For example:
Run gallery spider
Stop gallery spider
Optimize database
Wait 10 minutes
Rebuild pages TGP 1
Wait 10 minutes
Rebuild pages TGP 2
Wait 10 minutes
etc.
Wait 10 minutes
etc.

When two scripts run at the same time, it takes more than twice as long to complete.

Yeah, I already did that. That was simple enough that I could do it on my own.

I did it a few months ago, but I'm not even really talking about cron loads here. I'm talking about having problems when the server is at idle, just doing normal functions.

You see, this problem is happening even when the crons aren't running, but when the cron does come on it leaves things almost crippled. It's only for 15 minutes, so I guess I can live with it; been doing that for almost a year, so hell, I can deal with it even longer. But the concern I have now is that I'm dealing with loads and swap even when crons aren't running. I'm having it just with normal functions.

But yeah, I already did the simple solutions, like timing the crons not to go off at the same time and limiting them to run as little as possible. I did the other simple stuff too, like upgrading from 1 gig of RAM to 2 gigs, and that helped with the crashes during the cron. But now that my movie sites are pulling in the kind of traffic I used to have in 1999-2000, the 2-gig RAM upgrade seems like a thing of the past and the same problems are back.


It helped for about a month; now I'm swapping again.

4Pics 07-25-2004 07:09 PM

What kinda server are you running on?

Why not get a dual Xeon 2.8 box with 2 gigs of RAM? That oughta handle more traffic.

Or...

split your largest TGP off to a new box.

4Pics 07-25-2004 07:11 PM

I also think if you run the thumbs on a separate machine it will help load.

Reason...

Tim said 3 thumbs is equal to UCJ..

Well, don't you run like 90+ thumbs per site and around 700K of traffic?

That's a shitload of processing for it to just load thumbs.

A quick solution might be to cut a block of thumbs out from a site and see if the speed improves.

kongen 07-25-2004 07:36 PM

Run "top" from the shell to see which possesses that uses your CPU and memory.

If it is the gallery spiders, or others processes that don?t have to response fast to user requests, that slows the box down, try running they with "nice" (http://wwwhahahahahahagroup.org/onli.../xcu/nice.html) to reduce they system scheduling priority.

By using nice the posses will run as quickly as it can, but will wait for other processes with a higher priority number. In effect, the posses will only use the spare resources of the box. Sow thing like the webserver gets priority.
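
A concrete example of that (the spider path and the PID are hypothetical): a cron entry that starts the spider at the lowest priority, plus renice for a process that is already running:

# Start the gallery spider at the lowest scheduling priority
0 4 * * * /usr/bin/nice -n 19 /home/tgp/bin/spider.pl

# Or lower the priority of an already-running process by its PID
renice 19 -p 12345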

Bake 07-25-2004 09:11 PM

I have what you're looking for. ICQ 77762980

Babaganoosh 07-25-2004 09:17 PM

Quote:

Originally posted by 4Pics
I also think if you run the thumbs on a separate machine it will help load.

Reason...

Tim said 3 thumbs is equal to UCJ..

Well, don't you run like 90+ thumbs per site and around 700K of traffic?

That's a shitload of processing for it to just load thumbs.

A quick solution might be to cut a block of thumbs out from a site and see if the speed improves.

You're misunderstanding his statement. He meant UCJ is so light on the server that it's equal to loading 3 thumbs, which takes next to nothing. Serving HTML is just as hard on a server as serving thumbs; it's just an HTTP request. Serving those thumbs isn't the problem. If it's not the script, I'd almost guarantee it's something that could be corrected in httpd.conf.
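
For what it's worth, the usual knobs in an Apache 1.3 httpd.conf of that era are directives like these; the values here are illustrative only, not tuned for boneprone's box:

# httpd.conf excerpts -- illustrative values, tune for your own server
MaxClients          256   # cap concurrent children so Apache can't swap the box to death
MinSpareServers     10
MaxSpareServers     30
MaxRequestsPerChild 1000  # recycle children to contain per-process memory growth
KeepAliveTimeout    5     # short timeout so idle children free up quickly
HostnameLookups     Off   # never do per-request DNS on a busy server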

Oracle Porn 07-25-2004 09:34 PM

Didn't read the whole thread, just the first few replies...
and it seems to me like a problem in the script.... get a new one, yours sucks!

abshard 07-25-2004 09:38 PM

For my TGPs I have 2 servers: one has the scripts, HTTP, mail, and a nameserver; the other currently has only MySQL. Since I added the dedicated database server, the other server has had a lot less load on it when the pages get rebuilt. It takes about 35-45 minutes to build one site. I'm thinking about moving most of my TGP script over to the database server, along with my mail and one nameserver, to make my HTTP server faster.

HTTP server - load average right now is around 1.5 (it's building a site)
dual P3 933 with 2 gigs of RAM

Database server - load average right now is around 1
dual Xeon 2 GHz, 2 gigs RAM

I also have a spider and a linkchecker running 24/7.

Oh, I also run Linux.

boneprone 07-25-2004 11:25 PM

Getting a lot of better recommendations now, it seems.

Also just got done talking to another webmaster who runs the same scripts and deals with more traffic than me, and he helped shed some light on things..

Looks like I need to shift a lot more focus to the hosting side of this problem and make some changes.

Webmaster (09:20 PM) :
Hey man, I think that once you start hitting around 650K, it might start testing your server a little. One of my servers runs 650-700K and it runs fine; sometimes things are tight, but it handles it. The one thing about that server is that it has 2 Apaches running on it, one for the pages and one for the graphics, so that lightens things up. Before, I only had one Apache and it was killing it, so I had to go the 2-Apache route.


Webmaster (09:20 PM) :
Most server techies think that 2 Apaches is crazy and unheard of though, ha ha ha. I know that mine did until my guys showed him how it's done, ha ha.


Webmaster (09:22 PM) :
I actually have 3 servers now, one that handles the 650-700K, another that handles 400-450K and another that has a little 5K site on it with a pile of galleries.


Webmaster (09:22 PM) :
I keep all of my sites on their own servers; I don't split them up with graphics on one server and HTML on another. It might be a good idea, though, to pull in another server just for graphics.


Boneprone (11:06 PM) :
Is your big server a dual, or a regular single-processor P4 server?
2 gigs of RAM?

Webmaster (11:07 PM) :
My big server is a dual 2-Ghz P4 with 2 gigs ram
Boneprone (11:13 PM) :
ok.
Coool. I had one of those at Likewhoa back in the day and it was fine.
Damn, I'm only on a single processor.
Single Apache (never heard of doubling it). Have 6 sites all running UCJ real hard, plus tmanager, totaling over 650K a day.
Not to mention a spider script that runs a heavy Perl load 4 times a day and cripples the server for 20 minutes every time it runs..
So all in all, traffic numbers aside, it's no shock that all those scripts alone pound a single server, I guess?


Webmaster (11:13 PM) :
For me if I added anything else, it'd push me over. I know that even on my other server, enabling hotlink protection on the server shuts it right down, so I have to use more basic methods for doing it.


Webmaster (11:14 PM) :
ha ha, nope, you should expect that.

Boneprone (11:16 PM) :
Damn.
So even if I add 2 Apaches (which sounds like a nice thing to try), I'm probably still gonna have the same problems.
Sounds like perhaps I've just outgrown my server.


Webmaster (11:17 PM) :
Yeah, that could be. A second Apache will give you added cushion there to hold you over for a while, but you have to be careful with that. I had the pros do it for me, but I know of someone else that tried it and had some bigger problems with it.
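
To make the two-Apache setup from that chat concrete: the second httpd gets its own config file, binds to a separate IP (or port), and serves only the image docroot. A sketch of the second config (paths and IP are hypothetical):

# httpd-images.conf -- a second Apache serving thumbs only
Listen 10.0.0.2:80                  # dedicated IP, or use another port
PidFile /var/run/httpd-images.pid   # separate pidfile from the main Apache
DocumentRoot /www/thumbs
MaxClients 300                      # images are cheap; allow more children
KeepAlive Off                       # one image per connection, children recycle fast
# start it with: httpd -f /usr/local/etc/apache/httpd-images.conf

The pages then reference the thumbs through a hostname that resolves to the second IP.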

Stramm 07-26-2004 12:17 AM

Quote:

Originally posted by boneprone
A single UCJ site was running close to 2 million raw hits per day a few years ago on a server of the day with minimal overall load

Tim - UCJ

That was mine.. it was not only 2 but >3 million raw on an 800 MHz PIII (Zeus webserver). Load was 0.3.
Spidering the galleries shouldn't consume a lot of CPU time.
Creating the HTML from a huge gallery DB may bog the box down for a few minutes, but not all the time.
If you move the thumbs to another box it'll help a lot, I bet.

toddler 07-26-2004 12:39 AM

BP:

849 processes: 12 running, 780 sleeping, 57 zombie

Those 57 zombies are part of your 'lag' problem. I would recommend a reboot if/when you can. No, the zombies aren't taking up resources per se, but they are taking up process slots.

Dunno if you answered this earlier, but is this Linux or FreeBSD?
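
If you want to see where the zombies come from before deciding, something like this lists each one alongside its parent (ps keywords as on FreeBSD/Linux):

# List zombie processes and the parents that haven't reaped them
ps axo pid,ppid,state,command | awk '$3 ~ /Z/'

Since a zombie disappears as soon as its parent reaps it or exits, restarting the parent daemon usually clears them without a full reboot.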

boneprone 07-26-2004 12:49 AM

Quote:

Originally posted by toddler
BP:

849 processes: 12 running, 780 sleeping, 57 zombie

Those 57 zombies are part of your 'lag' problem. I would recommend a reboot if/when you can. No, the zombies aren't taking up resources per se, but they are taking up process slots.

Dunno if you answered this earlier, but is this Linux or FreeBSD?

No dude, those zombies are a normal part of UCJ.

toddler 07-26-2004 12:55 AM

Quote:

Originally posted by boneprone
No dude, those zombies are a normal part of UCJ.
Shitty code, then.

boneprone 07-26-2004 12:56 AM

OK, seems the webmasters I've been wanting to talk to are back home now after the weekend..

Here's another guy who runs similar scripts to mine, has more traffic, and has dealt with this..

Again, another approach..
But interesting..


"Another Webmaster (12:16 AM) :
yo dude, I see your multiple category topic at gfy is still going strong1 server for ucj/html/category pages.
I have dual cpu's and 2 gig of mem on all my servers by the way.....single cpu just isnt strong enough

1 server for movie thumbs
1 server for pic thumbs
all 3 are dual cpu + 2 gb ram....that's the only way to go if you want your servers to be very very fast. those 3 sites have around 1.1 million visitors a day in total

Another Webmaster (12:16 AM) :
by the way, buying seperate servers for the thumbs really DID solve our problems :-)
I'm not sure how your script is written but it completely solved our speed problem....I have like 12 servers now so I know what I'm talking about :-D

Boneprone (12:19 AM) :
ahh. Yeah, so the thumbs really do take a lot of apache?
Another Webamster (12:19 AM) :
yeah, for us it did + buying a dual cpu also made a huge difference

boneprone 07-26-2004 12:58 AM

Quote:

Originally posted by Bake
I have what you're looking for. ICQ 77762980
Post it, dude.

boneprone 07-26-2004 01:03 AM

Anyone use TUX over Apache?

Does it help?

XP 07-26-2004 01:13 AM

I'm the script owner, and I'm also a server guru ;)
boneprone has a P4 2.4 GHz CPU (400 or 533 MHz FSB, I suppose) with no Hyperthreading, 2 GB RAM, and FreeBSD 4.x.

last pid: 26998; load averages: 1.12, 1.35, 1.83 up 7+10:32:19 01:09:37
849 processes: 3 running, 789 sleeping, 57 zombie
CPU states: 19.2% user, 0.0% nice, 26.8% system, 4.7% interrupt, 49.3% idle
Mem: 1022M Active, 394M Inact, 375M Wired, 59M Cache, 199M Buf, 161M Free
Swap: 2048M Total, 76M Used, 1972M Free, 3% Inuse

As I see it, too many sleeping processes are eating memory, which pushes the box into swap sometimes. When no script is running, the server runs OK; loads of 1.0 to 2.0 are acceptable, but it's right at the border. When a script runs from crontab, processes start to lag and the server load climbs to 3.0-7.0.

I use the same script on an AMD Athlon XP 2.4 GHz server with 512 MB RAM, which works fine without any load. Of course, I only have one site hosted on that server; that's why.

I think a single Xeon 2.8 GHz (800 MHz FSB) with 1 MB cache will resolve the problem temporarily. But if his hits increase, like double, a second Xeon CPU may be required.

Having a second server for thumbs may also help, but then the thumbs have to be synchronized frequently. And even with the thumbs hosted on a second server, a new CPU is required. He gets so many clicks through UCJ ;)
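
On the synchronization point: a cron'd rsync is the usual way to keep a thumbs box current (the hostname and paths here are hypothetical):

# Push new and changed thumbs to the image server every 15 minutes
*/15 * * * * rsync -az --delete /www/thumbs/ thumbs.example.com:/www/thumbs/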

toddler 07-26-2004 01:19 AM

Swap: 2048M Total, 76M Used, 1972M Free, 3% Inuse

I don't agree. I take care of 1800+ Linux and Solaris machines. _swap_ per se isn't the problem.

toddler 07-26-2004 01:30 AM

Quote:

Originally posted by toddler
Swap: 2048M Total, 76M Used, 1972M Free, 3% Inuse

I don't agree. I take care of 1800+ Linux and Solaris machines. _swap_ per se isn't the problem.

Caveat: you can't really do performance tuning off of one screenful of top. BP, if this is still a problem tomorrow I'll lob an ICQ your way.

Night folks, meetings in the AM.

boneprone 07-26-2004 01:53 AM

Quote:

Originally posted by XP
I'm the script owner, and I'm also a server guru ;)
boneprone has a P4 2.4 GHz CPU (400 or 533 MHz FSB, I suppose) with no Hyperthreading, 2 GB RAM, and FreeBSD 4.x.

last pid: 26998; load averages: 1.12, 1.35, 1.83 up 7+10:32:19 01:09:37
849 processes: 3 running, 789 sleeping, 57 zombie
CPU states: 19.2% user, 0.0% nice, 26.8% system, 4.7% interrupt, 49.3% idle
Mem: 1022M Active, 394M Inact, 375M Wired, 59M Cache, 199M Buf, 161M Free
Swap: 2048M Total, 76M Used, 1972M Free, 3% Inuse

As I see it, too many sleeping processes are eating memory, which pushes the box into swap sometimes. When no script is running, the server runs OK; loads of 1.0 to 2.0 are acceptable, but it's right at the border. When a script runs from crontab, processes start to lag and the server load climbs to 3.0-7.0.

I use the same script on an AMD Athlon XP 2.4 GHz server with 512 MB RAM, which works fine without any load. Of course, I only have one site hosted on that server; that's why.

I think a single Xeon 2.8 GHz (800 MHz FSB) with 1 MB cache will resolve the problem temporarily. But if his hits increase, like double, a second Xeon CPU may be required.

Having a second server for thumbs may also help, but then the thumbs have to be synchronized frequently. And even with the thumbs hosted on a second server, a new CPU is required. He gets so many clicks through UCJ ;)


Hey Murat, those specs you just posted are rare.
Usually it looks more stressed than that 2:00 AM Sunday morning snapshot you posted.

The problem is, yeah, the server gets crippled when your script runs. No big deal really, bro. It only runs for like 20 minutes, 4 times a day.. I can deal with that..

The problem I'm worried about is that even when no cron is running, I'm having serious memory loads, a lack of idle CPU, and swap in use. I can see with my own eyes the links on my sites stalling when I click them instead of shooting right to where the link is supposed to go.

It's really affecting sites like boneprone.com, which rely on the surfer being able to click as fast as possible.

My 200K site is down to 50K after this weekend. And the clicks and prod, according to the stats, have never been lower..

Fewer clicks per surfer is clearly the problem. With fewer clicks on a TGP, you can naturally see a dramatic effect like going from 200K to 50K real fast. Now, some people say there may be a memory leak in a script that is causing this.

Is this possible, and if so, how do we trace it?


Also, 2 months ago we just said, bahh, let's go from 1 gig of RAM to 2 gigs and many of the load issues will be gone, rather than looking deeper into the issue.

Now we have the 2 gigs of RAM and the same problem still exists.

toddler 07-26-2004 02:03 AM

"Is this possible, if so how do we trace it."

Process lists and performance monitoring over a period of time. It may not be his script that's doing it. Maybe you need to increase the number of Apache daemons running, or maybe it's another part of whatever else you have running; it could be any of those things. Do you have an email address where I can discuss this with you tomorrow (Monday)? Don't know how much Unix skill you have, but I can walk you through some things to test....


t
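
The "monitoring over a period of time" part can be as simple as logging snapshots from cron and reading them back after a slow spell (the log path is hypothetical; pcpu/pmem are used instead of %cpu/%mem because % is special in crontab):

# Every 5 minutes: timestamp, one-line vmstat summary, top memory eaters
*/5 * * * * (date; vmstat; ps axo pid,state,pcpu,pmem,command | sort -rn -k4 | head -20) >> /var/log/perfsnap.log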

boneprone 07-26-2004 02:08 AM

Quote:

Originally posted by toddler
"Is this possible, if so how do we trace it."

Process lists and performance monitoring over a period of time. It may not be his script that is doing it. It may be you need to increase the number of apache daemons running, it may be another part of whatever else you have running causing it, it could be any of those things. Do you have an email addr that i can discuss this with you tomorrow(monday) ? Don't know how much unix skills you have, but i can walk you through some things to test....


t


that would be great!

bp4l at boneprone.com

toddler 07-26-2004 02:18 AM

Sent. OK, really off to bed now... :)

t

Matthew 07-26-2004 03:02 AM

Okay,

I'm the guy behind tmanager, and I'll try to explain what I think about it.


First, if done right, a 2nd Apache for thumbs helps a lot. Also, a little FreeBSD kernel tuning never hurt anyone :Graucho


Why? A correctly compiled and configured 2nd Apache will consume LESS memory, which is what this server needs!

But this won't solve the whole problem. The problem is that the Perl script consumes a lot of memory while checking links. I think there is a memory leak somewhere in it, because it shouldn't take 100 MB of RAM to check links.

And of course it's good to get a faster server anyway. Then you have room to grow.

Also, I'm surprised that guys still suggest "use a C++ script only". You should check out how much time launching a process via CGI takes. Really. But don't start another flame war about PHP and C; that's not the point I'm trying to make.

The point is that in this particular case the load occurs because:

1) the server runs this Perl script, which consumes memory.
2) the server goes into swap, starting a chain reaction: more and more processes have to wait for memory/CPU time to become available.
3) over time, the server works through those requests and becomes available again, just until the next cronjob.
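
That chain reaction is visible live if you leave vmstat running while the cron job fires:

# Sample system activity every 5 seconds during the cron run
vmstat 5
# FreeBSD: sustained non-zero "pi"/"po" (page-in/page-out) while "r"
# (the run queue) climbs is exactly the swap chain reaction described
# above. On Linux the equivalent columns are "si"/"so".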

toddler 07-26-2004 03:43 AM

So... you're saying that due to inefficiencies in your Perl code, your customers need to spend money on hardware to compensate?

I'd like to take a look at the actual code in question. Is the script parsable, or has it been made into a binary? (perl2exe, whatever)

V_RocKs 07-26-2004 03:59 AM

Quote:

Originally posted by Armed & Hammered
Definitely not necessary. That's overkill. If you're not serving completely dynamic content, load balancing is a waste. Sounds like more of a script issue to me.
Have to disagree with you there. SouthernCharms is in a load-balanced situation and it's all static content. Once you're sending out a large amount of data, you're going to need a better CPU, more of them, or load-balanced servers.

Do you serve any movies from your server, BP? If you do, don't. tmanager is not going to add a significant level of activity since it doesn't really do much of anything: opening a text file, picking out some lines, and placing them into an HTML file once every 5 minutes isn't going to add much load. Same goes for the traffic script. If you're making a decent amount of cash, I'd add a server to the mix and either load balance or put 3 sites on it. Another thing to look into is putting the spider script on it.

Matthew 07-26-2004 04:02 AM

toddler:

Take it easy, this is not my script.

My script has nothing to do with Perl.

So no, I'm just hinting at ways to solve it.

V_RocKs 07-26-2004 04:03 AM

Quote:

Originally posted by Matthew
okay,

I'm the guy behind tmanager, and I'll try to explain what I think about it.


First, if done right, a 2nd Apache for thumbs helps a lot. Also, a little FreeBSD kernel tuning never hurt anyone :Graucho


Why? A correctly compiled and configured 2nd Apache will consume LESS memory, which is what this server needs!

But this won't solve the whole problem. The problem is that the Perl script consumes a lot of memory while checking links. I think there is a memory leak somewhere in it, because it shouldn't take 100 MB of RAM to check links.

And of course it's good to get a faster server anyway. Then you have room to grow.

Also, I'm surprised that guys still suggest "use a C++ script only". You should check out how much time launching a process via CGI takes. Really. But don't start another flame war about PHP and C; that's not the point I'm trying to make.

The point is that in this particular case the load occurs because:

1) the server runs this Perl script, which consumes memory.
2) the server goes into swap, starting a chain reaction: more and more processes have to wait for memory/CPU time to become available.
3) over time, the server works through those requests and becomes available again, just until the next cronjob.


Hmmm... thinking about what you said... Perl scripts don't manage memory directly like C or C++, so memory leaks are kind of a moot point, unless you simply mean it doesn't clear a buffer after using certain data, or doesn't close a file it has opened and stored in a handle.

If the script worked on smaller chunks of data (used a smaller buffer, or worked on chunks of 2,000 links at a time), it might give you better performance resource-wise, but it would take significantly more time to run.
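
A sketch of that batching idea in Perl, assuming the links sit one per line in a flat file; the file name and the check routine are hypothetical, this is not the actual tmanager or UCJ code:

#!/usr/bin/perl -w
# Check links in fixed-size batches instead of slurping the whole
# database into memory at once; memory stays flat at about $BATCH links.
use strict;

my $BATCH = 2000;                    # links held in memory at any one time
open my $fh, '<', 'links.txt' or die "links.txt: $!";

my @batch;
while (my $url = <$fh>) {
    chomp $url;
    push @batch, $url;
    if (@batch >= $BATCH) {
        check_batch(\@batch);
        @batch = ();                 # release this batch before reading more
    }
}
check_batch(\@batch) if @batch;      # leftover partial batch
close $fh;

sub check_batch {
    my ($links) = @_;
    for my $url (@$links) {
        # a real checker would issue an HTTP request here and record the result
    }
}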

V_RocKs 07-26-2004 04:05 AM

Quote:

Originally posted by toddler
So... you're saying that due to inefficiencies in your Perl code, your customers need to spend money on hardware to compensate?

I'd like to take a look at the actual code in question. Is the script parsable, or has it been made into a binary? (perl2exe, whatever)

Damn your sig really says it all... hehe

Fabuleux 07-26-2004 07:09 AM

If your traffic is dropping, reboot your box now and it will be fast for at least a few days :2 cents:

Blackrose 07-26-2004 07:30 AM

Looks like the problem has caused you more trouble than we speculated... 200K down to 50K is a total wreck :(

I really think having another server up to split the load wouldn't hurt ya :winkwink:

XP 07-26-2004 11:16 AM

Well, your outbound links stall. I can see it just by clicking on your sites: it takes some 5-6 seconds just to process the jump to a gallery. For a TGP this is a huge hit. Seems the load is not from Perl; about 80% of the processes are Apache, mostly UCJ, tmanager out, etc. Not the Perl script.

XP 07-27-2004 10:40 AM

I noticed memory management in Perl is awful... There really were memory leaks in the old versions and the newest version of my script (which boneprone uses).

The memory leaks are nothing for small databases, but boneprone has been using the script for over a year and his database files are huge (no purging of old galleries).

I optimized the code for the memory leaks; it uses 50% less RAM now. Damn, why should I have to take care of variables? Shouldn't they be handled by the virtual machine?!
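
For the curious: the classic way a long-running Perl script leaks despite garbage collection is a reference cycle, which Perl's reference counting never frees. A toy illustration, not the script's actual code (Scalar::Util::weaken requires a reasonably modern Perl):

#!/usr/bin/perl -w
# Two structures that point at each other never drop to refcount zero,
# so they are never freed -- memory grows with every loop iteration.
use strict;
use Scalar::Util qw(weaken);

for my $i (1 .. 100_000) {
    my $gallery = { id => $i };
    my $index   = { newest => $gallery };
    $gallery->{index} = $index;   # cycle: gallery <-> index, leaks as-is

    weaken($gallery->{index});    # fix: make one side of the cycle weak
}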

Anyway, the problem is solved temporarily now. The new version uses heavy MySQL operations (like building pages from a 700K-gallery database), so it requires more CPU and less RAM.

Owners of the script on boneprone.com should contact me as soon as possible; we need to upgrade them to the latest version ASAP.

boneprone 07-27-2004 10:46 AM

Thanks bud!

My sites are so dependent on your scripts, and when consultants recommended I not use them, they did not understand that it was not an option. This script is a must for me.

But you mentioned I still need more CPU? CPU is something I didn't have much of to begin with. Are you saying I need more now with this update, or just in general?

Does this new version use more CPU?

XP 07-27-2004 10:53 AM

Yes, but getting more CPU, or a server with more CPU than you currently have, is quite easy to do nowadays. In fact, the industry standard for your kind of site is a server with more CPU, as you can see from the posts here on GFY alone by other webmasters in your niche. If you can't spring for a dual like most have, at least get a processor with Hyperthreading.

rowan 07-27-2004 11:06 AM

Here's another thumbs up for a second, custom httpd process. A few years ago one of my sites had some severe overload problems; compiling a "lean and mean" Apache with most modules stripped out (including many of the default ones) for serving IMAGES ONLY made a huge difference in server load. It also makes a difference in memory usage: the standard Apache with PHP consumed about 7 MB per process, the lean Apache about 4 MB. Multiply that by 100 or 200 httpd processes and you get back a sizeable chunk of RAM. Most TGPs probably spend most of their server time sending images, so it is well worth considering the benefits. :2 cents:

If your templates can rebuild your site with img srcs pointing to a different domain, you can set the whole thing up in less than an hour.
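
For reference, the kind of APACI build rowan is describing looks roughly like this; the module list is illustrative and the exact flags vary by Apache 1.3 version:

# Build a stripped-down Apache for serving images only
./configure --prefix=/usr/local/apache-thumbs \
    --disable-module=include  --disable-module=autoindex \
    --disable-module=userdir  --disable-module=cgi \
    --disable-module=negotiation --disable-module=status
make && make install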

rowan 07-27-2004 11:10 AM

One other thing to bear in mind: once a server starts overloading, it's usually the start of an exponential spiral. Doubling the load might triple or quadruple the delay the surfers experience, so making even minor changes to ease the load can produce a much more apparent benefit.

The site I mentioned above was once reviewed as "the slowest site on the net" before I did those tweaks. :)

boneprone 07-27-2004 11:44 AM

Yeah, I hear ya, Rowan.

There are two options here I need to decide on:

getting a better server, or keeping the one I have now and getting a second one to offload the thumbs onto.

I'm just wondering how much Apache load putting the thumbs elsewhere will actually save.. And XP's CPU-demanding script will still be on the main server; it's separate from the thumb script.

boneprone 07-27-2004 11:46 AM

Quote:

Originally posted by rowan
One other thing to bear in mind: once a server starts overloading, it's usually the start of an exponential spiral. Doubling the load might triple or quadruple the delay the surfers experience, so making even minor changes to ease the load can produce a much more apparent benefit.

The site I mentioned above was once reviewed as "the slowest site on the net" before I did those tweaks. :)

Yeah, I hear ya.

And I've been getting some "real slow" comments on my sites for a while now too. Just trying to figure out the best way to tackle it.

boneprone 07-27-2004 12:13 PM

Doesn't seem like the new version is really making things lighter.

Hmmm.


CPU states: 48.6% user, 0.0% nice, 47.9% system, 3.5% interrupt, 0.0% idle

Mem: 1159M Active, 371M Inact, 383M Wired, 88M Cache, 199M Buf, 9724K Free

Swap: 2048M Total, 25M Used, 2023M Free, 1% Inuse

:(

4Pics 07-27-2004 12:28 PM

Hey, ICQ me: #5051691, Boney.

Volantt 07-27-2004 01:24 PM

You really need to reply to my ICQ and hit me up. :2 cents:

XP 07-28-2004 12:11 AM

rowan has a point. I didn't compile Apache on boneprone's server; his host did ;)
I personally don't use shared modules (SO) in Apache; instead I compile PHP into the core, along with only the other required modules.

It all works fine for me.

The bad thing about boneprone's host is that they measure a server's power by the Mbits it pushes. I don't care if a server with all-static content pushes 50 Mbit with the same config.
We use heavy MySQL operations (even though they are highly optimized, there's only so much you can do when pulling from a 700,000-gallery database).

