Quote:
Or the thumbs script too? I don't think it's the actual images bogging down Apache, is it? It's the clicks and the processing of the program itself, and the PHP updates it does on the network of sites, isn't it?
1. That load is nothing; start to worry when it goes over 20.
2. Use only scripts written in C++.
3. If you use MySQL, use MySQL 4; it's much faster.
4. You are out of memory; add more.
5. If you don't already, use FreeBSD, not Linux.
Quote:
Prod is way down. So much so that boneprone.com is getting 50% less traffic than it should. The only reason the overall number for my sites is doing OK is because of the latest additions of new sites. But what I'm trying to figure out here is whether these loads are normal for someone doing this much traffic with these scripts, or abnormal. As far as I know, not many people either run these scripts AND do this much traffic, or if they do, it seems they have them spread over a few servers. I'm just wondering what needs to be done to fix this. I don't want to have to utilize another server when it's not needed and a simple solution is available, but it seems there just aren't many people to consult who know, or do, my kind of traffic with these scripts. Everyone has an idea or two, but I'd really like to talk to someone who has been through, or has dealt with, what I'm going through. As of now, my guesses, or the guesses of my host techs, are just that: guesses. Neither of us has really dealt much with TGP script load issues. But I know many of you have. I'm surprised I'm not getting much more insight here. I've sent this thread to a bunch of people on ICQ, and only a small handful of replies. What the fuck?
Hit me up when you get a chance. I have been through this numerous times.
Shit, sorry bro.
My bad. Just processed in my head who you were. Yeah, I'll hit ya up. I'd like to pick your brain. Seems the only people hitting me up and trying to help me have been hosting companies. Kinda irritating. I'm not in the market for changing. I'm just trying to gather some more info so I can make an educated suggestion, and references, for my hosting tech so he can have a better idea of what's going on. This is all new to him too. Jupiter hosts many TGPs, I'm sure, but there are only a few sites or networks like mine with this combination of traffic and similar scripts running, and I don't think any of 'em are on Jupiter. And those that do have what I have going on aren't exactly reaching out a lending hand to help me or tell me how they configured or deal with these issues. Again... FUCKERS.
Quote:
Don't tell me the family hasn't got an update yet :(
Quote:
It's not a bandwidth issue; it seems to be a PHP, MySQL, Apache, CPU and memory load issue. Putting the thumbs on another server will only speed up the surfer's download time, won't it, and not the script performance? Or will it? Yeah, I have Gold on SoCal and DigitalMpegs, but not ModelsGroup. Should I get it? And is Tmanager causing this? Or is it my traffic volume into the scripts causing it? Did you have loads with your sites?
Quote:
Five minutes of work per site to do it and see if the load goes down.
Did this make your loads go down?
Again, how is simply hosting some thumbs elsewhere gonna give me back 2 gigs of RAM and get me off swap? And give me some idle CPU back? I'll try it, but I'm hoping for some better suggestions to come along. It's Sunday right now; I'm gonna figure out where or what to do about this on Monday. I don't want to be posting again by Friday about how nothing improved after all the hassle. But so far you are the only one who has actually recommended a solution, right or wrong.
What I also do is adjust the crontab file so no two scripts are running at the same time.
For example:
Run gallery spider
Stop gallery spider
Optimize database
Wait 10 minutes, rebuild pages for TGP 1
Wait 10 minutes, rebuild pages for TGP 2
Wait 10 minutes, and so on
When two scripts run at the same time, it takes more than twice as long to complete.
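A crontab laid out along those lines might look like this (the times and script paths are hypothetical placeholders, not taken from any actual setup):

```shell
# Hypothetical crontab: stagger the heavy jobs so no two overlap.
# 02:00 - gallery spider (exits on its own when done)
0 2 * * * /usr/local/scripts/spider.pl
# 02:30 - optimize the database after the spider has finished
30 2 * * * /usr/local/scripts/optimize_db.sh
# 02:40 / 02:50 / 03:00 - rebuild each TGP ten minutes apart
40 2 * * * /usr/local/scripts/rebuild.sh tgp1
50 2 * * * /usr/local/scripts/rebuild.sh tgp2
0 3 * * * /usr/local/scripts/rebuild.sh tgp3
```

The exact offsets only matter insofar as each job should normally finish before the next one starts.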
Quote:
Like a few months ago I did it, but I'm not even really talking cron loads here. I'm talking about having problems when the server is at idle, just doing normal functions. You see, this problem is happening even when the crons aren't working, but when the cron does come on it makes things almost crippled. But it's only for 15 minutes, so I guess I can live with it; been doing that for almost a year, so hell, I can deal with it even longer. The concern I have now is that I'm dealing with loads and swap even when crons aren't working. I'm having it just with normal functions. But yeah, I already did the simple solutions, like timing the crons not to go off at the same time and limiting them to run as little as possible. Did the other simple solutions too, like upgrading from 1 gig of RAM to 2 gigs and that kind of thing, and that helped with the crashes during the cron. But now that my movie sites are really pulling in more traffic, like I used to have in 1999-2000, the 2-gig RAM upgrade seems like a thing of the past and the same problems are back. Helped for about 1 month; now I'm swapping again.
What kind of server are you running on?
Why not get a dual Xeon 2.8 box with 2 gigs of RAM? That oughta handle more traffic. Or... split your largest TGP off to a new box.
I also think if you run the thumbs on a separate machine it will help the load.
Reason: Tim said 3 thumbs is equal to UCJ. Well, don't you run like 90+ thumbs per site, and around 700k of traffic? That's a shitload of processing just to load thumbs. A quick solution might be to cut a block of thumbs out from a site and see if the speed improves.
Run "top" from the shell to see which processes are using your CPU and memory.
If it is the gallery spiders, or other processes that don't have to respond quickly to user requests, that slow the box down, try running them with "nice" (http://wwwhahahahahahagroup.org/onli.../xcu/nice.html) to reduce their system scheduling priority. Under nice, the process will still run as quickly as it can, but it will yield to processes with higher priority. In effect, the process will only use the spare resources of the box, so things like the webserver get priority.
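As a minimal sketch of that suggestion (the echo is a placeholder standing in for a real spider command):

```shell
# Start a job at the lowest CPU priority (nice 19) so Apache and
# other interactive processes win any contention for the CPU.
# The echo is a stand-in for the actual spider command.
nice -n 19 sh -c 'echo "spider would run here"'

# Run with no operands, nice prints the current niceness, which is
# a quick way to confirm the adjustment took effect:
nice -n 19 nice
```

`renice` can likewise lower the priority of a job that is already running, given its PID.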
I have what you're looking for. ICQ 77762980
Didn't read the whole thread, just the first few replies...
and it seems to me like a problem in the script... get a new one, yours sucks!
For my TGPs I have 2 servers: one has the scripts, HTTP, mail and a nameserver, and the other currently has only MySQL. Since I added the dedicated database server, my other server has had a lot less load on it when the pages get rebuilt. It takes about 35-45 minutes to build one site. I'm thinking about moving most of my TGP script over to the database server, along with my mail and one nameserver, to make my HTTP server faster.
HTTP server: load average right now is around 1.5 while it's building a site; dual P3 933 with 2 gigs of RAM.
Database server: load average right now is around 1; dual Xeon 2 GHz, 2 gigs of RAM.
I also have a spider and a link checker running 24/7. Oh, I also run Linux.
Getting a lot of better recommendations now, it seems.
Just got done talking to another webmaster who runs the same scripts and deals with more traffic than me, and he helped shed some light on things. Looks like I need to shift a lot more focus to the hosting side of this problem and make some changes.

Webmaster (09:20 PM): Hey man, I think that once you start hitting around 650K, it might start testing your server a little. One of my servers runs 650-700K and it runs fine; sometimes things are tight, but it handles it. The one thing about the server is that it has 2 Apaches running on it, one of them for the pages and one for the graphics, so that lightens things up. Before, I only had one Apache and it was killing it, so I had to go the 2-Apache route.
Webmaster (09:20 PM): Most server techies think that 2 Apaches is crazy and unheard of though, ha ha ha. I know that mine did until my guys showed him how it's done, ha ha.
Webmaster (09:22 PM): I actually have 3 servers now: one that handles the 650-700K, another that handles 400-450K and another that has a little 5K site on it with a pile of galleries.
Webmaster (09:22 PM): I keep all of my sites on their own server; I don't split them up by graphics on one server and HTML on another. That might be a good idea though, to pull in another server just for graphics.
Boneprone (11:06 PM): Is your big server a dual? Or is it a regular P4 single-processor server? 2 gigs of RAM?
Webmaster (11:07 PM): My big server is a dual 2 GHz P4 with 2 gigs of RAM.
Boneprone (11:13 PM): OK, cool. I had one of those at Likewhoa in the day and it was fine. Damn, I'm only on a single processor. Single Apache (never heard of double). Have 6 sites on it, all running UCJ real hard, plus Tmanager, totalling over 650k a day. Not to mention a spider script that runs a heavy load of perl 4 times a day and cripples the server for 20 minutes every time it runs. So all in all, traffic numbers aside, all the scripts alone on a single server is no shock to pound a server, then, I guess?
Webmaster (11:13 PM): For me, if I added anything else, it'd push me over. I know that even on my other server, enabling hotlink protection shuts it right down, so I have to use more basic methods for doing it.
Webmaster (11:14 PM): Ha ha, nope, you should expect that.
Boneprone (11:16 PM): Damn. So even if I add 2 Apaches (which sounds like a nice thing to try) I'm probably gonna still have the same problems. Sounds like perhaps I've just grown to the max of my server.
Webmaster (11:17 PM): Yeah, that could be. A second Apache will give you added cushion there to hold you over for a while, but you have to be careful with that. I had the pros do it for me, but I know of someone else who tried it and had some bigger problems with it.
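For what it's worth, the two-Apache setup he describes usually just means a second httpd started with its own config, stripped of PHP and other heavy modules so each child process stays small. A rough sketch with invented paths; the real module list and tuning depend on the build:

```shell
# Main instance: dynamic pages, mod_php loaded, the usual config.
/usr/local/apache/bin/httpd -f /usr/local/apache/conf/httpd.conf

# Thumbs instance: separate config on its own IP or port, serving
# only static files -- no LoadModule lines for PHP/mod_perl, and
# KeepAlive/MaxClients tuned for many small image requests.
/usr/local/apache/bin/httpd -f /usr/local/apache/conf/httpd-thumbs.conf
```

The thumbnail URLs on the pages then point at the second instance's hostname or IP, so the fat PHP-capable children never waste time shoveling images.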
Quote:
Spidering the galleries can't be consuming lots of CPU time. Creating the HTML from a huge gallery DB may bog the box down for a few minutes, but not all the time. If you move the thumbs to another box, it'll help a lot, I bet.
BP:
849 processes: 12 running, 780 sleeping, 57 zombie
Those 57 zombies are part of your 'lag' problem. I would recommend a reboot if/when you can. No, the zombies aren't taking up resources per se, but they are taking up process slots. Dunno if you answered this earlier, but is this Linux or FreeBSD?
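A quick way to see the zombie count for yourself from the shell (a generic one-liner, not specific to any of these scripts):

```shell
# Zombie (defunct) processes show 'Z' in the STAT column. They use
# no memory, but each one occupies a process-table slot until its
# parent reaps it with wait().
ps axo stat,comm | awk '$1 ~ /^Z/ {n++} END {print n+0, "zombies"}'
```

If the count keeps climbing after a reboot, the parent process that is failing to reap its children is the thing to fix; the reboot only clears the symptom.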
OK, seems the webmasters I've been wanting to talk to are back home now after the weekend.
Here's another guy who runs similar scripts, also has more traffic, and has dealt with this. Again, another approach, but interesting:

Another Webmaster (12:16 AM): Yo dude, I see your multiple-category topic at GFY is still going strong. 1 server for UCJ/HTML/category pages, 1 server for movie thumbs, 1 server for pic thumbs. I have dual CPUs and 2 gigs of mem on all my servers, by the way; a single CPU just isn't strong enough. All 3 are dual CPU + 2 GB RAM. That's the only way to go if you want your servers to be very, very fast. Those 3 sites have around 1.1 million visitors a day in total.
Another Webmaster (12:16 AM): By the way, buying separate servers for the thumbs really DID solve our problems :-) I'm not sure how your script is written, but it completely solved our speed problem. I have like 12 servers now, so I know what I'm talking about :-D
Boneprone (12:19 AM): Ahh. Yeah, so the thumbs really do take a lot of Apache?
Another Webmaster (12:19 AM): Yeah, for us it did, plus buying a dual CPU also made a huge difference.
Anyone use TUX over Apache?
Does it help?
I'm the script owner; I'm also a server guru ;)
Boneprone has a P4 2.4 GHz CPU (400 or 533 MHz FSB, I suppose) with no Hyper-Threading, 2 GB RAM and FreeBSD 4.x.

last pid: 26998; load averages: 1.12, 1.35, 1.83 up 7+10:32:19 01:09:37
849 processes: 3 running, 789 sleeping, 57 zombie
CPU states: 19.2% user, 0.0% nice, 26.8% system, 4.7% interrupt, 49.3% idle
Mem: 1022M Active, 394M Inact, 375M Wired, 59M Cache, 199M Buf, 161M Free
Swap: 2048M Total, 76M Used, 1972M Free, 3% Inuse

As I see it, too many sleeping processes eating memory cause it to dip into swap sometimes. When no script is running, the server runs OK; 1.0 to 2.0 loads are acceptable, but it's at the border. When a script runs from crontab, it begins to delay processes and the server is loaded at 3.0 to 7.0. I use the same script on an AMD Athlon XP 2.4 GHz, 512 MB RAM server, which works fine without any load; of course, I only have one site hosted on that server, that's why. I think a single Xeon 2.8 GHz (800 MHz FSB) with 1 MB cache will resolve the problem temporarily, but if he doubles his hits, a second Xeon CPU may be required. Having a second server for thumbs may also help, but then they have to be synchronized frequently. Even with thumbs hosted on a second server, a new CPU is required. He gets so many clicks for UCJ ;)
Swap: 2048M Total, 76M Used, 1972M Free, 3% Inuse
I don't agree. I take care of 1800+ Linux and Solaris machines. Swap per se isn't the problem.
Night folks, meetings in the a.m.
Quote:
Hey Murat, those specs you just posted are rare. Usually it looks more stressed than that 2:00 AM Sunday morning figure you posted. The problem is, yeah, the server cripples when your script runs. No big deal really, bro; it only runs for like 20 minutes 4 times a day, and I can deal with that. The problem I'm worried about is that even when the site isn't running crontab jobs I'm having serious memory loads, a lack of idle CPU available, and swap in use. I can see with my own eyes the links on my sites stalling when I click 'em and not shooting right to where the link is supposed to take 'em. It's really affecting sites like boneprone.com, which relies on the surfer being able to click as fast as possible. My 200k site is down at 50k after this weekend, and the click and prod according to the stats have never been lower. Fewer clicks per surfer is clearly the problem, and with fewer clicks a TGP can naturally drop from 200k to 50k real fast. Now some people say there may be a memory leak in a script that is causing this. Is this possible, and if so, how do we trace it? Also, 2 months ago we just said, bah, let's go from 1 gig of RAM to 2 gigs and many of the load issues will be gone, rather than looking deeper into the issue. Now we have the 2 gigs of RAM and the same problem still exists.
"Is this possible, and if so, how do we trace it?"
Process lists and performance monitoring over a period of time. It may not be his script that is doing it. It may be that you need to increase the number of Apache daemons running, or it may be another part of whatever else you have running; it could be any of those things. Do you have an email address where I can discuss this with you tomorrow (Monday)? Don't know how much Unix skill you have, but I can walk you through some things to test....
t
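That kind of over-time monitoring can be as simple as a sampling loop. A sketch with an arbitrary log path; it takes three one-second samples for demonstration, where in practice you would loop forever (or run it from cron) at something like a 60-second interval:

```shell
# Append a snapshot of the load and the five biggest memory users
# (by RSS) to a log. Comparing snapshots across a day shows whether
# one process grows without bound (a leak) or the whole mix is
# simply too big for the RAM.
LOG=/tmp/loadwatch.log
for i in 1 2 3; do
    {
        date
        uptime
        ps axo rss,pid,comm | sort -rn | head -5
        echo '---'
    } >> "$LOG"
    sleep 1
done
```

A leak shows up as one command's RSS marching upward between snapshots while everything else stays flat.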
Quote:
That would be great! bp4l at boneprone.com
Sent. OK, really off to bed now... :)
t
Okay,
I'm the guy behind Tmanager, and I'll try to explain what I think about it. First, if done right, a 2nd Apache for thumbs helps a lot. Also, a little FreeBSD kernel tuning never hurt anyone :Graucho Why? A correctly compiled and configured 2nd Apache will consume LESS memory, which is what this server needs! But this won't solve the whole problem. The problem is that the perl script consumes a lot of memory while checking links. I think there is a memory leak somewhere in it, because it shouldn't take 100 MB of RAM to check links. And of course it's good to get a faster server anyway; then you have room to grow.
Also, I'm surprised that guys still suggest "use a C++ script only." You guys should check out how much time launching a process via CGI takes. Really. But don't start a flame war again about PHP and C; that is not the point I'm trying to make. The point is that in this particular case the load occurs because:
1) the server runs this perl script, which consumes memory;
2) the server goes into swap, starting a chain reaction: more and more processes have to wait for memory/CPU time to become available;
3) over time, the server works through those requests and becomes available again, just until the next cron job.
So... you're saying that due to inefficiencies in your perl code, your customers need to spend money on hardware to compensate?
I'd like to take a look at the actual code in question. Is the script parsable, or has it been made into a binary (perl2exe, whatever)?
Quote:
Do you serve any movies from your server, BP? If you do, don't. Tmanager is not going to add a significant level of activity, since it doesn't really do much of anything: opening a text file, picking out some lines and then placing them into an HTML file once every 5 minutes isn't going to add much to the server. Same goes for the traffic script. If you are making a decent amount of cash, I'd add a server to the mix and either load balance or put 3 sites on it. Another thing to look into is putting the spider script on it.
Powered by vBulletin® Version 3.8.8
Copyright ©2000 - 2025, vBulletin Solutions, Inc.