Discuss what's fucking going on, and which programs are best and worst. One-time "program" announcements from "established" webmasters are allowed.
#1
Confirmed User
Join Date: May 2001
Location: Cyprus
Posts: 209
To Lightning - about girlshost.com (trouble....)
Yesterday I tried to submit my gallery to Richards-Realm...
But got "Your site or your host has been banned." I dropped him a mail, and he replied: "Unfortunately your host had trouble serving the pages and 404s kept popping up." Can you comment on this? ------------------ Make Money, Not War!
#2
Confirmed User
Join Date: Jan 2001
Location: o-HI-o
Posts: 7,183
What is the gal URL?
#3
Confirmed User
Join Date: May 2001
Location: Cyprus
Posts: 209
#4
Confirmed User
Join Date: Jan 2001
Location: o-HI-o
Posts: 7,183
It works fine here. I'm about 500 miles from the servers, maybe 800 tops.
RR doesn't accept teen pages with that word either. [This message has been edited by Gemini (edited 06-07-2001).]
#5
Confirmed User
Join Date: May 2001
Location: Cyprus
Posts: 209
Heh
1. Richard took my previous galleries - the ones with "Teen" in the wording...
2. Yes, the page is visible... but you can't catch the redirect in a single load. I sent this page to 5 of my friends; 3 of them got a 404...
#6
Too lazy to set a custom title
Industry Role:
Join Date: May 2001
Location: My network is hosted at TECHIEMEDIA.net ...Wait, you meant where am *I* located at? Oh... okay, I'm in Winnipeg, Canada. Oops. :)
Posts: 51,460
Loads fine for me.....way up here in the great white north
![]() <font face="Arial">___________ CD ![]() * <a href="http://www4.smutserver.com/babes/bgnetwork/submit.html" TARGET="_blank"><font color="#27FFFC">Babe Galleries Network</font></a> < -- submit galleries here * <a href="http://www.oliver-klozov.com/cgi-bin/refer.cgi?ref=cdsmithok" TARGET="_blank"><font color="#CBE6FF">60% of all signups, 40% of all rebills</font></a> + High Quality free content, mthly cash bonuses * <a href="http://members.home.net/cyberdogs/Anti-Censorship%20Site/" TARGET="_blank"><font color="#FFCCCC">Sites Against Censorship</a><font color="#EDDDDD"> Support us, support your future</font></font> |
#7
Confirmed User
Join Date: Jan 2001
Location: o-HI-o
Posts: 7,183
I see that now, Lust, on the redir.
On RR he killed our pages with "teen" and such words and sent a link to his no-no's, and sure enough those words are banned. He prefers "Young Women," I think. It's been a while since I read it. Might be his new reviewers missing the wording. (If he ever got them)
#8
GFY Chaperone
Join Date: Jan 2001
Location: Adult.com
Posts: 9,846
Works for me.
#9
Too lazy to set a custom title
Industry Role:
Join Date: May 2001
Location: My network is hosted at TECHIEMEDIA.net ...Wait, you meant where am *I* located at? Oh... okay, I'm in Winnipeg, Canada. Oops. :)
Posts: 51,460
Richard rejects anything to do with "teen" or "teens" or "lOlita" totally. He has a category for that called "young ladies" but that's it...and you can't have the word "teen" in your description either.
I'm 3 for 3 this week on RR, woooohooooo
#10
Confirmed User
Join Date: May 2001
Location: Cyprus
Posts: 209
People!
I was talking about a POSSIBLE redirect.... NOT about whether RR accepts teen galleries or not.... [This message has been edited by Lust (edited 06-07-2001).]
#11
Registered User
Join Date: Jan 2001
Location: Kansas
Posts: 3,560
So I guess the story that ends with "she was the sweetest teen I ever had the pleasure to shake hands" would be out at Richard's Realm?
------------------ Moongem Erotica Moongem Fiction |
#12
Confirmed User
Join Date: Mar 2001
Location: ReliableServers.Com
Posts: 1,462
Lust, I agree with you on this. I get caught in a lightningfree 404 hell hole at least once a week, but I refresh and the page "magically" appears back. I'll take some screenshots of it next time it happens so I can post them in here.
#14
Confirmed User
Industry Role:
Join Date: Feb 2001
Location: The right place
Posts: 847
Witnessed the same thing on galleries posted to my TGPs.
I know how it occurs too. It's when a server gets hit with traffic so hard it cannot deliver a page within the amount of time that has been set for it, thus generating a timeout error, and I suspect it has been set to serve a 404 on all errors. No biggie actually.
I think a large part of TGPs blacklisting freehosts for reasons like that is the fact that the owners that run the TGP basically only know how to switch on a PC and a little HTML; they don't have a clue how servers and connections work (The Hun is an exception on that, though).
My 2 cents a minute,
Wolfshade
------------------ Get paid per minute! Dialerclopedia
#15
Confirmed User
Join Date: Feb 2001
Location: Montreal
Posts: 1,526
Some clues for those who need them:
<CENTER> <H2> Analyzing the Overload Behavior of a Simple Web Server </H2> Niels Provos, University of Michigan <A HREF="mailto:[email protected]"> <TT><[email protected]></TT></A> Chuck Lever, AOL-Netscape, Incorporated <A HREF="mailto:[email protected]"> <TT><[email protected]></TT></A> Stephen Tweedie, Red Hat Software <A HREF="mailto:[email protected]"> <TT><[email protected]></TT></A> Linux Scalability Project Center for Information Technology Integration University of Michigan, Ann Arbor <TT> <A HREF="mailto:[email protected]"> [email protected]</A> <A HREF="http://www.citi.umich.edu/projects/linux-scalability"> http://www.citi.umich.edu/projects/linux-scalability</A> </TT> Abstract <TABLE> <TR> <TD WIDTH="600"> Linux introduces POSIX Real Time signals to report I/O activity on multiple connections with more scalability than traditional models. In this paper we explore ways of improving the scalability and performance of POSIX RT signals even more by measuring system call latency and by creating bulk system calls that can deliver multiple signals at once. </TD> </TR> </TABLE> <FONT FACE="Helvetica" SIZE="-2"> This document was written as part of the Linux Scalability Project. The work described in this paper was supported via generous grants from the Sun-Netscape Alliance, Intel, Dell, and IBM. This document is Copyright © 2000 by AOL-Netscape, Inc., and by the Regents of the University of Michigan. Trademarked material referenced in this document is copyright by its respective owner. </FONT> </CENTER> <HR> <H3>1. Introduction</H3> Experts on network server architecture have argued that servers making use of I/O completion events are more scalable than today's servers [<A HREF="#2">2</A>, <A HREF="#3">3</A>, <A HREF="#5">5</A>]. In Linux, POSIX Real-Time (RT) signals can deliver I/O completion events. Unlike traditional UNIX signals, RT signals carry a data payload, such as a specific file descriptor that just completed.
Signals with a payload can enable network server applications to respond immediately to network requests, as if they were event-driven. An added benefit of RT signals is that they can be queued in the kernel and delivered to an application one at a time, in order, leaving an application free to pick up I/O completion events when convenient. The RT signal queue is a limited resource. When it is exhausted, the kernel signals an application to switch to polling, which delivers multiple completion events at once. Even when no signal queue overflow happens, however, RT signals may have inherent limitations due to the number of system calls needed to manage events on a single connection. This number may not be critical if the queue remains short, for instance while server workload is easily handled. When the server becomes loaded, the signal queue can cause system call overhead to dominate server processing, with the result that events are forced to wait a long time in the signal queue. Linux has been carefully designed so that system calls are not much more expensive than library calls. There are no more cache effects for a system call than there are for a library call, and few virtual memory effects because the kernel appears in every process's address space. However, added security checks during system calls and hardware overhead caused by crossing protection domains make it expedient to avoid multiple system calls when fewer will do. Process switching is still comparatively expensive, often resulting in TLB flushes and virtual memory overhead. If a system call must sleep, it increases the likelihood that the kernel will switch to a different process. By lowering the number of system calls required to accomplish a given task, we reduce the likelihood of harm to an application's cache resident set.
Improving the scalability and reducing the overhead of often-used system calls has a direct impact on the scalability of network servers [<A HREF="#1">1</A>, <A HREF="#4">4</A>]. Reducing wait time for blocking system calls gives multithreaded server applications more control over when and where requested work is done. Combining several functions into fewer system calls has the same effect. In this paper, we continue work begun in "Scalable Network I/O for Linux" by Provos, et al. [<A HREF="#9">9</A>]. We measure the effects of system call latency on the performance and scalability of a simple web server based on an RT signal event model. Of special interest is the way server applications gather pending RT signals. Today applications use <TT>sigwaitinfo()</TT> to dequeue pending signals one at a time. We create a new interface, called <TT>sigtimedwait4()</TT>, that is capable of delivering multiple signals to an application at once. We use <TT>phhttpd</TT> as our web server. <TT>Phhttpd</TT> is a static-content caching front end for full-service web servers such as Apache [<A HREF="#8">8</A>]. Brown created <TT>phhttpd</TT> to demonstrate the POSIX RT signal mechanism, added to Linux during the 2.1 development series and completed during the 2.3 series [<A HREF="#2">2</A>]. We drive our test server with <TT>httperf</TT> [<A HREF="#6">6</A>]. An added client creates high-latency, low-bandwidth connections, as in Banga and Druschel [<A HREF="#7">7</A>]. Section 2 introduces POSIX Real-Time signals and describes how server designers can employ them. It also documents the <TT>phhttpd</TT> web server. Section 3 motivates the creation of our new system call. We describe our benchmark in Section 4, and discuss the results of the benchmark in Section 5. We conclude in Section 6. <HR> ------------------ wiZd0m Fortune Pussy Adult Links |
#16
Confirmed User
Join Date: Feb 2001
Location: Montreal
Posts: 1,526
<H3>2. POSIX Real-Time Signals and the <TT>phhttpd</TT> Web Server</H3> In this section, we introduce POSIX Real-Time signals (RT signals), and provide an example of their use in a network server. <H4>2.1 Using SIGIO with non-blocking sockets</H4> To understand how RT signals provide an event notification mechanism, we must first understand how signals drive I/O in a server application. We recapitulate Stevens' illustration of signal-driven I/O here [<A HREF="#10">10</A>]. An application follows these steps to enable signal-driven I/O: <OL> <LI> The application assigns a <TT>SIGIO</TT> signal handler with <TT>signal()</TT> or <TT>sigaction()</TT>. </LI> <LI> The application creates a new socket via <TT>socket()</TT> or <TT>accept()</TT>. </LI> <LI> The application assigns an owner pid, usually its own pid, to the new socket with <TT>fcntl(fd, F_SETOWN, newpid)</TT>. The owner then receives signals for this file descriptor. </LI> <LI> The application enables non-blocking I/O on the socket with <TT>fcntl(fd, F_SETFL, O_ASYNC)</TT>. </LI> <LI> The application responds to signals either with its signal handler, or by masking these signals and picking them up synchronously with <TT>sigwaitinfo()</TT>. </LI> </OL> The kernel raises <TT>SIGIO</TT> for a variety of reasons: <UL> <LI> A connection request has completed on a listening socket. </LI> <LI> A disconnect request has been initiated. </LI> <LI> A disconnect request has completed. </LI> <LI> Half of a connection has been shut down. </LI> <LI> Data has arrived on a socket. </LI> <LI> A write operation has completed. </LI> <LI> An out-of-band error has occurred. </LI> </UL> When using old-style signal handlers, this mechanism has no way to inform an application which of these conditions occurred. POSIX defines the <TT>siginfo_t</TT> struct (see Fig. 1), which, when used with the <TT>sigwaitinfo()</TT> system call, supplies a signal reason code that distinguishes among the conditions listed above.
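The steps above can be sketched in a short Linux-only C function. This is an illustration written for this thread, not code from the paper; it uses the Linux <TT>F_SETSIG</TT> extension (covered in section 2.2) so that the delivered <TT>siginfo_t</TT> actually names the descriptor, and a <TT>socketpair()</TT> stands in for an accepted connection:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

/* Walk through the setup steps on one end of a socketpair and pick up
 * the resulting descriptor-ready RT signal with sigwaitinfo().
 * Returns 0 if si_fd identifies the right descriptor, -1 otherwise. */
int rt_signal_demo(void)
{
    int sv[2], signum = SIGRTMIN + 1;   /* any free RT signal number */
    sigset_t set;
    siginfo_t info;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return -1;

    /* Mask the signal so we can pick it up synchronously (step 5). */
    sigemptyset(&set);
    sigaddset(&set, signum);
    sigprocmask(SIG_BLOCK, &set, NULL);

    /* Steps 3-4: set the owner, choose the signal (Linux F_SETSIG),
     * and enable signal-driven, non-blocking I/O. */
    fcntl(sv[0], F_SETOWN, getpid());
    fcntl(sv[0], F_SETSIG, signum);
    fcntl(sv[0], F_SETFL, O_ASYNC | O_NONBLOCK);

    /* Data arriving on sv[0] now queues our RT signal. */
    write(sv[1], "x", 1);

    int ok = sigwaitinfo(&set, &info) == signum && info.si_fd == sv[0];
    close(sv[0]);
    close(sv[1]);
    return ok ? 0 : -1;                 /* payload named the fd */
}
```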
Detailed signal information is also available for new-style signal handlers, as defined by the latest POSIX specification [<A HREF="#15">15</A>]. This mechanism cannot say what file descriptor caused the signal, thus it is not useful for servers that manage more than one TCP socket at a time. Since its inception, it has been used successfully only with UDP-based servers [<A HREF="#10">10</A>]. <H4>2.2 POSIX Real-Time signals</H4> POSIX Real-Time signals provide a more complete event notification system by allowing an application to associate signals with specific file descriptors. For example, an application can assign signal numbers larger than <TT>SIGRTMIN</TT> to specific open file descriptors using <TT>fcntl(fd, F_SETSIG, signum)</TT>. The kernel raises the assigned signal whenever there is new data to be read, a write operation completes, the remote end of the connection closes, and so on, as with the basic <TT>SIGIO</TT> model described in the previous section. Unlike normal signals, however, RT signals can queue in the kernel. If a normal signal occurs more than once before the kernel can deliver it to an application, the kernel delivers only one instance of that signal. Other instances of the same signal are dropped. However, RT signals are placed in a FIFO queue, creating a stream of event notifications that can drive an application's response to incoming requests. Typically, to avoid complexity and race conditions, and to take advantage of the information available in <TT>siginfo_t</TT> structures, applications mask the chosen RT signals during normal operation. An application uses <TT>sigwaitinfo()</TT> or <TT>sigtimedwait()</TT> to pick up pending signals synchronously from the RT signal queue. The kernel must generate a separate indication if it cannot queue an RT signal, for example, if the RT signal queue overflows, or kernel resources are temporarily exhausted. The kernel raises the normal signal <TT>SIGIO</TT> if this occurs.
If a server uses RT signals to monitor incoming network activity, it must clear the RT signal queue and use another mechanism such as <TT>poll()</TT> to discover remaining pending activity when <TT>SIGIO</TT> is raised. Finally, RT signals can deliver a payload. <TT>Sigwaitinfo()</TT> returns a <TT>siginfo_t</TT> struct (see Fig. 1) for each signal. The <TT>_fd</TT> and <TT>_band</TT> fields in this structure contain the same information as the <TT>fd</TT> and <TT>revents</TT> fields in a <TT>pollfd</TT> struct (see Fig. 2). <PRE>struct siginfo { int si_signo; int si_errno; int si_code; union { /* other members elided */ struct { int _band; int _fd; } _sigpoll; } _sifields; } siginfo_t; </PRE> <FONT SIZE="-1"> Figure 1. Simplified <TT>siginfo_t</TT> struct. </FONT> <PRE>struct pollfd { int fd; short events; short revents; }; </PRE> <FONT SIZE="-1"> Figure 2. Standard <TT>pollfd</TT> struct </FONT> <H5>2.2.1 Mixing threads and RT signals</H5> According to the GNU <TT>info</TT> documentation that accompanies <TT>glibc</TT>, threads and signals can be mixed reliably by blocking all signals in all threads, and picking them up using one of the system calls from the <TT>sigwait()</TT> family [<A HREF="#16">16</A>]. POSIX semantics for signal delivery do not guarantee that threads waiting in <TT>sigwait()</TT> will receive particular signals. According to the standard, an external signal is addressed to the whole process (the collection of all threads), which then delivers it to one particular thread. The thread that actually receives the signal is any thread that does not currently block the signal. Thus, only one thread in a process should wait for normal signals while all others should block them. In Linux, however, each thread is a kernel process with its own PID, so external signals are always directed to one particular thread. If, for instance, another thread is blocked in <TT>sigwait()</TT> on that signal, it will not be restarted. 
This is an important element of the design of servers using an RT signals-based event core. All normal signals should be blocked and handled by one thread. On Linux, other threads may handle RT signals on file descriptors, because file descriptors are "owned" by a specific thread. The kernel will always direct signals for that file descriptor to its owner. <H5>2.2.2 Handling a socket close operation</H5> Signals queued before an application closes a connection will remain on the RT signal queue, and must be processed and/or ignored by applications. For instance, when a socket closes, a server application may receive previously queued read or write events before it picks up the close event, causing it to attempt inappropriate operations on the closed socket. When a socket is closed on the remote end, the local kernel queues a <TT>POLL_HUP</TT> event to indicate the remote hang-up. <TT>POLL_IN</TT> signals occurring earlier in the event stream usually cause an application to read a socket, and when it does in this case, it receives an EOF. Applications that close sockets when they receive <TT>POLL_HUP</TT> must ignore any later signals for that socket. Likewise, applications must be prepared for reads to fail at any time, and not depend only on RT signals to manage socket state. Because RT signals, unlike normal signals, are queued, server applications cannot treat these signals as interrupts. The kernel can immediately re-use a freshly closed file descriptor, confusing an application that then processes (rather than ignores) <TT>POLL_IN</TT> signals queued by previous operations on an old socket with the same file descriptor number. This exposes the unwary application designer to significant race conditions. <H4>2.3 Using RT Signals in a Web Server</H4> <TT>Phhttpd</TT> is a static-content caching front end for full-service web servers such as Apache [<A HREF="#2">2</A>, <A HREF="#8">8</A>].
Brown created <TT>phhttpd</TT> to demonstrate the POSIX RT signal mechanism, added to the Linux kernel during the 2.1.x kernel development series and completed during the 2.3.x series. We describe it here to document its features and design, and to help motivate the design of <TT>sigtimedwait4()</TT>. Our discussion focuses on how <TT>phhttpd</TT> makes use of RT signals. <H5>2.3.1 Assigning RT signal numbers</H5> Even though a unique signal number could be assigned to each file descriptor, <TT>phhttpd</TT> uses one RT signal number for all file descriptors in all threads for two reasons. <OL> <LI> Lowest numbered RT signals are delivered first. If all signals use the same number, the kernel always delivers RT signals in the order in which they arrive. </LI> <LI> There is no standard library interface for multithreaded applications to allocate signal numbers atomically. Allocating a single number once during startup and giving the same number to all threads alleviates this problem. </LI> </OL> <H5>2.3.2 Threading model</H5> <TT>Phhttpd</TT> operates with one or more worker threads that handle RT signals. Additionally, an extra thread is created for managing logs. A separate thread pre-populates the file data cache, if requested. Instead of handling incoming requests with signals, <TT>phhttpd</TT> may use polling threads. Usually, though, <TT>phhttpd</TT> creates a set of RT signal worker threads, and a matching set of polling threads known as sibling threads. The purpose of sibling threads is described later. Each RT signal worker thread masks off the file descriptor signal, then iterates, picking up each RT signal via <TT>sigwaitinfo()</TT> and processing it, one at a time. To reduce system call rate, <TT>phhttpd</TT> <TT>read()</TT>s on a new connection as soon as it has <TT>accept()</TT>ed it. Often, on high-bandwidth connections, data is ready to be read as soon as a connection is <TT>accept()</TT>ed.
<TT>Phhttpd</TT> reads this data and sends a response immediately to prevent another trip through the "event" loop, reducing the negative cache effects of handling other work in between the accept and the read operations. Because the read operation is non-blocking, it fails with <TT>EAGAIN</TT> if data is not immediately present. The thread proceeds normally back to the "event" loop in this case to wait for data to become available on the socket. <H5>2.3.3 Load balancing</H5> When more than one thread is available, a simple load balancing scheme passes listening sockets among the threads by reassigning the listener's owner via <TT>fcntl(fd, F_SETOWN, newpid)</TT>. After a thread accepts an incoming connection, it passes its listener to the next worker thread in the chain of worker threads. This mechanism requires that each thread have a unique pid, a property of the Linux threading model. <H5>2.3.4 Caching responses</H5> Because <TT>phhttpd</TT> is not a full-service web server, it must identify requests as those it can handle itself, or those it must pass off to its backing server. Local files that <TT>phhttpd</TT> can access are cached by mapping them and storing the map information in a hash, along with a pre-formed http response. When a cached file is requested, <TT>phhttpd</TT> sends the cached response via <TT>write()</TT> along with the mapped file data. Logic exists to handle the request via <TT>sendfile()</TT> instead. In the long run, this may be more efficient for several reasons. First, there is a limited amount of address space per process. This limits the total number of cached bytes, especially because these bytes share the address space with the pre-formed responses, hash information, heap and stack space, and program text. Using <TT>sendfile()</TT> allows data to be cached in extended memory (memory addressed higher than one or two gigabytes). Next, as the number of mapped objects grows, mapping a new object takes longer. 
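The eager non-blocking read described in section 2.3.2 amounts to something like the following sketch (a hypothetical helper written for this thread, not <TT>phhttpd</TT> source); the <TT>EAGAIN</TT> branch is the path back to the event loop:

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to read immediately after accepting a connection: returns 1 if
 * data was already waiting, 0 if the non-blocking read reported
 * EAGAIN (caller goes back to the event loop), -1 on a real error. */
int eager_read(int fd, char *buf, size_t len)
{
    /* Ensure the descriptor is non-blocking, preserving other flags. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    ssize_t n = read(fd, buf, len);
    if (n >= 0)
        return 1;                      /* data (or EOF) was ready */
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        return 0;                      /* nothing yet: wait for a signal */
    return -1;
}
```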
On Linux, finding an unused area of an address space requires at least one search that is linear in the number of mapped objects in that address space. Finally, creating these maps requires expensive page table and TLB flush operations that can hurt system-wide performance, especially on SMP hardware. <H5>2.3.5 Queue overflow recovery</H5> The original <TT>phhttpd</TT> web server recovered from signal queue overflow by passing all file descriptors owned by a signal handling worker thread to a pre-existing poll-based worker thread, known as its sibling. The sibling then cleans up the signal queue, polls over all the file descriptors, processes remaining work, and passes all the file descriptors back to the original signal worker thread. On a server handling perhaps thousands of connections, this creates considerable overhead during a period when the server is already overloaded. We modified the queue overflow handler to reduce this overhead. The server now handles signal queue overflow in the same thread as the RT signal handler; sibling threads are no longer needed. This modification appears in <TT>phhttpd</TT> version 0.1.2. It is still necessary, however, to build a fresh <TT>poll_fd</TT> array completely during overflow processing. This overhead slows the server during overflow processing, but can be reduced by maintaining the <TT>poll_fd</TT> array concurrently with signal processing. RT signal queue overflow is probably not as rare as some would like. Some kernel designs have a single maximum queue size for the entire system. If any aberrant application stops picking up its RT signals (the thread that picks up RT signals may cause a segmentation fault, for example, while the rest of the application continues to run), the system-wide signal queue will fill. All other applications on the system that use RT signals will eventually be unable to proceed without recovering from a queue overflow, even though they are not the cause of it. 
It is well known that Linux is not a real-time operating system, and that unbounded latencies sometimes occur. Application design may also prohibit a latency upper bound guarantee. These latencies can delay RT signals, causing the queue to grow long enough that recovery is required even when servers are fast enough to handle heavy loads under normal circumstances. <HR> <H3>3. New interface: <TT>sigtimedwait4()</TT></H3> To reduce system call overhead and remove a potential source of unnecessary system calls, we'd like the kernel to deliver more than one signal per system call. One mechanism to do this is implemented in the <TT>poll()</TT> system call. The application provides a buffer for a vector of results. The system call returns the number of results it stored in the buffer, or an error. Our new system call interface combines the multiple result delivery of <TT>poll()</TT> with the efficiency of POSIX RT signals. The interface prototype appears in Fig. 3. <PRE>int sigtimedwait4(const sigset_t *set, siginfo_t *infos, int nsiginfos, const struct timespec *timeout); </PRE> <FONT SIZE="-1"> Figure 3. <TT>sigtimedwait4()</TT> prototype </FONT> Like its cousin <TT>sigtimedwait()</TT>, <TT>sigtimedwait4()</TT> provides the kernel with a set of signals in which it is interested, and a timeout value that is used when no signals are immediately ready for delivery. The kernel selects queued pending signals from the signal set specified by <TT>set</TT>, and returns them in the array of <TT>siginfo_t</TT> structures specified by <TT>infos</TT> and <TT>nsiginfos</TT>. Providing a buffer with enough room for only one <TT>siginfo_t</TT> struct forces <TT>sigtimedwait4()</TT> to behave almost like <TT>sigtimedwait()</TT>. The only difference is that specifying a negative timeout value causes <TT>sigtimedwait4()</TT> to behave like <TT>sigwaitinfo()</TT>. The same negative timeout instead causes an error return from <TT>sigtimedwait()</TT>. 
Retrieving more than one single signal at a time has important benefits. First and most obviously, it reduces the average number of transitions between user space and kernel space required to process a single server request. Second, it reduces the number of times per signal the per-task signal spinlock is acquired and released. This improves concurrency and reduces cache ping-ponging on SMP hardware. Third, it amortizes the cost of verifying the user's result buffer, although some believe this is insignificant. Finally, it allows a single pass through the signal queue for all pending signals that can be returned, instead of a pass for each pending signal. The <TT>sigtimedwait4()</TT> system call enables efficient server implementations by allowing the server to "compress" signals-- if it sees multiple read signals on a socket, for instance, it can empty that socket's read buffer just once. The <TT>sys_rt_sigtimedwait()</TT> function is a moderate CPU consumer in our benchmarks, according to the results of kernel EIP histograms. About three fifths of the time spent in the function occurs in the second critical section in Fig. 4. <PRE>spin_lock_irq(&current->sigmask_lock); sig = dequeue_signal(&these, &info); if (!sig) { sigset_t oldblocked = current->blocked; sigandsets(&current->blocked, &current->blocked, &these); recalc_sigpending(current); spin_unlock_irq(&current->sigmask_lock); timeout = MAX_SCHEDULE_TIMEOUT; if (uts) timeout = (timespec_to_jiffies(&ts) + (ts.tv_sec || ts.tv_nsec)); current->state = TASK_INTERRUPTIBLE; timeout = schedule_timeout(timeout); spin_lock_irq(&current->sigmask_lock); sig = dequeue_signal(&these, &info); current->blocked = oldblocked; recalc_sigpending(current); } spin_unlock_irq(&current->sigmask_lock); </PRE> <FONT SIZE="-1"> Figure 4. This excerpt of the <TT>sys_rt_sigtimedwait()</TT> kernel function shows two critical sections. The most CPU time is consumed in the second critical section. 
</FONT> The <TT>dequeue_signal()</TT> function contains some complexity that we can amortize across the total number of dequeued signals. This function walks through the list of queued signals looking for the signal described in <TT>info</TT>. If we have a list of signals to dequeue, we can walk the signal queue once picking up all the signals we want. <HR>
#17
Confirmed User
Join Date: Feb 2001
Location: Montreal
Posts: 1,526
<H3>4. Benchmark description</H3> Our test harness consists of two machines running Linux connected via a 100 Mb/s Ethernet switch. The workload is driven by an Intel SC450NX with four 500Mhz Xeon Pentium III processors (512Kb of L2 cache each), 512Mb of RAM, and a pair of SYMBIOS 53C896 SCSI controllers managing several LVD 10KRPM drives. Our web server runs on custom-built hardware equipped with a single 400Mhz AMD K6-2 processor, 64Mb of RAM, and a single 8G 7.2KRPM IDE drive. The server hardware is small so that we can easily drive the server into overload. We also want to eliminate any SMP effects on our server, so it has only a single CPU. Our benchmark configuration contains only a single client host and a single server host, which makes the simulated workload less realistic. However, our benchmark results are strictly for comparing relative performance among our implementations. We believe the results also give an indication of real-world server performance. A web server's static performance naturally depends on the size distribution of requested documents. Larger documents cause sockets and their corresponding file descriptors to remain active over a longer time period. As a result the web server and kernel have to examine a larger set of descriptors, making the amortized cost of polling on a single file descriptor larger. In our tests, we request a 1 Kbyte document, a typical <TT>index.html</TT> file from the <TT>monkey.org</TT> web site. <H4>4.1 Offered load</H4> Scalability is especially critical to modern network service when serving many high-latency connections. Most clients are connected to the Internet via high-latency connections, such as modems, whereas servers are usually connected to the Internet via a few high bandwidth, low-latency connections. This creates resource contention on servers because connections to high-latency clients are relatively long-lived, tying up server resources. 
They also induce a bursty and unpredictable interrupt load on the server [<A HREF="#7">7</A>]. Most web server benchmarks don't simulate high-latency connections, which appear to cause difficult-to-handle load on real-world servers [<A HREF="#5">5</A>]. We've added an extra client that runs in conjunction with the <TT>httperf</TT> benchmark to simulate these slower connections, to examine the effects of our improvements on more realistic server workloads [<A HREF="#6">6</A>]. This client program opens a connection, but does not complete an HTTP request. To keep the number of high-latency clients constant, these clients reopen their connection if the server times them out. In previous work, we noticed server performance change as the number of inactive connections varied [<A HREF="#9">9</A>]. As a result of this work, one of the authors modified <TT>phhttpd</TT> to correct this problem. The latest version of <TT>phhttpd</TT> (0.1.2 as of this writing) does not show significant performance degradation as the number of inactive connections increases. Therefore, the benchmark results we present here show performance with no extra inactive connections. There are several system limitations that influence our benchmark procedures. Only a limited number of file descriptors is available to a single process; <TT>httperf</TT> assumes that the maximum is 1024. We modified <TT>httperf</TT> to cope dynamically with a large number of file descriptors. Additionally, because we use only a single client and server in our test harness, we can have only about 60,000 open sockets at a single point in time. When a socket closes, it enters the TIME_WAIT state for sixty seconds, so we must avoid reaching the port number limitation. We therefore run each benchmark for 35,000 connections, and then wait for all sockets to leave the TIME_WAIT state before we continue with the next benchmark run.
In the following tests, we run <TT>httperf</TT> with 4096 file descriptors, and <TT>phhttpd</TT> with five thousand file descriptors. <H4>4.2 Execution Profiling</H4> To assess our modifications to the kernel, we use the EIP sampler built in to the Linux kernel. This sampler checks the value of the instruction pointer (EIP register) at fixed intervals, and populates a hash table with the number of samples it finds at particular addresses. Each bucket in the hash table reports the results of a four-byte range of instruction addresses. A user-level program later extracts the hash data and creates a histogram of CPU time matched against the kernel's symbol table. The resulting histogram demonstrates which routines are most heavily used, and how efficiently they are implemented. The granularity of the histogram allows us to see not only which functions are heavily used, but also where the most time is spent in each function. <H3>5. Results and Discussion</H3> In this section we present the results of our benchmarks, and describe some new features that our new system call API enables. <H4>5.1 Basic performance and scalability results</H4> As described previously, our web server is a single processor host running a Linux 2.2.16 kernel modified to include our implementation of <TT>sigtimedwait4()</TT>. The web server application is phhttpd version 0.1.2. We compare an unmodified version with a version modified to use <TT>sigtimedwait4()</TT>. Our benchmark driver is a modified version of <TT>httperf 0.6</TT> running on a four-processor host. Our first test compares the scalability of unmodified <TT>phhttpd</TT> using <TT>sigwaitinfo()</TT> to collect one signal at a time with the scalability of <TT>phhttpd</TT> using <TT>sigtimedwait4()</TT> to collect many signals at once. The modified version of <TT>phhttpd</TT> picks up as many as 500 signals at once during this test. 
<TABLE> <TR> <TD> [img]normal.png[/img] </TD> <TD> [img]wait2.png[/img] </TD> </TR> <TR> <TD> <FONT SIZE="-1"> Graph 1. Scalability of the <TT>phhttpd</TT> web server. This graph shows how a single threaded <TT>phhttpd</TT> web server scales as request rate increases. The axes are in units of requests per second. </FONT> </TD> <TD> <FONT SIZE="-1"> Graph 2. Scalability of <TT>phhttpd</TT> using <TT>sigtimedwait4()</TT>. The signal buffer size was five hundred signals, meaning that the web server could pick up as many as five hundred events at a time. Compared to Graph 1, there is little improvement. </FONT> </TD> </TR> </TABLE> Graphs 1 and 2 show that picking up more than one RT signal at a time gains little. Only minor changes in behavior occur when varying the maximum number of signals that can be picked up at once. The maximum throughput attained during the test increases slightly. This result suggests that the largest system call bottleneck is not where we first assumed. Picking up signals appears to be an insignificant part of server overhead. We hypothesize that responding to requests, rather than picking them up, is where the server spends most of its effort. <H4>5.2 Improving overload performance</H4> While the graphs for <TT>sigtimedwait4()</TT> and <TT>sigwaitinfo()</TT> look disappointingly similar, <TT>sigtimedwait4()</TT> provides new information that we can leverage to improve server scalability. Mogul, et al., refer to "receive livelock," a condition where a server is not deadlocked, but makes no forward progress on any of its scheduled tasks [<A HREF="#12">12</A>]. This is a condition that is typical of overloaded interrupt-driven servers: the server appears to be running flat out, but is not responding to client requests. In general, receive livelock occurs because processing a request to completion takes longer than the time between requests. 
Mogul's study finds that dropping requests as early as possible results in more request completions on overloaded servers. While the study recommends dropping requests in the hardware interrupt level or network protocol stack, we instead implement this scheme at the application level. When the web server becomes overloaded, it resets incoming connections instead of processing the requests. To determine that a server is overloaded, we use a weighted load average, essentially the same as the TCP round trip time estimator [<A HREF="#11">11</A>, <A HREF="#13">13</A>, <A HREF="#14">14</A>]. Our new <TT>sigtimedwait4()</TT> system call returns as many signals as can fit in the provided buffer. The number of signals returned each time <TT>phhttpd</TT> invokes <TT>sigtimedwait4()</TT> is averaged over time. When the load average exceeds a predetermined value, the server begins rejecting requests. Instead of dropping requests at the application level, using the listen backlog might allow the kernel to drop connections even before the application becomes involved in handling a request. Once the backlog overflows, the server's kernel can refuse connections, not even passing connection requests to the server application, further reducing the workload the web server experiences. However, this solution does not handle bursty request traffic gracefully. A moving average such as the RTT estimator smoothes out temporary traffic excesses, providing a better indicator of server workload over time. The smoothing function is computed after each invocation of <TT>sigtimedwait4()</TT>. The number of signals picked up by <TT>sigtimedwait4()</TT> is one of the function's parameters: <CENTER> Avg<SUB>t</SUB> = (1 - a) &times; Avg<SUB>t-1</SUB> + a &times; S<SUB>t</SUB> </CENTER> where S is the number of signals picked up by the most recent invocation of <TT>sigtimedwait4()</TT>; Avg is the moving load average; a is the gain value, controlling how much the current signal count influences the load average; and t is time.
In our implementation, <TT>phhttpd</TT> picks up a maximum of 23 signals. If Avg exceeds 18, <TT>phhttpd</TT> begins resetting incoming connections. Experimentation and the following reasoning influenced the selection of these values. As the server picks up fewer signals at once, the sample rate is higher but the sample quantum is smaller. Only picking up one signal, for example, means we're either overloaded, or we're not. This doesn't give a good indication of the server's load. As we increase the signal buffer size, the sample rate goes down (it takes longer before the server calls <TT>sigtimedwait4()</TT> again), but the sample quantum improves. At some point, the sample rate becomes too slow to adequately detect and handle overload. That is, if we pick up five hundred signals at once, the server either handles or rejects connections for all five hundred signals. The gain value determines how quickly the server reacts to full signal buffers (our "overload" condition). When the gain value approaches 1, the server begins resetting connections almost immediately during bursts of requests. Reducing the gain value allows the server to ride out smaller request bursts. If it is too small, the server may fail to detect overload, resulting in early performance degradation. We found that a gain value of 0.3 was the best compromise between smooth response to traffic bursts and overload reaction time. Graphs 3 and 4 reveal an improvement in overload behavior when an overloaded server resets connections immediately instead of trying to fulfill the requests. Server performance levels off then declines slowly, rather than dropping sharply. In addition, connection error rate is considerably lower. <TABLE> <TR> <TD> [img]wait1.png[/img] </TD> <TD> [img]errors.png[/img] </TD> </TR> <TR> <TD> <FONT SIZE="-1"> Graph 3. Scalability of <TT>phhttpd</TT> with averaged load limiting. 
Overload behavior improves considerably over the earlier runs, which suggests that formulating and sending responses present much greater overhead for the server than handling incoming signals. </FONT> </TD> <TD> <FONT SIZE="-1"> Graph 4. Error rate of <TT>phhttpd</TT> with averaged load limiting. When the server drops connections on purpose, it actually reduces its error rate. </FONT> </TD> </TR> </TABLE> <HR> ------------------ wiZd0m Fortune Pussy Adult Links |
|
|
|
|
|
#18 |
|
Confirmed User
Join Date: Feb 2001
Location: Montreal
Posts: 1,526
|
<H3>6. Conclusions and Future Work</H3>
Using <TT>sigtimedwait4()</TT> enables a new way to throttle web server behavior during overload. By choosing to reset connections rather than respond to incoming requests, our modified web server survives considerable overload scenarios without encountering receive livelock. The <TT>sigtimedwait4()</TT> system call also enables additional efficiency: by gathering signals in bulk, a server application can "compress" signals. For instance, if the server sees multiple read signals on a socket, it can empty that socket's read buffer just once. Further, we demonstrate that more work is done during request processing than in handling and dispatching incoming signals. Lowering signal processing overhead in the Linux kernel has little effect on server performance, but reducing request processing overhead in the web server produces a significant change in server behavior. It remains to be seen whether this request processing latency is due to: <UL> <LI> accepting incoming connections (<TT>accept()</TT> and <TT>read()</TT> system calls) </LI> <LI> writing the response (nonblocking <TT>write()</TT> system call and accompanying data copy operations) </LI> <LI> managing the cache (server-level hash table lookup and <TT>mmap()</TT> system call) </LI> <LI> some unforeseen problem. </LI> </UL> Even though sending the response back to clients requires a copy operation, it is otherwise nonblocking. Finding the response in the server's cache should also be fast, especially considering the cache in our test contains only a single document. Thus we believe future work in this area should focus on the performance of the system calls and server logic that accept and perform the initial read on incoming connections. This paper considers server performance with a single thread on a single processor to simplify our test environment. We should also study how RT signals behave on SMP architectures.
Key factors influencing SMP performance and scalability include thread scheduling policies, the cache-friendliness of the kernel implementation of RT signals, and how well the web server balances load among its worker threads. <H4>6.1. Acknowledgements</H4> The authors thank Peter Honeyman and Andy Adamson for their guidance. We also thank the reviewers for their comments. Special thanks go to Zach Brown for his insights, and to Intel Corporation for equipment loans. <HR> <H3>7. References</H3> <FONT SIZE="-1"> <A NAME="1">[1]</A> G. Banga and J. C. Mogul, "Scalable Kernel Performance for Internet Servers Under Realistic Load," Proceedings of the USENIX Annual Technical Conference, June 1998. <A NAME="2">[2]</A> Z. Brown, phhttpd, <A HREF="http://www.zabbo.net/phhttpd/"> <TT>www.zabbo.net/phhttpd</TT></A>, November 1999. <A NAME="3">[3]</A> Signal driven IO (thread), <A HREF="mailto:[email protected]"> linux-kernel</A> mailing list, November 1999. <A NAME="4">[4]</A> G. Banga. P. Druschel. J. C. Mogul. "Better Operating System Features for Faster Network Servers," SIGMETRICS Workshop on Internet Server Performance, June 1998. <A NAME="5">[5]</A> J. C. Hu, I. Pyarali, D. C. Schmidt, "Measuring the Impact of Event Dispatching and Concurrency Models on Web Server Performance Over High-Speed Networks," Proceedings of the 2<SUP>nd</SUP> IEEE Global Internet Conference, November 1997. <A NAME="6">[6]</A> D. Mosberger and T. Jin, "httperf -- A Tool for Measuring Web Server Performance," SIGMETRICS Workshop on Internet Server Performance, June 1998. <A NAME="7">[7]</A> G. Banga and P. Druschel, "Measuring the Capacity of a Web Server," Proceedings of the USENIX Symposium on Internet Technologies and Systems, December 1997. <A NAME="8">[8]</A> Apache Server, The Apache Software Foundation. <A HREF="http://www.apache.org/"> <TT>www.apache.org</TT></A>. <A NAME="9">[9]</A> N. Provos and C. 
Lever, "Scalable Network I/O in Linux," Proceedings of the USENIX Technical Conference, FREENIX track, June 2000. <A NAME="10">[10]</A> W. Richard Stevens, UNIX Network Programming, Volume I: Networking APIs: Sockets and XTI, 2<SUP>nd</SUP> edition, Prentice Hall, 1998. <A NAME="11">[11]</A> W. Richard Stevens, TCP/IP Illustrated, Volume 1: The Protocols, pp. 299-309, Addison Wesley professional computing series, 1994. <A NAME="12">[12]</A> J. C. Mogul, K. K. Ramakrishnan, "Eliminating Receive Livelock in an Interrupt-driven Kernel," Proceedings of USENIX Technical Conference, January 1996. <A NAME="13">[13]</A> P. Karn and C. Partridge, "Improving Round-Trip Time Estimates in Reliable Transport Protocols," Computer Communication Review, pp. 2-7, vol. 17, no. 5, August 1987. <A NAME="14">[14]</A> V. Jacobson, "Congestion Avoidance and Control," Computer Communication Review, pp. 314-329, vol. 18, no. 4, August 1988. <A NAME="15">[15]</A> 1003.1b-1993 POSIX -- Part 1: API C Language -- Real-Time Extensions (ANSI/IEEE), 1993. ISBN 1-55937-375-X. <A NAME="16">[16]</A> GNU <TT>info</TT> documentation for <TT>glibc</TT>. </FONT> <HR> <H3>Appendix A: Man page for <TT>sigtimedwait4()</TT></H3> <PRE>SIGTIMEDWAIT4(2) Linux Programmer's Manual SIGTIMEDWAIT4(2) NAME sigtimedwait4 - wait for queued signals SYNOPSIS #include <signal.h> int sigtimedwait4(const sigset_t *set, siginfo_t *infos, int nsiginfos, const struct timespec *timeout); typedef struct siginfo { int si_signo; /* signal from signal.h */ int si_code; /* code from above */ ... int si_value; ... } siginfo_t; struct timespec { time_t tv_sec; /* seconds */ long tv_nsec; /* and nanoseconds */ }; DESCRIPTION sigtimedwait4() selects queued pending signals from the set specified by <EM>set</EM>, and returns them in the array of siginfo_t structs specified by <EM>infos</EM> and <EM>nsiginfos</EM>. When multiple signals are pending, the lowest numbered ones are selected. 
The selection order between realtime and non-realtime signals, or between multiple pending non-realtime signals, is unspecified. sigtimedwait4() suspends itself for the time interval specified in the timespec structure referenced by <EM>timeout</EM>. If <EM>timeout</EM> is zero-valued, or no timespec struct is specified, and if none of the signals specified by <EM>set</EM> is pending, then sigtimedwait4() returns immediately with the error EAGAIN. If <EM>timeout</EM> contains a negative value, an infinite timeout is specified. If no signal in <EM>set</EM> is pending at the time of the call, sigtimedwait4() suspends the calling process until one or more signals in <EM>set</EM> become pending, until it is interrupted by an unblocked, caught signal, or until the timeout specified by the timespec structure pointed to by <EM>timeout</EM> expires. If, while sigtimedwait4() is waiting, a signal occurs which is eligible for delivery (i.e., not blocked by the process signal mask), that signal is handled asynchronously and the wait is interrupted. If <EM>infos</EM> is non-NULL, sigtimedwait4() returns as many queued signals as are ready and will fit in the array specified by <EM>infos</EM>. In each siginfo_t struct, the selected signal number is stored in si_signo, and the cause of the signal is stored in si_code. If a payload is queued with the signal, the payload value is stored in si_value. If the value of si_code is SI_NOINFO, only the si_signo member of a siginfo_t struct is meaningful, and the value of all other members of that siginfo_t struct is unspecified. If no further signals are queued for the selected signal, the pending indication for that signal is reset. RETURN VALUES sigtimedwait4() returns the count of siginfo_t structs it was able to store in the buffer specified by <EM>infos</EM> and <EM>nsiginfos</EM>. Otherwise, the function returns -1 and sets errno to indicate any error condition.
ERRORS EINTR The wait was interrupted by an unblocked, caught signal. ENOSYS sigtimedwait4() is not supported by this implementation. EAGAIN No signal specified by <EM>set</EM> was delivered within the specified timeout period. EINVAL timeout specified a tv_nsec value less than 0 or greater than 1,000,000,000. EFAULT The array of siginfo_t structs specified by <EM>infos</EM> and <EM>nsiginfos</EM> was not contained in the calling program's address space. CONFORMING TO Linux AVAILABILITY The sigtimedwait4() system call was introduced in Linux 2.4. SEE ALSO time(2), sigqueue(2), sigtimedwait(2), sigwaitinfo(2) Linux 2.4.0 Last change: 23 August 2000 1 </PRE> <HR> <ADDRESS> <FONT SIZE="2">This paper was originally published in the Proceedings of the 4th Annual Linux Showcase and Conference, Atlanta, October 10-14, 2000, Atlanta, Georgia, USA </FONT> ------------------ wiZd0m Fortune Pussy Adult Links [This message has been edited by wiZd0m (edited 06-08-2001).] |
|
|
|
|
|
#19 |
|
Confirmed User
Join Date: Feb 2001
Location: Montreal
Posts: 1,526
|
Computing is not Meteorology.
------------------ wiZd0m Fortune Pussy Adult Links [This message has been edited by wiZd0m (edited 06-08-2001).] |
|
|
|
|
|
#20 |
|
Confirmed User
Industry Role:
Join Date: Feb 2001
Location: The right place
Posts: 847
|
You got wayyyyyyyyyyy too much time on your hands
![]() Wolfshade ------------------ Get paid per minute! Dialerclopedia |
|
|
|
|
|
#21 |
|
Confirmed User
Industry Role:
Join Date: Jan 2001
Posts: 3,092
|
Lust, first of all I really don't like it when people have a problem and post it on a public board without even attempting to contact me first. OK, now that I've said that: there was an issue with Richards yesterday, but the problem is resolved. Go ahead and post all the Girls Host galleries you want with him.. it's all cool.
Rodent, this post was about a possible Richards Realm and Girls Host issue!! Nothing to do with Lightning Free... It seems you have had several negative comments lately in regard to my hosting. I have one suggestion for you: don't ever use us. That way you can keep your comments to yourself, or at least to me in private. Wolfshade, once again you are soooo right.. Wizdom my dear friend.. you really have way too much free time..LOL... But your extensive knowledge of just about everything regarding the internet amazes me to no end.. I think I need to hire you as my personal advisor......LOL This will be my only comment in regard to this topic. If anyone wants to talk to me, [email protected] will work. I don't use Lensman's board for any type of public pissing matches.. ------------------ Smile & Be Happy Lightning Free Hosting Girls Host Gay Free Hosting [This message has been edited by Lightning (edited 06-08-2001).] |
|
|
|
|
|
#22 |
|
Confirmed User
Join Date: May 2001
Location: Cyprus
Posts: 209
|
Lightning, hear me out, please:
The text of my mail to RR and his answer: ************************************** Hi Unfortunately your host had trouble serving the pages and 404s kept popping up. Regards Jeremy [email protected] http://www.richards-realm.com/ ----- Original Message ----- From: Lust <[email protected]> To: <[email protected]> Sent: 07 June 2001 13:53 Subject: QUESTION! Greet! I have submit my gallery to your site: http://www.girlshost.com/teen/dirtyt...ns0008-06.html But get: "Preparing...............FAILED Your site or your host has been banned." WHY ??? Respectfully yours, Lust ICQ #:86897452 ************************************** It was not I but RR who said your host redirects to a 404..... I did not accuse you, I only asked you to comment! [This message has been edited by Lust (edited 06-08-2001).] |
|
|
|
|
|
#23 |
|
赤い靴 call me 202-456-1111
Industry Role:
Join Date: Feb 2001
Location: The Valley
Posts: 14,831
|
Lightning, he just doesn't get it.
|
|
|
|
|
|
#24 |
|
Confirmed User
Industry Role:
Join Date: Jan 2001
Posts: 3,092
|
OK, to all who have read and/or commented on this thread.
After talking with Lust on ICQ, we are now friends. He was correct that there was a problem yesterday between Richards Realm and my Girls Host. I spoke with Richard and there was only a misunderstanding. There are NO problems at Richards anymore with Girls Host.. (Lightning Free still has a few things to work out there..LOL) But rest assured, Lust and I have straightened things out. As a matter of fact, in my heat of rage I went into his account ready to delete it, and found that he makes some Very Nice galleries. So we spoke and now everything is cool. See how things can work out if people work together and don't just piss on each other.. ![]() This thread is over... ------------------ Smile and Be Happy Lightning Free Hosting Girls Host Gay Free Hosting |
|
|
|
|
|
#25 | |
|
Confirmed User
Join Date: Mar 2001
Location: ReliableServers.Com
Posts: 1,462
|
Quote:
|
|
|
|
|
|
|
#26 |
|
GFY Chaperone
Join Date: Jan 2001
Location: Adult.com
Posts: 9,846
|
Sometimes I get 404s with Yahoo, welcome to the Internet.
If you never want 404s, get your own server.
|
|
|
|
|
#27 |
|
Confirmed User
Industry Role:
Join Date: Jan 2001
Posts: 3,092
|
Rodent, that post has a "negative effect" on my business. So yes, I take it as negative, especially because it is public.
These types of comments are personal and can be dealt with via email. How would you like it if someone started putting posts on message boards about you blind-linking a thumb??? Now whether it was true or not, you would be pissed. But I bet you would be much happier if, instead of putting it up in public, they sent you an email??.... ------------------ Smile and Be Happy Lightning Free Hosting Girls Host Gay Free Hosting |
|
|
|
|
|
#28 |
|
Confirmed User
Join Date: Feb 2001
Location: The bushes behind your house
Posts: 2,303
|
Lightning - pleeaase calm down! The chances of anyone visiting this site and deciding that you are a poor host is absolutely NIL!!!
Now go and make yourself a TGP only listing your lightningfree and girls hosts sites - chop chop! |
|
|
|
|
|
#29 |
|
HAL 9000
Industry Role:
Join Date: May 2001
Posts: 34,515
|
Many people tell companies, "You acquire private data about me without my knowledge," or "You are doing things that are not written in your terms of service." This is why there are third-party audit firms: to make sure companies do what's written in their Terms of Service. And if they don't, they tell everyone (because they have protection under the federal whistleblower act).
The only way you will stop suspicion is to bring in one or two engineers qualified to review your operations. They will look at how you handle the traffic and produce a transparent report. This is the only way you will have a 100% defense against any claims of redirecting.. (i.e., that the 404 is exactly what you say the problem is.) This is a friendly recommendation that an honest business person should evaluate in order to dissipate any confusion about his business practices. Look at Lensman: he has 3rd-party stats and 3rd-party billing (he could have his own merchant account and his own stats collection), but he chose not to, so he cannot be accused of doing things he is not doing. You can do the same and clear your name forever. So the next time someone bitches about your service, you'll send him to the page where you'll have the report, and you can easily tell him GFY ;-) |
|
|
|
|
|
#30 |
|
Confirmed User
Join Date: Jan 2001
Location: Planet Earth
Posts: 130
|
Some freehost owners are impossible to reach through e-mail or ICQ... This is not the case with Lightning.
I e-mailed him about the RR ban on lightningfree and girls host, and not only did he answer promptly, but the ban on girlshost was lifted in about 5 hours, and the one on Lightning will probably be lifted soon... So when someone is this efficient, I think it is not necessary to go public... just my 2 cents... |
|
|
|