03-21-2005, 05:30 PM
zagi
Confirmed User
 
Join Date: Jan 2004
Posts: 1,238
The limits change from one OS to another, but one thing is true: Unix and Linux are notoriously bad with a large number of files in one directory.

I would estimate the upper limit at somewhere between 20,000 and 30,000 files; after that, performance suffers heavily. Yes, the script may not use subdirectories, but it can always be modified to (see the sketch below). Usually you won't notice until it's too late, and then rewriting the script and re-sorting the archived files is a big pain.
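A minimal sketch of that kind of modification (Python purely for illustration; the two-level hashed layout and the function names are my own assumptions, not anything from the actual script):

import hashlib
import os
import shutil

def hashed_path(archive_root, filename):
    """Return a path inside one of 256 subdirectories, keyed on a hash of the filename."""
    bucket = hashlib.md5(filename.encode()).hexdigest()[:2]  # e.g. "3f"
    subdir = os.path.join(archive_root, bucket)
    os.makedirs(subdir, exist_ok=True)
    return os.path.join(subdir, filename)

def archive_file(src, archive_root):
    """Move a file out of the flat directory into its hashed subdirectory."""
    dest = hashed_path(archive_root, os.path.basename(src))
    shutil.move(src, dest)
    return dest

The point is simply that each directory then holds roughly 1/256th of the files, which keeps directory lookups fast.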

My advice would be to take this into account before putting this system into production.

Under FreeBSD you can tune the following sysctl variables (shown here with their defaults) to improve performance with a large number of files in a directory:

vfs.ufs.dirhash_minsize: 2560
vfs.ufs.dirhash_maxmem: 2097152
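For example, something along these lines (the 8 MB figure is purely illustrative, not a recommendation):

# check the current values
sysctl vfs.ufs.dirhash_minsize vfs.ufs.dirhash_maxmem
# raise the dirhash memory limit at runtime
sysctl vfs.ufs.dirhash_maxmem=8388608
# and make it persist across reboots
echo 'vfs.ufs.dirhash_maxmem=8388608' >> /etc/sysctl.conf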