What is the maximum allowed depth of sub-folders?
The limit is not on the depth of nested subdirectories (you could have dozens of levels, even more), but on the file system and its quotas.
Also, very long file paths are inconvenient (and can be slightly inefficient). Programmatically, a file path of several hundred or even thousands of characters is possible, but the human brain cannot remember such long paths.
Most file systems (on Linux) have a fixed limit on their number of inodes.
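On Linux, a quick way to see each filesystem's inode capacity and how much of it is used (assuming a GNU coreutils df):

```shell
# Show inode totals, used count, and free count for the root filesystem.
df -i /
```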
Some file systems behave poorly with directories containing ten thousand entries (e.g. because the search is linear, not dichotomic), and such directories are hard to work with (e.g. even ls *
produces too much output). Hence, it could be wise to have /somepath/a/0001
... /somepath/z/9999
instead of /somepath/a0001
... /somepath/z9999
If you have many thousands of users, each with their own directory, you might want to group the users by their initials, e.g. have /some/path/A/userAaron/images/foobar
and /some/path/B/userBasile/images/barfoo
etc. Then /some/path/A/
would contain only hundreds of subdirectories, etc...
A convenient rule of thumb: avoid having more than a few hundred entries (either subdirectories or files) in each directory.
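The sharding scheme above can be sketched as a tiny helper that derives a sharded path from a file name (the /somepath prefix and the one-character shard are illustrative assumptions, not from any real layout):

```shell
# Illustrative sketch: place each file under a subdirectory named after
# its first character, so no single directory grows too large.
shard_path() {
    name=$1
    initial=$(printf '%s' "$name" | cut -c1)
    printf '/somepath/%s/%s\n' "$initial" "$name"
}

shard_path a0001   # prints /somepath/a/a0001
```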
Some web applications store small data chunks in individual rows of an SQL database and use files (whose names might be generated) for larger data chunks, storing the file path in the database. Having millions of files of only a few dozen bytes each is probably not efficient.
Some sysadmins also use quotas on filesystems.
Maximum Number of Folders Windows Server
You can see the limits of the Windows file systems at the following link:
http://technet.microsoft.com/en-gb/library/bb457112.aspx
taken from
https://serverfault.com/questions/18692/what-is-the-maximum-number-of-files-or-folders-that-can-be-stored-in-a-single
The limits on files and subfolders vary depending on the kind of server/NetApp;
you can read the limits for each in the link above.
I would use subfolders, but it is up to you; the important thing is that you know the limits and then decide.
List only folders of certain depth using Java 8 streams
To list only sub-directories of a given directory:
Path dir = Paths.get("/path/to/stuff/");
try (Stream<Path> entries = Files.walk(dir, 1)) {
    entries.filter(p -> Files.isDirectory(p) && !p.equals(dir))
           .forEach(p -> System.out.println(p.getFileName()));
}
(Files.walk returns a lazily populated Stream holding an open directory handle, so it should be closed, e.g. with try-with-resources as above.)
How to Limit The Depth of a Recursive Sub-Directory Search
First, avoid declaring the recCount
field as a "global" variable outside the method. In recursive scenarios it is usually more manageable to pass state along with the recursive calls.
Second, move the depth test out of the foreach
loop to avoid unnecessary querying of the file system for subdirectories.
Third, place the actual processing logic at the beginning of your method, again out of the subdirectories processing loop.
Your code would then look like:
void StepThroughDirectories(string dir)
{
StepThroughDirectories(dir, 0);
}
void StepThroughDirectories(string dir, int currentDepth)
{
// process 'dir'
...
// process subdirectories
if (currentDepth < MaximumDepth)
{
foreach (string subdir in Directory.GetDirectories(dir))
StepThroughDirectories(subdir, currentDepth + 1);
}
}
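The same pattern translates directly to other languages. A minimal shell sketch of the idea, assuming bash and a fixed MAX_DEPTH of 2 (both names are illustrative):

```shell
#!/usr/bin/env bash
# Depth-limited recursive walk: the current depth travels with each
# call as a parameter instead of living in a global counter.
MAX_DEPTH=2

step_through() {
    local dir=$1 depth=$2
    echo "$dir"                           # process 'dir'
    if (( depth < MAX_DEPTH )); then
        local sub
        for sub in "$dir"/*/; do
            [[ -d $sub ]] && step_through "${sub%/}" $((depth + 1))
        done
    fi
}

# Usage: step_through /some/root 0
```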
How to set recursive depth for the Windows command DIR?
I'm sure it is possible to write a complex command that would list n levels of directories. But it would be hard to remember the syntax and error prone. It would also need to change each time you want to change the number of levels.
Much better to use a simple script.
EDIT 5 Years Later - Actually, there is a simple one liner that has been available since Vista. See my new ROBOCOPY solution.
Here is a batch solution that performs a depth first listing. The DIR /S command performs a breadth first listing, but I prefer this depth first format.
@echo off
setlocal
set currentLevel=0
set maxLevel=%2
if not defined maxLevel set maxLevel=1
:procFolder
pushd %1 2>nul || exit /b
if %currentLevel% lss %maxLevel% (
for /d %%F in (*) do (
echo %%~fF
set /a currentLevel+=1
call :procFolder "%%F"
set /a currentLevel-=1
)
)
popd
The breadth first version is nearly the same, except it requires an extra FOR loop.
@echo off
setlocal
set currentLevel=0
set maxLevel=%2
if not defined maxLevel set maxLevel=1
:procFolder
pushd %1 2>nul || exit /b
if %currentLevel% lss %maxLevel% (
for /d %%F in (*) do echo %%~fF
for /d %%F in (*) do (
set /a currentLevel+=1
call :procFolder "%%F"
set /a currentLevel-=1
)
)
popd
Both scripts expect two arguments:
arg1 = the path of the root directory to be listed
arg2 = the number of levels to list.
So to list 3 levels of the current directory, you could use
listDirs.bat . 3
To list 5 levels of a different directory, you could use
listDirs.bat "d:\my folder\" 5
How to limit depth for recursive file list?
Check out the -maxdepth
flag of find:
find . -maxdepth 1 -type d -exec ls -ld "{}" \;
Here 1 is used as the maximum depth; -type d
means find only directories, which ls -ld
then lists in long format (the -d makes ls describe each directory itself rather than its contents).
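A related trick: -mindepth can be combined with -maxdepth to list entries at exactly one level, e.g. only directories two levels down:

```shell
# List directories exactly two levels below the current directory.
find . -mindepth 2 -maxdepth 2 -type d
```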
How many files can I put in a directory?
FAT32:
- Maximum number of files: 268,173,300
- Maximum number of files per directory: 2^16 - 1 (65,535)
- Maximum file size: 2 GiB - 1 without LFS, 4 GiB - 1 with
NTFS:
- Maximum number of files: 2^32 - 1 (4,294,967,295)
- Maximum file size
  - Implementation: 2^44 - 2^6 bytes (16 TiB - 64 KiB)
  - Theoretical: 2^64 - 2^6 bytes (16 EiB - 64 KiB)
- Maximum volume size
  - Implementation: 2^32 - 1 clusters (256 TiB - 64 KiB)
  - Theoretical: 2^64 - 1 clusters (1 YiB - 64 KiB)
ext2:
- Maximum number of files: 10^18
- Maximum number of files per directory: ~1.3 × 10^20 (performance issues past 10,000)
- Maximum file size
  - 16 GiB (block size of 1 KiB)
  - 256 GiB (block size of 2 KiB)
  - 2 TiB (block size of 4 KiB)
  - 2 TiB (block size of 8 KiB)
- Maximum volume size
  - 4 TiB (block size of 1 KiB)
  - 8 TiB (block size of 2 KiB)
  - 16 TiB (block size of 4 KiB)
  - 32 TiB (block size of 8 KiB)
ext3:
- Maximum number of files: min(volumeSize / 2^13, numberOfBlocks)
- Maximum file size: same as ext2
- Maximum volume size: same as ext2
ext4:
- Maximum number of files: 2^32 - 1 (4,294,967,295)
- Maximum number of files per directory: unlimited
- Maximum file size: 2^44 - 1 bytes (16 TiB - 1)
- Maximum volume size: 2^48 - 1 bytes (256 TiB - 1)
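As a worked example of the ext3 formula min(volumeSize / 2^13, numberOfBlocks), here is a quick shell-arithmetic check (the 4 TiB volume and 4 KiB block size are assumed numbers for illustration, not from the table; assumes bash):

```shell
# min(volumeSize / 2^13, numberOfBlocks) for a 4 TiB volume, 4 KiB blocks.
volume_bytes=$((4 * 1024 ** 4))
block_bytes=4096
number_of_blocks=$((volume_bytes / block_bytes))
by_ratio=$((volume_bytes / 2 ** 13))
max_files=$(( by_ratio < number_of_blocks ? by_ratio : number_of_blocks ))
echo "$max_files"   # 536870912, i.e. one inode per 8 KiB of volume
```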