Find is returning "find: .: Permission denied", but I am not searching in .

I ran:

strace find /dev -maxdepth 1

on GNU/Linux (Ubuntu), and it turns out that find uses the fchdir syscall to traverse the directory tree, finally calling fchdir again to go back to the original working directory. Here's a snippet:

open(".", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY|O_NOFOLLOW) = 4
fchdir(4) = 0

... irrelevant ...

write(1, "/dev\n", 5) = 5
open("/dev", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY|O_CLOEXEC) = 5
fcntl64(5, F_GETFD) = 0x1 (flags FD_CLOEXEC)
fchdir(5) = 0

... potentially more fchdirs ...

fchdir(4) = 0
close(4) = 0

My hint? cd /tmp (or some other fully accessible directory) before running find.
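
For example, a minimal sketch reusing the /dev search from the trace above:

cd /tmp && find /dev -maxdepth 1   # run find from a directory you can re-enter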

Cancel find before the first file with permission denied

Assuming you're using GNU find, I would use the following:

find "$input" ! -readable -fprintf /dev/stderr '%p cannot be read ! Switch to root.\n' -a -quit -o -type f -exec md5sum {} + > "$tempdirectory/archive.txt"

As soon as it finds an unreadable file or directory, it outputs a message on stderr mentioning the unreadable file, then aborts. Until this happens, it aggregates file names to pass them to md5sum, whose output (on stdout) is redirected to the archive.txt file.

From my tests, the solution isn't perfect either, because -quit won't quit immediately but will still run the -exec [...] + command that was being built when -quit was reached (find's manual mentions this for -execdir [...] +, but I guess it extends to -exec). You could avoid that by using -exec [...] \; instead, but the impact on performance would likely be significant.

It relies on the -readable predicate to determine whether the file can be read by your user, otherwise it runs the -fprintf and -quit actions to first print an error message to stderr (so that it won't be redirected to your output file) and then abort the search.
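
For reference, the immediately-quitting (but slower) variant using -exec [...] \; mentioned above would look like this, with the same $input and $tempdirectory placeholders as in the original command:

find "$input" ! -readable -fprintf /dev/stderr '%p cannot be read ! Switch to root.\n' -a -quit -o -type f -exec md5sum {} \; > "$tempdirectory/archive.txt"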


Why does the find command return Permission denied when it is command-substituted and stored in a variable within a bash script

Don't store command lines in variables. For one thing, you'll screw up field separation and substitution. Instead use a function:

myFindCmd() {
    find . -type f -empty -not -path "./node_modules/*"
}

myFindCmd   # execute it wherever you like
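
If what you actually need is the command's output in a variable (as opposed to the command line itself), you could capture the function's output at the point of use (the variable name here is just illustrative):

emptyFiles=$(myFindCmd)   # capture the output, not the command line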

How can I exclude all permission denied messages from find?

Note:

  • This answer probably goes deeper than the use case warrants, and find 2>/dev/null may be good enough in many situations. It may still be of interest for a cross-platform perspective and for its discussion of some advanced shell techniques in the interest of finding a solution that is as robust as possible, even though the cases guarded against may be largely hypothetical.

If your shell is bash or zsh, there's a solution that is robust while being reasonably simple, using only POSIX-compliant find features; while bash itself is not part of POSIX, most modern Unix platforms come with it, making this solution widely portable:

find . > files_and_folders 2> >(grep -v 'Permission denied' >&2)

Note:

  • If your system is configured to show localized error messages, prefix the find calls below with LC_ALL=C (LC_ALL=C find ...) to ensure that English messages are reported, so that grep -v 'Permission denied' works as intended. Invariably, however, any error messages that do get displayed will then be in English as well.

  • >(...) is a (rarely used) output process substitution that allows redirecting output (in this case, stderr output (2>)) to the stdin of the command inside >(...).

    In addition to bash and zsh, ksh supports them as well in principle, but trying to combine them with redirection from stderr, as is done here (2> >(...)), appears to be silently ignored (in ksh 93u+).

    • grep -v 'Permission denied' filters out (-v) all lines (from the find command's stderr stream) that contain the phrase Permission denied and outputs the remaining lines to stderr (>&2).

    • Note: There's a small chance that some of grep's output may arrive after find completes, because the overall command doesn't wait for the command inside >(...) to finish. In bash, you can prevent this by appending | cat to the command.
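
For reference, the bash-only variant with | cat appended (per the last note above) would look like this; note that the overall exit code then becomes cat's:

find . > files_and_folders 2> >(grep -v 'Permission denied' >&2) | cat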

This approach is:

  • robust: grep is only applied to error messages (and not to a combination of file paths and error messages, potentially leading to false positives), and error messages other than permission-denied ones are passed through, to stderr.

  • side-effect free: find's exit code is preserved: the inability to access at least one of the filesystem items encountered results in exit code 1 (although that won't tell you whether errors other than permission-denied ones also occurred).



POSIX-compliant solutions:

Fully POSIX-compliant solutions either have limitations or require additional work.

If find's output is to be captured in a file anyway (or suppressed altogether), then the pipeline-based solution from Jonathan Leffler's answer is simple, robust, and POSIX-compliant:

find . 2>&1 >files_and_folders | grep -v 'Permission denied' >&2

Note that the order of the redirections matters: 2>&1 must come first.

Capturing stdout output in a file up front allows 2>&1 to send only error messages through the pipeline, which grep can then unambiguously operate on.
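
To illustrate why the order matters: with the redirections swapped, stderr would follow stdout into the file and grep would receive nothing:

find . >files_and_folders 2>&1 | grep -v 'Permission denied' >&2   # wrong: error messages end up in the file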

The only downside is that the overall exit code will be the grep command's, not find's, which in this case means: if there are no errors at all or only permission-denied errors, the exit code will be 1 (signaling failure), otherwise (errors other than permission-denied ones) 0 - which is the opposite of the intent.

That said, find's exit code is rarely used anyway, as it often conveys little information beyond fundamental failure such as passing a non-existent path.

However, the specific case of even only some of the input paths being inaccessible due to lack of permissions is reflected in find's exit code (in both GNU and BSD find): if a permissions-denied error occurs for any of the files processed, the exit code is set to 1.
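
For example (assuming at least one inaccessible item is encountered under the search path):

find . > /dev/null 2>&1; echo $?   # prints 1 if any item could not be accessed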

The following variation addresses that:

find . 2>&1 >files_and_folders | { grep -v 'Permission denied' >&2; [ $? -eq 1 ]; }

Now, the exit code indicates whether any errors other than Permission denied occurred: 1 if so, 0 otherwise.

In other words: the exit code now reflects the true intent of the command: success (0) is reported, if no errors at all or only permission-denied errors occurred.

This is arguably even better than just passing find's exit code through, as in the solution at the top.


gniourf_gniourf in the comments proposes a (still POSIX-compliant) generalization of this solution using sophisticated redirections, which works even with the default behavior of printing the file paths to stdout:

{ find . 3>&2 2>&1 1>&3 | grep -v 'Permission denied' >&3; } 3>&2 2>&1

In short: Custom file descriptor 3 is used to temporarily swap stdout (1) and stderr (2), so that error messages alone can be piped to grep via stdout.

Without these redirections, both data (file paths) and error messages would be piped to grep via stdout, and grep would then not be able to distinguish between error message Permission denied and a (hypothetical) file whose name happens to contain the phrase Permission denied.

As in the first solution, however, the exit code reported will be grep's, not find's, but the same fix as above can be applied.
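
Applied here, that fix might look like the following (a sketch combining the two commands above):

{ find . 3>&2 2>&1 1>&3 | { grep -v 'Permission denied' >&3; [ $? -eq 1 ]; }; } 3>&2 2>&1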



Notes on the existing answers:

  • There are several points to note about Michael Brux's answer, find . ! -readable -prune -o -print:

    • It requires GNU find; notably, it won't work on macOS. Of course, if you only ever need the command to work with GNU find, this won't be a problem for you.

    • Some Permission denied errors may still surface: find ! -readable -prune reports such errors for the child items of directories for which the current user does have r permission but lacks x (execute) permission. The reason is that, because the directory itself is readable, -prune is not applied to it, and the attempt to descend into that directory then triggers the error messages. That said, the typical case is for the r permission to be missing.

    • Note: The following point is a matter of philosophy and/or specific use case, and you may decide it is not relevant to you and that the command fits your needs well, especially if simply printing the paths is all you do:

      • If you conceptualize the filtering of the permission-denied error messages as a separate task that you want to be able to apply to any find command, then the opposite approach of proactively preventing permission-denied errors requires introducing "noise" into the find command, which also introduces complexity and logical pitfalls.
      • For instance, the most up-voted comment on Michael's answer (as of this writing) attempts to show how to extend the command by including a -name filter, as follows:

        find . ! -readable -prune -o -name '*.txt'

        This, however, does not work as intended, because the trailing -print action is required (an explanation can be found in this answer); the corrected form is shown after these notes. Such subtleties can introduce bugs.
  • The first solution in Jonathan Leffler's answer, find . 2>/dev/null > files_and_folders, as he himself states, blindly silences all error messages (and the workaround is cumbersome and not fully robust, as he also explains). Pragmatically speaking, however, it is the simplest solution, as you may be content to assume that any and all errors would be permission-related.

  • mist's answer, sudo find . > files_and_folders, is concise and pragmatic, but ill-advised for anything other than merely printing filenames, for security reasons: because you're running as the root user, "you risk having your whole system being messed up by a bug in find or a malicious version, or an incorrect invocation which writes something unexpectedly, which could not happen if you ran this with normal privileges" (from a comment on mist's answer by tripleee).

  • The 2nd solution in viraptor's answer, find . 2>&1 | grep -v 'Permission denied' > some_file, runs the risk of false positives (due to sending a mix of stdout and stderr through the pipeline), and, potentially, instead of reporting non-permission-denied errors via stderr, captures them alongside the output paths in the output file.
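
For completeness, the corrected form of the -name example from the first note above, with the explicit -print action, is:

find . ! -readable -prune -o -name '*.txt' -print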

Find file in Perl — skip permission denied

The Permission denied (for cd) is only a warning for me (managed using a $SIG{__WARN__} hook as below), so the program isn't getting killed.

However, the search does get hampered, I suspect because of what the File::Find documentation describes for the find operation:

Additionally, for each directory found, it will chdir() into that directory and continue the search, invoking the &wanted function on each file or subdirectory in the directory.

and apparently once this chdir is foiled the search fails and I get no results. (Note added -- on another system the search works, except for the no-no directory of course and with the warning. But this clearly needs fixing.)

I see two ways around this: use the preprocess option to filter out such directories, or disable the chdir.

— Use preprocess option to filter out directories with bad permissions

The preprocess option invokes a user-given subroutine, which receives the list of entries (in the current directory, to be processed) and should return a list of entries that will then be processed.

So we can check the input list of entries (in @_) for directories with bad permissions and exclude them from the return list, so they are never attempted.

use warnings;
use strict;
use feature 'say';

use File::Find;

my @dirpath = @ARGV ? @ARGV : die "Usage: $0 dir-list\n";

my @files;

find( {
    wanted => sub {
        push @files, $File::Find::name
            if -f $File::Find::name and /\.txt$/
    },
    preprocess => sub {
        #say "--> Reading: $File::Find::dir";
        return grep { not (-d and (not -x or not -r)) } @_;
    }
}, @dirpath);

say for @files;
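
Assuming the script above were saved as, say, find_txt.pl (a name chosen purely for illustration), it would be run with one or more directories to search:

perl find_txt.pl /some/dir /another/dir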

In the return statement we filter out directories that are non-executable or unreadable. See -X (filetest) operators. That statement can be rewritten to print names for logging etc.

— Disable the chdir-ing, via the no_chdir option

When the chdir into each directory (before listing its contents) is suppressed, the search works overall. I don't know why the whole search fails with chdir but works without it.

use warnings;
use strict;
use feature 'say';

use File::Find;

$SIG{__WARN__} = sub { warn "WARN: @_" };  # manage the warnings

my @dirpath = @ARGV ? @ARGV : die "Usage: $0 dir-list\n";

my @files;   # declared before the sub below, as required under 'strict'

sub bad_perm {
    push @files, $File::Find::name
        if -f $File::Find::name and /\.txt$/;
}

find( { no_chdir => 1, wanted => \&bad_perm }, @dirpath );

say for @files;

Now the search runs fine and assembles the correct list of (.txt) files.

Along with that, a warning is printed, now for opendir:


Can't opendir(tmp/this_belongs_to_root): Permission denied

The warning can be manipulated as you please in the $SIG{__WARN__} subroutine. If you don't want to see this particular warning, re-emit all other warnings except it, for example:

$SIG{__WARN__} = sub {
    warn @_ unless $_[0] =~ /^Can't opendir\(.*?: Permission denied/;
};

See the %SIG variable.


I checked both approaches with a small hierarchy of files and directories made for this purpose, which contains a directory created by root with chmod go-rwx permissions (for which I duly get errors whichever way I try to touch it as a regular user).

I also tweaked -x / -r permissions on other directories, and the code works as intended.
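
A sketch of such a test setup (the directory name mirrors the warning shown earlier; paths are illustrative):

mkdir -p tmp/this_belongs_to_root
sudo chown root:root tmp/this_belongs_to_root
sudo chmod go-rwx tmp/this_belongs_to_root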

Apache: permissions are all set, but still don't have permission

Problem just solved!

I needed to grant 775 or higher permissions to the whole "html" folder in this path:

"/var/www/html/..."

Not just the files and folders inside it, because I had already tried that.

Also, apache:apache ownership is not needed; root ownership works.
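
For reference, applying such a permission change to the document root and everything under it might look like this (path taken from above; adjust to your setup):

sudo chmod -R 775 /var/www/html   # the folder itself plus its contents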

This comment helped a lot:

move_uploaded_file gives "failed to open stream: Permission denied" error

How to skip the Access denied errors returned by Get-ChildItem

Avoid combining the -Recurse and -Include/-Exclude parameters when using Get-ChildItem against the FileSystem provider: they are not mutually exclusive in a technical sense, but their behavior is partially redundant (-Include/-Exclude recurses the file tree independently), and this can sometimes lead to unexpected, buggy, and slow enumeration behavior.

For simple inclusion patterns, use -Filter in place of -Include:

$path = (Get-ChildItem -path $ENV:TEMP -force -Recurse -Filter logMyApp.txt -ErrorAction SilentlyContinue).FullName

