How to Automatically Pipe to Less If the Result Is More Than a Page on My Shell

How to automatically pipe to less if the result is more than a page on my shell?

The most significant problem with trying to do that is how to get it to turn off when running programs that need a tty.

What I would recommend is that, for the programs and utilities you use frequently, you create shell functions that wrap them and pipe their output to less -F. In some cases you can give the function the same name as the program; the function will take precedence, but it can still be overridden.

Here is an example wrapper function which would need testing and perhaps some additional code to handle edge cases, etc.

#!/bin/bash
foo () {
    # Only page when stdout is a terminal; if the output is being piped
    # or redirected elsewhere, you don't want less in the way.
    if [[ -t 1 ]]
    then
        command foo "$@" | less -F
    else
        command foo "$@"
    fi
}

If you use the same name as I have in the example, it could break things that expect different behavior. To bypass the function and run the underlying program directly, precede it with command:

command foo

will run foo without using the function of the same name.
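As a concrete sketch, here is the same wrapper applied to ls (the choice of ls is purely illustrative; it assumes your less supports -F). The function shadows the real ls, and command ls bypasses it:

```shell
# Illustrative wrapper: page ls output only when stdout is a terminal
ls() {
    if [ -t 1 ]; then
        command ls "$@" | less -F
    else
        command ls "$@"    # piped or redirected: pass output through untouched
    fi
}

ls /    # paged at a terminal, plain when captured or piped
```

Because the test is on stdout, something like ls / | grep bin still behaves exactly as it would without the wrapper.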

How to set less to clear the screen on exit only if the output fills more than a single page?

This very question has been answered on Unix.SE. The top-voted answer there has actually been expanded into a full-fledged command-line tool that can act as a replacement for less: https://github.com/stefanheule/smartless.

I've been using it myself with great results (plus the author is very responsive to bug reports and feature requests on GitHub), so I highly recommend it to anyone facing this issue.

Pipe the output of a command into less or into cat depending on length

In the news for less version 406, I see “Don't move to bottom of screen on first page.” Which version do you have? My system version is 382, and it moves to the bottom of the screen before printing (causing blank lines if there is only one screenful and -F is used).

I just installed version 436, and it seems to do what you want when given -FX (put those flags in the LESS environment variable along with your other preferences, so that anything that runs less picks them up automatically).
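For example, a typical line for ~/.bashrc or ~/.zshrc (the -R flag is a common extra for passing through color escape codes; adjust the set to taste):

```shell
# Make -F (quit if output fits one screen) and -X (don't clear the screen)
# the defaults for every program that invokes less; -R preserves colors.
export LESS='-FXR'
```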

If you cannot get the new version, you might try this instead:

function catless() {
    local line buffer='' num=0 limit=$LINES
    # Buffer input lines until we either hit the screen-height limit
    # or run out of input.
    while IFS='' read -r line; do
        buffer="$buffer$line"$'\n'
        line=''
        num=$(( num+1 ))
        [[ $num -ge $limit ]] && break
    done
    if [[ $num -ge $limit ]]; then
        # Enough lines to fill the screen: replay the buffer, then the rest.
        { printf %s "$buffer$line"; cat; } | less
    else
        printf %s "$buffer$line"
    fi
}

The key is that the shell has to know whether there are more lines in the input than fit on the screen before it (potentially) launches less (the multi-io technique you initially used can only run things in the background). If the in-shell read is not robust enough for you, you can replace it by reworking the code a bit:
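The read-ahead idea on its own can be sketched like this (the limit value and the here-document input are illustrative; $'\n' requires bash, ksh, or zsh):

```shell
# Buffer input lines, counting them, and stop early once $limit is reached;
# whether the limit was hit is what decides if a pager is needed at all.
limit=3
num=0
buffer=''
while IFS='' read -r line; do
    buffer="$buffer$line"$'\n'
    num=$((num + 1))
    [ "$num" -ge "$limit" ] && break
done <<'EOF'
one
two
EOF
printf '%s' "$buffer"    # only two lines buffered: below the limit, no pager
```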

function cat_up_to_N_lines_and_exit_success_if_more() {
    # replace this with some other implementation
    # if read -r is not robust enough
    local line buffer='' num=0 limit="$1"
    while IFS='' read -r line; do
        buffer="$buffer$line"$'\n'
        line=''
        num=$(( num+1 ))
        [[ $num -ge $limit ]] && break
    done
    printf %s "$buffer$line"
    [[ $num -ge $limit ]]
}
function catless() {
    local limit=$LINES buffer=''
    # capture first $limit lines
    # the \0 business is to guard the trailing newline
    # (note: the ${"$(...)"%...} expansion here is zsh syntax)
    buffer=${"$(
        cat_up_to_N_lines_and_exit_success_if_more $limit
        ec=$?
        printf '\0'
        exit $ec)"%$'\0'}
    use_pager=$?
    if [[ $use_pager -eq 0 ]]; then
        { printf '%s' "$buffer"; cat; } | less
    else
        printf '%s' "$buffer"
    fi
}

Would it be possible to automatically page the output in zsh?

Perhaps another option is to use something like screen or tmux rather than trying to force zsh to page for you. This approach supports full-screen programs such as vim, ssh sessions, and pagers like less and more, and it also gives you scroll-back across multiple commands, which piping each command through a pager doesn't allow.

This approach would give you:

  • Scroll-back (quite a lot of scroll-back) over multiple commands (instead of just the last-entered command).
  • Re-attachment to your sessions (so you don't lose the scroll-back from the morning session when you get back from lunch).

This approach costs you:

  • You need to make a session to attach to before you run commands. This isn't much work, but in my experience the times I forget to use screen are the times I really, really wanted to.

Bash alias to pipe an arbitrary command to less

Eval-type solutions generally fail to be robust against arguments with odd characters in them. On the other hand, they are often unnecessary.

Instead of

/bin/sh -c "/usr/local/bin/p4 "$@" $pager"

which has a quotation error (the quotes around $@ actually unquote $@, although fixing that won't help, since you would also need to insert escapes into the string passed to sh -c), you can simply execute the command directly:

/usr/local/bin/p4 "$@" | "$pager"

That requires that $pager be defined; you would probably want:

if [[ $pager ]]; then
    /usr/local/bin/p4 "$@" | "$pager"
else
    /usr/local/bin/p4 "$@"
fi

The explicit path is a bit annoying. With bash, you can make this more robust by using the command builtin:

if [[ $pager ]]; then
    command p4 "$@" | "$pager"
else
    command p4 "$@"
fi

That introduces a small duplication of code, and it's a bit much if you're going to use that snippet in various places. You could define the following function (in your ~/.bashrc file, for example):

page() {
    if [[ $pager ]]; then
        "$@" | "${pager[@]}"
    else
        "$@"
    fi
}

Then your p4 script could be:

p4() {
    local pager=
    if [[ $1 = diff ]]; then pager=less; fi
    page command p4 "$@"
}

There are lots of other variants. Note the use of "${pager[@]}" in the page function; this allows pager to be an array, in case you want to pass arguments to less. For example:

( pager=(less -N); p4 diff ...; )

(The parentheses are to make the setting of pager local; you can't use the normal var=value command syntax with arrays.)
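As a quick non-interactive check of the array form, cat -n can stand in for less -N (the stand-in is purely illustrative; it numbers lines just as less -N would, but prints them directly):

```shell
page() {
    if [[ $pager ]]; then
        "$@" | "${pager[@]}"
    else
        "$@"
    fi
}

pager=(cat -n)               # stand-in for (less -N) so the result is visible
page printf 'alpha\nbeta\n'  # prints the two lines, numbered
```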

How do I pipe to a file or shell program via Python's subprocess?

Here is a working example of how this can be done:

#!/usr/bin/env python

from subprocess import Popen, PIPE

output = ['this', 'is', 'a', 'test']

output_file_name = 'pipe_out_test.txt.gz'

# Open the destination file unbuffered so gzip's output lands in it directly.
gzip_output_file = open(output_file_name, 'wb', 0)

# Run the external gzip with its stdout connected to the file.
output_stream = Popen(["gzip"], stdin=PIPE, stdout=gzip_output_file)

for line in output:
    output_stream.stdin.write((line + '\n').encode())  # bytes for Python 3

output_stream.stdin.close()
output_stream.wait()

gzip_output_file.close()

If our script only wrote to console and we wanted the output zipped, a shell command equivalent of the above could be:

script_that_writes_to_console | gzip > output.txt.gz
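As a sanity check, the file produced this way can be read back with Python's own gzip module (the file name here is illustrative, and this assumes the gzip binary is on PATH):

```python
import gzip
from subprocess import Popen, PIPE

# Write lines through the external gzip into a file, as above, then
# read the file back with the gzip module to verify the round trip.
fname = 'pipe_roundtrip_test.txt.gz'
with open(fname, 'wb', 0) as out:
    proc = Popen(['gzip'], stdin=PIPE, stdout=out)
    for line in ['this', 'is', 'a', 'test']:
        proc.stdin.write((line + '\n').encode())
    proc.stdin.close()
    proc.wait()

with gzip.open(fname, 'rt') as f:
    print(f.read(), end='')   # the original four lines
```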

How can I send the stdout of one process to multiple processes using (preferably unnamed) pipes in Unix (or Windows)?

Editor's note:

- >(…) is process substitution, a nonstandard feature of some POSIX-compatible shells: bash, ksh, and zsh.

- A variant such as echo 123 | tee >(tr 1 a) | tr 1 b accidentally sends the process substitution's output through the pipeline as well.

- Output from the process substitutions will be unpredictably interleaved, and, except in zsh, the pipeline may terminate before the commands inside >(…) do.

In Unix (or on a Mac), use the tee command:

$ echo 123 | tee >(tr 1 a) >(tr 1 b) >/dev/null
b23
a23

Usually you would use tee to redirect output to multiple files, but using >(...) you can
redirect to another process. So, in general,

$ proc1 | tee >(proc2) ... >(procN-1) >(procN) >/dev/null

will do what you want.
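In shells without >(…), the same fan-out can be achieved with named pipes via mkfifo, at the cost of manual setup and cleanup (the file names here are illustrative):

```shell
# Fan out one stream to two consumers using named pipes (POSIX sh)
mkfifo fifo1 fifo2
tr 1 a < fifo1 > out_a &
tr 1 b < fifo2 > out_b &
echo 123 | tee fifo1 fifo2 > /dev/null
wait                          # let both background readers finish
cat out_a out_b               # a23, then b23
rm -f fifo1 fifo2 out_a out_b
```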

Under Windows, I don't think the built-in shell has an equivalent. Microsoft's Windows PowerShell has a tee command, though.


