How to Prevent Errno 32 Broken Pipe

How to prevent errno 32 broken pipe?

Your server process has received a SIGPIPE while writing to a socket. This usually happens when you write to a socket that has been fully closed on the other (client) side. It can occur when a client program doesn't wait until all the data from the server has been received and simply closes the socket (using the close function).

In a C program you would normally either ignore SIGPIPE or install a dummy signal handler for it; in that case a simple error is returned when writing to a closed socket. In your case, Python raises an exception that can be handled as a premature disconnect of the client.
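In Python 3 that failed write surfaces as BrokenPipeError (a subclass of OSError), so the equivalent of ignoring SIGPIPE is to catch it around the write. A minimal sketch, where send_reply, conn, and payload are hypothetical names:

def send_reply(conn, payload):
    # Hypothetical helper: conn is an already-accepted socket object.
    try:
        conn.sendall(payload)
    except BrokenPipeError:
        # The client closed its end before the write finished; treat it
        # as a premature disconnect instead of letting it crash the server.
        conn.close()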

Why do I keep getting [BrokenPipeError: [Errno 32] Broken pipe] no matter the number of workers in my Pool with multiprocessing lib in python3.8?

The child processes were using too much memory and were being killed by the OS. On Linux you can usually confirm this by checking the kernel log (dmesg) for OOM-killer messages after a worker dies.
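If the memory growth accumulates across tasks, one mitigation (a sketch, not a guaranteed fix; work and items are placeholders) is Pool's maxtasksperchild parameter, which recycles each worker process after a fixed number of tasks:

from multiprocessing import Pool

def work(item):
    return item * 2  # placeholder task

if __name__ == "__main__":
    items = range(100)
    # Each worker process is replaced after 10 tasks, capping how much
    # memory a single long-lived child can accumulate.
    with Pool(processes=4, maxtasksperchild=10) as pool:
        results = pool.map(work, items)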

How to prevent Execution failed:[Errno 32] Broken pipe in Airflow

While I'd suggest you keep looking for a more graceful way of achieving what you want, I'm putting up example usage as requested.


First you've got to create an SSHHook. This can be done in two ways:

  • The conventional way, where you supply all requisite settings like host, user, password (if needed) etc. from the client code where you are instantiating the hook. I'm citing an example from test_ssh_hook.py here, but you should go through SSHHook as well as its tests thoroughly to understand all possible usages.

    ssh_hook = SSHHook(remote_host="remote_host",
                       port="port",
                       username="username",
                       timeout=10,
                       key_file="fake.file")
  • The Airflow way, where you put all connection details inside a Connection object that can be managed from the UI, and only pass its conn_id to instantiate your hook (see the note after this list).

    ssh_hook = SSHHook(ssh_conn_id="my_ssh_conn_id")

    Of course, if you're relying on SSHOperator, then you can directly pass the ssh_conn_id to the operator.

    ssh_operator = SSHOperator(ssh_conn_id="my_ssh_conn_id")
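As a note on the second option: besides the UI, Airflow also resolves connections from environment variables of the form AIRFLOW_CONN_<CONN_ID> containing a connection URI. A sketch (the host, user, and port below are placeholders):

import os

# Standard Airflow behavior: a connection URI in an AIRFLOW_CONN_<CONN_ID>
# environment variable is picked up without touching the UI or the database.
os.environ["AIRFLOW_CONN_MY_SSH_CONN_ID"] = "ssh://user@remote_host:22"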

Now if you're planning to have a dedicated task for running a command over SSH, you can use SSHOperator. Again I'm citing an example from test_ssh_operator.py, but go through the sources for a better picture.

task = SSHOperator(task_id="test",
                   command="echo -n airflow",
                   dag=self.dag,
                   timeout=10,
                   ssh_conn_id="ssh_default")

But then you might want to run a command over SSH as part of a bigger task. In that case you don't need an SSHOperator; you can still use just the SSHHook. The get_conn() method of SSHHook gives you an instance of paramiko's SSHClient, with which you can run a command using the exec_command() call:

ssh_client = ssh_hook.get_conn()
my_command = "echo airflow"
stdin, stdout, stderr = ssh_client.exec_command(
    command=my_command,
    get_pty=my_command.startswith("sudo"),
    timeout=10)
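Continuing the snippet above, the returned objects are file-like channels that you can read; a sketch (the utf-8 decoding and variable names are assumptions):

# Read the command's output and exit code from the channels returned
# by exec_command(); recv_exit_status() blocks until the command ends.
output = stdout.read().decode()
exit_status = stdout.channel.recv_exit_status()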

If you look at SSHOperator's execute() method, it is a rather complicated (but robust) piece of code trying to achieve a very simple thing. For my own usage, I created some snippets that you might want to look at:

  • For using SSHHook independently of SSHOperator, have a look at ssh_utils.py
  • For an operator that runs multiple commands over SSH (you can achieve the same thing by using bash's && operator), see MultiCmdSSHOperator

Python3 - [Errno 32] Broken Pipe while using sockets

Your server is closing the connection immediately after a single recv. I'd suggest changing your handle_client code to have some sort of while loop that ends when recv returns an empty bytes object (this indicates the client has shut down its end of the connection, probably by closing its socket).
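A minimal sketch of such a loop (handle_client is the name from the question; the echo behavior is an assumption about what the handler should do):

def handle_client(conn):
    while True:
        data = conn.recv(4096)
        if not data:          # empty bytes: the client shut down its end
            break
        conn.sendall(data)    # echo back; replace with your own handling
    conn.close()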

socket.error: [Errno 32] Broken pipe

You made a small mistake:

s.send(data_string1);

Should be:

conn.send(data_string1);

Also the following lines need to be changed:

socket.shutdown() should be s.shutdown()

And:

socket.close() should be s.close()
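For context, a minimal sketch of the pattern these fixes assume (s and conn mirror the question's names; the address and payload are placeholders): the socket returned by accept() is the one connected to the client, so writes must go through it.

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # listening socket
s.bind(("localhost", 50000))    # placeholder address
s.listen(1)

conn, addr = s.accept()         # conn is the per-client socket
data_string1 = b"hello"         # placeholder payload
conn.send(data_string1)         # send on conn; the listening socket s is never connected
conn.close()
s.close()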

IOError: [Errno 32] Broken pipe when piping: `prog.py | othercmd`

I haven't reproduced the issue, but perhaps writing line by line to sys.stdout rather than using print would solve it:

import sys

with open('a.txt', 'r') as f1:
    for line in f1:
        sys.stdout.write(line)

You could also catch the broken pipe. This writes the file to stdout line by line until the pipe is closed:

import sys, errno

try:
    with open('a.txt', 'r') as f1:
        for line in f1:
            sys.stdout.write(line)
except IOError as e:
    if e.errno == errno.EPIPE:
        pass  # Handle the broken pipe here (e.g. exit quietly)

You also need to make sure that othercmd is reading from the pipe before the pipe buffer fills up: https://unix.stackexchange.com/questions/11946/how-big-is-the-pipe-buffer
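An alternative for command-line scripts, if you'd rather not catch the exception: restore the default SIGPIPE disposition so the script simply exits when the reader goes away, like a typical Unix filter. A sketch (POSIX-only):

import signal
import sys

# Python normally converts SIGPIPE into an exception; restoring SIG_DFL
# makes `prog.py | head` terminate quietly when the pipe closes.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)

with open('a.txt', 'r') as f1:
    for line in f1:
        sys.stdout.write(line)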


