Docker-Compose Up and User Inputs on Stdin

Docker-compose up with input argument

From a similar question: https://stackoverflow.com/a/39150040/13731995. Basically, you need to add tty and stdin_open to the service in your Docker Compose file.

version: '3.8'

services:
  bigquery:
    build: ./
    tty: true
    stdin_open: true
    command: python ./scripts/bq.py
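
Note that docker-compose up still multiplexes the output of every service, so even with these options you typically interact with the container by running the service directly or attaching to it. A minimal sketch, assuming the service name bigquery from the file above:

# Run the service in the foreground with stdin and a TTY attached
docker-compose run --rm bigquery

# Or, if the stack was started with "docker-compose up -d", attach to the running container
docker attach $(docker-compose ps -q bigquery)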

docker-compose does not attach stdin to .net core app Console.ReadLine

docker-compose up streams the logs of all services, so it does not give you an interactive shell. Instead, find the individual container and attach to it (or exec into it) to interact with the process, as sketched after the example below.

As an example, let's say you have this compose file:

version: '3'

services:
  nats:
    image: nats-streaming
  mongo:
    image: mongo
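
A hedged sketch of how you could reach one of those containers interactively (this assumes the service you care about has tty: true and stdin_open: true set, as in the first answer above):

# List the containers Compose created for this project
docker-compose ps

# Attach your terminal to the nats service's stdin/stdout
docker attach $(docker-compose ps -q nats)

# Or open a separate interactive shell inside the container
docker exec -it $(docker-compose ps -q nats) sh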

Docker-compose pass stdout from a service to stdin in another service

You can't do this in Docker easily.

Container processes' stdin and stdout aren't usually used for much. Most often, stdout receives log messages that can be reviewed later, and containers communicate with each other over network sockets. (A container would typically run Apache, but not grep.)

Docker doesn't have a native cross-container pipe beyond its networking setup. If you're starting containers with docker run from the shell, you can use an ordinary shell pipe there:

# -i attaches the pipe to the second container's stdin so it can actually read the data
sudo sh -c 'docker run image-a | docker run -i image-b'

If it's practical to run both processes in the same container, you can use a shell pipe as the main container command:

docker run image sh -c 'process_a | process_b'

A differently hacky approach is to use a tool like Netcat to bridge between "stdin" and a network port. For example, consider a "server":

#!/bin/sh
# server.sh
# (Note: this uses busybox nc syntax)
# "cat" stands in for any process that reads from stdin
nc -l -p 12345 \
  | cat \
  > out.txt

And a matching "client":

#!/bin/sh
# client.sh
# "cat in.txt" stands in for any process that writes to stdout
cat in.txt \
  | nc "$1" 12345

Build these into an image:

FROM busybox
COPY client.sh server.sh /bin/
# Make sure the scripts are executable inside the image
RUN chmod +x /bin/client.sh /bin/server.sh
EXPOSE 12345
WORKDIR /data
CMD ["server.sh"]

Now run both containers:

docker network create testnet
docker build -t testimg .
echo hello world > in.txt
docker run -d -v "$PWD":/data --net testnet --name server testimg \
  server.sh
docker run -d -v "$PWD":/data --net testnet --name client testimg \
  client.sh server
docker wait client
docker wait server
cat out.txt   # should print "hello world"

A more robust path would be to wrap the server process in a simple HTTP server that accepted an HTTP POST on some path and launched a subprocess to handle the request; then you'd have a single long-running server process instead of having to re-launch it for each request. The client would use a tool like curl or any other HTTP client.
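
A rough sketch of the client side of that approach, assuming a hypothetical long-running server listening on port 8080 with a /process endpoint (neither of which appears in the original setup):

# Send what used to be piped into the server as an HTTP POST body;
# the response body is whatever the server-side process produced.
curl --data-binary @in.txt http://server:8080/process > out.txt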


