'Docker Run' on a Remote Host

`docker run` on a remote host

If your target machine B can be created on one of the platforms supported by a docker-machine driver, then docker-machine should serve your needs. You would create your machine with `docker-machine create --driver <..driver setup..> MACHINE_B`, then activate it with `eval $(docker-machine env MACHINE_B)`. `docker-machine env MACHINE_B` prints out a set of export statements:

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://...."
export DOCKER_CERT_PATH="/..."
export DOCKER_MACHINE_NAME="MACHINE_B"

Once the machine is active, you can use the `docker` command as you would locally, and it will act remotely on MACHINE_B.
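For example, a minimal session might look like this (the container name and image here are placeholders):

# point the local docker CLI at MACHINE_B
eval $(docker-machine env MACHINE_B)

# this container runs on MACHINE_B, not locally
docker run -d --name web nginx

# point the CLI back at the local daemon
eval $(docker-machine env -u)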

Can I run `docker exec` from an external VM?

I'm not sure why you would need to run `docker exec` remotely, but it is achievable.

You need to make sure the Docker daemon on the host where your containers are running is listening on a TCP socket.

Something like this:

# Running docker daemon which listens on tcp socket
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
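If the daemon on that host is managed by systemd, you can instead make the same sockets persistent in `/etc/docker/daemon.json` (a sketch, assuming you want exactly the two sockets above; note that `hosts` set here conflicts with `-H` flags passed in the systemd unit, so use one or the other):

{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}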

Now interact with the Docker daemon remotely from the external VM using:

$ docker -H tcp://<machine-ip>:2375 exec -it my-container bash
OR
$ export DOCKER_HOST="tcp://<machine-ip>:2375"
$ docker exec -it my-container bash

Note: exposing the Docker socket publicly on your network has serious security risks. There are safer alternatives: expose it on an encrypted HTTPS socket, or connect over the SSH protocol.
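As an example of the SSH route: recent Docker CLI versions (18.09+) can reach a remote daemon over SSH with no TCP socket exposed at all. A sketch, assuming you have key-based SSH access as `user`:

$ export DOCKER_HOST="ssh://user@<machine-ip>"
$ docker exec -it my-container bash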

Please go through these docs carefully, before attempting anything:

https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option

https://docs.docker.com/engine/security/https/

Running docker commands on a remote machine through SSH

For your case, you can use docker-machine:

Install:

base=https://github.com/docker/machine/releases/download/v0.16.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine &&
chmod +x /tmp/docker-machine &&
sudo mv /tmp/docker-machine /usr/local/bin/docker-machine

Run/create:

docker-machine create \
  --driver generic \
  --generic-ip-address=put_here_ip_of_remote_docker \
  --generic-ssh-key ~/.ssh/id_rsa \
  vm_123

Check:

docker-machine ls
docker-machine ip vm_123
docker-machine inspect vm_123

Use:

docker-machine ssh vm_123       # open an SSH session on the remote machine
docker run -it alpine sh        # runs against the remote machine's daemon
exit                            # leave the alpine container
exit                            # leave the SSH session
eval $(docker-machine env -u)   # make sure no machine is active locally

Extra tips:

You can also make vm_123 the active docker machine with this command:

eval $(docker-machine env vm_123)
docker run -it alpine sh
exit
eval $(docker-machine env -u)

and unset vm_123 as the active machine with this command:

eval $(docker-machine env -u)

https://docs.docker.com/machine/drivers/generic/

https://docs.docker.com/machine/examples/aws/

https://docs.docker.com/machine/install-machine/

https://docs.docker.com/machine/reference/ssh/

How to connect to a docker container running on a remote host

You have to publish the container's port, binding it to a port on the host.

$ docker run -p 0.0.0.0:1337:1337 --name my-container my-image

The above command binds the port on all network interfaces.

If you want, you can restrict access to a specific network interface by specifying its IP address.
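For example, to bind the port only on the loopback interface, so it is reachable from the host itself but not from the rest of the network:

$ docker run -p 127.0.0.1:1337:1337 --name my-container my-image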

How to run docker load in remote server

To assign the docker group to your user (please note the security implications: membership in the docker group is effectively root-equivalent on the host):

sudo usermod -aG docker $(whoami)

Note the `-a` flag: without it, `-G` replaces all of the user's supplementary groups instead of appending to them. You will also need to log out and back in for the group change to take effect.

To load the images, you can use:

docker load -i filename.tar
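If the point is to move a local image onto the remote server, a common one-step pattern is to pipe `docker save` into a remote `docker load` over SSH (a sketch; the image name and host are placeholders, and it assumes your remote user can run docker):

# stream the image over SSH; no intermediate tar file needed
docker save my-image:latest | ssh user@remote-host docker load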

Another possible approach is to define the command in the sudoers file. Check /etc/sudoers and add something like this (note that sudoers requires the absolute path to the binary; adjust it to what `which docker` reports):

Cmnd_Alias DOCKER_LOAD_CMD = /usr/bin/docker load -i file.tar
user ALL=(ALL) NOPASSWD: DOCKER_LOAD_CMD

If you plan to follow this, edit the file with `visudo` rather than directly, and keep a second terminal with a root shell open while you test: a syntax error in /etc/sudoers can break sudo entirely, and that's not fun.

Developing inside a container on a remote Docker host

Solved: https://code.visualstudio.com/docs/containers/ssh

Basically, the Docker CLI can only connect to a remote server over SSH using key-based authentication (passwords are not supported).

Following the document above, I configured SSH keys and I can now connect using the clone-repository option.
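For reference, you can verify the same SSH connectivity independently of VS Code with a Docker context (a sketch; the context name and host are placeholders):

# create a context that talks to the remote daemon over SSH
docker context create my-remote --docker "host=ssh://user@remote-host"

# use it for subsequent docker commands
docker context use my-remote
docker ps

# switch back to the local daemon
docker context use default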

How can I run a Docker container on a remote server in the same manner as docker-compose up?

The error you're seeing is accurate: `nginx` is a directory. Based on your docker-compose.yml manifest, there is an nginx folder in your root directory, which you use as the build context for your nginx service.

  nginx:
    build: ./nginx # Here's the evidence for the nginx folder.
    ports:

When you build the django_web image, you copy over the entire context directory into /app, and that includes the nginx directory.

# copy the whole build context (including the nginx folder) into /app
COPY . /app
# set working directory
WORKDIR /app

The CMD for your username/srdc_prod image is `nginx -g 'daemon off;'`, which your docker-entrypoint.sh executes. That fails because `nginx` is a directory.

Based on your docker-compose.yml manifest, it looks like the CMD you actually want is `daphne -b 0.0.0.0 -p 8000 srdc.asgi:application`, or something like that, but not `nginx`, which is not installed in that Alpine-based image.
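A sketch of how the end of that Dockerfile could look, assuming daphne is installed in the image and `srdc.asgi:application` is the correct ASGI path:

# copy the project into the image and set the working directory
COPY . /app
WORKDIR /app

# run the ASGI server instead of nginx
CMD ["daphne", "-b", "0.0.0.0", "-p", "8000", "srdc.asgi:application"]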

Some recommendations outside the scope of the question:

  1. If you're using docker-compose in dev, consider using it in your hosted environments too, instead of running raw docker run commands.
  2. Better yet, use Docker in swarm mode (see the sketch below). You can reuse your manifest file that way, although you would need to remove some of the deprecated stuff (depends_on, for example) and expand the service definitions a bit. Using swarm mode, or some other orchestration tool, will make it easier to scale your service in the future, plus you get some other handy features like secrets, restart policies, network segregation, and so on.
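A minimal sketch of the swarm route, assuming your compose file uses a version 3.x schema (the stack name is a placeholder):

# one-time: turn the host into a single-node swarm
docker swarm init

# deploy the compose manifest as a stack
docker stack deploy -c docker-compose.yml srdc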

