CoreOS - Get Docker Container Name by Pid

CoreOS - get docker container name by PID?

Something like this?

$ docker ps -q | xargs docker inspect --format '{{.State.Pid}}, {{.ID}}' | grep "^${PID},"

[EDIT]

Disclaimer: This is for "normal" Linux. I don't know anything useful about CoreOS, so this may or may not work there.
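If you have the PID and want the container, there is also a reverse route on plain Linux: a containerized process's cgroup path usually embeds the full 64-character container ID. A rough sketch (the PID here is just a placeholder, and the exact cgroup layout varies with the cgroup version and driver):

```shell
#!/bin/sh
# Sketch: recover a container ID from a PID via its cgroup path.
# PID=$$ is only a placeholder; substitute the PID you care about.
PID=$$
cid=$(grep -o '[0-9a-f]\{64\}' "/proc/$PID/cgroup" 2>/dev/null | head -n1)
if [ -n "$cid" ]; then
    # Feed the ID to: docker inspect --format '{{.Name}}' "$cid"
    echo "container id: $cid"
else
    echo "PID $PID does not appear to belong to a container"
fi
```

This avoids iterating over every container, at the cost of depending on the cgroup naming scheme.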

How to retrieve docker container name from pid of docker exec on the host (MacOS)

The approach you're taking won't work on MacOS. Parsing the docker exec command to find the first non-option argument is probably the only way to find this.

On MacOS, there is a hidden Linux VM that actually runs the containers. This means there are three process ID namespaces (the host's, the VM's, and the container's). The other important distinction is that the docker exec process isn't the same process as the process that's running inside the container. So you're trying to match the MacOS host pid of the docker exec process against the Linux VM pid of the main container process, and they'll never match up.

You can get a shell inside the Linux VM (see for example this answer) and then the docker inspect pids will be visible. But now, because of Docker's client-server architecture, there is no docker exec process; it only exists on the MacOS host. You should be able to see the process it launches, but it's hard to tell that that process came from docker exec.

                   +------------------------------+
                   |           ..............     |
docker exec ... ------> dockerd ---> : process  : |
 (host pid)        |   (VM pid)    : (cont. pid): |
                   |           ..............     |
                   |      Linux VM (VM pid)       |
                   +------------------------------+

Access docker container running in CoreOS on Vagrant VM through browser on Ubuntu host

I can see two issues with what you're trying to do.

  1. You are looking for a web UI for administration of your private docker registry, but the docker image I think you are running (library/registry) does not provide this.
  2. You are having trouble accessing docker containers running in a Vagrant VM from your host OS.

Item 1: Please note that the private docker registry you are running does not provide an admin web UI. The service it's providing on port 5000 is not a website; it is for use by command line docker for pushing and pulling images in your private registry. If you need an admin web UI you might consider running an additional service such as https://github.com/atc-/docker-registry-web (which I have not tried but looks promising).

Item 2: If you want to access ports of a Vagrant-VM-hosted docker container from your Vagrant VM's host OS (presumably Windows or OSX, since if your host OS were Linux you probably wouldn't need Vagrant) then I recommend that you open an ssh tunnel to your CoreOS Vagrant VM, forwarding the docker registry port to your local host:

vagrant ssh -L5000:localhost:5000 -L8080:localhost:8080 -L80:localhost:80

And leave that ssh session open as long as you need network access to those docker containers' ports.

While these port forwarding tunnels are open, the ports you forwarded will be available on localhost (i.e. 127.0.0.1). You need not access them via some other IP address as you tried before. This would allow you to access, for example, a web server running in a docker container by visiting http://localhost/ or an application server running on port 8080 by visiting http://localhost:8080/ with a browser or other HTTP client such as curl.

Port 5000 is probably useless in this context, because the docker command line utilities that can access the registry don't currently run natively on Windows or OSX. To use your private docker registry, run something like this on your CoreOS Vagrant VM:

docker tag eb62f9df0657 localhost:5000/myimage
docker push localhost:5000/myimage

How to launch a docker with fleet given a dockerfile?

Here's a key line in your service file that should get you thinking:

ExecStartPre=/usr/bin/docker pull nginx-example

Where do you think this image is being pulled from?

In order to pull an image, you need to push it somewhere first. The easiest, of course, is DockerHub. You will need to create an account. I'll leave the exercise of creating the account, repository, and configuring authentication to you, as the documentation is readily available here.

Now, if you were to just try docker push nginx-example, it would fail, because it needs to be associated with your user account's namespace, via a tag. For the sake of this answer, let's assume your account is kimberlybf.

$ docker tag nginx-example:latest kimberlybf/nginx-example:latest - this will tag your image correctly for pushing to DockerHub.

$ docker push kimberlybf/nginx-example:latest - this will actually push your image. The image will be public, so don't put any sensitive data in your configs.

Then you would modify your Service, and replace the container tags accordingly, also remembering to give your container a name, e.g.:

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill nginx
ExecStartPre=-/usr/bin/docker rm nginx
ExecStartPre=/usr/bin/docker pull kimberlybf/nginx-example:latest
ExecStart=/usr/bin/docker run -p 80:80 --name nginx kimberlybf/nginx-example:latest
ExecStop=/usr/bin/docker stop nginx

Docker, CoreOS and fleet based deployments

There are a lot of moving parts here. The answer already posted is very good. I think there are going to be opinions in any answer you get. I thought I'd go through your punch list in my attempt at 100 bounty points :-)

I've been using CoreOS/Flannel/Kubernetes/Fleet everyday now for about 6 months. When you posted the url to the introduction I decided to watch it. Wow, great presentation. I think Brandon Philips is a very good teacher. I like the way he built upon each technology as he introduced it. I would recommend that tutorial to anyone.

CoreOS is a linux based operating system. It is very stripped down, nothing extra running. For me, it does these things:

  • Auto updates. Does this well. Dual partitions: it updates the non-active partition, swaps it to active, and falls back if needed (I think; I have never experienced a fallback). They have tackled the 'how to update your operating system after you deploy' issue and made it relatively painless.
  • systemd init system. This one took me a bit longer to like (being an /etc/init.d guy) but, after a while, it grows on you. There is a pretty steep learning curve. Once you get what is going on you will like how systemd keeps the machine running specific things, handles dependencies and restarts (if you want), listens on sockets (like a super-inetd) and spawns processes, and does d-bus (although I don't know much about this part yet). systemd lets you specify 'units', and units can have dependencies, pre and post processes, etc.
  • basic services. I've copied the brief description line from each of the services that are running on my CoreOS system.

    • systemd - It provides a system and service manager that runs as PID 1 and starts the rest of the system
    • docker - Docker is an open source project to pack, ship and run any application as a lightweight container
    • etcd - etcd is a distributed, consistent key-value store for shared configuration and service discovery
    • sshd - sshd (OpenSSH Daemon) is the daemon program for ssh(1). Together these programs replace rlogin and rsh, and provide secure encrypted communications between two untrusted hosts over an insecure network.
    • locksmithd - locksmith is a reboot manager for the CoreOS update engine which uses etcd to ensure that only a subset of a cluster of machines are rebooting at any given time. locksmithd runs as a daemon on CoreOS machines and is responsible for controlling the reboot behaviour after updates.
    • journald - systemd-journald is a system service that collects and stores logging data.
    • timesyncd - systemd-timesyncd is a system service that may be used to synchronize the local system clock with a remote Network Time Protocol server.
    • update_engine - the daemon that downloads and applies CoreOS updates (locksmithd, above, coordinates the resulting reboots)
    • udevd - systemd-udevd listens to kernel uevents. For every event, systemd-udevd executes matching instructions specified in udev rules. See udev(7).
    • logind - systemd-logind is a system service that manages user logins.
    • resolved - systemd-resolved is a system service that manages network name resolution. It implements a caching DNS stub resolver and an LLMNR resolver and responder.
    • hostnamed - This is a tiny daemon that can be used to control the host name and related machine meta data from user programs.
    • networkd - systemd-networkd is a system service that manages networks. It detects and configures network devices as they appear, as well as creating virtual network devices.

CoreOS doesn't necessarily require that everything that you want to run must be a container. It will run anything that a unix box will run. yum and apt-get are conspicuously missing, but wget is included. So, you can 'install' programs, libraries, even apt-get via wget and be on your way to polluting the CoreOS base. That wouldn't be good, though. You really do want to keep it pristine. To that end, they include a 'toolbox' which lets you run a container-like sandbox to do your work; it goes away when you log out of it.

My favorite part of CoreOS is the cloud-config. On first boot you can provide user_data called a cloud-config. It is a yaml file which tells the base CoreOS what to do when it boots the first time. This is where you install things like fleet, flannel, kubernetes, etc. It is a real easy way to get a repeatable install of a combination of your choosing on a VM. In a typical cloud-config I will write configuration files, copy files from other machines to install on the new machine, and create unit files that control the other processes we want CoreOS' systemd to manage (like flannel, fleet, etc). And it is completely repeatable.
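For illustration, a minimal cloud-config in that spirit might look like the following (hostname, file contents, and the unit are all made up for this sketch, not taken from any real deployment):

```yaml
#cloud-config
hostname: core-01
write_files:
  - path: /etc/motd
    content: |
      provisioned by cloud-config
coreos:
  units:
    - name: myapp.service
      command: start
      content: |
        [Unit]
        Description=My App
        Requires=docker.service
        After=docker.service

        [Service]
        ExecStart=/usr/bin/docker run --name myapp busybox /bin/sh -c 'while true; do sleep 60; done'
```

On first boot CoreOS reads this user_data, writes the files, installs the unit, and starts it; booting a second VM with the same file gives you an identical machine.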

Here is another interesting thing about CoreOS. You can modify the dependency and configuration of existing units. For example, CoreOS starts docker. But, I want to modify the startup sequence of docker, so I can add a drop-in configuration that augments the existing system docker configuration. I use this to drop-in the dependency for flannel before docker starts, so I can configure docker to use a flannel provided network. This isn't necessarily CoreOS, but, it does make it all fit together.
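Concretely, a drop-in is just a small file under the service's `.d` directory; the flannel-before-docker dependency described above looks roughly like this (the path and filename follow the usual systemd drop-in convention):

```ini
# /etc/systemd/system/docker.service.d/40-flannel.conf
[Unit]
Requires=flanneld.service
After=flanneld.service
```

systemd merges this with the stock docker.service shipped by CoreOS, so docker now waits for flannel without the base unit ever being touched.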

I think you can use cloud-config with Ubuntu as well as CoreOS, and you can do the same things. So, I think the benefit you get from CoreOS over Ubuntu would be that you get a new release often, the operating system is auto-updated, and you don't have anything 'extra' running (it's lean, and a reduced attack surface falls out of that). CoreOS is tuned for docker (it is already running) and ubuntu doesn't have it already running. Although, you can create a cloud-config file that will make ubuntu run docker... In summary, I think you have CoreOS understood.

Another thing that you can get with CoreOS is support, directly from the company, either paid or unpaid. I have had many questions answered by the people at CoreOS via this forum and CoreOS Dev/CoreOS User Google groups.

Your fleet description is also pretty good. Fleet manages a cluster. A cluster is one or more CoreOS machines. So, if you are going to use fleet you must use CoreOS, I guess this would be another of those benefits of CoreOS over Ubuntu.

Much like how a Unit File for systemd controls running a process on a host, a Unit File for fleetd controls running a process on a cluster. There is a bit of syntactic sugar, but a Unit file for fleet is about the same as a unit file for systemd. They fit very well together. Fleet's unit files are saved in etcd's database, so once ingested a unit is persistent: even if the machine(s) hosting the unit's service go down, the unit description still exists in etcd.
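The syntactic sugar is mostly the extra [X-Fleet] section; a fleet unit might look like this (the service name and metadata values are invented for illustration):

```ini
[Unit]
Description=My App (scheduled by fleet)

[Service]
ExecStart=/usr/bin/docker run --name myapp busybox /bin/sh -c 'while true; do sleep 60; done'

[X-Fleet]
# Only schedule on machines tagged disk=ssd, and never
# alongside another instance of the same template.
MachineMetadata=disk=ssd
Conflicts=myapp@*.service
```

You would hand it to the cluster with `fleetctl submit myapp.service` followed by `fleetctl start myapp.service`.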

Fleet also has commands for listing my machines in my cluster, listing a unit file, showing the units that are running, etc. Basically you can submit units to run on the cluster (or all machines, or on a specific kind of machine (like with ssd drives), or on the same machine as something else is running (affinity), etc, etc).

Fleet keeps it running. If a machine goes away, its units are going to be run on some other machine in the cluster.

In the tutorial you reference, Brandon uses Fleet to launch Kubernetes. It is very simple to do. By making the Fleet unit files place Kubernetes on all machines in the fleet cluster, Kubernetes automatically picks up machines as they are added to (or removed from) the cluster and schedules work on them. I have run my Kubernetes cluster like this as well. However, I don't do that much anymore. I am sure there is a benefit that I don't see, but, I feel like it is not necessary in my environment. Since I already boot my machines with a cloud-config file, it is trivial to put the Kubernetes node services directly in there. In fact, with cloud-config, if I wanted to use Fleet to boot the Kubernetes stuff, I would have to write the Fleet unit files, start Fleet, and submit the unit files I wrote to Fleet, whereas I could just write a unit file to start the Kubernetes node. But I digress...

Fleet is a scheduling mechanism, just like Kubernetes. However, Fleet can start any executable just like systemd via a unit file, where Kubernetes is geared towards containers. Kubernetes allows definition of:

  • replication controllers
  • services
  • pods

    • containers

(other stuff as well).

So, the assertion that Fleet is just a different 'layer' of scheduling is a good one. You might add that Fleet schedules different things. In my work I don't use the Fleet layer, I just jump directly to the Kubernetes because I am working only with containers.

Finally, the assertion about flannel is incorrect. Flannel uses etcd for its database. Flannel creates a private subnet for each host and routes between them. The flannel network is handed to docker, and docker is told to assign container ip addresses from it. So, docker processes that use flannel can communicate with each other over ip. All of the port mapping stuff can be skipped, since each container gets its own ip address. These docker processes can communicate intra- and inter-machine over the flannel network. I could be wrong, but I don't think there is any connection between Fleet and flannel. Also, I don't think etcd or Fleet use flannel to route their data; etcd and Fleet route whether or not flannel is being used. Docker containers route their traffic over flannel.
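For completeness, flannel itself is configured through a single etcd key; a typical value (the subnet here is arbitrary) is:

```json
{ "Network": "10.1.0.0/16" }
```

It is written with something like `etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'` before flanneld starts; flanneld then carves a per-host subnet out of that range and hands it to docker (via docker's --bip option).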

-g


