Wednesday, March 29, 2017

Docker Part 2
Tip #1 — Use a smaller base image
Tip #2 — Don’t install debug tools like vim/curl
One technique is to have a development Dockerfile and a production Dockerfile. During development, have all of the tools you need, and then when deploying to production remove the development tools.

Tip #3 — Minimize Layers
Each line of a Dockerfile is a step in the build process that produces a layer, and each layer adds to the image size. Combine your RUN statements to reduce the number of layers. Instead of:

FROM debian
RUN apt-get install -y <packageA>
RUN apt-get install -y <packageB>

use:

FROM debian
RUN apt-get install -y <packageA> <packageB>

A drawback of this approach is that you’ll have to rebuild the entire layer each time you add a new library. If you aren’t aware, Docker doesn’t rebuild layers it has already built; it caches the Dockerfile line by line. Try changing one character of a Dockerfile you’ve already built, and then rebuild: you’ll notice that each step above that line is recognized as already built, but the line you changed (and every line after it) is rebuilt.

A strategy I recommend is that while in development and testing dependencies, separate out the RUN commands. Once you’re ready to deploy to production, combine the RUN statements into one line.
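As a sketch of that workflow (package names are placeholders), a development Dockerfile keeps installs separate so each new dependency reuses the cache, while the production version collapses them into one layer:

```dockerfile
# Development: one RUN per package, so adding a new package
# only rebuilds from that line down.
FROM debian
RUN apt-get update && apt-get install -y <packageA>
RUN apt-get install -y <packageB>
```

```dockerfile
# Production: a single RUN, producing one layer.
FROM debian
RUN apt-get update && apt-get install -y <packageA> <packageB>
```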

Tip #4 — Use --no-install-recommends on apt-get install
Adding --no-install-recommends to apt-get install -y can dramatically reduce image size by skipping packages that aren’t strict dependencies but are recommended alongside the packages you request.

For Alpine images, apk add commands should have --no-cache added.
Tip #5 — Add rm -rf /var/lib/apt/lists/* to the same layer as apt-get install
Add rm -rf /var/lib/apt/lists/* at the end of the apt-get install -y command to clean up the package lists after installing.

For yum, add yum clean all.
Also, if you install wget or curl only to download some package, remember to combine everything in one RUN statement; at the end of that statement, apt-get remove curl or wget once you no longer need them. This advice applies to any package that is only needed temporarily.
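Putting Tips #4 and #5 together, a single cleanup-aware RUN might look like this (the package name and URL are placeholders):

```dockerfile
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl <packageA> \
 && curl -o /tmp/archive.tar.gz <url> \
 && tar -xzf /tmp/archive.tar.gz -C /opt \
 && apt-get remove -y curl \
 && rm -rf /var/lib/apt/lists/* /tmp/archive.tar.gz
```

Because everything happens in one layer, the downloaded archive and the temporary curl install never show up in the final image size.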

Tip #6 — Use FromLatest

FromLatest will lint your Dockerfile and check for even more steps you can perform to reduce your image size.
Since 1.13.0, Docker includes a system sub-command with a few useful commands. One of them is docker system df, which reports back disk space usage stats of your Docker installation:

$ docker system df

My build cache is empty because I run docker system prune as a daily scheduled task. If you want to see how to set that up on any OS, check out Docker Tip #32.

The docker images command also tells you the size of each image (last column).
To see the file size of your containers, you can use the -s argument of docker ps:
docker ps -s

    docker run --name ubuntu_bash --rm -i -t ubuntu bash
This will create a container named ubuntu_bash and start a Bash session. A more detailed breakdown of the options and arguments used in the example is:

  • --name assigns a name to the container, in this case ubuntu_bash
  • --rm removes the container automatically when it exits
  • -i is short for --interactive; this keeps STDIN open even if not attached to the running container
  • -t is short for --tty; it allocates a pseudo-terminal so you get an interactive shell in the container
  • The image for the container follows the options, here it is the image ubuntu
  • The last part that follows the image, is the command you want to run: bash

This is for when you want to run a command in an existing container. This is better if you already have a container running and want to change it or obtain something from it. For example, if you are using Docker Compose you will probably spin-up multiple containers and you may want to access one or more of them once they are created.
    docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Docker exec also has a range of options and arguments you can specify, although you must state the container and command to execute. You can start an interactive bash shell on a container named ubuntu_bash using:
    docker exec -it ubuntu_bash bash
Here the options -it have the same effect as with run. An example with more options and arguments is:
    docker exec -d -w /temp ubuntu_bash touch <filename>

  • -w followed by the directory or file path allows you to state which working directory you want to run the command in.
  • -d or --detach means the command runs in detached mode, so you can continue using your terminal session while it runs in the background. Don’t use this if you want to see what the command sends to STDOUT.
  • The command is touch, used to create a file with the given name inside the /temp directory of the running container ubuntu_bash

Use the “docker run” command to create containers from images that exist on, or are accessible from, localhost, and “docker exec” to operate on an existing container. To keep a container up and running, a process inside it must stay alive; when that process halts, the whole container enters the stopped state. The container still exists, can be started again with “docker start”, and will run until its process next halts.
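A quick sketch of the difference (container and image names here are arbitrary):

```shell
docker run -d --name demo alpine sleep 300   # run: creates a new container from the alpine image
docker exec demo ps aux                      # exec: runs a command inside the running container
docker stop demo                             # the process halts; the container enters the stopped state
docker start demo                            # the container still exists and can be started again
```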

Simply speaking, “docker run” targets Docker images while “docker exec” targets pre-existing Docker containers. When using “docker run”, a new container is created and stopped (not removed) after the command has finished running. “Docker exec” needs a running container to take the command.
Here are some command ref:
To use binaries in a Docker image:
docker run #{image} "COMMAND to run"
To keep an image's process running in daemon/detached mode (settings or configs are inherited):
docker run -d #{image}
To enter a container and run commands interactively:
docker exec -it #{container} sh
docker exec -it #{container} bash
To use binaries in a Docker container (always provide the full path for the command):
docker exec -it #{container} "COMMAND to run"
To run a named container from an image (so its settings and configs persist):
docker run --name #{container_name} #{image}
To enter an image and run commands interactively:
docker run -it #{image}
To start a container in the background (as a daemon) with the process defined in the Dockerfile up and running:
docker run -d --name #{container_name} #{image}
How to build an image with custom name without using yml file:
docker build -t image_name .
How to run a container with custom name:
docker run -d --name container_name image_name
Even though I’m marc, the container is running as root and therefore has access to everything root has access to on this server. This isn’t ideal; running containers this way means that every container you pull from Docker Hub could have full access to everything on your server (depending on how you run it).

The recommendation here is to create a user with a known uid in the Dockerfile and run the application process as that user. The start of a Dockerfile should follow this pattern:
RUN groupadd -g 999 appuser && \
    useradd -r -u 999 -g appuser appuser
USER appuser

But when you FROM an image that runs as non-root, your container will inherit that non-root user. If you need to create your own user or perform operations as root, be sure to add USER root somewhere near the top of your Dockerfile, then switch back with USER appuser once you’re done.

This works and does the same thing as creating a user in the Dockerfile, but it leaves it up to the user to remember to run the container securely. Specifying a non-root user in the Dockerfile makes the container run securely by default.
$ docker run --user 1001 -v /root/secrets.txt:/tmp/secrets.txt <img>
cat: /tmp/secrets.txt: Permission denied
The linux kernel is responsible for managing the uid and gid space, and it’s kernel-level syscalls that are used to determine if requested privileges should be granted. For example, when a process attempts to write to a file, the uid and gid that created the process are examined by the kernel to determine if it has enough privileges to modify the file. The username isn’t used here, the uid is used.
When running Docker containers on a server, there’s still a single kernel. A huge part of the value that containerization brings is that all of these separate processes can continue to share a single kernel. This means that even on a server that is running Docker containers, the entire world of uids and gids is controlled by a single kernel.

docker run -d ubuntu:latest sleep infinity
Docker containers run as the root user by default. As a result, all running processes, shared volumes, folders, and files will be owned by root. This becomes a real problem when we need to modify files and folders in shared folders from the host OS or from inside the container.

In order to solve this issue, we need to match the host OS and Docker container user's UIDs. The root user's UID is always 0. Running Docker containers as root is also considered a bad security practice.

ARG user=inanzzz

RUN useradd -m -d /home/${user} ${user} \

 && chown -R ${user} /home/${user}

USER ${user}

Creates a user and group called inanzzz.

Lets the inanzzz user recursively own the home directory /home/inanzzz.

Switches to the inanzzz user to run the container.
$ docker build --build-arg user=what_user .

services:
  solr:
    image: solr
    ports:
      - "8983:8983"
    volumes:
      - data:/opt/solr/server/solr/mycores
    command:
      - solr-precreate
      - mycore
The cp command can be used to copy files. One specific file can be copied like:
docker cp foo.txt mycontainer:/foo.txt
docker cp mycontainer:/foo.txt foo.txt
Multiple files contained by the folder src can be copied into the target folder using:
docker cp src/. mycontainer:/target
docker cp mycontainer:/src/. target
On Docker for Mac, you will find it inside the VM:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Once you get the tty running you can navigate to /var/lib/docker
Named volumes look like this postgres:/var/lib/postgresql/data. If you’re using Docker Compose, it will automatically create the volume for you when you first do a docker-compose up, but if not you would need to create it yourself by running docker volume create postgres.
The name doesn’t need to be postgres, but it’s a best practice to name your volumes so you know what they refer to later. You can prefix them with your project’s name to avoid name conflicts.
When you use a volume like this, Docker will manage the volume for you. On Linux, that volume will get saved to /var/lib/docker/volumes/postgres/_data.
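For example, a minimal Compose service using a named volume might look like this (service and volume names are illustrative):

```yaml
services:
  db:
    image: postgres
    volumes:
      - postgres:/var/lib/postgresql/data

volumes:
  postgres:
```

Declaring the volume under the top-level volumes: key is what makes Compose create and manage it for you.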

Path based volumes serve the same purpose as named volumes, except you’re responsible for managing where the volume gets saved on the Docker host. For example if you did ./postgres:/var/lib/postgresql/data then a postgres/ directory would get created in the current directory on the Docker host.
If you go this route you’ll notice that the permissions will be the same as what they are set to in your Dockerfile or what you set with the --user flag when running the container. If you did none of that then the contents of that folder will be owned by root.
Back before named volumes existed, it was always a question on where you should store these volumes. Some people put them in a data/ folder relative to your project

My rule of thumb is, if you’re dealing with data that you’re not actively dealing with directly then use a named volume and let Docker manage it.

However, if you’re using a volume in development and you want to mount in the current directory so you can code your project without rebuilding then by all means use a path based volume because that’s 100% normal and is considered a best practice. For example, you may see that referenced as .:/app in a docker-compose.yml file.
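A development-time bind mount in docker-compose.yml might look like this (paths are illustrative):

```yaml
services:
  app:
    build: .
    volumes:
      - .:/app   # mount the current directory so code changes don't require a rebuild
```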
docker exec -e COLUMNS="`tput cols`" -e LINES="`tput lines`" -ti container bash
docker exec -t container_name /bin/bash -c "export COLUMNS=`tput cols`; export LINES=`tput lines`; exec bash"
Links allow you to define extra aliases by which a service is reachable from another service. They are not required to enable services to communicate; by default, any service can reach any other service at that service’s name. In the following example, db is reachable from web at the hostnames db and database:
  web:
    build: .
    links:
      - "db:database"

From my understanding both are somewhat deprecated: depends_on is no longer supported in swarm mode and has been replaced by health checks, and links have been replaced by networks.
One could use command: tail -f /dev/null in docker-compose to keep the container running.
Figured it out, use bash -c.
command: bash -c "python migrate && python runserver"
As far as I know, the CLI does not allow searching/listing tags in a repository.

But if you know which tag you want, you can pull it explicitly by adding a colon and the tag name: docker pull ubuntu:saucy
- docker search
- docker run --name my_solr -d -p 8983:8983 -t solr:7.4
docker: Error response from daemon: Error processing tar file (exit status 1): write: no space left on device
I have already run docker system prune --all and I tried the commands in #600. Now I have no images and no containers left, but I still cannot do anything.
  # Just specify a path and let the Engine create a volume
  - /var/lib/mysql

  # Specify an absolute path mapping
  - /opt/data:/var/lib/mysql

  # Path on the host, relative to the Compose file
  - ./cache:/tmp/cache

  # User-relative path
  - ~/configs:/etc/configs/:ro

  # Named volume
  - datavolume:/var/lib/mysql

docker volume prune
It’s hidden inside the xhyve virtual machine. But you don’t really need to look inside it. If you’re really curious you can use the magic screen command to get a shell in the VM, but mostly it’s all internal Docker details.
If you’re trying to access the volume data, I think the usual way is to launch another container: docker run -v test:/test -it ubuntu:16.04 bash will get a shell with the volume data visible in /test.
How do I mount a host directory as a volume in docker compose
From the looks of it you could do the following on your docker-compose.yml
    - ./:/app
      - .:/var/www/project:cached
In between applications and the network sits Docker networking, affectionately called the Container Network Model or CNM. It’s CNM that brokers connectivity for your Docker containers and also what abstracts away the diversity and complexity so common in networking. The result is portability and it comes from CNM’s powerful network drivers.

The most commonly used built-in network drivers are bridge, overlay, and macvlan.

The bridge driver creates a private network internal to the host so containers on this network can communicate. External access is granted by exposing ports to containers. Docker secures the network by managing rules that block connectivity between different Docker networks.

Behind the scenes, the Docker Engine creates the necessary Linux bridges, internal interfaces, iptables rules, and host routes to make this connectivity possible. In the example highlighted below, a Docker bridge network is created and two containers are attached to it. With no extra configuration the Docker Engine does the necessary wiring, provides service discovery for the containers, and configures security rules to prevent communication to other networks. A built-in IPAM driver provides the container interfaces with private IP addresses from the bridge network's subnet.
 docker network create -d bridge mybridge
 docker run -d --net mybridge --name db redis
 docker run -d --net mybridge -e DB=db -p 8000:5000 --name web chrch/web

The bridge driver is a local scope driver, which means it only provides service discovery, IPAM, and connectivity on a single host. Multi-host service discovery requires an external solution that can map containers to their host location. This is exactly what makes the overlay driver so great.
To view Docker networks, run:
docker network ls
To get further details on networks, run:
docker network inspect

Docker creates three networks automatically on install: bridge, none, and host. Specify which network a container should use with the --net flag. If you create a new network my_network (more on this later), you can connect your container (my_container) with:

docker run --net=my_network my_container

All Docker installations represent the docker0 network with bridge; Docker connects to bridge by default. Run ifconfig on the Linux host to view the bridge network.

When you run the following command in your console, Docker returns a JSON object describing the bridge network (including information regarding which containers run on the network, the options set, and listing the subnet and gateway).

docker network inspect bridge

Docker automatically creates a subnet and gateway for the bridge network, and docker run automatically adds containers to it. If you have containers running on your network, docker network inspect displays networking information for your containers.

Any containers on the same network may communicate with one another via IP addresses. Docker does not support automatic service discovery on bridge. You must connect containers with the --link option in your docker run command.

The Docker bridge supports port mappings and docker run --link, allowing communication between containers on the docker0 network. However, these techniques are error-prone and add unnecessary complexity. Just because you can use them does not mean you should; it’s better to define your own networks instead.


The none driver offers a container-specific network stack that lacks an external network interface. Such a container has only a local loopback interface.


The host driver enables a container to attach to your host’s network (meaning the configuration inside the container matches the configuration outside the container)
docker-compose run [service] bash
$ docker-compose run --rm web ./ [command] [arguments]
docker-compose run tells Docker that we’re about to run a command. The --rm option removes this container when we’re finished with it. web identifies the service we want to run the command in; commands will generally be in the web service.
$ docker-compose run --rm web ./ test [app]
docker-compose exec
docker-compose exec worker ping -c 3 db
This will launch a new process in the already running worker container and ping the db container 3 times.

The run command is useful if you need to run a containerized command as a one-off within your application. For example, if you use a package manager such as composer to update the dependencies of your project that is stored on a volume, you could run something like this:

$ docker-compose run --volume data_volume:/app composer install
This would run the composer container with the install command and mount the data_volume to /app within the container.

docker-compose up -d --scale worker=3
  1. docker logs <container_id>
    Hopefully you’ve already tried this, but if not, start here. This’ll give you the full STDOUT and STDERR from the command that was run initially in your container.
  2. docker stats <container_id>
    If you just need to keep an eye on the metrics of your container to work out what’s gone wrong, docker stats can help: it’ll give you a live stream of resource usage, so you can see just how much memory you’ve leaked so far.
  3. docker cp <container_id>:/path/to/useful/file /local-path
    Often just getting hold of more log files is enough to sort you out. If you already know what you want, docker cp has your back: copy any file from any container back out onto your local machine, so you can examine it in depth (especially useful for analysing heap dumps).
  4. docker exec -it <container_id> /bin/bash
    Next up, if you can run the container (if it’s crashed, you can restart it with docker start <container_id>), shell in directly and start digging around for further details by hand.
  5. docker commit <container_id> my-broken-container &&
    docker run -it my-broken-container /bin/bash
    Can’t start your container at all? If you’ve got an initial command or entrypoint that immediately crashes, Docker will immediately shut it back down for you. This can make your container unstartable, so you can’t shell in any more, which really gets in the way.

  1. Have a failing entrypoint instead? There’s an entrypoint override command-line flag too.

--entrypoint="": Overwrite the default entrypoint set by the image
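For example, to get a shell in an image whose entrypoint crashes immediately (the image name here is a placeholder):

```shell
docker run -it --entrypoint /bin/sh my-broken-container
```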
restart: unless-stopped
If you use the host network driver for a container, that container’s network stack is not isolated from the Docker host. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application will be available on port 80 on the host’s IP address.
docker network create --driver overlay proxy
The proxy network will be dedicated to the proxy container and services that will be attached to it.
curl -o docker-compose-go-demo.yml \

docker stack deploy  -c docker-compose-go-demo.yml go-demo

docker stack ps go-demo
docker-machine create --driver virtualbox manager1
Keep in mind that using docker-machine utility, you can SSH into any of the machines as follows:
docker-machine ssh <machine-name>
docker-machine ip manager1
The first thing to do is initialize the Swarm. We will SSH into the manager1 machine and initialize the swarm in there.
docker@manager1:~$ docker swarm init --advertise-addr <MANAGER-IP>
To add a worker to this swarm, run the following command:
docker swarm join \
 --token SWMTKN-1-5mgyf6ehuc5pfbmar00njd3oxv8nmjhteejaald3yzbef7osl1-ad7b1k8k3bl3aa3k3q13zivqd \
To add a manager to this swarm, run ‘docker swarm join-token manager’ and follow the instructions.
docker node ls

in the manager:
docker service create --replicas 5 -p 80:80 --name web nginx
docker service ps web
docker service ls
$ docker service scale web=8
$ docker node inspect self
Or if you want to check up on the other nodes, give the node name. For e.g.
$ docker node inspect worker1
But sometimes we have to bring a node down for maintenance. This is done by setting its availability to Drain mode.
docker node update --availability drain worker1
docker service rm web
Rolling update:
$ docker service update --image <imagename>:<version> web

docker-compose up
version: "3"
services:
  app:
    image: USER/You-Pre-Build-Image
    environment:
      - VIRTUAL_HOST=localhost
    volumes:
      - logs:/app/out/
    command: ["npm","start"]
    cap_add:
      - NET_ADMIN
      - SYS_ADMIN
    deploy:
      resources:
        limits:
          cpus: '0.001'
          memory: 50M
        reservations:
          cpus: '0.0001'
          memory: 20M

volumes:
  logs:

networks:
  default:
    driver: overlay
Docker will autostart any container with a RestartPolicy of 'always' when the docker service initially starts. You won't find any evidence of this within cron or any other normal system startup scripts; you'll have to dig into the container configuration to find it.
docker inspect my-container (Look for RestartPolicy in the output)
I've mostly had this situation occur when a container was created with --restart always, and the situation later changed such that I no longer wanted this to happen.
After docker 1.11, this is easy to fix
docker update --restart=no my-container
You can start your container with --restart=unless-stopped.

When using the Docker for Mac application, the containers are stored within the VM; /var/lib/docker does not exist on the host itself. As noted above, you can use the screen command to get a shell in the VM if you’re curious, or launch another container with the volume mounted to inspect its data.

List Docker Container Names and IPs

docker ps -q | xargs -n 1 docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} {{ .Name }}' | sed 's/ \// /'
Note: This command is deprecated. Use the up command with the --scale flag instead.
Usage: scale [SERVICE=NUM...]
Sets the number of containers to run for a service.
Numbers are specified as arguments in the form service=num. For example:
docker-compose scale web=2 worker=3
docker history couchbase
docker images couchbase
docker-machine ip default returns "Host does not exist: "default"" #27
docker-machine ip
Sounds like you may need to run docker-machine create default first.
Combining the above with Dockerfile.old and polishing it a little, we get the following brand-new Dockerfile:

FROM maven:3.5.2-alpine as builder
MAINTAINER zhouzhipeng <>
COPY src .
COPY pom.xml .
# Compile and package (the jar is generated under /app/target)
RUN mvn package -Dmaven.test.skip=true

FROM openjdk:8-jre-alpine
MAINTAINER zhouzhipeng <>
COPY --from=builder /app/target/docker-multi-stage-demo-1.0-SNAPSHOT.jar .
# Run the main class
CMD java -cp docker-multi-stage-demo-1.0-SNAPSHOT.jar com.zhouzhipeng.HelloWorld

Then, run the familiar docker build command again:
docker build -t zhouzhipeng/dockermultistagedemo-new .
Two things are different in the Dockerfile above:
  1. There are multiple FROM statements
  2. The COPY command gains a --from=builder flag
Stages exchange files, so the COPY command is extended with --from=<name> to specify which earlier stage to copy files from. Its full format is COPY --from=<name> <src> <dest>.
But when we build an application, we often bake the source code into the image as well. For a compiled language like Go this is clearly unnecessary: at runtime you only need the final binary, and shipping the source in the image carries extra risk. Even for scripting languages, the build may pull in deployment tooling, which likewise bloats the image.

With multi-stage builds, you can use multiple FROM statements in a single Dockerfile. Each FROM instruction can use a different base image and marks the start of a new build stage. You can easily copy files from one stage to another, keeping only what you need in the final image.
FROM golang AS build-env
ADD . /go/src/app
WORKDIR /go/src/app
RUN go get -u -v
RUN govendor sync
RUN GOOS=linux GOARCH=386 go build -v -o /go/src/app/app-server

FROM alpine
RUN apk add -U tzdata
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
COPY --from=build-env /go/src/app/app-server /usr/local/bin/app-server
EXPOSE 8080
CMD [ "app-server" ]
By default, build stages are unnamed and you refer to them by index, starting from 0 for the first FROM instruction. You can also name a stage with AS, as we named the first stage build-env here; later stages then reference it with --from=build-env.
docker run -i -t -h myhost centos:6 /bin/bash

I think the following is better because docker containers usually don't have 'hostname' installed, therefore I would use the head command:
echo $(head -1 /etc/hosts | cut -f1) $HOST_NAME >> /etc/hosts
docker run --rm -it --cap-add SYS_ADMIN alpine sh
/ # hostname foobar
/ # hostname
However, before adding this capability, think twice: SYS_ADMIN is a "collection" of capabilities and is highly insecure (it allows breaking out of the container, among other things).

EXPOSE 8983-8986
USER builder

Use a .dockerignore file

The current working directory where you are located when you issue a docker build command is called the build context, and the Dockerfile must be somewhere within this build context. By default, it is assumed to be in the current directory, but you can specify a different location by using the -f flag. Regardless of where the Dockerfile actually lives, all of the recursive contents of files and directories in the current directory are sent to the Docker daemon as the build context. Inadvertently including files that are not necessary for building the image results in a larger build context and larger image size. These in turn can increase build time, time to pull and push the image, and the runtime size of containers. To see how big your build context is, look for a message like the following, when you build your Dockerfile.
Sending build context to Docker daemon  187.8MB
To exclude files which are not relevant to the build, without restructuring your source repository, use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
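A typical .dockerignore might exclude version control metadata, dependencies, and logs (the entries here are common examples, not a prescription):

```
.git
node_modules
*.log
tmp/
```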

Each container should have only one concern

Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner.
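Sketched as a Compose file, that three-container stack might look like this (the image choices are illustrative):

```yaml
services:
  web:
    build: .
  db:
    image: postgres
  cache:
    image: redis
```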
RUN apt-get update && apt-get install -y \
  bzr \
  cvs \
  git \
  mercurial

FROM <image>, or FROM <image>:<tag>, or FROM <image>@<digest>
The latest tag assigned to an image simply means that it's the image that was last built without an explicit tag provided.

The WORKDIR instruction sets the working directory for any CMD, RUN, ENTRYPOINT, COPY, and ADD instructions that come after it in the Dockerfile. The syntax is WORKDIR /PATH. You can have multiple WORKDIR instructions in one Dockerfile; if a relative path is provided, it will be relative to the path of the previous WORKDIR instruction.
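For example, relative WORKDIR paths accumulate:

```dockerfile
WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd   # prints /a/b/c
```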

ADD config.json projectRoot/ will add the config.json file to <WORKDIR>/projectRoot/
ADD config.json /absoluteDirectory/ will add the config.json file to the /absoluteDirectory/
Note that ADD shouldn't be used if you don't need its special features, such as unpacking archives, you should use COPY instead.

COPY supports only the basic copying of local files into the container. On the other hand, ADD gives some more features, such as archive extraction, downloading files through URL, and so on. Docker's best practices say that you should prefer COPY if you do not need those additional features of ADD.

Layering is the core concept in Docker. RUN takes a command as its argument and runs it to create a new layer.

Imagine a case where you use the RUN command to pull source code from a Git repository, using git clone as the first step of building the image.

In our example, we should always combine RUN apt-get update with apt-get install in the same RUN statement, which will create just a single layer; for example:

RUN apt-get update \
&& apt-get install -y openjdk-8-jre \
&& apt-get install -y nodejs \
&& apt-get clean

The purpose of a CMD instruction is to provide defaults for an executing container. You can think of the CMD instruction as a starting point of your image, when the container is being run later on. This can be an executable, or, if you specify the ENTRYPOINT instruction (we are going to explain it next), you can omit the executable and provide the default parameters only.

CMD ["executable","parameter1","parameter2"]: This is a so called exec form. It's also the preferred and recommended form. The parameters are JSON array, and they need to be enclosed in square brackets. The important note is that the exec form does not invoke a command shell when the container is run. It just runs the executable provided as the first parameter. If the ENTRYPOINT instruction is present in the Dockerfile, CMD provides a default set of parameters for the ENTRYPOINT instruction.
CMD command parameter1 parameter2: This is the shell form of the instruction. This time the shell (if present in the image) will process the provided command; the specified binary is executed with an invocation of the shell using /bin/sh -c. This means that if you want to display the container's hostname, for example using CMD echo $HOSTNAME, you should use the shell form of the instruction.

everything started through the shell will be started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable will not be the container's PID 1, and will not receive Unix signals, so your executable will not receive a SIGTERM from docker stop <container>.
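To make sure your process receives signals, prefer the exec form (node and server.js here are placeholders for whatever your image runs):

```dockerfile
# Shell form: PID 1 is /bin/sh -c, so the app never sees SIGTERM
CMD node server.js

# Exec form: the app itself is PID 1 and receives SIGTERM from docker stop
CMD ["node", "server.js"]
```

Only the last CMD in a Dockerfile takes effect; the two are shown together here purely for contrast.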


RUN is a build-time instruction; CMD is a runtime instruction. The RUN instruction is used while building the image: it creates a new layer on top of the previous one, which is then committed. The command specified through CMD, on the other hand, is executed when a container is launched by running docker run on the newly created image.

Omitting a tag when building an image results in the image being tagged latest.

docker build . -t rest-example
The dot as the first parameter specifies the build context for the docker build command. In our case, it is just the root directory of our little microservice.

docker image ls
docker run -it rest-example
FROM java:8
RUN apt-get update
RUN apt-get install -y maven
COPY pom.xml /app/pom.xml
COPY src /app/src
WORKDIR /app
RUN ["mvn", "package"]
CMD ["/usr/lib/jvm/java-8-openjdk-amd64/bin/java", "-jar", "target/rest-example-0.1.0.jar"]

The ENTRYPOINT instruction allows you to configure a container that will run as an executable.

The ENTRYPOINT specifies a command that will always be executed when the container starts. The CMD, on the other hand, specifies the arguments that will be fed to the ENTRYPOINT. Docker has a default ENTRYPOINT which is /bin/sh -c but does not have a default CMD.

If you want shell processing, you need either to use the shell form or to execute a shell directly in the exec form:
ENTRYPOINT [ "sh", "-c", "echo $HOSTNAME" ]
Exactly like the exec form of the CMD instruction, the exec form of ENTRYPOINT does not invoke a command shell. This means that normal shell processing does not happen. For example, ENTRYPOINT [ "echo", "$HOSTNAME" ] will not do variable substitution on the $HOSTNAME variable.
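The substitution difference is easy to reproduce outside Docker (DEMO_VAR here is a made-up variable for illustration): an argument passed verbatim, as in the exec form, is never expanded, while /bin/sh -c, as in the shell form, expands it first:

```shell
export DEMO_VAR=world
# Exec-form analogue: the literal string is passed through untouched
printf '%s\n' '$DEMO_VAR'   # prints: $DEMO_VAR
# Shell-form analogue: /bin/sh -c substitutes the variable first
sh -c 'echo $DEMO_VAR'      # prints: world
```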

ENTRYPOINT command parameter1 parameter2 is the shell form. Normal shell processing will occur, but this form will ignore any CMD parameters or docker run command-line arguments. Also, your command will not be PID 1, because it will be executed by the shell. As a result, if you then run docker stop <container>, the container will not exit cleanly, and the stop command will be forced to send a SIGKILL after the timeout.

FROM busybox
ENTRYPOINT ["/bin/ping"]
CMD ["localhost"]

The CMD instruction, as you will remember from its description, sets the default command and/or parameters, which can be overwritten from the command line when you run the container. The ENTRYPOINT is different: its command and parameters cannot be overwritten using the command line. Instead, all command-line arguments will be appended after the ENTRYPOINT parameters. This way you can, in effect, lock in the command that will always be executed during container start.

Unlike the CMD parameters, the ENTRYPOINT command and parameters are not ignored when a Docker container runs with command-line parameters.
$ docker run ping-example
ENTRYPOINT should be defined when using the container as an executable
You should use the CMD instruction as a way of defining default arguments for the command defined as ENTRYPOINT or for executing an ad-hoc command in a container
CMD will be overridden when running the container with alternative arguments
ENTRYPOINT sets the concrete default application that is used every time a container is created using the image
If you couple ENTRYPOINT with CMD, you can remove an executable from CMD and just leave its arguments which will be passed to ENTRYPOINT

$ docker run -p 8080:8080 -it rest-example
The fundamental difference between VOLUME and -v is this: -v mounts existing files from your host operating system inside your Docker container, while VOLUME creates a new, empty volume on your host and mounts it inside your container.

docker inspect

ENV CONFIG_TYPE=file CONFIG_LOCATION="home/Jarek/my app/config.json"
ENV PATH /var/lib/tomcat8/bin:$PATH
You can also use ENV to set the often-modified version numbers so that upgrades are easier to handle, as seen in the following example:

ENV TOMCAT_VERSION_MAJOR 8
ENV TOMCAT_VERSION 8.5.23
RUN mkdir -p /usr/Jarek && curl -SL https://archive.apache.org/dist/tomcat/tomcat-$TOMCAT_VERSION_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz | tar xz -C /usr/Jarek
ENV PATH /usr/Jarek/apache-tomcat-$TOMCAT_VERSION/bin:$PATH

docker run --env <key>=<value>
USER tomcat
You can use the USER instruction if an executable can be run without root privileges. The Dockerfile can contain user and group creation instructions like this one:

RUN groupadd -r tomcat && useradd -r -g tomcat tomcat
An ARG variable definition comes into effect from the line on which it is defined in the Dockerfile
ARG user=jarek
$ docker build --build-arg <variable name>=<value> .
It is not recommended to use ARG for passing secrets such as GitHub keys, user credentials, or passwords, as all of them will be visible to any user of the image via the docker history command!

FROM maven:3-jdk-8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD ADD . /usr/src/app
ONBUILD RUN mvn install

The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the child build, as if it had been inserted immediately after the FROM instruction in the child Dockerfile.

FROM maven:3.3-jdk-8-onbuild
CMD ["java","-jar","/usr/src/app/target/app-1.0-SNAPSHOT-jar-with-dependencies.jar"]
the ONBUILD instruction is an instruction the parent Dockerfile gives to the child Dockerfile (downstream build). Any build instruction can be registered as a trigger and those instructions will be triggered immediately after the FROM instruction in the Dockerfile.

The HEALTHCHECK instruction can be used to inform Docker how to test a container to check that it is still working
HEALTHCHECK --interval=<interval> --timeout=<timeout> CMD <command>
HEALTHCHECK --interval=5m --timeout=2s --retries=3 CMD curl -f http://localhost/ping || exit 1
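Docker only distinguishes exit status 0 (healthy) from 1 (unhealthy), which is why the || exit 1 is there: it collapses curl's many possible failure codes into 1. The pattern itself is plain shell:

```shell
# A failing check is normalized by '|| exit 1' to exit status 1 (unhealthy)
sh -c 'false || exit 1'; echo "failing check: $?"
# A passing check short-circuits the || and exits 0 (healthy)
sh -c 'true || exit 1'; echo "passing check: $?"
```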

docker run
Run a command in a new container
docker exec
Run a command in a running container

Adding a data volume

You can add a data volume to a container using the -v flag with the docker create and docker run commands. You can use -v multiple times to mount multiple data volumes.
$ docker run -d -P --name web -v /webapp training/webapp python
$ docker inspect web
"Mounts": [
    {
        "Name": "fac362...80535",
        "Source": "/var/lib/docker/volumes/fac362...80535/_data",
        "Destination": "/webapp",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]
You will notice in the above that Source specifies the location on the host and Destination specifies the volume location inside the container. RW shows whether the volume is read/write.

Mount a host directory as a data volume

In addition to creating a volume using the -v flag you can also mount a directory from your Docker daemon’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python
This command mounts the host directory /src/webapp into the container at /opt/webapp. If the path /opt/webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
The container-dir must always be an absolute path such as /src/docs. The host-dir can either be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
It is not possible to use the VOLUME instruction to tell Docker what to mount; that would seriously break portability. This instruction tells Docker that content in those directories does not go in images and can be accessed from other containers using the --volumes-from command-line parameter. You have to run the container with -v /path/on/host:/path/in/container to access directories from the host.
Mounting host volumes during build is not possible. There is no privileged build and mounting the host would also seriously degrade portability. You might want to try using wget or curl to download whatever you need for the build and put it in place.
docker exec -it <containerIdOrName> bash
Basically, if the Docker container was started using the /bin/bash command, you can access it using attach; if not, then you need to execute the command to create a bash instance inside the container using exec.
Fact 1: You need at least one (ENTRYPOINT or CMD) defined (in order to run)
Fact 2: If just one is defined at runtime, CMD and ENTRYPOINT have the same effect
$ docker inspect b52 | jq .[0].Config

ENTRYPOINT ["ping", ""]  # "exec" format  

ENTRYPOINT ping  # "shell" format  

docker build -t test .
Fact 4: The "exec" form is the recommended form
Fact 5: No Shell? No Environment Variables.
The problem with not running in a shell is that you don't get the benefits of environment variables (such as $PATH) and other things that come with using a shell. There are two problems with the below Dockerfile.
$ cat Dockerfile 
FROM openjdk:8-jdk-alpine  
WORKDIR /data  
COPY *.jar /data  
CMD ["java", "-jar", "*.jar"]  # "exec" format  
The first problem is that since you don't have $PATH, you need to specify the exact location of the java executable. The second problem is that wildcards are evaluated by the shell, so *.jar won't resolve properly. After fixing those issues, the resulting Dockerfile is this:
FROM openjdk:8-jdk-alpine  
WORKDIR /data  
COPY *.jar /data  
CMD ["/usr/bin/java", "-jar", "spring.jar"]  
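The wildcard issue is also reproducible without Docker (/tmp/globdemo is a scratch directory used only for illustration): the glob is expanded by the shell, so an exec-form CMD, which bypasses the shell, passes *.jar through as a literal file name:

```shell
mkdir -p /tmp/globdemo && cd /tmp/globdemo && rm -f *.jar
touch a.jar b.jar
# With a shell, the glob expands before the command runs
sh -c 'echo *.jar'       # prints: a.jar b.jar
# Without shell processing, the literal string survives untouched
printf '%s\n' '*.jar'    # prints: *.jar
```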
Fact 6: CMD arguments append to end of ENTRYPOINT... sometimes
Fact 6a: If you use the "shell" format for ENTRYPOINT, CMD is ignored.
$ cat Dockerfile 
FROM alpine  
ENTRYPOINT ls /usr  
CMD blah blah blah blah  

Fact 6b: If you use the "exec" format for ENTRYPOINT, CMD arguments are appended after.

$ cat Dockerfile 
FROM alpine  
ENTRYPOINT ["ls", "/usr"]  
CMD ["/var"]  

Fact 6c: If you use the "exec" format for ENTRYPOINT, then you need to use the "exec" format for CMD as well. If you don't, Docker tries to add sh -c into the arguments that are appended, which can lead to some funky results.

Fact 7: ENTRYPOINT and CMD can be overridden via command line flags.

Use the --entrypoint flag to override ENTRYPOINT:

docker run --entrypoint [my_entrypoint] test  

Anything after the image in the docker run command overrides CMD:

docker run test [command 1] [arg1] [arg2]  

All of the above facts apply, just keep in mind that developers have the ability to override these flags when they do docker run. Which leads me to the conclusion...

Use ENTRYPOINT if you don't want developers to change the executable that is run when the container starts. You can think of your container as an "executable wrapper". A good strategy is to define a "stable" combination of executable + parameters as the ENTRYPOINT. Then you can (optionally) specify a default CMD that developers can easily override.
$ cat Dockerfile
FROM alpine  
ENTRYPOINT ["ping"]  
CMD [""]  
$ docker build -t test .

Override CMD with your own parameters:

$ docker run test

Use only CMD (with no ENTRYPOINT) if you want developers to have the ability to easily override the executable that is being run. If an ENTRYPOINT is defined you can still override the executable using --entrypoint, but it is much easier for developers to append the command they want at the end of docker run.

Ping is nice, but let's start the container with a shell instead.
$ docker run -it test sh

If you ran these commands, you have a bunch of stopped containers left on your host. Clean them up:

$ docker system prune
Docker Compose (herein referred to as compose) will use the Dockerfile if you add the build command to your project's docker-compose.yml.
Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use compose to assemble the images using the build command.
You can specify the path to your individual Dockerfiles using build: /path/to/dockerfiles/blah, where /path/to/dockerfiles/blah is where blah's Dockerfile lives.
The Compose file describes the container in its running state, leaving the details on how to build the container to Dockerfiles
A container’s main running process is the ENTRYPOINT and/or CMD at the end of the Dockerfile. It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
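As a sketch of that workflow (service names, ports, and paths here are illustrative, not taken from the text), a compose file just points at each service's Dockerfile or image and describes the running state:

```yaml
version: "2"
services:
  web:
    build: ./web            # directory containing web's Dockerfile
    ports:
      - "8080:8080"
  db:
    image: postgres:9.6     # prebuilt image; no Dockerfile needed
```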

To run a script via CMD, try adding the script with an ADD instruction and a specification of the working directory, like this ("script" is the name of the script and /root/ is where you want it in the container; it can be a different path):
ADD script /root/
In this case, ADD has to come before CMD, if you have one. By the way, this is a handy way to import scripts from the host machine to any location in the container.
Then, in CMD, place ["./script"] (with the working directory set to /root/).
RUN and ENTRYPOINT are two different ways to execute a script.
RUN means it creates an intermediate container, runs the script, and freezes the new state of that container in a new intermediate image. The script won't be run after that: your final image is supposed to reflect the result of that script.
ENTRYPOINT means your image (which has not executed the script yet) will create a container and run that script on start.
In both cases, the script needs to be added to the image, and a RUN chmod +x on it is a good idea.
It should also start with a shebang (like #!/bin/sh).
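Putting those pieces together, a minimal sketch (script name and /root/ path as in the example above):

```dockerfile
ADD script /root/script
RUN chmod +x /root/script
CMD ["/root/script"]
```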
Considering your script (KevinRaimbaud/docker-symfony/docker/php/ a couple of git config --global commands), it would be best to RUN that script once in your Dockerfile, making sure to use the right user (the global git config file is $HOME/.gitconfig, which by default is the root user's /root/.gitconfig).
Add to your Dockerfile:
Then, when running a container, check the content of /root/.gitconfig to confirm the script was run.
The container-dest must always be an absolute path such as /src/docs. The host-src can either be an absolute path or a name value. If you supply an absolute path for the host-src, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
  1. Docker sees that as a name for a volume, and thus creates the volume, creates a folder for it, and mounts the volume there.
  2. Docker still sees that as a name for a volume, but / is not allowed in a volume name, and thus it fails.
  3. Works as expected.
The reason the docker CLI does not take relative paths is that the docker client and the docker daemon might not be on the same host. How, then, should a relative path be handled?
  • On the client side? But then the daemon, not being on the same host, might not have the file at that place, and even if it does, is the content of the file the same?
  • On the daemon side? But relative to what path then? The path of the daemon?
You can do -v $PWD/../../path:/location to use a relative path indirectly.
In a normal virtualized environment, one or more virtual machines run on top of a physical machine using a hypervisor like Xen or Hyper-V. Containers, on the other hand, run in user space on top of the operating system's kernel; this can be called OS-level virtualization. Each container has its own isolated user space, and you can run multiple containers on a host, each with its own user space. This means you can run different Linux systems (containers) on a single host.

Containers are isolated on a host using two Linux kernel features called namespaces and control groups.
Namespaces:
There are six namespaces in Linux (mnt, pid, net, ipc, uts, and user). Using these namespaces, a container can have its own network interfaces, IP address, and so on. Each container has its own set of namespaces, and the processes running inside those namespaces have no privileges outside them.
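On a Linux host you can see the namespaces a process belongs to directly under /proc (the uname guard makes this sketch a no-op on other systems; it is an observation aid, not Docker-specific):

```shell
# Each entry under /proc/self/ns is one namespace this process is in
# (mnt, net, pid, ipc, uts, user, ...); a container gets its own set.
if [ "$(uname)" = "Linux" ]; then
    ls /proc/self/ns
else
    echo "namespaces are a Linux kernel feature"
fi
```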
Control groups:
The resources used by a container are managed by Linux control groups. You can decide how much CPU and memory a container should use via control groups.
Docker is basically a container engine that uses Linux kernel features like namespaces and control groups to create containers on top of an operating system, and it automates application deployment in those containers.

Docker is composed of the following four components:
  1. Docker client and daemon
  2. Images
  3. Docker registries
  4. Containers

Docker has a client-server architecture. Docker Daemon or server is responsible for all the actions that are related to containers. The daemon receives the commands from the Docker client through CLI or REST API’s. Docker client can be on the same host as a daemon or it can be present on any other host.

Images are organized in a layered manner. Every change in an image is added as a layer on top of it.

Docker registry is a repository for Docker images. Using Docker registry, you can build and share images with your team. A registry can be public or private. Docker Inc provides a hosted registry service called Docker Hub.
Docker originally used LinuX Containers (LXC), but later switched to runC (formerly known as libcontainer), which runs in the same operating system as its host. This allows it to share a lot of the host operating system resources. Also, it uses a layered filesystem (AuFS) and manages networking.
AuFS is a layered file system, so you can have a read only part and a write part which are merged together. One could have the common parts of the operating system as read only (and shared amongst all of your containers) and then give each container its own mount for writing.
So, let's say you have a 1 GB container image; if you wanted to use a full VM, you would need 1 GB times the number of VMs you want. With Docker and AuFS you can share the bulk of that 1 GB between all the containers, and if you have 1000 containers you still might only use a little over 1 GB of space for the containers' OS (assuming they are all running the same OS image).
A full virtualized system gets its own set of resources allocated to it, and does minimal sharing. You get more isolation, but it is much heavier (requires more resources). With Docker you get less isolation, but the containers are lightweight (require fewer resources). So you could easily run thousands of containers on a host, and it won't even blink. Try doing that with Xen, and unless you have a really big host, I don't think it is possible.
A full virtualized system usually takes minutes to start, whereas Docker/LXC/runC containers take seconds, and often even less than a second.
There are pros and cons for each type of virtualized system. If you want full isolation with guaranteed resources, a full VM is the way to go. If you just want to isolate processes from each other and want to run a ton of them on a reasonably sized host, then Docker/LXC/runC seems to be the way to go.
Amazon and Google containers are based on Docker; you can happily use Docker on your local development machines as well as your cloud-based production machines (with the help of the cloud provider’s container services).
When you install Docker, it creates three networks automatically. You can list these networks using the docker network ls command:

These three networks are built into Docker. When you run a container, you can use the --network flag to specify which networks your container should connect to.

docker logs -f container_id


To view Docker for Mac logs at the command line, type this command in a terminal window or your favorite shell.
$ syslog -k Sender Docker
Alternatively, you can send the output of this command to a file. The following command redirects the log output to a file called my_docker_logs.txt.
$ syslog -k Sender Docker > ~/Desktop/my_docker_logs.txt
The Console lives on your Mac hard drive in Applications > Utilities. You can bring it up quickly by just searching for it with Spotlight Search.
To find all Docker app log messages, do the following.
  1. From the Console menu, choose File > New System Log Query…
  -a, --all             Show all containers (default shows just running)
  -f, --filter value    Filter output based on conditions provided (default [])
      --format string   Pretty-print containers using a Go template
      --help            Print usage
  -n, --last int        Show n last created containers (includes all states) (default -1)
  -l, --latest          Show the latest created container (includes all states)
      --no-trunc        Don't truncate output

docker ps --help

docker-compose ps
docker-compose rm -v
docker-compose logs
docker-compose logs pump elasticsearch
docker-compose -f <file> ps

docker-compose build
docker-compose build calaca pump
docker-compose rm -vf
docker-compose pull
docker-compose up -d db
docker-compose ps coffee
docker-compose scale coffee=5

Docker builds container links by creating firewall rules and injecting service discovery information into the dependent container’s environment variables and /etc/hosts file.

When containers are re-created or restarted, they come back with different IP addresses. That change makes the information that was injected into the proxy service stale.

Make sure all RUN yum commands end with && yum clean all to save space. Furthermore, packages which are only needed during the image creation, but not for the actual use of the image, can be removed using yum's transaction support, and specifically using && yum history undo last -y
RUN yum install -y libfoo \
 && yum install -y libfoo-devel \
 # build/install something which requires libfoo
 && yum history undo last -y \
 && yum clean all


