https://www.codacy.com/blog/five-ways-to-slim-your-docker-images/
https://www.quora.com/How-can-I-get-the-size-of-a-Docker-image
https://stackoverflow.com/questions/38986057/how-to-set-image-name-in-dockerfile
https://medium.com/@mccode/processes-in-containers-should-not-run-as-root-2feae3f0df3b
https://stackoverflow.com/questions/30063907/using-docker-compose-how-to-execute-multiple-commands
https://stackoverflow.com/questions/24481564/how-can-i-find-docker-image-with-specific-tag-in-docker-registry-in-docker-comma
- docker search
https://www.revsys.com/tidbits/docker-useful-command-line-stuff/
docker-compose run [service] bash
$ docker-compose run --rm web ./manage.py test [app]
https://medium.com/@pimterry/5-ways-to-debug-an-exploding-docker-container-4f729e2c0aa8
https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options
restart: unless-stopped
https://docs.docker.com/network/host/
https://technologyconversations.com/2017/01/23/using-docker-stack-and-compose-yaml-files-to-deploy-swarm-services/
docker-compose up
https://docs.docker.com/compose/reference/up/
https://github.com/prakhar1989/docker-curriculum/issues/27
https://stackoverflow.com/questions/26050899/how-to-mount-host-volumes-into-docker-containers-in-dockerfile-during-build
https://askubuntu.com/questions/505506/how-to-get-bash-or-ssh-into-a-running-container-in-background-mode
Fact 1: You need at least one (ENTRYPOINT or CMD) defined (in order to run)
Fact 2: If just one is defined at runtime, CMD and ENTRYPOINT have the same effect
Fact 4: The "exec" form is the recommended form
Fact 6:
https://stackoverflow.com/questions/29480099/docker-compose-vs-dockerfile-which-is-better
https://nelnet.org/docker/2017/03/23/Docker-Multiple-Commands-at-Run/
https://docs.docker.com/engine/admin/multi-service_container/
https://stackoverflow.com/questions/34549859/run-a-script-in-dockerfile
https://github.com/moby/moby/issues/24408
You can do -v $PWD/../../path:/location to use a relative path indirectly. 👍 6
https://devopscube.com/what-is-docker/
In a normal virtualized environment, one or more virtual machines run on top of a physical machine using a hypervisor like Xen, Hyper-V etc. Containers, on the other hand, run in user space on top of operating systems kernel. It can be called as OS level virtualization. Each container will have its isolated user space and you can run multiple containers on a host, each having its own user space. It means you can run different Linux systems (containers) on a single host.
Docker has a client-server architecture. Docker Daemon or server is responsible for all the actions that are related to containers. The daemon receives the commands from the Docker client through CLI or REST API’s. Docker client can be on the same host as a daemon or it can be present on any other host.
Images are organized in a layered manner. Every change in an image is added as a layer on top of it.
Docker registry is a repository for Docker images. Using Docker registry, you can build and share images with your team. A registry can be public or private. Docker Inc provides a hosted registry service called Docker Hub.
https://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-normal-virtual-machine
docker logs -f container_id
https://docs.docker.com/docker-for-mac/troubleshoot/#diagnose-problems-send-feedback-and-create-github-issues
https://www.mankier.com/1/docker
-a, --all Show all containers (default shows just running)
-f, --filter value Filter output based on conditions provided (default [])
--format string Pretty-print containers using a Go template
--help Print usage
-n, --last int Show n last created containers (includes all states) (default -1)
-l, --latest Show the latest created container (includes all states)
--no-trunc Don't truncate output
docker ps --help
docker-compose ps
docker-compose rm -v
docker-compose logs
docker-compose logs pump elasticsearch
docker-compose -f <compose-file> ps
docker-compose build
docker-compose build calaca pump
docker-compose rm -vf
https://github.com/dockerinaction/ch11_coffee_api
docker-compose pull
docker-compose up -d db
docker-compose ps coffee
docker-compose scale coffee=5
Docker builds container links by creating firewall rules and injecting service discovery information into the dependent container’s environment variables and /etc/hosts file.
When containers are re-created or restarted, they come back with different IP addresses. That change makes the information that was injected into the proxy service stale.
Make sure all RUN yum commands end with && yum clean all to save space. Furthermore, packages which are only needed during the image creation, but not for the actual use of the image, can be removed using yum's transaction support, and specifically using && yum history undo last -y
https://hackernoon.com/tips-to-reduce-docker-image-sizes-876095da3b34
Tip #1 — Use a smaller base image
Tip #2 — Don’t install debug tools like vim/curl
One technique is to have a development Dockerfile and a production Dockerfile. During development, have all of the tools you need, and then when deploying to production remove the development tools.
Tip #3 — Minimize Layers
Each line of a Dockerfile is a step in the build process; a layer that takes up size. Combine your RUN statements to reduce the image size. Instead of
FROM debian
RUN apt-get install -y <packageA>
RUN apt-get install -y <packageB>
Do
FROM debian
RUN apt-get install -y <packageA> <packageB>
A drawback of this approach is that you'll have to rebuild the entire image each time you add a new library. If you aren't aware, Docker doesn't rebuild layers it's already built; it caches the Dockerfile line by line. Try changing one character of a Dockerfile you've already built, and then rebuild. You'll notice that each step above that line will be recognized as already built, but the line you changed (and each line following) will be rebuilt.
A strategy I recommend is that while in development and testing dependencies, separate out the RUN commands. Once you’re ready to deploy to production, combine the RUN statements into one line.
Tip #4 Use --no-install-recommends on apt-get install
Adding --no-install-recommends to apt-get install -y can help dramatically reduce the size by avoiding installing packages that aren't technically dependencies but are recommended to be installed alongside packages.
apk add commands should have --no-cache added.
Tip #5 Add rm -rf /var/lib/apt/lists/* to the same layer as apt-get installs
Add rm -rf /var/lib/apt/lists/* at the end of the apt-get -y install to clean up after installing packages.
For yum, add yum clean all.
Also, if you install wget or curl in order to download some package, remember to combine them all in one RUN statement. Then at the end of that RUN statement, apt-get remove curl or wget once you no longer need them. This advice goes for any package that you only need temporarily.
Tip #6 Use FromLatest.io
FromLatest will lint your Dockerfile and check for even more steps you can perform to reduce your image size.
https://nickjanetakis.com/blog/docker-tip-44-show-total-disk-space-used-by-docker
Docker has a system sub-command that has a few useful commands. One of them is docker system df, which reports back disk space usage stats of your Docker installation.
Here's what the output of docker system df looks like on my machine:
My build cache is empty because I run docker system prune as a daily scheduled task. If you want to see how to set that up on any OS, check out Docker Tip #32.
The command "docker images" tells you the size (last column). Example below
https://stackoverflow.com/questions/26753087/docker-how-to-analyze-a-containers-disk-usage
After 1.13.0, Docker includes a new command, docker system df, to show docker disk usage.
$ docker system df
To see the file size of your containers, you can use the -s argument of docker ps:
docker ps -s
docker run --name ubuntu_bash --rm -i -t ubuntu bash
This will create a container named ubuntu_bash and start a Bash session. A more detailed breakdown of the options and arguments used in the example:
- --name assigns a name to the container, in this case ubuntu_bash
- --rm removes the container once it exits, a bit like the bash command rm
- -i is short for --interactive; this ensures STDIN is kept open even if not attached to the running container
- -t, which can also be written as --tty, allocates a pseudo-terminal, giving you an interactive bash shell in the container
- The image for the container follows the options; here it is the image ubuntu
- The last part, which follows the image, is the command you want to run: bash
This is for when you want to run a command in an existing container. This is better if you already have a container running and want to change it or obtain something from it. For example, if you are using Docker Compose you will probably spin-up multiple containers and you may want to access one or more of them once they are created.
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Docker exec also has a range of options and arguments you can specify, although you must state the container and command to execute. You can start an interactive bash shell on a container named ubuntu_bash using:
docker exec -it ubuntu_bash bash
Here the options -it have the same effect as with run. An example with more options and arguments is:
docker exec -d -w /temp ubuntu_bash touch my_file.sh
- -w followed by a directory or file path lets you state which working directory you want to run the command in
- -d or --detached means the command runs in detached mode, so you can continue to use your terminal session while it runs in the background; don't use this if you want to see what the command sends to STDOUT
- The command is touch, used to create a file named my_file.sh inside the /temp directory of the running container ubuntu_bash
https://chankongching.wordpress.com/2017/03/17/docker-what-is-the-different-between-run-and-exec/
We use the "docker run" command to work with images that exist locally or are accessible from the local host, while "docker exec" operates on an existing docker container. To keep a container up and running, a process inside it must keep running; when that process halts, the whole container enters the stopped state. The container still exists, can be started again by "docker start", and will run until the process next halts.
Simply speaking, "docker run" targets docker images while "docker exec" targets pre-existing docker containers. When using "docker run", a temporary docker container is created and stopped (not removed) after the command has finished running. "docker exec" needs a running container to take the command.
Here is a quick command reference:
To use binaries in a docker image:
docker run #{image} "COMMAND to run"
To use binaries in a docker image continuously in daemon/detached mode (settings or configs need to be inherited):
docker run -d #{image}
To enter a running container and run commands interactively:
docker exec -it #{container} sh
or
docker exec -it #{container} bash
To use binaries in a docker container (always provide the full path for the command):
docker exec -it #{container} "COMMAND to run"
To use binaries in a docker image continuously (settings or configs need to be inherited):
docker run --name #{container_name} #{image}
To enter a docker image and run commands interactively:
docker run -it #{image}
To start a container in the background (as a daemon) with the process defined in the Dockerfile up and running:
docker run -d --name #{container_name} #{image}
How to build an image with custom name without using yml file:
docker build -t image_name .
How to run a container with custom name:
docker run -d --name container_name image_name
Even though I'm marc, the container is running as root and therefore has access to everything root has access to on this server. This isn't ideal; running containers this way means that every container you pull from Docker Hub could have full access to everything on your server (depending on how you run it).
The recommendation here is to create a user with a known uid in the Dockerfile and run the application process as that user. The start of a Dockerfile should follow this pattern:
RUN groupadd -g 999 appuser && \
useradd -r -u 999 -g appuser appuser
USER appuser
But when you FROM an image that is running as non-root, your container will inherit that non-root user. If you need to create your own user or perform operations as root, be sure to switch with USER root somewhere near the top of your Dockerfile, and then switch back to the unprivileged user (USER appuser) again afterwards.
This works and does the same thing as creating a user in the Dockerfile, but it relies on whoever runs the container to opt in to running it securely. Specifying a non-root user in the Dockerfile will make the container run securely by default.
$ docker run --user 1001 -v /root/secrets.txt:/tmp/secrets.txt <img>
cat: /tmp/secrets.txt: Permission denied
The linux kernel is responsible for managing the uid and gid space, and it’s kernel-level syscalls that are used to determine if requested privileges should be granted. For example, when a process attempts to write to a file, the uid and gid that created the process are examined by the kernel to determine if it has enough privileges to modify the file. The username isn’t used here, the uid is used.
When running Docker containers on a server, there’s still a single kernel. A huge part of the value that containerization brings is that all of these separate processes can continue to share a single kernel. This means that even on a server that is running Docker containers, the entire world of uids and gids is controlled by a single kernel.
docker run -d ubuntu:latest sleep infinity
http://www.inanzzz.com/index.php/post/q1rj/running-docker-container-with-a-non-root-user-and-fixing-shared-volume-permissions-with-dockerfile
Docker containers run as the root user by default. As a result, all running processes, shared volumes, folders and files will be owned by the root user. This becomes a real problem when we need to modify files and folders in shared folders from the host OS or from within the docker container.
In order to solve this issue, we need to match the host OS and docker container users' UIDs. The root user's UID is always 0. Running docker as the root user is also considered a bad security practice.
ENV user inanzzz
RUN useradd -m -d /home/${user} ${user} \
&& chown -R ${user} /home/${user}
USER ${user}
Creates a user and group called inanzzz (the first regular user typically gets UID 1000).
Lets the inanzzz user recursively own the home directory /home/inanzzz.
Switches to the inanzzz user to run the container.
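A complementary trick (not from the post above, just a common pattern) is to pass the host user's UID/GID at run time so that files written to a shared volume keep your ownership; the image and script names here are placeholders:
docker run --rm -it \
  --user "$(id -u):$(id -g)" \
  -v "$(pwd)":/app \
  -w /app \
  some-image ./build.sh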
https://docs.docker.com/engine/reference/builder/#scope
$ docker build --build-arg user=what_user .
FROM busybox
ARG SETTINGS
RUN ./run/setup $SETTINGS
https://github.com/docker-solr/docker-solr/blob/master/Docker-FAQ.md#can-i-run-zookeeper-and-solr-clusters-under-docker
https://hub.docker.com/_/solr/
https://docs.docker.com/samples/library/solr/#using-docker-compose
services:
  solr:
    image: solr
    ports:
      - "8983:8983"
    volumes:
      - data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
volumes:
  data:
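Assuming the compose file above is saved as docker-compose.yml, something like this should bring Solr up and let you check it from the host (the curl call is just a smoke test):
docker-compose up -d
curl http://localhost:8983/solr/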
https://stackoverflow.com/questions/22907231/copying-files-from-host-to-docker-container
The cp command can be used to copy files. One specific file can be copied like:
docker cp foo.txt mycontainer:/foo.txt
docker cp mycontainer:/foo.txt foo.txt
Multiple files contained by the folder src can be copied into the target folder using:
docker cp src/. mycontainer:/target
docker cp mycontainer:/src/. target
https://stackoverflow.com/questions/38532483/where-is-var-lib-docker-on-mac-os-x
docker exec -e COLUMNS="`tput cols`" -e LINES="`tput lines`" -ti container bash
As mentioned in the above answers, you will find it in:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Once you get the tty running you can navigate to /var/lib/docker.
https://nickjanetakis.com/blog/docker-tip-28-named-volumes-vs-path-based-volumes/
Named volumes look like this: postgres:/var/lib/postgresql/data. If you're using Docker Compose, it will automatically create the volume for you when you first do a docker-compose up, but if not you would need to create it yourself by running docker volume create postgres.
The name doesn’t need to be postgres, but it’s a best practice to name your volumes so you know what they refer to later. You can prefix them with your project’s name to avoid name conflicts.
When you use a volume like this, Docker will manage the volume for you. On Linux, that volume will get saved to /var/lib/docker/volumes/postgres/_data.
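As a sketch, a named volume wired up in a docker-compose.yml could look like this (service and volume names are only examples):
version: "3"
services:
  db:
    image: postgres:10
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres: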
Path based volumes serve the same purpose as named volumes, except you're responsible for managing where the volume gets saved on the Docker host. For example, if you did ./postgres:/var/lib/postgresql/data then a postgres/ directory would get created in the current directory on the Docker host.
If you go this route you'll notice that the permissions will be the same as what they are set to in your Dockerfile or what you set with the --user flag when running the container. If you did none of that then the contents of that folder will be owned by root.
Back before named volumes existed, it was always a question on where you should store these volumes. Some people put them in a data/ folder relative to your project.
My rule of thumb is, if you're dealing with data that you're not actively dealing with directly then use a named volume and let Docker manage it.
However, if you’re using a volume in development and you want to mount in the current directory so you can code your project without rebuilding then by all means use a path based volume because that’s 100% normal and is considered a best practice. For example, you may see that referenced as .:/app in a docker-compose.yml file.
https://stackoverflow.com/questions/34357252/docker-data-volume-vs-mounted-host-directory
https://github.com/moby/moby/issues/33794
docker exec -t container_name /bin/bash -c "export COLUMNS=`tput cols`; export LINES=`tput lines`; exec bash"
docker exec -it container_name sh -c "stty rows 50 && stty cols 150 && bash"
https://docs.docker.com/compose/networking/#links
Links allow you to define extra aliases by which a service is reachable from another service. They are not required to enable services to communicate - by default, any service can reach any other service at that service's name. In the following example, db is reachable from web at the hostnames db and database:
web:
  build: .
  links:
    - "db:database"
From my understanding both are kind of deprecated. Depends_on is no longer supported in swarm and has been replaced by health checks. Links have been replaced with networks.
https://stackoverflow.com/questions/38546755/docker-compose-keep-container-running
One could use command: tail -f /dev/null in docker-compose to keep the container running.
Figured it out, use bash -c.
Example:
command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
https://stackoverflow.com/questions/24481564/how-can-i-find-docker-image-with-specific-tag-in-docker-registry-in-docker-comma
As far as I know, the CLI does not allow searching/listing tags in a repository.
But if you know which tag you want, you can pull it explicitly by adding a colon and the tag name:
docker pull ubuntu:saucy
- docker run --name my_solr -d -p 8983:8983 -t solr:7.4
https://github.com/docker/for-win/issues/1042
docker Error response from daemon: Error processing tar file(exit status 1): write no space left on device
I have already run docker system prune --all and I tried the commands in #600. Now I have no image and no container left. But I still cannot do anything.
https://docs.docker.com/compose/compose-file/#volumes
volumes:
# Just specify a path and let the Engine create a volume
- /var/lib/mysql
# Specify an absolute path mapping
- /opt/data:/var/lib/mysql
# Path on the host, relative to the Compose file
- ./cache:/tmp/cache
# User-relative path
- ~/configs:/etc/configs/:ro
# Named volume
- datavolume:/var/lib/mysql
docker volume prune
https://forums.docker.com/t/var-lib-docker-does-not-exist-on-host/18314
http://container-solutions.com/understanding-volumes-docker/
https://stackoverflow.com/questions/40905761/how-do-i-mount-a-host-directory-as-a-volume-in-docker-compose
How do I mount a host directory as a volume in docker compose
From the looks of it you could do the following on your docker-compose.yml
volumes:
- ./:/app
volumes:
- .:/var/www/project:cached
https://blog.docker.com/2016/12/understanding-docker-networking-drivers-use-cases/
In between applications and the network sits Docker networking, affectionately called the Container Network Model or CNM. It’s CNM that brokers connectivity for your Docker containers and also what abstracts away the diversity and complexity so common in networking. The result is portability and it comes from CNM’s powerful network drivers.
The most commonly used built-in network drivers are bridge, overlay and macvlan
The bridge driver creates a private network internal to the host so containers on this network can communicate. External access is granted by exposing ports to containers. Docker secures the network by managing rules that block connectivity between different Docker networks.
Behind the scenes, the Docker Engine creates the necessary Linux bridges, internal interfaces, iptables rules, and host routes to make this connectivity possible. In the example highlighted below, a Docker bridge network is created and two containers are attached to it. With no extra configuration the Docker Engine does the necessary wiring, provides service discovery for the containers, and configures security rules to prevent communication to other networks. A built-in IPAM driver provides the container interfaces with private IP addresses from the subnet
docker network create -d bridge mybridge
docker run -d --net mybridge --name db redis
docker run -d --net mybridge -e DB=db -p 8000:5000 --name web chrch/web
The bridge driver is a local scope driver, which means it only provides service discovery, IPAM, and connectivity on a single host. Multi-host service discovery requires an external solution that can map containers to their host location. This is exactly what makes the overlay driver so great.
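For the multi-host case, a minimal overlay sketch in swarm mode might look like this (network, service and image names are placeholders):
docker network create -d overlay --attachable app-net
docker service create --name cache --network app-net redis
docker service create --name api --network app-net my-api-image
Services attached to app-net can then reach each other by service name across hosts.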
https://runnable.com/docker/basic-docker-networking
To view Docker networks, run:
docker network ls
To get further details on networks, run:
docker network inspect
Docker creates three networks automatically on install: bridge, none, and host. Specify which network a container should use with the --net flag. If you create a new network my_network (more on this later), you can connect your container (my_container) with:
docker run --net=my_network my_container
All Docker installations represent the docker0 network with bridge; Docker connects to bridge by default. Run ifconfig on the Linux host to view the bridge network.
When you run the following command in your console, Docker returns a JSON object describing the bridge network (including information regarding which containers run on the network, the options set, and listing the subnet and gateway).
docker network inspect bridge
Docker automatically creates a subnet and gateway for the bridge network, and docker run automatically adds containers to it. If you have containers running on your network, docker network inspect displays networking information for your containers.
Any containers on the same network may communicate with one another via IP addresses. Docker does not support automatic service discovery on bridge. You must connect containers with the --link option in your docker run command.
The Docker bridge supports port mappings and docker run --link, allowing communications between containers on the docker0 network. However, these error-prone techniques require unnecessary complexity. Just because you can use them, does not mean you should. It's better to define your own networks instead.
None
This offers a container-specific network stack that lacks a network interface. This container only has a local loopback interface (i.e., no external network interface).
Host
This enables a container to attach to your host's network (meaning the configuration inside the container matches the configuration outside the container).
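A quick host-networking sketch (using the stock nginx image as an arbitrary example); note that there is no -p mapping, because the container binds straight to port 80 on the host:
docker run -d --network host --name web nginx
curl http://localhost:80/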
docker-compose run [service] bash
$ docker-compose run --rm web ./manage.py [command] [arguments]
docker-compose run tells Docker that we're about to run a command. The --rm option will remove this container when we're finished with it. web identifies the service we want to run; manage.py commands will generally be in the web service.
https://docs.docker.com/compose/reference/exec/
docker-compose exec
docker-compose exec worker ping -c 3 db
This will launch a new process in the already running worker container and ping the db container 3 times
The run command is useful if you need to run a containerized command as a one-off within your application. For example, if you use a package manager such as composer to update the dependencies of your project that is stored on a volume, you could run something like this:
$ docker-compose run --volume data_volume:/app composer install
This would run the composer container with the install command and mount the data_volume to /app within the container.
docker-compose up -d --scale worker=3
- docker logs <container_id>
Hopefully you've already tried this, but if not, start here. This'll give you the full STDOUT and STDERR from the command that was run initially in your container.
- docker stats <container_id>
If you just need to keep an eye on the metrics of your container to work out what's gone wrong, docker stats can help: it'll give you a live stream of resource usage, so you can see just how much memory you've leaked so far.
- docker cp <container_id>:/path/to/useful/file /local-path
Often just getting hold of more log files is enough to sort you out. If you already know what you want, docker cp has your back: copy any file from any container back out onto your local machine, so you can examine it in depth (especially useful for analysing heap dumps).
- docker exec -it <container_id> /bin/bash
Next up, if you can run the container (if it's crashed, you can restart it with docker start <container_id>), shell in directly and start digging around for further details by hand.
- docker commit <container_id> my-broken-container && docker run -it my-broken-container /bin/bash
Can't start your container at all? If you've got an initial command or entrypoint that immediately crashes, Docker will immediately shut it back down for you. This can make your container unstartable, so you can't shell in any more, which really gets in the way.
- Have a failing entrypoint instead? There’s an entrypoint override command-line flag too.
--entrypoint="": Overwrite the default entrypoint set by the image
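For example (the image name and config path are placeholders), overriding the entrypoint with a shell lets you poke around an image whose normal entrypoint crashes immediately:
docker run -it --entrypoint /bin/sh my-broken-image
# anything after the image name is passed as arguments to the overridden entrypoint
docker run -it --entrypoint /bin/sh my-broken-image -c "cat /etc/myapp/config"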
https://docs.docker.com/network/host/
If you use the host network driver for a container, that container's network stack is not isolated from the Docker host. For instance, if you run a container which binds to port 80 and you use host networking, the container's application will be available on port 80 on the host's IP address.
docker network create --driver overlay proxy
The proxy network will be dedicated to the proxy container and services that will be attached to it.
curl -o docker-compose-go-demo.yml \
  https://raw.githubusercontent.com/vfarcic/go-demo/master/docker-compose-stack.yml
docker stack deploy -c docker-compose-go-demo.yml go-demo
docker stack ps go-demo
https://rominirani.com/docker-swarm-tutorial-b67470cf8872
docker-machine create --driver virtualbox manager1
Keep in mind that using docker-machine utility, you can SSH into any of the machines as follows:
docker-machine ssh <machine-name>
docker-machine ip manager1
The first thing to do is initialize the Swarm. We will SSH into the manager1 machine and initialize the swarm in there.
docker@manager1:~$ docker swarm init --advertise-addr 192.168.1.8
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-5mgyf6ehuc5pfbmar00njd3oxv8nmjhteejaald3yzbef7osl1-ad7b1k8k3bl3aa3k3q13zivqd 192.168.1.8:2377
To add a manager to this swarm, run ‘docker swarm join-token manager’ and follow the instructions.
docker node ls
in the manager:
docker service create --replicas 5 -p 80:80 --name web nginx
docker service ps web
docker service ls
$ docker service scale web=8
$ docker node inspect self
Or if you want to check up on the other nodes, give the node name. For e.g.
$ docker node inspect worker1
But sometimes we have to bring a node down for maintenance. This is done by setting its availability to Drain mode.
docker node update --availability drain worker1
docker service rm web
rolling update:
$ docker service update --image <imagename>:<version> web
https://stackoverflow.com/questions/42345235/how-to-specify-memory-cpu-limit-in-docker-compose-version-3/42345411
version: "3"
services:
  node:
    image: USER/You-Pre-Build-Image
    environment:
      - VIRTUAL_HOST=localhost
    volumes:
      - logs:/app/out/
    command: ["npm","start"]
    cap_drop:
      - NET_ADMIN
      - SYS_ADMIN
    deploy:
      resources:
        limits:
          cpus: '0.001'
          memory: 50M
        reservations:
          cpus: '0.0001'
          memory: 20M
volumes:
  logs:
networks:
  default:
    driver: overlay
https://stackoverflow.com/questions/40513545/how-to-prevent-docker-from-starting-a-container-automatically-on-system-startup
Docker will autostart any container with a RestartPolicy of 'always' when the docker service initially starts. You won't find any evidence of this within cron or any other normal system startup scripts; you'll have to dig into the container configuration to find it.
docker inspect my-container
(Look for RestartPolicy in the output)
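A quicker check is to pull out just the restart policy with a Go template (the container name is a placeholder):
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' my-container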
I've mostly had this situation occur when a container was created with
--restart always, and the situation later changed such that I no longer wanted this to happen.
After docker 1.11, this is easy to fix:
docker update --restart=no my-container
You can start your container with --restart=unless-stopped.
When using Docker for Mac Application, it appears that the containers are stored within the VM located at:
~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
You will find it in:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Once you get the tty running you can navigate to /var/lib/docker.
/var/lib/docker does not exist on host
https://gist.github.com/ipedrazas/2c93f6e74737d1f8a791
List Docker Container Names and IPs
docker ps -q | xargs -n 1 docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} {{ .Name }}' | sed 's/ \// /'
https://docs.docker.com/compose/reference/scale/
Usage: scale [SERVICE=NUM...]
Sets the number of containers to run for a service.
Numbers are specified as arguments in the form service=num. For example:
docker-compose scale web=2 worker=3
http://blog.arungupta.me/show-layers-of-docker-image/
docker history couchbase
docker images couchbase
docker-machine ip default returns "Host does not exist: "default"" #27
docker-machine ip
Sounds like you may need to run docker-machine create default first.
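For example, assuming the VirtualBox driver used elsewhere in these notes:
docker-machine create --driver virtualbox default
docker-machine ip default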
https://blog.alexellis.io/mutli-stage-docker-builds/
https://blog.zhouzhipeng.com/dockerfile-auto-ci-tool.html
Combining the Dockerfile.build and Dockerfile.old above and polishing them a little, we get the following brand-new Dockerfile:
FROM maven:3.5.2-alpine as builder
MAINTAINER zhouzhipeng <admin@zhouzhipeng.com>
WORKDIR /app
COPY src .
COPY pom.xml .
# Compile and package (the jar is generated under /app/target)
RUN mvn package -Dmaven.test.skip=true
FROM openjdk:8-jre-alpine
MAINTAINER zhouzhipeng <admin@zhouzhipeng.com>
WORKDIR /app
COPY --from=builder /app/target/docker-multi-stage-demo-1.0-SNAPSHOT.jar .
# Run the main class
CMD java -cp docker-multi-stage-demo-1.0-SNAPSHOT.jar com.zhouzhipeng.HelloWorld
Then build it with the familiar docker build command:
docker build -t zhouzhipeng/dockermultistagedemo-new .
Two things are different in the Dockerfile above:
- multiple FROM statements appear
- the COPY command gains a --from=builder option
What gets passed between stages is files, so the COPY command is extended: --from=<name> specifies which of the earlier "stages" to copy files from; the full command format is described in the post below.
https://blog.qikqiak.com/post/multi-stage-build-for-docker/
When we build an application, we often bake the source code into the image as well. For a compiled language like golang this is clearly not what you want: at run time only the final binary needs to be shipped, and packaging the source code into the image carries a lot of risk. Even for scripting languages, the build may need some release tooling, which likewise inflates the image size.
With multi-stage builds, you can use multiple FROM statements in a single Dockerfile. Each FROM instruction can use a different base image and marks the start of a new build stage. You can easily copy files from one stage into another and keep only what you need in the final image.
FROM golang AS build-env
ADD . /go/src/app
WORKDIR /go/src/app
RUN go get -u -v github.com/kardianos/govendor
RUN govendor sync
RUN GOOS=linux GOARCH=386 go build -v -o /go/src/app/app-server

FROM alpine
RUN apk add -U tzdata
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
COPY --from=build-env /go/src/app/app-server /usr/local/bin/app-server
EXPOSE 8080
CMD [ "app-server" ]
By default, build stages are not named; you refer to them by their index, with the first FROM instruction starting at 0. You can also name a stage with the AS keyword, as we did here by naming the first stage build-env, and then reference it from other stages with --from=build-env.
https://stackoverflow.com/questions/28898787/how-to-handle-specific-hostname-like-h-option-in-dockerfile
docker run -i -t -h myhost centos:6 /bin/bash
I think the following is better because docker containers usually don't have 'hostname' installed, therefore I would use the head command:
echo $(head -1 /etc/hosts | cut -f1) $HOST_NAME >> /etc/hosts
docker run --rm -it --cap-add SYS_ADMIN alpine sh
/ # hostname foobar
/ # hostname
foobar
However, before adding this capability, think twice or more, because SYS_ADMIN is a "collection" of capabilities and is really insecure (it allows breaking out of the container, and so on).
EXPOSE 8983-8986
USER builder
https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#general-guidelines-and-recommendations
Use a .dockerignore file
The current working directory where you are located when you issue a docker build command is called the build context, and the Dockerfile must be somewhere within this build context. By default, it is assumed to be in the current directory, but you can specify a different location by using the -f flag. Regardless of where the Dockerfile actually lives, all of the recursive contents of files and directories in the current directory are sent to the Docker daemon as the build context. Inadvertently including files that are not necessary for building the image results in a larger build context and larger image size. These in turn can increase build time, time to pull and push the image, and the runtime size of containers. To see how big your build context is, look for a message like the following when you build your Dockerfile:
Sending build context to Docker daemon 187.8MB
To exclude files which are not relevant to the build, without restructuring your source repository, use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
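A .dockerignore could look something like this (the entries are only examples; tailor them to the project):
# keep VCS metadata and local build output out of the build context
.git
node_modules
target/
*.log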
Each container should have only one concern
Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner.
RUN apt-get update && apt-get install -y \
bzr \
cvs \
git \
mercurial \
subversion
FROM <image>, or FROM <image>:<tag>, or FROM <image>@<digest>
The latest tag assigned to an image simply means that it's the image that was last built and executed without a specific tag provided.
The WORKDIR instruction adds a working directory for any CMD, RUN, ENTRYPOINT, COPY, and ADD instructions that comes after it in the Dockerfile. The syntax for the instruction is WORKDIR /PATH. You can have multiple WORKDIR instructions in one Dockerfile, if the relative path is provided; it will be relative to the path of the previous WORKDIR instruction.
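A small sketch of the relative-path behaviour described above:
FROM alpine
WORKDIR /app
WORKDIR src
RUN pwd   # prints /app/src, because the second WORKDIR is relative to the first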
ADD config.json projectRoot/ will add the config.json file to <WORKDIR>/projectRoot/
ADD config.json /absoluteDirectory/ will add the config.json file to the /absoluteDirectory/
Note that ADD shouldn't be used if you don't need its special features, such as unpacking archives, you should use COPY instead.
COPY supports only the basic copying of local files into the container. On the other hand, ADD gives some more features, such as archive extraction, downloading files through URL, and so on. Docker's best practices say that you should prefer COPY if you do not need those additional features of ADD.
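To make the difference concrete (the file names are placeholders):
COPY app.jar /opt/app/app.jar
# ADD automatically unpacks a local tar archive into the target directory
ADD rootfs.tar.gz /opt/app/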
layering is the core concept in Docker. RUN, takes a command as its argument and runs it to create the new layer.
Imagine a case when you use the RUN command for pulling source code from the Git repository, by using the git clone as the first step of building the image.
In our example, we should always combine RUN apt-get update with apt-get install in the same RUN statement, which will create just a single layer; for example:
RUN apt-get update \
&& apt-get install -y openjdk-8-jre \
&& apt-get install -y nodejs \
&& apt-get clean
The purpose of a CMD instruction is to provide defaults for an executing container. You can think of the CMD instruction as a starting point of your image, when the container is being run later on. This can be an executable, or, if you specify the ENTRYPOINT instruction (we are going to explain it next), you can omit the executable and provide the default parameters only.
CMD ["executable","parameter1","parameter2"]: This is a so called exec form. It's also the preferred and recommended form. The parameters are JSON array, and they need to be enclosed in square brackets. The important note is that the exec form does not invoke a command shell when the container is run. It just runs the executable provided as the first parameter. If the ENTRYPOINT instruction is present in the Dockerfile, CMD provides a default set of parameters for the ENTRYPOINT instruction.
CMD command parameter1 parameter2: This a shell form of the instruction. This time, the shell (if present in the image) will be processing the provided command. The specified binary will be executed with an invocation of the shell using /bin/sh -c. It means that if you display the container's hostname, for example, using CMD echo $HOSTNAME, you should use the shell form of the instruction.
everything started through the shell will be started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable will not be the container's PID 1, and will not receive Unix signals, so your executable will not receive a SIGTERM from docker stop <container>.
CMD ["executable","parameter1","parameter2"]
CMD echo $HOSTNAME
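If you need shell processing but still want your process to be PID 1 and receive signals, a common workaround (sketched here with nginx as an arbitrary example) is to exec the binary from the shell form; the two lines below are alternatives, not meant for the same Dockerfile:
# exec form: nginx runs as PID 1 and receives SIGTERM from docker stop
CMD ["nginx", "-g", "daemon off;"]
# shell form with exec: the shell replaces itself with nginx, so nginx still becomes PID 1
CMD exec nginx -g 'daemon off;'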
RUN is a build-time instruction, the CMD is a runtime instruction.
whereas the command specified through the CMD instruction is executed when the container is launched by executing docker run on the newly created image. Unlike CMD, the RUN instruction is actually used to build the image, by creating a new layer on top of the previous one which is committed.
omitting a tag when building an image will result in creating the latest tag
docker build . -t rest-example
The dot as the first parameter specifies the context for the docker build command. In our case, it will be just a root directory of our little microservice.
docker image ls
docker run -it rest-example
FROM java:8
RUN apt-get update
RUN apt-get install -y maven
WORKDIR /app
COPY pom.xml /app/pom.xml
COPY src /app/src
RUN ["mvn", "package"]
CMD ["/usr/lib/jvm/java-8-openjdk-amd64/bin/java", "-jar", "target/rest-example-0.1.0.jar"]
The ENTRYPOINT instruction allows you to configure a container that will run as an executable.
The ENTRYPOINT specifies a command that will always be executed when the container starts. The CMD, on the other hand, specifies the arguments that will be fed to the ENTRYPOINT. Docker has a default ENTRYPOINT which is /bin/sh -c but does not have a default CMD.
If you want shell processing then you need either to use the shell form or execute a shell directly
ENTRYPOINT [ "sh", "-c", "echo $HOSTNAME" ]
Exactly the same as the exec form of the CMD instruction, this will not invoke a command shell. This means that the normal shell processing will not happen. For example, ENTRYPOINT [ "echo", "$HOSTNAME" ] will not do variable substitution on the $HOSTNAME variable.
ENTRYPOINT command parameter1 parameter2 is a shell form. Normal shell processing will occur. This form will also ignore any CMD or docker run command line arguments. Also, your command will not be PID 1, because it will be executed by the shell. As a result, if you then run docker stop <container>, the container will not exit cleanly, and the stop command will be forced to send a SIGKILL after the timeout.
FROM busybox
ENTRYPOINT ["/bin/ping"]
CMD ["localhost"]
The CMD instruction, as you will remember from its description, sets the default command and/or parameters, which can be overwritten from the command line when you run the container. The ENTRYPOINT is different, its command and parameters cannot be overwritten using the command line. Instead, all command line arguments will be appended after the ENTRYPOINT parameters. This way you can, kind of, lock the command that will be executed always during the container start.
Unlike the CMD parameters, the ENTRYPOINT command and parameters are not ignored when a Docker container runs with command-line parameters.
$ docker run ping-example www.google.com
ENTRYPOINT should be defined when using the container as an executable
You should use the CMD instruction as a way of defining default arguments for the command defined as ENTRYPOINT or for executing an ad-hoc command in a container
CMD will be overridden when running the container with alternative arguments
ENTRYPOINT sets the concrete default application that is used every time a container is created using the image
If you couple ENTRYPOINT with CMD, you can remove an executable from CMD and just leave its arguments which will be passed to ENTRYPOINT
$ docker run -p 8080:8080 -it rest-example
EXPOSE 8080
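Keep in mind that EXPOSE on its own only documents the port; publishing happens at run time with -p (explicit mapping) or -P (publish all exposed ports to random host ports). For example:
$ docker run -d -p 8080:8080 rest-example
$ docker run -d -P rest-example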
The fundamental difference between VOLUME and -v is this: -v will mount existing files from your operating system inside your Docker container and VOLUME will create a new, empty volume on your host and mount it inside your container.
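A quick sketch of the two (the paths and the image name are only illustrative):
# Dockerfile: creates a new, initially empty volume for /data when a container starts
VOLUME /data
# run time: bind-mounts an existing host directory over /data inside the container
$ docker run -v /home/jarek/data:/data my-image
Either way, the resulting mounts can be verified with docker inspect: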
docker inspect
ENV CONFIG_TYPE=file CONFIG_LOCATION="home/Jarek/my app/config.json"
ENV PATH /var/lib/tomcat8/bin:$PATH
You can also use ENV to set the often-modified version numbers so that upgrades are easier to handle, as seen in the following example:
ENV TOMCAT_VERSION_MAJOR 8
ENV TOMCAT_VERSION 8.5.4
RUN mkdir -p /usr/Jarek && curl -SL http://apache.uib.no/tomcat/tomcat-$TOMCAT_VERSION_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz | tar xz -C /usr/Jarek
ENV PATH /usr/Jarek/apache-tomcat-$TOMCAT_VERSION/bin:$PATH
docker run --env <key>=<value>
USER tomcat
You can use the USER instruction if an executable can be run without root privileges. The Dockerfile can contain user and group creation instructions such as this one:
RUN groupadd -r tomcat && useradd -r -g tomcat tomcat
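A minimal sketch of how USER fits in (openjdk:8-jre here is just an example base image); every RUN, CMD, and ENTRYPOINT after the USER line runs as that user:
FROM openjdk:8-jre
RUN groupadd -r tomcat && useradd -r -g tomcat tomcat
USER tomcat
CMD ["whoami"]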
An ARG variable definition comes into effect from the line on which it is defined in the Dockerfile
ARG user=jarek
$ docker build --build-arg <variable name>=<value> .
It is not recommended to use ARG for passing secrets such as GitHub keys, user credentials, passwords, and so on, as all of them will be visible to any user of the image by using the docker history command!
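Putting the pieces together, a small sketch (the overriding value tomek is arbitrary):
FROM alpine
ARG user=jarek
RUN echo "building for user: $user"
$ docker build --build-arg user=tomek -t arg-example .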
FROM maven:3-jdk-8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD ADD . /usr/src/app
ONBUILD RUN mvn install
The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the child build, as if it had been inserted immediately after the FROM instruction in the child Dockerfile.
FROM maven:3.3-jdk-8-onbuild
CMD ["java","-jar","/usr/src/app/target/app-1.0-SNAPSHOT-jar-with-dependencies.jar"]
the ONBUILD instruction is an instruction the parent Dockerfile gives to the child Dockerfile (downstream build). Any build instruction can be registered as a trigger and those instructions will be triggered immediately after the FROM instruction in the Dockerfile.
The HEALTHCHECK instruction can be used to inform Docker how to test a container to check that it is still working
HEALTHCHECK --interval=<interval> --timeout=<timeout> CMD <command>
HEALTHCHECK --interval=5m --timeout=2s --retries=3 CMD curl -f http://localhost/ping || exit 1
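Once such a container is running, the reported health can be read back with docker inspect (the container name is illustrative):
$ docker inspect --format '{{.State.Health.Status}}' my-container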
Fabric8
docker run
Run a command in a new container
docker exec
Run a command in a running container
https://docs.docker.com/v1.10/engine/userguide/containers/dockervolumes/
Adding a data volume
You can add a data volume to a container using the -v flag with the docker create and docker run commands. You can use -v multiple times to mount multiple data volumes.
$ docker run -d -P --name web -v /webapp training/webapp python app.py
$ docker inspect web
Mounts": [
{
"Name": "fac362...80535",
"Source": "/var/lib/docker/volumes/fac362...80535/_data",
"Destination": "/webapp",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
...
You will notice in the above that Source specifies the location on the host and Destination specifies the volume location inside the container. RW shows whether the volume is read/write.
Mount a host directory as a data volume
In addition to creating a volume using the -v flag, you can also mount a directory from your Docker daemon's host into a container.
$ docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This command mounts the host directory /src/webapp into the container at /opt/webapp. If the path /opt/webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
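As a quick preview of the host-dir forms discussed next, both of these are valid; the difference is only in what appears on the left-hand side of -v (webapp-data is an illustrative volume name):
$ docker run -v /src/webapp:/opt/webapp training/webapp python app.py    # absolute path: bind mount
$ docker run -v webapp-data:/opt/webapp training/webapp python app.py    # bare name: named volume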
The container-dir must always be an absolute path such as /src/docs. The host-dir can either be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
https://stackoverflow.com/questions/26050899/how-to-mount-host-volumes-into-docker-containers-in-dockerfile-during-build
It is not possible to use the VOLUME instruction to tell docker what to mount. That would seriously break portability. This instruction tells docker that content in those directories does not go in images and can be accessed from other containers using the --volumes-from command-line parameter. You have to run the container using -v /path/on/host:/path/in/container to access directories from the host.
Mounting host volumes during build is not possible. There is no privileged build and mounting the host would also seriously degrade portability. You might want to try using wget or curl to download whatever you need for the build and put it in place.
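For example, instead of mounting a host directory at build time, fetch what the build needs in a RUN step (the URL and target directory are purely illustrative):
RUN mkdir -p /opt/build-deps \
 && curl -fsSL https://example.com/build-deps.tar.gz | tar xz -C /opt/build-deps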
https://askubuntu.com/questions/505506/how-to-get-bash-or-ssh-into-a-running-container-in-background-mode
docker exec -it <containerIdOrName> bash
Basically, if the docker container was started using the /bin/bash command you can access it using attach; if not, then you need to execute the command to create a bash instance inside the container using exec.
http://www.johnzaccone.io/entrypoint-vs-cmd-back-to-basics/
Fact 1: You need at least one (ENTRYPOINT or CMD) defined (in order to run)
Fact 2: If just one is defined at runtime, CMD and ENTRYPOINT have the same effect
$ docker inspect b52 | jq .[0].Config
ENTRYPOINT ["ping", "www.google.com"] # "exec" format
ENTRYPOINT ping www.google.com # "shell" format
docker build -t test .
Fact 5: No Shell? No Environment Variables.
The problem with not running in a shell is that you don't get the benefits of environment variables (such as $PATH) and other things that come with using a shell. There are two problems with the below Dockerfile.
$ cat Dockerfile
FROM openjdk:8-jdk-alpine
WORKDIR /data
COPY *.jar /data
CMD ["java", "-jar", "*.jar"] # "exec" format
The first problem is that since you don't have $PATH, you need to specify the exact location of the java executable. The second problem is that wildcards are evaluated by the shell, so *.jar won't resolve properly. After fixing those issues, the resulting Dockerfile is this:
FROM openjdk:8-jdk-alpine
WORKDIR /data
COPY *.jar /data
CMD ["/usr/bin/java", "-jar", "spring.jar"]
Fact 6: CMD arguments append to the end of ENTRYPOINT ... sometimes
Fact 6a: If you use "shell" format for ENTRYPOINT, CMD is ignored.
$ cat Dockerfile
FROM alpine
ENTRYPOINT ls /usr
CMD blah blah blah blah
Fact 6b: If you use "exec" format for ENTRYPOINT, CMD arguments are appended after.
$ cat Dockerfile
FROM alpine
ENTRYPOINT ["ls", "/usr"]
CMD ["/var"]
Fact 6c: If you use the "exec" format for ENTRYPOINT, then you need to use the "exec" format for CMD as well. If you don't, docker tries to add the sh -c into the arguments that are appended, which could lead to some funky results.
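A sketch of the mismatch (purely illustrative): with an exec-form ENTRYPOINT and a shell-form CMD, the CMD is stored as ["/bin/sh", "-c", "/var"], so the container ends up running ls /bin/sh -c /var rather than ls /var.
FROM alpine
ENTRYPOINT ["ls"]
CMD /var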
Fact 7: ENTRYPOINT and CMD can be overridden via command line flags.
Use the --entrypoint flag to override ENTRYPOINT:
docker run --entrypoint [my_entrypoint] test
Anything after the image in the docker run command overrides CMD:
docker run test [command 1] [arg1] [arg2]
All of the above facts apply; just keep in mind that developers have the ability to override these flags when they do docker run. Which leads me to the conclusion...
Use ENTRYPOINT if you don't want developers to change the executable that is run when the container starts. You can think of your container as an "executable wrapper". A good strategy is to define a "stable" combination of executable + parameters as the ENTRYPOINT. Then you can (optionally) specify a default CMD that developers can easily override.
$ cat Dockerfile
FROM alpine
ENTRYPOINT ["ping"]
CMD ["www.google.com"]
$ docker build -t test .
Override CMD with your own parameters:
$ docker run test www.yahoo.com
Use only CMD (with no ENTRYPOINT) if you want to give developers the ability to easily override the executable that is being run. If an ENTRYPOINT is defined you can still override the executable using --entrypoint, but it is much easier for developers to append the command they want at the end of docker run.
Ping is nice, but let's start the container with a shell instead.
$ docker run -it test sh
If you ran these commands, you have a bunch of stopped containers left on your host. Clean them up:
$ docker system prune
Docker Compose (herein referred to as compose) will use the Dockerfile if you add the build command to your project's docker-compose.yml.
Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use compose to assemble the images using the build command.
You can specify the path to your individual Dockerfiles using build: /path/to/dockerfiles/blah, where /path/to/dockerfiles/blah is where blah's Dockerfile lives.
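A minimal sketch of what that looks like in docker-compose.yml (the service name and paths are illustrative):
version: "3"
services:
  blah:
    build: ./dockerfiles/blah
    ports:
      - "8080:8080"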
The Compose file describes the container in its running state, leaving the details on how to build the container to Dockerfiles.
http://deninet.com/blog/1587/docker-scratch-part-4-compose-and-volumes
https://docs.docker.com/engine/admin/multi-service_container/
A container's main running process is the ENTRYPOINT and/or CMD at the end of the Dockerfile. It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It's ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
CMD ./my_wrapper_script.sh
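If you really do need a couple of processes in one container, the wrapper script referenced above can be as simple as this sketch (both process names are placeholders for your own commands):
#!/bin/bash
# my_wrapper_script.sh
./my_helper_process &
exec ./my_main_process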
https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#use-a-dockerignore-file
Try to add the script with the ADD instruction and a specification of the working directory, like this ("script.sh" is the name of the script and /root/script.sh is where you want it in the container; it can be a different path):
ADD script.sh /root/script.sh
In this case ADD has to come before CMD, if you have one. By the way, it's a handy way to import scripts to any location in the container from the host machine.
In CMD, place ["./script.sh"]
RUN and ENTRYPOINT are two different ways to execute a script.
RUN means it creates an intermediate container, runs the script and freezes the new state of that container in a new intermediate image. The script won't be run after that: your final image is supposed to reflect the result of that script.
ENTRYPOINT means your image (which has not executed the script yet) will create a container, and run that script.
In both cases, the script needs to be added, and a RUN chmod +x /bootstarp.sh is a good idea. It should also start with a shebang (like #!/bin/sh).
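A small sketch putting the two options side by side, keeping the script name used in the answer above (choose one of the last two lines depending on whether the script should run at build time or at container start):
FROM alpine
ADD bootstarp.sh /bootstarp.sh
RUN chmod +x /bootstarp.sh
RUN /bootstarp.sh
# or instead: ENTRYPOINT ["/bootstarp.sh"]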
Considering your script (KevinRaimbaud/docker-symfony/docker/php/bootstarp.sh: a couple of git config --global commands), it would be best to RUN that script once in your Dockerfile, making sure to use the right user (the global git config file is $HOME/.gitconfig, which by default is the /root one).
Add to your Dockerfile:
RUN /bootstarp.sh
Then, when running a container, check the content of /root/.gitconfig to confirm the script was run.
https://devopscube.com/what-is-docker/
In a normal virtualized environment, one or more virtual machines run on top of a physical machine using a hypervisor like Xen, Hyper-V, etc. Containers, on the other hand, run in user space on top of the operating system's kernel. This can be called OS-level virtualization. Each container has its own isolated user space, and you can run multiple containers on a host, each having its own user space. It means you can run different Linux systems (containers) on a single host.
Containers are isolated in a host using two Linux kernel features called namespaces and control groups.
Namespaces:
There are six namespaces in Linux (mnt, IPC, net, PID, UTS, and user). Using these namespaces a container can have its own network interfaces, IP address, etc. Each container will have its own namespace and the processes running inside that namespace will not have any privileges outside its namespace.
Control groups:
The resources used by a container are managed by Linux control groups. You can decide how much CPU and memory a container should use with Linux control groups.
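For example, cgroup limits can be set directly on docker run (the values here are arbitrary):
$ docker run -d --memory=512m --cpus=1.5 nginx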
Docker is basically a container engine which uses Linux kernel features like namespaces and control groups to create containers on top of an operating system and to automate application deployment on the container.
Docker is composed of the following four components:
- Docker Client and Daemon.
- Images
- Docker registries
- Containers
Docker has a client-server architecture. The Docker daemon (or server) is responsible for all the actions related to containers. The daemon receives commands from the Docker client through the CLI or REST APIs. The Docker client can be on the same host as the daemon or on any other host.
Images are organized in a layered manner. Every change in an image is added as a layer on top of it.
Docker registry is a repository for Docker images. Using Docker registry, you can build and share images with your team. A registry can be public or private. Docker Inc provides a hosted registry service called Docker Hub.
https://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-normal-virtual-machine
Docker originally used LinuX Containers (LXC), but later switched to runC (formerly known as libcontainer), which runs in the same operating system as its host. This allows it to share a lot of the host operating system resources. Also, it uses a layered filesystem (AuFS) and manages networking.
AuFS is a layered file system, so you can have a read only part and a write part which are merged together. One could have the common parts of the operating system as read only (and shared amongst all of your containers) and then give each container its own mount for writing.
So, let's say you have a 1 GB container image; if you wanted to use a full VM, you would need to have 1 GB times x number of VMs you want. With Docker and AuFS you can share the bulk of the 1 GB between all the containers and if you have 1000 containers you still might only have a little over 1 GB of space for the containers OS (assuming they are all running the same OS image).
A full virtualized system gets its own set of resources allocated to it, and does minimal sharing. You get more isolation, but it is much heavier (requires more resources). With Docker you get less isolation, but the containers are lightweight (require fewer resources). So you could easily run thousands of containers on a host, and it won't even blink. Try doing that with Xen, and unless you have a really big host, I don't think it is possible.
A full virtualized system usually takes minutes to start, whereas Docker/LXC/runC containers take seconds, and often even less than a second.
There are pros and cons for each type of virtualized system. If you want full isolation with guaranteed resources, a full VM is the way to go. If you just want to isolate processes from each other and want to run a ton of them on a reasonably sized host, then Docker/LXC/runC seems to be the way to go.
Amazon's and Google's container services are based on Docker; you can happily use Docker on your local development machines as well as on your cloud-based production machines (with the help of the cloud provider's container services).
https://docs.docker.com/engine/userguide/networking/
When you install Docker, it creates three networks automatically. You can list these networks using the docker network ls command:
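Illustrative output (the network IDs will differ on your machine):
NETWORK ID          NAME                DRIVER              SCOPE
c2c695315b3a        bridge              bridge              local
a5b22eb8b4c9        host                host                local
4e3f8a3c1d2e        none                null                local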
These three networks are built into Docker. When you run a container, you can use the --network flag to specify which networks your container should connect to.
docker logs -f container_id
https://docs.docker.com/docker-for-mac/troubleshoot/#diagnose-problems-send-feedback-and-create-github-issues
USE THE COMMAND LINE TO VIEW LOGS
To view Docker for Mac logs at the command line, type this command in a terminal window or your favorite shell.
$ syslog -k Sender Docker
Alternatively, you can send the output of this command to a file. The following command redirects the log output to a file called my_docker_logs.txt:
$ syslog -k Sender Docker > ~/Desktop/my_docker_logs.txt
The Console lives on your Mac hard drive in Applications > Utilities. You can bring it up quickly by just searching for it with Spotlight Search.
To find all Docker app log messages, do the following.
- From the Console menu, choose File > New System Log Query…
https://www.mankier.com/1/docker
-a, --all Show all containers (default shows just running)
-f, --filter value Filter output based on conditions provided (default [])
--format string Pretty-print containers using a Go template
--help Print usage
-n, --last int Show n last created containers (includes all states) (default -1)
-l, --latest Show the latest created container (includes all states)
--no-trunc Don't truncate output
docker ps --help
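For example, combining a couple of these flags (the filter and format values are just examples):
$ docker ps -a --filter "status=exited" --format "{{.ID}}: {{.Names}} ({{.Status}})"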
docker-compose ps
docker-compose rm -v
docker-compose logs
docker-compose logs pump elasticsearch
docker-compose -f docker-compose.yml ps
docker-compose build
docker-compose build calaca pump
docker-compose rm -vf
https://github.com/dockerinaction/ch11_coffee_api
docker-compose pull
docker-compose up -d db
docker-compose ps coffee
docker-compose scale coffee=5
Docker builds container links by creating firewall rules and injecting service discovery information into the dependent container’s environment variables and /etc/hosts file.
When containers are re-created or restarted, they come back with different IP addresses. That change makes the information that was injected into the proxy service stale.
Make sure all RUN yum commands end with && yum clean all to save space. Furthermore, packages which are only needed during the image creation, but not for the actual use of the image, can be removed using yum's transaction support, and specifically using && yum history undo last -y
RUN yum install -y libfoo \
 && yum install -y libfoo-devel \
# build/install something here which requires libfoo-devel
 && yum history undo last -y \
 && yum clean all
/ is not allowed in the name of a volume, and thus it fails. The docker CLI does not take a relative path because the docker client and the docker daemon might not be on the same host, so how should the relative path be handled?
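A common workaround is to expand the relative path on the client side before it reaches the daemon, for example with $(pwd) (the image name and path are illustrative):
$ docker run -v "$(pwd)/data":/data my-image
docker-compose, by contrast, does resolve relative host paths in volumes against the directory of the compose file.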