Thursday, October 29, 2015

Docker



debug + troubleshooting
https://serverfault.com/questions/596994/how-can-i-debug-a-docker-container-initialization
docker events &
Then run your failing docker run ... command. Then you should see something like the following on screen:
2015-12-22T15:13:05.503402713+02:00 xxxxxxxacd8ca86df9eac5fd5466884c0b42a06293ccff0b5101b5987f5da07d: (from xxx/xxx:latest) die
Then you can get the startup hex id from the previous message or from the output of the run command. Then you can use it with the logs command:
docker logs <copy the instance id from docker events messages on screen>

https://cntnr.io/running-guis-with-docker-on-mac-os-x-a14df6a76efc


https://cntnr.io/whats-eating-my-disk-docker-system-commands-explained-d778178f96f1
Now I’m going to create a Dockerfile that builds an image from the Alpine base image. In this image I’m writing three files that each consist of 1 block with block size 1GB, by utilizing the dd command, taking up a total of 3 gigabytes. Since I’m not planning to do anything useful with this image, the CMD simply defaults to /bin/true.
FROM alpine
RUN dd if=/dev/zero of=1g1.img bs=1G count=1
RUN dd if=/dev/zero of=1g2.img bs=1G count=1
RUN dd if=/dev/zero of=1g3.img bs=1G count=1
CMD /bin/true
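A scaled-down local sketch of what each of those RUN lines does (1 MB instead of 1 GB, so it's quick; the /tmp path is arbitrary):

```shell
# Write a single 1 MB block of zeros, just as the Dockerfile does with 1 GB blocks
dd if=/dev/zero of=/tmp/1m.img bs=1M count=1 2>/dev/null
# The resulting file size is exactly bs * count = 1048576 bytes
wc -c < /tmp/1m.img
```

dd with bs=1G count=1 behaves the same way, only with a gigabyte-sized block.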


Under the same system namespace Docker provides another command docker system prune, that will help you clean up dangling images, unused containers, stale volumes and networks.

After this command greets us with a warning, it will remove all the stopped containers and dangling images. In our case the intermediary image, the one with the 3 x 1GB files, isn’t associated anymore, hence ‘dangling’, so it is pruned. Also, all the intermediary images that were created while I was building my image get removed, and the total reclaimed space is about three gigabytes. Win!

$ docker system prune -a
By running this you’re basically cleaning up your entire system, keeping only the things that are actually running on your system, so be very aware of what you are doing when running this. For instance, you don’t want to run this prune -a command on a production server where you have some sidecar images idling (e.g. scheduled backup or rollup, weekly exports, etc.), waiting to be executed every once in a while, because those will be cleaned up and will need to be pulled in again when running your sidecar.


https://docs.docker.com/engine/admin/host_integration/
As of Docker 1.2, restart policies are the built-in Docker mechanism for restarting containers when they exit. If set, restart policies will be used when the Docker daemon starts up, as typically happens after a system boot. Restart policies will ensure that linked containers are started in the correct order.
https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers/
The docker stop command attempts to stop a running container first by sending a SIGTERM signal to the root process (PID 1) in the container. If the process hasn't exited within the timeout period a SIGKILL signal will be sent.
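A minimal sketch, in plain shell outside Docker, of the kind of entrypoint that traps SIGTERM so docker stop can shut it down cleanly instead of waiting out the timeout and sending SIGKILL:

```shell
# A PID-1-style loop that exits cleanly on SIGTERM instead of being killed
sh -c 'trap "echo caught SIGTERM; exit 0" TERM; while true; do sleep 1; done' &
pid=$!
sleep 1            # let it start
kill -TERM "$pid"  # this is what docker stop sends to PID 1
wait "$pid"        # returns 0 because the trap ran
```

Without the trap, the process would ignore nothing but would simply die with the signal's default action, which in a container means waiting for the SIGKILL fallback if it doesn't handle SIGTERM at all.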
http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
https://til.codes/docker-run-vs-cmd-vs-entrypoint/
  • RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages.
  • CMD sets default command and/or parameters, which can be overwritten from command line when docker container runs.
  • ENTRYPOINT configures a container that will run as an executable.
When ENTRYPOINT is used in shell form, variables are substituted. For example, the Dockerfile snippet
ENV name John Dow
ENTRYPOINT echo "Hello, $name"  
when the container runs as docker run -it <image> will produce the output
Hello, John Dow  
When an instruction is executed in exec form it calls the executable directly, and shell processing does not happen. For example, the following snippet in a Dockerfile
ENV name John Dow  
ENTRYPOINT ["/bin/echo", "Hello, $name"]  
when the container runs as docker run -it <image> will produce the output
Hello, $name  
Note that the variable name is not substituted. To get substitution with the exec form, run a shell explicitly:
ENV name John Dow  
ENTRYPOINT ["/bin/bash", "-c", "echo Hello, $name"]  
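The substitution difference can be sketched outside Docker: an argument passed verbatim (as in the exec form) is never expanded, while one that goes through a shell (as in the shell form, or the bash -c workaround above) is:

```shell
name="John Dow"
export name
# exec-form analogue: the argument is passed verbatim, no shell expansion
/bin/echo 'Hello, $name'        # prints: Hello, $name
# shell-form analogue: a shell expands the variable before echo runs
sh -c 'echo "Hello, $name"'     # prints: Hello, John Dow
```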
  • CMD ["executable","param1","param2"] (exec form, preferred)
  • CMD ["param1","param2"] (sets additional default parameters for ENTRYPOINT in exec form)
  • CMD command param1 param2 (shell form)
https://www.ctl.io/developers/blog/post/dockerfile-entrypoint-vs-cmd/
use ENTRYPOINT or CMD to specify your image's default executable

Both ENTRYPOINT and CMD give you a way to identify which executable should be run when a container is started from your image. In fact, if you want your image to be runnable (without additional docker run command line arguments) you must specify an ENTRYPOINT or CMD.
Trying to run an image which doesn't have an ENTRYPOINT or CMD declared will result in an error
$ docker run alpine
FATA[0000] Error response from daemon: No command specified
Many of the Linux distro base images that you find on the Docker Hub will use a shell like /bin/sh or /bin/bash as the CMD executable. This means that anyone who runs those images will get dropped into an interactive shell by default (assuming, of course, that they used the -i and -t flags with the docker run command).
FROM ubuntu:trusty
CMD ping localhost
we can override the default CMD by specifying an argument after the image name when starting the container:
$ docker run demo hostname
The default ENTRYPOINT can be similarly overridden but it requires the use of the --entrypoint flag:
$ docker run --entrypoint hostname demo
Given how much easier it is to override the CMD, the recommendation is to use CMD in your Dockerfile when you want the user of your image to have the flexibility to run whichever executable they choose when starting the container.

In contrast, ENTRYPOINT should be used in scenarios where you want the container to behave exclusively as if it were the executable it's wrapping. That is, when you don't want or expect the user to override the executable you've specified.
There are many situations where it may be convenient to use Docker as portable packaging for a specific executable. Imagine you have a utility implemented as a Python script you need to distribute but don't want to burden the end-user with installation of the correct interpreter version and dependencies. You could package everything in a Docker image with an ENTRYPOINT referencing your script. Now the user can simply docker run your image and it will behave as if they are running your script directly.
Of course you can achieve this same thing with CMD, but the use of ENTRYPOINT sends a strong message that this container is only intended to run this one command.
Both the ENTRYPOINT and CMD instructions support two different forms: the shell form and the exec form. In the example above, we used the shell form, which looks like this:
CMD executable param1 param2
When using the shell form, the specified binary is executed with an invocation of the shell using /bin/sh -c. You can see this clearly if you run a container and then look at the docker ps output
but there are some subtle issues that can occur when using the shell form of either the ENTRYPOINT or CMD instruction. If we peek inside our running container and look at the running processes we will see something like this:
$ docker exec 15bfcddb ps -f
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 20:14 ? 00:00:00 /bin/sh -c ping localhost
root 9 1 0 20:14 ? 00:00:00 ping localhost
root 49 0 0 20:15 ? 00:00:00 ps -f
Note how the process running as PID 1 is not our ping command, but the /bin/sh executable. This can be problematic if we need to send any sort of POSIX signals to the container, since /bin/sh won't forward signals to child processes.
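Why the exec form avoids this can be sketched locally: exec replaces the shell with the command in the same process (and hence the same PID), rather than leaving the shell alive as the parent:

```shell
# Without exec, sh stays alive as the parent of the command.
# With exec, the command takes over the shell's own PID:
sh -c 'echo $$ > /tmp/shpid; exec sh -c "echo \$\$ > /tmp/cmdpid"'
cmp -s /tmp/shpid /tmp/cmdpid && echo "same PID"
```

Inside a container the same mechanism is what puts your actual process at PID 1 when you use the exec form.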
you may also run into problems with the shell form if you're building a minimal image which doesn't even include a shell binary. When Docker is constructing the command to be run it doesn't check to see if the shell is available inside the container -- if you don't have /bin/sh in your image, the container will simply fail to start.
A better option is to use the exec form of the ENTRYPOINT/CMD instructions which looks like this:
CMD ["executable","param1","param2"]
When the exec form of the CMD instruction is used the command will be executed without a shell.
Now /bin/ping is being run directly without the intervening shell process (and, as a result, will end up as PID 1 inside the container).
Whether you're using ENTRYPOINT or CMD (or both), the recommendation is to always use the exec form so that it's obvious which command is running as PID 1 inside your container.
Combining ENTRYPOINT and CMD allows you to specify the default executable for your image while also providing default arguments to that executable which may be overridden by the user.
FROM ubuntu:trusty
ENTRYPOINT ["/bin/ping","-c","3"]
CMD ["localhost"]
 When both an ENTRYPOINT and CMD are specified, the CMD string(s) will be appended to the ENTRYPOINT in order to generate the container's command string. Remember that the CMD value can be easily overridden by supplying one or more arguments to `docker run` after the name of the image.
docker run ping docker.io
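How the final command line is assembled can be sketched as follows (the /bin/ping values mirror the Dockerfile above; the composed strings are just illustrations of the argv Docker builds):

```shell
entrypoint="/bin/ping -c 3"
default_cmd="localhost"
# docker run <image>            -> ENTRYPOINT followed by the default CMD
echo "$entrypoint $default_cmd"   # /bin/ping -c 3 localhost
# docker run <image> docker.io  -> ENTRYPOINT followed by the user's args
echo "$entrypoint docker.io"      # /bin/ping -c 3 docker.io
```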
When using ENTRYPOINT and CMD together it's important that you always use the exec form of both instructions. Trying to use the shell form, or mixing and matching the shell and exec forms, will almost never give you the result you want.
ARG VERSION="3.4.8"
ARG PACKAGE="zookeeper-${VERSION}"
ENTRYPOINT ["/entry.sh"]
CMD ["start-foreground"]
https://docs.docker.com/compose/compose-file/compose-file-v2/
https://docs.docker.com/compose/compose-file
build can be specified either as a string containing a path to the build context, or an object with the path specified under context and optionally dockerfile and args.
build: ./dir

build:
  context: ./dir
  dockerfile: Dockerfile-alternate
  args:
    buildno: 1
If you specify image as well as build, then Compose names the built image with the name (webapp) and optional tag (tag) specified in image:
build: ./dir
image: webapp:tag
This will result in an image named webapp and tagged tag, built from ./dir.

CONTEXT

Either a path to a directory containing a Dockerfile, or a url to a git repository.
When the value supplied is a relative path, it is interpreted as relative to the location of the Compose file. This directory is also the build context that is sent to the Docker daemon.

ARGS

Add build arguments, which are environment variables accessible only during the build process.
First, specify the arguments in your Dockerfile:
ARG buildno
ARG password

RUN echo "Build number: $buildno"
RUN script-requiring-password.sh "$password"
Then specify the arguments under the build key. You can pass either a mapping or a list:
build:
  context: .
  args:
    buildno: 1
    password: secret

build:
  context: .
  args:
    - buildno=1
    - password=secret
You can omit the value when specifying a build argument, in which case its value at build time is the value in the environment where Compose is running.
args:
  - buildno
  - password

RESTART_POLICY

Configures if and how to restart containers when they exit. Replaces restart.
  • condition: One of none, on-failure or any (default: any).
  • delay: How long to wait between restart attempts, specified as a duration (default: 0).
  • max_attempts: How many times to attempt to restart a container before giving up (default: never give up).
  • window: How long to wait before deciding if a restart has succeeded, specified as a duration (default: decide immediately).
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
      window: 120s

depends_on

Express dependency between services, which has two effects:
  • docker-compose up will start services in dependency order. In the following example, db and redis will be started before web.
  • docker-compose up SERVICE will automatically include SERVICE’s dependencies. In the following example, docker-compose up web will also create and start db and redis.
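The example those bullets refer to isn't pasted above; from the Compose docs it looks roughly like this:

```yaml
version: '2'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
```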
healthcheck

Configure a check that’s run to determine whether or not containers for this service are “healthy”. See the docs for the HEALTHCHECK Dockerfile instruction for details on how healthchecks work.
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 1m30s
  timeout: 10s
  retries: 3

docker kill $(docker ps -q)
docker build --no-cache

# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)

https://gist.github.com/ngpestelos/4fc2e31e19f86b9cf10b
Another way of removing all images is:
docker images -q | xargs docker rmi
If images have dependent children, force removal with the -f flag:
docker images -q | xargs docker rmi -f
ENV PATH="/opt/gtk/bin:$PATH"

The build cache is invalidated:
  • When the docker build command is run with the --no-cache flag.
  • When a non-cacheable command such as apt-get update is given; all the following RUN instructions will be run again.

The CMD instruction provides the default command for a container to execute.
The WORKDIR instruction sets the working directory for the RUN, CMD, and ENTRYPOINT Dockerfile commands that follow it:
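A small Dockerfile fragment illustrating this (the /opt/app path and start.sh script are made-up names):

```dockerfile
WORKDIR /opt/app
RUN pwd                 # runs in /opt/app
COPY . .                # copies into /opt/app
CMD ["./start.sh"]      # started from /opt/app
```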

The VOLUME instruction will create a mount point with the given name and mark it as holding externally mounted volumes from the host or from other containers

http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/

  • RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages.
  • CMD sets default command and/or parameters, which can be overwritten from command line when docker container runs.
  • ENTRYPOINT configures a container that will run as an executable.
When Docker runs a container, it runs an image inside it. This image is usually built by executing Docker instructions, which add layers on top of an existing image or OS distribution. The OS distribution is the initial image, and every added layer creates a new image.
http://www.sitepoint.com/docker-and-dockerfiles-made-easy/
docker-machine ls
docker run hello-world
docker run -it ubuntu bash

How to install docker
Using this: https://docs.docker.com/engine/installation/mac/
http://stackoverflow.com/questions/26572112/how-to-fix-error-in-run-failed-to-get-machine-boot2docker-vm-machine-does-n
Update: Boot2Docker is now legacy. See here for the official deprecation notice: https://docs.docker.com/installation/mac/, and here for info on migrating a preexisting Boot2Docker VM to Docker Machine: https://docs.docker.com/machine/migrate-to-machine/.

1. Run Docker Quickstart Terminal

https://github.com/docker/kitematic/issues/1010
When you run docker-machine env default it doesn't set the actual env variables. At the end of the message you will see something like:
# Run this command to configure your shell:
# eval "$(docker-machine env default)"

2. From your shell
docker-machine create --driver virtualbox default
docker-machine ls

docker-machine env default
eval "$(docker-machine env default)"

Dockerfile
$ cat Dockerfile
FROM fedora:20
MAINTAINER "Scott Collier" <scollier@redhat.com>

RUN yum -y update && yum clean all
RUN yum -y install httpd && yum clean all
RUN echo "Apache" >> /var/www/html/index.html

EXPOSE 80

# Simple startup script to avoid some issues observed with container restart
ADD run-apache.sh /run-apache.sh
RUN chmod -v +x /run-apache.sh

CMD ["/run-apache.sh"]
docker build -t fedora/apache .

http://programster.blogspot.com/2014/01/docker-build-apachephp-image-from.html
http://www.slashroot.in/dockerfile-tutorial-building-docker-images-for-containers
FROM ubuntu:12.04
MAINTAINER Sarath "sarath@slashroot.in"
RUN apt-get update
RUN apt-get install -y nginx
RUN echo 'Our first Docker image for Nginx' > /usr/share/nginx/html/index.html
EXPOSE 80

docker build -t="spillai/test_nginx_image" .

So basically, if one of your instructions in the Dockerfile fails to complete successfully, you will still have a usable image (the one created by the instruction before it).
This is really helpful for troubleshooting to figure out why the instruction failed, i.e. you can simply launch a container from the last image created during the build operation and debug the failed instruction in the Dockerfile by executing it manually.

docker images | grep nginx
docker run -d -p 80:80 --name my_nginx_test_container spillai/test_nginx_image nginx -g "daemon off;"
docker history 728d805bd6d0


cat Dockerfile
FROM ubuntu:14.04
MAINTAINER Sarath "sarath@slashroot.in"
RUN apt-get update
RUN apt-get install -y nginx
RUN echo 'Our first Docker image for Nginx' > /usr/share/nginx/html/index.html
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
EXPOSE 80

docker run -d -p 80:80 --name my_nginx_test_container spillai/test_nginx_image:1.0

https://coderwall.com/p/ewk0mq/stop-remove-all-docker-containers
One liner to stop / remove all of Docker containers:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)

docker logs - Shows us the standard output of a container.

docker commands
docker ps

docker run -t -i ubuntu:14.04 /bin/bash
The -t flag assigns a pseudo-tty or terminal inside our new container and the -i flag allows us to make an interactive connection by grabbing the standard in (STDIN) of the container.

docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"
docker stop insane_babbage
docker logs insane_babbage


docker-machine create --driver virtualbox docker-vm
You can print the machine’s configuration by running the docker-machine env docker-vm command. Evaluating that output is how you switch between machines:

# Use the docker-vm
eval "$(docker-machine env docker-vm)"

# Switch to the dev machine
eval "$(docker-machine env dev)"

# Get image from the hub
docker pull nimmis/apache-php5
# Create the container
docker run -tid nimmis/apache-php5
The last flag (d) means that we want to run the container in the background (detached).

The -P option on the run command will automatically expose any ports needed from the container to the host machine, while the -p option lets you specify ports to expose from the container to the host.

# Automatically exposes the container ports to an available host port
docker run -tid -P nimmis/apache-php5

# We can also specify ports manually <host port>:<container port>
docker run -tid -p 80:80 nimmis/apache-php5

Container Volumes
Volumes are an easy way to share storage between your host machine and the container. They are initialized during the container’s creation and kept synced. In our case we want to mount /var/www to a local directory ~/Desktop/www/laravel_demo.
# Option syntax
docker run -v <local dir>:<container dir>
docker run -tid -p 80:80 -v ~/Desktop/www/laravel_demo:/var/www nimmis/apache-php5

You can log into the container using the exec command.
docker exec -it <container> bash
# Restart Apache
/etc/init.d/apache2 restart

Naming Containers
docker run -tid -p 80:80 -v ~/Desktop/www/laravel_demo:/var/www --name wazo_server nimmis/apache-php5

Docker Machines
A Docker machine is the VM that holds your images and containers
Docker Images
Docker images are OS boxes that contain some pre-installed and configured software. You can browse the list of available images on the Docker Hub. In fact, you can create your own image based on another one and push it to the Docker hub so that other users can use it.
Docker Containers
Docker containers are separate instances that we create from images. 


docker-machine ip default
and I am able to browse my site from the host at http://192.168.59.103:49159/

docker run -d -p 8000:80 nginx
curl $(docker-machine ip dev):8000
docker-machine stop/start dev

use docker ps to find which port the container exposes


http://blog.javabien.net/2014/03/03/setup-docker-on-osx-the-no-brainer-way/
brew update
brew tap phinze/homebrew-cask
brew install brew-cask

Install VirtualBox, which is a prerequisite to running docker on OSX:
brew cask install virtualbox

Install boot2docker
Boot2docker is a small script that helps download and set up a minimal Linux VM that will be in charge of running the docker daemon.
brew install boot2docker
boot2docker init
boot2docker up
export DOCKER_HOST=tcp://localhost:4243

Install docker
brew install docker
docker version

http://blog.javabien.net/2014/03/17/upgrade-docker-and-boot2docker-on-osx/
First let’s upgrade docker and boot2docker:
$ brew update
$ brew upgrade docker
$ brew upgrade boot2docker
Now it’s very important to upgrade boot2docker’s image otherwise you’ll see this kind of message when you try to create new images:

Error: Multipart upload for build is no longer supported. Please upgrade your docker client.
See this issue

So let’s upgrade boot2docker’s image:

$ boot2docker stop
$ boot2docker delete
$ boot2docker download
$ boot2docker init
$ boot2docker up
Sometimes boot2docker won’t stop. I’ve had to manually shutdown the vm with VirtualBox’s GUI.

http://prismoskills.appspot.com/lessons/System_Design_and_Big_Data/Chapter_10_-_Docker.jsp
One of the most common scaling paradigms during the 2000s was to create several virtual machines, create design packs for installing standard software like Oracle, MySQL, ZooKeeper, Solr etc., and manage the lifecycle of these components through a browser. The "virtualization" technology at the core of this strategy creates several VMs (Virtual Machines) on a single powerful machine so that its resources are shared by each of these VMs. To scale horizontally, one then just keeps adding more hardware and spinning up more VMs on it.

The above solution works well until the number of VMs is in the thousands. Beyond that, the cost of hardware becomes really high, and so does its maintenance. Besides, the cost of spinning up a VM is high, and it can take several minutes. So if a service is designed to be elastic, adding machines based on demand, it could take quite some time to add those VMs during high demand, which could cause the website to appear slow while new VM provisioning is in progress.

Google faced this problem in 2005, and its engineers came out with Control Groups (also called cgroups or Process Containers) in 2007. CGroups is a feature of the Linux kernel that limits and isolates the resource usage of a group of processes. With such isolation, processes in one group cannot affect those in another. And since a limit is enforced on each group, processes in a single group cannot consume all the resources of a given machine unless explicitly configured to do so.


Control Groups vs VMs

  1. Running an isolated group of processes is much less intensive than running a full-blown VM. This becomes evident when you realize that, unlike a VM, CGroups does not try to emulate the hardware layer for some OS running on it. A VM provides a virtual hardware-like API layer to the OS that runs on it. Since the VM does not know what will run on it, it has to provide all the APIs of the hardware layer. This is wasteful, since the application running on it may use only a handful of those APIs.
  2. The emulation of all these APIs adds time to the VM bring-up as well.
  3. Eventually, a VM has to interact with the underlying hardware for accessing memory, network, etc. So the emulation API has a redirection that passes these commands to the underlying hardware. This redirection consumes some time, however little.
  4. As seen in #1, since a VM is heavy, a lot of hardware resources are wasted in trying to provide a common environment to all programs. Imagine spinning up a full Linux VM for an application whose only job is to do some number crunching or to deal with storage.


CGroups are free from the above problems and can achieve better resource utilization by isolating their groups from each other. Hence CGroups are faster to bring up, faster for the processes that run in them (as the processes interact directly with the underlying hardware), and more groups can be run on a machine than VMs.

The downside of CGroups is that they run only on Linux. So if you have to use them on Windows, you first need to install a VM running Linux and then use CGroups inside it.


Namespace Isolation


Namespace isolation is a feature in which groups of processes remain unaware of other groups' presence on the same machine. It was released in 2008.

Enter Docker


Docker makes use of CGroups and namespace isolation and allows creation of Docker images
A docker image is a collection of processes that are run in isolation by cgroups.
Roughly speaking, CGroups with Docker is equivalent to VirtualBox and Docker image is equivalent to the VM.
Just that Docker/CGroups does not provide a hardware-like API layer to any OS running on it.
It just makes sure that groups do not interfere with each other and stay within their allocated limits.

  1. Containers' white-paper by Parallels
  2. CGroups at Wikipedia
  3. An in-depth explanation of linux namespaces
  4. LinuxContainers.org
https://en.wikipedia.org/wiki/Cgroups#NAMESPACE-ISOLATION
cgroups (abbreviated from control groups) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.

One of the design goals of cgroups is to provide a unified interface to many different use cases, from controlling single processes (by using nice, for example) to whole operating system-level virtualization (as provided by OpenVZ, Linux-VServer or LXC, for example). Cgroups provides:
Resource limitation: groups can be set to not exceed a configured memory limit, which also includes the file system cache[6][7]
Prioritization: some groups may get a larger share of CPU utilization[8] or disk I/O throughput[9]
Accounting: measures how much resources certain systems use, which may be used, for example, for billing purposes[10]
Control: freezing the groups of processes, their checkpointing and restarting[10]
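On a Linux host you can see which cgroups a given process belongs to without any extra tooling, for example:

```shell
# Each line is hierarchy-id:controller-list:cgroup-path for the current shell
cat /proc/self/cgroup
```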

Namespace isolation
While not technically part of the cgroups work, a related feature of the Linux kernel is namespace isolation, where groups of processes are separated such that they cannot "see" resources in other groups.

http://coolshell.cn/articles/17010.html

http://stackoverflow.com/questions/22332830/command-for-upgrading-docker

https://docs.docker.com/docker-for-mac/docker-toolbox/
Docker Toolbox installs docker, docker-compose and docker-machine in /usr/local/bin on your Mac. It also installs VirtualBox. At installation time, Toolbox uses docker-machine to provision a VirtualBox VM called default, running the boot2docker Linux distribution, with Docker Engine and certificates located on your Mac at $HOME/.docker/machine/machines/default.
Before you use docker or docker-compose on your Mac, you typically use the command eval $(docker-machine env default) to set environment variables so that docker or docker-compose know how to talk to Docker Engine running on VirtualBox.
  • Docker for Mac does not use VirtualBox, but rather HyperKit, a lightweight OS X virtualization solution built on top of Hypervisor.framework in OS X 10.10 Yosemite and higher.
  • Installing Docker for Mac does not affect machines you created with Docker Machine. The install offers to copy containers and images from your local default machine (if one exists) to the new Docker for Mac HyperKit VM. If chosen, content from default is copied to the new Docker for Mac HyperKit VM, and your original default machine is kept as is.
  • The Docker for Mac application does not use docker-machine to provision that VM; but rather creates and manages it directly.
https://docs.docker.com/v1.8/installation/mac/
 $ docker --version
 Docker version 1.12.0, build 8eab29e

 $ docker-compose --version
 docker-compose version 1.8.0, build f3628c7

 $ docker-machine --version
 docker-machine version 0.8.0, build b85aac1
docker run hello-world
docker ps
docker run -d -p 80:80 --name webserver nginx
Installing bash completion
cd /usr/local/etc/bash_completion.d
ln -s /Applications/Docker.app/Contents/Resources/etc/docker.bash-completion
ln -s /Applications/Docker.app/Contents/Resources/etc/docker-machine.bash-completion
ln -s /Applications/Docker.app/Contents/Resources/etc/docker-compose.bash-completion

docker kill [OPTIONS] CONTAINER [CONTAINER...]

https://docs.docker.com/v1.8/compose/yml/

https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
WORKDIR /path/to/workdir
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
Warning Avoid using your root directory, /, as the root of the source repository.
The docker build command will use whatever directory contains the Dockerfile
as the build context (including all of its subdirectories). The build context will be
sent to the Docker daemon before building the image, which means if you
use / as the source repository, the entire contents of your hard drive will get
sent to the daemon (and thus to the machine running the daemon). You
probably don't want that.
and
The <src> path must be inside the context of the build; you cannot
ADD ../something /something, because the first step of a docker build is
to send the context directory (and subdirectories) to the docker daemon.
http://pothibo.com/2015/7/how-to-debug-a-docker-container
Every instruction you set in the Dockerfile is built as a separate, temporary image for the next instruction to build on top of.

http://stackoverflow.com/questions/26220957/how-can-i-inspect-the-file-system-of-a-failed-docker-build
Every time docker successfully executes a RUN command from a Dockerfile, a new layer in the image filesystem is committed. Conveniently, you can use those layer ids as images to start a new container.
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                          PORTS               NAMES
6934ada98de6        42e0228751b3        "/bin/sh -c './utils/"   24 minutes ago      Exited (1) About a minute ago                       sleepy_bell
Commit it to an image:
$ docker commit 6934ada98de6
sha256:7015687976a478e0e94b60fa496d319cdf4ec847bcd612aecf869a72336e6b83
And then run the image [if necessary, running bash]:
$ docker run -it 7015687976a4 [bash -il]
Now you are actually looking at the state of the build at the time that it failed, instead of at the time before running the command that caused the failure.
https://medium.com/@betz.mark/ten-tips-for-debugging-docker-containers-cde4da841a1d#.rz4cqd3jn

http://blogs.dlt.com/troubleshooting-dockerfile-builds-checkpoint-containers/
https://circleci.com/blog/checkpoint-and-restore-docker-container-with-criu/

https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/
Docker currently has four restart policies:
  • no
  • on-failure
  • unless-stopped
  • always
docker-compose UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
Slack also changes --no-cache to –no-cache (with an en dash), which will result in the exact same error when copy-pasted from there.

https://help.ubuntu.com/community/HowToSHA256SUM

http://odewahn.github.io/docker-jumpstart/building-images-with-dockerfiles.html
$ docker build -t "simple_flask:dockerfile" .
The "-t" flag adds a tag to the image so that it gets a nice repository name and tag. Also note the final ".", which tells Docker to use the Dockerfile in the current directory.
Running docker history will show you the effect each command has on the overall size of the image:
$ docker history simple_flask:dockerfile
http://dockone.io/article/103
http://blog.flux7.com/blogs/docker/docker-tutorial-series-part-3-automation-is-the-word-using-dockerfile
The syntax supported by a Dockerfile is:
INSTRUCTION argument

Instructions are case-insensitive, but the naming convention is to write them in all uppercase.

Every Dockerfile must begin with the FROM command. FROM specifies which base image the new image is built from, and the commands that follow operate on that base image (translator's note: some commands differ between CentOS and Ubuntu). FROM can be used multiple times, which creates multiple images. The syntax is:
FROM <image name>

For example:
FROM ubuntu
http://blog.yohanliyanage.com/2016/09/docker-machine-moby-name-or-service-not-known/
I have been running Docker on OS X for quite a while now, and I switched to the Docker Machine for Mac Beta a few months back. It has been a great experience so far, but not without occasional hiccups. I run some of my containers in host networking mode, and I have faced the following problem when starting some of them.
java.net.UnknownHostException: moby: moby: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1505) ~[na:1.8.0_102]
at com.netflix.eureka.transport.JerseyReplicationClient.createReplicationClient(JerseyReplicationClient.java:170) ~[eureka-core-1.4.9.jar!/:1.4.9]
at com.netflix.eureka.cluster.PeerEurekaNodes.createPeerEurekaNode(PeerEurekaNodes.java:194) [eureka-core-1.4.9.jar!/:1.4.9]
....
Caused by: java.net.UnknownHostException: moby: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_102]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[na:1.8.0_102]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[na:1.8.0_102]
at java.net.InetAddress.getLocalHost(InetAddress.java:1500) ~[na:1.8.0_102]
... 67 common frames omitted
‘Moby’ is the name of the host system that runs behind the scenes in Docker Machine for Mac (and Windows). Docker Machine for Mac runs a MobyLinux VM whose hostname file refers to itself as ‘moby’, but there is no matching entry in the /etc/hosts file, so my services fail to start because they cannot resolve ‘moby’ to an IP. The workaround is simple: just add the --add-host flag to your docker run command, telling it that ‘moby’ is 127.0.0.1, like so:
docker run --net=host --add-host=moby:127.0.0.1 yourid/yourimage
This will automatically add an entry to the /etc/hosts file saying that moby in fact is 127.0.0.1.
https://blog.fundebug.com/2018/01/10/how-to-clean-docker-disk/
The docker system df command, analogous to df on Linux, shows how much disk Docker is using:


docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              147                 36                  7.204GB             3.887GB (53%)
Containers          37                  10                  104.8MB             102.6MB (97%)
Local Volumes       3                   3                   1.421GB             0B (0%)
Build Cache                                                 0B                  0B

As shown, Docker images take up 7.2GB of disk, containers 104.8MB, and local volumes 1.4GB.
The docker system prune command frees disk space by removing stopped containers, unused volumes and networks, and dangling images (images without a tag). docker system prune -a cleans more aggressively and also deletes every image not used by any container. Be careful: both commands will remove containers that are merely stopped and images that merely happen to be unused at the moment, so think twice before running them.
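Since that all-or-nothing behavior is the main risk, newer Docker versions (17.06+) let you scope the cleanup with a filter, for example only pruning objects older than a day:

```shell
# Remove only stopped containers and unused images created more than 24h ago:
docker system prune -a --filter "until=24h"
```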
The truncate command can empty the nginx container's log file:


truncate -s 0 /var/lib/docker/containers/a376aa694b22ee497f6fc9f7d15d943de91c853284f8f105ff5ad6c7ddae7a53/*-json.log
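The long container hash does not have to be copied by hand; docker inspect can resolve the log path from the container name (nginx here is an assumed container name):

```shell
# Look up the container's json-file log path, then empty it:
logfile=$(docker inspect --format '{{.LogPath}}' nginx)
sudo truncate -s 0 "$logfile"
```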

Of course, this is only a temporary fix; the log file will eventually grow back. To fix the problem for good, limit the size of the nginx container's log file by configuring the log driver's max-size option. Below is the docker-compose configuration for the nginx container:


nginx:
  image: nginx:1.12.1
  restart: always
  logging:
    driver: "json-file"
    options:
      max-size: "5g"

After restarting the nginx container, its log file size is capped at 5GB, so there is nothing more to worry about.
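The same cap can be applied without docker-compose, via docker run flags (the container name and image are illustrative):

```shell
# Equivalent json-file size limit when starting the container by hand:
docker run -d --name nginx \
  --log-driver json-file --log-opt max-size=5g \
  nginx:1.12.1
```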
For older versions of Docker (before 1.13), the docker system commands do not exist, so cleanup has to be done manually. Here are a few commonly used commands.
Remove all stopped containers:


docker ps -a | grep Exit | cut -d ' ' -f 1 | xargs docker rm

Remove all dangling images (images without a tag):


docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

Remove all dangling volumes (unused volumes):


docker volume rm $(docker volume ls -qf dangling=true)
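On Docker 1.13 and later, the three pipelines above collapse into per-type prune subcommands:

```shell
docker container prune -f   # removes all stopped containers
docker image prune -f       # removes dangling (untagged) images
docker volume prune -f      # removes dangling volumes
```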
