Launch Web with Docker
I. WHAT IS DOCKER?
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same way you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
II. BENEFITS OF DOCKER
Unlike virtual machines, Docker containers start and stop in seconds.
You can launch the same container on any system that runs Docker.
Containers can be built and removed faster than virtual machines.
Setting up a working environment is easy: configure it once and you never have to reinstall dependencies.
Your workspace stays clean, because deleting one environment does not affect other parts of the system.
III. INSTALL
See the instructions: https://docs.docker.com/get-docker/
IV. IMAGE
A Docker Image is a read-only template used to create containers. An image is structured in layers, and every layer is read-only. An image can be built on top of another image with some additional customization. In short, a Docker Image is where the environment settings are stored: the OS, packages, and software needed to run the application.
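To see the layered structure in practice, the following commands (a small sketch, assuming Docker is already installed) pull a public image and list the layers it is built from:
# Download a small public image from Docker Hub
docker pull node:12-alpine
# Show the layers the image is built from
docker history node:12-alpine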
V. CONTAINER
A Docker Container is created from a Docker Image and contains everything needed to run an application. It is a form of virtualization, but a container is very light and can be thought of as an ordinary system process. Starting, stopping, or restarting a container takes only a few seconds. On a single physical server, instead of running a few traditional virtual machines, we can run dozens or even hundreds of Docker containers.
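A quick way to feel how light the container lifecycle is (a sketch that reuses the node:12-alpine image from above; the container name demo is just a placeholder):
# Create and start a container, keeping it alive in the background
docker run -d --name demo node:12-alpine tail -f /dev/null
# Stopping and restarting it takes only a couple of seconds
docker stop demo
docker start demo
# Remove it when done
docker rm -f demo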
VI. COMPARE CONTAINER AND VIRTUAL MACHINE
| | CONTAINER | VIRTUAL MACHINE |
| --- | --- | --- |
| Resource | The process in the container uses real resources directly, but the host OS can give each process its own resource limit (or no limit). | Everything is limited by the virtual hardware. |
| Execution | The real OS runs the software directly. | Real OS → virtual OS → the virtual OS runs the software. (For VPS, a type 1 hypervisor replaces the real OS.) |
| Performance | Real software runs on real hardware; startup speed is close to that of a normal program. | The real hardware has to carry a virtual OS, so it takes time from boot until the software can be used. |
| Security | Processes in the same container can still affect each other, but normally each container should run only one process. Processes in different containers cannot affect each other. | Software containing malicious code can affect the resources of other processes in the same VM. |
| Support software | Docker Engine, LXC Linux Container, Apache Mesos, CRI-O (Kubernetes)… | VirtualBox, VMware, Microsoft Hyper-V, Parallels, Linux KVM, Docker Machine… |
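As a rough illustration of the startup-speed difference, the commands below (assuming the alpine image can be pulled) time how long it takes to create a throwaway container, run a command in it, and clean it up:
# Pull a tiny base image once
docker pull alpine
# Time a full create-run-remove cycle
time docker run --rm alpine echo "hello from a container"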
VII. DOCKERFILE
– A Dockerfile is a configuration file that Docker uses to build an image. It starts from a base image, which forms the initial layer; common base images are python, ubuntu, and alpine. Any additional layers are stacked on top of the base layer.
– The main instructions:
- FROM — Specifies the base image: python, ubuntu, alpine…
- LABEL — Provides metadata for the image; it can be used to add maintainer information. To see the labels of an image, use the docker inspect command.
- ENV — Sets an environment variable.
- RUN — Runs a command while the image is being built; typically used to install packages into the image.
- COPY — Copies files and folders into the container.
- ADD — Copies files and folders into the container; unlike COPY, it can also fetch remote URLs and extract local tar archives.
- CMD — Provides the default command and arguments for the running container. Only one CMD takes effect, and its parameters can be overridden at run time.
- WORKDIR — Sets the working directory for other instructions such as RUN, CMD, ENTRYPOINT, COPY, ADD, …
- ARG — Defines variables whose values are available only while the image is being built.
- ENTRYPOINT — Provides the command and arguments for the executable container.
- EXPOSE — Declares the port(s) the container listens on.
- VOLUME — Creates a directory mount point for accessing and storing data.
Example:
FROM node:12-alpine
RUN apk add git
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
ENV HOST=0.0.0.0 PORT=3334
EXPOSE $PORT
CMD [ "node", "." ]
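To try this Dockerfile, save it next to the application source and build and run it with the commands below (the image name my-node-app is just a placeholder chosen for this example):
# Build the image from the Dockerfile in the current directory
docker build -t my-node-app .
# Run it in the background and map the exposed port to the host
docker run -dp 3334:3334 my-node-app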
VIII. THE OTHER CONCEPTS
Docker Client: interacts with Docker via commands in the terminal. The Docker Client uses the Docker API to send commands to the Docker Daemon.
Docker Daemon: the Docker server that handles requests from the Docker API. It manages images, containers, networks, and volumes.
Docker Volumes: persistent storage for the data that applications use and create (see the command sketch after this list).
Docker Registry: private storage for Docker Images. Images are pushed to the registry and clients pull them from the registry. You can run your own registry or use the registry of a provider such as AWS, Google Cloud, or Microsoft Azure.
Docker Hub: the largest (and default) registry of Docker Images. You can find images and store your own images on Docker Hub for free.
Docker Repository: a set of Docker Images with the same name but different tags, for example node:12-alpine.
Docker Networking: connects containers together, either on a single host or across multiple hosts.
Docker Compose: a tool that makes it easy to run applications consisting of multiple Docker containers. Docker Compose lets you describe the configuration in a docker-compose.yml file so it can be reused. It is available with the Docker installation.
Docker Swarm: coordinates container deployment across multiple Docker hosts.
Docker Services: containers in production. A service runs only one image, but it codifies how that image runs: which ports to use, how many replicas of the container should run, and so on, so the service has the capacity it needs.
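The sketch below shows how a few of these concepts map to CLI commands; the names my-data, my-net, and my-app are made up for illustration, and Docker Hub is assumed as the registry:
# Create a named volume and a user-defined network
docker volume create my-data
docker network create my-net
# Pull an image from a registry (Docker Hub by default) and push one of your own
docker pull node:12-alpine
docker push <your account>/my-app
# Start every container described in docker-compose.yml in the background
docker-compose up -d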
IX. BASIC COMMANDS IN DOCKER
List images / containers:
docker image ls
docker container ls
List all containers:
docker ps -a
Stop a container:
docker stop <container name>
Run a container from an image and set the container name:
docker run --name <container name> <image name>
Stop all containers:
docker stop $(docker ps -a -q)
Show the logs of a container:
docker logs <container name>
Build an image from a Dockerfile:
docker build -t <image name> .
Create a container running in the background:
docker run -d <image name>
Start a container:
docker start <container name>
See more: https://docs.docker.com/reference/
X. INITIAL INSTALL
Step 1: Access the EC2 instance on AWS.
- Open the Tera Term application.
- Enter the host server in the Host textbox.
- Enter the username and key, then press OK.
- The instance's terminal screen appears after access.
Step 2: Install docker.
- Update the packages on your instance
sudo yum update -y
- Install Docker
sudo yum install docker -y
- Start the Docker Service
sudo service docker start
- Add the ec2-user to the docker group so you can execute Docker commands without using sudo.
sudo usermod -a -G docker ec2-user
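After adding ec2-user to the docker group, log out and back in so the new group membership takes effect; the commands below (run without sudo) should then confirm that the installation works:
# Verify the Docker client and daemon are running
docker --version
docker info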
Step 3: Install git and clone/pull source code.
- Install git
sudo yum install git
- Clone the source code:
git clone https://usernameToken:passwordToken@gitlab.com/<project name>/<git name>.git
- Or pull the latest changes:
git pull origin develop
** usernameToken is the deploy token's username and passwordToken is a random string of characters (see more here).
XI. DEPLOY
Step 1: Go to the directory containing the source code
cd <project folder>
The Dockerfile.dev file in the source is configured as follows:
# Check out https://hub.docker.com/_/node to select a new base image
FROM node:12-alpine
RUN apk add git
# Set to a non-root built-in user `node`
USER node
# Create app directory (with user `node`)
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
ENV NODE_ENV="<Environment name>"
ENV PORT="3334"
ENV DEBUG="front:*"
ENV SESSION_SECRET="session-secret"
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# Bundle app source code
COPY . .
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3334
EXPOSE $PORT
CMD [ "node", "." ]
Step 2: Build the image from the source.
docker build --no-cache -t <Image name> -f Dockerfile.dev . (used for development)
* -t: tags (names) the image.
* . : the build context (the source folder).
* --no-cache: do not use the build cache.
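As a concrete example, assuming the image is named evaluate-system-image (a placeholder for this walkthrough), the build command looks like this:
# Build the development image from Dockerfile.dev in the current directory
docker build --no-cache -t evaluate-system-image -f Dockerfile.dev .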
Step 3: Run a container from the image.
docker run -dp <Host port>:<Container port> --name <container name> <Image name>
* --name: sets the container name (here, evaluate-system). The name must be unique; if it is omitted, Docker generates one.
* -p: publishes the container port on the host.
* -d: runs the container in the background (detached mode).
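Continuing the example above (evaluate-system-image and the container name evaluate-system are both placeholders), the run command would be:
# Run the container in the background and publish port 3334 on the host
docker run -dp 3334:3334 --name evaluate-system evaluate-system-image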
Step 4: Go to the nginx folder in the source.
cd nginx
The nginx folder contains two files: default.conf and Dockerfile.
– File default.conf
server {
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://app:3334;
  }
}
– File Dockerfile
FROM nginx
RUN rm /etc/nginx/conf.d/*
COPY default.conf /etc/nginx/conf.d/
Build the nginx image for the source:
docker build -t es/nginx .
Run a container from the es/nginx image (the --link alias app is the hostname that default.conf proxies to):
docker run -dp 80:80 --link <container name>:app --name nginx-proxy es/nginx
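To verify that the proxy is working, the following checks (run on the EC2 instance) are a reasonable quick test:
# Confirm both containers are running
docker ps
# Check that nginx forwards requests to the application
curl http://localhost
# Inspect the proxy logs if something looks wrong
docker logs nginx-proxy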
Step 5: Show images and containers (using the commands below).
- List images.
- List containers.
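These listings reuse the commands from section IX:
# List images
docker image ls
# List all containers (running and stopped)
docker ps -a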
Step 6: Access the web application by opening http://<EC2 public IP or DNS> in a browser (nginx is published on port 80).
XII. REFERENCES
https://docs.docker.com/get-docker/
https://docs.docker.com/reference/
https://egghead.io/lessons/node-js-setup-an-nginx-proxy-for-a-node-js-app-with-docker