A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
Faster time to market: Staying competitive requires delivering new software and services quickly. Containerization gives organizations the agility to accelerate the rollout of new services.
Deployment velocity: Containerization allows a quicker move from development to production. It lets DevOps teams reduce deployment times and increase deployment frequency by breaking down barriers between teams.
Reduction of IT infrastructure: Containerization increases workload density, improves the utilization of your servers' compute capacity, and cuts software licensing costs.
Performance in IT operations: Containerization allows teams to streamline and automate the management of multiple applications and resources under a single operating model, improving operational performance.
Obtain greater freedom of choice: Any public or private cloud can be used to package, ship, and run applications.
Container orchestration is the automation of much of the operational effort required to run containerized workloads and services. This includes a wide range of things software teams need to manage a container’s lifecycle, including provisioning, deployment, scaling (up and down), networking, load balancing and more.
Improved Resilience: Container orchestration software can improve stability by automatically restarting or scaling a container or cluster.
Simplification of Operations: The most significant advantage of container orchestration, and the primary reason for its popularity, is simplified operations. Containers add a lot of complexity, which can easily spiral out of control unless orchestration is used to keep track of it.
Enhanced Security: Container orchestration’s automated approach contributes to the protection of containerized applications by reducing or removing the risk of human error.
We start by initializing our Node.js project with `npm init`. The `package.json` generated is as follows:
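A representative `package.json` would look like the one below; the `name` and `description` values are assumptions, since they depend on the answers given to the `npm init` prompts:

```json
{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "A simple Node.js app",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```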
Next, we create `index.js` as follows:
Running `node index.js`, we can now visit `http://127.0.0.1:3000/` to see our app in action. We then add a start script to `package.json` so that our application can be readily executed by Docker. Therefore, we'd modify our `package.json` as follows:
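A sketch of the modified file, with a `start` script so that `npm start` runs `node index.js` (the other fields as assumed earlier):

```json
{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "A simple Node.js app",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "ISC"
}
```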
Next, we containerize `index.js` by writing a `Dockerfile`. A `Dockerfile` is simply a set of instructions required for building the container image of the application. Our `Dockerfile` looks something like this:
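A representative `Dockerfile` consistent with the instruction-by-instruction explanation that follows — `node:12-slim` base image, `/app` working directory, and `package.json` copied before the rest of the source:

```dockerfile
# Use the Node.js 12 slim image as the Base Image
FROM node:12-slim

# Set the working directory for all subsequent instructions
WORKDIR /app

# Copy package.json first so the dependency-install layer is cached
COPY package.json /app

# Install the app dependencies listed in package.json
RUN npm install

# Now copy the rest of the application files
COPY . /app

# Default command: run the app via the start script
CMD ["npm", "start"]
```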
The `FROM` instruction initializes a new build stage and sets the Base Image for subsequent instructions. A Base Image is simply a Docker image that has no parent image; such an image is created using the `FROM scratch` directive. As such, a valid `Dockerfile` must start with a `FROM` instruction. Here we are using the `node:12-slim` Base Image.

The `WORKDIR` command is used to set the working directory for all the subsequent `Dockerfile` instructions. If the directory is not manually created, it gets created automatically during the processing of the instructions, and it does not create new intermediate image layers. Here we set our working directory to `/app`.
The `COPY` instruction copies new files or directories from `<src>` and adds them to the filesystem of the container at the path `<dest>`. Here we copy the `package.json` file to the `/app` directory. Interestingly, we don't copy the rest of the files into `/app` just yet. Can you guess why? It's because we'd like Docker to cache the first three instructions, so that every time we run `docker build`, those steps don't need to be re-executed, which improves our build speed.

The `RUN` command can be used in two ways: through a `Dockerfile`, as shown here, or through the Docker CLI. The `RUN` instruction executes any commands in a new layer on top of the current image and commits the results. The resulting committed image is used for the next step in the `Dockerfile`. To install the Node.js app dependencies from the `package.json` file, we use the `RUN` command here.
Next, we copy the rest of the files into the `/app` directory.

Finally, we use the `CMD` command to execute the Node.js application. The main purpose of a `CMD` is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case we must specify an `ENTRYPOINT` instruction as well. There can only be one `CMD` instruction in a `Dockerfile`; if we list more than one `CMD`, only the last one will take effect.

That concludes our `Dockerfile`.
. Now we’re all set to build our container image, which can be done using the command docker build -t hello-world:1.0 .
-t
flag, as it’d be very much beneficial for us to deploy and manage the container image later on. Here we have specified the name of our container image as hello-world
and tagged it with its version 1.0
. We had seen a similar practice with our Base Image node:12-slim
as well. Lastly, we specified the directory from where the Dockerfile
is located using the .
path.Dockerfile
in a sequential order to complete the build process. Here’s the build output:docker images
We can now run our container with the command `docker run -it -p 3000:3000 hello-world:1.0`

The `-it` flag runs the container interactively, attached to the terminal. Further, we specify the port mapping of our container using the `-p` flag, where we direct that port `3000` of the host machine is mapped to port `3000` of the container. Finally, we specify the name and tag of the image to be run. Visiting `http://127.0.0.1:3000/`, we see our containerized app in action.
To run the container in detached mode instead, i.e. in the background, we use the `-d` flag in place of `-it` in the previous command: `docker run -d -p 3000:3000 hello-world:1.0`
We can verify that our container is running using `docker container ls`.

Now let's deploy the app on Minikube. First, list the images on the host machine using the `docker images` command. Then SSH into the Minikube VM using `minikube ssh` and run the `docker images` command again; this time, the hello-world image and the node image are nowhere to be found. Return to the host machine using the `exit` command.

Next, run `eval $(minikube docker-env)`. This points the Docker CLI at Minikube's Docker daemon for the subsequent commands. You can confirm that Minikube's Docker is now being used by running the `docker images` command once more; this time you'd find the images from Minikube's Docker. Finally, rebuild our image with `docker build -t hello-world:1.0 .` so that it becomes available in Minikube's local registry.
There are two ways to create a Deployment for our app: by writing a manifest file, or by using the `kubectl create deployment` command directly. The first approach is better, since it gives us much more flexibility to specify our exact Pod configuration. Here's our `deployment.yml` manifest:
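A representative `deployment.yml` consistent with the description that follows — one replica, `imagePullPolicy: Never`, container port 3000, and the Deployment name `hello-world` used by the scale command later; the `app: hello-world` labels are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hello-world:1.0
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
```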
In the manifest, `replicas` has been set to 1 for the time being. This means that there will always be exactly one Pod for our Deployment under the present configuration. The `imagePullPolicy` is set to `Never`, as the image is expected to be fetched from Minikube Docker's local registry. Finally, under `ports`, the `containerPort` has been set to 3000 because our Node.js application listens on that port.

We apply the manifest by running `kubectl create -f deployment.yml`
, assuming that the terminal is open in the directory where the `deployment.yml` file is present.

We can verify our Deployment using `kubectl get deployments`, and list the Pods it created using `kubectl get pods`.

To scale the Deployment, we run `kubectl scale --replicas=3 deployment hello-world`. This command will create 2 more Pods, which will be exact replicas of the Pod we have already created; running `kubectl get pods` again confirms the three Pods.
Next, we expose our Deployment through a Service. Here's our `service.yml` manifest:
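A representative `service.yml` — the Service name `hello-world-svc` matches the port-forward command used later, with `port` and `targetPort` both set to 3000; the selector labels are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  selector:
    app: hello-world
  ports:
    - port: 3000
      targetPort: 3000
```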
Under `ports`, we have defined the `port` and `targetPort` in relation to the Service itself, i.e. `port` refers to the port exposed by the Service, while `targetPort` refers to the port used by our Deployment, i.e. the Node.js application.

We apply the manifest by running `kubectl create -f service.yml`, assuming that the terminal is open in the directory where the `service.yml` file is present. We can then verify the Service using `kubectl get services`.
Finally, we forward the Service port to the host using `kubectl port-forward svc/hello-world-svc 3000:3000` and visit `http://127.0.0.1:3000` in the browser to see our deployed application in action.