Monday, 22 July 2019

Kubernetes Architecture

Kubernetes Architecture has the following main components:
  • Master nodes
  • Worker/Slave nodes
  • Distributed key-value store (etcd)
Master Node
The master node is the entry point for all administrative tasks and is responsible for managing the Kubernetes cluster. There can be more than one master node in the cluster for fault tolerance. Running more than one master node puts the system in High Availability mode, in which one of them acts as the main node on which we perform all the tasks.
For managing the cluster state, it uses etcd, to which all the master nodes connect.
Let us discuss the components of a master node. It consists of four components:
API server:
  • Performs all the administrative tasks within the master node.
  • REST commands are sent to the API server, which validates and processes the requests.
  • After processing a request, the resulting state of the cluster is stored in the distributed key-value store.
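Because the API server speaks plain REST, you can talk to it directly. A minimal sketch using kubectl proxy (the port and resource path are illustrative):

# Start a local proxy to the API server; kubectl handles authentication.
kubectl proxy --port=8001 &
# Send a raw REST request; the API server validates it and serves cluster state.
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods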
Scheduler:
  • The scheduler schedules the tasks to slave nodes. It stores the resource usage information for each slave node.
  • It schedules the work in the form of Pods and Services.
  • Before scheduling a task, the scheduler also takes into account the quality of service requirements, data locality, affinity, anti-affinity, etc.
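For example, affinity rules surface directly in the Pod specification. A minimal sketch, assuming a hypothetical Pod and a disktype=ssd node label (all names here are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo          # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # hypothetical node label
            operator: In
            values:
            - ssd
  containers:
  - name: demo
    image: nginx
EOF

The scheduler will only place this Pod on a node labelled disktype=ssd.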
Controller manager:
  • Also known as controllers.
  • It is a daemon which regulates the Kubernetes cluster by managing the different non-terminating control loops.
  • It also performs lifecycle functions such as namespace creation and lifecycle, event garbage collection, terminated-pod garbage collection, cascading-deletion garbage collection, node garbage collection, etc.
  • Basically, a controller watches the desired state of the objects it manages, and watches their current state through the API server. If the current state of the objects does not match the desired state, the control loop takes corrective steps to drive the current state back to the desired state.
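You can watch this reconciliation loop in action. A small sketch, assuming a hypothetical Deployment named hello-node:

# Declare a desired state of three replicas.
kubectl scale deployment hello-node --replicas=3
# Delete one of its Pods by hand (the Pod name is illustrative) ...
kubectl delete pod hello-node-5f76cf6ccf-br9b5
# ... and the controller immediately creates a replacement Pod,
# driving the current state back to the desired state.
kubectl get pods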
What is etcd?
  • etcd is a distributed key-value store which stores the cluster state.
  • It can be part of the Kubernetes Master, or, it can be configured externally.
  • etcd is written in the Go programming language. In Kubernetes, besides storing the cluster state, etcd (which is based on the Raft consensus algorithm) is also used to store configuration details such as subnets, ConfigMaps, Secrets, etc.
  • Raft is a consensus algorithm designed as an alternative to Paxos. The consensus problem involves multiple servers agreeing on values, a common problem that arises in the context of replicated state machines. Raft defines three roles (Leader, Follower, and Candidate) and achieves consensus via an elected leader.
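If you have direct access to an etcd member and the etcdctl client, you can peek at the keys Kubernetes stores. A sketch, assuming the default /registry key prefix (on a secured cluster you would also pass --cacert/--cert/--key):

# List the keys Kubernetes keeps in etcd, without their values.
ETCDCTL_API=3 etcdctl get /registry --prefix --keys-only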
Now that you have understood the functioning of the master node, let’s see what the Worker/Minion node is and what its components are.
Worker Node (formerly minions)
It is a physical server, or a VM, which runs the applications using Pods (the Pod is the scheduling unit) and is controlled by the master node. Pods are scheduled on the worker/slave nodes. To access the applications from the external world, we connect to worker nodes.
Let’s look at its components:
Container runtime:
  • To run and manage a container’s lifecycle, we need a container runtime on the worker node. Some examples of container runtimes are containerd, CRI-O, and rkt.
  • Sometimes, Docker is also referred to as a container runtime, but to be precise, Docker is a platform which uses containerd as its container runtime.
Kubelet:
  • It is an agent which runs on each worker node and communicates with the master node. It gets the Pod specifications through the API server, executes the containers associated with the Pod, and ensures that the containers described in those Pods are running and healthy.
  • The kubelet connects to the container runtime using Container Runtime Interface (CRI). The Container Runtime Interface consists of protocol buffers, gRPC API, and libraries.
Kube-proxy:
  • It is the network proxy which runs on each worker node and listens to the API server for each Service endpoint creation/deletion.
  • For each Service endpoint, kube-proxy sets up the routes so that traffic can reach it.
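You can inspect the endpoints that kube-proxy watches. A sketch, assuming a hypothetical Service named hello-node:

# Each Service has an Endpoints object listing the Pod IP:port pairs
# that kube-proxy programs routes for.
kubectl get endpoints hello-node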
So, that’s the Kubernetes architecture in a nutshell.

Kubernetes VS Docker Swarm

Docker Swarm - Docker Swarm is Docker’s own native clustering solution for Docker containers. It has the advantage of being tightly integrated into the Docker ecosystem and uses its own API. It monitors the number of containers spread across clusters of servers and is the most convenient way to create a clustered Docker application without additional hardware. It provides you with a small-scale but useful orchestration system for Dockerized apps.
Kubernetes - Kubernetes is an open-source system for managing containerized applications in a clustered environment. Using Kubernetes the right way helps a DevOps team automatically scale the application up and down and roll out updates with zero downtime.

Pros of using Kubernetes
  • It’s fast: When it comes to continuously deploying new features without downtime, Kubernetes is a perfect choice. The goal of Kubernetes is to update an application with constant uptime. Its speed is measured by the number of features you can ship per hour while maintaining an available service.
  • Adheres to the principles of immutable infrastructure: Traditionally, if anything goes wrong after multiple updates, you have no record of how many updates you deployed or at which point the error occurred. With immutable infrastructure, if you wish to update an application, you build a container image with a new tag and deploy it, killing the old container running the old image version. This way you have a record of what you did, and if there is an error you can easily roll back to the previous image (see the sketch after this list).
  • Provides declarative configuration: The user declares what state the system should be in, which helps avoid errors. Traditional tools such as source control and unit tests can’t be used with imperative configurations, but they can be used with declarative configurations.
  • Deploy and update software at scale: Scaling is easy due to the immutable, declarative nature of Kubernetes. Kubernetes offers several useful features for scaling:
      - Horizontal infrastructure scaling: Operations are done at the individual server level; new servers can be added or detached effortlessly.
      - Auto-scaling: Based on CPU usage or other application metrics, you can change the number of containers that are running.
      - Manual scaling: You can manually scale the number of running containers through a command or the interface.
      - Replication controller: The replication controller makes sure the cluster has a specified number of equivalent Pods running. If there are too many Pods, it removes the extra ones; if there are too few, it starts more.
  • Handles the availability of the application: Kubernetes checks the health of nodes and containers and provides self-healing and auto-replacement if a Pod crashes due to an error. Moreover, it distributes load across multiple Pods to balance resources quickly during accidental traffic spikes.
  • Storage volumes: In Kubernetes, data is shared across the containers in a Pod, but if the Pod gets killed the volume is automatically removed. Moreover, data can be stored remotely, so if the Pod is moved to another node, the data remains until it is deleted by the user.
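A command-line sketch of the workflow behind these points (the Deployment name hello-node and the image tag are illustrative):

# Declarative: apply the desired state recorded in a version-controlled file.
kubectl apply -f deployment.yaml
# Manual scaling.
kubectl scale deployment hello-node --replicas=5
# Auto-scaling based on CPU usage.
kubectl autoscale deployment hello-node --min=2 --max=10 --cpu-percent=80
# Immutable update: roll out an image with a new tag ...
kubectl set image deployment/hello-node hello-node=hello-node:v2
# ... and roll back to the previous image if anything goes wrong.
kubectl rollout undo deployment/hello-node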
Cons of using Kubernetes
  • The initial process takes time: When a new process is created, you have to wait for the app to start before it is available to users. If you are migrating to Kubernetes, you may need to modify the code base to make the start process more efficient so that users don’t have a bad experience.
  • Migrating to stateless requires much effort: If your application is clustered or stateful, extra Pods will not be configured automatically, and you will have to rework the configuration within your applications.
  • The installation process is tedious: It is difficult to set up Kubernetes on your own cluster if you are not using a cloud provider like Azure, Google, or Amazon.

Pros of using Docker Swarm
  • Runs at a faster pace: With a virtual environment, you may have noticed that booting up and starting the application you want to run takes a long time and is tedious. Docker Swarm removes the need to boot up a full virtual machine and lets the app run quickly in a virtual, software-defined environment, which also helps in DevOps implementation.
  • Documentation provides every bit of information: The Docker team stands out when it comes to documentation. Docker is evolving rapidly and the whole platform has received great applause. When versions are released at short intervals, some platforms don’t take care to maintain their documentation, but Docker Swarm never compromises on it. If information applies only to certain versions of Docker Swarm, the documentation makes sure all of it is updated.
  • Provides simple and fast configuration: One of the key benefits of Docker Swarm is that it simplifies matters. Docker Swarm enables users to take their own configuration, put it into code, and deploy it without any hassle. Because Docker Swarm can be used in various environments, requirements are not bound by the environment of the application.
  • Ensures that applications are isolated: Docker Swarm takes care that each container is isolated from the other containers and has its own resources. Various containers can be deployed for running separate applications in different stacks. Apart from this, Docker Swarm allows clean app removal, as each application runs in its own container: if the application is no longer required, you can delete its container, and it won’t leave any temporary or configuration files on your host OS.
  • Version control and component reuse: With Docker Swarm, you can track consecutive versions of a container, examine differences, or roll back to preceding versions. Containers reuse components from the preceding layers, which makes them noticeably lightweight.
Cons of using Docker Swarm
  • Docker is platform dependent: Docker Swarm is a Linux-centric platform. Although Docker supports Windows and Mac OS X, it uses virtual machines to run on non-Linux platforms. An application designed to run in a Docker container on Windows can’t run on Linux, and vice versa.
  • Doesn’t provide a storage option: Docker Swarm doesn’t provide a hassle-free way to connect containers to storage, and this is one of its major disadvantages. Its data volumes require a lot of improvising on the host and manual configuration. If you’re expecting Docker Swarm to solve your storage issues, it may get done, but not in an efficient and user-friendly way.
  • Poor monitoring: Docker Swarm provides basic information about containers, and if you are looking for a basic monitoring solution, the stats command suffices. If you are looking for advanced monitoring, Docker Swarm is not an option, although third-party tools like cAdvisor offer more monitoring capabilities. It is not feasible to collect more data about containers in real time with Docker itself.
To Avoid These Shortfalls, Kubernetes Can be Used
Automated Container Deployment, Scaling and Management Platform
When an application is developed with diverse components spread across numerous containers on several machines, there is a need for a tool to manage and orchestrate those containers. This is only feasible with the help of Kubernetes.
Docker and Kubernetes are Different; But not Rivals
Let’s see how
As discussed earlier, Kubernetes and Docker work at different levels, but they can be used together: Kubernetes can be integrated with the Docker engine to carry out the scheduling and execution of Docker containers. Docker (with Swarm) and Kubernetes are both container orchestrators, which means both help to manage large numbers of containers and both help in DevOps implementation. Both can automate most of the tasks involved in running containerized infrastructure, and both are open-source software projects governed by the Apache License 2.0. Apart from this, both use YAML-formatted files to govern how the tools orchestrate container clusters. Used together, Docker and Kubernetes are excellent tools for deploying modern cloud architecture. With the exception of Docker Swarm, Kubernetes and Docker complement each other.
Kubernetes can use Docker as its main container engine, and Docker has announced that it supports Kubernetes as the orchestration layer of its Enterprise Edition. Apart from this, Docker runs a Certified Kubernetes program, which makes sure that all Kubernetes APIs function as expected. Kubernetes can use features of Docker Enterprise such as secure image management, in which Docker EE provides image scanning to check whether there is an issue in the image used in a container, and secure automation, in which organizations can remove inefficiencies such as manually scanning images for vulnerabilities.
Kubernetes or Docker: Which Can be a Perfect Choice?
Use Kubernetes if,
  • You are looking for mature deployment and monitoring options
  • You are looking for fast and reliable response times
  • You are looking to develop a complex application that requires high-resource computing without restrictions
  • You have a pretty big cluster
Use Docker if,
  • You are looking to get started with the tool without spending much time on configuration and installation
  • You are looking to develop a basic and standard application for which the default Docker image is sufficient
  • Testing and running the same application on different operating systems is not an issue for you
  • You want the Docker API experience and compatibility
Final Thoughts: Kubernetes and Docker are friends
Whether you choose Kubernetes or Docker, both are considered among the best and have considerable differences. The best way to decide between the two is probably to consider which one you already know better or which one fits your existing software stack. If you need to develop a complex app, use Kubernetes; if you are looking to develop a small-scale app, use Docker Swarm. Moreover, choosing the right one is a very comprehensive task and depends solely on your project requirements and target audience.

Sunday, 21 July 2019

How to use Kubernetes Minikube

This tutorial shows you how to run a simple Hello World Node.js app on Kubernetes using Minikube and Katacoda. Katacoda provides a free, in-browser Kubernetes environment.

Objective of this Post

  1. Deploy a hello world application to Minikube.
  2. Run the app.
  3. View application logs.

Create an Application and Image

  • Create a directory called Minikube
  • Inside the directory, create two files: server.js and Dockerfile
  • Content of server.js 
var http = require('http');

// Handle every request by logging its URL and replying "Hello World!".
var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};
// Create the server and listen on port 8080, the port the Dockerfile exposes.
var www = http.createServer(handleRequest);
www.listen(8080);
  • Content of Dockerfile
FROM node:6.14.2
EXPOSE 8080
COPY server.js .
CMD node server.js
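If you want to build this image yourself inside Minikube’s Docker daemon rather than pull a prebuilt one (the tutorial below uses a prebuilt image), a sketch with an illustrative tag:

# Point the local Docker client at Minikube's Docker daemon ...
eval $(minikube docker-env)
# ... and build the image there, so the cluster can run it without a registry.
docker build -t hello-node:v1 .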

Start Minikube Cluster

  1. Click Launch Terminal
    Note: If you installed Minikube locally, run minikube start.
  2. Open the Kubernetes dashboard in a browser:
    minikube dashboard
  3. Katacoda environment only: At the top of the terminal pane, click the plus sign, and then click Select port to view on Host 1.
  4. Katacoda environment only: Type 30000, and then click Display Port.

Create Deployment

A Kubernetes Pod is a group of one or more containers, tied together for the purposes of administration and networking. The Pod in this tutorial has only one container. A Kubernetes Deployment checks on the health of your Pod and restarts the Pod’s container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods.
  1. Use the kubectl create command to create a Deployment that manages a Pod. The Pod runs a Container based on the provided Docker image.
    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
  2. View the Deployment:
    kubectl get deployments
    Output:
    NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    hello-node   1         1         1            1           1m
  3. View the Pod:
    kubectl get pods
    Output:
    NAME                          READY     STATUS    RESTARTS   AGE
    hello-node-5f76cf6ccf-br9b5   1/1       Running   0          1m
  4. View cluster events:
    kubectl get events
  5. View the kubectl configuration:
    kubectl config view
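For reference, the imperative kubectl create deployment command in step 1 is roughly equivalent to applying this declarative manifest (a sketch, fed to kubectl through a here-document):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/hello-minikube-zero-install/hello-node
        ports:
        - containerPort: 8080
EOF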

Create Service

By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes Service.
  1. Expose the Pod to the public internet using the kubectl expose command:
    kubectl expose deployment hello-node --type=LoadBalancer --port=8080
    The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.
  2. View the Service you just created:
    kubectl get services
    Output:
    NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    hello-node   LoadBalancer   10.108.144.78   <pending>     8080:30369/TCP   21s
    kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          23m
    On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
  3. Run the following command:
    minikube service hello-node
  4. Katacoda environment only: Click the plus sign, and then click Select port to view on Host 1.
  5. Katacoda environment only: Type 30369 (see the port opposite 8080 in the services output), and then click Display Port.
    This opens up a browser window that serves your app and shows the “Hello World” message.
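Alternatively, you can fetch the Service URL and test the app from the terminal (a sketch):

# --url prints the URL instead of opening a browser.
curl $(minikube service hello-node --url)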

Enable Addons

Minikube has a set of built-in addons that can be enabled, disabled and opened in the local Kubernetes environment.
  1. List the currently supported addons:
    minikube addons list
    Output:
    addon-manager: enabled
    coredns: disabled
    dashboard: enabled
    default-storageclass: enabled
    efk: disabled
    freshpod: disabled
    heapster: disabled
    ingress: disabled
    kube-dns: enabled
    metrics-server: disabled
    nvidia-driver-installer: disabled
    nvidia-gpu-device-plugin: disabled
    registry: disabled
    registry-creds: disabled
    storage-provisioner: enabled
  2. Enable an addon, for example, heapster:
    minikube addons enable heapster
    Output:
    heapster was successfully enabled
  3. View the Pod and Service you just created:
    kubectl get pod,svc -n kube-system
    Output:
    NAME                                        READY     STATUS    RESTARTS   AGE
    pod/heapster-9jttx                          1/1       Running   0          26s
    pod/influxdb-grafana-b29w8                  2/2       Running   0          26s
    pod/kube-addon-manager-minikube             1/1       Running   0          34m
    pod/kube-dns-6dcb57bcc8-gv7mw               3/3       Running   0          34m
    pod/kubernetes-dashboard-5498ccf677-cgspw   1/1       Running   0          34m
    pod/storage-provisioner                     1/1       Running   0          34m
    
    NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/heapster               ClusterIP   10.96.241.45    <none>        80/TCP              26s
    service/kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP       34m
    service/kubernetes-dashboard   NodePort    10.109.29.1     <none>        80:30000/TCP        34m
    service/monitoring-grafana     NodePort    10.99.24.54     <none>        80:30002/TCP        26s
    service/monitoring-influxdb    ClusterIP   10.111.169.94   <none>        8083/TCP,8086/TCP   26s
  4. Disable heapster:
    minikube addons disable heapster
    Output:
    heapster was successfully disabled

Clean Up

Now you can clean up the resources you created in your cluster:
kubectl delete service hello-node
kubectl delete deployment hello-node
Optionally, stop the Minikube virtual machine (VM):
minikube stop
Optionally, delete the Minikube VM:
minikube delete

Source: Kubernetes
