What is Kubernetes? 

Kubernetes is an open-source platform for managing containerized workloads and services. Its main characteristics are portability, extensibility, declarative configuration, and automation. It has a large, rapidly growing ecosystem.

The name Kubernetes comes from Greek and means helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes builds on fifteen years of Google's experience running large-scale workloads, combined with best-of-breed ideas and practices from the community.

 

Looking back at the root causes 

Let’s go back in time and see what made Kubernetes so useful. 

The era of traditional deployment: In the beginning, organizations ran applications on physical servers. Because there was no way to define resource boundaries for applications on a physical server, resource allocation became a problem. For example, if many applications ran on one physical server, one of them could consume most of the resources and starve the others, degrading their performance. A possible solution is to run each application on its own physical server, but this doesn't scale: resources are underutilized, and maintaining many physical servers is expensive.

The era of virtualized deployment: Virtualization was introduced as a solution. It lets you run multiple virtual machines (VMs) on a single physical server's CPU. Virtualization isolates applications from one another inside VMs and provides a level of security, since the data of an application in one VM is not accessible to an application in another VM.

Virtualization improves resource utilization on the physical server and improves scalability, because applications can be added or updated easily; it also reduces hardware costs, among other benefits. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines.

Each VM is a complete machine with all the components, including its own operating system, running on top of the virtualized hardware. 

The era of container deployment: Containers are similar to VMs, but they have relaxed isolation properties and share the host operating system among applications. That's why containers are considered "lightweight" compared to VMs. Like a VM, a container has its own filesystem, share of CPU, memory, process space, and so on. Because containers are decoupled from the underlying infrastructure, they can easily be moved between cloud providers and operating system distributions.

Containers have become popular because they provide additional benefits, such as: 

  • Agile application creation and deployment: container images are simpler and more efficient to build than virtual machine images.
  • Continuous development, integration, and deployment: reliable and frequent container image builds, fast deployment, and easy rollbacks (thanks to image immutability).
  • Separation of concerns between development and operations teams: application container images are created at build/release time rather than at deployment time, decoupling applications from the infrastructure.
  • Observability: not only OS-level information and metrics, but also application health and other signals.
  • Environmental consistency across development, testing, and production: the application runs the same on a laptop as it does at a cloud provider.
  • Cloud and OS distribution portability: runs on Ubuntu, RHEL, CoreOS, in your own data center, on Google Kubernetes Engine, and anywhere else.
  • Application-centric management: raises the level of abstraction from running an operating system on virtual hardware to running an application on an operating system using logical resources.
  • Loosely coupled, distributed, elastic, liberated microservices: applications are broken into smaller, independent parts that can be deployed and managed dynamically, as opposed to a monolith running on a single large dedicated machine.
  • Resource isolation: predictable application performance.
  • Resource utilization: high efficiency and density.

Why you need Kubernetes and what it can do 

Containers are a great way to package and run your applications. In a production environment, you need to manage the containers that run your applications and make sure there is no downtime. For example, if one container goes down, another needs to start in its place. Wouldn't it be easier if the system handled this itself?

That's where Kubernetes comes in! Kubernetes gives you a framework for running distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes makes it easy to run a canary deployment, as sketched below.
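
As a rough illustration of the canary pattern (all names, labels, and image tags here are hypothetical): a small canary Deployment runs the new image alongside the existing stable Deployment, and because both sets of pods carry the same app label, a Service selecting that label (like the one sketched in the next section) sends a small share of traffic to the canary.

# Hypothetical canary Deployment: it runs next to a stable Deployment that is
# identical except for a higher replica count and the previous image tag.
# Both pod templates carry the label app: web, so a Service selecting
# app: web spreads traffic across stable and canary pods alike.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1                    # small slice of the total pod count
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web
        track: canary
    spec:
      containers:
        - name: web
          image: example.com/web:2.0.0   # new version under test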

Kubernetes provides you with: 

Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load-balance and distribute the network traffic so that the quality of service stays stable.
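
A minimal sketch of a Service (name, label, and ports are illustrative): it gives a set of pods a stable DNS name and virtual IP, and spreads traffic across all pods matching the selector.

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # every pod with this label becomes an endpoint
  ports:
    - port: 80               # port clients connect to
      targetPort: 8080       # port the container listens on
# Inside the cluster this Service is reachable at the DNS name
# web.<namespace>.svc.cluster.local, and requests are balanced
# across its healthy endpoints.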

Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage or a public cloud provider.
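
A minimal sketch of storage orchestration (names, size, and image are illustrative): a PersistentVolumeClaim requests storage, and a Pod mounts the claimed volume; the actual backing storage (local disk, cloud disk, and so on) is chosen by the cluster's storage classes.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # ask the cluster for 1 GiB of storage
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
    - name: app
      image: example.com/app:1.0.0
      volumeMounts:
        - name: data
          mountPath: /var/lib/app      # where the volume appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim          # bind the claim defined above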

Automated rollouts and rollbacks: with Kubernetes, you describe the desired state of your deployed containers, and it continuously works to keep the actual state matching it. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and transfer their resources to the newly created containers.
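
A minimal sketch of such a desired-state description (name and image are illustrative): applying this manifest, e.g. with kubectl apply -f deployment.yaml, asks Kubernetes to keep three replicas of the image running; kubectl rollout undo deployment/web would roll back to the previous revision.

# Declarative desired state: three replicas of the given image. Kubernetes
# continuously reconciles the actual state of the cluster toward this spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate        # replace old pods gradually with new ones
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0.0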

Automatic bin packing: you provide Kubernetes with a cluster of nodes for running containerized tasks and tell it how much CPU and memory (RAM) each container needs. Kubernetes fits the containers onto the nodes to make the best use of your resources.
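
A minimal sketch of per-container resource requests and limits (values and image are illustrative): the scheduler uses the requests to place the pod on a node with enough free CPU and memory.

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: app
      image: example.com/app:1.0.0
      resources:
        requests:              # what the scheduler reserves for this container
          cpu: 250m            # a quarter of a CPU core
          memory: 128Mi
        limits:                # hard caps enforced at runtime
          cpu: 500m
          memory: 256Mi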

Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health checks, and doesn't advertise them to clients until they are ready to serve.
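
A minimal sketch of user-defined health checks (paths, port, and image are illustrative): a failing liveness probe gets the container restarted, and a failing readiness probe removes the pod from Service endpoints so clients are not sent to it until it is ready.

apiVersion: v1
kind: Pod
metadata:
  name: health-checked-app
spec:
  containers:
    - name: app
      image: example.com/app:1.0.0
      livenessProbe:           # restart the container if this check fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:          # keep the pod out of load balancing until ready
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5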

Secret and configuration management: Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images and without exposing secrets in your stack configuration.
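
A minimal sketch of consuming a Secret without rebuilding the image (names and the value are illustrative only): the Secret is created separately and injected into the container as an environment variable.

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                    # plain text here; stored base64-encoded by the API
  password: example-only
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret
spec:
  containers:
    - name: app
      image: example.com/app:1.0.0
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials   # Secret defined above
              key: password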
