Today even a small computing device like a smartphone has massive processing capability and can perform billions of calculations per second. Since the 1960s, the IT community has been concerned with a rising question: is this physically allocated computing power being used efficiently? To solve this problem of under-utilized computing resources, virtualization technology was introduced to the computing arena. Virtualization can be defined as the process of creating logical resource objects out of physical computer resources by breaking the initial bond between software and hardware and isolating them from the core system. With the adoption of virtualization, IT researchers were able to create everything from virtual LANs (VLANs) that operate on top of physical networks to virtual machines that run on top of physical computing machines, achieving elasticity, higher availability, mobility, scalability, security, and, most importantly, efficiency.
What is Containerization?
Cloud computing is an application of virtualization technology that creates a large virtualized pool of computing resources and lets customers provision on-demand virtual machines (VMs) in a cost-effective manner. But a virtual machine must boot its own full guest operating system before it can act as a computing device, and this OS boot process takes around a minute, or in some cases more than ten minutes. This start-up delay was a problem for cloud computing services such as PaaS and SaaS. To overcome this issue, a virtualization approach called containerization was introduced to cloud service providers.
Containers are similar to VMs, but they are lightweight, portable, less time-consuming, and less resource-consuming virtual environments for running applications, overcoming the dependency issues and delays caused by the operating-system layer. The differences between a VM and a container can be summarised as follows.
| Virtual Machine | Container |
| --- | --- |
| Needs to execute hypervisor processes and boot the guest OS before running an application | The application can run directly, without waiting for any other processes |
| Heavyweight virtualized image file | Lightweight virtualized image file |
| Each VM has its own OS | All containers share the same host OS kernel |
| Full OS-level isolation | Process-level isolation, yet secure |
| Hardware virtualization through a hypervisor | OS-level virtualization through a container engine |
| Dedicated allocated memory requirement | Comparatively lower memory requirement |
| Comparatively lower performance | Near-native performance for applications |
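This contrast can be made concrete with a minimal container image definition. The Dockerfile below is an illustrative sketch (the application name `myapp` is a placeholder, not from the original text); it shows how little a container image needs beyond the application itself, since the host kernel is shared and no guest OS has to boot:

```dockerfile
# Minimal, hypothetical image for a pre-built application binary.
# The alpine base is only a few megabytes, unlike a full guest-OS disk image.
FROM alpine:3.19

# Copy the application binary into the image (placeholder name).
COPY ./myapp /usr/local/bin/myapp

# The container starts the application directly: no OS boot, no hypervisor.
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Starting such a container with `docker run` typically takes well under a second, whereas a VM running the same application must first boot its guest operating system.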
Evolution of Containerization
The first notable virtualization attempt in IT history is Virtual Machine Facility/370 (VM/370). In the 1960s, IBM engineers implemented the first virtual machines on a mainframe computer, allowing multiple users to run different operating systems on top of the VM/370 operating system, which acted as the host OS. However, the modern virtualization concept, which uses specialized virtualization software called a Virtual Machine Monitor (VMM), now known as a hypervisor, was introduced to the world by Gerald J. Popek and Robert P. Goldberg in their 1974 article “Formal Requirements for Virtualizable Third Generation Architectures”.
In the 1980s, programming-language-based virtualization was introduced, and a prime example is Xerox PARC’s Smalltalk-80. Then in the 1990s, Sun Microsystems released its Java Virtual Machine, allowing developers on the newly born World Wide Web to execute code in a portable and secure manner. Next, a Stanford University project led to the founding of a legendary virtualization company that directed virtualization toward its next phase: cloud computing.
Cloud computing became a very innovative concept, and many new technologies developed from its roots. Containers are one such technology, and their history intersects with the story of Linux as well. The history of containerization can be outlined as follows:
- In 1979, chroot, a UNIX system call able to isolate (sandbox) the disk space visible to each process, was introduced.
- In 2000, Derrick Woolworth introduced FreeBSD Jails, a mechanism of the FreeBSD operating system that can also isolate the file system, network, and users for each computing process.
- In addition to the FreeBSD Jails capabilities, Linux-VServer, introduced in 2001, was able to dedicate CPU power and memory to process-level virtualization.
- Solaris Containers, released in 2004, added zone-based virtualization along with snapshotting and cloning features.
- In 2005, OpenVZ (Open Virtuozzo), an OS-level virtualization methodology for Linux, was introduced.
- To control and isolate CPU, disk I/O, memory, and network usage for a group of processes, Google developed Process Containers (later renamed cgroups) in 2006.
- The first container manager, Linux Containers (LXC) was introduced in 2008.
- Warden, a container management platform that allows managing containers via an API, was developed and released by CloudFoundry in 2011.
- The open-source version of Google’s container management tool, named Let Me Contain That For You (lmctfy), was released to the public in 2013.
- The blue whale of containers, Docker, came to the cloud industry in 2013 with its own libcontainer ecosystem.
- In 2016, Kubernetes, originally developed by Google, was handed over to the Cloud Native Computing Foundation (CNCF). Since then, this open-source orchestration tool has enabled cloud-friendly container orchestration on almost every cloud IaaS provider.
Benefits of Using Containers
The following can be listed as the benefits of using container-based virtualized application instances.
- The application is fully isolated from the host operating system, and no unrelated applications run simultaneously in its container environment.
- By using container orchestration tools, containers can be configured to scale their resources elastically based on demand.
- Containers are lightweight virtualized images, and their performance is better than that of a traditional VM.
- Replicating or cloning an identical application running in a container for backup and testing purposes consumes less time and fewer resources.
- DevOps tasks, such as maintaining and upgrading several applications at the same time, can be conducted from a single cloud orchestration dashboard.
- Higher availability can be provided to the application through a redundant container architecture managed by the container orchestrator.
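As a sketch of how the elastic-scaling benefit is typically configured, assuming Kubernetes as the orchestration tool, the manifest below declares an autoscaling policy. The Deployment name `web` is a placeholder introduced for illustration:

```yaml
# Hypothetical autoscaling policy for a Deployment named "web".
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # placeholder Deployment name
  minReplicas: 2           # keep at least two replicas for redundancy
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

With such a policy in place, the orchestrator adds container replicas under load and removes them when demand drops, with no manual intervention.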
What is Container Orchestration?
Container orchestration is the method of managing and controlling containers and their lifecycle through automation, providing the following features:
- Deployment and provisioning of containers in a virtualized environment
- Elastic scaling of container resources such as CPU, RAM, and disk space
- Container health monitoring
- Providing redundancy and availability to containers
- API-based container management
- Cloning and snapshotting containers
- Load balancing between containers
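Several of these features can be seen together in a single declarative manifest. The sketch below assumes Kubernetes as the orchestrator; the application name `app` and image `example/app:1.0` are placeholders. It combines provisioning (a Deployment), redundancy (three replicas), resource limits, health monitoring (a liveness probe), and load balancing (a Service):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3                      # redundancy and availability
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:1.0   # placeholder image
          resources:
            limits:
              cpu: "500m"          # resource limits the orchestrator enforces
              memory: 256Mi
          livenessProbe:           # health monitoring: restart on failure
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service                      # load balancing across the three replicas
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - port: 80
      targetPort: 8080
```

Applying this manifest once is enough: the orchestrator continuously reconciles the running state (replica count, container health, traffic distribution) against the declared state.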
Container orchestration has addressed many burning problems of virtualization, such as rapid elasticity in server scaling. However, efficient container monitoring is still a challenge in container orchestration technology and should be addressed in the near future.
Container Orchestration Tools
A container orchestration system is a cloud-based platform that automates the provisioning, maintenance, and disposal of containerized cloud servers or applications. In other words, these tools can also be called the operating systems (OS) of data centres. The researcher has compared three widely used container orchestration tools in the following table.
| Feature | Kubernetes | Docker Swarm | Amazon ECS |
| --- | --- | --- | --- |
| Developer | Google / Cloud Native Computing Foundation | Docker, Inc. | Amazon Web Services |
| Auto scaling | Supported | No auto scaling | Supported |
| API | Client API provided | Client API provided | Client API provided |
| Community | Active and fast growing | Active | Active |
| Service model | Managed and native on major clouds | Self-hosted | Managed and native to AWS |
| On-premises installation | Can be installed | Can be installed | Not at all |
Among these tools, Kubernetes is the current hero of container orchestration. Popular cloud IaaS providers such as AWS, GCP, and DigitalOcean offer a managed Kubernetes service; on GCP, it is called Google Kubernetes Engine (GKE).