
Modern Application Operation with Containers

Introduction to Container Technology

27.04.2021
Cloud
Technical

On the way to the cloud, container technology is ushering in a new era. The way applications are developed, delivered, deployed, and operated is fundamentally changing as containers bring the concept of apps to the server.


The principle is simple: each container contains an application together with all the resources it needs at runtime.


Containers play to their strengths particularly well in cluster environments and data centers, because each individual application is packaged together with all its dependencies (libraries, utilities, and static data) in an image file that does not include its own operating system. This is why containers are often described as "lightweight virtualization".
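
To illustrate this principle, here is a minimal sketch using the Docker SDK for Python; it assumes a local Docker daemon, and the nginx image is just an example. Everything the application needs at runtime ships inside the image itself:

```python
# Minimal sketch: run a self-contained application image with the Docker SDK
# for Python. Assumes a local Docker daemon; "nginx:1.25" is an example image.
import docker

client = docker.from_env()

# The image already bundles the application and all its dependencies;
# nothing else needs to be installed on the host.
client.images.pull("nginx:1.25")

# Start a container from that image and map its port to the host.
container = client.containers.run("nginx:1.25", detach=True, ports={"80/tcp": 8080})
print(container.short_id, container.status)
```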

Lightweight Virtualization

This "lightweight virtualization" facilitates horizontal scalability of workloads. What is meant by this is the ability to increase the computing and/or storage capacity provided by networking multiple hardware or software components so that they operate as a single logical unit.


Horizontal scaling is also called "scaling out". If a cluster needs more resources to improve performance or maintain high availability, it is relatively easy to add more servers to scale that cluster horizontally.


In contrast, vertical scaling increases capacity by adding resources to a single server, for example more memory or an additional CPU to increase that server's performance.


Vertical scaling is therefore also referred to as "scaling up" and normally requires downtime during which the new resources are installed. Compared to horizontal scaling, the limits imposed by the hardware are also much tighter.


Another major advantage of horizontal scaling is that capacity can be increased during operation. At least in theory, horizontal scalability is limited only by how many servers can be integrated into the cluster.
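
In container platforms such as Kubernetes, horizontal scaling of workloads is typically automated. The following sketch uses the official Kubernetes Python client to define a HorizontalPodAutoscaler; it assumes a reachable cluster and an existing Deployment called "web", which is a hypothetical example:

```python
# Illustrative sketch: automate horizontal scaling of a workload with a
# HorizontalPodAutoscaler. Assumes a reachable cluster and a Deployment "web".
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,                        # keep at least two replicas
        max_replicas=10,                       # scale out to at most ten replicas
        target_cpu_utilization_percentage=70,  # add replicas above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```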

Comparison: Containers vs. Virtual Machines

What is new about this way of deploying applications is that containers are virtualized at the operating system level rather than at the hardware level, as classic virtual machines (VMs) are. Containers are therefore isolated from each other as well as from the host.


They have their own file systems, cannot "see" the processes of other containers, and their resource consumption can be strictly limited. This also means that containerization is particularly well suited to new applications that are designed for this type of infrastructure from the outset.
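
As an illustration of such resource limits, here is a short sketch with the Docker SDK for Python; the image and the limit values are arbitrary examples:

```python
# Illustrative only: start a container with hard resource limits via the
# Docker SDK for Python. The image and limit values are example choices.
import docker

client = docker.from_env()

container = client.containers.run(
    "python:3.12-slim",
    command=["python", "-c", "print('isolated workload')"],
    detach=True,
    mem_limit="256m",        # hard cap on memory
    nano_cpus=500_000_000,   # 0.5 CPU (nano_cpus is in units of 1e-9 CPUs)
    pids_limit=100,          # cap on the number of processes in the container
)
print(container.short_id)
```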


For modernizing legacy applications, on the other hand, which are typically monolithic, scale-up architectures are usually the better fit. In the cloud, these can then be integrated relatively easily with new container-based applications.

The good thing is: containers are much easier to create than VMs

Because they are decoupled from the underlying infrastructure and from the host's file system, containers, unlike VMs, can also be ported across cloud and operating system boundaries.


With both containers and VMs, however, using resources economically can be difficult, for example when a large number of very small containers has to be managed, each of which generates only a small load.


Even though container virtualization handles the virtualized resources RAM, CPU, and mass storage very efficiently, there is inevitably some overhead, which can quickly lead to many small containers reserving resources without actually using them.


Each VM, on the other hand, carries the operating system and its packages as overhead, which can easily lead to overcommitment of the infrastructure. This can also happen with containers because of the RAM and CPU resources they reserve, but the overhead is usually lower. Vertical autoscaling can be used to dynamically assign minimum and maximum values for these resources. Administrators then no longer have to work out which CPU and memory requests to specify for a container: the autoscaler can recommend values for requests and limits, or update them automatically.
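
As a sketch of how such vertical autoscaling can be declared in Kubernetes, the following example submits a VerticalPodAutoscaler object with the Kubernetes Python client. It assumes the VPA custom resource (autoscaling.k8s.io) is installed in the cluster; the Deployment name "web" is again hypothetical:

```python
# Sketch: declare vertical autoscaling for a workload as a custom object.
# Assumes the VerticalPodAutoscaler CRD is installed and a Deployment "web".
from kubernetes import client, config

config.load_kube_config()

vpa = {
    "apiVersion": "autoscaling.k8s.io/v1",
    "kind": "VerticalPodAutoscaler",
    "metadata": {"name": "web-vpa"},
    "spec": {
        "targetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "web"},
        # "Auto" lets the autoscaler update requests itself;
        # "Off" only records recommendations for administrators to review.
        "updatePolicy": {"updateMode": "Auto"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="autoscaling.k8s.io",
    version="v1",
    namespace="default",
    plural="verticalpodautoscalers",
    body=vpa,
)
```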

Challenges in Container Operation

Even these simple examples make it clear that consistent management is essential for the smooth and high-performance operation of containers in the cloud, and that this requires specialized knowledge.


IT departments usually struggle with integrating the new platform into the existing IT landscape and often have to resort to costly commercial Kubernetes platforms, because in addition to the initial setup and the necessary integration, broad knowledge of update mechanisms, patching, and day-to-day operation is also required.


Complexity increases especially when there is little experience with hyperscaler platforms such as Amazon Web Services, and when the IT team also has to deal with basic cloud scaling, billing, and governance issues.


Container operation as a managed service offers a way out.

Cloud Transformation with Arvato Systems

Learn more about Arvato Systems' cloud transformation services here.

Multi- and Hybrid Cloud Services

Learn more about Arvato Systems' services in the area of Multi- and Hybrid Cloud.

Written by

Philipp Hellmich
Cloud Expert