Amazon Elastic Kubernetes Service (EKS) provides a Kubernetes cluster without the need to install, operate, or maintain a separate Kubernetes control plane or nodes. What EKS provisions is the "control plane", historically also called the master. This control plane can be operated with kubectl, the standard command-line interface of Kubernetes, although other clients can certainly be used as well. The resources required to run applications are defined in so-called manifest files (YAML or JSON).
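As a rough sketch, a minimal manifest file could look like the following; the name `demo-pod` and the `nginx` image are illustrative, not prescribed by EKS:

```yaml
# Minimal pod manifest, applied with: kubectl apply -f pod.yaml
# (names and image are examples only)
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: web
      image: nginx:1.25   # any container image can be used here
      ports:
        - containerPort: 80
```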
EKS is intended to provide a platform for automatically deploying, scaling, and maintaining application containers on distributed nodes, and it supports a whole range of proven container tools.
Shared storage and network resources
Containers are grouped into so-called "pods" in Kubernetes and run on "nodes". Pods are the smallest deployable compute units that can be created and managed with Kubernetes. A pod can be thought of as a pea pod that, instead of peas, houses one or more containers with shared storage and network resources, along with a specification for how to run them.
The contents of a pod are always co-located and co-scheduled, and they run in a shared context. A pod thus models, in each case application-specifically, a "logical host" for one or more application containers that are relatively tightly coupled.
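Such a "logical host" can be sketched as a pod with two tightly coupled containers sharing an `emptyDir` volume; all names here are illustrative assumptions:

```yaml
# Sketch: two containers in one pod sharing storage and a network
# namespace (pod, container, and volume names are examples)
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-data
      emptyDir: {}          # shared storage, lives as long as the pod
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/index.html; sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because both containers share the pod's network namespace, they could also reach each other via `localhost`.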
Nodes are typically virtual servers that run the container runtime managed by Kubernetes. These nodes are in turn combined into a Kubernetes cluster, in which both the pods/containers and the nodes themselves can be scaled. In this way, nodes and containers can be scaled at any time so that load is distributed optimally across the cluster.
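Scaling pods is typically expressed declaratively: a Deployment states a desired replica count that Kubernetes then maintains across the cluster's nodes. A minimal sketch, again with illustrative names:

```yaml
# Sketch: scaling pods via a Deployment's replica count
# (names and image are examples)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3               # scale pods by changing this value
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

The count can also be changed imperatively, e.g. `kubectl scale deployment demo-deployment --replicas=5`; scaling the nodes themselves is handled separately, for example via the cluster's node groups.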
Increased configuration effort when using EKS
That sounds good at first and promises extensive independence from a rigid hardware foundation. But as always, the devil is in the details. For one thing, solid Kubernetes expertise is necessary to run containers on EKS. Also, not all AWS services can be integrated directly; for example, the rights management that AWS otherwise provides via its IAM service does not extend into the cluster. Here, customers have to map and configure users, roles, and permissions within the cluster themselves.
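One common mechanism for this mapping (at the time of writing) is the `aws-auth` ConfigMap in the `kube-system` namespace, which translates IAM identities into Kubernetes users and groups. A sketch, where the account ID and role/user names are placeholders:

```yaml
# Sketch: mapping IAM identities into the cluster via the aws-auth
# ConfigMap (account ID, role, and user names are placeholders)
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/dev-user
      username: dev-user
      groups:
        - system:masters
```

The groups referenced here are then bound to permissions inside the cluster through Kubernetes RBAC, which the customer has to configure as well.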
For self-healing, the controller manager compares the desired state of the cluster with the actual state and automatically corrects any differences. It works with the key-value store etcd, a consistent, highly available database that stores the desired configuration and the current cluster state. The integrated scheduler determines, in response to requests from the API server, which functioning nodes have free capacity, and a suitable node is then selected from these for the upcoming pod placement as part of load balancing. Using kubectl, containers can be started, stopped, and monitored manually, while the connections to the pods are managed and controlled by kube-proxy as a "load balancer".
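The scheduler's node selection is driven largely by the resource requests declared in the pod spec; it only places a pod on a node with at least that much unreserved capacity. A sketch with illustrative values:

```yaml
# Sketch: resource requests inform the scheduler's node selection
# (pod name and values are examples)
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"      # scheduler reserves this on the chosen node
          memory: "128Mi"
        limits:
          cpu: "500m"      # hard cap enforced at runtime
          memory: "256Mi"
```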
Compared to ECS, however, EKS requires noticeably more configuration effort.