Amazon EKS Concepts

Internal

Overview

EKS Cluster

Control Plane

EKS Worker Node

EKS Worker Node IAM Role

Amazon EKS-optimized AMI

Worker Node Group

When the EKS cluster is created, the operator has the option to create one or more node groups. Node groups allow autoscaling. An EKS cluster can use more than one node group.
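
A node group can also be created from the command line with eksctl; a minimal sketch, assuming a hypothetical cluster named "playground" and arbitrary node counts:

# create a managed node group in an existing cluster (names and sizes are placeholders)
eksctl create nodegroup \
  --cluster playground \
  --name playground-node-group \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 4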

Node Group Name

Managed Node Group

https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html

Self-Managed Node Group

Contains self-managed worker nodes. The node group name can be used later to identify the Auto Scaling group that is created for these worker nodes.
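
As a sketch, that Auto Scaling group could be located from the CLI by filtering on the node group name (the name below is a placeholder):

# list Auto Scaling groups whose name contains the node group name
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[?contains(AutoScalingGroupName, 'playground-node-group')].AutoScalingGroupName"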

Node Group Operations

Cluster Service Role

The cluster service role allows the Kubernetes control plane to manage AWS resources. The cluster service role is different from the role needed to manage compute nodes, which can be created independently. It has the "AmazonEKSClusterPolicy" managed policy attached. The cluster service role is needed when creating the EKS cluster.

Creation procedure:

Create the Cluster Service Role
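
A minimal CLI sketch of this procedure, assuming a hypothetical role name and a trust policy document that allows eks.amazonaws.com to assume the role:

# create the role; the role name and trust policy file name are placeholders
aws iam create-role \
  --role-name playground-eks-cluster-role \
  --assume-role-policy-document file://eks-cluster-trust-policy.json

# attach the AWS-managed policy mentioned above
aws iam attach-role-policy \
  --role-name playground-eks-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy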

Cluster Compute Role

https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html

The compute role is the IAM role under which the compute nodes operate. It can be determined from the AWS console by going to EC2, then to a specific node, then looking up "IAM Role" for that node. If node groups are used, the IAM role is also associated with the node group.
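
For a managed node group, the role can also be read from the node group with the CLI; a sketch, assuming hypothetical cluster and node group names:

# print the ARN of the IAM role used by the node group's worker nodes
aws eks describe-nodegroup \
  --cluster-name playground \
  --nodegroup-name playground-node-group \
  --query nodegroup.nodeRole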

Cluster Endpoint

AWS Infrastructure Requirements

TODO: Topology diagram

Cluster VPC

Subnets

Security Groups

A dedicated security group for each cluster control plane is recommended.

EKS Platform Versions and Kubernetes Versions

Amazon EKS platform version.

Integration with ECR

Logging

Control Plane Logging

SLA

https://aws.amazon.com/eks/sla/

aws-iam-authenticator

Page 17.

aws-iam-authenticator Operations

aws-iam-authenticator

.kube/config Configuration

AWS documentation refers to the Kubernetes configuration file as "kubeconfig".

.kube/config
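
The kubeconfig entry for an EKS cluster can be generated with the AWS CLI; a sketch, assuming a hypothetical cluster name and region:

# adds or updates a context for the cluster in ~/.kube/config
aws eks update-kubeconfig --region us-east-1 --name playground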

EKS Security

API Server User Management and Access Control

When an EKS cluster is created, the IAM entity (user or role) that creates the cluster is automatically granted "system:masters" permissions in the cluster's RBAC configuration. Where? Additional IAM users and roles can be added after cluster creation by editing the aws-auth ConfigMap. For more details on how kubectl picks up the caller identity, see Connect to an EKS Cluster with kubectl.

aws-auth ConfigMap

The "aws-auth" ConfigMap is initially created to allow the nodes to join the cluster. It is created only after the cluster is configured with nodes.

kubectl -n kube-system get cm aws-auth -o yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::999999999999:role/playground-eks-compute-role
      username: system:node:{{EC2PrivateDNSName}}

However, the same ConfigMap can be used to add RBAC access to IAM users and roles, as described below:
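
As a sketch, an IAM user could be mapped to a Kubernetes group by adding a "mapUsers" entry to the same ConfigMap (the account ID, user name and group below are placeholders):

data:
  mapUsers: |
    - userarn: arn:aws:iam::999999999999:user/some.user
      username: some.user
      groups:
        # maps the IAM user to the cluster administrators group
        - system:masters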

IAM Role

See Cluster Service Role.

EKS IAM Permissions

These are technically "actions", but they are commonly referred to as "permissions", which implies that the action is part of a formal permission construct associated with the entity requiring it.

  • eks:DescribeCluster
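
A minimal IAM policy statement granting such a permission might look like the following sketch (not tied to any specific user or role):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "*"
    }
  ]
}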

Pod Security Policy

https://docs.aws.amazon.com/eks/latest/userguide/pod-security-policy.html

Also see:

Pod Security Policy Concepts

By default, the PodSecurityPolicy admission controller is enabled, but a fully permissive security policy with no restrictions, named "eks.privileged", is applied. The permission to "use" "eks.privileged" is granted by the "eks:podsecuritypolicy:privileged" ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks:podsecuritypolicy:privileged
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use

The "eks:podsecuritypolicy:privileged" ClusterRole is bound by the "eks:podsecuritypolicy:authenticated" ClusterRoleBinding to all members of the "system:authenticated" Group, which results in the fact that any authenticated identity can use it.

Bearer Tokens

Webhook Token Authentication

EKS natively supports bearer tokens via webhook token authentication. For more details see:

EKS Webhook Token Authentication

Autoscaling

Autoscaling can be implemented in EKS using either node groups or EC2 auto scaling groups. Node groups can be created and associated with the EKS cluster via the AWS console when the cluster is created.
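
For a managed node group, the autoscaling limits can be adjusted after creation; a sketch, assuming hypothetical cluster and node group names and arbitrary sizes:

# update the minimum, maximum and desired node counts of the node group
aws eks update-nodegroup-config \
  --cluster-name playground \
  --nodegroup-name playground-node-group \
  --scaling-config minSize=1,maxSize=4,desiredSize=2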

What is the difference and why would we prefer one over another?

Cluster Autoscaler

Kubernetes Cluster Autoscaler

Horizontal Pod Autoscaler

Kubernetes Horizontal Pod Autoscaling

Vertical Pod Autoscaler

Vertical Pod Autoscaling

Load Balancing and Ingress

https://docs.aws.amazon.com/eks/latest/userguide/load-balancing-and-ingress.html

Using an Ingress

Kubernetes Ingress Concepts

Using a NLB

TODO: https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support

TODO: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws

metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
    service.beta.kubernetes.io/aws-load-balancer-security-groups: 'sg-00000000000000000'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: 'arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx'

Also see:

Kubernetes Service Concepts

Idiosyncrasies

It seems that deploying a LoadBalancer service updates the Kubernetes nodes' security group. Rules are added, and if too many rules are added, the LoadBalancer deployment fails with:

Warning  CreatingLoadBalancerFailed  17m service-controller  Error creating load balancer (will retry): failed to ensure load balancer for service xx/xxxxx: error authorizing security group ingress: "RulesPerSecurityGroupLimitExceeded: The maximum number of rules per security group has been reached.\n\tstatus code: 400, request id: a4fc17a6-8803-4fb4-ac78-ee0db99030e0"

The solution was to locate a node in the EC2 console → Security → Security groups, pick the security group that has "kubernetes.io/rule/nlb/" rules, and delete the inbound "kubernetes.io/rule/nlb/" rules.
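
The same cleanup can be sketched with the CLI; the security group ID, protocol, port and CIDR below are placeholders, and the actual values come from the offending "kubernetes.io/rule/nlb/" rules:

# list the inbound rules of the node security group
aws ec2 describe-security-groups \
  --group-ids sg-00000000000000000 \
  --query 'SecurityGroups[].IpPermissions'

# revoke one of the offending inbound rules
aws ec2 revoke-security-group-ingress \
  --group-id sg-00000000000000000 \
  --protocol tcp --port 30080 --cidr 192.0.2.0/24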

Storage

Amazon EFS CSI

Amazon EFS CSI

eksctl

eksctl