Amazon EKS Concepts
Internal
Overview
EKS Cluster
Control Plane
EKS Worker Node
EKS Worker Node IAM Role
Amazon EKS-optimized AMI
Worker Node Group
Node groups allow autoscaling.
Node Group Name
Self-Managed Node Group
Contains self-managed worker nodes. The node group name can be used later to identify the Auto Scaling group that is created for these worker nodes.
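As a hedged example, the Auto Scaling group behind a self-managed node group can be located with the AWS CLI by matching on the node group name; the name 'playground-nodegroup' below is a placeholder:

# List Auto Scaling groups whose name contains the node group name
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[?contains(AutoScalingGroupName, 'playground-nodegroup')].AutoScalingGroupName"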
Managed Node Group
Node Group Operations
Cluster Service Role
The cluster service role allows the Kubernetes control plane to manage AWS resources. The cluster service role is different from the role needed to manage compute nodes, which can be created independently. It contains the "AmazonEKSClusterPolicy" policy. The cluster service role is needed when creating the EKS cluster.
Creation procedure:
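A minimal sketch of the procedure with the AWS CLI, assuming a role name of "playground-eks-cluster-role" and a local trust policy file; both names are placeholders:

# eks-trust-policy.json allows the EKS service to assume the role:
# {
#   "Version": "2012-10-17",
#   "Statement": [
#     { "Effect": "Allow", "Principal": { "Service": "eks.amazonaws.com" }, "Action": "sts:AssumeRole" }
#   ]
# }

# Create the role with the EKS trust relationship
aws iam create-role \
  --role-name playground-eks-cluster-role \
  --assume-role-policy-document file://eks-trust-policy.json

# Attach the managed policy mentioned above
aws iam attach-role-policy \
  --role-name playground-eks-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy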
Cluster Compute Role
The compute role is an IAM role the compute nodes operate under. It can be determined from the AWS console by going to EC2, then to a specific node, and looking up the "IAM Role" for that node. If node groups are used, the IAM role is also associated with the node group.
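The same information can be retrieved with the AWS CLI; a hedged sketch, where the instance ID is a placeholder. The query returns the ARN of the instance profile attached to the node, and the associated role typically has a matching name:

# Look up the instance profile of a specific worker node
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].IamInstanceProfile.Arn'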
Cluster Endpoint
AWS Infrastructure Requirements
TODO: Topology diagram
Cluster VPC
Subnets
Security Groups
A dedicated security group for each cluster control plane is recommended.
EKS Platform Versions and Kubernetes Versions
Amazon EKS platform version.
Integration with ECR
Logging
Control Plane Logging
SLA
aws-iam-authenticator
Page 17.
aws-iam-authenticator Operations
.kube/config Configuration
AWS documentation refers to the Kubernetes configuration file as "kubeconfig".
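A hedged example of generating (or updating) the kubeconfig entry for an EKS cluster with the AWS CLI; the cluster name and region below are placeholders:

# Writes a cluster, user and context entry into ~/.kube/config
aws eks update-kubeconfig --name playground --region us-east-1

The generated user entry relies on an exec credential plugin (aws eks get-token or aws-iam-authenticator) to produce the authentication token.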
EKS Security
API Server User Management and Access Control
When an EKS cluster is created, the IAM entity (user or role) that creates the cluster is automatically granted "system:masters" permissions in the cluster's RBAC configuration (this grant is maintained internally by EKS and does not show up in the aws-auth ConfigMap or any other visible configuration). Additional IAM users and roles can be added after cluster creation by editing the aws-auth ConfigMap. For more details on how kubectl picks up the caller identity, see Connect to an EKS Cluster with kubectl.
aws-auth ConfigMap
The "aws-auth" ConfigMap is initially created to allow the nodes to join the cluster. It is created only after the cluster is configured with nodes.
kubectl -n kube-system -o yaml get cm aws-auth
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::999999999999:role/playground-eks-compute-role
      username: system:node:{{EC2PrivateDNSName}}
However, the same ConfigMap can be used to add RBAC access to IAM users and roles, as shown in the sketch below:
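A hedged sketch of a "mapUsers" fragment of the same ConfigMap; the account ID and user name are placeholders, and membership in "system:masters" grants cluster-admin access:

data:
  mapUsers: |
    - userarn: arn:aws:iam::999999999999:user/some-user
      username: some-user
      groups:
      - system:masters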
IAM Role
See Cluster Service Role.
EKS IAM Permissions
These are technically "actions", but they are commonly referred to as "permissions", which implies that the action is part of a formal permission construct associated with the entity requiring it.
- eks:DescribeCluster
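As a hedged illustration, this action is exercised by calls such as the one below, which kubeconfig generation also relies on; the cluster name is a placeholder:

# Requires eks:DescribeCluster on the target cluster
aws eks describe-cluster --name playground --query 'cluster.endpoint'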
Pod Security Policy
Also see:
By default, the PodSecurityPolicy admission controller is enabled, but a fully permissive security policy with no restrictions, named "eks.privileged", is applied. The permission to "use" "eks.privileged" is granted by the "eks:podsecuritypolicy:privileged" ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks:podsecuritypolicy:privileged
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
The "eks:podsecuritypolicy:privileged" ClusterRole is bound by the "eks:podsecuritypolicy:authenticated" ClusterRoleBinding to all members of the "system:authenticated" Group, which results in the fact that any authenticated identity can use it.
Bearer Tokens
Webhook Token Authentication
EKS natively supports bearer tokens via webhook token authentication. For more details see:
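A hedged example of obtaining such a token for the API server; the cluster name is a placeholder, and either command should produce an equivalent ExecCredential containing the token:

aws eks get-token --cluster-name playground
aws-iam-authenticator token -i playground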
Autoscaling
Autoscaling can be implemented in EKS using either node groups or auto scaling groups.
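As a hedged sketch of the node group side, the scaling configuration of a managed node group can be adjusted with the AWS CLI; the cluster name, node group name and sizes are placeholders:

aws eks update-nodegroup-config \
  --cluster-name playground \
  --nodegroup-name playground-nodegroup \
  --scaling-config minSize=1,maxSize=5,desiredSize=3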
Cluster Autoscaler
Horizontal Pod Autoscaler
Vertical Pod Autoscaler
Load Balancing and Ingress
Using an Ingress
Using a NLB
TODO: https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support
TODO: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
    service.beta.kubernetes.io/aws-load-balancer-security-groups: 'sg-00000000000000000'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: 'arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx'
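A hedged sketch of a complete Service of type LoadBalancer carrying the NLB annotation above; the service name, selector and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8080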
Also see:
Idiosyncrasies
It seems that a LoadBalancer deployment causes the Kubernetes nodes' security group to be updated. Rules are added, and if too many rules accumulate, the LoadBalancer deployment fails with:
Warning CreatingLoadBalancerFailed 17m service-controller Error creating load balancer (will retry): failed to ensure load balancer for service xx/xxxxx: error authorizing security group ingress: "RulesPerSecurityGroupLimitExceeded: The maximum number of rules per security group has been reached.\n\tstatus code: 400, request id: a4fc17a6-8803-4fb4-ac78-ee0db99030e0"
The solution was to locate a node → Security → Security groups → pick the security group carrying "kubernetes.io/rule/nlb/" rules, and delete the inbound "kubernetes.io/rule/nlb/" rules.
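A hedged CLI alternative for the same cleanup; the security group ID, protocol, port and CIDR below are placeholders, and each stale rule has to be revoked explicitly:

# List the ingress rules on the node security group
aws ec2 describe-security-groups \
  --group-ids sg-00000000000000000 \
  --query 'SecurityGroups[].IpPermissions'

# Revoke one stale rule
aws ec2 revoke-security-group-ingress \
  --group-id sg-00000000000000000 \
  --protocol tcp --port 30000 --cidr 192.0.2.0/24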