Amazon EKS Concepts
Internal
Overview
EKS Cluster
Control Plane
EKS Worker Node
EKS Worker Node IAM Role
Amazon EKS-optimized AMI
Worker Node Group
A worker node group is a named EKS management entity that facilitates the creation and management of EC2 instance-based Kubernetes worker nodes. Node groups can be either managed or self-managed. When the EKS cluster is created, the operator has the option to create one or more node groups; node groups created this way are managed node groups. An EKS cluster can use more than one node group.
Node Group Name
Managed Node Group
Managed node groups are worker node groups created by default during the provisioning of an EKS cluster.
The main characteristic of a managed node group is that it abstracts away the creation and management of individual EC2 instances: the user does not create individual EC2 VMs, but only indicates the instance type and how many instances are needed. The EC2 instances created by a managed node group are based on EKS-optimized AMIs. When the nodes are updated or terminated, the pods running on them are gracefully drained, so that the applications stay available.
Each managed node group has a one-to-one association with an EC2 Auto-Scaling group, and all nodes provisioned by the group are automatically made part of that Auto-Scaling group, so the nodes managed by a group can autoscale: the nodes launched as part of a group are automatically tagged for auto-discovery by the Kubernetes cluster autoscaler. The associated Auto-Scaling group name can be retrieved from the managed node group configuration.
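For example, assuming a hypothetical cluster named "playground" and a managed node group named "ng-1", the associated Auto-Scaling group name can be retrieved with the AWS CLI:

# Prints the Auto-Scaling group name(s) backing the managed node group
aws eks describe-nodegroup \
  --cluster-name playground \
  --nodegroup-name ng-1 \
  --query 'nodegroup.resources.autoScalingGroups[].name' \
  --output text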
The nodes managed by a group can run across multiple availability zones.
The node group can be used to apply Kubernetes labels to nodes.
When a managed node group is created, the following need to be specified:
- subnets
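A creation sketch with the AWS CLI, assuming hypothetical cluster, node group and subnet identifiers, and reusing the compute node IAM role shown elsewhere on this page; the --labels flag illustrates applying Kubernetes labels through the node group:

aws eks create-nodegroup \
  --cluster-name playground \
  --nodegroup-name ng-1 \
  --subnets subnet-0aaaaaaaaaaaaaaaa subnet-0bbbbbbbbbbbbbbbb \
  --node-role arn:aws:iam::999999999999:role/playground-eks-compute-role \
  --instance-types t3.medium \
  --scaling-config minSize=1,maxSize=4,desiredSize=3 \
  --labels env=dev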
Self-Managed Node Group
Contains self-managed worker nodes. The node group name can be used later to identify the Auto Scaling group that is created for these worker nodes.
Node Group Operations
Cluster Service Role
The cluster service role allows the Kubernetes control plane to manage AWS resources. The cluster service role is different from the role needed to manage compute nodes, which can be created independently. It contains the "AmazonEKSClusterPolicy" policy. The cluster service role is needed when creating the EKS cluster.
Creation procedure:
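A possible sketch with the AWS CLI, assuming a hypothetical role name "playground-eks-cluster-service-role":

# Trust policy that lets the EKS service assume the role
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role with the trust policy
aws iam create-role \
  --role-name playground-eks-cluster-service-role \
  --assume-role-policy-document file://eks-trust-policy.json

# Attach the managed AmazonEKSClusterPolicy policy
aws iam attach-role-policy \
  --role-name playground-eks-cluster-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy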
Cluster Compute Role
The compute role is the IAM role under which compute nodes operate. It can be determined from the AWS console by going to EC2, then to a specific node, then looking up the "IAM Role" for that node. If node groups are used, the IAM role is associated with the node group.
Cluster Endpoint
AWS Infrastructure Requirements
TODO: Topology diagram
Cluster VPC
Subnets
Security Groups
A dedicated security group for each cluster control plane is recommended.
EKS Platform Versions and Kubernetes Versions
Amazon EKS platform version.
Integration with ECR
Logging
Control Plane Logging
SLA
aws-iam-authenticator
Page 17.
aws-iam-authenticator Operations
.kube/config Configuration
AWS documentation refers to the Kubernetes configuration file as "kubeconfig".
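A kubeconfig entry for an EKS cluster can be generated with the AWS CLI; the cluster name and region below are placeholders:

# Adds or updates a context for the cluster in ~/.kube/config
aws eks update-kubeconfig --name playground --region us-east-1

# Verify the active context
kubectl config current-context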
EKS Security
API Server User Management and Access Control
When an EKS cluster is created, the IAM entity (user or role) that creates the cluster is automatically granted "system:masters" permissions in the cluster's RBAC configuration. Where? Additional IAM users and roles can be added after cluster creation by editing the aws-auth ConfigMap. For more details on how kubectl picks up the caller identity, see Connect to an EKS Cluster with kubectl.
aws-auth ConfigMap
The "aws-auth" ConfigMap is initially created to allow the nodes to join the cluster. It is created only after the cluster is configured with nodes.
kubectl -n kube-system -o yaml get cm aws-auth
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::999999999999:role/playground-eks-compute-role
      username: system:node:{{EC2PrivateDNSName}}
However, the same ConfigMap can be used to add RBAC access to IAM users and roles, as described below:
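For example, an IAM user can be granted full administrative access by adding a "mapUsers" section to the same ConfigMap. This is a minimal sketch; the user ARN and username below are hypothetical, and the "system:masters" group grants cluster-admin rights:

data:
  mapUsers: |
    - userarn: arn:aws:iam::999999999999:user/some.user
      username: some.user
      groups:
        - system:masters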
IAM Role
See Cluster Service Role.
EKS IAM Permissions
These are technically "actions", but they are commonly referred to as "permissions", which implies that the action is part of a formal permission construct associated with the entity requiring it.
- eks:DescribeCluster
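For illustration, a hypothetical IAM policy statement granting this action to the entity that needs it could look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "*"
    }
  ]
}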
Pod Security Policy
Also see:
By default, the PodSecurityPolicy admission controller is enabled, but a fully permissive security policy with no restrictions, named "eks.privileged", is applied. The permission to "use" "eks.privileged" is imparted by the "eks:podsecuritypolicy:privileged" ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks:podsecuritypolicy:privileged
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
The "eks:podsecuritypolicy:privileged" ClusterRole is bound by the "eks:podsecuritypolicy:authenticated" ClusterRoleBinding to all members of the "system:authenticated" Group, which results in the fact that any authenticated identity can use it.
Bearer Tokens
Webhook Token Authentication
EKS natively supports bearer tokens via webhook token authentication. For more details see:
Load Balancing and Ingress
Using an Ingress
Using a NLB
TODO: https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support
TODO: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
    service.beta.kubernetes.io/aws-load-balancer-security-groups: 'sg-00000000000000000'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: 'arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx'
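For reference, a complete Service manifest using the NLB annotation might look like the following sketch; the service name, selector and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443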
Also see:
Idiosyncrasies
It seems that a LoadBalancer deployment updates the Kubernetes nodes' security group. Rules are added, and if too many rules are added, the LoadBalancer deployment fails with:
Warning CreatingLoadBalancerFailed 17m service-controller Error creating load balancer (will retry): failed to ensure load balancer for service xx/xxxxx: error authorizing security group ingress: "RulesPerSecurityGroupLimitExceeded: The maximum number of rules per security group has been reached.\n\tstatus code: 400, request id: a4fc17a6-8803-4fb4-ac78-ee0db99030e0"
The solution was to locate a node → Security → Security groups → pick the SG with "kubernetes.io/rule/nlb/" rules, and delete inbound "kubernetes.io/rule/nlb/" rules.
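The same cleanup can also be scripted with the AWS CLI; the security group and rule IDs below are placeholders:

# List the inbound rules of the affected node security group; the ones added
# by the load balancer controller carry a "kubernetes.io/rule/nlb/" description
aws ec2 describe-security-group-rules \
  --filters Name=group-id,Values=sg-00000000000000000

# Revoke an offending ingress rule by its rule id
aws ec2 revoke-security-group-ingress \
  --group-id sg-00000000000000000 \
  --security-group-rule-ids sgr-00000000000000000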