Amazon EKS kubectl Context
Overview
The local kubectl context for an EKS cluster is generated automatically with the aws eks update-kubeconfig
command, which updates .kube/config. The command can generate a context either for the current AWS identity, given by AWS_PROFILE, or for an arbitrary AWS IAM role. It populates .kube/config with the server endpoint and certificate authority data obtained dynamically from the EKS cluster specified by name. If the correct region is configured in the profile, it does not need to be specified on the command line.
Ensure the Cluster is Visible under the Current Identity
aws eks list-clusters
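If the cluster was created under the current identity, it should show up in the response. Illustrative output, assuming a single cluster visible to the profile:
{
    "clusters": [
        "<cluster-name>"
    ]
}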
Use the Current AWS Identity
If no --role-arn option is specified for the aws eks command, the kubectl context is configured to access the EKS cluster with the default AWS CLI IAM user identity at the time of the aws eks execution, so make sure that AWS_PROFILE is set to the AWS profile that carries the identity that was used to create the cluster. This identity can be verified with aws sts get-caller-identity.
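An illustrative verification, using the document's 999999999999 placeholder account; the Arn field shows which IAM user or role the profile resolves to:
export AWS_PROFILE=<profile-name>
aws sts get-caller-identity
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "999999999999",
    "Arn": "arn:aws:iam::999999999999:user/<user-name>"
}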
aws --region <region> eks update-kubeconfig --name <cluster-name> --alias <context-name>
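A concrete run looks like this; the confirmation message paraphrases what the AWS CLI prints, and the path depends on the local environment:
aws --region us-west-2 eks update-kubeconfig --name <cluster-name> --alias <context-name>
Added new context <context-name> to /home/<user>/.kube/config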
This creates the following context elements:
1. A "cluster" with the name "arn:aws:eks:<region>:999999999999:cluster/<cluster-name>". The cluster's API Server URL and certificate authority are retrieved automatically from the EKS cluster configuration:
- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  cluster:
    certificate-authority-data: LS0...Qo=
    server: https://C5000000000000000000000000000000.gr7.us-west-2.eks.amazonaws.com
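These two values can also be read straight from the cluster, which is useful when verifying the generated entry; a minimal check, assuming the AWS CLI is configured for the right account:
aws --region <region> eks describe-cluster --name <cluster-name> --query cluster.endpoint
aws --region <region> eks describe-cluster --name <cluster-name> --query cluster.certificateAuthority.data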
2. A "user" with the name "arn:aws:eks:<region>:999999999999:cluster/<cluster-name>" (same as the cluster name), whose identity is a token generated via an "aws get-token" command:
- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      command: aws
      env:
      - name: AWS_PROFILE
        value: default
This is equivalent to executing the following commands:
export AWS_PROFILE=<profile-used-when-command-was-run>
aws --region <region> eks get-token --cluster-name <cluster-name>
The result is a bearer token that gets validated via the built-in EKS webhook token authentication. The bearer token is the only piece of information that carries the identity of the caller to the Kubernetes server.
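The token is returned wrapped in an ExecCredential structure, which kubectl consumes. An illustrative response, with fields abbreviated; the apiVersion matches the v1alpha1 shown above, though newer CLI versions emit v1beta1:
{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1alpha1",
    "spec": {},
    "status": {
        "expirationTimestamp": "2021-01-01T00:00:00Z",
        "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMu..."
    }
}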
3. The context, named "<context-name>", that binds these two together. If no alias is specified, the context name defaults to the cluster ARN, the same value used for the cluster and user names.
- context:
    cluster: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
    user: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  name: <context-name>
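Once the context is in place, it can be selected and smoke-tested; a minimal check, assuming the caller's identity is mapped to sufficient RBAC permissions:
kubectl config use-context <context-name>
kubectl get nodes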
Caveat
⚠️ If the "aws eks update-kubeconfig" command is run twice against the same .kube/config, the last execution overwrites the user entry, even if a different context alias is used, producing two contexts that behave identically even though they are named differently. That is why it is best to use "aws eks update-kubeconfig" only once, to generate the initial configuration, and then update .kube/config manually with additional contexts, as sketched below.
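A manual addition amounts to appending a user entry under a distinct, freely chosen name and a context that references it. A sketch, where <cluster-name>-alt-user, <another-profile> and <alternative-context-name> are hypothetical placeholders:
users:
- name: <cluster-name>-alt-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args: [--region, us-west-2, eks, get-token, --cluster-name, <cluster-name>]
      command: aws
      env:
      - name: AWS_PROFILE
        value: <another-profile>
contexts:
- context:
    cluster: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
    user: <cluster-name>-alt-user
  name: <alternative-context-name>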
Use an IAM Role
aws --region <region> eks update-kubeconfig --name <cluster-name> --alias <context-alias> --role-arn <role-arn>
Note that the IAM role used for --role-arn is NOT the cluster service role, but an entirely different role.
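Whether the current identity is allowed to assume that role can be checked up front; the session name below is an arbitrary, illustrative label:
aws sts assume-role --role-arn arn:aws:iam::999999999999:role/<role-name> --role-session-name eks-context-check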
The only difference from using the current identity is the way the authentication token is generated:
- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --role
      - arn:aws:iam::999999999999:role/<role-name>
      command: aws
      env:
      - name: AWS_PROFILE
        value: default
This is equivalent to executing:
export AWS_PROFILE=<profile-used-when-command-was-run>
aws --region <region> eks get-token --cluster-name <cluster-name> --role arn:aws:iam::999999999999:role/<role-name>
The result is a bearer token that gets validated via the built-in EKS webhook token authentication. The bearer token is the only piece of information that carries the identity of the calling role to the Kubernetes server.
Caveat
⚠️ The same caveat applies as in the previous section: a repeated "aws eks update-kubeconfig" run against the same .kube/config overwrites the shared user entry, so additional contexts should be added by editing .kube/config manually.
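As a sanity check, recent kubectl versions (v1.27 and later) can report the identity the API server actually sees for a given context; <context-alias> below is the alias used above:
kubectl --context <context-alias> auth whoami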
For more details on how the IAM user or role identity is linked to a specific set of RBAC permissions, see: