Amazon EKS kubectl Context
Revision as of 00:19, 29 September 2020
External
Internal
Overview
The local kubectl context for an EKS cluster is generated automatically with the aws eks update-kubeconfig
command, which updates .kube/config. The command can be used to generate a context for the current AWS identity, given by the AWS_PROFILE, or for an arbitrary AWS IAM role. The command updates .kube/config with pre-populated server endpoint and certificate authority data values, obtained dynamically from the EKS cluster specified by name. If the correct region is already configured in the profile, it does not need to be specified on the command line.
Use the Current AWS Identity
If no --role-arn option is specified for the aws eks update-kubeconfig
command, the kubectl context is configured to access the EKS cluster with the default AWS CLI IAM identity in effect at the time of execution, so make sure that AWS_PROFILE is set to the AWS profile that carries the identity that was used to create the cluster. This identity can be obtained with aws sts get-caller-identity.
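Since the generated user entry captures the profile that was active when the command ran, it helps to verify the environment first. A minimal guard, as a sketch (the fallback to the "default" profile is an assumption, mirroring typical AWS CLI behavior):

```shell
#!/bin/sh
# Fail fast if no AWS profile is configured before generating the context.
# Falling back to "default" mirrors typical AWS CLI behavior (assumption).
AWS_PROFILE="${AWS_PROFILE:-default}"
export AWS_PROFILE
echo "Using profile: $AWS_PROFILE"
# Then confirm which identity this profile resolves to:
#   aws sts get-caller-identity
```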
aws --region <region> eks update-kubeconfig --name <cluster-name> --alias <context-name>
This creates the following context elements:
1. A "cluster" with the name "arn:aws:eks:<region>:999999999999:cluster/<cluster-name>". The cluster's API Server URL and certificate authority are retrieved automatically from the EKS cluster configuration:
- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
cluster:
certificate-authority-data: LS0...Qo=
server: https://C5000000000000000000000000000000.gr7.us-west-2.eks.amazonaws.com
2. A "user" with the name "arn:aws:eks:<region>:999999999999:cluster/<cluster-name>" (same as the cluster name), whose identity is a token generated via an "aws eks get-token" command:
- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- us-west-2
- eks
- get-token
- --cluster-name
- <cluster-name>
command: aws
env:
- name: AWS_PROFILE
value: default
This is equivalent to executing the command:
export AWS_PROFILE=<profile-used-when-command-was-run>
aws --region <region> eks get-token --cluster-name <cluster-name>
The result is an authentication token.
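The token itself follows a simple scheme: the prefix "k8s-aws-v1." followed by a base64url-encoded presigned STS GetCallerIdentity URL, which the API server forwards to STS to establish the caller's identity. A sketch constructing a token of that shape (the URL below is a placeholder, not a real presigned request):

```shell
#!/bin/sh
# Sketch of the token shape produced by "aws eks get-token":
# "k8s-aws-v1." + base64url(presigned STS GetCallerIdentity URL).
# The URL below is a placeholder, not a real presigned request.
URL='https://sts.us-west-2.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15'
B64=$(printf '%s' "$URL" | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')
TOKEN="k8s-aws-v1.${B64}"
echo "$TOKEN"
```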
3. The context, named "<context-name>", that binds these two together. If no alias is specified, the context name defaults to the cluster name (which is also the user name).
- context:
name: <context-name>
cluster: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
user: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
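The default naming can be reproduced directly: when no --alias is given, the cluster, user, and context entries all share the cluster ARN (region, account ID, and cluster name below are placeholders):

```shell
#!/bin/sh
# Derive the default context name used when --alias is omitted.
# Region, account ID, and cluster name are placeholders.
REGION=us-west-2
ACCOUNT=999999999999
CLUSTER=my-cluster
CLUSTER_ARN="arn:aws:eks:${REGION}:${ACCOUNT}:cluster/${CLUSTER}"
# The cluster, user, and context entries all default to this same ARN.
echo "$CLUSTER_ARN"
```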
Use an IAM Role
aws --region <region> eks update-kubeconfig --name <cluster-name> --alias <context-alias> --role-arn <role-arn>
Note that the IAM role passed to --role-arn is NOT the cluster service role, but a different role altogether.
The only difference from using the current identity is how the authentication token is generated:
- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- us-west-2
- eks
- get-token
- --cluster-name
- <cluster-name>
- --role
- arn:aws:iam::999999999999:role/<role-name>
command: aws
env:
- name: AWS_PROFILE
value: default
This is equivalent to executing:
export AWS_PROFILE=<profile-used-when-command-was-run>
aws --region <region> eks get-token --cluster-name <cluster-name> --role arn:aws:iam::999999999999:role/<role-name>
⚠️ If the "aws eks update-kubeconfig" command is used twice with the same .kube/config, the last execution overwrites the user entry, even if a different context alias was used, producing two identically configured contexts with different names. That is why it is best to run "aws eks update-kubeconfig" only once, to generate the initial configuration, and then update .kube/config manually with additional contexts.
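For example, a manually added second context might duplicate the exec user under a distinct name, so that a later update-kubeconfig run cannot silently overwrite it. A sketch of the additional .kube/config entries (the user name "eks-admin-role" and context name are placeholders):

```yaml
# Sketch: additional entries added manually to .kube/config.
# "eks-admin-role" and "<second-context-name>" are placeholder names.
users:
- name: eks-admin-role
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --role
      - arn:aws:iam::999999999999:role/<role-name>
contexts:
- context:
    cluster: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
    user: eks-admin-role
  name: <second-context-name>
```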
For more details on how the IAM user or role identity is linked to a specific set of RBAC permissions, see: