Amazon EKS kubectl Context

External

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

Internal

Amazon EKS Operations

Overview

The local kubectl context for an EKS cluster is generated automatically with the "aws eks update-kubeconfig" command. The command can be used to generate a context for the current AWS identity, or for an AWS IAM role.

Use the Current AWS Identity

Make sure that AWS_PROFILE is set to the AWS profile that carries the identity that was used to create the cluster.
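
Before running the command below, the profile can be exported and the identity it resolves to can be verified with aws sts get-caller-identity (the profile name is a placeholder):

export AWS_PROFILE=<profile-name>
aws sts get-caller-identity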

aws --region <region> eks update-kubeconfig --name <cluster-name> --alias <context-name>

This creates the following context elements:

1. A "cluster" with the name "arn:aws:eks:<region>:999999999999:cluster/<cluster-name>". The cluster's API Server URL and certificate authority are retrieved automatically from the EKS cluster configuration:

- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  cluster:
    certificate-authority-data: LS0...Qo=
    server: https://C5000000000000000000000000000000.gr7.us-west-2.eks.amazonaws.com
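
The same endpoint and certificate authority values can also be read directly from the EKS control plane. A quick way to check them is the standard describe-cluster command with a JMESPath projection of the two fields (sketch):

aws --region <region> eks describe-cluster --name <cluster-name> \
  --query '{endpoint: cluster.endpoint, certificateAuthority: cluster.certificateAuthority.data}'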

2. A "user" with the name "arn:aws:eks:<region>:999999999999:cluster/<cluster-name>" (same as the cluster name), whose identity is a token generated via an "aws get-token" command:

- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      command: aws
      env:
      - name: AWS_PROFILE
        value: default

This is equivalent to executing the following commands:

export AWS_PROFILE=<profile-used-when-command-was-run>

aws --region <region> eks get-token --cluster-name <cluster-name>

The result is an authentication token.
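
For reference, the output of "aws eks get-token" is an ExecCredential JSON document whose status.token field carries the bearer token kubectl presents to the API server. The shape below is approximate, with values elided:

{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1alpha1",
    "spec": {},
    "status": {
        "expirationTimestamp": "...",
        "token": "k8s-aws-v1...."
    }
}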


3. A "context" named "<context-name>" that binds the two together. If no alias is specified, the context name is the same as the cluster name and the user name.

- name: <context-name>
  context:
    cluster: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
    user: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
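
A quick way to confirm that the newly generated context works is to run a read-only command against it:

kubectl --context <context-name> get svc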

Use an IAM Role

aws --region <region> eks update-kubeconfig --name <cluster-name> --alias <context-alias> --role-arn <role-arn>
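
In this case the generated "user" entry instructs the exec plugin to request the token for the given role instead of the caller's own identity. A sketch of the resulting args section is shown below; the exact role flag written by update-kubeconfig may vary with the AWS CLI version, so treat it as an assumption:

      args:
      - --region
      - <region>
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --role-arn
      - <role-arn>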

TO PROCESS

Update .kube/config with the EKS cluster definition as follows:

aws eks [--region us-east-1] update-kubeconfig --name example-eks-cluster [--alias <context-alias>] [--role-arn arn:aws:iam::999999999999:role/some-role]

This command constructs a Kubernetes context with pre-populated server endpoint and certificate authority data values for the cluster specified by name. These values can also be obtained from the EKS cluster's page in the AWS console.

If the correct region is configured in the profile, it does not need to be specified on the command line. If no alias is used, the default name of the context is the cluster ARN. The result is a new context added to .kube/config:

Added new context arn:aws:eks:us-east-1:999999999999:cluster/example-eks-cluster to /Users/testuser/.kube/config

If no --role-arn option is specified for the aws eks command, the kubectl context is configured to access the EKS cluster with the default AWS CLI IAM user identity in effect at the time of the aws eks execution. This identity can be obtained with aws sts get-caller-identity. The IAM identity associated with the context can be changed with the --role-arn option. If --role-arn is specified, the Kubernetes context is configured such that it is not necessary to explicitly assume the role; kubectl operations in the correct context simply work. Note that the IAM role used for --role-arn is NOT the cluster service role, but a completely different role altogether.
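
For reference, aws sts get-caller-identity reports the account, user ID and ARN of the identity currently in effect; its output looks approximately like this (values elided):

{
    "UserId": "AIDA...",
    "Account": "999999999999",
    "Arn": "arn:aws:iam::999999999999:user/testuser"
}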

For more details on how the IAM user or role identity is linked to a specific set of RBAC permissions, see:

EKS API Server User Management and Access Control
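
In short, the mapping is maintained in the aws-auth ConfigMap in the kube-system namespace. A minimal sketch, assuming one of the roles used in the example below is mapped to the system:masters group (the username value is illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::999999999999:role/eks-clusterrole-cluster-admin
      username: cluster-admin
      groups:
        - system:masters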

Building upon this capability, it is possible to create two different Kubernetes contexts that map to different sets of RBAC permissions on the same Kubernetes cluster:

aws eks update-kubeconfig --name example-eks-cluster --alias access-with-cluster-admin-permissions --role-arn arn:aws:iam::999999999999:role/eks-clusterrole-cluster-admin
aws eks update-kubeconfig --name example-eks-cluster --alias access-with-limited-permissions --role-arn arn:aws:iam::999999999999:role/eks-clusterrole-limited-permissions
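
Both contexts, along with any created earlier, can be listed with:

kubectl config get-contexts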

Switching between Kubernetes contexts is done with kubectl config use-context:

kubectl config use-context access-with-cluster-admin-permissions

kubectl config current-context
access-with-cluster-admin-permissions

kubectl config use-context access-with-limited-permissions 

kubectl config current-context
access-with-limited-permissions

I ran into trouble using a role to configure access to a cluster: it overwrote something locally, so I could not log in. To investigate.