Amazon EKS kubectl Context
=External=
* https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
=Internal=
* [[Amazon_EKS_Operations#kubectl_Context|Amazon EKS Operations]]


=Overview=
The local kubectl [[.kube_config#Contexts|context]] for an EKS cluster is generated automatically with the <code>aws eks update-kubeconfig</code> command, which updates [[Amazon_EKS_Concepts#.kube.2Fconfig_Configuration|.kube/config]] with server endpoint and certificate authority data values obtained dynamically from the EKS cluster specified by name. The command can be used to generate a context for the current AWS identity, given by <code>AWS_PROFILE</code>, or for an arbitrary AWS IAM role. If the right region is configured in the profile, it does not need to be specified on the command line.
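
The general form of the command, with optional elements in brackets (the region, cluster name, alias and role ARN below are placeholders):
<syntaxhighlight lang='bash'>
aws [--region <region>] eks update-kubeconfig --name <cluster-name> [--alias <context-name>] [--role-arn <role-arn>]
</syntaxhighlight>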
=Ensure the Cluster is Visible under the Current Identity=
<syntaxhighlight lang='bash'>
aws eks list-clusters
</syntaxhighlight>
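The cluster the context is being created for must appear in the output; a representative response, with a hypothetical cluster name:
<syntaxhighlight lang='json'>
{
    "clusters": [
        "example-eks-cluster"
    ]
}
</syntaxhighlight>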


=Use the Current AWS Identity=
 
If no <code>--role-arn</code> option is specified for the <code>aws eks</code> command, the kubectl context is configured to access the EKS cluster with the default AWS CLI IAM user identity at the time of <code>aws eks</code> execution, so make sure that <code>AWS_PROFILE</code> is set to the AWS profile that carries the identity that was used to create the cluster. This identity can be verified with [[AWS_Security_Operations#IAM_Information|aws sts get-caller-identity]].
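
Representative <code>aws sts get-caller-identity</code> output, with placeholder account and user values:
<syntaxhighlight lang='json'>
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "999999999999",
    "Arn": "arn:aws:iam::999999999999:user/testuser"
}
</syntaxhighlight>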


<syntaxhighlight lang='text'>
aws --region <region> eks update-kubeconfig --name <cluster-name> --alias <context-name>
</syntaxhighlight>
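
The command reports the context it added to .kube/config; with no <code>--alias</code>, the default context name is the cluster ARN:
<syntaxhighlight lang='text'>
Added new context arn:aws:eks:us-east-1:999999999999:cluster/example-eks-cluster to /Users/testuser/.kube/config
</syntaxhighlight>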


This creates the following context elements:


1. A "cluster" entry with the name "arn:aws:eks:<region>:999999999999:cluster/<cluster-name>". The cluster's API server URL and certificate authority are retrieved automatically from the EKS cluster configuration:

<syntaxhighlight lang='yaml'>
- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  cluster:
    certificate-authority-data: LS0...Qo=
    server: https://C5000000000000000000000000000000.gr7.us-west-2.eks.amazonaws.com
</syntaxhighlight>


2. A "user" entry with the name "arn:aws:eks:<region>:999999999999:cluster/<cluster-name>" (same as the cluster name), whose identity is a token generated via an <code>aws eks get-token</code> command:


<syntaxhighlight lang='yaml'>
- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      command: aws
      env:
      - name: AWS_PROFILE
        value: default
</syntaxhighlight>


This is equivalent to executing the following command:


<syntaxhighlight lang='bash'>
export AWS_PROFILE=<profile-used-when-command-was-run>

aws --region <region> eks get-token --cluster-name <cluster-name>
</syntaxhighlight>


The result is a bearer token that gets validated via the [[EKS_Webhook_Token_Authentication#Overview|built-in EKS webhook token authentication]]. The bearer token is the only piece of information that carries the identity of the caller to the Kubernetes server.
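
The token is emitted as part of an <code>ExecCredential</code> JSON object; a representative (abbreviated) example, with a hypothetical timestamp and truncated token:
<syntaxhighlight lang='json'>
{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1alpha1",
    "spec": {},
    "status": {
        "expirationTimestamp": "2022-01-10T02:00:00Z",
        "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMu..."
    }
}
</syntaxhighlight>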


3. The context, named "<context-name>", that binds these two together. If no alias is specified, the context name is the same as the cluster and user names.


<syntaxhighlight lang='yaml'>
- context:
    cluster: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
    user: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  name: <context-name>
</syntaxhighlight>
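
Once written to .kube/config, the new context can be selected with [[Kubectl_config#use-context|kubectl config use-context]]:
<syntaxhighlight lang='bash'>
kubectl config use-context <context-name>
</syntaxhighlight>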


==Caveat==
{{Warn|⚠️ If the <code>aws eks update-kubeconfig</code> command is used twice against the same .kube/config, the last execution will '''overwrite the user entry''', even if a different context alias was used, producing two contexts that behave identically even though they are named differently. That is why it is best to use <code>aws eks update-kubeconfig</code> only once, to generate the initial configuration, then update .kube/config manually with additional contexts.}}


=Use an IAM Role=


<syntaxhighlight lang='text'>
aws --region <region> eks update-kubeconfig --name <cluster-name> --alias <context-alias> --role-arn <role-arn>
</syntaxhighlight>


Note that the IAM role used for <code>--role-arn</code> is NOT the [[Amazon_EKS_Concepts#Cluster_Service_Role|cluster service role]], but a different role altogether. When <code>--role-arn</code> is specified, the context is configured so that it is not necessary to explicitly assume the role; kubectl operations in the correct context simply work.


The only difference from [[#Use_the_Current_AWS_Identity|using the current identity]] is the way the authentication token is generated:


 
<syntaxhighlight lang='yaml'>
- name: arn:aws:eks:us-west-2:999999999999:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --role
      - arn:aws:iam::999999999999:role/<role-name>
      command: aws
      env:
      - name: AWS_PROFILE
        value: default
</syntaxhighlight>


This is equivalent to executing:


<syntaxhighlight lang='bash'>
export AWS_PROFILE=<profile-used-when-command-was-run>

aws --region <region> eks get-token --cluster-name <cluster-name> --role arn:aws:iam::999999999999:role/<role-name>
</syntaxhighlight>

The result is a bearer token that gets validated via the [[EKS_Webhook_Token_Authentication#Overview|built-in EKS webhook token authentication]]. The bearer token is the only piece of information that carries the identity of the calling role to the Kubernetes server.

==Caveat==
{{Warn|⚠️ If the <code>aws eks update-kubeconfig</code> command is used twice against the same .kube/config, the last execution will '''overwrite the user entry''', even if a different context alias was used, producing two contexts that behave identically even though they are named differently. That is why it is best to use <code>aws eks update-kubeconfig</code> only once, to generate the initial configuration, then update .kube/config manually with additional contexts.}}

For more details on how the IAM user or role identity is linked to a specific set of RBAC permissions, see: {{Internal|Amazon_EKS_Concepts#API_Server_User_Management_and_Access_Control|EKS API Server User Management and Access Control}}
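Building upon this capability, it is possible to maintain two different Kubernetes contexts that imply different sets of RBAC permissions on the same cluster: for example, a context whose user entry invokes <code>get-token</code> with a cluster-admin role and one that uses a limited-permissions role (the aliases below are hypothetical). Per the caveat above, the second context is best added by editing .kube/config manually rather than by re-running <code>aws eks update-kubeconfig</code>. Switching between contexts is done with [[Kubectl_config#use-context|kubectl config use-context]]:

<syntaxhighlight lang='bash'>
kubectl config use-context access-with-cluster-admin-permissions
kubectl config current-context

kubectl config use-context access-with-limited-permissions
kubectl config current-context
</syntaxhighlight>
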
=Mapping the IAM Identity back to Kubernetes Identity=
{{Internal|Amazon_EKS_Operations#Allowing_Additional_Users_to_Access_the_Cluster|Allowing Additional Users to Access the Cluster}}
