Amazon EKS Operations
External
Internal
Overview
Create and Delete Cluster
Cluster Information
Cluster Status
aws eks [--region us-east-1] describe-cluster --name example-cluster --query "cluster.status"
"ACTIVE"
If the correct region is configured in the AWS profile, the --region option does not need to be specified.
Cluster Endpoint
aws eks [--region us-east-1] describe-cluster --name example-cluster --query "cluster.endpoint" --output text
https://FDXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.gr0.us-east-1.eks.amazonaws.com
If the correct region is configured in the AWS profile, the --region option does not need to be specified.
Cluster Certificate Authority
aws eks [--region us-east-1] describe-cluster --name example-cluster --query "cluster.certificateAuthority.data" --output text
LS0t...LQo=
If the correct region is configured in the AWS profile, the --region option does not need to be specified.
kubectl Context
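A minimal sketch, assuming the cluster used in the examples above (the cluster name and region are assumptions): the kubectl context for an EKS cluster is typically created or updated with the AWS CLI:

aws eks [--region us-east-1] update-kubeconfig --name example-cluster

This adds the cluster entry to the local kubeconfig and selects it as the current context.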
Allowing Additional Users to Access the Cluster
Allow IAM Role Access
Individual AWS users (authenticating as particular IAM users) can be granted access by allowing an IAM role to access the Kubernetes cluster (the role is associated with RBAC roles or groups), and then configuring the IAM role so that the IAM users can assume it. This is the preferred solution, because different roles can be associated with different cluster permissions, and the same user can access the cluster with different permissions simply by assuming a different role.
1. Create an IAM role dedicated to cluster access, as described here: Create a Role to Delegate Permission to an IAM User.
2. Update the aws-auth ConfigMap to allow the IAM role to access the Kubernetes cluster. This is done by associating the role with a specific set of RBAC permissions, denoted by a Kubernetes group or role:
kubectl -n kube-system edit cm aws-auth
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::...
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::999999999999:role/playground-eks-cluster-admin
      groups:
        - system:masters
3. Edit the trust relationship of the IAM role so that the IAM users who need cluster access are allowed to assume it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::999999999999:user/some.user",
          "arn:aws:iam::999999999999:user/some.otheruser"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
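Once the trust relationship is in place, a user listed in the policy can access the cluster through the role. A minimal sketch (the cluster name is an assumption; the role ARN is the one mapped in aws-auth above):

aws eks [--region us-east-1] update-kubeconfig --name example-cluster --role-arn arn:aws:iam::999999999999:role/playground-eks-cluster-admin

The generated kubeconfig entry obtains tokens with aws eks get-token using that role, so kubectl requests are authenticated as the role rather than as the individual IAM user.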
Allow Individual IAM User Access
Configuring individual user access directly in the aws-auth ConfigMap is less preferable than using an IAM role for access, for the reasons explained in that section.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    ...
  mapUsers: |
    - userarn: arn:aws:iam::999999999999:user/some.user
      username: some.user
      groups:
        - system:masters
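A quick sanity check, assuming the local AWS CLI credentials belong to some.user and the kubeconfig context points at this cluster:

aws sts get-caller-identity
kubectl get nodes

The first command should report the user ARN mapped above; the second should succeed, since the user is placed in the system:masters group.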
Associate an IAM Role with a Kubernetes User
This procedure describes how to define a Kubernetes user based on an IAM role.
1. Create an IAM role dedicated to cluster access, as described here: Create a Role to Delegate Permission to an IAM User. Use the following convention when naming it:
<cluster-name>-eks-namespaced-edit-role
2. Edit the aws-auth ConfigMap and associate the IAM role with a Kubernetes user:
kubectl -n kube-system edit cm aws-auth
kind: ConfigMap
data:
  mapRoles: |
    - rolearn: arn:aws:iam::999999999999:role/blue-experimental-role
      username: blue-experimental-user
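The resulting Kubernetes user has no permissions by itself; it still needs RBAC bindings. For illustration only (the binding name, namespace and the use of the built-in edit ClusterRole are assumptions, not part of this procedure):

kubectl -n some-namespace create rolebinding blue-experimental-edit --clusterrole=edit --user=blue-experimental-user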
EFS CSI Operations
See the separate page: Amazon EFS CSI Operations.
EKS Webhook Token Authentication
Node Group Operations
Create a Node IAM Role
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html#create-worker-node-role
This is the procedure to create a Node IAM Role, required when creating a Node Group.
IAM Console → Roles → Create Role → AWS Service → Choose a use case → Common use cases → EC2 → Next: Permissions
- Filter policies: AmazonEKSWorkerNodePolicy → Check "AmazonEKSWorkerNodePolicy"
- Filter policies: AmazonEC2ContainerRegistryReadOnly → Check "AmazonEC2ContainerRegistryReadOnly"
The "AmazonEKS_CNI_Policy" must be attached either to this role or to a different role that is mapped to the was-node
Kubernetes service account. Assigning the role to the service account is recommended, instead of attaching it to the Node IAM Role. Develop this. For the time being, filter by "AmazonEKS_CNI_Policy" and select it.
Next: Tags → Next: Review
Role name: blue-node-iam-role.
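The same role can also be created from the CLI. A sketch, assuming the trust policy allowing the ec2.amazonaws.com service to assume the role is saved locally as ec2-trust-policy.json (the file name is hypothetical):

aws iam create-role --role-name blue-node-iam-role --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name blue-node-iam-role --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam attach-role-policy --role-name blue-node-iam-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly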
Create a New Node Group
EKS console → Cluster → Configuration → Compute → Add Node Group
Name:
Node IAM Role - select the role created according to the Create a Node IAM Role procedure.
Next.
Node Group compute configuration
AMI type:
Capacity Type: On-Demand
Instance type: t3.micro
Disk size: 20
Node Group scaling configuration
Minimum size: 2
Maximum size: 2
Desired size: 2
Node Group update configuration
Maximum unavailable: Number 1.
Next:
Node Group network configuration
Subnets: private subnets.
If the creation fails and new nodes fail to join the cluster, see EKS Node Group Nodes Not Able to Join the Cluster.
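For reference, a roughly equivalent CLI invocation; the values mirror the console walkthrough above, and the cluster name, node group name and subnet IDs are placeholders:

aws eks create-nodegroup \
  --cluster-name example-cluster \
  --nodegroup-name blue-node-group \
  --node-role arn:aws:iam::999999999999:role/blue-node-iam-role \
  --subnets subnet-11111111 subnet-22222222 \
  --capacity-type ON_DEMAND \
  --instance-types t3.micro \
  --disk-size 20 \
  --scaling-config minSize=2,maxSize=2,desiredSize=2 \
  --update-config maxUnavailable=1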
Scale Up Node Group
https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html
EKS Console → Amazon EKS Clusters → Select cluster → Compute → Select group → Edit → Minimum/Maximum/Desired size.
Scale minimum and desired up.
The current nodes will not be removed.
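The same change can be made from the CLI (cluster and node group names are assumptions); only the scaling configuration is updated:

aws eks update-nodegroup-config --cluster-name example-cluster --nodegroup-name blue-node-group --scaling-config minSize=3,maxSize=3,desiredSize=3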
Delete Node Group
Deleting a node group should preserve the state of the cluster and allow the pods to be rescheduled as soon as a new node group and new nodes are available.
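A CLI sketch, assuming the node group names used earlier on this page (they are placeholders):

aws eks delete-nodegroup --cluster-name example-cluster --nodegroup-name blue-node-group
aws eks describe-nodegroup --cluster-name example-cluster --nodegroup-name blue-node-group --query "nodegroup.status"

The second command reports DELETING while the deletion is in progress.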
Node Operations
ssh Tunnel into an EKS NodePort Service
See the separate page: Amazon EKS Operations ssh Tunnel into an EKS NodePort Service.
Troubleshooting
General Troubleshooting
- EKS Troubleshooting: https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html
- Load balancer troubleshooting: https://aws.amazon.com/premiumsupport/knowledge-center/eks-load-balancers-troubleshooting/
- Pod connection troubleshooting: https://aws.amazon.com/premiumsupport/knowledge-center/eks-pod-connections/
- How can I get my worker nodes to join my Amazon EKS cluster? https://aws.amazon.com/premiumsupport/knowledge-center/eks-worker-nodes-cluster/
Node Group Nodes Not Able to Join the Cluster
See the separate page: EKS Node Group Nodes Not Able to Join the Cluster.
"Your current user or role does not have access to Kubernetes objects on this EKS cluster" Message in AWS Console
The behavior is caused by the fact that the user accessing the AWS Console (or any of the roles it is associated with) is not listed in the cluster's aws-auth ConfigMap. It can be fixed by listing the IAM user in the aws-auth ConfigMap, as described in Allow Individual IAM User Access.
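Two quick checks, assuming the same credentials used by the console session are available locally:

aws sts get-caller-identity
kubectl -n kube-system get cm aws-auth -o yaml

The ARN reported by the first command (or a role it can assume) must appear among the mappings shown by the second.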