Amazon EKS Create and Delete Cluster
Creation Procedure
Create Resources
Create a dedicated VPC and associated resources from the pre-defined CloudFormation template, as described here: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html.
- Use the "public and private subnets" option.
- Do not specify an IAM role.
Write down the name of the stack; it will be needed later to delete the resources.
Also write down the VpcId, SecurityGroups and SubnetIds values from the stack outputs.
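The same step can be sketched with the AWS CLI. This is only an outline: the stack name eks-vpc-stack is an arbitrary example, and the "public and private subnets" sample template URL must be copied from the getting-started guide linked above, since its exact path changes over time:

 aws cloudformation create-stack \
   --stack-name eks-vpc-stack \
   --template-url <vpc-sample-template-url-from-the-guide>
 # wait for CREATE_COMPLETE, then record the VpcId, SecurityGroups and SubnetIds outputs
 aws cloudformation wait stack-create-complete --stack-name eks-vpc-stack
 aws cloudformation describe-stacks --stack-name eks-vpc-stack --query "Stacks[0].Outputs"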
Create the Cluster Service Role
For an explanation of what a cluster service role is, see:
Creation:
IAM console → Create role → AWS Service → Select a service to view its use cases → EKS → Select your use case → EKS Cluster → Next: Permissions (by default AmazonEKSClusterPolicy is selected) → Next: Tags
Role name:
<cluster-name>-service-role
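The console procedure above can also be approximated with the AWS CLI; the sketch below assumes the <cluster-name>-service-role naming convention and uses a trust policy that simply lets the EKS service assume the role:

 aws iam create-role \
   --role-name <cluster-name>-service-role \
   --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
 # attach the same managed policy the console selects by default
 aws iam attach-role-policy \
   --role-name <cluster-name>-service-role \
   --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy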
Create the Cluster
The cluster will be accessible to the IAM User that creates it without any additional configuration. Other users can be added as described in the Allowing Additional Users to Access the EKS Cluster section, after the cluster is created.
Create the cluster from the Console: EKS → Create Cluster.
Name: ...
Cluster Service Role: the role created above, <cluster-name>-service-role.
Next.
VPC: the VPC created by the CloudFormation stack above.
Subnets: all existing subnets are preselected.
Security groups: use the security group created by the CloudFormation stack (<cluster-name>-*-ControlPlaneSecurityGroup-*).
Cluster Endpoint Access: Public and private.
Next.
Control Plane Logging: all disabled.
Create.
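For reference, an AWS CLI sketch of the same creation step; the subnet and security group IDs are the values recorded from the CloudFormation stack outputs, the role ARN is the service role created above, and the account ID and resource IDs below are placeholders:

 aws eks create-cluster \
   --name <cluster-name> \
   --role-arn arn:aws:iam::999999999999:role/<cluster-name>-service-role \
   --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,subnet-cccc,subnet-dddd,securityGroupIds=sg-xxxx,endpointPublicAccess=true,endpointPrivateAccess=true
 # block until the control plane becomes ACTIVE
 aws eks wait cluster-active --name <cluster-name>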
Test Access
At this point there are no nodes, but the cluster should be available. You can set up a "creator" context as described here, using the same IAM User identity (not a role): Connect to an EKS Cluster with kubectl.
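A quick way to exercise the new context is sketched below: aws eks update-kubeconfig writes a kubeconfig entry for the creator identity, and a read-only query against the API server confirms connectivity:

 aws eks update-kubeconfig --name <cluster-name> --region <region>
 # expect the "kubernetes" ClusterIP service in the default namespace
 kubectl get svc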
Provision Compute Nodes
Provision managed nodes (compute):
Create a "compute" Role (EC2 add AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly policies)
Select the EKS cluster, go to Compute
Add Node Group.
Select only the private subnets.
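For reference, a hedged AWS CLI equivalent of the console steps above; the node role is the "compute" role created in the first step, the subnets are the private subnet IDs from the stack outputs, and the instance type and scaling values are purely illustrative:

 aws eks create-nodegroup \
   --cluster-name <cluster-name> \
   --nodegroup-name <cluster-name>-ng-1 \
   --node-role arn:aws:iam::999999999999:role/<compute-role-name> \
   --subnets subnet-private-aaaa subnet-private-bbbb \
   --instance-types t3.medium \
   --scaling-config minSize=1,maxSize=3,desiredSize=2
 aws eks wait nodegroup-active --cluster-name <cluster-name> --nodegroup-name <cluster-name>-ng-1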
Verify with
kubectl get nodes
Provision Nodes
Create a dedicated IAM role following the procedure described here. Use the "EKS - Cluster" use case.
Edit the role's trust relationship and ensure that the IAM user used to create the cluster (arn:aws:iam::999999999999:user/some.user) has sts:AssumeRole permission on the IAM role. This is how an IAM User is enabled to assume an IAM Role.
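The resulting trust policy looks roughly like the sketch below; the account and user mirror the example ARN above, and <role-name> is a placeholder. Note that update-assume-role-policy replaces the entire trust policy, so the existing EKS service statement is kept in the document:

 aws iam update-assume-role-policy \
   --role-name <role-name> \
   --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       { "Effect": "Allow",
         "Principal": { "Service": "eks.amazonaws.com" },
         "Action": "sts:AssumeRole" },
       { "Effect": "Allow",
         "Principal": { "AWS": "arn:aws:iam::999999999999:user/some.user" },
         "Action": "sts:AssumeRole" }
     ]
   }'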
Deletion Procedure
Delete Nodes and Cluster
Delete Nodes
Go to the cluster → Compute → Node Groups → Select → Delete.
Deleting the Node Group automatically terminates and deletes the instances.
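CLI sketch of the same step, using the example names from the provisioning section:

 aws eks delete-nodegroup --cluster-name <cluster-name> --nodegroup-name <cluster-name>-ng-1
 aws eks wait nodegroup-deleted --cluster-name <cluster-name> --nodegroup-name <cluster-name>-ng-1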
Delete the Cluster
Delete the cluster.
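CLI sketch, assuming all node groups have already been deleted (the API refuses to delete a cluster that still has active node groups):

 aws eks delete-cluster --name <cluster-name>
 aws eks wait cluster-deleted --name <cluster-name>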
Delete the Associated Resources
Remove the associated resources (subnets, VPC, etc.) by deleting the CloudFormation stack that was used to create them.
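CLI sketch, using the stack name written down at creation time (eks-vpc-stack in the earlier example):

 aws cloudformation delete-stack --stack-name eks-vpc-stack
 aws cloudformation wait stack-delete-complete --stack-name eks-vpc-stack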