Amazon EKS Create and Delete Cluster
External
Internal
Creation Procedure
Create Resources
Create a dedicated VPC and associated resources using the pre-defined CloudFormation stack as described here: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html.
- Use "public and private subnets" option.
- Do not specify an IAM role.
Write down the name of the stack, as it will be needed later to delete the resources.
Also write down the VpcId, SecurityGroups and SubnetIds values from the stack outputs.
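If you prefer the command line, the same step can be sketched as below; the stack name my-eks-vpc-stack is a placeholder, and the template URL should be taken from the guide linked above (the one shown here is an assumption and may have been superseded):

aws cloudformation create-stack \
  --stack-name my-eks-vpc-stack \
  --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml

# once the stack reaches CREATE_COMPLETE, read VpcId, SecurityGroups and SubnetIds from its outputs
aws cloudformation describe-stacks --stack-name my-eks-vpc-stack --query 'Stacks[0].Outputs'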
Create the Cluster Service Role
For an explanation of what a cluster service role is, see:
Creation
IAM console → Create role → AWS Service → Select a service to view its use cases → EKS → Select your use case → EKS Cluster → Next: Permissions (by default AmazonEKSClusterPolicy is selected) → Next: Tags
Role name:
<cluster-name>-service-role
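The CLI equivalent, as a sketch: the trust policy below is the standard one that lets the EKS service assume the role; eks-trust-policy.json is a placeholder file name.

# eks-trust-policy.json - lets the EKS control plane assume this role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

# create the role and attach the managed policy selected by the console flow above
aws iam create-role --role-name <cluster-name>-service-role --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy --role-name <cluster-name>-service-role --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy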
Reuse from an Existing Cluster
Cluster → Configuration → Details → "Cluster IAM Role ARN"
Create the Cluster
Cluster Configuration
The cluster will be accessible to the IAM User that creates it without any additional configuration. Other users can be added as described in the Allowing Additional Users to Access the EKS Cluster section, after the cluster is created.
Create the cluster.
From the Console → EKS → Create Cluster
Name: ...
Cluster Service Role: the role created above, <cluster-name>-service-role.
Secrets Encryption
Disabled.
Tags
Optional.
Next.
Networking
VPC
Subnets (all existing subnets in the VPC are preselected)
If you intend to deploy LoadBalancer services that should be available externally, you must select at least one public subnet; otherwise, LoadBalancer deployment will fail with: "Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB"
Security groups
The security groups provided here are additional security groups. Can they be left empty and updated later? The cluster security group is created automatically during the cluster creation process.
More concepts around EKS security groups are available here: EKS Security Groups
Cluster endpoint access
Public and private.
Networking add-ons
Next.
Configure logging
Control Plane Logging
All disabled.
Next.
Create.
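The console walkthrough above has a CLI equivalent; a sketch, where the account ID, subnet IDs and security group ID are placeholders to be replaced with the VPC stack outputs recorded earlier:

aws eks create-cluster \
  --name <cluster-name> \
  --role-arn arn:aws:iam::111122223333:role/<cluster-name>-service-role \
  --resources-vpc-config subnetIds=subnet-aaa,subnet-bbb,subnet-ccc,securityGroupIds=sg-xxx,endpointPublicAccess=true,endpointPrivateAccess=true

# cluster creation takes a while; block until the cluster reports ACTIVE
aws eks wait cluster-active --name <cluster-name>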
Test Access
At this point there are no nodes, but the cluster should be available. You can set up a "creator" context as described here, using the same IAM User identity (not a role): Connect to an EKS Cluster with kubectl.
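A minimal check, assuming the AWS CLI runs under the creator IAM User's credentials (cluster name and region are placeholders):

aws eks update-kubeconfig --name <cluster-name> --region us-east-1

# the default "kubernetes" ClusterIP service should be listed even with no nodes
kubectl get svc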
Provision Compute Nodes
Create a Compute IAM Role
Create a "compute" Role.
AWS Services → EC2 → EC2 → add the AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryReadOnly policies
Name:
<cluster-name>-compute-role
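The CLI equivalent, as a sketch; ec2-trust-policy.json is a placeholder file name for the standard EC2 trust policy:

# ec2-trust-policy.json - lets EC2 instances (the worker nodes) assume this role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

# create the role and attach the three managed policies listed above
aws iam create-role --role-name <cluster-name>-compute-role --assume-role-policy-document file://ec2-trust-policy.json
for p in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy --role-name <cluster-name>-compute-role --policy-arn arn:aws:iam::aws:policy/$p
done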
Create Node Group
Select the EKS cluster, go to the Configuration tab, then Compute tab.
Add Node Group.
Name: <cluster-name>-node-group[-postfix]
Node IAM role - the one created previously.
Next.
AMI type: Amazon Linux 2
Capacity type: On-Demand
Instance type: t3.2xlarge
Disk size: 20
Minimum size: 3, Maximum size: 3, Desired size: 3
Next.
Select only the private subnets. Even if we deploy public load balancers later, they will work as long as the cluster was created with at least one public subnet.
SSH Key Pair - this gives access to the nodes.
Allow remote access from: All
Create.
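The same node group can be created from the CLI; a sketch mirroring the settings above, with placeholder account ID, subnet IDs and key pair name:

aws eks create-nodegroup \
  --cluster-name <cluster-name> \
  --nodegroup-name <cluster-name>-node-group \
  --node-role arn:aws:iam::111122223333:role/<cluster-name>-compute-role \
  --subnets subnet-private-aaa subnet-private-bbb \
  --ami-type AL2_x86_64 \
  --capacity-type ON_DEMAND \
  --instance-types t3.2xlarge \
  --disk-size 20 \
  --scaling-config minSize=3,maxSize=3,desiredSize=3 \
  --remote-access ec2SshKey=<key-pair-name>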
Once the node group becomes active, verify that the nodes have joined the cluster:
kubectl get nodes
Other matters:
- If you plan to use NFS, make sure the security groups allow inbound NFS traffic (TCP port 2049), as sketched below.
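A hypothetical example, opening NFS on one security group to a client security group (both IDs are placeholders):

# allow inbound NFS (TCP 2049) from instances in the source security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 2049 \
  --source-group sg-0fedcba9876543210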
Scale-Up Node Group
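From the CLI, scaling an existing managed node group is a single call; a sketch, with placeholder names and sizes:

aws eks update-nodegroup-config \
  --cluster-name <cluster-name> \
  --nodegroup-name <cluster-name>-node-group \
  --scaling-config minSize=3,maxSize=6,desiredSize=6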
Configure Access
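Additional IAM Users are typically mapped in the aws-auth ConfigMap, as described in the Allowing Additional Users to Access the EKS Cluster section referenced above. A sketch; the account ID, user name and the system:masters group are placeholders to be adapted:

kubectl edit -n kube-system configmap/aws-auth

# then add an entry like this under data.mapUsers:
mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/someuser
    username: someuser
    groups:
      - system:masters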
Deletion Procedure
Delete Nodes and Cluster
Delete Nodes
Go to the cluster → Compute → Node Groups → Select → Delete.
Deleting the Node Group automatically terminates and deletes the instances.
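The CLI equivalent, with placeholder names:

aws eks delete-nodegroup --cluster-name <cluster-name> --nodegroup-name <cluster-name>-node-group

# block until the node group is gone; the cluster cannot be deleted while node groups exist
aws eks wait nodegroup-deleted --cluster-name <cluster-name> --nodegroup-name <cluster-name>-node-group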
Delete the Cluster
Delete the cluster.
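From the CLI, once all node groups have been deleted:

aws eks delete-cluster --name <cluster-name>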
Delete the Associated Resources
Remove the associated resources (subnets, VPC, etc.) by running Delete on the CloudFormation stack used to create resources.
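The CLI equivalent, assuming the stack name recorded during creation:

aws cloudformation delete-stack --stack-name my-eks-vpc-stack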