EKS Console Walkthrough

VPC for our EKS

  • We'll be creating a VPC that spans 2 AZs and has 4 subnets, 2 in each AZ
  • Each AZ gets one private and one public subnet, so that if the cluster creates a load balancer, it can target pods in both AZs.
  • VPC Components (a CLI sketch of the same build follows this list):
    • VPC
    • A new internet gateway (IGW).
      Don't forget to attach it to the VPC.
    • a private route table (privateRT) created from the default RT of the VPC
    • a public route table (publicRT)
    • a route inside the public RT, pointing to the IGW
    • subnet private-a-subnet, in AZ a, associated with privateRT
    • subnet private-b-subnet, in AZ b, associated with privateRT
    • subnet public-a-subnet, in AZ a, associated with publicRT
    • subnet public-b-subnet, in AZ b, associated with publicRT
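  • For reference, here is a minimal AWS CLI sketch of the same VPC build.
    The CIDR blocks, the us-west-2a/b AZ names, and the placeholder IDs (vpc-XXXX, igw-XXXX, rtb-PUBLIC, subnet-PUBLIC-A/B) are assumptions; substitute the real IDs each command returns:
    aws ec2 create-vpc --cidr-block 10.0.0.0/16
    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --vpc-id vpc-XXXX --internet-gateway-id igw-XXXX
    aws ec2 create-route-table --vpc-id vpc-XXXX                # publicRT
    aws ec2 create-route --route-table-id rtb-PUBLIC --destination-cidr-block 0.0.0.0/0 --gateway-id igw-XXXX
    aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.1.0/24 --availability-zone us-west-2a   # private-a-subnet
    aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.2.0/24 --availability-zone us-west-2b   # private-b-subnet
    aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.3.0/24 --availability-zone us-west-2a   # public-a-subnet
    aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.4.0/24 --availability-zone us-west-2b   # public-b-subnet
    aws ec2 associate-route-table --route-table-id rtb-PUBLIC --subnet-id subnet-PUBLIC-A
    aws ec2 associate-route-table --route-table-id rtb-PUBLIC --subnet-id subnet-PUBLIC-B
    # The private subnets stay associated with the VPC's default (main) route table, our privateRT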

EKS Service Role

  • Our new EKS cluster will need to talk to other AWS services from time to time.
  • For example, if we create a Kubernetes service of type LoadBalancer, our cluster has to call the Elastic Load Balancing service to create a new load balancer.
    The new role will give our cluster the required permissions.
  • Go to IAM, choose Roles from the menu, and click Create role
  • Choose AWS Service as the Trusted entity type.
    It is the EKS service that will be given these permissions.
  • Find and select EKS under Use case.
  • Then choose EKS - Cluster.
    This means that we want to allow the EKS service to do things our cluster will require.
    Again, the example is the creation of a load balancer.
    Click Next
  • Now we see that the console has already added an AWS-managed policy called AmazonEKSClusterPolicy to our new role. You may click the policy name to see exactly what it permits.
    Click Next
  • We can see here that the trust policy is ready for us.
    The EKS service is allowed to use (assume) the new role.
  • Scroll up and fill in the new role name: myAmazonEKSClusterRole
  • Then scroll down and click on Create role (a CLI equivalent follows)
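  • If you prefer the CLI, here is a sketch equivalent to the console steps above.
    First save this trust policy (the file name trust-policy.json is an assumption); it lets the EKS service assume the role:
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "eks.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    }
    Then create the role and attach the managed policy:
    aws iam create-role --role-name myAmazonEKSClusterRole --assume-role-policy-document file://trust-policy.json
    aws iam attach-role-policy --role-name myAmazonEKSClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy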

EKS Cluster

  • Go to the EKS service, click Add cluster, then choose Create
  • Basic configuration:
    • Name: my-cluster
    • Make sure that our new service role (myAmazonEKSClusterRole) is selected
    • Scroll to the bottom and click Next
  • Networking configuration:
    • Leave only the 2 private subnets
  • Skip all the rest of the configuration (next-next...)
  • Review and create.
  • The cluster is now creating.
    Don't continue until it is ready (a CLI sketch of this step follows).
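  • For reference, a CLI sketch of the same step (ACCOUNT_ID and the subnet IDs are placeholders for your own values):
    aws eks create-cluster \
      --name my-cluster \
      --role-arn arn:aws:iam::ACCOUNT_ID:role/myAmazonEKSClusterRole \
      --resources-vpc-config subnetIds=subnet-PRIVATE-A,subnet-PRIVATE-B
    # Blocks until the control plane is ready
    aws eks wait cluster-active --name my-cluster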

Configuring kubectl

  • We use kubectl to control the Kubernetes cluster
  • To configure our environment to point to the new cluster, type:
    aws eks update-kubeconfig --region <region> --name <cluster-name>
    Here's an example:
    aws eks update-kubeconfig --region us-west-2 --name my-cluster
  • To test:
    kubectl get services
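  • To double-check that kubectl points at the right cluster:
    kubectl config current-context    # the context name should mention my-cluster
    kubectl cluster-info              # prints the control-plane endpoint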

Creating nodes

  • We'll use a managed node group (we'll cover Fargate some other time)
  • Create an IAM node role that allows nodes to access the cluster (see the CLI sketch at the end of this section):
    • Trusted entity type: AWS Service
    • Use case: select EC2, then EC2 (then click Next)
    • Add this policy: AmazonEKSWorkerNodePolicy (click Next).
      Note that AWS also recommends attaching AmazonEC2ContainerRegistryReadOnly and AmazonEKS_CNI_Policy so the nodes can pull images and configure pod networking.
    • Trust policy is OK (click Next)
    • Role name: myAmazonEKSNodeRole
    • Review and create
  • Add a node group:
    • Go to your cluster --> Compute --> Node groups --> Add node group
    • Name: my-nodegroup
    • Choose the role you have just created (then click Next)
  • Set compute and scaling configuration (accept defaults and click Next)
  • Specify networking (accept defaults and click Next)
  • Review your managed node group configuration and choose Create.
  • After several minutes, the Status in the Node Group configuration section will change from Creating to Active.
    Don't continue to the next step until the status is Active.
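  • Here is a CLI sketch of this whole section, reusing the names above (the node-trust-policy.json file name, ACCOUNT_ID, and the subnet IDs are placeholders).
    First save node-trust-policy.json, letting EC2 instances assume the role:
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "ec2.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    }
    Then create the role and the node group:
    aws iam create-role --role-name myAmazonEKSNodeRole --assume-role-policy-document file://node-trust-policy.json
    aws iam attach-role-policy --role-name myAmazonEKSNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
    aws eks create-nodegroup \
      --cluster-name my-cluster \
      --nodegroup-name my-nodegroup \
      --node-role arn:aws:iam::ACCOUNT_ID:role/myAmazonEKSNodeRole \
      --subnets subnet-PRIVATE-A subnet-PRIVATE-B
    # Blocks until the node group is Active, then verify the nodes joined
    aws eks wait nodegroup-active --cluster-name my-cluster --nodegroup-name my-nodegroup
    kubectl get nodes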