How does kubeconfig work with aws eks get-token?
Kubeconfig organises information about clusters, users, namespaces and authentication mechanisms.
kubectl uses the kubeconfig to find the information it needs to choose a cluster and communicate with that cluster's API server. By default it looks for a file named config in the $HOME/.kube directory.
In this post we will create the kubeconfig manually and understand how it works with aws eks get-token.
I have created an EKS cluster (test-cluster) using eksctl and connected to it.
1. Delete the ~/.kube folder (created by eksctl)
rm -r ~/.kube
2. Set values for region_code, cluster_name and account_id
export region_code="ap-south-1"
export cluster_name="test-cluster"
export account_id="account_id"
3. Get the cluster endpoint for your cluster
cluster_endpoint=$(aws eks describe-cluster \
  --region $region_code \
  --name $cluster_name \
  --query "cluster.endpoint" \
  --output text)
4. Get the Certificate Authority Data
The certificate authority data is the base64-encoded Certificate Authority (CA) certificate.
certificate_data=$(aws eks describe-cluster \
  --region $region_code \
  --name $cluster_name \
  --query "cluster.certificateAuthority.data" \
  --output text)
To see the certificate, decode this value and save it to a file eks.crt.
aws eks describe-cluster --name test-cluster --query "cluster.certificateAuthority.data" --output text | base64 --decode > eks.crt
Inspect the certificate using openssl.
openssl x509 -in eks.crt -text -noout
In this case Issuer: CN=kubernetes and Subject: CN=kubernetes. As the Issuer and Subject are the same, this is a self-signed certificate issued by EKS.
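As a quick sanity check of the Issuer-equals-Subject observation, here is a small sketch using a throwaway self-signed certificate with CN=kubernetes (a stand-in for the real eks.crt):

```shell
#!/bin/bash
# Generate a throwaway self-signed cert and compare Issuer vs Subject.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kubernetes" \
  -keyout "$tmpdir/demo.key" -out "$tmpdir/demo.crt" 2>/dev/null

issuer=$(openssl x509 -in "$tmpdir/demo.crt" -noout -issuer)
subject=$(openssl x509 -in "$tmpdir/demo.crt" -noout -subject)
echo "$issuer"
echo "$subject"
rm -rf "$tmpdir"
```

For a certificate signed by a separate CA, the two lines would differ.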
5. Create the ~/.kube directory
mkdir -p ~/.kube
6. Create the config file using the following command.
This creates a file ~/.kube/config and populates all the required fields.
#!/bin/bash
read -r -d '' KUBECONFIG <<EOF
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: $certificate_data
    server: $cluster_endpoint
  name: arn:aws:eks:$region_code:$account_id:cluster/$cluster_name
contexts:
- context:
    cluster: arn:aws:eks:$region_code:$account_id:cluster/$cluster_name
    user: arn:aws:eks:$region_code:$account_id:cluster/$cluster_name
  name: arn:aws:eks:$region_code:$account_id:cluster/$cluster_name
current-context: arn:aws:eks:$region_code:$account_id:cluster/$cluster_name
kind: Config
preferences: {}
users:
- name: arn:aws:eks:$region_code:$account_id:cluster/$cluster_name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
        - --region
        - $region_code
        - eks
        - get-token
        - --cluster-name
        - $cluster_name
        # - "-r"
        # - "arn:aws:iam::$account_id:role/my-role"
      # env:
      #   - name: "AWS_PROFILE"
      #     value: "aws-profile"
EOF
echo "${KUBECONFIG}" > ~/.kube/config
Here the client token is obtained using the AWS CLI.
aws eks get-token --cluster-name test-cluster
This returns an object of kind ExecCredential whose status field contains the token.
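The returned object has roughly this shape (a sketch; the timestamp and token values are placeholders):

```json
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2021-01-01T00:00:00Z",
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYXAtc291dGgtMS5hbWF6b25hd3MuY29tLw..."
  }
}
```

kubectl reads status.token and sends it as the Bearer token to the API server.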
What is this token?
- Each token starts with the prefix k8s-aws-v1. followed by a base64-encoded string.
- The string, when decoded, is a presigned URL which the API server uses to validate who the user is. The underlying request is GetCallerIdentity.
- The identity is then used for authorization using Kubernetes RBAC.
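As a sketch, the embedded URL can be recovered from a token like this (the helper name decode_eks_token is my own; the payload is URL-safe base64 without padding, so the alphabet and padding are restored before decoding):

```shell
#!/bin/bash
# Decode the presigned STS URL embedded in an EKS bearer token.
decode_eks_token() {
  local payload="${1#k8s-aws-v1.}"
  payload=$(printf '%s' "$payload" | tr '_-' '/+')  # URL-safe -> standard alphabet
  case $(( ${#payload} % 4 )) in                    # restore stripped '=' padding
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 --decode
}

# Usage (needs AWS credentials):
# decode_eks_token "$(aws eks get-token --cluster-name test-cluster --query 'status.token' --output text)"
```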
If you hit the URL via a browser or Postman, you would expect a response containing the caller identity.
However, this did not work: the response complained that the signature does not match!
This happens because the signature covers a header, x-k8s-aws-id: <cluster-name>, which is missing when calling the URL directly from a browser or Postman. After adding this header, the response correctly shows the caller identity.
The user in the response is the same IAM user that I used to create the EKS cluster!
7. Verify the cluster access
kubectl get svc
kubectl get nodes
How does it work?
The IAM identity obtained earlier is checked for RBAC permissions by the Kubernetes cluster. IAM users or roles are mapped in the aws-auth configmap, so let us check that configmap.
kubectl describe configmap aws-auth -n kube-system
However, there is no corresponding entry in the aws-auth configmap. So how did it work? It turns out that this user already has permission to access the EKS cluster! See the snippet below from the AWS documentation.
When you create an Amazon EKS cluster, the AWS Identity and Access Management (IAM) entity user or role, such as a federated user that creates the cluster, is automatically granted
system:masters
permissions in the cluster's role-based access control (RBAC) configuration in the Amazon EKS control plane. This IAM entity doesn't appear in any visible configuration, so make sure to keep track of which IAM entity originally created the cluster.
However, for additional users or roles you need corresponding entries in the aws-auth configmap.
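For example, mapping an additional IAM user into the cluster looks roughly like this in aws-auth (a sketch; the user ARN, username and group are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::<account_id>:user/dev-user
      username: dev-user
      groups:
        - system:masters
```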
Summary
kubectl uses the command aws eks get-token specified in the kubeconfig to get the token.
The token returned is a base64-encoded presigned GetCallerIdentity URL. The GetCallerIdentity response provides the identity of the user, and this identity is then authorized using Kubernetes RBAC.
In the scenario discussed here we don't see any RBAC entries for the identity: there are no entries in the aws-auth configmap, which seems odd! The AWS documentation points out that the cluster creator is granted system:masters permissions, but this mapping is not visible anywhere.
It looks like this is a control we have to let go of when using a managed Kubernetes platform such as EKS. However, it is worth understanding what is going on in order to assess the security posture of your Kubernetes cluster.