Kubernetes RBAC Tutorial: Stop Giving Cluster-Admin to Everyone

[Figure: Kubernetes RBAC diagram showing roles, bindings, and service accounts securing a production cluster]

Your team just spun up a new Kubernetes cluster. Onboarding is fast, everyone’s excited, and to keep things moving — someone runs kubectl create clusterrolebinding give-access --clusterrole=cluster-admin --serviceaccount=default:default. Problem solved, right?

Until it isn’t.

I’ve watched this exact pattern cause real production pain: a junior developer accidentally deleted a namespace that had no backups. A CI/CD pipeline with cluster-admin leaked its token, and an attacker got full cluster access. A runaway script dropped every CronJob in the cluster because nothing was stopping it.

Kubernetes RBAC (Role-Based Access Control) exists precisely to prevent these scenarios. It’s how you give people and workloads exactly the permissions they need — nothing more. If you’re not using it properly, your cluster is a time bomb.

This guide covers RBAC from scratch, with real YAML you can use today.


What Is Kubernetes RBAC?

RBAC is Kubernetes’ authorization system. It controls who can do what to which resources in your cluster.

Before RBAC was enabled by default (Kubernetes 1.8+), clusters often used ABAC (Attribute-Based Access Control) — a flat file of rules that was painful to manage. RBAC replaced it with a dynamic, API-driven model where you define permissions as Kubernetes objects.

The core idea: subjects (users, groups, service accounts) are bound to roles (sets of rules) that define what actions are allowed on which resources.


The Four Core RBAC Objects

There are exactly four resource types that make up the RBAC system. You need to understand all four.

1. Role — Namespace-Scoped Permissions

A Role grants permissions within a single namespace. It cannot touch resources in other namespaces or cluster-wide resources.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
  - apiGroups: [""]        # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]

This role allows reading pods in the production namespace only. That’s it. No deleting, no exec, no access to anything else.

Key fields explained:

  • apiGroups: Which API group the resource belongs to. Core resources (pods, services, configmaps) use "". Others use their group name (e.g., apps for Deployments, batch for Jobs).
  • resources: The Kubernetes resource type, lowercase plural (pods, services, deployments).
  • verbs: What actions are permitted. Common verbs: get, list, watch, create, update, patch, delete.
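
To see how apiGroups works in practice, here’s a hypothetical role (the deployment-reader name is illustrative) that mixes a core-group rule with an apps-group rule:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-reader   # illustrative name
  namespace: production
rules:
  # Pods live in the core API group, written as ""
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  # Deployments live in the "apps" group
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]
```

Each entry under rules is independent — a subject bound to this role gets the union of all rules.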

2. ClusterRole — Cluster-Wide Permissions

A ClusterRole is like a Role, but it applies across the entire cluster. Use it for:

  • Cluster-wide resources (nodes, persistent volumes, namespaces)
  • Resources that span namespaces
  • Non-resource URLs (e.g., /healthz)

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]

Important: You can also bind a ClusterRole to a specific namespace using a RoleBinding. This is actually a common pattern — define a ClusterRole once, reuse it across many namespaces.

3. RoleBinding — Grant a Role in a Namespace

A RoleBinding attaches a Role (or ClusterRole) to a subject within a specific namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
  - kind: User
    name: jane           # Case-sensitive
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

This binds the pod-reader role to user jane — but only in the production namespace.

Subjects can be:

  • User — a human user (managed by your auth provider, not Kubernetes itself)
  • Group — a group of users
  • ServiceAccount — a Kubernetes service account (for workloads/pods)

4. ClusterRoleBinding — Grant a ClusterRole Cluster-Wide

A ClusterRoleBinding attaches a ClusterRole to a subject across the entire cluster.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-reader-global
subjects:
  - kind: Group
    name: ops-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io

This gives every member of the ops-team group permission to read nodes across the whole cluster.


Understanding the Permission Matrix

Here’s how the four objects relate to each other:

Object             | Scope            | Binds To
-------------------|------------------|---------------------
Role               | Single namespace | —
ClusterRole        | Cluster-wide     | —
RoleBinding        | Single namespace | Role or ClusterRole
ClusterRoleBinding | Cluster-wide     | ClusterRole only

A common source of confusion: RoleBinding can reference a ClusterRole. This is intentional — you define a ClusterRole as a reusable template, then bind it per-namespace where needed. This saves you from creating duplicate Role objects in every namespace.
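
Here’s a minimal sketch of that reuse pattern (the pod-viewer name is illustrative): one ClusterRole defined cluster-wide, then bound inside a single namespace with a RoleBinding.

```yaml
# Defined once, cluster-wide
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-viewer            # illustrative name
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Bound per-namespace; repeat this binding in each namespace that needs it
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-viewer
  namespace: staging
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole           # a RoleBinding may reference a ClusterRole
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
```

Even though roleRef points at a ClusterRole, jane only receives these permissions inside staging — the binding’s namespace sets the scope.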


Real-World Example: Least Privilege for a Dev Team

This is where RBAC gets practical. Here’s how I’d set up permissions for a development team that needs to work in the staging namespace but shouldn’t be able to touch anything else.

Step 1: Define what they need

Devs need to:

  • View and describe pods (debugging)
  • Read logs
  • List/describe deployments, services, configmaps
  • Create/update their own configmaps
  • Port-forward to pods (for local debugging)

Devs should NOT be able to:

  • Delete pods, deployments, or services
  • Access secrets
  • Touch other namespaces
  • Create or delete namespaces

Step 2: Create the Role

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: staging
rules:
  # Read pods and their logs
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  # Port-forward access
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
  # Read services and endpoints
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "list", "watch"]
  # Full access to configmaps (but not secrets)
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  # Read deployments and replicasets
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
  # Read jobs and cronjobs
  - apiGroups: ["batch"]
    resources: ["jobs", "cronjobs"]
    verbs: ["get", "list", "watch"]

Step 3: Create the RoleBinding for the dev group

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: staging
subjects:
  - kind: Group
    name: dev-team      # Matches group from your OIDC/SSO provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

Now the entire dev-team group gets developer-level access in staging — and nothing else.


Service Account RBAC — Securing Your Workloads

Human users aren’t the only thing you need to restrict. Pods run as service accounts, and if those accounts have too much access, a compromised workload can wreak havoc.

The default service account in every namespace has almost no API permissions out of the box (since Kubernetes 1.6). Good. But the moment you need a pod to call the Kubernetes API — a CI runner, a GitOps agent, a custom operator — you need to create a dedicated service account with scoped permissions.

Example: A pod that needs to read ConfigMaps

# Service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: configmap-reader
  namespace: production

---
# Role with minimal permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]

---
# Bind the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-reader
  namespace: production
subjects:
  - kind: ServiceAccount
    name: configmap-reader
    namespace: production
roleRef:
  kind: Role
  name: configmap-reader
  apiGroup: rbac.authorization.k8s.io

---
# Reference the service account in your pod/deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  template:
    spec:
      serviceAccountName: configmap-reader   # <-- use it here
      containers:
        - name: my-app
          image: my-app:latest

If this pod gets compromised, an attacker can only read configmaps in that namespace. They cannot list secrets, delete pods, or access other namespaces.


How to Check and Debug RBAC Permissions

Once you’ve set things up, you need to verify it actually works. kubectl auth can-i is your best friend here.

# Can the current user list pods?
kubectl auth can-i list pods

# Can a specific user delete deployments in the staging namespace?
kubectl auth can-i delete deployments --namespace staging --as jane

# Can a service account get secrets?
kubectl auth can-i get secrets \
  --namespace production \
  --as system:serviceaccount:production:configmap-reader

# List ALL permissions for the current user
kubectl auth can-i --list

# List all permissions in a specific namespace
kubectl auth can-i --list --namespace staging

To see what roles are bound to a subject:

# View all role bindings in a namespace
kubectl get rolebindings -n staging -o wide

# Describe a specific role binding
kubectl describe rolebinding developer-binding -n staging

# View cluster-wide bindings
kubectl get clusterrolebindings -o wide | grep dev-team

To inspect what a role actually permits:

kubectl describe role developer -n staging
kubectl describe clusterrole node-reader


Common Mistakes and How to Avoid Them

1. Using cluster-admin for everything

The cluster-admin ClusterRole gives unrestricted access to everything. Never give it to service accounts, CI pipelines, or regular users. It’s for cluster operators only — and even then, use it sparingly.

Check who has cluster-admin right now:

kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects'

If you see service accounts or regular users in that output, fix it.

2. Forgetting pods/log is a subresource

Giving access to pods doesn’t automatically include pods/log, pods/exec, or pods/portforward. These are subresources and need to be listed explicitly. I’ve seen teams give devs pod access and then wonder why they can’t read logs.

resources: ["pods", "pods/log", "pods/exec", "pods/portforward"]

3. Wildcard verbs and resources

This is lazy RBAC that defeats the purpose:

# Don't do this
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]

This is effectively cluster-admin scoped to a namespace. Enumerate what you actually need.

4. Not scoping service account bindings by namespace

When binding a role to a service account, always specify the namespace in the subjects block:

subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: production   # Always include this

Omitting the namespace can cause unexpected behavior or RBAC rules that don’t match.

5. Giving RBAC access to secrets without thinking

Secrets are special. A pod with get secrets access in a namespace can read database credentials, API keys, and TLS private keys. Apply the strictest possible controls here — most workloads have no business reading secrets directly. Use Kubernetes secret injection via environment variables or volume mounts instead of API access.
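
As a sketch of that approach (the pod and Secret names are illustrative), mounting a Secret value as an environment variable keeps the workload out of the secrets API entirely:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: production
spec:
  containers:
    - name: my-app
      image: my-app:latest
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials   # illustrative Secret name
              key: password
```

The kubelet injects the value at pod startup, so the pod’s service account needs no get secrets permission — a compromised container can read its own environment, but not enumerate every Secret in the namespace.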


Best Practices Summary

  • Principle of least privilege — start with no access, add only what’s needed
  • Use groups, not individual users — bind roles to groups from your SSO/OIDC provider, not individual user names. People join and leave teams; groups handle this automatically
  • One service account per workload — don’t reuse service accounts across multiple pods with different needs
  • Avoid cluster-wide bindings when namespace bindings will do — a RoleBinding to a ClusterRole is cleaner and safer than a ClusterRoleBinding for most use cases
  • Audit periodically — run kubectl get clusterrolebindings,rolebindings -A quarterly. Permissions accumulate over time
  • Name bindings clearly — read-pods-in-staging is better than rb-1. You’ll thank yourself during an incident
  • Use kubectl auth can-i before filing “it doesn’t work” tickets — most RBAC issues are findable in 30 seconds with this command

What’s Next

RBAC is the foundation of Kubernetes security — but it’s not the whole picture. Once you have RBAC locked down, the natural next steps are:

  • Pod Security Admission — restricting what containers can do at the runtime level (replacing deprecated PodSecurityPolicy)
  • Network Policies — controlling which pods can talk to which other pods
  • Secrets management — moving beyond native Kubernetes Secrets to tools like External Secrets Operator with a secrets backend (AWS Secrets Manager, HashiCorp Vault)
  • Audit logging — recording who did what to the API server, for compliance and incident response

If you’re running a managed Kubernetes cluster and want to experiment with RBAC without worrying about the control plane, DigitalOcean Kubernetes is worth a look — it’s one of the simpler managed K8s setups I’ve worked with for testing RBAC configs.

The point of RBAC isn’t to make your team’s life harder. It’s to make sure that when something goes wrong — and it will — the blast radius is contained. That CI pipeline that gets compromised should only be able to do what it was designed to do, nothing more.

Set it up right once, audit it regularly, and you won’t be the team that has a “someone accidentally deleted prod” story.
