Kubernetes Dashboard, Authentication, Isolation

Hello All. Today we are going to look at Kubernetes Dashboard, Authentication, and Isolation.

The Code

Let’s put the code up front; that way, if you don’t want to bother with the article you can start by poking around on your own. Example scripts and manifests are located at the kube-dex-dashboard GitHub repo.

Still there? Great! Let’s talk about the problem…

The Problem

Per this thread on GitHub, we had a problem: we had installed a cluster with kubeadm, configured the Kubernetes Dashboard, and used CoreOS dex to integrate our backing FreeIPA domain for authentication.

The logic flow to demonstrate the problem is very simple:

  1. Use kubectl proxy to access the Dashboard
  2. In the Dashboard, select a namespace the user is not permitted to use
  3. Create a deployment; the same operation is denied when the user runs kubectl from the CLI, but it succeeds in the Dashboard (see the sketch after this list)
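
To make that concrete, here is a hedged sketch of the mismatch. The manifest name and the wrapper invocation below are illustrative, not exact commands from the repo:


# From the CLI, the API Server sees my dex identity and denies the request:
sab-k8s.sh kubectl --namespace=kube-system create -f my-deployment.yaml
# Error from server (Forbidden): ...

# Through the Dashboard (kubectl proxy plus a browser), the very same create
# succeeds, because the Dashboard's own service account makes the API call:
kubectl proxy
# ...then create the identical deployment from the Dashboard UI.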

So that is the what; now for the why. Out of the box, the Kubernetes Dashboard runs as a system-level identity, normally with full cluster permissions. The issue arises when a user authenticates and uses the Dashboard: every API call the user makes through it effectively runs as that same system identity.

Effectively, we could not let our users take advantage of the Kubernetes Dashboard, because doing so amounted to privilege escalation. Bummer.

The Solution

Our solution works around this problem by creating multiple Dashboards – one for each authorized user. It’s not pretty, it’s not particularly scalable, but it works.

Let’s Look at dex, AUTHN, and AUTHZ

Before we jump into the specific multi-dashboard setup, let's start by looking at authentication for our cluster. Kubernetes authentication is implemented by the Kubernetes API Server; this makes sense because commands issued via kubectl (the Kubernetes CLI) execute against the API Server. It follows that to configure authentication within Kubernetes, you set specific flags in your /etc/kubernetes/manifests/kube-apiserver.yaml manifest (for a kubeadm-based install).

Kubernetes API Server Configuration and dex

The following is how we configured the API server to delegate authentication and authorization to dex:


spec:
  containers:
  - command:
    - kube-apiserver
    [...various cruft...]
    - --authorization-mode=RBAC
    - --oidc-issuer-url=https://kubeadm-clu2.hlsdev.local:32000
    - --oidc-client-id=[our-secret]
    - --oidc-ca-file=[issuing-CA-key]
    - --oidc-username-claim=sub
    - --oidc-groups-claim=groups

Let’s quickly discuss the main settings (for gory details on setting up dex as an authenticator with RBAC, see Kubernetes RBAC Authentication setup).

  • --authorization-mode – indicates we are using RBAC for authorization
  • --oidc-issuer-url – URL of the dex issuer (the OIDC provider)
  • --oidc-client-id – the OIDC client ID of the dex login app; the API Server only accepts ID tokens issued for this client
  • --oidc-ca-file – the CA certificate used to validate the TLS certificate presented by the OIDC issuer
  • --oidc-username-claim – as users are authenticated by dex, a set of “claims” is returned; we map the sub claim to the username in the backing FreeIPA domain
  • --oidc-groups-claim – we map the groups claim to the list of groups the authenticated user belongs to in the backing FreeIPA domain

So the reason all of this matters is that our approach leverages permissions and group memberships to control access to Kubernetes API functions.

What authentication looks like with dex

Authentication using dex requires us to go through quite a few steps, all of which deserve an article of their own. Suffice it to say that our shell script uses a lot of curl commands to set up the initial login, indicate the authorizations to use, and extract the all-important bearer token. In our case, we have it all wrapped up so that we issue:


MacBook-Pro:~ l.abruce$ sab-k8s.sh dex-login
Enter password (abruce):
eyJhbGciOiJSUzI1NiIsImtpZCI6IjFiMmVhODYzYTJlMGI4Nzc4NzZkYzFkMWViODcxYmVkNDgwZWFmZjUifQ.eyJpc3MiOiJodHRwczovL2t1YmVhZG0tY2x1MS5obHNkZXYubG9jYWw6MzIwMDAiLCJzdWIiOiJDZ1poWW5KMVkyVVNFV3hrWVhCZmFHeHpaR1YyWDJ4dlkyRnMiLCJhdWQiOiJrdWJlYWRtLWNsdTEtbG9naW4tYXBwIiwiZXhwIjoxNTAzNjIyMzgyLCJpYXQiOjE1MDM1MzU5ODIsImF0X2hhc2giOiJiYnQteWhhMkJBckhkdzBRY1lieVNRIiwiZW1haWwiOiJhbmR5YnJ1Y2VuZXRAZ21haWwuY29tIiwiZW1haWxfdmVyaWZpZWQiOnRydWUsImdyb3VwcyI6WyJncnAua3ViZWFkbS1jbHUxLnVzZXJzIl0sIm5hbWUiOiJBbmRyZXcgQnJ1Y2UifQ.Kt7OrHoz-Smo0gop3aJ-IakE0-3OSjXD_fGpg6oLSyn20FG6aZQ4lO-UaSc_8lmLuIKVEV20_dTUrsrbzGDStExu-xfJube0Jy6WGqZqDo5K6j8Yz3HU5aycb5DXwQx97BucmFc42d2FOPht-ZFCpZd4xe0APw8uL_WcfNbYb62kGGMJarBP552SIMRgPdwZlA6yEfBvfdia5j5Pni6a4XOYECMHX-pff7Bgcu9D2esQOe3PTDGSw_bz97mMI9WKYMCB_VbyAuy90aPJJeLNyMg1QOSibAfR8v-CoHs6aIhKyeIQMbSkz4A7S0lJW3ATpUWJFqo72QosoGe9npFBIw
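
If you are curious what is inside that blob, the claims live in the second JWT segment. Here is a minimal sketch to peek at them, assuming a POSIX shell and a base64 that supports -d (on macOS you may need base64 -D):


# Decode the claims (payload) segment of the ID token printed above.
# JWT segments are unpadded base64url, so translate the alphabet and re-pad.
TOKEN='eyJhbGciOi...'   # paste the output of "sab-k8s.sh dex-login"
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
printf '%s' "$PAYLOAD" | base64 -d
# ...shows iss, sub (the encoded user ID), groups, email, name, exp, etc.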

That blob is the bearer (ID) token; it must be injected into every kubectl call so that the Kubernetes API Server can apply authorizations to the invoked query. Here's an example of a denied query (my user does not have permission to list all cluster namespaces):


MacBook-Pro:~ l.abruce$ sab-k8s.sh kubectl get namespace
Enter password (abruce):
Error from server (Forbidden): User "https://kubeadm-clu1.hlsdev.local:32000#CgZhYnJ1Y2USEWxkYXBfaGxzZGV2X2xvY2Fs" cannot list namespaces at the cluster scope. (get namespaces)

But other things work fine:


MacBook-Pro:~ l.abruce$ sab-k8s.sh kubectl get pod
NAME                          READY     STATUS    RESTARTS   AGE
gitlab-3787346051-0gk71       1/1       Running   0          49d
postgresql-3002604634-zt03b   1/1       Running   0          49d
redis-240693514-6pnzc         1/1       Running   0          49d
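
For reference, here is a minimal sketch of what the wrapper conceptually does with that token. The real sab-k8s.sh has considerably more plumbing, and the server address and CA path below are illustrative:


# Conceptually, the wrapper just adds the token (plus server/CA details)
# to every kubectl invocation. --token, --server, and --certificate-authority
# are standard kubectl flags; the values here are illustrative.
ID_TOKEN='eyJhbGciOi...'   # the bearer token printed by "sab-k8s.sh dex-login"

kubectl --token="$ID_TOKEN" \
        --server="https://kubeadm-clu1.hlsdev.local:6443" \
        --certificate-authority=/path/to/cluster-ca.pem \
        get pod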

Our dex Users and Kubernetes Permissions

Each authorized user on our Kubernetes cluster is assigned a unique namespace; the user is given “owner” permissions on that namespace. Our users by default have the most minimal privileges elsewhere in the cluster.

We’ll cover roles and roleBindings below.

A Note: FreeIPA Users vs. Kubernetes API Server Users

As you review the GitHub repo code, the keen-eyed will observe that the dex/rbac/sab-users.yaml template defines some variables to set up a new user. Specifically:

  • <%= kubeadm_dex_user %> – This is the name on the FreeIPA domain (such as “abruce” or “fbar”). In other words, it is in plaintext.
  • <%= @kubeadm_dex_login['issuer'] -%> – This is the URI to the dex “issuer” (see Using dex with Kubernetes).
  • <%= scope.function_k8s_dex_uid([kubeadm_dex_user, @kubeadm_dex_login['connector-id']]) -%> – This is a Puppet function call that combines the plaintext kubeadm_dex_user and the dex connector ID into the base64-encoded user ID that the Kubernetes API Server actually sees.

So what does that mean? It means that, internally to the Kubernetes API Server, a “user” is actually a reference to the provider plus a provider-specific encoding of the user ID. For example: my FreeIPA user ID abruce actually becomes https://kubeadm-clu1.hlsdev.local:32000#CgZhYnJ1Y2USEWxkYXBfaGxzZGV2X2xvY2Fs (the same encoded value visible in the denied-query error above) when represented by dex to the Kubernetes API Server. That presented a problem for us: we use Puppet to create roleBindings dynamically, so we had to translate the plaintext kubeadm_dex_user Puppet Hiera variable into the fully encoded value expected by the Kubernetes API Server.

For the sake of completeness, here is the Puppet parser function (Ruby) that performs this encoding. (We do not provide the rest of our Puppet setup because that would be a non-trivial task; we have lots of Puppet manifests and functions as part of our Kubernetes auto-deployment.)


# /etc/puppet/modules/sab/lib/puppet/parser/functions/k8s_dex_uid.rb
require "base64"

# k8s_dex_uid.rb, ABr
# Solve bogus problem of k8s user IDs under latest 2.4.1 dex

module Puppet::Parser::Functions
  newfunction(:k8s_dex_uid, :type => :rvalue) do |args|
    uid = args[0]
    connector = args[1]

    # create the encoding string dex uses internally:
    #   0x0A, len(uid), uid, 0x12, len(connector), connector
    encoded_string = 0x0A.chr + uid.length.chr + uid + 0x12.chr + connector.length.chr + connector

    # base64-encode and strip the '=' padding, which dex omits
    result = Base64.strict_encode64(encoded_string)
    while result[-1] == '=' do
      result = result.chop
    end

    # function result
    result
  end
end

You can find a bash version of the above in the GitHub repo under scripts/sab-k8s.sh (look for the sab-k8s-x-dex-username function).
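
The repo's bash version is the authoritative one; purely for illustration, here is a rough sketch of the same encoding logic in shell (assuming a POSIX shell with base64 and tr available):


# Reproduce dex's internal user ID encoding:
#   0x0A, len(uid), uid, 0x12, len(connector), connector -> base64, no '=' padding
dex_uid_sketch() {
  uid="$1"; connector="$2"
  {
    printf '\012'                                  # 0x0A
    printf "\\$(printf '%03o' "${#uid}")"          # single length byte for uid
    printf '%s' "$uid"
    printf '\022'                                  # 0x12
    printf "\\$(printf '%03o' "${#connector}")"    # single length byte for connector
    printf '%s' "$connector"
  } | base64 | tr -d '=\n'
}

# Example (matches the encoded "abruce" user shown earlier in this article):
#   dex_uid_sketch abruce ldap_hlsdev_local
#   -> CgZhYnJ1Y2USEWxkYXBfaGxzZGV2X2xvY2Fs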

Another Note: FreeIPA Groups in roleBindings aren’t encoded. Why Not?

More keen-eyed developers will have noticed that our roleBindings that target a backing FreeIPA group name are not encoded. Here's an excerpt from the binding we cover in full below:


---
# required binding to permit std users to access dashboard
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: sab-dashboard-role-binding
subjects:
- kind: Group
  name: "grp.kubeadm-clu<%= @kubeadm_clu -%>.users"

You are correct to be mystified; prior to dex version 2.4.1, users were written as plaintext (e.g. abruce as a user ID) rather than the base64-encoded value. In fact, this dex GitHub issue discusses the problem. And the fact that the same logic is not applied to group names is, well, confusing. But the short answer is that group claims are presented as-is (plaintext) while user IDs are encoded as we do above.

Of course – all of this is subject to change as soon as we upgrade our dex. Because why not…

So dex works – what is the problem?

So the above digression shows that our RBAC implementation works at the Kubernetes API layer. The problem arises because the Kubernetes Dashboard doesn't actually use the bearer token or our authentication/authorization strategy. Instead, you create a single Dashboard instance, normally running as a system service account, and then blithely tell your users: “Use kubectl proxy to access the Dashboard.” Doing that loses user isolation and per-user privileges, because the single Dashboard instance executes all Kubernetes API commands in the context of its own service account.

Brief Talk about Kubernetes Authorizations

We need to discuss Kubernetes authorization because it is at the heart of our Dashboard solution. Kubernetes RBAC authorization consists of roles and roleBindings (plus their cluster-wide counterparts, covered below). Roles have one or more rules defining permitted API verbs on specific resources, while roleBindings do exactly what they sound like: they bind an existing role to Kubernetes entities (users, groups, or service accounts).

roles and clusterRoles

Here is a sample role we developed as part of the isolated Dashboard effort:


---
# role to permit minimal, readonly access
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: sab-dashboard-role
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  verbs: ["create"]

This role permits POST (“create”) requests against the proxy subresource of services in the namespace where the role is defined (here, kube-system).

It is also possible to create cluster-wide roles – here’s another example:


---
# cluster role required for initial dashboard get
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: sab-dashboard-clusterrole
rules:
- nonResourceURLs: ["*"]
  verbs: ["get"]

Cluster roles apply to every namespace in the cluster, so make sure you understand exactly what a cluster role grants before handing out cluster-wide privileges.
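
Before binding a cluster role, it is worth double-checking exactly what it grants; kubectl describe works nicely for this (the names below are the ones from the examples above):


# Review the rules carried by the cluster-wide and namespaced roles.
kubectl describe clusterrole sab-dashboard-clusterrole
kubectl --namespace=kube-system describe role sab-dashboard-role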

roleBindings and clusterRoleBindings

Bindings are used to assign roles (and their associated privileges) to Kubernetes entities. Here is an example of each, a roleBinding and a clusterRoleBinding, that we used to solve the Dashboard problem:


---
# required binding to permit std users to access dashboard
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: sab-dashboard-role-binding
subjects:
- kind: Group
  name: "grp.kubeadm-clu<%= @kubeadm_clu -%>.users"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: sab-dashboard-role
  apiGroup: rbac.authorization.k8s.io
---
# required cluster binding to permit std users to access dashboard
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: sab-dashboard-clusterrole-binding
subjects:
- kind: Group
  name: "grp.kubeadm-clu<%= @kubeadm_clu -%>.users"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: sab-dashboard-clusterrole
  apiGroup: rbac.authorization.k8s.io

NB: The <%= @kubeadm_clu -%> is because we use Puppet ERB as part of our solution. It may not apply in your case.

NB #2: The keen-eyed will notice triumphantly that we use a clusterRoleBinding and that we provide a namespace – which is stupid because a cluster-wide role…has no namespace. We put it in there because it made it read more easily to us during development, but feel free to remove it from your own implementation.

Be sure to notice that we leverage group memberships in the solution. Basically, if a given FreeIPA user is a member of a given group, that is enough to give the user access to their own dashboard (plus the cluster-wide role that permits access to the service endpoint defined in the kube-system namespace). You can use this same type of approach to set up your own RBAC-based security policies.
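
A handy way to sanity-check what a logged-in dex user can actually do is kubectl auth can-i (available in reasonably recent kubectl versions); the namespace and resources below match the examples in this article:


# Should be "yes" for members of the FreeIPA dashboard group:
sab-k8s.sh kubectl auth can-i create services/proxy --namespace=kube-system

# Should be "no" for a standard user, matching the denied query earlier:
sab-k8s.sh kubectl auth can-i list namespaces --all-namespaces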

“Shadow” Accounts and Dashboard

Let’s tie the above together into a solution. Basically, we want to run not a single Dashboard but multiple Dashboards, where each Dashboard runs as a “shadow” Kubernetes service account that has the same privileges as the corresponding user.
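
As a rough sketch (our real provisioning is automated with Puppet and shell scripts; the commands below are illustrative, using role names that appear later in this article), creating one shadow account by hand might look like this:


USER=abruce

# The shadow service account lives in kube-system, next to its Dashboard.
kubectl --namespace=kube-system create serviceaccount "sab-sa-${USER}"

# Bind the user's namespace "owner" role to the shadow service account,
# so the Dashboard sees exactly what the user is allowed to see there.
kubectl --namespace="${USER}" create rolebinding "sab-sa-${USER}" \
  --role=sab-ns-owner \
  --serviceaccount="kube-system:sab-sa-${USER}"

# (The per-user role/roleBinding in kube-system and the Dashboard
# deployment + service are shown in the sections that follow.)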

Here’s an example of the service accounts setup on my test cluster:


[root@lpekbclux210 ~]# kubectl --namespace=kube-system get sa | grep sab-sa
sab-sa-abruce               1         55d
sab-sa-acheamitru           1         55d
sab-sa-rkolwitz             1         55d

Each one of these accounts ties back to an actual FreeIPA user. (We prefix the service account names with sab-sa-.) Let's take a look at the roles / roleBindings for my test account:


[root@lpekbclux210 ~]# kubectl --namespace=kube-system get roles | grep abruce
sab-dashboard-role-abruce                  55d
[root@lpekbclux210 ~]# kubectl --namespace=kube-system get roleBindings | grep abruce
sab-dashboard-rolebinding-abruce       55d

Let’s take a look at the role assigned to the “shadow” service account:


[root@lpekbclux210 ~]# kubectl --namespace=kube-system get roles sab-dashboard-role-abruce -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  creationTimestamp: 2017-06-29T14:51:41Z
  name: sab-dashboard-role-abruce
  namespace: kube-system
  resourceVersion: "842"
  selfLink: /apis/rbac.authorization.k8s.io/v1beta1/namespaces/kube-system/roles/sab-dashboard-role-abruce
  uid: 767a81b1-5cda-11e7-b99b-782bcb74dd3c
rules:
- apiGroups:
  - ""
  resourceNames:
  - sab-dashboard-abruce
  resources:
  - services/proxy
  verbs:
  - get
  - list
  - delete
  - update

The above grants the shadow service account the privileges it needs in the kube-system namespace; note the resourceNames restriction, which limits it to that user's own Dashboard service. We also assign ownership privileges to the user's namespace:


[root@lpekbclux210 ~]# kubectl --namespace=abruce get roleBindings | grep sa-abruce
sab-sa-abruce   55d
[root@lpekbclux210 ~]# kubectl --namespace=abruce get roleBindings sab-sa-abruce -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  creationTimestamp: 2017-06-29T14:51:41Z
  name: sab-sa-abruce
  namespace: abruce
  resourceVersion: "846"
  selfLink: /apis/rbac.authorization.k8s.io/v1beta1/namespaces/abruce/rolebindings/sab-sa-abruce
  uid: 768bae06-5cda-11e7-b99b-782bcb74dd3c
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: sab-ns-owner
subjects:
- kind: ServiceAccount
  name: sab-sa-abruce
  namespace: kube-system

The Dashboard Instances

Now that we have serviceAccounts, roles, and roleBindings (including at the cluster level) we can start creating Dashboard instances.

The Dashboards themselves must run within the kube-system namespace, which is why we create the corresponding Kubernetes role for each shadow service account in the kube-system namespace.

Here is what one of the Dashboards looks like on the cluster, both the deployment and the service:


[root@lpekbclux210 ~]# kubectl --namespace=kube-system get deploy | grep dashboard-abruce
sab-dashboard-abruce       1         1         1            1           55d
[root@lpekbclux210 ~]# kubectl --namespace=kube-system get deploy sab-dashboard-abruce -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2017-06-29T14:51:46Z
  generation: 1
  labels:
    k8s-app: sab-dashboard-abruce
  name: sab-dashboard-abruce
  namespace: kube-system
  resourceVersion: "1299"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/sab-dashboard-abruce
  uid: 7964d221-5cda-11e7-b99b-782bcb74dd3c
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: sab-dashboard-abruce
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: sab-dashboard-abruce
    spec:
      containers:
      - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 9090
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: sab-dashboard-abruce
        ports:
        - containerPort: 9090
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        dedicated: master
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: sab-sa-abruce
      serviceAccountName: sab-sa-abruce
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2017-06-29T14:51:46Z
    lastUpdateTime: 2017-06-29T14:51:46Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
[root@lpekbclux210 ~]# kubectl --namespace=kube-system get service sab-dashboard-abruce -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-06-29T14:51:46Z
  labels:
    k8s-app: sab-dashboard-abruce
  name: sab-dashboard-abruce
  namespace: kube-system
  resourceVersion: "931"
  selfLink: /api/v1/namespaces/kube-system/services/sab-dashboard-abruce
  uid: 797218b7-5cda-11e7-b99b-782bcb74dd3c
spec:
  clusterIP: 10.97.14.26
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: sab-dashboard-abruce
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

And the final result:


[root@lpekbclux210 ~]# kubectl --namespace=kube-system get all | grep sab-dashboard-abruce
po/sab-dashboard-abruce-2861302251-7thm3              1/1       Running   2          55d
svc/sab-dashboard-abruce       10.97.14.26              80/TCP           55d
deploy/sab-dashboard-abruce       1         1         1            1           55d
rs/sab-dashboard-abruce-2861302251      1         1         1         55d

Accessing the Dashboard

We started the article above with an example of calling our sab-k8s.sh shell script to log in. That shell script also wraps access to kubectl, so we use it to run a local proxy:


sab-k8s.sh kubectl proxy

As with a “normal” kubectl proxy, this permits local forwarding to the Dashboard instance (this, in fact, is exactly how one would normally access a Dashboard). However, because we run a separate proxied Dashboard instance per dex user, each with permissions specific to that user, the URI used to access the Dashboard differs from the Kubernetes standard one.

To access the Dashboard, we use:


http://localhost:8001/api/v1/namespaces/kube-system/services/sab-dashboard-abruce/proxy/

The sab-dashboard-abruce path component selects the per-user Dashboard service endpoint we defined above. The result? We get access to a dashboard. Here's a curl command to demonstrate it works:


MacBook-Pro:~ l.abruce$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/sab-dashboard-abruce/proxy/
 <!doctype html> <html ng-app="kubernetesDashboard"> <head> <meta charset="utf-8"> <title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"></title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="static/vendor.803608cb.css"> <link rel="stylesheet" href="static/app.336a76b4.css"> </head> <body> <!--[if lt IE 10]>
      <p class="browsehappy">You are using an <strong>outdated</strong> browser.
      Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your
      experience.</p>
    <![endif]--> <kd-chrome layout="column" layout-fill=""> </kd-chrome> <script src="static/vendor.31531c85.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.f69f96ab.js"></script> </body> </html>

The fact that we got a response indicates that the Dashboard instance is up and running.

Still More Problems!

The biggest problems with this approach include:

  • The approach is kludgy – Duplicating user accounts with a “shadow” service account does not scale. In our case, we use automated shell scripts to detect new user accounts and, if the accounts are members of a particular FreeIPA group, auto-create the corresponding shadow service account and provision the Dashboard (a rough sketch of that loop follows below).
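
For the curious, here is roughly what that automation boils down to; the group-membership lookup (list_group_members) and the rendered manifest path are placeholders, not actual pieces of the repo:


# For each FreeIPA user in the authorized group, make sure a shadow
# service account and a personal Dashboard exist.
for user in $(list_group_members "grp.kubeadm-clu1.users"); do
  if ! kubectl --namespace=kube-system get sa "sab-sa-${user}" >/dev/null 2>&1; then
    kubectl --namespace=kube-system create serviceaccount "sab-sa-${user}"
    # Render and apply the per-user role, roleBinding, and Dashboard
    # deployment + service (we template ours with Puppet ERB).
    kubectl apply -f "rendered/sab-dashboard-${user}.yaml"
  fi
done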

Despite the problems, the approach described in this article at least solves the problem of a single, monolithic Dashboard. And future Kubernetes Dashboard releases will no doubt address these shortcomings and obviate the need to run multiple Dashboard instances.

That is all.

