Use OpenUnison to Manage Multiple Kubernetes Clusters


Our open source Kubernetes Identity Manager is a great way to manage your Kubernetes cluster.  It gives you an authentication portal you can use to access your dashboard and use kubectl without juggling kube/config files, as well as a way to consistently provision namespaces and manage access to those namespaces.  There are other great things you can do with OpenUnison and your cluster too, like automating the creation of volumes, integrating with identity data across enterprises and building out powerful authorization models for admission controllers using tools like the Open Policy Agent.  One of the most common questions we get is “Can you manage more than just one cluster?”.  The short answer is “YES”!  The rest of this post is the long answer.

How Does OpenUnison Work With Kubernetes?

Let’s make sure we cover our bases and start with how OpenUnison interacts with Kubernetes in the first place.  OpenUnison provides three central functions for Kubernetes:

  1. Authentication
  2. Authorization
  3. Provisioning

When we say authentication, we’re talking about integration with OpenID Connect (OIDC).  Kubernetes is not your usual OIDC integrated system.  Most OIDC integrated applications are just that, apps.  You login to them with a login page of some kind.  Kubernetes isn’t an application like that, it’s an API.  You authenticate to it using a JSON Web Token (JWT) that tells Kubernetes who you are and what groups you are a member of.  This token must accompany every request to the API server and should be short-lived, so when the token expires you need to refresh it.  We’re not going to cover the details of how this happens here, but take a look at the OpenID Connect section of the Kubernetes authentication documentation.  We think it gives a great detailed look at how this process works (and not just because our CTO wrote most of those docs).  OpenUnison’s integration with Kubernetes and other identity sources lets us bridge between OIDC and SAML2, LDAP, other OIDC providers, etc.  We also have a built-in reverse proxy that can be used with the dashboard to inject your OIDC tokens so you don’t need to worry about having to move your configs and keys around.
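For reference, pointing the API server at an OIDC issuer (OpenUnison included) is done with a handful of kube-apiserver flags.  This is a generic sketch; the issuer URL, client id and CA file path are placeholders for your own environment:

-- CODE language-bash --
# Placeholder values: substitute your OpenUnison issuer URL, client id and CA
kube-apiserver \
  --oidc-issuer-url=https://openunison.example.com/auth/idp/k8sIdp \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=sub \
  --oidc-groups-claim=groups \
  --oidc-ca-file=/etc/kubernetes/pki/openunison-ca.pem

The groups claim in the token is what the group-based authorization rules later in this post filter on.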

For authorization, OpenUnison provides a key feature that lets you separate your authorization data from your authentication data.  Why is that important?  In most enterprises the people who control the identity source (usually Active Directory) are not the same people who control Kubernetes.  They’re mostly worried about workstations, printers, etc.  They’ll let you get a read-only account pretty easily, but a writable account is often very hard.  You could manage your user authorizations directly via RBAC, but why spend all this time automating your application infrastructure just to manually update RBAC policies?

Finally, for provisioning, OpenUnison is hooked directly into the API server.  The same technology we use to update RBAC policies can be used to create namespaces, persistent volumes, persistent volume claims, etc.  Any time you want to manage who can create a certain type of object (because objects usually reflect resources, and resources cost money), an OpenUnison workflow can help.

What does all this look like from an architectural perspective?

OpenUnison runs as a container inside of Kubernetes, communicates with the API server via HTTPS and acts as a bridge between the API server and your authentication source.  For access to the dashboard, use OpenUnison as a reverse proxy (you can use other reverse proxies with OpenUnison too).

Working With Multiple Clusters

There are two approaches you can take when working with multiple clusters:

  1. An OpenUnison per cluster
  2. One OpenUnison for all (or multiple) clusters

Option #1 is a good option if you have many small clusters and want to provide self-service access management.  Each cluster can customize OpenUnison for its own needs.  Since OpenUnison is stateless and deploys like most cloud native applications via a container, it’s pretty easy to build a baseline deployment that can be shared with individual cluster owners.

The rest of this blog is going to be focused on option #2.  When thinking about a multi-tenant OpenUnison deployment for Kubernetes you should answer the following questions:

  1. Will each cluster have the same authorization model?
  2. Will each cluster have the same user population and identity source?
  3. Will each cluster use the same authentication types?

How you answer these questions may change the design of your multi-tenant solution.  Since there are countless possible combinations of answers, we’re going to focus on the customization points for each of our primary services for Kubernetes — authentication, authorization and provisioning — and where to make those customizations.


The OpenUnison quickstart provides authentication via either LDAP or SAML2 out of the box, but the sky’s the limit on possible authentication types: multi-factor, OpenID Connect, web services, etc.  If you’re going to add another authentication mechanism, there are multiple customization points.


The first is the “trust” used to let Kubernetes trust the tokens generated by OpenUnison.  This is controlled by the trust object on the OIDC identity provider in openunison-qs-kubernetes/src/main/webapp/WEB-INF/applications/40-k8sIdP.xml.  This is also where you control who has access to Kubernetes.  For instance, the authorization rule:

-- CODE language-xml --
 <rule scope="dn" constraint="o=Tremolo"/>

tells OpenUnison to let any authenticated user in.  The authorization rule is attached to the identity provider in OpenUnison.  If you want different authorization rules for each cluster, you’ll want to duplicate the identity provider, change its name and create new authorization rules.
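If you do duplicate the identity provider per cluster, the new trust’s authorization rule can be scoped down to a cluster-specific group.  As a sketch (the group naming here is purely illustrative):

-- CODE language-xml --
<!-- Only let members of cluster2-specific groups authenticate to cluster2 -->
<rule scope="filter" constraint="(groups=k8s-cluster2-*)"/>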


Once a new trust is established, you’ll want to duplicate openunison-qs-kubernetes/src/main/webapp/WEB-INF/applications/10-scale.xml, keeping just the URLs for /k8stoken and creating new URIs.  For instance, instead of /k8stoken you may want /cluster1-k8stoken.  Also make sure to update the configuration for your new cluster and make sure your authorization rules match the identity provider you set up.  Finally, look in openunison-qs-kubernetes/src/main/webapp/WEB-INF/unison.xml for the <portal> section.  You’ll find a url for the Kubernetes token application that looks like:

-- CODE language-xml --
<urls label="Kubernetes Tokens" url="/k8stoken/index.html" name="OAuth2Token" org="687da09f-8ec1-48ac-b035-f2f182b9bd1e" icon="iVBORw0KGgoAAAANSUhEUgAAAPAAAAD...">
   <azRules>
      <rule scope="filter" constraint="(groups=k8s-*)"/>
   </azRules>
</urls>

You’ll want to duplicate this urls entry, update the “url” attribute so it points to your new token application, and update the azRules so they match your identity provider and token application.  This will give your users a “badge” when they log in to the portal that shows them how to log in using kubectl.
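As a sketch, a duplicated badge for a second cluster might look like the following (the label, url, name and group filter are illustrative; the org id and icon are carried over from your own configuration):

-- CODE language-xml --
<!-- Badge for cluster1's token application; values are illustrative -->
<urls label="Cluster1 Kubernetes Tokens" url="/cluster1-k8stoken/index.html" name="OAuth2TokenCluster1" org="687da09f-8ec1-48ac-b035-f2f182b9bd1e" icon="...">
   <azRules>
      <rule scope="filter" constraint="(groups=k8s-cluster1-*)"/>
   </azRules>
</urls>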


How do you know which users have access?  The Kubernetes Identity Manager uses a relational database to track group memberships.  It’s the same database used to manage audit data.  By default, OpenUnison creates groups for you when you initially start the system and as you provision new namespaces.  OpenUnison provides three roles for Kubernetes out of the box:

  1. Cluster Admin – Gets the cluster admin role in Kubernetes
  2. Namespace Administrator
  3. Namespace Viewer

Each of these roles is handled by one of the following workflows:

  1. openunison-qs-kubernetes/src/main/webapp/WEB-INF/workflows/40-ClusterAdmin.xml
  2. openunison-qs-kubernetes/src/main/webapp/WEB-INF/workflows/20-ProjectAdministrators.xml
  3. openunison-qs-kubernetes/src/main/webapp/WEB-INF/workflows/50-ProjectViewers.xml

Assuming you want to keep the same model, duplicate these files and update them to reflect your new cluster.  These workflows only work with namespaces created by OpenUnison, so if you want to retrofit existing namespaces for automated access you’ll need a different authorization strategy, such as relying on annotations to tell OpenUnison who can authorize access.  You’ll also want to update the provisioning target name in the dynamic workflows so they know which cluster to pull namespaces from.
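For context, the three roles above land in the cluster as ordinary Kubernetes RBAC objects.  A sketch of what the Cluster Admin role translates to (the group name is illustrative of what OpenUnison generates):

-- CODE language-yaml --
# Illustrative: binds an OpenUnison-managed group to the built-in cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openunison-cluster-admins
subjects:
- kind: Group
  name: k8s-cluster-administrators   # group name is illustrative
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io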


Finally, how will OpenUnison communicate with your new Kubernetes cluster?  Each cluster is controlled via a provisioning target in OpenUnison.  The configuration is quite straightforward:

-- CODE language-xml --
<target name="k8s" className="com.tremolosecurity.unison.openshiftv3.OpenShiftTarget">
       <param name="url" value="#[K8S_URL]"/>
       <param name="userName" value=""/>
       <param name="password" value=""/>
       <param name="token" value="#[K8S_TOKEN]"/>
       <param name="useToken" value="true"/>
</target>

Don’t mind the fact that it says OpenShift; it only says that because that’s where we started.  All you need is the URL for your API server and a service account token with cluster-admin access.  You’ll also want to rename the target and reference it in your workflows.
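A second cluster then simply gets a second target.  As a sketch (the target name and parameter names are illustrative; the class name is the same one from the quickstart):

-- CODE language-xml --
<!-- A second provisioning target; K8S_CLUSTER2_* are environment variables you define -->
<target name="k8s-cluster2" className="com.tremolosecurity.unison.openshiftv3.OpenShiftTarget">
       <param name="url" value="#[K8S_CLUSTER2_URL]"/>
       <param name="userName" value=""/>
       <param name="password" value=""/>
       <param name="token" value="#[K8S_CLUSTER2_TOKEN]"/>
       <param name="useToken" value="true"/>
</target>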

Creating Namespaces

Just as with generating kube/config files automatically via the token application, OpenUnison creates namespaces using a workflow and a front end.  The first task in making this work for your cluster is to duplicate the workflow that creates the namespace, groups and role bindings – openunison-qs-kubernetes/src/main/webapp/WEB-INF/workflows/30-NewK8SNameSpace.xml.  Walk through this workflow and you’ll notice that it creates groups internally along with rolebindings that reference those groups.  Updating this workflow mainly consists of updating the naming of groups to isolate them to your cluster.  Once the workflow is updated, create a new application using the openunison-qs-kubernetes/src/main/webapp/WEB-INF/applications/10-scale.xml URIs for /newproject, referencing your new workflow.  Finally, update the portal urls just as with the token application so each cluster has a “badge” that shows how to create a new namespace.
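To make the moving parts concrete, the objects that workflow ultimately creates in the cluster are plain Kubernetes resources.  A sketch for a hypothetical namespace myns (the group and binding names are illustrative):

-- CODE language-yaml --
# Illustrative output of the new-namespace workflow for a namespace "myns"
apiVersion: v1
kind: Namespace
metadata:
  name: myns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-administrators
  namespace: myns
subjects:
- kind: Group
  name: k8s-namespace-administrators-myns   # group name pattern is illustrative
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin   # built-in admin ClusterRole, scoped to myns by this binding
  apiGroup: rbac.authorization.k8s.io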

Security and Configuration Management

OpenUnison’s configuration should be treated as code.  As such, you don’t want to put anything environment-specific or sensitive in it.  Instead, use parameter injection by marking environment-specific items with #[].  For instance, to include the environment variable K8S_CLUSTER_X_TOKEN, use #[K8S_CLUSTER_X_TOKEN] and then inject the token as a secret.  This way your configuration and containers stay consistent across environments.
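On the Kubernetes side, that injection is just a standard secret-backed environment variable.  A minimal sketch (the image, secret and key names are illustrative):

-- CODE language-yaml --
# Illustrative: feed K8S_CLUSTER_X_TOKEN into the OpenUnison container from a Secret
apiVersion: v1
kind: Pod
metadata:
  name: openunison
spec:
  containers:
  - name: openunison
    image: tremolosecurity/openunison   # image name is illustrative
    env:
    - name: K8S_CLUSTER_X_TOKEN
      valueFrom:
        secretKeyRef:
          name: openunison-secrets
          key: cluster-x-token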
