In vSphere with Tanzu, Tanzu Kubernetes Clusters are deployed with the PodSecurityPolicy Admission Controller enabled. This means you need a pod security policy to deploy workloads to the guest clusters. Read the documentation here for more information.
When you set permissions for a user on the supervisor cluster, a ClusterRoleBinding is created.
You can review this by running the command kubectl get clusterrolebinding | grep vmware-system
In the example below, I have set edit permissions on the supervisor cluster namespace gs-dev for the user jahnin:
root@debian:~# k get clusterrolebinding | grep vmware-system
administrator-cluster-role-binding ClusterRole/psp:vmware-system-privileged 66m
vmware-system-auth-sync-wcp:gs-dev:group:vsphere.local:administrators ClusterRole/cluster-admin 3h2m
vmware-system-auth-sync-wcp:gs-dev:user:gs.labs:jahnin ClusterRole/cluster-admin 12s
vmware-system-auth-sync-wcp:gs-dev:user:vsphere.local:administrator ClusterRole/cluster-admin 40m
root@debian:~# k describe clusterrolebinding vmware-system-auth-sync-wcp:gs-dev:user:gs.labs:jahnin
Name:         vmware-system-auth-sync-wcp:gs-dev:user:gs.labs:jahnin
Labels:       run.tanzu.vmware.com/vmware-system-synced-from-supervisor=yes
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind  Name                Namespace
  ----  ----                ---------
  User  sso:jahnin@gs.labs
To manually create the bindings:
- Create a RoleBinding that grants the user permissions in the namespace.
- Create a ClusterRoleBinding that maps to the vmware-system-privileged Pod Security Policy.
YAML for creating the RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-cluster-user-administrator
  namespace: default
roleRef:
  kind: ClusterRole
  name: edit # Default ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: User
  name: sso:testuser01@gs.labs # sso:<username>@<domain>
  apiGroup: rbac.authorization.k8s.io
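Assuming the manifest above is saved as rolebinding.yaml (the filename is just an example), it can be applied and verified with:
kubectl apply -f rolebinding.yaml
kubectl get rolebinding rolebinding-cluster-user-administrator -n default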
YAML for creating the ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: administrator-cluster-role-binding
roleRef:
  kind: ClusterRole
  name: psp:vmware-system-privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
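Similarly, assuming the ClusterRoleBinding manifest is saved as clusterrolebinding.yaml (again an example filename), apply it and confirm it shows up alongside the other bindings:
kubectl apply -f clusterrolebinding.yaml
kubectl get clusterrolebinding administrator-cluster-role-binding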
List all the Pod Security Policies:
root@debian:~# k get psp
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
vmware-system-privileged true * RunAsAny RunAsAny RunAsAny RunAsAny false *
vmware-system-restricted false RunAsAny MustRunAsNonRoot MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
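To review what either policy actually allows, you can describe it, for example the privileged policy from the listing above:
kubectl describe psp vmware-system-privileged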
 
If the ClusterRoleBinding does not exist for a user, you will run into the PodSecurityPolicy: unable to admit pod: error when deploying pods/deployments/ReplicaSets.
For Example:
root@debian:~# k describe rs netinfo
Name:           netinfo-6c64b6ddd7
Namespace:      default
Selector:       app=netinfo,pod-template-hash=6c64b6ddd7
Labels:         app=netinfo
                pod-template-hash=6c64b6ddd7
Annotations:    deployment.kubernetes.io/desired-replicas: 2
                deployment.kubernetes.io/max-replicas: 3
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/netinfo
Replicas:       0 current / 2 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=netinfo
           pod-template-hash=6c64b6ddd7
  Containers:
   netinfo:
    Image:        jahnin/nginx-net-info
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  ReplicaFailure  True    FailedCreate
Events:
  Type     Reason        Age                 From                   Message
  ----     ------        ---                 ----                   -------
  Warning  FailedCreate  57s (x12 over 68s)  replicaset-controller  Error creating: pods "netinfo-6c64b6ddd7-" is forbidden: PodSecurityPolicy: unable to admit pod: []
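One way to confirm whether the pod's service account is allowed to use a PSP is kubectl auth can-i. The namespace and service account below are examples; the ReplicaSet above does not specify a serviceAccountName, so its pods would run as the default service account in the default namespace:
kubectl auth can-i use podsecuritypolicy/vmware-system-privileged --as=system:serviceaccount:default:default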
 
As of Kubernetes 1.21, Pod Security Policies are deprecated. You can read more about it here.
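For reference, the replacement in newer Kubernetes releases is the built-in Pod Security Admission controller, which is driven by namespace labels instead of PSP objects and RBAC bindings. A minimal sketch (the namespace and enforcement level here are examples only):
kubectl label namespace default pod-security.kubernetes.io/enforce=baseline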