Red Hat OpenShift Control Planes: CNTRLPLANE-2636

Manual Verification: VM-Level Isolation (Dedicated VMs per Cluster)


      Objective

      Manually verify that each hosted cluster gets dedicated Virtual Machines for its control plane components, ensuring VM-level isolation between different hosted clusters sharing the same management cluster.

      Parent Work Item

      This task is part of CNTRLPLANE-2630: E2E test for OCPSTRAT-2217 VM-level and Hosted Cluster Isolation levels

      Scope

      Verify that:

      • Each hosted cluster has its own dedicated VMs
      • VMs from different clusters are isolated from each other
      • VMs have proper resource boundaries (CPU, memory, storage)
      • VMs cannot access other clusters' VM resources

      Prerequisites

      • Management cluster with OpenShift Virtualization
      • HyperShift operator deployed
      • Ability to create multiple hosted clusters
      • CLI tools: oc, kubectl

      Manual Test Steps

      Step 1: Create Two Hosted Clusters

# Create first hosted cluster
hypershift create cluster kubevirt \
  --name cluster-a \
  --namespace clusters \
  --release-image quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64 \
  --pull-secret /path/to/pull-secret.json \
  --node-pool-replicas 1

# Create second hosted cluster
hypershift create cluster kubevirt \
  --name cluster-b \
  --namespace clusters \
  --release-image quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64 \
  --pull-secret /path/to/pull-secret.json \
  --node-pool-replicas 1

# Wait for both clusters
oc wait --for=condition=Available --timeout=30m hostedcluster/cluster-a -n clusters
oc wait --for=condition=Available --timeout=30m hostedcluster/cluster-b -n clusters
      

      Step 2: List VMs for Each Cluster

# List VMs for cluster-a
echo "=== Cluster A VMs ==="
oc get vmi -n clusters-cluster-a -o wide

# List VMs for cluster-b
echo "=== Cluster B VMs ==="
oc get vmi -n clusters-cluster-b -o wide

# Verify VMs are in separate namespaces
oc get vmi --all-namespaces | grep -E "cluster-a|cluster-b"
      

      Step 3: Verify VM Ownership and Labels

# Check cluster-a VM labels
oc get vmi -n clusters-cluster-a -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.hypershift\.openshift\.io/infra-id}{"\n"}{end}'

# Check cluster-b VM labels
oc get vmi -n clusters-cluster-b -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.hypershift\.openshift\.io/infra-id}{"\n"}{end}'

# Verify the two clusters have different infra-ids
echo "Cluster A infra-id: $(oc get hostedcluster cluster-a -n clusters -o jsonpath='{.spec.infraID}')"
echo "Cluster B infra-id: $(oc get hostedcluster cluster-b -n clusters -o jsonpath='{.spec.infraID}')"
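The infra-id comparison above can be turned into an explicit pass/fail check. The `check_distinct` function is a hypothetical helper (not part of the hypershift or oc CLIs); the commented-out `oc` queries are the same ones shown in this step.

```shell
# check_distinct: PASS only when both ids are non-empty and differ (hypothetical helper)
check_distinct() {
  if [ -n "$1" ] && [ -n "$2" ] && [ "$1" != "$2" ]; then
    echo "PASS"
  else
    echo "FAIL"
  fi
}

# On the live management cluster (same jsonpath queries as above):
# INFRA_A=$(oc get hostedcluster cluster-a -n clusters -o jsonpath='{.spec.infraID}')
# INFRA_B=$(oc get hostedcluster cluster-b -n clusters -o jsonpath='{.spec.infraID}')
# check_distinct "$INFRA_A" "$INFRA_B"
```

The non-empty guard matters: if either jsonpath query silently returned nothing, a bare string comparison would still report the ids as "different".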
      

      Step 4: Verify Resource Isolation

# Get VM resource allocations for cluster-a
echo "=== Cluster A VM Resources ==="
oc get vmi -n clusters-cluster-a -o jsonpath='{range .items[*]}{.metadata.name}{"\n  CPU: "}{.spec.domain.cpu.cores}{"\n  Memory: "}{.spec.domain.resources.requests.memory}{"\n"}{end}'

# Get VM resource allocations for cluster-b
echo "=== Cluster B VM Resources ==="
oc get vmi -n clusters-cluster-b -o jsonpath='{range .items[*]}{.metadata.name}{"\n  CPU: "}{.spec.domain.cpu.cores}{"\n  Memory: "}{.spec.domain.resources.requests.memory}{"\n"}{end}'

# Capture one VM name per cluster; each VMI runs in its own virt-launcher pod,
# so distinct pods imply distinct cgroups
CLUSTER_A_VM=$(oc get vmi -n clusters-cluster-a -o jsonpath='{.items[0].metadata.name}')
CLUSTER_B_VM=$(oc get vmi -n clusters-cluster-b -o jsonpath='{.items[0].metadata.name}')

echo "Cluster A VM: $CLUSTER_A_VM"
echo "Cluster B VM: $CLUSTER_B_VM"
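To go beyond capturing VM names, the per-pod cgroup can be spot-checked. This sketch assumes the kubelet systemd cgroup driver, where a Burstable or BestEffort pod's slice name embeds its UID (`kubepods-<qos>-pod<uid>.slice`, with dashes in the UID escaped to underscores; Guaranteed pods omit the QoS segment). `pod_slice` is a hypothetical helper; verify the actual naming on your nodes before relying on it.

```shell
# pod_slice: derive the expected systemd slice name for a Burstable/BestEffort pod
# from its QoS class and UID (assumed kubelet systemd-driver convention).
pod_slice() {
  qos=$(echo "$1" | tr '[:upper:]' '[:lower:]')   # e.g. Burstable -> burstable
  uid=$(echo "$2" | tr '-' '_')                   # systemd escapes '-' in UIDs
  echo "kubepods-${qos}-pod${uid}.slice"
}

# On the live cluster (virt-launcher pods carry the kubevirt.io=virt-launcher label):
# UID_A=$(oc get pod -n clusters-cluster-a -l kubevirt.io=virt-launcher -o jsonpath='{.items[0].metadata.uid}')
# QOS_A=$(oc get pod -n clusters-cluster-a -l kubevirt.io=virt-launcher -o jsonpath='{.items[0].status.qosClass}')
# pod_slice "$QOS_A" "$UID_A"
```

Since pod UIDs are unique, the two clusters' virt-launcher pods necessarily land in different slices.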
      

      Step 5: Verify Network Isolation Between VMs

# Get VirtLauncher NetworkPolicy for cluster-a
oc get networkpolicy virt-launcher -n clusters-cluster-a -o yaml > cluster-a-netpol.yaml

# Get VirtLauncher NetworkPolicy for cluster-b
oc get networkpolicy virt-launcher -n clusters-cluster-b -o yaml > cluster-b-netpol.yaml

# Compare NetworkPolicies
diff cluster-a-netpol.yaml cluster-b-netpol.yaml

# Verify each policy only allows traffic for its own cluster
oc get networkpolicy virt-launcher -n clusters-cluster-a -o jsonpath='{.spec.podSelector.matchLabels.hypershift\.openshift\.io/infra-id}{"\n"}'
oc get networkpolicy virt-launcher -n clusters-cluster-b -o jsonpath='{.spec.podSelector.matchLabels.hypershift\.openshift\.io/infra-id}{"\n"}'
      

      Step 6: Verify Namespace Isolation

# Verify separate namespaces
oc get namespace | grep -E "clusters-cluster-a|clusters-cluster-b"

# Check RBAC isolation
oc auth can-i get vmi --namespace clusters-cluster-a --as system:serviceaccount:clusters-cluster-b:default
# Expected: no

oc auth can-i get vmi --namespace clusters-cluster-b --as system:serviceaccount:clusters-cluster-a:default
# Expected: no
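The two `oc auth can-i` checks can be scored automatically. `oc auth can-i` prints "yes" or "no" (and exits non-zero on "no", so capture its output rather than relying on the exit code under `set -e`). `expect_no` is a hypothetical helper, not an oc subcommand.

```shell
# expect_no: PASS only when the captured `oc auth can-i` answer is exactly "no"
expect_no() {
  if [ "$1" = "no" ]; then echo "PASS"; else echo "FAIL"; fi
}

# ANSWER=$(oc auth can-i get vmi --namespace clusters-cluster-a \
#   --as system:serviceaccount:clusters-cluster-b:default 2>/dev/null)
# expect_no "$ANSWER"
```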
      

      Step 7: Verify Storage Isolation

# Check PVCs for each cluster
echo "=== Cluster A PVCs ==="
oc get pvc -n clusters-cluster-a

echo "=== Cluster B PVCs ==="
oc get pvc -n clusters-cluster-b

# Verify DataVolumes (if using Containerized Data Importer)
oc get datavolume -n clusters-cluster-a
oc get datavolume -n clusters-cluster-b
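PVC objects are namespaced, but the PersistentVolumes they bind are cluster-scoped, so one concrete isolation check is that no PV is bound from both namespaces. The `overlap` helper below is a hypothetical bash sketch (process substitution requires bash, not POSIX sh); the commented `oc` queries collect each namespace's bound PV names via `spec.volumeName`.

```shell
# overlap: print any lines common to two newline-separated name lists
# (comm -12 keeps only lines present in both sorted inputs; empty output = no overlap)
overlap() {
  comm -12 <(printf '%s\n' "$1" | sort) <(printf '%s\n' "$2" | sort)
}

# PVS_A=$(oc get pvc -n clusters-cluster-a -o jsonpath='{range .items[*]}{.spec.volumeName}{"\n"}{end}')
# PVS_B=$(oc get pvc -n clusters-cluster-b -o jsonpath='{range .items[*]}{.spec.volumeName}{"\n"}{end}')
# [ -z "$(overlap "$PVS_A" "$PVS_B")" ] && echo "PASS: no shared PersistentVolumes"
```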
      

      Step 8: Attempt Cross-Cluster Access (Should Fail)

# Try to access cluster-b resources from cluster-a's credentials.
# This should fail due to RBAC and network policies.

# Get cluster-a kubeconfig
oc extract secret/cluster-a-admin-kubeconfig -n clusters --to=- > /tmp/cluster-a-kubeconfig

# Try to list cluster-b VMs using cluster-a context (should fail)
oc --kubeconfig=/tmp/cluster-a-kubeconfig get vmi -n clusters-cluster-b 2>&1

# Expected error: forbidden or unauthorized
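The expected failure can be asserted on the captured error text. `is_denied` is a hypothetical helper that passes only when the output mentions either of the two failure modes named above (forbidden or unauthorized).

```shell
# is_denied: PASS when the captured output looks like an access-denied error
is_denied() {
  case "$1" in
    *[Ff]orbidden*|*[Uu]nauthorized*) echo "PASS" ;;
    *) echo "FAIL" ;;
  esac
}

# OUT=$(oc --kubeconfig=/tmp/cluster-a-kubeconfig get vmi -n clusters-cluster-b 2>&1)
# is_denied "$OUT"
```

Note that an empty VMI list would also be suspicious here: the test passes only on an explicit denial, not on "no resources found".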
      

      Expected Results

      VM-Level Isolation Confirmed if:

      • Each cluster has VMs in separate namespaces
      • VMs have unique infra-id labels per cluster
      • VirtLauncher NetworkPolicies are cluster-specific
      • RBAC prevents cross-cluster resource access
      • VMs use separate resource allocations (cgroups)
      • Storage (PVCs) is isolated per cluster
      • Cross-cluster access attempts fail

      Validation Matrix

      | Isolation Aspect | Cluster A          | Cluster B          | Isolated? |
      |------------------|--------------------|--------------------|-----------|
      | Namespace        | clusters-cluster-a | clusters-cluster-b |           |
      | Infra ID         | unique-id-a        | unique-id-b        |           |
      | VM Names         | cluster-a-xxxxx-*  | cluster-b-xxxxx-*  |           |
      | NetworkPolicy    | virt-launcher (a)  | virt-launcher (b)  |           |
      | RBAC             | SA cluster-a       | SA cluster-b       |           |
      | Resources        | Dedicated CPU/Mem  | Dedicated CPU/Mem  |           |
      | Storage          | PVCs cluster-a     | PVCs cluster-b     |           |

      Acceptance Criteria

      • Two hosted clusters successfully created
      • VMs confirmed in separate namespaces
      • Unique infra-ids verified for each cluster
      • NetworkPolicies are cluster-specific
      • RBAC isolation confirmed
      • Resource boundaries verified
      • Cross-cluster access fails as expected
      • All findings documented with evidence

      Deliverables

      • VM listing outputs for both clusters
      • NetworkPolicy comparison results
      • RBAC test results
      • Resource allocation documentation
      • Evidence of isolation (screenshots, logs)
      • Summary: "VM-level isolation ✓ CONFIRMED"

      Estimated Time

      6-8 hours (including cluster creation and thorough verification)

      Related Tasks

              wk2019 Ke Wang