Red Hat OpenShift Control Planes / CNTRLPLANE-2637

Manual Verification: Hosted Cluster Network Isolation (NetworkPolicy & VPC)


      Objective

      Manually verify network-level isolation between hosted clusters, including NetworkPolicy enforcement, VPC/VNet isolation (on cloud platforms), and control-plane-to-data-plane isolation.

      Parent Work Item

      This task is part of CNTRLPLANE-2630: E2E test for OCPSTRAT-2217 VM-level and Hosted Cluster Isolation levels

      Scope

      Verify network isolation at multiple levels:

      • Pod-level isolation via NetworkPolicy
      • Namespace-level network segregation
      • Cloud-level isolation (VPC/VNet for cloud platforms)
      • Control plane to data plane communication (Konnectivity)
      • Egress restrictions from VirtLauncher VMs

      Prerequisites

      • Management cluster with OpenShift Virtualization (for KubeVirt)
      • Or a cloud platform (AWS/Azure/GCP) for cloud-based testing
      • Multiple hosted clusters deployed
      • Network testing tools: curl, nc, ping (a quick readiness check follows this list)
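
      Before starting, a quick readiness check can save time. A minimal sketch, assuming the HyperShift conventions used in this task (HostedCluster resources in the clusters namespace); adjust names for your environment:

      # Confirm OpenShift Virtualization is installed (KubeVirt path)
      oc get csv -n openshift-cnv | grep -i virtualization

      # Confirm multiple hosted clusters exist
      oc get hostedcluster -n clusters

      # Confirm the test tools are available locally (jq is used by later steps)
      for tool in curl nc ping jq; do
        command -v "$tool" >/dev/null || echo "missing: $tool"
      done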

      Manual Test Steps (KubeVirt Platform)

      Step 1: Verify NetworkPolicy Existence

      # List all NetworkPolicies in the cluster-a namespace
      oc get networkpolicy -n clusters-cluster-a

      # Expected policies:
      # - virt-launcher
      # - same-namespace
      # - openshift-ingress
      # - kube-apiserver
      # - etc.

      # Get VirtLauncher NetworkPolicy details
      oc get networkpolicy virt-launcher -n clusters-cluster-a -o yaml
      
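      Rather than eyeballing the list, presence of the expected policies can be asserted in a loop. A sketch, assuming the policy names listed above:

      # Assert the expected NetworkPolicies exist
      for np in virt-launcher same-namespace openshift-ingress kube-apiserver; do
        oc get networkpolicy "$np" -n clusters-cluster-a >/dev/null 2>&1 \
          && echo "found: $np" || echo "MISSING: $np"
      done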

      Step 2: Verify Pod Selector Labels

      # Check VirtLauncher NetworkPolicy pod selector
      oc get networkpolicy virt-launcher -n clusters-cluster-a \
        -o jsonpath='{.spec.podSelector.matchLabels}' | jq .

      # Expected:
      # {
      #   "kubevirt.io": "virt-launcher",
      #   "hypershift.openshift.io/infra-id": "[cluster-a-infra-id]"
      # }

      # Verify only cluster-a VMs match this selector
      oc get pods -n clusters-cluster-a -l kubevirt.io=virt-launcher \
        --show-labels
      
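      The infra-id in the selector can be cross-checked against the HostedCluster resource instead of read by eye. A sketch, assuming the HostedCluster is named cluster-a in the clusters namespace (the ID is exposed at .spec.infraID):

      # Compare the selector's infra-id with the HostedCluster's infraID
      EXPECTED_ID=$(oc get hostedcluster cluster-a -n clusters \
        -o jsonpath='{.spec.infraID}')
      SELECTOR_ID=$(oc get networkpolicy virt-launcher -n clusters-cluster-a \
        -o jsonpath='{.spec.podSelector.matchLabels.hypershift\.openshift\.io/infra-id}')
      [ "$EXPECTED_ID" = "$SELECTOR_ID" ] && echo "infra-id matches" || echo "MISMATCH"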

      Step 3: Verify Ingress Rules

      # Get ingress rules for VirtLauncher NetworkPolicy
      oc get networkpolicy virt-launcher -n clusters-cluster-a \
        -o jsonpath='{.spec.ingress}' | jq .

      # Expected: allow all TCP, UDP, SCTP traffic
      # (VMs need to receive traffic from guest cluster workloads)
      

      Step 4: Verify Egress Rules (Most Critical)

      # Get egress rules for VirtLauncher NetworkPolicy
      oc get networkpolicy virt-launcher -n clusters-cluster-a \
        -o jsonpath='{.spec.egress}' | jq .

      # Expected egress rules should:
      # 1. Allow traffic to 0.0.0.0/0 (internet)
      # 2. EXCEPT management cluster pod networks
      # 3. EXCEPT management cluster service networks
      # 4. Allow DNS (port 53)
      # 5. Allow specific services (konnectivity, etc.)

      # Check blocked networks
      oc get networkpolicy virt-launcher -n clusters-cluster-a \
        -o jsonpath='{.spec.egress[0].to[0].ipBlock.except}' | jq .
      

      Step 5: Test Network Isolation Between Clusters

      # Get the IP of a pod in the cluster-a control plane
      CLUSTER_A_POD_IP=$(oc get pod -n clusters-cluster-a -l app=kube-apiserver \
        -o jsonpath='{.items[0].status.podIP}')

      # Get the IP of a pod in the cluster-b control plane
      CLUSTER_B_POD_IP=$(oc get pod -n clusters-cluster-b -l app=kube-apiserver \
        -o jsonpath='{.items[0].status.podIP}')

      echo "Cluster A API Server IP: $CLUSTER_A_POD_IP"
      echo "Cluster B API Server IP: $CLUSTER_B_POD_IP"

      # Try to access cluster-b from a cluster-a VM (should be blocked by NetworkPolicy)
      CLUSTER_A_VM_POD=$(oc get pods -n clusters-cluster-a -l kubevirt.io=virt-launcher \
        --no-headers | head -1 | awk '{print $1}')

      # Attempt the connection (should fail or time out)
      oc exec $CLUSTER_A_VM_POD -n clusters-cluster-a -- \
        timeout 5 curl -k https://$CLUSTER_B_POD_IP:6443 2>&1

      # Expected: connection timeout or failure (blocked by NetworkPolicy)
      
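      A failed connection alone does not prove the NetworkPolicy did the blocking; a positive control shows the VM pod's networking otherwise works. A sketch, reusing $CLUSTER_A_VM_POD from above and assuming the virt-launcher image ships curl (substitute a tool present in the image otherwise):

      # Positive control: the same pod should still reach an allowed destination
      # (egress rules are expected to permit DNS and internet traffic)
      oc exec $CLUSTER_A_VM_POD -n clusters-cluster-a -- \
        timeout 5 curl -sk https://quay.io -o /dev/null -w '%{http_code}\n'

      # Expected: an HTTP status code, confirming the cross-cluster failure is policy-driven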

      Step 6: Verify Management Cluster Network Block

      # Get management cluster network CIDRs
      MGMT_POD_NETWORK=$(oc get network.config.openshift.io cluster \
        -o jsonpath='{.spec.clusterNetwork[0].cidr}')
      MGMT_SVC_NETWORK=$(oc get network.config.openshift.io cluster \
        -o jsonpath='{.spec.serviceNetwork[0]}')

      echo "Management Pod Network: $MGMT_POD_NETWORK"
      echo "Management Service Network: $MGMT_SVC_NETWORK"

      # Verify these are in the NetworkPolicy except list
      oc get networkpolicy virt-launcher -n clusters-cluster-a \
        -o jsonpath='{.spec.egress[0].to[0].ipBlock.except}' | jq . | \
        grep -E "$MGMT_POD_NETWORK|$MGMT_SVC_NETWORK"

      # Should find these CIDRs in the except list
      

      Step 7: Verify Same-Namespace Policy

      # Get the same-namespace NetworkPolicy
      oc get networkpolicy same-namespace -n clusters-cluster-a -o yaml

      # This policy should allow:
      # - Ingress from the same namespace
      # - Egress to the same namespace
      # - DNS access

      # Verify the pod selector is empty (applies to all pods)
      oc get networkpolicy same-namespace -n clusters-cluster-a \
        -o jsonpath='{.spec.podSelector}'
      
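      Beyond reading the YAML, a quick functional probe can confirm intra-namespace traffic is allowed. A sketch, assuming another control plane pod in the namespace ships curl (the oauth-openshift label is illustrative; pick any suitable pod):

      # Probe the kube-apiserver pod from a peer pod in the same namespace
      KAS_IP=$(oc get pod -n clusters-cluster-a -l app=kube-apiserver \
        -o jsonpath='{.items[0].status.podIP}')
      PEER_POD=$(oc get pods -n clusters-cluster-a -l app=oauth-openshift \
        --no-headers | head -1 | awk '{print $1}')
      oc exec $PEER_POD -n clusters-cluster-a -- \
        timeout 5 curl -sk https://$KAS_IP:6443/healthz

      # Expected: a response (even an error body), not a timeout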

      Step 8: Verify the Konnectivity Tunnel

      # Verify Konnectivity is used for control plane to data plane communication
      oc get service konnectivity-server -n clusters-cluster-a

      # Check the Konnectivity server deployment
      oc get deployment konnectivity-server -n clusters-cluster-a

      # Verify the Konnectivity agent in the guest cluster
      oc --kubeconfig=/tmp/cluster-a-kubeconfig get daemonset konnectivity-agent -n openshift-konnectivity

      # Check Konnectivity connectivity
      oc --kubeconfig=/tmp/cluster-a-kubeconfig logs -n openshift-konnectivity \
        -l app=konnectivity-agent --tail=20 | grep -i connected
      
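      Log lines aside, the tunnel can be exercised end to end: exec and logs requests against the guest cluster are proxied from the hosted control plane through Konnectivity, so a successful exec is itself evidence the tunnel works. A sketch, assuming the target container ships echo (substitute any guest pod whose image does):

      # A successful exec against a guest pod traverses the Konnectivity tunnel
      AGENT_POD=$(oc --kubeconfig=/tmp/cluster-a-kubeconfig get pods \
        -n openshift-konnectivity --no-headers | head -1 | awk '{print $1}')
      oc --kubeconfig=/tmp/cluster-a-kubeconfig exec $AGENT_POD \
        -n openshift-konnectivity -- echo "tunnel OK"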

      Manual Test Steps (Cloud Platforms - AWS Example)

      Step 1: Verify VPC Isolation (AWS)

      # Get the hosted cluster VPC ID
      aws ec2 describe-vpcs --filters "Name=tag:kubernetes.io/cluster/cluster-a,Values=owned" \
        --query 'Vpcs[0].VpcId' --output text

      # Get the hosted cluster subnets
      aws ec2 describe-subnets --filters "Name=tag:kubernetes.io/cluster/cluster-a,Values=owned" \
        --query 'Subnets[*].[SubnetId,CidrBlock]' --output table

      # Verify security groups
      aws ec2 describe-security-groups --filters "Name=tag:kubernetes.io/cluster/cluster-a,Values=owned" \
        --query 'SecurityGroups[*].[GroupId,GroupName]' --output table
      

      Step 2: Verify Security Group Rules

      # Get the security group ID for worker nodes
      SG_ID=$(aws ec2 describe-security-groups \
        --filters "Name=tag:kubernetes.io/cluster/cluster-a,Values=owned" \
        "Name=tag:Name,Values=*worker*" \
        --query 'SecurityGroups[0].GroupId' --output text)

      # Get inbound rules
      aws ec2 describe-security-group-rules --filters "Name=group-id,Values=$SG_ID" \
        --query 'SecurityGroupRules[?IsEgress==`false`].[IpProtocol,FromPort,ToPort,CidrIpv4]' \
        --output table

      # Verify no cross-cluster access:
      # rules should only allow traffic from the same cluster's CIDR
      
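      The "no cross-cluster access" check above can be done mechanically by searching cluster-a's inbound rules for cluster-b's CIDR. A sketch, reusing $SG_ID from above and assuming cluster-b is tagged kubernetes.io/cluster/cluster-b:

      # Cluster-b's VPC CIDR should not appear in cluster-a's inbound rules
      CLUSTER_B_VPC_CIDR=$(aws ec2 describe-vpcs \
        --filters "Name=tag:kubernetes.io/cluster/cluster-b,Values=owned" \
        --query 'Vpcs[0].CidrBlock' --output text)
      aws ec2 describe-security-group-rules \
        --filters "Name=group-id,Values=$SG_ID" \
        --query "SecurityGroupRules[?CidrIpv4=='$CLUSTER_B_VPC_CIDR']" \
        --output table

      # Expected: empty result (no rule admits cluster-b's CIDR)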

      Expected Results

      Network Isolation Confirmed if:

      • VirtLauncher NetworkPolicy exists with correct selectors
      • Egress rules block management cluster networks
      • Same-namespace NetworkPolicy allows intra-namespace communication
      • Cross-cluster pod communication is blocked
      • Konnectivity provides a secure control-plane-to-data-plane tunnel
      • (Cloud) Separate VPCs/VNets per hosted cluster
      • (Cloud) Security groups restrict cross-cluster traffic

      A consolidated pass/fail sketch follows this list.
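
      As a convenience, the core checks can be strung into one script. A minimal sketch, assuming the namespace names used throughout and that jq is installed:

      #!/usr/bin/env bash
      # Consolidated isolation checks (sketch; adjust namespaces for your environment)
      set -u
      NS_A=clusters-cluster-a

      check() {  # check <description> <command...>
        if "${@:2}" >/dev/null 2>&1; then
          echo "PASS: $1"
        else
          echo "FAIL: $1"
        fi
      }

      check "virt-launcher NetworkPolicy exists" \
        oc get networkpolicy virt-launcher -n "$NS_A"
      check "same-namespace NetworkPolicy exists" \
        oc get networkpolicy same-namespace -n "$NS_A"
      check "konnectivity-server service exists" \
        oc get service konnectivity-server -n "$NS_A"

      # The egress except list should contain the management cluster pod CIDR
      MGMT_POD_NETWORK=$(oc get network.config.openshift.io cluster \
        -o jsonpath='{.spec.clusterNetwork[0].cidr}')
      if oc get networkpolicy virt-launcher -n "$NS_A" -o json | \
           jq -e --arg cidr "$MGMT_POD_NETWORK" \
             '.spec.egress[]?.to[]?.ipBlock.except[]? | select(. == $cidr)' >/dev/null; then
        echo "PASS: management pod CIDR is blocked"
      else
        echo "FAIL: management pod CIDR not found in except list"
      fi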

      Validation Matrix

      Isolation Type             | Implementation                 | Verified?
      ---------------------------|--------------------------------|----------
      Pod-to-Pod (Same Cluster)  | NetworkPolicy - same-namespace |
      Pod-to-Pod (Cross Cluster) | NetworkPolicy - blocked        |
      VM Egress                  | VirtLauncher NetworkPolicy     |
      Management Network         | Egress except rules            |
      Control-to-Data Plane      | Konnectivity tunnel            |
      Cloud VPC (AWS)            | Separate VPC                   |
      Cloud Security Groups      | Cluster-specific rules         |

      Acceptance Criteria

      • NetworkPolicies exist for all hosted clusters
      • VirtLauncher egress rules confirmed to block management cluster networks
      • Cross-cluster pod communication fails as expected
      • Konnectivity connectivity verified
      • (Cloud) VPC isolation confirmed
      • (Cloud) Security group rules validated
      • All findings documented with evidence

      Deliverables

      • NetworkPolicy YAML dumps
      • Network connectivity test results
      • Konnectivity logs showing connectivity
      • (Cloud) VPC/Security group configuration
      • Evidence of isolation (command outputs; a collection sketch follows this list)
      • Summary: "Network isolation ✓ CONFIRMED"

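      To gather the deliverables in one pass, a small collection script helps. A sketch, assuming the namespaces and kubeconfig path used throughout; extend per platform:

      #!/usr/bin/env bash
      # Collect isolation evidence into a timestamped directory (sketch)
      OUT=isolation-evidence-$(date +%Y%m%d-%H%M%S)
      mkdir -p "$OUT"

      for ns in clusters-cluster-a clusters-cluster-b; do
        oc get networkpolicy -n "$ns" -o yaml > "$OUT/networkpolicies-$ns.yaml"
      done

      oc --kubeconfig=/tmp/cluster-a-kubeconfig logs -n openshift-konnectivity \
        -l app=konnectivity-agent --tail=200 > "$OUT/konnectivity-agent.log" 2>&1

      echo "Evidence written to $OUT/"
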
      Estimated Time

      6-8 hours (KubeVirt), 8-10 hours (Cloud platforms with VPC verification)
