Manual Test: PendingWorkloads List Verification for ClusterQueue and LocalQueue

Test Overview

Test ID: E2E-VISIBILITY-001

Feature: Kueue Visibility On Demand API

Test Type: Manual UI Test

Platform: OpenShift Container Platform (OCP) 4.19

Duration: ~5-10 minutes

Purpose

Verify that the Kueue Visibility API correctly displays pending workloads for ClusterQueues and LocalQueues in the OCP 4.19 Console, with proper priority ordering, and that the list dynamically updates as workloads are executed. The test also verifies that all workloads transition to a "Finished" state upon completion.

Test Strategy

  1. Create blocker jobs that consume all available resources in each ClusterQueue
  2. Wait for blocker job workloads to be created and admitted (to avoid race conditions)
  3. Create additional jobs with different priorities that will be pending
  4. Verify pending workload lists show correct count and priority ordering
  5. Monitor workloads as they transition from pending → admitted → finished
  6. Verify all workloads complete successfully with "Finished" condition
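
The checkpoints below read the pending lists through the Kueue Visibility On Demand API. For reference, the endpoints involved look like the sketch below (this assumes the visibility.kueue.x-k8s.io/v1beta1 API group shipped with recent Kueue releases; older builds may expose v1alpha1 instead):

# Pending workloads of a ClusterQueue (cluster-scoped)
oc get --raw "/apis/visibility.kueue.x-k8s.io/v1beta1/clusterqueues/<cluster-queue>/pendingworkloads"

# Pending workloads of a LocalQueue (namespaced)
oc get --raw "/apis/visibility.kueue.x-k8s.io/v1beta1/namespaces/<namespace>/localqueues/<local-queue>/pendingworkloads"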

Prerequisites

Required Access

  • Cluster-admin (or equivalently privileged) access: the test creates cluster-scoped resources (ResourceFlavor, ClusterQueue, PriorityClass, ClusterRoleBinding)
  • Access to the OCP 4.19 Web Console
  • A cluster with Kueue installed and running

Tools Required

  • oc (OpenShift CLI), logged in to the cluster
  • jq (used by several verification commands)
  • A web browser for the Console checkpoints

Test Setup

Phase 1: Create Cluster-Level Resources

Step 1.1: Generate Unique Test ID

Generate a unique identifier for this test run to avoid conflicts:

TEST_ID=$(printf "%04x" $RANDOM)
echo "Test ID: $TEST_ID"
Expected Result: A 4-digit hex value (e.g., a3f2)
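
Note: $RANDOM is a bash/zsh built-in. If your shell does not provide it, the alternative below produces an equivalent 4-character hex ID (it assumes openssl is installed). Either way, run all subsequent commands from the same shell session so ${TEST_ID} stays set.

# Alternative ID generation (assumes openssl is available)
TEST_ID=$(openssl rand -hex 2)
echo "Test ID: $TEST_ID"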

Step 1.2: Create ResourceFlavor

Create a ResourceFlavor to represent available compute resources.

cat <<EOF | oc apply -f -
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: resource-flavor-${TEST_ID}
EOF

Verification:

oc get resourceflavors resource-flavor-${TEST_ID}
Expected Output:
NAME                      AGE
resource-flavor-a3f2      5s

Step 1.3: Create ClusterQueue A

Create the first ClusterQueue with limited resources (1 CPU, 1Gi memory).

cat <<EOF | oc apply -f -
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue-a-${TEST_ID}
spec:
  namespaceSelector: {}
  resourceGroups:
  - coveredResources: ["cpu", "memory"]
    flavors:
    - name: resource-flavor-${TEST_ID}
      resources:
      - name: "cpu"
        nominalQuota: 1
      - name: "memory"
        nominalQuota: 1Gi
EOF

Verification:

oc get clusterqueue cluster-queue-a-${TEST_ID}

Step 1.4: Create ClusterQueue B

Create the second ClusterQueue with the same resource limits (1 CPU, 1Gi memory).

cat <<EOF | oc apply -f -
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue-b-${TEST_ID}
spec:
  namespaceSelector: {}
  resourceGroups:
  - coveredResources: ["cpu", "memory"]
    flavors:
    - name: resource-flavor-${TEST_ID}
      resources:
      - name: "cpu"
        nominalQuota: 1
      - name: "memory"
        nominalQuota: 1Gi
EOF

Verification:

oc get clusterqueues | grep ${TEST_ID}
Expected Output:
cluster-queue-a-a3f2      5s
cluster-queue-b-a3f2      3s
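
Optionally, confirm the configured quota without opening the full YAML; the jsonpath below simply reads back the spec fields defined above:

oc get clusterqueue cluster-queue-a-${TEST_ID} \
  -o jsonpath='{.spec.resourceGroups[0].flavors[0].resources[*].nominalQuota}{"\n"}'

Expected Output:
1 1Gi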

Step 1.5: Create PriorityClasses

Create three priority classes for workload prioritization.

High Priority (100):

cat <<EOF | oc apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-${TEST_ID}
value: 100
globalDefault: false
description: "High priority class for testing"
EOF

Medium Priority (75):

cat <<EOF | oc apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: medium-priority-${TEST_ID}
value: 75
globalDefault: false
description: "Medium priority class for testing"
EOF

Low Priority (50):

cat <<EOF | oc apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority-${TEST_ID}
value: 50
globalDefault: false
description: "Low priority class for testing"
EOF

Verification:

oc get priorityclasses | grep ${TEST_ID}
Expected Output:
high-priority-a3f2     100          false      5s
medium-priority-a3f2   75           false      4s
low-priority-a3f2      50           false      3s

Phase 2: Create Namespace-Level Resources

Step 2.1: Create Namespace A

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-a-${TEST_ID}
  labels:
    kueue.openshift.io/managed: "true"
EOF

Verification:

oc get namespace namespace-a-${TEST_ID}

Step 2.2: Create LocalQueue A

cat <<EOF | oc apply -f -
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: local-queue-a-${TEST_ID}
  namespace: namespace-a-${TEST_ID}
spec:
  clusterQueue: cluster-queue-a-${TEST_ID}
EOF

Verification:

oc get localqueue -n namespace-a-${TEST_ID}

Step 2.3: Create Namespace B

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-b-${TEST_ID}
  labels:
    kueue.openshift.io/managed: "true"
EOF

Step 2.4: Create LocalQueue B

cat <<EOF | oc apply -f -
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: local-queue-b-${TEST_ID}
  namespace: namespace-b-${TEST_ID}
spec:
  clusterQueue: cluster-queue-b-${TEST_ID}
EOF

Verification:

oc get localqueues --all-namespaces | grep ${TEST_ID}

Phase 3: Create RBAC Resources

Step 3.1: Create Service Account

cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kueue-test-user-${TEST_ID}
  namespace: namespace-a-${TEST_ID}
EOF

Step 3.2: Create ClusterRoleBinding

cat <<EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pending-workloads-admin-${TEST_ID}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kueue-batch-admin-role
subjects:
- kind: ServiceAccount
  name: kueue-test-user-${TEST_ID}
  namespace: namespace-a-${TEST_ID}
EOF

Verification:

oc get clusterrolebinding pending-workloads-admin-${TEST_ID}
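
Optionally, confirm the ServiceAccount can actually read pending workloads by impersonating it. This is a sketch: the resource and group names assume the Kueue visibility API (visibility.kueue.x-k8s.io) backs the pendingworkloads subresource.

oc auth can-i get clusterqueues.visibility.kueue.x-k8s.io/pendingworkloads \
  --as=system:serviceaccount:namespace-a-${TEST_ID}:kueue-test-user-${TEST_ID}

Expected: yes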

Test Execution

Phase 4: Submit Test Workloads

Step 4.1: Submit Job-Blocker to Namespace A

This job will consume all available resources in ClusterQueue A.

cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: job-blocker
  namespace: namespace-a-${TEST_ID}
  labels:
    kueue.x-k8s.io/queue-name: local-queue-a-${TEST_ID}
spec:
  parallelism: 1
  completions: 1
  template:
    spec:
      priorityClassName: high-priority-${TEST_ID}
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo 'Hello Kueue'; sleep 30"]
        resources:
          requests:
            cpu: "1"
            memory: "1Gi"
EOF

Wait 5-10 seconds for the workload to be created.

Verify job and workload are created:

oc get jobs -n namespace-a-${TEST_ID}
oc get workloads -n namespace-a-${TEST_ID}
oc get pods -n namespace-a-${TEST_ID}
Expected: Job should be running (unsuspended) and workload should be admitted.
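
Instead of a fixed sleep, you can block until the workload is admitted (a sketch; it assumes the Workload gains an Admitted condition once Kueue admits it):

oc wait --for=condition=Admitted workloads --all \
  -n namespace-a-${TEST_ID} --timeout=60s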

Step 4.2: Submit Additional Jobs to Namespace A

These jobs will be pending because ClusterQueue A is fully utilized by the blocker job.

Job High-A (High Priority = 100):

cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: job-high-a
  namespace: namespace-a-${TEST_ID}
  labels:
    kueue.x-k8s.io/queue-name: local-queue-a-${TEST_ID}
spec:
  parallelism: 1
  completions: 1
  template:
    spec:
      priorityClassName: high-priority-${TEST_ID}
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo 'Hello Kueue'; sleep 30"]
        resources:
          requests:
            cpu: "1"
            memory: "1Gi"
EOF

Job Medium-A (Medium Priority = 75):

cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: job-medium-a
  namespace: namespace-a-${TEST_ID}
  labels:
    kueue.x-k8s.io/queue-name: local-queue-a-${TEST_ID}
spec:
  parallelism: 1
  completions: 1
  template:
    spec:
      priorityClassName: medium-priority-${TEST_ID}
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo 'Hello Kueue'; sleep 30"]
        resources:
          requests:
            cpu: "1"
            memory: "1Gi"
EOF

Job Low-A (Low Priority = 50):

cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: job-low-a
  namespace: namespace-a-${TEST_ID}
  labels:
    kueue.x-k8s.io/queue-name: local-queue-a-${TEST_ID}
spec:
  parallelism: 1
  completions: 1
  template:
    spec:
      priorityClassName: low-priority-${TEST_ID}
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo 'Hello Kueue'; sleep 30"]
        resources:
          requests:
            cpu: "1"
            memory: "1Gi"
EOF

Verify jobs were created:

oc get jobs -n namespace-a-${TEST_ID}
Expected: All 4 jobs should be visible (job-blocker, job-high-a, job-medium-a, job-low-a)
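
To confirm that Kueue picked up the intended priorities from each pod's priorityClassName, a custom-columns view of the Workload objects can help (a sketch; .spec.queueName and .spec.priority are the fields Kueue populates on the Workload):

oc get workloads -n namespace-a-${TEST_ID} \
  -o custom-columns=NAME:.metadata.name,QUEUE:.spec.queueName,PRIORITY:.spec.priority

Expected: four workloads pointing at local-queue-a-${TEST_ID}, with priorities 100 (blocker), 100, 75, and 50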

Step 4.3: Submit Jobs to Namespace B

Submit jobs to Namespace B. The blocker job will be admitted and run immediately, while job-high-b will be pending.

Job Blocker-B (High Priority = 100):

cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: job-blocker-b
  namespace: namespace-b-${TEST_ID}
  labels:
    kueue.x-k8s.io/queue-name: local-queue-b-${TEST_ID}
spec:
  parallelism: 1
  completions: 1
  template:
    spec:
      priorityClassName: high-priority-${TEST_ID}
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo 'Hello Kueue'; sleep 30"]
        resources:
          requests:
            cpu: "1"
            memory: "1Gi"
EOF

Job High-B (High Priority = 100):

cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: job-high-b
  namespace: namespace-b-${TEST_ID}
  labels:
    kueue.x-k8s.io/queue-name: local-queue-b-${TEST_ID}
spec:
  parallelism: 1
  completions: 1
  template:
    spec:
      priorityClassName: high-priority-${TEST_ID}
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo 'Hello Kueue'; sleep 30"]
        resources:
          requests:
            cpu: "1"
            memory: "1Gi"
EOF

Verify jobs were created:

oc get jobs -n namespace-b-${TEST_ID}
Expected: Both jobs should be visible (job-blocker-b, job-high-b)

Wait 5-10 seconds for the blocker-b workload to be created.

Verify blocker-b job and workload are created:

oc get jobs -n namespace-b-${TEST_ID}
oc get workloads -n namespace-b-${TEST_ID}
oc get pods -n namespace-b-${TEST_ID}
Expected: job-blocker-b should be running (unsuspended), its workload should be admitted, and job-high-b should remain suspended with its workload pending

Verification Steps

Phase 5: Verify Pending Workloads in UI

✅ Checkpoint 1: Verify ClusterQueue A Pending Workloads

Action: Verify ClusterQueue A has 3 pending workloads with correct priority ordering

Important: Before checking pending workloads, ensure the blocker job's workload has been created and admitted, and give the other jobs a few seconds for their Workload objects to appear. Otherwise another job may be admitted in the blocker's place, or the pending list may not yet match the expected count and ordering.

Method 1 - Via OCP 4.19 Web Console UI:

  1. Open OCP 4.19 Web Console in your web browser
    • URL typically: https://console-openshift-console.apps.<cluster-domain>
  2. Ensure you're in the Administrator perspective (top-left dropdown)
  3. In the left navigation menu, navigate to:
    • Workloads → Jobs (to verify jobs exist)
  4. For Kueue resources, go to:
    • Home → API Explorer
    • In the search box, type: ClusterQueue
    • Click on ClusterQueue under kueue.x-k8s.io/v1beta1
    • Click Instances tab
    • Find and click on cluster-queue-a-${TEST_ID}

Once in the ClusterQueue Details Page:

  1. Look for the Pending Workloads section in the Details view
  2. Or click on the YAML tab and check the status section
  3. For Visibility API access, you may need to use the CLI (Method 2 below) or a custom view

Method 2 - Via OC Commands:

First, verify blocker job workload is created and admitted:

oc get workloads -n namespace-a-${TEST_ID} -o json | \
  jq '.items[] | select(.metadata.ownerReferences[].name=="job-blocker") | {name: .metadata.name, admitted: (.status.conditions[] | select(.type=="Admitted") | .status)}'
Expected: Blocker workload should show "admitted": "True"

Check all workloads status:

oc get workloads -n namespace-a-${TEST_ID}

View pending workloads with details:

oc get workloads -n namespace-a-${TEST_ID} -o wide

Get pending workloads sorted by priority (if jq is available):

# Treat any workload that does not have an Admitted=True condition as pending
oc get workloads -n namespace-a-${TEST_ID} -o json | \
  jq '[.items[]
       | select(([.status.conditions[]? | select(.type=="Admitted" and .status=="True")] | length) == 0)
       | {name: .metadata.name, priority: .spec.priority}]
      | sort_by(.priority) | reverse'

Describe ClusterQueue to see status:

oc describe clusterqueue cluster-queue-a-${TEST_ID}

Check jobs status:

oc get jobs -n namespace-a-${TEST_ID}
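
You can also read the pending list straight from the Visibility API (a sketch; it assumes the visibility.kueue.x-k8s.io/v1beta1 endpoint referenced in the Test Strategy section and jq):

oc get --raw "/apis/visibility.kueue.x-k8s.io/v1beta1/clusterqueues/cluster-queue-a-${TEST_ID}/pendingworkloads" | \
  jq '.items[] | {name: .metadata.name, priority: .priority}'

Expected: three entries, listed highest priority first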

Expected Results:

Field                      Expected Value
Total Pending Workloads    3
1st Workload Name          job-high-a-*
1st Workload Priority      100
2nd Workload Name          job-medium-a-*
2nd Workload Priority      75
3rd Workload Name          job-low-a-*
3rd Workload Priority      50

UI Screenshot Location: screenshots/checkpoint-1-clusterqueue-a.png

✅ Checkpoint 2: Verify ClusterQueue B Pending Workloads

Action: Verify ClusterQueue B has 1 pending workload with high priority

Important: Before checking pending workloads, ensure the blocker-b job workload has been created and admitted.

Method 1 - Via OCP 4.19 Web Console UI:

  1. In the OCP Console, navigate to: Home → API Explorer
  2. Search for: ClusterQueue
  3. Click on ClusterQueue under kueue.x-k8s.io/v1beta1
  4. Click Instances tab
  5. Find and click on cluster-queue-b-${TEST_ID}

Once in the ClusterQueue Details Page:

  1. Look for the Pending Workloads section in the Details view
  2. Or click on the YAML tab and check the status section

Method 2 - Via OC Commands:

First, verify blocker-b job workload is created and admitted:

oc get workloads -n namespace-b-${TEST_ID} -o json | \
  jq '.items[] | select(.metadata.ownerReferences[].name=="job-blocker-b") | {name: .metadata.name, admitted: (.status.conditions[] | select(.type=="Admitted") | .status)}'
Expected: Blocker-b workload should show "admitted": "True"

Check all workloads status:

oc get workloads -n namespace-b-${TEST_ID}
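
Optionally, query the same information through the LocalQueue endpoint of the Visibility API (a sketch; the namespaced pendingworkloads path is assumed):

oc get --raw "/apis/visibility.kueue.x-k8s.io/v1beta1/namespaces/namespace-b-${TEST_ID}/localqueues/local-queue-b-${TEST_ID}/pendingworkloads" | \
  jq '.items | length'

Expected: 1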

Expected Results:

Field                      Expected Value
Total Pending Workloads    1
1st Workload Name          job-high-b-*
1st Workload Priority      100

UI Screenshot Location: screenshots/checkpoint-2-clusterqueue-b.png

✅ Checkpoint 3: Verify Priority Ordering

Action: Verify that workloads are displayed in descending priority order

Expected UI Behavior:

  • ClusterQueue A lists its pending workloads highest priority first: job-high-a (100), job-medium-a (75), job-low-a (50)
  • ClusterQueue B lists a single pending workload: job-high-b (100)

Pass Criteria:

  • The displayed order matches the descending priority order above
  • No pending workload is missing, duplicated, or shown out of order
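
For a CLI cross-check of the ordering, the short script below compares the priority list returned by the Visibility API against its own descending sort (a sketch; it assumes the same endpoint and jq as in Checkpoint 1):

priorities=$(oc get --raw "/apis/visibility.kueue.x-k8s.io/v1beta1/clusterqueues/cluster-queue-a-${TEST_ID}/pendingworkloads" | jq -c '[.items[].priority]')
# The list passes if it already equals its descending sort
[ "$priorities" = "$(echo "$priorities" | jq -c 'sort | reverse')" ] \
  && echo "Priority ordering OK: $priorities" \
  || echo "Priority ordering WRONG: $priorities"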

✅ Checkpoint 4: Monitor Workload Execution

Action: Wait for the jobs to complete. Each job runs for roughly 30-45 seconds, but because each ClusterQueue only has quota for one job at a time, the jobs finish sequentially; allow a few minutes for all of them.

What to observe in the UI:

  1. Watch the pending workloads count decrease
  2. Monitor as workloads transition from "Pending" to "Running"
  3. Observe the dynamic update of the pending list

Expected Timeline:

  • Each job sleeps for 30 seconds, and each ClusterQueue only admits one job at a time, so jobs in the same queue complete one after another
  • ClusterQueue A (4 jobs, admitted in priority order blocker → high → medium → low): roughly 2-3 minutes until all are finished
  • ClusterQueue B (2 jobs): roughly 1-1.5 minutes until both are finished

CLI Monitoring (Optional):

watch -n 2 "oc get workloads -A | grep ${TEST_ID}"

Or using oc-specific watch:

oc get workloads -A --watch | grep ${TEST_ID}
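
Rather than watching, you can also block until every workload reports completion before moving to Checkpoint 5 (a sketch; it assumes Workloads gain a Finished condition when their job completes):

oc wait --for=condition=Finished workloads --all -n namespace-a-${TEST_ID} --timeout=300s
oc wait --for=condition=Finished workloads --all -n namespace-b-${TEST_ID} --timeout=300s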

✅ Checkpoint 5: Verify All Workloads Completed

Action: Verify all workloads transition to "Finished" status

CLI Verification - Check Workload Status:

# Check all workloads in namespace A
oc get workloads -n namespace-a-${TEST_ID} -o json | \
  jq '.items[] | {name: .metadata.name, finished: (.status.conditions[] | select(.type=="Finished") | .status)}'
# Check all workloads in namespace B
oc get workloads -n namespace-b-${TEST_ID} -o json | \
  jq '.items[] | {name: .metadata.name, finished: (.status.conditions[] | select(.type=="Finished") | .status)}'
Expected: Every workload in both namespaces should report "finished": "True"

Verify Empty Pending Lists:

# Should print nothing: lists test workloads that do NOT yet have a Finished=True condition
oc get workloads -A -o json | \
  jq -r --arg id "${TEST_ID}" '.items[]
    | select(.metadata.namespace | contains($id))
    | select(([.status.conditions[]? | select(.type=="Finished" and .status=="True")] | length) == 0)
    | .metadata.name'

UI Verification:

For ClusterQueue A: the Pending Workloads section (or the status in the YAML tab) shows an empty list and a pending count of 0

For ClusterQueue B: the Pending Workloads section shows an empty list and a pending count of 0

UI Screenshot Location: screenshots/checkpoint-5-completed-workloads.png

Success Criteria

Functional Requirements

Requirement                                                     Status          Notes
Pending workloads are correctly displayed for ClusterQueue A    ☐ Pass ☐ Fail   Should show 3 workloads
Pending workloads are correctly displayed for ClusterQueue B    ☐ Pass ☐ Fail   Should show 1 workload
Workloads are ordered by priority (highest first)               ☐ Pass ☐ Fail   100 → 75 → 50
Pending count matches expected values                           ☐ Pass ☐ Fail   3 for Queue A, 1 for Queue B
Pending list updates dynamically as jobs execute                ☐ Pass ☐ Fail   Real-time updates
All workloads transition to Finished status                     ☐ Pass ☐ Fail   All workloads complete successfully
Lists become empty when all jobs complete                       ☐ Pass ☐ Fail   Both queues show 0 pending

UI/UX Requirements

Requirement                                Status          Notes
Priority values are clearly displayed      ☐ Pass ☐ Fail   Visible in UI
Workload names are human-readable          ☐ Pass ☐ Fail   Not truncated
Refresh/reload functionality works         ☐ Pass ☐ Fail   Manual refresh if needed
No UI errors or crashes                    ☐ Pass ☐ Fail   Clean operation
Loading states are appropriate             ☐ Pass ☐ Fail   Spinners, etc.

Test Cleanup

Phase 6: Delete Test Resources

Execute the following commands to clean up all test resources.

Step 6.1: Delete Jobs and Workloads

Note: Workloads are typically cleaned up automatically when their parent jobs are deleted.
oc delete jobs --all -n namespace-a-${TEST_ID}
oc delete jobs --all -n namespace-b-${TEST_ID}

Verification:

oc get jobs -n namespace-a-${TEST_ID}
oc get jobs -n namespace-b-${TEST_ID}

Verify workloads are also deleted:

oc get workloads -n namespace-a-${TEST_ID}
oc get workloads -n namespace-b-${TEST_ID}
Expected: No jobs or workloads should remain

Step 6.2: Delete LocalQueues

oc delete localqueue local-queue-a-${TEST_ID} -n namespace-a-${TEST_ID}
oc delete localqueue local-queue-b-${TEST_ID} -n namespace-b-${TEST_ID}

Step 6.3: Delete Namespaces

oc delete namespace namespace-a-${TEST_ID}
oc delete namespace namespace-b-${TEST_ID}
Note: Deleting namespaces will also delete all resources within them (ServiceAccounts, Jobs, LocalQueues, etc.)
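
Namespace deletion is asynchronous. If you want to block until it completes before running the final verification, a wait such as the following can be used:

oc wait --for=delete namespace/namespace-a-${TEST_ID} --timeout=120s
oc wait --for=delete namespace/namespace-b-${TEST_ID} --timeout=120s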

Step 6.4: Delete ClusterRoleBinding

oc delete clusterrolebinding pending-workloads-admin-${TEST_ID}

Step 6.5: Delete ClusterQueues

oc delete clusterqueue cluster-queue-a-${TEST_ID}
oc delete clusterqueue cluster-queue-b-${TEST_ID}

Step 6.6: Delete ResourceFlavor

oc delete resourceflavor resource-flavor-${TEST_ID}

Step 6.7: Delete PriorityClasses

oc delete priorityclass high-priority-${TEST_ID}
oc delete priorityclass medium-priority-${TEST_ID}
oc delete priorityclass low-priority-${TEST_ID}

Step 6.8: Final Verification

Verify all test resources have been cleaned up:

# Should return no results
oc get all,clusterqueues,localqueues,resourceflavors,priorityclasses -A | grep ${TEST_ID}
Expected: No output (all resources deleted)

References

Source Information

Documentation Links

CLI Command Cheat Sheet

Action                                    OpenShift CLI Command
Get ClusterQueues                         oc get clusterqueues
Get LocalQueues                           oc get localqueues -n <namespace>
Get Workloads                             oc get workloads -n <namespace>
Get Pending Workloads (Visibility API)    oc get --raw "/apis/visibility.kueue.x-k8s.io/v1beta1/clusterqueues/<name>/pendingworkloads"
Describe ClusterQueue                     oc describe clusterqueue <name>
Get ResourceFlavors                       oc get resourceflavors
Check Kueue Operator                      oc get pods -n kueue-system
View Logs                                 oc logs -n kueue-system deployment/kueue-controller-manager

Document Version: 3.1

Last Updated: 2025-10-29

Platform: OpenShift Container Platform (OCP) 4.19

CLI Tool: oc (OpenShift CLI)

Maintained By: QA Team

Document Changelog

Version  Date        Changes
3.1      2025-10-29  Updated to align with latest test code: added workload completion verification; updated timeline; added verification for the "Finished" condition; enhanced cleanup steps
3.0      2025-10-28  Simplified to inline-only commands; added OCP 4.19-specific UI navigation; removed file-based options
2.0      2025-10-28  Updated all commands to use the oc CLI instead of kubectl; added YAML file examples; enhanced troubleshooting
1.0      2025-10-28  Initial version with kubectl commands