OpenShift Bugs · OCPBUGS-59257

CI fails on sig-api-machinery FieldValidation tests

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Undefined
    • Affects Versions: 4.19, 4.20
    • Component: kube-apiserver
    • Quality / Stability / Reliability
    • Severity: Low

      Description of problem

      CI is flaky because the following tests fail intermittently:

      • [sig-api-machinery] FieldValidation should create/apply a CR with unknown fields for CRD with no validation schema
      • [sig-api-machinery] FieldValidation should detect unknown metadata fields in both the root and embedded object of a CR
      • [sig-api-machinery] FieldValidation should create/apply a valid CR for CRD with validation schema
      • [sig-api-machinery] FieldValidation should detect duplicates in a CR when preserving unknown fields
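
      For context, these tests exercise the API server's server-side field validation. A minimal illustration (with a hypothetical CRD group, kind, and fields; not taken from the test code) of the kind of object the tests apply:

      ```yaml
      # Hypothetical CR for a CRD with a validation schema. With server-side
      # field validation set to Strict (e.g. `kubectl apply --validate=strict`,
      # or the fieldValidation=Strict query parameter on the API request),
      # the unknown field below is rejected.
      apiVersion: example.com/v1   # hypothetical group/version
      kind: Widget                 # hypothetical kind
      metadata:
        name: strict-validation-demo
      spec:
        knownField: value
        unknownField: value        # unknown field: rejected under Strict
      ```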

      These failures look like the following:

        STEP: Creating a kubernetes client @ 07/10/25 19:52:37.714
        STEP: Building a namespace api object, basename field-validation @ 07/10/25 19:52:37.716
      I0710 19:52:37.777934 20405 namespace.go:59] About to run a Kube e2e test, ensuring namespace/e2e-field-validation-7456 is privileged
        STEP: Waiting for a default service account to be provisioned in namespace @ 07/10/25 19:52:38.133
        STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 07/10/25 19:52:38.136
      I0710 19:53:10.759154 20405 field_validation.go:575] Unexpected error: deleting CustomResourceDefinition: 
          <wait.errInterrupted>: 
          timed out waiting for the condition
          {
              cause: <*errors.errorString | 0xc000464ba0>{
                  s: "timed out waiting for the condition",
              },
          }
        [FAILED] in [It] - k8s.io/kubernetes/test/e2e/apimachinery/field_validation.go:575 @ 07/10/25 19:53:10.759
        STEP: dump namespace information after failure @ 07/10/25 19:53:10.774
        STEP: Collecting events from namespace "e2e-field-validation-7456". @ 07/10/25 19:53:10.774
        STEP: Found 0 events. @ 07/10/25 19:53:10.784
      I0710 19:53:10.786828 20405 resource.go:168] POD  NODE  PHASE  GRACE  CONDITIONS
      I0710 19:53:10.786860 20405 resource.go:178] 
      I0710 19:53:10.796368 20405 dump.go:81] skipping dumping cluster info - cluster too large
        STEP: Destroying namespace "e2e-field-validation-7456" for this suite. @ 07/10/25 19:53:10.796
      
      fail [k8s.io/kubernetes/test/e2e/apimachinery/field_validation.go:575]: deleting CustomResourceDefinition: timed out waiting for the condition
      

      This particular failure comes from https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_cluster-ingress-operator/1227/pull-ci-openshift-cluster-ingress-operator-master-e2e-aws-ovn-techpreview/1943373659754205184. Search.ci has other similar failures.
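
      The "timed out waiting for the condition" error comes from the polling helpers in k8s.io/apimachinery/pkg/util/wait: the test deletes the CRD and then polls until it is gone, and the poll gives up after a timeout. A simplified sketch of that pattern (not the actual test code, which lives in field_validation.go):

      ```go
      package main

      import (
      	"errors"
      	"fmt"
      	"time"
      )

      // errWaitTimeout mirrors the error string produced by the wait helpers
      // in k8s.io/apimachinery/pkg/util/wait (simplified here).
      var errWaitTimeout = errors.New("timed out waiting for the condition")

      // pollUntil calls condition every interval until it reports done or the
      // timeout elapses, in which case it returns errWaitTimeout.
      func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
      	deadline := time.Now().Add(timeout)
      	for {
      		done, err := condition()
      		if err != nil {
      			return err
      		}
      		if done {
      			return nil
      		}
      		if time.Now().After(deadline) {
      			return errWaitTimeout
      		}
      		time.Sleep(interval)
      	}
      }

      func main() {
      	// Simulate a CRD whose deletion never completes: the condition never
      	// reports done, so the poll times out just as in the CI failure.
      	err := pollUntil(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
      		return false, nil // CRD still present
      	})
      	fmt.Printf("deleting CustomResourceDefinition: %v\n", err)
      	// prints: deleting CustomResourceDefinition: timed out waiting for the condition
      }
      ```

      In the CI failure the condition presumably stays false because the CRD's deletion is slow (finalizers, apiserver load), so the fixed poll timeout expires before the CRD is actually removed.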

      Version-Release number of selected component (if applicable)

      I have seen this in 4.20 and 4.19 CI jobs.

      How reproducible

      Presently, search.ci shows the following stats for the past two days:

      periodic-ci-openshift-release-master-konflux-nightly-4.19-e2e-aws-ovn-upgrade (all) - 60 runs, 3% failed, 50% of failures match = 2% impact
      pull-ci-openshift-cluster-ingress-operator-master-e2e-aws-ovn-techpreview (all) - 4 runs, 75% failed, 33% of failures match = 25% impact
      openshift-ovn-kubernetes-2658-nightly-4.19-e2e-aws-ovn-upgrade-fips (all) - 20 runs, 100% failed, 5% of failures match = 5% impact
      periodic-ci-openshift-release-master-ci-4.20-upgrade-from-stable-4.19-e2e-azure-ovn-upgrade (all) - 51 runs, 53% failed, 4% of failures match = 2% impact
      periodic-ci-openshift-multiarch-master-nightly-4.19-ocp-e2e-ovn-remote-libvirt-s390x (all) - 9 runs, 89% failed, 13% of failures match = 11% impact
      periodic-ci-openshift-release-master-ci-4.20-upgrade-from-stable-4.19-e2e-gcp-ovn-rt-upgrade (all) - 20 runs, 60% failed, 8% of failures match = 5% impact
      

      Steps to Reproduce

      1. Post a PR and have bad luck.
      2. Check search.ci.

      Actual results

      CI fails.

      Expected results

      CI passes, or fails on some other test.

              Assignee: Unassigned
              Reporter: Miciah Masters (mmasters1@redhat.com)
              QA Contact: Ke Wang
              Votes: 0
              Watchers: 2