OpenShift Bugs / OCPBUGS-4877

MCO warns unknown fields from ControllerConfig


      Description of problem:

      Upgraded from 4.11.17 to 4.12.0-rc3 and found (after a successful upgrade) the following repeating in the Machine Config Operator logs:
      
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511120       1 warnings.go:70] unknown field "spec.dns.metadata.creationTimestamp"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511140       1 warnings.go:70] unknown field "spec.dns.metadata.generation"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511143       1 warnings.go:70] unknown field "spec.dns.metadata.managedFields"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511146       1 warnings.go:70] unknown field "spec.dns.metadata.name"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511148       1 warnings.go:70] unknown field "spec.dns.metadata.resourceVersion"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511151       1 warnings.go:70] unknown field "spec.dns.metadata.uid"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511153       1 warnings.go:70] unknown field "spec.infra.metadata.creationTimestamp"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511155       1 warnings.go:70] unknown field "spec.infra.metadata.generation"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511157       1 warnings.go:70] unknown field "spec.infra.metadata.managedFields"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511159       1 warnings.go:70] unknown field "spec.infra.metadata.name"
      2022-12-13T23:11:51.511167249Z W1213 23:11:51.511161       1 warnings.go:70] unknown field "spec.infra.metadata.resourceVersion"
      2022-12-13T23:11:51.511211644Z W1213 23:11:51.511163       1 warnings.go:70] unknown field "spec.infra.metadata.uid"

      Version-Release number of selected component (if applicable):

      4.12.0-rc3
      Platform-agnostic installation

      How reproducible:

      Just once so far (working with a user outside Red Hat)

      Steps to Reproduce:

      1. Install 4.11.17
      2. Set candidate-4.12 upgrade channel
      3. Initiate upgrade (apply admin ack as needed)
      4. After upgrade, check Machine Config Operator logs
      

      Actual results:

      The upgrade went fine, and I don't see any symptoms other than the warnings repeating in the MCO log.

      Expected results:

      I don't expect the warnings to be logged repeatedly.

      Additional info:

       

            [OCPBUGS-4877] MCO warns unknown fields from ControllerConfig

            Errata Tool added a comment -

            Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory (Important: OpenShift Container Platform 4.14.0 bug fix and security update), and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2023:5006


            Rio Liu added a comment -

            Upgraded from 4.13.0-0.nightly-2023-04-06-060829 to 4.14.0-0.nightly-2023-04-06-232846 successfully.

            Checked the machine-config-operator pod log; no 'unknown field' entries were found:

            oc logs -n openshift-machine-config-operator machine-config-operator-7c97bc89f-8kf4k | grep 'unknown field'
            Defaulted container "machine-config-operator" out of: machine-config-operator, kube-rbac-proxy


            Sinny Kumari added a comment - - edited

            These warnings are still present in 4.14 PRs, e.g. https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/3611/pull-ci-openshift-machine-config-operator-master-e2e-gcp-upgrade/1636301603910717440/artifacts/e2e-gcp-upgrade/gather-extra/artifacts/pods/openshift-machine-config-operator_machine-config-operator-66d8b99cc8-r5jhp_machine-config-operator.log

            That means the controllerconfig spec update didn't help here.

            W. Trevor King added a comment - - edited

            oc adm upgrade got into trouble with something similar, because we were calling Update on the resource, and the lossy deserialize -> edit -> Update flow dropped fields. We had to pivot to Patch. Not sure whether the MCO is vulnerable to the same sort of thing.
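
            A minimal sketch of the two patterns Trevor describes, assuming client-go's dynamic client against the cluster-scoped controllerconfigs resource; the field being changed ("someField") and the overall flow are placeholders for illustration, not an actual MCO code path:

            package main

            import (
                "context"
                "encoding/json"

                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
                "k8s.io/apimachinery/pkg/runtime/schema"
                "k8s.io/apimachinery/pkg/types"
                "k8s.io/client-go/dynamic"
                "k8s.io/client-go/rest"
            )

            func main() {
                cfg, err := rest.InClusterConfig()
                if err != nil {
                    panic(err)
                }
                client, err := dynamic.NewForConfig(cfg)
                if err != nil {
                    panic(err)
                }

                gvr := schema.GroupVersionResource{
                    Group:    "machineconfiguration.openshift.io",
                    Version:  "v1",
                    Resource: "controllerconfigs",
                }

                // Lossy pattern: Get into a typed struct that lags behind the CRD,
                // mutate, then Update. Fields the struct doesn't know about are
                // silently dropped when the full object is written back.
                //
                // cc, _ := typedClient.ControllerConfigs().Get(ctx, "machine-config-controller", metav1.GetOptions{})
                // cc.Spec.SomeField = "new-value"
                // typedClient.ControllerConfigs().Update(ctx, cc, metav1.UpdateOptions{})

                // Targeted pattern: send only the fields being changed, so nothing
                // else is round-tripped or dropped.
                patch, _ := json.Marshal(map[string]interface{}{
                    "spec": map[string]interface{}{
                        "someField": "new-value", // placeholder field
                    },
                })
                _, err = client.Resource(gvr).Patch(context.Background(),
                    "machine-config-controller", types.MergePatchType, patch, metav1.PatchOptions{})
                if err != nil {
                    panic(err)
                }
            }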


            John Kyros added a comment -

            I've seen this firsthand too, but haven't prioritized it because it's cosmetic: the MCO's actual ControllerConfig Go type is technically out of sync with the CRD. The Go ControllerConfig type includes the entire DNS/Infrastructure objects (which have metadata fields), but the ControllerConfig CRD does not care about these and does not include them.

            I don't think this is a regression on our (the MCO's) side; it's been like this for a while. It seems like something outside the MCO changed and now cares enough about our object containing "extra fields" that aren't in the CRD to log these warnings.

            We plan to do a "tech debt" sprint early next year, and one of our goals is to try to get the MCO's API/CRD back under the generators, which should alleviate this type of thing going forward.

            TL;DR: annoying/noisy but not risky; we hope to fix it early next year.
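
            As a minimal sketch of the mismatch described above (simplified, not the actual MCO types): a Go spec that embeds the full configv1 DNS/Infrastructure objects also serializes their ObjectMeta, producing spec.dns.metadata.* / spec.infra.metadata.* keys that a CRD schema never declaring them will flag as unknown fields.

            package main

            import (
                "encoding/json"
                "fmt"

                configv1 "github.com/openshift/api/config/v1"
                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            )

            // Hypothetical spec that embeds whole cluster objects, the way the
            // ControllerConfig Go type does for DNS and Infrastructure.
            type exampleSpec struct {
                DNS   *configv1.DNS            `json:"dns,omitempty"`
                Infra *configv1.Infrastructure `json:"infra,omitempty"`
            }

            func main() {
                spec := exampleSpec{
                    DNS: &configv1.DNS{
                        ObjectMeta: metav1.ObjectMeta{Name: "cluster", ResourceVersion: "1234"},
                    },
                }
                out, _ := json.MarshalIndent(spec, "", "  ")
                // The output includes dns.metadata.name, dns.metadata.resourceVersion,
                // and dns.metadata.creationTimestamp, none of which the CRD schema
                // declares, so the API server warns about them as unknown fields.
                fmt.Println(string(out))
            }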


            W. Trevor King added a comment -

            This issue is reproduced in CI, e.g. this run:

            $ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade/1603025581102862336/artifacts/e2e-aws-sdn-upgrade/gather-extra/artifacts/pods/openshift-machine-config-operator_machine-config-operator-6f685b56bc-x6c9n_machine-config-operator.log | grep 'unknown field' | tail
            W1214 16:44:25.634065       1 warnings.go:70] unknown field "spec.dns.metadata.managedFields"
            W1214 16:44:25.634067       1 warnings.go:70] unknown field "spec.dns.metadata.name"
            W1214 16:44:25.634070       1 warnings.go:70] unknown field "spec.dns.metadata.resourceVersion"
            W1214 16:44:25.634072       1 warnings.go:70] unknown field "spec.dns.metadata.uid"
            W1214 16:44:25.634075       1 warnings.go:70] unknown field "spec.infra.metadata.creationTimestamp"
            W1214 16:44:25.634077       1 warnings.go:70] unknown field "spec.infra.metadata.generation"
            W1214 16:44:25.634080       1 warnings.go:70] unknown field "spec.infra.metadata.managedFields"
            W1214 16:44:25.634083       1 warnings.go:70] unknown field "spec.infra.metadata.name"
            W1214 16:44:25.634085       1 warnings.go:70] unknown field "spec.infra.metadata.resourceVersion"
            W1214 16:44:25.634088       1 warnings.go:70] unknown field "spec.infra.metadata.uid"
            

            So it may be possible to test fixes pre-merge using payload testing.

