OpenShift Bugs · OCPBUGS-53180

Network-console-plugin is degraded and shows "Failed to get a valid plugin manifest from /api/plugins/networking-console-plugin/"


    • Impact: Quality / Stability / Reliability
    • Priority: Critical
    • Status: In Progress
    • Release Note Type: Release Note Not Required

      The network-console-plugin is degraded and shows "Failed to get a valid plugin manifest from /api/plugins/networking-console-plugin/"

       

      Below are the errors from the logs.

      The console pod logs show:

      $ oc logs console-69f759dd97-wc2tm -n openshift-console
      2025-03-10T12:46:04.270027809Z E0310 12:46:04.269937       1 handlers.go:164] failed to send GET request for "networking-console-plugin" plugin: Get "https://networking-console-plugin.openshift-network-console.svc.cluster.local:9443/plugin-manifest.json": dial tcp [fd02::40d8]:9443: connect: connection refused
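
      The "connection refused" on the plugin service port suggests the service has no ready backends. A possible first check (a hedged sketch; the namespace openshift-network-console and port 9443 are taken from the service URL in the log above):

      ```shell
      # Check whether the networking-console-plugin pods are running and ready
      oc get pods -n openshift-network-console

      # Confirm the service actually has endpoints backing port 9443;
      # "connection refused" often means this list is empty
      oc get endpoints networking-console-plugin -n openshift-network-console
      ```

      If the endpoints list is empty, the plugin pods themselves (or their readiness probes) are the place to look next, rather than the console.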

      The console operator pod logs show:

      $ oc logs pod/console-operator-58897d9998-bxwrs -n openshift-console-operator

      2025-03-12T06:32:00.929616322Z E0312 06:32:00.929556       1 sync_v400.go:642] failed to get "odf-client-console" plugin: consoleplugin.console.openshift.io "odf-client-console" not found

      However, the odf-console plugin and other plugins such as monitoring are also enabled. Only the plugin manifest for networking-console-plugin is unavailable, which produces the stated error - Failed to get a valid plugin manifest from /api/plugins/networking-console-plugin/
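
      To narrow down whether the failure is network-level or inside the plugin, the manifest can be fetched directly from within the cluster. The commands below are a hedged sketch (the curl binary may not be present in every image; the ConsolePlugin resource name is taken from the error message):

      ```shell
      # Try fetching the plugin manifest from a console pod to reproduce the
      # failure outside the console's own proxy path
      oc exec -n openshift-console deploy/console -- \
        curl -sk https://networking-console-plugin.openshift-network-console.svc.cluster.local:9443/plugin-manifest.json

      # Verify the ConsolePlugin resource exists and points at the expected
      # service, namespace, and port
      oc get consoleplugin networking-console-plugin -o yaml
      ```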
      Cluster ID: 01243e29-2ea7-434e-9a93-0dde4f27a9a9
      Cluster Version: 4.18.1(unverified)
      Desired Version: 4.18.1
      Channel: stable-4.18
      Previous Version(s): 
       
      Infrastructure
      --------------
      Platform: BareMetal
      Control Plane Topology: HighlyAvailable
      apiServerInternalIP: 2405:200:8a4:2302:b00::1
      apiServerInternalIPs: 2405:200:8a4:2302:b00::1
      ingressIP: 2405:200:8a4:2302:b00::2
      ingressIPs: 2405:200:8a4:2302:b00::2
      loadBalancer: None
      machineNetworks: 2405:200:8a4:2302::/64
      Install Type: infrastructure-operator
       
      Network
      -------
      Network Type: OVNKubernetes
      httpProxy: None
      httpsProxy: None
      Cluster network: fd01::/48
      Host prefix: 64
      Max nodes: 65536
      Max pods per node: 18446744073709551616
       

      The customer is seeing this on a new 4.18 cluster, while another 4.18 cluster they run does not show this issue.

       

      People
      ------
      Oren Cohen (ocohen@redhat.com)
      Esther Ruby R (rhn-support-eruby)
      Guohua Ouyang