OCPBUGS-8881: Deployment fails with Proxy mode enabled, IPv6 only


Details

    • Severity: Moderate
    • Sprint: Metal Platform 233, Metal Platform 234, Metal Platform 235, Metal Platform 236, Metal Platform 237, Metal Platform 238

    Description

Run a regular deployment with PROVISIONING_IPV6, BAREMETAL_IPV6, and PROXY enabled.
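      For reference, a minimal sketch of the relevant install-config.yaml stanzas for such an IPv6-only proxied deployment. The cluster name and base domain are taken from the route hostname in the status output below; the machine network, proxy URL, and noProxy value are illustrative assumptions, not values captured from this cluster:

      apiVersion: v1
      baseDomain: qe.lab.redhat.com
      metadata:
        name: ocp-edge-cluster-0
      networking:
        networkType: OVNKubernetes
        machineNetwork:
        - cidr: fd2e:6f44:5dd8:c956::/120    # illustrative machine network
        clusterNetwork:
        - cidr: fd01::/48                    # OpenShift default IPv6 cluster network
          hostPrefix: 64
        serviceNetwork:
        - fd02::/112                         # OpenShift default IPv6 service network
      proxy:
        httpProxy: http://[fd00:1101::1]:8080    # illustrative proxy endpoint
        httpsProxy: http://[fd00:1101::1]:8080   # illustrative proxy endpoint
        noProxy: .qe.lab.redhat.com              # illustrative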

      ClusterID: 5a40d237-cb95-4112-9666-3d8ec5e69023
      ClusterVersion: Installing "4.6.28" for 3 hours: Unable to apply 4.6.28: some cluster operators have not yet rolled out
      ClusterOperators:
      clusteroperator/authentication is not available (ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).
      OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints
      OAuthServiceCheckEndpointAccessibleControllerAvailable: Get "https://[fd02::3409]:443/healthz": dial tcp [fd02::3409]:443: connect: connection refused
      WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap "oauth-openshift" not found (check authentication operator, it is supposed to create this)) because OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready
      OAuthServiceCheckEndpointAccessibleControllerDegraded: Get "https://[fd02::3409]:443/healthz": dial tcp [fd02::3409]:443: connect: connection refused
      IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server
      WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap "oauth-openshift" not found (check authentication operator, it is supposed to create this)
      OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps "oauth-openshift" not found
      RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp-edge-cluster-0.qe.lab.redhat.com: route status ingress is empty
      OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address
      OAuthServerDeploymentDegraded: deployments.apps "oauth-openshift" not found
      OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp-edge-cluster-0.qe.lab.redhat.com: route status ingress is empty
      clusteroperator/console is not available () because
      clusteroperator/ingress is not available (Not all ingress controllers are available.) because Some ingresscontrollers are degraded: ingresscontroller "default" is degraded: DegradedConditions: One or more other status conditions indicate a degraded state: PodsScheduled=False (PodsNotScheduled: Some pods are not scheduled: Pod "router-default-56b7cd79df-2sbg6" cannot be scheduled: 0/3 nodes are available: 3 node(s) didn't match node selector. Pod "router-default-56b7cd79df-4dsm5" cannot be scheduled: 0/3 nodes are available: 3 node(s) didn't match node selector. Make sure you have sufficient worker nodes.), DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.), DeploymentReplicasMinAvailable=False (DeploymentMinimumReplicasNotMet: 0/2 of replicas are available, max unavailable is 1), DeploymentReplicasAllAvailable=False (DeploymentReplicasNotAvailable: 0/2 of replicas are available)
      clusteroperator/kube-storage-version-migrator is not available (Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available) because
      clusteroperator/monitoring is not available () because Failed to rollout the stack. Error: running task Updating Alertmanager failed: waiting for Alertmanager Route to become ready failed: waiting for route openshift-monitoring/alertmanager-main: no status available
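      One detail worth checking when triaging this class of failure: the unreachable endpoint https://[fd02::3409]:443 sits inside fd02::/112, the default IPv6 service network, so requests to it must bypass the proxy. The effective exclusions can be read from the cluster-scoped Proxy object (oc get proxy cluster -o yaml). A sketch of the status block one would expect on a healthy proxied IPv6 deployment, assuming the same illustrative proxy endpoint as above:

      status:
        httpProxy: http://[fd00:1101::1]:8080    # illustrative
        httpsProxy: http://[fd00:1101::1]:8080   # illustrative
        noProxy: .cluster.local,.svc,fd01::/48,fd02::/112,localhost   # generated exclusions must cover the IPv6 cluster and service networks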


    People

      Derek Higgins (dhiggins@redhat.com)
      OpenShift Jira Bot (openshift_jira_bot)
      Amit Ugol
      Votes: 0
      Watchers: 7
