ACM-1600: managedcluster CR does not show API/console URL


      We recently updated the HostedCluster CR to use OVNKubernetes as the default network type (instead of OpenShiftSDN).
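
      For reference, the configured network type can be confirmed directly on the HostedCluster CR. A quick check (the HostedCluster name and the "clusters" namespace are placeholders for our actual values):

      oc get hostedcluster <hosted-cluster-name> -n clusters -o jsonpath='{.spec.networking.networkType}'
      # expected output: OVNKubernetes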

      The managed cluster registered successfully, but the console/API URLs do not appear on the ManagedCluster CR.
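
      They would normally be visible on the ManagedCluster CR itself; a sketch of where to look, assuming the standard ACM clusterClaim name for the console URL:

      # API URL reported by the klusterlet
      oc get managedcluster 1u96hviqicdnstdd87heiq76m6k89sl1 -o jsonpath='{.spec.managedClusterClientConfigs[*].url}'

      # console URL, exposed as a clusterClaim
      oc get managedcluster 1u96hviqicdnstdd87heiq76m6k89sl1 -o jsonpath='{.status.clusterClaims[?(@.name=="consoleurl.cluster.open-cluster-management.io")].value}'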

      The managedclusteraddons on the SC are all in an Unknown state:

      oc get managedclusteraddon -n 1u96hviqicdnstdd87heiq76m6k89sl1
      NAME                          AVAILABLE   DEGRADED   PROGRESSING
      cert-policy-controller        Unknown                
      config-policy-controller      Unknown                
      governance-policy-framework   Unknown                
      iam-policy-controller         Unknown                
      work-manager                  Unknown  
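
      The per-addon conditions can be inspected for more detail (work-manager picked here as a representative example):

      oc describe managedclusteraddon work-manager -n 1u96hviqicdnstdd87heiq76m6k89sl1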

      On the hosted cluster, all pods in the `open-cluster-management-agent-addon` namespace are Pending:

      oc get po -n open-cluster-management-agent-addon
      NAME                                           READY   STATUS    RESTARTS   AGE
      cert-policy-controller-6f78fdb8f-w6drh         0/1     Pending   0          4m36s
      config-policy-controller-c649b9667-sdz78       0/1     Pending   0          4h46m
      governance-policy-framework-6dd9fcdb6b-54vlm   0/3     Pending   0          4h47m
      iam-policy-controller-76fd5575dc-mlnjh         0/1     Pending   0          4m36s
      klusterlet-addon-workmgr-7ccf9864b4-czvjv      0/1     Pending   0          4h45m 

      The pod description:

      oc describe po -n open-cluster-management-agent-addon cert-policy-controller-6f78fdb8f-w6drh
      Name:           cert-policy-controller-6f78fdb8f-w6drh
      Namespace:      open-cluster-management-agent-addon
      Priority:       0
      Node:           ip-10-100-131-44.ec2.internal/10.100.131.44
      Start Time:     Tue, 23 Aug 2022 16:08:12 +0000
      Labels:         app=cert-policy-controller
                      chart=cert-policy-controller-2.2.0
                      heritage=Helm
                      pod-template-hash=6f78fdb8f
                      release=cert-policy-controller
      Annotations:    openshift.io/scc: restricted-v2
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
      Status:         Pending
      IP:             
      IPs:            <none>
      Controlled By:  ReplicaSet/cert-policy-controller-6f78fdb8f
      Containers:
        cert-policy-controller:
          Container ID:  
          Image:         quay.io/stolostron/cert-policy-controller@sha256:f0bb71377b9270681c0ff2362b34efcf4b830788fcb6ec56db01fee3d8e4c354
          Image ID:      
          Port:          <none>
          Host Port:     <none>
          Args:
            --enable-lease=true
            --cluster-name=1u96hviqicdnstdd87heiq76m6k89sl1
            --update-frequency=30
            --log-encoder=console
            --log-level=0
            --v=-1
          State:          Waiting
            Reason:       ContainerCreating
          Ready:          False
          Restart Count:  0
          Limits:
            memory:  300Mi
          Requests:
            memory:   150Mi
          Liveness:   exec [sh -c pgrep cert-policy -l] delay=30s timeout=5s period=10s #success=1 #failure=3
          Readiness:  exec [sh -c exec echo start certificate-policy-controller] delay=10s timeout=2s period=10s #success=1 #failure=3
          Environment:
            WATCH_NAMESPACE:  1u96hviqicdnstdd87heiq76m6k89sl1
            POD_NAME:         cert-policy-controller-6f78fdb8f-w6drh (v1:metadata.name)
            OPERATOR_NAME:    cert-policy-controller
            HTTP_PROXY:       
            HTTPS_PROXY:      
            NO_PROXY:         
          Mounts:
            /var/run/klusterlet from klusterlet-config (rw)
            /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s9gfv (ro)
      Conditions:
        Type              Status
        Initialized       True 
        Ready             False 
        ContainersReady   False 
        PodScheduled      True 
      Volumes:
        klusterlet-config:
          Type:        Secret (a volume populated by a Secret)
          SecretName:  cert-policy-controller-hub-kubeconfig
          Optional:    false
        kube-api-access-s9gfv:
          Type:                    Projected (a volume that contains injected data from multiple sources)
          TokenExpirationSeconds:  3607
          ConfigMapName:           kube-root-ca.crt
          ConfigMapOptional:       <nil>
          DownwardAPI:             true
          ConfigMapName:           openshift-service-ca.crt
          ConfigMapOptional:       <nil>
      QoS Class:                   Burstable
      Node-Selectors:              <none>
      Tolerations:                 CriticalAddonsOnly op=Exists
                                   dedicated:NoSchedule op=Exists
                                   node-role.kubernetes.io/infra:NoSchedule op=Exists
                                   node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                                   node.kubernetes.io/not-ready:NoSchedule op=Exists
                                   node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
      Events:
        Type     Reason             Age                  From                Message
        ----     ------             ----                 ----                -------
        Warning  FailedScheduling   5m12s                default-scheduler   0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
        Normal   Scheduled          36s                  default-scheduler   Successfully assigned open-cluster-management-agent-addon/cert-policy-controller-6f78fdb8f-w6drh to ip-10-100-131-44.ec2.internal by kube-scheduler-7c679c8f5-dcw5x
        Normal   NotTriggerScaleUp  63s (x25 over 5m3s)  cluster-autoscaler  pod didn't trigger scale-up:
        Warning  FailedMount        15s (x6 over 30s)    kubelet             MountVolume.SetUp failed for volume "klusterlet-config" : object "open-cluster-management-agent-addon"/"cert-policy-controller-hub-kubeconfig" not registered
        Warning  FailedMount        14s (x6 over 30s)    kubelet             MountVolume.SetUp failed for volume "kube-api-access-s9gfv" : [object "open-cluster-management-agent-addon"/"kube-root-ca.crt" not registered, object "open-cluster-management-agent-addon"/"openshift-service-ca.crt" not registered]
        Warning  NetworkNotReady    6s (x13 over 30s)    kubelet             network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
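
      The NetworkNotReady events suggest the CNI on the hosted cluster never came up after the network type change; the FailedMount errors are likely secondary to that. A next step would be to check the network stack on the hosted cluster directly; a sketch, assuming the cluster should now be running OVN-Kubernetes:

      # overall state of the cluster network operator
      oc get clusteroperator network

      # network type the operator is actually rolling out
      oc get network.operator cluster -o jsonpath='{.spec.defaultNetwork.type}'

      # OVN-Kubernetes pods; missing or crashing pods would explain the absent CNI config
      oc get pods -n openshift-ovn-kubernetes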
