OpenShift Logging / LOG-6383

[release-6.1] When enabling input receivers in the CLF, the TLS secrets can't be created and the collector pods can't become ready.


    • Before this update, an input receiver's service was endlessly created and deleted, causing TLS secret mounting issues. With this update, an input receiver's service is created and is only deleted if it is not defined in the spec.
    • Bug Fix
    • Log Collection - Sprint 262, Log Collection - Sprint 263
    • Moderate

      Description of problem:

      Create a ClusterLogForwarder (CLF) with the YAML below to enable input receivers:

      apiVersion: observability.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: http-to-splunk
        namespace: e2e-test-vector-splunk-kb99w
      spec:
        inputs:
        - name: httpserver1
          receiver:
            http:
              format: kubeAPIAudit
            port: 8081
            type: http
          type: receiver
        - name: httpserver2
          receiver:
            http:
              format: kubeAPIAudit
            port: 8082
            type: http
          type: receiver
        - name: httpserver3
          receiver:
            http:
              format: kubeAPIAudit
            port: 8083
            type: http
          type: receiver
        managementState: Managed
        outputs:
        - name: splunk-aosqe
          splunk:
            authentication:
              token:
                key: hecToken
                secretName: to-splunk-secret-68303
            index: main
            tuning: {}
            url: http://splunk-http-0.e2e-test-vector-splunk-kb99w.svc:8088
          type: splunk
        pipelines:
        - inputRefs:
          - httpserver1
          - httpserver2
          - httpserver3
          name: forward-log-splunk
          outputRefs:
          - splunk-aosqe
        serviceAccount:
          name: clf-eqa0oeox 
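
      Apply the manifest (assuming it is saved locally as clf.yaml; the namespace comes from the CLF metadata):

      % oc apply -f clf.yaml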

      Then check the pods' status; the collector pods can't become ready:

      % oc get pod
      NAME                   READY   STATUS              RESTARTS   AGE
      http-to-splunk-2krd8   0/1     ContainerCreating   0          2m24s
      http-to-splunk-76glg   0/1     ContainerCreating   0          2m24s
      http-to-splunk-d4nvj   0/1     ContainerCreating   0          2m24s
      http-to-splunk-m9wmf   0/1     ContainerCreating   0          2m24s
      http-to-splunk-nzsb4   0/1     ContainerCreating   0          2m24s
      http-to-splunk-qhrqd   0/1     ContainerCreating   0          2m24s
      http-to-splunk-rdqpn   0/1     ContainerCreating   0          2m24s 
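
      To see why, describe one of the pods (the pod name is taken from the listing above):

      % oc describe pod http-to-splunk-2krd8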

      The pod's events include the following mount failures:

      Events:
        Type     Reason       Age                    From               Message
        ----     ------       ----                   ----               -------
        Normal   Scheduled    2m53s                  default-scheduler  Successfully assigned e2e-test-vector-splunk-kb99w/http-to-splunk-2krd8 to ip-10-0-68-51.ap-northeast-1.compute.internal
        Warning  FailedMount  2m53s (x2 over 2m53s)  kubelet            MountVolume.SetUp failed for volume "metrics" : secret "http-to-splunk-metrics" not found
        Warning  FailedMount  110s (x8 over 2m53s)   kubelet            MountVolume.SetUp failed for volume "http-to-splunk-httpserver3" : secret "http-to-splunk-httpserver3" not found
        Warning  FailedMount  50s (x6 over 2m53s)    kubelet            MountVolume.SetUp failed for volume "http-to-splunk-httpserver1" : secret "http-to-splunk-httpserver1" not found
        Warning  FailedMount  46s (x9 over 2m53s)    kubelet            MountVolume.SetUp failed for volume "http-to-splunk-httpserver2" : secret "http-to-splunk-httpserver2" not found 
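
      The secret names in these events come from the pod's secret volume definitions; they can be listed with:

      % oc get pod http-to-splunk-2krd8 -o yaml | grep secretName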

      Secrets:

      % oc get secret
      NAME                           TYPE                      DATA   AGE
      builder-dockercfg-prgwq        kubernetes.io/dockercfg   1      4m14s
      clf-eqa0oeox-dockercfg-gjp7h   kubernetes.io/dockercfg   1      3m23s
      default-dockercfg-2gxjv        kubernetes.io/dockercfg   1      4m14s
      deployer-dockercfg-l47xz       kubernetes.io/dockercfg   1      4m14s
      http-to-splunk-metrics         kubernetes.io/tls         2      3m15s
      splunk-http                    Opaque                    6      4m9s
      to-splunk-secret-68303         Opaque                    1      3m23s 
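
      The per-receiver secrets (http-to-splunk-httpserver1/2/3) are missing. If the operator is repeatedly creating and deleting the receiver services (the behavior described in the release note above), this can be observed by watching the services in the namespace; it is assumed here that the service names match the missing secret names:

      % oc get svc -n e2e-test-vector-splunk-kb99w -w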

      Version-Release number of selected component (if applicable):

      cluster-logging.v6.1.0

      cluster-logging.v6.0.1
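
      The installed versions can be confirmed by listing the ClusterServiceVersions (assuming the operator is installed in the openshift-logging namespace):

      % oc get csv -n openshift-logging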

      How reproducible:

      Always

      Steps to Reproduce:

      See the `Description of problem` section above.

      Actual results:

      Some secrets are not created, so the collector pods can't become ready.

      Expected results:

      When TLS is not defined in the receiver spec, the CLO should request certificates from the cluster's certificate signing service, and the collector pods should become ready.
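
      On OpenShift, the usual way to obtain such a certificate is to annotate the receiver's Service with service.beta.openshift.io/serving-cert-secret-name so that the service-ca operator generates the TLS secret. A minimal sketch of what the operator-managed Service is expected to look like, assuming it is named after the CLF and input (the selector label is also an assumption):

      apiVersion: v1
      kind: Service
      metadata:
        name: http-to-splunk-httpserver1
        namespace: e2e-test-vector-splunk-kb99w
        annotations:
          # service-ca generates a kubernetes.io/tls secret with this name,
          # which the collector pods then mount (see the FailedMount events above)
          service.beta.openshift.io/serving-cert-secret-name: http-to-splunk-httpserver1
      spec:
        # assumed selector; the real label set is chosen by the operator
        selector:
          app.kubernetes.io/instance: http-to-splunk
        ports:
        - name: httpserver1
          port: 8081
          targetPort: 8081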

      Additional info:

      No issue in logging 6.0.0. 
