OpenShift Logging / LOG-5021

CLO should report errors if no cloudwatch secret


    • Before this fix, ClusterLogForwarder did not report an error when forwarding logs to CloudWatch without a secret. With this fix, ClusterLogForwarder shows the message "secret must be provided for cloudwatch output".
    • Bug Fix
    • Log Collection - Sprint 248
    • Moderate

      Description of problem:

      When forwarding logs to CloudWatch, a secret must be provided. If no secret is provided, an invalid condition should be reported in the ClusterLogForwarder status.
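
      For reference, a minimal sketch of the secret such an output would normally reference (the name cw-secret is illustrative, and static credentials are assumed, using the aws_access_key_id / aws_secret_access_key keys):

      apiVersion: v1
      kind: Secret
      metadata:
        name: cw-secret                                # illustrative name
        namespace: openshift-logging
      stringData:
        aws_access_key_id: <access-key-id>             # placeholder
        aws_secret_access_key: <secret-access-key>     # placeholder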

      Steps to Reproduce:

      1) Create a CloudWatch output without a secret (for comparison, a secret-bearing variant is sketched after the YAML below):

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        generation: 3
        name: instance
        namespace: openshift-logging
      spec:
        outputs:
        - cloudwatch:
            groupBy: logType
            region: us-east-2
          name: cw
          type: cloudwatch
        pipelines:
        - inputRefs:
          - infrastructure
          - audit
          - application
          name: infra-logs
          outputRefs:
          - cw
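
      For comparison, a sketch of the same output with a secret reference, which is the configuration the validation should require (the secret name cw-secret is illustrative):

      spec:
        outputs:
        - cloudwatch:
            groupBy: logType
            region: us-east-2
          name: cw
          type: cloudwatch
          secret:
            name: cw-secret   # illustrative; must exist in the openshift-logging namespace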
      

      Actual results:

      1) When vector is used as the collector:

      1. No error is reported in ClusterLogForwarder/instance, and the collector pods are started.
      2. The generated collector configuration contains empty credentials (auth.access_key_id = "", auth.secret_access_key = ""):
        [sinks.cw]
        type = "aws_cloudwatch_logs"
        inputs = ["cw_dedot"]
        region = "us-east-2"
        compression = "none"
        group_name = "{{ group_name }}"
        stream_name = "{{ stream_name }}"
        auth.access_key_id = ""
        auth.secret_access_key = ""
        encoding.codec = "json"
        healthcheck.enabled = false
        

      2) When fluentd is used as the collector, the CLO panics with a nil pointer dereference in cloudwatch.SecurityConfig while generating the output configuration:

      oc logs cluster-logging-operator-78d568b958-bmfld
      {"_ts":"2024-01-25T15:33:23.963253079Z","_level":"0","_component":"cluster-logging-operator","_message":"starting up...","go_arch":"amd64","go_os":"linux","go_version":"go1.20.12","operator_version":"5.8.0"}
      {"_ts":"2024-01-25T15:33:23.995850688Z","_level":"0","_component":"cluster-logging-operator","_message":"migrating resources provided by the manifest"}
      {"_ts":"2024-01-25T15:33:24.00056613Z","_level":"0","_component":"cluster-logging-operator","_message":"Registering Components."}
      {"_ts":"2024-01-25T15:33:24.013022214Z","_level":"0","_component":"cluster-logging-operator","_message":"Starting the Cmd."}
      panic: runtime error: invalid memory address or nil pointer dereference [recovered]
      	panic: runtime error: invalid memory address or nil pointer dereference
      [signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x16705d2]
      
      goroutine 299 [running]:
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
      	/remote-source/cluster-logging-operator/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:119 +0x1fa
      panic({0x18392a0, 0x2945b30})
      	/usr/lib/golang/src/runtime/panic.go:884 +0x213
      github.com/openshift/cluster-logging-operator/internal/generator/fluentd/output/cloudwatch.SecurityConfig({{0xc00080474a, 0x2}, {0xc000804750, 0xa}, {0x0, 0x0}, {0x0, 0x0, 0x0, 0x0, ...}, ...}, ...)
      	/remote-source/cluster-logging-operator/app/internal/generator/fluentd/output/cloudwatch/cloudwatch.go:117 +0x232
      github.com/openshift/cluster-logging-operator/internal/generator/fluentd/output/cloudwatch.OutputConf(0xc0015acb10?, 0xc0015970c0?, {{0xc00080474a, 0x2}, {0xc000804750, 0xa}, {0x0, 0x0}, {0x0, 0x0, ...}, ...}, ...)
      	/remote-source/cluster-logging-operator/app/internal/generator/fluentd/output/cloudwatch/cloudwatch.go:100 +0xaa
      github.com/openshift/cluster-logging-operator/internal/generator/fluentd/output/cloudwatch.Conf(0x17bd240?, 0x47cb01?, {{0xc00080474a, 0x2}, {0xc000804750, 0xa}, {0x0, 0x0}, {0x0, 0x0, ...}, ...}, ...)
      	/remote-source/cluster-logging-operator/app/internal/generator/fluentd/output/cloudwatch/cloudwatch.go:86 +0x510
      github.com/openshift/cluster-logging-operator/internal/generator/fluentd.Outputs(0xc002496c30, 0xc00002f4c0?, 0xc0004770c8, 0x11?)
      	/remote-source/cluster-logging-operator/app/internal/generator/fluentd/outputs.go:62 +0x97a
      github.com/openshift/cluster-logging-operator/internal/generator/fluentd.Conf(0xc001198640?, 0x7f98e44adeb8?, 0x7f990e70af18?, {0xc0002afd40, 0x11}, {0xc0015b8018?, 0x0?}, 0xc0011985e8?)
      	/remote-source/cluster-logging-operator/app/internal/generator/fluentd/conf.go:41 +0x20a
      github.com/openshift/cluster-logging-operator/internal/generator/forwarder.(*ConfigGenerator).GenerateConf(0xc0015b8018, 0x7?, 0x1a5babb?, 0xe?, {0xc0002afd40?, 0x0?}, {0xc000804728?, 0x0?}, 0x0?)
      	/remote-source/cluster-logging-operator/app/internal/generator/forwarder/generator.go:53 +0x63
      github.com/openshift/cluster-logging-operator/internal/k8shandler.(*ClusterLoggingRequest).generateCollectorConfig(0xc001199068)
      	/remote-source/cluster-logging-operator/app/internal/k8shandler/forwarding.go:51 +0x2e7
      github.com/openshift/cluster-logging-operator/internal/k8shandler.(*ClusterLoggingRequest).CreateOrUpdateCollection(0xc001199068)
      	/remote-source/cluster-logging-operator/app/internal/k8shandler/collection.go:72 +0x2b2
      github.com/openshift/cluster-logging-operator/internal/k8shandler.Reconcile(0xc0006c1ba0, 0xc000476fc0, {0x1d15bd8, 0xc000094780}, {0x7f98e4582cc0, 0xc00056e5b0}, {0x1d0d550, 0xc000546480}, {0xc00027aba0, 0x22}, ...)
      	/remote-source/cluster-logging-operator/app/internal/k8shandler/reconciler.go:67 +0x845
      github.com/openshift/cluster-logging-operator/controllers/forwarding.(*ReconcileForwarder).Reconcile(0xc000121320, {0x30?, 0xc00006ec00?}, {{{0xc0002afd40?, 0x0?}, {0xc000804728?, 0x4142c7?}}})
      	/remote-source/cluster-logging-operator/app/controllers/forwarding/forwarding_controller.go:115 +0xc45
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1d0e080?, {0x1d0e080?, 0xc00249a270?}, {{{0xc0002afd40?, 0x17bc100?}, {0xc000804728?, 0x0?}}})
      	/remote-source/cluster-logging-operator/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:122 +0xc8
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000564500, {0x1d0dfd8, 0xc00009fe00}, {0x18bfe60?, 0xc000561400?})
      	/remote-source/cluster-logging-operator/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:323 +0x35f
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000564500, {0x1d0dfd8, 0xc00009fe00})
      	/remote-source/cluster-logging-operator/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:274 +0x1d9
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
      	/remote-source/cluster-logging-operator/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:235 +0x85
      created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
      	/remote-source/cluster-logging-operator/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:231 +0x587
      
      

      Expected results:

      An invalid condition with a descriptive message should be reported in CLF.status, and the operator should not panic.
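
      A sketch of the kind of status the fix is expected to produce (the exact condition type and reason values are assumptions; the message text is taken from the release note above):

      status:
        outputs:
          cw:
          - type: Ready
            status: "False"
            reason: MissingResource                                  # assumed reason
            message: secret must be provided for cloudwatch output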

      Additional info:

              vparfono Vitalii Parfonov
              rhn-support-anli Anping Li