OpenShift Logging / LOG-5525

Updating the data.token field in the cloudwatch secret for a ClusterLogForwarder does not trigger an update


    • Before this change, the collector deployment did not recognize changes to the secrets it consumes, which could result in logs being rejected by receivers. The fix adds an annotation to the deployment that causes new pods to be rolled out when a secret value changes, so the process reloads the secrets (see the sketch below for the general pattern).
    • Bug Fix
    • Log Collection - Sprint 257, Log Collection - Sprint 258
    • Important
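
      The release note describes a common Kubernetes pattern: stamp a hash of the secret contents onto the collector pod template, so that any change to the secret changes the template and forces a rollout. The sketch below is a manual illustration of that pattern, not the operator's actual implementation; the namespace, the deployment name (collector), the secret name (cw-secret), and the annotation key (checksum/cw-secret) are all assumptions.

      # Compute a digest of the secret's current data (all names here are placeholders).
      CHECKSUM=$(oc -n openshift-logging get secret cw-secret -o jsonpath='{.data}' | sha256sum | cut -d' ' -f1)

      # Write the digest into the pod template; because the template changed, the
      # deployment controller rolls out new pods, which re-read the secret at startup.
      oc -n openshift-logging patch deployment collector --type merge \
        -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/cw-secret\":\"${CHECKSUM}\"}}}}}"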

      Description of problem:

      In our architecture we have a requirement to use a service account token from a serviceaccount that does not live in the same namespace as the ClusterLogForwarder CR, specifically for OIDC-based AssumeRoleWithWebIdentity STS functionality. We have a workload that mints this token and puts it into a secret in the same namespace as the ClusterLogForwarder for use with the CloudWatch output.
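
      As a concrete reference for this setup, a minimal sketch of the secret that the CloudWatch output points at. All names below (the openshift-logging namespace, the cw-secret name, the role ARN, the token file path) are placeholders; token is the key this report is about, and role_arn identifies the IAM role whose OIDC trust policy allows the external serviceaccount. The ClusterLogForwarder's cloudwatch output then references this secret by name.

      # Placeholder names throughout; the CloudWatch output's secret reference points at cw-secret.
      oc -n openshift-logging create secret generic cw-secret \
        --from-literal=role_arn=arn:aws:iam::123456789012:role/logging-cloudwatch \
        --from-file=token=/path/to/externally-minted-token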

      When we first deploy this secret and the ClusterLogForwarder, everything works fine until the token expires, roughly one hour after it was minted. When we update the token in the secret, nothing happens, and eventually the ClusterLogForwarder stops forwarding logs to CloudWatch.

      When I manually rolled the log forwarder pods after rotating the token, they were able to begin forwarding again until that token expired. That part is expected, since the token is passed through as an environment variable; the unexpected part is that the deployment was never updated with the new environment variable after the secret changed.
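
      For context on the workaround: environment variables sourced from a Secret are resolved only when a container starts, so running collector pods keep the stale token until they are replaced. The commands below show the manual roll; the workload name and label selector are assumptions about how the collector is deployed, so adjust them to whatever oc get pods --show-labels reports in your cluster.

      # Restart the collector workload so new pods pick up the rotated token
      # (assumes the collector runs as deployment/collector in openshift-logging).
      oc -n openshift-logging rollout restart deployment/collector

      # Or delete the pods and let their controller recreate them
      # (the label selector here is an assumption).
      oc -n openshift-logging delete pods -l component=collector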

      Version-Release number of selected component (if applicable): 5.9.1

      How reproducible:

      It happened both times that I attempted it.

      Steps to Reproduce:

      1. Create a ClusterLogForwarder with a valid input and a CloudWatch output referencing a secret.
      2. Configure the CloudWatch output secret to use an IAM role whose trust policy allows a serviceaccount from another namespace or another cluster.
      3. Manually generate a token from that serviceaccount (on the other cluster) and add it to the CloudWatch output secret under the .data.token field.
      4. Wait roughly 10 minutes.
      5. Manually generate a new token from the serviceaccount (on the other cluster) and update the CloudWatch output secret (see the sketch after these steps).
      6. Observe that the pods for the log forwarder are not updated.
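
      A sketch of steps 3, 5, and 6, assuming an oc new enough to support oc create token and placeholder names (serviceaccount log-writer in namespace sts-tokens on the remote cluster, secret cw-secret in openshift-logging on the logging cluster):

      # On the cluster that owns the serviceaccount: mint a fresh, short-lived token.
      TOKEN=$(oc -n sts-tokens create token log-writer --duration=1h)

      # On the logging cluster: replace only the .data.token key of the existing secret
      # (base64 -w0 is the GNU coreutils flag for no line wrapping).
      oc -n openshift-logging patch secret cw-secret --type merge \
        -p "{\"data\":{\"token\":\"$(printf '%s' "$TOKEN" | base64 -w0)\"}}"

      # The secret now holds the new token, but the collector pods keep their original
      # start time and the stale token in their environment, i.e. no rollout happens.
      oc -n openshift-logging get pods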

            jcantril@redhat.com Jeffrey Cantrill
            iamkirkbater Kirk Bater
            Kabir Bharti Kabir Bharti
