  1. OpenShift Logging
  2. LOG-5396

[release-5.6] Improve validation of provided S3 storage configuration

    • Type: Task
    • Resolution: Done
    • Priority: Minor
    • Logging 5.6.18
    • Logging 5.9.0
    • Log Storage
    • NEW
    • VERIFIED
    • Before this update, the Loki Operator did not provide any validation on the S3 endpoint used in the storage secret. After this update, the S3 endpoint is validated to make sure it is a valid S3 URL, and the LokiStack status is updated to report when it is not.
    • Enhancement
    • Log Storage - Sprint 252

      The current style of S3 configuration for LokiStack when using Amazon AWS S3 can be confusing to users, because we ask for an "endpoint", but do not actually use virtual-host style access as suggested by the AWS documentation.

      Because the AWS S3 URLs are well-formed, we could introduce a small additional validation into the secret handling in the Loki Operator that would detect this issue and produce a validation error.

      Currently, when a user configures the endpoint as suggested by the AWS documentation:

      https://bucket-name.s3.us-west-2.amazonaws.com/ 

      and also configures "bucketnames" as "bucket-name", Loki treats this as the subdirectory "bucket-name" inside the bucket called "bucket-name", which causes errors in some components.

      Loki does not need the "bucket-name" as part of the endpoint (not even in virtual-host mode), because it constructs the hostname internally. So we could just validate that the URL is of the pattern "https://s3.REGION.amazonaws.com" when Amazon AWS S3 is used.
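      As a sketch of what that means for the storage secret (key names follow the fields discussed in this issue; the exact secret layout should be checked against the Loki Operator docs):

      ```yaml
      # Hypothetical storage secret contents for AWS S3.
      #
      # Wrong: bucket name embedded in the endpoint, which Loki then
      # treats as a subdirectory inside the bucket:
      #   endpoint: https://bucket-name.s3.us-west-2.amazonaws.com/
      #
      # Right: regional endpoint only; the bucket goes in "bucketnames":
      endpoint: https://s3.us-west-2.amazonaws.com
      bucketnames: bucket-name
      region: us-west-2
      ```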

      Implementation notes

      The validation could look something like this:

      • If the storage is configured as S3
        • Check if the provided "endpoint" is a parseable URL and has "http" or "https" as scheme
        • If the endpoint points to ".amazonaws.com"
          • Check that it is of the form "https://s3.REGION.amazonaws.com" where REGION needs to match the configured "region"
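      The steps above could be sketched in Go roughly as follows (function and error wording are hypothetical, not taken from the actual Loki Operator source):

      ```go
      package main

      import (
      	"fmt"
      	"net/url"
      	"strings"
      )

      // validateS3Endpoint checks that the endpoint is a parseable http(s) URL
      // and, when it points at AWS, that it is exactly s3.REGION.amazonaws.com
      // with REGION matching the configured region.
      func validateS3Endpoint(endpoint, region string) error {
      	u, err := url.Parse(endpoint)
      	if err != nil {
      		return fmt.Errorf("endpoint is not a parseable URL: %w", err)
      	}
      	if u.Scheme != "http" && u.Scheme != "https" {
      		return fmt.Errorf("endpoint scheme must be http or https, got %q", u.Scheme)
      	}
      	host := u.Hostname()
      	if strings.HasSuffix(host, ".amazonaws.com") {
      		// AWS S3: the bucket name must not be part of the hostname.
      		if host != fmt.Sprintf("s3.%s.amazonaws.com", region) {
      			return fmt.Errorf("AWS S3 endpoint must be https://s3.%s.amazonaws.com, got %q", region, host)
      		}
      	}
      	return nil
      }

      func main() {
      	// Virtual-host-style URL with the bucket name: rejected.
      	fmt.Println(validateS3Endpoint("https://bucket-name.s3.us-west-2.amazonaws.com/", "us-west-2"))
      	// Plain regional endpoint: accepted.
      	fmt.Println(validateS3Endpoint("https://s3.us-west-2.amazonaws.com", "us-west-2"))
      }
      ```

      Non-AWS endpoints (for example, a MinIO or other S3-compatible URL) pass through with only the scheme check, matching the intent that the stricter pattern applies only when Amazon AWS S3 is used.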

       


            Errata Tool added a comment -

            Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory (Moderate: security update Logging for Red Hat OpenShift - 5.6.18), and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2024:2092


            GitLab CEE Bot added a comment - CPaaS Service Account mentioned this issue in a merge request of openshift-logging / Log Storage Midstream on branch openshift-logging-5.6-rhel-8_upstream_45e92a9ca500378e1891dc268ccdd9a9: Updated 2 upstream sources

              btaani@redhat.com Bayan Taani (Inactive)
              rojacob@redhat.com Robert Jacob
              Kabir Bharti Kabir Bharti
