OpenShift Logging / LOG-5392

[release-5.8] Improve validation of provided S3 storage configuration


    • Type: Task
    • Resolution: Done
    • Priority: Minor
    • Logging 5.8.6
    • Logging 5.9.0
    • Log Storage
    • None
    • False
    • None
    • False
    • NEW
    • VERIFIED
    • Before this update, the loki operator did not provide any validation on the S3 endpoint used in the storage secret. After this update, the S3 endpoint is validated to make sure it is a valid S3 URL, and the LokiStack status is updated to report when it is not.
    • Enhancement
    • Log Storage - Sprint 252

      The current style of S3 configuration for LokiStack when using Amazon AWS S3 can be confusing to users, because we ask for an "endpoint" but do not actually use virtual-hosted-style access as suggested by the AWS documentation.

      Because AWS S3 endpoint URLs follow a well-known pattern, we could introduce a small additional validation into the secret handling in the Loki Operator that detects this issue and produces a validation error.

      Currently, when a user configures the endpoint as suggested by the AWS documentation:

      https://bucket-name.s3.us-west-2.amazonaws.com/ 

      and also configures "bucketnames" as "bucket-name", Loki treats this as "the subdirectory bucket-name inside the bucket called bucket-name", which causes errors in some components.

      Loki does not need "bucket-name" as part of the endpoint (not even in virtual-hosted-style mode), because it constructs the hostname internally. So we could simply validate that the URL matches the pattern "https://s3.REGION.amazonaws.com" when Amazon AWS S3 is used.
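
      For illustration, the expected pattern can be expressed as a regular expression that also captures the region for comparison with the configured "region" value. The following is a small hypothetical sketch in Go, not the pattern actually used by the operator:

      package main

      import (
          "fmt"
          "regexp"
      )

      // Hypothetical pattern: matches "https://s3.REGION.amazonaws.com"
      // (an optional trailing slash is tolerated) and captures REGION.
      var awsEndpointPattern = regexp.MustCompile(`^https://s3\.([a-z0-9-]+)\.amazonaws\.com/?$`)

      func main() {
          for _, endpoint := range []string{
              "https://s3.us-west-2.amazonaws.com",              // expected form
              "https://bucket-name.s3.us-west-2.amazonaws.com/", // bucket name embedded in the hostname
          } {
              if m := awsEndpointPattern.FindStringSubmatch(endpoint); m != nil {
                  fmt.Printf("%s -> region %q\n", endpoint, m[1])
              } else {
                  fmt.Printf("%s -> not of the form https://s3.REGION.amazonaws.com\n", endpoint)
              }
          }
      }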

      Implementation notes

      The validation could look something like this (a Go sketch follows the list):

      • If the storage is configured as S3
        • Check if the provided "endpoint" is a parseable URL and has "http" or "https" as scheme
        • If the endpoint points to ".amazonaws.com"
          • Check that it is of the form "https://s3.REGION.amazonaws.com" where REGION needs to match the configured "region"
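
      A minimal Go sketch of this checklist follows; the function name, error messages, and structure are assumptions for illustration, not the Loki Operator's actual implementation:

      package main

      import (
          "fmt"
          "net/url"
          "regexp"
          "strings"
      )

      // Assumed hostname pattern for AWS-hosted endpoints: "s3.REGION.amazonaws.com".
      var awsHostPattern = regexp.MustCompile(`^s3\.([a-z0-9-]+)\.amazonaws\.com$`)

      // validateS3Endpoint is a hypothetical helper: it checks that the endpoint from
      // the storage secret is a parseable http(s) URL and, when it points to
      // ".amazonaws.com", that it has the form "https://s3.REGION.amazonaws.com"
      // with REGION matching the configured region.
      func validateS3Endpoint(endpoint, region string) error {
          u, err := url.Parse(endpoint)
          if err != nil {
              return fmt.Errorf("endpoint is not a parseable URL: %w", err)
          }
          if u.Scheme != "http" && u.Scheme != "https" {
              return fmt.Errorf("endpoint scheme must be http or https, got %q", u.Scheme)
          }

          host := u.Hostname()
          if !strings.HasSuffix(host, ".amazonaws.com") {
              // Non-AWS, S3-compatible endpoints are not restricted further here.
              return nil
          }
          if u.Scheme != "https" {
              return fmt.Errorf("AWS S3 endpoints must use https, got %q", u.Scheme)
          }
          m := awsHostPattern.FindStringSubmatch(host)
          if m == nil {
              return fmt.Errorf("AWS S3 endpoint must have the form https://s3.REGION.amazonaws.com, got %q", endpoint)
          }
          if m[1] != region {
              return fmt.Errorf("endpoint region %q does not match configured region %q", m[1], region)
          }
          return nil
      }

      func main() {
          // Example call with a plain regional endpoint; prints <nil> on success.
          fmt.Println(validateS3Endpoint("https://s3.eu-central-1.amazonaws.com", "eu-central-1"))
      }

      When such a check fails, the resulting error could be surfaced through the LokiStack status, in line with the release note above.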

       

              btaani@redhat.com Bayan Taani
              rojacob@redhat.com Robert Jacob
              Kabir Bharti
              Votes: 0
              Watchers: 4

                Created:
                Updated:
                Resolved: