OpenShift Logging / LOG-2536

Setting up ODF S3 for loki


Details

    • Sprint: Logging (LogExp) - Sprint 219, Logging (LogExp) - Sprint 220, Log Storage - Sprint 221

    Description

      I am configuring S3 storage for the Loki stack on OpenShift with ODF installed, trying to set it up with the RADOS Gateway (RGW):

       

      > s3cmd --configure --no-check-certificate
      New settings:
        Access Key: KE1L5UM77NJUZXPH208M
        Secret Key: AFSqIcV3jPppiLvOV8ZiRPCFn5zpCDClvUTPedg7
        Default Region: eu-west-1
        S3 Endpoint: rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
        DNS-style bucket+hostname:port template for accessing a bucket: loki-datastore
        Encryption password: 
        Path to GPG program: /usr/bin/gpg
        Use HTTPS protocol: True
        HTTP Proxy server name: 
        HTTP Proxy server port: 0
      Test access with supplied credentials? [Y/n] 
      Please wait, attempting to list all buckets...
      Success. Your access key and secret key worked fine :-) 
      
      > s3cmd ls s3://
      2022-04-26 11:58  s3://loki-datastore
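
      The same listing can also be repeated from inside the cluster, since the Loki pods talk to the RGW service DNS name rather than an external route. A minimal sketch with a throwaway pod (the pod name is arbitrary), assuming the amazon/aws-cli image can be pulled and certificate verification is skipped as above:

      # run a one-off pod that lists the bucket through the in-cluster RGW endpoint
      > oc -n openshift-logging run s3-check --rm -it --restart=Never \
          --image=amazon/aws-cli \
          --env=AWS_ACCESS_KEY_ID=KE1L5UM77NJUZXPH208M \
          --env=AWS_SECRET_ACCESS_KEY=AFSqIcV3jPppiLvOV8ZiRPCFn5zpCDClvUTPedg7 \
          -- s3 ls s3://loki-datastore \
          --endpoint-url https://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc \
          --no-verify-ssl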
      

      The credentials are from a test cluster, so there is no need to obfuscate them, and I want to double-check that they are the same ones used in the configuration:

       

      The loki-s3 secret:

       

      kind: Secret
      apiVersion: v1
      metadata:
        name: loki-s3
        namespace: openshift-logging
      stringData:
        bucketnames: loki-datastore
        endpoint: rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
        region: "eu-west-1"
        access_key_id: KE1L5UM77NJUZXPH208M
        access_key_secret: AFSqIcV3jPppiLvOV8ZiRPCFn5zpCDClvUTPedg7
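
      To rule out a copy/paste mismatch, the values stored in the secret can be dumped back out and compared with the s3cmd configuration above; a quick check along these lines:

      # print the decoded keys of the loki-s3 secret to stdout
      > oc -n openshift-logging extract secret/loki-s3 --to=-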
      

      The LokiStack resource:

       

      apiVersion: loki.grafana.com/v1beta1
      kind: LokiStack
      metadata:
        name: loki
        namespace: openshift-logging
      spec:
        size: 1x.extra-small
        storage:
          secret:
            name: loki-s3
            type: s3
        storageClassName: ocs-storagecluster-ceph-rbd
        tenants:
          mode: openshift-logging 
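
      Once the LokiStack is created, the configuration the operator renders from it can be inspected directly; a sketch, assuming the generated ConfigMap carries the LokiStack name (the exact name may differ):

      # list the ConfigMaps the operator created for the stack
      > oc -n openshift-logging get configmaps | grep loki
      # dump the rendered Loki configuration (the ConfigMap name loki-config is an assumption)
      > oc -n openshift-logging get configmap loki-config -o yaml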

       

       

      The storage section of the generated ConfigMap:

       

      ...
      common:
        storage:
          s3:
            s3: rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
            bucketnames: loki-datastore
            region: eu-west-1
            access_key_id: KE1L5UM77NJUZXPH208M
            secret_access_key: AFSqIcV3jPppiLvOV8ZiRPCFn5zpCDClvUTPedg7
            s3forcepathstyle: true
      compactor:
      ...
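
      Since the rendered config forces path-style addressing, the earlier s3cmd test can also be repeated with path-style requests (the --configure run above used a DNS-style bucket template); a sketch using the same credentials:

      # setting host-bucket to the plain host forces path-style requests, matching s3forcepathstyle: true
      > s3cmd --no-check-certificate \
          --host=rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc \
          --host-bucket=rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc \
          --access_key=KE1L5UM77NJUZXPH208M \
          --secret_key=AFSqIcV3jPppiLvOV8ZiRPCFn5zpCDClvUTPedg7 \
          ls s3://loki-datastore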

       

       

      But all Loki pods log the following error: 

       

      level=error ts=2022-04-26T16:26:24.16437656Z caller=reporter.go:202 msg="failed to delete corrupted cluster seed file, deleting it" err="InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.\n\tstatus code: 403, request id: ZSAAZJ9AYVGSY2Q9, host id: ZPA/4v9tuiazvVfsBgOhjkst/2QHcIq99lUl0sGwGLLKb7Ac5jAg1dgAfJCzCzTbSkQUiDAfSQo="

      Each occurrence has a different request id and host id. Any idea what is happening?
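
      To narrow this down, one check is whether the failing requests reach the RGW at all, for example by searching its logs for one of the request ids; the label selector below is an assumption based on Rook's naming conventions:

      # find the RGW pod(s) serving the object store (label is an assumption)
      > oc -n openshift-storage get pods -l app=rook-ceph-rgw
      # search the RGW logs for one of the failing request ids
      > oc -n openshift-storage logs -l app=rook-ceph-rgw | grep ZSAAZJ9AYVGSY2Q9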

       

          People

            ptsiraki@redhat.com Periklis Tsirakidis
            rgordill1@redhat.com Ramon Gordillo Gutierrez
            Qiaoling Tang Qiaoling Tang
            Votes: 0
            Watchers: 6
