Data Foundation Bugs
DFBUGS-2658

invalid_storage is shown in the --all_connection_details


    • Type: Bug
    • Resolution: Done
    • Affects Version: odf-4.20
    • Fix Version: odf-4.18.4
    • Component: noobaa-nc
    • Architecture: x86_64
    • Fixed in Build: Committed
    • Release Note Type: Release Note Not Required

As mentioned in the reproduction steps, the connection file in Scale still continues to show as DEGRADED, which is an issue, because the noobaa-cli diagnose command reports it under invalid_storages:

        "response": {
          "code": "HealthStatus",
          "message": "Health status retrieved successfully",
          "reply": {
            "service_name": "noobaa",
            "status": "OK",
            "memory": "312.0M",
            "checks": {
              "services": [
                {
                  "name": "noobaa",
                  "service_status": "active",
                  "pid": "3361422"
                }
              ],
              "endpoint": {
                "endpoint_state": {
                  "response": {
                    "response_code": "RUNNING",
                    "response_message": "Endpoint running successfuly."
                  },
                  "total_fork_count": 2,
                  "running_workers": [
                    2,
                    1
                  ]
                },
                "error_type": "TEMPORARY"
              },
              "config_directory_status": {
                "phase": "CONFIG_DIR_UNLOCKED",
                "config_dir_version": "1.0.0",
                "upgrade_package_version": "5.18.4",
                "upgrade_status": {
                  "message": "there is no in-progress upgrade"
                }
              },
              "connections_status": {
                "invalid_storages": [
                  {
                    "name": "notify_event",
                    "config_path": "/mnt/cesSharedRoot/ces/s3-config/connections/notify_event.json",
                    "code": "UNKNOWN_ERROR"
                  }
                ],
                "valid_storages": [],
                "error_type": "TEMPORARY"
              }
            }
          }
        }

       less /mnt/cesSharedRoot/ces/s3-config/connections/notify_event.json | jq
      {
        "agent_request_object": {
          "host": "10.0.100.19",
          "port": 8090,
          "timeout": 1000
        },
        "request_options_object": {
          "auth": "wqWFOqvk6dYBa6t1m4lS1wj1",
          "path": "/webhook"
        },
        "notification_protocol": "http",
        "name": "notify_event",
        "master_key_id": "6xxxxxxx"
      }
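For reference, the connection file above can be sanity-checked by reconstructing the webhook URL it implies. This is an illustrative sketch, not NooBaa's actual health-check code; the field names come from the notify_event.json shown above, and the auth token is replaced with a placeholder:

```python
import json

# Connection config as shown in notify_event.json above
# (auth token replaced with a placeholder).
config = json.loads("""
{
  "agent_request_object": {"host": "10.0.100.19", "port": 8090, "timeout": 1000},
  "request_options_object": {"auth": "REDACTED", "path": "/webhook"},
  "notification_protocol": "http",
  "name": "notify_event"
}
""")

def webhook_url(cfg: dict) -> str:
    """Rebuild the endpoint URL implied by the connection file."""
    agent = cfg["agent_request_object"]
    opts = cfg["request_options_object"]
    return f'{cfg["notification_protocol"]}://{agent["host"]}:{agent["port"]}{opts["path"]}'

print(webhook_url(config))  # → http://10.0.100.19:8090/webhook
```

Requesting this URL manually (for example with curl) is one way to confirm the endpoint itself is reachable while the health check still flags the connection as invalid.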

    • Severity: Important

       

      Description of problem - Provide a detailed description of the issue encountered, including logs/command-output snippets and screenshots if the issue is observed in the UI:

      Health connection file for Notification is not updating appropriately

      The OCP platform infrastructure and deployment type (AWS, Bare Metal, VMware, etc. Please clarify if it is platform agnostic deployment), (IPI/UPI):

       

      The ODF deployment type (Internal, External, Internal-Attached (LSO), Multicluster, DR, Provider, etc):

       

       

      The version of all relevant components (OCP, ODF, RHCS, ACM whichever is applicable):

       

       

      Does this issue impact your ability to continue to work with the product?

       

       

      Is there any workaround available to the best of your knowledge?

       

       

      Can this issue be reproduced? If so, please provide the hit rate

       

       

      Can this issue be reproduced from the UI?

       

      If this is a regression, please provide more details to justify this:

       

      Steps to Reproduce:

      1. Set the connection file; mmhealth then showed the event s3_webhook_connection_notok

      2. Start the webhook server with the required protocol

      3. Create a bucket, run put-bucket-notification and get-bucket-notification, and upload an object
      4. Observe the output of "noobaa-cli diagnose health --all_connection_details"; it showed:

         "config_directory_status": {
                "phase": "CONFIG_DIR_UNLOCKED",
                "config_dir_version": "1.0.0",
                "upgrade_package_version": "5.18.4",
                "upgrade_status": {
                  "message": "there is no in-progress upgrade"
                }
              },
              "connections_status": {
                "invalid_storages": [
                  {
                    "name": "notify_event",
                    "config_path": "/mnt/cesSharedRoot/ces/s3-config/connections/notify_event.json",
                    "code": "UNKNOWN_ERROR"
                  }
                ],
                "valid_storages": [],
                "error_type": "TEMPORARY"
              }
      5. The invalid_storages entry is concerning, since mmhealth in Scale shows the connection in DEGRADED state
      6. The notification event is posted on object upload, so notifications themselves are working
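The stale state described in steps 4 and 5 can be isolated programmatically. The following is a hypothetical sketch that extracts the names of still-invalid connections from the diagnose JSON; the field names are taken from the output in step 4, abridged to the relevant fragment:

```python
import json

# Fragment of the "noobaa-cli diagnose health --all_connection_details"
# output from step 4 (abridged to the connections_status section).
diagnose = json.loads("""
{
  "connections_status": {
    "invalid_storages": [
      {"name": "notify_event",
       "config_path": "/mnt/cesSharedRoot/ces/s3-config/connections/notify_event.json",
       "code": "UNKNOWN_ERROR"}
    ],
    "valid_storages": [],
    "error_type": "TEMPORARY"
  }
}
""")

def invalid_connections(reply: dict) -> list[str]:
    """Names of connections the health check still flags as invalid."""
    return [s["name"] for s in reply["connections_status"]["invalid_storages"]]

print(invalid_connections(diagnose))  # → ['notify_event']
```

In this reproduction, notify_event remains in this list even after the webhook delivers notifications successfully, which is the bug being reported.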

      Could you take a look at why it reports invalid_storage, while mmhealth shows:

      Node name:      rk27bld-24.openstacklocal

      Component          Status        Status Change            Reasons & Notices
      ----------------------------------------------------------------------------------------------------------------
      S3                 DEGRADED      2025-05-29 12:11:56      s3_webhook_connection_notok(notify_event)
        newbucket-maps   HEALTHY       2025-05-29 11:30:23      -
        notify_event     DEGRADED      2025-05-29 11:44:24      s3_webhook_connection_notok(notify_event)
        s3user7001       HEALTHY       2025-05-29 10:26:20      -
        s3user7002       HEALTHY       2025-05-29 10:26:20      -

       

      The exact date and time when the issue was observed, including timezone details:

       

      Actual results:

       

       

      Expected results:

      The health check should be refreshed so the reported status is updated appropriately once the connection is working.
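To illustrate the expectation: the reported health should follow the latest connection-check result rather than stay pinned to an earlier failure. A minimal sketch of that contract, using status names that mirror the mmhealth output above (the refresh logic itself is hypothetical, not NooBaa's implementation):

```python
def health_status(invalid_names: list[str]) -> str:
    """DEGRADED while any connection is flagged invalid, HEALTHY otherwise."""
    return "DEGRADED" if invalid_names else "HEALTHY"

# First check: notify_event is still flagged invalid.
print(health_status(["notify_event"]))  # → DEGRADED

# After a refresh in which the webhook check passes, the
# invalid list should be empty and the status should update.
print(health_status([]))  # → HEALTHY
```

The reported bug is that the second transition never happens: invalid_storages keeps the stale entry, so mmhealth stays DEGRADED.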

      Logs collected and log location:

       

      Additional info:

       
       

              Assignee: Amit Prinz Setter (rh-ee-aprinzse)
              Reporter: Ravi Kumar Komanduri (rkomandu@in.ibm.com) (Inactive)