  OpenShift Logging / LOG-1071

CloudWatch fluentd plugin posts all logs it forwards to AWS to its own log


Details

    • Before this update, the AWS CloudWatch fluentd plugin logged its AWS API calls to the fluentd log at all log levels, thus creating undue pressure on the resources of OCP nodes. With this update, the AWS CloudWatch fluentd plugin logs AWS API calls only at the "debug" and "trace" log levels. This way, at the default "warn" log level, fluentd does not consume extra OCP node resources.
    • Logging (Core) - Sprint 198, Logging (Core) - Sprint 199, Logging (LogExp) - Sprint 209

    Description

       

      The AWS CloudWatch fluentd plugin posts every log record it forwards to AWS to its own log. This causes an increase in memory usage (which is already hitting its limit; see https://issues.redhat.com/browse/LOG-1059) and in CPU usage (which hit 100% during testing once we removed the fluentd pod memory limit).

       

      A snippet from the current fluentd pod logs:

      ```
      \"kubernetes\":
      {\"container_name\":\"log-generator2\",\"namespace_name\":\"log-generator\",\"pod_name\":\"log-generator-pbhsq\",\"container_image\":\"quay.io/dry923/log_generator:latest\",\"container_image_id\":\"quay.io/dry923/log_generator@sha256:143fd046876838d1f8e4acff828f9f71e7da7472df348f407e92e561014dec6d\",\"pod_id\":\"b56cfa57-28f5-41be-a30b-ec09b6a6a40a\",\"host\":\"ip-10-0-224-163.us-west-2.compute.internal\",\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"5134c2df-a339-4eae-be8e-75753329102f\",\"flat_labels\":[\"controller-uid=98ef04c1-df8e-4aca-9f94-66bf8b54ae59\",\"job-name=log-generator\",\"name=log-generator\"]}
      ,\"message\":\"NWTJEKR56QIAQ5VINCQP0WQNLTJH5IX2XGBHSB8DM8C16OQA3B3VRN8PXMN783MVYKVZN43KT0SGP0BXTSRADE4192ACMSPBLOKNXG22MMEE28WN84O60ETQ7FD158IU9FCP7P0KLC83S0I6JPPI9K68EDSL99M975W81HP16BVKMX5T8WCYEGSF6AFO41JHVQR74KK9KA1570MXWYBF6IJ5W12VIPXWDJMWY6EIVJTPOD4XAY76HDI95QEU3A9FEN3GOWTODZH3ZA27XY4ZLT8DG1Q4" ... (2181 bytes)>}],log_group_name:"dry-logtest-7mbsj.application",log_stream_name:"kubernetes.var.log.containers.log-generator-pbhsq_log-generator_log-generator2-9cb0c46bf5e2860fa3ad7a39c2376bcaf878e672191328e1f9828e5cdd1e8341.log",sequence_token:"49609932663458872515816355989557366195202703253053985314")
      ```
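
      For reference, here is a minimal sketch of the fix pattern described in the release note above, written in Python purely for illustration rather than the plugin's actual Ruby: per-record API-call logging is gated behind the "debug" level, so at the default "warn" level the collector never builds or emits these large log lines. The logger name and function below are hypothetical.

      ```
      import logging

      # Hypothetical logger standing in for the collector's own log.
      logger = logging.getLogger("cloudwatch-forwarder")
      logging.basicConfig(level=logging.WARNING)  # default "warn" level

      def put_log_events(events, log_group, log_stream):
          """Hypothetical stand-in for the plugin's PutLogEvents call site."""
          if logger.isEnabledFor(logging.DEBUG):
              # Serializing the full payload is expensive; only do it when
              # verbosity has been raised to debug/trace.
              logger.debug("PutLogEvents group=%s stream=%s events=%r",
                           log_group, log_stream, events)
          # ... the actual AWS API call would go here ...

      put_log_events([{"message": "hello"}], "app.group", "app.stream")
      ```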


          People

            syedriko_sub@redhat.com Sergey Yedrikov
            rhn-support-rzaleski Russell Zaleski
            Kabir Bharti Kabir Bharti
            Votes: 0
            Watchers: 6
