OpenShift Logging / LOG-4258

Fluentd fails when a passphrase is configured for sending to Elasticsearch


Details

    • Before this update, Fluentd was unable to send logs to an Elasticsearch cluster when a passphrase was applied to the provided private key.
      With this change, the issue was resolved by adding the necessary configuration to the Fluentd Elasticsearch plugin. Fluentd now properly handles passphrase-protected private keys when establishing a connection with Elasticsearch.
    • Bug Fix
    • Log Collection - Sprint 238
    • Important

    Description

      Description of problem:

      The clusterlogging operator does not generate the right configuration: it is missing the `client_key_pass` setting for fluentd when the defined output is Elasticsearch and the private key is protected by a passphrase.

      Version-Release number of selected component (if applicable):

      /// Logging version
      $ ns="openshift-logging"
      $ oc get csv -n $ns
      NAME                            DISPLAY                            VERSION   REPLACES                        PHASE
      cluster-logging.v5.7.2          Red Hat OpenShift Logging          5.7.2     cluster-logging.v5.7.1          Succeeded
      elasticsearch-operator.v5.7.2   OpenShift Elasticsearch Operator   5.7.2     elasticsearch-operator.v5.7.1   Succeeded

      Also tested and failing in the latest release from stable-5.6.

      How reproducible:

      Always

      Steps to Reproduce:

      /// clusterlogging configuration running fluentd
      $ oc get clusterlogging instance -n $ns -o yaml
      ...
      apiVersion: "logging.openshift.io/v1"
      kind: "ClusterLogging"
      metadata:
        name: "instance"
        namespace: "openshift-logging"
      spec:
        collection:
          logs:
            type: "fluentd"
            fluentd:
              resources: {}
      
      
      /// Create secret containing certificates + passphrase for private key
      $ oc create secret generic -n openshift-logging es-secret --from-file=tls.key=tls.key --from-file=tls.crt=tls.crt --from-file=ca-bundle.crt=ca-bundle.crt --from-literal=username=user --from-literal=password=pass --from-literal=passphrase=test
      
      /// clusterlogforwarder configured to send to an elasticsearch output using certificates that require a passphrase
      $ oc get clusterlogforwarder instance -o yaml -n $ns
      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        creationTimestamp: "2023-06-23T10:25:47Z"
        generation: 3
        name: instance
        namespace: openshift-logging
        resourceVersion: "2863397"
        uid: 7bc98f0e-c985-4e0b-bbfb-2dc0b5620b6f
      spec:
        outputs:
        - elasticsearch:
            version: 8
          name: test
          secret:
            name: es-secret
          type: elasticsearch
          url: https://elasticsearch:9200
        pipelines:
        - inputRefs:
          - infrastructure
          - application
          - audit
          name: all-to-default
          outputRefs:
          - test
      

      Actual results:

      The collector is not able to establish the SSL connection because it cannot open the private key, and it returns the following error:

      /// Check that the error "Enter PEM pass phrase" is present
      $ for pod in $(oc get pods -l component=collector -o name -n $ns); do oc logs $pod -n $ns -c collector |grep -i "Enter PEM"; done|head -1
      Enter PEM pass phrase:
      

      The fluentd configuration generated by the clusterlogging operator doesn't contain the `client_key_pass` setting:

      /// Confirm that the fluentd configuration is as below
      $ 
      
        <match **>
          @type elasticsearch
          @id test
          host elasticsearch
          port 9200
          scheme https
          ssl_version TLSv1_2
          user "#{File.exists?('/var/run/ocp-collector/secrets/es-secret/username') ? open('/var/run/ocp-collector/secrets/es-secret/username','r') do |f|f.read end : ''}"
          password "#{File.exists?('/var/run/ocp-collector/secrets/es-secret/password') ? open('/var/run/ocp-collector/secrets/es-secret/password','r') do |f|f.read end : ''}"
          client_key '/var/run/ocp-collector/secrets/es-secret/tls.key'
          client_cert '/var/run/ocp-collector/secrets/es-secret/tls.crt'
          ca_file '/var/run/ocp-collector/secrets/es-secret/ca-bundle.crt'
      ...
      
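      One way to confirm this directly is to inspect the generated configuration in the collector ConfigMap (a sketch; it assumes the configuration is stored under the fluent.conf key, which is the file edited later in the workaround):

      /// The grep returns no match, confirming client_key_pass is missing
      $ oc get cm collector -n openshift-logging -o jsonpath='{.data.fluent\.conf}' | grep client_key_pass
      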

      Expected results:

      Fluentd should be able to send logs to Elasticsearch when a passphrase is used for the provided private key. The generated configuration should then look like:

        <match **>
          @type elasticsearch
          @id test
          host elasticsearch
          port 9200
          scheme https
          ssl_version TLSv1_2
          user "#{File.exists?('/var/run/ocp-collector/secrets/es-secret/username') ? open('/var/run/ocp-collector/secrets/es-secret/username','r') do |f|f.read end : ''}"
          password "#{File.exists?('/var/run/ocp-collector/secrets/es-secret/password') ? open('/var/run/ocp-collector/secrets/es-secret/password','r') do |f|f.read end : ''}"
          client_key '/var/run/ocp-collector/secrets/es-secret/tls.key'
          client_cert '/var/run/ocp-collector/secrets/es-secret/tls.crt'
          ca_file '/var/run/ocp-collector/secrets/es-secret/ca-bundle.crt'
          client_key_pass "#{File.exists?('/var/run/ocp-collector/secrets/es-secret/passphrase') ? open('/var/run/ocp-collector/secrets/es-secret/passphrase','r') do |f|f.read end : ''}"
      ...
      

      The same applies to the `retry` section in the fluentd configuration, as in the sketch below.
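
      For reference, a minimal sketch of how the retry section should look, assuming the output is named test as in the ClusterLogForwarder above (so the retry tag becomes retry_test):

        <match retry_test>
          @type elasticsearch
          host elasticsearch
          port 9200
          scheme https
          ssl_version TLSv1_2
          ...
          client_key '/var/run/ocp-collector/secrets/es-secret/tls.key'
          client_cert '/var/run/ocp-collector/secrets/es-secret/tls.crt'
          ca_file '/var/run/ocp-collector/secrets/es-secret/ca-bundle.crt'
          client_key_pass "#{File.exists?('/var/run/ocp-collector/secrets/es-secret/passphrase') ? open('/var/run/ocp-collector/secrets/es-secret/passphrase','r') do |f|f.read end : ''}"
      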

      Workaround

      Modify the Elasticsearch Operator subscription so that it is not upgraded automatically and requires manual approval:

      $ oc patch subs elasticsearch-operator -n openshift-operators-redhat --type='json' -p='[{"op": "replace", "path": "/spec/installPlanApproval","value": "Manual"}]'
      subscription.operators.coreos.com/elasticsearch-operator patched
      

      Verify the Elasticsearch Operator needs manual approval

      $ oc get sub elasticsearch-operator -n openshift-operators-redhat -o jsonpath='{.spec.installPlanApproval}'
      Manual
      

      Modify the Cluster Logging Operator subscription so that it is not upgraded automatically and requires manual approval:

      $ oc patch subs cluster-logging -n openshift-logging --type='json' -p='[{"op": "replace", "path": "/spec/installPlanApproval","value": "Manual"}]'
      subscription.operators.coreos.com/cluster-logging patched
      

      Verify the ClusterLogging Operator needs manual approval

      $ oc get sub cluster-logging -n openshift-logging -o jsonpath='{.spec.installPlanApproval}'
      Manual
      

      Move the ClusterLogging custom resource from Managed to Unmanaged:

      $ oc patch  clusterlogging/instance -n openshift-logging --type='json' -p='[{"op": "replace", "path": "/spec/managementState","value": "Unmanaged"}]'
      clusterlogging.logging.openshift.io/instance patched
      

      Edit the collector configmap:

      $ oc edit cm collector -n openshift-logging
      

      Add the definition of `client_key_pass`

        <match **>
          @type elasticsearch
          @id elasticsearch    <--- verify that this id matches the output name defined in the clusterlogforwarder
          host elasticsearch
          ...
          client_key '/var/run/ocp-collector/secrets/es-secret/tls.key'
          client_cert '/var/run/ocp-collector/secrets/es-secret/tls.crt'
          ca_file '/var/run/ocp-collector/secrets/es-secret/ca-bundle.crt'
          client_key_pass "#{File.exists?('/var/run/ocp-collector/secrets/es-secret/passphrase') ? open('/var/run/ocp-collector/secrets/es-secret/passphrase','r') do |f|f.read end : ''}"   <---- add this line. Replace es-secret with the name given to the secret used
      

      Do the same for the retry section:

        <match retry_elasticsearch>  <--- this is the retry section
          @type elasticsearch
          @id elasticsearch    <--- verify that this id matches the output name defined in the clusterlogforwarder
          host elasticsearch
          ... 
          client_key '/var/run/ocp-collector/secrets/es-secret/tls.key'
          client_cert '/var/run/ocp-collector/secrets/es-secret/tls.crt'
          ca_file '/var/run/ocp-collector/secrets/es-secret/ca-bundle.crt'
          client_key_pass "#{File.exists?('/var/run/ocp-collector/secrets/es-secret/passphrase') ? open('/var/run/ocp-collector/secrets/es-secret/passphrase','r') do |f|f.read end : ''}"   <---- add this line. Replace es-secret with the name given to the secret used
      

      Restart the collectors:

      $ oc delete pods -l component=collector -n openshift-logging
      
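      Optionally, wait until the new collector pods are ready before checking the configuration (a sketch using the same component=collector label as above):

      $ oc wait --for=condition=Ready pod -l component=collector -n openshift-logging --timeout=300s
      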

      Verify that the running fluentd configuration has defined `client_key_pass`:

      $ oc -n openshift-logging rsh <collector pod>  grep client_key_pass /etc/fluent/fluent.conf 
      
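      To check every collector pod in one pass rather than a single pod, a loop similar to the ones above can be used (sketch):

      $ for pod in $(oc get pods -l component=collector -o name -n openshift-logging); do oc exec $pod -n openshift-logging -c collector -- grep -c client_key_pass /etc/fluent/fluent.conf; done
      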

      Verify no more "Enter PEM pass phrase" errors:

      $ for pod in $(oc get pods -l component=collector -o name -n openshift-logging); do oc logs $pod -n openshift-logging -c collector |grep -i "Enter PEM";done
      

      Once the bug is fixed, revert to Managed and to Automatic approval of the updates:

      /// Move the installPlan approval back to Automatic
      $ oc patch subs elasticsearch-operator -n openshift-operators-redhat --type='json' -p='[{"op": "replace", "path": "/spec/installPlanApproval","value": "Automatic"}]'
      
      /// Move the installPlan approval back to Automatic
      $ oc patch subs cluster-logging -n openshift-logging --type='json' -p='[{"op": "replace", "path": "/spec/installPlanApproval","value": "Automatic"}]'
      
      /// Move the clusterLogging CR back to Managed
      $ oc patch clusterlogging/instance -n openshift-logging --type='json' -p='[{"op": "replace", "path": "/spec/managementState","value": "Managed"}]'
      
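      To confirm the revert, the same read-only checks used earlier can be repeated; they should now return Automatic, Automatic and Managed:

      $ oc get sub elasticsearch-operator -n openshift-operators-redhat -o jsonpath='{.spec.installPlanApproval}'
      $ oc get sub cluster-logging -n openshift-logging -o jsonpath='{.spec.installPlanApproval}'
      $ oc get clusterlogging instance -n openshift-logging -o jsonpath='{.spec.managementState}'
      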

          People

            vparfono Vitalii Parfonov
            rhn-support-ocasalsa Oscar Casal Sanchez
            Anping Li Anping Li