Type: Bug
Resolution: Unresolved
Priority: Major
Affects Version/s: 4.16, 4.17, 4.18, 4.19, 4.20
Impact: Quality / Stability / Reliability
Severity: Important
Status: In Progress
Release Note Type: Bug Fix
Release Note Text: Fixed overflowing of logs; gRPC connection logs are now on the V(4) log level.
Description of problem:
In the openshift-cluster-csi-drivers namespace, the csi-node-registrar container in the csi-driver-node deployment logs a gRPC connection check to /registration/csi.... every 10 seconds, and this is filling up our Elasticsearch space fast.
For example, on a cluster in ARO, these logs are printed to the console every 10 seconds:
I0730 04:27:23.729346 1 node_register.go:133] Attempting to open a gRPC connection with: "/registration/disk.csi.azure.com-reg.sock"
I0730 04:27:23.729931 1 node_register.go:141] Calling node registrar to check if it still responds
I0730 04:27:23.730243 1 main.go:90] Received GetInfo call: &InfoRequest{}
I0730 04:27:33.729491 1 node_register.go:133] Attempting to open a gRPC connection with: "/registration/disk.csi.azure.com-reg.sock"
I0730 04:27:33.730082 1 node_register.go:141] Calling node registrar to check if it still responds
I0730 04:27:33.730380 1 main.go:90] Received GetInfo call: &InfoRequest{}
I0730 04:27:43.730173 1 node_register.go:133] Attempting to open a gRPC connection with: "/registration/disk.csi.azure.com-reg.sock"
I0730 04:27:43.730756 1 node_register.go:141] Calling node registrar to check if it still responds
I0730 04:27:43.731093 1 main.go:90] Received GetInfo call: &InfoRequest{}
I0730 04:27:53.729913 1 node_register.go:133] Attempting to open a gRPC connection with: "/registration/disk.csi.azure.com-reg.sock"
I0730 04:27:53.730464 1 node_register.go:141] Calling node registrar to check if it still responds
The same thing happens in our on-prem VMware clusters. For example, here are the logs from one of them:
I0730 04:08:47.182096 1 node_register.go:133] Attempting to open a gRPC connection with: "/registration/csi.vsphere.vmware.com-reg.sock"
I0730 04:08:47.183085 1 node_register.go:141] Calling node registrar to check if it still responds
I0730 04:08:47.183359 1 main.go:90] Received GetInfo call: &InfoRequest{}
I0730 04:08:57.182859 1 node_register.go:133] Attempting to open a gRPC connection with: "/registration/csi.vsphere.vmware.com-reg.sock"
I0730 04:08:57.183469 1 node_register.go:141] Calling node registrar to check if it still responds
I0730 04:08:57.184060 1 main.go:90] Received GetInfo call: &InfoRequest{}
This is happening across all clusters, so we need to reduce the frequency of these logs and find the root cause.
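For context, the following is a minimal, hypothetical Go sketch of the kind of ticker-driven liveness probe that produces the messages above; the function name, dial logic, and interval are assumptions for illustration, not the actual node-driver-registrar source. Because the log calls are plain Info-level calls, every 10-second tick writes several lines per node, which is what floods Elasticsearch:

package main

import (
	"net"
	"time"

	"k8s.io/klog/v2"
)

// checkRegistrationSocket is a hypothetical sketch of a periodic probe
// against the kubelet plugin-registration socket.
func checkRegistrationSocket(socketPath string, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for range ticker.C {
		// Unconditional Info-level calls: emitted on every tick at any verbosity.
		klog.Infof("Attempting to open a gRPC connection with: %q", socketPath)
		conn, err := net.DialTimeout("unix", socketPath, time.Second)
		if err != nil {
			klog.Errorf("Connection check failed: %v", err)
			continue
		}
		klog.Infof("Calling node registrar to check if it still responds")
		conn.Close()
	}
}

func main() {
	// Hypothetical socket path and interval, matching the logs above.
	checkRegistrationSocket("/registration/disk.csi.azure.com-reg.sock", 10*time.Second)
}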
Impact:
Elasticsearch is deployed as a SaaS service, and these logs are filling up its storage quickly, which increases the cost to the customer.
   
Version-Release number of selected component (if applicable):
4.16.42
How reproducible:
    
Steps to Reproduce:
    1.
    2.
    3.
    
Actual results:
Millions of informational log messages are being generated in this namespace (openshift-cluster-csi-drivers).
Expected results:
These informational messages should no longer be logged, or at least be logged much less frequently.
Additional info:
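The release note above states that the fix moves the gRPC connection-check messages to the V(4) log level. The snippet below is a rough sketch of that klog verbosity pattern (assuming the standard klog gating used by Kubernetes sidecars, not the actual upstream patch): messages wrapped in klog.V(4) are only emitted when the container runs with -v=4 or higher, so the default verbosity suppresses them.

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// klog registers its -v flag here; the sidecar container's args
	// (for example -v=2) decide which V() levels are actually emitted.
	klog.InitFlags(nil)
	flag.Parse()

	socket := "/registration/disk.csi.azure.com-reg.sock"

	// Unconditional Info call: printed at any verbosity (the current behaviour).
	klog.Infof("Attempting to open a gRPC connection with: %q", socket)

	// Gated behind V(4), as the release note describes: only printed when
	// the process runs with -v=4 or higher, so the default verbosity stays quiet.
	klog.V(4).Infof("Attempting to open a gRPC connection with: %q", socket)
}

With that gating in place, the connection-check lines disappear from the default logs but remain available for debugging by raising the sidecar's -v argument.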
    
blocks:
    OCPBUGS-62844 [4.20] Remove info message logging for csi-driver-node deployment, csi-node-registrar container - Closed
is cloned by:
    OCPBUGS-62660 [4.20] Remove info message logging for csi-driver-node deployment, csi-node-registrar container - Closed