Red Hat Advanced Cluster Security
ROX-30860

Fake workload: Limit simultaneous port use by multiple processes

    • Task
    • Resolution: Done
    • 4.9.0
    • Future Sustainability
    • Rox Sprint 4.9H - Global, Rox Sprint 4.9I - Global

      Problem

      The current fake workload behavior is unrealistic and causes memory pressure in Sensor's enrichment pipeline.

      The cause is that multiple processes listening on the same `IP:port` are generated without a proper endpoint-closure message in between. Note that generating multiple 'open' endpoints with the same IP and port but different originators (processes), without a 'close' message in between, is not a realistic scenario and occurs very rarely in production (practically only when a process deliberately abuses the same IP and port in parallel with a different process).

      Note that if Sensor sees an open endpoint for `<container1, 1.1.1.1:80, nginx>` and then another open endpoint for `<container1, 1.1.1.1:80, apache2>`, Sensor will keep the nginx entry forever, because no 'close' message arrived in between.

      That behavior creates excessive deduplication overhead: Sensor retains stale entries indefinitely whenever no close messages are sent between different originators on the same endpoint.
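The retention behavior described above can be sketched with a minimal, hypothetical dedup store (the type and method names below are illustrative, not Sensor's actual code): the first originator recorded for an endpoint survives until a 'close' message removes it.

```go
package main

import "fmt"

// Endpoint identifies a listening socket inside a container.
type Endpoint struct {
	Container string
	Address   string // IP:port
}

// Store is a minimal sketch of a deduplicating endpoint store: the
// originator process recorded first is kept until a 'close' arrives.
type Store struct {
	originators map[Endpoint]string
}

func NewStore() *Store {
	return &Store{originators: map[Endpoint]string{}}
}

// Open records an endpoint; a second 'open' for the same endpoint with a
// different originator is deduplicated away, so the first entry survives.
func (s *Store) Open(e Endpoint, process string) {
	if _, seen := s.originators[e]; !seen {
		s.originators[e] = process
	}
}

// Close removes the entry so the next 'open' can register a new originator.
func (s *Store) Close(e Endpoint) {
	delete(s.originators, e)
}

func main() {
	st := NewStore()
	ep := Endpoint{Container: "container1", Address: "1.1.1.1:80"}
	st.Open(ep, "nginx")
	st.Open(ep, "apache2") // no 'close' in between: the nginx entry is retained
	fmt.Println(st.originators[ep]) // prints "nginx"
}
```

Without a 'close' between the two 'open' messages, the apache2 entry is dropped by deduplication and the nginx entry stays forever.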

      Solution

      The solution is to change how the originators (processes) are generated for endpoints: a fraction p of processes is always closed before a new process opens on the same endpoint, while the remaining fraction (1 - p) reuses the endpoint without a close. The probability p should be high (> 0.95) so that endpoint reuse remains a rare phenomenon.

        prygiels@redhat.com Piotr Rygielski
        ACS Sensor & Ecosystem