Distributed Tracing / TRACING-5949

[Upstream] [Tempo] Network policy uses ipBlock which doesn't match cluster service IPs with port 443 in different namespaces


    • Type: Bug
    • Resolution: Done
    • Component: Tempo
    • Sprint: Tracing Sprint # 283

      Summary:

      Network policies generated by the Tempo operator use an `ipBlock` rule with CIDR `0.0.0.0/0` to allow egress traffic to object storage. However, `ipBlock` rules do not match the ClusterIPs of services in other namespaces on port 443. This prevents Tempo pods from connecting to cross-namespace cluster services such as OpenShift Data Foundation (ODF), while same-namespace MinIO deployments work fine.

      Why this wasn't caught in E2E tests:

      E2E tests deploy MinIO in the same namespace as TempoStack. Same-namespace service access works with `ipBlock`, but cross-namespace access requires a `namespaceSelector`.

      Test setup (e.g., multitenancy-rbac):

      • MinIO deployed in: `chainsaw-rbac`
      • TempoStack deployed in: `chainsaw-rbac`
      • Result: ✅ Works (same namespace)

      Production ODF setup:

      • ODF S3 service in: `openshift-storage`
      • TempoStack deployed in: User namespace (e.g., `chainsaw-multitenancy`)
      • Result: ❌ Fails (cross-namespace)
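
      For reference, the failing production setup corresponds to a TempoStack whose storage secret points at the ODF service endpoint. A minimal sketch, assuming the operator's S3 secret format; resource and secret names are illustrative:

      ```yaml
      # Illustrative TempoStack CR; names are hypothetical.
      apiVersion: tempo.grafana.com/v1alpha1
      kind: TempoStack
      metadata:
        name: simplest
        namespace: chainsaw-multitenancy
      spec:
        storage:
          secret:
            # Secret whose endpoint is https://s3.openshift-storage.svc:443
            name: odf-s3-secret
            type: s3
      ```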

      Root Cause:

      File: `internal/manifests/networkpolicies/components.go` (lines 100-105)

      ```go
      case netPolicys3Storage:
          // Allow egress to any S3 storage api.
          // This is necessary for cross-namespace access to object storage like MinIO
          return []networkingv1.NetworkPolicyPeer{
              {IPBlock: &networkingv1.IPBlock{CIDR: "0.0.0.0/0"}},
          }
      ```
      

      The comment says "necessary for cross-namespace access" but `ipBlock` does NOT work for cross-namespace service access.

      `ipBlock` with CIDR `0.0.0.0/0` matches:

      • ✅ External IPs (AWS S3, GCS, Azure)
      • ✅ Pod IPs in the same namespace
      • ❌ ClusterIPs from services in different namespaces (ODF, cross-namespace MinIO)
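
      Put together, the egress rule the operator currently generates looks roughly like the following; policy name and namespace here are illustrative, not actual operator output:

      ```yaml
      # Sketch of the current, ipBlock-only egress rule.
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: tempo-simplest-s3     # hypothetical name
        namespace: tempo-ns
      spec:
        podSelector: {}
        policyTypes:
          - Egress
        egress:
          - ports:
              - port: 443
                protocol: TCP
            to:
              - ipBlock:
                  cidr: 0.0.0.0/0   # does not match cross-namespace ClusterIPs
      ```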

      Reproduction Steps

      1. Deploy TempoStack in namespace `tempo-ns` with network policies enabled

      2. Configure storage to use ODF S3 service in `openshift-storage` namespace:

         ```yaml
         endpoint: https://s3.openshift-storage.svc:443
         ```
      

      3. Observe Tempo pods entering CrashLoopBackOff with errors like:

         ```
         dial tcp 172.30.137.108:443: i/o timeout
         ```

      Solution

      Support both same-namespace and cross-namespace services by including both `ipBlock` and `namespaceSelector`:

       

      ```go
      case netPolicys3Storage:
          return []networkingv1.NetworkPolicyPeer{
              // External services (AWS S3, GCS, Azure) + same-namespace pod IPs
              {IPBlock: &networkingv1.IPBlock{CIDR: "0.0.0.0/0"}},
              // Services in any namespace (cross-namespace access)
              {NamespaceSelector: &metav1.LabelSelector{}},
          }
      ```
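
      If allowing every namespace is too broad for a given deployment, the selector could instead be scoped to the storage namespace via the standard `kubernetes.io/metadata.name` label (set automatically on namespaces since Kubernetes 1.22). This is a hand-written policy fragment for illustration, not operator output:

      ```yaml
      # Tighter, illustrative alternative: allow only the openshift-storage namespace.
      to:
        - ipBlock:
            cidr: 0.0.0.0/0
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-storage
      ```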
      

      This generates egress rules that work universally:

      ```yaml
      egress:
        - ports:
            - port: 443
              protocol: TCP
          to:
            - ipBlock:
                cidr: 0.0.0.0/0
            - namespaceSelector: {}
      ```
      

      With an empty `namespaceSelector: {}`, the policy allows traffic to services in any namespace while maintaining the port restriction (443/TCP) for security.

      Test Coverage Gap

      Current tests deploy storage in the same namespace as Tempo. Consider adding a test case that:

      1. Deploys MinIO in a separate namespace (e.g., `storage-namespace`)

      2. Deploys TempoStack in a different namespace (e.g., `tempo-namespace`)

      3. Verifies cross-namespace connectivity works with network policies enabled

      This would have caught the issue before it affected ODF users.
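
      Such a case could be sketched as a Chainsaw test along these lines; the step file names are placeholders:

      ```yaml
      # Illustrative Chainsaw test; file names are hypothetical.
      apiVersion: chainsaw.kyverno.io/v1alpha1
      kind: Test
      metadata:
        name: cross-namespace-storage
      spec:
        steps:
          - try:
              - apply:
                  file: install-minio.yaml        # MinIO in storage-namespace
              - apply:
                  file: install-tempostack.yaml   # TempoStack in tempo-namespace
              - assert:
                  file: tempostack-assert.yaml    # TempoStack reaches Ready
      ```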

       

              Benedikt Bongartz (bbongart@redhat.com)
              Ishwar Kanse (rhn-support-ikanse)