Type: Bug
Resolution: Not a Bug
Priority: Normal
Version: 4.20
Description of problem: Bugs in iptables-alerter.sh prevent it from ever flagging containers that use iptables.
Version-Release number of selected component (if applicable): 4.20 (observed)
How reproducible:
Run the commands below to see that they cannot work as expected.
Steps to Reproduce:
1. The go-template passed to the `crictl inspectp` command on line 43 fails with an error, so no results are ever found:
```
[root@rhcos-pentest-gbmdq-master-0 ~]# crictl inspectp -o go-template --template '{{.status.metadata.namespace}} {{.status.metadata.name}} {{.status.metadata.uid}} {{.status.linux.namespaces.options.network}} {{range .info.runtimeSpec.linux.namespaces }}{{if eq .type "network"}}{{.path}}{{end}}{{end}}'
FATA[0000] get the status of pod sandboxes: execute template: failed to template data: template: tmplExecuteRawJSON:1:9: executing "tmplExecuteRawJSON" at <.status.metadata.namespace>: can't evaluate field status in type []interface {}
```
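The failure mode is silent rather than noisy: because line 43 wraps the crictl call in `|| true`, the command substitution is empty when crictl errors, `read` fills every variable with the empty string, and the `netns != "POD"` guard on line 46 then skips every pod. A minimal simulation of that path (using `false` as a stand-in for the failing crictl call):

```shell
# Stand-in for line 43 when crictl fails: `false || true` produces no
# output, so read sets every variable to the empty string (and still
# returns success, because the herestring supplies a trailing newline).
read -r pod_namespace pod_name pod_uid netns netns_path <<<"$(false || true)"

# Line 46's guard then treats every pod as "crictl errored out" and skips it.
if [[ "${netns}" != "POD" ]]; then
  echo "pod skipped"
fi
```

So the script keeps running but `continue`s past every pod without logging anything.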
2. The network namespace files live under `/run/netns/`, not `/var/run/netns/`, so the path check on line 49 never matches:
```
[root@rhcos-pentest-gbmdq-master-0 ~]# lsns -t net
NS TYPE NPROCS PID USER NETNSID NSFS COMMAND
4026531840 net 298 1 root unassigned /run/netns/7f8e55cd-394e-4d54-a212-69f4753c7e4e /usr/lib/systemd/systemd --switched-root --system --deserialize 28
/run/netns/5c8c2950-6c65-4b72-9ef0-0f445ed86302
[...snip...]
```
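Consequently, the regex on line 49 (`^/var/run/netns/`) can never match the paths lsns actually reports. A quick bash check using a sample path from the output above (the relaxed pattern is a sketch of a possible fix, not the shipped one):

```shell
# Hypothetical sample path, taken from the lsns output above
netns_path="/run/netns/5c8c2950-6c65-4b72-9ef0-0f445ed86302"

# Line 49's pattern: never matches paths under /run/netns/
[[ "${netns_path}" =~ ^/var/run/netns/ ]] && old="match" || old="no match"

# Relaxed pattern accepting either prefix (on RHCOS, as on other systemd
# distributions, /var/run is a symlink to /run, so both spellings refer
# to the same files)
[[ "${netns_path}" =~ ^(/var)?/run/netns/ ]] && new="match" || new="no match"

echo "old pattern: ${old}; relaxed pattern: ${new}"
```

With the original pattern every pod-network pod is skipped at line 50, even before the template bug is considered.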
Actual results:
Commands executed by the script error out, so the script bails out of its per-pod checks early and never flags any pod.
Expected results:
Commands executed by the script succeed, and pods that use iptables are flagged with an event.
Additional info:
Found while pentesting RHCOS.
[root@rhcos-pentest-gbmdq-master-0 ~]# ps auxwwf | grep iptables-alerter.sh
root 4667 0.0 0.0 4320 2944 ? Ss Nov20 0:01 \_ /bin/bash /iptables-alerter/iptables-alerter.sh
[root@rhcos-pentest-gbmdq-master-0 ~]# find / -name iptables-alerter.sh -type f 2>/dev/null
/sysroot/ostree/deploy/rhcos/var/lib/kubelet/pods/71cf26d6-3584-476e-85f9-91f2542f7730/volumes/kubernetes.io~configmap/iptables-alerter-script/..2025_11_20_23_50_06.1907295509/iptables-alerter.sh
/var/lib/kubelet/pods/71cf26d6-3584-476e-85f9-91f2542f7730/volumes/kubernetes.io~configmap/iptables-alerter-script/..2025_11_20_23_50_06.1907295509/iptables-alerter.sh
[root@rhcos-pentest-gbmdq-master-0 ~]# crictl pods -v
ID: 7909959e7c764fd40873e128a9bd5edf29c506ad80fef41433cce1e50a3b83db
Name: iptables-alerter-29j59
UID: 71cf26d6-3584-476e-85f9-91f2542f7730
Namespace: openshift-network-operator
Status: Ready
Created: 2025-11-20 23:50:06.734894452 +0000 UTC
Labels:
app -> iptables-alerter
component -> network
controller-revision-hash -> 6b77488cc8
io.kubernetes.container.name -> POD
io.kubernetes.pod.name -> iptables-alerter-29j59
io.kubernetes.pod.namespace -> openshift-network-operator
io.kubernetes.pod.uid -> 71cf26d6-3584-476e-85f9-91f2542f7730
openshift.io/component -> network
pod-template-generation -> 1
type -> infra
Annotations:
cluster-autoscaler.kubernetes.io/enable-ds-eviction -> false
kubernetes.io/config.seen -> 2025-11-20T23:50:06.409699435Z
kubernetes.io/config.source -> api
run.oci.systemd.subgroup ->
Runtime: (default)
[root@rhcos-pentest-gbmdq-master-0 ~]# cat -n /var/lib/kubelet/pods/71cf26d6-3584-476e-85f9-91f2542f7730/volumes/kubernetes.io~configmap/iptables-alerter-script/..2025_11_20_23_50_06.1907295509/iptables-alerter.sh
1 #!/bin/bash
2
3 set -euo pipefail
4
5 function crictl {
6 chroot /host /bin/crictl "$@"
7 }
8 function ip {
9 chroot /host /sbin/ip "$@"
10 }
11 function nsenter {
12 chroot /host /bin/nsenter "$@"
13 }
14
15 function check_pods {
16 # We need to use crictl to be able to map pod information to network namespace
17 # information, but there seems to be some bug in crictl that causes excessive CPU
18 # usage on some hosts, for unknown reasons. Since we expect that most nodes won't
19 # have any iptables-using pods anyway, do a pre-scan of all (non-hostnetwork)
20 # namespaces without using crictl, and bail out early if we don't find anything
21 iptables_output=""
22 for netns_pid in $(lsns -t net -o pid -nr | sort -u | grep -v '^1$'); do
23 # Set iptables_output to the first iptables rule in the network namespace, if any.
24 # (We use `awk` here rather than `grep` intentionally to avoid awkwardness with
25 # grep's exit status on no match.)
26 iptables_output=$(
27 (nsenter -n -t "${netns_pid}" iptables-save || true;
28 nsenter -n -t "${netns_pid}" ip6tables-save || true) 2>/dev/null | \
29 awk '/^-A/ {print; exit}'
30 )
31 if [[ -n "${iptables_output}" ]]; then
32 break
33 fi
34 done
35 if [[ -z "${iptables_output}" ]]; then
36 # Nothing to see here
37 return 0
38 fi
39
40 # Somebody was using iptables, so now we have to figure out who.
41 for id in $(crictl pods -q); do
42 # Inspect the pod
43 read pod_namespace pod_name pod_uid netns netns_path <<<$(crictl inspectp -o go-template --template '{{.status.metadata.namespace}} {{.status.metadata.name}} {{.status.metadata.uid}} {{.status.linux.namespaces.options.network}} {{range .info.runtimeSpec.linux.namespaces }}{{if eq .type "network"}}{{.path}}{{end}}{{end}}' ${id} 2>/dev/null || true )
44
45 # Check that it's a pod-network pod. (This also catches "crictl errored out".)
46 if [[ "${netns}" != "POD" ]]; then
47 continue
48 fi
49 if [[ ! "${netns_path}" =~ ^/var/run/netns/ ]]; then
50 continue
51 fi
52 netns=$(basename "${netns_path}")
53
54 # Set iptables_output to the first iptables rule in the pod's network
55 # namespace, if any. (We use `awk` here rather than `grep` intentionally
56 # to avoid awkwardness with grep's exit status on no match.)
57 iptables_output=$(
58 (ip netns exec "${netns}" iptables-save || true;
59 ip netns exec "${netns}" ip6tables-save || true) 2>/dev/null | \
60 awk '/^-A/ {print; exit}'
61 )
62 if [[ -z "${iptables_output}" ]]; then
63 continue
64 fi
65
66 # Check if we already logged an event for it
67 events=$(kubectl get events -n "${pod_namespace}" -l pod-uid="${pod_uid}" 2>/dev/null)
68 if [[ -n "${events}" ]]; then
69 echo "Skipping pod ${pod_namespace}/${pod_name} which we already logged an event for."
70 continue
71 fi
72
73 echo "Logging event for ${pod_namespace}/${pod_name} which has iptables rules"
74
75 # eg "2023-10-19T15:45:10.353846Z"
76 event_time=$(date -u +%FT%T.%6NZ)
77
78 kubectl create -f - <<EOF
79 apiVersion: events.k8s.io/v1
80 kind: Event
81 metadata:
82 namespace: ${pod_namespace}
83 generateName: iptables-alert-
84 labels:
85 pod-uid: ${pod_uid}
86 regarding:
87 apiVersion: v1
88 kind: Pod
89 namespace: ${pod_namespace}
90 name: ${pod_name}
91 uid: ${pod_uid}
92 reportingController: openshift.io/iptables-deprecation-alerter
93 reportingInstance: ${ALERTER_POD_NAME}
94 action: IPTablesUsageObserved
95 reason: IPTablesUsageObserved
96 type: Normal
97 note: |
98 This pod appears to have created one or more iptables rules. IPTables is
99 deprecated and will no longer be available in RHEL 10 and later. You should
100 consider migrating to another API such as nftables or eBPF. See also
101 https://access.redhat.com/solutions/6739041
102
103 Example iptables rule seen in this pod:
104 ${iptables_output}
105 eventTime: ${event_time}
106 EOF
107 done
108 }
109
110 while :; do
111 date
112 check_pods
113 echo ""
114
115 sleep 3600
116 done