Feature Request
Resolution: Unresolved
Affects versions: openshift-4.12, openshift-4.13, openshift-4.14, openshift-4.15, openshift-4.16, openshift-4.17, openshift-4.18
Improvement
1. Proposed title of this feature request
OCP 4: Allow support for accessing ports 22623 and 22624 after installation
2. What is the nature and description of the request?
Currently, ports 22623 and 22624 are available ONLY during ignition; afterwards, an iptables rule is injected on hosts that blocks access to those ports from ANY subnet/IP, not just from the ignition/rendezvous/master hosts. This means that external calls to servers that use those ports are automatically dropped, breaking communications unless the upstream servers move to different ports.
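To make the symptom concrete, here is a minimal diagnostic sketch (not part of the original report; host names in the usage note are placeholders) that distinguishes a blocked port from an open one by attempting a TCP connection:

```python
import socket


def port_blocked(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port is refused or times out.

    Run from a pod or worker node, this can confirm that the injected
    iptables rules are dropping traffic to 22623/22624 from that vantage
    point, while the same ports may be reachable during ignition.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False
    except OSError:
        return True
```

For example, calling `port_blocked("api-int.example.cluster.local", 22623)` (hypothetical host name) from a running worker would return True under the current rules, even though the same endpoint is reachable by a newly provisioning machine.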
3. Why does the customer need this? (List the business requirements here)
There are two places where iptables rules are added: one at the node level and one at the pod level.
The machine config documentation makes it seem like securing ports 22623 and 22624 is the cluster administrator's responsibility: https://docs.openshift.com/container-platform/4.13/security/certificate_types_descriptions/machine-config-operator-certificates.html
Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates.
To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies.
The docs for configuring an external load balancer at install time also indicate that securing these ports is on the cluster administrator: https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.html#nw-osp-configuring-external-load-balancer_ipi-install-post-installation-configuration
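For reference, the kind of restriction the docs leave to the administrator might look like the following fragment on an external HAProxy load balancer; the subnet and backend addresses here are hypothetical, not taken from the report:

```
frontend machine-config-server
    mode tcp
    bind *:22623
    # Hypothetical machine network: only newly provisioning machines
    # should be able to reach the machine config server.
    acl from_machine_net src 10.0.0.0/16
    tcp-request connection reject if !from_machine_net
    default_backend machine-config-server-backend

backend machine-config-server-backend
    mode tcp
    server master0 10.0.0.10:22623 check
    server master1 10.0.0.11:22623 check
    server master2 10.0.0.12:22623 check
```

This only protects the externally exposed endpoint; it does nothing about the in-cluster iptables block that this request is about.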
The fact that the two ports are blocked isn't even mentioned in "About networking", "Understanding the Cluster Network Operator", "About the OVN-Kubernetes network plugin", "About the OpenShift SDN network plugin", or any other page of the OpenShift documentation, as it should be. The only way to discover that the ports are blocked is the way we did: have a random app fail when trying to use them and then trace back why.
There used to be warnings in the docs that hardware networks and additional networks could allow containers to bypass network policies; I assume the MultiNetworkPolicy feature is why those warnings are gone now.
- I expect that because we have "useMultiNetworkPolicy: false" in our cluster network config, the iptables rules in the pod would only apply to the pod's default interface, eth0.
- I would expect to be able to use TCP ports 22623 and 22624 between pods/services on worker nodes in the cluster. Nothing prevents me from creating a service on ports 22623 and 22624, even though it won't work.
- I would expect that, because the Cluster Network Operator is configured with the cluster subnet, service subnet, and machine subnet, and is able to acquire the control-plane node IPs and the api/api-int load balancer IPs, the iptables rules blocking the ports could be more fine-grained than blocking all TCP traffic on ports 22623 and 22624.
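To the second expectation above: the API happily accepts a Service on these ports even though the node-level rules drop the traffic. A minimal hypothetical manifest (name and selector are illustrative) that creates successfully but never passes traffic:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-22623      # hypothetical name
spec:
  selector:
    app: example           # hypothetical selector
  ports:
    - name: mcs-like
      protocol: TCP
      port: 22623
      targetPort: 22623
```

Nothing at admission time warns that 22623/22624 are effectively reserved, which is part of why the failure is so hard to diagnose.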
4. List any affected packages or components.
SRIOV traffic handling
OVN/SDN
Egress
Multus
Additional notes:
- See the KCS article describing the issue: https://access.redhat.com/solutions/7007012
- See code:
  - OVN: https://github.com/openshift/ovn-kubernetes/blob/14fb7c43a5b54e9be4063de628c996fcfcc3b5ad/go-controller/pkg/node/OCP_HACKS.go#L19
Likely this could be fixed with a block scoped to a specific IP range (or, plausibly, an annotation/allow rule at the SDN/OVN network operator layer) to override the blanket block.
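A sketch of what a more fine-grained rule set could look like: instead of unconditionally rejecting all traffic on the two ports, reject only traffic destined for the machine network, so pod-to-pod traffic on those ports elsewhere in the cluster keeps working (the CIDR is a placeholder, not a real value from this cluster):

```
# Instead of an unconditional reject:
#   -A FORWARD -p tcp --dport 22623 -j REJECT
# reject only traffic aimed at the machine network (hypothetical CIDR):
-A FORWARD -p tcp --dport 22623 -d 10.0.0.0/16 -j REJECT
-A FORWARD -p tcp --dport 22624 -d 10.0.0.0/16 -j REJECT
```

The Cluster Network Operator already knows the relevant subnets, so it could render such scoped rules itself.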
- Additionally: in the meantime, the documentation should be improved to note that these ports are restricted, until this is resolved (as applicable).