OpenShift Request For Enhancement
RFE-1741

We need better placement control of keepalived static pods.


    • Type: Feature Request
    • Resolution: Done
    • Priority: Normal
    • Component: Installer

      1. Proposed title of this feature request

      Control openshift-XXXX-infra pod placement

      2. What is the nature and description of the request?

      This concerns IPI installations.

      The customer allocates nodes across multiple VLANs.

      IPI installs keepalived for VIP management on on-prem platforms (e.g., VMware, RHV, OSP).

      keepalived needs VRRP communication to be allowed between nodes. When the nodes sit in different VLANs, VRRP will likely be blocked between them, and for security reasons the customer cannot open it. Moreover, the VIP of one VLAN is not routable in the other VLANs, so it makes no sense to run keepalived there.
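
      For illustration, a minimal sketch of the routability check such placement logic would need, using Python's standard ipaddress module (the node names, subnets, and VIP below are hypothetical):

      import ipaddress

      # Hypothetical values: the API VIP lives in the VLAN 10 subnet.
      api_vip = ipaddress.ip_address("192.168.10.5")

      # Hypothetical node -> machine-network mapping, one subnet per VLAN.
      node_subnets = {
          "master-0": ipaddress.ip_network("192.168.10.0/24"),  # VLAN 10
          "master-1": ipaddress.ip_network("192.168.10.0/24"),  # VLAN 10
          "worker-3": ipaddress.ip_network("192.168.20.0/24"),  # VLAN 20
      }

      # keepalived is only useful on nodes whose subnet contains the VIP.
      eligible = [name for name, net in node_subnets.items() if api_vip in net]
      print(eligible)  # ['master-0', 'master-1']; worker-3's VLAN cannot host the VIP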

      We should either restrict these pods to a single VLAN or provide better control over keepalived static pod placement.

      Side effect: without this control, a keepalived instance in another VLAN would try to claim the same VIP, because neither side receives the other's VRRP advertisements.

      When deciding where these pods should run, we could either count the nodes per VLAN (the larger the VLAN, the more room for deployments) or ask the customer; a rough sketch of the counting heuristic follows.
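
      As a sketch of that sizing heuristic (assumed inputs, not an actual installer API), grouping candidate nodes by their VLAN subnet and preferring the largest group would look roughly like this:

      import ipaddress
      from collections import Counter

      # Hypothetical node -> primary-interface address mapping.
      node_addrs = {
          "master-0": "192.168.10.11/24",
          "master-1": "192.168.10.12/24",
          "worker-0": "192.168.20.21/24",
          "worker-1": "192.168.10.13/24",
      }

      # Count nodes per VLAN subnet; the largest VLAN leaves the most room
      # for running the keepalived static pods.
      vlan_sizes = Counter(
          ipaddress.ip_interface(addr).network for addr in node_addrs.values()
      )
      target_vlan, size = vlan_sizes.most_common(1)[0]
      print(target_vlan, size)  # 192.168.10.0/24 3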

      3. Why does the customer need this? (List the business requirements here)

      • To avoid nodes in other VLANs claiming VIP addresses that are not routable there, which cannot work.
      • There may be other side effects of the current behavior.

      4. List any affected packages or components.

      keepalived static pods

            Assignee: Ramon Acedo (racedoro@redhat.com)
            Reporter: Rupesh Patel (rhn-support-rupatel)
            Votes: 1
            Watchers: 12
