Bug
Resolution: Not a Bug
Version: 4.12
Component: Quality / Stability / Reliability
Severity: Important
Status: Rejected
Sprint: CNF Network Sprint 228
Description of problem:
The MetalLB controller and speaker pods can be constrained to specific nodes via pod affinity in the MetalLB CR. When a required podAffinity rule targeting an existing pod (label name=test-only-pod) is configured, the controller and speaker pods remain Pending instead of being scheduled on the node running that pod.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Install the MetalLB operator.
2. Create a pod that carries the label name=test-only-pod (steps 2.1 and 2.2 below).
2.1 Create a ReplicationController and a Service with the JSON below (a create command is shown after the JSON):
{
  "apiVersion": "v1",
  "kind": "List",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "ReplicationController",
      "metadata": {
        "labels": {
          "name": "test-rc"
        },
        "name": "test-rc"
      },
      "spec": {
        "replicas": 2,
        "template": {
          "metadata": {
            "labels": {
              "name": "test-pods"
            }
          },
          "spec": {
            "containers": [
              {
                "image": "quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4",
                "name": "test-pod",
                "imagePullPolicy": "IfNotPresent",
                "resources": {
                  "limits": {
                    "memory": "340Mi"
                  }
                }
              }
            ]
          }
        }
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "labels": {
          "name": "test-service"
        },
        "name": "test-service"
      },
      "spec": {
        "ports": [
          {
            "name": "http",
            "port": 27017,
            "protocol": "TCP",
            "targetPort": 8080
          }
        ],
        "selector": {
          "name": "test-pods"
        }
      }
    }
  ]
}
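For example, assuming the JSON above is saved as test-rc-svc.json (the file name is illustrative), the resources can be created in the current project with:
oc create -f test-rc-svc.json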
2.2 oc label pod test-rc-gpm5t name=test-only-pod
pod/test-rc-gpm5t labeled
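To confirm which node is running the labeled pod (the same check shown under Additional info), for example:
oc get pod -l name=test-only-pod -o wide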
3. Create a MetalLB CR with the YAML below, then apply it as shown after the YAML:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  controllerConfig:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: name
              operator: In
              values:
              - test-only-pod
          topologyKey: kubernetes.io/hostname
  logLevel: debug
  speakerConfig:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: name
              operator: In
              values:
              - test-only-pod
          topologyKey: kubernetes.io/hostname
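Assuming the YAML above is saved as metallb.yaml (illustrative file name), apply it and check the resulting pods with, for example:
oc apply -f metallb.yaml
oc get pods -n metallb-system -o wide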
Actual results:
Pods are not started. The controller and speaker pods were expected to be scheduled on the node that is running the pod with label name=test-only-pod.

oc get pods -n metallb-system
NAME                                                 READY   STATUS    RESTARTS   AGE
controller-6f57c98555-tcdmt                          0/2     Pending   0          20m
metallb-operator-controller-manager-cc796468-nkcg8   1/1     Running   0          32m
metallb-operator-webhook-server-858555566f-48gsd     1/1     Running   0          32m
speaker-98ds9                                        0/6     Pending   0          20m
speaker-dpp7t                                        0/6     Pending   0          20m
speaker-fpxq9                                        0/6     Pending   0          20m
speaker-ljgk9                                        0/6     Pending   0          20m
speaker-llwgc                                        0/6     Pending   0          20m
Expected results:
Controller and speaker pods should be in Running status.
Additional info:
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  21m                  default-scheduler  0/5 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 5 node(s) didn't match pod affinity rules. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  5m12s (x9 over 20m)  default-scheduler  0/5 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 5 node(s) didn't match pod affinity rules. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.
oc get pod -l name=test-only-pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE                             NOMINATED NODE   READINESS GATES
test-rc-gpm5t   1/1     Running   0          35m   10.131.0.34   asood-10211-lrdzn-worker-tqznp   <none>           <none>
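Note (an assumption, not confirmed in this report): pod affinity label selectors are namespace-scoped, and by default an affinity term only matches pods in the same namespace as the pod being scheduled (here metallb-system). If the test-only-pod runs in a different namespace, the required affinity term cannot be satisfied on any node unless the term also names the namespaces to search, for example via namespaceSelector:

requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
    matchExpressions:
    - key: name
      operator: In
      values:
      - test-only-pod
  namespaceSelector: {}   # empty selector = match pods in all namespaces; restrict as needed
  topologyKey: kubernetes.io/hostname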