Bug
Resolution: Cannot Reproduce
Normal
None
CNV v4.16.0
0.42
False
False
None
---
---
Low
No
Description of problem:
Connectivity is lost after the migration of a virtual machine (VM) that has a secondary interface configured on a Linux bridge with VLAN tagging.
Version-Release number of selected component (if applicable):
Tested on: 4.16.0 PSI cluster, 4.13.23 BM cluster
How reproducible:
Always
Steps to Reproduce:
1. Create a virtual machine with a secondary interface configured on a Linux bridge with VLAN tagging.
2. Migrate the virtual machine to a different node.
3. Check the network connectivity of the virtual machine after migration.
Actual results:
After migration, the virtual machine loses network connectivity on the secondary interface that is configured on the Linux bridge with VLAN tagging.
Expected results:
The virtual machine should retain network connectivity on all interfaces, including the secondary interface configured on the Linux bridge with VLAN tagging, after migration.
Additional info:
- This issue occurs consistently across multiple migrations.
- The primary interface of the virtual machine retains connectivity after migration.
NodeNetworkConfigurationPolicy:
oc get nncp
NAME                    STATUS      REASON
network-bridge-1-nncp   Available   SuccessfullyConfigured

oc get nncp network-bridge-1-nncp -o yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  annotations:
    meta.helm.sh/release-name: network-bridge
    meta.helm.sh/release-namespace: test-bug
    nmstate.io/webhook-mutating-timestamp: "1714994532095397501"
  creationTimestamp: "2024-05-06T11:21:47Z"
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
  name: network-bridge-1-nncp
  resourceVersion: "8131371"
  uid: 67f1ad46-b1d5-40cd-9c7f-26c0ae0e3baf
spec:
  desiredState:
    interfaces:
    - bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens10
      ipv4:
        enabled: false
      name: br1
      state: up
      type: linux-bridge
  nodeSelector:
    node-role.kubernetes.io/worker: ""
status:
  conditions:
  - lastHeartbeatTime: "2024-05-06T11:22:28Z"
    lastTransitionTime: "2024-05-06T11:22:28Z"
    message: 3/3 nodes successfully configured
    reason: SuccessfullyConfigured
    status: "True"
    type: Available
  - lastHeartbeatTime: "2024-05-06T11:22:28Z"
    lastTransitionTime: "2024-05-06T11:22:28Z"
    reason: SuccessfullyConfigured
    status: "False"
    type: Degraded
  - lastHeartbeatTime: "2024-05-06T11:22:28Z"
    lastTransitionTime: "2024-05-06T11:22:28Z"
    reason: ConfigurationProgressing
    status: "False"
    type: Progressing
  lastUnavailableNodeCountUpdate: "2024-05-06T11:22:28Z"
NetworkAttachmentDefinition (with VLAN id):
oc get network-attachment-definition
NAME         AGE
bridge-nad   3h21m

oc get network-attachment-definition bridge-nad -o yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    meta.helm.sh/release-name: network-bridge
    meta.helm.sh/release-namespace: test-bug
  creationTimestamp: "2024-05-06T08:02:28Z"
  generation: 4
  labels:
    app.kubernetes.io/managed-by: Helm
  name: bridge-nad
  namespace: test-bug
  resourceVersion: "8004361"
  uid: 23fa4052-62c7-448d-8439-eb12e4b14e41
spec:
  config: '{ "cniVersion": "0.3.1", "name": "bridge-network", "type": "cnv-bridge", "bridge": "br1", "macspoofchk": true, "ipam": {}, "vlan": 1001, "preserveDefaultVlan": false }'
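As a quick sanity check, the CNI configuration embedded in the NAD above can be parsed to confirm the bridge name and VLAN id the attachment applies (a minimal stdlib-only sketch; the JSON string is copied verbatim from spec.config above):

```python
import json

# spec.config copied verbatim from the bridge-nad NetworkAttachmentDefinition above
nad_config = '{ "cniVersion": "0.3.1", "name": "bridge-network", "type": "cnv-bridge", "bridge": "br1", "macspoofchk": true, "ipam": {}, "vlan": 1001, "preserveDefaultVlan": false }'

cfg = json.loads(nad_config)
# Secondary interfaces attach to bridge br1 with VLAN tag 1001,
# and the default VLAN is not preserved on the port.
print(cfg["type"], cfg["bridge"], cfg["vlan"], cfg["preserveDefaultVlan"])
# cnv-bridge br1 1001 False
```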
VirtualMachine:
oc get vm
NAME        AGE    STATUS    READY
rhel-1-vm   3h3m   Running   True
rhel-2-vm   3h3m   Running   True

oc get vm -o yaml
apiVersion: v1
items:
- apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    annotations:
      kubevirt.io/latest-observed-api-version: v1
      kubevirt.io/storage-observed-api-version: v1
      meta.helm.sh/release-name: rhel
      meta.helm.sh/release-namespace: test-bug
    creationTimestamp: "2024-05-06T08:22:06Z"
    finalizers:
    - kubevirt.io/virtualMachineControllerFinalize
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: rhel-1-vm
    namespace: test-bug
    resourceVersion: "8129614"
    uid: d303a935-a17f-408e-b95b-9fcb4b6e8498
  spec:
    running: true
    template:
      metadata:
        creationTimestamp: null
        labels:
          kubevirt.io/vm: rhel-1-vm
      spec:
        architecture: amd64
        domain:
          devices:
            disks:
            - disk:
                bus: virtio
              name: containerdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
            interfaces:
            - macAddress: 02:99:4b:00:00:1a
              masquerade: {}
              name: default
            - bridge: {}
              macAddress: 02:99:4b:00:00:1b
              name: bridge-net
            rng: {}
          machine:
            type: pc-q35-rhel9.4.0
          resources:
            requests:
              memory: 1024M
        networks:
        - name: default
          pod: {}
        - multus:
            networkName: bridge-nad
          name: bridge-net
        terminationGracePeriodSeconds: 0
        volumes:
        - containerDisk:
            image: registry.redhat.io/rhel8/rhel-guest-image:latest
          name: containerdisk
        - cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth0:
                  addresses:
                  - fd10:0:2::2/120
                  dhcp4: true
                  gateway6: fd10:0:2::1
                eth1:
                  addresses:
                  - 10.200.0.1/24
            userData: |-
              #cloud-config
              password: password
              chpasswd: { expire: False }
          name: cloudinitdisk
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2024-05-06T11:19:57Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: null
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: null
      status: "True"
      type: LiveMigratable
    - lastProbeTime: "2024-05-06T11:20:37Z"
      lastTransitionTime: null
      status: "True"
      type: AgentConnected
    created: true
    desiredGeneration: 1
    observedGeneration: 1
    printableStatus: Running
    ready: true
    volumeSnapshotStatuses:
    - enabled: false
      name: containerdisk
      reason: Snapshot is not supported for this volumeSource type [containerdisk]
    - enabled: false
      name: cloudinitdisk
      reason: Snapshot is not supported for this volumeSource type [cloudinitdisk]
- apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    annotations:
      kubevirt.io/latest-observed-api-version: v1
      kubevirt.io/storage-observed-api-version: v1
      meta.helm.sh/release-name: rhel
      meta.helm.sh/release-namespace: test-bug
    creationTimestamp: "2024-05-06T08:22:06Z"
    finalizers:
    - kubevirt.io/virtualMachineControllerFinalize
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: rhel-2-vm
    namespace: test-bug
    resourceVersion: "8129615"
    uid: 34f29fdf-7db7-4682-87cb-9c3f54a8a3ee
  spec:
    running: true
    template:
      metadata:
        creationTimestamp: null
        labels:
          kubevirt.io/vm: rhel-2-vm
      spec:
        architecture: amd64
        domain:
          devices:
            disks:
            - disk:
                bus: virtio
              name: containerdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
            interfaces:
            - macAddress: 02:99:4b:00:00:18
              masquerade: {}
              name: default
            - bridge: {}
              macAddress: 02:99:4b:00:00:19
              name: bridge-net
            rng: {}
          machine:
            type: pc-q35-rhel9.4.0
          resources:
            requests:
              memory: 1024M
        networks:
        - name: default
          pod: {}
        - multus:
            networkName: bridge-nad
          name: bridge-net
        terminationGracePeriodSeconds: 0
        volumes:
        - containerDisk:
            image: registry.redhat.io/rhel8/rhel-guest-image:latest
          name: containerdisk
        - cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth0:
                  addresses:
                  - fd10:0:2::2/120
                  dhcp4: true
                  gateway6: fd10:0:2::1
                eth1:
                  addresses:
                  - 10.200.0.2/24
            userData: |-
              #cloud-config
              password: password
              chpasswd: { expire: False }
          name: cloudinitdisk
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2024-05-06T11:19:54Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: null
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: null
      status: "True"
      type: LiveMigratable
    - lastProbeTime: "2024-05-06T11:20:37Z"
      lastTransitionTime: null
      status: "True"
      type: AgentConnected
    created: true
    desiredGeneration: 1
    observedGeneration: 1
    printableStatus: Running
    ready: true
    volumeSnapshotStatuses:
    - enabled: false
      name: containerdisk
      reason: Snapshot is not supported for this volumeSource type [containerdisk]
    - enabled: false
      name: cloudinitdisk
      reason: Snapshot is not supported for this volumeSource type [cloudinitdisk]
kind: List
metadata:
  resourceVersion: ""
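Both VMs assign their bridge-attached eth1 a static address via cloud-init; a small stdlib check (addresses copied from the networkData above) confirms the two guests sit on the same /24 over the tagged bridge, which is the guest-to-guest path that loses connectivity after migration:

```python
import ipaddress

# eth1 addresses copied from the cloud-init networkData of rhel-1-vm and rhel-2-vm above
eth1_addrs = ["10.200.0.1/24", "10.200.0.2/24"]

ifaces = [ipaddress.ip_interface(a) for a in eth1_addrs]
# Both interfaces must resolve to a single shared network for direct L2 reachability.
same_subnet = len({i.network for i in ifaces}) == 1
print(ifaces[0].network, same_subnet)
# 10.200.0.0/24 True
```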
is related to:
CNV-29499 [2213262] Lost connectivity after live migration of a VM with a hot-plugged disk (Closed)