Type: Bug
Resolution: Unresolved
Priority: Blocker
Affects Version: 2.10.0
MTV 2.10.0-3: the storage offload sanity cycle with a single VM failed with:
"flag provided but not defined: -source-vm-id"
[kni@f01-h07-000-r640 Storage_Offload]$ oc logs populate-d648df35-363b-4f8a-a240-c6ee9279cd32 -noffload-bsaeline
flag provided but not defined: -source-vm-id
Usage of /bin/vsphere-xcopy-volume-populator:
  -add_dir_header
        If true, adds the file directory to the header of the log messages
  -alsologtostderr
        log to standard error as well as files (no effect when -logtostderr=true)
  -cr-name string
        The Custom Resouce Name
  -cr-namespace string
        The Custom Resouce Namespace
  -http-endpoint :8080
        The TCP network address where the HTTP server for diagnostics, including metrics and leader election health check, will listen (example: :8080). The default is empty string, which means the server is disabled.
  -kubeconfig string
        Path to a kubeconfig. Only required if out-of-cluster.
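The same usage check can be reproduced outside the cluster by running the populator binary from the suspect image with --help (a sketch, assuming podman is available locally; the entrypoint path is taken from the usage output above):

$ podman run --rm --entrypoint /bin/vsphere-xcopy-volume-populator \
    registry.redhat.io/mtv-candidate/mtv-vsphere-xcopy-volume-populator-rhel9@sha256:474aec7f503b4c3def8e685db86e46c5407f55c95783dd1a1763be0d0901587b \
    --help

If -source-vm-id is absent from the printed flag list, the image predates the flag.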
[kni@f01-h07-000-r640 Storage_Offload]$ oc get pods/populate-d648df35-363b-4f8a-a240-c6ee9279cd32 -noffload-bsaeline -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.128.1.34/23"],"mac_address":"0a:58:0a:80:01:22","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.169.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.1.34/23","gateway_ip":"10.128.0.1","role":"primary"}}'
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.128.1.34"
          ],
          "mac": "0a:58:0a:80:01:22",
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: forklift-controller-scc
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2025-09-11T12:49:34Z"
  labels:
    migration: 7c46ac7f-b882-4758-9f7f-6aed29cc94c8
    pvcName: offload-bsaeline-offload-baseline-disk-0-cbd7f293
  name: populate-d648df35-363b-4f8a-a240-c6ee9279cd32
  namespace: offload-bsaeline
  resourceVersion: "1312193213"
  uid: 0b790e79-3f0e-4aca-a79c-a04b66e6f0a1
spec:
  containers:
  - args:
    - --source-vm-id=vm-31400
    - --source-vmdk=[PerfTest_VC7_1_ISCSI_24TB] offload_baseline/offload_baseline_3.vmdk
    - --target-namespace=offload-bsaeline
    - --cr-name=offload-bsaeline-offload-baseline-disk-0-cbd7f293
    - --cr-namespace=offload-bsaeline
    - --owner-name=offload-bsaeline-offload-baseline-disk-0-cbd7f293
    - --secret-name=offload-bsaeline-vm-31400-rzm55
    - --storage-vendor-product=ontap
    - --pvc-size=53687091200
    - --owner-uid=d648df35-363b-4f8a-a240-c6ee9279cd32
    envFrom:
    - secretRef:
        name: offload-bsaeline-vm-31400-rzm55
    image: registry.redhat.io/mtv-candidate/mtv-vsphere-xcopy-volume-populator-rhel9@sha256:474aec7f503b4c3def8e685db86e46c5407f55c95783dd1a1763be0d0901587b
    imagePullPolicy: IfNotPresent
    name: populate
    ports:
    - containerPort: 8443
      name: metrics
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 107
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeDevices:
    - devicePath: /dev/block
      name: target
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-lqbq6
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: populator-dockercfg-gwfqf
  nodeName: worker002-r640
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 107
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: populator
  serviceAccountName: populator
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: target
    persistentVolumeClaim:
      claimName: prime-d648df35-363b-4f8a-a240-c6ee9279cd32
  - name: kube-api-access-lqbq6
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-09-11T12:49:43Z"
    status: "False"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-09-11T12:49:34Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-09-11T12:49:34Z"
    reason: PodFailed
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-09-11T12:49:34Z"
    reason: PodFailed
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-09-11T12:49:34Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://3d79159fb17e651da3e0c934f34aeb05d22d67704a72a47794cd52c077153fe5
    image: registry.redhat.io/mtv-candidate/mtv-vsphere-xcopy-volume-populator-rhel9@sha256:474aec7f503b4c3def8e685db86e46c5407f55c95783dd1a1763be0d0901587b
    imageID: registry.redhat.io/mtv-candidate/mtv-vsphere-xcopy-volume-populator-rhel9@sha256:474aec7f503b4c3def8e685db86e46c5407f55c95783dd1a1763be0d0901587b
    lastState: {}
    name: populate
    ready: false
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: cri-o://3d79159fb17e651da3e0c934f34aeb05d22d67704a72a47794cd52c077153fe5
        exitCode: 2
        finishedAt: "2025-09-11T12:49:41Z"
        reason: Error
        startedAt: "2025-09-11T12:49:41Z"
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-lqbq6
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 10.1.60.15
  hostIPs:
  - ip: 10.1.60.15
  phase: Failed
  podIP: 10.128.1.34
  podIPs:
  - ip: 10.128.1.34
  qosClass: BestEffort
  startTime: "2025-09-11T12:49:34Z"
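The container args above include --source-vm-id, while the binary in the image does not recognize it. To pull just the image reference out of the failed pod for inspection (a minimal sketch using standard oc jsonpath output):

$ oc get pod populate-d648df35-363b-4f8a-a240-c6ee9279cd32 -n offload-bsaeline \
    -o jsonpath='{.spec.containers[0].image}'

The digest it prints is the one inspected in the comment below.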
Comment from rgolan1@redhat.com:
Running skopeo inspect docker://registry.redhat.io/mtv-candidate/mtv-vsphere-xcopy-volume-populator-rhel9@sha256:474aec7f503b4c3def8e685db86e46c5407f55c95783dd1a1763be0d0901587b shows that this image is old. Here are the labels of that image:

    "Labels": {
        "architecture": "x86_64",
        "build-date": "2025-06-20T11:18:45",
        "com.redhat.component": "mtv-vsphere-xcopy-volume-populator-container",
        "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
        "description": "Migration Toolkit for Virtualization - vSphere XCOPY Volume Populator",
        "distribution-scope": "public",
        "io.buildah.version": "1.39.0-dev",
        "io.k8s.description": "Migration Toolkit for Virtualization - vSphere XCOPY Volume Populator",
        "io.k8s.display-name": "Migration Toolkit for Virtualization",
        "io.openshift.expose-services": "",
        "io.openshift.tags": "migration,mtv,forklift",
        "license": "Apache License 2.0",
        "maintainer": "Migration Toolkit for Virtualization Team \u003cmigtoolkit-virt@redhat.com\u003e",
        "name": "mtv-candidate/mtv-vsphere-xcopy-volume-populator-rhel9",
        "release": "1747218906",
        "revision": "bfa0495925b0fdfd15209b2d5a60be7dc8d27976",
        "summary": "Migration Toolkit for Virtualization - vSphere XCOPY Volume Populator",
        "url": "https://www.redhat.com",
        "vcs-ref": "bfa0495925b0fdfd15209b2d5a60be7dc8d27976",
        "vcs-type": "git",
        "vendor": "Red Hat, Inc.",
        "version": "2.9.0"
    },
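To extract just the fields that matter here (version, build date, vcs-ref) without scrolling through the full output, a sketch assuming jq is installed:

$ skopeo inspect docker://registry.redhat.io/mtv-candidate/mtv-vsphere-xcopy-volume-populator-rhel9@sha256:474aec7f503b4c3def8e685db86e46c5407f55c95783dd1a1763be0d0901587b \
    | jq -r '.Labels | .version, ."build-date", ."vcs-ref"'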
The image is from June. If I take the vcs-ref and run git log on it:
$ git log bfa0495925b0fdfd15209b2d5a60be7dc8d27976 -1 --shortstat
commit bfa0495925b0fdfd15209b2d5a60be7dc8d27976
Author: Stefan Olenocin <solenoci@redhat.com>
Date:   Fri Jun 20 13:15:36 2025 +0200

    chore(deps): update tekton refs (#2055)

    Signed-off-by: Stefan Olenocin <solenoci@redhat.com>

 28 files changed, 206 insertions(+), 206 deletions(-)
and that version really doesn't have the -source-vm-id flag.
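One quick way to double-check this from a forklift checkout is to grep the tree at that revision for the flag name (a sketch; no output means the flag is not defined anywhere at that commit):

$ git grep -n "source-vm-id" bfa0495925b0fdfd15209b2d5a60be7dc8d27976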
Relates to: MTV-3253 [Scale] Performance regression testing for 2.10.0 release (In Progress)