
OADP-4338: [IBM QE-Z] Verify Bug OADP-4225 - Velero/NodeAgent pod logs have wrong timezone value


    • Type: Sub-task
    • Resolution: Done
    • Priority: Undefined
    • Fix Version: OADP 1.4.0

      Description of problem:

      Created a DPA CR with the timezone set to America/New_York. The Velero/NodeAgent pod logs still show timestamps in the wrong timezone (UTC).

      Velero pod log entry without the timezone set:

      time="2024-06-05T07:20:54Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=openshift-adp/ts-dpa-1 controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:141"

      Velero pod log entry after setting the timezone to America/New_York (the timestamp is still in UTC):

      time="2024-06-05T07:22:32Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=openshift-adp/ts-dpa-1 controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:126"

      The TZ environment variable is set on the Velero pod:

         env:
          - name: TZ
            value: America/New_York
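
      For reference, a minimal sketch (not Velero's own logging code) of how a Go process such as Velero is normally expected to pick up TZ through the standard time package; a trailing "Z" on a timestamp means it is still rendered in UTC:

      package main

      import (
          "fmt"
          "time"
      )

      func main() {
          // time.Local is initialized from $TZ at process start, so with
          // TZ=America/New_York the local rendering should carry a -04:00/-05:00 offset.
          now := time.Now()
          fmt.Println("local:", now.Format(time.RFC3339)) // e.g. 2024-06-05T03:22:32-04:00
          // The log lines above end in "Z", i.e. they are still formatted in UTC.
          fmt.Println("utc:  ", now.UTC().Format(time.RFC3339)) // e.g. 2024-06-05T07:22:32Z
      }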

      Version-Release number of selected component (if applicable):
      OADP 1.4 (Installed via oadp-1.4 branch)
      OCP 4.16

       

      How reproducible:
      Always

       

      Steps to Reproduce:
      1. Create a DPA with the timezone value set via podConfig.env (manifest below; a sketch for checking the resulting Deployment follows it).

      oc get dpa ts-dpa -o yaml
      apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        creationTimestamp: "2024-06-05T07:22:26Z"
        generation: 4
        name: ts-dpa
        namespace: openshift-adp
        resourceVersion: "69732"
        uid: 32f69445-d641-43ac-9aa8-a2c0118dbfa0
      spec:
        backupLocations:
        - velero:
            credential:
              key: cloud
              name: cloud-credentials-gcp
            default: true
            objectStorage:
              bucket: oadp82611s6ntt
              prefix: velero-e2e-e2a1d624-2306-11ef-a448-845cf3eff33a
            provider: gcp
        configuration:
          nodeAgent:
            enable: true
            podConfig:
              env:
              - name: TZ
                value: America/New_York
            uploaderType: kopia
          velero:
            defaultPlugins:
            - openshift
            - gcp
            - kubevirt
            podConfig:
              env:
              - name: TZ
                value: America/New_York
      status:
        conditions:
        - lastTransitionTime: "2024-06-05T07:22:26Z"
          message: Reconcile complete
          reason: Complete
          status: "True"
          type: Reconciled
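
      To confirm step 1 took effect, a hypothetical check (not part of OADP tooling; it assumes kubeconfig access and that the operator renders the Deployment as openshift-adp/velero, as the pod dump under Additional info indicates):

      package main

      import (
          "context"
          "fmt"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/client-go/kubernetes"
          "k8s.io/client-go/tools/clientcmd"
      )

      func main() {
          // Build a client from the local kubeconfig.
          cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
          if err != nil {
              panic(err)
          }
          cs := kubernetes.NewForConfigOrDie(cfg)

          // Fetch the velero Deployment and print any TZ env var the operator set.
          dep, err := cs.AppsV1().Deployments("openshift-adp").Get(context.TODO(), "velero", metav1.GetOptions{})
          if err != nil {
              panic(err)
          }
          for _, c := range dep.Spec.Template.Spec.Containers {
              for _, e := range c.Env {
                  if e.Name == "TZ" {
                      fmt.Printf("container %s has TZ=%s\n", c.Name, e.Value)
                  }
              }
          }
      }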

      2. Check the Velero/NodeAgent pod logs.
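
      A hypothetical helper for step 2 (not part of OADP tooling): pass it the time="..." value from a Velero or node-agent log line and it reports whether the timestamp is rendered in UTC or carries a local offset.

      package main

      import (
          "fmt"
          "time"
      )

      // zoneOfLogTimestamp parses an RFC 3339 timestamp taken from a log line
      // and returns the zone it was rendered in.
      func zoneOfLogTimestamp(ts string) (string, error) {
          t, err := time.Parse(time.RFC3339, ts)
          if err != nil {
              return "", err
          }
          name, offset := t.Zone()
          return fmt.Sprintf("%s (offset %ds)", name, offset), nil
      }

      func main() {
          z, err := zoneOfLogTimestamp("2024-06-05T07:22:32Z")
          if err != nil {
              panic(err)
          }
          fmt.Println(z) // "UTC (offset 0s)" -> the timezone setting was not applied
      }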

      Actual results:

      The Velero/NodeAgent pods do not respect the timezone configuration; log timestamps remain in UTC.

      Expected results:

      The Velero/NodeAgent pods should respect the timezone configuration, i.e. log timestamps should carry the America/New_York offset (-04:00 during EDT) instead of ending in Z.

       

      Additional info:

      Velero pod YAML (note the TZ environment variable on the velero container):

      $ oc get pod velero-597cb56895-lmkgx -o yaml
      apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.131.0.22/23"],"mac_address":"0a:58:0a:83:00:16","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.131.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.131.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.131.0.1"}],"ip_address":"10.131.0.22/23","gateway_ip":"10.131.0.1"}}'
          k8s.v1.cni.cncf.io/network-status: |-
            [{
                "name": "ovn-kubernetes",
                "interface": "eth0",
                "ips": [
                    "10.131.0.22"
                ],
                "mac": "0a:58:0a:83:00:16",
                "default": true,
                "dns": {}
            }]
          openshift.io/scc: restricted-v2
          prometheus.io/path: /metrics
          prometheus.io/port: "8085"
          prometheus.io/scrape: "true"
          seccomp.security.alpha.kubernetes.io/pod: runtime/default
        creationTimestamp: "2024-06-05T07:22:26Z"
        generateName: velero-597cb56895-
        labels:
          app.kubernetes.io/component: server
          app.kubernetes.io/instance: ts-dpa
          app.kubernetes.io/managed-by: oadp-operator
          app.kubernetes.io/name: velero
          component: velero
          deploy: velero
          openshift.io/oadp: "True"
          pod-template-hash: 597cb56895
        name: velero-597cb56895-lmkgx
        namespace: openshift-adp
        ownerReferences:
        - apiVersion: apps/v1
          blockOwnerDeletion: true
          controller: true
          kind: ReplicaSet
          name: velero-597cb56895
          uid: 63fa7a43-47cb-4f3b-be5a-10a64868ad5b
        resourceVersion: "67897"
        uid: a75193c1-d764-465e-a9be-d3ba40ab40fb
      spec:
        containers:
        - args:
          - server
          - --uploader-type=kopia
          - --fs-backup-timeout=4h
          - --restore-resource-priorities=securitycontextconstraints,customresourcedefinitions,klusterletconfigs.config.open-cluster-management.io,managedcluster.cluster.open-cluster-management.io,namespaces,roles,rolebindings,clusterrolebindings,klusterletaddonconfig.agent.open-cluster-management.io,managedclusteraddon.addon.open-cluster-management.io,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io
          - --disable-informer-cache=false
          command:
          - /velero
          env:
          - name: VELERO_SCRATCH_DIR
            value: /scratch
          - name: VELERO_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: LD_LIBRARY_PATH
            value: /plugins
          - name: TZ
            value: America/New_York
          - name: OPENSHIFT_IMAGESTREAM_BACKUP
            value: "true"
          image: quay.io/konveyor/velero:oadp-1.4
          imagePullPolicy: Always
          name: velero
          ports:
          - containerPort: 8085
            name: metrics
            protocol: TCP
          resources:
            requests:
              cpu: 500m
              memory: 128Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            runAsNonRoot: true
            runAsUser: 1000690000
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /plugins
            name: plugins
          - mountPath: /scratch
            name: scratch
          - mountPath: /etc/ssl/certs
            name: certs
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-g2p2d
            readOnly: true
        dnsPolicy: ClusterFirst
        enableServiceLinks: true
        imagePullSecrets:
        - name: velero-dockercfg-vmhcl
        initContainers:
        - image: quay.io/konveyor/openshift-velero-plugin:oadp-1.4
          imagePullPolicy: Always
          name: openshift-velero-plugin
          resources:
            requests:
              cpu: 500m
              memory: 128Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            runAsNonRoot: true
            runAsUser: 1000690000
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /target
            name: plugins
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-g2p2d
            readOnly: true
        - image: quay.io/konveyor/velero-plugin-for-gcp:oadp-1.4
          imagePullPolicy: Always
          name: velero-plugin-for-gcp
          resources:
            requests:
              cpu: 500m
              memory: 128Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            runAsNonRoot: true
            runAsUser: 1000690000
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /target
            name: plugins
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-g2p2d
            readOnly: true
        - image: quay.io/konveyor/kubevirt-velero-plugin:v0.2.0
          imagePullPolicy: Always
          name: kubevirt-velero-plugin
          resources:
            requests:
              cpu: 500m
              memory: 128Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            runAsNonRoot: true
            runAsUser: 1000690000
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /target
            name: plugins
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-g2p2d
            readOnly: true
        nodeName: oadp-82611-s6ntt-worker-b-nr8sx
        preemptionPolicy: PreemptLowerPriority
        priority: 0
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext:
          fsGroup: 1000690000
          seLinuxOptions:
            level: s0:c26,c20
          seccompProfile:
            type: RuntimeDefault
        serviceAccount: velero
        serviceAccountName: velero
        terminationGracePeriodSeconds: 30
        tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
        - effect: NoSchedule
          key: node.kubernetes.io/memory-pressure
          operator: Exists
        volumes:
        - emptyDir: {}
          name: plugins
        - emptyDir: {}
          name: scratch
        - emptyDir: {}
          name: certs
        - name: kube-api-access-g2p2d
          projected:
            defaultMode: 420
            sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                items:
                - key: ca.crt
                  path: ca.crt
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
            - configMap:
                items:
                - key: service-ca.crt
                  path: service-ca.crt
                name: openshift-service-ca.crt
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2024-06-05T07:22:29Z"
          status: "True"
          type: PodReadyToStartContainers
        - lastProbeTime: null
          lastTransitionTime: "2024-06-05T07:22:31Z"
          status: "True"
          type: Initialized
        - lastProbeTime: null
          lastTransitionTime: "2024-06-05T07:22:32Z"
          status: "True"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: "2024-06-05T07:22:32Z"
          status: "True"
          type: ContainersReady
        - lastProbeTime: null
          lastTransitionTime: "2024-06-05T07:22:26Z"
          status: "True"
          type: PodScheduled
        containerStatuses:
        - containerID: cri-o://a290b630e8408869744b3730d92fc3ca1b26a5df356639f15ff3e6271ec9a958
          image: quay.io/konveyor/velero:oadp-1.4
          imageID: quay.io/konveyor/velero@sha256:2a15b8eae19037988f9066e4a18864e7d51baec96e0fa107d39980ff7b7c9543
          lastState: {}
          name: velero
          ready: true
          restartCount: 0
          started: true
          state:
            running:
              startedAt: "2024-06-05T07:22:32Z"
        hostIP: 10.0.128.3
        hostIPs:
        - ip: 10.0.128.3
        initContainerStatuses:
        - containerID: cri-o://54d7b9aa9c4b44c5e57abb05a35fd494b6b12ff4a519b8c7fec250fccedb584c
          image: quay.io/konveyor/openshift-velero-plugin:oadp-1.4
          imageID: quay.io/konveyor/openshift-velero-plugin@sha256:1579ec458595a7decfe9a9a6f7710074f288324180274b73e27d42b48c8d9ceb
          lastState: {}
          name: openshift-velero-plugin
          ready: true
          restartCount: 0
          started: false
          state:
            terminated:
              containerID: cri-o://54d7b9aa9c4b44c5e57abb05a35fd494b6b12ff4a519b8c7fec250fccedb584c
              exitCode: 0
              finishedAt: "2024-06-05T07:22:28Z"
              reason: Completed
              startedAt: "2024-06-05T07:22:28Z"
        - containerID: cri-o://f84971af51259b7ac100a2b7d92211371cdbd342bc1a7232084124dc3819590e
          image: quay.io/konveyor/velero-plugin-for-gcp:oadp-1.4
          imageID: quay.io/konveyor/velero-plugin-for-gcp@sha256:424b59e74e996c579bcc90789439f8614d0af1ab38c2c5b9be44add536e0bdf5
          lastState: {}
          name: velero-plugin-for-gcp
          ready: true
          restartCount: 0
          started: false
          state:
            terminated:
              containerID: cri-o://f84971af51259b7ac100a2b7d92211371cdbd342bc1a7232084124dc3819590e
              exitCode: 0
              finishedAt: "2024-06-05T07:22:30Z"
              reason: Completed
              startedAt: "2024-06-05T07:22:30Z"
        - containerID: cri-o://05078ecc991b69fec80273e9b60a33d3c25516dbb48864e14ed183f502e5b6b5
          image: quay.io/konveyor/kubevirt-velero-plugin:v0.2.0
          imageID: quay.io/konveyor/kubevirt-velero-plugin@sha256:9a8adbfa8c5c552b8ae58eba6385332f5e0586b69605f63b95ffa0bff0df7f56
          lastState: {}
          name: kubevirt-velero-plugin
          ready: true
          restartCount: 0
          started: false
          state:
            terminated:
              containerID: cri-o://05078ecc991b69fec80273e9b60a33d3c25516dbb48864e14ed183f502e5b6b5
              exitCode: 0
              finishedAt: "2024-06-05T07:22:31Z"
              reason: Completed
              startedAt: "2024-06-05T07:22:31Z"
        phase: Running
        podIP: 10.131.0.22
        podIPs:
        - ip: 10.131.0.22
        qosClass: Burstable
        startTime: "2024-06-05T07:22:26Z"
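
      For background only (an illustration of general Go behavior, not a confirmed root cause of this bug): if a Go binary cannot load the zone named in $TZ, for example because the container image lacks the system tzdata files, it silently falls back to UTC. Compiling with the embedded time/tzdata package removes that dependency on the base image.

      package main

      import (
          "fmt"
          "time"

          // Embeds the IANA time zone database into the binary (Go 1.15+), so
          // zone lookups succeed even if /usr/share/zoneinfo is absent in the image.
          _ "time/tzdata"
      )

      func main() {
          loc, err := time.LoadLocation("America/New_York")
          if err != nil {
              // Without system or embedded tzdata the lookup fails and Go keeps
              // rendering "local" time as UTC.
              fmt.Println("zone lookup failed:", err)
              return
          }
          fmt.Println(time.Now().In(loc).Format(time.RFC3339))
      }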

              uprasad@redhat.com Ukthi Prasad
              akarol@redhat.com Aziza Karol