Migration Toolkit for Red Hat OpenShift
MTRHO-87

[MTRHO] Migrations are failing due to namespace change from src to tgt cluster

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Blocker
    • Affects Version: MTRHO 1.0

      Description of problem: 

      Tried migrating an application from the Dev Sandbox environment to an OCP 4.11 cluster; the migration failed and the application pods ended up in an error state. All of the RoleBinding resources are migrated to the target cluster still referencing the source namespace, which causes permission issues for the application.

      The migration also failed at the kubectl-apply-kustomize step. Error logs are attached below:

      resource mapping not found for name: "terminal-pe0c2m" namespace: "test-crane" from "/workspace/kustomize": no matches for kind "DevWorkspace" in version "workspace.devfile.io/v1alpha2"
      ensure CRDs are installed first
      Error from server: error when creating "/workspace/kustomize": invalid origin role binding crtadmin-pods: attempts to reference role in namespace "rhn-support-prajoshi-dev" instead of current namespace "test-crane"
      Error from server: error when creating "/workspace/kustomize": invalid origin role binding devworkspacedw: attempts to reference role in namespace "rhn-support-prajoshi-dev" instead of current namespace "test-crane"
      Error from server: error when creating "/workspace/kustomize": invalid origin role binding rhn-support-prajoshi-rbac-edit: attempts to reference role in namespace "rhn-support-prajoshi-dev" instead of current namespace "test-crane"
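
      The first error indicates that the DevWorkspace CRD is not installed on the target cluster. As a quick check (the exact CRD name below is inferred from the kind and API group in the error message, so it is an assumption), the CRD can be looked up on the target with:

      # Check whether the DevWorkspace CRD / API group exists on the target cluster
      $ oc get crd devworkspaces.workspace.devfile.io
      $ oc api-resources --api-group=workspace.devfile.io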
      

      Version-Release number of selected component (if applicable):

      Source GCP 4.6

      Target GCP 4.11

      How reproducible:

      Always

      Steps to Reproduce:

      1. Deploy a stateless application in the Dev Sandbox environment (a sample workload is sketched below).
      2. Trigger the migration.
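
      For step 1, the "-deploy" pod names in the listing under Actual results indicate the workloads were OpenShift DeploymentConfigs, which rely on the per-namespace deployer service account (and therefore on the deployer RoleBinding affected here). A minimal sketch of such a workload, with an illustrative nginx image, would be:

      # Minimal DeploymentConfig sketch; the name matches the deployer pod seen below, the image is illustrative only
      apiVersion: apps.openshift.io/v1
      kind: DeploymentConfig
      metadata:
        name: nginx-deployment
      spec:
        replicas: 1
        selector:
          app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginxinc/nginx-unprivileged:latest
              ports:
              - containerPort: 8080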

      Actual results: 

      The application pods go into an error state because the migrated deployer RoleBinding refers to the wrong (source) namespace.

      $ oc get pods -n test-mig
      NAME                                              READY   STATUS      RESTARTS   AGE
      nginx-deployment-1-deploy                         0/1     Error       0          103s
      redis-1-deploy                                    0/1     Error       0          103s
      test-tdh9ko-apply-pod                             0/1     Completed   0          2m22s
      test-tdh9ko-export-pod                            0/1     Completed   0          3m33s
      test-tdh9ko-generate-destination-kubeconfig-pod   0/1     Completed   0          4m27s
      test-tdh9ko-generate-source-kubeconfig-pod        0/1     Completed   0          5m7s
      test-tdh9ko-kubectl-apply-kustomize-pod           0/1     Error       0          2m2s
      test-tdh9ko-kustomize-init-pod                    0/1     Completed   0          2m13s
      test-tdh9ko-transform-pod                         0/1     Completed   0          2m30s
      
      $ oc logs redis-1-deploy -n test-mig
      error: couldn't get deployment redis-1: replicationcontrollers "redis-1" is forbidden: User "system:serviceaccount:test-mig:deployer" cannot get resource "replicationcontrollers" in API group "" in the namespace "test-mig"
      

      RoleBinding:

      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:deployer
      subjects:
      - kind: ServiceAccount
        name: deployer
        namespace: rhn-support-prajoshi-dev
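
      The subject above still points at the source namespace (rhn-support-prajoshi-dev) even though the binding now lives in the target namespace, so the deployer service account in test-mig receives no permissions, which matches the "forbidden" error above. The migrated RoleBindings on the target can be inspected with standard oc commands, for example:

      # List the RoleBindings migrated into the target namespace and dump their subjects
      $ oc get rolebindings -n test-mig
      $ oc get rolebindings -n test-mig -o yaml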
      

      Expected results:  

      Default RoleBinding resources should not be updated by the migration, and the application pods should reach the Running state.
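
      For illustration only (the actual binding name is not shown in this report), a correctly re-created deployer RoleBinding on the target would reference the target namespace in its subject, roughly:

      # Sketch of the expected subject on the target cluster; only the namespace differs from the migrated binding above
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:deployer
      subjects:
      - kind: ServiceAccount
        name: deployer
        namespace: test-mig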

      Additional info:

      Assignee: Jason Montleon (rhn-engineering-jmontleo)
      Reporter: Prasad Joshi (rhn-support-prajoshi)