
    • Type: Task
    • Resolution: Done
    • Priority: Blocker
    • Affects Version/s: 2.10.0.GA
    • Fix Version/s: 2.10.0.GA
    • Component/s: testing

      We need to renew the existing QE OCP 3.11 instance https://console.ocp311.crw-qe.com:8443, because the old one became not ready and had a TLS certificate problem: https://issues.redhat.com/browse/CRW-2025

      Ansible deploy script: https://gitlab.cee.redhat.com/codeready-workspaces/deploy-ocp-crew

       

      Installation issues:

      1. "dns.crw-qe.com named[22087]: managed-keys-zone: Unable to fetch DNSKEY set '.': timed out"

      TASK [set-cluster-admin : Set cluster admin] ***********************************
      task path: /root/deploy-ocp-crew/ocp-setup/roles/set-cluster-admin/tasks/main.yml:3
      fatal: [10.0.204.239]: FAILED! => {"changed": false, "cmd": "oc adm policy add-cluster-role-to-user cluster-admin developer --as=system:admin", "msg": "[Errno 2] No such file or directory", "rc": 2}
      to retry, use: --limit @/root/deploy-ocp-crew/ocp-setup/post-install-actions.retry

      Workaround:

      Go to the DNS server and fix the following files:
      1.1
      vi /usr/lib/systemd/system/named.service

      change the ExecStart line to force IPv4-only mode (per https://forums.centos.org/viewtopic.php?f=50&t=74591&sid=fd22655d2ee355ccf4427ea87916a4a5):
      ExecStart=/usr/sbin/named -4 -u named -c ${NAMEDCONF} $OPTIONS
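
      After changing the unit file, systemd needs to reload it and named has to be restarted; a minimal sketch of the usual commands (assumed here, not part of the original ticket):

      systemctl daemon-reload
      systemctl restart named
      systemctl status named   # verify named is now running with the -4 (IPv4-only) flag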

      1.2
      vi /etc/named.conf

      change the forwarders and comment out the "crw-qe.com" zone definition:

                 forwarders {
                    10.11.5.19;
                 };
      
      /*
      zone "crw-qe.com" IN {
              type master;
              file "dynamic/db.crw-qe.com.zone";
              allow-update {
                      10.0.0.0/8;
                      key rndc-key;
              };
      };
      */
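
      Once /etc/named.conf is updated, the configuration can be validated and the service restarted before retrying the failed task (standard bind tooling, assumed here):

      named-checkconf /etc/named.conf            # syntax check of the edited config
      systemctl restart named
      dig @127.0.0.1 console.ocp311.crw-qe.com   # confirm queries resolve through the new forwarder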
      

      2. "deploy-ocp-crew/deploy-ocp.sh" script execution terminated.

      Solution:

      • increase the execution timeout from the default 10 seconds to 1200 seconds by adding the ansible-playbook parameter "--timeout 1200";
      • re-run "./deploy-ocp-crew/deploy-ocp.sh > command.log &" on the Ansible VM manually (see the sketch after this list).
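
      For illustration, the re-run could look like the sketch below; the playbook path is hypothetical, inferred from the post-install-actions.retry file in the error above:

      # pass a longer SSH connection timeout to the playbook run (hypothetical playbook name)
      ansible-playbook --timeout 1200 /root/deploy-ocp-crew/ocp-setup/post-install-actions.yml

      # re-run the deploy script in the background and capture its output
      ./deploy-ocp-crew/deploy-ocp.sh > command.log 2>&1 &
      tail -f command.log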

      3. CRW workspace mount problem

      (combined from similar events): MountVolume.SetUp failed for volume "pv12" : mount failed: exit status 32
      Mounting command: systemd-run
      Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/bfb1e087-d268-11eb-a2b0-fa163e8b5b8b/volumes/kubernetes.io~nfs/pv12 --scope -- mount -t nfs master:/nfs-server/pv12/ /var/lib/origin/openshift.local.volumes/pods/bfb1e087-d268-11eb-a2b0-fa163e8b5b8b/volumes/kubernetes.io~nfs/pv12
      Output: Running scope as unit run-54941.scope.
      mount.nfs: Failed to resolve server master: Name or service not known
      mount.nfs: Operation already in progress
      (see https://serverfault.com/questions/776114/mount-nfs-failed-to-resolve-server)
      

      Solution: add the master node IP to /etc/hosts on node1 and node2:

      10.0.207.143 master
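
      A quick check on each node could confirm the fix (commands assumed; the export path is taken from the mount error above):

      getent hosts master                                          # "master" now resolves via /etc/hosts
      showmount -e master                                          # list NFS exports served by the master
      mount -t nfs master:/nfs-server/pv12/ /mnt && umount /mnt    # one-off test mount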

              Assignee: dnochevn Dmytro Nochevnov
              Reporter: dnochevn Dmytro Nochevnov