Bug
Resolution: Done
Blocker
DO280 - OSE 3.0 1 20151019
None
6
URL:
Reporter RHNID:
Section: -
Language: en-US (English)
Workaround: Edit /root/DO280/support/solutions/deploy-volume/nfs-setup.sh
change: /root/DO280/labs/deploy-volume/firewall-cmd.txt
to: /root/DO280/labs/deploy-volume/iptables.sh
change: /root/DO280/labs/deploy-volume/firewall-sysconfig.txt
to: /root/DO280/labs/deploy-volume/iptables-sysconfig.txt
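The two substitutions above can be scripted instead of edited by hand. A minimal sketch as a sed filter, using the exact paths from the workaround:

```shell
# The two edits from the workaround, expressed as a sed filter.
# Both source and replacement paths are taken verbatim from the workaround.
fix_paths() {
  sed -e 's|/root/DO280/labs/deploy-volume/firewall-cmd\.txt|/root/DO280/labs/deploy-volume/iptables.sh|g' \
      -e 's|/root/DO280/labs/deploy-volume/firewall-sysconfig\.txt|/root/DO280/labs/deploy-volume/iptables-sysconfig.txt|g'
}

# On the master you would apply it to the script, e.g.:
#   fix_paths < /root/DO280/support/solutions/deploy-volume/nfs-setup.sh > /tmp/nfs-setup.sh
# Demonstration on one offending line:
echo 'sh /root/DO280/labs/deploy-volume/firewall-cmd.txt' | fix_paths
# → sh /root/DO280/labs/deploy-volume/iptables.sh
```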
Now the registryvol NFS share should be accessible from the node (check using mount).
Run /root/DO280/support/solutions/deploy-volume/nfs-setup.sh on the master.
Ignore errors about:
- duplicate export entries
- PV and PVC already existing
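The "ignore errors" guidance can be encoded as a small filter so only unexpected output stands out. A sketch; the match patterns are assumptions based on the two message types named above, not verbatim script output:

```shell
# Classify a line of nfs-setup.sh output as known-benign or not.
# Patterns are guesses at typical exportfs / oc wording for the two
# ignorable errors listed in the workaround.
is_ignorable() {
  case "$1" in
    *duplicate*export*|*"already exists"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage sketch (on the master):
#   bash nfs-setup.sh 2>&1 | while IFS= read -r line; do
#     is_ignorable "$line" || echo "ATTENTION: $line"
#   done
```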
Force new registry pod re-deployment:
[root@master ~]# oc deploy docker-registry --latest
Started deployment #3
Now oc status and oc get pods should show a new registry pod running.
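The new pod can take a while to reach Running, so polling beats eyeballing `oc get pods`. A generic retry helper as a sketch; the `oc` invocation in the comment is an assumed way to check, not taken from the lab:

```shell
# wait_for <attempts> <cmd...>: retry cmd until it succeeds or attempts run out.
wait_for() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep "${WAIT_INTERVAL:-2}"   # override WAIT_INTERVAL to poll faster/slower
  done
}

# On the master (hypothetical check for the new registry pod):
#   wait_for 30 sh -c 'oc get pods | grep -q "docker-registry-3.*Running"'
```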
At the end of the lab, verify that the NFS share is not empty.
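That final check can be scripted rather than done visually. A sketch; /var/export/registryvol is the export path quoted later in this report:

```shell
# Succeed only if the given directory exists and contains at least one entry.
share_nonempty() {
  [ -n "$(ls -A "$1" 2>/dev/null)" ]
}

# On the master:
#   share_nonempty /var/export/registryvol && echo "registry wrote data" \
#     || echo "share still empty - registry is not using the PV"
```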
Description: The GL 6.1 script from "Before you begin" references the wrong files and does not work as expected.
- The instructions refer to the wrong script folder: instead of /root/DO280/support/solutions/deploy-volume/ it should be /root/DO280/support/solutions/deploy-s2i/
- The script is supposed to configure an NFS server on the master and configure the internal registry to use a PV, but running the script yields an error:
[root@master ~]# bash /root/DO280/support/solutions/deploy-volume/nfs-setup.sh
sh: /root/DO280/labs/deploy-volume/firewall-cmd.txt: Arquivo ou diretório não encontrado
(pt-BR locale; the message is "No such file or directory")
End result: export is created:
[root@master ~]# cat /etc/exports
/var/export/registryvol *(rw,sync,all_squash)
But it is not accessible from the node:
[root@node ~]# mount -t nfs master.pod0.example.com:/var/export/registryvol /mnt
mount.nfs: Connection timed out
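A mount.nfs timeout like this usually means the server's NFS port is filtered, which fits the missing firewall-cmd.txt file (the rules it would have loaded were never applied). A quick probe from the node, as a sketch using bash's /dev/tcp redirection; the hostname comes from the transcript above and 2049 is the standard NFS port:

```shell
# Succeed if a TCP connection to host:port can be opened within 2 seconds.
port_open() {
  timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

# From the node:
#   port_open master.pod0.example.com 2049 && echo open || echo "filtered/closed"
```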
PV is created and bound:
[root@master ~]# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
registry-volume deploymentconfig=docker-registry 10737418240 RWX Bound default/registry-pvclaim
But re-deploying the registry to use the PV failed:
[root@master ~]# oc status
In project default
service/docker-registry - 172.30.4.43:5000
dc/docker-registry deploys docker.io/openshift3/ose-docker-registry:v3.0.1.0
#2 deployment failed 16 minutes ago
#1 deployed 2 hours ago - 1 pod
...
The bug can be masked because OSE leaves the old registry pod running; the problem will only be felt if the node host is restarted:
[root@master ~]# oc get pod
NAME READY STATUS RESTARTS AGE
docker-registry-1-cs8jj 1/1 Running 0 2h
docker-registry-2-deploy 0/1 ExitCode:255 0 16m
trainingrouter-1-oyng6 1/1 Running 0 2h
Looking at nfs-setup.sh, it also references a second missing file (firewall-sysconfig.txt), but that reference fails silently.
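Since the script references missing files in at least two places, a one-shot scan catches all of them instead of discovering them one failure at a time. A sketch; the path pattern is an assumption matching the lab paths quoted in this report:

```shell
# Print every /root/DO280/labs/deploy-volume/ path referenced in a script
# that does not exist on disk.
list_missing() {
  grep -oE '/root/DO280/labs/deploy-volume/[^" ]+' "$1" | sort -u |
  while IFS= read -r f; do
    [ -e "$f" ] || echo "missing: $f"
  done
}

# On the master:
#   list_missing /root/DO280/support/solutions/deploy-volume/nfs-setup.sh
```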