Lab 2: VNC connection keeps dropping
After successfully completing Lab 1 and getting the whole environment set up, I moved on to Lab 2 but keep hitting the same issue: when trying to perform the onboarding in the ONAP portal (SDC), the VNC connection suddenly drops and logging in again is no longer possible.
I checked the status with
kubectl get pods --all-namespaces -a
and can see that all the onap-portal related pods are in status "Running", so that looks OK. Then I checked
docker ps | grep vnc
and noticed that it is "unhealthy". I deleted the corresponding pod (e.g. kubectl delete pod -n onap-portal vnc-portal-1252894321-vgt8b); it is automatically recreated and VNC works again for a while. But it keeps crashing.
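A minimal watchdog sketch that automates this delete-on-unhealthy workaround (the app=vnc-portal label and the k8s_vnc-portal container-name prefix are taken from the kubectl/docker outputs later in this thread; the 60-second interval is an arbitrary choice):
#!/bin/bash
# Delete the vnc-portal pod whenever Docker marks its container unhealthy,
# so the ReplicaSet recreates it. Assumes a single vnc-portal pod.
while true; do
  if docker ps --filter "health=unhealthy" | grep -q k8s_vnc-portal; then
    POD=$(kubectl get pods -n onap-portal -l app=vnc-portal -o jsonpath='{.items[0].metadata.name}')
    echo "$(date): $POD unhealthy, deleting it so Kubernetes recreates it"
    kubectl delete pod -n onap-portal "$POD"
  fi
  sleep 60
done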
Any help you could give me to get a stable VNC connection would be appreciated. Since this troubleshooting takes a lot of time, I eventually have to kill the complete instance (in order not to incur too many costs in Google Cloud) and start the deployment from scratch again, which is really annoying.
thanks in advance for your help!
Roberto
Comments
-
We have never faced this issue before. We have two options:
A) Redeploy from scratch -- the Amsterdam release had stability issues, so sometimes this is required
B) Debug it, in which case could you please send the outputs of the following:
1) kubectl describe pods -n onap <vnc-portal-XXX>
2) kubectl logs -n onap <vnc-portal-XXX>
Thanks,
-
I see this same issue happen frequently.
I was able to get around it by killing the vnc-portal pod and letting it restart.
But each time I need to copy the vFW zip files in fresh, so this was cumbersome. Some logs below from when this happened:
aarna@ubuntu:~$ kubectl describe pods -n onap-portal vnc-portal-1252894321-dw8p5
Name: vnc-portal-1252894321-dw8p5
Namespace: onap-portal
Node: ubuntu/172.17.0.1
Start Time: Wed, 15 Aug 2018 14:20:36 +0000
Labels: app=vnc-portal
pod-template-hash=1252894321
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"onap-portal","name":"vnc-portal-1252894321","uid":"641c2a56-a006-11e8-a60f-022496...
Status: Running
IP: 10.42.71.203
Created By: ReplicaSet/vnc-portal-1252894321
Controlled By: ReplicaSet/vnc-portal-1252894321
Init Containers:
vnc-portal-readiness:
Container ID: docker://b5610f8a8961ed2c672932af9de9a97d83c0c29ac77b6405f3d6686f6ccd0672
Image: oomk8s/readiness-check:1.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
Port:
Command:
/root/ready.py
Args:
--container-name
portalapps
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 15 Aug 2018 14:20:40 +0000
Finished: Wed, 15 Aug 2018 14:20:41 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap-portal (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
vnc-pap-readiness:
Container ID: docker://3788d71dbfdbaf2db8aff2a0cdc4e81e4bd8a2c53b15384049bf1ac0064dca26
Image: oomk8s/readiness-check:1.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
Port:
Command:
/root/ready.py
Args:
--container-name
pap
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 15 Aug 2018 14:20:44 +0000
Finished: Wed, 15 Aug 2018 14:20:45 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap-policy
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
vnc-sdc-readiness:
Container ID: docker://832f9540f67cc06df1b31ae2002ccf5a03f06dfc387da77ee03807117ead985f
Image: oomk8s/readiness-check:1.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
Port:
Command:
/root/ready.py
Args:
--container-name
sdc-fe
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 15 Aug 2018 14:20:48 +0000
Finished: Wed, 15 Aug 2018 14:20:49 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap-sdc
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
vnc-vid-readiness:
Container ID: docker://35a8bf5eb464f175c7800e6fa3f27a3193423125cbdd2535489b35082e84a315
Image: oomk8s/readiness-check:1.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
Port:
Command:
/root/ready.py
Args:
--container-name
vid-server
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 15 Aug 2018 14:20:51 +0000
Finished: Wed, 15 Aug 2018 14:20:51 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap-vid
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
vnc-init-hosts:
Container ID: docker://d5d88f68bc540827fa1ec9ae2e380814a0504603becfc94555836c478141b12f
Image: oomk8s/ubuntu-init:1.0.0
Image ID: docker-pullable://oomk8s/ubuntu-init@sha256:3adf3d21ad3b402690f2dda268e97999fe1822d07b27e1bd2b8acc3d1a771d95
Port:
Command:
/bin/sh
-c
Args:
echo `host sdc-be.onap-sdc | awk '{print$4}'` sdc.api.be.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host portalapps.onap-portal | awk '{print$4}'` portal.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host pap.onap-policy | awk '{print$4}'` policy.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host sdc-fe.onap-sdc | awk '{print$4}'` sdc.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host vid-server.onap-vid | awk '{print$4}'` vid.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host sparky-be.onap-aai | awk '{print$4}'` aai.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host cli.onap-cli | awk '{print$4}'` cli.api.simpledemo.onap.org >> /ubuntu-init/hosts
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 15 Aug 2018 14:20:55 +0000
Finished: Wed, 15 Aug 2018 14:20:55 +0000
Ready: True
Restart Count: 0
Environment:
Mounts:
/ubuntu-init/ from ubuntu-init (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
Containers:
vnc-portal:
Container ID: docker://8f9fd368b5639c7f729df446ec5e7f021ba5e731a8acba21e3dbaf136b0f8936
Image: dorowu/ubuntu-desktop-lxde-vnc
Image ID: docker-pullable://dorowu/ubuntu-desktop-lxde-vnc@sha256:f5c6f9a1cae95d176d6957907a0c51640e5390cdee73def315a4ac7c2e26a733
Port:
State: Running
Started: Wed, 15 Aug 2018 14:20:59 +0000
Ready: True
Restart Count: 0
Environment:
VNC_PASSWORD: password
Mounts:
/etc/localtime from localtime (ro)
/root/.init_profile/profiles.ini from vnc-profiles-ini (rw)
/ubuntu-init/ from ubuntu-init (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
localtime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
ubuntu-init:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
vnc-profiles-ini:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vnc-profiles-ini
Optional: false
default-token-d69q4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-d69q4
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
-
aarna@ubuntu:~$ kubectl logs -n onap-portal vnc-portal-1252894321-dw8p5
stored passwd in file: /.password2
2018-08-15 14:21:00,030 CRIT Supervisor running as root (no user in config file)
2018-08-15 14:21:00,031 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2018-08-15 14:21:00,044 INFO RPC interface 'supervisor' initialized
2018-08-15 14:21:00,044 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2018-08-15 14:21:00,044 INFO supervisord started with pid 21
2018-08-15 14:21:01,047 INFO spawned: 'nginx' with pid 36
2018-08-15 14:21:01,050 INFO spawned: 'web' with pid 37
2018-08-15 14:21:01,053 INFO spawned: 'novnc' with pid 38
2018-08-15 14:21:01,055 INFO spawned: 'wm' with pid 39
2018-08-15 14:21:01,058 INFO spawned: 'pcmanfm' with pid 40
2018-08-15 14:21:01,060 INFO spawned: 'lxpanel' with pid 42
2018-08-15 14:21:01,063 INFO spawned: 'xvfb' with pid 43
2018-08-15 14:21:01,066 INFO spawned: 'x11vnc' with pid 45
2018-08-15 14:21:01,086 INFO exited: x11vnc (exit status 1; not expected)
2018-08-15 14:21:01,279 INFO Listening on http://localhost:6079 (run.py:87)
2018-08-15 14:21:02,105 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:21:02,105 INFO success: web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:21:02,105 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:21:02,105 INFO success: wm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:21:02,105 INFO success: pcmanfm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:21:02,105 INFO success: lxpanel entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:21:02,105 INFO success: xvfb entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:21:02,107 INFO spawned: 'x11vnc' with pid 95
2018-08-15 14:21:03,171 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
127.0.0.1 - - [2018-08-15 14:21:30] "GET /api/health HTTP/1.0" 200 141 0.195426
127.0.0.1 - - [2018-08-15 14:22:00] "GET /api/health HTTP/1.0" 200 141 0.176478
127.0.0.1 - - [2018-08-15 14:22:30] "GET /api/health HTTP/1.0" 200 141 0.205411
sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
sending remote command: "qry=dpy_x,dpy_y" via X11VNC_REMOTE X property.
127.0.0.1 - - [2018-08-15 14:22:46] "GET /api/state?video=false&id=-1&w=1004&h=579 HTTP/1.0" 200 301 0.463919
2018-08-15 14:22:46,796 INFO waiting for xvfb to stop
2018-08-15 14:22:46,796 INFO waiting for wm to stop
2018-08-15 14:22:46,796 INFO waiting for pcmanfm to stop
2018-08-15 14:22:46,797 INFO waiting for lxpanel to stop
2018-08-15 14:22:46,797 INFO waiting for x11vnc to stop
2018-08-15 14:22:46,797 INFO waiting for novnc to stop
2018-08-15 14:22:46,799 INFO stopped: novnc (exit status 143)
2018-08-15 14:22:46,811 INFO stopped: x11vnc (exit status 2)
2018-08-15 14:22:46,817 INFO stopped: lxpanel (terminated by SIGTERM)
2018-08-15 14:22:46,822 INFO stopped: xvfb (terminated by SIGKILL)
2018-08-15 14:22:46,826 INFO stopped: wm (exit status 1)
2018-08-15 14:22:46,842 INFO stopped: pcmanfm (exit status 1)
2018-08-15 14:22:47,853 INFO spawned: 'xvfb' with pid 129
2018-08-15 14:22:47,856 INFO spawned: 'wm' with pid 130
2018-08-15 14:22:47,858 INFO spawned: 'pcmanfm' with pid 131
2018-08-15 14:22:47,861 INFO spawned: 'lxpanel' with pid 132
2018-08-15 14:22:47,864 INFO spawned: 'x11vnc' with pid 133
2018-08-15 14:22:47,867 INFO spawned: 'novnc' with pid 134
2018-08-15 14:22:48,908 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:22:48,908 INFO success: wm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:22:48,909 INFO success: pcmanfm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:22:48,909 INFO success: lxpanel entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:22:48,909 INFO success: xvfb entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-08-15 14:22:48,909 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
x:xvfb: stopped
x:wm: stopped
x:pcmanfm: stopped
x:lxpanel: stopped
x:x11vnc: stopped
x:novnc: stopped
x:xvfb: started
x:wm: started
x:pcmanfm: started
x:lxpanel: started
x:x11vnc: started
x:novnc: started
127.0.0.1 - - [2018-08-15 14:22:49] "GET /api/reset?video=false&id=-1&w=1004&h=579 HTTP/1.0" 200 145 2.516023
sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
sending remote command: "qry=dpy_x,dpy_y" via X11VNC_REMOTE X property.
127.0.0.1 - - [2018-08-15 14:22:50] "GET /api/state?video=false&id=1&w=1004&h=579 HTTP/1.0" 200 301 0.794495
sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
127.0.0.1 - - [2018-08-15 14:22:52] "GET /api/state?video=false&id=4&w=1004&h=579 HTTP/1.0" 200 301 0.247857
127.0.0.1 - - [2018-08-15 14:23:00] "GET /api/health HTTP/1.0" 200 141 0.170622
sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
127.0.0.1 - - [2018-08-15 14:23:24] "GET /api/state?video=false&id=5&w=1004&h=579 HTTP/1.0" 200 301 30.289092
127.0.0.1 - - [2018-08-15 14:23:31] "GET /api/health HTTP/1.0" 200 141 0.173056
sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
-
Thanks jnavali for providing the logs and for pointing out the additional effort of copying the .zip files after a new VNC is created; this costs time.
-
I am facing this same issue. The VNC connection is up for only a couple of minutes and is lost again every time after I delete the container and it restarts.
Has any resolution been found for this issue?
-
aarna@ubuntu:~$ docker ps | grep vnc
4202e0c7cb05 dorowu/ubuntu-desktop-lxde-vnc@sha256:25644c1867dcda03a656aeaa2d598ba76157d1141b94d0f08c8b1b990f0f2c64 "/startup.sh" About a minute ago Up About a minute (healthy) k8s_vnc-portal_vnc-portal-1252894321-3fd2g_onap-portal_0296da92-c0ef-11e8-bc81-02ab35370126_0
2484781cec86 gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 minutes ago Up 2 minutes k8s_POD_vnc-portal-1252894321-3fd2g_onap-portal_0296da92-c0ef-11e8-bc81-02ab35370126_0
aarna@ubuntu:~$
aarna@ubuntu:~$
aarna@ubuntu:~$
aarna@ubuntu:~$
aarna@ubuntu:~$ docker ps | grep vnc
4202e0c7cb05 dorowu/ubuntu-desktop-lxde-vnc@sha256:25644c1867dcda03a656aeaa2d598ba76157d1141b94d0f08c8b1b990f0f2c64 "/startup.sh" 8 minutes ago Up 8 minutes (unhealthy) k8s_vnc-portal_vnc-portal-1252894321-3fd2g_onap-portal_0296da92-c0ef-11e8-bc81-02ab35370126_0
2484781cec86 gcr.io/google_containers/pause-amd64:3.0 "/pause" 9 minutes ago Up 9 minutes k8s_POD_vnc-portal-1252894321-3fd2g_onap-portal_0296da92-c0ef-11e8-bc81-02ab35370126_0
-
See my post from 8/14 above.
-
The thing is, I have exhausted the Google Cloud free-tier quota, since the ONAP VM was running for 4-5 days while I was troubleshooting. So now I am unable to log in and provide you the kubectl logs. Since this issue is faced by more than one person, I am hoping you will actively look into it. Also, the other person has provided the relevant logs.
-
The engineering team is looking into this further. It's an ONAP issue; we will see if there is a solution.
-
@sharangn please try this workaround and let us know if it solves the problem.
Step-1
# SIMPLY DELETE AND REDEPLOY ALL SDC CONTAINERS
ONAP_OC=/home/aarna/ONAP_Kubernetes/oom/kubernetes/oneclick/
cd $ONAP_OC
source ./setenv.bash
# DELETE ALL SDC CONTAINERS
./deleteAll.bash -n onap -a sdc
# CREATE ALL SDC CONTAINERS AGAIN
./createAll.bash -n onap -a sdc
# WAIT FOR ALL SDC CONTAINERS TO COME UP
Step-2
ONAP_OC=/home/aarna/ONAP_Kubernetes/oom/kubernetes/oneclick/
cd $ONAP_OC
source ./setenv.bash
# DELETE ALL PORTAL CONTAINERS
./deleteAll.bash -n onap -a portal
# CREATE ALL PORTAL CONTAINERS
./createAll.bash -n onap -a portal
# WAIT FOR ALL PORTAL CONTAINERS TO COME UP
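For the final "wait" step, a minimal polling sketch (the 30-second interval is an arbitrary choice; the onap-portal namespace matches the outputs above):
# Poll until every onap-portal pod reports Running
while kubectl get pods -n onap-portal --no-headers | grep -v Running > /dev/null; do
  echo "waiting for portal pods..."
  sleep 30
done
kubectl get pods -n onap-portal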
-
I tried the workaround as suggested, and it worked temporarily, for about 30 minutes; then the same issue recurred. I have captured the logs as requested. Please let me know if anything else is required from my end.
aarna@ubuntu:~/ONAP_Kubernetes/oom/kubernetes/robot$ kubectl describe pods -n onap-portal vnc-portal-1252894321-r294c
Name: vnc-portal-1252894321-r294c
Namespace: onap-portal
Node: ubuntu/192.168.122.231
Start Time: Mon, 08 Oct 2018 18:35:18 +0000
Labels: app=vnc-portal
pod-template-hash=1252894321
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"onap-portal","name":"vnc-portal-1252894321","uid":"e7c154ad-cb28-11e8-a60c-0206ea...
Status: Running
IP: 10.42.233.138
Created By: ReplicaSet/vnc-portal-1252894321
Controlled By: ReplicaSet/vnc-portal-1252894321
Init Containers:
vnc-portal-readiness:
Container ID: docker://a330440d32cb4877fdeb18b3ecb4b215f0bc005ce4465d78361d6254967f467c
Image: oomk8s/readiness-check:1.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
Port:
Command:
/root/ready.py
Args:
--container-name
portalapps
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 08 Oct 2018 18:35:22 +0000
Finished: Mon, 08 Oct 2018 18:35:52 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap-portal (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
vnc-pap-readiness:
Container ID: docker://a32f8e42a5d74c50976ab6a8e783d1150143493f05be384e28ffac3f6d376d7f
Image: oomk8s/readiness-check:1.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
Port:
Command:
/root/ready.py
Args:
--container-name
pap
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 08 Oct 2018 18:35:54 +0000
Finished: Mon, 08 Oct 2018 18:35:55 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap-policy
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
vnc-sdc-readiness:
Container ID: docker://0bf2fcbb5d31b30c0ef4974ebd375c608f35e361a0366f0afec186d364f4e636
Image: oomk8s/readiness-check:1.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
Port:
Command:
/root/ready.py
Args:
--container-name
sdc-fe
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 08 Oct 2018 18:35:57 +0000
Finished: Mon, 08 Oct 2018 18:35:58 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap-sdc
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
vnc-vid-readiness:
Container ID: docker://68468a6041736636ca07fa1e90a5377709bec0867aed93091d2d624d144b5264
Image: oomk8s/readiness-check:1.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
Port:
Command:
/root/ready.py
Args:
--container-name
vid-server
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 08 Oct 2018 18:35:59 +0000
Finished: Mon, 08 Oct 2018 18:36:00 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap-vid
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
vnc-init-hosts:
Container ID: docker://4b2cc92b00adfa042b72ed216430e10cd914c939ab5c0a71af7bf901e911d57d
Image: oomk8s/ubuntu-init:1.0.0
Image ID: docker-pullable://oomk8s/ubuntu-init@sha256:3adf3d21ad3b402690f2dda268e97999fe1822d07b27e1bd2b8acc3d1a771d95
Port:
Command:
/bin/sh
-c
Args:
echo `host sdc-be.onap-sdc | awk '{print$4}'` sdc.api.be.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host portalapps.onap-portal | awk '{print$4}'` portal.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host pap.onap-policy | awk '{print$4}'` policy.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host sdc-fe.onap-sdc | awk '{print$4}'` sdc.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host vid-server.onap-vid | awk '{print$4}'` vid.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host sparky-be.onap-aai | awk '{print$4}'` aai.api.simpledemo.onap.org >> /ubuntu-init/hosts;
echo `host cli.onap-cli | awk '{print$4}'` cli.api.simpledemo.onap.org >> /ubuntu-init/hosts
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 08 Oct 2018 18:36:02 +0000
Finished: Mon, 08 Oct 2018 18:36:02 +0000
Ready: True
Restart Count: 0
Environment:
Mounts:
/ubuntu-init/ from ubuntu-init (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
Containers:
vnc-portal:
Container ID: docker://332f9f1897d44472f623ca418f00485a2fbe0d726e11a5468c0023a55c38d459
Image: dorowu/ubuntu-desktop-lxde-vnc
Image ID: docker-pullable://dorowu/ubuntu-desktop-lxde-vnc@sha256:25644c1867dcda03a656aeaa2d598ba76157d1141b94d0f08c8b1b990f0f2c64
Port:
State: Running
Started: Mon, 08 Oct 2018 18:36:03 +0000
Ready: True
Restart Count: 0
Environment:
VNC_PASSWORD: password
Mounts:
/etc/localtime from localtime (ro)
/root/.init_profile/profiles.ini from vnc-profiles-ini (rw)
/ubuntu-init/ from ubuntu-init (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
localtime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
ubuntu-init:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
vnc-profiles-ini:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vnc-profiles-ini
Optional: false
default-token-dbgh0:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dbgh0
Optional: false
-
37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-vid-readiness} Normal Pulling pulling image "oomk8s/readiness-check:1.0.0"
37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-vid-readiness} Normal Pulled Successfully pulled image "oomk8s/readiness-check:1.0.0"
37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-vid-readiness} Normal Created Created container
37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-vid-readiness} Normal Started Started container
37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-init-hosts} Normal Pulling pulling image "oomk8s/ubuntu-init:1.0.0"
37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-init-hosts} Normal Pulled Successfully pulled image "oomk8s/ubuntu-init:1.0.0"
37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-init-hosts} Normal Created Created container
37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-init-hosts} Normal Started Started container
37m 37m 1 kubelet, ubuntu spec.containers{vnc-portal} Normal Pulling pulling image "dorowu/ubuntu-desktop-lxde-vnc"
37m 37m 1 kubelet, ubuntu spec.containers{vnc-portal} Normal Pulled Successfully pulled image "dorowu/ubuntu-desktop-lxde-vnc"
37m 37m 1 kubelet, ubuntu spec.containers{vnc-portal} Normal Created Created container
37m 37m 1 kubelet, ubuntu spec.containers{vnc-portal} Normal Started Started container
aarna@ubuntu:~/ONAP_Kubernetes/oom/kubernetes/robot$
aarna@ubuntu:~/ONAP_Kubernetes/oom/kubernetes/robot$
aarna@ubuntu:~/ONAP_Kubernetes/oom/kubernetes/robot$ kubectl logs -n onap vnc-portal-1252894321-r294c
Error from server (NotFound): pods "vnc-portal-1252894321-r294c" not found
aarna@ubuntu:~/ONAP_Kubernetes/oom/kubernetes/robot$ kubectl logs -n onap-portal vnc-portal-1252894321-r294c
stored passwd in file: /.password2
2018-10-08 18:36:04,053 CRIT Supervisor running as root (no user in config file)
2018-10-08 18:36:04,053 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2018-10-08 18:36:04,065 INFO RPC interface 'supervisor' initialized
2018-10-08 18:36:04,066 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2018-10-08 18:36:04,066 INFO supervisord started with pid 19
2018-10-08 18:36:05,069 INFO spawned: 'nginx' with pid 33
2018-10-08 18:36:05,072 INFO spawned: 'web' with pid 34
2018-10-08 18:36:05,074 INFO spawned: 'novnc' with pid 35
2018-10-08 18:36:05,077 INFO spawned: 'wm' with pid 36
2018-10-08 18:36:05,079 INFO spawned: 'pcmanfm' with pid 37
2018-10-08 18:36:05,082 INFO spawned: 'lxpanel' with pid 39
2018-10-08 18:36:05,084 INFO spawned: 'xvfb' with pid 41
2018-10-08 18:36:05,086 INFO spawned: 'x11vnc' with pid 45
2018-10-08 18:36:05,293 INFO Listening on http://localhost:6079 (run.py:87)
2018-10-08 18:36:06,126 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:36:06,126 INFO success: web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:36:06,126 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:36:06,126 INFO success: wm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:36:06,127 INFO success: pcmanfm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:36:06,127 INFO success: lxpanel entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:36:06,127 INFO success: xvfb entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:36:06,127 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
127.0.0.1 - - [2018-10-08 18:36:34] "GET /api/health HTTP/1.1" 200 122 0.175875
sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
sending remote command: "qry=dpy_x,dpy_y" via X11VNC_REMOTE X property.
127.0.0.1 - - [2018-10-08 18:37:00] "GET /api/state?video=false&id=-1&w=1920&h=966 HTTP/1.0" 200 301 0.292369
2018-10-08 18:37:00,548 INFO stopped: x11vnc (exit status 2)
2018-10-08 18:37:00,549 INFO waiting for xvfb to stop
2018-10-08 18:37:00,549 INFO waiting for wm to stop
2018-10-08 18:37:00,549 INFO waiting for pcmanfm to stop
2018-10-08 18:37:00,549 INFO waiting for lxpanel to stop
2018-10-08 18:37:00,549 INFO waiting for novnc to stop
2018-10-08 18:37:00,552 INFO stopped: lxpanel (terminated by SIGTERM)
2018-10-08 18:37:00,553 INFO stopped: novnc (exit status 143)
2018-10-08 18:37:00,553 INFO stopped: xvfb (terminated by SIGKILL)
2018-10-08 18:37:00,554 INFO stopped: wm (exit status 1)
2018-10-08 18:37:01,558 INFO stopped: pcmanfm (exit status 1)
2018-10-08 18:37:01,565 INFO spawned: 'xvfb' with pid 107
2018-10-08 18:37:01,567 INFO spawned: 'wm' with pid 108
2018-10-08 18:37:01,570 INFO spawned: 'pcmanfm' with pid 109
2018-10-08 18:37:01,573 INFO spawned: 'lxpanel' with pid 110
2018-10-08 18:37:01,577 INFO spawned: 'x11vnc' with pid 111
2018-10-08 18:37:01,581 INFO spawned: 'novnc' with pid 112
2018-10-08 18:37:02,623 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:37:02,623 INFO success: wm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:37:02,624 INFO success: pcmanfm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:37:02,624 INFO success: lxpanel entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:37:02,624 INFO success: xvfb entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-10-08 18:37:02,624 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
x:x11vnc: stopped
x:xvfb: stopped
x:wm: stopped
x:pcmanfm: stopped
x:lxpanel: stopped
x:novnc: stopped
x:xvfb: started
x:wm: started
x:pcmanfm: started
x:lxpanel: started
x:x11vnc: started
x:novnc: started
127.0.0.1 - - [2018-10-08 18:37:02] "GET /api/reset?video=false&id=-1&w=1920&h=966 HTTP/1.0" 200 145 2.445096
sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
sending remote command: "qry=dpy_x,dpy_y" via X11VNC_REMOTE X property.
127.0.0.1 - - [2018-10-08 18:37:04] "GET /api/state?video=false&id=1&w=1920&h=966 HTTP/1.0" 200 301 0.263159
127.0.0.1 - - [2018-10-08 18:37:04] "GET /api/health HTTP/1.1" 200 122 0.179925
-
We recommend the workaround below to recover from the VNC crash (already mentioned by Amar earlier).
# SSH into ONAPVM and do the following
ONAP_OC=/home/aarna/ONAP_Kubernetes/oom/kubernetes/oneclick/
cd $ONAP_OC
source ./setenv.bash
# DELETE ALL PORTAL CONTAINERS
./deleteAll.bash -n onap -a portal
# CREATE ALL PORTAL CONTAINERS
./createAll.bash -n onap -a portal
# WAIT FOR ALL PORTAL CONTAINERS TO COME UP
kubectl get pods --all-namespaces | grep portal
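Once the pods are back, a quick check (a sketch, using Docker's health filter) that the recreated vnc container reports healthy again:
# The container should show (healthy), not (unhealthy), once the desktop is up
docker ps --filter "health=healthy" | grep vnc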
We decided not to spend time on resolving this, since it is a non-issue in the Beijing release (which no longer uses VNC).
We (at Aarna) have the Beijing-based images ready on Google Cloud, and we would like to invite anyone interested in beta testing them. Please let us know, and we will provide access.
-
@srupanagunta
Is there an updated lab guide based on the workflows in Beijing (if they have changed)?
Thanks
Jay
-
Yes, it's in beta. Let me contact you directly.