
Lab2, VNC connection keeps dropping

EDDROPI135 Posts: 7
edited August 8 in LFS263 Class Forum

After successfully completing Lab 1 and setting up the whole environment, I moved on to Lab 2 but keep running into the same issue: when trying to do the onboarding in the ONAP portal (SDC), the VNC connection suddenly drops and logging in again is no longer possible.

I checked the status with 

kubectl get pods --all-namespaces -a

and can see that the onap-portal related pods are all in status "Running", so that looks OK. Then I checked

docker ps | grep vnc

and noticed that the vnc-portal container is reported as "unhealthy". I deleted the corresponding pod (e.g. kubectl delete pod -n onap-portal vnc-portal-1252894321-vgt8b) and it is automatically recreated, and VNC works again for a while. But it keeps crashing.
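
For reference, the commands I repeat each cycle look roughly like this (the pod name is just the current example, your pod suffix will differ):

# delete the unhealthy vnc-portal pod; the ReplicaSet recreates it automatically
kubectl delete pod -n onap-portal vnc-portal-1252894321-vgt8b

# watch until the replacement pod is Running again
kubectl get pods -n onap-portal -w | grep vnc-portal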

Could you give me any help to get a stable VNC connection? Since this troubleshooting takes a lot of time, I eventually have to kill the complete instance (in order not to incur too many costs in Google Cloud) and start the deployment from scratch again, which is really annoying.

Thanks in advance for your help!

Roberto

Comments

  • akapadia Posts: 17

    We have never faced this issue before. We have two options:

    A) Redeploy from scratch -- the Amsterdam release had stability issues, so sometimes this is required.

    B) Debug it; in that case, could you please send the outputs of the following:

    1) kubectl describe pods -n onap-portal <vnc-portal-XXX>


    2) kubectl logs -n onap-portal <vnc-portal-XXX>


     


    Thanks,

  • jnavali Posts: 12

    I see this same issue happen frequently.
    I was able to get around it by killing the vnc-portal pod and letting it restart.
    But each time I need to copy the vFW zip files over again, which is cumbersome.
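
    For what it's worth, the re-copy step each time looks roughly like the following (pod name, file name and destination path are only illustrative):

    # copy the vFW zip into the freshly created vnc-portal container (paths are examples)
    kubectl cp ./vfw_demo.zip onap-portal/vnc-portal-1252894321-dw8p5:/root/vfw_demo.zip -c vnc-portal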

    Some logs from when this happened are below:
    [email protected]:~$ kubectl describe pods -n onap-portal vnc-portal-1252894321-dw8p5
    Name: vnc-portal-1252894321-dw8p5
    Namespace: onap-portal
    Node: ubuntu/172.17.0.1
    Start Time: Wed, 15 Aug 2018 14:20:36 +0000
    Labels: app=vnc-portal
    pod-template-hash=1252894321
    Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"onap-portal","name":"vnc-portal-1252894321","uid":"641c2a56-a006-11e8-a60f-022496...
    Status: Running
    IP: 10.42.71.203
    Created By: ReplicaSet/vnc-portal-1252894321
    Controlled By: ReplicaSet/vnc-portal-1252894321
    Init Containers:
    vnc-portal-readiness:
    Container ID: docker://b5610f8a8961ed2c672932af9de9a97d83c0c29ac77b6405f3d6686f6ccd0672
    Image: oomk8s/readiness-check:1.0.0
    Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
    Port:
    Command:
    /root/ready.py
    Args:
    --container-name
    portalapps
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Wed, 15 Aug 2018 14:20:40 +0000
    Finished: Wed, 15 Aug 2018 14:20:41 +0000
    Ready: True
    Restart Count: 0
    Environment:
    NAMESPACE: onap-portal (v1:metadata.namespace)
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
    vnc-pap-readiness:
    Container ID: docker://3788d71dbfdbaf2db8aff2a0cdc4e81e4bd8a2c53b15384049bf1ac0064dca26
    Image: oomk8s/readiness-check:1.0.0
    Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
    Port:
    Command:
    /root/ready.py
    Args:
    --container-name
    pap
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Wed, 15 Aug 2018 14:20:44 +0000
    Finished: Wed, 15 Aug 2018 14:20:45 +0000
    Ready: True
    Restart Count: 0
    Environment:
    NAMESPACE: onap-policy
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
    vnc-sdc-readiness:
    Container ID: docker://832f9540f67cc06df1b31ae2002ccf5a03f06dfc387da77ee03807117ead985f
    Image: oomk8s/readiness-check:1.0.0
    Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
    Port:
    Command:
    /root/ready.py
    Args:
    --container-name
    sdc-fe
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Wed, 15 Aug 2018 14:20:48 +0000
    Finished: Wed, 15 Aug 2018 14:20:49 +0000
    Ready: True
    Restart Count: 0
    Environment:
    NAMESPACE: onap-sdc
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
    vnc-vid-readiness:
    Container ID: docker://35a8bf5eb464f175c7800e6fa3f27a3193423125cbdd2535489b35082e84a315
    Image: oomk8s/readiness-check:1.0.0
    Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
    Port:
    Command:
    /root/ready.py
    Args:
    --container-name
    vid-server
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Wed, 15 Aug 2018 14:20:51 +0000
    Finished: Wed, 15 Aug 2018 14:20:51 +0000
    Ready: True
    Restart Count: 0
    Environment:
    NAMESPACE: onap-vid
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
    vnc-init-hosts:
    Container ID: docker://d5d88f68bc540827fa1ec9ae2e380814a0504603becfc94555836c478141b12f
    Image: oomk8s/ubuntu-init:1.0.0
    Image ID: docker-pullable://oomk8s/ubuntu-init@sha256:3adf3d21ad3b402690f2dda268e97999fe1822d07b27e1bd2b8acc3d1a771d95
    Port:
    Command:
    /bin/sh
    -c
    Args:
    echo host sdc-be.onap-sdc | awk '{print$4}' sdc.api.be.simpledemo.onap.org >> /ubuntu-init/hosts; echo host portalapps.onap-portal | awk '{print$4}' portal.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host pap.onap-policy | awk '{print$4}' policy.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host sdc-fe.onap-sdc | awk '{print$4}' sdc.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host vid-server.onap-vid | awk '{print$4}' vid.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host sparky-be.onap-aai | awk '{print$4}' aai.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host cli.onap-cli | awk '{print$4}' cli.api.simpledemo.onap.org >> /ubuntu-init/hosts
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Wed, 15 Aug 2018 14:20:55 +0000
    Finished: Wed, 15 Aug 2018 14:20:55 +0000
    Ready: True
    Restart Count: 0
    Environment:
    Mounts:
    /ubuntu-init/ from ubuntu-init (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
    Containers:
    vnc-portal:
    Container ID: docker://8f9fd368b5639c7f729df446ec5e7f021ba5e731a8acba21e3dbaf136b0f8936
    Image: dorowu/ubuntu-desktop-lxde-vnc
    Image ID: docker-pullable://dorowu/ubuntu-desktop-lxde-vnc@sha256:f5c6f9a1cae95d176d6957907a0c51640e5390cdee73def315a4ac7c2e26a733
    Port:
    State: Running
    Started: Wed, 15 Aug 2018 14:20:59 +0000
    Ready: True
    Restart Count: 0
    Environment:
    VNC_PASSWORD: password
    Mounts:
    /etc/localtime from localtime (ro)
    /root/.init_profile/profiles.ini from vnc-profiles-ini (rw)
    /ubuntu-init/ from ubuntu-init (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-d69q4 (ro)
    Conditions:
    Type Status
    Initialized True
    Ready True
    PodScheduled True
    Volumes:
    localtime:
    Type: HostPath (bare host directory volume)
    Path: /etc/localtime
    ubuntu-init:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    vnc-profiles-ini:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: vnc-profiles-ini
    Optional: false
    default-token-d69q4:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-d69q4
    Optional: false
    QoS Class: BestEffort
    Node-Selectors:
    Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
    node.alpha.kubernetes.io/unreachable:NoExecute for 300s
    Events:

  • jnavali Posts: 12

    [email protected]:~$ kubectl logs -n onap-portal vnc-portal-1252894321-dw8p5
    stored passwd in file: /.password2
    2018-08-15 14:21:00,030 CRIT Supervisor running as root (no user in config file)
    2018-08-15 14:21:00,031 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
    2018-08-15 14:21:00,044 INFO RPC interface 'supervisor' initialized
    2018-08-15 14:21:00,044 CRIT Server 'unix_http_server' running without any HTTP authentication checking
    2018-08-15 14:21:00,044 INFO supervisord started with pid 21
    2018-08-15 14:21:01,047 INFO spawned: 'nginx' with pid 36
    2018-08-15 14:21:01,050 INFO spawned: 'web' with pid 37
    2018-08-15 14:21:01,053 INFO spawned: 'novnc' with pid 38
    2018-08-15 14:21:01,055 INFO spawned: 'wm' with pid 39
    2018-08-15 14:21:01,058 INFO spawned: 'pcmanfm' with pid 40
    2018-08-15 14:21:01,060 INFO spawned: 'lxpanel' with pid 42
    2018-08-15 14:21:01,063 INFO spawned: 'xvfb' with pid 43
    2018-08-15 14:21:01,066 INFO spawned: 'x11vnc' with pid 45
    2018-08-15 14:21:01,086 INFO exited: x11vnc (exit status 1; not expected)
    2018-08-15 14:21:01,279 INFO Listening on http://localhost:6079 (run.py:87)
    2018-08-15 14:21:02,105 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:21:02,105 INFO success: web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:21:02,105 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:21:02,105 INFO success: wm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:21:02,105 INFO success: pcmanfm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:21:02,105 INFO success: lxpanel entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:21:02,105 INFO success: xvfb entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:21:02,107 INFO spawned: 'x11vnc' with pid 95
    2018-08-15 14:21:03,171 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    127.0.0.1 - - [2018-08-15 14:21:30] "GET /api/health HTTP/1.0" 200 141 0.195426
    127.0.0.1 - - [2018-08-15 14:22:00] "GET /api/health HTTP/1.0" 200 141 0.176478
    127.0.0.1 - - [2018-08-15 14:22:30] "GET /api/health HTTP/1.0" 200 141 0.205411

    sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
    sending remote command: "qry=dpy_x,dpy_y" via X11VNC_REMOTE X property.

    127.0.0.1 - - [2018-08-15 14:22:46] "GET /api/state?video=false&id=-1&w=1004&h=579 HTTP/1.0" 200 301 0.463919
    2018-08-15 14:22:46,796 INFO waiting for xvfb to stop
    2018-08-15 14:22:46,796 INFO waiting for wm to stop
    2018-08-15 14:22:46,796 INFO waiting for pcmanfm to stop
    2018-08-15 14:22:46,797 INFO waiting for lxpanel to stop
    2018-08-15 14:22:46,797 INFO waiting for x11vnc to stop
    2018-08-15 14:22:46,797 INFO waiting for novnc to stop
    2018-08-15 14:22:46,799 INFO stopped: novnc (exit status 143)
    2018-08-15 14:22:46,811 INFO stopped: x11vnc (exit status 2)
    2018-08-15 14:22:46,817 INFO stopped: lxpanel (terminated by SIGTERM)
    2018-08-15 14:22:46,822 INFO stopped: xvfb (terminated by SIGKILL)
    2018-08-15 14:22:46,826 INFO stopped: wm (exit status 1)
    2018-08-15 14:22:46,842 INFO stopped: pcmanfm (exit status 1)
    2018-08-15 14:22:47,853 INFO spawned: 'xvfb' with pid 129
    2018-08-15 14:22:47,856 INFO spawned: 'wm' with pid 130
    2018-08-15 14:22:47,858 INFO spawned: 'pcmanfm' with pid 131
    2018-08-15 14:22:47,861 INFO spawned: 'lxpanel' with pid 132
    2018-08-15 14:22:47,864 INFO spawned: 'x11vnc' with pid 133
    2018-08-15 14:22:47,867 INFO spawned: 'novnc' with pid 134
    2018-08-15 14:22:48,908 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:22:48,908 INFO success: wm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:22:48,909 INFO success: pcmanfm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:22:48,909 INFO success: lxpanel entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:22:48,909 INFO success: xvfb entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-08-15 14:22:48,909 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    x:xvfb: stopped
    x:wm: stopped
    x:pcmanfm: stopped
    x:lxpanel: stopped
    x:x11vnc: stopped
    x:novnc: stopped
    x:xvfb: started
    x:wm: started
    x:pcmanfm: started
    x:lxpanel: started
    x:x11vnc: started
    x:novnc: started
    127.0.0.1 - - [2018-08-15 14:22:49] "GET /api/reset?video=false&id=-1&w=1004&h=579 HTTP/1.0" 200 145 2.516023

    sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
    sending remote command: "qry=dpy_x,dpy_y" via X11VNC_REMOTE X property.

    127.0.0.1 - - [2018-08-15 14:22:50] "GET /api/state?video=false&id=1&w=1004&h=579 HTTP/1.0" 200 301 0.794495

    sending remote command: "cmd=fb" via X11VNC_REMOTE X property.

    127.0.0.1 - - [2018-08-15 14:22:52] "GET /api/state?video=false&id=4&w=1004&h=579 HTTP/1.0" 200 301 0.247857
    127.0.0.1 - - [2018-08-15 14:23:00] "GET /api/health HTTP/1.0" 200 141 0.170622

    sending remote command: "cmd=fb" via X11VNC_REMOTE X property.

    127.0.0.1 - - [2018-08-15 14:23:24] "GET /api/state?video=false&id=5&w=1004&h=579 HTTP/1.0" 200 301 30.289092
    127.0.0.1 - - [2018-08-15 14:23:31] "GET /api/health HTTP/1.0" 200 141 0.173056

    sending remote command: "cmd=fb" via X11VNC_REMOTE X property.

  • Thanks, jnavali, for providing the logs and for pointing out the additional effort of copying the .zip files again after a new VNC pod is created; this costs extra time.

  • I am facing this same issue. The VNC connection stays up only for a couple of minutes and is lost again every time I delete the container and it restarts.

    Is any resolution found for this issue?

  • [email protected]:~$ docker ps | grep vnc
    4202e0c7cb05 dorowu/ubuntu-desktop-lxde-vnc@sha256:25644c1867dcda03a656aeaa2d598ba76157d1141b94d0f08c8b1b990f0f2c64 "/startup.sh" About a minute ago Up About a minute (healthy) k8s_vnc-portal_vnc-portal-1252894321-3fd2g_onap-portal_0296da92-c0ef-11e8-bc81-02ab35370126_0
    2484781cec86 gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 minutes ago Up 2 minutes k8s_POD_vnc-portal-1252894321-3fd2g_onap-portal_0296da92-c0ef-11e8-bc81-02ab35370126_0
    [email protected]:~$
    [email protected]:~$
    [email protected]:~$
    [email protected]:~$
    [email protected]:~$ docker ps | grep vnc
    4202e0c7cb05 dorowu/ubuntu-desktop-lxde-vnc@sha256:25644c1867dcda03a656aeaa2d598ba76157d1141b94d0f08c8b1b990f0f2c64 "/startup.sh" 8 minutes ago Up 8 minutes (unhealthy) k8s_vnc-portal_vnc-portal-1252894321-3fd2g_onap-portal_0296da92-c0ef-11e8-bc81-02ab35370126_0
    2484781cec86 gcr.io/google_containers/pause-amd64:3.0 "/pause" 9 minutes ago Up 9 minutes k8s_POD_vnc-portal-1252894321-3fd2g_onap-portal_0296da92-c0ef-11e8-bc81-02ab35370126_0

  • See my post from 8/14 above.

  • The thing is, I have exhausted the Google Cloud free tier quota, since the ONAP VM was running for 4-5 days while I was troubleshooting, so I am now unable to log in and provide the kubectl logs. Since this issue is faced by more than one person, I hope you will actively look into it; the other person has already provided the relevant logs.

  • akapadia Posts: 17

    The engineering team is looking into this further. It's an ONAP issue; we will see if there is a solution.

  • akapadia Posts: 17

    @sharangn please try this workaround and let us know if it solves the problem.

    Step-1

    # JUST SIMPLY DELETE AND REDEPLOY ALL SDC CONTAINERS AGAIN
    ONAP_OC=/home/aarna/ONAP_Kubernetes/oom/kubernetes/oneclick/
    cd $ONAP_OC
    source ./setenv.bash
    
    # DELETE ALL SDC CONTAINERS
    ./deleteAll.bash -n onap -a sdc
    
    # CREATE ALL SDC CONTAINERS again
    ./createAll.bash -n onap -a sdc
    
    # WAIT FOR ALL SDC CONTAINERS TO COME UP
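    # e.g. (suggested check, not part of the original steps): list the SDC pods
    # and re-run until they all show STATUS Running and READY 1/1
    kubectl get pods -n onap-sdc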
    

    Step-2

    ONAP_OC=/home/aarna/ONAP_Kubernetes/oom/kubernetes/oneclick/
    cd $ONAP_OC
    source ./setenv.bash
    
    # DELETE ALL PORTAL CONTAINERS
    ./deleteAll.bash -n onap -a portal
    
    # CREATE ALL PORTAL CONTAINERS
    ./createAll.bash -n onap -a portal
    
    # WAIT FOR ALL PORTAL CONTAINERS TO COME UP
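    # e.g. (illustrative helper loop, not from the lab guide): poll until every
    # portal pod reports STATUS Running (this does not check the READY column)
    while kubectl get pods -n onap-portal --no-headers | grep -qv Running; do sleep 10; done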
    
  • sharangn Posts: 6

    I tried the workaround as suggested, and it worked temporarily, for about 30 minutes. After that, the same issue occurred. I have captured the logs as requested. Please let me know if anything else is required from my end.

    [email protected]:~/ONAP_Kubernetes/oom/kubernetes/robot$ kubectl describe pods -n onap-portal vnc-portal-1252894321-r294c
    Name: vnc-portal-1252894321-r294c
    Namespace: onap-portal
    Node: ubuntu/192.168.122.231
    Start Time: Mon, 08 Oct 2018 18:35:18 +0000
    Labels: app=vnc-portal
    pod-template-hash=1252894321
    Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"onap-portal","name":"vnc-portal-1252894321","uid":"e7c154ad-cb28-11e8-a60c-0206ea...
    Status: Running
    IP: 10.42.233.138
    Created By: ReplicaSet/vnc-portal-1252894321
    Controlled By: ReplicaSet/vnc-portal-1252894321
    Init Containers:
    vnc-portal-readiness:
    Container ID: docker://a330440d32cb4877fdeb18b3ecb4b215f0bc005ce4465d78361d6254967f467c
    Image: oomk8s/readiness-check:1.0.0
    Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
    Port:
    Command:
    /root/ready.py
    Args:
    --container-name
    portalapps
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Mon, 08 Oct 2018 18:35:22 +0000
    Finished: Mon, 08 Oct 2018 18:35:52 +0000
    Ready: True
    Restart Count: 0
    Environment:
    NAMESPACE: onap-portal (v1:metadata.namespace)
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
    vnc-pap-readiness:
    Container ID: docker://a32f8e42a5d74c50976ab6a8e783d1150143493f05be384e28ffac3f6d376d7f
    Image: oomk8s/readiness-check:1.0.0
    Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
    Port:
    Command:
    /root/ready.py
    Args:
    --container-name
    pap
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Mon, 08 Oct 2018 18:35:54 +0000
    Finished: Mon, 08 Oct 2018 18:35:55 +0000
    Ready: True
    Restart Count: 0
    Environment:
    NAMESPACE: onap-policy
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
    vnc-sdc-readiness:
    Container ID: docker://0bf2fcbb5d31b30c0ef4974ebd375c608f35e361a0366f0afec186d364f4e636
    Image: oomk8s/readiness-check:1.0.0
    Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
    Port:
    Command:
    /root/ready.py
    Args:
    --container-name
    sdc-fe
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Mon, 08 Oct 2018 18:35:57 +0000
    Finished: Mon, 08 Oct 2018 18:35:58 +0000
    Ready: True
    Restart Count: 0
    Environment:
    NAMESPACE: onap-sdc
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
    vnc-vid-readiness:
    Container ID: docker://68468a6041736636ca07fa1e90a5377709bec0867aed93091d2d624d144b5264
    Image: oomk8s/readiness-check:1.0.0
    Image ID: docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
    Port:
    Command:
    /root/ready.py
    Args:
    --container-name
    vid-server
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Mon, 08 Oct 2018 18:35:59 +0000
    Finished: Mon, 08 Oct 2018 18:36:00 +0000
    Ready: True
    Restart Count: 0
    Environment:
    NAMESPACE: onap-vid
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
    vnc-init-hosts:
    Container ID: docker://4b2cc92b00adfa042b72ed216430e10cd914c939ab5c0a71af7bf901e911d57d
    Image: oomk8s/ubuntu-init:1.0.0
    Image ID: docker-pullable://oomk8s/ubuntu-init@sha256:3adf3d21ad3b402690f2dda268e97999fe1822d07b27e1bd2b8acc3d1a771d95
    Port:
    Command:
    /bin/sh
    -c
    Args:
    echo host sdc-be.onap-sdc | awk '{print$4}' sdc.api.be.simpledemo.onap.org >> /ubuntu-init/hosts; echo host portalapps.onap-portal | awk '{print$4}' portal.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host pap.onap-policy | awk '{print$4}' policy.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host sdc-fe.onap-sdc | awk '{print$4}' sdc.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host vid-server.onap-vid | awk '{print$4}' vid.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host sparky-be.onap-aai | awk '{print$4}' aai.api.simpledemo.onap.org >> /ubuntu-init/hosts; echo host cli.onap-cli | awk '{print$4}' cli.api.simpledemo.onap.org >> /ubuntu-init/hosts
    State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Mon, 08 Oct 2018 18:36:02 +0000
    Finished: Mon, 08 Oct 2018 18:36:02 +0000
    Ready: True
    Restart Count: 0
    Environment:
    Mounts:
    /ubuntu-init/ from ubuntu-init (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
    Containers:
    vnc-portal:
    Container ID: docker://332f9f1897d44472f623ca418f00485a2fbe0d726e11a5468c0023a55c38d459
    Image: dorowu/ubuntu-desktop-lxde-vnc
    Image ID: docker-pullable://dorowu/ubuntu-desktop-lxde-vnc@sha256:25644c1867dcda03a656aeaa2d598ba76157d1141b94d0f08c8b1b990f0f2c64
    Port:
    State: Running
    Started: Mon, 08 Oct 2018 18:36:03 +0000
    Ready: True
    Restart Count: 0
    Environment:
    VNC_PASSWORD: password
    Mounts:
    /etc/localtime from localtime (ro)
    /root/.init_profile/profiles.ini from vnc-profiles-ini (rw)
    /ubuntu-init/ from ubuntu-init (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-dbgh0 (ro)
    Conditions:
    Type Status
    Initialized True
    Ready True
    PodScheduled True
    Volumes:
    localtime:
    Type: HostPath (bare host directory volume)
    Path: /etc/localtime
    ubuntu-init:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    vnc-profiles-ini:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: vnc-profiles-ini
    Optional: false
    default-token-dbgh0:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-dbgh0
    Optional: false

  • sharangn Posts: 6

    37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-vid-readiness} Normal Pulling pulling image "oomk8s/readiness-check:1.0.0"
    37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-vid-readiness} Normal Pulled Successfully pulled image "oomk8s/readiness-check:1.0.0"
    37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-vid-readiness} Normal Created Created container
    37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-vid-readiness} Normal Started Started container
    37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-init-hosts} Normal Pulling pulling image "oomk8s/ubuntu-init:1.0.0"
    37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-init-hosts} Normal Pulled Successfully pulled image "oomk8s/ubuntu-init:1.0.0"
    37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-init-hosts} Normal Created Created container
    37m 37m 1 kubelet, ubuntu spec.initContainers{vnc-init-hosts} Normal Started Started container
    37m 37m 1 kubelet, ubuntu spec.containers{vnc-portal} Normal Pulling pulling image "dorowu/ubuntu-desktop-lxde-vnc"
    37m 37m 1 kubelet, ubuntu spec.containers{vnc-portal} Normal Pulled Successfully pulled image "dorowu/ubuntu-desktop-lxde-vnc"
    37m 37m 1 kubelet, ubuntu spec.containers{vnc-portal} Normal Created Created container
    37m 37m 1 kubelet, ubuntu spec.containers{vnc-portal} Normal Started Started container
    [email protected]:~/ONAP_Kubernetes/oom/kubernetes/robot$
    [email protected]:~/ONAP_Kubernetes/oom/kubernetes/robot$

    [email protected]:~/ONAP_Kubernetes/oom/kubernetes/robot$ kubectl logs -n onap vnc-portal-1252894321-r294c
    Error from server (NotFound): pods "vnc-portal-1252894321-r294c" not found
    [email protected]:~/ONAP_Kubernetes/oom/kubernetes/robot$ kubectl logs -n onap-portal vnc-portal-1252894321-r294c
    stored passwd in file: /.password2
    2018-10-08 18:36:04,053 CRIT Supervisor running as root (no user in config file)
    2018-10-08 18:36:04,053 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
    2018-10-08 18:36:04,065 INFO RPC interface 'supervisor' initialized
    2018-10-08 18:36:04,066 CRIT Server 'unix_http_server' running without any HTTP authentication checking
    2018-10-08 18:36:04,066 INFO supervisord started with pid 19
    2018-10-08 18:36:05,069 INFO spawned: 'nginx' with pid 33
    2018-10-08 18:36:05,072 INFO spawned: 'web' with pid 34
    2018-10-08 18:36:05,074 INFO spawned: 'novnc' with pid 35
    2018-10-08 18:36:05,077 INFO spawned: 'wm' with pid 36
    2018-10-08 18:36:05,079 INFO spawned: 'pcmanfm' with pid 37
    2018-10-08 18:36:05,082 INFO spawned: 'lxpanel' with pid 39
    2018-10-08 18:36:05,084 INFO spawned: 'xvfb' with pid 41
    2018-10-08 18:36:05,086 INFO spawned: 'x11vnc' with pid 45
    2018-10-08 18:36:05,293 INFO Listening on http://localhost:6079 (run.py:87)
    2018-10-08 18:36:06,126 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:36:06,126 INFO success: web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:36:06,126 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:36:06,126 INFO success: wm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:36:06,127 INFO success: pcmanfm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:36:06,127 INFO success: lxpanel entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:36:06,127 INFO success: xvfb entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:36:06,127 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    127.0.0.1 - - [2018-10-08 18:36:34] "GET /api/health HTTP/1.1" 200 122 0.175875

    sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
    sending remote command: "qry=dpy_x,dpy_y" via X11VNC_REMOTE X property.

    127.0.0.1 - - [2018-10-08 18:37:00] "GET /api/state?video=false&id=-1&w=1920&h=966 HTTP/1.0" 200 301 0.292369
    2018-10-08 18:37:00,548 INFO stopped: x11vnc (exit status 2)
    2018-10-08 18:37:00,549 INFO waiting for xvfb to stop
    2018-10-08 18:37:00,549 INFO waiting for wm to stop
    2018-10-08 18:37:00,549 INFO waiting for pcmanfm to stop
    2018-10-08 18:37:00,549 INFO waiting for lxpanel to stop
    2018-10-08 18:37:00,549 INFO waiting for novnc to stop
    2018-10-08 18:37:00,552 INFO stopped: lxpanel (terminated by SIGTERM)
    2018-10-08 18:37:00,553 INFO stopped: novnc (exit status 143)
    2018-10-08 18:37:00,553 INFO stopped: xvfb (terminated by SIGKILL)
    2018-10-08 18:37:00,554 INFO stopped: wm (exit status 1)
    2018-10-08 18:37:01,558 INFO stopped: pcmanfm (exit status 1)
    2018-10-08 18:37:01,565 INFO spawned: 'xvfb' with pid 107
    2018-10-08 18:37:01,567 INFO spawned: 'wm' with pid 108
    2018-10-08 18:37:01,570 INFO spawned: 'pcmanfm' with pid 109
    2018-10-08 18:37:01,573 INFO spawned: 'lxpanel' with pid 110
    2018-10-08 18:37:01,577 INFO spawned: 'x11vnc' with pid 111
    2018-10-08 18:37:01,581 INFO spawned: 'novnc' with pid 112
    2018-10-08 18:37:02,623 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:37:02,623 INFO success: wm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:37:02,624 INFO success: pcmanfm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:37:02,624 INFO success: lxpanel entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:37:02,624 INFO success: xvfb entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-10-08 18:37:02,624 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    x:x11vnc: stopped
    x:xvfb: stopped
    x:wm: stopped
    x:pcmanfm: stopped
    x:lxpanel: stopped
    x:novnc: stopped
    x:xvfb: started
    x:wm: started
    x:pcmanfm: started
    x:lxpanel: started
    x:x11vnc: started
    x:novnc: started
    127.0.0.1 - - [2018-10-08 18:37:02] "GET /api/reset?video=false&id=-1&w=1920&h=966 HTTP/1.0" 200 145 2.445096

    sending remote command: "cmd=fb" via X11VNC_REMOTE X property.
    sending remote command: "qry=dpy_x,dpy_y" via X11VNC_REMOTE X property.

    127.0.0.1 - - [2018-10-08 18:37:04] "GET /api/state?video=false&id=1&w=1920&h=966 HTTP/1.0" 200 301 0.263159
    127.0.0.1 - - [2018-10-08 18:37:04] "GET /api/health HTTP/1.1" 200 122 0.179925

  • sharangn Posts: 6

    Logs attached

  • We recommend the following workaround to recover from the VNC crash (already mentioned by Amar earlier).

    # SSH into ONAPVM and do the following
    ONAP_OC=/home/aarna/ONAP_Kubernetes/oom/kubernetes/oneclick/
    cd $ONAP_OC
    source ./setenv.bash

    # DELETE ALL PORTAL CONTAINERS
    ./deleteAll.bash -n onap -a portal
    
    # CREATE ALL PORTAL CONTAINERS
    ./createAll.bash -n onap -a portal
    
    # WAIT FOR ALL PORTAL CONTAINERS TO COME UP
    kubectl get pods --all-namespaces | grep portal
    

    We decided not to spend time on resolving this, since it is a non-issue in the Beijing release (which no longer uses VNC).

    We (at Aarna) have the Beijing-based images ready on Google Cloud, and we would like to invite anyone interested in beta testing them. Please let us know, and we will provide access.

  • jnavali Posts: 12

    @srupanagunta
    Is there an updated Lab guide based on the workflows in Beijing (if they have changed)?
    Thanks
    Jay
