CloudPlatform 4.x

Management Server shut down alone !!

Hi everybody,

I installed a CentOS virtual machine following this tutorial: http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/latest/qig.html

I have CentOS 6.5 and XenServer 6.2.

The installation finished successfully.

I have web access at http://IP:8080/client.

I'm ready for basic configuration, but after 3 to 5 minutes the VM shuts down by itself.

What's wrong with the virtual machine? Which logs should I look at?

Can you help me?

 

Thanks for your help.

 

Clément.


Clément Mutz, 08 April 2014 - 08:13 AM
16 comments


Just for clarity, the hypervisor running your management VM is different from the hypervisor that you will use as a compute node within your cloud, correct?

 

There are a couple of places you can look to determine why a VM shut down. First I would look at your hypervisor logs to ensure that the hypervisor didn't shut down the VM for some reason. Next, I would look in /var/log on the VM itself. Specifically, look at the "messages" file, or an older rotated one depending on timing. If you run `cd /var/log; ls -ltr messages*` you will get a list of messages files in date order. You can also look at the output of dmesg (maybe piped through less), but you will have to ignore the most recent boot messages, which will be at the end of the output.
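For example, a quick way to check (a sketch only; the exact file names depend on log rotation, and /var/log/xensource.log as the XAPI log location on the XenServer host is an assumption):

# On the VM itself:
ls -ltr /var/log/messages*            # rotated messages files, newest last
grep -i shutdown /var/log/messages*   # any trace of a clean shutdown request
last -x shutdown reboot | head        # recent shutdown/reboot records from wtmp
dmesg | less                          # kernel messages

# On the XenServer host running the VM (path is an assumption):
grep -i clean_shutdown /var/log/xensource.log | tail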

 

Good Luck,

 

--Mike


Michael Little, 08 April 2014 - 15:48

Thanks for your reply, Michael!

 

The VM is running on the same hypervisor. I tried installing CloudStack 4.3 on CentOS, Ubuntu 12.04 and Debian Wheezy, but always with the same result.

 

Here is what I see in the XenServer log when my VM shuts down:

 


Apr  9 12:30:49 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147363 INET 0.0.0.0:80|session.login_with_password D:c555a40c90f4|backtrace] Raised at xapi_session.ml:384.13-58 -> xapi_session.ml:36.12-17 -> xapi_session.ml:36.67-68 -> server_helpers.ml:79.11-41
Apr  9 12:30:49 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147363 INET 0.0.0.0:80|session.login_with_password D:c555a40c90f4|dispatcher] Server_helpers.exec exception_handler: Got exception SESSION_AUTHENTICATION_FAILED: [ root; Authentication failure ]
Apr  9 12:30:49 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147363 INET 0.0.0.0:80|session.login_with_password D:c555a40c90f4|dispatcher] Raised at string.ml:150.25-34 -> stringext.ml:108.13-29
Apr  9 12:30:49 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147363 INET 0.0.0.0:80|session.login_with_password D:c555a40c90f4|backtrace] Raised at string.ml:150.25-34 -> stringext.ml:108.13-29
Apr  9 12:30:49 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147363 INET 0.0.0.0:80|session.login_with_password D:c555a40c90f4|xapi] Raised at server_helpers.ml:94.14-15 -> pervasiveext.ml:22.2-9
Apr  9 12:30:49 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147363 INET 0.0.0.0:80|session.login_with_password D:c555a40c90f4|xapi] Raised at pervasiveext.ml:26.22-25 -> pervasiveext.ml:22.2-9
Apr  9 12:30:49 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147363 INET 0.0.0.0:80|dispatch:session.login_with_password D:1b3e3e87f094|xapi] Raised at pervasiveext.ml:26.22-25 -> pervasiveext.ml:22.2-9
Apr  9 12:30:49 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147363 INET 0.0.0.0:80|dispatch:session.login_with_password D:1b3e3e87f094|backtrace] Raised at pervasiveext.ml:26.22-25 -> server_helpers.ml:140.10-106 -> server.ml:441.23-187 -> server_helpers.ml:119.4-7
Apr  9 12:30:54 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147364 INET 0.0.0.0:80|session.slave_local_login_with_password D:4c9f7120c126|xapi] Add session to local storage
Apr  9 12:30:54 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147364 INET 0.0.0.0:80|host.call_plugin R:50958afa844b|audit] Host.call_plugin host = '9f60c2e7-bbd3-4fa0-91dc-2894ce1f220b (srv-tc1-xshv1)'; plugin = 'echo'; fn = 'main'; args = [  ]
Apr  9 12:30:54 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147364 INET 0.0.0.0:80|host.call_plugin R:1e2390fa8359|audit] Host.call_plugin host = '9f60c2e7-bbd3-4fa0-91dc-2894ce1f220b (srv-tc1-xshv1)'; plugin = 'echo'; fn = 'main'; args = [  ]
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147364 INET 0.0.0.0:80|host.call_plugin R:4be7b145c51a|audit] Host.call_plugin host = '9f60c2e7-bbd3-4fa0-91dc-2894ce1f220b (srv-tc1-xshv1)'; plugin = 'echo'; fn = 'main'; args = [  ]
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147364 INET 0.0.0.0:80|dispatch:VM.set_affinity D:2bae341dfbcc|api_effect] VM.set_affinity
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [ info|srv-tc1-xshv1|147365|Async.VM.clean_shutdown R:432416c051a8|dispatcher] spawning a new thread to handle the current task (trackid=6a5759a631c53e6783c492d0f3271e15)
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147365|Async.VM.clean_shutdown R:432416c051a8|audit] VM.clean_shutdown: VM = 'a073ed70-dcd4-18b3-a838-65c1ddb1c2df (Ubuntu Precise Pangolin 12.04 (64-bit) (1))'
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [ info|srv-tc1-xshv1|147365|Async.VM.clean_shutdown R:432416c051a8|xenops] xenops: VM.shutdown a073ed70-dcd4-18b3-a838-65c1ddb1c2df
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147365|Async.VM.clean_shutdown R:432416c051a8|mscgen] xapi=>xenops [label="VM.shutdown"];
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|9247|queue|xenops] Queue.push ["VM_poweroff", ["a073ed70-dcd4-18b3-a838-65c1ddb1c2df", [1200.000000]]] onto a073ed70-dcd4-18b3-a838-65c1ddb1c2df:[  ]
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8||xenops] Queue.pop returned ["VM_poweroff", ["a073ed70-dcd4-18b3-a838-65c1ddb1c2df", [1200.000000]]]
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] Task 1971 reference Async.VM.clean_shutdown R:432416c051a8: ["VM_poweroff", ["a073ed70-dcd4-18b3-a838-65c1ddb1c2df", [1200.000000]]]
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] VM.poweroff a073ed70-dcd4-18b3-a838-65c1ddb1c2df
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147365|Async.VM.clean_shutdown R:432416c051a8|xenops_client] Waiting for task id=1971 to finish
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147365|Async.VM.clean_shutdown R:432416c051a8|mscgen] xapi=>xenops [label="UPDATES.last_id"];
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147365|Async.VM.clean_shutdown R:432416c051a8|mscgen] xapi=>xenops [label="TASK.stat"];
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] Performing: ["VM_hook_script", ["a073ed70-dcd4-18b3-a838-65c1ddb1c2df", "VM_pre_destroy", "clean-shutdown"]]
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] TASK.signal 1971 = ["Pending", 0.066667]
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] Performing: ["VM_shutdown_domain", ["a073ed70-dcd4-18b3-a838-65c1ddb1c2df", "Halt", 1200.000000]]
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|528|xenops events D:648ebebb4755|xenops] Processing event: ["Task", "1971"]
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|528|xenops events D:648ebebb4755|xenops] xenops event on Task 1971
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|528|xenops events D:648ebebb4755|mscgen] xapi=>xenops [label="TASK.stat"];
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147365|Async.VM.clean_shutdown R:432416c051a8|xenops_client] Calling UPDATES.get Async.VM.clean_shutdown R:432416c051a8 5940 30
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147365|Async.VM.clean_shutdown R:432416c051a8|mscgen] xapi=>xenops [label="UPDATES.get"];

Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|6||xenops] Scheduler sleep until 1397039486 (another 30 seconds)
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] VM = a073ed70-dcd4-18b3-a838-65c1ddb1c2df; domid = 58; Waiting for PV domain to acknowledge shutdown request
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] watch: watching xenstore paths: [ /local/domain/58/control/shutdown; /local/domain/58/tools/xenops/cancel; /local/domain/58/tools/xenops/shutdown ] with timeout 60.000000 seconds
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [ info|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] VM = a073ed70-dcd4-18b3-a838-65c1ddb1c2df; domid = 58; Domain acknowledged shutdown request
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: d8675fd9-5c0b-46dc-8737-bdc546b39cc0
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|6||xenops] Scheduler sleep until 1397039486 (another 30 seconds)
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 83e0f5e7-57bd-7bc1-e38c-072b69244996
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 last message repeated 3 times
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: deadbeef-dead-beef-dead-beef00000005
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 last message repeated 2 times
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: c2fa26d9-a964-0665-bb6a-80f5ade6b3e1
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: e64832d3-71ec-fea9-033b-dec0cff6417c
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 93dbcbaa-8b15-e4fa-68f7-5c3d9fe0f11e
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 6253f497-5c51-887a-b955-66a2bc3b3a7b
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 1af3002d-2f96-1eaa-c618-e42d24a35fa4
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 57223efc-413f-085e-48f2-5fcc8802b1ba
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 19e30d91-fb10-438a-e6e2-5e588a1ec9eb
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 2cc1ff8c-ab44-50aa-4339-cfc8da0f2c3c
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 last message repeated 3 times
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: deadbeef-dead-beef-dead-beef0000001a
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 68030ba0-2129-2496-4142-e80e9bfce5b5
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: deadbeef-dead-beef-dead-beef00000003
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: deadbeef-dead-beef-dead-beef0000001d
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: deadbeef-dead-beef-dead-beef00000020
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: deadbeef-dead-beef-dead-beef00000022

Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 last message repeated 3 times
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 64587021-1e40-5f8b-f2a1-ad6ed3792af1
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: deadbeef-dead-beef-dead-beef00000025
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: eab0874b-ea0e-2a61-ede9-736376e6d4b1
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: deadbeef-dead-beef-dead-beef00000034
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: deadbeef-dead-beef-dead-beef00000035
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 last message repeated 3 times
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on other VM: 203c4a4b-901c-2a46-1a4e-5c3f4a72f7d2
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 last message repeated 2 times
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] EVENT on our VM: a073ed70-dcd4-18b3-a838-65c1ddb1c2df
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|8|Async.VM.clean_shutdown R:432416c051a8|xenops] OTHER EVENT
Apr  9 12:30:55 srv-tc1-xshv1 xenopsd: [debug|srv-tc1-xshv1|6||xenops] Scheduler sleep until 1397039486 (another 30 seconds)
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|529|xapi events D:7a103738eb78|xenops] Event on VM a073ed70-dcd4-18b3-a838-65c1ddb1c2df; resident_here = true
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|529|xapi events D:7a103738eb78|mscgen] xapi=>xenops [label="VM.exists"];
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|529|xapi events D:7a103738eb78|mscgen] xapi=>xapi [label="(XML)"];
Apr  9 12:30:55 srv-tc1-xshv1 xapi: [debug|srv-tc1-xshv1|147366 UNIX /var/xapi/xapi||dummytaskhelper] task dispatch:event.from D:4c70e699f390 created by task D:7a103738eb78

What's wrong with XenServer?

 

Thanks for your help.

 

Clément


Clément Mutz, 09 April 2014 - 10:39 AM

Gert Jensen

Hello Clément,

You are not allowed to run any VM that is not managed by CSP on a hypervisor managed by CSP.

They will be shut down. :)

Just to clarify:

If you install the CSP management VM on a XenServer and then configure CSP to manage that same XenServer, I believe the management server will end up being shut down by CSP itself.

What you should do is:

1 x XenServer with the management VM, or a physical machine with the management server

1 x XenServer which is managed by the management server on the other XenServer

You can also look in the logs on the management server:

/var/log/cloudstack/management/management-server.log
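For example (a sketch; the exact messages to search for vary between CloudStack versions, so the grep pattern below is only a starting point):

tail -f /var/log/cloudstack/management/management-server.log   # watch live while the VM goes down
grep -iE 'shutdown|stop' /var/log/cloudstack/management/management-server.log | tail -50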

 

Kind regards

Gert



Thanks for your reply, Gert.

I don't remember installing CSP. How do you uninstall CSP, if that's possible?

At the moment I already have a management VM on XenServer, which I installed with this tutorial: http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/latest/qig.html

How can I install the management VM on the same XenServer without CSP?

I would like to run the management VM in the same XenServer pool, and you said that is OK. Did I understand correctly?


Clément Mutz, 10 April 2014 - 11:08 AM

Gert Jensen

Hello Clément,

No, that is not what I meant.

Your management server must not be on the XenServer or pool of XenServers that CSP is managing.

So what I meant was:

You have a XenServer or a pool of XenServers; this is managed by CSP.

You have a totally different system, either a physical computer or another XenServer/pool of XenServers; there you install the CSP management server, and it will manage the other XenServer/pool of XenServers.

So if you only have one pool of XenServers, or only one XenServer, you do not have enough servers.

 

:)

 

Kind regards

Gert



Ahhh, OK Gert, I understand :) Thank you!

OK, I successfully started an Ubuntu virtual machine with cloudstack-management on another XenServer pool!

Last question: my network is already separated into multiple subnetworks (SAN network, administration network, VM network, public network, ...).

The XenServer interfaces are configured as bonds (see attached file).

Does the virtual machine need a leg on each XenServer network?

 

Thanks :)

 

Clément


Clément Mutz, 11 April 2014 - 14:05

Gert Jensen
Hello,

I am writing from my phone, so sorry for the typing.

No, the management server only needs access to the XenServer (to manage it and the VMs) and to the Internet so you can browse to it.

Kind regards, Gert
Gert Jensen

I cannot see the attached file. It will take approx. 1 until I am at a computer.
Gert Jensen

Hello Clément,

Did you attach the file?

I cannot see it.

 

Kind regards

Gert

 
Gert Jensen

Hello,

Yes, it seems like it is compatible; I myself would choose shorter names with no spaces.

But what I do not understand is: do your VMs need to connect to your different networks?

Normally when you use CSP it creates a unique VLAN and a router for the Internet, so everything is contained.

If you need to use your other networks (I cannot see why you would), then you create a gateway.

I only use advanced networking myself :), but you can probably also use basic; then you need yet another pool of XenServers.

 

Kind regards

Gert



Hello Gert,

 

Thank you for your reply. So if I want my future VMs to have access to my different networks, I must configure CloudStack with the advanced network type. Hmm, it doesn't work.

I think I am forgetting something.

Here are my network settings on XenServer (I attached the file, Gert ;)).

Is my current XenServer network configuration compatible with CloudStack?

Thank you.

Attached Thumbnails

  • xenserver-11042014.png

Clément Mutz, 14 April 2014 - 08:11 AM

Yes, some VMs must use different networks.

For example:

  • Bond 0+1, Vl60: used to assign public IPs
  • Bond 2+3, Vl30: used for internal traffic (which I access over VPN ;)), tagged interface
  • Bond 2+3, Vl50: used for XenServer management (the XenServer network)
  • Bond 4+5, Vl20: used to attach NFS shares from the NAS to VMs

My gateway is already created on my router. That was your question, no? :)

I'm testing an advanced network configuration in CloudStack.

 


Clément Mutz, 14 April 2014 - 09:50 AM

OK, I tried this configuration (see attached files), if you have enough courage :)

Adding the XenServer host failed.

 

less /var/log/cloudstack/management/management-server.log

 


2014-04-14 12:37:12,391 DEBUG [c.c.a.ApiServlet] (catalina-exec-6:ctx-588bb89f) ===START===  10.254.11.10 -- POST  command=addHost&response=json&sessionkey=rkEtmO3oGYfJSwGwuHeuhha5vNY%3D

2014-04-14 12:37:12,399 INFO  [c.c.r.ResourceManagerImpl] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Trying to add a new host at http://10.254.50.1 in data center 6
2014-04-14 12:37:12,437 DEBUG [c.c.h.x.r.XenServerConnectionPool] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Slave logon to 10.254.50.1
2014-04-14 12:37:12,442 DEBUG [c.c.h.x.r.XenServerConnectionPool] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Logging on as the master to 10.254.50.1
2014-04-14 12:37:12,498 INFO  [c.c.h.x.d.XcpServerDiscoverer] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Found host srv-tc1-xshv1 ip=10.254.50.1 product version=6.2.0
2014-04-14 12:37:12,656 DEBUG [c.c.h.x.r.CitrixResourceBase] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Management network is on pif=65d2cbd9-03d2-6ab5-83f5-6df0fd72b319
2014-04-14 12:37:12,659 WARN  [c.c.h.x.r.CitrixResourceBase] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Unable to find storage network SAN Management Bond for host 10.254.50.1
2014-04-14 12:37:12,659 WARN  [c.c.h.x.r.CitrixResourceBase] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Unable to get host information for 10.254.50.1
java.lang.IllegalArgumentException: Unable to find storage network SAN Management Bond for host 10.254.50.1
        at com.cloud.hypervisor.xen.resource.CitrixResourceBase.getHostInfo(CitrixResourceBase.java:4834)
        at com.cloud.hypervisor.xen.resource.CitrixResourceBase.initialize(CitrixResourceBase.java:4975)
        at com.cloud.hypervisor.xen.resource.XenServer56Resource.initialize(XenServer56Resource.java:279)
        at com.cloud.resource.ResourceManagerImpl.createHostAndAgentDeferred(ResourceManagerImpl.java:1770)
        at com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:760)
        at com.cloud.resource.ResourceManagerImpl.discoverHosts(ResourceManagerImpl.java:575)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:622)
        at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
        at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
        at com.sun.proxy.$Proxy145.discoverHosts(Unknown Source)
        at org.apache.cloudstack.api.command.admin.host.AddHostCmd.execute(AddHostCmd.java:143)
        at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:161)
        at com.cloud.api.ApiServer.queueCommand(ApiServer.java:531)
        at com.cloud.api.ApiServer.handleRequest(ApiServer.java:374)
        at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:323)
        at com.cloud.api.ApiServlet.access$000(ApiServlet.java:53)
        at com.cloud.api.ApiServlet$1.run(ApiServlet.java:115)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
        at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:112)
        at com.cloud.api.ApiServlet.doPost(ApiServlet.java:79)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:615)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
        at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
        at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:701)
2014-04-14 12:37:12,661 WARN  [c.c.h.x.r.CitrixResourceBase] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Unable to get host information for 10.254.50.1
2014-04-14 12:37:12,661 INFO  [c.c.r.ResourceManagerImpl] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Unable to fully initialize the agent because no StartupCommands are returned
2014-04-14 12:37:12,661 INFO  [c.c.r.ResourceManagerImpl] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) server resources successfully discovered by XCP Agent
2014-04-14 12:37:12,661 INFO  [c.c.a.ApiServer] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) Failed to add host
2014-04-14 12:37:12,662 DEBUG [c.c.a.ApiServlet] (catalina-exec-6:ctx-588bb89f ctx-d21ff36a) ===END===  10.254.11.10 -- POST  command=addHost&response=json&sessionkey=rkEtmO3oGYfJSwGwuHeuhha5vNY%3D
2014-04-14 12:37:15,088 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-e32dbf32) Resetting hosts suitable for reconnect
2014-04-14 12:37:15,090 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-e32dbf32) Completed resetting hosts suitable for reconnect
2014-04-14 12:37:15,090 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-e32dbf32) Acquiring hosts for clusters already owned by this management server
2014-04-14 12:37:15,090 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-e32dbf32) Completed acquiring hosts for clusters already owned by this management server
2014-04-14 12:37:15,090 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-e32dbf32) Acquiring hosts for clusters not owned by any management server
2014-04-14 12:37:15,091 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-e32dbf32) Completed acquiring hosts for clusters not owned by any management server
2014-04-14 12:37:30,901 DEBUG [c.c.c.ConsoleProxyManagerImpl] (consoleproxy-1:ctx-8ffe4e15) Skip capacity scan due to there is no Primary Storage UPintenance mode
2014-04-14 12:37:34,772 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-8174cff7) StorageCollector is running...
2014-04-14 12:37:36,922 DEBUG [c.c.s.StatsCollector] (StatsCollector-2:ctx-7c037343) VmStatsCollector is running...
2014-04-14 12:37:39,661 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-0257016b) Found 0 routers to update status. 
2014-04-14 12:37:39,662 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-0257016b) Found 0 networks to update RvR status. 
2014-04-14 12:37:52,409 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-d4e129a1) HostStatsCollector is running...
2014-04-14 12:38:00,902 DEBUG [c.c.c.ConsoleProxyManagerImpl] (consoleproxy-1:ctx-f49813ff) Skip capacity scan due to there is no Primary Storage UPintenance mode
2014-04-14 12:38:09,661 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-fb91307a) Found 0 routers to update status. 
2014-04-14 12:38:09,662 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-fb91307a) Found 0 networks to update RvR status. 
2014-04-14 12:38:30,901 DEBUG [c.c.c.ConsoleProxyManagerImpl] (consoleproxy-1:ctx-feb9e205) Skip capacity scan due to there is no Primary Storage UPintenance mode
2014-04-14 12:38:34,775 DEBUG [c.c.s.StatsCollector] (StatsCollector-1:ctx-73b2a978) StorageCollector is running...
2014-04-14 12:38:36,924 DEBUG [c.c.s.StatsCollector] (StatsCollector-1:ctx-2856a7f6) VmStatsCollector is running...
2014-04-14 12:38:39,661 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-abf718f2) Found 0 routers to update status. 
2014-04-14 12:38:39,662 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-abf718f2) Found 0 networks to update RvR status. 
2014-04-14 12:38:45,088 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-0fc87aa8) Resetting hosts suitable for reconnect
2014-04-14 12:38:45,089 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-0fc87aa8) Completed resetting hosts suitable for reconnect
2014-04-14 12:38:45,089 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-0fc87aa8) Acquiring hosts for clusters already owned by this management server
2014-04-14 12:38:45,090 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-0fc87aa8) Completed acquiring hosts for clusters already owned by this management server
2014-04-14 12:38:45,090 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-0fc87aa8) Acquiring hosts for clusters not owned by any management server
2014-04-14 12:38:45,090 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-0fc87aa8) Completed acquiring hosts for clusters not owned by any management server
2014-04-14 12:38:52,411 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-03533554) HostStatsCollector is running...
2014-04-14 12:39:00,902 DEBUG [c.c.c.ConsoleProxyManagerImpl] (consoleproxy-1:ctx-cb68103b) Skip capacity scan due to there is no Primary Storage UPintenance mode
2014-04-14 12:39:09,580 DEBUG [c.c.n.ExternalDeviceUsageManagerImpl] (ExternalNetworkMonitor-1:ctx-896a4958) External devices stats collector is running...
2014-04-14 12:39:09,613 INFO  [c.c.h.HighAvailabilityManagerImpl] (HA-4:ctx-24833172) checking health of usage server
2014-04-14 12:39:09,615 DEBUG [c.c.h.HighAvailabilityManagerImpl] (HA-4:ctx-24833172) usage server running? false, heartbeat: null
2014-04-14 12:39:09,615 WARN  [o.a.c.alerts] (HA-4:ctx-24833172)  alertType:: 13 // dataCenterId:: 0 // podId:: 0 // clusterId:: null // message:: No usage server process running
2014-04-14 12:39:09,616 DEBUG [c.c.a.AlertManagerImpl] (HA-4:ctx-24833172) Have already sent: 1 emails for alert type '13' -- skipping send email
2014-04-14 12:39:09,659 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterMonitor-1:ctx-185cc4c0) Found 0 running routers. 
2014-04-14 12:39:09,661 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-05b7fb00) Found 0 routers to update status. 
2014-04-14 12:39:09,662 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-05b7fb00) Found 0 networks to update RvR status. 
2014-04-14 12:39:09,663 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (SnapshotPollTask:ctx-3e7f3982) Snapshot scheduler.poll is being called at 2014-04-14 10:39:09 GMT
2014-04-14 12:39:09,664 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (SnapshotPollTask:ctx-3e7f3982) Got 0 snapshots to be executed at 2014-04-14 10:39:09 GMT
2014-04-14 12:39:19,592 DEBUG [c.c.n.l.LBHealthCheckManagerImpl] (LBHealthCheck-1:ctx-1e3d9334) LB HealthCheck Manager is running and getting the updates from LB providers and updating service status
2014-04-14 12:39:19,629 DEBUG [c.c.n.l.LBHealthCheckManagerImpl] (LBHealthCheck-1:ctx-1e3d9334) LB HealthCheck Manager is running and getting the updates from LB providers and updating service status
2014-04-14 12:39:30,902 DEBUG [c.c.c.ConsoleProxyManagerImpl] (consoleproxy-1:ctx-3063d844) Skip capacity scan due to there is no Primary Storage UPintenance mode
2014-04-14 12:39:34,777 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-d6c9a618) StorageCollector is running...
2014-04-14 12:39:36,926 DEBUG [c.c.s.StatsCollector] (StatsCollector-2:ctx-f509fb98) VmStatsCollector is running...
2014-04-14 12:39:39,661 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-810d7e64) Found 0 routers to update status. 
2014-04-14 12:39:39,662 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-810d7e64) Found 0 networks to update RvR status. 

 

You can see: "Unable to find storage network SAN Management Bond for host 10.254.50.1".

 

Afterwards I tried to add primary storage, but I need to have a host first.

What's wrong with my configuration?

 

Thank you,

 

Clément.

Attached Thumbnails

  • 01-cloudstack.PNG
  • 02-cloudstack.PNG
  • 03-cloudstack.PNG
  • 04-cloudstack.PNG
  • 05-cloudstack.PNG
  • 06-cloudstack.PNG
  • 07-cloudstack.PNG
  • 08-cloudstack.PNG
  • 09-cloudstack.PNG
  • 10-cloudstack.PNG

Clément Mutz, 14 April 2014 - 10:44 AM

Gert Jensen

Hello Clément,

Just to verify: the issue of the management server shutting down is solved?

You should probably start a different thread regarding the storage on XenServer. :)

That being said...

Does your storage bond have an IP in 10.254.50.x?

Can you ping 10.254.50.1 from the XenServer hosts, both of them?

Can you try to attach the NFS directly? After this is done successfully you can detach it again.

In my experience it usually comes down to three things: permissions, routing, or the NFS version.
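For example, a quick test from the XenServer host console (a sketch; 10.254.20.10 and /export/primary are placeholders for your NAS address and export path, and showmount may not be installed in dom0):

ping -c 3 10.254.20.10                                   # reachability on the storage network
showmount -e 10.254.20.10                                # list the exports offered by the NAS (if installed)
mkdir -p /tmp/nfstest
mount -t nfs 10.254.20.10:/export/primary /tmp/nfstest   # the mount CloudStack will attempt
ls /tmp/nfstest && umount /tmp/nfstest                   # verify, then clean up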

 

Kind regards

Gert



When configuring CCP networks, remember that the "Physical Network Name" is internal to CCP, so you could use "cloud-storage", for example. The "Label", on the other hand, must match the name defined in XenServer (or another hypervisor). I would also recommend avoiding spaces in those labels/names. For example, you could use "Bond4_5-vl20" as the label for the "cloud-storage" physical network.
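For example, to check what XenServer actually calls the networks (a sketch run on the XenServer host; the "Bond4_5-vl20" name and the <network-uuid> placeholder are only illustrations):

# List the network name-labels that the CCP traffic labels must match exactly
xe network-list params=uuid,name-label,bridge

# Optionally rename a network so the label has no spaces
xe network-param-set uuid=<network-uuid> name-label=Bond4_5-vl20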

 

Hope that helps.

 

--Mike


Michael Little, 17 April 2014 - 17:45

Thank you very much!

Sorry for the late answer; I was on another project.

You helped me! With the correct label configuration it worked.

 

Thanks !!

Clément.


Clément Mutz, 02 June 2014 - 12:16