Compute Service State down

Not sure where I may have gone wrong during the Nova installation. The compute services are enabled, but the state for all of them is down. I will probably need to revert to the snapshot I took before starting the Nova installation, but I'm hoping for some pointers on what I may have messed up.
root@controller:~# openstack compute service list
+----+------------------+------------+----------+---------+-------+------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At |
+----+------------------+------------+----------+---------+-------+------------+
| 3  | nova-scheduler   | controller | internal | enabled | down  | None       |
| 4  | nova-consoleauth | controller | internal | enabled | down  | None       |
| 5  | nova-conductor   | controller | internal | enabled | down  | None       |
+----+------------------+------------+----------+---------+-------+------------+


  • Amy M
    11-20-2018

    Shane, can you restart, say, nova-scheduler with 'service nova-scheduler restart' and check both /var/log/syslog and /var/log/nova/nova-scheduler.log? You should get more information there to narrow down the misconfiguration.
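
    Something like this should do it, assuming the Ubuntu packaging and default log locations from the course:

    service nova-scheduler restart
    tail -n 50 /var/log/nova/nova-scheduler.log
    grep nova-scheduler /var/log/syslog | tail -n 20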

  • Shane H
    11-20-2018

    The following error is common to the logs of each of the Nova services:

    ERROR oslo_service.service AccessRefused: Exchange.declare: (403) ACCESS_REFUSED - access to exchange 'nova' in vhost '/' refused for user 'openstack'
    I checked the history to see if I got the password correct when setting up the openstack user for RabbitMQ:
    root@controller:~# history | grep rabbitmqctl
    52 rabbitmqctl add_user openstack linuxacademy123
    53 rabbitmqctl set_permissions openstack ".* " ".*" ".*"

    I also checked the transport_url in the nova.conf files on both the controller and compute nodes.
    transport_url = rabbit://openstack:linuxacademy123@controller
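
    For reference, I pulled the line on each node with a quick grep (our nova.conf files are in the default /etc/nova/ location):

    grep ^transport_url /etc/nova/nova.conf
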
    Is there somewhere else I should be looking? Thanks!

  • Amy M
    11-20-2018

    Shane, everything you've provided looks correct. I'm assuming you didn't change the RabbitMQ port, but it might be worth adding it to the transport_url with @controller:5672 to see if that makes a difference (there's an example of the edited line after the commands below). Also, let's verify RabbitMQ is working properly with the following commands:

    rabbitmqctl list_users
    rabbitmqctl authenticate_user openstack linuxacademy123
    rabbitmqctl list_vhosts
    rabbitmqctl list_permissions -p /
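
    With the port added, the transport_url line would look like this (5672 is the RabbitMQ default):

    transport_url = rabbit://openstack:linuxacademy123@controller:5672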

  • Shane H
    11-20-2018

    I've added port 5672 to the transport_url as you suggested in the nova.conf file on both the controller and compute nodes and restarted the services, but they still come up inactive (dead).

    Here are the outputs of the RabbitMQ commands. Not sure what to expect. Should openstack show as [administrator]? The only vhost was /; is that correct?

    root@controller:~# rabbitmqctl list_users
    Listing users ...
    guest [administrator]
    openstack []
    root@controller:~# rabbitmqctl authenticate_user openstack linuxacademy123
    Authenticating user "openstack" ...
    Success
    root@controller:~# rabbitmqctl list_vhosts
    Listing vhosts ...
    /
    root@controller:~# rabbitmqctl list_permissions -p /
    Listing permissions in vhost "/" ...
    guest .* .* .*
    openstack .* .* .*


  • Amy M
    11-20-2018

    RabbitMQ is configured with the correct password and permissions, which is what those commands confirmed. Can you restart nova-scheduler and grab me that section of the syslog and nova-scheduler.log? Also, I noticed nova-compute isn't shown in your service list. Let me see what the logs show on the restart, and then we'll look at the relevant conf file sections for the course. Can you also confirm that cells and placement have been configured? There's a quick check for the cells below.
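
    One quick way to confirm the cells v2 setup is to list the cell mappings; assuming the standard layout from the course, the output should include cell0 and cell1:

    nova-manage cell_v2 list_cells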

  • Shane H
    11-20-2018

    nova-compute was only installed on the compute node. From the compute node, 'service nova-compute status' shows the service is active and running, but it doesn't show up in the compute service list on either the compute or controller node. I checked the cells and placement configurations as much as I know how. Not sure how to verify the nova-manage commands that were run besides looking at the history to see they were executed.

    Here are the syslog and nova-scheduler.log sections right after restarting nova-scheduler. I also added some output from the rabbit@controller.log; the same three messages show up every five seconds.


    syslog

    Nov 20 13:38:08 controller systemd[1]: Stopping OpenStack Compute API...
    Nov 20 13:38:08 controller systemd[1]: Stopped OpenStack Compute API.
    Nov 20 13:38:08 controller systemd[1]: Starting OpenStack Compute API...
    Nov 20 13:38:08 controller systemd[1]: Started OpenStack Compute API.
    Nov 20 13:38:26 controller systemd[1]: Stopped OpenStack Compute Console.
    Nov 20 13:38:26 controller systemd[1]: Starting OpenStack Compute Console...
    Nov 20 13:38:26 controller systemd[1]: Started OpenStack Compute Console.
    Nov 20 13:38:58 controller systemd[1]: Stopped OpenStack Compute Scheduler.
    Nov 20 13:38:58 controller systemd[1]: Starting OpenStack Compute Scheduler...
    Nov 20 13:38:58 controller systemd[1]: Started OpenStack Compute Scheduler.
    Nov 20 13:39:18 controller systemd[1]: Stopped OpenStack Compute Conductor.
    Nov 20 13:39:18 controller systemd[1]: Starting OpenStack Compute Conductor...
    Nov 20 13:39:18 controller systemd[1]: Started OpenStack Compute Conductor.
    Nov 20 13:39:41 controller systemd[1]: Stopping OpenStack Compute novncproxy...
    Nov 20 13:39:41 controller systemd[1]: Stopped OpenStack Compute novncproxy.
    Nov 20 13:39:41 controller systemd[1]: Starting OpenStack Compute novncproxy...
    Nov 20 13:39:41 controller systemd[1]: Started OpenStack Compute novncproxy.
    Nov 20 13:40:24 controller systemd[1]: Stopped OpenStack Compute Scheduler.
    Nov 20 13:40:24 controller systemd[1]: Starting OpenStack Compute Scheduler...
    Nov 20 13:40:24 controller systemd[1]: Started OpenStack Compute Scheduler.
    Nov 20 14:02:59 controller systemd[1]: Stopped OpenStack Compute Scheduler.
    Nov 20 14:02:59 controller systemd[1]: Starting OpenStack Compute Scheduler...
    Nov 20 14:02:59 controller systemd[1]: Started OpenStack Compute Scheduler.

    nova-scheduler.log

    2018-11-20 14:06:01.311 3863 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
    2018-11-20 14:06:01.314 3863 INFO oslo_service.periodic_task [-] Skipping periodic task _discover_hosts_in_cells because its interval is negative
    2018-11-20 14:06:01.701 3863 WARNING oslo_config.cfg [req-3b549e8d-2308-4fec-8fee-3bed48f1d684 - - - - -] Option "firewall_driver" from group "DEFAULT" is deprecated for removal (
    nova-network is deprecated, as are any related configuration options.
    ). Its value may be silently ignored in the future.
    2018-11-20 14:06:01.707 3863 WARNING oslo_config.cfg [req-3b549e8d-2308-4fec-8fee-3bed48f1d684 - - - - -] Option "use_neutron" from group "DEFAULT" is deprecated for removal (
    nova-network is deprecated, as are any related configuration options.
    ). Its value may be silently ignored in the future.
    2018-11-20 14:06:01.726 3863 WARNING oslo_config.cfg [req-3b549e8d-2308-4fec-8fee-3bed48f1d684 - - - - -] Option "enable" from group "cells" is deprecated for removal (Cells v1 is being replaced with Cells v2.). Its value may be silently ignored in the future.
    2018-11-20 14:06:01.739 3863 INFO nova.service [req-28cca200-2b8b-414a-a723-b580d6d79c94 - - - - -] Starting scheduler node (version 16.1.4)
    2018-11-20 14:06:01.785 3863 ERROR oslo.messaging._drivers.impl_rabbit [req-28cca200-2b8b-414a-a723-b580d6d79c94 - - - - -] Failed to declare consumer for topic 'scheduler': Exchange.declare: (403) ACCESS_REFUSED - access to exchange 'nova' in vhost '/' refused for user 'openstack': AccessRefused: Exchange.declare: (403) ACCESS_REFUSED - access to exchange 'nova' in vhost '/' refused for user 'openstack'
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service [req-28cca200-2b8b-414a-a723-b580d6d79c94 - - - - -] Error starting thread.: AccessRefused: Exchange.declare: (403) ACCESS_REFUSED - access to exchange 'nova' in vhost '/' refused for user 'openstack'
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service Traceback (most recent call last):
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 721, in run_service
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service service.start()
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/nova/service.py", line 192, in start
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service self.rpcserver.start()
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 270, in wrapper
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service log_after, timeout_timer)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 190, in run_once
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service post_fn = fn()
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 269, in <lambda>
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service states[state].run_once(lambda: fn(self, *args, **kwargs),
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/server.py", line 416, in start
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service self.listener = self._create_listener()
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 148, in _create_listener
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service return self.transport._listen(self._target, 1, None)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 138, in _listen
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service batch_timeout)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 591, in listen
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service callback=listener)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1126, in declare_topic_consumer
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service self.declare_consumer(consumer)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1028, in declare_consumer
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service error_callback=_connect_error)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 807, in ensure
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service ret, channel = autoretry_method()
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/kombu/connection.py", line 494, in _ensured
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service return fun(*args, **kwargs)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/kombu/connection.py", line 570, in __call__
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service return fun(*args, channel=channels[0], **kwargs), channels[0]
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 796, in execute_method
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service method()
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1016, in _declare_consumer
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service consumer.declare(self)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 304, in declare
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service self.queue.declare()
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/kombu/entity.py", line 604, in declare
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service self._create_exchange(nowait=nowait, channel=channel)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/kombu/entity.py", line 611, in _create_exchange
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service self.exchange.declare(nowait=nowait, channel=channel)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/kombu/entity.py", line 185, in declare
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service nowait=nowait, passive=passive,
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/channel.py", line 630, in exchange_declare
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service wait=None if nowait else spec.Exchange.DeclareOk,
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 73, in send_method
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service return self.wait(wait, returns_tuple=returns_tuple)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 93, in wait
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service self.connection.drain_events(timeout=timeout)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 464, in drain_events
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service return self.blocking_read(timeout)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 469, in blocking_read
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service return self.on_inbound_frame(frame)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/method_framing.py", line 68, in on_frame
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service callback(channel, method_sig, buf, None)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 473, in on_inbound_method
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service method_sig, payload, content,
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 142, in dispatch_method
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service listener(*args)
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service File "/usr/lib/python2.7/dist-packages/amqp/channel.py", line 293, in _on_close
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service reply_code, reply_text, (class_id, method_id), ChannelError,
    2018-11-20 14:06:01.786 3863 ERROR oslo_service.service AccessRefused: Exchange.declare: (403) ACCESS_REFUSED - access to exchange 'nova' in vhost '/' refused for user 'openstack'

    rabbit@controller.log (the same three messages repeat every five seconds)

    =INFO REPORT==== 20-Nov-2018::14:42:42 ===
    accepting AMQP connection <0.23174.8> (10.0.0.31:44976 -> 10.0.0.11:5672)

    =ERROR REPORT==== 20-Nov-2018::14:42:42 ===
    Channel error on connection <0.23174.8> (10.0.0.31:44976 -> 10.0.0.11:5672, vhost: '/', user: 'openstack'), channel 1:
    {amqp_error,access_refused,
    "access to exchange 'reply_6aad28aca71642dd8b75909bc5d85bd1' in vhost '/' refused for user 'openstack'",
    'exchange.declare'}

    =WARNING REPORT==== 20-Nov-2018::14:42:43 ===
    closing AMQP connection <0.23174.8> (10.0.0.31:44976 -> 10.0.0.11:5672):
    connection_closed_abruptly


  • Amy M
    11-21-2018

    Shane, the error would suggest a permissions problem, even though we did verify those. Let's go ahead and reset the permissions with the same command you already used, and also double-check the transport_url line for any bad characters, maybe retyping the password there. Since neither Keystone nor Glance is configured for RabbitMQ yet, our next step may be to uninstall and reinstall RabbitMQ and redo the user and permissions. The reset command is below for reference.
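
    Make sure each permissions argument is exactly ".*" with nothing extra inside the quotes:

    rabbitmqctl set_permissions openstack ".*" ".*" ".*"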

  • Shane H
    11-21-2018

    Resetting the permissions with the same command did the trick. Thanks, Amy!

  • Amy M
    11-21-2018

    Shane, very odd, but glad it worked!

  • Michael H
    11-26-2018

    Congrats!
