Manual openstack installation


Reference

  - http://docs.openstack.org/trunk/openstack-compute/install/yum/content/installing-ntp.html
  - http://docs.openstack.org/trunk/openstack-compute/admin/content/ch_getting-started-with-openstack.html


Important note

As indicated towards the end of this article, the cirros and tty-linux instances created using these installation steps were not responding to network requests. Further, their graphical access through novnc or directly via virt-viewer indicated that they might have frozen during booting. Hence, successful deployment of openstack by following these steps is not guaranteed.


Basic setup of base machine

Configure ntp

Assuming an NTP server is available for use by all base machines, use the following steps to configure NTP on the base machine:

  1. Edit the '/etc/ntp.conf' and '/etc/ntp/step-tickers' files and add the location of a time server, such as time.iiit.ac.in, at the appropriate places (a sketch of this edit is shown after this list)
  2. Use 'service ntpdate start' and 'chkconfig ntpdate on'
  3. Optionally a file can be created in '/etc/cron.hourly' with following contents:
#!/bin/bash
# Sync the system clock against the time server and save it to the hardware clock
ntpdate -b <time-server>
hwclock -w
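
A minimal sketch of the edit from step 1, assuming time.iiit.ac.in is the time server (substitute your own server name):

# Point ntpd and the step-tickers file at the local time server
echo "server time.iiit.ac.in iburst" >> /etc/ntp.conf
echo "time.iiit.ac.in" >> /etc/ntp/step-tickers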


Configure MySQL

Assuming a full installation and the availability of packages such as mysql-server, mysql, MySQL-python, etc., use:

  1. service mysqld start
  2. /usr/bin/mysql_secure_installation
  3. chkconfig mysqld on


Install messaging server

Install a messaging server using:

  1. yum install openstack-utils memcached qpid-cpp-server

Note that this requires adding 'rpc_backend=nova.rpc.impl_qpid' to the '/etc/nova/nova.conf' file. The author forgot about this during the lengthy installation and installed rabbitmq-server later for messaging.
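
If qpid were to be used, a sketch of the extra configuration would be (an assumption based on the note above, not something that was tested here):

# Point nova at the qpid RPC backend and make sure qpidd is running
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.rpc.impl_qpid
service qpidd start
chkconfig qpidd on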


LVM configuration

For openstack an LVM volume group named nova-volumes should be present so that openstack can use it for providing disk images to other hosts using iSCSI. Since the name is configurable, if another LVM volume group is available it can be used during configuration without requiring reinstallation of the OS or renaming of the volume group. However, the volume group should be empty (no logical volumes) and should not be in use.
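
A minimal sketch of creating such a volume group, assuming an unused block device such as /dev/sdb1 is available (the device name is an assumption; use whatever free device exists on the base machine):

pvcreate /dev/sdb1                # mark the device as an LVM physical volume
vgcreate nova-volumes /dev/sdb1   # create the empty volume group openstack expects
vgdisplay nova-volumes            # verify: should list 0 logical volumes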



Installing and configuring Identity service

Package installation

Install identity service related packages using

  1. yum install openstack-utils openstack-keystone python-keystoneclient


Configure keystone database

Configure keystone database using:

  1. openstack-db --init --service keystone
    This will prompt for the mysql root password. It creates a database named keystone with a user named keystone for accessing the database. The password for the keystone user is also set to keystone.
    The configuration file '/etc/keystone/keystone.conf' will have the following lines:
    [sql]
    connection = mysql://keystone:keystone@localhost/keystone
    which indicates the database URL format to be
    <database-type>://<username>:<password>@<host>/<database-name>


Generate and configure keystone admin token

Generate and configure keystone admin token using:

  1. export ADMIN_TOKEN=$(openssl rand -hex 10)
  2. openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
    This will cause the following configuration line to be added in the '/etc/keystone/keystone.conf' file:
    admin_token = <token>


Start keystone and enable it on start-up

Start keystone and enable it to run automatically on start-up using

  1. service openstack-keystone start
  2. chkconfig openstack-keystone on


Initialize the new keystone database

Initialize the new keystone database using:

  1. keystone-manage db_sync


Export environment variables

Export environment variables using:

  1. export OS_SERVICE_TOKEN=858d06a05a5518d7d359
  2. export OS_SERVICE_ENDPOINT=http://10.4.15.2:35357/v2.0

Here, the token and endpoint port number can be obtained from the '/etc/keystone/keystone.conf' file under the variable names 'admin_token' and 'admin_port'.

Exporting these variables is *necessary* for the steps mentioned in the next sub-sections to work. If these values are not exported as environment variables, they must be specified on the command line using --token and --endpoint, as in the example below.
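
For example, without the exported variables the same kind of request could be made as follows (a sketch using the token and endpoint values shown above):

keystone --token 858d06a05a5518d7d359 \
         --endpoint http://10.4.15.2:35357/v2.0 \
         tenant-list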


Create default tenant and admin user

Create the default tenant and admin user using the following steps (a scripted sketch that captures the generated IDs is shown after this list):

  1. Create default tenant for admin user
    keystone tenant-create --name default --description "Default Tenant"
  2. Create admin user as part of default tenant
    keystone user-create --tenant-id <from-above-output> --name admin --pass iiit123
  3. Create admin role for keystone. Privileges for each role are defined in the '/etc/keystone/policy.json' file.
    keystone role-create --name admin
  4. Grant admin role to admin user with default tenant
    keystone user-role-add --user-id <from-above> --tenant-id <from-above> --role-id <from-above>
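
A scripted sketch of the above steps that captures the generated IDs in shell variables (the awk parsing of the keystone CLI table output is an assumption; verify the column positions against the actual output):

TENANT_ID=$(keystone tenant-create --name default --description "Default Tenant" | awk '/ id / {print $4}')
USER_ID=$(keystone user-create --tenant-id $TENANT_ID --name admin --pass iiit123 | awk '/ id / {print $4}')
ROLE_ID=$(keystone role-create --name admin | awk '/ id / {print $4}')
keystone user-role-add --user-id $USER_ID --tenant-id $TENANT_ID --role-id $ROLE_ID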


Create service tenant

The service tenant contains all services that are part of the service catalog.

  1. Create service tenant
    keystone tenant-create --name service --description "Service Tenant"
  2. Create Glance user in service tenant
    keystone user-create --tenant-id <from-above> --name glance --pass glance
  3. Grant admin role to Glance user in service tenant
    keystone user-role-add --user-id <from-above> --tenant-id <from-above> --role-id <from-previous-subsection>
    Note that one can find the role id using mysql. Log into mysql as keystone or root, select the keystone database with 'use keystone;', and then use 'select * from role;' to see the id of the admin role.
  4. Do same for Nova, EC2 and Object storage services
    keystone user-create --tenant-id <from-above> --name nova --pass nova
    keystone user-role-add --user-id <from-above> --tenant-id <from-above> --role-id <from-before>  # For nova
    keystone user-create --tenant-id <from-above> --name ec2 --pass ec2
    keystone user-role-add --user-id <from-above> --tenant-id <from-above> --role-id <from-before>  # For ec2
    keystone user-create --tenant-id <from-above> --name swift --pass swift
    keystone user-role-add --user-id <from-above> --tenant-id <from-above> --role-id <from-before>  # For swift
    Note that the tenant ID can be seen using 'select * from tenant;' in mysql for the keystone database.
  5. Verify user tenant relationship using
    select user.name,tenant.name from user_tenant_membership,user,tenant where user.id=user_tenant_membership.user_id and tenant.id=user_tenant_membership.tenant_id;
  6. Also verify that all the above users have admin roles in their respective tenants using:
    select * from role;
    select * from metadata;


Configuring use of database for service catalog

Keystone acts as a directory or catalog for services. Other services query keystone to find each other's service end-points (e.g. TCP/IP ports and IP addresses). Although keystone supports use of a template file for the catalog, the template file is not recommended as it is not easy to update. Use of a database for catalog operations, as explained below, is recommended.

For keystone to use database for catalog '/etc/keystone/keystone.conf' file should contain

[catalog]
driver = keystone.catalog.backends.sql.Catalog

That is, comment out the 'template_file' line. Reload the openstack-keystone service using:

service openstack-keystone reload

It was realized later that commenting out the line may not be necessary.
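
Equivalently, the catalog driver can be set with openstack-config instead of editing the file by hand (a sketch; section and option names as shown above):

openstack-config --set /etc/keystone/keystone.conf catalog driver keystone.catalog.backends.sql.Catalog
service openstack-keystone reload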


Basics of service catalog entry

Service catalog entries are created using the 'keystone service-create' and 'keystone endpoint-create' commands. Their syntax is:

keystone service-create --name=<name> --type=<type> --description="<description>"
keystone endpoint-create --region <region> \
 --service-id=<from-above> --publicurl=<public-url> \
 --internalurl=<internal-url> --adminurl=<admin-url>

For example

keystone service-create --name=keystone --type=identity --description="Identity Service"
keystone endpoint-create \
   --region RegionOne \
   --service-id=15c11a23667e427e91bc31335b45f4bd \
   --publicurl=http://192.168.206.130:5000/v2.0 \
   --internalurl=http://192.168.206.130:5000/v2.0 \
   --adminurl=http://192.168.206.130:35357/v2.0

Note that for many services the internal URL and the public URL are the same. Further, a few services use the same URL even for admin purposes.


Create appropriate service catalog entries

Assuming the keystone service is not yet added to the catalog and assuming OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT are exported as shown a few sections earlier, create the important service endpoints using the commands below (a quick verification of the created entries follows them):

keystone service-create --name=keystone --type=identity --description="Identity Service"
keystone endpoint-create \
   --region <region> \
   --service-id=<from-above> \
   --publicurl=http://<ip>:5000/v2.0 \
   --internalurl=http://<ip>:5000/v2.0 \
   --adminurl=http://<ip>:35357/v2.0

keystone service-create --name=nova --type=compute --description="Compute Service"
keystone endpoint-create \
   --region <region> \
   --service-id=<from-above> \
   --publicurl='http://<ip>:8774/v2/%(tenant_id)s' \
   --internalurl='http://<ip>:8774/v2/%(tenant_id)s' \
   --adminurl='http://<ip>:8774/v2/%(tenant_id)s'

keystone service-create --name=volume --type=volume --description="Volume Service"
keystone endpoint-create \
 --region <region> \
 --service-id=<from-above> \
 --publicurl='http://<ip>:8776/v1/%(tenant_id)s' \
 --internalurl='http://<ip>:8776/v1/%(tenant_id)s' \
 --adminurl='http://<ip>:8776/v1/%(tenant_id)s'

keystone service-create --name=glance --type=image --description="Image Service"
keystone endpoint-create \
   --region <region> \
   --service-id=<from-above> \
   --publicurl=http://<ip>:9292 \
   --internalurl=http://<ip>:9292 \
   --adminurl=http://<ip>:9292

keystone service-create --name=ec2 --type=ec2 --description="EC2 Compatibility Layer"
keystone endpoint-create \
   --region <region> \
   --service-id=<from-above> \
   --publicurl=http://<ip>:8773/services/Cloud \
   --internalurl=http://<ip>:8773/services/Cloud \
   --adminurl=http://<ip>:8773/services/Admin

keystone service-create --name=swift --type=object-store --description="Object Storage Service"
keystone endpoint-create \
   --region <region> \
   --service-id=<from-above> \
   --publicurl 'http://<ip>:8888/v1/AUTH_%(tenant_id)s' \
   --adminurl 'http://<ip>:8888/v1' \
   --internalurl 'http://<ip>:8888/v1/AUTH_%(tenant_id)s'
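
After creating the entries, the catalog can be checked quickly (output format may differ slightly between releases):

keystone service-list
keystone endpoint-list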


Debugging keystone usage

Keystone log messages go to the '/var/log/keystone/keystone.log' file. One can also see debug messages by using the '--debug' command line option.


Verifying keystone installation

Keystone installation can be verified using:

  1. Unset environment variables *Important*
    unset OS_SERVICE_TOKEN
    unset OS_SERVICE_ENDPOINT
  2. Check authentication and service end point using:
    keystone --os-username=admin --os-password=iiit123 --os-auth-url=http://<ip>:35357/v2.0 token-get
  3. Check authorization against a tenant using:
    keystone --os-username=admin --os-password=iiit123 --os-tenant-name=default --os-auth-url=http://<ip>:35357/v2.0 token-get
  4. Username, password, auth-url and tenant name can be stored in environment variables. Create a file named 'keystone-vars.sh' using:
    export OS_USERNAME=admin
    export OS_PASSWORD=iiit123
    export OS_TENANT_NAME=default
    export OS_AUTH_URL=http://<ip>:35357/v2.0
  5. Then use 'source keystone-vars.sh' to get values exported in current shell
  6. Check if environment variables are working using 'keystone token-get'
  7. Finally verify that role is admin using 'keystone user-list'


Installing and configuring Image service

Install required packages

Install required packages using:

  1. yum install openstack-nova openstack-glance
  2. If a glance sqlite database is present at '/var/lib/glance' then remove it using:
    rm /var/lib/glance/glance.sqlite


Configure glance database backend

Configure glance database backend using:

  1. Log into MySQL using
    mysql -u root -p
  2. Create glance database using
    CREATE DATABASE glance;
  3. Grant all permissions to glance user on glance database using:
    GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY '<password>';
    GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '<password>';
    flush privileges;
    quit


Configure Glance API

Configure Glance API using:

  1. Edit '/etc/glance/glance-api.conf' file as follows:
    1. Uncomment the 'enable_v2_api = True' line.
    2. Set an appropriate value for image_cache_dir
    3. Modify the last few lines of the file as follows
      [keystone_authtoken]
      auth_host = 127.0.0.1
      auth_port = 35357
      auth_protocol = http
      admin_tenant_name = service
      admin_user = glance
      admin_password = glance
      [paste_deploy]
      config_file = /etc/glance/glance-api-paste.ini
      flavor=keystone
    4. Verify that the value of 'sql_connection' is correct. That is, it uses mysql, the glance username, the glance password and the glance database. The hostname can be configured as localhost or the glance server IP.
  2. Start glance-api service
    service openstack-glance-api start
  3. Enable glance-api service on start-up
    chkconfig openstack-glance-api on


Configure Glance Registry

Configure Glance Registry using

  1. Edit '/etc/glance/glance-registry.conf' file as follows:
    1. Verify that value of sql_connection is correct
    2. Same as before, modify the lines near the end of the file to:
      [keystone_authtoken]
      auth_host = 127.0.0.1
      auth_port = 35357
      auth_protocol = http
      admin_tenant_name = service
      admin_user = glance
      admin_password = glance
      [paste_deploy]
      config_file = /etc/glance/glance-registry-paste.ini
      flavor=keystone
  2. Verify file '/etc/glance/glance-registry-paste.ini' contains
    # Use this pipeline for keystone auth
    [pipeline:glance-registry-keystone]
    pipeline = authtoken context registryapp
  3. Start glance-registry service using
    service openstack-glance-registry start
  4. Enable glance-registry service on start-up
    chkconfig openstack-glance-registry on


Initialize glance database

Initialize glance database using

  1. Use 'glance-manage db_sync' to initialize glance database.
  2. Use following to restart the services once after initializing database
    service openstack-glance-api restart
    service openstack-glance-registry restart


Troubleshooting glance setup

Look at '/var/log/glance/{api,registry}.log' files and verify that they end with something like:

INFO keystone.middleware.auth_token [-] Starting keystone auth_token middleware
INFO keystone.middleware.auth_token [-] Using /var/lib/glance/keystone-signing as cache directory for signing certificate
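
A quick way to check this on a running system (a sketch; log paths as above):

grep -i keystone /var/log/glance/api.log /var/log/glance/registry.log | tail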


Verifying glance service installation

Glance installation can be verified using:

  1. Obtain a test image at an appropriate location using:
    wget http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz
    tar -zxvf ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz
  2. Create openstack-vars.sh file with following values
    export OS_USERNAME=admin
    export OS_TENANT_NAME=default
    export OS_PASSWORD=iiit123
    export OS_AUTH_URL=http://10.4.15.2:5000/v2.0/
    export OS_REGION_NAME=332b
  3. Source file using 'source openstack-vars.sh'
    This is a very important file and will need to be sourced very often. Do not delete this file and remember to source it before using openstack commands in a new terminal.
  4. Create the images using the commands below (a variant that captures the kernel and ramdisk IDs automatically is sketched after this list)
    glance image-create \
    --name="tty-linux-kernel" \
    --disk-format=aki \
    --container-format=aki < ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz
    glance image-create \
    --name="tty-linux-ramdisk" \
    --disk-format=ari \
    --container-format=ari < ttylinux-uec-amd64-12.1_2.6.35-22_1-loader
    glance image-create \
    --name="tty-linux" \
    --disk-format=ami \
    --container-format=ami \
    --property kernel_id=<from-above> \
    --property ramdisk_id=<from-above> < ttylinux-uec-amd64-12.1_2.6.35-22_1.img
  5. Verify that images got created using 'glance image-list'
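
A variant of step 4 that captures the kernel and ramdisk IDs in shell variables instead of copying them by hand (the awk parsing of the glance CLI table output is an assumption; verify against the actual output):

KERNEL_ID=$(glance image-create --name="tty-linux-kernel" --disk-format=aki --container-format=aki < ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | awk '/ id / {print $4}')
RAMDISK_ID=$(glance image-create --name="tty-linux-ramdisk" --disk-format=ari --container-format=ari < ttylinux-uec-amd64-12.1_2.6.35-22_1-loader | awk '/ id / {print $4}')
glance image-create --name="tty-linux" --disk-format=ami --container-format=ami --property kernel_id=$KERNEL_ID --property ramdisk_id=$RAMDISK_ID < ttylinux-uec-amd64-12.1_2.6.35-22_1.img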



Installing and configuring Compute service

The steps assume use of the KVM hypervisor. For other hypervisors the steps can be learned from the references.

Install nova packages

Install nova packages using:

yum install openstack-nova


Configuring KVM as hypervisor

To configure openstack to use KVM use:

  1. Ensure that '/etc/nova/nova.conf' has
    compute_driver = libvirt.LibvirtDriver
    libvirt_type = kvm
  2. Verify that processor supports virtualization using:
    egrep '(vmx|svm)' --color=always /proc/cpuinfo
  3. Verify that kvm modules are loaded using:
    lsmod | grep kvm
  4. Ensure that the '/etc/nova/nova.conf' file has the following lines (the nova user was created in the service tenant earlier):
    libvirt_cpu_mode = host-model
    [keystone_authtoken]
    admin_tenant_name = service
    admin_user = nova
    admin_password = nova

Nova logs are generated in '/var/log/nova/nova-compute.log', where libvirt errors would get reported in case of incorrect hypervisor configuration.


Configuring QEMU hypervisor (Not required)

As we have configured KVM, qemu configuration is not required. This information is included because if openstack is being installed inside a VM, then only qemu would be available.

  1. For Qemu '/etc/nova/nova.conf' should contain
    compute_driver=libvirt.LibvirtDriver
    libvirt_type=qemu
  2. Install libguestfs-tools using
    yum install libguestfs-tools
  3. Run the following commands to configure qemu, set the SELinux boolean allowing qemu to use executable memory, link the qemu-kvm executable as qemu-system-x86_64 and restart libvirtd
    sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
    setsebool -P virt_use_execmem on
    sudo ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
    sudo service libvirtd restart


Configure network

Configure network as follows:

  1. Configure bridge interface br0 for networking on the base machine. Use 'Creating bridge interfaces (br0) for virtual hosts to use shared interface' as a reference; a minimal sketch of the ifcfg files is shown after this list
  2. Install dnsmasq-utils using
    yum install dnsmasq-utils
    service libvirtd restart
  3. In RHEL 6.2 'force_dhcp_release' should be set to False using:
    sudo openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release False
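
A minimal sketch of the br0 configuration from step 1, assuming RHEL-style network scripts, that eth0 is the physical interface and that 10.4.15.2/24 is the base machine IP (all of these are assumptions; see the referenced article for details):

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.4.15.2
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes

service network restart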


Create MySQL database for nova

Create MySQL database for nova using:

mysql -u root -p
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
flush privileges;
quit

Support for PostgreSQL is also present. Refer to http://docs.openstack.org/trunk/openstack-compute/install/yum/content/setting-up-sql-database-postgresql.html to learn PostgreSQL configuration.


nova.conf file format

nova.conf uses the INI file format. A basic description of this format is available at http://docs.openstack.org/trunk/openstack-compute/install/yum/content/compute-options.html


Install rabbitmq and Configure complete nova

qpidd could have been used too, but appropriate configuration must then be done in /etc/nova/nova.conf. The configuration below assumes use of rabbitmq.

  • Install rabbitmq using
yum -y install rabbitmq-server
service rabbitmq-server start
chkconfig rabbitmq-server on
  • Put the following contents in the '/etc/nova/nova.conf' file after replacing 10.4.15.2 with the appropriate IP everywhere. Also set the correct value for volume_group.
[DEFAULT]

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
volume_driver=nova.volume.driver.ISCSIDriver
volume_group=nova-volumes
volume_name_template=volume-%s
iscsi_helper=tgtadm

# DATABASE
sql_connection=mysql://nova:nova@10.4.15.2/nova

# COMPUTE
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=10.4.15.2
s3_host=10.4.15.2

# RABBITMQ
rabbit_host=10.4.15.2

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=10.4.15.2:9292

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=False
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=10.4.15.2
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br0
flat_interface=eth0
fixed_range=192.168.100.0/24

# NOVNC CONSOLE
novncproxy_base_url=http://10.4.15.2:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=10.4.15.2
vncserver_listen=10.4.15.2

# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova
  • Note: if a new file is created, ensure the following
chown -R nova:nova /etc/nova
chmod 640 /etc/nova/nova.conf
  • Create /var/lock/nova as follows:
mkdir /var/lock/nova
chown -R nova:nova /var/lock/nova
  • Stop all services and enable them on start-up using
for svc in api objectstore compute network volume scheduler cert; do 
   service openstack-nova-$svc stop ;
   chkconfig openstack-nova-$svc on ; 
done
  • Initialize database using 'nova-manage db sync'
  • Start all services using
for svc in api objectstore compute network volume scheduler cert; do 
   service openstack-nova-$svc start;
done
  • Create a network using
nova-manage network create private --fixed_range_v4=192.168.100.0/24 --bridge_interface=br100 --num_networks=1 --network_size=256
  • Start and enable openstack-nova-consoleauth using
service openstack-nova-consoleauth start
chkconfig openstack-nova-consoleauth on
  • Check that services are running using 'nova-manage service list'. A running service has status ':-)' whereas a stopped service has status 'XXX'. Debug any service that is not running using the log files in the '/var/log/nova' folder. Sample output expected after following the above steps is:
Binary           Host                                 Zone             Status     State Updated_At
nova-compute     cloud1                               nova             enabled    :-)   2013-03-23 08:38:32
nova-scheduler   cloud1                               nova             enabled    :-)   2013-03-23 08:38:32
nova-cert        cloud1                               nova             enabled    :-)   2013-03-23 08:38:32
nova-network     cloud1                               nova             enabled    :-)   2013-03-23 08:38:32
nova-volume      cloud1                               nova             enabled    :-)   2013-03-23 08:38:35
nova-consoleauth cloud1                               nova             enabled    :-)   None      
  • Verify the version using 'nova-manage version list', where 2012.2 corresponds to the folsom release


Configuring other compute nodes (Not tried)

The same '/etc/nova/nova.conf' can be used on other nodes after replacing the corresponding node's IP for

  • my_ip
  • vncserver_listen
  • vncserver_proxyclient_address

On the other node set up br0 based networking, install openstack-nova-compute, install the mysql client, copy nova.conf and start/enable the openstack-nova-compute service, for example as sketched below.
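
A rough sketch of those steps for an additional compute node (the node IP 10.4.15.3 and the 'controller' host name are assumptions):

yum install -y openstack-nova-compute mysql
scp controller:/etc/nova/nova.conf /etc/nova/nova.conf
# adjust the per-host values to this node's IP
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.4.15.3
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 10.4.15.3
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.4.15.3
service openstack-nova-compute start
chkconfig openstack-nova-compute on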


Define compute and image service credentials

Verify access to glance and compute using:

  1. Source openstack-vars.sh file created before
  2. Use 'nova image-list' command to list images stored in glance


Register virtual-machine images

Although a tty-linux image was added to glance, another image can be added using the following steps:

mkdir stackimages
wget -c https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img -O stackimages/cirros.img
glance index
glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 --container-format=bare < stackimages/cirros.img
glance index
nova image-list


Running VM instance

Run a cirros instance

  • Use following to modify default security group
nova secgroup-list
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-list-rules default
  • Use following to add public key of current user to keypair list
nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey
nova keypair-list
ssh-keygen -l
  • Verify that all nova-services are running properly
nova-manage service list
  • Get various lists and IDs
nova image-list
nova flavor-list
nova keypair-list
nova secgroup-list
  • Create a VM after replacing appropriate values from above lists
nova boot --flavor 2 --image <image-id> --key_name mykey --security_group default cirros1
  • To check whether the VM is in state BUILD / ACTIVE / ERROR use 'nova list'. If all is fine the output will be similar to
+--------------------------------------+---------+--------+-----------------------+
| ID                                   | Name    | Status | Networks              |
+--------------------------------------+---------+--------+-----------------------+
| 636edf29-01f7-4f1e-b44c-0de5e4ec57ac | cirros1 | ACTIVE | private=192.168.100.3 |
+--------------------------------------+---------+--------+-----------------------+
  • If there is an ERROR then use 'nova show cirros1' and 'tail -f /var/log/messages' while booting the VM to help with troubleshooting. To delete the VM use 'nova delete cirros1' and try creating a new VM while monitoring /var/log/messages. A short connectivity-check sketch for an ACTIVE instance follows this list.
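
Once an instance shows ACTIVE, a few hedged connectivity checks (192.168.100.3 is the address from the sample output above; cirros is the default user in the cirros image):

nova console-log cirros1      # boot messages; useful when the guest seems frozen
ping -c 3 192.168.100.3
ssh cirros@192.168.100.3      # the key added via 'nova keypair-add' is injected into the instance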


Note: The steps used so far have some bug because of which 192.168.100.3 does not respond to ping even when the VM is active.



Installing and configuring Dashboard service

Install necessary packages

  1. yum install -y memcached mod-wsgi openstack-dashboard
  2. Edit '/etc/openstack-dashboard/local_settings' file and set
    CACHE_BACKEND='memcached://127.0.0.1:11211/'
    The value 11211 can be verified using 'cat /etc/sysconfig/memcached'
  3. Use 'service httpd start' and 'chkconfig httpd on'


Login and access dashboard

  1. Open http://127.0.0.1/dashboard
  2. Login using 'admin' and 'iiit123'


Accessing VM through novnc

  1. yum -y install novnc
  2. cd /usr/share/novnc/
  3. /usr/bin/novnc_server
  4. Then try to use dashboard.

Note: The steps have some problem, as after following them the vnc based console for started VMs is not very useful


Accessing VM through virt-viewer

  1. Use 'virsh list' to find the instance number
  2. Use 'virt-viewer <n>' to access the VM.

Note: The steps seem to hang the cirros and tty-linux images during boot. Hence both are not accessible over novnc or virt-viewer, nor over the network.


