Installation guide for installing OpenStack with the Ironic component, plus the files used during different testing installations.
In this installation guide we will cover installing the OpenStack platform together with the bare metal component called Ironic. To follow the procedure easily, I suggest doing this on a lab or virtual machine with at least:
Before we begin with the installation, you should choose one of the two deployment tools which are tested to work well with this guide (there are probably many others, but for the sake of simplicity we will focus on these two). Both deployments are currently more or less focused on specific distributions:

* PackStack - RHEL based distributions (CentOS 7 in this guide)
* OpenStack Ansible - Ubuntu (16.04 LTS in this guide)
If you're in a hurry, it's best to choose the tool you already know; otherwise choose the other one to gain some new knowledge. To be able to solve problems you may run into during the procedure, it's also recommended to know how to work with some sort of shell, the basics of OpenStack, some networking basics, and what Ironic is actually meant for.
The following guide was tested with stable versions of PackStack (on CentOS 7) and OpenStack Ansible (AIO build on Ubuntu 16.04 LTS), which were prepared for OpenStack Ocata (15th release, February 2017). All commands were run in Bash, which comes preinstalled on both operating systems. The whole guide depends only on the tools I used for my deployments and described above, so you will have to adjust your environment if you're using anything else.
Before continuing you should first choose either the PackStack deployment or the OpenStack Ansible deployment and follow the appropriate section below. The last section is almost the same for both installations, so once you finish the base deployment, continue there for further instructions. Just a notice before starting - you're doing the whole procedure at your own responsibility, so keep in mind that there may be additional things to prepare or set up on your machine/network/... which are not covered by this guide.
The installation with PackStack is really easy. All you need before running the commands is a freshly installed CentOS 7 (a fresh install is not strictly required, but recommended) with basic network configuration, so you're able to reach the internet. After that we install git, clone the repository in which all fixes for a correct installation of OpenStack with Ironic are applied (along with the Ironic UI component for Horizon), and finally change into the directory where the installation script is located. The script only corrects the initialization of config files, which was failing back then. As parameters we pass the keyword ironic and the name of the network interface (in my case enp2s0) on which we want to establish the network bridge that will also be used for communication between the Ironic host and the baremetal node(s).
$ sudo yum install -y git
$ git clone https://git.susnik.work/jan/packstack_ironic_ocata.git
$ cd packstack_ironic_ocata
$ sudo bash run_setup.sh ironic enp2s0
Once the installation is successfully completed, find the credentials file named keystonerc_admin - usually in the directory from which you started the script or in the root user's directory (/root). After that we have to source the file to gain access for administrative tasks:
$ . keystonerc_admin
You can also print the contents of that file if you want to use the credentials to log into the dashboard.
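For example (a quick look - the exact contents will differ per deployment, but the OS_USERNAME and OS_PASSWORD exports are the credentials the dashboard login asks for):
$ cat keystonerc_admin
# look for the OS_USERNAME and OS_PASSWORD exports - those are the dashboard credentials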
Now we have prepared our environment for the most important part of this guide, where we configure everything to get our baremetal nodes up and running with the chosen operating system.
The installation with OpenStack Ansible is a bit trickier, but still easy if you have the system configured correctly from the ground up. Before we start you should have a freshly installed Ubuntu 16.04 (a fresh install is not strictly required, but recommended) with basic network configuration, so you're able to reach the internet. After that we'll update the package source lists and upgrade the packages and the kernel in case the latest versions are not yet present on the system. Then we will install a few packages which are required to successfully run the Ansible deployment - build-essential, git, openssh-server and python-dev (plus bridge-utils and vlan for the network setup). Their names are more or less self-describing, so there's no real need to describe them further.
$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo apt-get install build-essential git openssh-server python-dev bridge-utils vlan
Now we need to enable the kernel module for VLAN support, since the network will be separated into three different parts, each using its own VLAN tag.
$ echo '8021q' | sudo tee -a /etc/modules
The next thing is to set up NTP synchronization, so the system time will be correct, as some OpenStack components depend heavily on it. Edit the config file /etc/systemd/timesyncd.conf and add an NTP server (best to use one hosted in your country), so you end up with something like this:
[Time]
NTP=3.si.pool.ntp.org
FallbackNTP=ntp.ubuntu.com
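To apply the change and confirm that time synchronization works, something like the following should be enough (assuming systemd-timesyncd is the active time service, as it is by default on Ubuntu 16.04):
$ sudo systemctl restart systemd-timesyncd
$ timedatectl status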
To avoid locale errors we will generate en_US.UTF-8 - OpenStack Ansible officially supports only that one, so if we left this step out, the installation would most probably fail after the first few minutes - unless you're already using this locale as your primary one. In the dpkg-reconfigure dialog, select the previously generated en_US.UTF-8 both as a locale to generate and as the default locale.
$ sudo locale-gen "en_US.UTF-8"
$ sudo dpkg-reconfigure locales
So far we have prepared the system for the deployment. But before starting it, we have to restart the system, so the latest kernel is used and the module we enabled earlier gets loaded. Once the system is back up, we switch to the most privileged user - root. Before you run any command you don't know, read its manual (man command) or search for it on the web. And as you know - think twice before running any command.
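A minimal way to do the restart and then confirm the new kernel and the VLAN module are in place:
$ sudo reboot
# after the machine comes back up
$ uname -r
$ lsmod | grep 8021q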
$ sudo -s
Now we will clone the official OpenStack Ansible repository into the /opt directory and change into it. Afterwards we'll explicitly check out the master branch.
$ git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
$ cd /opt/openstack-ansible
$ git checkout master
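If you would rather pin the deployment to the Ocata series this guide was tested against instead of whatever master currently points to, you can check out its stable branch (assuming the branch is still present in the repository):
$ git branch -r | grep stable
$ git checkout stable/ocata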
We have to run the bootstrap scripts, which will prepare the current environment for the later deployment. We'll also copy the prepared Ironic configuration file into the deployment conf.d directory, which enables automatic installation of the Ironic component for us.
$ scripts/bootstrap-ansible.sh
$ scripts/bootstrap-aio.sh
$ cp etc/openstack_deploy/conf.d/ironic.yml.aio /etc/openstack_deploy/conf.d/ironic.yml
And now it's finally time to run the playbooks to deploy OpenStack.
$ scripts/run-playbooks.sh
Once the installation is successfully completed, find the credentials file named openrc - usually in the root user's directory (/root). After that we have to source the file to gain access for administrative tasks:
$ . openrc
You can also print the contents of that file if you want to use the credentials to log into the dashboard.
Now we have prepared our environment for the most important part of this guide, where we configure everything to get our baremetal nodes up and running with the chosen operating system. OpenStack Ansible usually sets up most if not all of the services installed in the next section correctly - the required services are also tested during the installation, so they should be running. Just in case, I recommend you check whether they are really running, to reduce the possibility of failures.
Since all OpenStack components installed with OpenStack Ansible run in LXC containers, there are two useful commands you'll probably need:

* lxc-ls lists all the existing containers on your system
* lxc-attach attaches to a shell of the container whose name you pass as a parameter

Example usage:
$ lxc-ls
aio1_cinder_api_container-bf4fabfb
aio1_cinder_scheduler_container-287ea62a
aio1_designate_container-681b0048
...
$ lxc-attach --name aio1_cinder_api_container-bf4fabfb
$ # you're now in the shell of specified container
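For instance, one way to sanity-check the deployed services is to attach to the utility container (which in an AIO build carries the OpenStack clients and an openrc file) and list the service and agent states - container names and file locations may differ on your system:
$ lxc-attach --name "$(lxc-ls -1 | grep utility)"
$ . /root/openrc
$ nova service-list
$ neutron agent-list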
In order to start running the commands that set up Ironic and all needed services, just make sure you have sourced the file (keystonerc_admin or openrc) with the variables needed for connecting to the OpenStack API. One way to check that you have done this correctly is to print one of the variables specified in the file - for example the username of the administrative user.
$ echo $OS_USERNAME
If you got back the correct username and not an empty line, then you have everything set for the rest of the process. If you will set up Ironic in a virtual environment with VirtualBox, you need to enable SSH access to a user account which can create virtual machines. Otherwise you need a correctly configured network, so that the other machine is reachable via the protocol used for powering it on and off - here we will use IPMI. For most of the commands you will also need root privileges, so make sure you are either using the root account or prepend sudo whenever needed. For the baremetal node I will be using the IP 10.0.0.2 and for the main machine (which in the VirtualBox case runs both server and node) I'll be using 10.0.0.1. You'll also see IDs like a378e5f9-8e44-44ae-9ba1-1f5b973a6b36 from my testing deployment - you absolutely must replace them with yours (you can find them in the output of the commands run before those which depend on the IDs).
First of all we'll create the Ironic node, add a port and check whether the machine powers on - please watch the commands carefully and run only those intended for your environment. In case you're using IPMI, also make sure you specify the correct protocol version.
$ NODE_HOSTNAME="baremetal"
# IPMI
$ ironic node-create -n "$NODE_HOSTNAME" -d pxe_ipmitool -i ipmi_address='10.0.0.2' -i ipmi_username='user' -i ipmi_password='password'
# VirtualBox
$ ironic node-create -n "$NODE_HOSTNAME" -d pxe_ssh -i ssh_address=10.0.0.1 -i ssh_username=username -i ssh_virt_type=vbox -i ssh_key_contents="$(cat private.key)"
$ ironic node-update a378e5f9-8e44-44ae-9ba1-1f5b973a6b36 add driver_info/ipmi_protocol_version='1.5'
$ ironic port-create -n a378e5f9-8e44-44ae-9ba1-1f5b973a6b36 -a 00:12:34:56:78:90
# Check if connection to machine works
$ ironic node-set-power-state "$NODE_HOSTNAME" on
# For VirtualBox - when you're prompted for a boot image, click Cancel and afterwards power the Ironic node off
# Check if machine was successfully powered on for IPMI
$ ipmipower -h 10.0.0.2 -u user -p password --stat
# Once you're sure the connection works as expected
$ ironic node-set-power-state "$NODE_HOSTNAME" off
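Rather than copy-pasting the node UUID into the commands that follow, you can capture it into a variable once the node exists (a convenience sketch, assuming the node name is unique in the listing):
$ NODE_UUID=$(ironic node-list | awk "/$NODE_HOSTNAME/ {print \$2}")
$ echo "$NODE_UUID"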
Now we need to create the TFTP directory and prepare all required files in it. After that we'll install the packages on which everything we need to configure depends.
$ mkdir -p /tftpboot
$ chown -R ironic /tftpboot
# RHEL based distros
$ yum install -y tftp-server syslinux-tftpboot xinetd
# Ubuntu and related
$ sudo apt-get install xinetd tftpd-hpa syslinux-common pxelinux
Edit the file /etc/xinetd.d/tftp and replace its contents with the service definition below.
service tftp
{
protocol = udp
port = 69
socket_type = dgram
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -v -v -v -v -v --map-file /tftpboot/map-file /tftpboot
disable = no
# This is a workaround for Fedora, where TFTP will listen only on
# IPv6 endpoint, if IPv4 flag is not used.
flags = IPv4
}
We'll restart xinetd and prepare a few files now.
$ systemctl restart xinetd
$ cp /usr/share/syslinux/{pxelinux.0,chain.c32} /tftpboot/
$ echo 're ^(/tftpboot/) /tftpboot/\2' > /tftpboot/map-file
$ echo 're ^/tftpboot/ /tftpboot/' >> /tftpboot/map-file
$ echo 're ^(^/) /tftpboot/\1' >> /tftpboot/map-file
$ echo 're ^([^/]) /tftpboot/\1' >> /tftpboot/map-file
# RHEL based distros only
$ chcon -R -t tftpdir_rw_t /tftpboot
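Before moving on you can optionally check that the TFTP server actually answers, for example with the tftp client (package tftp on RHEL based distros, tftp-hpa on Ubuntu) - just a quick sanity check, not a required step:
$ tftp 127.0.0.1 -c get pxelinux.0 /tmp/pxelinux.0
$ ls -l /tmp/pxelinux.0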
If your driver supports the web console, you can enable it. In the case of IPMI, this is only supported from protocol version 2.0 onward.
# RHEL based distros
$ yum install -y epel-release
$ yum --enablerepo=epel install -y shellinabox
# Ubuntu and related
$ apt-get install -y shellinabox
Let's uncomment and change some lines in /etc/ironic/ironic.conf.
# Uncomment the following lines for TFTP/PXE ...
my_ip = 10.1.2.1
tftp_server = $my_ip
tftp_root = /tftpboot
pxe_bootfile_name = pxelinux.0
# ... and, in case you installed the web console, change the pxe_append_params line to:
pxe_append_params = nofb nomodeset vga=normal console=tty0 console=ttyS0,115200n8
# otherwise:
pxe_append_params = nofb nomodeset vga=normal
And restart Ironic Conductor to apply changes.
$ systemctl restart openstack-ironic-conductor
In case you enabled the web console, you also have to enable it on the Ironic node:
$ ironic node-update a378e5f9-8e44-44ae-9ba1-1f5b973a6b36 add driver_info/ipmi_terminal_port=8023
$ ironic node-set-console-mode a378e5f9-8e44-44ae-9ba1-1f5b973a6b36 true
Next we need to create a new DHCP-enabled network to which our baremetal nodes will be connected and from which they will fetch an IP address.
$ neutron net-create ironic-net --shared --provider:network_type flat --provider:physical_network physnet1
$ neutron subnet-create ironic-net 10.1.2.176/28 --name ironic-subnet --ip-version=4 --allocation-pool start=10.1.2.178,end=10.1.2.190 --gateway 10.1.2.1 --enable-dhcp --dns-nameservers list=true 8.8.4.4 8.8.8.8
Next we need to edit the file /etc/neutron/plugins/ml2/ml2_conf.ini to set the VLAN ranges to the correct physical network.
[ml2_type_vlan]
network_vlan_ranges = physnet1
And restart two Neutron services.
$ systemctl restart neutron-{openvswitch-agent,server}
We also need to update the cleaning network in /etc/ironic/ironic.conf.
cleaning_network = ironic-net
And restart Ironic Conductor.
$ systemctl restart openstack-ironic-conductor
All required services for booting our baremetal node are now prepared, so we are going to build our own disk image, which will be used during the boot process and installed on our node. First we'll install the Python package manager pip, which is needed to install the additional packages on which image building depends. If you have a container based environment, make sure to run pip in isolated mode (add the --isolated flag to each pip command). Then we will install diskimage-builder, which helps us build images, and tripleo-image-elements, which contains additional elements required for successful image building. At the end we override the kernel version for the Ubuntu image and export a new variable so diskimage-builder knows where it can find the additional image elements.
# RHEL based distros
$ yum install -y python-pip
# Ubuntu and related
$ apt-get install -y python-pip
$ pip install --upgrade pip
$ pip install diskimage-builder
$ pip install tripleo-image-elements
$ echo 'linux-image-generic-lts-xenial:' > /usr/lib/python2.7/site-packages/diskimage_builder/elements/ubuntu/package-installs.yaml
$ export ELEMENTS_PATH=/usr/share/tripleo-image-elements
Now we will build all the parts needed during boot and for the installation of the operating system. In this example we're building an image for Ubuntu 16.04, but you can replace it with another distribution like CentOS or Debian. After we build each part, we add it to Glance, so we can use it when creating instances for our baremetal node(s).
$ IMAGE_NAME=ubuntu-xenial
$ disk-image-create ironic-agent ubuntu -o ${IMAGE_NAME}
$ glance image-create --name ${IMAGE_NAME}.kernel --visibility public --disk-format aki --container-format aki < ${IMAGE_NAME}.kernel
$ glance image-create --name ${IMAGE_NAME}.initramfs --visibility public --disk-format ari --container-format ari < ${IMAGE_NAME}.initramfs
And now the most important part, which must succeed in order to set up any baremetal node later on.
$ disk-image-create ubuntu baremetal localboot local-config dhcp-all-interfaces grub2 -o ${IMAGE_NAME}
$ VMLINUZ_UUID="$(glance image-create --name ${IMAGE_NAME}.vmlinuz --visibility public --disk-format aki --container-format aki < ${IMAGE_NAME}.vmlinuz | awk '/\| id/ {print $4}')"
$ INITRD_UUID="$(glance image-create --name ${IMAGE_NAME}.initrd --visibility public --disk-format ari --container-format ari < ${IMAGE_NAME}.initrd | awk '/\| id/ {print $4}')"
$ glance image-create --name ${IMAGE_NAME} --visibility public --disk-format qcow2 --container-format bare --property kernel_id=${VMLINUZ_UUID} --property ramdisk_id=${INITRD_UUID} < ${IMAGE_NAME}.qcow2
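To confirm that all five artifacts (deploy kernel and initramfs, user image kernel and initrd, and the qcow2 itself) landed in Glance, you can list them - these are the IDs the later commands pick up:
$ glance image-list | grep "${IMAGE_NAME}"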
After a successful build we can create a flavor according to the specifications of our lab machine.
$ FLAVOR_NAME="$IMAGE_NAME"
$ FLAVOR_ID=auto
$ FLAVOR_RAM=8192
$ FLAVOR_DISK=230
$ FLAVOR_CPU=4
$ nova flavor-create ${FLAVOR_NAME} ${FLAVOR_ID} ${FLAVOR_RAM} ${FLAVOR_DISK} ${FLAVOR_CPU}
$ nova flavor-key ${FLAVOR_NAME} set cpu_arch=x86_64
$ nova flavor-key ${FLAVOR_NAME} set capabilities:boot_option="local"
The last important step for the Ironic node is to update the data about the kernel, the initramfs and its hardware specifications.
$ KERNEL_IMAGE=$(glance image-list | awk "/${IMAGE_NAME}.kernel/ {print \$2}")
$ INITRAMFS_IMAGE=$(glance image-list | awk "/${IMAGE_NAME}.initramfs/ {print \$2}")
$ ROOT_DISK_SIZE_GB="$FLAVOR_DISK"
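# Note: the node-update below uses $IMAGE_SOURCE; if it isn't set in your shell yet,
# one way to set it is to grab the ID of the final qcow2 image from Glance
# (a sketch that assumes the image name from above is unique):
$ IMAGE_SOURCE=$(glance image-list | awk "/ ${IMAGE_NAME} / {print \$2}")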
$ ironic node-update "$NODE_HOSTNAME" add \
driver_info/deploy_kernel=$KERNEL_IMAGE \
driver_info/deploy_ramdisk=$INITRAMFS_IMAGE \
instance_info/kernel=$KERNEL_IMAGE \
instance_info/ramdisk=$INITRAMFS_IMAGE \
instance_info/root_gb=${ROOT_DISK_SIZE_GB} \
instance_info/image_source=${IMAGE_SOURCE}
$ ironic node-update "$NODE_HOSTNAME" add \
properties/cpus="$FLAVOR_CPU" \
properties/memory_mb="$FLAVOR_RAM" \
properties/local_gb="$ROOT_DISK_SIZE_GB" \
properties/size=3600 \
properties/cpu_arch=x86_64 \
properties/capabilities=memory_mb:"$FLAVOR_RAM",local_gb:"$ROOT_DISK_SIZE_GB",cpu_arch:x86_64,cpus:"$FLAVOR_CPU",boot_option:local
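You can double-check that everything was recorded and that the driver information validates before continuing - node-validate reports which interfaces are still misconfigured:
$ ironic node-show "$NODE_HOSTNAME"
$ ironic node-validate "$NODE_HOSTNAME"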
To access our machine once it's ready, we have to import the public part of our SSH key.
$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub admin
Just to be sure everything will work as expected, we need to check that the Keystone authentication config in /etc/nova/nova.conf is correct, or change the part with the values below (those values are usually already written at the end of the config block, but are most probably wrong - which is why the openstack-nova-compute service may be inactive).
[ironic]
username=ironic
password=<ironic-password>
auth_plugin=password
admin_username=ironic
admin_password=<ironic-password>
admin_url=http://127.0.0.1:35357/
admin_tenant_name=services
Restart Nova Compute to apply changed config.
$ systemctl restart openstack-nova-compute
Next we need to discover hosts, so that Nova recognizes Ironic and enables support for running baremetal machines.
$ nova-manage cell_v2 discover_hosts
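To verify that Nova now sees the Ironic node as a (baremetal) hypervisor, you can list the hypervisors - the node should show up there with the resources you set on it:
$ nova hypervisor-list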
In case you have enabled the firewall previously or it's on by default, you need to add some rules to accept the relevant packets.
$ iptables -I INPUT -p udp --dport 67 -s 10.1.2.0/24 -j ACCEPT
$ iptables -I INPUT -p udp --dport 69 -s 10.1.1.0/24 -j ACCEPT
$ iptables -I INPUT -p tcp --dport 3260 -s 10.1.2.0/24 -j ACCEPT
$ service iptables save
$ service iptables restart
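The two service commands above assume the iptables service shipped with RHEL based distros; Ubuntu has no such service by default, so one way to persist the rules there is the iptables-persistent package (just a sketch, other approaches work too):
$ apt-get install -y iptables-persistent
$ netfilter-persistent save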
This should be everything that's needed to roll out our first baremetal node. However, there is still a small chance that the deployment of the node will fail, so don't despair.
$ nova boot --flavor "$FLAVOR_NAME" --image "$IMAGE_NAME" --key-name admin "$NODE_HOSTNAME" --nic net-name=ironic-net
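While the node is being deployed you can keep an eye on the progress from another shell - the instance status and the node's provision state will change as the deployment proceeds:
$ nova list
$ ironic node-list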
If your baremetal node started and began PXE booting, then you can almost do a victory dance, but I advise you to wait until the whole procedure is done. Otherwise, if you don't get any meaningful error from OpenStack - for example that some service isn't running or has failed, or that something is misconfigured - you can try to increase the RAM allocation ratio, which also helped me. Do this only if the status of the node is Error. Usually this setting's value is 1.0, so we open /etc/nova/nova.conf and set it to 3.0.
ram_allocation_ratio=3.0
This, along with restarting Nova, should help you boot the baremetal node successfully.
$ systemctl restart openstack-nova-{compute,conductor}
After the restart you can go back to the previous step and run the nova boot command again.
Congratulations, you have now successfully deployed OpenStack with Ironic and one baremetal node. You can repeat part of the procedure if you want to deploy another node.
The whole guide was composed from other guides found on the internet - mostly from the official documentation. The links below are grouped as I had them in my notes, so the items within each group are probably related.
Ironic related links
PackStack installation
OpenStack Ansible related links
Links
iSCSI
Others
If you encounter an issue which prevents you from continuing with the guide, the best you can do before asking the community for help is to check the error in detail (look at all possibly related logs) and search the internet to see whether anyone has already found a fix you can use. Since Ironic is not a component which is enabled in OpenStack by default, it's a bit harder to find solutions for specific errors, so in that case just check the source of the part which is failing - there must be something indicating where this part lives in the code (otherwise just search for where the error message is thrown in the code and you will sooner or later understand what's going on).
If you can't figure out what's causing the failures, another way to solve them is to ask the people working on the related code/component through official channels. Most developers are available via the project's IRC channel, forums or Ask OpenStack. Links for getting help are usually published somewhere in the projects' documentation.