Red Hat OpenStack Platform 15 standalone
If you need to quickly deploy a test Red Hat OpenStack Platform environment, you can use the standalone deployment available since Red Hat OpenStack Platform 14.
Thanks to containers, this deployment consumes only one physical bare-metal node. This standalone server handles both the control plane services and the QEMU/KVM compute services. Because this architecture does not rely on nested virtualization, virtual machine performance stays good.
Official doc: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/15/html-single/quick_start_guide/index
Summary:
- OpenStack lab environment
- Prerequisites
- Prepare the environment
- Prepare yaml configuration
- Deploy!
- Check the deployed services
- Use OpenStack client
- Setup the OpenStack network
- Boot an instance with the CLI
- Login into the dashboard
- Launch one instance with the dashboard
Prerequisites
Because we will deploy the whole cluster on a single node, it’s good to have at least 128GB of RAM and an SSD with enough IOPS.
For this deployment, we will use one Dell PowerEdge R740 with these specifications:
- 2 x Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz (total of 40 threads)
- 1 x INTEL SSD SC2KG960G7R 960GB
- 256GB RAM
- 1 x NIC
Start by installing a default RHEL 8.1. Download the “Red Hat Enterprise Linux 8.1 Binary DVD” ISO file here: https://access.redhat.com/downloads/content/479/ver=/rhel---8/8.1/x86_64/product-software
We will not describe how to install a standard RHEL 8.1 operating system; you can review the documentation if needed: https://access.redhat.com/documentation/fr-fr/red_hat_enterprise_linux/8/html/performing_a_standard_rhel_installation/index
…
We have now a running RHEL 8.1:
[root@stand0 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.1 (Ootpa)
Prepare the environment
Create a stack user:
[root@stand0 ~]# useradd stack
[root@stand0 ~]# usermod -aG wheel stack
[root@stand0 ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@stand0 ~]# chmod 0440 /etc/sudoers.d/stack
Register the node:
[stack@stand0 ~]$ sudo subscription-manager register --username myrhnaccount
Registering to: subscription.rhsm.redhat.com:443/subscription
Password:
The system has been registered with ID: XXXXX-XXXXX-XXXX-XXXXX-XXXXXXX
The registered system name is: stand0.lan.redhat.com
[stack@stand0 ~]$ sudo subscription-manager attach --pool=XXXXXXXXXXXXXXXXXXXXX
Successfully attached a subscription for: RHN
[stack@stand0 ~]$ sudo subscription-manager repos --disable=*
[stack@stand0 ~]$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-highavailability-rpms --enable=ansible-2.8-for-rhel-8-x86_64-rpms --enable=openstack-15-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms
Repository 'rhel-8-for-x86_64-baseos-rpms' is enabled for this system.
Repository 'rhel-8-for-x86_64-appstream-rpms' is enabled for this system.
Repository 'rhel-8-for-x86_64-highavailability-rpms' is enabled for this system.
Repository 'ansible-2.8-for-rhel-8-x86_64-rpms' is enabled for this system.
Repository 'openstack-15-for-rhel-8-x86_64-rpms' is enabled for this system.
Repository 'fast-datapath-for-rhel-8-x86_64-rpms' is enabled for this system.
Install the director command line interface:
[stack@stand0 ~]$ sudo yum install -y python3-tripleoclient
Prepare yaml configuration
Generate the containers-prepare-parameters.yaml file that contains the default ContainerImagePrepare parameters:
[stack@stand0 ~]$ openstack tripleo container image prepare default --output-env-file $HOME/containers-prepare-parameters.yaml
Add authentication information:
[stack@stand0 ~]$ sudo tee -a $HOME/containers-prepare-parameters.yaml << EOF
ContainerImageRegistryCredentials:
  registry.redhat.io:
    rhn_user: 'rhn_password'
EOF
Replace “rhn_user” with your username and “rhn_password” with your password:
[stack@stand0 ~]$ sed -i 's/rhn_user/myrhnaccount/' $HOME/containers-prepare-parameters.yaml
[stack@stand0 ~]$ sed -i 's/rhn_password/XXXXXXXXXXX/' $HOME/containers-prepare-parameters.yaml
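If you want to be careful, you can dry-run the substitution on the template line before touching the real file; a quick sketch (myrhnaccount / s3cret are placeholder values):

```shell
# Dry run: apply both sed expressions to the template line only
template="rhn_user: 'rhn_password'"
echo "$template" | sed -e "s/rhn_user/myrhnaccount/" -e "s/rhn_password/s3cret/"
```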
Check the network configuration:
[stack@stand0 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 20:04:0f:eb:8e:a8 brd ff:ff:ff:ff:ff:ff
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 20:04:0f:eb:8e:a9 brd ff:ff:ff:ff:ff:ff
4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 20:04:0f:eb:8e:aa brd ff:ff:ff:ff:ff:ff
5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 20:04:0f:eb:8e:ab brd ff:ff:ff:ff:ff:ff
6: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether f8:f2:1e:31:65:50 brd ff:ff:ff:ff:ff:ff
inet 192.168.168.95/24 brd 192.168.168.255 scope global dynamic noprefixroute ens1f0
valid_lft 41044sec preferred_lft 41044sec
inet6 2620:52:0:2e04:faf2:1eff:fe31:6550/64 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::faf2:1eff:fe31:6550/64 scope link noprefixroute
valid_lft forever preferred_lft forever
7: ens1f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether f8:f2:1e:31:65:51 brd ff:ff:ff:ff:ff:ff
8: ens1f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether f8:f2:1e:31:65:52 brd ff:ff:ff:ff:ff:ff
9: ens1f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether f8:f2:1e:31:65:53 brd ff:ff:ff:ff:ff:ff
[stack@stand0 ~]$ ip r
default via 192.168.168.254 dev ens1f0 proto dhcp metric 104
192.168.168.0/24 dev ens1f0 proto kernel scope link src 192.168.168.95 metric 104
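The values needed for the next step can also be extracted from the routing table with awk instead of typed by hand. A sketch, run here against the captured output above (on the real host you could use `route_line=$(ip r | head -1)`, assuming the default route is the first line):

```shell
# Pull the gateway (field 3) and interface (field 5) out of the default route
route_line="default via 192.168.168.254 dev ens1f0 proto dhcp metric 104"
GATEWAY=$(echo "$route_line" | awk '{print $3}')
INTERFACE=$(echo "$route_line" | awk '{print $5}')
echo "GATEWAY=$GATEWAY INTERFACE=$INTERFACE"
```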
Prepare environment variables (replace clock.redhat.com with your own time server):
[stack@stand0 ~]$ export IP=192.168.168.95
[stack@stand0 ~]$ export NETMASK=24
[stack@stand0 ~]$ export GATEWAY=192.168.168.254
[stack@stand0 ~]$ export INTERFACE=ens1f0
[stack@stand0 ~]$ export DNS_SERVER1=10.2.2.31
[stack@stand0 ~]$ export DNS_SERVER2=10.4.4.32
[stack@stand0 ~]$ export NTP_SERVER1=clock.redhat.com
Prepare the standalone YAML file:
[stack@stand0 ~]$ cat <<EOF > $HOME/standalone_parameters.yaml
parameter_defaults:
  CloudName: $IP
  # default gateway
  ControlPlaneStaticRoutes:
    - ip_netmask: 0.0.0.0/0
      next_hop: $GATEWAY
      default: true
  Debug: true
  DeploymentUser: $USER
  DnsServers:
    - $DNS_SERVER1
    - $DNS_SERVER2
  # needed for vip & pacemaker
  KernelIpNonLocalBind: 1
  DockerInsecureRegistryAddress:
    - $IP:8787
  NeutronPublicInterface: $INTERFACE
  # domain name used by the host
  NeutronDnsDomain: localdomain
  # re-use ctlplane bridge for public net, defined in the standalone
  # net config (do not change unless you know what you're doing)
  NeutronBridgeMappings: datacentre:br-ctlplane
  NeutronPhysicalBridge: br-ctlplane
  # enable to force metadata for public net
  #NeutronEnableForceMetadata: true
  StandaloneEnableRoutedNetworks: false
  StandaloneHomeDir: $HOME
  InterfaceLocalMtu: 1500
  # Needed if running in a VM, not needed if on baremetal
  # NovaComputeLibvirtType: qemu
  NtpServer: $NTP_SERVER1
EOF
Log in to the Red Hat registry:
[stack@stand0 ~]$ sudo podman login registry.redhat.io
Username: myrhnaccount
Password:
Login Succeeded!
Deploy!
Deploy:
[stack@stand0 ~]$ sudo openstack tripleo deploy \
--templates \
--local-ip=$IP/$NETMASK \
-e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
-r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
-e $HOME/containers-prepare-parameters.yaml \
-e $HOME/standalone_parameters.yaml \
--output-dir $HOME \
--standalone
...
PLAY RECAP ***********************************************************************************************************************************************
stand0 : ok=304 changed=170 unreachable=0 failed=0 skipped=436 rescued=0 ignored=1
undercloud : ok=16 changed=7 unreachable=0 failed=0 skipped=37 rescued=0 ignored=0
Not cleaning working directory /home/stack/tripleo-heat-installer-templates
Not cleaning ansible directory /home/stack/undercloud-ansible-d7yuz77o
Install artifact is located at /home/stack/undercloud-install-20191206231940.tar.bzip2
########################################################
Deployment successful!
########################################################
##########################################################
Useful files:
The clouds.yaml file is at ~/.config/openstack/clouds.yaml
Use "export OS_CLOUD=standalone" before running the
openstack command.
##########################################################
Writing the stack virtual update mark file /var/lib/tripleo-heat-installer/update_mark_standalone
Check the deployed services
After the deployment, you will find a set of pulled podman images:
[stack@stand0 ~]$ sudo podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.redhat.io/rhosp15-rhel8/openstack-nova-libvirt 15.0-86 8a18a6b77edf 11 days ago 1.88 GB
registry.redhat.io/rhosp15-rhel8/openstack-nova-compute 15.0-80 860775d7445f 11 days ago 2.02 GB
registry.redhat.io/rhosp15-rhel8/openstack-neutron-server-ovn 15.0-73 3e45ee55602f 3 weeks ago 1.06 GB
registry.redhat.io/rhosp15-rhel8/openstack-ovn-nb-db-server 15.0-76 2795b0cbc130 3 weeks ago 600 MB
registry.redhat.io/rhosp15-rhel8/openstack-neutron-metadata-agent-ovn 15.0-74 71792c828635 3 weeks ago 1.05 GB
registry.redhat.io/rhosp15-rhel8/openstack-cinder-api 15.0-78 0334b7ba5a61 3 weeks ago 1.17 GB
registry.redhat.io/rhosp15-rhel8/openstack-nova-placement-api 15.0-81 bc34f2c2f1a1 3 weeks ago 1.04 GB
registry.redhat.io/rhosp15-rhel8/openstack-cinder-volume pcmklatest 9b3407894984 3 weeks ago 1.23 GB
registry.redhat.io/rhosp15-rhel8/openstack-cinder-volume 15.0-79 9b3407894984 3 weeks ago 1.23 GB
registry.redhat.io/rhosp15-rhel8/openstack-swift-account 15.0-75 62227d473b3c 3 weeks ago 841 MB
registry.redhat.io/rhosp15-rhel8/openstack-keystone 15.0-77 8a0ee7c7d0f7 3 weeks ago 776 MB
registry.redhat.io/rhosp15-rhel8/openstack-glance-api 15.0-74 ae5421b66ad1 3 weeks ago 1.02 GB
registry.redhat.io/rhosp15-rhel8/openstack-ovn-northd 15.0-77 74cb4cfd75ee 3 weeks ago 734 MB
registry.redhat.io/rhosp15-rhel8/openstack-swift-container 15.0-76 4b9d4573f9c7 3 weeks ago 841 MB
registry.redhat.io/rhosp15-rhel8/openstack-ovn-controller 15.0-76 2190bc7c3099 3 weeks ago 600 MB
registry.redhat.io/rhosp15-rhel8/openstack-ovn-sb-db-server 15.0-78 dc6b62d4c061 3 weeks ago 600 MB
registry.redhat.io/rhosp15-rhel8/openstack-swift-object 15.0-76 cc5121b760cc 3 weeks ago 841 MB
registry.redhat.io/rhosp15-rhel8/openstack-nova-consoleauth 15.0-80 9b7ee143bf18 3 weeks ago 1.02 GB
registry.redhat.io/rhosp15-rhel8/openstack-nova-scheduler 15.0-79 ecce23c7490c 3 weeks ago 1.19 GB
registry.redhat.io/rhosp15-rhel8/openstack-swift-proxy-server 15.0-77 852279d6f91a 3 weeks ago 900 MB
registry.redhat.io/rhosp15-rhel8/openstack-nova-novncproxy 15.0-80 f169533182c3 3 weeks ago 1.11 GB
registry.redhat.io/rhosp15-rhel8/openstack-nova-api 15.0-77 93b13ef7e1c2 3 weeks ago 1.13 GB
registry.redhat.io/rhosp15-rhel8/openstack-cinder-scheduler 15.0-77 acc87eed27d6 3 weeks ago 1.09 GB
registry.redhat.io/rhosp15-rhel8/openstack-nova-conductor 15.0-80 4de00ab219f7 3 weeks ago 1.02 GB
registry.redhat.io/rhosp15-rhel8/openstack-horizon 15.0-77 d4618e2ccd9a 3 weeks ago 884 MB
registry.redhat.io/rhosp15-rhel8/openstack-mariadb 15.0-89 81ed7d3cef4b 3 weeks ago 776 MB
registry.redhat.io/rhosp15-rhel8/openstack-mariadb pcmklatest 81ed7d3cef4b 3 weeks ago 776 MB
registry.redhat.io/rhosp15-rhel8/openstack-rabbitmq pcmklatest d5b619e4dc4b 3 weeks ago 604 MB
registry.redhat.io/rhosp15-rhel8/openstack-rabbitmq 15.0-87 d5b619e4dc4b 3 weeks ago 604 MB
registry.redhat.io/rhosp15-rhel8/openstack-memcached 15.0-82 35fb07b3facc 3 weeks ago 444 MB
registry.redhat.io/rhosp15-rhel8/openstack-cron 15.0-84 6a73b400c5ca 3 weeks ago 423 MB
registry.redhat.io/rhosp15-rhel8/openstack-iscsid 15.0-84 46e1c3bad8f3 3 weeks ago 443 MB
Pacemaker is running:
[stack@stand0 ~]$ sudo pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: stand0 (version 2.0.2-3.el8-744a30d655) - partition with quorum
Last updated: Fri Dec 6 23:21:04 2019
Last change: Fri Dec 6 23:19:02 2019 by root via cibadmin on stand0
3 nodes configured
7 resources configured
Online: [ stand0 ]
GuestOnline: [ galera-bundle-0@stand0 rabbitmq-bundle-0@stand0 ]
Full list of resources:
Container bundle: galera-bundle [registry.redhat.io/rhosp15-rhel8/openstack-mariadb:pcmklatest]
galera-bundle-0 (ocf::heartbeat:galera): Master stand0
Container bundle: rabbitmq-bundle [registry.redhat.io/rhosp15-rhel8/openstack-rabbitmq:pcmklatest]
rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started stand0
Container bundle: openstack-cinder-volume [registry.redhat.io/rhosp15-rhel8/openstack-cinder-volume:pcmklatest]
openstack-cinder-volume-podman-0 (ocf::heartbeat:podman): Started stand0
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Use OpenStack client
Instead of sourcing an overcloudrc file, you just need to export an OS_CLOUD variable:
[stack@stand0 ~]$ export OS_CLOUD=standalone
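The `standalone` cloud name refers to the entry the installer wrote into `~/.config/openstack/clouds.yaml`. It looks roughly like this (an illustrative sketch; your generated values, notably `auth_url` and `password`, will differ):

```yaml
# ~/.config/openstack/clouds.yaml (illustrative)
clouds:
  standalone:
    auth:
      auth_url: http://192.168.168.95:5000
      project_name: admin
      username: admin
      password: XXXXXXXX
    region_name: regionOne
```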
You can ask for a token to confirm the OpenStack client is working \o/:
[stack@stand0 ~]$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2019-12-07T00:21:58+0000 |
| id | gAAAAABd6uKW7SnCMVnXDu53dBasKh2gBTfNCfqW2byghIXEgI6w4esVt7OC--1OVyPD97721Ias5mnHj92oGKJ8qioUW7oX2T0kw88bO3cFdoBhK1s1BPXCl85xiSUF-322kAZ6loH8su1136DKzCB6spyirGjMslOvbMneuInLePdvofmLOHQ |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| user_id | 990cb75ee15e4a929655e21794704dfe |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
We have one hypervisor available:
[stack@stand0 ~]$ openstack hypervisor list
+----+-----------------------+-----------------+----------------+-------+
| ID | Hypervisor Hostname   | Hypervisor Type | Host IP        | State |
+----+-----------------------+-----------------+----------------+-------+
|  1 | stand0.lan.redhat.com | QEMU            | 192.168.168.95 | up    |
+----+-----------------------+-----------------+----------------+-------+
Disable the quota limitations:
[stack@stand0 ~]$ openstack quota set --secgroups -1 --secgroup-rules -1 --cores -1 --ram -1 --gigabytes -1 admin
Setup the OpenStack network
Export the network configuration variables:
export OS_CLOUD=standalone
export GATEWAY=192.168.168.254
export STANDALONE_HOST=192.168.168.95
export PUBLIC_NETWORK_CIDR=192.168.168.0/24
export PRIVATE_NETWORK_CIDR=172.16.16.0/24
export PUBLIC_NET_START=192.168.168.230
export PUBLIC_NET_END=192.168.168.240
export DNS_SERVER=10.2.2.31
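A quick sanity check (a sketch, assumes /24 networks) that the floating IP pool boundaries really sit inside the public CIDR:

```shell
# Compare the first three octets of each pool boundary with the network prefix
PUBLIC_NETWORK_CIDR=192.168.168.0/24
PUBLIC_NET_START=192.168.168.230
PUBLIC_NET_END=192.168.168.240
net_prefix=${PUBLIC_NETWORK_CIDR%.*}    # strips ".0/24" -> 192.168.168
for ip in "$PUBLIC_NET_START" "$PUBLIC_NET_END"; do
  [ "${ip%.*}" = "$net_prefix" ] && echo "$ip is inside $PUBLIC_NETWORK_CIDR"
done
```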
Create the external network:
[stack@stand0 ~]$ openstack network create --external --provider-physical-network datacentre --provider-network-type flat public
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-12-06T23:39:41Z |
| description | |
| dns_domain | None |
| id | fa850b25-e0a0-492e-a8ea-6670e3448c91 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| location | Munch({'cloud': 'standalone', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': '2c57788b80fe4fcf95e46e96bdda147c', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| mtu | 1500 |
| name | public |
| port_security_enabled | True |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| provider:network_type | flat |
| provider:physical_network | datacentre |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 2 |
| router:external | External |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2019-12-06T23:39:41Z |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Create the internal network:
[stack@stand0 ~]$ openstack network create --internal private
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-12-06T23:40:12Z |
| description | |
| dns_domain | None |
| id | a6ad587d-aaed-450a-8c8a-62a284684b0c |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| location | Munch({'cloud': 'standalone', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': '2c57788b80fe4fcf95e46e96bdda147c', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| mtu | 1442 |
| name | private |
| port_security_enabled | True |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| provider:network_type | geneve |
| provider:physical_network | None |
| provider:segmentation_id | 83 |
| qos_policy_id | None |
| revision_number | 2 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2019-12-06T23:40:12Z |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Create the public subnet:
[stack@stand0 ~]$ openstack subnet create public-net \
--subnet-range $PUBLIC_NETWORK_CIDR \
--no-dhcp \
--gateway $GATEWAY \
--allocation-pool start=$PUBLIC_NET_START,end=$PUBLIC_NET_END \
--network public
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools | 192.168.168.230-192.168.168.240 |
| cidr | 192.168.168.0/24 |
| created_at | 2019-12-06T23:40:44Z |
| description | |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.168.254 |
| host_routes | |
| id | cf92a3f6-876f-4f7a-878d-12f8bee8acbf |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| location | Munch({'cloud': 'standalone', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': '2c57788b80fe4fcf95e46e96bdda147c', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| name | public-net |
| network_id | fa850b25-e0a0-492e-a8ea-6670e3448c91 |
| prefix_length | None |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2019-12-06T23:40:44Z |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Create the private subnet:
[stack@stand0 ~]$ openstack subnet create private-net \
--subnet-range $PRIVATE_NETWORK_CIDR \
--network private
Create the router:
[stack@stand0 ~]$ openstack router create vrouter
+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | None |
| availability_zones | None |
| created_at | 2019-12-06T23:41:37Z |
| description | |
| external_gateway_info | None |
| flavor_id | None |
| id | f847b457-4f9b-44f6-934c-3d98f1fb0726 |
| location | Munch({'cloud': 'standalone', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': '2c57788b80fe4fcf95e46e96bdda147c', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| name | vrouter |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| revision_number | 0 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2019-12-06T23:41:37Z |
+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Enable the router public gateway:
[stack@stand0 ~]$ openstack router set vrouter --external-gateway public
Create floating IPs in the admin tenant:
[stack@stand0 ~]$ openstack floating ip create public
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2019-12-06T23:43:07Z |
| description | |
| dns_domain | None |
| dns_name | None |
| fixed_ip_address | None |
| floating_ip_address | 192.168.168.231 |
| floating_network_id | fa850b25-e0a0-492e-a8ea-6670e3448c91 |
| id | 017c2861-02dd-444f-bc52-c44fc760af42 |
| location | Munch({'cloud': 'standalone', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': '2c57788b80fe4fcf95e46e96bdda147c', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| name | 192.168.168.231 |
| port_details | None |
| port_id | None |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| qos_policy_id | None |
| revision_number | 0 |
| router_id | None |
| status | DOWN |
| subnet_id | None |
| tags | [] |
| updated_at | 2019-12-06T23:43:07Z |
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[stack@stand0 ~]$ openstack floating ip create public
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2019-12-06T23:44:25Z |
| description | |
| dns_domain | None |
| dns_name | None |
| fixed_ip_address | None |
| floating_ip_address | 192.168.168.239 |
| floating_network_id | fa850b25-e0a0-492e-a8ea-6670e3448c91 |
| id | 4c64c9a6-8de0-444a-96d8-5cd43acae729 |
| location | Munch({'cloud': 'standalone', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': '2c57788b80fe4fcf95e46e96bdda147c', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| name | 192.168.168.239 |
| port_details | None |
| port_id | None |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| qos_policy_id | None |
| revision_number | 0 |
| router_id | None |
| status | DOWN |
| subnet_id | None |
| tags | [] |
| updated_at | 2019-12-06T23:44:25Z |
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Create security groups:
[stack@stand0 ~]$ openstack security group create basic
[stack@stand0 ~]$ openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
[stack@stand0 ~]$ openstack security group rule create --protocol icmp basic
[stack@stand0 ~]$ openstack security group rule create --protocol udp --dst-port 53:53 basic
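The three rules above can also be expressed as a loop. This sketch only echoes the resulting commands (drop `echo` to actually run them against the cloud):

```shell
# Dry run: print one "openstack security group rule create" command per rule spec
for rule in "--protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0" \
            "--protocol icmp" \
            "--protocol udp --dst-port 53:53"; do
  echo openstack security group rule create $rule basic
done
```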
Check the router:
[stack@stand0 ~]$ openstack router list
+--------------------------------------+---------+--------+-------+----------------------------------+
| ID | Name | Status | State | Project |
+--------------------------------------+---------+--------+-------+----------------------------------+
| f847b457-4f9b-44f6-934c-3d98f1fb0726 | vrouter | ACTIVE | UP | 2c57788b80fe4fcf95e46e96bdda147c |
+--------------------------------------+---------+--------+-------+----------------------------------+
[stack@stand0 ~]$ openstack router show vrouter
+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | None |
| availability_zones | None |
| created_at | 2019-12-06T23:41:37Z |
| description | |
| external_gateway_info | {"network_id": "fa850b25-e0a0-492e-a8ea-6670e3448c91", "external_fixed_ips": [{"subnet_id": "cf92a3f6-876f-4f7a-878d-12f8bee8acbf", "ip_address": "192.168.168.230"}], "enable_snat": true} |
| flavor_id | None |
| id | f847b457-4f9b-44f6-934c-3d98f1fb0726 |
| interfaces_info | [{"port_id": "a88bc3cf-69a7-4b4e-9320-90fa46dd4bee", "ip_address": "172.16.16.1", "subnet_id": "cad5c9b8-54a1-48e9-83b9-191f51eec837"}] |
| location | Munch({'cloud': 'standalone', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': '2c57788b80fe4fcf95e46e96bdda147c', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| name | vrouter |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| revision_number | 3 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2019-12-06T23:42:16Z |
+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
List networks:
[stack@stand0 ~]$ openstack network list
+--------------------------------------+---------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+--------------------------------------+
| a6ad587d-aaed-450a-8c8a-62a284684b0c | private | cad5c9b8-54a1-48e9-83b9-191f51eec837 |
| fa850b25-e0a0-492e-a8ea-6670e3448c91 | public | cf92a3f6-876f-4f7a-878d-12f8bee8acbf |
+--------------------------------------+---------+--------------------------------------+
Boot an instance with the CLI
You can download a RHEL 8.1 KVM qcow2 guest image, called “Red Hat Enterprise Linux 8.1 KVM Guest Image”, here: https://access.redhat.com/downloads/content/479/ver=/rhel---8/8.1/x86_64/product-software
[stack@stand0 ~]$ wget "https://access.cdn.redhat.com/content/origin/files/sha256/30/XXXXXXXXXX/rhel-8.1-x86_64-kvm.qcow2?user=XXXXXXXX&_auth_XXXXXXXX"
Create the RHEL 8.1 image in Glance:
[stack@stand0 ~]$ openstack image create rhel81 --file rhel-8.1-x86_64-kvm.qcow2 --disk-format qcow2 --container-format bare --public
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | c6434c959c0fba9faa7a6af3bab146ae |
| container_format | bare |
| created_at | 2019-12-06T23:56:06Z |
| disk_format | qcow2 |
| file | /v2/images/40e7914c-b66c-4d08-b05f-ebb8b07d4b9a/file |
| id | 40e7914c-b66c-4d08-b05f-ebb8b07d4b9a |
| min_disk | 0 |
| min_ram | 0 |
| name | rhel81 |
| owner | 2c57788b80fe4fcf95e46e96bdda147c |
| properties | direct_url='swift+config://ref1/glance/40e7914c-b66c-4d08-b05f-ebb8b07d4b9a', os_hash_algo='sha512', os_hash_value='4565e79f1f9be64c9c527647b8ee36389a06d1f133765b095dfb49c5831411df9e1c91f6319096f7e3aae2259168b96bd8b2a603ac8d952d895e53ebd570e572', os_hidden='False' |
| protected | False |
| schema | /v2/schemas/image |
| size | 780093440 |
| status | active |
| tags | |
| updated_at | 2019-12-06T23:56:14Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Create a flavor with 32GB of RAM:
[stack@stand0 ~]$ openstack flavor create --ram 32768 --disk 40 --vcpus 4 m1.large
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 40 |
| id | 1074e95e-e447-414e-a5a2-522c7e262d0f |
| name | m1.large |
| os-flavor-access:is_public | True |
| properties | |
| ram | 32768 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+
Import your SSH public key as a keypair:
[stack@stand0 .ssh]$ openstack keypair create --public-key ~/.ssh/id_rsa_lambda.pub lambda
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | df:f5:b2:b1:46:d0:13:32:6e:9b:dc:b8:71:a9:87:39 |
| name | lambda |
| user_id | 990cb75ee15e4a929655e21794704dfe |
+-------------+-------------------------------------------------+
Launch an instance with 32GB of RAM:
[stack@stand0 ~]$ openstack server create --flavor m1.large --image rhel81 --network private --security-group basic --key-name lambda server1
+-------------------------------------+-------------------------------------------------+
| Field | Value |
+-------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | j9WoTzj2ZX5n |
| config_drive | |
| created | 2019-12-07T00:09:06Z |
| flavor | m1.large (1074e95e-e447-414e-a5a2-522c7e262d0f) |
| hostId | |
| id | 12b79989-ad95-431c-993d-be556950cedf |
| image | rhel81 (40e7914c-b66c-4d08-b05f-ebb8b07d4b9a) |
| key_name | lambda |
| name | server1 |
| progress | 0 |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| properties | |
| security_groups | name='427a2d68-60d3-4b89-8080-a3850361d3ef' |
| status | BUILD |
| updated | 2019-12-07T00:09:07Z |
| user_id | 990cb75ee15e4a929655e21794704dfe |
| volumes_attached | |
+-------------------------------------+-------------------------------------------------+
Launch a second instance:
[stack@stand0 ~]$ openstack server create --flavor m1.large --image rhel81 --network private --security-group basic --key-name lambda server2
+-------------------------------------+-------------------------------------------------+
| Field | Value |
+-------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | Tzu7rjfWzAKf |
| config_drive | |
| created | 2019-12-07T00:09:36Z |
| flavor | m1.large (1074e95e-e447-414e-a5a2-522c7e262d0f) |
| hostId | |
| id | 6409f6b6-e091-4d64-b7da-227bfc576c1a |
| image | rhel81 (40e7914c-b66c-4d08-b05f-ebb8b07d4b9a) |
| key_name | lambda |
| name | server2 |
| progress | 0 |
| project_id | 2c57788b80fe4fcf95e46e96bdda147c |
| properties | |
| security_groups | name='427a2d68-60d3-4b89-8080-a3850361d3ef' |
| status | BUILD |
| updated | 2019-12-07T00:09:36Z |
| user_id | 990cb75ee15e4a929655e21794704dfe |
| volumes_attached | |
+-------------------------------------+-------------------------------------------------+
Attach two floating IPs:
[stack@stand0 ~]$ openstack server add floating ip server1 192.168.168.235
[stack@stand0 ~]$ openstack server add floating ip server2 192.168.168.237
[stack@stand0 ~]$ openstack floating ip list
List servers:
[stack@stand0 ~]$ openstack server list
+--------------------------------------+----------+--------+----------------------------------------+--------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+----------+--------+----------------------------------------+--------+----------+
| 6409f6b6-e091-4d64-b7da-227bfc576c1a | server2 | ACTIVE | private=172.16.16.249, 192.168.168.237 | rhel81 | m1.large |
| 12b79989-ad95-431c-993d-be556950cedf | server1 | ACTIVE | private=172.16.16.242, 192.168.168.235 | rhel81 | m1.large |
| 63d21de6-01d2-42fc-ad75-81068964385e | myserver | ACTIVE | private=172.16.16.219, 192.168.168.231 | cirros | tiny |
+--------------------------------------+----------+--------+----------------------------------------+--------+----------+
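If you need just the floating IPs from that table (to loop SSH over them, for example), they can be pulled out of the Networks column. A minimal sketch, assuming the `private=<fixed>, <floating>` format shown above (sample values inlined here; on a live cloud you would parse `openstack server list -f value -c Networks` instead):

```shell
# Extract the floating IP (second address) from the Networks column
# (sample rows copied from the listing above for illustration)
printf '%s\n' \
  'private=172.16.16.249, 192.168.168.237' \
  'private=172.16.16.242, 192.168.168.235' |
awk -F', ' '{print $2}'
```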
Connect to server1 over SSH:
egallen@laptop ~ % ssh cloud-user@192.168.168.235
The authenticity of host '192.168.168.235 (192.168.168.235)' can't be established.
ECDSA key fingerprint is SHA256:EuM4MrA5BrA45HVvJMAg+YH7L8OBPoXMDRXKOunyxSY.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.168.235' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register
Last login: Fri Dec 6 19:11:01 2019 from 192.168.168.95
[cloud-user@server1 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
link/ether fa:16:3e:e0:93:66 brd ff:ff:ff:ff:ff:ff
inet 172.16.16.242/24 brd 172.16.16.255 scope global dynamic noprefixroute eth0
valid_lft 43052sec preferred_lft 43052sec
inet6 fe80::f816:3eff:fee0:9366/64 scope link
valid_lft forever preferred_lft forever
From server1 we can ping server2 in the internal subnet:
[cloud-user@server1 ~]$ ping 172.16.16.249
PING 172.16.16.249 (172.16.16.249) 56(84) bytes of data.
64 bytes from 172.16.16.249: icmp_seq=1 ttl=64 time=2.82 ms
64 bytes from 172.16.16.249: icmp_seq=2 ttl=64 time=1.11 ms
64 bytes from 172.16.16.249: icmp_seq=3 ttl=64 time=0.568 ms
Check disk configuration:
[stack@stand0 tmp]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 126G 0 126G 0% /dev
tmpfs 126G 54M 126G 1% /dev/shm
tmpfs 126G 108M 126G 1% /run
tmpfs 126G 0 126G 0% /sys/fs/cgroup
/dev/mapper/VolGroup01-root 875G 16G 815G 2% /
/dev/sda1 673M 188M 437M 31% /boot
tmpfs 26G 4.0K 26G 1% /run/user/1000
tmpfs 500M 32M 469M 7% /var/log/heat-launcher
List block devices:
[stack@stand0 ~]$ sudo lsblk -o name,rota
NAME ROTA
loop2 1
|-cinder--volumes-cinder--volumes--pool_tmeta 1
| `-cinder--volumes-cinder--volumes--pool 1
`-cinder--volumes-cinder--volumes--pool_tdata 1
`-cinder--volumes-cinder--volumes--pool 1
sda 0
|-sda1 0
|-sda2 0
`-sda3 0
`-VolGroup01-root 0
sr0 1
Check the Cinder storage; an 11GB backing file has been created:
[stack@stand0 ]$ tree /var/lib/cinder/
/var/lib/cinder/
|-- cinder-volumes
|-- groups
`-- tmp
2 directories, 1 file
[stack@stand0 cinder]$ ls -lah /var/lib/cinder/
total 232K
drwxrwsr-x. 4 42407 42400 4.0K Dec 6 23:19 .
drwxr-xr-x. 68 root root 4.0K Dec 6 22:49 ..
-rw-r--r--. 1 42407 42400 11G Dec 6 23:19 cinder-volumes
drwxr-sr-x. 2 root 42400 4.0K Dec 6 23:19 groups
drwxr-sr-x. 2 root 42400 4.0K Dec 6 23:19 tmp
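Note that cinder-volumes is a sparse file: ls reports its apparent size (11G), not the blocks actually allocated on the SSD. A quick illustration with a throwaway file (assumption: any Linux filesystem with sparse-file support; GNU stat/truncate):

```shell
# A sparse file reports a large apparent size while allocating almost no blocks
f=$(mktemp)
truncate -s 1G "$f"              # apparent size: 1 GiB
apparent=$(stat -c %s "$f")      # 1073741824 bytes
allocated=$(stat -c %b "$f")     # 512-byte blocks actually allocated: ~0
rm -f "$f"
echo "$apparent $allocated"
```

The real space consumption only grows as Cinder writes volume data into the file.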
All services are exposed on the standalone node's IP:
egallen@laptop ~ % sudo nmap -sS 192.168.168.95
Password:
Starting Nmap 7.70 ( https://nmap.org ) at 2019-12-07 09:58 CET
Nmap scan report for stand0 (192.168.168.95)
Host is up (0.083s latency).
Not shown: 921 filtered ports, 60 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
873/tcp open rsync
2022/tcp open down
3306/tcp open mysql
4567/tcp open tram
5000/tcp open upnp
5900/tcp open vnc
5901/tcp open vnc-1
5902/tcp open vnc-2
5903/tcp open vnc-3
5904/tcp open unknown
5906/tcp open unknown
5907/tcp open unknown
6000/tcp open X11
6001/tcp open X11:1
6002/tcp open X11:2
8080/tcp open http-proxy
9200/tcp open wap-wsp
Nmap done: 1 IP address (1 host up) scanned in 4.69 seconds
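nmap labels ports from its services database, so names like "tram" (4567) or "upnp" (5000) are misleading here. A hedged mapping of the notable ports to what is actually listening (assumption: default OpenStack standalone port assignments):

```shell
# Map the well-known ports nmap found to their real services
for p in 80 3306 4567 5000 5900 8080; do
  case $p in
    80)   s="Horizon dashboard" ;;
    3306) s="MariaDB" ;;
    4567) s="Galera replication" ;;
    5000) s="Keystone API" ;;
    5900) s="VNC console (one port per instance)" ;;
    8080) s="Swift proxy" ;;
  esac
  echo "$p: $s"
done
```

The run of ports from 5900 upwards corresponds to the VNC consoles of the instances booted so far.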
List endpoints:
[stack@stand0 ~]$ openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
| 0aa466789cba4751a4fbc44ca060d23c | regionOne | swift | object-store | True | public | http://192.168.168.95:8080/v1/AUTH_%(tenant_id)s |
| 19bc373d08be405c920f4d3c9a597bb1 | regionOne | nova | compute | True | public | http://192.168.168.95:8774/v2.1 |
| 1bd72b0f379944068b9c3c7b8d87f092 | regionOne | placement | placement | True | admin | http://192.168.168.95:8778/placement |
| 26aab71dab5e48748c7da349ead9119c | regionOne | keystone | identity | True | internal | http://192.168.168.95:5000 |
| 273e6184ef574ef58481bb8d30bfa249 | regionOne | cinderv2 | volumev2 | True | public | http://192.168.168.95:8776/v2/%(tenant_id)s |
| 30964ba2bc01468aa1e5c88cb5d2f1c6 | regionOne | keystone | identity | True | public | http://192.168.168.95:5000 |
| 3879f695cd5b42a48e27cdd1839bf940 | regionOne | neutron | network | True | public | http://192.168.168.95:9696 |
| 3e8a75450dcc4938b1e5a56138d17b20 | regionOne | neutron | network | True | internal | http://192.168.168.95:9696 |
| 3e91810991ea46a3b91e7c55add35f1e | regionOne | cinderv2 | volumev2 | True | admin | http://192.168.168.95:8776/v2/%(tenant_id)s |
| 40ece28a5ef641c5a077f27a4826198f | regionOne | cinderv3 | volumev3 | True | internal | http://192.168.168.95:8776/v3/%(tenant_id)s |
| 458da1452aab4f9dbaca94358e64ca15 | regionOne | nova | compute | True | admin | http://192.168.168.95:8774/v2.1 |
| 513989da83c64b2c9dea8c4bb14b2e6a | regionOne | keystone | identity | True | admin | http://192.168.168.95:35357 |
| 65817ce86650456ab1ac6e09f5984267 | regionOne | neutron | network | True | admin | http://192.168.168.95:9696 |
| 65fbd9748f6640c5b999202a3737f701 | regionOne | placement | placement | True | public | http://192.168.168.95:8778/placement |
| 740adc6ebe374304b6b9d40a567ef020 | regionOne | glance | image | True | admin | http://192.168.168.95:9292 |
| 9dec2c6d7b1b43e5a35ce33faf8de5bc | regionOne | placement | placement | True | internal | http://192.168.168.95:8778/placement |
| c91076bc6e6b437bade52885c7d143f3 | regionOne | swift | object-store | True | internal | http://192.168.168.95:8080/v1/AUTH_%(tenant_id)s |
| cc7deab1c6074b6e87f1038cef7ba4fc | regionOne | cinderv3 | volumev3 | True | admin | http://192.168.168.95:8776/v3/%(tenant_id)s |
| cef40ae30834468e964b35ad762641fc | regionOne | glance | image | True | public | http://192.168.168.95:9292 |
| cfb2d4781fb2423d99ad91deb4bc2546 | regionOne | glance | image | True | internal | http://192.168.168.95:9292 |
| e4006b6d26ff41108fd9b881e553d20a | regionOne | cinderv3 | volumev3 | True | public | http://192.168.168.95:8776/v3/%(tenant_id)s |
| eb222fe6133c498191c501a76c61fcb8 | regionOne | swift | object-store | True | admin | http://192.168.168.95:8080 |
| eecc169f1a0746d5855b2cc73c22ca8e | regionOne | nova | compute | True | internal | http://192.168.168.95:8774/v2.1 |
| f54764bc0ce6401da98e1bffa891d22b | regionOne | cinderv2 | volumev2 | True | internal | http://192.168.168.95:8776/v2/%(tenant_id)s |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
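To script against the catalog, you can filter that table down to, say, the public endpoints. A sketch parsing the pipe-delimited output above (sample rows inlined for illustration; on a live cloud you would pipe `openstack endpoint list` into the awk instead):

```shell
# Print "service URL" for public interfaces only
# (fields after the split: empty | ID | Region | Name | Type | Enabled | Interface | URL)
printf '%s\n' \
  '| 19bc37 | regionOne | nova | compute | True | public | http://192.168.168.95:8774/v2.1 |' \
  '| 458da1 | regionOne | nova | compute | True | admin | http://192.168.168.95:8774/v2.1 |' |
awk -F'|' '{gsub(/ /,"")} $7 == "public" {print $4, $8}'
```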
You can get your overcloud password here:
[stack@stand0 ~]$ cat $HOME/undercloud-passwords.conf | grep undercloud_admin_password
undercloud_admin_password: J0XXXXXXXX12abcdef
Prepare the overcloudrc credential file (you can also download an RC file from Horizon):
[stack@stand0 ~]$ cat <<'EOF' > $HOME/overcloudrc
# OpenStack client configuration
for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done
export OS_NO_CACHE=True
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_VOLUME_API_VERSION=3
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.168.168.95:5000
export NOVA_VERSION=1.1
export OS_IMAGE_API_VERSION=2
export OS_PASSWORD=J0XXXXXXXX12abcdef
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME=admin
export OS_AUTH_TYPE=password
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
# Add OS_CLOUDNAME to PS1
if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
    export PS1=${PS1:-""}
    export PS1=${OS_CLOUDNAME:+"($OS_CLOUDNAME)"}\ $PS1
    export CLOUDPROMPT_ENABLED=1
fi
EOF
Test the credential file and check running instances:
[stack@stand0 ~]$ source overcloudrc
(overcloud) [stack@stand0 ~]$ openstack server list
+--------------------------------------+---------+--------+----------------------------------------+--------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+---------+--------+----------------------------------------+--------+----------+
| 57c9569b-63d1-4818-b095-8dd3355c9199 | server7 | ACTIVE | private=172.16.16.92 | rhel81 | m1.large |
| 18548a42-a59b-49d0-bb2d-a4080bf944b0 | server6 | ACTIVE | private=172.16.16.60 | rhel81 | m1.large |
| a235dca9-ac2d-4dc4-8172-8df3b2aa1bfd | server5 | ACTIVE | private=172.16.16.36 | rhel81 | m1.large |
| 6598831d-b10b-48e4-867f-c717e92fecb7 | server4 | ACTIVE | private=172.16.16.84 | rhel81 | m1.large |
| 732f39d2-9512-4750-952b-359128471a53 | server3 | ACTIVE | private=172.16.16.129, 192.168.168.238 | rhel81 | m1.large |
| 6409f6b6-e091-4d64-b7da-227bfc576c1a | server2 | ACTIVE | private=172.16.16.249, 192.168.168.237 | rhel81 | m1.large |
| 12b79989-ad95-431c-993d-be556950cedf | server1 | ACTIVE | private=172.16.16.242, 192.168.168.235 | rhel81 | m1.large |
+--------------------------------------+---------+--------+----------------------------------------+--------+----------+
Check network configuration:
[stack@stand0 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 20:04:0f:eb:8e:a8 brd ff:ff:ff:ff:ff:ff
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 20:04:0f:eb:8e:a9 brd ff:ff:ff:ff:ff:ff
4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 20:04:0f:eb:8e:aa brd ff:ff:ff:ff:ff:ff
5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 20:04:0f:eb:8e:ab brd ff:ff:ff:ff:ff:ff
6: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
link/ether f8:f2:1e:31:65:50 brd ff:ff:ff:ff:ff:ff
inet6 fe80::faf2:1eff:fe31:6550/64 scope link
valid_lft forever preferred_lft forever
7: ens1f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether f8:f2:1e:31:65:51 brd ff:ff:ff:ff:ff:ff
8: ens1f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether f8:f2:1e:31:65:52 brd ff:ff:ff:ff:ff:ff
9: ens1f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether f8:f2:1e:31:65:53 brd ff:ff:ff:ff:ff:ff
10: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether aa:4d:c4:b4:f3:4e brd ff:ff:ff:ff:ff:ff
11: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether f8:f2:1e:31:65:50 brd ff:ff:ff:ff:ff:ff
inet 192.168.168.95/24 brd 192.168.168.255 scope global br-ctlplane
valid_lft forever preferred_lft forever
inet6 2620:52:0:2e04:faf2:1eff:fe31:6550/64 scope global dynamic mngtmpaddr
valid_lft 2591596sec preferred_lft 604396sec
inet6 fe80::faf2:1eff:fe31:6550/64 scope link
valid_lft forever preferred_lft forever
12: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN group default qlen 1000
link/ether 66:85:13:cf:9b:49 brd ff:ff:ff:ff:ff:ff
14: tap4a0f1a1a-b0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether c6:b5:d9:a6:51:77 brd ff:ff:ff:ff:ff:ff link-netns ovnmeta-4a0f1a1a-b5ef-4023-b8d2-0b7a9ea1eb43
inet6 fe80::c4b5:d9ff:fea6:5177/64 scope link
valid_lft forever preferred_lft forever
19: tapbdc9895c-fc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:16:3e:e0:93:66 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fee0:9366/64 scope link
valid_lft forever preferred_lft forever
20: tapf57fd857-31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:16:3e:9a:f6:b6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe9a:f6b6/64 scope link
valid_lft forever preferred_lft forever
23: tap6571426b-b1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:16:3e:9b:b7:19 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe9b:b719/64 scope link
valid_lft forever preferred_lft forever
24: tap6a4714e5-4c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:16:3e:97:e3:97 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe97:e397/64 scope link
valid_lft forever preferred_lft forever
25: tap42eccd40-7f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:16:3e:3d:b4:03 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe3d:b403/64 scope link
valid_lft forever preferred_lft forever
26: tapf31f458e-a1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:16:3e:bb:49:5c brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:febb:495c/64 scope link
valid_lft forever preferred_lft forever
28: tap338f9b0a-f7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:16:3e:80:d3:24 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe80:d324/64 scope link
valid_lft forever preferred_lft forever
29: tap041da271-11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:16:3e:05:d5:58 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe05:d558/64 scope link
valid_lft forever preferred_lft forever
Check Open vSwitch configuration:
[stack@stand0 ~]$ sudo ovs-vsctl show
306a5c45-cd3c-4264-9244-847d377662a8
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-int
fail_mode: secure
Port "tapf31f458e-a1"
Interface "tapf31f458e-a1"
Port "tap338f9b0a-f7"
Interface "tap338f9b0a-f7"
Port "tapf57fd857-31"
Interface "tapf57fd857-31"
Port "patch-br-int-to-provnet-fa850b25-e0a0-492e-a8ea-6670e3448c91"
Interface "patch-br-int-to-provnet-fa850b25-e0a0-492e-a8ea-6670e3448c91"
type: patch
options: {peer="patch-provnet-fa850b25-e0a0-492e-a8ea-6670e3448c91-to-br-int"}
Port br-int
Interface br-int
type: internal
Port "tap6a4714e5-4c"
Interface "tap6a4714e5-4c"
Port "tap041da271-11"
Interface "tap041da271-11"
Port "tapbdc9895c-fc"
Interface "tapbdc9895c-fc"
Port "tap6571426b-b1"
Interface "tap6571426b-b1"
Port "tap42eccd40-7f"
Interface "tap42eccd40-7f"
Port "tap4a0f1a1a-b0"
Interface "tap4a0f1a1a-b0"
Bridge br-ctlplane
fail_mode: standalone
Port br-ctlplane
Interface br-ctlplane
type: internal
Port "ens1f0"
Interface "ens1f0"
Port "patch-provnet-fa850b25-e0a0-492e-a8ea-6670e3448c91-to-br-int"
Interface "patch-provnet-fa850b25-e0a0-492e-a8ea-6670e3448c91-to-br-int"
type: patch
options: {peer="patch-br-int-to-provnet-fa850b25-e0a0-492e-a8ea-6670e3448c91"}
ovs_version: "2.11.0"
Check routes:
[stack@stand0 ~]$ ip r
default via 192.168.168.254 dev br-ctlplane
192.168.168.0/24 dev br-ctlplane proto kernel scope link src 192.168.168.95
169.254.0.0/16 dev ens1f0 scope link metric 1006
169.254.0.0/16 dev br-ctlplane scope link metric 1011
List namespaces:
[stack@stand0 ~]$ lsns
NS TYPE NPROCS PID USER COMMAND
4026531835 cgroup 4 8149 stack /usr/lib/systemd/systemd --user
4026531836 pid 4 8149 stack /usr/lib/systemd/systemd --user
4026531837 user 3 8149 stack /usr/lib/systemd/systemd --user
4026531838 uts 4 8149 stack /usr/lib/systemd/systemd --user
4026531839 ipc 4 8149 stack /usr/lib/systemd/systemd --user
4026531840 mnt 3 8149 stack /usr/lib/systemd/systemd --user
4026532056 net 4 8149 stack /usr/lib/systemd/systemd --user
4026532638 user 1 38142 stack podman pause
4026532639 mnt 1 38142 stack podman pause
The libvirt client is only available inside the Nova compute container:
[stack@stand0 ~]$ sudo virsh list
sudo: virsh: command not found
Check the running VMs from an interactive podman shell:
[stack@stand0 ~]$ sudo podman ps -a | grep nova_compute | grep "Up"
8d4a1898e2c3 registry.redhat.io/rhosp15-rhel8/openstack-nova-compute:15.0-80 dumb-init --singl... 9 hours ago Up 9 hours ago nova_compute
[stack@stand0 ~]$ sudo podman exec --user 0 -it nova_compute /bin/bash
()[root@stand0 /]# virsh list
Id Name State
-----------------------------------
6 instance-00000006 running
7 instance-00000007 running
10 instance-0000000a running
11 instance-0000000b running
12 instance-0000000c running
13 instance-0000000d running
15 instance-0000000f running
16 instance-00000010 running
()[root@stand0 /]# exit
exit
[stack@stand0 ~]$ sudo podman ps -a | grep nova_compute | grep "Up"
8d4a1898e2c3 registry.redhat.io/rhosp15-rhel8/openstack-nova-compute:15.0-80 dumb-init --singl... 9 hours ago Up 9 hours ago nova_compute
Or run virsh directly through podman, without an interactive shell:
[stack@stand0 ~]$ sudo podman exec --user 0 -it nova_compute virsh list
Id Name State
-----------------------------------
6 instance-00000006 running
7 instance-00000007 running
10 instance-0000000a running
11 instance-0000000b running
12 instance-0000000c running
13 instance-0000000d running
15 instance-0000000f running
16 instance-00000010 running
You can change the configuration files, for example to increase initial_cpu_allocation_ratio:
[stack@stand0 ~]$ sudo cat /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf | grep cpu_allocation_rati
# ``initial_cpu_allocation_ratio``.
# * ``initial_cpu_allocation_ratio``
#cpu_allocation_ratio=<None>
# * ``cpu_allocation_ratio``
#initial_cpu_allocation_ratio=16.0
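To see what that ratio means in practice: the scheduler multiplies the physical threads by the allocation ratio to get the number of schedulable vCPUs. A back-of-the-envelope sketch for this 40-thread host, assuming the default ratio of 16.0 shown above:

```shell
# Schedulable vCPUs = host threads x cpu allocation ratio
threads=40    # 2 x Xeon Silver 4114, 40 threads total
ratio=16      # default initial_cpu_allocation_ratio
echo $((threads * ratio))   # prints 640
```

So the seven 4-vCPU instances above consume only a small fraction of the schedulable capacity; RAM, not CPU, is the binding constraint with 32GB flavors.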
Check the name of the systemd service:
[stack@stand0 ~]$ sudo systemctl | grep nova_compute
tripleo_nova_compute.service loaded active running nova_compute container
tripleo_nova_compute_healthcheck.timer loaded active waiting nova_compute container healthcheck
Check the status before the restart:
[stack@stand0 ~]$ sudo systemctl status tripleo_nova_compute
● tripleo_nova_compute.service - nova_compute container
Loaded: loaded (/etc/systemd/system/tripleo_nova_compute.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-12-06 23:19:05 UTC; 9h ago
Main PID: 144789 (conmon)
Tasks: 0 (limit: 32767)
Memory: 6.7M
CGroup: /system.slice/tripleo_nova_compute.service
‣ 144789 /usr/libexec/podman/conmon -s -c 8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe -u 8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe -n nova_compute -r /usr/bin/runc>
Dec 06 23:19:04 stand0.lan.redhat.com systemd[1]: Starting nova_compute container...
Dec 06 23:19:04 stand0.lan.redhat.com podman[144700]: 2019-12-06 23:19:04.534641242 +0000 UTC m=+0.299766881 container init 8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe (image=registry.red>
Dec 06 23:19:04 stand0.lan.redhat.com podman[144700]: 2019-12-06 23:19:04.558007868 +0000 UTC m=+0.323133498 container start 8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe (image=registry.re>
Dec 06 23:19:04 stand0.lan.redhat.com paunch-start-podman-container[144696]: nova_compute
Dec 06 23:19:04 stand0.lan.redhat.com paunch-start-podman-container[144696]: Creating additional drop-in dependency for "nova_compute" (8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe)
Dec 06 23:19:05 stand0.lan.redhat.com systemd[1]: Started nova_compute container.
[stack@stand0 ~]$ sudo podman ps -a | grep nova_compute
8d4a1898e2c3 registry.redhat.io/rhosp15-rhel8/openstack-nova-compute:15.0-80 dumb-init --singl... 9 hours ago Up 9 hours ago nova_compute
650e59e1426d registry.redhat.io/rhosp15-rhel8/openstack-nova-compute:15.0-80 dumb-init --singl... 10 hours ago Exited (0) 10 hours ago nova_compute_init_log
Avoid restarting containers with podman commands, because systemd applies a restart policy; use the systemd service commands instead:
[stack@stand0 ~]$ sudo systemctl restart tripleo_nova_compute
Check the Podman container processes:
[stack@stand0 ~]$ sudo podman ps -a | grep nova_compute
8d4a1898e2c3 registry.redhat.io/rhosp15-rhel8/openstack-nova-compute:15.0-80 dumb-init --singl... 9 hours ago Up 4 seconds ago nova_compute
650e59e1426d registry.redhat.io/rhosp15-rhel8/openstack-nova-compute:15.0-80 dumb-init --singl... 10 hours ago Exited (0) 10 hours ago nova_compute_init_log
[stack@stand0 ~]$ sudo systemctl status tripleo_nova_compute
● tripleo_nova_compute.service - nova_compute container
Loaded: loaded (/etc/systemd/system/tripleo_nova_compute.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-07 08:42:41 UTC; 9s ago
Process: 423031 ExecStart=/usr/libexec/paunch-start-podman-container nova_compute (code=exited, status=0/SUCCESS)
Main PID: 423074 (conmon)
Tasks: 0 (limit: 32767)
Memory: 6.6M
CGroup: /system.slice/tripleo_nova_compute.service
‣ 423074 /usr/libexec/podman/conmon -s -c 8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe -u 8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe -n nova_compute -r /usr/bin/runc>
Dec 07 08:42:40 stand0.lan.redhat.com systemd[1]: Starting nova_compute container...
Dec 07 08:42:40 stand0.lan.redhat.com podman[423032]: 2019-12-07 08:42:40.650881266 +0000 UTC m=+0.261061731 container init 8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe (image=registry.red>
Dec 07 08:42:40 stand0.lan.redhat.com podman[423032]: 2019-12-07 08:42:40.676961068 +0000 UTC m=+0.287141537 container start 8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe (image=registry.re>
Dec 07 08:42:40 stand0.lan.redhat.com paunch-start-podman-container[423031]: nova_compute
Dec 07 08:42:40 stand0.lan.redhat.com paunch-start-podman-container[423031]: Creating additional drop-in dependency for "nova_compute" (8d4a1898e2c3c5be8e2e37cf55349a6ed087afa1289aa14b9e0e98c86eebc6fe)
Dec 07 08:42:41 stand0.lan.redhat.com systemd[1]: Started nova_compute container.
Login into the dashboard
The Horizon dashboard is available at this URL: http://192.168.168.95/dashboard/
Dashboard home:
List OpenStack instances:
Network topology:
Hypervisor resources capacities:
Neutron:
Launch one instance with the dashboard
Launch a new instance; this instance will be called “server7”:
Choose the image; here we take our RHEL 8.1 image “rhel81”:
Pick a flavor; we take the “m1.large” created previously:
Choose the internal network and click “Launch Instance”:
The instance is spawning:
The instance is spawned:
The console of the booted server is available: