Using programmatic methods

After reading the overview, you should have a good idea of the resources a compute instance requires in order to run, so we can now begin creating a new instance. There are several methods you can use to create an instance: the dashboard, various command line tools, or an orchestration engine that manages all of the required resources for you. The following sections cover the different programs and methods you can use, from the command line, to create an instance.

Requirements

Before we get started, you will have to source an OpenRC file. This provides the environment variables required to create resources in your project. You can find a guide on how to source an OpenRC file here.
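For example, assuming your OpenRC file has been downloaded as first-project-openrc.sh (the filename is illustrative), you would source it and then confirm that the OS_ variables are set:

$ source first-project-openrc.sh
$ env | grep OS_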

Once this is done, you can follow any of the guides below to create your instance.

Command line methods

The following is assumed:

  • You have installed the OpenStack command line tools

  • You have sourced an OpenRC file

The following steps are broken down to show how each individual part is done. Even if you already have some of the required elements in place, we recommend working through all of the steps so that you get a full picture of how the individual pieces fit together.

Note

This documentation refers to values using placeholders (such as <PRIVATE_SUBNET_ID>) in example command output. The majority of these values will be displayed as UUIDs in your output. Many of these values are stored in bash variables prefixed with CC_ so you do not have to cut and paste them. The CC_ (Catalyst Cloud) prefix is used to distinguish these variables from the OS_ (OpenStack) variables obtained from an OpenRC file.

The first thing we have to do is create the required network resources to host our instance:

Using the following commands we will create a router called “border-router” with a gateway to “public-net”, and a private network called “private-net”:

Note

If you have completed one of the other tutorials, make sure that you create your networks and routers with different names to the ones used in the previous tutorials.

$ openstack router create border-router
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | UP                                   |
| external_gateway_info | null                                 |
| headers               |                                      |
| id                    | <BORDER_ROUTER_ID>                   |
| name                  | border-router                        |
| project_id            | <PROJECT_ID>                         |
| routes                |                                      |
| status                | ACTIVE                               |
+-----------------------+--------------------------------------+

$ openstack router set border-router --external-gateway public-net

$ openstack network create private-net
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | UP                                   |
| headers         |                                      |
| id              | <PRIVATE_NETWORK_ID>                 |
| mtu             | 0                                    |
| name            | private-net                          |
| project_id      | <PROJECT_ID>                         |
| router:external | Internal                             |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
+-----------------+--------------------------------------+

Next, set your DNS Name Server variables. Then create a subnet of the “private-net” network, assigning the appropriate DNS server to that subnet.

$ if [[ $OS_REGION_NAME == "nz_wlg_2" ]]; then export CC_NAMESERVER_1=202.78.240.213 CC_NAMESERVER_2=202.78.240.214 CC_NAMESERVER_3=202.78.240.215; \
elif [[ $OS_REGION_NAME == "nz-por-1" ]]; then export CC_NAMESERVER_1=202.78.247.197 CC_NAMESERVER_2=202.78.247.198 CC_NAMESERVER_3=202.78.247.199; \
elif [[ $OS_REGION_NAME == "nz-hlz-1" ]]; then export CC_NAMESERVER_1=202.78.244.85 CC_NAMESERVER_2=202.78.244.86 CC_NAMESERVER_3=202.78.244.87; \
else echo 'please set OS_REGION_NAME'; fi;

$ openstack subnet create --allocation-pool start=10.0.0.10,end=10.0.0.200 --dns-nameserver $CC_NAMESERVER_1 --dns-nameserver $CC_NAMESERVER_2 \
--dns-nameserver $CC_NAMESERVER_3 --dhcp --network private-net --subnet-range 10.0.0.0/24 private-subnet
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | 10.0.0.10-10.0.0.200                           |
| cidr              | 10.0.0.0/24                                    |
| dns_nameservers   | <NAMESERVER_1>,<NAMESERVER_2>,<NAMESERVER_3>   |
| enable_dhcp       | True                                           |
| gateway_ip        | 10.0.0.1                                       |
| headers           |                                                |
| host_routes       |                                                |
| id                | <PRIVATE_SUBNET_ID>                            |
| ip_version        | 4                                              |
| ipv6_address_mode | None                                           |
| ipv6_ra_mode      | None                                           |
| name              | private-subnet                                 |
| network_id        | <PRIVATE_NETWORK_ID>                           |
| project_id        | <PROJECT_ID>                                   |
| subnetpool_id     | None                                           |
+-------------------+------------------------------------------------+

Now create a router interface on the “private-subnet” subnet:

$ openstack router add subnet border-router private-subnet

After this we choose a Flavor for our instance:

The Flavor of an instance specifies the disk, CPU, and memory allocated to it. Use openstack flavor list to see a list of available configurations.

Note

Catalyst flavors are named ‘cX.cYrZ’, where X is the “compute generation”, Y is the number of vCPUs, and Z is the number of gigabytes of memory.
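For example, the c1.c2r8 flavor is a first generation compute flavor with 2 vCPUs and 8 GB of RAM. As a quick sanity check (a minimal illustration using one flavor name from the listing below), you can display just those fields:

$ openstack flavor show c1.c2r8 -c vcpus -c ram -c disk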

Choose a Flavor ID, assign it to an environment variable, then export for later use:

$ openstack flavor list
+--------------------------------------+-----------+-------+------+-----------+-------+-----------+
| ID                                   | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+-------+------+-----------+-------+-----------+
| 01b42bbc-347f-43e8-9a07-0a51105a5527 | c1.c8r8   |  8192 |   10 |         0 |     8 | True      |
| 0c7dc485-e7cc-420d-b118-021bbafa76d7 | c1.c2r8   |  8192 |   10 |         0 |     2 | True      |
| 0f3be84b-9d6e-44a8-8c3d-8a0dfe226674 | c1.c16r16 | 16384 |   10 |         0 |    16 | True      |
| 1750075c-cd8a-4c87-bd06-a907db83fec6 | c1.c1r2   |  2048 |   10 |         0 |     1 | True      |
| 1d760238-67a7-4415-ab7b-24a88a49c117 | c1.c8r32  | 32768 |   10 |         0 |     8 | True      |
| 28153197-6690-4485-9dbc-fc24489b0683 | c1.c1r1   |  1024 |   10 |         0 |     1 | True      |
| 45060aa3-3400-4da0-bd9d-9559e172f678 | c1.c4r8   |  8192 |   10 |         0 |     4 | True      |
| 4efb43da-132e-4b50-a9d9-b73e827938a9 | c1.c2r16  | 16384 |   10 |         0 |     2 | True      |
| 62473bef-f73b-4265-a136-e3ae87e7f1e2 | c1.c4r4   |  4096 |   10 |         0 |     4 | True      |
| 6a16e03f-9127-427c-99aa-3bdbdd58471a | c1.c16r8  |  8192 |   10 |         0 |    16 | True      |
| 746b8230-b763-41a6-954c-b11a29072e52 | c1.c1r4   |  4096 |   10 |         0 |     1 | True      |
| 7b74c2c5-f131-4981-90ef-e1dc1ae51a8f | c1.c8r16  | 16384 |   10 |         0 |     8 | True      |
| 7cd52d7f-9272-47c9-a3ea-e8d7bc30a0bd | c1.c8r64  | 65536 |   10 |         0 |     8 | True      |
| 88597cff-9503-492c-b005-98736f0bd705 | c1.c16r64 | 65536 |   10 |         0 |    16 | True      |
| 92e03684-53d0-4f1e-9222-cf4fbb8ef15d | c1.c16r32 | 32768 |   10 |         0 |    16 | True      |
| a197eac1-9565-4052-8199-dfd8f31e5553 | c1.c8r4   |  4096 |   10 |         0 |     8 | True      |
| a80af444-9e8a-4984-9f7f-b46532052a24 | c1.c4r2   |  2048 |   10 |         0 |     4 | True      |
| b152339e-e624-4705-9116-da9e0a6984f7 | c1.c4r16  | 16384 |   10 |         0 |     4 | True      |
| b4a3f931-dc86-480c-b7a7-c34b2283bfe7 | c1.c4r32  | 32768 |   10 |         0 |     4 | True      |
| c093745c-a6c7-4792-9f3d-085e7782eca6 | c1.c2r4   |  4096 |   10 |         0 |     2 | True      |
| e3feb785-af2e-41f7-899b-6bbc4e0b526e | c1.c2r2   |  2048 |   10 |         0 |     2 | True      |
+--------------------------------------+-----------+-------+------+-----------+-------+-----------+

$ export CC_FLAVOR_ID=$( openstack flavor show c1.c1r1 -f value -c id )

This example assigns a c1.c1r1 flavor to the instance.

Note

Flavor IDs will be different in each region. Remember always to check what is available using openstack flavor list.

Next, we will have to choose an image:

In order to create an instance, we will use a pre-built Operating System known as an Image. Images are stored in the Glance service.

Note

Catalyst provides a number of popular images for general use. If your preferred image is not available, you may upload a custom image to Glance.

Choose an Image ID, assign it to an environment variable, then export for later use:

$ openstack image list --public
+--------------------------------------+---------------------------------+--------+
| ID                                   | Name                            | Status |
+--------------------------------------+---------------------------------+--------+
| 5892a80a-abc4-46f0-b39a-ecb4c0cb5d36 | ubuntu-18.04-x86_64             | active |
| 49fb1409-c88e-4750-a394-56ddea80231d | ubuntu-16.04-x86_64             | active |
| c75df558-7d84-4f97-9a5d-6eb58aeadcce | ubuntu-12.04-x86_64             | active |
| cab9f3f4-a3a5-488b-885e-892873c15f53 | ubuntu-14.04-x86_64             | active |
| f595d7ed-69c0-46b7-a688-a9d12d1e52dc | debian-8-x86_64                 | active |
| 64ce626e-d1c6-41f3-805e-a283e83e4d85 | centos-6.6-x86_64               | active |
| d46fde0f-01b4-4c21-b5a0-0d05df927c49 | centos-7.0-x86_64               | active |
| bfbc68e4-afd6-4384-8790-ecf0ac3dd6a3 | atomic-7-x86_64                 | active |
| b941a846-8cec-4f59-a39e-3720a25823cc | coreos-1068.8.0-x86_64          | active |
| c14d3623-8912-4502-b2cc-0487d9913686 | ubuntu-14.04-x86_64-20160803    | active |
| 08dd4b82-bea9-4f58-8351-6958fe7aae23 | ubuntu-12.04-x86_64-20160803    | active |
| 37b45c3a-2ce4-4a21-980b-d835512eb35a | ubuntu-16.04-x86_64-20160803    | active |
| 881fab19-35c6-410d-8d46-70e7f4db8c89 | centos-7.0-x86_64-20160802      | active |
| bee47bef-78f9-41e5-bc0d-786786fad388 | centos-6.6-x86_64-20160802      | active |
| c1e1cd17-1de4-4100-b280-1d10ee4aa8c0 | atomic-7-x86_64-20160802        | active |
| 3d7b214f-1b67-4c89-bac7-01d449101c76 | debian-8-x86_64-20160802        | active |
| 8c431b2b-1d89-4137-8b79-f288bfe65c9a | windows-server-2012r2-x86_64    | active |
| 98123ffa-18ea-454b-9509-74fc4abee95d | debian-8-x86_64-20160620        | active |
| 2e6ec1de-553b-4fa8-9997-d8366019ac68 | coreos-1010.5.0-x86_64-20160802 | active |
| 0f9a3680-25d6-4efa-b202-32f26b4030e4 | centos-6.6-x86_64-20160620      | active |
| 9e52bf38-addf-4391-8005-224be9113a0f | centos-7.0-x86_64-20160620      | active |
| d3901dfa-1d19-48f9-bfea-163cebeb62d0 | ubuntu-16.04-x86_64-20160621    | active |
| 4edfdb20-3af9-4880-a135-6d5971078460 | ubuntu-12.04-x86_64-20160622    | active |
| ffee7150-70de-48bb-99b9-6cf5666b368c | atomic-7-x86_64-20160620        | active |
| 661b2022-0f50-4783-b398-62113efd6bb2 | ubuntu-14.04-x86_64-20160624    | active |
| f641e7f8-c8ac-4667-9a84-8653716fc1ad | centos-6.5-x86_64               | active |
+--------------------------------------+---------------------------------+--------+

$ export CC_IMAGE_ID=$( openstack image show ubuntu-18.04-x86_64 -f value -c id )

This example uses the ubuntu-18.04-x86_64 image to create the instance.

Note

The number of images that Catalyst provides can be quite large. If you already know which operating system you want, you can use openstack image list --public | grep <OPERATING_SYSTEM> to find it more quickly than scanning the full list. Also note that Image IDs will be different in each region, and images are periodically updated, so Image IDs will change over time. Remember always to check what is available using openstack image list --public.
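For example, to narrow the listing down to the Ubuntu images only (the search term is illustrative; substitute the operating system you are after):

$ openstack image list --public | grep -i ubuntu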

After we have these resources, we need to add an SSH key:

When an instance is created, OpenStack places an SSH key on the instance which can be used for shell access. By default, Ubuntu will install this key for the “ubuntu” user. Other operating systems have a different default user, as listed here: Types of images

Use openstack keypair create to upload your Public SSH key.

Tip

Name the key using information such as your username and the hostname on which the ssh key was generated. This makes the key easy to identify at a later stage.
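For example, a name built from your username and hostname can be generated as shown below (purely illustrative; the rest of this tutorial continues with the name first-instance-key):

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub "${USER}-$(hostname)-key"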

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub first-instance-key
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | <SSH_KEY_FINGERPRINT>                           |
| name        | first-instance-key                              |
| user_id     | <USER_ID>                                       |
+-------------+-------------------------------------------------+

$ openstack keypair list
+--------------------+-------------------------------------------------+
| Name               | Fingerprint                                     |
+--------------------+-------------------------------------------------+
| first-instance-key | <SSH_KEY_FINGERPRINT>                           |
+--------------------+-------------------------------------------------+

Note

Keypairs must be created in each region being used.

Now we choose the network to host our instance:

List the available networks and choose the appropriate one to use. Assign the Network ID to an environment variable and export it for later use.

$ openstack network list
+--------------------------------------+-------------+----------------------------+
| ID                                   | Name        | Subnets                    |
+--------------------------------------+-------------+----------------------------+
| <PUBLIC_NETWORK_ID>                  | public-net  | <PUBLIC_SUBNET_ID>         |
| <PRIVATE_NETWORK_ID>                 | private-net | <PRIVATE_SUBNET_ID>        |
+--------------------------------------+-------------+----------------------------+

$ export CC_PUBLIC_NETWORK_ID=$( openstack network show public-net -f value -c id )
$ export CC_PRIVATE_NETWORK_ID=$( openstack network show private-net -f value -c id )

The public-net is used by routers to access the Internet. Instances cannot be booted on this network; choose “private-net” when assigning a network to the instance.

Note

Network IDs will be different in each region. Remember to always check what is available using openstack network list.

Now that we have our network set up, we will need to create a security group:

For our example instance, we are going to create a security group called “first-instance-sg”.

$ openstack security group create --description 'Network access for our first instance.' first-instance-sg
+-------------+---------------------------------------------------------------------------------+
| Field       | Value                                                                           |
+-------------+---------------------------------------------------------------------------------+
| description | Network access for our first instance.                                          |
| headers     |                                                                                 |
| id          | <SECURITY_GROUP_ID>                                                             |
| name        | first-instance-sg                                                               |
| project_id  | <PROJECT_ID>                                                                    |
| rules       | direction='egress', ethertype='IPv4', id='afc19e4d-a3d3-467f-8da3-3a07d3d59acc' |
|             | direction='egress', ethertype='IPv6', id='e027c9b3-f59b-40bb-b4ea-d44a0f057d7f' |
+-------------+---------------------------------------------------------------------------------+

Create a rule within the “first-instance-sg” security group.

Issue the openstack security group list command to find the SECURITY_GROUP_ID. Assign the Security Group ID to an environment variable and export it for later use.

$ openstack security group list
+--------------------------------------+-------------------+----------------------------------------+----------------------------------+
| ID                                   | Name              | Description                            | Project                          |
+--------------------------------------+-------------------+----------------------------------------+----------------------------------+
| 14aeedb8-5e9c-4617-8cf9-6e072bb41886 | first-instance-sg | Network access for our first instance. | 0cb6b9b744594a619b0b7340f424858b |
| 687512ab-f197-4f07-ae51-788c559883b9 | default           | default                                | 0cb6b9b744594a619b0b7340f424858b |
+--------------------------------------+-------------------+----------------------------------------+----------------------------------+

$ export CC_SECURITY_GROUP_ID=$( openstack security group show first-instance-sg -f value -c id )

Assign the local external IP address to an environment variable and export it for later use:

$ export CC_REMOTE_CIDR_NETWORK="$( dig +short myip.opendns.com @resolver1.opendns.com )/32"
$ echo $CC_REMOTE_CIDR_NETWORK

Note

Ensure that this variable is correctly set; if it is not, set it manually. If you are unsure of what CC_REMOTE_CIDR_NETWORK should be, ask your network administrator, or visit http://ifconfig.me to find your IP address. Use “<IP_ADDRESS>/32” as CC_REMOTE_CIDR_NETWORK to allow traffic only from your current effective IP.
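For example, if your current public IP address were 203.0.113.10 (an illustrative, documentation-only address), you would set the variable manually as follows:

$ export CC_REMOTE_CIDR_NETWORK="203.0.113.10/32"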

Create a rule to restrict SSH access to your instance to the current public IP address:

$ openstack security group rule create --ingress --protocol tcp --dst-port 22 --remote-ip $CC_REMOTE_CIDR_NETWORK $CC_SECURITY_GROUP_ID
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| headers           |                                      |
| id                | <SECURITY_GROUP_RULE_ID>             |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| project_id        | <PROJECT_ID>                         |
| protocol          | tcp                                  |
| remote_group_id   | None                                 |
| remote_ip_prefix  | <REMOTE_CIDR_NETWORK>                |
| security_group_id | 14aeedb8-5e9c-4617-8cf9-6e072bb41886 |
+-------------------+--------------------------------------+

Now we actually create our instance:

Use the openstack server create command, supplying the information obtained in previous steps and exported as environment variables.

Ensure you have appropriate values set for CC_FLAVOR_ID, CC_IMAGE_ID and CC_PRIVATE_NETWORK_ID.

$ env | grep CC_

$ openstack server create --flavor $CC_FLAVOR_ID --image $CC_IMAGE_ID --key-name first-instance-key \
--security-group default --security-group first-instance-sg --nic net-id=$CC_PRIVATE_NETWORK_ID first-instance

As the instance builds, its details will be displayed. These include its ID, represented below by <INSTANCE_ID>.

+--------------------------------------+------------------------------------------------------------+
| Field                                | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          |                                                            |
| OS-EXT-STS:power_state               | NOSTATE                                                    |
| OS-EXT-STS:task_state                | scheduling                                                 |
| OS-EXT-STS:vm_state                  | building                                                   |
| OS-SRV-USG:launched_at               | None                                                       |
| OS-SRV-USG:terminated_at             | None                                                       |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| addresses                            |                                                            |
| adminPass                            | <ADMIN_PASS>                                               |
| config_drive                         |                                                            |
| created                              | 2016-08-17T23:35:32Z                                       |
| flavor                               | c1.c1r1 (28153197-6690-4485-9dbc-fc24489b0683)             |
| hostId                               |                                                            |
| id                                   | <INSTANCE_ID>                                              |
| image                                | ubuntu-18.04-x86_64 (5892a80a-abc4-46f0-b39a-ecb4c0cb5d36) |
| key_name                             | first-instance-key                                         |
| name                                 | first-instance                                             |
| os-extended-volumes:volumes_attached | []                                                         |
| progress                             | 0                                                          |
| project_id                           | <PROJECT_ID>                                               |
| properties                           |                                                            |
| security_groups                      | [{u'name': u'default'}, {u'name': u'first-instance-sg'}]   |
| status                               | BUILD                                                      |
| updated                              | 2016-08-17T23:35:33Z                                       |
| user_id                              | <USER_ID>                                                  |
+--------------------------------------+------------------------------------------------------------+

Note

Observe that the status is BUILD. Catalyst Cloud instances build very quickly, but it still takes a few seconds. Wait a few seconds, then ask for the status of this instance using its <INSTANCE_ID> or name (if unique).
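If you would rather wait on the command line until the instance becomes ACTIVE, a simple polling loop (a sketch that mirrors the approach used by the script later in this section) can be used:

$ until [ "$( openstack server show first-instance -f value -c status )" = "ACTIVE" ]; do sleep 2; done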

$ openstack server show first-instance
+--------------------------------------+------------------------------------------------------------+
| Field                                | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          | nz-por-1a                                                  |
| OS-EXT-STS:power_state               | Running                                                    |
| OS-EXT-STS:task_state                | None                                                       |
| OS-EXT-STS:vm_state                  | active                                                     |
| OS-SRV-USG:launched_at               | 2016-09-02T00:30:13.000000                                 |
| OS-SRV-USG:terminated_at             | None                                                       |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| addresses                            | private-net=10.0.0.12                                      |
| config_drive                         |                                                            |
| created                              | 2016-09-02T00:29:44Z                                       |
| flavor                               | c1.c1r1 (28153197-6690-4485-9dbc-fc24489b0683)             |
| hostId                               | 4f39b132f41c2ab6113d5bbeedab6e1bc0b1a1095949dd64df815077   |
| id                                   | <INSTANCE_ID>                                              |
| image                                | ubuntu-18.04-x86_64 (5892a80a-abc4-46f0-b39a-ecb4c0cb5d36) |
| key_name                             | first-instance-key                                         |
| name                                 | first-instance                                             |
| os-extended-volumes:volumes_attached | []                                                         |
| progress                             | 0                                                          |
| project_id                           | <PROJECT_ID>                                               |
| properties                           |                                                            |
| security_groups                      | [{u'name': u'default'}, {u'name': u'first-instance-sg'}]   |
| status                               | ACTIVE                                                     |
| updated                              | 2016-09-02T00:30:13Z                                       |
| user_id                              | <USER_ID>                                                  |
+--------------------------------------+------------------------------------------------------------+

In order to connect to the instance, we first need to allocate a Floating IP. Use the ID of “public-net” (obtained previously with openstack network list) to request a new Floating IP.

$ openstack floating ip create $CC_PUBLIC_NETWORK_ID
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | None                                 |
| floating_ip_address | <PUBLIC_IP>                          |
| floating_network_id | <PUBLIC_NETWORK_ID>                  |
| headers             |                                      |
| id                  | <FLOATING_IP_ID>                     |
| port_id             | None                                 |
| project_id          | <PROJECT_ID>                         |
| router_id           | None                                 |
| status              | DOWN                                 |
+---------------------+--------------------------------------+

Note

This step can be skipped if Floating IPs already exist. Check this by issuing the command: openstack floating ip list.

$ export CC_FLOATING_IP_ID=$( openstack floating ip list -f value | grep -m 1 'None None' | awk '{ print $1 }' )
$ export CC_PUBLIC_IP=$( openstack floating ip show $CC_FLOATING_IP_ID -f value -c floating_ip_address )

Associate this Floating IP with the instance:

$ openstack server add floating ip first-instance $CC_PUBLIC_IP

Connecting to the Instance should be as easy as:

$ ssh ubuntu@$CC_PUBLIC_IP
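If the key pair you uploaded is not your default SSH identity, point ssh at the matching private key explicitly (the path below assumes the ~/.ssh/id_rsa key pair used earlier):

$ ssh -i ~/.ssh/id_rsa ubuntu@$CC_PUBLIC_IP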

The bash script provided here combines all of the commands from the OpenStack CLI example above into a single script.

Download and run this script using the following commands:

$ wget -q https://raw.githubusercontent.com/catalyst/catalystcloud-docs/master/source/_scripts/create-first-instance.sh
$ chmod 744 create-first-instance.sh
$ ./create-first-instance.sh

Note

Please examine the script carefully before running it, to ensure that its content, function, and impact are thoroughly understood. The script may require editing, for example to add a prefix; see the “VARS” section at the top of the script for more details. You can also change the default DNS settings if you have your own DNS servers you wish to use; otherwise the script will use the Catalyst Cloud DNS servers by default.

#!/bin/bash

# VARS, change these if required
# Set a prefix if you wish all names to have a unique prefix
#PREFIX='myprefix-'
PREFIX=''
ROUTER_NAME="${PREFIX}border-router"
PRIVATE_NETWORK_NAME="${PREFIX}private-net"
PRIVATE_SUBNET_NAME="${PREFIX}private-subnet"
SSH_KEY_NAME="${PREFIX}first-instance-key"
INSTANCE_NAME="${PREFIX}first-instance"
SECURITY_GROUP_NAME="${PREFIX}first-instance-sg"
# Network portion of /24 you wish to use in the subnet
NETWORK="10.0.0"
POOL_START_OCT="10"
POOL_END_OCT="200"
FLAVOR_NAME="c1.c1r1"
IMAGE_NAME="ubuntu-18.04-x86_64"
SSH_PUBLIC_KEY=~/.ssh/id_rsa.pub

# valid ip function
valid_ip() {
    regex="\b(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b"
    echo "$1" | egrep "$regex" &>/dev/null
    return $?
}

# Var so we can exit if required after all checks
EXIT=0;

# Check the required OS_ env vars exist
if [ -z "$OS_REGION_NAME" ]; then
    echo OS_REGION_NAME not set please ensure you have sourced an OpenStack RC file.
    EXIT=1;
fi

if [ -z "$OS_AUTH_URL" ]; then
    echo OS_AUTH_URL not set please ensure you have sourced an OpenStack RC file.
    EXIT=1;
fi

if [ -z "$OS_PROJECT_NAME" ]; then
    echo OS_PROJECT_NAME not set please ensure you have sourced an OpenStack RC file.
    EXIT=1;
fi

if [ -z "$OS_USERNAME" ]; then
    echo OS_USERNAME not set please ensure you have sourced an OpenStack RC file.
    EXIT=1;
fi

if [ -z "$OS_PASSWORD" ]; then
    echo OS_PASSWORD not set please ensure you have sourced an OpenStack RC file.
    EXIT=1;
fi

# check the openstack command is available
hash openstack 2>/dev/null || {
    echo "Openstack command line client is not available, please install it before proceeding";
    EXIT=1;
}

# Checks
if [ ! -f $SSH_PUBLIC_KEY ]; then
    echo "Cannot find an ssh public key, please set SSH_PUBLIC_KEY to point at a valid key";
    EXIT=1;
fi

if [[ $OS_REGION_NAME == "nz_wlg_2" ]]; then
    CC_NAMESERVER_1=202.78.240.213
    CC_NAMESERVER_2=202.78.240.214
    CC_NAMESERVER_3=202.78.240.215
elif [[ $OS_REGION_NAME == "nz-por-1" ]]; then
    CC_NAMESERVER_1=202.78.247.197
    CC_NAMESERVER_2=202.78.247.198
    CC_NAMESERVER_3=202.78.247.199
elif [[ $OS_REGION_NAME == "nz-hlz-1" ]]; then
    CC_NAMESERVER_1=202.78.244.85
    CC_NAMESERVER_2=202.78.244.86
    CC_NAMESERVER_3=202.78.244.87
else
    echo "OS_REGION_NAME does not point at a valid region";
    EXIT=1;
fi;

# check that resources do not already exist
if openstack server list | grep -q "$INSTANCE_NAME"; then
    echo "Instance $INSTANCE_NAME exists, please delete all first instance resources before running this script";
    EXIT=1;
fi

if openstack router list | grep -q "$ROUTER_NAME"; then
    echo "Router $ROUTER_NAME exists, please delete all first instance resources before running this script";
    EXIT=1;
fi

if openstack subnet list | grep -q "$PRIVATE_SUBNET_NAME"; then
    echo "Subnet $PRIVATE_SUBNET_NAME exists, please delete all first instance resources before running this script";
    EXIT=1;
fi

if openstack network list | grep -q "$PRIVATE_NETWORK_NAME"; then
    echo "Network $PRIVATE_NETWORK_NAME exists, please delete all first instance resources before running this script";
    EXIT=1;
fi

if openstack security group list | grep -q "$SECURITY_GROUP_NAME"; then
    echo "Security group $SECURITY_GROUP_NAME exists, please delete all first instance resources before running this script";
    EXIT=1;
fi

if openstack keypair list | grep -q "$SSH_KEY_NAME"; then
    echo "Keypair $SSH_KEY_NAME exists, please delete all first instance resources before running this script";
    EXIT=1;
fi

if [ "$EXIT" -eq 1 ]; then
    exit 1;
fi

for curl_ip in http://ipinfo.io/ip http://ifconfig.me/ip http://curlmyip.com; do
    CC_REMOTE_IP=$( curl -s $curl_ip )
    if valid_ip "$CC_REMOTE_IP"; then
        break
    fi
done

if ! valid_ip "$CC_REMOTE_IP"; then
    echo "Could not determine your external IP address, please find it and edit CC_REMOTE_IP before proceeding";
    exit 1;
fi
echo "$CC_REMOTE_IP"
CC_REMOTE_CIDR_NETWORK="$CC_REMOTE_IP/32"

# everything is in order, lets build a stack!
echo Creating a new router:
openstack router create $ROUTER_NAME

echo Setting router gateway.
openstack router set $ROUTER_NAME --external-gateway public-net

echo Creating a new private network:
openstack network create "$PRIVATE_NETWORK_NAME"

echo Creating a private subnet:
openstack subnet create \
--allocation-pool "start=${NETWORK}.${POOL_START_OCT},end=${NETWORK}.${POOL_END_OCT}" \
--dns-nameserver "$CC_NAMESERVER_1" \
--dns-nameserver "$CC_NAMESERVER_2" \
--dns-nameserver "$CC_NAMESERVER_3" \
--dhcp \
--network "$PRIVATE_NETWORK_NAME" \
--subnet-range "$NETWORK.0/24" \
"$PRIVATE_SUBNET_NAME" \

echo Creating a router interface on the subnet.
openstack router add subnet "$ROUTER_NAME" "$PRIVATE_SUBNET_NAME"

echo Selecting a flavour.
CC_FLAVOR_ID=$( openstack flavor show "$FLAVOR_NAME" -f value -c id )

echo Selecting an image.
CC_IMAGE_ID=$( openstack image show "$IMAGE_NAME" -f value -c id )

echo Uploading a key:
openstack keypair create --public-key $SSH_PUBLIC_KEY $SSH_KEY_NAME

echo Getting network ids.
CC_PUBLIC_NETWORK_ID=$( openstack network show public-net -f value -c id )
CC_PRIVATE_NETWORK_ID=$( openstack network show "$PRIVATE_NETWORK_NAME" -f value -c id )

echo Creating security group:
openstack security group create --description 'Network access for our first instance.' $SECURITY_GROUP_NAME

echo Getting security group id.
CC_SECURITY_GROUP_ID=$( openstack security group show "$SECURITY_GROUP_NAME" -f value -c id )

echo Creating security group rule for ssh access:
openstack security group rule create \
--ingress \
--protocol tcp \
--dst-port 22 \
--remote-ip "$CC_REMOTE_CIDR_NETWORK" \
"$CC_SECURITY_GROUP_ID"

echo Booting first instance:
openstack server create \
--flavor "$CC_FLAVOR_ID" \
--image "$CC_IMAGE_ID" \
--key-name "$SSH_KEY_NAME" \
--security-group default \
--security-group "$SECURITY_GROUP_NAME" \
--nic "net-id=$CC_PRIVATE_NETWORK_ID" \
"$INSTANCE_NAME"

INSTANCE_STATUS=$( openstack server show "$INSTANCE_NAME" -f value -c status )

until [ "$INSTANCE_STATUS" == 'ACTIVE' ]
do
    INSTANCE_STATUS=$( openstack server show "$INSTANCE_NAME" -f value -c status )
    sleep 2;
done

echo Getting floating ip id.
CC_FLOATING_IP_ID=$( openstack floating ip list -f value -c ID --status 'DOWN' | head -n 1 )
if [ -z "$CC_FLOATING_IP_ID" ]; then
    echo No floating ip found creating a floating ip:
    openstack floating ip create "$CC_PUBLIC_NETWORK_ID"
    echo Getting floating ip id:
    CC_FLOATING_IP_ID=$( openstack floating ip list -f value -c ID --status 'DOWN' | head -n 1 )
fi

echo Getting public ip.
CC_PUBLIC_IP=$( openstack floating ip show "$CC_FLOATING_IP_ID" -f value -c floating_ip_address )

echo Associating floating ip with instance.
openstack server add floating ip "$INSTANCE_NAME" "$CC_PUBLIC_IP"

echo You can now connect to your instance using the following command:
echo "ssh ubuntu@$CC_PUBLIC_IP"

Instead of running the resource cleanup commands manually, you can use the following bash script to delete everything created above:

You can download and run this script using the following commands:

$ wget -q https://raw.githubusercontent.com/catalyst/catalystcloud-docs/master/source/_scripts/delete-first-instance.sh
$ chmod 744 delete-first-instance.sh
$ ./delete-first-instance.sh

Note

You may wish to edit the script before executing, for example to add a prefix.

#!/bin/bash

# VARS, change these if required
# Set a prefix if you wish all names to have a unique prefix
#PREFIX='myprefix-'
PREFIX=''
ROUTER_NAME="${PREFIX}border-router"
PRIVATE_NETWORK_NAME="${PREFIX}private-net"
SSH_KEYPAIR_NAME="${PREFIX}first-instance-key"
INSTANCE_NAME="${PREFIX}first-instance"
SECURITY_GROUP_NAME="${PREFIX}first-instance-sg"

echo Deleting instance.
openstack server delete $INSTANCE_NAME

echo Deleting router interface.
openstack router remove port $ROUTER_NAME "$( openstack port list -f value -c ID --router $ROUTER_NAME )"

echo Deleting router.
openstack router delete $ROUTER_NAME

echo Deleting network.
openstack network delete $PRIVATE_NETWORK_NAME

echo Deleting security group.
openstack security group delete $SECURITY_GROUP_NAME

echo Deleting ssh keypair.
openstack keypair delete $SSH_KEYPAIR_NAME

Using Heat

Heat is the native OpenStack orchestration tool. This section demonstrates how to create a first instance using Heat.

It is beyond the scope of this section to explain the syntax of writing Heat templates. A predefined example from the catalystcloud-orchestration git repository will be used here as a template. This example may also be used as the basis for new templates.

Tip

For more information on writing Heat templates, please consult the documentation at Orchestration.

Clone the catalystcloud-orchestration repository, which includes the example Heat templates:

$ git clone https://github.com/catalyst/catalystcloud-orchestration.git && ORCHESTRATION_DIR="$(pwd)/catalystcloud-orchestration" && echo $ORCHESTRATION_DIR

To continue, we will need an SSH key for our instance. Heat does not support uploading an SSH key, so this step must be performed manually.

When an instance is created, OpenStack passes an SSH key to the instance which can be used for shell access. By default, Ubuntu will install this key for the “ubuntu” user. Other operating systems have a different default user, as listed here: Types of images

Use openstack keypair create to upload your Public SSH key.

Tip

Name the key using information such as your username and the hostname on which the ssh key was generated. This makes the key easy to identify at a later stage.

$ openstack keypair create --public-key ~/.ssh/id_test.pub first-instance-key
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | <SSH_KEY_FINGERPRINT>                           |
| name        | first-instance-key                              |
| user_id     | <USER_ID>                                       |
+-------------+-------------------------------------------------+

$ openstack keypair list
+--------------------+-------------------------------------------------+
| Name               | Fingerprint                                     |
+--------------------+-------------------------------------------------+
| first-instance-key | <SSH_KEY_FINGERPRINT>                           |
+--------------------+-------------------------------------------------+

Note

Keypairs must be created in each region being used.

Now that we have our SSH key, we can start to build our instance:

Select the following Heat template from the catalystcloud-orchestration repository cloned earlier. Before making use of a template, it is good practice to check that the template is valid:

$ openstack orchestration template validate -t $ORCHESTRATION_DIR/hot/ubuntu-18.04/first-instance/first-instance.yaml

This command will echo the YAML if it succeeds and return an error if it does not. If the template validates, it may be used to build the stack:

$ openstack stack create -t $ORCHESTRATION_DIR/hot/ubuntu-18.04/first-instance/first-instance.yaml first-instance-stack
+---------------------+-------------------------------------------------------------------------------------------+
| Field               | Value                                                                                     |
+---------------------+-------------------------------------------------------------------------------------------+
| id                  | cb956f56-536a-4244-930d-62ae1eb2b182                                                      |
| stack_name          | first-instance-stack                                                                      |
| description         | HOT template for building the first instance stack on the Catalyst Cloud nz-por-1 region. |
|                     |                                                                                           |
| creation_time       | 2016-08-18T22:39:25Z                                                                      |
| updated_time        | None                                                                                      |
| stack_status        | CREATE_IN_PROGRESS                                                                        |
| stack_status_reason | Stack CREATE started                                                                      |
+---------------------+-------------------------------------------------------------------------------------------+

The stack_status indicates that creation is in progress. Use the event list command to check on the stack’s orchestration progress:

$  openstack stack event list first-instance-stack

View the output of the stack show command for further details:

$  openstack stack show first-instance-stack
+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                                                                   |
+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| id                    | cb956f56-536a-4244-930d-62ae1eb2b182                                                                                                                    |
| stack_name            | first-instance-stack                                                                                                                                    |
| description           | HOT template for building the first instance stack on the Catalyst Cloud nz-por-1 region.                                                               |
|                       |                                                                                                                                                         |
| creation_time         | 2016-08-18T22:39:25Z                                                                                                                                    |
| updated_time          | None                                                                                                                                                    |
| stack_status          | CREATE_COMPLETE                                                                                                                                         |
| stack_status_reason   | Stack CREATE completed successfully                                                                                                                     |
| parameters            | OS::project_id: <PROJECT_ID>                                                                                                        |
|                       | OS::stack_id: cb956f56-536a-4244-930d-62ae1eb2b182                                                                                                      |
|                       | OS::stack_name: first-instance-stack                                                                                                                    |
|                       | domain_name: localdomain                                                                                                                                |
|                       | host_name: first-instance                                                                                                                               |
|                       | image: ubuntu-18.04-x86_64                                                                                                                              |
|                       | key_name: first-instance-key                                                                                                                            |
|                       | private_net_cidr: 10.0.0.0/24                                                                                                                           |
|                       | private_net_dns_servers: 202.78.247.197,202.78.247.198,202.78.247.199                                                                                   |
|                       | private_net_gateway: 10.0.0.1                                                                                                                           |
|                       | private_net_name: private-net                                                                                                                           |
|                       | private_net_pool_end: 10.0.0.200                                                                                                                        |
|                       | private_net_pool_start: 10.0.0.10                                                                                                                       |
|                       | private_subnet_name: private-subnet                                                                                                                     |
|                       | public_net: public-net                                                                                                                                  |
|                       | public_net_id: 849ab1e9-7ac5-4618-8801-e6176fbbcf30                                                                                                     |
|                       | router_name: border-router                                                                                                                              |
|                       | secgroup_name: first-instance-sg                                                                                                                        |
|                       | servers_flavor: c1.c1r1                                                                                                                                 |
|                       |                                                                                                                                                         |
| outputs               | []                                                                                                                                                      |
|                       |                                                                                                                                                         |
| links                 | - href: https://api.nz-por-1.catalystcloud.io:8004/v1/<PROJECT_ID>/stacks/first-instance-stack/cb956f56-536a-4244-930d-62ae1eb2b182 |
|                       |   rel: self                                                                                                                                             |
|                       |                                                                                                                                                         |
| parent                | None                                                                                                                                                    |
| disable_rollback      | True                                                                                                                                                    |
| stack_user_project_id | <PROJECT_ID>                                                                                                                        |
| stack_owner           | None                                                                                                                                                    |
| capabilities          | []                                                                                                                                                      |
| notification_topics   | []                                                                                                                                                      |
| timeout_mins          | None                                                                                                                                                    |
+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
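The parameters listed above are defined in the template and can be overridden at stack creation time with --parameter. For example (a sketch, assuming the parameter names in this template remain unchanged):

$ openstack stack create -t $ORCHESTRATION_DIR/hot/ubuntu-18.04/first-instance/first-instance.yaml \
--parameter key_name=first-instance-key --parameter servers_flavor=c1.c1r1 first-instance-stack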

Once the stack status is CREATE_COMPLETE, it is possible to SSH to the Floating IP of the instance:

$ export CC_FLOATING_IP_ID=$( openstack stack resource show -f value -c physical_resource_id first-instance-stack first_instance_server_floating_ip )
$ export CC_PUBLIC_IP=$( openstack floating ip show -f value -c floating_ip_address $CC_FLOATING_IP_ID )
$ ssh ubuntu@$CC_PUBLIC_IP

Warning

If a stack has been orchestrated using Heat, it is generally a good idea to also use Heat to delete that stack’s resources. Deleting components of a Heat orchestrated stack manually, whether using the other command line tools or the web interface, can result in resources or stacks being left in an inconsistent state.

To delete the first-instance-stack created previously, proceed as follows:

$ openstack stack delete first-instance-stack
Are you sure you want to delete this stack(s) [y/N]? y

Check that the stack has been deleted properly using the openstack stack list command. If there is an error, or if deleting the stack is taking a long time, check the output of openstack stack event list first-instance-stack.

Using Ansible

Ansible is a popular open source configuration management and application deployment tool. Ansible provides a set of core modules for interacting with OpenStack. This makes Ansible an ideal tool for both OpenStack orchestration and instance configuration, letting you use a single tool to set up the underlying infrastructure and configure instances. As such, Ansible can replace other tools such as Heat for OpenStack orchestration and Puppet for instance configuration.

Comprehensive documentation of the Ansible OpenStack modules is available at https://docs.ansible.com/ansible/list_of_cloud_modules.html#openstack. For troubleshooting issues you may encounter when using Ansible, refer to https://docs.ansible.com/ansible-tower/2.2.0/html/administration/troubleshooting.html.

A script is provided by Catalyst which installs the required Ansible and OpenStack libraries within a Python virtual environment. This script is part of the catalystcloud-ansible git repository. Clone this repository and run the install script in order to install Ansible.

$ git clone https://github.com/catalyst/catalystcloud-ansible.git && CC_ANSIBLE_DIR="$(pwd)/catalystcloud-ansible" && echo $CC_ANSIBLE_DIR
$ cd catalystcloud-ansible
$ ./install-ansible.sh
Installing stable version of Ansible
...
Ansible installed successfully!

To activate run the following command:

source /home/yourname/src/catalystcloud-ansible/ansible-venv/bin/activate

$ source $CC_ANSIBLE_DIR/ansible-venv/bin/activate
$ ansible --version
ansible 2.1.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

Note

Catalyst recommends customers use Ansible >= 2.0 and Shade >= 1.4 with the Catalyst Cloud.

Before running the Ansible playbooks, ensure your OpenStack credentials have been set up. The variables from your sourced OpenRC file are read by the Ansible os_auth module and provide Ansible with the credentials required to access the Catalyst Cloud APIs.

Note

If credentials are not set up by sourcing an OpenStack RC file, a few mandatory authentication attributes will need to be included in the playbooks. See the “vars” section of the playbooks for details.
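Before running the playbooks, you can quickly confirm that the credentials are present in your environment, for example by checking a few of the OS_ variables that the os_auth module reads:

$ echo "$OS_AUTH_URL $OS_REGION_NAME $OS_PROJECT_NAME"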

Once we have the required Ansible and OpenStack libraries, and we have sourced our necessary credentials, we can start with our first playbook.

The first instance playbooks are located under the example-playbooks directory and have been split up as follows:

  • The first playbook, create-network.yml, creates the required network components.

  • The second playbook, launch-instance.yml, launches the instance.

Starting with the first playbook, these are the tasks the create-network.yml playbook will perform:

$ ansible-playbook --list-tasks create-network.yml

playbook: create-network.yml

 play #1 (localhost): Create a network in the Catalyst Cloud   TAGS: []
   tasks:
     Connect to the Catalyst Cloud TAGS: []
     Create a network  TAGS: []
     Create a subnet   TAGS: []
     Create a router   TAGS: []
     Create a security group   TAGS: []
     Create a security group rule for SSH access   TAGS: []
     Import an SSH keypair TAGS: []

In order for this playbook to work, the path to a valid SSH key must be provided. Edit create-network.yml and update the ssh_public_key variable, or override the variable when running the playbook as shown below:

$ ansible-playbook --extra-vars "ssh_public_key=$HOME/.ssh/id_rsa.pub" create-network.yml

PLAY [Deploy a cloud instance in OpenStack] ************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [Connect to the Catalyst Cloud] *******************************************
ok: [localhost]

TASK [Create a network] ********************************************************
changed: [localhost]

TASK [Create a subnet] *********************************************************
changed: [localhost]

TASK [Create a router] *********************************************************
changed: [localhost]

TASK [Create a security group] *************************************************
changed: [localhost]

TASK [Create a security group rule for SSH access] *****************************
changed: [localhost]

TASK [Import an SSH keypair] ***************************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=6    unreachable=0    failed=0

Tip

Pay careful attention to the console output. It provides lots of useful information.

After the network has been set up successfully, run the launch-instance.yml playbook:


$ ansible-playbook launch-instance.yml

PLAY [Deploy a cloud instance in OpenStack] ************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [Connect to the Catalyst Cloud] *******************************************
ok: [localhost]

TASK [Create a compute instance on the Catalyst Cloud] *************************
changed: [localhost]

TASK [Assign a floating IP] ****************************************************
changed: [localhost]

TASK [Output floating IP] ******************************************************
ok: [localhost] => {
    "floating_ip_info.floating_ip.floating_ip_address": "150.242.41.75"
}

PLAY RECAP *********************************************************************
localhost                  : ok=5    changed=2    unreachable=0    failed=0

The new instance is accessible using SSH. Retrieve the instance’s IP address from the console output; in the example above it is echoed by the Output floating IP task as “150.242.41.75”. Log in using SSH (with the username appropriate to the image the instance was built from):

$ ssh ubuntu@150.242.41.75
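
If you lose track of the address, the server list also shows it:

$ openstack server list -c Name -c Networks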

Tip

Additional Ansible playbooks may now be used to configure this instance further, as required.

Lastly, we have a playbook that you can use to clean up all of the resources created by the previous playbooks.

It has been included in the catalystcloud-ansible git repository referenced earlier, but may also be downloaded as follows:

$ wget -q https://raw.githubusercontent.com/catalyst/catalystcloud-ansible/master/remove-stack.yml

Run the playbook to remove all resources created previously:

$ ansible-playbook remove-stack.yml --extra-vars "floating_ip=<ip-address>"

Replace <ip-address> with the floating IP assigned by the launch-instance.yml playbook.

Note

This cleanup playbook assumes that all resources have been created using the default names defined in the original playbooks. If the original names have been changed, it will be necessary to edit the cleanup playbook to reflect these changes.

Terraform is an open source infrastructure configuration and provisioning tool developed by HashiCorp. Terraform supports the configuration of many kinds of infrastructure, including the Catalyst Cloud. It achieves this by using components known as providers; in the case of the Catalyst Cloud, this is the OpenStack provider.

For further information on using Terraform with OpenStack, see the linked video and blog post.

Installation of Terraform is very simple. Go to the Terraform download page and choose the zip file that matches your operating system and architecture. Unzip this file to the location where Terraform’s binaries will reside on your system. Terraform is written in Go, so it has minimal dependencies. Please refer to https://www.terraform.io/intro/getting-started/install.html for detailed install instructions.

$ mkdir terraform-first-instance
$ export TERRAFORM_DIR="$(pwd)/terraform-first-instance"
$ cd $TERRAFORM_DIR
$ wget https://releases.hashicorp.com/terraform/0.12.12/terraform_0.12.12_linux_amd64.zip
$ unzip terraform_0.12.12_linux_amd64.zip
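
The examples that follow call terraform directly, so you may want to add the download directory to your PATH (or move the binary to a directory that is already on your PATH) and confirm that it runs:

$ export PATH="$TERRAFORM_DIR:$PATH"
$ terraform version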

Before running Terraform, ensure your OpenStack credentials have been set up. These variables are read by the OpenStack provider and will provide Terraform with permissions to access the Catalyst Cloud APIs.

Note

If credentials are not set up by sourcing an OpenStack RC file, a few required authentication arguments must be set in the OpenStack provider. See the Configuration Reference section of the OpenStack provider documentation.

Once Terraform has been installed and the OpenStack credentials have been set up, a first instance may be built.


To create an instance using Terraform, you must have a configuration file that describes the resources to be built.

It is beyond the scope of this documentation to explain how Terraform configuration files are written. A pre-prepared example is provided in the catalystcloud-orchestration git repository.

For more information on writing Terraform configuration files, please consult the Terraform documentation. The configuration file used here can serve as a template for building your own configurations.

Download the configuration file:

$ cd $TERRAFORM_DIR
$ wget https://raw.githubusercontent.com/catalyst-cloud/catalystcloud-orchestration/master/terraform/first-instance/first-instance-variables.tf

Note

In order for the pre-prepared configuration to work, a number of changes must be made:

  • Edit the file and change the public_key under the openstack_compute_keypair_v2 resource. Use the actual public key string, not a path to the public key file.

  • Ensure that the variables referred to in the file match the correct OpenStack region. The pre-prepared file has been set up to work with Catalyst’s Porirua region (nz-por-1). Pay particular attention to external_gateway, dns_nameservers, image_id, and flavor_id; the lookup commands shown after this list can help you find the correct values for your region.
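
If you are unsure of the correct values for your region, they can be looked up with the OpenStack command line tools. The image and flavor names below are examples only (they match the ones used later in this tutorial); substitute whichever image and flavor you intend to use:

# ID of public-net, used for external_gateway
$ openstack network show public-net -f value -c id

# ID of the image to boot from, used for image_id
$ openstack image list --name ubuntu-18.04-x86_64 -f value -c ID

# ID of the flavor, used for flavor_id
$ openstack flavor show c1.c1r1 -f value -c id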

The “terraform plan” command outlines the list of operations that Terraform will execute:

$ terraform init
$ terraform plan
Refreshing Terraform state prior to plan...


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ openstack_compute_floatingip_v2.floatingip_1
    address:     "" => "<computed>"
    fixed_ip:    "" => "<computed>"
    instance_id: "" => "<computed>"
    pool:        "" => "public-net"
    region:      "" => "nz-por-1"

+ openstack_compute_instance_v2.instance_1
    access_ip_v4:               "" => "<computed>"
    access_ip_v6:               "" => "<computed>"
    flavor_id:                  "" => "28153197-6690-4485-9dbc-fc24489b0683"
    flavor_name:                "" => "<computed>"
    floating_ip:                "" => "${openstack_compute_floatingip_v2.floatingip_1.address}"
    image_id:                   "" => "378f3322-740f-4c4d-9864-aebeb41f21ab"
    image_name:                 "" => "<computed>"
    key_pair:                   "" => "first-instance-key"
    metadata.#:                 "" => "1"
    metadata.group:             "" => "test-group"
    name:                       "" => "first-instance"
    network.#:                  "" => "1"
    network.0.access_network:   "" => "0"
    network.0.fixed_ip_v4:      "" => "<computed>"
    network.0.fixed_ip_v6:      "" => "<computed>"
    network.0.floating_ip:      "" => "<computed>"
    network.0.mac:              "" => "<computed>"
    network.0.name:             "" => "private-net"
    network.0.port:             "" => "<computed>"
    network.0.uuid:             "" => "<computed>"
    region:                     "" => "nz-por-1"
    security_groups.#:          "" => "2"
    security_groups.310671339:  "" => "first-instance-sg"
    security_groups.3814588639: "" => "default"
    volume.#:                   "" => "<computed>"

+ openstack_compute_keypair_v2.keypair_1
    name:       "" => "first-instance-key"
    public_key: "" => "ssh-rsa AAAAB3......"
    region:     "" => "nz-por-1"

+ openstack_compute_secgroup_v2.secgroup_1
    description:                  "" => "Network access for our first instance."
    name:                         "" => "first-instance-sg"
    region:                       "" => "nz-por-1"
    rule.#:                       "" => "1"
    rule.836640770.cidr:          "" => "0.0.0.0/0"
    rule.836640770.from_group_id: "" => ""
    rule.836640770.from_port:     "" => "22"
    rule.836640770.id:            "" => "<computed>"
    rule.836640770.ip_protocol:   "" => "tcp"
    rule.836640770.self:          "" => "0"
    rule.836640770.to_port:       "" => "22"

+ openstack_networking_network_v2.network_1
    admin_state_up: "" => "true"
    name:           "" => "private-net"
    region:         "" => "nz-por-1"
    shared:         "" => "<computed>"
    tenant_id:      "" => "<computed>"

+ openstack_networking_router_interface_v2.router_interface_1
    region:    "" => "nz-por-1"
    router_id: "" => "${openstack_networking_router_v2.router_1.id}"
    subnet_id: "" => "${openstack_networking_subnet_v2.subnet_1.id}"

+ openstack_networking_router_v2.router_1
    admin_state_up:   "" => "<computed>"
    distributed:      "" => "<computed>"
    external_gateway: "" => "849ab1e9-7ac5-4618-8801-e6176fbbcf30"
    name:             "" => "border-router"
    region:           "" => "nz-por-1"
    tenant_id:        "" => "<computed>"

+ openstack_networking_subnet_v2.subnet_1
    allocation_pools.#:         "" => "1"
    allocation_pools.0.end:     "" => "10.0.0.200"
    allocation_pools.0.start:   "" => "10.0.0.10"
    cidr:                       "" => "10.0.0.0/24"
    dns_nameservers.#:          "" => "3"
    dns_nameservers.3010225292: "" => "202.78.247.198"
    dns_nameservers.3295368218: "" => "202.78.247.199"
    dns_nameservers.601061661:  "" => "202.78.247.197"
    enable_dhcp:                "" => "1"
    gateway_ip:                 "" => "<computed>"
    ip_version:                 "" => "4"
    name:                       "" => "private-subnet"
    network_id:                 "" => "${openstack_networking_network_v2.network_1.id}"
    region:                     "" => "nz-por-1"
    tenant_id:                  "" => "<computed>"


Plan: 8 to add, 0 to change, 0 to destroy.

Note

It is a good idea to review the output of this command and check that the resources that will be created match your intentions.
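
If you want to be certain that apply executes exactly the plan you reviewed, you can optionally save the plan to a file and apply that file (this is standard Terraform behaviour, not specific to this example):

$ terraform plan -out=first-instance.tfplan
$ terraform apply first-instance.tfplan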

The “terraform apply” command executes the plan, creating OpenStack resources:

$ terraform apply
openstack_compute_keypair_v2.keypair_1: Creating...
  name:       "" => "first-instance-key"
  public_key: "" => "ssh-rsa AAAAB3......"
  region:     "" => "nz-por-1"
openstack_networking_router_v2.router_1: Creating...
  admin_state_up:   "" => "<computed>"
  distributed:      "" => "<computed>"
  external_gateway: "" => "849ab1e9-7ac5-4618-8801-e6176fbbcf30"
  name:             "" => "border-router"
  region:           "" => "nz-por-1"
  tenant_id:        "" => "<computed>"
openstack_compute_floatingip_v2.floatingip_1: Creating...
  address:     "" => "<computed>"
  fixed_ip:    "" => "<computed>"
  instance_id: "" => "<computed>"
  pool:        "" => "public-net"
  region:      "" => "nz-por-1"
openstack_compute_secgroup_v2.secgroup_1: Creating...
  description:                  "" => "Network access for our first instance."
  name:                         "" => "first-instance-sg"
  region:                       "" => "nz-por-1"
  rule.#:                       "" => "1"
  rule.836640770.cidr:          "" => "0.0.0.0/0"
  rule.836640770.from_group_id: "" => ""
  rule.836640770.from_port:     "" => "22"
  rule.836640770.id:            "" => "<computed>"
  rule.836640770.ip_protocol:   "" => "tcp"
  rule.836640770.self:          "" => "0"
  rule.836640770.to_port:       "" => "22"
openstack_networking_network_v2.network_1: Creating...
  admin_state_up: "" => "true"
  name:           "" => "private-net"
  region:         "" => "nz-por-1"
  shared:         "" => "<computed>"
  tenant_id:      "" => "<computed>"
openstack_compute_keypair_v2.keypair_1: Creation complete
openstack_compute_secgroup_v2.secgroup_1: Creation complete
openstack_compute_floatingip_v2.floatingip_1: Creation complete
openstack_networking_network_v2.network_1: Creation complete
openstack_networking_subnet_v2.subnet_1: Creating...
  allocation_pools.#:         "" => "1"
  allocation_pools.0.end:     "" => "10.0.0.200"
  allocation_pools.0.start:   "" => "10.0.0.10"
  cidr:                       "" => "10.0.0.0/24"
  dns_nameservers.#:          "" => "3"
  dns_nameservers.3010225292: "" => "202.78.247.198"
  dns_nameservers.3295368218: "" => "202.78.247.199"
  dns_nameservers.601061661:  "" => "202.78.247.197"
  enable_dhcp:                "" => "1"
  gateway_ip:                 "" => "<computed>"
  ip_version:                 "" => "4"
  name:                       "" => "private-subnet"
  network_id:                 "" => "1913210e-3921-4c9b-b8ab-a097b7c8fc7b"
  region:                     "" => "nz-por-1"
  tenant_id:                  "" => "<computed>"
openstack_compute_instance_v2.instance_1: Creating...
  access_ip_v4:               "" => "<computed>"
  access_ip_v6:               "" => "<computed>"
  flavor_id:                  "" => "28153197-6690-4485-9dbc-fc24489b0683"
  flavor_name:                "" => "<computed>"
  floating_ip:                "" => "150.242.42.67"
  image_id:                   "" => "378f3322-740f-4c4d-9864-aebeb41f21ab"
  image_name:                 "" => "<computed>"
  key_pair:                   "" => "first-instance-key"
  metadata.#:                 "" => "1"
  metadata.group:             "" => "test-group"
  name:                       "" => "first-instance"
  network.#:                  "" => "1"
  network.0.access_network:   "" => "0"
  network.0.fixed_ip_v4:      "" => "<computed>"
  network.0.fixed_ip_v6:      "" => "<computed>"
  network.0.floating_ip:      "" => "<computed>"
  network.0.mac:              "" => "<computed>"
  network.0.name:             "" => "private-net"
  network.0.port:             "" => "<computed>"
  network.0.uuid:             "" => "<computed>"
  region:                     "" => "nz-por-1"
  security_groups.#:          "" => "2"
  security_groups.310671339:  "" => "first-instance-sg"
  security_groups.3814588639: "" => "default"
  volume.#:                   "" => "<computed>"
openstack_networking_router_v2.router_1: Creation complete
openstack_networking_subnet_v2.subnet_1: Creation complete
openstack_networking_router_interface_v2.router_interface_1: Creating...
  region:    "" => "nz-por-1"
  router_id: "" => "b1a302c2-3369-47bd-ad3f-b85465cd6b72"
  subnet_id: "" => "53dda21d-6e27-43cb-86bf-deb576b10134"
openstack_compute_instance_v2.instance_1: Still creating... (10s elapsed)
openstack_networking_router_interface_v2.router_interface_1: Creation complete
openstack_compute_instance_v2.instance_1: Still creating... (20s elapsed)
openstack_compute_instance_v2.instance_1: Creation complete

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Once the terraform apply command has completed, your resources will have been built and you will be able to see and manage them in your project.
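
For example, you can inspect the state Terraform is now tracking, or look at the new instance with the OpenStack CLI (the example configuration names the instance first-instance):

$ terraform show
$ openstack server show first-instance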

If you wish to clean up these resources, the “terraform destroy” command will delete all of the resources that were created by the terraform apply command.

Note

Terraform keeps track of the state of resources using a local file called terraform.tfstate. Terraform consults this file when destroying resources in order to determine what to delete.

$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

openstack_compute_secgroup_v2.secgroup_1: Refreshing state... (ID: 1da4e4a5-5401-4f17-b379-2f397839eb9a)
openstack_networking_network_v2.network_1: Refreshing state... (ID: 1913210e-3921-4c9b-b8ab-a097b7c8fc7b)
openstack_compute_floatingip_v2.floatingip_1: Refreshing state... (ID: 580c174a-2972-4597-aedc-f21f5b421e21)
openstack_networking_router_v2.router_1: Refreshing state... (ID: b1a302c2-3369-47bd-ad3f-b85465cd6b72)
openstack_compute_keypair_v2.keypair_1: Refreshing state... (ID: first-instance-key)
openstack_networking_subnet_v2.subnet_1: Refreshing state... (ID: 53dda21d-6e27-43cb-86bf-deb576b10134)
openstack_compute_instance_v2.instance_1: Refreshing state... (ID: 72776b0d-438e-421d-89fc-3a806eadd3eb)
openstack_networking_router_interface_v2.router_interface_1: Refreshing state... (ID: 267afa19-f2df-4b17-96da-7a1d09f413b6)
openstack_networking_router_interface_v2.router_interface_1: Destroying...
openstack_compute_instance_v2.instance_1: Destroying...
openstack_compute_instance_v2.instance_1: Still destroying... (10s elapsed)
openstack_networking_router_interface_v2.router_interface_1: Still destroying... (10s elapsed)
openstack_networking_router_interface_v2.router_interface_1: Destruction complete
openstack_networking_subnet_v2.subnet_1: Destroying...
openstack_networking_router_v2.router_1: Destroying...
openstack_compute_instance_v2.instance_1: Destruction complete
openstack_compute_floatingip_v2.floatingip_1: Destroying...
openstack_compute_keypair_v2.keypair_1: Destroying...
openstack_compute_secgroup_v2.secgroup_1: Destroying...
openstack_compute_keypair_v2.keypair_1: Destruction complete
openstack_compute_floatingip_v2.floatingip_1: Destruction complete
openstack_networking_subnet_v2.subnet_1: Still destroying... (10s elapsed)
openstack_networking_router_v2.router_1: Still destroying... (10s elapsed)
openstack_networking_router_v2.router_1: Destruction complete
openstack_networking_subnet_v2.subnet_1: Destruction complete
openstack_networking_network_v2.network_1: Destroying...
openstack_compute_secgroup_v2.secgroup_1: Still destroying... (10s elapsed)
openstack_compute_secgroup_v2.secgroup_1: Destruction complete
openstack_networking_network_v2.network_1: Still destroying... (10s elapsed)
openstack_networking_network_v2.network_1: Destruction complete

Apply complete! Resources: 0 added, 0 changed, 8 destroyed.

The Catalyst Cloud is built on top of the OpenStack project, and there are many Software Development Kits (SDKs) available for OpenStack in a variety of languages. Some of these SDKs are written specifically for OpenStack, while others are multi-cloud SDKs that have an OpenStack provider. Some libraries support a particular service, such as compute, while others attempt to provide a unified interface to all services.

You will find an up-to-date list of recommended SDKs at http://developer.openstack.org/. A more exhaustive list, which also includes SDKs still in development, is available at https://wiki.openstack.org/wiki/SDKs.

This section covers the openstacksdk, a Python-based SDK that currently supports Python 3 only. The SDK originally grew out of three separate libraries: shade, os-client-config and python-openstacksdk. Each had its own history, but after a while it became clear that there was a lot to be gained by merging the three projects.


First, we have to install the openstacksdk. The recommended way to get an up-to-date version of the SDK is to use Python’s pip installer. Simply run:

pip install openstacksdk

It is recommended that you use the openstacksdk from a Python virtual environment. More information can be found here: Using python virtual environments
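
For example, a dedicated virtual environment can be created and used as follows (the directory name venv is arbitrary):

$ python3 -m venv venv
$ source venv/bin/activate
$ pip install openstacksdk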

Now that we have the openstacksdk installed, the next step in getting an instance running is to provide your Python script with the correct credentials and configuration for your project. If you have already sourced your openRC file, then this step has been taken care of. If you still need to source your openRC file, there is a link in the requirements section above.


Once your environment variables have been set, we are able to create an instance using the openstacksdk. The Python script below creates all of the resources needed to set up a blank Ubuntu 18.04 instance with a block storage volume attached. If you want to create an instance with different parameters, you can find information on how to write your own scripts in the openstacksdk documentation.

The following code block assumes a few things:

  • Your external IP address has been whitelisted for API access as explained under Access and whitelist.

  • You are using an RC file that does not use two-factor authentication. If you are using 2FA, you will need to change the password variable to a token.

  • The region your instance is going to be created in is the Porirua region (nz-por-1).

  • You have downloaded and installed a version of python3 on your machine.

  • You don’t already have a private SSH key that you want to associate with your instance. To change this you will have to alter the code relating to the ‘create_keypair’ function.

#!/usr/bin/env python

#import needed packages
import os
import sys
import openstack
from openstack.config import loader
import errno
config = loader.OpenStackConfig()

#Variables for the creation of an instance
prefix = 'openstacksdk-' #Change this prefix if you're wanting a different name
NETWORK_PREFIX = '10.10.0'
SERVER_NAME = prefix + 'instance'
PRIVATE_NETWORK = prefix + 'private-net'
PRIVATE_SUBNET = prefix + 'private-subnet'
ROUTER = prefix + 'router'
SECURITY_GROUP = prefix + 'sg'
NETWORK_NAME = prefix + 'private-net'
KEYPAIR_NAME = prefix + 'keypair'
VOLUME_NAME = prefix + 'volume'

IMAGE_NAME = 'ubuntu-18.04-x86_64'
FLAVOR_NAME = 'c1.c1r1'
SSH_DIR = '{home}/.ssh'.format(home=os.path.expanduser("~"))
PUBLIC_KEYPAIR_FILE = '{ssh_dir}/openstacksdk.id_rsa.pub'.format(ssh_dir=SSH_DIR)
PRIVATE_KEYPAIR_FILE = '{ssh_dir}/openstacksdk.id_rsa.private'.format(ssh_dir=SSH_DIR)
RESTRICTED_CIDR_RANGE = '0.0.0.0/32'

#connect to the cloud using the OS_* environment variables sourced from the openRC file
auth = os.environ['OS_AUTH_URL']
region_name = os.environ['OS_REGION_NAME']
project_name = os.environ['OS_PROJECT_NAME']
username = os.environ['OS_USERNAME']
password = os.environ['OS_PASSWORD']

print('The environment variables this script has found:')
print('Auth URL:',auth)
print('Region name:',region_name)
print('Project name',project_name)
print('Username',username)
print('Password',password[:1]) # only the first character of the password is printed

conn = openstack.connect(
        auth_url=auth,
        project_name=project_name,
        username=username,
        password=password,
        region_name=region_name,
        app_name='examples',
        app_version='1.0',
    )

#print the connection object to confirm that the connection was established successfully
print('------------------------------------------------------------------------')
print('Connection to the catalyst server:')
print(conn,'\n')

def ssh_port(conn):
  sec_group = conn.network.find_security_group(SECURITY_GROUP)
  if not sec_group:
    print("Create a security group and set up SSH ingress:")
    print('------------------------------------------------------------------------\n')

    sec_group = conn.network.create_security_group(
        name=SECURITY_GROUP)

    ssh_rule = conn.network.create_security_group_rule(
        security_group_id=sec_group.id,
        direction='ingress',
        remote_ip_prefix='114.110.38.54/32', # change this to your own whitelisted external IP address
        protocol='TCP',
        port_range_max='22',
        port_range_min='22',
        ethertype='IPv4')

  return sec_group

def create_router(conn):
  router = conn.network.find_router(ROUTER)
  if not router:
    print("Create a Router:")
    print('------------------------------------------------------------------------\n')

    router = conn.network.create_router(
        name=ROUTER,external_gateway_info={'network_id':'849ab1e9-7ac5-4618-8801-e6176fbbcf30'}
    )
    router.add_interface(conn.network,subnet_id=conn.network.find_subnet(PRIVATE_SUBNET).id)

  return router

def create_network(conn):
  network = conn.network.find_network(NETWORK_NAME)
  if not network:
    print("Create a Network and subnet:")
    print('------------------------------------------------------------------------\n')
    network = conn.network.create_network(
        name=NETWORK_NAME)

    example_subnet = conn.network.create_subnet(
        name=PRIVATE_SUBNET,
        network_id=network.id,
        ip_version='4',
        cidr='10.0.0.0/24',
        gateway_ip='10.0.0.2')

  router=create_router(conn)
  security_group=ssh_port(conn)

  return network

def create_keypair(conn):
  keypair = conn.compute.find_keypair(KEYPAIR_NAME)
  if not keypair:
      print("Create a Key Pair:")
      print('------------------------------------------------------------------------\n')
      keypair = conn.compute.create_keypair(name=KEYPAIR_NAME)

      try:
          os.mkdir(SSH_DIR)
      except OSError as e:
          if e.errno != errno.EEXIST:
              raise e

      with open(PRIVATE_KEYPAIR_FILE, 'w') as f:
          f.write("%s" % keypair.private_key)

      os.chmod(PRIVATE_KEYPAIR_FILE, 0o400)

  return keypair

def create_volume(conn):
  print("Creating and attaching Volume:")
  print('------------------------------------------------------------------------\n')
  volume = conn.volume_exists(VOLUME_NAME)
  instance = conn.compute.find_server(SERVER_NAME)
  loop_val = True
  if not volume:
    volume = conn.volume.create_volume(name=VOLUME_NAME, size=10,volume_type='b1.standard',wait=True)
    # The following loop waits for your volume to become available before attaching it to your instance.
    while loop_val == True:
      volume_stat = conn.get_volume(VOLUME_NAME).status
      if volume_stat == 'available':
        loop_val = False
    # attach the volume to your instance
    volume = conn.get_volume(VOLUME_NAME)
    conn.attach_volume(server=instance,volume=volume,wait=True)

  return volume

def attach_floating_ip(conn):
  print('Attaching floating IP to instance:')
  print('------------------------------------------------------------------------\n')
  instance = conn.compute.find_server(SERVER_NAME)
  floating_IP = conn.network.find_available_ip()

  if floating_IP:
    conn.compute.add_floating_ip_to_server(instance,floating_IP.floating_ip_address)
    print('Allocated a floating IP. To access your instance use : ssh -i {key} ubuntu@{ip}'.format(key=PRIVATE_KEYPAIR_FILE, ip=floating_IP.floating_ip_address))
  else:
    conn.network.create_ip(floating_network_id='849ab1e9-7ac5-4618-8801-e6176fbbcf30')
    floating_IP = conn.network.find_available_ip()
    conn.compute.add_floating_ip_to_server(instance,floating_IP.floating_ip_address)
    print('Created a floating IP. To access your instance use : ssh -i {key} ubuntu@{ip}'.format(key=PRIVATE_KEYPAIR_FILE, ip=floating_IP.floating_ip_address))


  return floating_IP

def create_instance(conn):
  print('Building resources for create:')
  print('------------------------------------------------------------------------\n')

  image = conn.compute.find_image(IMAGE_NAME)
  flavor = conn.compute.find_flavor(FLAVOR_NAME)
  network = create_network(conn)
  security_group = conn.network.find_security_group(SECURITY_GROUP)
  keypair = create_keypair(conn)

  print('Creating Instance')
  print('------------------------------------------------------------------------\n')
  server = conn.compute.create_server(
  name=SERVER_NAME, image_id=image.id, flavor_id=flavor.id,
  networks=[{"uuid": network.id}], key_name=keypair.name, security_groups=[security_group])
  server = conn.compute.wait_for_server(server)

def main(conn):
  #run this function to create your instance.

  #creates your instance:
  create_instance(conn)
  #creates and attaches a volume
  create_volume(conn)
  #attaches a floating_IP to your instance.
  attach_floating_ip(conn)

main(conn)

You’ll need to save this script as a Python file and run the following command from the directory containing that file:

python3 script-file-name.py

After this has completed, you should be able to see your new instance in your project on the Catalyst Cloud.
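
You can also confirm this from the command line; the script's default server name is openstacksdk-instance:

$ openstack server show openstacksdk-instance -c status -c addresses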

Resource cleanup using the command line

At this point you may want to clean up the OpenStack resources that have been created. Running the following commands should remove all networks, routers, ports, security groups and instances. These commands will work regardless of the method you used to create the resources, as long as the names of your resources match the ones below (for example, the openstacksdk script above prefixes its resource names with openstacksdk-, so adjust the names accordingly). Note that the order in which you delete resources is important.

Warning

The following commands will delete all the resources you have created including networks and routers. Do not run these commands unless you wish to delete all these resources.

# delete the instances
$ openstack server delete first-instance

# delete router interface
$ openstack router remove port border-router $( openstack port list -f value -c ID --router border-router )

# delete router
$ openstack router delete border-router

# delete network
$ openstack network delete private-net

# delete security group
$ openstack security group delete first-instance-sg

# delete ssh key
$ openstack keypair delete first-instance-key
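
Once the deletions have finished, you can confirm that nothing is left behind; typically only the shared public networks and your project's default security group will remain:

$ openstack server list
$ openstack keypair list
$ openstack security group list
$ openstack router list
$ openstack network list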