OpenStack: Network Creation Script

Hi All! The past few months have been very busy, with lots of work on OpenStack. I have quite a bit to share, but for today let’s look at how I automated OpenStack network creation.

Here is the script. The article below walks through the manual steps that the script automates. You do want to read the article, but if you are impatient (like me) and just want the good stuff, grab the os-create-network-sh.txt script, rename it to os-create-network.sh, read it, and use it as you like. Remember…no warranties! If something breaks, you own it. The script is idempotent: it checks carefully whether objects already exist and doesn’t try to “re-add” them.
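The idempotency pattern is worth seeing on its own. Here is a minimal sketch (illustrative only, not the author’s actual script) of a check-before-create wrapper; `ensure_net` is a hypothetical helper name:

```shell
# Idempotency sketch: create a network only if `neutron net-list`
# does not already show its name in the output table.
ensure_net() {
  local name=$1; shift
  if neutron net-list 2>/dev/null | grep -q " $name "; then
    echo "network $name already exists; skipping"
  else
    neutron net-create "$name" "$@"
  fi
}

# Example: ensure_net ext-net --shared --router:external=True
```

The same shape works for subnets, routers, and router interfaces: list, grep for the name or ID, and only create on a miss.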

I use Neutron Networking (Icehouse), but that’s a different article (and a fun one!). For today, I’ll show you the steps for creating tenant networks, attaching them to the external network, and verifying the results. I’m assuming you have a complete OpenStack stack, minus Compute nodes (and VMs, of course). You can also see Icehouse Neutron Initial Networks for more info.

On our Neutron nodes we defined three interfaces: eth0 is Management, eth1 is Data (Guest VM traffic), and eth2 is External (DMZ VLAN 106). First we create the External network; this is the path to the outside world:

[l.abruce@co1 rc_scripts]$ neutron net-create ext-net --shared --router:external=True
Created a new network: 
+---------------------------+--------------------------------------+ 
| Field                     | Value                                | 
+---------------------------+--------------------------------------+ 
| admin_state_up            | True                                 | 
| id                        | b40800a7-814c-46cf-a3df-77137efbe180 | 
| name                      | ext-net                              | 
| provider:network_type     | gre                                  | 
| provider:physical_network |                                      | 
| provider:segmentation_id  | 1                                    | 
| router:external           | True                                 | 
| shared                    | True                                 | 
| status                    | ACTIVE                               | 
| subnets                   |                                      | 
| tenant_id                 | 789b9bd0a06a47099a59650ff78b69da     | 
+---------------------------+--------------------------------------+ 

Now we create the external subnet. This gives us the IP range which we present to our tenants; I’m using 172.20.128.0/18 which gives me a huge range:

[l.abruce@co1 rc_scripts]$ neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=172.20.132.1,end=172.20.190.254 \
  --disable-dhcp --gateway 172.20.128.1 172.20.128.0/18 \
  --dns_nameservers list=true 192.168.1.2
Created a new subnet: 
+------------------+----------------------------------------------------+ 
| Field            | Value                                              | 
+------------------+----------------------------------------------------+ 
| allocation_pools | {"start": "172.20.132.1", "end": "172.20.190.254"} | 
| cidr             | 172.20.128.0/18                                    | 
| dns_nameservers  | 192.168.1.2                                        | 
| enable_dhcp      | False                                              | 
| gateway_ip       | 172.20.128.1                                       | 
| host_routes      |                                                    | 
| id               | 47aa68bd-bbdb-4f6f-9012-acd1d1a6e066               | 
| ip_version       | 4                                                  | 
| name             | ext-subnet                                         | 
| network_id       | cab649b6-c4b2-4da5-af58-15319d244abf               | 
| tenant_id        | b4ecec89c305404c90c16d79511376a7                   | 
+------------------+----------------------------------------------------+ 

OK, we have an external network and subnet created. Let’s create a tenant network; in my case, our tenants get only internal, isolated networks. All communication is over GRE via the Neutron controller. I also VLAN this network separately, but that’s a different and more advanced discussion.

So each tenant will have its own tenant network, and each network is isolated and independent of the others. This means the networks can overlap; thus, we’ll use 10.0.0.0/24 for every tenant and let Neutron sort everything out. Here’s an example for the DEMO tenant:

[l.abruce@co1 rc_scripts]$ neutron --os-tenant-name=demo net-create demo-net
Created a new network: 
+---------------------------+--------------------------------------+ 
| Field                     | Value                                | 
+---------------------------+--------------------------------------+ 
| admin_state_up            | True                                 | 
| id                        | ed417ebd-e095-4b6f-89cc-463cd8c905ca | 
| name                      | demo-net                             | 
| provider:network_type     | gre                                  | 
| provider:physical_network |                                      | 
| provider:segmentation_id  | 2                                    | 
| shared                    | False                                | 
| status                    | ACTIVE                               | 
| subnets                   |                                      | 
| tenant_id                 | 7a4fbcfb98b84c38a0b8f464c8bb0fda     | 
+---------------------------+--------------------------------------+ 

We have our tenant-specific network, so we need to create the subnet:

[l.abruce@co1 rc_scripts]$ neutron --os-tenant-name=demo subnet-create \
  demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0/24 \
  --dns_nameservers list=true 192.168.1.2
Created a new subnet:
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr             | 10.0.0.0/24                                |
| dns_nameservers  | 192.168.1.2                                |
| enable_dhcp      | True                                       |
| gateway_ip       | 10.0.0.1                                   |
| host_routes      |                                            |
| id               | c34bc597-c7e0-451a-ae41-e0c36d42cece       |
| ip_version       | 4                                          |
| name             | demo-subnet                                |
| network_id       | ed417ebd-e095-4b6f-89cc-463cd8c905ca       |
| tenant_id        | 7a4fbcfb98b84c38a0b8f464c8bb0fda           |
+------------------+--------------------------------------------+
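Since every tenant gets the same two commands with only the tenant name changing, the sequence above folds naturally into a function. A sketch (illustrative; only the `demo` tenant exists in this article, and `create_tenant_net` is a hypothetical helper name):

```shell
# Create the per-tenant network and subnet shown above for any tenant.
# Every tenant gets the same overlapping 10.0.0.0/24; Neutron keeps
# them isolated from one another over GRE.
create_tenant_net() {
  local tenant=$1
  neutron --os-tenant-name="$tenant" net-create "${tenant}-net"
  neutron --os-tenant-name="$tenant" subnet-create \
    "${tenant}-net" --name "${tenant}-subnet" \
    --gateway 10.0.0.1 10.0.0.0/24 \
    --dns_nameservers list=true 192.168.1.2
}

# Example: create_tenant_net demo
```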

A bit of explanation is in order here…you no doubt noticed the reference to 192.168.1.2. My setup is a “DMZ-within-DMZ”: all of my tenants share their “external” addresses from the big 172.20.128.0/18 private subnet, while my cheapie Linksys router provides a standard 192.168.1.0/24 network. Because that cheapie router is the only thing between 192.168.1.0/24 and the Big, Bad Internet, I want to insulate my tenants and developers from that network. Besides, I only have 254 usable IP addresses on the 192.168.1.0/24 subnet; why waste them on a bunch of VMs that only require internal access?

The net result is that I keep my DNS and OpenLDAP on the 192.168.1.0/24 subnet, and I have a couple of software routers to handle moving traffic between the networks.

Anyway, we have our tenant-specific network and subnet created. Now we need a router to get us to the external network:

[l.abruce@co1 rc_scripts]$ neutron --os-tenant-name=demo router-create demo-router
Created a new router: 
+-----------------------+--------------------------------------+ 
| Field                 | Value                                | 
+-----------------------+--------------------------------------+ 
| admin_state_up        | True                                 | 
| external_gateway_info |                                      | 
| id                    | a18d1aea-31f0-4951-ab2a-a2622f7a861c | 
| name                  | demo-router                          | 
| status                | ACTIVE                               | 
| tenant_id             | b42aa3d7e8e743b6b9b6dde9c063578f     | 
+-----------------------+--------------------------------------+

The router is just a database record for now. Let’s make it “real” by adding some interfaces. We need the internal (Demo) subnet, so let’s add that first; this gives VMs on the Demo subnet a default gateway:

[l.abruce@co1 rc_scripts]$ neutron --os-tenant-name=demo router-interface-add demo-router demo-subnet
Added interface 169a6c9e-ff6a-4235-87e9-23ad7ff4a54c to router demo-router. 

Next, we add the egress network (ext-net) to the Demo router; this permits routing all egress traffic to the outside world:

[l.abruce@co1 rc_scripts]$ neutron --os-tenant-name=demo router-gateway-set demo-router ext-net
Set gateway for router demo-router

Now…what IP address will that router have on the external network? Let’s find out! We first need to know the ports that were added:

[l.abruce@co1 rc_scripts(keystone_admin)]$ neutron --os-tenant-name=demo router-port-list demo-router
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 8a65a52f-4b62-436b-9afd-c7296a2b74a1 |      | fa:16:3e:23:0f:53 | {"subnet_id": "47aa68bd-bbdb-4f6f-9012-acd1d1a6e066", "ip_address": "172.20.132.1"} |
| b4f0fa85-07d2-4c84-9a10-aa4f455fbc60 |      | fa:16:3e:5c:6e:9a | {"subnet_id": "35c1befc-8521-4ee9-909d-2034fba84493", "ip_address": "10.0.0.1"}     |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

Look above and you’ll see the 172.20.132.1 IP was assigned to the external gateway interface for our router. Let’s ping it:

[l.abruce@co1 rc_scripts]$ ping -c 4 172.20.132.1
PING 172.20.132.1 (172.20.132.1) 56(84) bytes of data. 
64 bytes from 172.20.132.1: icmp_seq=1 ttl=63 time=1.14 ms 
64 bytes from 172.20.132.1: icmp_seq=2 ttl=63 time=0.984 ms 
64 bytes from 172.20.132.1: icmp_seq=3 ttl=63 time=1.02 ms 
64 bytes from 172.20.132.1: icmp_seq=4 ttl=63 time=0.859 ms 

--- 172.20.132.1 ping statistics --- 
4 packets transmitted, 4 received, 0% packet loss, time 3004ms 
rtt min/avg/max/mdev = 0.859/1.003/1.148/0.103 ms
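In a script you wouldn’t read the gateway IP off the table by eye; you’d pull it out of the `router-port-list` output by matching the ext-subnet ID. A sketch (illustrative; `gateway_ip` is a hypothetical helper, demonstrated here on the port row captured above):

```shell
# Print the ip_address from any router-port-list row whose fixed_ips
# reference the given subnet id (read from stdin).
gateway_ip() {
  grep "$1" | sed -n 's/.*"ip_address": "\([0-9.]*\)".*/\1/p'
}

# Demo against the external port row shown earlier (ext-subnet id):
ip=$(gateway_ip 47aa68bd-bbdb-4f6f-9012-acd1d1a6e066 <<'EOF'
| 8a65a52f-4b62-436b-9afd-c7296a2b74a1 |  | fa:16:3e:23:0f:53 | {"subnet_id": "47aa68bd-bbdb-4f6f-9012-acd1d1a6e066", "ip_address": "172.20.132.1"} |
EOF
)
echo "$ip"
```

Once extracted, the script can run `ping -c 4 "$ip"` to verify reachability, exactly as done by hand above.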

Check back up above for the script. It automates the entire process!

