OpenStack: Adding External Networks to Neutron with GRE

Hello All – today’s post is on Neutron using GRE Networking. The all-in-one OpenStack gives you GRE by default but only a single external subnet. Read on to learn how to add more external networks!

Use this section on just your Neutron Controller(s) to add more external networks. “External networks” allow you to hook up functions running for a specific tenant to the outside world (like a Web server). Typically, this external network will be your DMZ. For example, if you have a small shop with a single after-market router connecting you to the Internet, you probably have 192.168.1.0/24 as your NAT’ed network. In that case, if you run OpenStack, any tenant functions that you want to reach from the outside world will need a dedicated IP address on that 192.168.1.0/24 network. Thus, your “external network” would be 192.168.1.0/24, and you would allocate IP addresses on that subnet, one for each public function. Then you would either do port redirection (PAT) or Layer 7 name-based redirection from your router to funnel inbound requests from the Internet to your backend VMs.
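
To make the PAT option concrete, on a Linux-based router you might forward inbound HTTPS to a web server VM that was allocated an address on that external network. This is only a hedged illustration; the interface name eth0 and both addresses are hypothetical:

    # Hypothetical example: redirect inbound TCP 443 arriving on the router's
    # WAN interface (eth0) to a backend VM at 192.168.1.50
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
        -j DNAT --to-destination 192.168.1.50:443
    iptables -A FORWARD -p tcp -d 192.168.1.50 --dport 443 -j ACCEPT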

The problem with this? In an OpenStack hosted model, you really want to keep your tenant networks completely separated from your infrastructure / management networks. So you don’t just want a single “external network” that everything connects to; you want multiple external networks that you can isolate by function. For example, you may want to segment tenants so that their “public” external IP addresses are truly isolated from each other, just like their tenant-specific networks are; you may also want VMs running inside OpenStack that connect only to a protected subnet rather than to the default DMZ network. In my case, I have a number of internal functions such as Active Directory, DNS forwarders, SMTP servers, and so on that are truly separate and distinct from the Guest VMs I host for my customers. Thus, it makes sense to keep those IP addresses (and traffic!) completely segmented away from any DMZ used by my customers.

I’ve chosen to give my VM tenants a nice big 172.20.64.0/18 subnet (16,382 IP addresses); that is the “external network” I registered with Neutron. However, I also want to have a 172.24.4.0/22 subnet for my management functions. (These networks are already VLAN’ed off from each other.) And – I may want to add more subnets in the future. So this article was a great way for me to solve these problems and come up with a scalable, extensible, and secure solution that works fine with Neutron and GRE networking.

I found some great articles on this topic while researching this setup; you would be wise to do your own research as well.

Let’s get started!

The Problem: A Single External Subnet

The problem is that Neutron GRE by default only gives you a single external subnet. Consider my example lab: I set up my Neutron Controller with four (4) network interfaces. However, I used only three of those interfaces:

  • one for internal Management (VLAN OpenStack-104) communication over 172.28.0.0/22
  • one for internal GRE traffic (VLAN Guest-VMs-010-120) over 172.20.64.130/22
  • one for external Tenant VM access (VLAN DMZ-106) over the big 172.20.128.0/18 network

I didn’t use the fourth interface, which is on the “Infrastructure-103” network 172.24.4.0/22 (VLAN 103). So let’s configure that fourth interface as a second external network for our Neutron environment. That way, I can run my DNS, AD controllers, SMTP servers, Nagios monitors, etc. within my virtualized environment without having any data leaks between my internal network and the Tenant VMs.
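
Before making changes, it helps to confirm what the existing single-external-network setup looks like. Here is a quick, hedged check (run on the Neutron controller; ext-net and br-ex are the names used in this paper):

    # Ports already attached to the default external bridge
    ovs-vsctl list-ports br-ex
    # External networks Neutron currently knows about
    neutron net-external-list
    # Registered L3 agent(s)
    neutron agent-list | grep -i "L3 agent"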

The Process

Keep in mind that the basic network setup we already performed leverages the br-ex interface, which is bridged to DMZ-106. This is perfect for hosted tenants but not so good if we want to host internal VMs that we manage ourselves (such as an SMTP server). We want to keep our traffic separate from our hosted customers’ traffic. To do this, we need to specify additional L3 agents. Follow these steps:

  1. When you are adding the first additional external subnet (first time only), do these initial steps to make the existing L3 agent aware of its mapped external network:

    • Add the new interface to the Neutron system. For this paper, we already did this with the fourth interface we created from the Infrastructure-103 VLAN. But for additional networks (for example, if we wanted to host a VM on the Management-102 VLAN), we’d need to add another interface. I’m using the KVM hypervisor to run my OpenStack VMs, so this was simply a matter of using virsh edit to add another NIC (a non-XML alternative is sketched at the end of this step).
    • Determine the Neutron network ID of the existing external network e.g. ext-net for this paper:
      [l.abruce@co1 rc_scripts]$ neutron net-show ext-net | grep -e " id"
      | id                        | cab649b6-c4b2-4da5-af58-15319d244abf |
      
    • Modify the existing L3 agent initialization file to reference the OpenStack network ID:
      # /etc/neutron/l3_agent.ini
      [DEFAULT]
      handle_internal_only_routers = True
      gateway_external_network_id =  cab649b6-c4b2-4da5-af58-15319d244abf
      external_network_bridge = br-ex
      
    • If you can, reboot the machine and verify that everything comes back up correctly. Regardless, continue to the next step.
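
    For reference, here is a minimal sketch of adding that extra NIC to a KVM-hosted Neutron controller without hand-editing the domain XML. The domain name lvosneutr100 and the host bridge br103 are hypothetical; substitute your own:

      # Attach a new virtio NIC backed by the host bridge carrying VLAN 103;
      # add --live as well if you want to hot-plug it into the running guest
      virsh attach-interface --domain lvosneutr100 --type bridge \
          --source br103 --model virtio --config
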
  2. Create the new OVS bridge to handle the new external network. For my purposes, this will be br-ex-103 to indicate it is for VLAN 103:
    ovs-vsctl add-br br-ex-103
    ovs-vsctl add-port br-ex-103 eth3
    
  3. Configure the interface configuration files. These are the commands I used:
    cp /etc/sysconfig/network-scripts/ifcfg-eth{2,3}
    cp /etc/sysconfig/network-scripts/ifcfg-br-{ex,ex-103}
    
  4. Modify the networking scripts to drive the openvswitch configuration during boot. Note that I’m using jumbo frames here, but that is probably not necessary (jumbo frames are beyond the scope of this article; search my site for other articles on when to use them):
    # /etc/sysconfig/network-scripts/ifcfg-eth3
    DEVICE=eth3
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=none
    PROMISC=yes
    NM_CONTROLLED=no
    MTU=9000
    
    # /etc/sysconfig/network-scripts/ifcfg-br-ex-103
    # DEVICE and DEVICETYPE are needed so the ifup-ovs helper handles this bridge
    DEVICE=br-ex-103
    DEVICETYPE=ovs
    TYPE=OVSBridge
    ONBOOT=yes
    IPADDR=172.24.4.129
    NETMASK=255.255.252.0
    GATEWAY=172.24.4.1
    DEFROUTE=no
    MTU=9000
    NM_CONTROLLED=no
    

    Note: Above, I’m giving an IP address not to the physical interface (eth3 on the Neutron Controller VM) but to the bridge (switch) I’m creating on that physical interface. The only reason I’m even doing this is so that I can easily verify whether the bridge is up from a trusted remote controller (to enable Nagios monitoring). Otherwise, you don’t even need an IP address on the bridge any more than you “need” an IP address on any type of switch device.

    Restart networking:

    service network restart
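
    After the restart, if you gave the bridge an IP address as above, a quick reachability check from a trusted remote host on the 172.24.4.0/22 VLAN (for example, the host that will run your Nagios checks) is simply:

    # 172.24.4.129 is the IP we assigned to the br-ex-103 bridge above
    ping -c 2 172.24.4.129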
    
  5. Get the OpenStack Neutron network UUID; we do this by creating a new Neutron external network. Remember that simply creating the network does not allocate any IPs, set up any subnets, or create any routers. It simply creates a record in the Neutron database, and we need the UUID of that record.

    That being said, you want to spend a few minutes and create the network the way you want it. Consider that it’s not typical for our tenants (hosted customer VMs) to get their own DMZ network space; instead they get their own private tenant networks and they share DMZ space with other tenants. This model isn’t good enough if you truly want all tenant traffic isolated from other tenants. In our case, where we have an “Infrastructure” set of VMs, we want to have an “Infrastructure” tenant that will be hooked up to our infrastructure-103 VLAN on the outside (the tenant-specific DMZ) and to the standard private tenant network on the inside (which is the way that GRE networks operate in Neutron). So follow these general sub-steps:

    • Create the tenant for which you want isolated DMZ networking. In our case, this is the “infrastructure” tenant:
      keystone tenant-create --name infrastructure --description "Internal LMIL VMs"
      
    • Now we create an external (DMZ) network just for this new tenant; this is what we will map to our new Neutron interface:
      neutron net-create "infrastructure-ext-net" --router:external --tenant-id [TENANT_ID]
      
    • Let’s take a look at the setup we have for this paper:
      [l.abruce@co1 rc_scripts]$ neutron net-show infrastructure-ext-net
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | True                                 |
      | id                        | 420ff818-64f0-4df1-b1e2-bef4aacf0a25 |
      | name                      | infrastructure-ext-net               |
      | provider:network_type     | gre                                  |
      | provider:physical_network |                                      |
      | provider:segmentation_id  | 3                                    |
      | router:external           | True                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   | 22c287fb-7d9a-40ed-9396-307c047193a3 |
      | tenant_id                 | 5328058c4d704b4fa589708c566d4d96     |
      +---------------------------+--------------------------------------+
      

      While we have already created a subnet for the network above, that isn’t necessary right now. All you need is a valid “id” field (shown above), and you may continue to the next step.
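
      If you prefer to capture these IDs in shell variables for the steps that follow, here is a minimal sketch using the names from this paper:

      # Tenant ID of the new infrastructure tenant
      TENANT_ID=$(keystone tenant-get infrastructure | awk '/ id /{print $4}')
      # Neutron UUID of the new external network (this is what we feed to the new L3 agent)
      EXT_NET_ID=$(neutron net-show infrastructure-ext-net | awk '/ id /{print $4}')
      echo "$TENANT_ID $EXT_NET_ID"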

  6. Create a new L3 agent initialization to handle the eth3 interface:

    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent-103.ini
    ln -fs /usr/bin/neutron-l3-agent /usr/bin/neutron-l3-agent-103
    
    Modify the new L3 agent initialization file:
    # /etc/neutron/l3_agent-103.ini
    [DEFAULT]
    handle_internal_only_routers = False
    gateway_external_network_id = [OpenStack Neutron Network UUID]
    external_network_bridge = br-ex-103
    

    Once complete, copy the existing L3 agent initialization script to the new name:

    cp /etc/init.d/neutron-l3-agent /etc/init.d/neutron-l3-agent-103
    chgrp neutron /etc/neutron/l3_agent-103.ini
    

    Next, modify the new script to reference the new initialization file. This is a pain and must be repeated each time a new OpenStack version is installed:

    # /etc/init.d/neutron-l3-agent-103
    plugin=l3-agent-103
    configs=(
        "/usr/share/$proj/$proj-dist.conf" \
        "/etc/$proj/$proj.conf" \
        "/etc/$proj/l3_agent-103.ini" \
        "/etc/$proj/fwaas_driver.ini" \
    )
    

    Indicate that the service should start at boot:

    chkconfig neutron-l3-agent-103 on
    

    I ran into problems with this step and had to review my steps carefully (including resorting to invoking strace within the /etc/init.d/neutron-l3-agent-103 script I created), so start the service now, review the logs carefully (a quick sanity-check sketch follows below), and *reboot the system* to prove that everything comes up as you expect. Then continue to the next step.
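
    A quick sanity check before the reboot might look like the following; the log file path assumes the stock RDO layout, so adjust if yours differs:

    service neutron-l3-agent-103 start
    service neutron-l3-agent-103 status
    # The L3 agent(s) should show as alive (":-)") in the agent list
    neutron agent-list | grep -i "L3 agent"
    # Watch the new agent's log for errors
    tail -f /var/log/neutron/*l3*agent*.log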

  7. Create your new subnet and router for your tenant as per usual; a hedged sketch using this paper’s values follows below. I’ll cover those basic steps in another article, but there are *plenty* of resources available for this on the Internet.
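
    For completeness, here is that sketch of the commands that would produce the subnet and router shown in the verification output below, using this paper’s values ([TENANT_ID] is the infrastructure tenant’s ID):

    neutron subnet-create infrastructure-ext-net 172.24.4.0/22 \
        --name infrastructure-ext-subnet --tenant-id [TENANT_ID] \
        --gateway 172.24.4.1 \
        --allocation-pool start=172.24.5.2,end=172.24.6.254 \
        --dns-nameserver 192.168.1.2
    neutron router-create infrastructure-router --tenant-id [TENANT_ID]
    # Attach the router's gateway to the new external network
    neutron router-gateway-set infrastructure-router infrastructure-ext-net
    # You would also attach the tenant's internal subnet with router-interface-add
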
  8. Verify the Setup

    Let’s take a look at the output I expect, using this paper’s use case as the template:

    First, I verify that all interfaces are up and running. In my case, these are the eth3 and br-ex-103 interfaces:

    [root@lvosneutr100 ~]# ip a show eth3
    5: eth3:  mtu 9000 qdisc pfifo_fast state UP qlen 1000
        link/ether 52:54:00:7a:e5:02 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::5054:ff:fe7a:e502/64 scope link
           valid_lft forever preferred_lft forever
    [root@lvosneutr100 ~]# ip a show br-ex-103
    14: br-ex-103:  mtu 9000 qdisc noqueue state UNKNOWN
        link/ether d6:47:40:d3:55:49 brd ff:ff:ff:ff:ff:ff
        inet 172.24.4.129/22 brd 172.24.7.255 scope global br-ex-103
        inet6 fe80::68a6:93ff:fe60:1a85/64 scope link
           valid_lft forever preferred_lft forever
    

    Next, verify that OVS created the bridge entry for your network:

    [root@lvosneutr100 ~]# ovs-vsctl show
    [...output cut...]
        Bridge "br-ex-103" 
            Port "qg-7a58687b-fc" 
                Interface "qg-7a58687b-fc" 
                    type: internal 
            Port "br-ex-103" 
                Interface "br-ex-103" 
                    type: internal 
            Port "eth3" 
                Interface "eth3"
    

    The qg- port entry belongs to the tenant router, so let’s move ahead and verify that the router was created for the specific tenant:

    [l.abruce@co1 rc_scripts]$ neutron net-show infrastructure-ext-net
    +---------------------------+--------------------------------------+ 
    | Field                     | Value                                | 
    +---------------------------+--------------------------------------+ 
    | admin_state_up            | True                                 | 
    | id                        | 420ff818-64f0-4df1-b1e2-bef4aacf0a25 | 
    | name                      | infrastructure-ext-net               | 
    | provider:network_type     | gre                                  | 
    | provider:physical_network |                                      | 
    | provider:segmentation_id  | 3                                    | 
    | router:external           | True                                 | 
    | shared                    | False                                | 
    | status                    | ACTIVE                               | 
    | subnets                   | 22c287fb-7d9a-40ed-9396-307c047193a3 | 
    | tenant_id                 | 5328058c4d704b4fa589708c566d4d96     |
    +---------------------------+--------------------------------------+
    

    That gives us the OpenStack network UUID, which is what we fed to the Neutron L3 agent. But let’s also take a look at the subnet and the router we defined for our Infrastructure external DMZ network:

    [l.abruce@co1 rc_scripts]$ neutron subnet-show infrastructure-ext-subnet
    +------------------+------------------------------------------------+ 
    | Field            | Value                                          | 
    +------------------+------------------------------------------------+ 
    | allocation_pools | {"start": "172.24.5.2", "end": "172.24.6.254"} | 
    | cidr             | 172.24.4.0/22                                  | 
    | dns_nameservers  | 192.168.1.2                                    | 
    | enable_dhcp      | True                                           | 
    | gateway_ip       | 172.24.4.1                                     | 
    | host_routes      |                                                | 
    | id               | 22c287fb-7d9a-40ed-9396-307c047193a3           | 
    | ip_version       | 4                                              | 
    | name             | infrastructure-ext-subnet                      | 
    | network_id       | 420ff818-64f0-4df1-b1e2-bef4aacf0a25           | 
    | tenant_id        | 5328058c4d704b4fa589708c566d4d96               | 
    +------------------+------------------------------------------------+
    

    And the router:

    [l.abruce@co1 rc_scripts]$ neutron router-show infrastructure-router
    +-----------------------+-----------------------------------------------------------------------------+ 
    | Field                 | Value                                                                       | 
    +-----------------------+-----------------------------------------------------------------------------+ 
    | admin_state_up        | True                                                                        | 
    | external_gateway_info | {"network_id": "420ff818-64f0-4df1-b1e2-bef4aacf0a25", "enable_snat": true} | 
    | id                    | 664e6c15-3c3d-4ade-b6fb-4aef53e7e223                                        | 
    | name                  | infrastructure-router                                                       | 
    | routes                |                                                                             | 
    | status                | ACTIVE                                                                      | 
    | tenant_id             | 5328058c4d704b4fa589708c566d4d96                                            | 
    +-----------------------+-----------------------------------------------------------------------------+
    

    That last output is important; the OpenStack Neutron router ID is what the Neutron L3 agent uses to create a Network Namespace to permit separate NAT rules and external routing for the Infrastructure tenant. Let’s verify that the appropriate namespace was created:

    [root@lvosneutr100 ~]# ip netns list | grep 664e6c15-3c3d-4ade-b6fb-4aef53e7e223
    qrouter-664e6c15-3c3d-4ade-b6fb-4aef53e7e223
    

    That looks good: we have our namespace, so all of our hooks and mappings are working thus far. Let’s now issue a command to verify that the Neutron L3 agent actually created the IP interfaces within that namespace for our router:

    [root@lvosneutr100 ~]# ip netns exec qrouter-664e6c15-3c3d-4ade-b6fb-4aef53e7e223 ip a
    15: qg-7a58687b-fc:  mtu 9000 qdisc noqueue state UNKNOWN 
        link/ether fa:16:3e:d1:e0:19 brd ff:ff:ff:ff:ff:ff 
        inet 172.24.5.2/22 brd 172.24.7.255 scope global qg-7a58687b-fc 
        inet6 fe80::f816:3eff:fed1:e019/64 scope link 
           valid_lft forever preferred_lft forever 
    19: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 
        inet 127.0.0.1/8 scope host lo 
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    

    Take a look at that! We have an actual IP address 172.24.5.2 assigned within the desired isolated Infrastructure VLAN. The Neutron L3 agent has done its job; let’s verify we can ping the IP:

    [l.abruce@co1 rc_scripts]$ ping -c 4 172.24.5.2
    PING 172.24.5.2 (172.24.5.2) 56(84) bytes of data. 
    64 bytes from 172.24.5.2: icmp_seq=1 ttl=63 time=2.59 ms 
    64 bytes from 172.24.5.2: icmp_seq=2 ttl=63 time=1.02 ms 
    64 bytes from 172.24.5.2: icmp_seq=3 ttl=63 time=1.08 ms 
    64 bytes from 172.24.5.2: icmp_seq=4 ttl=63 time=0.838 ms 
    
    --- 172.24.5.2 ping statistics --- 
    4 packets transmitted, 4 received, 0% packet loss, time 3005ms 
    rtt min/avg/max/mdev = 0.838/1.383/2.596/0.706 ms
    

    Success! We have Neutron now handling our new isolated subnet and we can continue our journey.
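
    If you want to peek at the per-tenant NAT rules mentioned earlier, they live inside that same namespace and can be listed with iptables:

    ip netns exec qrouter-664e6c15-3c3d-4ade-b6fb-4aef53e7e223 \
        iptables -t nat -L -n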

    In future articles I’ll cover much more ground on Neutron, GRE networking, troubleshooting, and so on. But we must walk before we can run…Happy Computing!

2 Comments on “OpenStack: Adding External Networks to Neutron with GRE”

  1. Great and perfectly detailed post! I applied it on CentOS 7 with Icehouse and everything worked perfectly.

    Thanks Andrew.

  2. Hi Andrew. Did you experience any issues when a user’s request arrives (e.g., a floating IP request)?

    In my case, both L3 agents listen for qpid messages, and only one message is created in qpid when a user makes a request (for example, assigning a floating IP). That message should be processed by the specific L3 agent that handles that floating IP range, but that doesn’t always happen: the request can be picked up by the other L3 agent, which then fails to apply the change to the router correctly.

    I’m temporarily working around this by restarting the L3 agents, but that’s not a good solution.

    Any other workaround to solve this?

    Thanks
    Miguel.
