AVD example for an MPLS-VPN based WAN Network¶
Introduction¶
This example is the logical second step in introducing AVD to new users, following the Introduction to Ansible and AVD section. New users with access to virtual routers (using Arista vEOS-lab or cEOS) can learn how to generate configuration and documentation for a complete fabric environment. Users with access to physical routers will have to adapt a few settings; this is documented inline in the comments included in the YAML files. If a lab with virtual or physical routers is not accessible, this example can still be used to generate the configuration and documentation output from AVD.
The example includes and describes all the AVD files and their content used to build an MPLS-VPN WAN network covering two sites, using the following:
- Four (virtual) p routers.
- Three (virtual) pe routers serving aggregation devices and CPEs.
- Two (virtual) route reflectors acting as route servers for the WAN.
This example does not include integration with CloudVision, to keep everything as simple as possible. In this case, the Ansible host communicates directly with the routers using eAPI.
Installation¶
Requirements to use this example:
- Follow the installation guide for AVD in the AVD documentation.
- Run the following playbook to copy the examples to your current working directory, for example ansible-avd-examples:
ansible-playbook arista.avd.install_examples
This will show the following:
~/ansible-avd-examples# ansible-playbook arista.avd.install_examples
PLAY [Install Examples]***************************************************************************************************************************************************************************************************************************************************************
TASK [Copy all examples to ~/ansible-avd-examples]*****************************************************************************************************************************************************
changed: [localhost]
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
After the playbook has run successfully, the directory structure will look as shown below, the contents of which will be covered in later sections:
ansible-avd-examples/ (or wherever the playbook was run)
└── isis-ldp-ipvpn
    ├── ansible.cfg
    ├── build.yml
    ├── deploy.yml
    ├── documentation
    ├── group_vars
    ├── images
    ├── intended
    ├── inventory.yml
    ├── README.md
    └── switch-basic-configurations
Info
If the content of any file is modified and the playbook is rerun, the file will not be overwritten. However, if any file in the example is deleted and the playbook is rerun, Ansible will re-create the file.
Overall design overview¶
Physical topology¶
The drawing below shows the physical topology used in this example. The interface assignment shown here is referenced across the entire example, so keep that in mind if this example must be adapted to a different topology. Finally, the Ansible host is connected to the dedicated out-of-band management port (Management1 when using vEOS-lab):
IP ranges used¶
| Out-of-band management IP allocation for WAN1 | 172.16.1.0/24 |
|---|---|
| Default gateway | 172.16.1.1 |
| p1 | 172.16.1.11 |
| p2 | 172.16.1.12 |
| p3 | 172.16.1.13 |
| p4 | 172.16.1.14 |
| pe1 | 172.16.1.101 |
| pe2 | 172.16.1.102 |
| pe3 | 172.16.1.103 |
| rr1 | 172.16.1.151 |
| rr2 | 172.16.1.152 |

| Point-to-point links between network nodes (underlay) | |
|---|---|
| WAN1 | 100.64.48.0/24 |

| Loopback and L3 interface allocation | |
|---|---|
| Loopback0 interfaces for router ID (p) | 10.255.0.0/27 |
| Loopback0 interfaces for overlay peering (pe) | 10.255.1.0/27 |
| Loopback0 interfaces for overlay peering (rr) | 10.255.2.0/27 |
| L3 interfaces | 10.0-1.1.0/24 |
| For example, pe1 Ethernet3.10 has the IP address | 10.0.1.1 |
| For example, pe3 Ethernet4 has the IP address | 10.1.1.9 |
ISIS-LDP design¶
BGP design¶
Basic EOS config¶
Basic connectivity between the Ansible host and the routers must be established before Ansible can be used to push configurations. You must configure the following on all routers:
- A hostname configured purely for ease of understanding.
- An IP-enabled interface - in this example, the dedicated out-of-band management interface is used.
- A username and password with the proper access privileges.
Below is the basic configuration file for p1:
! ansible-avd-examples/isis-ldp-ipvpn/switch-basic-configurations/p1-basic-configuration.txt
! Basic EOS config
!
! Hostname of the device
hostname p1
!
! Configures username and password for the ansible user
username ansible privilege 15 role network-admin secret sha512 $6$7u4j1rkb3VELgcZE$EJt2Qff8kd/TapRoci0XaIZsL4tFzgq1YZBLD9c6f/knXzvcYY0NcMKndZeCv0T268knGKhOEwZAxqKjlMm920
!
! Defines the VRF for MGMT
vrf instance MGMT
!
! Defines the settings for the Management1 interface through which Ansible reaches the device
interface Management1
description oob_management
no shutdown
vrf MGMT
! IP address - must be set uniquely per device
ip address 172.16.1.11/24
!
! Static default route for VRF MGMT
ip route vrf MGMT 0.0.0.0/0 172.16.1.1
!
! Enables API access in VRF MGMT
management api http-commands
protocol https
no shutdown
!
vrf MGMT
no shutdown
!
end
!
! Save configuration to flash
copy running-config startup-config
Note
The folder isis-ldp-ipvpn/switch-basic-configurations/ contains a file per device for the initial configurations.
Ansible inventory, group vars, and naming scheme¶
The following drawing shows a graphic overview of the Ansible inventory, group variables, and naming scheme used in this example:
Note
The CPEs and aggregation nodes are not configured by AVD, but the ports used to connect to them are.
Group names use uppercase and underscore syntax:
- FABRIC
- WAN1
- WAN1_P_ROUTERS
- WAN1_PE_ROUTERS
- WAN1_RR_ROUTERS
All hostnames use lowercase, for example:
- p4
- pe1
- rr2
The drawing also shows the relationships between groups and their children:
- For example, p1, p2, p3, and p4 are all children of the group called WAN1_P_ROUTERS.
Additionally, groups themselves can be children of another group, for example:
- WAN1_P_ROUTERS is a child of the group WAN1.
- WAN1_PE_ROUTERS is a child of both WAN1 and NETWORK_SERVICES.
This naming convention makes it possible to extend anything easily, but as always, this can be changed based on your preferences. Just ensure that the names of all groups and hosts are unique.
Content of the inventory.yml file¶
This section describes the entire ansible-avd-examples/isis-ldp-ipvpn/inventory.yml file used to represent the above topology.
The hostnames specified in the inventory must exist either in DNS or in the hosts file on your Ansible host to allow successful name lookup and to reach the routers directly. A successful ping from the Ansible host to each inventory host verifies name resolution (e.g., ping p1).
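If you rely on the hosts file instead of DNS, a sketch of the /etc/hosts entries on the Ansible host, using the management IPs from the table above, could look like this:
# /etc/hosts (excerpt on the Ansible host)
172.16.1.11   p1
172.16.1.12   p2
172.16.1.13   p3
172.16.1.14   p4
172.16.1.101  pe1
172.16.1.102  pe2
172.16.1.103  pe3
172.16.1.151  rr1
172.16.1.152  rr2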
Alternatively, if there is no DNS available, or if devices need to be reached using a fully qualified domain name (FQDN), define ansible_host as an IP address or FQDN for each device - see below for an example:
---
all:
children:
FABRIC:
children:
WAN1:
children:
WAN1_P_ROUTERS:
hosts:
p1:
ansible_host: 172.16.1.11
p2:
ansible_host: 172.16.1.12
p3:
ansible_host: 172.16.1.13
p4:
ansible_host: 172.16.1.14
WAN1_PE_ROUTERS:
hosts:
pe1:
ansible_host: 172.16.1.101
pe2:
ansible_host: 172.16.1.102
pe3:
ansible_host: 172.16.1.103
WAN1_RR_ROUTERS:
hosts:
rr1:
ansible_host: 172.16.1.151
rr2:
ansible_host: 172.16.1.152
NETWORK_SERVICES:
children:
WAN1_PE_ROUTERS:
The above is included in this example purely to make it as simple as possible. However, please do not carry this practice over to a production environment, where an inventory file for an identical topology should look as follows when using DNS:
---
all:
children:
FABRIC:
children:
WAN1:
children:
WAN1_P_ROUTERS:
hosts:
p1:
p2:
p3:
p4:
WAN1_PE_ROUTERS:
hosts:
pe1:
pe2:
pe3:
WAN1_RR_ROUTERS:
hosts:
rr1:
rr2:
NETWORK_SERVICES:
children:
WAN1_PE_ROUTERS:
NETWORK_SERVICES
- Creates a group named NETWORK_SERVICES. Ansible variable resolution resolves this group name to the identically named group_vars file (ansible-avd-examples/isis-ldp-ipvpn/group_vars/NETWORK_SERVICES.yml).
- The file's contents specify tenant VRFs and their associated routed interfaces, BGP peers, and OSPF interfaces, which are applied to the group's children - in this case, the group WAN1_PE_ROUTERS.
Defining device types¶
Since this example covers building an MPLS WAN network, AVD must know about the device types, for example, p, pe, and rr routers. The devices are already grouped in the inventory, so the device types are specified in the group variable files with the following names and content. For example, all routers that are children of the WAN1_P_ROUTERS group defined in the inventory will be of type p.
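A minimal sketch of these three group_vars files, assuming AVD's standard type key for the mpls design:
# ansible-avd-examples/isis-ldp-ipvpn/group_vars/WAN1_P_ROUTERS.yml
---
type: p

# ansible-avd-examples/isis-ldp-ipvpn/group_vars/WAN1_PE_ROUTERS.yml
---
type: pe

# ansible-avd-examples/isis-ldp-ipvpn/group_vars/WAN1_RR_ROUTERS.yml
---
type: rr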
Setting fabric-wide configuration parameters¶
The ansible-avd-examples/isis-ldp-ipvpn/group_vars/FABRIC.yml file defines generic settings that apply to all children of the FABRIC group as specified in the inventory described earlier.
The first section defines how the Ansible host connects to the devices:
ansible_connection: ansible.netcommon.httpapi
ansible_network_os: arista.eos.eos
ansible_user: ansible
ansible_password: ansible
ansible_become: true
ansible_become_method: enable
ansible_httpapi_use_ssl: true
ansible_httpapi_validate_certs: false
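With these connection settings in place, basic eAPI reachability can be sanity-checked with an ad-hoc command before running the full playbooks - a sketch, assuming the inventory file shown earlier:
ansible -i inventory.yml FABRIC -m arista.eos.eos_command -a "commands='show version'"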
The following section specifies variables that generate configuration to be applied to all devices in the fabric:
fabric_name: FABRIC
underlay_routing_protocol: isis-ldp
overlay_routing_protocol: ibgp
local_users:
- name: ansible
privilege: 15
role: network-admin
sha512_password: $6$QJUtFkyu9yoecsq.$ysGzlb2YXaIMvezqGEna7RE8CMALJHnv7Q1i.27VygyKUtSeX.n2xRTyOtCR8eOAl.4imBLyhXFc4o97P5n071
- name: admin
privilege: 15
role: network-admin
no_password: true
bgp_peer_groups:
mpls_overlay_peers:
password: Q4fqtbqcZ7oQuKfuWtNGRQ==
p2p_uplinks_mtu: 1500
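As a rough sketch of how these variables surface in the generated EOS configuration (the peer-group name MPLS-OVERLAY-PEERS is AVD's default for mpls_overlay_peers, and the BGP AS number below is a placeholder - it is defined elsewhere in the design):
username admin privilege 15 role network-admin nopassword
username ansible privilege 15 role network-admin secret sha512 $6$QJUtFkyu9yoecsq.$ysGzlb2YXaIMvezqGEna7RE8CMALJHnv7Q1i.27VygyKUtSeX.n2xRTyOtCR8eOAl.4imBLyhXFc4o97P5n071
!
router bgp 65000
   neighbor MPLS-OVERLAY-PEERS peer group
   neighbor MPLS-OVERLAY-PEERS password 7 Q4fqtbqcZ7oQuKfuWtNGRQ==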
Setting device-specific configuration parameters¶
The ansible-avd-examples/isis-ldp-ipvpn/group_vars/WAN1.yml file defines settings that apply to all children of the WAN1 group as specified in the inventory described earlier. However, this time the settings defined are no longer fabric-wide but are limited to WAN1. This is of limited benefit with only a single WAN; still, it allows the configuration to scale to a scenario with multiple WANs in the future.
---
mgmt_gateway: 172.16.1.1
p:
defaults:
platform: vEOS-lab
loopback_ipv4_pool: 10.255.0.0/27
nodes:
- name: p1
id: 1
mgmt_ip: 172.16.1.11/24
- name: p2
id: 2
mgmt_ip: 172.16.1.12/24
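For orientation, AVD derives each node's Loopback0 address from loopback_ipv4_pool plus the node id (assuming the default loopback_ipv4_offset of 0), so p1 with id 1 should end up with:
interface Loopback0
   ip address 10.255.0.1/32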
The following section covers the pe routers, which require significantly more settings than the p routers:
# PE router group
pe:
defaults:
platform: vEOS-lab
loopback_ipv4_pool: 10.255.1.0/27
virtual_router_mac_address: 00:1c:73:00:dc:00
mpls_route_reflectors: [ rr1, rr2 ]
isis_system_id_prefix: '0000.0001'
spanning_tree_mode: none
node_groups:
- group: WAN1-PE1-2
nodes:
- name: pe1
id: 1
mgmt_ip: 172.16.1.101/24
- name: pe2
id: 2
mgmt_ip: 172.16.1.102/24
- group: WAN1-PE3
nodes:
- name: pe3
id: 3
mgmt_ip: 172.16.1.103/24
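A note on isis_system_id_prefix: AVD combines the prefix with the zero-padded node id to form the IS-IS system ID, so pe1 (id 1) becomes 0000.0001.0001. A hedged sketch of the resulting NET, assuming AVD's default isis_area_id of 49.0001 (the instance name shown is also an assumption):
router isis CORE
   net 49.0001.0000.0001.0001.00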
Finally, more of the same, but this time for the rr routers:
rr:
defaults:
platform: vEOS-lab
loopback_ipv4_pool: 10.255.2.0/27
mpls_route_reflectors: [ rr1, rr2 ]
isis_system_id_prefix: '0000.0002'
spanning_tree_mode: none
node_groups:
- group: WAN1_RR1-2
nodes:
- name: rr1
id: 1
mgmt_ip: 172.16.1.151/24
- name: rr2
id: 2
mgmt_ip: 172.16.1.152/24
Defining underlay connectivity between network nodes¶
A free-standing core_interfaces section, with its associated profiles and IP pools, defines the underlay connectivity between the nodes.
core_interfaces:
p2p_links_ip_pools:
- name: core_pool
ipv4_pool: 100.64.48.0/24
p2p_links_profiles:
- name: core_profile
mtu: 1500
isis_metric: 50
ip_pool: core_pool
isis_circuit_type: level-2
isis_authentication_mode: md5
isis_authentication_key: $1c$sTNAlR6rKSw=
p2p_links:
- nodes: [ pe1, p1 ]
interfaces: [ Ethernet1, Ethernet1 ]
profile: core_profile
id: 1
- nodes: [ pe1, p2 ]
interfaces: [ Ethernet2, Ethernet2 ]
profile: core_profile
id: 2
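Assuming AVD's default prefix_size of 31 for p2p_links_ip_pools, each link id takes the next /31 from core_pool, with the first node listed getting the lower address - a worked sketch:
id 1 (pe1-p1): 100.64.48.0/31 -> pe1 Ethernet1 = 100.64.48.0, p1 Ethernet1 = 100.64.48.1
id 2 (pe1-p2): 100.64.48.2/31 -> pe1 Ethernet2 = 100.64.48.2, p2 Ethernet2 = 100.64.48.3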
Specifying network services (VRFs and routed interfaces) and endpoint connectivity in the VPN-IPv4 fabric¶
---
tenants:
# Definition of tenants. Additional level of abstraction to VRFs
- name: CUSTOMER1
vrfs:
# VRF definitions inside the tenant.
- name: C1_VRF1
# VRF ID definition.
vrf_id: 10
# Select address families for the VRF.
address_families:
- vpn-ipv4
# Enable OSPF on selected PEs in the VRF.
ospf:
enabled: true
nodes:
- pe1
- pe2
- pe3
l3_interfaces:
# L3 interfaces
- interfaces: [ Ethernet3.10, Ethernet4.10, Ethernet2 ]
nodes: [ pe1, pe2, pe3 ]
description: C1_L3_SERVICE
enabled: true
ip_addresses: [ 10.0.1.1/29, 10.0.1.2/29, 10.0.1.9/30 ]
# Enable OSPF on the interfaces.
ospf:
enabled: true
- name: CUSTOMER2
vrfs:
- name: C2_VRF1
vrf_id: 20
address_families:
- vpn-ipv4
l3_interfaces:
# L3 interfaces
- interfaces: [ Ethernet3.20, Ethernet4.20, Ethernet4 ]
nodes: [ pe1, pe2, pe3 ]
description: C2_L3_SERVICE
enabled: true
ip_addresses: [ 10.1.1.1/29, 10.1.1.2/29, 10.1.1.9/30 ]
# Define BGP peers inside the VRF.
bgp_peers:
- ip_address: 10.1.1.3
remote_as: 65123
description: C2_ROUTER1
send_community: standard
maximum_routes: 100
nodes: [ pe1, pe2 ]
- ip_address: 10.1.1.10
remote_as: 65124
description: C2_ROUTER2
send_community: standard
maximum_routes: 100
nodes: [ pe3 ]
All tenant VRFs and routed interfaces for endpoint connectivity in the network are defined here.
Two tenants called CUSTOMER1 and CUSTOMER2 are specified. Each tenant has a single VRF defined, and under those VRFs we define the routed interfaces, tenant (PE-CE) routing protocols, and address families in use:
- name: C1_VRF1
vrf_id: 10
address_families:
- vpn-ipv4
ospf:
enabled: true
nodes:
- pe1
- pe2
- pe3
l3_interfaces:
- interfaces: [ Ethernet3.10, Ethernet4.10, Ethernet2 ]
nodes: [ pe1, pe2, pe3 ]
description: C1_L3_SERVICE
enabled: true
ip_addresses: [ 10.0.1.1/29, 10.0.1.2/29, 10.0.1.9/30 ]
ospf:
enabled: true
This defines C1_VRF1 with a VRF ID of 10, enables OSPF routing for PE-CE connections inside the VRF on selected pe routers, and defines the routed interfaces used to connect to the CE devices/aggregation nodes. Each interface has an IP address assigned, a description, and OSPF routing enabled.
The lists of interfaces, nodes, and ip_addresses used in the above definition of the L3 interfaces are read by the Ansible logic as follows: interface Ethernet3.10 belongs to the node pe1 and has the IP address 10.0.1.1/29. In other words, the list indices are used to form the basic parameters for one interface.
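Laid out by index, the three lists from the CUSTOMER1 definition line up as follows:
interfaces:   [ Ethernet3.10,  Ethernet4.10,  Ethernet2   ]
nodes:        [ pe1,           pe2,           pe3         ]
ip_addresses: [ 10.0.1.1/29,   10.0.1.2/29,   10.0.1.9/30 ]

index 0: pe1 Ethernet3.10 -> 10.0.1.1/29
index 1: pe2 Ethernet4.10 -> 10.0.1.2/29
index 2: pe3 Ethernet2    -> 10.0.1.9/30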
The playbook¶
In this example, the deploy playbook looks like the following:
---
# Please note, comments below are intended for site documentation only
- name: Run AVD
hosts: FABRIC
gather_facts: false
tasks:
- name: Generate AVD Structured Configurations and Fabric Documentation
ansible.builtin.import_role:
name: arista.avd.eos_designs
- name: Generate Device Configurations and Documentation
ansible.builtin.import_role:
name: arista.avd.eos_cli_config_gen
- name: Deploy Configurations to Devices
ansible.builtin.import_role:
name: arista.avd.eos_config_deploy_eapi
Testing AVD output without a lab¶
Example of using the build playbook without devices (local tasks):
---
# Please note, comments below are intended for site documentation only
- name: Run AVD
hosts: FABRIC
gather_facts: false
tasks:
- name: Generate AVD Structured Configurations and Fabric Documentation
ansible.builtin.import_role:
name: arista.avd.eos_designs
- name: Generate Device Configurations and Documentation
ansible.builtin.import_role:
name: arista.avd.eos_cli_config_gen
The build playbook will generate all of the output (variables, configurations, documentation) but will not attempt to communicate with any devices.
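To generate everything locally, run the build playbook (saved as build.yml in the example directory, as shown in the directory listing earlier):
user@ubuntu:~/isis-ldp-ipvpn$ ansible-playbook build.yml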
Please look through the folders and files described above to learn more about the output generated by AVD.
Executing the playbook¶
The execution of the playbook should produce the following output:
user@ubuntu:~/isis-ldp-ipvpn$ ansible-playbook deploy.yml
PLAY [Run AVD] *****************************************************************************************************************************************************************************
TASK [arista.avd.eos_designs : Collection arista.avd version 3.5.0 loaded from /home/user/.ansible/collections/ansible_collections] ******************************************************
ok: [p1]
TASK [arista.avd.eos_designs : Create required output directories if not present] **********************************************************************************************************
ok: [p1 -> localhost] => (item=/home/user/Documents/git_projects/ansible-avd-examples/isis-ldp-ipvpn/intended/structured_configs)
ok: [p1 -> localhost] => (item=/home/user/Documents/git_projects/ansible-avd-examples/isis-ldp-ipvpn/documentation/fabric)
(...)
If similar output is not shown, make sure:
- The documented requirements are met.
- The latest arista.avd collection is installed.
Troubleshooting¶
VPN-IPv4 Overlay not working¶
If, after performing the following steps:
- Manually copy/paste the switch basic configuration to the devices.
- Run the playbook and push the generated configuration to the fabric.
- Log in to a pe or rr device (for example, pe1) and run the command show bgp vpn-ipv4 summary to view VPN routes.
the following error message is shown:
This is caused by AVD pushing the configuration line service routing protocols model multi-agent, which enables the multi-agent routing process that supports VPN-IPv4 and EVPN. This change requires a reboot of the device.
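A sketch of how to confirm and clear the condition on an affected device (prompts and output abbreviated):
pe1# show running-config | include routing protocols model
service routing protocols model multi-agent
pe1# write
pe1# reload
After the reload, show bgp vpn-ipv4 summary should return the expected peerings.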
VPN-IPv4 Overlay in Arista Cloud Test (ACT)¶
Suppose you are running this lab in the Arista Cloud Test service and the overlay services are not working (no connectivity from CPE to CPE) after performing the steps mentioned above. In that case, you may need to change the default forwarding engine of the vEOS nodes.
Add the following line to the starting configurations for each node:
Currently, this command must be entered manually in the device configurations before trying to push the configuration with AVD. After you have entered it manually on each node, add the following YAML to group_vars/WAN1.yml and run the deployment playbook:
Retest the services. They should now work, provided the CPEs and aggregation nodes are correctly configured.