Example for L2LS Fabric
Introduction
This example includes and describes all the AVD files used to build a Layer 2 Leaf Spine (L2LS) fabric with the following nodes:
- Two spine nodes
- Four leaf nodes
The network fabric in this example is layer 2; an external firewall (FW) or layer 3 (L3) device will handle routing. Later in this example, we will discuss adding L3 routing to the spines, but first, we will focus on defining the fabric variables to build this L2LS topology. Before we start, we must ensure AVD is installed along with the requirements covered in the Installation & Requirements section.
This example is meant as a starting foundation from which you can build more advanced fabrics. To keep things simple, the Arista eAPI will be used to communicate with the switches.
Info
The configurations may also be applied with CloudVision with a few updates to your playbook and Ansible variables.
Installation & Requirements
- Install AVD - Installation guide found here.
- Install Ansible module requirements - Instructions found here.
- Run the following playbook to copy the Getting Started examples to your working directory.
The output will show something similar to the following. If not, please ensure that AVD and all requirements are correctly installed.
~/ansible-avd-examples# ansible-playbook arista.avd.install_examples
PLAY [Install Examples] *********************************************************
TASK [Copy all examples to ~/ansible-avd-examples] ******************************
changed: [localhost]
PLAY RECAP **********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
After the playbook has run successfully, the following directory structure will be created.
ansible-avd-examples/ (directory where playbook was run)
└── l2ls-fabric/
    ├── documentation/
    ├── group_vars/
    ├── images/
    ├── intended/
    ├── switch-basic-configurations/
    ├── ansible.cfg
    ├── build.yml
    ├── deploy.yml
    ├── inventory.yml
    └── README.md (this document)
Info
If the content of any file in the example is modified and the playbook is rerun, the file will not be overwritten. However, if any file in the example is deleted and the playbook is rerun, the file will be re-created.
Design Overview
Physical L2LS Topology
The drawing below shows the physical topology used in this example. The interface assignments shown here are referenced across the entire example, so keep that in mind if you adapt this example to a different topology.
Note
In this example, the FW/L3 Device and individual hosts (A-D) are not managed by AVD, but the switch ports connecting to these devices are.
Basic EOS Switch Configuration
Basic connectivity between the Ansible controller host and the switches must be established before Ansible can be used to deploy configurations. The following should be configured on all switches:
- Switch Hostname
- IP enabled interface
- Username and Password defined
- Management eAPI enabled
Info
When using vEOS/cEOS virtual switches, Management0 or Management1 is used. When using hardware switches, Management1 is used. The included basic switch configurations may need to be adjusted for your environment.
Below is the basic configuration file for SPINE1:
!
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$eucN5ngreuExDgwS$xnD7T8jO..GBDX0DUlp.hn.W7yW94xTjSanqgaQGBzPIhDAsyAl9N4oScHvOMvf07uVBFI4mKMxwdVEUVKgY/.
!
hostname SPINE1
!
vrf instance MGMT
!
management api http-commands
no shutdown
!
vrf MGMT
no shutdown
!
interface Management0
vrf MGMT
ip address 172.100.100.101/24
!
ip routing
no ip routing vrf MGMT
!
ip route vrf MGMT 0.0.0.0/0 172.100.100.1
!
management ssh
vrf MGMT
no shutdown
!
Ansible Inventory
Now that we understand the physical L2LS topology, we must create the Ansible inventory that represents this topology. The following is a textual and graphical representation of the Ansible inventory group variables and naming scheme used in this example:
- DC1
- DC1_FABRIC
- DC1_SPINES
- DC1_LEAFS
- DC1_NETWORK_SERVICES
- DC1_SPINES
- DC1_LEAFS
- DC1_NETWORK_PORTS
- DC1_SPINES
- DC1_LEAFS
DC1 represents the highest level within the hierarchy. Ansible variables defined at this level will be applied to all nodes in the fabric. Ansible groups have parent-and-child relationships. For example, both DC1_SPINES and DC1_LEAFS are children of DC1_FABRIC. Groups of groups are possible and allow variables to be shared at any level within the hierarchy. For example, DC1_NETWORK_SERVICES is a group with two other groups defined as children: DC1_SPINES and DC1_LEAFS. The same applies to the group named DC1_NETWORK_PORTS. You will see these groups listed at the bottom of the inventory file.
This naming convention makes it easy to extend the fabric later, but you can change it to suit your preferences. The names of all groups and hosts must be unique.
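You can verify the resulting hierarchy at any time with ansible-inventory. A quick check (assuming the inventory.yml from this example is in the current directory):

```shell
# Print the group/host tree that Ansible derives from the inventory
ansible-inventory -i inventory.yml --graph
```

The output should show DC1_SPINES and DC1_LEAFS appearing as children under DC1_FABRIC, DC1_NETWORK_SERVICES, and DC1_NETWORK_PORTS.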
inventory.yml
The inventory file below represents two spines and four leafs. The nodes are defined under the groups DC1_SPINES and DC1_LEAFS, respectively. We apply group variables (group_vars) to these groups to define their functionality and configurations.
The hostnames specified in the inventory must exist either in DNS or in the hosts file on your Ansible host to allow successful name lookup and to reach the switches directly. A successful ping from the Ansible host to each inventory host verifies name resolution (e.g., ping SPINE1).
Alternatively, if DNS is unavailable, define the ansible_host variable as an IP address for each device.
# inventory.yml
DC1:
children:
DC1_FABRIC:
children:
DC1_SPINES:
hosts:
SPINE1:
ansible_host: 172.100.100.101
SPINE2:
ansible_host: 172.100.100.102
DC1_LEAFS:
hosts:
LEAF1:
ansible_host: 172.100.100.105
LEAF2:
ansible_host: 172.100.100.106
LEAF3:
ansible_host: 172.100.100.107
LEAF4:
ansible_host: 172.100.100.108
DC1_NETWORK_SERVICES:
children:
DC1_LEAFS:
DC1_SPINES:
DC1_NETWORK_PORTS:
children:
DC1_LEAFS:
DC1_SPINES:
AVD Fabric Variables
To apply AVD variables to the nodes in the fabric, we make use of Ansible group_vars. How and where you define the variables is your choice. The group_vars table below is one example of AVD fabric variables.
| group_vars/ | Description |
| ----------- | ----------- |
| DC1.yml | Global settings for all devices |
| DC1_FABRIC.yml | Fabric, Topology, and Device settings |
| DC1_SPINES.yml | Device type for spines |
| DC1_LEAFS.yml | Device type for leafs |
| DC1_NETWORK_SERVICES.yml | VLANs |
| DC1_NETWORK_PORTS.yml | Port Profiles and Connected Endpoint settings |
The tabs below show the Ansible group_vars used in this example.
At the top level (DC1), the following variables are defined in group_vars/DC1.yml. These Ansible variables apply to all fabric nodes and are a common place to set AAA, users, NTP, and management interface settings. Update local_users and passwords for your environment.
You can create a sha512_password by configuring a username and password on a switch and then copying the generated hash from the running-config into double quotes here.
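For example, on any EOS switch (the cleartext password below is a placeholder; EOS hashes it for you, and the resulting sha512 string is what you paste into the variable):

```
SPINE1# configure
SPINE1(config)# username admin privilege 15 role network-admin secret MyP@ssw0rd
SPINE1(config)# show running-config section username
username admin privilege 15 role network-admin secret sha512 $6$...
```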
---
### group_vars/DC1.yml
aaa_authentication:
policies:
local:
allow_nopassword: true
# local users
local_users:
# Username with no password configured
- name: arista
privilege: 15
role: network-admin
no_password: true
# Username with a password
- name: admin
privilege: 15
role: network-admin
sha512_password: "$6$eucN5ngreuExDgwS$xnD7T8jO..GBDX0DUlp.hn.W7yW94xTjSanqgaQGBzPIhDAsyAl9N4oScHvOMvf07uVBFI4mKMxwdVEUVKgY/."
# OOB Management network default gateway
mgmt_gateway: 172.100.100.1
mgmt_interface: Management0
# dns servers.
name_servers:
- 8.8.4.4
- 8.8.8.8
# NTP Servers IP or DNS name, first NTP server will be preferred, and sourced from Management VRF
ntp:
servers:
- name: time.google.com
preferred: true
vrf: MGMT
- name: pool.ntp.org
vrf: MGMT
# Establish exec/enable role when logging in to switch
aaa_authorization:
exec:
default: local
At the fabric level (DC1_FABRIC), the following variables are defined in group_vars/DC1_FABRIC.yml: the fabric name, design type (l2ls), spine and leaf defaults, Ansible authentication, and interface links. Other variables you must supply include the spanning-tree mode, priorities, and an MLAG IP pool.
Variables applied under a node type's (spine/leaf) defaults section are inherited by all nodes of that type. These variables may be overridden under the node itself.
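For instance, a value set on a node wins over the type defaults. A hypothetical sketch (the platform override below is illustrative only and is not part of this example):

```yaml
spine:
  defaults:
    platform: cEOS-LAB        # inherited by every spine
  node_groups:
    - group: SPINES
      nodes:
        - name: SPINE1
          id: 1
          platform: 7050SX3   # hypothetical override; replaces the inherited value for SPINE1 only
```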
The spine interface used by a particular leaf is defined from the leaf's perspective with a variable called uplink_switch_interfaces. For example, LEAF2 has a unique variable uplink_switch_interfaces: [Ethernet2, Ethernet2] defined. This means that LEAF2 is connected to SPINE1's Ethernet2 and SPINE2's Ethernet2 interfaces.
---
### group_vars/DC1_FABRIC.yml
# Set the Fabric Name - must match an Ansible Inventory Group
fabric_name: DC1_FABRIC
# Set Design Type to l2ls
design:
type: l2ls
# Ansible connectivity definitions
# eAPI connectivity via HTTPS is specified (as opposed to CLI via SSH)
ansible_connection: ansible.netcommon.httpapi
# Specifies that we are indeed using Arista EOS
ansible_network_os: arista.eos.eos
# This user/password must exist on the switches to enable Ansible access
ansible_user: admin
ansible_password: admin
# User escalation (to enter enable mode)
ansible_become: true
ansible_become_method: enable
# Use SSL (HTTPS)
ansible_httpapi_use_ssl: true
# Do not try to validate certs
ansible_httpapi_validate_certs: false
# Spine Switches (L2 only)
spine:
defaults:
platform: cEOS-LAB
spanning_tree_mode: mstp
spanning_tree_priority: 4096
mlag_peer_ipv4_pool: 192.168.0.0/24
mlag_interfaces: [Ethernet47, Ethernet48]
node_groups:
- group: SPINES
nodes:
- name: SPINE1
id: 1
mgmt_ip: 172.100.100.101/24
- name: SPINE2
id: 2
mgmt_ip: 172.100.100.102/24
# Leaf Switches
leaf:
defaults:
platform: cEOS-LAB
mlag_peer_ipv4_pool: 192.168.0.0/24
uplink_switches: [SPINE1, SPINE2]
uplink_interfaces: [Ethernet1, Ethernet2]
mlag_interfaces: [Ethernet47, Ethernet48]
spanning_tree_mode: mstp
spanning_tree_priority: 16384
node_groups:
- group: RACK1
mlag: true
filter:
tags: [bluezone, greenzone]
nodes:
- name: LEAF1
id: 1
mgmt_ip: 172.100.100.105/24
uplink_switch_interfaces: [Ethernet1, Ethernet1]
- name: LEAF2
id: 2
mgmt_ip: 172.100.100.106/24
uplink_switch_interfaces: [Ethernet2, Ethernet2]
- group: RACK2
mlag: true
filter:
tags: [bluezone, orangezone]
nodes:
- name: LEAF3
id: 3
mgmt_ip: 172.100.100.107/24
uplink_switch_interfaces: [Ethernet3, Ethernet3]
- name: LEAF4
id: 4
mgmt_ip: 172.100.100.108/24
uplink_switch_interfaces: [Ethernet4, Ethernet4]
#### Override for vEOS/cEOS Lab Caveats ####
p2p_uplinks_mtu: 1500
# Documentation
eos_designs_documentation:
connected_endpoints: true
In an L2LS design, there are two types of spine nodes: spine and l3spine. In AVD, the node type defines the functionality and the EOS CLI configuration to be generated. For an L2LS design, we will use node type spine. Later, we will add routing to the spines by changing the node type to l3spine.
In an L2LS design, there is one type of leaf node: leaf.
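The two device-type files referenced in the group_vars table can be as small as a single key. A sketch consistent with the types described above:

```yaml
---
### group_vars/DC1_SPINES.yml
type: spine
```

```yaml
---
### group_vars/DC1_LEAFS.yml
type: leaf
```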
You add VLANs to the fabric by updating group_vars/DC1_NETWORK_SERVICES.yml. Each VLAN is given a name and a list of tags. The tags filter the VLAN to specific leaf pairs. These variables are applied to the spine and leaf nodes since they are part of this group.
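The file is not reproduced in full here; below is a sketch of what it might contain, using the VLAN IDs, names, and tags that appear in the generated configurations and AVD's tenants/l2vlans model (the tenant name MY_FABRIC matches the one used later in the routing section):

```yaml
---
### group_vars/DC1_NETWORK_SERVICES.yml
tenants:
  - name: MY_FABRIC
    l2vlans:
      - id: 10
        name: 'BLUE-NET'
        tags: [bluezone]
      - id: 20
        name: 'GREEN-NET'
        tags: [greenzone]
      - id: 30
        name: 'ORANGE-NET'
        tags: [orangezone]
```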
Our fabric would not be complete without connecting some devices to it. We define connected endpoints and port profiles in group_vars/DC1_NETWORK_PORTS.yml. Each endpoint's adapter defines which switch port(s) and port profile to use. In our example, we have four hosts and a firewall connected to the fabric. The connected endpoints keys are used for logical separation and apply to interface descriptions. These variables are applied to the spine and leaf nodes since they are part of this inventory group.
---
### group_vars/DC1_NETWORK_PORTS.yml
connected_endpoints_keys:
- key: servers
type: server
- key: firewalls
type: firewall
- key: routers
type: router
port_profiles:
- profile: PP-DEFAULTS
spanning_tree_portfast: edge
- profile: PP-BLUE
mode: access
vlans: "10"
parent_profile: PP-DEFAULTS
- profile: PP-GREEN
mode: access
vlans: "20"
parent_profile: PP-DEFAULTS
- profile: PP-ORANGE
mode: access
vlans: "30"
parent_profile: PP-DEFAULTS
- profile: PP-FIREWALL
mode: trunk
vlans: "10,20,30"
servers:
- name: HostA
rack: POD1
adapters:
- endpoint_ports: [Eth1]
switch_ports: [Ethernet3]
switches: [LEAF1]
profile: PP-BLUE
- name: HostB
rack: POD1
adapters:
- endpoint_ports: [Eth1]
switch_ports: [Ethernet3]
switches: [LEAF2]
profile: PP-GREEN
- name: HostC
rack: POD2
adapters:
- endpoint_ports: [Eth1]
switch_ports: [Ethernet3]
switches: [LEAF3]
profile: PP-BLUE
- name: HostD
rack: POD2
adapters:
- endpoint_ports: [Eth1]
switch_ports: [Ethernet3]
switches: [LEAF4]
profile: PP-ORANGE
firewalls:
- name: FIREWALL
adapters:
- endpoint_ports: [Eth1, Eth2]
switch_ports: [Ethernet5, Ethernet5]
switches: [SPINE1, SPINE2]
profile: PP-FIREWALL
port_channel:
mode: active
The Playbooks
Now that we have defined all of our Ansible variables (AVD inputs), it is time to generate some configs. To keep things simple, we provide two playbooks. The first playbook builds the EOS CLI intended configurations so you can review them per device. The second playbook has an additional task to deploy the configurations to your switches. The playbooks are provided in the tabs below. Each playbook is straightforward: it imports two AVD roles, eos_designs and eos_cli_config_gen, which do all the heavy lifting. Combining these two roles produces recommended configurations that follow Arista Design Guides.
---
# build.yml
- name: Build Configs
hosts: DC1_FABRIC
gather_facts: false
tasks:
- name: Generate AVD Structured Configurations and Fabric Documentation
ansible.builtin.import_role:
name: arista.avd.eos_designs
- name: Generate Device Configurations and Documentation
ansible.builtin.import_role:
name: arista.avd.eos_cli_config_gen
---
# deploy.yml
- name: Build and Deploy Configs
hosts: DC1_FABRIC
gather_facts: false
tasks:
- name: Generate AVD Structured Configurations and Fabric Documentation
ansible.builtin.import_role:
name: arista.avd.eos_designs
- name: Generate Device Configurations and Documentation
ansible.builtin.import_role:
name: arista.avd.eos_cli_config_gen
- name: Deploy Configurations to Devices
ansible.builtin.import_role:
name: arista.avd.eos_config_deploy_eapi
Playbook Run
To build the configuration files, run the playbook called build.yml. After the playbook run finishes, the EOS CLI intended configuration files are written to intended/configs.
To build and deploy the configurations to your switches, run the playbook called deploy.yml. This assumes that your Ansible host has access and authentication rights to the switches. Those authentication variables were defined in DC1_FABRIC.yml.
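Assuming the included ansible.cfg points Ansible at inventory.yml (otherwise pass -i inventory.yml explicitly), the two runs look like this:

```shell
# Generate intended configurations and documentation only
ansible-playbook build.yml

# Generate the configurations and push them to the switches via eAPI
ansible-playbook deploy.yml
```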
EOS Intended Configurations
Your configuration files should be similar to these.
!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname SPINE1
ip name-server vrf MGMT 8.8.4.4
ip name-server vrf MGMT 8.8.8.8
!
ntp server vrf MGMT pool.ntp.org
ntp server vrf MGMT time.google.com prefer
!
spanning-tree mode mstp
no spanning-tree vlan-id 4094
spanning-tree mst 0 priority 4096
!
aaa authentication policy local allow-nopassword-remote-login
aaa authorization exec default local
!
no enable password
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$eucN5ngreuExDgwS$xnD7T8jO..GBDX0DUlp.hn.W7yW94xTjSanqgaQGBzPIhDAsyAl9N4oScHvOMvf07uVBFI4mKMxwdVEUVKgY/.
username arista privilege 15 role network-admin nopassword
!
vlan 10
name BLUE-NET
!
vlan 20
name GREEN-NET
!
vlan 30
name ORANGE-NET
!
vlan 4094
name MLAG_PEER
trunk group MLAG
!
vrf instance MGMT
!
interface Port-Channel1
description RACK1_Po1
no shutdown
switchport
switchport trunk allowed vlan 10,20
switchport mode trunk
mlag 1
!
interface Port-Channel3
description RACK2_Po1
no shutdown
switchport
switchport trunk allowed vlan 10,30
switchport mode trunk
mlag 3
!
interface Port-Channel5
description FIREWALL
no shutdown
switchport
switchport trunk allowed vlan 10,20,30
switchport mode trunk
mlag 5
!
interface Port-Channel47
description MLAG_PEER_SPINE2_Po47
no shutdown
switchport
switchport mode trunk
switchport trunk group MLAG
!
interface Ethernet1
description LEAF1_Ethernet1
no shutdown
channel-group 1 mode active
!
interface Ethernet2
description LEAF2_Ethernet1
no shutdown
channel-group 1 mode active
!
interface Ethernet3
description LEAF3_Ethernet1
no shutdown
channel-group 3 mode active
!
interface Ethernet4
description LEAF4_Ethernet1
no shutdown
channel-group 3 mode active
!
interface Ethernet5
description FIREWALL_Eth1
no shutdown
channel-group 5 mode active
!
interface Ethernet47
description MLAG_PEER_SPINE2_Ethernet47
no shutdown
channel-group 47 mode active
!
interface Ethernet48
description MLAG_PEER_SPINE2_Ethernet48
no shutdown
channel-group 47 mode active
!
interface Management0
description oob_management
no shutdown
vrf MGMT
ip address 172.100.100.101/24
!
interface Vlan4094
description MLAG_PEER
no shutdown
mtu 1500
no autostate
ip address 192.168.0.0/31
no ip routing vrf MGMT
!
mlag configuration
domain-id SPINES
local-interface Vlan4094
peer-address 192.168.0.1
peer-link Port-Channel47
reload-delay mlag 300
reload-delay non-mlag 330
!
ip route vrf MGMT 0.0.0.0/0 172.100.100.1
!
management api http-commands
protocol https
no shutdown
!
vrf MGMT
no shutdown
!
end
!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname SPINE2
ip name-server vrf MGMT 8.8.4.4
ip name-server vrf MGMT 8.8.8.8
!
ntp server vrf MGMT pool.ntp.org
ntp server vrf MGMT time.google.com prefer
!
spanning-tree mode mstp
no spanning-tree vlan-id 4094
spanning-tree mst 0 priority 4096
!
aaa authentication policy local allow-nopassword-remote-login
aaa authorization exec default local
!
no enable password
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$eucN5ngreuExDgwS$xnD7T8jO..GBDX0DUlp.hn.W7yW94xTjSanqgaQGBzPIhDAsyAl9N4oScHvOMvf07uVBFI4mKMxwdVEUVKgY/.
username arista privilege 15 role network-admin nopassword
!
vlan 10
name BLUE-NET
!
vlan 20
name GREEN-NET
!
vlan 30
name ORANGE-NET
!
vlan 4094
name MLAG_PEER
trunk group MLAG
!
vrf instance MGMT
!
interface Port-Channel1
description RACK1_Po1
no shutdown
switchport
switchport trunk allowed vlan 10,20
switchport mode trunk
mlag 1
!
interface Port-Channel3
description RACK2_Po1
no shutdown
switchport
switchport trunk allowed vlan 10,30
switchport mode trunk
mlag 3
!
interface Port-Channel5
description FIREWALL
no shutdown
switchport
switchport trunk allowed vlan 10,20,30
switchport mode trunk
mlag 5
!
interface Port-Channel47
description MLAG_PEER_SPINE1_Po47
no shutdown
switchport
switchport mode trunk
switchport trunk group MLAG
!
interface Ethernet1
description LEAF1_Ethernet2
no shutdown
channel-group 1 mode active
!
interface Ethernet2
description LEAF2_Ethernet2
no shutdown
channel-group 1 mode active
!
interface Ethernet3
description LEAF3_Ethernet2
no shutdown
channel-group 3 mode active
!
interface Ethernet4
description LEAF4_Ethernet2
no shutdown
channel-group 3 mode active
!
interface Ethernet5
description FIREWALL_Eth2
no shutdown
channel-group 5 mode active
!
interface Ethernet47
description MLAG_PEER_SPINE1_Ethernet47
no shutdown
channel-group 47 mode active
!
interface Ethernet48
description MLAG_PEER_SPINE1_Ethernet48
no shutdown
channel-group 47 mode active
!
interface Management0
description oob_management
no shutdown
vrf MGMT
ip address 172.100.100.102/24
!
interface Vlan4094
description MLAG_PEER
no shutdown
mtu 1500
no autostate
ip address 192.168.0.1/31
no ip routing vrf MGMT
!
mlag configuration
domain-id SPINES
local-interface Vlan4094
peer-address 192.168.0.0
peer-link Port-Channel47
reload-delay mlag 300
reload-delay non-mlag 330
!
ip route vrf MGMT 0.0.0.0/0 172.100.100.1
!
management api http-commands
protocol https
no shutdown
!
vrf MGMT
no shutdown
!
end
!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname LEAF1
ip name-server vrf MGMT 8.8.4.4
ip name-server vrf MGMT 8.8.8.8
!
ntp server vrf MGMT pool.ntp.org
ntp server vrf MGMT time.google.com prefer
!
spanning-tree mode mstp
no spanning-tree vlan-id 4094
spanning-tree mst 0 priority 16384
!
aaa authentication policy local allow-nopassword-remote-login
aaa authorization exec default local
!
no enable password
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$eucN5ngreuExDgwS$xnD7T8jO..GBDX0DUlp.hn.W7yW94xTjSanqgaQGBzPIhDAsyAl9N4oScHvOMvf07uVBFI4mKMxwdVEUVKgY/.
username arista privilege 15 role network-admin nopassword
!
vlan 10
name BLUE-NET
!
vlan 20
name GREEN-NET
!
vlan 4094
name MLAG_PEER
trunk group MLAG
!
vrf instance MGMT
!
interface Port-Channel1
description SPINES_Po1
no shutdown
switchport
switchport trunk allowed vlan 10,20
switchport mode trunk
mlag 1
!
interface Port-Channel47
description MLAG_PEER_LEAF2_Po47
no shutdown
switchport
switchport mode trunk
switchport trunk group MLAG
!
interface Ethernet1
description SPINE1_Ethernet1
no shutdown
channel-group 1 mode active
!
interface Ethernet2
description SPINE2_Ethernet1
no shutdown
channel-group 1 mode active
!
interface Ethernet3
description HostA_Eth1
no shutdown
switchport access vlan 10
switchport mode access
switchport
spanning-tree portfast
!
interface Ethernet47
description MLAG_PEER_LEAF2_Ethernet47
no shutdown
channel-group 47 mode active
!
interface Ethernet48
description MLAG_PEER_LEAF2_Ethernet48
no shutdown
channel-group 47 mode active
!
interface Management0
description oob_management
no shutdown
vrf MGMT
ip address 172.100.100.105/24
!
interface Vlan4094
description MLAG_PEER
no shutdown
mtu 1500
no autostate
ip address 192.168.0.0/31
no ip routing vrf MGMT
!
mlag configuration
domain-id RACK1
local-interface Vlan4094
peer-address 192.168.0.1
peer-link Port-Channel47
reload-delay mlag 300
reload-delay non-mlag 330
!
ip route vrf MGMT 0.0.0.0/0 172.100.100.1
!
management api http-commands
protocol https
no shutdown
!
vrf MGMT
no shutdown
!
end
!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname LEAF2
ip name-server vrf MGMT 8.8.4.4
ip name-server vrf MGMT 8.8.8.8
!
ntp server vrf MGMT pool.ntp.org
ntp server vrf MGMT time.google.com prefer
!
spanning-tree mode mstp
no spanning-tree vlan-id 4094
spanning-tree mst 0 priority 16384
!
aaa authentication policy local allow-nopassword-remote-login
aaa authorization exec default local
!
no enable password
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$eucN5ngreuExDgwS$xnD7T8jO..GBDX0DUlp.hn.W7yW94xTjSanqgaQGBzPIhDAsyAl9N4oScHvOMvf07uVBFI4mKMxwdVEUVKgY/.
username arista privilege 15 role network-admin nopassword
!
vlan 10
name BLUE-NET
!
vlan 20
name GREEN-NET
!
vlan 4094
name MLAG_PEER
trunk group MLAG
!
vrf instance MGMT
!
interface Port-Channel1
description SPINES_Po1
no shutdown
switchport
switchport trunk allowed vlan 10,20
switchport mode trunk
mlag 1
!
interface Port-Channel47
description MLAG_PEER_LEAF1_Po47
no shutdown
switchport
switchport mode trunk
switchport trunk group MLAG
!
interface Ethernet1
description SPINE1_Ethernet2
no shutdown
channel-group 1 mode active
!
interface Ethernet2
description SPINE2_Ethernet2
no shutdown
channel-group 1 mode active
!
interface Ethernet3
description HostB_Eth1
no shutdown
switchport access vlan 20
switchport mode access
switchport
spanning-tree portfast
!
interface Ethernet47
description MLAG_PEER_LEAF1_Ethernet47
no shutdown
channel-group 47 mode active
!
interface Ethernet48
description MLAG_PEER_LEAF1_Ethernet48
no shutdown
channel-group 47 mode active
!
interface Management0
description oob_management
no shutdown
vrf MGMT
ip address 172.100.100.106/24
!
interface Vlan4094
description MLAG_PEER
no shutdown
mtu 1500
no autostate
ip address 192.168.0.1/31
no ip routing vrf MGMT
!
mlag configuration
domain-id RACK1
local-interface Vlan4094
peer-address 192.168.0.0
peer-link Port-Channel47
reload-delay mlag 300
reload-delay non-mlag 330
!
ip route vrf MGMT 0.0.0.0/0 172.100.100.1
!
management api http-commands
protocol https
no shutdown
!
vrf MGMT
no shutdown
!
end
!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname LEAF3
ip name-server vrf MGMT 8.8.4.4
ip name-server vrf MGMT 8.8.8.8
!
ntp server vrf MGMT pool.ntp.org
ntp server vrf MGMT time.google.com prefer
!
spanning-tree mode mstp
no spanning-tree vlan-id 4094
spanning-tree mst 0 priority 16384
!
aaa authentication policy local allow-nopassword-remote-login
aaa authorization exec default local
!
no enable password
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$eucN5ngreuExDgwS$xnD7T8jO..GBDX0DUlp.hn.W7yW94xTjSanqgaQGBzPIhDAsyAl9N4oScHvOMvf07uVBFI4mKMxwdVEUVKgY/.
username arista privilege 15 role network-admin nopassword
!
vlan 10
name BLUE-NET
!
vlan 30
name ORANGE-NET
!
vlan 4094
name MLAG_PEER
trunk group MLAG
!
vrf instance MGMT
!
interface Port-Channel1
description SPINES_Po3
no shutdown
switchport
switchport trunk allowed vlan 10,30
switchport mode trunk
mlag 1
!
interface Port-Channel47
description MLAG_PEER_LEAF4_Po47
no shutdown
switchport
switchport mode trunk
switchport trunk group MLAG
!
interface Ethernet1
description SPINE1_Ethernet3
no shutdown
channel-group 1 mode active
!
interface Ethernet2
description SPINE2_Ethernet3
no shutdown
channel-group 1 mode active
!
interface Ethernet3
description HostC_Eth1
no shutdown
switchport access vlan 10
switchport mode access
switchport
spanning-tree portfast
!
interface Ethernet47
description MLAG_PEER_LEAF4_Ethernet47
no shutdown
channel-group 47 mode active
!
interface Ethernet48
description MLAG_PEER_LEAF4_Ethernet48
no shutdown
channel-group 47 mode active
!
interface Management0
description oob_management
no shutdown
vrf MGMT
ip address 172.100.100.107/24
!
interface Vlan4094
description MLAG_PEER
no shutdown
mtu 1500
no autostate
ip address 192.168.0.4/31
no ip routing vrf MGMT
!
mlag configuration
domain-id RACK2
local-interface Vlan4094
peer-address 192.168.0.5
peer-link Port-Channel47
reload-delay mlag 300
reload-delay non-mlag 330
!
ip route vrf MGMT 0.0.0.0/0 172.100.100.1
!
management api http-commands
protocol https
no shutdown
!
vrf MGMT
no shutdown
!
end
!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname LEAF4
ip name-server vrf MGMT 8.8.4.4
ip name-server vrf MGMT 8.8.8.8
!
ntp server vrf MGMT pool.ntp.org
ntp server vrf MGMT time.google.com prefer
!
spanning-tree mode mstp
no spanning-tree vlan-id 4094
spanning-tree mst 0 priority 16384
!
aaa authentication policy local allow-nopassword-remote-login
aaa authorization exec default local
!
no enable password
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$eucN5ngreuExDgwS$xnD7T8jO..GBDX0DUlp.hn.W7yW94xTjSanqgaQGBzPIhDAsyAl9N4oScHvOMvf07uVBFI4mKMxwdVEUVKgY/.
username arista privilege 15 role network-admin nopassword
!
vlan 10
name BLUE-NET
!
vlan 30
name ORANGE-NET
!
vlan 4094
name MLAG_PEER
trunk group MLAG
!
vrf instance MGMT
!
interface Port-Channel1
description SPINES_Po3
no shutdown
switchport
switchport trunk allowed vlan 10,30
switchport mode trunk
mlag 1
!
interface Port-Channel47
description MLAG_PEER_LEAF3_Po47
no shutdown
switchport
switchport mode trunk
switchport trunk group MLAG
!
interface Ethernet1
description SPINE1_Ethernet4
no shutdown
channel-group 1 mode active
!
interface Ethernet2
description SPINE2_Ethernet4
no shutdown
channel-group 1 mode active
!
interface Ethernet3
description HostD_Eth1
no shutdown
switchport access vlan 30
switchport mode access
switchport
spanning-tree portfast
!
interface Ethernet47
description MLAG_PEER_LEAF3_Ethernet47
no shutdown
channel-group 47 mode active
!
interface Ethernet48
description MLAG_PEER_LEAF3_Ethernet48
no shutdown
channel-group 47 mode active
!
interface Management0
description oob_management
no shutdown
vrf MGMT
ip address 172.100.100.108/24
!
interface Vlan4094
description MLAG_PEER
no shutdown
mtu 1500
no autostate
ip address 192.168.0.5/31
no ip routing vrf MGMT
!
mlag configuration
domain-id RACK2
local-interface Vlan4094
peer-address 192.168.0.4
peer-link Port-Channel47
reload-delay mlag 300
reload-delay non-mlag 330
!
ip route vrf MGMT 0.0.0.0/0 172.100.100.1
!
management api http-commands
protocol https
no shutdown
!
vrf MGMT
no shutdown
!
end
Add Routing to Spines
Our example used an external L3/FW device to route between subnets, which is very typical in a layer 2-only environment. To route on the spines instead, we remove the L3/FW device from the topology and create the SVIs on the spines. The updated topology is shown below.
Note
The spine type has been changed to l3spine.
The following group_vars need updating to enable L3 routing on the spines.
- DC1_SPINES.yml
- DC1_FABRIC.yml
- DC1_NETWORK_SERVICES.yml
The required changes are noted in the tabs below.
Update type to l3spine. This makes it a routing device.
Update with the following changes and additions.
- Change the node key spine to l3spine to match the node type set previously in DC1_SPINES.yml
- Add loopback_ipv4_pool
- Add mlag_peer_l3_ipv4_pool
- Add virtual_router_mac_address
Update DC1_FABRIC.yml with the following recommended settings. Use your own IP pools.
# Node Key must be l3spine to match type
l3spine:
defaults:
platform: cEOS-LAB
spanning_tree_mode: mstp
spanning_tree_priority: 4096
# Loopback is used to generate a router-id
loopback_ipv4_pool: 1.1.1.0/24
mlag_peer_ipv4_pool: 192.168.0.0/24
# Needed for L3 peering across the MLAG Trunk
mlag_peer_l3_ipv4_pool: 10.1.1.0/24
# Used for SVI Virtual MAC address
virtual_router_mac_address: 00:1c:73:00:dc:01
mlag_interfaces: [Ethernet47, Ethernet48]
Update Network Services to use L3 SVIs.
Note
To create L3 SVIs on the spines, we need to utilize an L3 VRF. In our case, we will use the default VRF. MY_FABRIC is simply a tenant name for organizing VRFs and SVIs.
tenants:
- name: MY_FABRIC
vrfs:
- name: default
svis:
- id: 10
name: 'BLUE-NET'
tags: [bluezone]
enabled: true
ip_virtual_router_addresses:
- 10.10.10.1
nodes:
- node: SPINE1
ip_address: 10.10.10.2/24
- node: SPINE2
ip_address: 10.10.10.3/24
- id: 20
name: 'GREEN-NET'
tags: [greenzone]
enabled: true
ip_virtual_router_addresses:
- 10.20.20.1
nodes:
- node: SPINE1
ip_address: 10.20.20.2/24
- node: SPINE2
ip_address: 10.20.20.3/24
- id: 30
name: 'ORANGE-NET'
tags: [orangezone]
enabled: true
ip_virtual_router_addresses:
- 10.30.30.1
nodes:
- node: SPINE1
ip_address: 10.30.30.2/24
- node: SPINE2
ip_address: 10.30.30.3/24
Now rerun your playbook to build the new configurations. The configurations in intended/configs for the spines will be updated with L3 SVIs.
If you wish to deploy these changes, run the deploy playbook.
Next steps
Try building your own topology.