IP Addressing in AVD¶
Introduction¶
AVD provides a powerful automatic IP addressing system that assigns IP addresses to fabric devices based on pools and node identifiers. This eliminates manual IP planning and ensures consistent, predictable addressing across your network.
This guide covers IP pool types, allocation mechanisms, and customization options available in AVD.
Key Concepts¶
Before diving into specifics, understand these core concepts:
- IP Pool: A range of IP addresses from which AVD allocates individual addresses
- Node ID: A unique numeric identifier for each device, used to calculate IP offsets
- Offset: A value added to the base pool address to derive a specific IP
- Prefix Length: The subnet mask size for allocated addresses (e.g., /32, /31)
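To see how these pieces fit together before reading further, here is a minimal Python sketch of the basic derivation (illustrative values only, not AVD's implementation):

```python
import ipaddress

# An IP is derived as: pool base address + node ID + offset, as a /32.
pool = ipaddress.ip_network("10.255.0.0/27")
node_id = 1
offset = 2

ip = pool.network_address + node_id + offset
print(f"{ip}/32")  # -> 10.255.0.3/32
```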
IP Pool Types¶
AVD uses several IP pools for different purposes:
| Pool Variable | Purpose | Default Interface |
|---|---|---|
| `loopback_ipv4_pool` | Router ID and BGP peering | Loopback0 |
| `vtep_loopback_ipv4_pool` | VXLAN tunnel endpoints | Loopback1 |
| `uplink_ipv4_pool` | P2P links between devices | Ethernet uplinks |
| `mlag_peer_ipv4_pool` | MLAG peer-link SVI | VLAN 4094 |
| `mlag_peer_l3_ipv4_pool` | MLAG L3 iBGP peering | VLAN 4093 |
| `router_id_pool` | BGP Router ID only (IPv6 underlay) | None (ID only) |
Pool Hierarchy¶
Pools can be defined at multiple levels with the following precedence (highest to lowest):
- Node level - Specific to a single device
- Node group level - Shared by devices in a group (e.g., an MLAG pair)
- Node type defaults - Applied to all nodes of a type
In the example below, htipa-leaf1's node-level `loopback_ipv4_pool` overrides the group pool, which in turn overrides the node type default:
```yaml
---
l3leaf:
  defaults:
    platform: vEOS-lab
    loopback_ipv4_pool: 10.255.0.0/27
    loopback_ipv4_offset: 2
    vtep_loopback_ipv4_pool: 10.255.3.0/27
    uplink_switches: ['htipa-spine1', 'htipa-spine2']
    uplink_ipv4_pool: 10.255.255.0/26
    mlag_peer_ipv4_pool: 10.255.3.64/27
    mlag_peer_l3_ipv4_pool: 10.255.3.96/27
    virtual_router_mac_address: 00:1c:73:00:00:99
    spanning_tree_priority: 4096
    spanning_tree_mode: mstp
  node_groups:
    - group: HTIPA_L3_LEAFS
      bgp_as: 65101
      loopback_ipv4_pool: 10.255.1.0/24
      nodes:
        - name: htipa-leaf1
          id: 1
          mgmt_ip: 172.16.2.101/24
          uplink_switch_interfaces: [Ethernet1, Ethernet1]
          loopback_ipv4_pool: 10.255.2.0/24
        - name: htipa-leaf2
          id: 2
          uplink_switch_interfaces: [Ethernet2, Ethernet2]
          mgmt_ip: 172.16.2.102/24
```
Loopback IP Allocation¶
Basic Formula¶
For Loopback0 (Router ID), the address is derived as the `loopback_ipv4_pool` base address + node `id` + `loopback_ipv4_offset`, assigned as a /32.
Note
When spines and leafs share the same pool, use `loopback_ipv4_offset` to prevent IP conflicts.
Example¶
Spine assignment:
- htipa-spine1 will be assigned 10.255.0.1/32 (node type default pool + id [1])
- htipa-spine2 will be assigned 10.255.0.2/32 (node type default pool + id [2])
```yaml
---
spine:
  defaults:
    platform: vEOS-lab
    loopback_ipv4_pool: 10.255.0.0/27
    bgp_as: 65100
  nodes:
    - name: htipa-spine1
      id: 1
      mgmt_ip: 172.16.2.11/24
    - name: htipa-spine2
      id: 2
      mgmt_ip: 172.16.2.12/24
```
```eos
interface Loopback0
   description ROUTER_ID
   no shutdown
   ip address 10.255.0.1/32
!
interface Loopback0
   description ROUTER_ID
   no shutdown
   ip address 10.255.0.2/32
!
```
Leaf assignment:
- htipa-leaf1 will get 10.255.2.3/32 (node-specific pool + id [1] + offset [2])
- htipa-leaf2 will get 10.255.1.4/32 (group-specific pool + id [2] + offset [2])
```yaml
---
l3leaf:
  defaults:
    platform: vEOS-lab
    loopback_ipv4_pool: 10.255.0.0/27
    loopback_ipv4_offset: 2
    vtep_loopback_ipv4_pool: 10.255.3.0/27
    uplink_switches: ['htipa-spine1', 'htipa-spine2']
    uplink_ipv4_pool: 10.255.255.0/26
    mlag_peer_ipv4_pool: 10.255.3.64/27
    mlag_peer_l3_ipv4_pool: 10.255.3.96/27
    virtual_router_mac_address: 00:1c:73:00:00:99
    spanning_tree_priority: 4096
    spanning_tree_mode: mstp
  node_groups:
    - group: HTIPA_L3_LEAFS
      bgp_as: 65101
      loopback_ipv4_pool: 10.255.1.0/24
      nodes:
        - name: htipa-leaf1
          id: 1
          mgmt_ip: 172.16.2.101/24
          uplink_switch_interfaces: [Ethernet1, Ethernet1]
          loopback_ipv4_pool: 10.255.2.0/24
        - name: htipa-leaf2
          id: 2
          uplink_switch_interfaces: [Ethernet2, Ethernet2]
          mgmt_ip: 172.16.2.102/24
```
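Based on those calculations, the rendered Loopback0 stanzas for htipa-leaf1 and htipa-leaf2 should look roughly like this:

```eos
interface Loopback0
   description ROUTER_ID
   no shutdown
   ip address 10.255.2.3/32
!
interface Loopback0
   description ROUTER_ID
   no shutdown
   ip address 10.255.1.4/32
!
```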
VTEP Loopback Allocation¶
VTEP loopbacks (Loopback1) are allocated from `vtep_loopback_ipv4_pool`, following the same pool + id + offset formula:
```yaml
---
l3leaf:
  defaults:
    platform: vEOS-lab
    loopback_ipv4_pool: 10.255.0.0/27
    loopback_ipv4_offset: 2
    vtep_loopback_ipv4_pool: 10.255.3.0/27
    uplink_switches: ['htipa-spine1', 'htipa-spine2']
    uplink_ipv4_pool: 10.255.255.0/26
    mlag_peer_ipv4_pool: 10.255.3.64/27
    mlag_peer_l3_ipv4_pool: 10.255.3.96/27
    virtual_router_mac_address: 00:1c:73:00:00:99
    spanning_tree_priority: 4096
    spanning_tree_mode: mstp
  node_groups:
    - group: HTIPA_L3_LEAFS
      bgp_as: 65101
      loopback_ipv4_pool: 10.255.1.0/24
      nodes:
        - name: htipa-leaf1
          id: 1
          mgmt_ip: 172.16.2.101/24
          uplink_switch_interfaces: [Ethernet1, Ethernet1]
          loopback_ipv4_pool: 10.255.2.0/24
        - name: htipa-leaf2
          id: 2
          uplink_switch_interfaces: [Ethernet2, Ethernet2]
          mgmt_ip: 172.16.2.102/24
```
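With the defaults above (`vtep_loopback_ipv4_pool` 10.255.3.0/27, offset 2) and MLAG enabled for the pair (AVD's default for a two-node l3leaf node group), both leafs use the primary's ID of 1 (see the note below), so each should render the same Loopback1 address, roughly:

```eos
interface Loopback1
   description VXLAN_TUNNEL_SOURCE
   no shutdown
   ip address 10.255.3.3/32
!
```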
MLAG VTEP Sharing
MLAG pairs share the same VTEP IP: AVD automatically uses the MLAG primary's node ID for both peers, which translates to IP = pool + mlag_primary_id + `loopback_ipv4_offset`. Non-MLAG nodes use their own ID.
P2P Uplink Allocation¶
Uplink IP addresses are calculated using a more complex formula to ensure unique /31 subnets for each link.
Formula¶
The /31 subnet index within the pool is `(node_id - 1) * max_uplink_switches * max_parallel_uplinks + uplink_switch_index`. Within each /31, the upstream switch takes the first (even) address and the downstream device takes the second (odd) address.
Where:
- `node_id`: The leaf's ID
- `max_uplink_switches`: Maximum number of uplink switches (default: length of `uplink_switches`)
- `max_parallel_uplinks`: Maximum parallel uplinks per switch (default: 1)
- `uplink_switch_index`: Index of the uplink switch (0-based)
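The following Python sketch reproduces this arithmetic (a standalone illustration using only the standard `ipaddress` module, not AVD's actual code; the helper name `p2p_ips` is ours):

```python
import ipaddress

def p2p_ips(pool: str, node_id: int, uplink_switch_index: int,
            max_uplink_switches: int = 2, max_parallel_uplinks: int = 1):
    """Return (child_ip, parent_ip) for one P2P uplink as /31s."""
    base = ipaddress.ip_network(pool).network_address
    # Index of this link's /31 within the pool
    subnet_index = ((node_id - 1) * max_uplink_switches * max_parallel_uplinks
                    + uplink_switch_index)
    parent_ip = base + subnet_index * 2  # even address -> upstream switch
    child_ip = parent_ip + 1             # odd address  -> downstream device
    return f"{child_ip}/31", f"{parent_ip}/31"

# leaf1 (id=1) towards spine1 (index 0) and spine2 (index 1)
print(p2p_ips("10.255.255.0/26", 1, 0))  # ('10.255.255.1/31', '10.255.255.0/31')
print(p2p_ips("10.255.255.0/26", 1, 1))  # ('10.255.255.3/31', '10.255.255.2/31')
```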
Example¶
```yaml
---
l3leaf:
  defaults:
    platform: vEOS-lab
    loopback_ipv4_pool: 10.255.0.0/27
    loopback_ipv4_offset: 2
    vtep_loopback_ipv4_pool: 10.255.3.0/27
    uplink_switches: ['htipa-spine1', 'htipa-spine2']
    uplink_ipv4_pool: 10.255.255.0/26
    mlag_peer_ipv4_pool: 10.255.3.64/27
    mlag_peer_l3_ipv4_pool: 10.255.3.96/27
    virtual_router_mac_address: 00:1c:73:00:00:99
    spanning_tree_priority: 4096
    spanning_tree_mode: mstp
  node_groups:
    - group: HTIPA_L3_LEAFS
      bgp_as: 65101
      loopback_ipv4_pool: 10.255.1.0/24
      nodes:
        - name: htipa-leaf1
          id: 1
          mgmt_ip: 172.16.2.101/24
          uplink_switch_interfaces: [Ethernet1, Ethernet1]
          loopback_ipv4_pool: 10.255.2.0/24
        - name: htipa-leaf2
          id: 2
          uplink_switch_interfaces: [Ethernet2, Ethernet2]
          mgmt_ip: 172.16.2.102/24
```
Resulting allocations for leaf1 (id=1):
| Uplink | Subnet Offset | Leaf IP | Spine IP |
|---|---|---|---|
| To spine1 | 0 | 10.255.255.1/31 | 10.255.255.0/31 |
| To spine2 | 1 | 10.255.255.3/31 | 10.255.255.2/31 |
```eos
interface Ethernet1
   description P2P_htipa-spine1_Ethernet1
   no shutdown
   mtu 1500
   no switchport
   ip address 10.255.255.1/31
!
interface Ethernet2
   description P2P_htipa-spine2_Ethernet1
   no shutdown
   mtu 1500
   no switchport
   ip address 10.255.255.3/31
!
```
Downlink Pools¶
While uplink_ipv4_pool is defined on the downstream device (leaf), downlink_pools provides an alternative approach where IP pools are defined on the upstream device (spine). This is useful when you want centralized IP management on parent switches.
Mutually Exclusive
downlink_pools on a parent switch cannot be combined with uplink_ipv4_pool on the child switch. Use one approach or the other.
Configuration¶
Define downlink_pools on the parent switch (spine) with a list of pools mapped to specific downlink interfaces:
```yaml
spine:
  defaults:
    loopback_ipv4_pool: 192.168.0.0/24
    bgp_as: 65000
  nodes:
    - name: spine1
      id: 1
      downlink_pools:
        - ipv4_pool: 10.0.1.0/24
          downlink_interfaces: [Ethernet3-6] # (1)!
        - ipv4_pool: 10.0.3.0/24
          downlink_interfaces: [Ethernet7-14] # (2)!
```
1. First pool for interfaces Ethernet3 through Ethernet6
2. Second pool for interfaces Ethernet7 through Ethernet14
IP Allocation¶
The IP address is derived from the interface's index position in the `downlink_interfaces` list:
| Interface | Index | Subnet from Pool |
|---|---|---|
| Ethernet3 | 0 | 10.0.1.0/31 |
| Ethernet4 | 1 | 10.0.1.2/31 |
| Ethernet5 | 2 | 10.0.1.4/31 |
| Ethernet6 | 3 | 10.0.1.6/31 |
The spine gets the even IP (.0, .2, .4) and the leaf gets the odd IP (.1, .3, .5) in each /31 subnet.
When to Use Downlink Pools¶
| Use Case | Recommended Approach |
|---|---|
| Leaf-centric IP management | uplink_ipv4_pool on leafs |
| Spine-centric IP management | downlink_pools on spines |
| Different pools per spine port range | downlink_pools with multiple entries |
| Simple uniform allocation | uplink_ipv4_pool on leafs |
MLAG IP Allocation¶
MLAG requires two pools for peer connectivity:
| Pool | Purpose | Default VLAN |
|---|---|---|
| `mlag_peer_ipv4_pool` | L2 peer-link SVI | 4094 |
| `mlag_peer_l3_ipv4_pool` | L3 iBGP peering | 4093 |
MLAG Allocation Algorithms¶
AVD supports three MLAG IP allocation algorithms, configured via `fabric_ip_addressing.mlag.algorithm`: `first_id` (the default), `odd_id`, and `same_subnet`.
`first_id` uses the first node's ID in the MLAG group:
```yaml
fabric_ip_addressing:
  mlag:
    algorithm: first_id # Default

l3leaf:
  defaults:
    mlag_peer_ipv4_pool: 10.255.3.64/27
  node_groups:
    - group: HOW_TO_L3_LEAFS
      nodes:
        - name: htipa-leaf1
          id: 1 # Primary: 10.255.3.64/31
        - name: htipa-leaf2
          id: 2 # Secondary: 10.255.3.65/31
```
Formula: offset = (mlag_primary_id - 1). The other algorithms change only how the offset is derived: `odd_id` derives it from the odd-numbered ID of the pair, and `same_subnet` allocates the same subnet to every MLAG pair.
Node ID Assignment¶
Node IDs are critical for IP allocation. AVD supports two assignment methods:
Static Assignment¶
Manually assign IDs to each node (default):
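For example:

```yaml
l3leaf:
  node_groups:
    - group: HTIPA_L3_LEAFS
      nodes:
        - name: htipa-leaf1
          id: 1
        - name: htipa-leaf2
          id: 2
```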
Dynamic Assignment¶
Automatically assign IDs based on fabric topology:
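Recent AVD releases expose this via `fabric_numbering` (shown here as a sketch; confirm the exact keys and supported algorithms against your AVD version):

```yaml
fabric_numbering:
  node_id:
    algorithm: pod_rack_counters
```

With this enabled, the `id` key can be omitted from the node definitions.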
IDs are assigned based on: fabric_name, dc_name, pod_name, type, and rack.
Pool Formats¶
AVD supports flexible pool formats:
Single Subnet¶
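A single subnet in CIDR notation, as used throughout the examples in this guide:

```yaml
loopback_ipv4_pool: 10.255.0.0/27
```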
Multiple Subnets¶
Comma-separated list of subnets:
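For example (illustrative values):

```yaml
loopback_ipv4_pool: 10.255.0.0/27, 10.255.1.0/27
```

AVD draws addresses from the subnets in order, treating the list as one larger pool.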
IP Ranges¶
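Pools can also be expressed as explicit address ranges, and ranges can be combined with subnets in the same pool definition (illustrative value; confirm the exact range syntax for your AVD version):

```yaml
loopback_ipv4_pool: 10.255.0.1-10.255.0.30
```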
Static IP Overrides¶
Override any pool-calculated address with a static value:
```yaml
l3leaf:
  nodes:
    - name: htipa-leaf1
      id: 1
      loopback_ipv4_address: 10.100.100.1 # Override loopback pool
      vtep_loopback_ipv4_address: 10.100.101.1 # Override VTEP pool
```
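With those overrides, htipa-leaf1's loopbacks would render with the static addresses regardless of the pool contents (a sketch of the expected output):

```eos
interface Loopback0
   description ROUTER_ID
   no shutdown
   ip address 10.100.100.1/32
!
interface Loopback1
   description VXLAN_TUNNEL_SOURCE
   no shutdown
   ip address 10.100.101.1/32
!
```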
Available override variables:
| Override Variable | Overrides Pool |
|---|---|
| `loopback_ipv4_address` | `loopback_ipv4_pool` |
| `vtep_loopback_ipv4_address` | `vtep_loopback_ipv4_pool` |
Global IP Addressing Settings¶
Configure fabric-wide IP addressing behavior:
```yaml
fabric_ip_addressing:
  mlag:
    algorithm: first_id # first_id, odd_id, same_subnet
    ipv4_prefix_length: 31
  p2p_uplinks:
    ipv4_prefix_length: 31
```
Custom IP Addressing¶
For complex requirements, create a custom Python module:
First, create the custom node type keys and set your custom Python module and class name:
```yaml
---
custom_node_type_keys:
  - key: custom_l3leaf
    type: custom_l3leaf
    connected_endpoints: true
    default_evpn_role: client
    mlag_support: true
    network_services:
      l2: true
      l3: true
    vtep: true
    default_ptp_priority1: 30
    cv_tags_topology_type: leaf
    ip_addressing:
      python_module: custom_modules.avd_overrides
      python_class_name: MyCustomIpAddressing

# Optional: pass custom variables to your module
custom_ip_offset: 25
```
Second, create your custom Python module and class:
```python
# Copyright (c) 2026 Arista Networks, Inc.
# Use of this source code is governed by the Apache License 2.0
# that can be found in the LICENSE file.
#
# custom_modules/avd_overrides.py
from functools import cached_property

from pyavd.api.ip_addressing import AvdIpAddressing


class MyCustomIpAddressing(AvdIpAddressing):
    """Custom IP addressing that adds an offset to all loopback IPs."""

    @cached_property
    def _custom_offset(self) -> int:
        """Read a custom offset from hostvars."""
        return int(self._hostvars.get("custom_ip_offset", 0))

    def router_id(self) -> str:
        """Override router_id to add custom offset."""
        offset = self._id + self._loopback_ipv4_offset + self._custom_offset
        return self._ip(self._loopback_ipv4_pool, 32, offset, 0)

    def vtep_ip(self) -> str:
        """Override VTEP IP with custom offset."""
        offset = self._id + self._loopback_ipv4_offset + self._custom_offset
        return self._ip(self._vtep_loopback_ipv4_pool, 32, offset, 0)
```
Note
The custom Python module must be in the PYTHONPATH for Ansible to find it: `export PYTHONPATH="${PYTHONPATH}:$(pwd)/custom_modules"`
For a list of available methods, refer to the pyavd documentation.
Best Practices¶
- Plan your ID scheme: Use consistent ID numbering across the fabric
- Use offsets wisely: When sharing pools between node types, use `loopback_ipv4_offset`
- Size pools appropriately: Ensure pools have enough addresses for growth
- Document your allocation scheme: Keep records of pool assignments
- Use node groups for MLAG: Define MLAG pairs as node groups for automatic VTEP sharing
- Consider the algorithm: Choose the MLAG algorithm that fits your operational model
Troubleshooting¶
Common Issues¶
| Issue | Cause | Solution |
|---|---|---|
| IP conflict between spine and leaf | Shared pool without offset | Add loopback_ipv4_offset to leaf defaults |
| Missing MLAG peer IP | `mlag_peer_ipv4_pool` not defined | Define pool at node_group or defaults level |
| Unexpected VTEP IP | MLAG pair not in same node_group | Move MLAG peers to same node_group |
| Pool exhausted | Too many nodes for pool size | Use larger pool or multiple subnets |
| Loopback IP not updating | Static IP override in use or more specific pools | Remove override or check pool definitions |
| Duplicate IP addresses on uplinks | Leafs have different numbers of uplinks | Set max_uplink_switches and/or max_parallel_uplinks to reserve IP space. |