eos_config_deploy_cvp¶
Overview¶
eos_config_deploy_cvp is a role that deploys the configuration to Arista EOS devices via the CloudVision Management platform.
The eos_config_deploy_cvp role:
- Configures CloudVision with fabric configlets and the container topology.
- Deploys the intended configlets to devices and optionally executes pending tasks.
Role requirements¶
This role requires the arista.cvp collection to be installed to support CloudVision interactions.
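A quick way to install it, assuming installation straight from Ansible Galaxy (a requirements.yml workflow works equally well):
# Install the CloudVision collection used by this role
ansible-galaxy collection install arista.cvp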
NOTE: When using ansible-cvp modules, the user executing the Ansible playbook must have access to both CVP and the EOS CLI.
Role Inputs and Outputs¶
Figure 1 below provides a visualization of the role's inputs, outputs, and tasks, in the order they are executed by the role.
- Read the inventory.
- Build the container topology.
- The role looks for configurations previously generated by arista.avd.eos_cli_config_gen.
- List the configurations and build the configlet list, one per device.
- The role looks for additional configlets to attach to either devices or containers.
- Build the CloudVision configuration using the arista.cvp collection:
  - Build configlets on CloudVision.
  - Create the container topology.
  - Move devices to their containers.
  - Bind configlets to devices.
- Deploy the fabric configuration by running all pending tasks (optional, if execute_tasks == true).
Inputs¶
Inventory configuration:
The inventory must include an entry describing the CloudVision server. The arista.cvp modules use the httpapi connection plugin. The example below provides a framework to use in your inventory.
all:
  children:
    cloudvision:
      hosts:
        cv_server01:
          ansible_host: 10.83.28.164
          ansible_user: ansible
          ansible_password: ansible
          ansible_connection: httpapi
          ansible_httpapi_use_ssl: True
          ansible_httpapi_validate_certs: False
          ansible_network_os: eos
          ansible_httpapi_port: 443
For a complete list of authentication options available with the CloudVision Ansible collection, refer to the dedicated page of the arista.cvp collection.
Role default output directories¶
Default output directories can be updated by modifying the default role variables:
# Role Defaults - output directories
# Root directory where to build output structure
root_dir: '{{ inventory_dir }}'
# AVD configurations output
# Main output directory
output_dir_name: 'intended'
output_dir: '{{ root_dir }}/{{ output_dir_name }}'
# Output for structured YAML files:
structured_dir_name: 'structured_configs'
structured_dir: '{{ output_dir }}/{{ structured_dir_name }}'
# Output for structured YAML files for CVP:
structured_cvp_dir_name: 'cvp'
structured_cvp_dir: '{{ structured_dir }}/{{ structured_cvp_dir_name }}'
# EOS Configuration Directory name
eos_config_dir_name: 'configs'
eos_config_dir: '{{ output_dir }}/{{ eos_config_dir_name }}'
Tip
You might need the outputs generated by AVD in a different folder than the inventory directory (the default). If updating root_dir, leverage a relative path from inventory_dir to ensure consistent behavior. Example: root_dir: '{{ inventory_dir }}/../outputs'.
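With these defaults, the generated files land in a layout similar to the following (a sketch only; the hostname and file names shown are illustrative):
<inventory_dir>/
  intended/
    configs/
      DC1-SPINE1.cfg            # EOS CLI configuration per device
    structured_configs/
      DC1-SPINE1.yml            # structured YAML configuration per device
      cvp/                      # CloudVision inputs built by this role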
Role variables¶
- avd_inventory_to_container_file: Inventory YAML file to read the inventory from. Default is to read from memory.
- container_root: Inventory group name where fabric devices are located. Default: {{ fabric_name }}.
- container_apply_mode: strict/loose. Sets how configlets are attached/detached on containers. If set to strict, all configlets not listed in your vars will be detached. Default: loose.
- configlets_prefix: Prefix to use for configlets on the CloudVision side. Default: AVD-{{ fabric_name }}-.
- device_filter: Filter to target a specific set of devices on the CloudVision side. Default: all. It can be either a string or a list of strings.
- device_apply_mode: strict/loose. Sets how configlets are attached/detached on devices. If set to strict, all configlets and image bundles not listed in your vars are detached. Default: loose.
- device_inventory_mode: strict/loose. Defines how missing devices are handled: loose ignores missing devices, while strict fails on any missing device. Default: strict. NOTE: This option requires arista.cvp collection version >= 3.8.0.
- device_search_key: fqdn/hostname/serialNumber. Key used to look up devices in CloudVision. Default: hostname.
- state: present/absent. Supports creating or cleaning up the topology on the CloudVision server. Default: present.
- execute_tasks: true/false. Supports automatic execution of pending tasks. Default: false.
- cvp_configlets: Structure to add additional configlets to those automatically generated by the AVD roles.
- cv_collection: Version of the CloudVision collection to use. Can be v1 or v3. Default: v3.
Getting Started¶
Below is an example of how to use the role with a single string as device_filter:
- name: Deploy Configs
  hosts: CVP
  gather_facts: false
  tasks:
    - name: Run CVP provisioning
      ansible.builtin.import_role:
        name: eos_config_deploy_cvp
      vars:
        container_root: 'DC1_FABRIC'
        configlets_prefix: 'DC1-AVD'
        device_filter: 'DC1'
        state: present
        execute_tasks: false
The following code block is an example of how to use this role with a list of strings as device_filter entries:
- name: Deploy Configs
  hosts: CVP
  gather_facts: false
  tasks:
    - name: Run CVP provisioning
      ansible.builtin.import_role:
        name: eos_config_deploy_cvp
      vars:
        container_root: 'DC1_FABRIC'
        configlets_prefix: 'DC1-AVD'
        device_filter:
          - 'DC1'
          - 'DC2'
        state: present
        execute_tasks: false
Ignore devices not provisioned in CloudVision¶
When you want to provision a complete topology but some devices aren't already in CloudVision, you can configure the inventory to ignore these devices by using the host variable is_deployed:
- is_deployed: true or is_deployed not defined: An entry in cv_device is generated and AVD will configure the device on CloudVision. If the device is undefined, an error is raised.
- is_deployed: false: The device isn't configured in the cv_device topology and only its configlet is uploaded to CloudVision.
Here is an overview with the key configured in the YAML inventory:
DC1_BL1:
  hosts:
    DC1-BL1A:
      ansible_port: 8012
DC1_BL2:
  hosts:
    DC1-BL2A:
      ansible_port: 8012
      # Device configuration is generated by AVD
      # Device is not configured on CloudVision (configlet is uploaded)
      is_deployed: false
Deploy using device serial number as key instead of device hostname¶
During configuration deployment, devices can be identified using their serial number instead of the hostname. This avoids the requirement of special DHCP reservations to get the correct hostname assigned for Zero Touch Provisioning (ZTP).
To use the serial number as the key, each device must have the hostvar serial_number set. It can be set as a hostvar directly in the inventory.
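A minimal sketch, with placeholder group name, hostnames, and serial numbers:
# Placeholder values - adjust the group, hostnames, and serial numbers to your environment
DC1_SPINES:
  hosts:
    DC1-SPINE1:
      serial_number: ABC12345678
    DC1-SPINE2:
      serial_number: ABC12345679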
If using arista.avd.eos_designs, it is also possible to set serial_number under the fabric topology definitions.
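A minimal sketch, assuming the list-style node definitions of recent eos_designs versions (the node type, names, IDs, and serial numbers are placeholders):
# Placeholder values - the node type key and attributes follow eos_designs conventions
spine:
  nodes:
    - name: DC1-SPINE1
      id: 1
      serial_number: ABC12345678
    - name: DC1-SPINE2
      id: 2
      serial_number: ABC12345679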
Since eos_designs outputs the serial_number as part of the structured_config, this value is only available within the same play where eos_designs was run. If you wish to split Ansible playbooks, you can either add an include_vars task first, to import the structured_config file per host (see the sketch below), or simply set the serial_number in the inventory as shown in the first example above.
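A minimal sketch of such an include_vars step, assuming the default structured_configs output path and a play that targets the fabric devices before the CloudVision play (the group name and path are assumptions):
# Placeholder group name and path - adjust to your inventory and output directories
- name: Load per-host structured configuration so serial_number is available as a hostvar
  hosts: DC1_FABRIC
  gather_facts: false
  connection: local
  tasks:
    - name: Import the structured configuration for this host
      ansible.builtin.include_vars:
        file: "{{ inventory_dir }}/intended/structured_configs/{{ inventory_hostname }}.yml"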
To instruct the arista.avd.eos_config_deploy_cvp role to use the serial number as the device identifier, the playbook should be updated with cv_collection: v3 and device_search_key: serialNumber, similar to this example:
- name: Deploy Configs
  hosts: CVP
  gather_facts: false
  tasks:
    - name: Deploy Configurations to Devices
      ansible.builtin.import_role:
        name: arista.avd.eos_config_deploy_cvp
      vars:
        container_root: 'DC1_FABRIC'
        configlets_prefix: 'DC1-AVD'
        state: present
        execute_tasks: false
        cv_collection: v3
        device_search_key: serialNumber
If the serial_number variable is not set correctly, an error will be raised:
TASK [arista.avd.eos_config_deploy_cvp : Build DEVICES and CONTAINER definition for mycvpserver] ********************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleError: When using 'device_search_key: serialNumber', on device SPINE1 'serial_number' was expected but not set!
fatal: [mycvpserver -> localhost]: FAILED! => {"changed": false, "msg": "AnsibleError: When using 'device_search_key: serialNumber', on device SPINE1 'serial_number' was expected but not set!"}
Add additional configlets¶
This structure must be part of the group_vars targeting container_root. Below is an example applied to eos_l3_evpn:
# group_vars/DC1_FABRIC.yml
# List of additional CVP configlets to bind to devices and containers
# Configlets MUST be configured on CVP before running AVD playbooks.
cv_configlets:
  containers:
    <name of container>:
      - <First configlet to attach>
      - <Second configlet to attach>
      - <...>
  devices:
    <inventory_hostname>:
      - <First configlet to attach>
      - <Second configlet to attach>
      - <...>
    <inventory_hostname>:
      - <First configlet to attach>
      - <Second configlet to attach>
      - <...>
Full example:
# group_vars/DC1_FABRIC.yml
# List of additional CVP configlets to bind to devices and containers
# Configlets MUST be configured on CVP before running AVD playbooks.
cv_configlets:
  containers:
    DC1_L3LEAFS:
      - GLOBAL-ALIASES
  devices:
    DC1-L2LEAF2A:
      - GLOBAL-ALIASES
    DC1-L2LEAF2B:
      - GLOBAL-ALIASES
Notes:
- These configlets must be created on the CloudVision server beforehand and won't be managed by the AVD roles.
- The current version doesn't support unbinding configlets from containers, for safety reasons. In such a case, configlets should be removed from the variables and manually unbound from containers on CloudVision.
Run module with different tags¶
This module also supports tags to run a subset of Ansible tasks:
- build: Generate the Arista Validated Design configuration for EOS devices (structured_configs / configs / documentation) and the CloudVision inputs.
- provision: Run the build tag tasks and configure CloudVision with the information generated in the previous tasks.
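For example, running only the build tag generates the device configurations and CloudVision inputs without touching CloudVision (a minimal sketch; the playbook filename is a placeholder):
# Generate configurations and CloudVision inputs only
ansible-playbook fabric-deploy-cvp.yml --tags build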
Another option to run a subset of Ansible tasks is to use --skip-tags <tag>:
- To update existing configlets only, run the role while skipping the tags of the remaining tasks, as sketched below.
- Skipping multiple tags can make the playbook run even more lightweight, for example to avoid CVP task execution.
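A minimal sketch of the command shape only; the playbook filename is a placeholder and the tag names passed to --skip-tags are also placeholders, since only the build and provision tags are documented above:
# Skip the tasks carrying a given tag (tag name is a placeholder)
ansible-playbook fabric-deploy-cvp.yml --skip-tags <tag>
# Skip several tags at once, for example to avoid CVP task execution (tag names are placeholders)
ansible-playbook fabric-deploy-cvp.yml --skip-tags <tag1>,<tag2>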
Outputs¶
- None.
Tasks¶
- Copy generated configuration to CloudVision static configlets.
- Create container topology and attach devices to the correct container.
- Bind configlets to each device.
- Apply generated tasks to deploy the configuration to devices.
Requirements¶
Requirements are located here: avd-requirements
License¶
The project is published under the Apache 2.0 License.