Getting started with Network Automation using Vagrant + Libvirt
In this post, I’m going to walk you through setting up an environment for testing what’s been called, these days, “Cloud Native Data Centers”: underlay/overlay protocols, network automation scripts, zero-touch provisioning, monitoring, and observability, among many other things.
To build this, we’re going to use Vagrant combined with Libvirt, which uses KVM as the hypervisor. I’ve chosen Libvirt over VirtualBox for its better scalability and portability beyond small tests on your home PC. You don’t normally see VirtualBox on Linux servers, but you do see KVM.
The great thing about Vagrant is that it describes the entire network topology in text files. This lets us version-control the environment with git and automate its creation, with no time wasted pointing and clicking in a GUI.
There’s a bit of a learning curve to the text file Vagrant uses to describe the topology (called a Vagrantfile). Hopefully, after reading this blog post, you’ll be able to create your own topologies without any issues.
Network Topology
We’re going to build the following 3-stage Clos topology, made up of two spines, four leaves, and two servers. Additionally, we have an out-of-band management network (purple) to which all devices connect via their management interfaces through a Libvirt virtual bridge. Lastly, we have a mgmt-server connected only to the virtual bridge.
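In text form, the connectivity looks roughly like this:
spine01 swp1-swp4  <-->  swp1 on leaf01-leaf04
spine02 swp1-swp4  <-->  swp2 on leaf01-leaf04
leaf01 swp3        <-->  eth1 on server01
leaf04 swp3        <-->  eth1 on server04
all mgmt interfaces + mgmt-server  <-->  Libvirt virtual bridge (OOB management)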
OS versions
Switches: Cumulus VX 3.7.15
Servers: Ubuntu 20.04
Management IP Addressing
All assigned via DHCP. The management network is explicitly configured in the Vagrantfile as 172.16.100.0/24.
The assigned DHCP leases can be seen by running the following command:
virsh net-dhcp-leases vagrant-libvirt
If you’re not using the default vagrant-libvirt network, change the name accordingly.
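If you’re unsure of the network name, you can list all Libvirt networks first:
virsh net-list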
Installing Vagrant and the Libvirt plugin
After spending a lot of time trying to figure out how to make the libvirt plugin work with Vagrant (there seem to be tons of problems caused by mismatched versions), I found that if you just run this script, it all works immediately:
https://github.com/vagrant-libvirt/vagrant-libvirt-qa/blob/main/scripts/install.bash
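For reference, fetching and running it can be as simple as the sketch below. The raw URL is my assumption of GitHub’s standard raw path for that file, and the version pin is just an example use of the VAGRANT_LIBVIRT_VERSION environment variable the script reads:
wget https://raw.githubusercontent.com/vagrant-libvirt/vagrant-libvirt-qa/main/scripts/install.bash
bash install.bash
# or pin the plugin version the script installs (example version):
# VAGRANT_LIBVIRT_VERSION=0.7.0 bash install.bash
The full script is reproduced below: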
#!/bin/bash
set -o errexit -o pipefail -o noclobber -o nounset
DPKG_OPTS=(
-o Dpkg::Options::="--force-confold"
)
VAGRANT_LIBVIRT_VERSION=${VAGRANT_LIBVIRT_VERSION:-"latest"}
function restart_libvirt() {
service_name=${1:-libvirtd}
# it appears there can be issues with libvirt being started before certain
# packages that are required for create behaviour on first run. Restart to
# ensure the daemon picks up the latest environment and can create a VM
# on the first attempt. Otherwise will need to reboot
sudo systemctl restart ${service_name}
}
function setup_apt() {
export DEBIAN_FRONTEND=noninteractive
export DEBCONF_NONINTERACTIVE_SEEN=true
sudo sed -i "s/# deb-src/deb-src/" /etc/apt/sources.list
sudo -E apt-get update
sudo -E apt-get -y "${DPKG_OPTS[@]}" upgrade
sudo -E apt-get -y build-dep vagrant ruby-libvirt
}
function setup_arch() {
sudo pacman -Suyu --noconfirm --noprogressbar
sudo pacman -Qs 'iptables' | grep "local" | grep "iptables " && sudo pacman -Rd --nodeps --noconfirm iptables
# need to remove iptables to allow ebtables to be installed
sudo pacman -S --needed --noprogressbar --noconfirm \
autoconf \
automake \
binutils \
bridge-utils \
dnsmasq \
git \
gcc \
iptables-nft \
libvirt \
libxml2 \
libxslt \
make \
openbsd-netcat \
pkg-config \
qemu \
ruby \
wget \
;
sudo systemctl enable --now libvirtd
}
function setup_centos_7() {
sudo yum -y update
sudo yum -y install centos-release-qemu-ev
sudo yum -y update
sudo yum -y install \
autoconf \
automake \
binutils \
cmake \
gcc \
git \
libguestfs-tools \
libvirt \
libvirt-devel \
make \
qemu \
qemu-kvm-ev \
ruby-devel \
wget \
;
restart_libvirt
}
function setup_centos() {
sudo dnf -y update
sudo dnf -y install \
@virt \
autoconf \
automake \
binutils \
byacc \
cmake \
gcc \
gcc-c++ \
git \
libguestfs-tools \
libvirt \
libvirt-devel \
make \
qemu-kvm \
rpm-build \
ruby-devel \
wget \
zlib-devel \
;
restart_libvirt
}
function setup_debian() {
setup_apt
sudo -E apt-get -y "${DPKG_OPTS[@]}" install \
dnsmasq \
ebtables \
git \
libvirt-clients \
libvirt-daemon \
libvirt-daemon-system \
qemu \
qemu-system-x86 \
qemu-utils \
wget \
;
restart_libvirt
}
function setup_fedora() {
sudo dnf -y update
sudo dnf -y install \
@virtualization \
autoconf \
automake \
binutils \
byacc \
cmake \
gcc \
gcc-c++ \
git \
libguestfs-tools \
libvirt-devel \
make \
wget \
zlib-devel \
;
restart_libvirt
}
function setup_ubuntu_1804() {
setup_apt
sudo -E apt-get -y "${DPKG_OPTS[@]}" install \
git \
libvirt-bin \
qemu \
wget \
;
restart_libvirt
}
function setup_ubuntu() {
setup_apt
sudo -E apt-get -y "${DPKG_OPTS[@]}" install \
git \
libvirt-clients \
libvirt-daemon \
libvirt-daemon-system \
qemu \
qemu-system-x86 \
qemu-utils \
wget \
;
restart_libvirt
}
function setup_distro() {
local distro=${1}
local version=${2:-}
if [[ -n "${version}" ]] && [[ $(type -t setup_${distro}_${version} 2>/dev/null) == 'function' ]]
then
eval setup_${distro}_${version}
else
eval setup_${distro}
fi
}
function download_vagrant() {
local version=${1}
local pkgext=${2}
local pkg="vagrant_${1}_x86_64.${pkgext}"
wget --no-verbose https://releases.hashicorp.com/vagrant/${version}/${pkg} -O /tmp/${pkg}.tmp
mv /tmp/${pkg}.tmp /tmp/${pkg}
}
function install_rake_arch() {
sudo pacman -S --needed --noprogressbar --noconfirm \
ruby-bundler \
rake
}
function install_rake_centos() {
sudo yum -y install \
rubygem-bundler \
rubygem-rake
}
function install_rake_debian() {
sudo apt install -y \
bundler \
rake
}
function install_rake_fedora() {
sudo dnf -y install \
rubygem-rake
}
function install_rake_ubuntu() {
install_rake_debian $@
}
function install_vagrant_arch() {
sudo pacman -S --needed --noprogressbar --noconfirm \
vagrant
}
function install_vagrant_centos() {
local version=$1
download_vagrant ${version} rpm
sudo -E rpm -Uh --force /tmp/vagrant_${version}_x86_64.rpm
}
function install_vagrant_debian() {
local version=$1
download_vagrant ${version} deb
sudo -E dpkg -i /tmp/vagrant_${version}_x86_64.deb
}
function install_vagrant_fedora() {
install_vagrant_centos $@
}
function install_vagrant_ubuntu() {
install_vagrant_debian $@
}
function build_libssh() {
local dir=${1}
mkdir -p ${dir}-build
pushd ${dir}-build
cmake ${dir} -DOPENSSL_ROOT_DIR=/opt/vagrant/embedded/
make
sudo cp lib/libssh* /opt/vagrant/embedded/lib64
popd
}
function build_krb5() {
local dir=${1}
pushd ${dir}/src
./configure
make
sudo cp -P lib/crypto/libk5crypto.* /opt/vagrant/embedded/lib64/
popd
}
function setup_rpm_sources_centos() {
typeset -n basedir=$1
pkg="$2"
rpmname="${3:-${pkg}}"
[[ ! -d ${pkg} ]] && git clone https://git.centos.org/rpms/${pkg}
pushd ${pkg}
nvr=$(rpm -q --queryformat "${pkg}-%{version}-%{release}" ${rpmname})
nv=$(rpm -q --queryformat "${pkg}-%{version}" ${rpmname})
git checkout $(git tag -l | grep "${nvr}\$" | tail -n1)
into_srpm.sh -d c8s
pushd BUILD
tar xf ../SOURCES/${nv}.tar.*z
basedir=$(realpath ${nv})
popd
popd
}
function patch_vagrant_centos_8() {
mkdir -p patches
pushd patches
[[ ! -d centos-git-common ]] && git clone https://git.centos.org/centos-git-common
export PATH=$(readlink -f ./centos-git-common):$PATH
chmod a+x ./centos-git-common/*.sh
setup_rpm_sources_centos LIBSSH_DIR libssh
build_libssh ${LIBSSH_DIR}
setup_rpm_sources_centos KRB5_DIR krb5 krb5-libs
build_krb5 ${KRB5_DIR}
popd
}
function setup_rpm_sources_fedora() {
typeset -n basedir=$1
pkg="$2"
rpmname="${3:-${pkg}}"
nvr=$(rpm -q --queryformat "${pkg}-%{version}-%{release}" ${rpmname})
nv=$(rpm -q --queryformat "${pkg}-%{version}" ${rpmname})
mkdir -p ${pkg}
pushd ${pkg}
[[ ! -e ${nvr}.src.rpm ]] && dnf download --source ${rpmname}
rpm2cpio ${nvr}.src.rpm | cpio -imdV
rm -rf ${nv}
tar xf ${nv}.tar.*z
basedir=$(realpath ${nv})
popd
}
function patch_vagrant_fedora() {
mkdir -p patches
pushd patches
setup_rpm_sources_fedora LIBSSH_DIR libssh
build_libssh ${LIBSSH_DIR}
setup_rpm_sources_fedora KRB5_DIR krb5 krb5-libs
build_krb5 ${KRB5_DIR}
popd
}
function install_vagrant() {
local version=${1}
local distro=${2}
local distro_version=${3:-}
echo "Installing vagrant version '${version}'"
eval install_vagrant_${distro} ${version}
if [[ -n "${distro_version}" ]] && [[ $(type -t patch_vagrant_${distro}_${distro_version} 2>/dev/null) == 'function' ]]
then
echo "running patch_vagrant_${distro}_${distro_version}"
eval patch_vagrant_${distro}_${distro_version}
elif [[ $(type -t patch_vagrant_${distro} 2>/dev/null) == 'function' ]]
then
echo "running patch_vagrant_${distro}"
eval patch_vagrant_${distro}
else
echo "no patch functions configured for ${distro} ${distro_version}"
fi
}
function install_vagrant_libvirt() {
local distro=${1}
echo "Testing vagrant-libvirt version: '${VAGRANT_LIBVIRT_VERSION}'"
if [[ "${VAGRANT_LIBVIRT_VERSION:0:4}" == "git-" ]]
then
eval install_rake_${distro}
if [[ ! -d "./vagrant-libvirt" ]]
then
git clone https://github.com/vagrant-libvirt/vagrant-libvirt.git
fi
pushd vagrant-libvirt
git checkout ${VAGRANT_LIBVIRT_VERSION#git-}
rm -rf ./pkg
rake build
vagrant plugin install ./pkg/vagrant-libvirt-*.gem
popd
elif [[ "${VAGRANT_LIBVIRT_VERSION}" == "latest" ]]
then
vagrant plugin install vagrant-libvirt
else
vagrant plugin install vagrant-libvirt --plugin-version ${VAGRANT_LIBVIRT_VERSION}
fi
}
OPTIONS=o
LONGOPTS=vagrant-only,vagrant-version:
# -pass arguments only via -- "$@" to separate them correctly
! PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@")
if [[ ${PIPESTATUS[0]} -ne 0 ]]
then
echo "Invalid options provided"
exit 2
fi
eval set -- "$PARSED"
VAGRANT_ONLY=0
while true; do
case "$1" in
-o|--vagrant-only)
VAGRANT_ONLY=1
shift
;;
--vagrant-version)
VAGRANT_VERSION=$2
shift 2
;;
--)
shift
break
;;
*)
echo "Programming error"
exit 3
;;
esac
done
echo "Starting vagrant-libvirt installation script"
DISTRO=${DISTRO:-$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"' | tr '[A-Z]' '[a-z]')}
DISTRO_VERSION=${DISTRO_VERSION:-$(awk -F= '/^VERSION_ID/{print $2}' /etc/os-release | tr -d '"' | tr '[A-Z]' '[a-z]' | tr -d '.')}
[[ ${VAGRANT_ONLY} -eq 0 ]] && setup_distro ${DISTRO} ${DISTRO_VERSION}
if [[ -z ${VAGRANT_VERSION+x} ]]
then
VAGRANT_VERSION="$(
wget -qO - https://checkpoint-api.hashicorp.com/v1/check/vagrant 2>/dev/null | \
tr ',' '\n' | grep current_version | cut -d: -f2 | tr -d '"'
)"
fi
install_vagrant ${VAGRANT_VERSION} ${DISTRO} ${DISTRO_VERSION}
[[ ${VAGRANT_ONLY} -eq 0 ]] && install_vagrant_libvirt ${DISTRO}
echo "Finished vagrant-libvirt installation script"Once the installation is finished, let’s confirm everything works by trying to build an Ubuntu 20.04 VM.
You can find all the pre-built Vagrant boxes here; just make sure you select libvirt as your provider. If the image you want isn’t listed there, you’ll need to build it yourself.
Let’s bring up our first Vagrant box. Below is the Vagrant equivalent of “hello world”:
lab@lab-HP-Z620-Workstation:~/NetworkAuto/clos$ vagrant init generic/ubuntu2004
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
lab@lab-HP-Z620-Workstation:~/NetworkAuto/clos$ vagrant up
Bringing machine 'default' up with 'libvirt' provider...
...output trimmed...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
As you can see, it created the Vagrantfile for us. Later, we’ll cover how to create this file from scratch.
And here, we have our Ubuntu VM up and running:
lab@lab-HP-Z620-Workstation:~/NetworkAuto/clos$ vagrant ssh
vagrant@ubuntu2004:~$ cat /etc/*release* | grep VERSION
VERSION="20.04.4 LTS (Focal Fossa)"
VERSION_ID="20.04"
VERSION_CODENAME=focal
vagrant@ubuntu2004:~$
At this point, we’ve confirmed that Vagrant is working fine with the Libvirt plugin, so let’s delete the VM:
lab@lab-HP-Z620-Workstation:~/NetworkAuto/clos$ vagrant destroy -f
==> default: Removing domain...
==> default: Deleting the machine folder
Building ‘Vagrantfile’
As mentioned, Vagrant models everything in a file called “Vagrantfile”, written in Ruby. In this section, we’re going to explore the following three components:
- How Vagrant models devices
- How Vagrant models P2P network links
- How to run scripts at build time
I’m going to use the link between leaf01 and server01 to describe the above components of a Vagrantfile.
How to model a device
First, we set the following variables:
# Global Variables
SWITCH_OS = "CumulusCommunity/cumulus-vx"
SWITCH_VERSION = "3.7.15"
SERVER_OS = "generic/ubuntu2004"
Then, we define the device’s name, software and VM settings (we’re skipping the interfaces for now):
####################################
########## leaf01 config ###########
####################################
config.vm.define "leaf01" do |device|
device.vm.hostname = "leaf01"
device.vm.box = SWITCH_OS
device.vm.box_version = SWITCH_VERSION
device.vm.synced_folder ".", "/vagrant", disabled: true
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
#####################################
########## server01 config ##########
#####################################
config.vm.define "server01" do |device|
device.vm.hostname = "server01"
device.vm.box = SERVER_OS
device.vm.synced_folder ".", "/vagrant", disabled: true
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
Setting vm.synced_folder to disabled stops Vagrant’s default behaviour of sharing the project folder with the VM. You can find more details here: https://www.vagrantup.com/docs/synced-folders.
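If you later do want to share files with a VM (for example, to push automation scripts onto the mgmt-server), you can enable a synced folder explicitly. A minimal sketch using the rsync type, with hypothetical paths:
device.vm.synced_folder "./scripts", "/home/vagrant/scripts", type: "rsync"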
How to model a P2P link
Libvirt represents P2P links as tunnels, defined by:
- Protocol: TCP or UDP
- Source IP
- Destination IP
- Source Port
- Destination Port
Each link needs to be uniquely identified, so using different IPs with the same port numbers (or vice versa) is sufficient.
For the sake of simplicity, I’m keeping port 9999 unchanged and using different IPs to uniquely identify each link.
####################################
########## leaf01 config ##########
####################################
# Link leaf01 swp3 ----> server01 eth1
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.18",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.17",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp3",
auto_config: false
####################################
########## server01 config ##########
####################################
# Link server01 eth1 ----> leaf01 swp3
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.17",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.18",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "eth1",
auto_config: false
How to preconfigure devices
Another very useful feature of Vagrant is the option to run scripts at provisioning time. Once Vagrant has built the VM and can access it, it runs the scripts described in the Vagrantfile.
####################################
########## leaf01 config ##########
####################################
device.vm.provision "shell", inline: $switches_script, :args => ["leaf01", "3"]
####################################
########## server01 config ##########
####################################
device.vm.provision "shell", inline: $servers_script
If your script doesn’t need arguments, simply omit the :args section, as in the server01 line above.
Final Vagrantfile modelling our full network topology
Now it’s time to see everything put together in the final Vagrantfile, but before that, let’s discuss a few things.
First, we set an environment variable so that Vagrant’s default provider is changed from VirtualBox to Libvirt:
# Set libvirt as the default provider
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt'
Second, the following table collects the tunnel IP mappings we’re going to use to build the P2P links in the Vagrantfile.
To make things easier, I’m using the same UDP port (9999) across all links; only the IPs change. (Rather than hand-writing every tunnel stanza, you could also generate them; see the sketch after the table.)
| Device A | Intf A | IP A | UDP Port A | Device B | Intf B | IP B | UDP Port B |
|---|---|---|---|---|---|---|---|
| spine01 | swp1 | 127.0.100.1 | 9999 | leaf01 | swp1 | 127.0.100.2 | 9999 |
| spine01 | swp2 | 127.0.100.3 | 9999 | leaf02 | swp1 | 127.0.100.4 | 9999 |
| spine01 | swp3 | 127.0.100.5 | 9999 | leaf03 | swp1 | 127.0.100.6 | 9999 |
| spine01 | swp4 | 127.0.100.7 | 9999 | leaf04 | swp1 | 127.0.100.8 | 9999 |
| spine02 | swp1 | 127.0.100.9 | 9999 | leaf01 | swp2 | 127.0.100.10 | 9999 |
| spine02 | swp2 | 127.0.100.11 | 9999 | leaf02 | swp2 | 127.0.100.12 | 9999 |
| spine02 | swp3 | 127.0.100.13 | 9999 | leaf03 | swp2 | 127.0.100.14 | 9999 |
| spine02 | swp4 | 127.0.100.15 | 9999 | leaf04 | swp2 | 127.0.100.16 | 9999 |
| server01 | eth1 | 127.0.100.17 | 9999 | leaf01 | swp3 | 127.0.100.18 | 9999 |
| server04 | eth1 | 127.0.100.19 | 9999 | leaf04 | swp3 | 127.0.100.20 | 9999 |
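Since a Vagrantfile is just Ruby, nothing stops you from generating these stanzas instead of typing them all out. Below is a hedged sketch of a helper that builds the option hash for one end of a link; tunnel_opts is a hypothetical name and is not used in the final Vagrantfile, which keeps every stanza written out in full for readability:
# Build the libvirt tunnel options for one end of a P2P link.
# local_ip is this VM's tunnel endpoint, remote_ip is the far end,
# iface is the interface name inside this VM.
def tunnel_opts(local_ip, remote_ip, iface)
  {
    :libvirt__tunnel_type       => "udp",
    :libvirt__tunnel_local_ip   => local_ip,
    :libvirt__tunnel_local_port => "9999",
    :libvirt__tunnel_ip         => remote_ip,
    :libvirt__tunnel_port       => "9999",
    :libvirt__iface_name        => iface,
    :auto_config                => false,
  }
end
Inside a device block, device.vm.network :private_network, **tunnel_opts("127.0.100.1", "127.0.100.2", "swp1") would then be equivalent to spine01’s first hand-written link.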
Third, the provider-level changes below:
config.vm.provider :libvirt do |domain|
# Change the default allowed number of interfaces from 8 to 52
domain.nic_adapter_count = 52
# Change the MGMT network subnet and its default name
domain.management_network_name = "clos_fabric_mgmt_network"
domain.management_network_address = "172.16.100.0/24"
Finally, here’s the final Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Set ENV variables
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt'
# Global Variables
SWITCH_OS = "CumulusCommunity/cumulus-vx"
SWITCH_VERSION = "3.7.15"
SERVER_OS = "generic/ubuntu2004"
# Build scripts
$switches_script = <<EOF
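# $1 = hostname, $2 = highest swp port to bring up (swp1 through swp$2)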
sudo net add hostname $1
sudo net add vrf mgmt
sudo net add int swp1-$2
sudo net commit
EOF
$servers_script = <<EOF
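# bring up the data-plane interface and install basic troubleshooting tools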
sudo ip link set eth1 up
sudo apt-get install net-tools -y
sudo apt-get install inetutils-traceroute -y
sudo apt-get install lldpd -y
EOF
$mgmt_server_script = <<EOF
sudo apt-get install net-tools -y
sudo apt-get install inetutils-traceroute -y
sudo apt-get install lldpd -y
EOF
# Spine and Leaf Fabric - including servers
Vagrant.configure("2") do |config|
config.vm.provider :libvirt do |domain|
# Change the default allowed number of interfaces from 8 to 52
domain.nic_adapter_count = 52
# Change the MGMT network subnet and its default name
domain.management_network_name = "clos_fabric_mgmt_network"
domain.management_network_address = "172.16.100.0/24"
end
################################
###### mgmt-server config ######
################################
config.vm.define "mgmt-server" do |device|
device.vm.hostname = "mgmt-server"
device.vm.box = SERVER_OS
device.vm.synced_folder ".", "/vagrant", disabled: true
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 4096
domain.cpus = 4
end
# No data plane links
device.vm.provision "shell", inline: $mgmt_server_script
end
####################################
########## spine01 config ##########
####################################
config.vm.define "spine01" do |device|
device.vm.box = SWITCH_OS
device.vm.box_version = SWITCH_VERSION
device.vm.synced_folder ".", "/vagrant", disabled: true
device.vm.hostname = "spine01"
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
# Link spine01 swp1 ----> leaf01 swp1
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.1",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.2",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp1",
auto_config: false
# Link spine01 swp2 ----> leaf02 swp1
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.3",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.4",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp2",
auto_config: false
# Link spine01 swp3 ----> leaf03 swp1
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.5",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.6",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp3",
auto_config: false
# Link spine01 swp4 ----> leaf04 swp1
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.7",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.8",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp4",
auto_config: false
# Call script + provide values for its variables
device.vm.provision "shell", inline: $switches_script, :args => ["spine01", "4"]
end
####################################
########## spine02 config ##########
####################################
config.vm.define "spine02" do |device|
device.vm.box = SWITCH_OS
device.vm.box_version = SWITCH_VERSION
device.vm.synced_folder ".", "/vagrant", disabled: true
device.vm.hostname = "spine02"
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
# Link spine02 swp1 ----> leaf01 swp2
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.9",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.10",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp1",
auto_config: false
# Link spine02 swp2 ----> leaf02 swp2
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.11",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.12",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp2",
auto_config: false
# Link spine02 swp3 ----> leaf03 swp2
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.13",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.14",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp3",
auto_config: false
# Link spine02 swp4 ----> leaf04 swp2
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.15",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.16",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp4",
auto_config: false
device.vm.provision "shell", inline: $switches_script, :args => ["spine02", "4"]
end
####################################
########## leaf01 config ##########
####################################
config.vm.define "leaf01" do |device|
device.vm.box = SWITCH_OS
device.vm.box_version = SWITCH_VERSION
device.vm.synced_folder ".", "/vagrant", disabled: true
device.vm.hostname = "leaf01"
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
# Link leaf01 swp1 ----> spine01 swp1
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.2",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.1",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp1",
auto_config: false
# Link leaf01 swp2 ----> spine02 swp1
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.10",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.9",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp2",
auto_config: false
# Link leaf01 swp3 ----> server01 eth1
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.18",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.17",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp3",
auto_config: false
device.vm.provision "shell", inline: $switches_script, :args => ["leaf01", "3"]
end
####################################
########## leaf02 config ##########
####################################
config.vm.define "leaf02" do |device|
device.vm.box = SWITCH_OS
device.vm.box_version = SWITCH_VERSION
device.vm.synced_folder ".", "/vagrant", disabled: true
device.vm.hostname = "leaf02"
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
# Link leaf02 swp1 ----> spine01 swp2
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.4",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.3",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp1",
auto_config: false
# Link leaf02 swp2 ----> spine02 swp2
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.12",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.11",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp2",
auto_config: false
device.vm.provision "shell", inline: $switches_script, :args => ["leaf02", "2"]
end
####################################
########## leaf03 config ##########
####################################
config.vm.define "leaf03" do |device|
device.vm.box = SWITCH_OS
device.vm.box_version = SWITCH_VERSION
device.vm.synced_folder ".", "/vagrant", disabled: true
device.vm.hostname = "leaf03"
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
# Link leaf03 swp1 ----> spine01 swp3
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.6",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.5",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp1",
auto_config: false
# Link leaf03 swp2 ----> spine02 swp3
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.14",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.13",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp2",
auto_config: false
device.vm.provision "shell", inline: $switches_script, :args => ["leaf03", "2"]
end
####################################
########## leaf04 config ##########
####################################
config.vm.define "leaf04" do |device|
device.vm.box = SWITCH_OS
device.vm.box_version = SWITCH_VERSION
device.vm.synced_folder ".", "/vagrant", disabled: true
device.vm.hostname = "leaf04"
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
# Link leaf04 swp1 ----> spine01 swp4
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.8",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.7",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp1",
auto_config: false
# Link leaf04 swp2 ----> spine02 swp4
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.16",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.15",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp2",
auto_config: false
# Link leaf04 swp3 ----> server04 eth1
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.20",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.19",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "swp3",
auto_config: false
device.vm.provision "shell", inline: $switches_script, :args => ["leaf04", "3"]
end
####################################
########## server01 config ##########
####################################
config.vm.define "server01" do |device|
device.vm.hostname = "server01"
device.vm.box = SERVER_OS
device.vm.synced_folder ".", "/vagrant", disabled: true
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
# Link server01 eth1 ----> leaf01 swp3
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.17",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.18",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "eth1",
auto_config: false
device.vm.provision "shell", inline: $servers_script
end
####################################
########## server04 config ##########
####################################
config.vm.define "server04" do |device|
device.vm.hostname = "server04"
device.vm.box = SERVER_OS
device.vm.synced_folder ".", "/vagrant", disabled: true
# VM settings
device.vm.provider :libvirt do |domain|
domain.memory = 768
domain.cpus = 1
end
# Link server04 eth1 ----> leaf04 swp3
device.vm.network :private_network,
:libvirt__tunnel_type => "udp",
:libvirt__tunnel_local_ip => "127.0.100.19",
:libvirt__tunnel_local_port => "9999",
:libvirt__tunnel_ip => "127.0.100.20",
:libvirt__tunnel_port => "9999",
:libvirt__iface_name => "eth1",
auto_config: false
device.vm.provision "shell", inline: $servers_script
end
end
Running ‘vagrant up’
Time to bring up the topology. Once our Vagrantfile is complete, we change into the directory that contains it and run:
lab@lab-HP-Z620-Workstation:~/NetworkAuto/test_libvirt_tunnels$ vagrant up
Bringing machine 'mgmt-server' up with 'libvirt' provider...
Bringing machine 'spine01' up with 'libvirt' provider...
Bringing machine 'spine02' up with 'libvirt' provider...
Bringing machine 'leaf01' up with 'libvirt' provider...
Bringing machine 'leaf02' up with 'libvirt' provider...
Bringing machine 'leaf03' up with 'libvirt' provider...
Bringing machine 'leaf04' up with 'libvirt' provider...
Bringing machine 'server01' up with 'libvirt' provider...
Bringing machine 'server04' up with 'libvirt' provider...
...output trimmed...
Let’s check the status after the ‘vagrant up’ command has finished:
lab@lab-HP-Z620-Workstation:~/NetworkAuto/test_libvirt_tunnels$ vagrant status
Current machine states:
mgmt-server running (libvirt)
spine01 running (libvirt)
spine02 running (libvirt)
leaf01 running (libvirt)
leaf02 running (libvirt)
leaf03 running (libvirt)
leaf04 running (libvirt)
server01 running (libvirt)
server04 running (libvirt)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
OK, looking good: all VMs are up and running. Let’s see if the devices got their management IPs via DHCP:
lab@lab-HP-Z620-Workstation:~/NetworkAuto/test_libvirt_tunnels$ virsh net-list
Name State Autostart Persistent
-------------------------------------------------------------
clos_fabric_mgmt_network active no yes
default active yes yes
lab@lab-HP-Z620-Workstation:~/NetworkAuto/test_libvirt_tunnels$ virsh net-dhcp-leases clos_fabric_mgmt_network
Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
------------------------------------------------------------------------------------------------------------
2022-07-23 08:23:10 52:54:00:32:b8:46 ipv4 172.16.100.183/24 mgmt-server ...output trimmed...
2022-07-23 08:23:26 52:54:00:5e:59:fd ipv4 172.16.100.59/24 leaf03 -
2022-07-23 08:23:27 52:54:00:62:24:7c ipv4 172.16.100.47/24 leaf02 -
2022-07-23 08:23:27 52:54:00:80:ed:99 ipv4 172.16.100.159/24 spine01 -
2022-07-23 08:23:29 52:54:00:84:d2:b9 ipv4 172.16.100.18/24 leaf04 -
2022-07-23 08:23:20 52:54:00:9d:2a:3c ipv4 172.16.100.132/24 server01 ...output trimmed...
2022-07-23 08:23:27 52:54:00:c3:fa:1a ipv4 172.16.100.112/24 leaf01 -
2022-07-23 08:23:27 52:54:00:e2:b4:67 ipv4 172.16.100.50/24 spine02 -
2022-07-23 08:23:20 52:54:00:ef:9b:e8 ipv4 172.16.100.11/24 server04 ...output trimmed...
Great, time for the final test: the LLDP outputs.
lab@lab-HP-Z620-Workstation:~/NetworkAuto/test_libvirt_tunnels$ vagrant ssh spine01 -c "sudo net show lldp"
LocalPort Speed Mode RemoteHost RemotePort
--------- ----- ------- ---------- ----------
swp1 1G Default leaf01 swp1
swp2 1G Default leaf02 swp1
swp3 1G Default leaf03 swp1
swp4 1G Default leaf04 swp1
Connection to 172.16.100.159 closed.
lab@lab-HP-Z620-Workstation:~/NetworkAuto/test_libvirt_tunnels$ vagrant ssh spine02 -c "sudo net show lldp"
LocalPort Speed Mode RemoteHost RemotePort
--------- ----- ------- ---------- ----------
swp1 1G Default leaf01 swp2
swp2 1G Default leaf02 swp2
swp3 1G Default leaf03 swp2
swp4 1G Default leaf04 swp2
Connection to 172.16.100.50 closed.
lab@lab-HP-Z620-Workstation:~/NetworkAuto/test_libvirt_tunnels$ vagrant ssh server01 -c "sudo lldpcli show nei summary"
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface: eth1, via: LLDP
Chassis:
ChassisID: mac 52:54:00:c3:fa:1a
SysName: leaf01
Port:
PortID: ifname swp3
PortDescr: swp3
TTL: 120
-------------------------------------------------------------------------------
Connection to 172.16.100.132 closed.
lab@lab-HP-Z620-Workstation:~/NetworkAuto/test_libvirt_tunnels$ vagrant ssh server04 -c "sudo lldpcli show nei summary"
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface: eth1, via: LLDP
Chassis:
ChassisID: mac 52:54:00:84:d2:b9
SysName: leaf04
Port:
PortID: ifname swp3
PortDescr: swp3
TTL: 120
-------------------------------------------------------------------------------
Connection to 172.16.100.11 closed.
lab@lab-HP-Z620-Workstation:~/NetworkAuto/test_libvirt_tunnels$
As we can see from the LLDP outputs above, our topology is up and working as per the diagram. We don’t have any data-plane configuration yet, but we can already manage every device from either our host machine or the mgmt-server VM.
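For example, you could hop onto the mgmt-server and reach spine01 over the management network, using spine01’s leased address from the table above:
vagrant ssh mgmt-server
vagrant@mgmt-server:~$ ping -c 2 172.16.100.159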
In conclusion, we’ve reached our goal: a topology ready for running our automation scripts or for testing all the fancy new technologies.
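And when you’re done with the lab, a single command from the project directory tears everything down:
vagrant destroy -f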