So you want a real Cisco lab running on your laptop or a small Linux box — no physical hardware, no expensive CML server license, just containers. This post walks through exactly that: installing Containerlab, building Docker images from Cisco IOL binaries using vrnetlab, defining a multi-node topology, and finally pushing configuration to every device automatically from AWX.
By the end you will have two routers and two switches wired up, OSPF converged, VLANs pushed, and L2/L3 interfaces configured — all from Ansible playbooks running in AWX.
What We Are Building

Two IOL routers peering over OSPF, each connected down to an IOL-L2 switch, and the two switches interconnected by a pair of trunk links.
Prerequisites
- Ubuntu 22.04 / 24.04 (bare metal or VM, x86-64)
- Docker installed and running
- Cisco IOL images from CML (Cisco Modeling Labs):
  - x86_64_crb_linux-adventerprisek9-ms.iol (router)
  - x86_64_crb_linux_l2-adventerprisek9-ms.iol (L2 switch)
- An AWX instance reachable from the lab host (optional but covered at the end)
Step 1 — Install Containerlab
Containerlab provides a single install script that handles everything: the binary, bash completion, and an optional setup for Docker if it is not already present.
curl -sL https://containerlab.dev/setup | sudo -E bash -s "all"
Verify the install:
❯ clab version
⣴⡾⠛⠛⠖ ⢠⣶⠟⠛⢷⣦ ⢸⣿⣧ ⣿ ⠘⠛⢻⡟⠛⠛ ⣾⣿⡀ ⣿⡇ ⣿⣿⡄ ⢸⡇ ⣿⡟⠛⠛⠃⢸⣿⠛⠛⣷⡄ ⣿⡇ ⣿⡇
⢸⣿ ⣿⡇ ⣿⡇⢸⣿⠹⣧⡀⣿ ⢸⡇ ⣸⡏⢹⣧ ⣿⡇ ⣿⡏⢿⣄⢸⡇ ⣿⣧⣤⣤ ⢸⣿⣀⣀⣾⠇ ⣿⡇⠐⠟⠛⢿⡆ ⣿⡷⠛⢿⣆
⠘⣿⣄ ⡀ ⢻⣧⡀ ⣠⣿⠃⢸⣿ ⠘⣷⣿ ⢸⡇ ⢠⣿⠷⠶⢿⡆ ⣿⡇ ⣿⡇ ⢻⣾⡇ ⣿⡇ ⢸⣿⠉⢻⣧⡀ ⣿⡇⢰⡟⠛⣻⡇ ⣿⡇ ⣸⡿
⠈⠙⠛⠛⠉ ⠉⠛⠛⠋⠁ ⠘⠛ ⠈⠛ ⠘⠃ ⠚⠃ ⠘⠛ ⠛⠃ ⠛⠃ ⠙⠃ ⠛⠛⠛⠛⠃⠘⠛ ⠙⠓ ⠛⠃⠈⠛⠛⠙⠃ ⠛⠛⠛⠛⠁
version: 0.74.3
commit: 7eadb290a
date: 2026-03-24T10:00:24Z
source: https://github.com/srl-labs/containerlab
rel. notes: https://containerlab.dev/rn/0.74/#0743
Step 2 — Build the Cisco IOL Docker Images
Containerlab does not ship Cisco images — you bring your own from a CML subscription. The srl-labs/vrnetlab fork packages those binaries into Docker images that Containerlab knows how to manage.
⚠️ Important: Use srl-labs/vrnetlab, not the upstream vrnetlab/vrnetlab; images built from the upstream project are not compatible with Containerlab.
Clone the repo
git clone https://github.com/srl-labs/vrnetlab.git
cd vrnetlab/cisco/iol
Rename and place your images
The Makefile discovers images by filename pattern. Follow the convention exactly:
# IOL-L2 switch → cisco_iol-L2-<version>.bin
cp /path/to/x86_64_crb_linux_l2-adventerprisek9-ms.iol \
cisco_iol-L2-17.16.01a.bin
# IOL router → cisco_iol-<version>.bin
cp /path/to/x86_64_crb_linux-adventerprisek9-ms.iol \
cisco_iol-17.16.01a.bin
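If you script this step, it helps to validate filenames before running make. The regex below is my own reading of the naming convention shown above, not vrnetlab's actual Makefile logic:

```python
import re

# Expected names: cisco_iol-<version>.bin (router) or cisco_iol-L2-<version>.bin
# (L2 switch). This pattern is an assumption based on the convention above.
IOL_NAME = re.compile(r"^cisco_iol-(L2-)?(\d+\.\d+\.\d+[a-z]?)\.bin$")

def classify(filename):
    """Return (role, version) for a conventionally named IOL image, else None."""
    m = IOL_NAME.match(filename)
    if not m:
        return None
    role = "l2-switch" if m.group(1) else "router"
    return role, m.group(2)

print(classify("cisco_iol-L2-17.16.01a.bin"))  # ('l2-switch', '17.16.01a')
print(classify("cisco_iol-17.16.01a.bin"))     # ('router', '17.16.01a')
```

A file that still carries its original CML name returns None, which is exactly the case where the Makefile would skip it.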
Build both images in one go
make docker-image
The Makefile iterates over every .bin file in the directory, so both images are built in a single run. Verify:
docker images | grep cisco_iol
# vrnetlab/cisco_iol L2-17.16.01a ... 607MB
# vrnetlab/cisco_iol 17.16.01a ... 598MB
Why IOL instead of vIOS?
IOL (IOS on Linux) runs as a native Linux binary inside the container — no QEMU, no KVM, no nested virtualisation required. It is lighter, faster to boot, and easier to run at scale compared to vIOS qcow2 images.
Step 3 — Generate the IOL License File
IOL will silently exit without a valid iourc license keyed to your hostname. Generate one:
# gen_iourc.py
import struct, socket, hashlib

hostname = socket.gethostname()
iou_key = 0x4944414d  # fixed constant (ASCII "IDAM")
# Host ID: the host's resolved IPv4 address XORed with the constant
host_id = struct.unpack('!I', socket.inet_aton(
    socket.gethostbyname(hostname)))[0] ^ iou_key
data = struct.pack('!II', host_id, iou_key) + b'\x00' * 4
key = hashlib.md5(data).hexdigest()[:16]  # license key = first 16 hex digits
print(f"[license]\n{hostname} = {key};")
python3 gen_iourc.py > ~/iourc
cat ~/iourc
# [license]
# ubuntu = 1a2b3c4d5e6f7890;
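To see why the key is always 16 hex characters, you can run the same hashing step on a fixed, made-up address (127.0.1.1 below is just an example input, not your host's address):

```python
import struct, socket, hashlib

# Same derivation as gen_iourc.py, but on a fixed example address
# instead of this machine's resolved hostname.
iou_key = 0x4944414d
host_id = struct.unpack('!I', socket.inet_aton("127.0.1.1"))[0] ^ iou_key
data = struct.pack('!II', host_id, iou_key) + b'\x00' * 4
key = hashlib.md5(data).hexdigest()[:16]

print(len(key))  # 16: the key is the first 16 hex digits of an MD5 digest
```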
Step 4 — Define the Topology
Create a working directory and write the topology file:
mkdir -p ~/clab/site-a && cd ~/clab/site-a
# site-a.clab.yml
name: site-a
topology:
  defaults:
    binds:
      - /home/ubuntu/iourc:/opt/iourc:ro
  nodes:
    ios-router-1:
      kind: cisco_iol
      image: vrnetlab/cisco_iol:17.16.01a
    ios-router-2:
      kind: cisco_iol
      image: vrnetlab/cisco_iol:17.16.01a
    ios-switch-1:
      kind: cisco_iol
      image: vrnetlab/cisco_iol:L2-17.16.01a
      type: L2
    ios-switch-2:
      kind: cisco_iol
      image: vrnetlab/cisco_iol:L2-17.16.01a
      type: L2
  links:
    # Router-to-router (OSPF area 0)
    - endpoints: ["ios-router-1:Ethernet0/1", "ios-router-2:Ethernet0/1"]
    # Router-1 down to Switch-1
    - endpoints: ["ios-router-1:Ethernet0/2", "ios-switch-1:Ethernet0/1"]
    # Router-2 down to Switch-2
    - endpoints: ["ios-router-2:Ethernet0/2", "ios-switch-2:Ethernet0/1"]
    # Switch-to-switch trunk (two parallel links)
    - endpoints: ["ios-switch-1:Ethernet0/2", "ios-switch-2:Ethernet0/2"]
    - endpoints: ["ios-switch-1:Ethernet0/3", "ios-switch-2:Ethernet0/3"]
A few things worth noting:
- The binds entry under defaults mounts the license file into every node automatically.
- type: L2 on the switch nodes tells Containerlab to apply an L2-appropriate startup config; without it, STP and VLAN features will not work correctly.
- IOL interfaces use the EthernetX/Y naming convention, grouped in sets of 4 per slot.
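The slot/port grouping is easy to misread, so here is a tiny sketch (a helper of my own, not a Containerlab API) that maps a linear port index to its IOL name:

```python
def iol_interface(index):
    """Map a 0-based port index to IOL's EthernetX/Y name (4 ports per slot)."""
    slot, port = divmod(index, 4)
    return f"Ethernet{slot}/{port}"

# Ports 0-3 land in slot 0, ports 4-7 in slot 1, and so on.
print([iol_interface(i) for i in range(6)])
```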
Deploy
sudo clab deploy -t site-a.clab.yml
After ~60 seconds all nodes will be reachable:
╭──────────────────────────┬─────────────────────────────────┬─────────┬───────────────────╮
│ Name │ Kind/Image │ State │ IPv4/6 Address │
├──────────────────────────┼─────────────────────────────────┼─────────┼───────────────────┤
│ clab-site-a-ios-router-1 │ cisco_iol │ running │ 172.20.20.4 │
│ │ vrnetlab/cisco_iol:17.16.01a │ │ 3fff:172:20:20::4 │
├──────────────────────────┼─────────────────────────────────┼─────────┼───────────────────┤
│ clab-site-a-ios-router-2 │ cisco_iol │ running │ 172.20.20.5 │
│ │ vrnetlab/cisco_iol:17.16.01a │ │ 3fff:172:20:20::5 │
├──────────────────────────┼─────────────────────────────────┼─────────┼───────────────────┤
│ clab-site-a-ios-switch-1 │ cisco_iol │ running │ 172.20.20.2 │
│ │ vrnetlab/cisco_iol:L2-17.16.01a │ │ 3fff:172:20:20::2 │
├──────────────────────────┼─────────────────────────────────┼─────────┼───────────────────┤
│ clab-site-a-ios-switch-2 │ cisco_iol │ running │ 172.20.20.3 │
│ │ vrnetlab/cisco_iol:L2-17.16.01a │ │ 3fff:172:20:20::3 │
╰──────────────────────────┴─────────────────────────────────┴─────────┴───────────────────╯
Re-run this table any time with:
sudo clab inspect -t site-a.clab.yml
# or for all running labs:
sudo clab inspect --all
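Rather than eyeballing the table until boot finishes, a small poll loop can wait for SSH to start accepting TCP connections. This is a generic sketch, not a Containerlab feature; the IPs are the ones from the table above:

```python
import socket
import time

def wait_for_port(host, port=22, timeout=120, interval=2.0):
    """Poll until a TCP connection to host:port succeeds, or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(interval)
    return False

# e.g. wait_for_port("172.20.20.4") before kicking off any configuration job
```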
SSH into any node directly:
ssh admin@172.20.20.4   # ios-router-1
ssh admin@172.20.20.2   # ios-switch-1
ℹ️ Default user credentials:
admin:admin
Step 5 — Expose the Lab to Another Machine (AWX)
The management network 172.20.20.0/24 is a Docker bridge local to the Containerlab host. To let AWX (running on a different machine or container) reach the nodes, add a static route on the AWX host pointing to the Containerlab host:
# Run this on the AWX host
sudo ip route add 172.20.20.0/24 via <containerlab-host-ip>
Verify reachability:
ping 172.20.20.4
ssh admin@172.20.20.4
If AWX runs as a Docker container on the same host, just connect it to the clab network instead:
docker network connect clab <awx-container-name>
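Either way, every node's management IP must fall inside the prefix you routed or attached. A quick stdlib sanity check, with the node IPs taken from the deploy table above:

```python
import ipaddress

mgmt = ipaddress.ip_network("172.20.20.0/24")
nodes = ["172.20.20.2", "172.20.20.3", "172.20.20.4", "172.20.20.5"]

# Any node outside the routed prefix would be unreachable from AWX.
uncovered = [ip for ip in nodes if ipaddress.ip_address(ip) not in mgmt]
print(uncovered)  # [] means the single static route covers every node
```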
Step 6 — Configure with AWX and Ansible
Inventory
[ios]
172.20.20.4
172.20.20.5
172.20.20.2
172.20.20.3
[ios:vars]
ansible_connection=network_cli
ansible_network_os=ios
ansible_user=admin
ansible_password=admin
[routers]
172.20.20.4 eth1addr=192.168.1.1/24
172.20.20.5 eth1addr=192.168.1.2/24
[switches]
172.20.20.2
172.20.20.3
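Typing IPs by hand does not scale past a handful of nodes. As a sketch, the inventory can be generated from inspect-style data; the record shape below (a list of dicts with an ipv4_address field) is an assumption about `clab inspect --format json` output, so verify it against your Containerlab version:

```python
def to_ini_inventory(containers):
    """Render a minimal [ios] inventory from a list of container records."""
    lines = ["[ios]"]
    for c in containers:
        # Assumed record shape; check your `clab inspect --format json` output.
        lines.append(c["ipv4_address"].split("/")[0])
    lines += [
        "",
        "[ios:vars]",
        "ansible_connection=network_cli",
        "ansible_network_os=ios",
    ]
    return "\n".join(lines)

sample = [
    {"name": "clab-site-a-ios-router-1", "ipv4_address": "172.20.20.4/24"},
    {"name": "clab-site-a-ios-router-2", "ipv4_address": "172.20.20.5/24"},
]
print(to_ini_inventory(sample))
```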
In AWX, the inline host variable eth1addr=192.168.1.1/24 is set per-host under Inventories → Hosts → Variables:
# Host variables for clab-site-a-ios-router-1
eth1addr: 192.168.1.1/24
# Host variables for clab-site-a-ios-router-2
eth1addr: 192.168.1.2/24
Playbook — L3 Interfaces (Routers)
# l3-interfaces.yaml
- name: Layer-3 interfaces
  hosts: routers
  tasks:
    - name: IP address
      cisco.ios.ios_l3_interfaces:
        config:
          - name: Ethernet0/1
            ipv4:
              - address: "{{ eth1addr }}"
    - name: Admin State
      cisco.ios.ios_interfaces:
        config:
          - name: "{{ item }}"
            enabled: true
      loop:
        - Ethernet0/1
        - Ethernet0/2
        - Ethernet0/3
Playbook — OSPF (Routers)
# ospf.yaml
- name: Playbook OSPF for Border Routers
  hosts: routers
  tasks:
    - name: OSPF
      cisco.ios.ios_ospfv2:
        config:
          processes:
            - process_id: 1
              network:
                - address: 192.168.1.0
                  wildcard_bits: 0.0.0.255
                  area: 0
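The network/wildcard_bits pair is just an inverted netmask. If you find wildcards error-prone, Python's ipaddress module can derive them; this small helper is my own, not part of the cisco.ios collection:

```python
import ipaddress

def wildcard_bits(prefixlen):
    """Return the IOS wildcard mask (inverted netmask) for a prefix length."""
    return str(ipaddress.ip_network(f"0.0.0.0/{prefixlen}").hostmask)

print(wildcard_bits(24))  # 0.0.0.255, matching the OSPF network statement above
```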
Playbook — VLANs (Switches)
# ensure-vlans.yaml
- name: Play for ensuring VLANs
  hosts: switches
  tasks:
    - name: Ensure VLANs
      cisco.ios.ios_vlans:
        config:
          - vlan_id: 10
            name: Printers
          - vlan_id: 20
            name: CCTV
          - vlan_id: 30
            name: IoT
          - vlan_id: 40
            name: Guest
        state: merged
    - name: Save to start-up config
      cisco.ios.ios_config:
        save_when: modified
Playbook — L2 Interfaces (Switches)
# l2-interfaces.yaml
- name: Layer-2 interfaces
  hosts: switches
  tasks:
    - name: Switchports VLANs
      cisco.ios.ios_l2_interfaces:
        config:
          - name: Ethernet0/1
            access:
              vlan: 10
          - name: Ethernet0/2
            trunk:
              encapsulation: dot1q
              allowed_vlans: 10,20,30,40
          - name: Ethernet0/3
            trunk:
              encapsulation: dot1q
              allowed_vlans: 10,20,30,40
    - name: Switchports Modes
      cisco.ios.ios_l2_interfaces:
        config:
          - name: Ethernet0/1
            mode: access
          - name: Ethernet0/2
            mode: trunk
          - name: Ethernet0/3
            mode: trunk
Run them from AWX as individual Job Templates, or chain them in a Workflow Template in this order:
- Routers: l3-interfaces → ospf
- Switches: ensure-vlans → l2-interfaces
Verification
After all playbooks complete:
# On ios-router-1
show ip ospf neighbor
show ip interface brief
show ip route
# On ios-switch-1
show vlan brief
show interfaces trunk
show spanning-tree summary
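If you want the checks scriptable rather than eyeballed, here is a sketch that counts FULL adjacencies in `show ip ospf neighbor` output. The sample text is illustrative of the IOS table format, not captured from this lab:

```python
import re

def full_neighbors(show_output):
    """Count OSPF neighbors whose State column reports FULL."""
    return len(re.findall(r"\bFULL\b", show_output))

sample = """\
Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.1.2       1   FULL/BDR        00:00:36    192.168.1.2     Ethernet0/1
"""
print(full_neighbors(sample))  # 1: router-1 sees router-2 over Ethernet0/1
```

In this two-router topology, anything other than exactly one FULL neighbor per router means OSPF has not converged.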