MPLS (Multiprotocol Label Switching) replaces per-hop IP lookups with fast label swaps. Instead of each router examining the destination IP and consulting the routing table, a label is attached to the packet at the network edge and every core router makes its forwarding decision based purely on that short fixed-length label. The result is a predictable, high-speed forwarding path — the Label Switched Path (LSP) — and the foundation for services like MPLS VPNs and traffic engineering.
This post walks through a lab, building from an OSPF underlay to a fully operational LDP domain, then examining both the control and data planes in detail.
Topology
R1 ── R2 ── R3 ── R4 ── R5 ── R6 ── R7
| Router | Loopback0 | Loopback1 | Transit links |
|---|---|---|---|
| R1 | 1.1.1.1 | 11.1.1.1 | G0/2 → 12.1.1.1/24 |
| R2 | 2.2.2.2 | 22.2.2.2 | G0/1 → 12.1.1.2 │ G0/3 → 23.1.1.2 |
| R3 | 3.3.3.3 | 33.3.3.3 | G0/2 → 23.1.1.3 │ G0/4 → 34.1.1.3 |
| R4 | 4.4.4.4 | 44.4.4.4 | G0/3 → 34.1.1.4 │ G0/5 → 45.1.1.4 |
| R5 | 5.5.5.5 | 55.5.5.5 | G0/4 → 45.1.1.5 │ G0/6 → 56.1.1.5 |
| R6 | 6.6.6.6 | 66.6.6.6 | G0/5 → 56.1.1.6 │ G0/7 → 67.1.1.6 |
| R7 | 7.7.7.7 | 77.7.7.7 | G0/6 → 67.1.1.7 |
Each router has a Loopback0 (used as the OSPF/LDP router-ID) and a Loopback1 (intentionally excluded from OSPF initially, to illustrate a common LDP pitfall).
Step 1 — OSPF Underlay
LDP needs IP reachability to form TCP sessions. OSPF Area 0 runs on all transit links and Loopback0 interfaces. Loopback1 interfaces are excluded for now.
# R1
router ospf 1
router-id 0.0.0.1
network 1.1.1.1 0.0.0.0 area 0
network 12.1.1.1 0.0.0.0 area 0
# R2
router ospf 1
router-id 0.0.0.2
network 2.2.2.2 0.0.0.0 area 0
network 12.1.1.2 0.0.0.0 area 0
network 23.1.1.2 0.0.0.0 area 0
# Repeat the same pattern for R3–R7
Once OSPF converges, every router has a route to every Loopback0 — the prerequisite for LDP.
Step 2 — LDP Configuration
Enable MPLS and set the LDP Router-ID
# On all routers — pin LDP router-ID to Loopback0
mpls ldp router-id loopback 0 force
# Enable LDP per transit interface (example on R1)
interface GigabitEthernet0/2
mpls ip
The Router-ID Pitfall
This step hides a trap worth understanding. LDP session establishment is two-phase:
- Hello adjacency — LSRs multicast UDP hellos to 224.0.0.2, port 646, on MPLS-enabled interfaces. This is the basic discovery mechanism.
- TCP session — established between the transport addresses advertised in those hellos.
By default, the transport address = LDP router-ID = the highest loopback IP on the
router. Before pinning it to Loopback0, R1 advertises 11.1.1.1 and R2 advertises
22.2.2.2 as transport addresses. Neither is in OSPF, so neither is reachable, and
no TCP session can form.
R1# show mpls ldp discovery
Local LDP Identifier:
11.1.1.1:0
Discovery Sources:
Interfaces:
GigabitEthernet0/2 (ldp): xmit/recv
LDP Id: 22.2.2.2:0; no route ← TCP session cannot form
Setting mpls ldp router-id loopback 0 force changes the transport address to
1.1.1.1 / 2.2.2.2 — addresses OSPF already knows — and the session forms
immediately.
R1# show mpls ldp neighbor
Peer LDP Ident: 2.2.2.2:0; Local LDP Ident 1.1.1.1:0
TCP connection: 2.2.2.2.32572 - 1.1.1.1.646
State: Oper; Msgs sent/rcvd: 24/24; Downstream
Up time: 00:03:55
LDP discovery sources:
GigabitEthernet0/2, Src IP addr: 12.1.1.2
Addresses bound to peer LDP Ident:
12.1.1.2 23.1.1.2 2.2.2.2 22.2.2.2
The LDP identifier format is <router-id>:<label-space>. The :0 suffix means
platform-wide label space — labels are globally significant across all interfaces
on this LSR, and the label alone is the matching criterion for forwarding decisions.
Note: The router with the higher transport address becomes the TCP client (active role). Here
2.2.2.2 > 1.1.1.1, so R2 initiates the TCP connection to R1’s port 646.
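Both rules can be modeled in a few lines of Python. This is a simplified sketch of the election logic, not anything IOS actually runs; the loopback lists are the lab's addresses:

```python
import ipaddress

def default_ldp_router_id(loopbacks):
    """IOS default: the numerically highest loopback IP becomes the
    LDP router-ID (and hence the advertised transport address)."""
    return max(loopbacks, key=ipaddress.IPv4Address)

def active_peer(transport_a, transport_b):
    """RFC 5036: the LSR with the higher transport address takes the
    active role and initiates the TCP connection to port 646."""
    a = ipaddress.IPv4Address(transport_a)
    b = ipaddress.IPv4Address(transport_b)
    return transport_a if a > b else transport_b

# R1's loopbacks before 'mpls ldp router-id loopback 0 force':
print(default_ldp_router_id(["1.1.1.1", "11.1.1.1"]))  # 11.1.1.1, not in OSPF
print(active_peer("1.1.1.1", "2.2.2.2"))               # 2.2.2.2 initiates
```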
Step 3 — LDP Timers
Two independent timer sets govern LDP reliability.
Discovery timers
# On all routers
mpls ldp discovery hello interval 15
mpls ldp discovery hello holdtime 45
R1# show mpls ldp discovery detail
Hello interval: 15000 ms; Transport IP addr: 1.1.1.1
Hold time: 45 sec; Proposed local/peer: 45/45 sec
Session keepalive timers
Setting the session holdtime implicitly sets the keepalive interval to holdtime ÷ 3.
# On all routers
mpls ldp holdtime 90 # keepalive becomes 30s automatically
R1# show mpls ldp parameters
Session hold time: 90 sec; keep alive interval: 30 sec
Discovery hello: holdtime: 45 sec; interval: 15 sec
If existing sessions do not pick up the new timers, reset them:
clear mpls ldp neighbor *
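The session holdtime is also negotiated: per RFC 5036, peers settle on the smaller of the two proposed values, and IOS derives the keepalive interval from the result. A minimal sketch of that arithmetic:

```python
def negotiated_session_timers(local_holdtime, peer_holdtime):
    """Peers agree on the smaller proposed session holdtime (RFC 5036);
    IOS then sends keepalives at one third of the agreed value."""
    holdtime = min(local_holdtime, peer_holdtime)
    return holdtime, holdtime // 3

print(negotiated_session_timers(90, 90))  # (90, 30), as in the lab
print(negotiated_session_timers(90, 45))  # a peer proposing 45 wins: (45, 15)
```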
Step 4 — Label Ranges
MPLS labels are 20-bit values. IOS reserves 0–15 for special purposes and defaults to 16–100,000 for LDP-assigned labels. Assigning non-overlapping per-router ranges makes every label self-documenting in traces and LFIB output.
R1(config)# mpls label range 100 199
R2(config)# mpls label range 200 299
R3(config)# mpls label range 300 399
R4(config)# mpls label range 400 499
R5(config)# mpls label range 500 599
R6(config)# mpls label range 600 699
R7(config)# mpls label range 700 799
Important: The new range only takes effect after write memory + reload. Until then, show mpls label range shows the current range alongside the pending one.
R1# show mpls label range
Downstream Generic label region: Min/Max label: 16/199
[Configured range for next reload: Min/Max label: 100/199]
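The 20-bit width comes from the MPLS label stack encoding (RFC 3032): each 32-bit shim entry carries the label, a 3-bit EXP/TC field, a bottom-of-stack flag, and an 8-bit TTL. A sketch of how one entry is packed:

```python
import struct

def encode_shim(label, exp, s, ttl):
    """Pack one MPLS label stack entry (RFC 3032):
    label (20 bits) | EXP (3 bits) | bottom-of-stack (1 bit) | TTL (8 bits)."""
    assert 0 <= label < 2**20, "labels are 20-bit values"
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)

# Label 205 (R2's range), bottom of stack, TTL 255
print(encode_shim(205, 0, 1, 255).hex())  # 000cd1ff
```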
Step 5 — The Control Plane (LIB)
The Label Information Base (LIB) is the MPLS equivalent of the routing table. Each LSR:
- Assigns a local binding (a label) to every prefix in its routing table
- Advertises that binding to all LDP peers over the TCP session
- Stores received bindings as remote entries
Tracing label bindings for 7.7.7.7/32 from R7 back to R1 shows the full chain:
| Router | Local label | Next-hop LSR | Remote label used |
|---|---|---|---|
| R7 | imp-null | — (destination) | — |
| R6 | 600 | R7 | imp-null → PHP |
| R5 | 502 | R6 | 600 |
| R4 | 405 | R5 | 502 |
| R3 | 307 | R4 | 405 |
| R2 | 205 | R3 | 307 |
| R1 | 100 | R2 | 205 |
R6# show mpls ldp bindings 7.7.7.7 32
lib entry: 7.7.7.7/32, rev 10
local binding: label: 600
remote binding: lsr: 7.7.7.7:0, label: imp-null ← next-hop, used for forwarding
remote binding: lsr: 5.5.5.5:0, label: 502 ← not next-hop, stored but unused
That second remote binding from R5 is retained even though R5 is not the next-hop. This is liberal label retention mode — Cisco IOS’s default. If routing changes and R5 becomes the next-hop, R6 already has the binding and can switch immediately without waiting for LDP re-advertisement.
imp-null and PHP
imp-null (Implicit NULL, label 3) is a special signal. When R7 advertises it to R6
for the 7.7.7.7/32 prefix, it instructs R6: “pop the top label before forwarding to
me.” This is Penultimate Hop Popping (PHP). R7 receives a plain IP packet and only
needs one CEF lookup instead of two — a small optimisation that matters at scale.
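R6's LIB entry above can be modeled as a small lookup structure. This is a toy model (the addresses and labels are the lab's values), but it captures why liberal retention pays off:

```python
# A toy LIB entry for 7.7.7.7/32 as seen on R6.
lib_entry = {
    "local": 600,
    "remote": {"7.7.7.7": "imp-null",  # next-hop R7: pop (PHP)
               "5.5.5.5": 502},        # liberal retention: kept, unused
}

def outgoing_label(lib, next_hop):
    """Only the binding learned from the current IGP next-hop is used;
    the rest sit in the LIB waiting for a routing change."""
    return lib["remote"][next_hop]

print(outgoing_label(lib_entry, "7.7.7.7"))  # imp-null: pop before forwarding
# If the IGP reroutes via R5, the binding is already on hand:
print(outgoing_label(lib_entry, "5.5.5.5"))  # 502, no LDP re-advertisement
```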
Step 6 — The Data Plane (LFIB)
The Label Forwarding Information Base (LFIB) is derived from the LIB and merged with CEF, so a single lookup yields both the outgoing label and the next-hop.
| Operation | Description |
|---|---|
| Push (impose) | Add a label to an unlabelled packet — happens at the ingress LSR |
| Swap | Replace the topmost label — happens at intermediate LSRs |
| Pop | Remove the topmost label — happens at the penultimate or egress LSR |
Walking the LSP from R1 to 7.7.7.7
R1 imposes label 205 (the binding received from R2) and forwards:
R1# show ip cef 7.7.7.7/32 detail
7.7.7.7/32, epoch 0
  nexthop 12.1.1.2 GigabitEthernet0/2 label 205
Each hop swaps until R6 performs PHP:
R6# show mpls forwarding-table 7.7.7.7 32
Local Outgoing Prefix Outgoing Next Hop
Label Label or Tunnel interface
600 Pop Label 7.7.7.7/32 Gi0/7 67.1.1.7
R7 receives a plain IP packet and does a simple CEF receive lookup:
R7# show ip cef 7.7.7.7
7.7.7.7/32
receive for Loopback0
The traceroute confirms the full path and shows the active label at each hop:
R1# traceroute 7.7.7.7
1 12.1.1.2 [MPLS: Label 205 Exp 0] ← R2 (200–299 = R2's range)
2 23.1.1.3 [MPLS: Label 307 Exp 0] ← R3
3 34.1.1.4 [MPLS: Label 405 Exp 0] ← R4
4 45.1.1.5 [MPLS: Label 502 Exp 0] ← R5
5 56.1.1.6 [MPLS: Label 600 Exp 0] ← R6 (PHP on outbound)
6 67.1.1.7 ← R7 receives plain IP
Only R1 and R7 perform IP-based lookups. Everything in between is a label swap.
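The whole data-plane walk is easy to simulate. The sketch below replays the push/swap/pop sequence using the LFIB entries traced above; the dict is a toy stand-in for each router's LFIB slice for 7.7.7.7/32:

```python
# Per-router LFIB operations for 7.7.7.7/32 (labels from the lab).
# None as outgoing label means "Pop Label" (R6's PHP entry).
lfib = {"R1": ("push", 205), "R2": ("swap", 307), "R3": ("swap", 405),
        "R4": ("swap", 502), "R5": ("swap", 600), "R6": ("pop", None)}

def walk_lsp(path):
    """Trace label operations along the LSP: one push at ingress,
    swaps in the core, a pop at the penultimate hop (PHP)."""
    stack, trace = [], []
    for router in path:
        op, label = lfib[router]
        if op == "push":
            stack.append(label)
        elif op == "swap":
            stack[-1] = label
        else:            # pop: R7 receives a plain IP packet
            stack.pop()
        trace.append((router, op, list(stack)))
    return trace

for hop in walk_lsp(["R1", "R2", "R3", "R4", "R5", "R6"]):
    print(hop)
# ('R1', 'push', [205]) ... ('R6', 'pop', [])
```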
Step 7 — LDP Autoconfig
Enabling mpls ip per interface is tedious and easy to miss. OSPF can drive LDP
automatically — any interface in OSPF Area 0 gets LDP enabled without manual
intervention.
# On all routers
mpls ip
router ospf 1
mpls ldp autoconfig area 0
R2# show mpls interfaces
Interface IP Tunnel Operational
GigabitEthernet0/1 Yes (ldp) No Yes
GigabitEthernet0/3 Yes (ldp) No Yes
Loopbacks are not listed — LDP is not enabled on them because they do not carry labelled packets. Adding a new link to OSPF automatically brings up both OSPF and LDP adjacencies on that interface, with no additional configuration required.
Step 8 — Conditional Label Advertising
By default, LSRs advertise labels for every prefix, including transit link subnets
(12.1.1.0/24, 23.1.1.0/24, etc.). In a real MPLS core those labels are never used —
traffic always targets a loopback, never a transit subnet. Filtering them reduces LIB
size and LDP chatter across the core.
# On all routers
no mpls ldp advertise-labels
ip access-list standard LOOPBACKS-ONLY
deny 12.1.1.0 0.0.0.255
deny 23.1.1.0 0.0.0.255
deny 34.1.1.0 0.0.0.255
deny 45.1.1.0 0.0.0.255
deny 56.1.1.0 0.0.0.255
deny 67.1.1.0 0.0.0.255
permit any
mpls ldp advertise-labels for LOOPBACKS-ONLY
Note: This only filters what is advertised — local label bindings are still generated for all prefixes. Peer routers simply stop receiving remote bindings for the denied prefixes.
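Standard-ACL evaluation is first match wins, with wildcard bits marking "don't care" positions. A sketch of the matching logic, trimmed to two deny entries for brevity:

```python
import ipaddress

def acl_permits(acl, addr):
    """First-match evaluation of an IOS standard ACL. Each entry is
    (action, network, wildcard); a wildcard bit of 1 means 'ignore'."""
    a = int(ipaddress.IPv4Address(addr))
    for action, net, wc in acl:
        net_i = int(ipaddress.IPv4Address(net))
        wc_i = int(ipaddress.IPv4Address(wc))
        if (a & ~wc_i) == (net_i & ~wc_i):  # compare only the fixed bits
            return action == "permit"
    return False  # implicit deny at the end of every ACL

LOOPBACKS_ONLY = [("deny", "12.1.1.0", "0.0.0.255"),
                  ("deny", "23.1.1.0", "0.0.0.255"),
                  ("permit", "0.0.0.0", "255.255.255.255")]  # 'permit any'

print(acl_permits(LOOPBACKS_ONLY, "7.7.7.7"))   # True: label advertised
print(acl_permits(LOOPBACKS_ONLY, "12.1.1.1"))  # False: filtered
```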
Step 9 — TTL Propagation and Topology Hiding
By default, when an ingress LSR imposes a label it copies the IP TTL into the MPLS shim header TTL field. Each LSR in the core decrements this value, generating ICMP TTL-exceeded messages the same way a normal IP router would. This makes every core router visible in a traceroute — useful internally, but something service providers typically want to hide from customers.
# Hide core from customer traceroutes (forwarded traffic only)
# Configure on all core LSRs (R2–R6)
no mpls ip propagate-ttl forwarded
Customer view after the change — core hops hidden:
R7# traceroute 1.1.1.1 probe 1
1 67.1.1.6 ← ingress LSR (still visible as first hop)
2 23.1.1.2 [MPLS: Label 209 Exp 0] ← R3/R4/R5 are hidden
3 12.1.1.1
Core operator traceroute is unaffected and still sees every hop:
R6# traceroute 1.1.1.1 probe 1
1 56.1.1.5 [MPLS: Label 509 Exp 0]
2 45.1.1.4 [MPLS: Label 409 Exp 0]
3 34.1.1.3 [MPLS: Label 310 Exp 0]
4 23.1.1.2 [MPLS: Label 209 Exp 0]
5 12.1.1.1
The local variant does the opposite — hides hops only from the LSR’s own
traceroutes while leaving customer-originated traceroutes unaffected.
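The effect on probe expiry can be sketched numerically. This toy model assumes the only question is whether the shim TTL reaches zero at a given core hop:

```python
def shim_ttl_at_imposition(ip_ttl, propagate_ttl):
    """With 'no mpls ip propagate-ttl forwarded', the ingress LSR writes
    255 into the shim instead of copying the (small) probe TTL."""
    return ip_ttl if propagate_ttl else 255

def expires_in_core(probe_ttl, hops_into_core, propagate_ttl):
    """A probe expires at a core LSR only if the shim TTL runs out there."""
    return shim_ttl_at_imposition(probe_ttl, propagate_ttl) - hops_into_core <= 0

# Probe arriving with TTL 2, three hops into the core:
print(expires_in_core(2, 3, True))   # True: core hop answers, visible
print(expires_in_core(2, 3, False))  # False: 255 never runs out, hidden
```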
Step 10 — Session Protection
Basic LDP discovery is tied to the physical link. If the direct link between two LDP peers goes down, the session drops and all label bindings are purged — even if an alternate routed path exists through the network. Session protection fixes this by establishing a parallel targeted hello adjacency (unicast UDP to the peer’s loopback) alongside the link hello. When the link fails, the targeted adjacency keeps the TCP session alive over the alternate IGP path.
# On R3, R4 and R5 (which have redundant paths between them)
mpls ldp session protection
R3# show mpls ldp discovery
Targeted Hellos:
3.3.3.3 -> 4.4.4.4 (ldp): active/passive, xmit/recv LDP Id: 4.4.4.4:0
3.3.3.3 -> 5.5.5.5 (ldp): active/passive, xmit/recv LDP Id: 5.5.5.5:0
When the G0/4 link on R3 goes down:
%OSPF-5-ADJCHG: Nbr 0.0.0.4 on GigabitEthernet0/4 from FULL to DOWN
LDP SP: 4.4.4.4:0: last primary adj lost; starting session protection holdup timer
LDP SP: 4.4.4.4:0: state change (Ready -> Protecting)
The OSPF adjacency drops but the LDP TCP session is kept alive via the targeted adjacency. All of R4’s label bindings remain in R3’s LIB throughout the outage — no re-advertisement needed when the link recovers.