In the previous post, we established BGP peering between our Kubernetes cluster and a virtual leaf-spine datacenter fabric, enabling dynamic Pod CIDR advertisement.
But Pod connectivity is only half the story.
In production Kubernetes environments, Services, especially those of type LoadBalancer, are how applications expose themselves to the outside world. Traditionally, that requires an extra component such as MetalLB or a cloud provider's load balancer integration.
With Cilium’s LoadBalancer IPAM and BGP Service Advertisement, we can eliminate that dependency entirely.
In this post, we’ll extend our lab to:
- Allocate LoadBalancer IPs from custom IP pools
- Advertise Service IPs via BGP to the datacenter fabric
- Implement selective advertisement using label-based filtering
- Verify end-to-end reachability from the core router to Kubernetes Services
Why LoadBalancer IPAM Matters
In bare-metal or on-premises Kubernetes deployments, creating a Service of type LoadBalancer typically leaves its external IP stuck in `<pending>`, because there is no cloud provider to assign one.
Cilium solves this with LB-IPAM, which:
- Assigns IPs from administrator-defined pools
- Integrates seamlessly with Cilium’s BGP control plane
- Supports multi-tenant IP allocation using selectors
- Advertises allocated IPs directly into your network fabric
No external controllers. No cloud dependencies. Just native Kubernetes networking.
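Everything that follows assumes the setup from the previous post: Cilium is installed with its BGP control plane enabled and already peering with the ToRs. As a rough sketch (chart version and all other values are omitted here), that corresponds to:

```bash
# Minimal sketch: Cilium installed/upgraded with the BGP control plane enabled.
# Other values (IPAM mode, routing mode, etc.) follow the previous post.
helm upgrade --install cilium cilium/cilium \
  --namespace kube-system \
  --set bgpControlPlane.enabled=true
```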
The Architecture
Our setup builds on the existing BGP topology:
- Kubernetes nodes already peer with ToR switches
- ToRs peer with the spine (core router)
- New: LoadBalancer IPs are allocated from a dedicated pool and advertised via BGP
When a Service of type LoadBalancer is created:
- Cilium’s LB-IPAM assigns an IP from the pool
- The BGP control plane advertises this IP to ToR switches
- ToRs propagate the route to the spine
- Traffic can now reach the Service from anywhere in the fabric
Defining an IP Pool
We start by creating a CiliumLoadBalancerIPPool resource that defines which IPs can be allocated for Services.
pool.yaml:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "pool-primary"
spec:
  blocks:
    - start: "20.0.20.100"
      stop: "20.0.20.200"
  serviceSelector:
    matchExpressions:
      - {key: color, operator: In, values: [yellow, red, blue, green]}
```
This pool:

- Allocates IPs from the range `20.0.20.100`–`20.0.20.200`
- Only applies to Services carrying a `color` label with one of the listed values (`yellow`, `red`, `blue`, or `green`)
The serviceSelector gives us fine-grained control over which Services can consume IPs from this pool — critical for multi-tenant environments.
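For example, if another team needed its own dedicated range, you could add a second pool keyed on a different label. The pool name, range, and label below are hypothetical:

```yaml
# Hypothetical second pool: only Services labeled team=platform draw from it.
apiVersion: "cilium.io/v2"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "pool-platform"
spec:
  blocks:
    - start: "20.0.30.100"
      stop: "20.0.30.200"
  serviceSelector:
    matchLabels:
      team: platform
```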
Apply it:

```bash
kubectl apply -f pool.yaml
```
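To confirm the pool was accepted and is not conflicting with another pool, list the resource (the exact columns shown depend on your Cilium version):

```bash
# Should list pool-primary; recent Cilium versions also report whether the
# pool is disabled or conflicting and how many IPs remain available.
kubectl get ciliumloadbalancerippools
```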
Configuring BGP Service Advertisement
Next, we configure which Services should be advertised via BGP.
We create a CiliumBGPAdvertisement resource with:

- Advertisement type: `Service`
- Address type: `LoadBalancerIP`
- Label-based filtering for selective advertisement
cilium-bgp-advertisement.yaml:

```yaml
apiVersion: cilium.io/v2
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: services
spec:
  advertisements:
    - advertisementType: "Service"
      service:
        addresses:
          - LoadBalancerIP
      selector:
        matchExpressions:
          - {key: color, operator: In, values: ["blue", "red"]}
          - {key: io.kubernetes.service.namespace, operator: In, values: ["tenant-a", "tenant-b"]}
```
This policy states:
Advertise LoadBalancer IPs only for Services that are:
- Labeled `color: blue` or `color: red`
- In namespace `tenant-a` or `tenant-b`

Both conditions must hold, since `matchExpressions` entries are ANDed together.
This is powerful. We can:
- Isolate tenant traffic
- Prevent unwanted services from leaking into the fabric
- Implement namespace-based routing policies
Updating BGP Peer Configuration
We need to ensure our existing CiliumBGPPeerConfig includes the new services advertisement.
The updated configuration references both the pod-cidr and services advertisements:
cilium-bgp-peering-policies.yaml (updated):

```yaml
---
apiVersion: "cilium.io/v2"
kind: CiliumBGPPeerConfig
metadata:
  name: peer-config-generic
spec:
  families:
    - afi: ipv4
      safi: unicast
      advertisements:
        matchLabels:
          advertise: "pod-cidr"
    - afi: ipv4
      safi: unicast
      advertisements:
        matchLabels:
          advertise: "services"
```
Apply the updated policies:
```bash
kubectl apply -f cilium-bgp-peering-policies.yaml
kubectl apply -f cilium-bgp-advertisement.yaml
```
Now Cilium will advertise both:
- Pod CIDRs (from all nodes)
- LoadBalancer Service IPs (from matching services)
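A quick sanity check after re-applying: the cilium CLI shows BGP session state per node, and newer CLI releases can also list what each node is advertising (availability of the second command depends on your cilium CLI version):

```bash
# Sessions toward the ToRs should be in the "established" state.
cilium bgp peers

# On newer cilium CLI releases: per-node advertised routes.
cilium bgp routes advertised ipv4 unicast
```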
Creating Test Services
Let’s create two Services to test the configuration.
Service Blue (tenant-a)
service-blue.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-blue
  namespace: tenant-a
  labels:
    color: blue
spec:
  type: LoadBalancer
  ports:
    - port: 1234
```
Service Red (tenant-b)
service-red.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-red
  namespace: tenant-b
  labels:
    color: red
spec:
  type: LoadBalancer
  ports:
    - port: 1234
```
Create the namespaces and services:
```bash
kubectl create namespace tenant-a
kubectl create namespace tenant-b
kubectl apply -f service-blue.yaml
kubectl apply -f service-red.yaml
```
Check the allocated IPs:
```bash
kubectl get svc -A
```

```
NAMESPACE   NAME           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
tenant-a    service-blue   LoadBalancer   10.96.123.45   20.0.20.100   1234/TCP
tenant-b    service-red    LoadBalancer   10.96.234.56   20.0.20.101   1234/TCP
```
Both services immediately receive IPs from the pool-primary range ✅
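Note that these manifests only declare a LoadBalancer port: there is no selector and therefore no backend Pods yet, which is fine for testing allocation and advertisement. To actually serve traffic on port 1234 you would pair each Service with a selector, a targetPort, and a Deployment, roughly like this sketch (image and names are placeholders):

```yaml
# Hypothetical backend for service-blue. The Service would additionally need
# `selector: {color: blue}` and `targetPort: 80` on its port definition.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue-backend
  namespace: tenant-a
spec:
  replicas: 2
  selector:
    matchLabels:
      color: blue
  template:
    metadata:
      labels:
        color: blue
    spec:
      containers:
        - name: web
          image: nginx:alpine   # placeholder workload listening on port 80
          ports:
            - containerPort: 80
```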
Verifying BGP Advertisement
Now for the moment of truth — are these Service IPs actually advertised into the fabric?
Let’s check the BGP table on tor0:
```bash
docker exec -it clab-bgp-topo-tor0 vtysh -c 'show bgp ipv4'
```

```
BGP table version is 61, local router ID is 10.0.0.1, vrf id 0
Default local pref 100, local AS 65010
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

    Network          Next Hop                     Metric LocPrf Weight Path
 *> 0.0.0.0/0        net0                                            0 65000 i
 *> 10.0.0.0/32      net0                              0             0 65000 ?
 *> 10.0.0.1/32      0.0.0.0(tor0)                     0         32768 ?
 *> 10.0.0.2/32      net0                                            0 65000 65011 ?
 *> 10.0.1.0/24      0.0.0.0(tor0)                     0         32768 ?
 *> 10.0.2.0/24      0.0.0.0(tor0)                     0         32768 ?
 *> 10.0.3.0/24      net0                                            0 65000 65011 ?
 *> 10.0.4.0/24      net0                                            0 65000 65011 ?
 *>i20.0.20.100/32   10.0.1.2(kind-control-plane)
                                                             100     0 i
 *=i                 10.0.2.2(kind-worker)
                                                             100     0 i
 *                   net0                                            0 65000 65011 i
 *>i20.0.20.101/32   10.0.1.2(kind-control-plane)
                                                             100     0 i
 *=i                 10.0.2.2(kind-worker)
                                                             100     0 i
 *                   net0                                            0 65000 65011 i
 *> 172.20.20.0/24   net0                              0             0 65000 ?

Displayed 11 routes and 15 total paths
```
Perfect! 🎯
We can see:
- `20.0.20.100/32` → advertised by both `kind-control-plane` and `kind-worker` (ECMP)
- `20.0.20.101/32` → advertised by both `kind-control-plane` and `kind-worker` (ECMP)
The ToR has learned these Service IPs via iBGP from the Kubernetes nodes in rack0.
And notice the `=` status code: these routes are installed as equal-cost multipath (ECMP), meaning traffic can be load-balanced across multiple nodes.
How ECMP Works for Services
Because both nodes in rack0 advertise the same Service IP, the ToR switch sees two equal-cost paths to reach 20.0.20.100.
This is exactly what we want:
- Traffic destined for the Service can enter via any node
- Cilium’s eBPF dataplane will forward packets to the correct backend Pod
- If one node fails, traffic automatically uses the other path
This is native, fabric-level load balancing — no external load balancer required.
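Whether the ToR actually installs multiple paths is governed by its FRR configuration. Assuming the FRR setup from the previous post, the relevant knob looks roughly like this (the AS number matches the tor0 output above; recent FRR builds allow multipath by default):

```
router bgp 65010
 address-family ipv4 unicast
  ! Allow up to two equal-cost iBGP paths, one per Kubernetes node in the rack
  maximum-paths ibgp 2
 exit-address-family
```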
Testing End-to-End Connectivity
To verify everything works, let’s test connectivity from the core router (spine) to a Service IP.
From router0:
```bash
docker exec -it clab-bgp-topo-router0 ping -c 4 20.0.20.100
```

```
PING 20.0.20.100 (20.0.20.100) 56(84) bytes of data.
64 bytes from 20.0.20.100: icmp_seq=1 ttl=63 time=0.187 ms
64 bytes from 20.0.20.100: icmp_seq=2 ttl=63 time=0.134 ms
64 bytes from 20.0.20.100: icmp_seq=3 ttl=63 time=0.142 ms
64 bytes from 20.0.20.100: icmp_seq=4 ttl=63 time=0.156 ms

--- 20.0.20.100 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss
```
Success! ✅
The core router can reach the Service IP, meaning:
- BGP advertisement is working
- Routing is correct across the fabric
- Cilium is properly handling Service traffic
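To double-check that both ECMP next-hops made it from the BGP table into the ToR's actual routing table, inspect the RIB on tor0; given the BGP output above, you should see the prefix with two next-hops, one per Kubernetes node:

```bash
docker exec -it clab-bgp-topo-tor0 vtysh -c 'show ip route 20.0.20.100'
```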
What About Services in Other Namespaces?
What if we create a Service that doesn’t match the advertisement policy?
Let’s try:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-green
  namespace: tenant-c
  labels:
    color: green
spec:
  type: LoadBalancer
  ports:
    - port: 1234
```
This Service:

- Has label `color: green` (not `blue` or `red`)
- Is in namespace `tenant-c` (not `tenant-a` or `tenant-b`)
After creating the `tenant-c` namespace and applying the manifest:

```bash
kubectl get svc -n tenant-c
```

```
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
service-green   LoadBalancer   10.96.45.67   20.0.20.102   1234/TCP
```
It receives an IP from the pool (because the pool selector matches color: green), but if we check the BGP table:
```bash
docker exec -it clab-bgp-topo-tor0 vtysh -c 'show bgp ipv4' | grep 20.0.20.102
```

(no output)
The IP is not advertised ❌
This confirms our policy is working:
- LB-IPAM allocated the IP
- BGP advertisement was blocked by the selector
This separation is key for multi-tenant security.
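If tenant-c should later become reachable from the fabric, nothing changes on the Service side; we simply widen the advertisement policy's selector and re-apply it. A sketch of the updated resource:

```yaml
apiVersion: cilium.io/v2
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: services
spec:
  advertisements:
    - advertisementType: "Service"
      service:
        addresses:
          - LoadBalancerIP
      selector:
        matchExpressions:
          # green Services and the tenant-c namespace are now included
          - {key: color, operator: In, values: ["blue", "red", "green"]}
          - {key: io.kubernetes.service.namespace, operator: In, values: ["tenant-a", "tenant-b", "tenant-c"]}
```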
Key Takeaways
With Cilium’s LoadBalancer IPAM and BGP Service Advertisement, we’ve achieved:
- No external load balancers required — Cilium handles IP allocation natively
- BGP-driven Service reachability — Services are routable from anywhere in the DC fabric
- Label-based policy control — Fine-grained control over which Services are advertised
- ECMP load balancing — Multiple nodes can announce the same Service IP for redundancy
- Multi-tenancy support — Namespace and label selectors enable secure isolation
This is how LoadBalancer Services should work in modern on-premises Kubernetes.
Real-World Use Cases
This architecture is perfect for:
- Bare-metal Kubernetes — No cloud provider? No problem.
- On-prem data centers — Integrate Kubernetes directly into existing BGP fabrics
- Multi-tenant platforms — Use labels to control Service advertisement per team/namespace
- Hybrid cloud — Run consistent networking across cloud and on-prem
What’s Next?
In future posts, we could explore:
- IPv6 Service advertisement
- Anycast Services using BGP
- BGP communities for traffic engineering
- Integration with external DNS for automatic Service discovery
Final Thoughts
Cilium continues to prove itself as the most powerful CNI for production Kubernetes deployments.
The combination of:
- Native routing
- BGP control plane
- LoadBalancer IPAM
- eBPF dataplane
…makes Cilium the gold standard for on-premises and bare-metal Kubernetes networking.
If you’re running Kubernetes outside the cloud, Cilium isn’t just an option — it’s the right choice.
Happy routing 🚀