In the previous post on tunnelling LDP over RSVP, we briefly discussed the option traffic-engineering bgp-igp, which we enabled on PE1 so that the traceroute to the PE2 loopback would follow the LSP, for verification/demonstration purposes.
Today we look in more detail at two options that are quite similar in that regard.
traffic-engineering bgp-igp
Installs the LSP route as the best route in the inet.0 table, as well as in the forwarding table.
traffic-engineering mpls-forwarding
Installs the LSP route in the inet.0 table for forwarding only, so it can be used for next-hop lookups. However, in inet.0 it is treated as less preferred than the IGP route when the active route is selected.
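The difference between the two options can be sketched as a toy model (hypothetical Python, not Junos internals): both modes install the LDP route in inet.0, but mpls-forwarding marks it forwarding-only, so the IGP route stays active for routing while the LSP still wins for forwarding.

```python
# Toy model of Junos active-route selection under the two options.
# Not actual Junos code -- just an illustration of the preference logic.

def select_routes(mode):
    # Candidate routes for 192.168.1.2/32 in inet.0 (lower preference wins).
    ldp = {"proto": "LDP", "pref": 9}
    ospf = {"proto": "OSPF", "pref": 10}

    if mode == "bgp-igp":
        # The LSP route competes normally: LDP/9 beats OSPF/10
        # for both routing and forwarding.
        active = min([ldp, ospf], key=lambda r: r["pref"])
        return active["proto"], active["proto"]  # (routing, forwarding)
    elif mode == "mpls-forwarding":
        # The LSP route is forwarding-only (#): excluded from route
        # selection, but still preferred when resolving next hops.
        return ospf["proto"], ldp["proto"]       # (routing, forwarding)
    raise ValueError("unknown mode")

print(select_routes("bgp-igp"))          # ('LDP', 'LDP')
print(select_routes("mpls-forwarding"))  # ('OSPF', 'LDP')
```

In both modes the forwarding decision resolves to the LDP label-switched path, which is why the traceroutes below look identical.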
As far as tracing from PE1 to the PE2 loopback is concerned, the two commands do the same job: the LSP is used to forward the traffic.
lab@PE1# show protocols mpls
traffic-engineering bgp-igp;
lab@PE1# run show route 192.168.1.2
inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
192.168.1.2/32 *[LDP/9] 00:02:28, metric 1
> to 172.22.210.2 via em1.210, Push 300240
[OSPF/10] 00:02:30, metric 4
> to 172.22.210.2 via em1.210
lab@PE1# run traceroute 192.168.1.2
traceroute to 192.168.1.2 (192.168.1.2), 30 hops max, 40 byte packets
1 172.22.210.2 (172.22.210.2) 0.461 ms 0.388 ms 0.243 ms
MPLS Label=300240 CoS=0 TTL=1 S=1
2 172.22.201.2 (172.22.201.2) 0.498 ms 0.459 ms 0.355 ms
MPLS Label=300528 CoS=0 TTL=1 S=0
MPLS Label=300240 CoS=0 TTL=1 S=1
3 172.22.206.2 (172.22.206.2) 0.634 ms 0.570 ms 0.602 ms
MPLS Label=300240 CoS=0 TTL=1 S=1
4 192.168.1.2 (192.168.1.2) 0.876 ms 0.991 ms 0.861 ms
[edit]
lab@PE1# set protocols mpls traffic-engineering mpls-forwarding
[edit]
lab@PE1# commit
commit complete
[edit]
lab@PE1# show protocols mpls
traffic-engineering mpls-forwarding;
lab@PE1# run show route 192.168.1.2
inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
@ = Routing Use Only, # = Forwarding Use Only
+ = Active Route, - = Last Active, * = Both
192.168.1.2/32 @[OSPF/10] 00:04:19, metric 4
> to 172.22.210.2 via em1.210
#[LDP/9] 00:00:28, metric 1
> to 172.22.210.2 via em1.210, Push 300240
inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
192.168.1.2/32 *[LDP/9] 00:00:28, metric 1
> to 172.22.210.2 via em1.210, Push 300240
lab@PE1# run traceroute 192.168.1.2
traceroute to 192.168.1.2 (192.168.1.2), 30 hops max, 40 byte packets
1 172.22.210.2 (172.22.210.2) 0.905 ms 0.312 ms 0.272 ms
MPLS Label=300240 CoS=0 TTL=1 S=1
2 172.22.201.2 (172.22.201.2) 0.417 ms 0.401 ms 0.414 ms
MPLS Label=300528 CoS=0 TTL=1 S=0
MPLS Label=300240 CoS=0 TTL=1 S=1
3 172.22.206.2 (172.22.206.2) 0.572 ms 0.683 ms 0.899 ms
MPLS Label=300240 CoS=0 TTL=1 S=1
4 192.168.1.2 (192.168.1.2) 0.943 ms 1.180 ms 0.824 ms
As we can see, the traceroutes look identical! There is no difference in the packet forwarding decision.
The difference lies in the route selection process, i.e. in the control plane. The traffic-engineering bgp-igp option may change which protocol owns the best route, and may thereby change the routing outcome (e.g. with a routing policy that matches on the source protocol). traffic-engineering mpls-forwarding, on the other hand, does not change the routing behaviour.
To demonstrate this behaviour, we create a policy that exports the OSPF route into eBGP towards CE1.
[edit protocols bgp]
lab@PE1# show
# delete advertise-inactive
group my-ext-group {
type external;
export OSPF-to-BGP;
peer-as 65101;
neighbor 10.0.10.2;
}
lab@PE1# top show policy-options
policy-statement OSPF-to-BGP {
term 1 {
from {
protocol ospf;
route-filter 192.168.1.2/32 exact;
}
then accept;
}
}
In the example below, with the mpls-forwarding option in place, the route 192.168.1.2/32 is advertised from OSPF into BGP.
[edit protocols mpls]
lab@PE1# show
traffic-engineering mpls-forwarding;
[edit protocols mpls]
lab@PE1# run show route advertising-protocol bgp 10.0.10.2
inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 10.0.11.0/24 Self 65102 I
@ 192.168.1.2/32 Self 4 I
* 192.168.11.2/32 Self 65102 I
But with the configuration below, the same route is not advertised from OSPF into BGP:
lab@PE1# show
traffic-engineering bgp-igp;
lab@PE1# run show route 192.168.1.2
inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
192.168.1.2/32 *[LDP/9] 00:01:02, metric 1
> to 172.22.210.2 via em1.210, Push 300240
[OSPF/10] 00:27:55, metric 4
> to 172.22.210.2 via em1.210
[edit protocols mpls]
lab@PE1# run show route advertising-protocol bgp 10.0.10.2
inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 10.0.11.0/24 Self 65102 I
* 192.168.11.2/32 Self 65102 I
The route 192.168.1.2/32 is not advertised to CE1, because it is no longer the active OSPF route.
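Putting the two behaviours together, the export outcome can be sketched as a toy model (hypothetical Python, not Junos code): the export policy evaluates the active route for each prefix, so whichever protocol wins the inet.0 selection decides whether the prefix is advertised.

```python
# Toy model (not Junos code) of why the export policy outcome flips.
# The OSPF-to-BGP policy matches only routes whose source protocol is
# OSPF, and export policies evaluate the *active* route for a prefix.

def active_protocol(mode):
    # Under bgp-igp, the LDP/9 route wins the inet.0 selection.
    # Under mpls-forwarding, the LDP route is forwarding-only,
    # so OSPF/10 remains the active route for routing purposes.
    return "LDP" if mode == "bgp-igp" else "OSPF"

def advertised_to_ce1(mode):
    # Policy term 1: from protocol ospf -> accept
    # (the route-filter 192.168.1.2/32 exact is assumed to match).
    return active_protocol(mode) == "OSPF"

print(advertised_to_ce1("mpls-forwarding"))  # True  -> prefix exported
print(advertised_to_ce1("bgp-igp"))          # False -> prefix withheld
```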
Reference
http://www.juniper.net/techpubs/en_US/junos9.5/information-products/topic-collections/config-guide-mpls-applications/mpls-configuring-traffic-engineering-for-lsps.html
When you configure an LSP, a host route (a 32-bit mask) is installed in the ingress router toward the egress router; the address of the host route is the destination address of the LSP. Typically, you configure the BGP option (traffic-engineering bgp), allowing only BGP to use LSPs in its route calculations. The other traffic-engineering statement options allow you to alter this behavior in the master instance. This functionality is not available for specific routing instances. Also, you can enable only one of the traffic-engineering statement options (bgp, bgp-igp, bgp-igp-both-ribs, or mpls-forwarding) at a time.
Using RSVP and LDP Routes for Traffic Forwarding
Configure the bgp-igp option of the traffic-engineering statement to cause BGP and the interior gateway protocols (IGPs) to use LSPs for forwarding traffic destined for egress routers. The bgp-igp option causes all inet.3 routes to be moved to the inet.0 routing table.
On the ingress router, include the traffic-engineering bgp-igp statement:
traffic-engineering bgp-igp;
Using RSVP and LDP Routes for Forwarding But Not Route Selection
If you configure the traffic-engineering bgp-igp statement or the traffic-engineering bgp-igp-both-ribs statement, high-priority RSVP and LDP routes can supersede IGP routes in the inet.0 routing table. IGP routes might no longer be redistributed since they are no longer the active routes.
When you configure the mpls-forwarding option at either the [edit logical-systems logical-system-name protocols mpls traffic-engineering] hierarchy level or the [edit protocols mpls traffic-engineering] hierarchy level, RSVP and LDP routes are used for forwarding but are excluded from route selection. These routes are added to both the inet.0 and inet.3 routing tables. RSVP and LDP routes in the inet.0 routing table are given a low preference when the active route is selected. However, RSVP and LDP routes in the inet.3 routing table are given a normal preference and are therefore used for selecting forwarding next hops.
When you activate the mpls-forwarding option, routes whose state is ForwardingOnly are preferred for forwarding even if their preference is lower than that of the currently active route. To examine the state of a route, execute a show route detail command.
To configure, include the traffic-engineering mpls-forwarding statement:
traffic-engineering mpls-forwarding;