BGP-based L2VPN

Configuration

PE1#

/* If logical systems are used for the lab, the physical interface
 * encapsulation (ethernet-ccc) is configured at the global level,
 * NOT at the logical-system level.
 */
interfaces {
    ge-1/1/0 {
        encapsulation ethernet-ccc;
        unit 0 {
            description "PE1->CE1 | Physical interface";
        }    
    }
}

protocols {
    mpls {
        /* PE1->P1 */
        interface lt-0/0/10.1101;
    }
    bgp {
        group PE2 {                     
            type internal;
            local-address 11.11.11.11;
            family l2vpn {
                signaling;
            }
            neighbor 12.12.12.12;
        }
    }
    ospf {
        area 0.0.0.0 {
            /* PE1->P1 */
            interface lt-0/0/10.1101;
            /* PE1 Loopback */
            interface lo0.11;
        }
    }
    ldp {
        /* PE1->P1 */
        interface lt-0/0/10.1101;
        /* PE1 Loopback */
        interface lo0.11;
    }
}
routing-instances {
    L2VPN_1 {
        instance-type l2vpn;            
        interface ge-1/1/0.0;
        route-distinguisher 11.11.11.11:1001;
        vrf-target target:100:1001;
        protocols {
            l2vpn {
                encapsulation-type ethernet;
                interface ge-1/1/0.0;
                site CE1 {
                    site-identifier 1;
                    interface ge-1/1/0.0;
                }
            }
        }
    }
}



PE2#

/* If logical systems are used for the lab, the physical interface
 * encapsulation (ethernet-ccc) is configured at the global level,
 * NOT at the logical-system level.
 */
interfaces {
    ge-1/1/1 {
        encapsulation ethernet-ccc;
        unit 0 {
            description "PE2->CE2 | Physical interface";
        }    
    }
}


protocols {
    mpls {
        /* PE2->P3 */
        interface lt-0/0/10.123;
    }
    bgp {
        group PE1 {
            type internal;              
            local-address 12.12.12.12;
            family l2vpn {
                signaling;
            }
            neighbor 11.11.11.11;
        }
    }
    ospf {
        area 0.0.0.0 {
            /* PE2->P3 */
            interface lt-0/0/10.123;
            /* PE2 Loopback */
            interface lo0.12;
        }
    }
    ldp {
        /* PE2->P3 */
        interface lt-0/0/10.123;
        /* PE2 Loopback */
        interface lo0.12;
    }
}
routing-instances {
    L2VPN_1 {
        instance-type l2vpn;
        interface ge-1/1/1.0;           
        route-distinguisher 12.12.12.12:1001;
        vrf-target target:100:1001;
        protocols {
            l2vpn {
                encapsulation-type ethernet;
                site CE2 {
                    site-identifier 2;
                    interface ge-1/1/1.0 {
                        remote-site-id 1;
                    }
                }
            }
        }
    }
}

Verification

pe1@MX:PE1> show bgp summary          
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.l2vpn.0          
                       1          1          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
12.12.12.12             100         63         64       0       0       26:27 Establ
  bgp.l2vpn.0: 1/1/1/0
  L2VPN_1.l2vpn.0: 1/1/1/0

pe1@MX:PE1> show route receive-protocol bgp 12.12.12.12 detail 

inet.0: 24 destinations, 25 routes (24 active, 0 holddown, 0 hidden)

inet.3: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)

mpls.0: 13 destinations, 13 routes (13 active, 0 holddown, 0 hidden)

bgp.l2vpn.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
*  12.12.12.12:1001:2:1/96 (1 entry, 0 announced)
     Import Accepted
     Route Distinguisher: 12.12.12.12:1001
     Label-base: 800000, range: 2, status-vector: 0x0 
     Nexthop: 12.12.12.12
     Localpref: 100
     AS path: I
     Communities: target:100:1001 Layer2-info: encaps:ETHERNET, control flags:Control-Word, mtu: 0, site preference: 100

L2VPN_1.l2vpn.0: 2 destinations, 2 routes (2 active, 0 holddown, 0 hidden)

*  12.12.12.12:1001:2:1/96 (1 entry, 1 announced)
     Import Accepted
     Route Distinguisher: 12.12.12.12:1001
     Label-base: 800000, range: 2, status-vector: 0x0 
     Nexthop: 12.12.12.12
     Localpref: 100
     AS path: I                         
     Communities: target:100:1001 Layer2-info: encaps:ETHERNET, control flags:Control-Word, mtu: 0, site preference: 100

pe1@MX:PE1> show route table l2vpn    

L2VPN_1.l2vpn.0: 2 destinations, 2 routes (2 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

11.11.11.11:1001:1:1/96                
                   *[L2VPN/170/-101] 00:08:09, metric2 1
                      Indirect
12.12.12.12:1001:2:1/96                
                   *[BGP/170] 00:08:09, localpref 100, from 12.12.12.12
                      AS path: I
                    > to 100.1.11.1 via lt-0/0/10.1101, Push 300016

pe1@MX:PE1> show l2vpn connections | find L2VPN_1     

Instance: L2VPN_1
  Local site: CE1 (1)
    connection-site           Type  St     Time last up          # Up trans
    2                         rmt   Up     Jul 20 06:13:15 2014           1
      Remote PE: 12.12.12.12, Negotiated control-word: Yes (Null)
      Incoming label: 800001, Outgoing label: 800000
      Local interface: ge-1/1/0.0, Status: Up, Encapsulation: ETHERNET
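
The incoming and outgoing labels in the output above follow the BGP L2VPN label-block arithmetic: each PE advertises a label base (800000 in the received route shown earlier), and the label for a given sender site is derived from it. A minimal sketch of the calculation, assuming the default label-block offset of 1; PE1's own label base of 800000 is an assumption, inferred from the incoming label shown:

```python
# BGP L2VPN label-block arithmetic (sketch; the default label-block
# offset of 1 and PE1's own label base of 800000 are assumptions).

def l2vpn_label(label_base: int, sender_site_id: int, label_offset: int = 1) -> int:
    """Label the advertising PE expects for traffic from the given site:
    label = label-base + (sender-site-id - label-offset)."""
    return label_base + (sender_site_id - label_offset)

# Outgoing label on PE1: taken from PE2's advertised block (base 800000),
# indexed by PE1's own site id (1).
outgoing = l2vpn_label(label_base=800000, sender_site_id=1)

# Incoming label on PE1: taken from PE1's own block (assumed base 800000),
# indexed by PE2's site id (2).
incoming = l2vpn_label(label_base=800000, sender_site_id=2)

print(outgoing, incoming)  # 800000 800001
```

The results match the `show l2vpn connections` output above (Incoming label: 800001, Outgoing label: 800000).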

LDP-based L2CKT

Configuration

PE1#

/* If logical systems are used for the lab, the physical interface
 * encapsulation (ethernet-ccc) is configured at the global level,
 * NOT at the logical-system level.
 */
interfaces {
    ge-1/1/0 {
        encapsulation ethernet-ccc;
        unit 0 {
            description "PE1->CE1 | Physical interface";
        }    
    }
}

protocols {
    mpls {
        /* PE1->P1 */
        interface lt-0/0/10.1101;
    }
    ospf {
        area 0.0.0.0 {                  
            /* PE1 Loopback */
            interface lo0.11;
            /* PE1->P1 */
            interface lt-0/0/10.1101;
        }
    }
    ldp {
        /* PE1->P1 */
        interface lt-0/0/10.1101;
        /* PE1 Loopback */
        interface lo0.11;
    }
    l2circuit {
        neighbor 12.12.12.12 {
            interface ge-1/1/0.0 {
                virtual-circuit-id 1;
                no-control-word;
                ignore-mtu-mismatch;
            }
        }
    }
}



PE2#

interfaces {
    ge-1/1/1 {
        encapsulation ethernet-ccc;
        unit 0 {
            description "PE2->CE2 | Physical interface";
        }    
    }
}


protocols {
    mpls {
        /* PE2->P3 */
        interface lt-0/0/10.123;
    }
    ospf {
        area 0.0.0.0 {
            /* PE2->P3 */               
            interface lt-0/0/10.123;
            /* PE2 Loopback */
            interface lo0.12;
        }
    }
    ldp {
        /* PE2->P3 */
        interface lt-0/0/10.123;
        /* PE2 Loopback */
        interface lo0.12;
    }
    l2circuit {
        neighbor 11.11.11.11 {
            interface ge-1/1/1.0 {
                virtual-circuit-id 1;
                no-control-word;
                ignore-mtu-mismatch;
            }
        }
    }
}

Verification

Confirm that LDP sessions are up not only between directly connected routers (PE1–P1), but also over the targeted LDP session between the remote peers PE1 and PE2.

pe1@MX:PE1> show ldp neighbor    
Address            Interface          Label space ID         Hold time
12.12.12.12        lo0.11             12.12.12.12:0            42
100.1.11.1         lt-0/0/10.1101     1.1.1.1:0                14


pe1@MX:PE1> show ldp database 
Input label database, 11.11.11.11:0--1.1.1.1:0
  Label     Prefix
      3     1.1.1.1/32
 299776     2.2.2.2/32
 299792     3.3.3.3/32
 299840     4.4.4.4/32
 299808     5.5.5.5/32
 299824     6.6.6.6/32
 299952     11.11.11.11/32
 299968     12.12.12.12/32

Output label database, 11.11.11.11:0--1.1.1.1:0
  Label     Prefix
 300112     1.1.1.1/32
 300128     2.2.2.2/32
 300144     3.3.3.3/32
 300192     4.4.4.4/32
 300160     5.5.5.5/32
 300176     6.6.6.6/32
      3     11.11.11.11/32
 300224     12.12.12.12/32

Input label database, 11.11.11.11:0--12.12.12.12:0
  Label     Prefix
 300160     1.1.1.1/32
 300144     2.2.2.2/32
 300128     3.3.3.3/32                  
 300208     4.4.4.4/32
 300176     5.5.5.5/32
 300192     6.6.6.6/32
 300224     11.11.11.11/32
      3     12.12.12.12/32
 300112     L2CKT NoCtrlWord ETHERNET VC 1

Output label database, 11.11.11.11:0--12.12.12.12:0
  Label     Prefix
 300112     1.1.1.1/32
 300128     2.2.2.2/32
 300144     3.3.3.3/32
 300192     4.4.4.4/32
 300160     5.5.5.5/32
 300176     6.6.6.6/32
      3     11.11.11.11/32
 300224     12.12.12.12/32
 300208     L2CKT NoCtrlWord ETHERNET VC 1

Confirm that the L2CKT is up for the point-to-point connection between PE1 and PE2. If the connection is not up, check the MTU, encapsulation, and virtual-circuit ID on both ends.

pe1@MX:PE1> show l2circuit connections | find Neighbor       
Neighbor: 12.12.12.12 
    Interface                 Type  St     Time last up          # Up trans
    ge-1/1/0.0(vc 1)          rmt   Up     Jul 20 05:27:32 2014           1
      Remote PE: 12.12.12.12, Negotiated control-word: No
      Incoming label: 300208, Outgoing label: 300112
      Negotiated PW status TLV: No
      Local interface: ge-1/1/0.0, Status: Up, Encapsulation: ETHERNET
      

IPSec tunnel between 2 Cisco IOS routers

In this lab, we are going to configure a simple IPSec tunnel between two Cisco IOS routers, and run OSPF over the tunnel.

Below are the parameters for the IPSec tunnel, the same as those used in the IPSec lab between two SRX firewalls.

Phase 1:

  • Authentication method: Pre-shared Key
  • dh-group: group2
  • Authentication algorithm: md5
  • encryption algorithm: 3des-cbc
  • lifetime: 86400

Phase 2:

  • ESP protocol
  • Authentication algorithm: hmac-md5-96
  • Encryption algorithm: 3des-cbc
  • Lifetime: 3600

All internal traffic (in this lab, traffic between the two loopback addresses) is allowed through the tunnel.

Topology

IPSec Tunnel IOS routers

Configuration

Below is the config on one router (R1); the config on the other router (R2) is similar.

IKE / ISAKMP Phase 1 config

crypto isakmp policy 1
 encr 3des
 hash md5
 authentication pre-share
 group 2
! The wildcard address 0.0.0.0 0.0.0.0 applies this pre-shared key to any peer.
crypto isakmp key cisco address 0.0.0.0 0.0.0.0
crypto isakmp keepalive 10

IPSec Phase 2 Config

crypto ipsec transform-set R1-R2-TSET esp-3des esp-md5-hmac 
!
crypto ipsec profile R1-R2-PROFILE
 set transform-set R1-R2-TSET 
!
interface Tunnel0
 description "IPSec tunnel interface"
 ip address 10.10.1.1 255.255.255.252
 tunnel source 123.1.1.2
 tunnel destination 123.1.2.2
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile R1-R2-PROFILE

Routing config (for demonstration/verification purposes).

interface FastEthernet0/0
 description "Internet facing interface"
 ip address 123.1.1.2 255.255.255.252
!
interface Loopback0
 description "Internal facing interface"
 ip address 10.10.100.1 255.255.255.0
!
! Run OSPF via IPSec tunnel
router ospf 1
 log-adjacency-changes
 network 10.10.1.1 0.0.0.0 area 0
 network 10.10.100.1 0.0.0.0 area 0
!

ip route 0.0.0.0 0.0.0.0 123.1.1.1 name Default-to-Internet

Verification

R1#show interfaces tunnel 0
Tunnel0 is up, line protocol is up 
  Hardware is Tunnel
  Description: "IPSec tunnel interface"
  Internet address is 10.10.1.1/30
  MTU 1514 bytes, BW 9 Kbit, DLY 500000 usec, 
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation TUNNEL, loopback not set
  Keepalive not set
  Tunnel source 123.1.1.2, destination 123.1.2.2
  Tunnel protocol/transport IPSEC/IP
  Tunnel TTL 255
  Fast tunneling enabled
  Tunnel transmit bandwidth 8000 (kbps)
  Tunnel receive bandwidth 8000 (kbps)
  Tunnel protection via IPSec (profile "R1-R2-PROFILE")
  Last input never, output never, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/0 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     434 packets input, 37004 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     444 packets output, 37470 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out

R1#show crypto session detail 
Crypto session current status

Code: C - IKE Configuration mode, D - Dead Peer Detection     
K - Keepalives, N - NAT-traversal, X - IKE Extended Authentication

Interface: Tunnel0
Session status: UP-ACTIVE     
Peer: 123.1.2.2 port 500 fvrf: (none) ivrf: (none)
      Phase1_id: 123.1.2.2
      Desc: (none)
  IKE SA: local 123.1.1.2/500 remote 123.1.2.2/500 Active 
          Capabilities:D connid:3 lifetime:23:44:43
  IPSEC FLOW: permit ip 0.0.0.0/0.0.0.0 0.0.0.0/0.0.0.0 
        Active SAs: 2, origin: crypto map
        Inbound:  #pkts dec'ed 96 drop 0 life (KB/Sec) 4386664/2741
        Outbound: #pkts enc'ed 96 drop 0 life (KB/Sec) 4386664/2741

R1#show crypto isakmp ?
  key      Show ISAKMP preshared keys
  peers    Show ISAKMP peer structures
  policy   Show ISAKMP protection suite policy
  profile  Show ISAKMP profiles
  sa       Show ISAKMP Security Associations

R1#show crypto isakmp sa detail
Codes: C - IKE configuration mode, D - Dead Peer Detection
       K - Keepalives, N - NAT-traversal
       X - IKE Extended Authentication
       psk - Preshared key, rsig - RSA signature
       renc - RSA encryption

C-id  Local           Remote          I-VRF    Status Encr Hash Auth DH Lifetime Cap.
3     123.1.1.2       123.1.2.2                ACTIVE 3des md5  psk  2  23:29:57 D   
       Connection-id:Engine-id =  3:1(software)

R1#show crypto isakmp policy  

Global IKE policy
Protection suite of priority 1
        encryption algorithm:   Three key triple DES
        hash algorithm:         Message Digest 5
        authentication method:  Pre-Shared Key
        Diffie-Hellman group:   #2 (1024 bit)
        lifetime:               86400 seconds, no volume limit
Default protection suite
        encryption algorithm:   DES - Data Encryption Standard (56 bit keys).
        hash algorithm:         Secure Hash Standard
        authentication method:  Rivest-Shamir-Adleman Signature
        Diffie-Hellman group:   #1 (768 bit)
        lifetime:               86400 seconds, no volume limit

R1#sh crypto ipsec ?      
  client                Show Client Status
  policy                Show IPSEC client policies
  profile               Show ipsec profile information
  sa                    IPSEC SA table
  security-association  Show parameters for IPSec security associations
  transform-set         Crypto transform sets

R1#sh crypto ipsec sa

interface: Tunnel0
    Crypto map tag: Tunnel0-head-0, local addr 123.1.1.2

   protected vrf: (none)
   local  ident (addr/mask/prot/port): (0.0.0.0/0.0.0.0/0/0)
   remote ident (addr/mask/prot/port): (0.0.0.0/0.0.0.0/0/0)
   current_peer 123.1.2.2 port 500
     PERMIT, flags={origin_is_acl,}
    #pkts encaps: 114, #pkts encrypt: 114, #pkts digest: 114
    #pkts decaps: 115, #pkts decrypt: 115, #pkts verify: 115
    #pkts compressed: 0, #pkts decompressed: 0
    #pkts not compressed: 0, #pkts compr. failed: 0
    #pkts not decompressed: 0, #pkts decompress failed: 0
    #send errors 0, #recv errors 0

     local crypto endpt.: 123.1.1.2, remote crypto endpt.: 123.1.2.2
     path mtu 1500, ip mtu 1500, ip mtu idb FastEthernet0/0
     current outbound spi: 0x5D85C9FC(1569049084)

     inbound esp sas:
      spi: 0x43B60039(1136001081)
        transform: esp-3des esp-md5-hmac ,
        in use settings ={Tunnel, }
        conn id: 2001, flow_id: SW:1, crypto map: Tunnel0-head-0
        sa timing: remaining key lifetime (k/sec): (4386661/2562)
        IV size: 8 bytes
        replay detection support: Y
        Status: ACTIVE

     inbound ah sas:

     inbound pcp sas:

     outbound esp sas:
      spi: 0x5D85C9FC(1569049084)
        transform: esp-3des esp-md5-hmac ,
        in use settings ={Tunnel, }
        conn id: 2002, flow_id: SW:2, crypto map: Tunnel0-head-0
        sa timing: remaining key lifetime (k/sec): (4386661/2560)
        IV size: 8 bytes
        replay detection support: Y
        Status: ACTIVE

     outbound ah sas:

     outbound pcp sas:

Debugging commands:

debug crypto isakmp
debug crypto ipsec

Difference between traffic-engineering options: bgp-igp vs mpls-forwarding

In the previous post on tunnelling LDP over RSVP, we briefly discussed the traffic-engineering bgp-igp option, which we turned on on PE1 so that the traceroute to the PE2 loopback would follow the LSP, for verification/demonstration purposes.

Today, we go into more detail on two options that are quite similar in that regard.

traffic-engineering bgp-igp

Installs LSPs as the best routes in the inet.0 routing table, as well as in the forwarding table.

traffic-engineering mpls-forwarding

Installs LSPs in the inet.0 table for forwarding only. They can be used for next-hop lookups, but for route selection they are less preferred than the IGP routes.

As far as tracing from PE1 to the PE2 loopback is concerned, the two options do the same job: the LSP is used to forward traffic.

lab@PE1# show protocols mpls 
traffic-engineering bgp-igp;

lab@PE1# run show route 192.168.1.2 

inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.1.2/32     *[LDP/9] 00:02:28, metric 1
                    > to 172.22.210.2 via em1.210, Push 300240
                    [OSPF/10] 00:02:30, metric 4
                    > to 172.22.210.2 via em1.210

lab@PE1# run traceroute 192.168.1.2 
traceroute to 192.168.1.2 (192.168.1.2), 30 hops max, 40 byte packets
 1  172.22.210.2 (172.22.210.2)  0.461 ms  0.388 ms  0.243 ms
     MPLS Label=300240 CoS=0 TTL=1 S=1
 2  172.22.201.2 (172.22.201.2)  0.498 ms  0.459 ms  0.355 ms
     MPLS Label=300528 CoS=0 TTL=1 S=0
     MPLS Label=300240 CoS=0 TTL=1 S=1
 3  172.22.206.2 (172.22.206.2)  0.634 ms  0.570 ms  0.602 ms
     MPLS Label=300240 CoS=0 TTL=1 S=1
 4  192.168.1.2 (192.168.1.2)  0.876 ms  0.991 ms  0.861 ms

[edit]
lab@PE1# set protocols mpls traffic-engineering mpls-forwarding 
[edit]
lab@PE1# commit 
commit complete

[edit]
lab@PE1# show protocols mpls 
traffic-engineering mpls-forwarding;

lab@PE1# run show route 192.168.1.2                                

inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
@ = Routing Use Only, # = Forwarding Use Only
+ = Active Route, - = Last Active, * = Both

192.168.1.2/32     @[OSPF/10] 00:04:19, metric 4
                    > to 172.22.210.2 via em1.210
                   #[LDP/9] 00:00:28, metric 1
                    > to 172.22.210.2 via em1.210, Push 300240

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.1.2/32     *[LDP/9] 00:00:28, metric 1
                    > to 172.22.210.2 via em1.210, Push 300240

lab@PE1# run traceroute 192.168.1.2                                
traceroute to 192.168.1.2 (192.168.1.2), 30 hops max, 40 byte packets
 1  172.22.210.2 (172.22.210.2)  0.905 ms  0.312 ms  0.272 ms
     MPLS Label=300240 CoS=0 TTL=1 S=1
 2  172.22.201.2 (172.22.201.2)  0.417 ms  0.401 ms  0.414 ms
     MPLS Label=300528 CoS=0 TTL=1 S=0
     MPLS Label=300240 CoS=0 TTL=1 S=1
 3  172.22.206.2 (172.22.206.2)  0.572 ms  0.683 ms  0.899 ms
     MPLS Label=300240 CoS=0 TTL=1 S=1
 4  192.168.1.2 (192.168.1.2)  0.943 ms  1.180 ms  0.824 ms

As we can see, the traceroutes look the same: there is no difference in the packet-forwarding decision.

However, the difference lies in the route-selection process, i.e. in the control plane. The traffic-engineering bgp-igp option can change the protocol associated with the best route, and so can change the routing outcome (e.g. with a routing policy that matches on the source protocol). The traffic-engineering mpls-forwarding option, on the other hand, does not change the routing behaviour.
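
This control-plane difference can be modelled with a toy sketch (illustration only, not Junos internals; preference values as in Junos: LDP 9, OSPF 10). With bgp-igp the LDP route is fully installed in inet.0 and wins active-route selection; with mpls-forwarding it is marked forwarding-only, so OSPF stays active for routing while LDP still supplies the forwarding next hop:

```python
# Toy model of Junos active-route vs forwarding-next-hop selection
# under the two traffic-engineering options (illustration only).

def active_and_forwarding(routes):
    """routes: list of (protocol, preference, forwarding_only).
    Returns (active-route protocol, forwarding protocol)."""
    # Active route: best (lowest) preference among routes eligible
    # for route selection, i.e. those not marked forwarding-only.
    routing = [r for r in routes if not r[2]]
    active = min(routing, key=lambda r: r[1])[0]
    # Forwarding next hop: best preference among all routes.
    forwarding = min(routes, key=lambda r: r[1])[0]
    return active, forwarding

# traffic-engineering bgp-igp: LDP route fully installed in inet.0.
print(active_and_forwarding([("LDP", 9, False), ("OSPF", 10, False)]))
# -> ('LDP', 'LDP')

# traffic-engineering mpls-forwarding: LDP route is forwarding-only
# (the '#' flag in the show route output).
print(active_and_forwarding([("LDP", 9, True), ("OSPF", 10, False)]))
# -> ('OSPF', 'LDP')
```

In the second case the OSPF route stays active, which is why the export policy matching `protocol ospf` still fires.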

To demonstrate this behaviour, we create a policy that exports the OSPF route into eBGP towards CE1.

[edit protocols bgp]
lab@PE1# show 
# delete advertise-inactive 
group my-ext-group {
    type external;
    export OSPF-to-BGP;
    peer-as 65101;
    neighbor 10.0.10.2;
}

lab@PE1# top show policy-options     
policy-statement OSPF-to-BGP {
    term 1 {
        from {
            protocol ospf;
            route-filter 192.168.1.2/32 exact;
        }
        then accept;
    }
}

In the example below, where the mpls-forwarding option is used, the route 192.168.1.2/32 is advertised from OSPF into BGP.

[edit protocols mpls]
lab@PE1# show 
traffic-engineering mpls-forwarding;

[edit protocols mpls]

lab@PE1# run show route advertising-protocol bgp 10.0.10.2    

inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 10.0.11.0/24            Self                                    65102 I
@ 192.168.1.2/32          Self                 4                  I
* 192.168.11.2/32         Self                                    65102 I

But with the configuration below, the same route is not advertised from OSPF into BGP:

lab@PE1# show                               
traffic-engineering bgp-igp;

lab@PE1# run show route 192.168.1.2    

inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.1.2/32     *[LDP/9] 00:01:02, metric 1
                    > to 172.22.210.2 via em1.210, Push 300240
                    [OSPF/10] 00:27:55, metric 4
                    > to 172.22.210.2 via em1.210

[edit protocols mpls]
lab@PE1# run show route advertising-protocol bgp 10.0.10.2    

inet.0: 29 destinations, 33 routes (29 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 10.0.11.0/24            Self                                    65102 I
* 192.168.11.2/32         Self                                    65102 I

The route 192.168.1.2/32 is not advertised to CE1, because it is no longer an active OSPF route.

Reference

http://www.juniper.net/techpubs/en_US/junos9.5/information-products/topic-collections/config-guide-mpls-applications/mpls-configuring-traffic-engineering-for-lsps.html

When you configure an LSP, a host route (a 32-bit mask) is installed in the ingress router toward the egress router; the address of the host route is the destination address of the LSP. Typically, you configure the BGP option (traffic-engineering bgp), allowing only BGP to use LSPs in its route calculations. The other traffic-engineering statement options allow you to alter this behavior in the master instance. This functionality is not available for specific routing instances. Also, you can enable only one of the traffic-engineering statement options (bgp, bgp-igp, bgp-igp-both-ribs, or mpls-forwarding) at a time.

Using RSVP and LDP Routes for Traffic Forwarding

Configure the bgp-igp option of the traffic-engineering statement to cause BGP and the interior gateway protocols (IGPs) to use LSPs for forwarding traffic destined for egress routers. The bgp-igp option causes all inet.3 routes to be moved to the inet.0 routing table.

On the ingress router, include the traffic-engineering bgp-igp statement:

traffic-engineering bgp-igp;

Using RSVP and LDP Routes for Forwarding But Not Route Selection

If you configure the traffic-engineering bgp-igp statement or the traffic-engineering bgp-igp-both-ribs statement, high-priority RSVP and LDP routes can supersede IGP routes in the inet.0 routing table. IGP routes might no longer be redistributed since they are no longer the active routes.

When you configure the mpls-forwarding option at either the [edit logical-systems logical-system-name protocols mpls traffic-engineering] hierarchy level or the [edit protocols mpls traffic-engineering] hierarchy level, RSVP and LDP routes are used for forwarding but are excluded from route selection. These routes are added to both the inet.0 and inet.3 routing tables. RSVP and LDP routes in the inet.0 routing table are given a low preference when the active route is selected. However, RSVP and LDP routes in the inet.3 routing table are given a normal preference and are therefore used for selecting forwarding next hops.

When you activate the mpls-forwarding option, routes whose state is ForwardingOnly are preferred for forwarding even if their preference is lower than that of the currently active route. To examine the state of a route, execute a show route detail command.

To configure, include the traffic-engineering mpls-forwarding statement:

traffic-engineering mpls-forwarding;

Junos routing tables

The full list of routing tables in Junos is available at the following link:

http://www.juniper.net/techpubs/software/junos/junos73/swcmdref73-protocols/html/protocols-monitor-generic10.html

Table Purpose
bgp.isovpn.0 Border Gateway Protocol (BGP) reachability information for ISO virtual private networks (VPNs).
bgp.l2vpn.0 BGP Layer 2 VPN routes.
bgp.l3vpn.0 BGP Layer 3 VPN routes.
bgp.rtarget.0 BGP route target information.
inet.0 Internet Protocol version 4 (IPv4) unicast routes (the main IP routing table).
inet.1 IP multicast routes. Each (S,G) pair in the network is placed into this table.
inet.2 IPv4 unicast routes. Used by IP multicast-enabled routing protocols to perform Reverse Path Forwarding (RPF).
inet.3 Accessed by BGP to use Multiprotocol Label Switching (MPLS) paths for forwarding traffic (e.g. when using MPLS with traffic-engineering)
inet.4 Routes learned by the Multicast Source Discovery Protocol (MSDP).
inet6.0 Internet Protocol version 6 (IPv6) unicast routes.
inet6.3 Populated when the resolve-vpn statement is enabled.
inetflow.0 Border Gateway Protocol (BGP) flow destination (firewall match criteria) information.
invpnflow.0 BGP flow destination (firewall match criteria) information within an RFC 2547 Layer 3 VPN.
iso.0 Intermediate System-to-Intermediate System (IS-IS) and End System-to-Intermediate System (ES-IS) routes.
l2circuit.0 Layer 2 circuit routes.
mpls.0 MPLS LSPs. Contains a list of the next LSR in each LSP. Used by transit routers to route packets to the next router along an LSP.
instance-name.inet.0 Table that JUNOS software creates each time you configure an IPv4 unicast routing instance.
instance-name.inet.3 Table that JUNOS software creates for each BGP instance that is configured to use MPLS paths for forwarding traffic.
instance-name.inet6.0 Table that JUNOS software creates each time you configure an IPv6 unicast routing instance.
instance-name.inetflow.0 Table that JUNOS software creates each time you configure a routing instance. This table stores dynamic filtering information for BGP.
instance-name.iso.0 Table that JUNOS software creates each time you configure an IS-IS or ES-IS instance.
instance-name.mpls.0 Table that JUNOS software creates each time you configure MPLS LSPs.

Configure NTP

Task

Configure the router to synchronize the system clock with NTP servers.

Configuration

The following config provides basic NTP functionality:

lab@R1> show configuration system 
time-zone Australia/Sydney;

ntp {
    server 192.43.244.18;
    server 203.163.124.161;
    server 192.168.0.4;
}

Verification

lab@R1> show ntp associations    
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.43.244.18   .ACTS.           1 -    8  128    7  197.467  -73.817  11.645
+203.163.124.161 135.160.20.242   4 -   60   64    1  130.891  -68.339   2.325
+192.168.0.4     192.43.244.18    2 -   59   64    1    5.918  -68.684   0.752

lab@R1> show ntp status 
status=06a4 leap_none, sync_ntp, 10 events, event_peer/strat_chg,
version="ntpd 4.2.0-a Sat Jul 23 09:16:47 UTC 2011 (1)",
processor="i386", system="JUNOS10.4R6.5", leap=00, stratum=2,
precision=-21, rootdelay=197.467, rootdispersion=2.550, peer=15876,
refid=192.43.244.18,
reftime=d3d8b5ef.50c5d6ec  Fri, Aug 17 2012 22:22:07.315, poll=6,
clock=d3d8b614.1d541be5  Fri, Aug 17 2012 22:22:44.114, state=3,
offset=0.000, frequency=0.000, jitter=11.388, stability=0.000

lab@R1> show system uptime 
Current time: 2012-08-17 22:23:07 EST
System booted: 2012-08-17 20:32:17 EST (01:50:50 ago)
Protocols started: 2012-08-17 20:33:15 EST (01:49:52 ago)
Last configured: 2012-08-17 22:20:01 EST (00:03:06 ago) by lab
10:23PM  up 1:51, 3 users, load averages: 0.00, 0.00, 0.00

Note

Note that NTP takes time to synchronize with the server. The following is quoted from the JNCIP Study Guide.

Assuming that you have set the local router’s clock accurately (and quickly), the two clocks should be within the limits needed for NTP synchronization. However, since the NTP protocol requires several successful packet exchanges before allowing synchronization, you will have to wait approximately five minutes to determine your relative success in this matter. Because NTP slowly steps a system’s clock into synchronization, it may take a seemingly inordinate amount of time to get the proper NTP synchronization on all of your routers. You can tell when things are working correctly when you see a display containing an asterisk in the left margin, as shown above.

NTP operation is confusing to many exam candidates, and the delays associated with normal NTP operation have been known to cause some candidates to assume that they have made a mis- take when things do not work as expected right away. When all else fails, remember that NTP works slowly, and that the system clocks have to be within 128 seconds of each other to get things synchronizing. Also, keep in mind that time zone settings will affect your local clock, and remember that non-zero values in the offset and delay fields of the show ntp associations command indicate successful communication and, when in use, authentication between your router and the NTP server. As a final tip, when all else has failed, you may want to try deacti- vating and reactivating the NTP configuration stanza to ensure that recent changes are in fact being put into effect after you commit them.