Rakesh,
This looks fantastic. I'm going to drop some of this configuration into my lab once I get home this afternoon and see how it goes.
Thanks
Mike
You didn't specify the number of VPNv4 routes, but generally speaking more RAM does give you more room for routes. If both boxes are configured identically and have the same number of total routes (v4, v6, VPN), you may want to open a JTAC ticket and see if they can help you determine why one box is running hotter than the other.
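In case it helps while gathering data for the ticket, a rough comparison between the two boxes can be made with a few standard commands (a generic sketch, nothing platform-specific assumed):

show route summary
show chassis routing-engine
show task memory detail

The first shows per-table route counts (inet.0, bgp.l3vpn.0, and so on), the second shows RE memory and CPU utilization, and the third breaks rpd memory usage down by consumer, which usually makes it obvious which table or protocol is eating the extra RAM.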
Thanks a lot, Krasi!
Do you know how to associate each LSI with each MPLS table and/or logical system (LS)? I mean, should the context ID be the same for the LSI, the MPLS table, and the LS? Or is there another way to check this?
Thanks!
Hello Rakesh,
I went ahead and created the rib-group on both R1 and R3 for local interfaces and OSPF, and I see the routes showing up in the NDCS routing-instance. But no luck on tunnel 200 coming up or OSPF peering over it.
R1
set interfaces em0 unit 0 family inet address 10.10.10.1/30
set interfaces em1 unit 0 family inet address 172.16.1.1/29
set interfaces em2 unit 0 family inet address 172.16.1.2/29
set interfaces em3 unit 0 family inet address 134.123.1.1/24
set interfaces gre unit 100 tunnel source 10.10.10.1
set interfaces gre unit 100 tunnel destination 10.10.10.2
set interfaces gre unit 100 family inet address 192.168.1.1/30
set interfaces gre unit 200 tunnel source 20.20.20.1
set interfaces gre unit 200 tunnel destination 10.10.10.6
set interfaces gre unit 200 family inet mtu 1400
set interfaces gre unit 200 family inet address 192.168.100.1/30
set interfaces lo0 unit 0 family inet address 20.20.20.1/32
set routing-options interface-routes rib-group inet NDCS
set routing-options rib-groups NDCS import-rib inet.0
set routing-options rib-groups NDCS import-rib NDCS.inet.0
set protocols ospf rib-group NDCS
set protocols ospf area 0.0.0.0 interface gre.100
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set routing-instances DELTA-CIC instance-type virtual-router
set routing-instances DELTA-CIC interface em1.0
set routing-instances DELTA-CIC interface em3.0
set routing-instances DELTA-CIC protocols ospf area 0.0.0.49 interface em1.0
set routing-instances DELTA-CIC protocols ospf area 0.0.0.49 interface em3.0 passive
set routing-instances NDCS instance-type virtual-router
set routing-instances NDCS interface em2.0
set routing-instances NDCS interface gre.200
set routing-instances NDCS protocols ospf area 0.0.0.49 interface em2.0
set routing-instances NDCS protocols ospf area 0.0.0.49 interface gre.200
inet.0: 8 destinations, 9 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.10.10.0/30      *[Direct/0] 01:02:04
                    > via em0.0
10.10.10.1/32      *[Local/0] 01:02:04
                      Local via em0.0
10.10.10.4/30      *[OSPF/10] 00:43:35, metric 2
                    > via gre.100
20.20.20.1/32      *[Direct/0] 01:02:00
                    > via lo0.0
20.20.20.2/32      *[OSPF/10] 00:43:35, metric 1
                    > via gre.100
192.168.1.0/30     *[Direct/0] 01:02:00
                    > via gre.100
                    [OSPF/10] 00:43:35, metric 1
                    > via gre.100
192.168.1.1/32     *[Local/0] 01:02:00
                      Local via gre.100
224.0.0.5/32       *[OSPF/10] 01:02:07, metric 1
                      MultiRecv

DELTA-CIC.inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

134.123.1.0/24     *[Direct/0] 01:02:00
                    > via em3.0
134.123.1.1/32     *[Local/0] 01:02:00
                      Local via em3.0
172.16.1.0/29      *[Direct/0] 01:02:00
                    > via em1.0
172.16.1.1/32      *[Local/0] 01:02:00
                      Local via em1.0
192.168.100.0/30   *[OSPF/10] 01:01:09, metric 2
                    > to 172.16.1.2 via em1.0
224.0.0.5/32       *[OSPF/10] 01:02:06, metric 1
                      MultiRecv

NDCS.inet.0: 13 destinations, 15 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.10.10.0/30      *[Direct/0] 00:49:12
                    > via em0.0
10.10.10.1/32      *[Local/0] 00:49:12
                      Local via em0.0
10.10.10.4/30      *[OSPF/10] 00:43:35, metric 2
                    > via gre.100
20.20.20.1/32      *[Direct/0] 00:49:12
                    > via lo0.0
20.20.20.2/32      *[OSPF/10] 00:43:35, metric 1
                    > via gre.100
134.123.1.0/24     *[OSPF/10] 01:01:09, metric 2
                    > to 172.16.1.1 via em2.0
172.16.1.0/29      *[Direct/0] 01:01:59
                    > via em2.0
172.16.1.2/32      *[Local/0] 01:01:59
                      Local via em2.0
192.168.1.0/30     *[Direct/0] 00:49:12
                    > via gre.100
                    [OSPF/10] 00:43:35, metric 1
                    > via gre.100
192.168.1.1/32     *[Local/0] 00:49:12
                      Local via gre.100
192.168.100.0/30   *[Direct/0] 00:43:35
                    > via gre.200
                    [OSPF/10] 01:01:29, metric 1
                    > via gre.200
192.168.100.1/32   *[Local/0] 01:02:00
                      Local via gre.200
224.0.0.5/32       *[OSPF/10] 01:02:06, metric 1
                      MultiRecv
R3
set interfaces em0 unit 0 family inet address 10.10.10.6/30
set interfaces em1 unit 0 family inet address 172.16.2.1/29
set interfaces em2 unit 0 family inet address 172.16.2.2/29
set interfaces em3 unit 0 family inet address 134.123.2.1/24
set interfaces gre unit 200 tunnel source 10.10.10.6
set interfaces gre unit 200 tunnel destination 20.20.20.1
set interfaces gre unit 200 family inet mtu 1400
set interfaces gre unit 200 family inet address 192.168.100.2/30
set routing-options interface-routes rib-group inet NDCS
set routing-options rib-groups NDCS import-rib inet.0
set routing-options rib-groups NDCS import-rib NDCS.inet.0
set protocols ospf rib-group NDCS
set protocols ospf area 0.0.0.0 interface em0.0
set routing-instances DELTA-CIC instance-type virtual-router
set routing-instances DELTA-CIC interface em2.0
set routing-instances DELTA-CIC interface em3.0
set routing-instances DELTA-CIC protocols ospf area 0.0.0.0 interface em2.0
set routing-instances DELTA-CIC protocols ospf area 0.0.0.0 interface em3.0 passive
set routing-instances NDCS instance-type virtual-router
set routing-instances NDCS interface em1.0
set routing-instances NDCS interface gre.200
set routing-instances NDCS protocols ospf area 0.0.0.0 interface em1.0
set routing-instances NDCS protocols ospf area 0.0.0.49 interface gre.200
inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.10.10.4/30      *[Direct/0] 01:25:48
                    > via em0.0
10.10.10.6/32      *[Local/0] 01:25:48
                      Local via em0.0
20.20.20.1/32      *[OSPF/10] 00:55:44, metric 2
                    > to 10.10.10.5 via em0.0
20.20.20.2/32      *[OSPF/10] 00:55:44, metric 1
                    > to 10.10.10.5 via em0.0
192.168.1.0/30     *[OSPF/10] 00:55:44, metric 2
                    > to 10.10.10.5 via em0.0
224.0.0.5/32       *[OSPF/10] 01:25:51, metric 1
                      MultiRecv

DELTA-CIC.inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

134.123.2.0/24     *[Direct/0] 01:25:45
                    > via em3.0
134.123.2.1/32     *[Local/0] 01:25:45
                      Local via em3.0
172.16.2.0/29      *[Direct/0] 01:25:45
                    > via em2.0
172.16.2.2/32      *[Local/0] 01:25:45
                      Local via em2.0
192.168.100.0/30   *[OSPF/10] 01:24:49, metric 2
                    > to 172.16.2.1 via em2.0
224.0.0.5/32       *[OSPF/10] 01:25:51, metric 1
                      MultiRecv

NDCS.inet.0: 11 destinations, 12 routes (11 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.10.10.4/30      *[Direct/0] 01:16:53
                    > via em0.0
10.10.10.6/32      *[Local/0] 01:16:53
                      Local via em0.0
20.20.20.1/32      *[OSPF/10] 00:55:44, metric 2
                    > to 10.10.10.5 via em0.0
20.20.20.2/32      *[OSPF/10] 00:55:44, metric 1
                    > to 10.10.10.5 via em0.0
134.123.2.0/24     *[OSPF/10] 01:24:54, metric 2
                    > to 172.16.2.2 via em1.0
172.16.2.0/29      *[Direct/0] 01:25:45
                    > via em1.0
172.16.2.1/32      *[Local/0] 01:25:45
                      Local via em1.0
192.168.1.0/30     *[OSPF/10] 00:55:44, metric 2
                    > to 10.10.10.5 via em0.0
192.168.100.0/30   *[Direct/0] 00:55:44
                    > via gre.200
                    [OSPF/10] 01:24:54, metric 1
                    > via gre.200
192.168.100.2/32   *[Local/0] 01:25:45
                      Local via gre.200
224.0.0.5/32       *[OSPF/10] 01:25:51, metric 1
                      MultiRecv
Hi,
It is done automatically.
As per your output:
show route forwarding-table label 262147
Routing table: default.mpls
MPLS:
Destination Type RtRef Next hop Type Index NhRef Netif
262147 user 0 Pop 2195 2 lsi.1049345
Logical system: MIAV
Routing table: default.mpls
MPLS:
Destination Type RtRef Next hop Type Index NhRef Netif
262147 user 0 Pop 2759 2 lsi.17826311
When an MPLS packet (label 262147) arrives in LS "MIAV" (that is, on an ingress IFL bound to LS MIAV), the lookup is processed in the MIAV routing context (i.e. the PFE mpls.0 table index corresponding to that LS). So the label is popped and the packet is forwarded through lsi.17826311 (actually recirculated for an L2 lookup).
However, if the same packet arrives on an IFL bound to the "default" LS, it is processed in the context of the "default" mpls.0 table index in the PFE, so it is popped and forwarded through lsi.1049345.
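As a rough illustration only (the interface names and addresses below are made up, not taken from your router), the lookup context simply follows whichever logical system the ingress IFL is configured under:

set logical-systems MIAV interfaces ge-0/0/1 unit 0 family inet address 10.0.1.1/30
set logical-systems MIAV interfaces ge-0/0/1 unit 0 family mpls
set interfaces ge-0/0/2 unit 0 family inet address 10.0.2.1/30
set interfaces ge-0/0/2 unit 0 family mpls

With a binding like that, label 262147 arriving on ge-0/0/1.0 would be looked up in MIAV's mpls.0 and resolve to lsi.17826311, while the same label arriving on ge-0/0/2.0 stays in the default mpls.0 and resolves to lsi.1049345.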
HTH,
Krasi
Hi,
Are you able to ping your GRE interface from Router-2's global table? Is this setup remotely accessible, just so I can have a look?
Thanks once again for clarifying, Krasi! For some reason I didn't take the inbound interface into account when analyzing this.
Hi,
I had this tested in a lab:
Active pseudowire is between PE1 & PE3.
Config PE1:-
root@PE1> show configuration protocols l2circuit
neighbor 192.168.0.3 {
    interface ge-0/0/0.0 {
        virtual-circuit-id 1;
        revert-time 10;
        backup-neighbor 192.168.0.4 {
            virtual-circuit-id 3;
        }
    }
}
PE2:
root@PE2> show configuration protocols l2circuit
neighbor 192.168.0.3 {
    interface ge-0/0/0.0 {
        virtual-circuit-id 2;
        revert-time 10;
        backup-neighbor 192.168.0.4 {
            virtual-circuit-id 4;
        }
    }
}
PE3:
root@PE3> show configuration protocols l2circuit
neighbor 192.168.0.1 {
    interface ge-0/0/0.0 {
        virtual-circuit-id 1;
        revert-time 10;
        backup-neighbor 192.168.0.2 {
            virtual-circuit-id 2;
        }
    }
}
PE4:
root@PE4> show configuration protocols l2circuit
neighbor 192.168.0.1 {
    interface ge-0/0/0.0 {
        virtual-circuit-id 3;
        revert-time 10;
        backup-neighbor 192.168.0.2 {
            virtual-circuit-id 4;
        }
    }
}
Results below:
Simulating PE3 interface going down:
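For anyone recreating this, the state of the circuits during the test can be watched on PE1 and PE2 with the command below (just the command, not my lab's actual output):

show l2circuit connections extensive

It shows, per neighbor and virtual-circuit-id, which connection is Up and which is the standby (ST), so the failover to 192.168.0.4 and the revert after the revert-time expires can both be observed.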
Hope this helps.
Cheers,
Ashvin
hi,
I have a question regarding L3VPN...
As I understand it, the default source of a ping on Junos is the loopback address.
If I am pinging a destination in an L3VPN, where the table has no route to the PE loopback, what source is used for the ping if I do not specify a source myself?
Any assistance would be greatly appreciated.
Thanks
Update...
So I found this in a Juniper techpub:
"If you attempt to ping a remote CE router from a PE router, ICMP echo requests are sent from the PE router, with the PE router’s VPN interface as the source."
Hi Neil,
Just to add to that: you could have more than one interface in the VRF. The interface on the PE will be the one referenced as the unicast next-hop interface in the forwarding table, assuming you are not doing load balancing.
Say you have two interfaces in the same VRF; the exit interface then depends on which PE interface your VRF instance uses to leave the PE, based on the forwarding table, as the check below shows.
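A quick way to see which interface the VRF will actually use for a given destination (instance name and prefix below are placeholders):

show route forwarding-table vpn CUSTOMER-A destination 10.1.1.2
show route 10.1.1.2 table CUSTOMER-A.inet.0

The first shows the unicast next hop and Netif the PFE will use; the second shows the corresponding RIB entry, including both next hops if you are load balancing.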
Hi,
I had this tested in the lab. I believe the GRE tunnel interface does come up in the VR and traffic gets sent from R3 towards R1's gr-0/0/0.200; I can see OSPF hello packets being received on gr-0/0/0.200 on R1. However, traffic cannot be sent out of the gr-0/0/0.200 tunnel from R1. A ping from R1 results in the following:
root@R1# run ping 192.168.100.2 routing-instance NDCS
PING 192.168.100.2 (192.168.100.2): 56 data bytes
ping: sendto: Too many references: can't splice
ping: sendto: Too many references: can't splice
ping: sendto: Too many references: can't splice
^C
--- 192.168.100.2 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
Is it because of double GRE encapsulation over the same PFE?
http://forums.juniper.net/t5/SRX-Services-Gateway/Problem-with-GRE-tunnel/td-p/96338
The output below shows traffic being received on R1's gr-0/0/0.200, with OSPF hello packets arriving, hence OSPF stuck in Init state:
root@R1# run monitor traffic interface gr-0/0/0.200 no-resolve
verbose output suppressed, use <detail> or <extensive> for full protocol decode
Address resolution is OFF.
Listening on gr-0/0/0.200, capture size 96 bytes
22:11:15.962664  In IP 192.168.100.2 > 224.0.0.5: OSPFv2, Hello, length 56
22:11:25.312688  In IP 192.168.100.2 > 224.0.0.5: OSPFv2, Hello, length 56
^C
2 packets received by filter
0 packets dropped by kernel

[edit]
root@R1# run show ospf neighbor instance NDCS
Address          Interface         State   ID             Pri  Dead
172.16.1.1       ge-0/0/3.0        Full    134.123.1.1    128    37
192.168.100.2    gr-0/0/0.200      Init    172.16.2.2     128    33
No traffic is received on R3's gr-0/0/0.200:
root@R3# run show ospf neighbor instance NDCS
Address          Interface         State   ID             Pri  Dead
172.16.2.1       ge-0/0/3.0        Full    134.123.2.1    128    36

root@R3# run show interfaces gr-0/0/0.200 | match packets
    Input packets : 0
    Output packets: 161
@sylar: It probably worked for you because you are not using the GRE tunnel to route between R1 and R2:
10.10.10.4/30      *[OSPF/10] 00:40:07, metric 2
                    > to 10.10.10.2 via lt-1/0/10.12
compared to:
root@R1# run show route table inet.0 10.10.10.4

inet.0: 8 destinations, 9 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.10.10.4/30      *[OSPF/10] 00:00:04, metric 101
                    > via gr-0/0/0.100
If we enable OSPF between the R1 and R2 physical interfaces and prefer those routes by giving the physical interface a lower metric than gr-0/0/0.100, bidirectional traffic is now possible over gr-0/0/0.200:
root@R1# show protocols ospf area 0.0.0.0
interface gr-0/0/0.100 {
    metric 100;
}
interface lo0.0 {
    passive;
}
interface ge-0/0/1.0 {
    metric 1;
}
ge-0/0/1.0 = em0.0 in my setup
Results below:
root@R1# run show ospf neighbor
Address          Interface         State   ID             Pri  Dead
10.10.10.2       ge-0/0/1.0        Full    20.20.20.2     128    32
192.168.1.2      gr-0/0/0.100      Full    20.20.20.2     128    32

[edit]
root@R1# run show route table inet.0 10.10.10.6

inet.0: 8 destinations, 9 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.10.10.4/30      *[OSPF/10] 00:00:17, metric 2
                    > to 10.10.10.2 via ge-0/0/1.0

root@R1# run show ospf neighbor instance NDCS
Address          Interface         State   ID             Pri  Dead
172.16.1.1       ge-0/0/3.0        Full    134.123.1.1    128    31
192.168.100.2    gr-0/0/0.200      Full    172.16.2.2     128    35

[edit]
root@R1# run ping routing-instance NDCS 192.168.100.2
PING 192.168.100.2 (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: icmp_seq=0 ttl=64 time=2.299 ms
64 bytes from 192.168.100.2: icmp_seq=1 ttl=64 time=64.926 ms
^C
--- 192.168.100.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 2.299/33.613/64.926/31.313 ms
Neither the rib-groups configuration nor the static routes pointing to table inet.0 in the NDCS VR were required.
See below [no static routes in NDCS]:
root@R1# run show route table NDCS.inet.0 terse

NDCS.inet.0: 8 destinations, 9 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

A V Destination        P Prf  Metric 1  Metric 2  Next hop        AS path
* ? 134.123.1.0/24     O  10         2            >172.16.1.1
* ? 134.123.2.0/24     O  10         3            >gr-0/0/0.200
* ? 172.16.1.0/29      D   0                      >ge-0/0/3.0
* ? 172.16.1.2/32      L   0                       Local
* ? 172.16.2.0/29      O  10         2            >gr-0/0/0.200
* ? 192.168.100.0/30   D   0                      >gr-0/0/0.200
  ?                    O  10         1            >gr-0/0/0.200
* ? 192.168.100.1/32   L   0                       Local
* ? 224.0.0.5/32       O  10         1             MultiRecv
From previous experience, I know it is possible to add a GRE interface to a VR or VRF without much hassle.
Hope this helps.
Cheers,
Ashvin
Hi Ashvin,
Thanks for the reply; I will try it on the device as well and let everyone know the results. I am not too sure how it worked for you without rib-groups, though, as I initially tried without rib-groups and had no reachability. The GRE tunnel should have the physical address as its source and the loopback as its destination.
Hi,
I disabled the p-2 OSPF interface that connects to p-3.
In the LSP log below (entries 39 and 40), the primary path reports a CSPF link down and no route toward 1.1.1.6; the active path is the secondary, yet the primary path state is Up and ready to revert in 175 seconds.
Would an expert please help explain this behavior?
Thanks.
Topology as below,
________________________________
| |
| |
pe-1 ----- p-2 ----- p-3 ----- pe-4
LAB-MX240# run show mpls lsp ingress logical-system pe-1 extensive
Ingress LSP: 1 sessions
101.1.1.4
From: 101.1.1.1, State: Up, ActiveRoute: 0, LSPname: pe-1_to_pe-4
ActivePath: to_pe-4_2nd (secondary)
Link protection desired
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
Revert timer: 300
Time remaining before reverting: 175
Primary to_pe-4_pri State: Up
Priorities: 7 0
OptimizeTimer: 10
SmartOptimizeTimer: 180
Reoptimization in 3 second(s).
Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 3)
1.1.1.2 S 1.1.1.6 S 1.1.1.10 S
Received RRO (ProtectionFlag 1=Available 2=InUse 4=B/W 8=Node 10=SoftPreempt 20=Node-ID):
101.1.1.2(flag=0x21) 1.1.1.2(flag=1 Label=300432) 101.1.1.3(flag=0x21) 1.1.1.6(flag=1 Label=300416) 101.1.1.4(flag=0x20) 1.1.1.10(Label=3)
49 Jun 7 02:14:20.460 CSPF failed: no route toward 1.1.1.6
48 Jun 7 02:14:10.677 Record Route: 101.1.1.2(flag=0x21) 1.1.1.2(flag=1 Label=300432) 101.1.1.3(flag=0x21) 1.1.1.6(flag=1 Label=300416) 101.1.1.4(flag=0x20) 1.1.1.10(Label=3)
47 Jun 7 02:13:41.789 CSPF failed: no route toward 1.1.1.6
46 Jun 7 02:13:32.341 Record Route: 101.1.1.2(flag=0x23) 1.1.1.2(flag=3 Label=300432) 70.1.1.2(Label=300416) 101.1.1.4(flag=0x20) 1.1.1.10(Label=3)
45 Jun 7 02:13:29.612 CSPF failed: no route toward 1.1.1.6
44 Jun 7 02:13:29.612 CSPF: link down/deleted: 0.0.0.0(1.1.1.6:0)(1.1.1.6)->0.0.0.0(101.1.1.3:0)(101.1.1.3)
43 Jun 7 02:13:29.246 Link-protection Up
42 Jun 7 02:13:29.245 Deselected as active
41 Jun 7 02:13:29.245 Link-protection Down
40 Jun 7 02:13:29.244 CSPF failed: no route toward 1.1.1.6
39 Jun 7 02:13:29.244 CSPF: link down/deleted: 1.1.1.5(101.1.1.2:0)(101.1.1.2)->0.0.0.0(1.1.1.6:0)(1.1.1.6)
Hello,
Please post (1) the full configuration and (2) a human-readable diagram with interface names and IP addresses clearly marked.
One explanation is that when you disabled the OSPF link, the PLR p-2 started to use the p-2->p-3 bypass, and the RSVP Path messages from pe-1 are tunneled via this bypass. Likewise, the RSVP Resv messages from pe-4 are tunneled via the p-3->p-2 bypass. Hence pe-1 sees its primary path as Up.
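If the lab is still up, this is fairly easy to confirm on p-2, the PLR (generic commands; add "logical-system p-2" if, as in your output, p-2 is a logical system on the same MX):

show rsvp session bypass detail
show rsvp session transit detail

The first lists the bypass LSPs p-2 has signalled, and the second should show the protected pe-1_to_pe-4 session with its Path/Resv state still being refreshed over the bypass.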
HTH
Thx
Alex
Hi,
Are RSVP messages part of the control plane or the data plane?
Do control-plane packets also carry an MPLS header?
Thanks.
Ernest Lin
Hi Rakesh,
The GRE tunnel source and destination are as per the requirements, i.e. the physical address as source and the loopback as destination. Below is the configuration:
root@R1# show interfaces gr-0/0/0 unit 200
tunnel {
    source 20.20.20.1;
    destination 10.10.10.6;
}
family inet {
    mtu 1400;
    address 192.168.100.1/30;
}

root@R3# show interfaces gr-0/0/0 unit 200
tunnel {
    source 10.10.10.6;
    destination 20.20.20.1;
}
family inet {
    mtu 1400;
    address 192.168.100.2/30;
}
This is equivalent to scenario 1 in this guide:
http://kb.juniper.net/InfoCenter/index?page=content&id=KB24592&actp=RSS
Complete configuration:
Could you please explain why you think rib-groups would be needed?
In the past I have successfully implemented GRE tunnels between two loopback addresses with the GRE interface sitting in a VRF on one side [similar to this one].
Hope this helps.
Cheers,
Ashvin
Hi,
RSVP messages are destined to the RE, therefore control plane. From RFC2205:
RSVP does not transport application data but is rather an Internet control protocol, like ICMP, IGMP, or routing protocols. Like the implementations of routing and management protocols, an implementation of RSVP will typically execute in the background, not in the data forwarding path.
Whether a control packet also carries an MPLS header would, I suppose, depend on the configuration. By default, RSVP control packets follow IGP paths and are not MPLS-forwarded. This can be altered with "traffic-engineering bgp-igp" or "bgp-igp-both-ribs"; I believe that when this is applied, even control packets such as RSVP get MPLS-forwarded.
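For reference, the knob in question sits at the top of the MPLS stanza (a sketch, verify against your release):

set protocols mpls traffic-engineering bgp-igp-both-ribs

With bgp-igp the ingress LSP routes are moved from inet.3 into inet.0 only; with bgp-igp-both-ribs they are installed in both tables, so ordinary traffic that is not resolved via BGP next hops can also be forwarded over the LSPs.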
An interesting discussion!
Cheers,
Ashvin
Hi,
Please share your knowledge of how to configure "hairpin" NAT on a Juniper SRX for access to a server that has a static NAT rule. Suppose I have the local subnet 10.213.0.0/24, a source NAT rule translating the whole local subnet to 1.1.1.1/32, and a static NAT rule translating the local IP 10.213.0.10/32 to 2.2.2.2/32. I also configured hairpinning as shown in this link:
http://kb.juniper.net/InfoCenter/index?page=content&id=KB24639&actp=search
So, when I try to establish a TCP session from 10.213.0.20 to 2.2.2.2 (10.213.0.10), on the destination server (10.213.0.10) I see the TCP SYN arriving from 10.213.0.20, and the SYN-ACK is sent back directly to 10.213.0.20, so the TCP session cannot be established. I don't quite understand why the server 10.213.0.10 sees source IP 10.213.0.20 instead of the router's IP ...
I tried changing the static NAT to separate destination NAT and source NAT rules, but the result is the same.
Maybe somebody has experience with this case.
My config:
set security nat source pool snat-pool address 1.1.1.1/32
set security nat source rule-set hairpin-nat from routing-instance 55555
set security nat source rule-set hairpin-nat to zone UNTRUST
set security nat source rule-set hairpin-nat rule hairpin-nat-rule match source-address 0.0.0.0/0
set security nat source rule-set hairpin-nat rule hairpin-nat-rule then source-nat pool snat-pool
set security nat destination pool hairpin-pool address 10.213.0.10/32
set security nat destination rule-set HAIRPIN from routing-instance 55555
set security nat destination rule-set HAIRPIN rule rule-hairpin-destination match source-address 10.213.0.0/24
set security nat destination rule-set HAIRPIN rule rule-hairpin-destination match destination-address 2.2.2.2/32
set security nat destination rule-set HAIRPIN rule rule-hairpin-destination then destination-nat pool hairpin-pool
set security policies from-zone TRUST to-zone TRUST policy default-permit match source-address any
set security policies from-zone TRUST to-zone TRUST policy default-permit match destination-address any
set security policies from-zone TRUST to-zone TRUST policy default-permit match application any
set security policies from-zone TRUST to-zone TRUST policy default-permit then permit
set security policies from-zone UNTRUST to-zone TRUST policy default-permit match source-address any
set security policies from-zone UNTRUST to-zone TRUST policy default-permit match destination-address 10.213.0.10
set security policies from-zone UNTRUST to-zone TRUST policy default-permit match application any
set security policies from-zone UNTRUST to-zone TRUST policy default-permit then permit
Thanks in advance !
Hello,
In the destination NAT rule-set, instead of "from routing-instance" can you change it to "from zone", say zone "XYZ", and create a source NAT rule-set from zone "XYZ" to zone "XYZ"? Then try again.
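Along those lines, a sketch of the missing piece (assuming both the client and the server sit in zone TRUST; the rule names are made up) would be a source NAT rule-set for the hairpinned leg, so the server sees the SRX as the source and replies back through it instead of directly to 10.213.0.20:

set security nat source rule-set hairpin-snat from zone TRUST
set security nat source rule-set hairpin-snat to zone TRUST
set security nat source rule-set hairpin-snat rule hairpin-snat-rule match source-address 10.213.0.0/24
set security nat source rule-set hairpin-snat rule hairpin-snat-rule match destination-address 10.213.0.10/32
set security nat source rule-set hairpin-snat rule hairpin-snat-rule then source-nat interface

The destination-address here is the server's real address because source NAT rules are evaluated after destination NAT has already translated 2.2.2.2 back to 10.213.0.10.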