
Re: Throughput Issue in MX960 During NATing


Hello there,

Thanks for posting the configs, topology and printouts.

From what I see, part of this is a cosmetic/display issue.

First things first - the MS-DPC NPU throughput is around 10Gbps, where that 10Gbps is the sum of inside->outside (or PRIVATE->PUBLIC) and outside->inside (or PUBLIC->PRIVATE) bps.

I will explain this further.

The "monitor interface sp-8/0/0" printout shows You the sum of:

- client->server (c2s) traffic ENTERING sp-8/0/0 private side + server->client (s2c) traffic ENTERING sp-8/0/0 public side, roughly 10Gbps on L3.

- client->server (c2s) traffic LEAVING sp-8/0/0 public side + server->client (s2c) traffic LEAVING sp-8/0/0 private side, roughly 10Gbps on L3.

So far so good.
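To make the double-counting explicit, here is a minimal sketch of how the SP interface counters add up. Only the ~10Gbps sum comes from the printout; the 6/4 split between directions is an assumption purely for illustration:

```python
# The sp- interface is a loopback-style service interface: c2s traffic enters
# on its private side and s2c traffic enters on its public side, so its
# "input" counter is the SUM of both directions (same story for "output").
c2s_gbps = 6.0  # assumed split, for illustration only
s2c_gbps = 4.0  # assumed split, for illustration only

sp_input_gbps = c2s_gbps + s2c_gbps   # both flows ENTER the sp- interface
sp_output_gbps = c2s_gbps + s2c_gbps  # and both flows LEAVE it again

print(sp_input_gbps, sp_output_gbps)  # ~10 Gbps each, matching the printout
```

This is why the SP interface can legitimately show ~10Gbps in and ~10Gbps out while each revenue port carries only one direction of the traffic.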

The "monitor interface xe-9/1/3" printout shows You:

- c2s traffic in one direction

- s2c traffic in the other direction, and

- bps at L2, not at L3 as the SP interface shows.

Now, let's see how much traffic You are offering to the NPU.

xe-9/1/3 input is 9Gbps.

xe-9/1/2 input is 6Gbps.

Summing it up, You are offering 15Gbps of traffic at L2 to the NPU, while it is able to process only 10Gbps at L3.

Your average packet size is ~1282 bytes from the sp-8/0/0 stats, so with the 14-byte Ethernet header each packet is ~1296 bytes at L2. Therefore 15Gbps of L2 translates to 15 * (1282/1296) = ~14.8Gbps of L3. Still well above 10Gbps, and I reckon You are losing ~4.8Gbps inside the NPU.
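To double-check the arithmetic (the 14 bytes of Ethernet overhead per frame, i.e. 1296 - 1282, is the assumption behind the 1282/1296 ratio; an untagged frame is assumed):

```python
# Convert the offered L2 rate to an L3 rate and compare with NPU capacity.
l2_offered_gbps = 9.0 + 6.0        # xe-9/1/3 input + xe-9/1/2 input
avg_l3_bytes = 1282                # average packet size from sp-8/0/0 stats
avg_l2_bytes = avg_l3_bytes + 14   # + Ethernet header (no VLAN tag assumed)

l3_offered_gbps = l2_offered_gbps * avg_l3_bytes / avg_l2_bytes
npu_capacity_gbps = 10.0           # MS-DPC NPU limit, both directions summed

loss_gbps = l3_offered_gbps - npu_capacity_gbps
print(round(l3_offered_gbps, 1), round(loss_gbps, 1))  # 14.8, 4.8
```

So even after stripping the L2 framing overhead, the offered load is ~14.8Gbps against a ~10Gbps NPU, hence the ~4.8Gbps of loss.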

I hope this makes sense. 

Please post the "show services service-set statistics packet-drops" printout to see the reason. I reckon Your NPU CPU is at nearly 100%.

 

HTH

Thx

Alex

 

 

