
Catalyst 2960/3560/3750 Output Drops (Part 2)

posted Jul 12, 2014, 5:34 PM by Marc Kerscher   [ updated Jul 12, 2014, 5:42 PM ]
This is the second part of the article, in which we adjust the output buffers to eliminate the output drops and prevent retransmits. Some trial and error will be required to make it work. Let's start off with the settings generated by AUTO-QOS:

mls qos queue-set output 1 threshold 1 100 100 50 200
mls qos queue-set output 1 threshold 2 125 125 100 400
mls qos queue-set output 1 threshold 3 100 100 100 400
mls qos queue-set output 1 threshold 4 60 150 50 200
mls qos queue-set output 1 buffers 15 25 40 20

interface FastEthernet0/6
 srr-queue bandwidth share 1 30 35 5
 priority-queue out
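
To make the later changes easier to follow: the threshold command takes the queue number and then four values, each a percentage of the buffer space allocated to that queue (my reading of the command syntax; verify against your IOS release):

mls qos queue-set output 1 threshold 2 125 125 100 400
! queue-set 1, queue 2: WTD drop-threshold1 125%, drop-threshold2 125%,
! reserved 100%, maximum 400% of that queue's allocated buffers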

This is Cisco's best guess at a baseline for the QOS settings. Depending on whether you are using a 4-, 8-, or 12-class QOS model, these defaults will not all fit, and changes will need to be made. The best way to start is to pick a couple of ports that show lots of output queue drops and look at the QOS breakdown:

Switch#sh mls qos inter fa0/6 stat
FastEthernet0/6 (All statistics are in packets)

..SNIP..

Sample output queues enqueued:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0           0         51252
 queue 1:           0           0        847872
 queue 2:           0           0         45743
 queue 3:           0      355323        782472

  output queues dropped:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0           0             0
 queue 1:           0           0           746
 queue 2:           0           0          7720
 queue 3:           0       67464         73897

Analyzing the data:

Packets        T1        T2        T3    % of Traffic
Q1              0         0     51252        2.46%
Q2              0         0    847872       40.71%
Q3              0         0     45743        2.20%
Q4              0    355323    782472       54.63%
Total           0    355323   1727339    (2082662 packets total)

Drops          T1        T2        T3    % of T1 Drops  % of T2 Drops  % of T3 Drops
Q1              0         0         0        n/a            n/a            0.00%
Q2              0         0       746        n/a            n/a            0.09%
Q3              0         0      7720        n/a            n/a           16.88%
Q4              0     67464     73897        n/a           18.99%          9.44%

(Drop percentages are drops divided by packets enqueued at that threshold; n/a where nothing was enqueued. Q1-Q4 here correspond to hardware queues 0-3 in the show output.)
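The spreadsheet arithmetic above can be reproduced with a short script; a minimal sketch using the counter values from the show command output (the labels Q1-Q4 correspond to hardware queues 0-3):

```python
# Recreate the analysis from the "sh mls qos interface ... statistics" counters.
# Each queue maps to a tuple of (threshold1, threshold2, threshold3) packet counts.
enqueued = {
    "Q1": (0, 0, 51252),
    "Q2": (0, 0, 847872),
    "Q3": (0, 0, 45743),
    "Q4": (0, 355323, 782472),
}
dropped = {
    "Q1": (0, 0, 0),
    "Q2": (0, 0, 746),
    "Q3": (0, 0, 7720),
    "Q4": (0, 67464, 73897),
}

total = sum(sum(counts) for counts in enqueued.values())

for q in enqueued:
    share = 100 * sum(enqueued[q]) / total
    line = f"{q}: {share:5.2f}% of traffic"
    for t in range(3):
        if enqueued[q][t]:  # only compute a drop rate where packets were enqueued
            rate = 100 * dropped[q][t] / enqueued[q][t]
            line += f", T{t + 1} drop rate {rate:.2f}%"
    print(line)
```

Running this reproduces the percentages in the tables above (e.g. Q4 carries 54.63% of traffic with an 18.99% drop rate on T2).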

Using the information above, I would make the following changes. The PACKETS table shows the traffic breakdown, so I would adjust these parameters to match it:

mls qos queue-set output 1 buffers 5 40 5 50
srr-queue bandwidth share 1 40 5 50
(I tried to make both of them roughly match the traffic split per queue, rather than keeping the default AUTO-QOS values.)
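A quick sanity check on that buffer split (my own addition, not switch output): the four percentages given to the buffers command must sum to 100, and here they roughly track the measured traffic share per queue.

```python
# The four values of "mls qos queue-set output 1 buffers" are percentages
# of the port's output buffer pool and must total 100.
proposed_buffers = [5, 40, 5, 50]            # Q1..Q4, from the command above
traffic_share = [2.46, 40.71, 2.20, 54.63]   # % of traffic from the PACKETS table

assert sum(proposed_buffers) == 100, "buffer percentages must total 100"

for q, (buf, share) in enumerate(zip(proposed_buffers, traffic_share), start=1):
    print(f"Q{q}: {buf}% of buffers for {share}% of traffic")
```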

Now using the DROPS I would make the following changes:

mls qos queue-set output 1 threshold 1 100 100 50 100
(Leave the priority queue alone)
mls qos queue-set output 1 threshold 2 80 90 200 800
(Since there was a lot of traffic, I increased the reserve and maximum)
mls qos queue-set output 1 threshold 3 80 90 50 100
(Not a lot of traffic here, so I reduced some of the numbers)
mls qos queue-set output 1 threshold 4 80 150 200 800
(Since there was a lot of traffic even on T2, I increased several parameters)
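
Pulling the pieces together, the adjusted configuration splits into global queue-set commands and per-interface commands (using the example port from above):

mls qos queue-set output 1 buffers 5 40 5 50
mls qos queue-set output 1 threshold 1 100 100 50 100
mls qos queue-set output 1 threshold 2 80 90 200 800
mls qos queue-set output 1 threshold 3 80 90 50 100
mls qos queue-set output 1 threshold 4 80 150 200 800
!
interface FastEthernet0/6
 srr-queue bandwidth share 1 40 5 50
 priority-queue out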

Again, this is based on the analysis above and might still need further adjustment: make the changes, wait a day, and see whether the 3rd and 4th parameters (reserved and maximum) need more tuning.

From my reading, Cisco recommends using queue-set 1 for station ports and queue-set 2 for uplinks, so the same analysis may need to be repeated for the uplink ports.
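For reference, a port is assigned to queue-set 2 with the per-interface queue-set command; a hypothetical sketch with made-up buffer values and an assumed uplink port name (derive real values from that uplink's own statistics):

mls qos queue-set output 2 buffers 10 30 10 50
!
interface GigabitEthernet0/1
 queue-set 2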

Whether these could become global or per-site settings will depend on how consistent the traffic patterns are across switches.