Strange traffic graphs when monitoring GigE interfaces

richardmc
Posts: 13
Joined: Wed Oct 29, 2008 11:49 pm
Location: Melbourne, Australia

Strange traffic graphs when monitoring GigE interfaces

Post by richardmc »

Cacti: 0.8.7b
rrdtool: 1.2.19

Hello, I'm attempting to graph traffic on a GigE interface of a Redback SE800 router using the standard interface graph templates. Even when I set the max value in the data template to 1 Gbps (1e+09) and verify it with rrdtool info, the graphs show strange results once the traffic level exceeds 100 Mbps (see attached). Below are the results of the rrdtool info and fetch commands on the affected RRD file.

[root@localhost rra]# /usr/bin/rrdtool info pitt-edge-06_traffic_in_1580.rrd
filename = "pitt-edge-06_traffic_in_1580.rrd"
rrd_version = "0003"
step = 300
last_update = 1233274811
ds[traffic_in].type = "COUNTER"
ds[traffic_in].minimal_heartbeat = 600
ds[traffic_in].min = 0.0000000000e+00
ds[traffic_in].max = 1.0000000000e+09
ds[traffic_in].last_ds = "1868726574"
ds[traffic_in].value = 6.7194921383e+07
ds[traffic_in].unknown_sec = 0
ds[traffic_out].type = "COUNTER"
ds[traffic_out].minimal_heartbeat = 600
ds[traffic_out].min = 0.0000000000e+00
ds[traffic_out].max = 1.0000000000e+09
ds[traffic_out].last_ds = "3060189038"
ds[traffic_out].value = 8.0314305133e+06
ds[traffic_out].unknown_sec = 0
rra[0].cf = "AVERAGE"
rra[0].rows = 600
rra[0].pdp_per_row = 1
rra[0].xff = 5.0000000000e-01
rra[0].cdp_prep[0].value = NaN
rra[0].cdp_prep[0].unknown_datapoints = 0
rra[0].cdp_prep[1].value = NaN
rra[0].cdp_prep[1].unknown_datapoints = 0
rra[1].cf = "AVERAGE"
rra[1].rows = 700
rra[1].pdp_per_row = 6
rra[1].xff = 5.0000000000e-01
rra[1].cdp_prep[0].value = 2.2392047016e+07
rra[1].cdp_prep[0].unknown_datapoints = 0
rra[1].cdp_prep[1].value = 2.5464795880e+06
rra[1].cdp_prep[1].unknown_datapoints = 0
rra[2].cf = "AVERAGE"
rra[2].rows = 775
rra[2].pdp_per_row = 24
rra[2].xff = 5.0000000000e-01
rra[2].cdp_prep[0].value = 2.2392047016e+07
rra[2].cdp_prep[0].unknown_datapoints = 0
rra[2].cdp_prep[1].value = 2.5464795880e+06
rra[2].cdp_prep[1].unknown_datapoints = 0
rra[3].cf = "AVERAGE"
rra[3].rows = 797
rra[3].pdp_per_row = 288
rra[3].xff = 5.0000000000e-01
rra[3].cdp_prep[0].value = 2.2392047016e+07
rra[3].cdp_prep[0].unknown_datapoints = 0
rra[3].cdp_prep[1].value = 2.5464795880e+06
rra[3].cdp_prep[1].unknown_datapoints = 0
rra[4].cf = "MAX"
rra[4].rows = 600
rra[4].pdp_per_row = 1
rra[4].xff = 5.0000000000e-01
rra[4].cdp_prep[0].value = NaN
rra[4].cdp_prep[0].unknown_datapoints = 0
rra[4].cdp_prep[1].value = NaN
rra[4].cdp_prep[1].unknown_datapoints = 0
rra[5].cf = "MAX"
rra[5].rows = 700
rra[5].pdp_per_row = 6
rra[5].xff = 5.0000000000e-01
rra[5].cdp_prep[0].value = 6.0790106241e+06
rra[5].cdp_prep[0].unknown_datapoints = 0
rra[5].cdp_prep[1].value = 8.7507004726e+05
rra[5].cdp_prep[1].unknown_datapoints = 0
rra[6].cf = "MAX"
rra[6].rows = 775
rra[6].pdp_per_row = 24
rra[6].xff = 5.0000000000e-01
rra[6].cdp_prep[0].value = 6.0790106241e+06
rra[6].cdp_prep[0].unknown_datapoints = 0
rra[6].cdp_prep[1].value = 8.7507004726e+05
rra[6].cdp_prep[1].unknown_datapoints = 0
rra[7].cf = "MAX"
rra[7].rows = 797
rra[7].pdp_per_row = 288
rra[7].xff = 5.0000000000e-01
rra[7].cdp_prep[0].value = 6.0790106241e+06
rra[7].cdp_prep[0].unknown_datapoints = 0
rra[7].cdp_prep[1].value = 8.7507004726e+05
rra[7].cdp_prep[1].unknown_datapoints = 0



[root@localhost rra]# /usr/bin/rrdtool fetch pitt-edge-06_traffic_in_1580.rrd AVERAGE
traffic_in traffic_out

1233188700: nan nan
1233189000: nan nan
1233189300: nan nan
1233189600: nan nan
1233189900: nan nan
1233190200: nan nan
1233190500: nan nan
1233190800: nan nan
1233191100: nan nan
1233191400: nan nan
1233191700: nan nan
1233192000: nan nan
1233192300: nan nan
1233192600: nan nan
1233192900: nan nan
1233193200: nan nan
1233193500: nan nan
1233193800: nan nan
1233194100: nan nan
1233194400: nan nan
1233194700: nan nan
1233195000: nan nan
1233195300: nan nan
1233195600: nan nan
1233195900: nan nan
1233196200: nan nan
1233196500: nan nan
1233196800: nan nan
1233197100: nan nan
1233197400: nan nan
1233197700: nan nan
1233198000: nan nan
1233198300: nan nan
1233198600: nan nan
1233198900: nan nan
1233199200: nan nan
1233199500: nan nan
1233199800: nan nan
1233200100: nan nan
1233200400: nan nan
1233200700: nan nan
1233201000: nan nan
1233201300: nan nan
1233201600: nan nan
1233201900: nan nan
1233202200: nan nan
1233202500: nan nan
1233202800: nan nan
1233203100: nan nan
1233203400: nan nan
1233203700: nan nan
1233204000: nan nan
1233204300: nan nan
1233204600: nan nan
1233204900: nan nan
1233205200: nan nan
1233205500: nan nan
1233205800: nan nan
1233206100: nan nan
1233206400: 3.5190506060e+06 1.4056560490e+07
1233206700: 3.3113085203e+06 1.3766325031e+07
1233207000: 3.0216990111e+06 1.2811344358e+07
1233207300: 3.4800640846e+06 1.3043159584e+07
1233207600: 3.7765751283e+06 1.3665016303e+07
1233207900: 3.2307186833e+06 1.3612721273e+07
1233208200: 3.2559486251e+06 1.3509512304e+07
1233208500: 2.8264022186e+06 1.3361858362e+07
1233208800: 2.9480360382e+06 1.3178166848e+07
1233209100: 2.8953622817e+06 1.3053094287e+07
1233209400: 3.2760168886e+06 1.3136880504e+07
1233209700: 2.2208047451e+06 1.3318044046e+07
1233210000: 9.1597039814e+05 1.2516679541e+07
1233210300: 7.3974908227e+05 1.2328462353e+07
1233210600: 5.5962656734e+05 1.1928776830e+07
1233210900: 2.6204670365e+05 1.1696186908e+07
1233211200: 5.1377011642e+05 1.1206460846e+07
1233211500: 1.3228684562e+07 1.1003077752e+07
1233211800: 1.2722829076e+07 1.0529352652e+07
1233212100: 1.1082492851e+07 9.5524655427e+06
1233212400: 1.0805971313e+07 9.1587149155e+06
1233212700: 1.1198619950e+07 8.7719218020e+06
1233213000: 1.0112368714e+07 8.4354316614e+06
1233213300: 1.0857504322e+07 8.2376877888e+06
1233213600: 9.6876499754e+06 7.3908974935e+06
1233213900: 9.0433418506e+06 7.1722281757e+06
1233214200: 9.7209555358e+06 7.1891289578e+06

etc....

Notice the NaN values still occurring even though the traffic values are less than 1e+09?
Attachments
graph_image.png (79.67 KiB)
BelgianViking
Cacti User
Posts: 97
Joined: Thu Mar 24, 2005 4:59 am
Location: Brussels, Belgium

Post by BelgianViking »

You are getting 32-bit values for your counter. Check whether your switch supports 64-bit counters and configure your poller to use them.
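A quick sketch of the arithmetic behind this advice (plain Python for illustration, not Cacti or rrdtool code; the 300-second step matches the rrd above, and the 200 Mbit/s rate is an assumed example): a 32-bit octet counter polled every 300 seconds can represent at most one roll-over per interval, which caps the measurable rate at roughly 114.5 Mbit/s.

```python
# Illustration (not Cacti/rrdtool code): why a 32-bit octet counter
# misbehaves above ~114.5 Mbit/s with a 300-second polling interval.

WRAP = 2 ** 32          # a 32-bit counter rolls over at this value
STEP = 300              # polling interval in seconds (from the rrd above)

# Highest rate a single wrap-corrected delta can represent:
max_bits_per_sec = WRAP / STEP * 8
print(max_bits_per_sec / 1e6)   # -> ~114.53 Mbit/s

# Simulate one polling interval at an assumed 200 Mbit/s:
true_rate = 200e6 / 8                          # octets per second
old = 4_000_000_000
new = (old + int(true_rate * STEP)) % WRAP     # counter wrapped

# The usual wrap correction can only recover a single roll-over,
# so the apparent rate comes out wrong:
delta = (new - old) % WRAP
apparent_bits_per_sec = delta / STEP * 8
print(apparent_bits_per_sec / 1e6)  # -> ~85.47, not the true 200 Mbit/s
```

The 114.5 Mbit/s ceiling is exactly the threshold reported later in this thread.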
[size=75][color=#EE5019]| Cacti 0.8.6g | MySQL 4.1.14 w Query Cache | Net-SNMP 5.2.1 | IIS 6 | fast-cgi | PHP 5.0.3 | RRDtool 1.2.9 | Windows 2003 Server SP1 | Cactid 0.8.6f |
| Dell 2450 - 2x P3 733 MHz, 1GB RAM |[/color][/size]
richardmc
Posts: 13
Joined: Wed Oct 29, 2008 11:49 pm
Location: Melbourne, Australia

[Solved] GigE traffic interface 64 bit issue

Post by richardmc »

Thanks Viking, you were spot on. I see that the issue actually kicks in when traffic exceeds about 114 Mbps (2^32 / 300 * 8 bits/s).

The Redback does support 64 bit counters and it's working a treat now.

Thanks again.
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany

Post by gandalf »

BTW: This is now a sticky topic
Reinhard
pzserg
Posts: 10
Joined: Tue Jan 27, 2009 5:52 am

Please tell me, what must I do for total bandwidth?

Post by pzserg »

Please tell me, what must I do for total bandwidth?

I have now installed 0.8.7c.
I use 64-bit counters for traffic in/out and everything is all right, but when I use "create graphs for total bandwidth", I see that the graphs are capped at 114 Mbit.

Poller cache:

81.x.x.x - Traffic - Gi5/1 SNMP Version: 2, Community: x, OID: .1.3.6.1.2.1.2.2.1.10.1
RRD: /var/www/cacti-0.8.7c/rra/x_traffic_in_9.rrd
81.x.x.x - Traffic - Gi5/1 SNMP Version: 2, Community: x, OID: .1.3.6.1.2.1.2.2.1.16.1
RRD: /var/www/cacti-0.8.7c/rra/x_traffic_in_9.rrd

So we can see that 32-bit counters are used for total bandwidth.
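For reference, a sketch of the OID change involved (plain Python for illustration, not Cacti code; the OIDs are the standard IF-MIB ones from RFC 2863, with the `.1` ifIndex suffix as shown in the poller cache above):

```python
# Illustrative lookup (not Cacti code): the 32-bit ifTable octet
# counters shown in the poller cache, and their 64-bit "high capacity"
# equivalents from the IF-MIB ifXTable (RFC 2863).
HC_EQUIVALENT = {
    ".1.3.6.1.2.1.2.2.1.10": ".1.3.6.1.2.1.31.1.1.1.6",   # ifInOctets  -> ifHCInOctets
    ".1.3.6.1.2.1.2.2.1.16": ".1.3.6.1.2.1.31.1.1.1.10",  # ifOutOctets -> ifHCOutOctets
}

def to_hc(oid: str) -> str:
    """Rewrite a 32-bit octet-counter OID (with its ifIndex suffix) to
    the 64-bit equivalent; other OIDs are returned unchanged."""
    base, _, index = oid.rpartition(".")
    hc = HC_EQUIVALENT.get(base)
    return f"{hc}.{index}" if hc else oid

# The first poller-cache entry above, rewritten:
print(to_hc(".1.3.6.1.2.1.2.2.1.10.1"))  # -> .1.3.6.1.2.1.31.1.1.1.6.1
```

In Cacti this swap is done through the templates rather than by editing OIDs by hand, as the replies below explain.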
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany

Post by gandalf »

The predefined bandwidth graph is indeed a 32-bit graph.
You may use the 64-bit graph and add the bandwidth graph template's lines.
Reinhard
pzserg
Posts: 10
Joined: Tue Jan 27, 2009 5:52 am

Thank you, but I have a question

Post by pzserg »

So if I understood right, I must modify the template "Interface - Traffic (bits/sec)" and add two lines, "total in" and "total out", and not use the total template. Is that right?
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany

Re: Thank you, but I have a question

Post by gandalf »

pzserg wrote: So if I understood right, I must modify the template "Interface - Traffic (bits/sec)" and add two lines, "total in" and "total out", and not use the total template. Is that right?
Wrong. Use the 64-bit template as a base. You may want to copy it instead of modifying the original one.
Reinhard
pzserg
Posts: 10
Joined: Tue Jan 27, 2009 5:52 am

Thank you very much

Post by pzserg »

Thank you very much

I changed ifInOctets and ifOutOctets in the data queries to ifHCInOctets and ifHCOutOctets for total bandwidth.