Gaps on cacti rrd graphics
Hello,
Sometimes I get a weird issue on my graphs, like in the attached picture.
The attached screenshot shows three graphs from one router. The first and second graphs are fine, but the third one has some blank regions. As far as I can see, sometimes one or both data sources don't provide information to the graph.
Is it a bug or a feature? Can I fix it?
Thanks!
- Attachments
- cacti-n.png (422.17 KiB)
Last edited by pchel on Mon Apr 09, 2012 6:17 pm, edited 1 time in total.
Re: Blank ranges on cacti rrd graphics
Try increasing the snmpbulkwalk setting for that host.
--
Live fast, die young
You're sucking up my bandwidth.
J.P. Pasnak,CD
CCNA, LPIC-1
http://www.warpedsystems.sk.ca
Re: Blank ranges on cacti rrd graphics
Linegod wrote: Try increasing the snmpbulkwalk setting for that host.
Thank you for your suggestion, but it didn't help.
Do you have any other thoughts?
Thanks
- Attachments
- cacti-n2.png (196.86 KiB)
Re: Blank ranges on cacti rrd graphics
Try increasing the timeout to that host.
It could be that the tunnel interface is too low on the tree, and is taking too long to get the information back.
You could try creating another device, with just that interface on it. Without seeing what else is going on in your network, or that device, it is difficult to say.
--
Live fast, die young
You're sucking up my bandwidth.
J.P. Pasnak,CD
CCNA, LPIC-1
http://www.warpedsystems.sk.ca
Re: Blank ranges on cacti rrd graphics
Linegod wrote: Try increasing the timeout to that host. It could be that the tunnel interface is too low on the tree, and is taking too long to get the information back. You could try creating another device, with just that interface on it. Without seeing what else is going on in your network, or that device, it is difficult to say.
This host is in the same place as Cacti. RTA is 0.62 ms. Also, this host has only 7 graphs.
Code:
Description ID Graphs Data Sources Status In State Hostname Current (ms) Average (ms) Availability
renton-gw2 3 7 8 Up - 10.10.49.4 0.66 0.62 100
Thanks
Re: Blank ranges on cacti rrd graphics
http://docs.cacti.net/manual:087:4_help.2_debugging
--
Live fast, die young
You're sucking up my bandwidth.
J.P. Pasnak,CD
CCNA, LPIC-1
http://www.warpedsystems.sk.ca
Re: Blank ranges on cacti rrd graphics
Linegod wrote: http://docs.cacti.net/manual:087:4_help.2_debugging
Thank you for the link. There is nothing there about any issue with this host.
For example
Code:
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DEBUG: HOST COMPLETE: About to Exit Host Polling Thread Function
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] Total Time: 0.018 Seconds
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[44] SNMP: v2: 10.10.49.4, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.16, value: 5580480
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[44] SNMP: v2: 10.10.49.4, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.16, value: 5015312
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[40] SNMP: v2: 10.10.49.4, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.12, value: 2217278175
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[40] SNMP: v2: 10.10.49.4, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.12, value: 1970743241
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[39] SNMP: v2: 10.10.49.4, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.10, value: 3791714708
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[39] SNMP: v2: 10.10.49.4, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.10, value: 3326899693
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[35] SNMP: v2: 10.10.49.4, dsname: cisco_memfree, oid: .1.3.6.1.4.1.9.9.48.1.1.1.6.1, value: 417951100
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[36] SNMP: v2: 10.10.49.4, dsname: cisco_memused, oid: .1.3.6.1.4.1.9.9.48.1.1.1.5.1, value: 109577836
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[37] SNMP: v2: 10.10.49.4, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.1, value: 1478501771
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[37] SNMP: v2: 10.10.49.4, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.1, value: 2378435078
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[38] SNMP: v2: 10.10.49.4, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.9, value: 2447115190
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[38] SNMP: v2: 10.10.49.4, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.9, value: 2786286390
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[34] SNMP: v2: 10.10.49.4, dsname: 5min_cpu, oid: .1.3.6.1.4.1.9.9.109.1.1.1.1.5.1, value: 3
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] NOTE: There are '13' Polling Items for this Host
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] Recache DataQuery[1] OID: .1.3.6.1.2.1.1.3.0, output: 14193699
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] RECACHE: Processing 1 items in the auto reindex cache for '10.10.49.4'
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] SNMP Result: Host responded to SNMP
Code:
04/09/2012 04:05:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_38.rrd --template traffic_in:traffic_out 1334012700:2786286390:2447115190
04/09/2012 04:05:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_5min_cpu_34.rrd --template 5min_cpu 1334012700:3
04/09/2012 04:05:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_37.rrd --template traffic_out:traffic_in 1334012700:1478501771:2378435078
04/09/2012 04:05:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_cisco_memused_36.rrd --template cisco_memused 1334012700:109577836
04/09/2012 04:05:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_cisco_memfree_35.rrd --template cisco_memfree 1334012700:417951100
04/09/2012 04:05:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_39.rrd --template traffic_in:traffic_out 1334012700:3791714708:3326899693
04/09/2012 04:05:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_40.rrd --template traffic_in:traffic_out 1334012700:2217278175:1970743241
04/09/2012 04:05:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_44.rrd --template traffic_in:traffic_out 1334012700:5580480:5015312
04/09/2012 04:00:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_5min_cpu_34.rrd --template 5min_cpu 1334012400:3
04/09/2012 04:00:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_38.rrd --template traffic_out:traffic_in 1334012400:1929018181:2716259258
04/09/2012 04:00:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_37.rrd --template traffic_out:traffic_in 1334012400:894460550:1792786654
04/09/2012 04:00:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_cisco_memused_36.rrd --template cisco_memused 1334012400:109561624
04/09/2012 04:00:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_cisco_memfree_35.rrd --template cisco_memfree 1334012400:417968392
04/09/2012 04:00:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_39.rrd --template traffic_in:traffic_out 1334012400:3276211709:3258979527
04/09/2012 04:00:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_40.rrd --template traffic_in:traffic_out 1334012400:2181037608:1933120030
04/09/2012 04:00:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_44.rrd --template traffic_in:traffic_out 1334012400:5568886:5004956
04/09/2012 03:55:02 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_38.rrd --template traffic_in:traffic_out 1334012101:2651795295:1400072067
04/09/2012 03:55:02 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_5min_cpu_34.rrd --template 5min_cpu 1334012101:3
04/09/2012 03:55:02 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_37.rrd --template traffic_out:traffic_in 1334012101:304346410:1201493956
04/09/2012 03:55:02 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_cisco_memused_36.rrd --template cisco_memused 1334012101:109540140
04/09/2012 03:55:02 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_cisco_memfree_35.rrd --template cisco_memfree 1334012101:417989876
04/09/2012 03:55:02 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_39.rrd --template traffic_in:traffic_out 1334012101:2751338016:3199948691
04/09/2012 03:55:02 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_40.rrd --template traffic_in:traffic_out 1334012101:2154103075:1905272331
04/09/2012 03:55:02 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_44.rrd --template traffic_in:traffic_out 1334012101:5556420:4994160
Code:
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[40] SNMP: v2: 10.10.49.4, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.12, value: 2217278175
04/09/2012 04:05:00 PM - SPINE: Poller[0] Host[3] TH[1] DS[40] SNMP: v2: 10.10.49.4, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.12, value: 1970743241
04/09/2012 04:05:01 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/renton-gw2_traffic_in_40.rrd --template traffic_in:traffic_out 1334012700:2217278175:1970743241
Re: Gaps on cacti rrd graphics
Which data source is the one having issues (i.e., 37 or 40)?
--
Live fast, die young
You're sucking up my bandwidth.
J.P. Pasnak,CD
CCNA, LPIC-1
http://www.warpedsystems.sk.ca
Re: Gaps on cacti rrd graphics
Linegod wrote: Which data source is the one having issues (i.e., 37 or 40)?
40
Re: Gaps on cacti rrd graphics
Do an 'rrdtool info' on that data source, and see what the maximum data source value is - it may have picked up a low setting from the tunnel interface, and be discarding values that are too high.
--
Live fast, die young
You're sucking up my bandwidth.
J.P. Pasnak,CD
CCNA, LPIC-1
http://www.warpedsystems.sk.ca
Re: Gaps on cacti rrd graphics
Linegod wrote: Do an 'rrdtool info' on that data source, and see what the maximum data source value is - it may have picked up a low setting from the tunnel interface, and be discarding values that are too high.
The output is below. The maximum was set to 'U' in the data source settings. Today I set it to 10000000, but I don't see any change in the RRD file. Should I change it manually with rrdtool?
Thanks
Code:
pavelbsd# rrdtool info /usr/local/share/cacti/rra/renton-gw2_traffic_in_40.rrd
filename = "/usr/local/share/cacti/rra/renton-gw2_traffic_in_40.rrd"
rrd_version = "0003"
step = 300
last_update = 1334026200
header_size = 3496
ds[traffic_in].index = 0
ds[traffic_in].type = "COUNTER"
ds[traffic_in].minimal_heartbeat = 600
ds[traffic_in].min = 0.0000000000e+00
ds[traffic_in].max = 1.0000000000e+05
ds[traffic_in].last_ds = "3100665260"
ds[traffic_in].value = 0.0000000000e+00
ds[traffic_in].unknown_sec = 0
ds[traffic_out].index = 1
ds[traffic_out].type = "COUNTER"
ds[traffic_out].minimal_heartbeat = 600
ds[traffic_out].min = 0.0000000000e+00
ds[traffic_out].max = 1.0000000000e+05
ds[traffic_out].last_ds = "3771916056"
ds[traffic_out].value = 0.0000000000e+00
ds[traffic_out].unknown_sec = 0
rra[0].cf = "AVERAGE"
rra[0].rows = 500
rra[0].cur_row = 366
rra[0].pdp_per_row = 1
rra[0].xff = 5.0000000000e-01
rra[0].cdp_prep[0].value = NaN
rra[0].cdp_prep[0].unknown_datapoints = 0
rra[0].cdp_prep[1].value = NaN
rra[0].cdp_prep[1].unknown_datapoints = 0
rra[1].cf = "AVERAGE"
rra[1].rows = 600
rra[1].cur_row = 229
rra[1].pdp_per_row = 1
rra[1].xff = 5.0000000000e-01
rra[1].cdp_prep[0].value = NaN
rra[1].cdp_prep[0].unknown_datapoints = 0
rra[1].cdp_prep[1].value = NaN
rra[1].cdp_prep[1].unknown_datapoints = 0
rra[2].cf = "AVERAGE"
rra[2].rows = 700
rra[2].cur_row = 57
rra[2].pdp_per_row = 6
rra[2].xff = 5.0000000000e-01
rra[2].cdp_prep[0].value = 6.8454890000e+04
rra[2].cdp_prep[0].unknown_datapoints = 0
rra[2].cdp_prep[1].value = 1.0592219000e+05
rra[2].cdp_prep[1].unknown_datapoints = 0
rra[3].cf = "AVERAGE"
rra[3].rows = 775
rra[3].cur_row = 444
rra[3].pdp_per_row = 24
rra[3].xff = 5.0000000000e-01
rra[3].cdp_prep[0].value = 2.2749897000e+05
rra[3].cdp_prep[0].unknown_datapoints = 0
rra[3].cdp_prep[1].value = 2.1120137844e+05
rra[3].cdp_prep[1].unknown_datapoints = 2
rra[4].cf = "AVERAGE"
rra[4].rows = 797
rra[4].cur_row = 480
rra[4].pdp_per_row = 288
rra[4].xff = 5.0000000000e-01
rra[4].cdp_prep[0].value = 1.2230305666e+06
rra[4].cdp_prep[0].unknown_datapoints = 2
rra[4].cdp_prep[1].value = 1.0305293070e+06
rra[4].cdp_prep[1].unknown_datapoints = 6
rra[5].cf = "MAX"
rra[5].rows = 500
rra[5].cur_row = 263
rra[5].pdp_per_row = 1
rra[5].xff = 5.0000000000e-01
rra[5].cdp_prep[0].value = NaN
rra[5].cdp_prep[0].unknown_datapoints = 0
rra[5].cdp_prep[1].value = NaN
rra[5].cdp_prep[1].unknown_datapoints = 0
rra[6].cf = "MAX"
rra[6].rows = 600
rra[6].cur_row = 395
rra[6].pdp_per_row = 1
rra[6].xff = 5.0000000000e-01
rra[6].cdp_prep[0].value = NaN
rra[6].cdp_prep[0].unknown_datapoints = 0
rra[6].cdp_prep[1].value = NaN
rra[6].cdp_prep[1].unknown_datapoints = 0
rra[7].cf = "MAX"
rra[7].rows = 700
rra[7].cur_row = 356
rra[7].pdp_per_row = 6
rra[7].xff = 5.0000000000e-01
rra[7].cdp_prep[0].value = 1.8896714031e+04
rra[7].cdp_prep[0].unknown_datapoints = 0
rra[7].cdp_prep[1].value = 3.6785399380e+04
rra[7].cdp_prep[1].unknown_datapoints = 0
rra[8].cf = "MAX"
rra[8].rows = 775
rra[8].cur_row = 345
rra[8].pdp_per_row = 24
rra[8].xff = 5.0000000000e-01
rra[8].cdp_prep[0].value = 4.3464033333e+04
rra[8].cdp_prep[0].unknown_datapoints = 0
rra[8].cdp_prep[1].value = 3.6785399380e+04
rra[8].cdp_prep[1].unknown_datapoints = 2
rra[9].cf = "MAX"
rra[9].rows = 797
rra[9].cur_row = 701
rra[9].pdp_per_row = 288
rra[9].xff = 5.0000000000e-01
rra[9].cdp_prep[0].value = 9.7894435305e+04
rra[9].cdp_prep[0].unknown_datapoints = 2
rra[9].cdp_prep[1].value = 8.8855183023e+04
rra[9].cdp_prep[1].unknown_datapoints = 6
Re: Gaps on cacti rrd graphics
Changing it in the GUI will not change it in the RRD file (it only takes effect if you blow the RRD file away).
You need to use 'rrdtool tune' to adjust ds[traffic_in].max and ds[traffic_out].max to a higher value (I would suggest one order of magnitude higher than your current input, so 10000000000).
--
Live fast, die young
You're sucking up my bandwidth.
J.P. Pasnak,CD
CCNA, LPIC-1
http://www.warpedsystems.sk.ca
Re: Gaps on cacti rrd graphics
Linegod wrote: Changing it in the GUI will not change it in the RRD file. You need to use 'rrdtool tune' to adjust ds[traffic_in].max and ds[traffic_out].max to a higher value (one order of magnitude higher than your current input, so 10000000000).
I did the following:
Code:
rrdtool tune /usr/local/share/cacti/rra/renton-gw2_traffic_in_40.rrd --maximum traffic_in:U --maximum traffic_out:U
Thanks
Re: Gaps on cacti rrd graphics
Linegod wrote: You need to use 'rrdtool tune' to adjust ds[traffic_in].max and ds[traffic_out].max to a higher value.
It works for me. Thank you!
Should I file a feature request with the developers for applying 'rrdtool tune' to RRD files from the web interface?
Thanks
Re: Gaps on cacti rrd graphics
pchel wrote: Should I file a feature request with the developers for applying 'rrdtool tune' to RRD files from the web interface?
We have looked at doing it before, but it is not currently on the roadmap - too many security concerns.
Thanks
--
Live fast, die young
You're sucking up my bandwidth.
J.P. Pasnak,CD
CCNA, LPIC-1
http://www.warpedsystems.sk.ca