NaN on all graphs

Post support questions that directly relate to Linux/Unix operating systems.

Moderators: Developers, Moderators

Post Reply
mechanid
Posts: 1
Joined: Tue Mar 26, 2013 7:53 am

NaN on all graphs

Post by mechanid »

Hi, I installed a fresh version 0.8.8a and also tried a few previous versions. I get "NaN" on all graphs.
I am following the debugging guide at
http://docs.cacti.net/manual:087:4_help ... #debugging
I would really appreciate any help.

1 Check Cacti Log File
Debug logging on - no errors.

2 Check Basic Data Gathering
According to the log, and when running from the console, all is fine:
bash-4.1$ perl /home/cacti/scripts/query_unix_partitions.pl get available /dev/md0
410182

3 Check Cacti's Poller:
I don't know why, but "php -q cmd.php 1 1" (host id is 1) gives no output, while plain "php -q cmd.php" does:
php -q cmd.php
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] SNMP: Host responded to SNMP
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] RECACHE: Processing 2 items in the auto reindex cache for '127.0.0.1'.
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] RECACHE DQ[1] OID: .1.3.6.1.2.1.1.3.0
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] RECACHE DQ[1] OID: .1.3.6.1.2.1.1.3.0, output: 1668706
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] RECACHE DQ[2] OID: .1.3.6.1.2.1.1.3.0
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] RECACHE DQ[2] OID: .1.3.6.1.2.1.1.3.0, output: 1668709
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[46] CMD: perl /home/cacti/scripts/query_unix_partitions.pl get used /dev/mapper/sysKtkh-vartmp, output: 68620
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[46] CMD: perl /home/cacti/scripts/query_unix_partitions.pl get available /dev/mapper/sysKtkh-vartmp, output: 1890732
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[45] CMD: perl /home/cacti/scripts/query_unix_partitions.pl get used /dev/mapper/sysKtkh-tmp, output: 70476
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[45] CMD: perl /home/cacti/scripts/query_unix_partitions.pl get available /dev/mapper/sysKtkh-tmp, output: 1888876
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[44] CMD: perl /home/cacti/scripts/query_unix_partitions.pl get used /dev/mapper/sysKtkh-root, output: 30997596
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[44] CMD: perl /home/cacti/scripts/query_unix_partitions.pl get available /dev/mapper/sysKtkh-root, output: 416845244
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[43] SNMP: v1: 127.0.0.1, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.3, output: 1530565643
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[42] SNMP: v1: 127.0.0.1, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.2, output: 732
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[43] SNMP: v1: 127.0.0.1, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.3, output: 326825210
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[42] SNMP: v1: 127.0.0.1, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.2, output: 6355016
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[41] CMD: perl /home/cacti/scripts/unix_processes.pl, output: 163
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[40] CMD: perl /home/cacti/scripts/unix_users.pl , output: 4
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[39] CMD: perl /home/cacti/scripts/loadavg_multi.pl, output: 1min:2.18 5min:0.93 10min:0.50
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[38] SNMP: v1: 127.0.0.1, dsname: mem_free, oid: .1.3.6.1.4.1.2021.4.6.0, output: 305464
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[37] SNMP: v1: 127.0.0.1, dsname: mem_cache, oid: .1.3.6.1.4.1.2021.4.15.0, output: 2656732
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[36] SNMP: v1: 127.0.0.1, dsname: mem_buffers, oid: .1.3.6.1.4.1.2021.4.14.0, output: 182476
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[35] SNMP: v1: 127.0.0.1, dsname: load_5min, oid: .1.3.6.1.4.1.2021.10.1.3.2, output: 0.93
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[34] SNMP: v1: 127.0.0.1, dsname: load_15min, oid: .1.3.6.1.4.1.2021.10.1.3.3, output: 0.50
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[33] SNMP: v1: 127.0.0.1, dsname: load_1min, oid: .1.3.6.1.4.1.2021.10.1.3.1, output: 2.18
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[32] SNMP: v1: 127.0.0.1, dsname: cpu_user, oid: .1.3.6.1.4.1.2021.11.50.0, output: 874362
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[31] SNMP: v1: 127.0.0.1, dsname: cpu_system, oid: .1.3.6.1.4.1.2021.11.52.0, output: 181871
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[30] SNMP: v1: 127.0.0.1, dsname: cpu_nice, oid: .1.3.6.1.4.1.2021.11.51.0, output: 13176
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[29] CMD: perl /home/cacti/scripts/linux_memory.pl SwapFree:, output: 4162348
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[28] CMD: perl /home/cacti/scripts/linux_memory.pl MemFree:, output: 305620
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[47] CMD: perl /home/cacti/scripts/query_unix_partitions.pl get used /dev/md0, output: 60051
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[47] CMD: perl /home/cacti/scripts/query_unix_partitions.pl get available /dev/md0, output: 410182
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Time: 0.5622 s, Theads: N/A, Hosts: 1

4 Check Bulkwalk Behaviour (SNMP Data Queries only)
The problem is not limited to SNMP data queries, but I set the SNMP version to 1 anyway - no help. Also, the log above shows data being retrieved via SNMP.

5 Check RRD File Update:
I ran:
bash-4.1$ /usr/bin/rrdtool update /home/cacti/rra/localhost_hdd_free_47.rrd --template hdd_used:hdd_free 1364306712:60051:410182
No output.
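For what it's worth, rrdtool update prints nothing on success, so no output here is the expected result. One thing worth double-checking is the timestamp: an update is rejected if its timestamp is not newer than the file's last_update. A minimal sketch of building a safe retry (path, template, and values taken from the command above; the command is printed rather than executed since the rrd path is specific to this install):

```shell
# rrdtool rejects updates whose timestamp is <= the file's last_update,
# so derive the timestamp from the current clock instead of hardcoding it.
TS=$(date +%s)
# Print the command rather than running it, since the rrd path is
# specific to this install:
echo "/usr/bin/rrdtool update /home/cacti/rra/localhost_hdd_free_47.rrd --template hdd_used:hdd_free ${TS}:60051:410182"
```

After a successful update, `rrdtool lastupdate` on the same file should echo back the timestamp and values just written.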

6 Check RRD File Ownership
cactiuser is a valid system user with a shell, etc. I also tried chmod a+w *.rrd; still NaN on the graphs.
ls -l ./rra/
total 2680
-rw-rw-rw- 1 cactiuser cactiuser 47992 Mar 26 06:20 localhost_cpu_nice_17.rrd
-rw-rw-rw- 1 cactiuser cactiuser 47992 Mar 26 07:05 localhost_cpu_nice_30.rrd
-rw-rw-rw- 1 cactiuser cactiuser 47992 Mar 26 06:20 localhost_cpu_system_18.rrd
-rw-rw-rw- 1 cactiuser cactiuser 47992 Mar 26 07:05 localhost_cpu_system_31.rrd
-rw-rw-rw- 1 cactiuser cactiuser 47992 Mar 26 06:20 localhost_cpu_user_19.rrd
-rw-rw-rw- 1 cactiuser cactiuser 47992 Mar 26 07:05 localhost_cpu_user_32.rrd
-rw-rw-rw- 1 cactiuser cactiuser 94816 Mar 26 06:20 localhost_hdd_free_13.rrd
-rw-rw-rw- 1 cactiuser cactiuser 94816 Mar 26 06:20 localhost_hdd_free_14.rrd
-rw-rw-rw- 1 cactiuser cactiuser 94816 Mar 26 06:20 localhost_hdd_free_15.rrd
-rw-rw-rw- 1 cactiuser cactiuser 94816 Mar 26 06:20 localhost_hdd_free_16.rrd
-rw-rw-rw- 1 cactiuser cactiuser 94816 Mar 26 07:05 localhost_hdd_free_44.rrd
-rw-rw-rw- 1 cactiuser cactiuser 94816 Mar 26 07:05 localhost_hdd_free_45.rrd
-rw-rw-rw- 1 cactiuser cactiuser 94816 Mar 26 07:05 localhost_hdd_free_46.rrd
-rw-rw-rw- 1 cactiuser cactiuser 94816 Mar 26 07:07 localhost_hdd_free_47.rrd
-rw-rw-rw- 1 cactiuser cactiuser 47992 Mar 26 06:20 localhost_load_15min_21.rrd

7 Check RRD File Numbers
bash-4.1$ /usr/bin/rrdtool fetch /home/cacti/rra/localhost_traffic_in_43.rrd AVERAGE
traffic_in traffic_out

1364217300: -nan -nan
1364217600: -nan -nan
1364217900: -nan -nan
1364218200: -nan -nan
1364218500: -nan -nan
1364218800: -nan -nan
1364219100: -nan -nan
1364219400: -nan -nan
1364219700: -nan -nan
1364220000: -nan -nan
...and more NaN rows follow.
8 Check RRD File Info (rrdtool info)
I have the issue on all graphs, even on the Linux "Logged in Users" graph, which just outputs a single value. I don't believe this is an error in all the graph templates.
Example:
bash-4.1$ /usr/bin/rrdtool info /home/cacti/rra/localhost_traffic_in_43.rrd
filename = "/home/cacti/rra/localhost_traffic_in_43.rrd"
rrd_version = "0003"
step = 300
last_update = 1364307001
ds[traffic_in].type = "COUNTER"
ds[traffic_in].minimal_heartbeat = 600
ds[traffic_in].min = 0.0000000000e+00
ds[traffic_in].max = 1.0000000000e+09
ds[traffic_in].last_ds = "327821916"
ds[traffic_in].value = 1.7736521739e+03
ds[traffic_in].unknown_sec = 0
ds[traffic_out].type = "COUNTER"
ds[traffic_out].minimal_heartbeat = 600
ds[traffic_out].min = 0.0000000000e+00
ds[traffic_out].max = 1.0000000000e+09
ds[traffic_out].last_ds = "1540734394"
ds[traffic_out].value = 1.4936458194e+04
ds[traffic_out].unknown_sec = 0
rra[0].cf = "AVERAGE"
rra[0].rows = 600
rra[0].cur_row = 293
rra[0].pdp_per_row = 1
rra[0].xff = 5.0000000000e-01
rra[0].cdp_prep[0].value = NaN
rra[0].cdp_prep[0].unknown_datapoints = 0
rra[0].cdp_prep[1].value = NaN
rra[0].cdp_prep[1].unknown_datapoints = 0
rra[1].cf = "AVERAGE"
rra[1].rows = 700
rra[1].cur_row = 252
rra[1].pdp_per_row = 6
rra[1].xff = 5.0000000000e-01
rra[1].cdp_prep[0].value = 3.5788898791e+03
rra[1].cdp_prep[0].unknown_datapoints = 0
rra[1].cdp_prep[1].value = 3.5817086563e+04
rra[1].cdp_prep[1].unknown_datapoints = 0
rra[2].cf = "AVERAGE"
rra[2].rows = 775
rra[2].cur_row = 517
rra[2].pdp_per_row = 24
rra[2].xff = 5.0000000000e-01
rra[2].cdp_prep[0].value = 3.5788898791e+03
rra[2].cdp_prep[0].unknown_datapoints = 0
rra[2].cdp_prep[1].value = 3.5817086563e+04
rra[2].cdp_prep[1].unknown_datapoints = 0
rra[3].cf = "AVERAGE"
rra[3].rows = 797
rra[3].cur_row = 89
rra[3].pdp_per_row = 288
rra[3].xff = 5.0000000000e-01
rra[3].cdp_prep[0].value = 1.6650030555e+04
rra[3].cdp_prep[0].unknown_datapoints = 162
rra[3].cdp_prep[1].value = 1.9560943957e+05
rra[3].cdp_prep[1].unknown_datapoints = 162
rra[4].cf = "MAX"
rra[4].rows = 600
rra[4].cur_row = 163
rra[4].pdp_per_row = 1
rra[4].xff = 5.0000000000e-01
rra[4].cdp_prep[0].value = NaN
rra[4].cdp_prep[0].unknown_datapoints = 0
rra[4].cdp_prep[1].value = NaN
rra[4].cdp_prep[1].unknown_datapoints = 0
rra[5].cf = "MAX"
rra[5].rows = 700
rra[5].cur_row = 312
rra[5].pdp_per_row = 6
rra[5].xff = 5.0000000000e-01
rra[5].cdp_prep[0].value = 1.8050140262e+03
rra[5].cdp_prep[0].unknown_datapoints = 0
rra[5].cdp_prep[1].value = 2.4215907221e+04
rra[5].cdp_prep[1].unknown_datapoints = 0
rra[6].cf = "MAX"
rra[6].rows = 775
rra[6].cur_row = 661
rra[6].pdp_per_row = 24
rra[6].xff = 5.0000000000e-01
rra[6].cdp_prep[0].value = 1.8050140262e+03
rra[6].cdp_prep[0].unknown_datapoints = 0
rra[6].cdp_prep[1].value = 2.4215907221e+04
rra[6].cdp_prep[1].unknown_datapoints = 0
rra[7].cf = "MAX"
rra[7].rows = 797
rra[7].cur_row = 32
rra[7].pdp_per_row = 288
rra[7].xff = 5.0000000000e-01
rra[7].cdp_prep[0].value = 4.3202046980e+03
rra[7].cdp_prep[0].unknown_datapoints = 162
rra[7].cdp_prep[1].value = 5.8578832215e+04
rra[7].cdp_prep[1].unknown_datapoints = 162
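One thing the info output above makes clear: both data sources are of type COUNTER, which means the rrd stores per-second rates derived from consecutive raw readings, not the raw counter values themselves. A quick sketch of that arithmetic, using two traffic_in readings that appear in this thread (326825210 from the poller log, 327821916 from last_ds above), taking the 300 s polling step for illustration:

```shell
# Consecutive raw COUNTER readings and the polling step:
PREV=326825210
CUR=327821916
STEP=300
# The stored value is the counter delta divided by the elapsed seconds:
RATE=$(( (CUR - PREV) / STEP ))
echo "$RATE"   # -> 3322 bytes/s, well under the 1e9 max shown above
```

A value in this range is consistent with the rra[1] AVERAGE prep values above, so the counters themselves look healthy.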

9 Check RRDTool Graph Statement
For all graphs it's OK:
RRDTool Command:
/usr/bin/rrdtool graph - \
--imgformat=PNG \
--start=-86400 \
--end=-300 \
--title='Localhost - Logged in Users' \
--rigid \
--base=1000 \
--height=120 \
--width=500 \
--alt-autoscale-max \
--lower-limit='0' \
--vertical-label='users' \
--slope-mode \
--font TITLE:10: \
--font AXIS:7: \
--font LEGEND:8: \
--font UNIT:7: \
DEF:a="/home/cacti/rra/localhost_users_40.rrd":'users':AVERAGE \
AREA:a#4668E4FF:"Users" \
GPRINT:a:LAST:"Current\:%8.0lf" \
GPRINT:a:AVERAGE:"Average\:%8.0lf" \
GPRINT:a:MAX:"Maximum\:%8.0lf\n"
RRDTool Says:
OK

10 Miscellaneous
mysql> select count(*) from poller_output;
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row in set (0.00 sec)
omegafoo
Posts: 4
Joined: Mon Mar 25, 2013 2:47 pm

Re: NaN on all graphs

Post by omegafoo »

mechanid wrote:4 Check Bulkwalk Behaviour (SNMP Data Queries only)
The problem is not limited to SNMP data queries, but I set the SNMP version to 1 anyway - no help. Also, the log above shows data being retrieved via SNMP.

5 Check RRD File Update:
I ran:
bash-4.1$ /usr/bin/rrdtool update /home/cacti/rra/localhost_hdd_free_47.rrd --template hdd_used:hdd_free 1364306712:60051:410182
No output.
4) What output do you get from this: `snmpwalk -v 1 -c <RO_String> 127.0.0.1 1.3.6.1.2.1.1.3.0`?

5) You should not be executing that yourself; you should be seeing it in your log file. Do you see it there?
fr0gi
Posts: 1
Joined: Mon Apr 15, 2013 5:47 am

Re: NaN on all graphs

Post by fr0gi »

I've got the same problem. I installed Cacti 0.8.8a on CentOS and Debian.

snmpwalk -v 1 -c <RO_String> 127.0.0.1 1.3.6.1.2.1.1.3.0
iso.3.6.1.2.1.1.3.0 = Timeticks: (477750) 1:19:37.50

What am I doing wrong?
User avatar
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany
Contact:

Re: NaN on all graphs

Post by gandalf »

mechanid wrote:03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[43] SNMP: v1: 127.0.0.1, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.3, output: 1530565643
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[42] SNMP: v1: 127.0.0.1, dsname: traffic_out, oid: .1.3.6.1.2.1.2.2.1.16.2, output: 732
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[43] SNMP: v1: 127.0.0.1, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.3, output: 326825210
03/26/2013 07:00:45 AM - CMDPHP: Poller[0] Host[1] DS[42] SNMP: v1: 127.0.0.1, dsname: traffic_in, oid: .1.3.6.1.2.1.2.2.1.10.2, output: 6355016
So you get valid data for traffic.
5 Check RRD File Update:
I ran:
bash-4.1$ /usr/bin/rrdtool update /home/cacti/rra/localhost_hdd_free_47.rrd --template hdd_used:hdd_free 1364306712:60051:410182
No output.
This is for hdd, not for traffic. For the sake of debugging it is better to stick to one graph/rrd file.
7 Check RRD File Numbers
bash-4.1$ /usr/bin/rrdtool fetch /home/cacti/rra/localhost_traffic_in_43.rrd AVERAGE
traffic_in traffic_out

1364217300: -nan -nan
1364217600: -nan -nan
1364217900: -nan -nan
1364218200: -nan -nan
1364218500: -nan -nan
1364218800: -nan -nan
1364219100: -nan -nan
1364219400: -nan -nan
1364219700: -nan -nan
1364220000: -nan -nan
...and more NaN rows follow.
As you get valid traffic data, we would expect to find it in the rrd file. So the issue lies between fetching the data and updating the rrd file (step 7). Either permissions are an issue, or the rrd file is being updated with a timestamp higher than the current one, or the updated data exceeds a configured rrd max limit, or ...
So you should repeat step 7 for the traffic rrd file.
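A sketch of that re-run, moved over to the traffic file (the rrd path and DS names come from this thread; the counter values are hypothetical placeholders just above the last_ds values shown earlier, and the timestamp must exceed the file's last_update, so the current clock is used; the commands are printed rather than executed since the paths are specific to this install):

```shell
RRD=/home/cacti/rra/localhost_traffic_in_43.rrd
TS=$(date +%s)
# Step 5 equivalent for traffic: feed one sample by hand
# (placeholder counters, just above the last_ds values shown earlier):
echo "/usr/bin/rrdtool update $RRD --template traffic_in:traffic_out ${TS}:327900000:1540800000"
# Step 7: read the last half hour back and look for non-NaN rows:
echo "/usr/bin/rrdtool fetch $RRD AVERAGE --start -1800"
```

If the manual sample shows up in the fetch output but the poller's samples never do, the problem is in Cacti's update path (permissions or timestamps) rather than in rrdtool itself.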
R.