Blank graphs in cacti

nothingmore
Posts: 4
Joined: Mon Mar 23, 2009 11:09 am

Blank graphs in cacti

Post by nothingmore »

Hi all.

I'm facing the problem of blank graphs in cacti.

Using cacti-0.8.7d / rrdtool-1.3.5_1.

Info is gathered via SNMP, and the rrd file has been created:

Code: Select all

/usr/local/bin/rrdtool create \
/usr/local/share/cacti/rra/apc1_snmp_oid_14.rrd \
--step 300  \
DS:snmp_oid:GAUGE:600:0:200 \
RRA:AVERAGE:0.5:1:600 \
RRA:AVERAGE:0.5:6:700 \
RRA:AVERAGE:0.5:24:775 \
RRA:AVERAGE:0.5:288:797 \
RRA:MAX:0.5:1:600 \
RRA:MAX:0.5:6:700 \
RRA:MAX:0.5:24:775 \
RRA:MAX:0.5:288:797

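For reference, the retention implied by those RRA definitions can be worked out from the step and row counts. A quick sketch (numbers taken directly from the create command above):

```python
# Rough sketch: how long each RRA in the create command above retains data,
# given the --step of 300 seconds shown in the post.
STEP = 300  # seconds per primary data point

# (steps_per_row, rows) pairs taken from the RRA:AVERAGE / RRA:MAX lines
rras = [(1, 600), (6, 700), (24, 775), (288, 797)]

for steps, rows in rras:
    span_seconds = STEP * steps * rows
    print(f"RRA {steps:>3} steps x {rows} rows -> {span_seconds / 86400:.1f} days")
```

So the finest-grained RRA covers roughly two days, and the coarsest (288 steps of 300 s = one day per row) covers 797 days.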
Code: Select all

ls -la apc1_snmp_oid_14.rrd
-rw-rw-rw-  1 root  cacti  47992 Mar 23 16:52 apc1_snmp_oid_14.rrd
The item exists in the poller cache:

Code: Select all

SNMP Version: 1, Community: SecretCOMM, OID: 1.3.6.1.4.1.318.1.1.12.2.3.1.1.2.1
	RRD: /usr/local/share/cacti/rra/apc1_snmp_oid_14.rrd 
The poller is installed as a cron task:

Code: Select all

crontab -l | grep poller
*/5 * * * * /usr/local/bin/php /usr/local/share/cacti/poller.php > /dev/null 2>&1
I also tried this one, but the result is the same:

Code: Select all

crontab -l | grep poller
*/5 * * * * cacti /usr/local/bin/php /usr/local/share/cacti/poller.php > /dev/null 2>&1
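One thing worth noting about that second variant: a user field like `cacti` is only valid in the system crontab (`/etc/crontab` or `/etc/cron.d`). In a per-user crontab (edited with `crontab -e`) there are exactly five schedule fields, so the sixth token is treated as the start of the command. An illustrative sketch (not cron's actual code, just the field-splitting rule):

```python
# Illustrative sketch of how cron splits a crontab line. In a per-user
# crontab there are 5 schedule fields and no user field, so a 6th token
# like "cacti" becomes part of the command itself. Only /etc/crontab and
# /etc/cron.d entries carry a run-as user field.
def command_of(line: str, system_crontab: bool = False) -> str:
    fields = line.split()
    skip = 6 if system_crontab else 5  # schedule fields (+ user in system crontab)
    return " ".join(fields[skip:])

entry = "*/5 * * * * cacti /usr/local/bin/php /usr/local/share/cacti/poller.php"

# Per-user crontab: cron would try to execute "cacti ..." as the command.
print(command_of(entry))
# System crontab: "cacti" is the run-as user; the command starts at php.
print(command_of(entry, system_crontab=True))
```

So in a per-user crontab that second entry would fail silently every five minutes, since there is no `cacti` executable to run.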
The log file contains records for this item only when the poller has been started manually:

Code: Select all

grep 1.3.6.1.4.1.318.1.1.12.2.3.1.1.2.1 /usr/local/share/cacti/log/cacti.log | awk '{print $1,$2,$14,$15,$16,$17}'
03/23/2009 04:52:21 oid: 1.3.6.1.4.1.318.1.1.12.2.3.1.1.2.1, output: 74
03/23/2009 05:07:48 oid: 1.3.6.1.4.1.318.1.1.12.2.3.1.1.2.1, output: 74

MySQL access is granted.

It seems to be a cron mistake, but I can't find where it is.
What might be wrong?

Thanks in advance for help.
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany
Contact:

Post by gandalf »

Please post a screenshot of the empty graph as well as a Graph Debug from Graph Management
Reinhard
nothingmore
Posts: 4
Joined: Mon Mar 23, 2009 11:09 am

Post by nothingmore »

Here it is.

Thanks.
Attachments
screen.JPG (52 KiB)
nothingmore
Posts: 4
Joined: Mon Mar 23, 2009 11:09 am

Post by nothingmore »

It seems the rrd file is not being updated automatically. How can this be fixed?

# grep 'rrdtool update /usr/local/share/cacti/rra/apc' /usr/local/share/cacti/log/cacti.log
03/23/2009 04:52:22 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/apc1_snmp_oid_14.rrd --template snmp_oid 1237823541:74
03/23/2009 06:06:19 PM - POLLER: Poller[0] CACTI2RRD: /usr/local/bin/rrdtool update /usr/local/share/cacti/rra/apc1_snmp_oid_14.rrd --template snmp_oid 1237827978:75
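One detail worth checking in those two log lines (my reading, using the heartbeat of 600 s from the create command earlier in the thread): the epoch timestamps are far more than a heartbeat apart, and rrdtool stores any gap longer than the heartbeat as unknown, which graphs as blank.

```python
# Comparing the two manual update timestamps from the CACTI2RRD lines above
# against the data source heartbeat (DS:snmp_oid:GAUGE:600:... -> 600 s).
t1, t2 = 1237823541, 1237827978  # epoch times from the log lines above
heartbeat = 600
gap = t2 - t1
print(f"{gap} s between updates; heartbeat is {heartbeat} s")
print("interval stored as UNKNOWN" if gap > heartbeat else "interval within heartbeat")
```

Occasional manual runs more than ten minutes apart therefore cannot produce plottable data, even though each individual update succeeds.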
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany
Contact:

Post by gandalf »

Please see 2nd link of my sig
Reinhard
warpik
Posts: 5
Joined: Tue Jun 02, 2009 8:14 am

Post by warpik »

Can you post that link here? I seem to have the exact same problem, but I can't see your signature anymore.

thanks


edit: now I see it :) http://docs.cacti.net/manual:087:4_help ... #debugging
warpik
Posts: 5
Joined: Tue Jun 02, 2009 8:14 am

Post by warpik »

When I run rrdtool fetch against my rrd files, I see only NaNs inside:

Code: Select all

$ rrdtool fetch localhost_load_1min_5.rrd AVERAGE
1243935600: nan nan nan
1243935900: nan nan nan
1243936200: nan nan nan
1243936500: nan nan nan
1243936800: nan nan nan
1243937100: nan nan nan
...and so on

Code: Select all

$ rrdtool info localhost_load_1min_5.rrd | grep ds
ds[load_1min].type = "GAUGE"
ds[load_1min].minimal_heartbeat = 600
ds[load_1min].min = 0.0000000000e+00
ds[load_1min].max = 5.0000000000e+02
ds[load_1min].last_ds = "0.07"
ds[load_1min].value = 7.0899010000e-01
ds[load_1min].unknown_sec = 45
ds[load_5min].type = "GAUGE"
ds[load_5min].minimal_heartbeat = 600
ds[load_5min].min = 0.0000000000e+00
ds[load_5min].max = 5.0000000000e+02
ds[load_5min].last_ds = "0.03"
ds[load_5min].value = 3.0385290000e-01
ds[load_5min].unknown_sec = 45
ds[load_15min].type = "GAUGE"
ds[load_15min].minimal_heartbeat = 600
ds[load_15min].min = 0.0000000000e+00
ds[load_15min].max = 5.0000000000e+02
ds[load_15min].last_ds = "0.00"
ds[load_15min].value = 0.0000000000e+00
ds[load_15min].unknown_sec = 45
I have tried to follow the instructions found here: http://docs.cacti.net/manual:087:4_help ... #debugging

Do you have any idea what I am missing?
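One thing the `rrdtool info` output above can hide: even when `last_ds` holds real values, the consolidated AVERAGE rows come out as NaN if too many primary data points in each row were unknown. The RRAs Cacti creates use an xfiles factor (xff) of 0.5, meaning a row becomes NaN when more than half of its primary data points are unknown. A minimal simulation of that rule (not rrdtool's actual code):

```python
# Minimal simulation of the xff rule from RRA:AVERAGE:0.5:...: a
# consolidated row becomes NaN when more than xff = 0.5 of its primary
# data points (PDPs) are unknown (None here stands in for UNKNOWN).
import math

def consolidate(pdps, xff=0.5):
    known = [p for p in pdps if p is not None]
    if len(pdps) - len(known) > xff * len(pdps):  # too many unknowns
        return math.nan
    return sum(known) / len(known)

print(consolidate([92, 94, None, 93]))      # 1 of 4 unknown -> real average
print(consolidate([None, None, None, 92]))  # 3 of 4 unknown -> nan
```

So a fetch full of NaNs despite valid `last_ds` values usually means most individual updates were landing as unknown, e.g. because the interval between updates exceeded the 600 s heartbeat.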
warpik
Posts: 5
Joined: Tue Jun 02, 2009 8:14 am

Post by warpik »

I did not post a screenshot because my graphs look the same as the screenshot uploaded by nothingmore 4 posts above.
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany
Contact:

Post by gandalf »

You only showed some steps from debugging. To help better, I need the output of every debugging step.
Reinhard
warpik
Posts: 5
Joined: Tue Jun 02, 2009 8:14 am

Post by warpik »

I picked the 'Unix - Processes' script for debugging.
I turned on DEBUG mode, and then:

1. Check Cacti Log File

Code: Select all

$ cat cacti.log | grep DEBUG
shows a lot of stuff

Code: Select all

$ cat cacti.log | grep WARNING
shows nothing

Code: Select all

$ cat cacti.log | grep ERROR
shows nothing

Seems OK to me

2. Check Basic Data Gathering

Code: Select all

$ cd /var/www/cacti/scripts
$ perl unix_processes.pl
92$ perl unix_processes.pl
92$ perl unix_processes.pl
92$
Script returns '92'. Seems OK to me
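For context, what this step is verifying is that a data input script prints a plain numeric value (or `name:value` pairs) on stdout, which is all the Cacti poller consumes. A hypothetical check of that contract, using a stand-in command instead of the perl script:

```python
# Sketch: verify that a data-gathering command emits a single numeric
# value on stdout, the way unix_processes.pl does above. "echo 92" is a
# stand-in for the real script so the check is self-contained.
import subprocess

out = subprocess.run(["echo", "92"], capture_output=True, text=True).stdout.strip()
is_numeric = out.replace(".", "", 1).isdigit()
print("output:", out, "->", "OK" if is_numeric else "not numeric")
```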

3. Check Cacti's Poller

I can see the poller entry when typing crontab -l.

When launched manually, poller shows:

Code: Select all

$ /var/udash/u/uS.02/go/cacti_poller.sh
06/03/2009 01:23:46 PM - POLLER: Poller[0] DEBUG: About to Spawn a Remote Process [CMD: /usr/bin/php, ARGS: -q /var/www/cacti/cmd.php 0 1]
06/03/2009 01:23:47 PM - POLLER: Poller[0] Parsed MULTI output field '1min:0.00' [map 1min->load_1min]
06/03/2009 01:23:47 PM - POLLER: Poller[0] Parsed MULTI output field '5min:0.01' [map 5min->load_5min]
06/03/2009 01:23:47 PM - POLLER: Poller[0] Parsed MULTI output field '10min:0.00' [map 10min->load_15min]
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_mem_buffers_3.rrd --template mem_buffers 1244028226:13068
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_mem_swap_4.rrd --template mem_swap 1244028226:244072
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_load_1min_5.rrd --template load_1min:load_5min:load_15min 1244028226:0.00:0.01:0.00
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_users_6.rrd --template users 1244028226:3
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_proc_7.rrd --template proc 1244028226:94
06/03/2009 01:23:47 PM - SYSTEM STATS: Time:1.0592 Method:cmd.php Processes:1 Threads:N/A Hosts:2 HostsPerProcess:2 DataSources:5 RRDsProcessed:5
OK u:0.00 s:0.01 r:0.04
OK u:0.00 s:0.01 r:0.04
OK u:0.00 s:0.01 r:0.04
OK u:0.00 s:0.01 r:0.04
OK u:0.00 s:0.01 r:0.06
Seems OK to me

4. Check MySQL Update

I type:

Code: Select all

$ cat log/cacti.log | grep Exec:
I can see a lot of inserts there, so I pick one:

Code: Select all

06/03/2009 01:23:47 PM - CMDPHP: Poller[0] DEBUG: SQL Exec: "insert into poller_output (local_data_id, rrd_name, time, output) values (7, 'proc', '2009-06-03 13:23:46', '94')"
I insert it using the MySQL console:

Code: Select all

mysql> insert into poller_output (local_data_id, rrd_name, time, output) values (7, 'proc', '2009-06-03 13:23:46', '94');
Query OK, 1 row affected (0.00 sec)
Seems OK to me
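For anyone following along, what this step exercises is the `poller_output` handshake: `cmd.php` inserts rows, and `poller.php` is supposed to read them, push the values into the RRDs, and delete what it processed. Rows accumulating in the table would mean the consume side never runs. A simplified sketch of that cycle (assumption: schema reduced to the four columns shown in the insert above, using sqlite in place of MySQL):

```python
# Sketch of the poller_output insert/consume cycle, using an in-memory
# sqlite database and a simplified schema as a stand-in for MySQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE poller_output "
           "(local_data_id INT, rrd_name TEXT, time TEXT, output TEXT)")
# cmd.php's side: insert a gathered value (row taken from the log above)
db.execute("INSERT INTO poller_output VALUES "
           "(7, 'proc', '2009-06-03 13:23:46', '94')")

# poller.php's side, roughly: read pending rows, write RRDs, delete them
rows = db.execute("SELECT * FROM poller_output").fetchall()
db.execute("DELETE FROM poller_output")  # processed -> removed
leftover = db.execute("SELECT COUNT(*) FROM poller_output").fetchone()[0]
print(f"processed {len(rows)} row(s), {leftover} left over")
```

So a useful extra check at this step is `SELECT COUNT(*) FROM poller_output;` between polling cycles: a steadily growing count points at the consume side.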

5. Check RRD File Update

Code: Select all

$ tail -n 1000 log/cacti.log | grep rrdtool.update
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_mem_buffers_3.rrd --template mem_buffers 1244028226:13068
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_mem_swap_4.rrd --template mem_swap 1244028226:244072
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_load_1min_5.rrd --template load_1min:load_5min:load_15min 1244028226:0.00:0.01:0.00
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_users_6.rrd --template users 1244028226:3
06/03/2009 01:23:47 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_proc_7.rrd --template proc 1244028226:94
06/03/2009 01:38:00 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_mem_buffers_3.rrd --template mem_buffers 1244029079:6424
06/03/2009 01:38:00 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_mem_swap_4.rrd --template mem_swap 1244029079:244072
06/03/2009 01:38:00 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_load_1min_5.rrd --template load_1min:load_5min:load_15min 1244029079:0.00:0.00:0.00
06/03/2009 01:38:00 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_users_6.rrd --template users 1244029079:4
06/03/2009 01:38:00 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/cacti/rra/localhost_proc_7.rrd --template proc 1244028226:94
This shows one update statement per file for each polling run. Seems OK to me.
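One detail worth double-checking in that log (my reading, with the `minimal_heartbeat = 600` shown by `rrdtool info` in step 7): the two poller batches are stamped 1244028226 and 1244029079, about 14 minutes apart, and rrdtool stores any gap longer than the heartbeat as unknown.

```python
# Comparing the interval between the two poller batches in the log above
# against the 600 s minimal_heartbeat from the rrdtool info output.
t1, t2 = 1244028226, 1244029079  # epoch stamps from the CACTI2RRD lines
heartbeat = 600
gap = t2 - t1
print(f"{gap} s between poller runs; heartbeat {heartbeat} s ->",
      "interval stored as UNKNOWN" if gap > heartbeat else "OK")
```

If the poller consistently runs further apart than the heartbeat, every update lands as unknown, the xff rule turns every consolidated row into NaN, and the graphs stay blank even though each individual update line in the log looks fine.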

6. Check RRD File Ownership

Code: Select all

$ ls -l
total 336
-rw-rw-r-- 1 udash udash 141640 Jun  3 13:38 localhost_load_1min_5.rrd
-rw-rw-r-- 1 udash udash  47992 Jun  3 13:38 localhost_mem_buffers_3.rrd
-rw-rw-r-- 1 udash udash  47992 Jun  3 13:38 localhost_mem_swap_4.rrd
-rw-rw-r-- 1 udash udash  47992 Jun  3 13:38 localhost_proc_7.rrd
-rw-rw-r-- 1 udash udash  47992 Jun  3 13:38 localhost_users_6.rrd
This is the same user as the one who runs the Cacti poller. Seems OK to me.

7. Check RRD File Numbers

Code: Select all

$ rrdtool info localhost_proc_7.rrd | grep ds
ds[proc].type = "GAUGE"
ds[proc].minimal_heartbeat = 600
ds[proc].min = 0.0000000000e+00
ds[proc].max = 1.0000000000e+03
ds[proc].last_ds = "100"
ds[proc].value = NaN
ds[proc].unknown_sec = 179
ds[proc].min and .max seem OK

But...

Code: Select all

$ rrdtool fetch localhost_proc_7.rrd AVERAGE
1244025000: nan
1244025300: nan
1244025600: nan
1244025900: nan
1244026200: nan
1244026500: nan
1244026800: nan
1244027100: nan
1244027400: nan
1244027700: nan
1244028000: nan
1244028300: nan
1244028600: nan
1244028900: nan
1244029200: nan
1244029500: nan
1244029800: nan
1244030100: nan
1244030400: nan
1244030700: nan
1244031000: nan
1244031300: nan
1244031600: nan
1244031900: nan
1244032200: nan
1244032500: nan
... and so on
The proc value 92 returned earlier is certainly between the min of 0 and the max of 1000.

I don't know what to do at this point :(
warpik
Posts: 5
Joined: Tue Jun 02, 2009 8:14 am

Post by warpik »

Anyone, pretty please?
khufure
Cacti User
Posts: 203
Joined: Wed Oct 24, 2007 5:47 pm
Location: San Francisco, CA
Contact:

Post by khufure »

warpik wrote: anyone pretty please?

Code: Select all

*/5 * * * * /usr/local/bin/php /usr/local/share/cacti/poller.php > /dev/null 2>&1
make it

Code: Select all

*/5 * * * * /usr/local/bin/php /usr/local/share/cacti/poller.php > /tmp/poller_log.txt 2>&1
Check that file to see what your poller says. I don't know if it will help, but it won't hurt to have the log available whenever you want it.

Maybe delete your rrd files, although they look OK to me. It seems like the problem is in transferring the information from the script into the RRD, which would point to something at the system level or perhaps at the PHP level. You've got the latest of a lot of this software. Are you running the latest PHP with memory_limit set to 128M or more in php.ini?

If you are really stuck, try upgrading to spine. It's faster and at this point it can't hurt. You can debug individual devices like:

Code: Select all

spine -V 5 -R -C $conf_file $DEVICE_TO_CHECK $DEVICE_TO_CHECK