[SOLVED] WARNING: Poller Output Table not Empty
I did the hack for the PHP_SELF issue the other day and it did let me upgrade to 0.8.7b.
Fingers were crossed that it might solve my problems with RRDs not updating, but unfortunately not. The only thing I can think of is that there must be an issue somewhere in the database, but I don't really know what to look for.
I have the same issue as tymbow, with certain data sources not updating. This was working until I updated to 0.8.7b from 0.8.6j(?). This is a segment from my log where the issue occurs:
Code:
03/25/2008 01:50:00 PM - POLLER: Poller[0] WARNING: Poller Output Table not Empty. Potential Data Source Issues for Data Sources: LowKBytesUsed(DS[2809]), LowKBytesUsed(DS[2810]), LowKBytesUsed(DS[2811]), LowKBytesUsed(DS[2812]), LowKBytesUsed(DS[3858]), LowKBytesUsed(DS[3859]), LowKBytesUsed(DS[3860]), LowKBytesUsed(DS[3863]), hrStorageSize(DS[4120]), hrStorageUsed(DS[4120]), hrStorageSize(DS[4121]), hrStorageUsed(DS[4121]), hrStorageSize(DS[4122]), hrStorageUsed(DS[4122]), hrStorageSize(DS[4123]), hrStorageUsed(DS[4123]), hrStorageSize(DS[4124]), hrStorageUsed(DS[4124]), hrStorageSize(DS[4125]), hrStorageUsed(DS[4125]), hrStorageSize(DS[4128]), hrStorageUsed(DS[4128])
Like tymbow's description, it is polling correctly but not updating the RRDs. In my case it is all related to graphs created by the Unix - Partition Information data query, but it may just be that it affects some of my more recent data sources. Some partitions work, but others do not.
It seems to be related to items missing from the poller cache or the poller_item table. I found that an entry is missing from the poller_item table for data sources that do not work properly. Shown below, local_data_id 4123 is a broken DS, while 4136 is a working one:
Code:
mysql> select local_data_id, rrd_name from poller_item where local_data_id=4123;
+---------------+---------------+
| local_data_id | rrd_name |
+---------------+---------------+
| 4123 | hrStorageSize |
| 4123 | hrStorageUsed |
+---------------+---------------+
2 rows in set (0.00 sec)
mysql> select local_data_id, rrd_name from poller_item where local_data_id=4136;
+---------------+-------------------+
| local_data_id | rrd_name |
+---------------+-------------------+
| 4136 | hrAllocationUnits |
| 4136 | hrStorageSize |
| 4136 | hrStorageUsed |
+---------------+-------------------+
3 rows in set (0.00 sec)
What fixed polling for me was this:
Code:
insert into poller_item(local_data_id,poller_id,host_id,action,hostname,snmp_community,snmp_version,snmp_port,snmp_timeout,rrd_name,rrd_path,rrd_num,rrd_step,rrd_next_step,arg1) values (4123,0,146,0,'xxxx','xxxx',1,161,500,'hrAllocationUnits','/usr/local/www/html/cacti/rra/xxx_hrstorageused_4123.rrd',3,300,0,'.1.3.6.1.2.1.25.2.3.1.5.10');
This is not really a solution, as the DS will break again if I rebuild the poller cache, and it has to be done for each broken DS. I haven't had the time to trace through how the poller cache gets rebuilt, but at some point I may.
Anybody have any suggestions, given these symptoms? Let me know if any further info is needed.
Thanks!
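A follow-up for anyone who wants to spot all of the broken data sources at once rather than checking IDs one by one: a read-only query along the following lines should do it. Treat it as a sketch against the stock 0.8.7 schema (the data_template_rrd and poller_item column names are assumptions from memory, so verify them against your own database first):
Code:
SELECT dtr.local_data_id, dtr.data_source_name
FROM data_template_rrd AS dtr
LEFT JOIN poller_item AS pi
       ON pi.local_data_id = dtr.local_data_id
      AND pi.rrd_name      = dtr.data_source_name
WHERE dtr.local_data_id > 0      -- skip template definitions, keep real data sources
  AND pi.local_data_id IS NULL;  -- DS field has no matching poller_item row
Expect some false positives for data sources that are intentionally not polled (disabled hosts, for example); the point is only to find entries like 4123 above without hunting for each one by hand.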
I have similar issues here....
Using cmd.php
Maximum Concurrent Poller Processes = 20
Cacti Version - 0.8.7b
Plugin Architecture - 2.1
Poller Type - CMD.php
Server Info - Linux 2.6.9-55.0.9.plus.c4
Web Server - Apache/2.0.59 (CentOS)
PHP - 5.1.6
PHP Extensions - libxml, xml, wddx, tokenizer, sysvshm, sysvsem, sysvmsg, standard, SimpleXML, sockets, SPL, shmop, session, Reflection, pspell, posix, mime_magic, iconv, hash, gmp, gettext, ftp, exif, date, curl, ctype, calendar, bz2, zlib, pcre, openssl, apache2handler, gd, ldap, mysql, mysqli, PDO, pdo_mysql, pdo_sqlite, snmp, eAccelerator
MySQL - 5.0.54
RRDTool - 1.2.23
SNMP - 5.1.2
Plugins
- Global Plugin Settings (settings - v0.3)
- Thresholds (thold - v0.3.6)
- Large Site Performance Booster for Cacti (boost - v1.6)
- Device Monitoring (monitor - v0.8)
- Network Discovery (discovery - v0.7)
- Network Tools (tools - v0.2)
- Syslog Monitoring (syslog - v0.5.2)
- Device Tracking (mactrack - v1.1)
- RRD Cleaner (rrdclean - v1.1)
- Update Checker (update - v0.4)
- FlowView (flowview - v0.4)
- Host Info (hostinfo - v0.2)
- Error Images (errorimage - v0.1)
- PHP Network Weathermap (weathermap - v0.941)
- Create Aggregate Graphs (aggregate - v0.63)
- Documents (docs - v0.1)
- report it! (ReportIt - v0.4.2)
03/26/2008 04:55:48 PM - CMDPHP: Poller[0] Host[48] DS[871] WARNING: Result from SNMP not valid. Partial Result:
03/26/2008 04:55:45 PM - POLLER: Poller[0] WARNING: Poller Output Table not Empty. Potential Data Source Issues for Data Sources: traffic_in(DS[585]), traffic_out(DS[585]), traffic_in(DS[586]), traffic_out(DS[586]), traffic_in(DS[587]), traffic_out(DS[587]), traffic_in(DS[588]), traffic_out(DS[588]), traffic_in(DS[589]), traffic_out(DS[589]), traffic_in(DS[590]), traffic_out(DS[590]), traffic_in(DS[591]), traffic_out(DS[591]), traffic_in(DS[592]), traffic_out(DS[592]), traffic_in(DS[593]), traffic_out(DS[593]), traffic_in(DS[594]), traffic_out(DS[594]), traffic_in(DS[595]), traffic_out(DS[595]), traffic_in(DS[596]), traffic_out(DS[596]), traffic_in(DS[597]), traffic_out(DS[597]), traffic_in(DS[598]), traffic_out(DS[598]), traffic_in(DS[599]), traffic_out(DS[599]), traffic_in(DS[600]), traffic_out(DS[600]), traffic_in(DS[601]), traffic_out(DS[601]), traffic_in(DS[602]), traffic_out(DS[602]), traffic_in(DS[692]), traffic_out(DS[692]), traffic_in(DS[693]), traffic_out(DS[693]), traffic_in(DS[694]), traffic_out(DS[694]), traffic_in(DS[695]), traffic_out(DS[695]), traffic_in(DS[696]), traffic_out(DS[696]), traffic_in(DS[697]), traffic_out(DS[697]), traffic_in(DS[698]), traffic_out(DS[698]), traffic_in(DS[699]), traffic_in(DS[888]), traffic_out(DS[888]), traffic_in(DS[889]), traffic_out(DS[889]), traffic_in(DS[890]), traffic_out(DS[890]), traffic_in(DS[891]), traffic_out(DS[891]), traffic_in(DS[892]), traffic_out(DS[892]), traffic_in(DS[893]), traffic_out(DS[893]), traffic_in(DS[894]), traffic_out(DS[894]), traffic_in(DS[895]), traffic_out(DS[895]), traffic_in(DS[916]), traffic_out(DS[916]), traffic_in(DS[917]), traffic_out(DS[917]), traffic_in(DS[918]), traffic_out(DS[918]), traffic_in(DS[919]), traffic_out(DS[919]), traffic_in(DS[920]), traffic_out(DS[920]), traffic_in(DS[921]), traffic_out(DS[921]), traffic_in(DS[922]), traffic_in(DS[990]), traffic_out(DS[990]), traffic_in(DS[991]), traffic_out(DS[991]), traffic_in(DS[1038]), traffic_out(DS[1038]), traffic_in(DS[1039]), traffic_out(DS[1039]), traffic_in(DS[1040]), traffic_out(DS[1040]), traffic_in(DS[1041]), traffic_out(DS[1041]), traffic_in(DS[1042]), traffic_out(DS[1042]), traffic_in(DS[1043]), traffic_out(DS[1043]), traffic_in(DS[1044]), traffic_out(DS[1044]), traffic_in(DS[1045]), traffic_out(DS[1045]), traffic_in(DS[1046]), traffic_out(DS[1046]), traffic_in(DS[1047]), traffic_out(DS[1047]), traffic_in(DS[1048]), traffic_out(DS[1048]), traffic_in(DS[1049]), traffic_out(DS[1049]), traffic_in(DS[1179]), traffic_out(DS[1179]), traffic_in(DS[1180]), traffic_out(DS[1180]), traffic_in(DS[1181]), traffic_out(DS[1181]), traffic_in(DS[1182]), traffic_out(DS[1182]), traffic_in(DS[1183]), traffic_out(DS[1183]), traffic_in(DS[1184]), traffic_out(DS[1184]), traffic_in(DS[1185]), traffic_out(DS[1185]), traffic_in(DS[1186]), traffic_out(DS[1186]), traffic_in(DS[1187]), traffic_out(DS[1187]), traffic_in(DS[1188]), traffic_out(DS[1188]), traffic_in(DS[1189]), traffic_out(DS[1189]), traffic_in(DS[1190]), traffic_out(DS[1190]), traffic_in(DS[1258]), traffic_out(DS[1258]), traffic_in(DS[1259]), traffic_out(DS[1259]), traffic_in(DS[1260]), traffic_out(DS[1260]), traffic_in(DS[1261]), traffic_out(DS[1261]), traffic_in(DS[1262]), traffic_out(DS[1262]), traffic_in(DS[1263]), traffic_out(DS[1263]), traffic_in(DS[1264]), traffic_out(DS[1264]), traffic_in(DS[1265]), traffic_out(DS[1265]), traffic_in(DS[1266]), traffic_out(DS[1266]), traffic_in(DS[1267]), traffic_out(DS[1267]), traffic_in(DS[1268]), traffic_out(DS[1268]), traffic_in(DS[1269]), traffic_out(DS[1269]), 
traffic_in(DS[1270]), traffic_out(DS[1270]), traffic_in(DS[1435]), traffic_out(DS[1435])
03/26/2008 04:55:45 PM - SYSTEM STATS: Time:305.3993 Method:cmd.php Processes:20 Threads:N/A Hosts:84 HostsPerProcess:5 DataSources:2736 RRDsProcessed:2464
03/26/2008 04:55:45 PM - POLLER: Poller[0] Maximum runtime of 298 seconds exceeded. Exiting.
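When the poller blows past the window like the "Maximum runtime of 298 seconds exceeded" line above, it can help to see which hosts carry the most polling items. A rough, read-only sketch against the stock poller_item table (nothing in it is specific to any one install):
Code:
SELECT host_id, COUNT(*) AS items
FROM poller_item
GROUP BY host_id
ORDER BY items DESC
LIMIT 10;
Hosts at the top of that list are the first candidates for longer SNMP timeouts, more poller processes, or a switch to spine.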
I have a similar issue on a new cacti build that uses a shell script to poll some labs for logged in users. All labs were set up at the same time using the same method and templates.
It appears as if the data is collected properly by the script but then sometimes the rrd isn't updated. Other times it is, resulting in graphs that have gaps in them. The error hasn't happened in all rrds yet but they've only been live the last few hours. In the logs I see:
03/31/2008 03:00:01 PM - POLLER: Poller[0] WARNING: Poller Output Table not Empty. Potential Data Source Issues for Data Sources: (DS[36]), (DS[38]), (DS[39]), (DS[44]), (DS[45]), (DS[46]), (DS[47]), (DS[48]), (DS[49]), (DS[50]), (DS[51]), (DS[52]), (DS[53]), (DS[54]), (DS[55]), (DS[57]), (DS[58])
Attachment: graph_image.php.png
Well I may have fixed my build...I've now been through 10 cycles without seeing any:
04/01/2008 09:00:01 AM - POLLER: Poller[0] WARNING: Poller Output Table not Empty. Potential Data Source Issues for Data Sources: (DS[9]), (DS[10]), (DS[11]), (DS[12]), (DS[36]), (DS[38]), (DS[39]), (DS[40]), (DS[41]), (DS[42]), (DS[43]), (DS[44]), (DS[45]), (DS[46]), (DS[47]), (DS[48]), (DS[49]), (DS[50]), (DS[51]), (DS[52]), (DS[53]), (DS[54]), (DS[55]), (DS[56]), (DS[57]), (DS[58]), (DS[59])
04/01/2008 08:55:01 AM - POLLER: Poller[0] WARNING: Poller Output Table not Empty. Potential Data Source Issues for Data Sources: (DS[9]), (DS[10]), (DS[40]), (DS[44]), (DS[47]), (DS[50]), (DS[53]), (DS[54]), (DS[56]), (DS[58])
04/01/2008 10:08:02 AM - POLLER: Poller[0] WARNING: Poller Output Table not Empty. Potential Data Source Issues for Data Sources: (DS[44])
The solution? It wasn't changing from the default values:
Maximum Concurrent Poller Processes ->30
or
Script and Script Server Timeout Value -> 300
or
Poller Interval "every 1 minute" instead of "every 5 minutes" with a Cron Interval of "every 5 minutes".
The above three settings were set, and then I allowed the system to run for at least two cycles to see if it made a difference, with no luck. As I was getting "Poller has nothing to do" warnings with the 1 minute Poller Interval on, I decided to put it back to 5 minutes and then walked off to do some other work whilst I thought about it.
Fifty minutes later there have been no errors in any of the 10 cycles for any of the graphs, the first time that has happened since installation. Could it be possible that resetting the value to 5 minutes kicked something in that doesn't happen when the setting is untouched from a default install?
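For comparing notes, the settings above can also be read straight out of the database instead of clicking through the GUI. The setting names below are what I believe 0.8.7 stores in the settings table; this is only a sketch, so verify the exact names against include/global_settings.php in your install:
Code:
SELECT name, value
FROM settings
WHERE name IN ('poller_interval', 'cron_interval',
               'concurrent_processes', 'script_timeout');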
I found that in my install of 0.8.7b I would get this error with some perl scripts.
What I found was that if the host was down or a service was shut off, so that the Perl script returned a "Use of uninitialized value in ..." message, I would get the "WARNING: Poller Output Table not Empty." message.
I solved this in that situation by putting an "if ($content)" check into the script (where $content is the name of my variable) so the script would stop when it did not receive data from the target.
I also added a:
else {
    print "free:0 total:0\n";
}
So I am controlling the output in the event the host is down, but that is not always best in all cases.
mcrocker wrote: It seems to be related to items missing from the poller cache or poller_item table. I found that an entry is missing in the poller_item table for data sources that do not work properly.
I had a look at that myself and it appears I have the same issue. The rrd_name field is empty in the poller_item table for the problem data sources.
I'm really not sure what to do about it or how to fix it. Rebuilding the poller cache makes no difference. I could try to insert the missing data manually but I am loath to try this as I'm not confident enough to muck about with the database especially when I'm starting to wonder if there is a potential Cacti bug at work here.
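If you want to see how widespread the blank rrd_name symptom is before touching anything, a read-only check like this lists the affected rows (column names taken from the poller_item output earlier in the thread, but still treat it as a sketch):
Code:
SELECT local_data_id, rrd_name, rrd_path
FROM poller_item
WHERE rrd_name = '';
That at least tells you which local_data_id values to look at under Data Sources without inserting anything by hand.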
Has this ever been resolved by anyone?
I'm experiencing a similar issue. After upgrading to version 0.8.7b, I've been constantly trying to troubleshoot what I THOUGHT were NaN's in graphs. However, I've RARELY seen a graph NaN. I have seen it, and typically in my case either clearing the poller cache or recreating the graph fixes the NaN.
Currently I get a lot of the "WARNING: Poller Output Table not Empty. Potential Data Source Issues for Data Sources" errors on several devices. Any time I see this, I don't collect data. I never see host IDs, just the poller information with the associated query IDs. What is interesting is it's not consistent. It can show the poller output table not empty error for all queries in some cases... and in some cases, right in the middle of a poll it starts working again. My graphs are showing data inconsistently with gaps throughout... sometimes for days, sometimes minutes, sometimes hours. Very very strange.
So, has anyone been able to resolve this?
Thanks
Re: Has this ever been resolved by anyone?
I think I see what's going on... not sure how to fix it, though.
trwagner1 wrote: I'm experiencing a similar issue. After upgrading to version 0.8.7b, I've been constantly trying to troubleshoot what I THOUGHT were NaN's in graphs. However, I've RARELY seen a graph NaN. I have seen it, and typically in my case either clearing the poller cache or recreating the graph fixes the NaN.
Currently I get a lot of the "WARNING: Poller Output Table not Empty. Potential Data Source Issues for Data Sources" errors on several devices. Any time I see this, I don't collect data. I never see host IDs, just the poller information with the associated query IDs. What is interesting is it's not consistent. It can show the poller output table not empty error for all queries in some cases... and in some cases, right in the middle of a poll it starts working again. My graphs are showing data inconsistently with gaps throughout... sometimes for days, sometimes minutes, sometimes hours. Very very strange.
So, has anyone been able to resolve this?
Thanks
We upgraded to version 0.8.7b. Previously, cron was set up (cactiuser) to run poller.php every 5 minutes. The system settings for Cacti have polling at 5 and cron at 5. All appears OK.
poller.php defines the max runtime as 298 seconds.
Code:
define("MAX_POLLER_RUNTIME", 298);
We have 50 devices with 1866 data sources. I'm surprised that it takes this long.
Any advice on how to increase the polling interval... adjusting the numbers?
Thanks
- gandalf (Developer, Muenster, Germany)
Re: Has this ever been resolved by anyone?
trwagner1 wrote: But, then what I'm seeing is that the poller is taking LONGER than 5 minutes to run. When I show processes, the cactiuser is running poller twice....
This is known as the double poller issue; see the 2nd link of my sig to get rid of it.
Reinhard
Thanks Gandalf. Yep, had already read that, but the poller.php file contents I have didn't correspond to the posts in that link.
I did a bit more digging and research...
Add this one to your list, guys... There's ANOTHER reason why you can get gaps in graphs... check the box's overall load! The Linux box I'm using has doubled as a syslog server. The box was so busy receiving syslog messages that it was taking longer than 5 minutes to complete an entire poll cycle. We weren't getting NaN's, just no data.
What appeared to be happening was that the poller would not complete. A new instance was started by cron. The old one was dropped... no further queries were completed by the old one!
As soon as we disabled syslog, it has worked perfectly.
Ted
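One cheap check for the overloaded-box / double-poller case: the summary from the last completed cycle is kept in the database (in 0.8.7, poller.php writes it to the settings table as 'stats_poller', if memory serves), so you can watch the total runtime creep toward the 300-second cron window without grepping the log:
Code:
SELECT value FROM settings WHERE name = 'stats_poller';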
Hmm, I have this issue too. There is a strange thing: I get this warning IMMEDIATELY AFTER a host timeout:
Code:
09/15/2008 11:40:22 AM - SYSTEM STATS: Time:21.8781 Method:spine Processes:1 Threads:2 Hosts:29 HostsPerProcess:29 DataSources:7293 RRDsProcessed:2500
09/15/2008 11:35:20 AM - SYSTEM STATS: Time:20.7255 Method:spine Processes:1 Threads:2 Hosts:29 HostsPerProcess:29 DataSources:7293 RRDsProcessed:2500
09/15/2008 11:30:19 AM - SYSTEM STATS: Time:18.8150 Method:spine Processes:1 Threads:2 Hosts:29 HostsPerProcess:29 DataSources:7293 RRDsProcessed:2500
09/15/2008 11:30:00 AM - POLLER: Poller[0] WARNING: Poller Output Table not Empty. Potential Data Source Issues for Data Sources: discards_in(DS[1257]), discards_out(DS[1257]), errors_out(DS[1257]), discards_in(DS[1258]), discards_out(DS[1258]), errors_out(DS[1258]), discards_in(DS[1259]), discards_out(DS[1259]), errors_out(DS[1259]), discards_in(DS[1263]), discards_out(DS[1263]), errors_out(DS[1263]), discards_in(DS[1264]), discards_out(DS[1264]), errors_out(DS[1264]), discards_in(DS[1269]), discards_out(DS[1269]), errors_out(DS[1269]), discards_in(DS[1273]), discards_out(DS[1273]), errors_out(DS[1273]), discards_in(DS[1274]), discards_out(DS[1274]), errors_out(DS[1274]), discards_in(DS[1275]), discards_out(DS[1275]), errors_out(DS[1275]), discards_in(DS[1276]), discards_out(DS[1276]), errors_out(DS[1276]), discards_in(DS[1277]), discards_out(DS[1277]), errors_out(DS[1277]), discards_in(DS[1422]), discards_out(DS[1422]), errors_out(DS[1422]), discards_in(DS[1425]), discards_out(DS[1425]), errors_out(DS[1425]), discards_in(DS[1426]), discards_out(DS[1426]), errors_out(DS[1426]), traffic_in(DS[1436]), errors_in(DS[1444]), errors_in(DS[1445])
09/15/2008 11:25:22 AM - SYSTEM STATS: Time:21.7471 Method:spine Processes:1 Threads:2 Hosts:29 HostsPerProcess:29 DataSources:7293 RRDsProcessed:2472
09/15/2008 11:25:15 AM - SPINE: Poller[0] Host[24] DS[1433] WARNING: SNMP timeout detected [800 ms], ignoring host 's650'
09/15/2008 11:25:15 AM - SPINE: Poller[0] Host[24] DS[1432] WARNING: SNMP timeout detected [800 ms], ignoring host 's650'
09/15/2008 11:20:24 AM - SYSTEM STATS: Time:24.1787 Method:spine Processes:1 Threads:2 Hosts:29 HostsPerProcess:29 DataSources:7293 RRDsProcessed:2500
09/15/2008 11:15:21 AM - SYSTEM STATS: Time:20.0788 Method:spine Processes:1 Threads:2 Hosts:29 HostsPerProcess:29 DataSources:7293 RRDsProcessed:2500
09/15/2008 11:10:24 AM - SYSTEM STATS: Time:24.3203 Method:spine Processes:1 Threads:2 Hosts:29 HostsPerProcess:29 DataSources:7293 RRDsProcessed:2500
All data sources listed above are from 's650'. Does the poller forget about partially fetched data? E.g. if a timeout is encountered during polling after some data has already been fetched successfully, the poller forgets about all data from that host.
Debug in progress:
I toggled SNMP on one host to simulate this situation. After the poller exited, the select at line 256 of lib/poller.php showed non-null records:
Code:
select
poller_output.output,
poller_output.time,
poller_output.local_data_id,
poller_item.rrd_path,
poller_item.rrd_name,
poller_item.rrd_num
from (poller_output,poller_item)
where (poller_output.local_data_id=poller_item.local_data_id and poller_output.rrd_name=poller_item.rrd_name);
+--------+---------------------+---------------+----------------------------------------------------------+--------------+---------+
| output | time | local_data_id | rrd_path | rrd_name | rrd_num |
+--------+---------------------+---------------+----------------------------------------------------------+--------------+---------+
| U | 2008-09-15 16:40:02 | 244 | /usr/local/www/cacti-0.8.7b/rra/wf1044_errors_in_244.rrd | errors_out | 4 |
| U | 2008-09-15 16:40:02 | 244 | /usr/local/www/cacti-0.8.7b/rra/wf1044_errors_in_244.rrd | discards_out | 4 |
+--------+---------------------+---------------+----------------------------------------------------------+--------------+---------+
2 rows in set (0.00 sec)
Fix for this situation:
Code:
>diff -au lib/poller.php~ lib/poller.php
--- lib/poller.php~ 2008-09-15 17:08:52.000000000 +0400
+++ lib/poller.php 2008-09-15 17:08:52.000000000 +0400
@@ -320,6 +320,8 @@
}
}
}
+ /* delete undefined values from poller_output */
+ db_execute("delete from poller_output where output='U'");
api_plugin_hook_function('poller_output', $rrd_update_array);
if (api_plugin_hook_function('poller_on_demand', $results)) {
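If you would rather confirm the symptom before patching lib/poller.php, the same condition the added delete targets can be checked read-only, using the columns already shown in the select above:
Code:
SELECT local_data_id, rrd_name, time
FROM poller_output
WHERE output = 'U';
Any rows still sitting in poller_output when the next cycle starts are what produces the "Poller Output Table not Empty" warning.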