I've screwed something up somehow, but I'm not quite sure how. I can see cacti running the poller, and the poller is logging the results it's getting from the things we want to monitor, but the RRD files aren't getting updated. Here's what happened.
I added "noatime" to the mount options on the filesystem we have mysql and cacti and the rrd files installed on. When I restarted mysql and cacti, it wasn't updating anything. I stopped mysql remounted the filesystem again without the noatime option, then restarted mysql but the RRD files are still not updating. I dropped and re-created the poller_output table and even had cacti rebuild the poller table, and STILL the rrd files don't get updated.
I took a peek at the poller cache and it seems to contain all the data sources. And they're referring to the right fully-qualified path for the RRD file. The permissions haven't changed. The RRD files are owned by the same user that runs the poller. It doesn't matter if I use cmd.php or cactid - they both write the values they're getting to the log file but don't update the rrd files.
Any ideas? What did I do?
Aha! I think I know what happened, though I'm not entirely sure why. The last time the poller ran, it reported running out of space in the poller_output table (I had been running it as ENGINE=MEMORY). I dropped and re-created the table as a normal MyISAM table and the RRD files are getting updated again. (whew)
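For anyone else who hits this: the engine switch can also be done with a single ALTER TABLE instead of dropping and re-creating the table. A rough sketch only (the connection details are placeholders, not my real setup):

[code]
<?php
// Rough sketch only: switch Cacti's poller_output table back to MyISAM.
// Host/user/password/database below are placeholders for your own cacti DB.
$db = new mysqli('localhost', 'cactiuser', 'secret', 'cacti');
if ($db->connect_error) {
    die('connect failed: ' . $db->connect_error . "\n");
}

// ALTER TABLE rebuilds the table in place with the new storage engine.
if (!$db->query('ALTER TABLE poller_output ENGINE=MyISAM')) {
    die('alter failed: ' . $db->error . "\n");
}
echo "poller_output is now MyISAM\n";

// Going the other way later is the same statement with ENGINE=MEMORY,
// provided max_heap_table_size is big enough to hold a full poll cycle.
[/code]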
The memory config for mysql didn't change, so I'm not sure why a heap-based poller_output table worked fine before and didn't after restarting. Dunno what's up with that. But cacti/cactid (er, I mean spine? I think I saw it got renamed) performs fast enough for us right now without poller_output being memory-only. (And I'll re-try noatime sometime later to see whether it improves the polling time.)
So, for my own edjimication... I assume the poller_cache contains a list of things the poller needs to query, and the poller_output contains what the poller (be it cactid or cmd.php) got back? What actually updates the RRD files? Does the poller shell out and run rrdtool feeding it commands to open/close/update all the various RRD files using whatever is in the poller_output table?
- rony
- Developer/Forum Admin
bbice wrote:
So, for my own edjimication... I assume the poller_cache contains a list of things the poller needs to query, and the poller_output contains what the poller (be it cactid or cmd.php) got back? What actually updates the RRD files? Does the poller shell out and run rrdtool feeding it commands to open/close/update all the various RRD files using whatever is in the poller_output table?

Correct! You get a cookie! No, I'm trying to be funny, not sarcastic...
Poller.php is the process that updates the RRD files. It uses a process pipe and feeds multiple updates to the RRDTool process. Yes, it uses the values in the poller_output table.
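Conceptually it boils down to something like this. Just a rough sketch of the mechanism, not the real poller.php; the query and the RRD path below are simplified stand-ins:

[code]
<?php
// Sketch only, NOT the real poller.php: open one long-lived rrdtool process
// in pipe mode and stream an "update" command to it for every pending row in
// poller_output, instead of forking rrdtool once per data source.
$db      = new mysqli('localhost', 'cactiuser', 'secret', 'cacti'); // placeholders
$rrdtool = popen('rrdtool -', 'w');   // "rrdtool -" reads commands from stdin

$rows = $db->query('SELECT local_data_id, rrd_name, UNIX_TIMESTAMP(time) AS ts, output
                      FROM poller_output');
while ($row = $rows->fetch_assoc()) {
    // Placeholder path -- the real poller resolves each data source's
    // fully-qualified .rrd path from its data-source tables.
    $rrd_file = sprintf('/var/www/cacti/rra/%d.rrd', $row['local_data_id']);
    fwrite($rrdtool, sprintf("update %s --template %s %d:%s\n",
           $rrd_file, $row['rrd_name'], $row['ts'], $row['output']));
}
pclose($rrdtool);   // closing the pipe lets rrdtool flush and exit
// Once a row's value has been written to its RRD, the poller deletes it from
// poller_output, which is why the table stays small between runs.
[/code]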
Concerning your problem, I don't know why the HEAP table failed. Did you recently add more devices/datasources? This would explain the need for a larger memory table, which would require you to increase some settings in MySQL.
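If you do want to move poller_output back into memory later, max_heap_table_size is the setting that caps MEMORY (HEAP) tables. Something along these lines; the 64 MB figure and the connection details are only examples:

[code]
<?php
// Example only: inspect and raise the limit that caps MEMORY (HEAP) tables.
// A MEMORY poller_output starts failing with "table ... is full" at this size.
$db = new mysqli('localhost', 'cactiuser', 'secret', 'cacti'); // placeholders

$row = $db->query("SHOW VARIABLES LIKE 'max_heap_table_size'")->fetch_row();
printf("max_heap_table_size = %d bytes\n", $row[1]);

// Raise it for the running server (64 MB is only an example figure). Note that
// an existing MEMORY table keeps its old limit until it is re-created or
// ALTERed, and the change is lost on restart unless it also goes into my.cnf.
$db->query('SET GLOBAL max_heap_table_size = 67108864');
[/code]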
Tony Roman
Experience is what causes a person to make new mistakes instead of old ones.
There are only 3 ways to complete a project: Good, Fast or Cheap, pick two.
With age comes wisdom, what you choose to do with it determines whether or not you are wise.
rony wrote:
Correct! You get a cookie!

(heh heh) I was a programmer in a previous lifetime and sorta got shunted sideways into IT by accident. Plus I've been using cacti for 4 or 5 years so I oughta know at least a little about it by now. I haven't really dug into the source much yet but the plugin architecture looks like something I definitely want to start playing with.
rony wrote:
Concerning your problem, I don't know why the HEAP table failed. Did you recently add more devices/datasources? This would explain the need for a larger memory table, which would require you to increase some settings in MySQL.

I hadn't added any recently. That was why I was confused. I'd stopped and restarted mysql since the database files live on the filesystem I was going to umount and re-mount. But I wouldn't have thought that would cause the heap table to suddenly fail unless someone's been monkeying with the mysql config file without telling me, or unless mysql started using a different config file for some odd reason I can't fathom. (shrug)
In any event, it seems that switching back to MyISAM didn't actually hurt my performance at all. It's still within a second or two of what it used to be (around 6319 data sources in around 14 seconds). It's plenty fast for now. I'll look at switching back to a heap-based poller_output table (and/or using boost) later when I need to. I intend to eventually have every production server here instrumented and monitored in addition to the usual nagios up/down/degraded sorta monitoring we already do. So one day in the not too distant future I'll likely be pushing upwards of 60,000 data sources.