Newest Cacti or Cacti 0.6.3, with the newest RRDtool or an older RRDtool
What causes broken graphs like the blue area graph? The data is there, as shown in the clip below. The text file is written from the same variable (the last value on each line) by the same script that feeds the data to RRDtool.
Any help would be appreciated.
Note: the labels have been removed to protect my NDA.
Data clip
Wed Jun 26 16:45:23 MDT 2002 END 82
Wed Jun 26 16:51:41 MDT 2002 END 82
Wed Jun 26 16:55:37 MDT 2002 END 82
Wed Jun 26 16:56:32 MDT 2002 END 82
Wed Jun 26 17:00:27 MDT 2002 END 80
Wed Jun 26 17:05:23 MDT 2002 END 81
Wed Jun 26 17:10:25 MDT 2002 END 81
Wed Jun 26 17:15:25 MDT 2002 END 81
Wed Jun 26 17:20:27 MDT 2002 END 81
Wed Jun 26 17:25:25 MDT 2002 END 81
Wed Jun 26 17:30:33 MDT 2002 END 81
Wed Jun 26 17:35:21 MDT 2002 END 72
Wed Jun 26 17:40:22 MDT 2002 END 73
Wed Jun 26 17:45:22 MDT 2002 END 73
Wed Jun 26 17:50:20 MDT 2002 END 73
Wed Jun 26 17:55:21 MDT 2002 END 73
Wed Jun 26 18:00:27 MDT 2002 END 71
Wed Jun 26 18:05:22 MDT 2002 END 71
Wed Jun 26 18:10:22 MDT 2002 END 71
Wed Jun 26 18:15:22 MDT 2002 END 72
Wed Jun 26 18:20:24 MDT 2002 END 72
Wed Jun 26 18:25:21 MDT 2002 END 72
Wed Jun 26 18:30:27 MDT 2002 END 72
Wed Jun 26 18:35:22 MDT 2002 END 72
Wed Jun 26 18:40:23 MDT 2002 END 72
Wed Jun 26 18:45:23 MDT 2002 END 72
Wed Jun 26 18:50:22 MDT 2002 END 72
Wed Jun 26 18:55:21 MDT 2002 END 72
Wed Jun 26 19:00:25 MDT 2002 END 72
Wed Jun 26 19:05:21 MDT 2002 END 72
Wed Jun 26 19:10:23 MDT 2002 END 72
Wed Jun 26 19:15:23 MDT 2002 END 72
Wed Jun 26 19:20:23 MDT 2002 END 72
Wed Jun 26 19:25:20 MDT 2002 END 72
Wed Jun 26 19:30:21 MDT 2002 END 72
Wed Jun 26 19:36:23 MDT 2002 END 72
Wed Jun 26 19:40:23 MDT 2002 END 72
Wed Jun 26 19:44:22 MDT 2002 END 72
Wed Jun 26 19:48:23 MDT 2002 END 72
Wed Jun 26 19:52:22 MDT 2002 END 72
Wed Jun 26 19:56:22 MDT 2002 END 72
Wed Jun 26 20:00:24 MDT 2002 END 71
Wed Jun 26 20:04:22 MDT 2002 END 71
Wed Jun 26 20:08:22 MDT 2002 END 71
Wed Jun 26 20:12:22 MDT 2002 END 71
Wed Jun 26 20:16:23 MDT 2002 END 72
Wed Jun 26 20:20:23 MDT 2002 END 72
Wed Jun 26 20:24:22 MDT 2002 END 72
Wed Jun 26 20:28:22 MDT 2002 END 72
Wed Jun 26 20:32:22 MDT 2002 END 72
Wed Jun 26 20:36:22 MDT 2002 END 72
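For context, each line's last value is presumably pushed into the RRD with something along these lines (the RRD file name and the use of N for "now" are assumptions, not taken from the original script):

# Hypothetical feed step: push the most recent reading into the RRD.
# "N" means "use the current time"; an explicit epoch timestamp also works.
rrdtool update example.rrd N:72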
Missing datapoints (example): why?
I was actually logging on today to post about this same issue. I originally saw this on my test system, but did not see it on a small sampling of devices I tested at the same time on my production system. So I dismissed it at the time.
Now I am in the process of moving my work (MRTG with RRDtool) to production and am seeing the same kinds of gaps in the data on the production system.
In my environment the gaps occur across the board: all of my RRD files are affected, and for the same windows of time.
I am running Cacti 0.6.8 on Windows 2000 Server.
Broken areas load related?
I bumped the cron job from every 5 minutes to every 4 minutes and the broken areas are mostly gone, but while the log-rotation cron job runs the broken lines appear again (see the crontab sketch below). The data being graphed does not depend on the logs.
The cacti cmd.php finishes in less than 90 seconds.
Strange.
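For reference, the interval change amounts to something like this in the poller's crontab; the path to cmd.php is an assumption and will differ per install:

# before: run the Cacti poller every 5 minutes
# */5 * * * * php /var/www/cacti/cmd.php > /dev/null 2>&1
# after: run it every 4 minutes
*/4 * * * * php /var/www/cacti/cmd.php > /dev/null 2>&1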
It is strange. In my environment I am currently using Cacti as a GUI front end to RRD files that are produced externally by MRTG, so the Cacti cmd.php poller is not a factor in my situation.
And even though the RRD files are routinely updated, I still get these periodic gaps for every single monitoring point of every single device for which I am collecting data.
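One way to sanity-check gaps like that is to compare when the RRD was last written against its step and heartbeat; this is only a diagnostic sketch, and the file name is a placeholder:

# When was this RRD last updated, and with what values?
rrdtool lastupdate example.rrd
# What are its step and per-DS minimal_heartbeat? Updates spaced further
# apart than the heartbeat are stored as unknown, which shows up as gaps.
rrdtool info example.rrd | grep -E 'step|minimal_heartbeat|last_update'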
gaps in graphs (solution)
I was seeing this too, but I figured it out and fixed it. I'm transitioning to RRDtool/Cacti, so I'm still running MRTG every 5 minutes, and all my machines have their clocks synced with NTP. That meant MRTG was firing off (in parallel on 7 or 8 separate machines) at exactly the same moment Cacti would start trying to gather SNMP data to feed to RRDtool. This caused some SNMP queries to fail; I simply was not getting all the replies I should have.
As the first part of the solution, I changed my cron config to fire off Cacti 1 minute AFTER MRTG starts on the other machines, so the devices I'm querying aren't busy responding to MRTG at that moment (see the crontab sketch below). This fixed a lot of my missing-data problems, but not all of them.
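A minimal sketch of that offset, assuming MRTG fires on the usual 5-minute marks and the Cacti poller lives at the path shown (both assumptions):

# MRTG boxes keep polling on the 5-minute marks (0,5,10,...).
# On the Cacti box, start the poller 1 minute later (1,6,11,...,56):
1-56/5 * * * * php /var/www/cacti/cmd.php > /dev/null 2>&1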
Next, I opened up include/snmp_functions.php and added a few extra lines of code to implement SNMP retries: if a query fails, it is retried several times (mine is set to 5 attempts). This eliminated all of my lost data.
Here are the bits of code I changed/added in cacti_snmp_get():
// Use the full (non quick-print) SNMP output format.
snmp_set_quick_print(0);

// Retry the GET if it fails: snmpget() returns FALSE on failure, so a
// zero-length result means we did not get a usable reply.
$max_tries = 5;
$try = 0;
do {
    $snmp_value = snmpget($hostname, $community, $oid);
    $try++;
} while ((strlen($snmp_value) == 0) && ($try < $max_tries));