[HOWTO] Easily maintain 5 minute resolution for all graphs

time
Posts: 32
Joined: Mon Jun 27, 2005 6:30 pm

Post by time »

I know there have been other threads detailing how to maintain accurate 5 minute resolution for your graphs, but this method is really easy. All you have to do is import the four templates below; one of them (I assume) alters the RRAs so that all new graphs you create (not just the graphs included in these templates) keep 5 minute resolution. This does not change graphs you have already created, so it is probably best done on a new installation, or a small installation where you don't mind re-creating your graphs.

The developers, or people with test boxes, may want to experiment to see which of the templates actually does it. I have posted all four because I imported them into a fresh installation last week and confirmed that, without any other changes at all, the graphs created with them maintain 5 minute resolution.

Also note that one of the templates updates the default "Interface Traffic (bits/sec)" template with one that includes the percentage of bandwidth used and the totals underneath, as in the attached graph below.
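For anyone curious what "keeping 5 minute resolution" means at the RRA level, here is a rough sketch of how many rows an RRA needs if every 5-minute sample is stored across each of Cacti's usual graph timespans. The row counts are illustrative only; the actual RRA settings are whatever the templates define.

```python
# Rows needed in an RRA that keeps every 5-minute sample (step = 300 s)
# for each timespan. Illustrative numbers, not the templates' exact settings.
STEP = 300  # seconds between polls (Cacti's default 5-minute cycle)

timespans = {
    "daily":   1 * 24 * 3600,
    "weekly":  7 * 24 * 3600,
    "monthly": 31 * 24 * 3600,
    "yearly":  366 * 24 * 3600,
}

for name, seconds in timespans.items():
    rows = seconds // STEP
    # e.g. a yearly RRA at full resolution would look like RRA:AVERAGE:0.5:1:105408
    print(f"{name:8s} rows={rows}  -> RRA:AVERAGE:0.5:1:{rows}")
```

The key point is the `1` in the steps field: every RRA consolidates 1 primary data point per row, so nothing is averaged away, and the rows field grows to cover the whole timespan.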

Tim
Attachments
templates.zip
Extract 4 templates from the zip file and import them.
(47.71 KiB) Downloaded 560 times
Example of updated Interface Traffic (bits/sec) template.
graph example.png (39.55 KiB) Viewed 8973 times
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany
Contact:

Post by gandalf »

Thank you for your post.
I do not want to be rude, but I'd like to mention that RRD files created with those changed RRAs may be much bigger than usual. While this is not a disk space problem (disk is cheap), it may result in longer poller runtimes due to the amount of rrdtool disk interaction.
And when viewing graphs over a big timespan, you may get results that are difficult to interpret. See my rrdtool links (signature) for more.
Reinhard
time
Posts: 32
Joined: Mon Jun 27, 2005 6:30 pm

Post by time »

Hi Reinhard,

Indeed the RRD files created are bigger; in fact they are roughly 21.5 times bigger. For example, a normal RRD file with two data sources (e.g. traffic in/out) is 141,700 bytes, whereas the 5 minute RRD files are 3,053,432 bytes.
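As a rough back-of-envelope check on that size (assuming rrdtool stores each data point as an 8-byte double and that both AVERAGE and MAX RRAs are kept at full resolution for a year; the templates' real RRA layout may differ slightly, which would explain the gap):

```python
BYTES_PER_VALUE = 8    # rrdtool stores each data point as a double
DATA_SOURCES = 2       # e.g. traffic_in / traffic_out
CONSOLIDATIONS = 2     # assumption: AVERAGE and MAX both kept at full resolution

rows = 366 * 24 * 3600 // 300   # one year of 5-minute samples = 105,408 rows
payload = rows * DATA_SOURCES * CONSOLIDATIONS * BYTES_PER_VALUE
print(payload)  # raw value payload in bytes, before headers
```

That comes out to roughly 3.4 MB of raw values, the same ballpark as the observed 3,053,432-byte files.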

As you said, this uses more disk space, although disk is cheap these days and the extra size shouldn't really present a problem unless you are running on boxes with tiny disks. For example, I have roughly 7000 data sources and almost 4000 RRD files in my installation, and my entire Cacti directory is just under 10 gigabytes.

As for performance, I recently re-created about 3000 of my RRD files (to use 64-bit counters instead of 32-bit ones) and saw no real increase in polling time, even though most of those 3000 files were created ages ago at the small size and are now the new larger size. My installation is fairly large from what I gather, although it runs on fairly meagre hardware (Pentium 4 2.4 GHz, 1 GB RAM and one ordinary 40 GB IDE hard drive). These are typical polling times for me:
05/11/2006 09:05:30 AM - SYSTEM STATS: Time:28.4890 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
05/11/2006 09:10:20 AM - SYSTEM STATS: Time:18.8081 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
05/11/2006 09:15:33 AM - SYSTEM STATS: Time:32.1785 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
05/11/2006 09:20:27 AM - SYSTEM STATS: Time:25.8833 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
05/11/2006 09:25:30 AM - SYSTEM STATS: Time:28.9919 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
05/11/2006 09:30:27 AM - SYSTEM STATS: Time:25.9766 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
05/11/2006 09:35:24 AM - SYSTEM STATS: Time:22.5177 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
05/11/2006 09:40:25 AM - SYSTEM STATS: Time:24.0108 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
05/11/2006 09:45:29 AM - SYSTEM STATS: Time:28.0576 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
05/11/2006 09:50:25 AM - SYSTEM STATS: Time:23.4109 Method:cactid Processes:1 Threads:12 Hosts:189 HostsPerProcess:189 DataSources:6946 RRDsProcessed:3868
So I see no performance issues at this stage. I guess this is a tribute to the Cacti team and how efficiently the software runs.

As for viewing graphs with a large timespan, I'm not sure what you mean. I don't see any difference between graphs over a large timespan that use the 5 minute RRD files and ones that use normal files. Graphs over a large timespan still "compress" spikes so they appear smaller, but the benefit of the 5 minute RRD files is that you can zoom in on those little spikes and see the full detail of exactly how high each spike went - it is not averaged out over 30 minutes, 2 hours or a day.

I think it would be a handy feature in future versions of Cacti if maintaining 5 minute resolution were a simple option. That way, people who are worried about disk space or performance could opt not to keep the extra resolution. For everyone else, I believe the benefit of keeping all the data accurately outweighs the side effects (none of which I can see are that bad anyway), and judging from other posts on the topic this is something people have wanted.

I've also read the posts in your signature on achieving similar results by modifying RRAs and using the resize.pl script etc., and you've obviously put a lot of work into it. I believe this method is a lot easier for people, though, as all you have to do is import a few templates and voilà - any new graph you create maintains 5 minute resolution.

Tim
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany
Contact:

Post by gandalf »

time wrote: As for performance, I recently re-created about 3000 of these rrd files of mine (to use 64-bit counters instead of 32-bit ones) and saw no real increase in polling time. [...]
Fine, this is good to know. But I suppose that the rrdtool updates themselves are not counted as poller runtime. Anyway, your findings are worth noting; questions on this topic have been asked many times already.
time wrote: So I see no performance issues at this stage. I guess this is a tribute to the Cacti team and how efficiently the software runs.
I am not trying to insult the devs - the poller runtime is in fact a very good result. But the issue I was talking about is purely a matter of rrdtool disk interaction.
time wrote: As for viewing graphs with a large timespan I'm not sure what you mean. [...]
What I'm aiming at is something I call "graphical consolidation" :wink::
You may have data at 5 minute precision, but when graphing a timespan of, say, a few months, there are not enough pixels to represent each RRD value. So rrdtool "consolidates" the data to fit it into the graph, and in doing so it averages out spikes of short duration.

Try displaying a graph with 5 minute resolution over a timespan of 1 year, and note the maximum of the AVERAGE values displayed within the graph.
Then zoom into the graph, e.g. to a month's timespan, and again write down the highest value.
Now zoom again ...
You will notice that those "highest values" increase as the timespan decreases. While this is normal rrdtool behaviour, it is not obvious to "some" people.
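A tiny Python sketch of the effect (made-up numbers, purely to illustrate): average a year of flat 5-minute data containing one short spike into per-pixel bins, and the displayed maximum shrinks as the graph gets fewer pixels per sample.

```python
STEP_PER_DAY = 288                      # 5-minute samples per day
data = [10.0] * (365 * STEP_PER_DAY)    # flat baseline "traffic"
data[1000] = 500.0                      # one short 5-minute spike

def displayed_max(series, pixels):
    """Max of the per-pixel AVERAGEs -- what a graph of that width shows."""
    per_pixel = len(series) // pixels
    return max(
        sum(series[i:i + per_pixel]) / per_pixel
        for i in range(0, per_pixel * pixels, per_pixel)
    )

# Year view on a narrow graph, a zoomed view, and full resolution:
for pixels in (500, 5000, len(data)):
    print(pixels, displayed_max(data, pixels))
```

The fewer pixels the graph has, the more samples each pixel averages together, and the lower the "highest value" appears - exactly the behaviour described above.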

Personally, I prefer to display not only AVERAGE but MAXIMUM as well, to avoid misinterpretation. I hope you're able to reproduce what I'm talking about.

Reinhard
time
Posts: 32
Joined: Mon Jun 27, 2005 6:30 pm

Post by time »

gandalf wrote: The poller runtime is in fact a very good result. But the issue I was talking about is a pure rrdtool disk interaction thingy.
Good point. I've monitored disk usage with vmstat during polling, and disk activity does continue for just over a minute: polling takes ~25 seconds and rrdtool takes ~65 seconds to update all the files. I think this is still a pretty good result, so I don't see it as much of an issue at this stage either, but it is definitely something to keep an eye on.
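As a quick sanity check on those numbers (the file count and timings are from my installation above; the 300 second cycle is Cacti's default polling interval):

```python
RRD_FILES = 3868
POLL_SECONDS = 25      # cactid collecting values
UPDATE_SECONDS = 65    # rrdtool writing the files (observed via vmstat)
CYCLE_SECONDS = 300    # Cacti's default 5-minute polling cycle

updates_per_second = RRD_FILES / UPDATE_SECONDS
cycle_used = (POLL_SECONDS + UPDATE_SECONDS) / CYCLE_SECONDS
print(f"{updates_per_second:.0f} file updates/s, {cycle_used:.0%} of the cycle busy")
```

So even with the big files there is still plenty of headroom before disk activity fills the whole 5-minute cycle.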
gandalf wrote: Personally, I prefer to display not only AVERAGEs but MAXIMUM as well to avoid misinterpreting. I hope you're able to reproduce what I'm talking about.
Yep, I know what you mean - see the graphs below, which demonstrate the problem clearly. You can see the MAXIMUM change from 1 on the yearly view, to 12, and finally to 15 on the daily timespan. I will try to rework the graph templates so the MAXIMUM is shown on large timespans.

Tim
Attachments
Zoomed right in to 16th November last year
daily.png (27.65 KiB) Viewed 8850 times
Zoomed in to a week in November last year
weekly.png (27.9 KiB) Viewed 8850 times
Yearly graph
yearly.png (39.99 KiB) Viewed 8850 times
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany
Contact:

Post by gandalf »

Yep, I fully agree with both of your statements, and I'm happy that you've got the second one. Personally, I've found this very difficult to explain to colleagues; showing the graphs change makes things clearer ...
Reinhard