Edit Graph's start and end for temporary views
This may be a little difficult to describe, but some commercial apps have options like this:
When viewing the graphs, it'd be nice to view alternate time spans, maybe similarly to the way we view the preview when building the graph. So if you were looking at the last week, and wanted to view just a couple of hours off of that graph, you could "drill down" by specifying the start/end times.
I started trying to hack this out (with limited skills to work with), but I need to figure out how to pass my start and end variables through to other scripts with PHP (oh, and I also need to find some free time).
I thought I'd just use graph_items.php, removing the "view..." options on the side, making the graph larger, and adding a couple of text boxes for start and end, and then get those variables through to the actual graph_image.php. That way users could add/remove data sources on the fly if they wanted, and either just bookmark the graph or save the .gif off when they're done.
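Roughly what I'm picturing, as a very rough sketch (the graph_start/graph_end parameter names, the RRD path, and the rrdtool call are just placeholders for illustration, not Cacti's actual graph_image.php code):

<?php
// Hypothetical sketch: take a user-supplied time span and hand it to rrdtool.
// Parameter names, the RRD path, and the DEF/LINE items are made up for the example.

$start = isset($_GET['graph_start']) ? intval($_GET['graph_start']) : time() - 86400;
$end   = isset($_GET['graph_end'])   ? intval($_GET['graph_end'])   : time();

// Keep the span sane: the end time must come after the start time.
if ($end <= $start) {
    $end = $start + 300;
}

$rrd_file = '/var/www/cacti/rra/traffic_in.rrd';  // example path

$cmd = sprintf(
    'rrdtool graph - --imgformat PNG --start %d --end %d ' .
    'DEF:in=%s:traffic_in:AVERAGE LINE1:in#0000FF:Inbound',
    $start, $end, $rrd_file
);

// Stream the image straight back to the browser.
header('Content-type: image/png');
passthru($cmd);
?>

The form part would just be two text boxes that submit graph_start and graph_end to that script, so the resulting URL could be bookmarked like any other graph.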
I dunno, what do you guys think?
I agree with this feature.
Cacti is an excellent app for collecting data and graphing from multiple sources. Having the ability to look back in time, and "drill" down to a 5-min avg graph from anytime in the past year would be great!
Is the old 5-min info still in the rrd file? Does the 5-min info get compressed as time goes on (like in MRTG)?
Thanks for your time,
-Rob
Yes, the data is aggregated like it was in MRTG so getting detailed historical graphs isn't possible. As far as letting you build ad-hoc graphs or customized views of existing graphs, it's definitely something that Cacti needs. I'm envisioning the user clicking on a 'Customize Graph' button that gives them the graph and form fields to change what the graph displays and how. It might also be possible to save that 'custom' graph depending on user permissions.
Unfortunately, I think this feature will have to wait until 0.9. We've got *lots* to do with Cacti in 0.8 and I'd rather put out releases more frequently with fewer added features (and bugs) than take forever between releases. It's one of my annoyances about Debian Linux.
Definitely a valuable feature. Thanks for the input.
Rob.
I did something similar to this some time ago in Perl, using a form with start and end dates for the graph.
The data sources were 5-minute ones saved for a year or so, so the data was not aggregated (step=1, rows=a few hundred thousand); you just have to think a little differently when you design your RRAs. The RRD files might be a few megabytes, but disk is cheap.
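For example, roughly like this (just a sketch, with the file name and data source details made up): an rrdtool create driven from PHP that keeps a full year of raw 5-minute samples in a single RRA instead of consolidating them away.

<?php
// Sketch of an RRA layout that keeps un-aggregated 5-minute data for a year.
// The file name and data source definition are made up for the example.

$rows = 365 * 24 * 12;   // 12 five-minute samples per hour for a year = 105120 rows

$cmd = sprintf(
    'rrdtool create traffic.rrd --step 300 ' .
    'DS:traffic_in:COUNTER:600:0:U ' .
    'RRA:AVERAGE:0.5:1:%d',          // 1 step per row, so nothing gets averaged away
    $rows
);

exec($cmd, $output, $ret);
if ($ret != 0) {
    echo "rrdtool create failed\n";
}
?>

At 8 bytes per stored value that's under a megabyte per data source, which fits the "disk is cheap" argument.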
This is a feature I also eagerly await.
/Christian
Yep, we're on the same page. As I said, that'll probably have to wait until the next rev. I know that it's an important feature, but we're building things that are even more important right now. For example, I think I've decided how to handle adding support for decentralized polling in a large organization. This rev is mostly focused on how to get Cacti set up and get data collected in a larger organization, with a little focus on revising the viewing stuff. Once the data collection and setup is more robust, we can spend lots of time adding bells and whistles to the data presentation codebase.
As always, thanks for your input.
Rob.
Yep, there's more important stuff to add first, agreed.
Distributed polling would rule
I guess it could be solved by adding a column to the data source table telling us which poller should collect the data, then having that poller scan the table every once in a while to get its work orders. After the data is collected, it sends it to a central daemon that updates the RRD files.
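A rough sketch of what I mean (the table and column names, and the send_to_central_daemon() helper, are made up for illustration, not Cacti's real schema):

<?php
// Hypothetical sketch: each poller pulls its own "work orders" from the
// central database. Table/column names and the helper below are made up.

$poller_id = 3;  // this poller's identifier

$link = mysql_connect('central-db.example.com', 'cacti', 'secret');
mysql_select_db('cacti', $link);

$result = mysql_query(
    "SELECT id, data_input_cmd, rrd_path FROM data_source WHERE poller_id = $poller_id",
    $link
);

while ($row = mysql_fetch_assoc($result)) {
    // Collect the value locally...
    $value = trim(shell_exec($row['data_input_cmd']));

    // ...then hand it to a central daemon that does the actual RRD update.
    // (Transport deliberately left abstract; this helper is hypothetical.)
    send_to_central_daemon($row['rrd_path'], time(), $value);
}
?>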
/Christian
Actually, it looks like we'll be going with a slightly more robust solution than the one you're describing. Today I've implemented 'Poller Zones' in Cacti. This is still a rough concept of how it would work, but the idea is:
- You define your zones and as you add hosts to poll, you set which zone that host is in.
- You have your database replicated to each of the remote poller boxes (which would also be running Apache and PHP - more on that later)
- You set a flag in the DB that polling data has changed and pollers need to reload their configs.
- Periodically, a cron'ed script looks for that flag. If it sees it, a new poller config file is generated.
- At the beginning of each poller cycle, the poller compares the mod date of the running config to the mod date on the file. If it's changed, the poller reloads its config and starts polling again (a rough sketch of this check is below the list).
- In addition to the poller doing RRD writes locally, it also uses RRDD to broadcast them to the 'main' server. Once per X hours (probably 12) each poller pushes its RRD files out to the other pollers (Atlanta gets RRDs from NY, FL, WA, etc.; NY gets GA, FL, and WA). Redundant data storage is good for the soul, especially when it's RRD files that don't grow to be *really* large.
So, why do the remote boxes have Apache and PHP? If the city housing your web server (servers, actually, because they're load-balanced uber-servers, right?) spontaneously combusts, you can hit any of your poller boxes for reasonably recent data.
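To make the reload check concrete, here's a minimal sketch (the config path and the load_poller_config() helper are assumptions, not real code):

<?php
// Hypothetical sketch of the poller's config-reload check from the list above.
// The config path and load_poller_config() helper are made up for the example.

$config_file   = '/etc/cacti/poller_zone.conf';  // assumed path
$running_mtime = 0;                              // mtime of the config we loaded
$poller_config = array();

while (true) {
    clearstatcache();
    $file_mtime = filemtime($config_file);

    // If the file on disk is newer than what we're running with, reload it.
    if ($file_mtime > $running_mtime) {
        $poller_config = load_poller_config($config_file);  // hypothetical helper
        $running_mtime = $file_mtime;
    }

    // ... poll every host in $poller_config, write the RRDs locally,
    //     and broadcast the updates to the 'main' server via RRDD ...

    sleep(300);  // one polling cycle
}
?>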
Thoughts?
Rob
I guess this is the first time I officially heard your plan about 'Poller Zones', and let me stress that I *really* like it.
Just to recap, as you mentioned in a separate post, Cacti will generate XML files for each 'Zone', correct? My question is how we should handle distributing these XML files to each polling client. HTTP/SCP/FTP are all options I guess; not sure what you had in mind though.
Also, other than having RRDD running on each 'Zone' machine, I'm assuming that a stripped-down version of Cacti would have to be installed as well. This would handle parsing the XML file, executing the tasks, and generating the appropriate RRD code (for RRDD).
I guess the client I've mentioned would be the threaded C app that we have talked so much about. I just hope we can get some reincarnation of "Spine" rolling before 0.8 gets too close to release.
Just my 2 cents.
-Ian
Why poll when you can push?
How about just having the remote servers submit data via HTTP, similar to my add-on script:
http://www.raxnet.net/board/viewtopic.php?t=499
Just have cmd.php or whatever take CGI vars to insert the data directly into the RRDs. Since just inserting data should be very fast, it could easily handle 100 or more submissions a second.
That way all your servers don't have to have Apache and PHP installed, just wget and a few of the scripts to collect data, which is much more secure.
If you need to poll SNMP devices that can't send data themselves, you could just have separate cron tasks on the Cacti box collect and submit data the same way simultaneously.
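As a rough sketch of the receiving end (the script name rrd_submit.php, the parameter names, and the whitelist are all made up for illustration):

<?php
// rrd_submit.php -- hypothetical receiver for pushed data.
// Script name, parameter names, and the whitelist are made up for the example.

// Map an opaque data source id to a local RRD file so callers can never
// supply arbitrary filesystem paths.
$allowed = array(
    'router1_in' => '/var/www/cacti/rra/router1_in.rrd',
);

$ds    = isset($_GET['ds'])    ? $_GET['ds']            : '';
$value = isset($_GET['value']) ? intval($_GET['value']) : 0;

if (!isset($allowed[$ds])) {
    header('HTTP/1.0 400 Bad Request');
    exit;
}

// rrdtool update <file> N:<value>   ("N" means "now")
$cmd = sprintf('rrdtool update %s N:%d', $allowed[$ds], $value);
exec($cmd, $out, $ret);

echo ($ret == 0) ? "OK\n" : "FAILED\n";
?>

The remote box would then only need cron and something like wget to hit that URL with its latest value.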
I would like to see this as an option.
Two problems with that approach:
1) HTTP is *much* higher overhead than a single UDP packet (which is what RRDD uses)
2) The point of having Apache and PHP on the remote boxes is so that if your central server blows up, not only do you have a backup of the data on the remote servers but you can also view graphs based on it. And if your central POP just blew up or something, that could be a very important capability.
Rob.