Search found 17 matches
- Wed Jan 07, 2009 5:04 pm
- Forum: Feature Requests
- Topic: Metro Ethernet
- Replies: 4
- Views: 4875
sflow or netflow
Sounds like you should use something like sFlow or NetFlow, if your router supports it, to get the data. See something like this for sFlow: http://www.inmon.com/technology/sflowTools.php Products that use sFlow: http://www.sflow.org/products/network.php As long as you have the IP addresses of your cus...
- Thu May 22, 2008 10:25 am
- Forum: Help: General
- Topic: Date and time problem
- Replies: 4
- Views: 1442
- Wed May 21, 2008 5:52 pm
- Forum: Help: General
- Topic: automatically create directory for RRDs
- Replies: 1
- Views: 833
automatically create directory for RRDs
For each host I store the RRDs in a directory named after the host, so I can reference the files more easily from outside scripts (more organized). I modified the cacti source in lib/functions.php to automatically append the directory to the rra/rrd path when creating a data source. Right now what I do is I g...
- Wed May 21, 2008 3:14 pm
- Forum: Help: General
- Topic: Date and time problem
- Replies: 4
- Views: 1442
If RRDTool updated the files while the time was in the future, then you're fairly stuck. RRDTool keeps track of the time each file was last updated and, as far as I know, won't accept updates from the past. Depending on how much pain this causes, the easiest thing to do is probably delete (or at least renam...
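If you go the rename route, `rrdtool last` is a quick way to find the affected files: it prints a file's last-update time in seconds since the epoch. A minimal sketch along those lines (the rra path and the `.future` suffix are assumptions, adjust for your install):

```shell
# Set aside any RRD whose last update lies in the future, since RRDTool
# will refuse updates older than a file's last recorded timestamp.
RRA_DIR=${RRA_DIR:-/var/www/cacti/rra}   # assumed cacti rra location
now=$(date +%s)
for f in "$RRA_DIR"/*.rrd; do
  [ -e "$f" ] || continue                # skip if the glob matched nothing
  last=$(rrdtool last "$f")              # last update time, epoch seconds
  if [ "$last" -gt "$now" ]; then
    echo "setting aside $f (last update $last)"
    mv "$f" "$f.future"                  # cacti will recreate the file
  fi
done
```

Renaming rather than deleting keeps the old data around in case you want to merge or inspect it later.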
- Tue May 20, 2008 5:50 pm
- Forum: Help: General
- Topic: Public Graphs[SOLVED]
- Replies: 2
- Views: 1697
You can use the graph export feature in cacti to automatically create graphs and put them in a specific location at a specified interval. Prior to 0.8.7 you could go directly to /graph_view.php and view graphs without logging in. I used this quite a bit for guest logins. With 0.8.7 it seems the beha...
- Tue May 20, 2008 5:17 pm
- Forum: Help: General
- Topic: Cluster cacti
- Replies: 3
- Views: 1493
You could use VMware ESX + HA, which can detect a failed VM and automatically fire it up on another system in the cluster, though it requires some additional infrastructure (storage, etc.). You could probably script something similar in Xen as well, though I've never used Xen so don'...
- Tue May 20, 2008 5:13 pm
- Forum: Help: General
- Topic: How to replicate data (devices, graphes etc) to another svr
- Replies: 1
- Views: 783
Set up MySQL replication from the master to the backup; there are lots of docs on the net on how to set up replication. The RRDs themselves are another matter: you could rsync them if it's not a lot of data, or put them on some sort of highly available centralized storage. I think there's some way to do...
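For the rsync route, a periodic push from the master's rra directory to the backup is usually enough for a modest install. A sketch as a cron entry on the master (the paths, the `backup` host name, and the 5-minute interval are all assumptions to adjust):

```
# Push the RRD files to the backup every 5 minutes; --delete keeps
# removed data sources from lingering on the backup copy.
*/5 * * * * rsync -az --delete /var/www/cacti/rra/ backup:/var/www/cacti/rra/
```

Note that with interval-based copying you can lose up to one interval of data on failover, which is the trade-off against shared centralized storage.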
- Mon May 19, 2008 11:32 am
- Forum: Help: General
- Topic: Spine/0.8.7b not picking up all output from script
- Replies: 3
- Views: 1426
- Fri May 16, 2008 8:36 pm
- Forum: Help: General
- Topic: Script Server Data slowing Spine
- Replies: 6
- Views: 2358
Try rewriting the script so you only have 1 DS per host instead of 9. I have several scripts that return a dozen or more data points to cacti for one data source, which seems to let it scale much higher than it otherwise would. Extra work is involved in creating the graphs and stuff...
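The consolidated approach works because a cacti data input script can print several space-separated name:value pairs on one line, and a single data source with multiple internal data items collects them all in one poll. A hedged sketch of such a script (the stat names and values here are made up for illustration; a real script would query the host):

```shell
#!/bin/sh
# One poll, many values: emit every stat for the host as
# space-separated name:value pairs on a single line, instead of
# running nine separate single-value scripts per poll.
connections=42        # placeholder values; gather these from the host
slow_queries=3
uptime=86400
echo "connections:${connections} slow_queries:${slow_queries} uptime:${uptime}"
```

Each name on the output line then maps to one internal data-source item in the cacti data template.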
- Fri May 16, 2008 8:27 pm
- Forum: Help: General
- Topic: Spine/0.8.7b not picking up all output from script
- Replies: 3
- Views: 1426
update
One more update: I have another MySQL stats script that's behaving the same way (no surprise, I guess). 05/16/2008 06:26:09 PM - SPINE: Poller[0] Host[292] DS[378] SCRIPT: /usr/bin/perl /home/cacti/public_html/cacti-0.8.7b/scripts/mysql-extended-stats.pl my_host, output: QUERY_CACHE_SIZE:20971520 When ...
- Fri May 16, 2008 8:01 pm
- Forum: Help: General
- Topic: Spine/0.8.7b not picking up all output from script
- Replies: 3
- Views: 1426
update
An update: I have another script that gathers a similarly large amount of data from my load balancer, and it works fine. It returns ~690 characters, while the mysql script returns ~1007 characters. 05/16/2008 05:37:07 PM - SPINE: Poller[0] Host[305] DS[349] SCRIPT: /usr/bin/perl /home/cacti/public_ht...
- Fri May 16, 2008 7:51 pm
- Forum: Help: General
- Topic: Spine/0.8.7b not picking up all output from script
- Replies: 3
- Views: 1426
Spine/0.8.7b not picking up all output from script
I reported a similar issue with cactid in bug #960. I think it's related, but it's not identical: this script works fine with cmd.php, but it does not work in spine. Looking at bug #960, TheWitness says he is using BUFSIZE in spine to control the amount of output it will take; I adjusted this value in ...
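A quick diagnostic when chasing this kind of truncation is to count exactly how many bytes the script emits and compare that against the buffer limit; `wc -c` counts bytes. For instance, the single field spine logged above is 25 bytes (to measure a real script, pipe its full invocation into `wc -c` instead of the sample printf):

```shell
# Count the bytes a poller script emits; if this exceeds spine's
# output buffer (BUFSIZE in the spine source), the tail of the
# output is dropped. In practice you'd pipe the real script, e.g.
#   /usr/bin/perl /path/to/your-script.pl some_host | wc -c
printf 'QUERY_CACHE_SIZE:20971520' | wc -c
```

That makes it easy to see which scripts sit near the limit, like the ~690-character one that works versus the ~1007-character one that doesn't.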
- Thu Apr 10, 2008 12:20 pm
- Forum: Help: General
- Topic: cacti with 5000+ DS not updating all data sources
- Replies: 2
- Views: 1102
Alright, after digging into the code a bit more, it looks like the poller is just timing out after 296 seconds, so there's not much I can do with the way it's currently set up. It seems that while data collection can run in parallel with multiple threads, the back-end process is serial? Anyways, no big deal, at leas...
- Wed Apr 09, 2008 6:54 pm
- Forum: Help: General
- Topic: cacti with 5000+ DS not updating all data sources
- Replies: 2
- Views: 1102
update
I found an error in the debug log now; I'll try to debug it more tomorrow morning. The error is:
04/09/2008 04:32:19 PM - PCOMMAND: Poller[0] ERROR: Poller Command processing timed out after processing 'Array'
04/09/2008 04:32:19 PM - PCOMMAND: Poller[0] ERROR: Poller Command processing timed out after processing 'Array'
- Wed Apr 09, 2008 11:22 am
- Forum: Help: General
- Topic: cacti with 5000+ DS not updating all data sources
- Replies: 2
- Views: 1102
cacti with 5000+ DS not updating all data sources
I've inherited a cacti installation that isn't set up in the best fashion, I think. It's running cacti 0.8.6h and cactid 0.8.6h. Currently there are about 300 devices and about 5800 data sources. Enabling full debug shows cacti claiming it's updating the data sources for some hosts, but the files have...