Graphs not right...
Hi,
Please take a look at one of my graphs below; it has 'holes' in it.
This only started a couple of days ago, when I added some more graphs.
I currently have 1127 graphs on 67 devices. The poller interval is set to 5 minutes and the cron interval is also every 5 minutes.
The maximum number of concurrent poller processes is set to 5.
Can anyone advise me on what could be wrong?
Attachment: graph.jpg (all my graphs look similar to this).
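(One quick check, assuming the Cacti log sits in the usual place under the web root, so adjust the path: pull the SYSTEM STATS lines out of it and look at the Time: value. On a 5-minute cron, anything approaching 300 seconds means the poller is overrunning its window and you will get gaps in the RRDs.)

# adjust the path to wherever your cacti.log actually is
grep 'SYSTEM STATS' /var/www/cacti/log/cacti.log | tail -n 12
# list only runs that took longer than 240 seconds
awk -F'Time:' '/SYSTEM STATS/ { split($2, a, " "); if (a[1] + 0 > 240) print }' /var/www/cacti/log/cacti.log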
I've got the same problem.
My Server1 has no problems; here is its poller log:
08/12/2008 12:16:30 PM - EXPORT STATS: ExportDate:2008-08-12_12:16:30 ExportDuration:11.6148 TotalGraphsExported:245
08/12/2008 12:16:17 PM - SYSTEM STATS: Time:76.4729 Method:cactid Processes:10 Threads:10 Hosts:174 HostsPerProcess:18 DataSources:3331 RRDsProcessed:2217
08/12/2008 12:11:29 PM - EXPORT STATS: ExportDate:2008-08-12_12:11:29 ExportDuration:10.8521 TotalGraphsExported:245
08/12/2008 12:11:17 PM - SYSTEM STATS: Time:75.9747 Method:cactid Processes:10 Threads:10 Hosts:174 HostsPerProcess:18 DataSources:3331 RRDsProcessed:2236
08/12/2008 12:06:32 PM - EXPORT STATS: ExportDate:2008-08-12_12:06:32 ExportDuration:11.9326 TotalGraphsExported:245
08/12/2008 12:06:19 PM - SYSTEM STATS: Time:77.6444 Method:cactid Processes:10 Threads:10 Hosts:174 HostsPerProcess:18 DataSources:3331 RRDsProcessed:2217
08/12/2008 12:01:32 PM - EXPORT STATS: ExportDate:2008-08-12_12:01:32 ExportDuration:12.0403 TotalGraphsExported:245
08/12/2008 12:01:18 PM - SYSTEM STATS: Time:77.4130 Method:cactid Processes:10 Threads:10 Hosts:174 HostsPerProcess:18 DataSources:3331 RRDsProcessed:2217
08/12/2008 11:56:30 AM - EXPORT STATS: ExportDate:2008-08-12_11:56:30 ExportDuration:11.9068 TotalGraphsExported:245
08/12/2008 11:56:16 AM - SYSTEM STATS: Time:74.8992 Method:cactid Processes:10 Threads:10 Hosts:174 HostsPerProcess:18 DataSources:3331 RRDsProcessed:2236
I've got many holes in many graphs on my Server2; here is its poller log:
08/12/2008 12:10:16 PM - EXPORT STATS: ExportDate:2008-08-12_12:10:16 ExportDuration:0.3677 TotalGraphsExported:0
08/12/2008 12:10:15 PM - SYSTEM STATS: Time:13.8367 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
08/12/2008 12:05:16 PM - EXPORT STATS: ExportDate:2008-08-12_12:05:16 ExportDuration:0.4195 TotalGraphsExported:0
08/12/2008 12:05:15 PM - SYSTEM STATS: Time:13.3618 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
08/12/2008 12:05:10 PM - SYSTEM MACTRACK STATS: Time:10.3183 ConcurrentProcesses:7 Devices:2
08/12/2008 12:05:00 PM - EXPORT STATS: ExportDate:2008-08-12_12:05:00 ExportDuration:0.4263 TotalGraphsExported:0
08/12/2008 12:04:59 PM - SYSTEM STATS: Time:298.8177 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1222
08/12/2008 12:00:01 PM - EXPORT STATS: ExportDate:2008-08-12_12:00:01 ExportDuration:0.6703 TotalGraphsExported:0
08/12/2008 12:00:00 PM - SYSTEM STATS: Time:298.6695 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1221
08/12/2008 11:50:16 AM - EXPORT STATS: ExportDate:2008-08-12_11:50:16 ExportDuration:0.4154 TotalGraphsExported:0
08/12/2008 11:50:16 AM - SYSTEM STATS: Time:15.0499 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
08/12/2008 11:50:11 AM - SYSTEM MACTRACK STATS: Time:10.3317 ConcurrentProcesses:7 Devices:2
08/12/2008 11:50:00 AM - EXPORT STATS: ExportDate:2008-08-12_11:50:00 ExportDuration:0.4264 TotalGraphsExported:0
08/12/2008 11:50:00 AM - SYSTEM STATS: Time:298.8218 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1620
08/12/2008 11:40:17 AM - EXPORT STATS: ExportDate:2008-08-12_11:40:17 ExportDuration:0.4153 TotalGraphsExported:0
08/12/2008 11:40:17 AM - SYSTEM STATS: Time:14.8279 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
08/12/2008 11:35:18 AM - EXPORT STATS: ExportDate:2008-08-12_11:35:18 ExportDuration:0.4128 TotalGraphsExported:0
08/12/2008 11:35:17 AM - SYSTEM STATS: Time:14.7168 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
08/12/2008 11:35:13 AM - SYSTEM MACTRACK STATS: Time:10.8315 ConcurrentProcesses:7 Devices:2
08/12/2008 11:35:02 AM - EXPORT STATS: ExportDate:2008-08-12_11:35:02 ExportDuration:0.7096 TotalGraphsExported:0
08/12/2008 11:35:01 AM - SYSTEM STATS: Time:299.4512 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1359
08/12/2008 11:30:03 AM - EXPORT STATS: ExportDate:2008-08-12_11:30:03 ExportDuration:0.5349 TotalGraphsExported:0
08/12/2008 11:30:01 AM - SYSTEM STATS: Time:299.7684 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1481
08/12/2008 11:20:26 AM - SYSTEM MACTRACK STATS: Time:10.0835 ConcurrentProcesses:7 Devices:2
08/12/2008 11:20:16 AM - EXPORT STATS: ExportDate:2008-08-12_11:20:16 ExportDuration:0.4924 TotalGraphsExported:0
08/12/2008 11:20:15 AM - SYSTEM STATS: Time:13.9478 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
08/12/2008 11:15:15 AM - EXPORT STATS: ExportDate:2008-08-12_11:15:15 ExportDuration:0.3392 TotalGraphsExported:0
08/12/2008 11:15:15 AM - SYSTEM STATS: Time:13.0575 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
08/12/2008 11:10:17 AM - EXPORT STATS: ExportDate:2008-08-12_11:10:17 ExportDuration:0.4462 TotalGraphsExported:0
08/12/2008 11:10:17 AM - SYSTEM STATS: Time:14.9096 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
08/12/2008 11:05:26 AM - SYSTEM MACTRACK STATS: Time:10.0997 ConcurrentProcesses:7 Devices:2
08/12/2008 11:05:16 AM - EXPORT STATS: ExportDate:2008-08-12_11:05:16 ExportDuration:0.5101 TotalGraphsExported:0
08/12/2008 11:05:15 AM - SYSTEM STATS: Time:14.1968 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
08/12/2008 11:00:16 AM - EXPORT STATS: ExportDate:2008-08-12_11:00:16 ExportDuration:0.3919 TotalGraphsExported:0
08/12/2008 11:00:15 AM - SYSTEM STATS: Time:13.8446 Method:spine Processes:10 Threads:15 Hosts:145 HostsPerProcess:15 DataSources:2570 RRDsProcessed:1857
Both poll every 5 minutes, yet you can see that Server2 often takes almost the full 5 minutes to poll.
First, why does my Cacti poll twice (at 11:35am, 11:50am and 12:05pm), with one of the two runs taking 5 minutes?
Second, at 11:30am and 12:00pm there is only one poll, and it takes 5 minutes.
What's going on?
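(A common cause of a second poller run is the poller being launched from two places at once, e.g. a system cron entry plus a user crontab. A quick way to check, assuming the poller runs under a typical cacti-style user; adjust user names and paths to your install:)

# look for poller.php in every cron location
crontab -l -u cacti 2>/dev/null | grep poller
grep -r poller /etc/cron.d/ /etc/crontab 2>/dev/null
# see whether two poller runs are active at the same moment
ps aux | grep '[p]oller.php'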
Server1
Cacti : 0.8.7h | Architecture : 3.0
autom8 : 0.35 | aggregate : 0.75 | settings : 0.71 | thold : 0.4.7 | weathermap : 0.97a
Server2
Cacti : 0.8.7g | Architecture : 2.8
autom8 : 0.35 | aggregate : 0.75 | settings : 0.7 | thold : 0.4.3 | weathermap : 0.97a | flowview : 0.6
All right, I'm going to try your howto right now.
Thanks.
Ok.
First, regarding the third check in your howto (run "./cactid --verbosity=5 <id> <id>" manually): what is cactid? I don't have any such file on my server.
As you can imagine, I wasn't able to run that check either.
Second, my problem is NOT that I get no values at all; I do get some!
So all the tests you suggested in your howto pass.
Here is the output (the last lines) of "rrdtool fetch <rrd file> AVERAGE":
1218613200: nan
1218613500: nan
1218613800: nan
1218614100: nan
1218614400: nan
1218614700: nan
1218615000: nan
1218615300: nan
1218615600: nan
1218615900: nan
1218616200: nan
1218616500: nan
1218616800: nan
1218617100: 5.0700000000e+02
1218617400: 5.0700000000e+02
1218617700: nan
1218618000: nan
1218618300: nan
1218618600: 1.3000000000e+01
1218618900: 1.3000000000e+01
1218619200: nan
1218619500: nan
This is the RRD with the most "holes" in its values, but many other RRDs have a few holes too (i.e. fewer "nan" values).
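(The "nan" slots line up with the overlong poller runs: with a 300-second step, a missed or late update leaves an unknown value once the heartbeat window is exceeded. You can confirm the step and heartbeat of the RRD with rrdtool, using the same placeholder file name as above:)

rrdtool info <rrd file> | grep -E 'step|heartbeat'
# typically shows step = 300 and ds[...].minimal_heartbeat = 600 for a 5-minute data source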
As for the "two pollers", I've checked cron: there is no Cacti cron entry in /etc/cron.d/.
Of course I did, I followed your howto faithfully.
There are no other errors in the Cacti log, apart from those 300-second polling runs and the double pollers (or both).
Please help!! My server still polls twice and/or takes 300 seconds to poll, creating holes in the graphs. I don't know where to look or what to check next...
Now my Server2 ALWAYS takes more than 300 seconds and is therefore NEVER able to finish.
Please help.
That sounds very much like a bad data source. Do you have a script running on this server for a data source that either contains an error or never finishes?
Check the logs (no debug needed, medium is fine) for SEGFAULT or TERMINATING. I've also noticed that you are using CACTID; if you are on 0.8.7, that really should be SPINE.
Hope this is of some use.
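(For example, something along these lines against the Cacti log, with the path adjusted to your install:)

grep -iE 'segfault|terminat' /var/www/cacti/log/cacti.log | tail -n 20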
Cacti Version: 0.8.8b | Cacti OS: Ubuntu LTS | RRDTool Version: 1.4.7 | Poller: SPINE 0.8.8b
Thank you!!!
I had added around 20 devices whose SarParse graphs couldn't work until I installed what is needed on those devices.
And you helped me find the problem!!
By checking the data sources, I found that many OID requests were taking too much time. Each one waited for 20 seconds (the Script and Script Server Timeout value) per request, which means many threads per process were waiting too (and of course still failed to get a value).
By lowering the timeout value from 20 to 2 seconds, the poller can now finish.
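(Rough back-of-envelope numbers, purely illustrative since the exact counts aren't in the logs: each device's data sources are generally polled by a single thread, so a broken device with, say, 10 failing script data sources and a 20-second timeout ties up its thread for 10 x 20 = 200 seconds on that one host. With around 20 such devices spread over 10 processes x 15 threads, several threads sit at or near that limit and the whole run pushes past the 300-second window. At a 2-second timeout the same worst case is 10 x 2 = 20 seconds per broken host, which fits easily.)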
Thanks again