5 minute poller interval
Hi folks,
I'm using Cacti 1.2.2 running on a Raspberry Pi.
I just created some graphs using the 5 minute poller interval.
However, when I check the interval within cacti.log, I see that the script used for gathering data runs every 6 minutes instead of every 5 minutes, e.g.:
07.11.2022 13:23:09 - POLLER: Poller[1] PID[685658] Device[5] DS[76] TT[2496.56] SCRIPT: /var/www/html/cacti/scripts/getTemp.sh '192.168.178.202' 'ist', output: 20.9
07.11.2022 13:29:09 - POLLER: Poller[1] PID[686576] Device[5] DS[76] TT[2415.44] SCRIPT: /var/www/html/cacti/scripts/getTemp.sh '192.168.178.202' 'ist', output: 20.9
07.11.2022 13:35:09 - POLLER: Poller[1] PID[687429] Device[5] DS[76] TT[2431.21] SCRIPT: /var/www/html/cacti/scripts/getTemp.sh '192.168.178.202' 'ist', output: 20.9
07.11.2022 13:41:09 - POLLER: Poller[1] PID[688346] Device[5] DS[76] TT[2282.65] SCRIPT: /var/www/html/cacti/scripts/getTemp.sh '192.168.178.202' 'ist', output: 20.9
Other graphs created in the past are updated every minute; I'm using cmd.php called by cron every minute (and all of these graphs are working very well).
Does anybody have the same issue (breaks in the graphs / poller) and any idea how to solve this?
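For anyone who wants to check the spacing themselves, the gaps can be computed straight from cacti.log (a rough sketch; assumes GNU awk for mktime and the DD.MM.YYYY HH:MM:SS log format shown above):
# Print the gap in minutes between consecutive getTemp.sh poller runs.
grep "getTemp.sh" cacti.log | awk '{
    split($1, d, ".");   # DD.MM.YYYY
    split($2, t, ":");   # HH:MM:SS
    ts = mktime(d[3] " " d[2] " " d[1] " " t[1] " " t[2] " " t[3]);
    if (prev) print (ts - prev) / 60, "minutes";
    prev = ts;
}'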
kind regards
Bruno
Re: 5 minute poller interval
Additional Info:
I just checked the "built-in" graphs (memory / CPU usage, number of processes), which should also be gathered every 5 minutes by the poller.
But within the cacti log, I can see that even these data sources were called every 6 minutes, e.g.:
pi@raspberrypi:/var/www/html/cacti/log $ grep "ss_boost_table" cacti.log
09.11.2022 00:05:03 - POLLER: Poller[1] PID[982432] Device[3] DS[53] TT[0.77] SERVER: /var/www/html/cacti/scripts/ss_poller.php ss_boost_table, output: 16384
09.11.2022 00:11:03 - POLLER: Poller[1] PID[983363] Device[3] DS[53] TT[0.73] SERVER: /var/www/html/cacti/scripts/ss_poller.php ss_boost_table, output: 16384
09.11.2022 00:17:03 - POLLER: Poller[1] PID[984210] Device[3] DS[53] TT[0.82] SERVER: /var/www/html/cacti/scripts/ss_poller.php ss_boost_table, output: 16384
09.11.2022 00:23:03 - POLLER: Poller[1] PID[985069] Device[3] DS[53] TT[0.73] SERVER: /var/www/html/cacti/scripts/ss_poller.php ss_boost_table, output: 16384
09.11.2022 00:29:03 - POLLER: Poller[1] PID[985913] Device[3] DS[53] TT[0.84] SERVER: /var/www/html/cacti/scripts/ss_poller.php ss_boost_table, output: 16384
09.11.2022 00:35:03 - POLLER: Poller[1] PID[986792] Device[3] DS[53] TT[1.02] SERVER: /var/www/html/cacti/scripts/ss_poller.php ss_boost_table, output: 16384
09.11.2022 00:41:04 - POLLER: Poller[1] PID[987715] Device[3] DS[53] TT[0.72] SERVER: /var/www/html/cacti/scripts/ss_poller.php ss_boost_table, output: 16384
09.11.2022 00:47:03 - POLLER: Poller[1] PID[988564] Device[3] DS[53] TT[0.79] SERVER: /var/www/html/cacti/scripts/ss_poller.php ss_boost_table, output: 16384
09.11.2022 00:53:21 - POLLER: Poller[1] PID[989479] Device[3] DS[53] TT[1.15] SERVER: /var/www/html/cacti/scripts/ss_poller.php ss_boost_table, output: 16384
Seems to be a bug (maybe just on Raspberry Pis, maybe not).
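For reference, the intervals Cacti is actually configured with can be read from the database (a sketch; assumes the default database name cacti and the Cacti 1.x settings table, with values stored in seconds):
# Show the configured poller and cron intervals (both stored in seconds).
mysql -u cacti -p cacti -e "SELECT name, value FROM settings WHERE name IN ('poller_interval','cron_interval');"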
kind regards
B
- TheWitness
- Developer
- Posts: 17007
- Joined: Tue May 14, 2002 5:08 pm
- Location: MI, USA
Re: 5 minute poller interval
Can you upgrade to something more recent? Then, repopulate your poller cache.
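For reference, the cache can also be repopulated from the command line (a sketch; assumes the stock cli/rebuild_poller_cache.php script and the install path shown in the logs above):
# Rebuild the poller cache for all devices; run as the user that owns the Cacti files.
cd /var/www/html/cacti
php -q cli/rebuild_poller_cache.php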
True understanding begins only when we realize how little we truly understand...
Life is an adventure, let yours begin with Cacti!
Author of dozens of Cacti plugins and customizations. Advocate of LAMP, MariaDB, IBM Spectrum LSF and the world of batch. Creator of IBM Spectrum RTM, author of quite a bit of unpublished work and most of Cacti's bugs.
_________________
Official Cacti Documentation
GitHub Repository with Supported Plugins
Percona Device Packages (no support)
Interesting Device Packages
For those wondering, I'm still here, but lost in the shadows. Yearning for fewer bugs. Who wants a Cacti 1.3/2.0? Streams anyone?
Re: 5 minute poller interval
Oops, I made a mistake in my first post: I was on 1.2.20.
Three days ago I updated to Cacti 1.2.22 and checked the behaviour (it still persists).
In addition, I looked into the poller_item table but found nothing unusual (rrd_step is always 300 and rrd_next_step is decreased by 60 every minute).
But to be sure, I'll repopulate my poller cache once more and let you know the results in a few hours.
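This is roughly how the countdown can be watched (a sketch; assumes the default database name cacti and the column names mentioned above):
# Dump the per-item step countdown once per minute (put credentials in
# ~/.my.cnf so the watch loop does not prompt for a password every refresh).
watch -n 60 "mysql cacti -e 'SELECT local_data_id, rrd_step, rrd_next_step FROM poller_item'"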
kind regards
- TheWitness
- Developer
Re: 5 minute poller interval
Good. If you are using boost, make sure you update:
lib/boost.php
poller_boost.php
lib/functions.php
From the 1.2.x branch. There were a couple of material bugs with those three files.
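Something like this should pull current copies (a sketch; the URLs assume the Cacti/cacti GitHub repository and its 1.2.x branch layout):
# Overwrite the three installed files with the 1.2.x branch versions.
cd /var/www/html/cacti
for f in lib/boost.php poller_boost.php lib/functions.php; do
    wget -O "$f" "https://raw.githubusercontent.com/Cacti/cacti/1.2.x/$f"
done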
- TheWitness
- Developer
Re: 5 minute poller interval
Lastly, post this page for your Data Source Profile. I have a suspicion about something, just not sure.
- Attachments: DataSourceProfileForPost.png
Re: 5 minute poller interval
All 3 files were from release 1.2.22 (file date Aug 14th 23:42).
And I don't use boost as far as I know; I'm calling poller.php from crontab every minute.
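For reference, the entry is the usual pattern (a sketch, not copied verbatim from the Pi; the user and redirection may differ):
# /etc/cron.d/cacti -- run the Cacti poller once per minute as the web-server user
*/1 * * * * www-data php /var/www/html/cacti/poller.php > /dev/null 2>&1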
The data source profiles for the 30-second and 5-minute pollers are attached.
Many thanks for your help
B
- Attachments: DataSourceProdile30s.jpg, DataSourceProdile5m.jpg
Re: 5 minute poller interval
What is your poller interval?
Before history, there was a paradise, now dust.
- TheWitness
- Developer
Re: 5 minute poller interval
Go to Console > Configuration > Settings > Poller and post that screenshot.
- TheWitness
- Developer
Re: 5 minute poller interval
Looks like you are only using the 5 minute Data Source Profile. What I would do is:
1) Switch from cron to the cactid service
2) Change your poller interval and cron/service interval to 5 minutes
3) Repopulate your Poller Cache
4) Switch to spine and use more threads, but not too many. For a few thousand hosts, 2 processes and 10 threads are more than enough.
5) Make sure your server has 10+ cores and lots of RAM, or enable boost.
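For steps 1 and 2, the mechanics look roughly like this (a sketch; the cactid unit name assumes the service install shipped with Cacti 1.2, and the cron line is the 5-minute variant of the usual entry):
# Step 1: let the cactid service drive polling instead of cron.
systemctl enable --now cactid
# Step 2 (if staying on cron): match the cron schedule to a 5-minute poller interval.
*/5 * * * * www-data php /var/www/html/cacti/poller.php > /dev/null 2>&1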
Re: 5 minute poller interval
Well, I don't want to bother you, and from my point of view it's not a big thing.
I think your proposals are well suited for large Cacti servers with thousands of graphs, but mine is a small installation just for monitoring and reporting 3 graphs. The Raspberry is a 4B model with "plenty of RAM" (4 GB), the CPU is almost idle, and the process count is also low. And even other out-of-the-box standard graphs on Linux machines show the same behaviour:
pi@raspberrypi:/var/www/html/cacti/log $ grep unix_users cacti.log | tail -5
17.11.2022 09:48:04 - POLLER: Poller[1] PID[2903039] Device[1] DS[3] TT[10.09] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_users.pl '', output: 0
17.11.2022 09:54:03 - POLLER: Poller[1] PID[2903969] Device[1] DS[3] TT[10.11] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_users.pl '', output: 0
17.11.2022 10:00:04 - POLLER: Poller[1] PID[2904915] Device[1] DS[3] TT[10.25] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_users.pl '', output: 0
17.11.2022 10:06:03 - POLLER: Poller[1] PID[2905848] Device[1] DS[3] TT[10.05] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_users.pl '', output: 0
17.11.2022 10:12:03 - POLLER: Poller[1] PID[2906880] Device[1] DS[3] TT[10.03] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_users.pl '', output: 1
pi@raspberrypi:/var/www/html/cacti/log $ grep unix_proc cacti.log | tail -5
17.11.2022 09:46:03 - POLLER: Poller[1] PID[2902744] Device[1] DS[1] TT[29.67] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_processes.pl, output: 166
17.11.2022 09:52:03 - POLLER: Poller[1] PID[2903676] Device[1] DS[1] TT[27.45] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_processes.pl, output: 165
17.11.2022 09:58:03 - POLLER: Poller[1] PID[2904604] Device[1] DS[1] TT[27.57] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_processes.pl, output: 166
17.11.2022 10:04:03 - POLLER: Poller[1] PID[2905543] Device[1] DS[1] TT[27.75] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_processes.pl, output: 169
17.11.2022 10:10:03 - POLLER: Poller[1] PID[2906513] Device[1] DS[1] TT[27.2] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/unix_processes.pl, output: 164
pi@raspberrypi:/var/www/html/cacti/log $ grep linux_mem cacti.log | tail -5
17.11.2022 10:02:03 - POLLER: Poller[1] PID[2905244] Device[1] DS[5] TT[10.54] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'SwapFree:', output: 102396
17.11.2022 10:07:03 - POLLER: Poller[1] PID[2906002] Device[1] DS[4] TT[10.58] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'MemFree:', output: 555344
17.11.2022 10:08:03 - POLLER: Poller[1] PID[2906152] Device[1] DS[5] TT[10.61] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'SwapFree:', output: 102396
17.11.2022 10:13:03 - POLLER: Poller[1] PID[2907036] Device[1] DS[4] TT[10.6] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'MemFree:', output: 547284
17.11.2022 10:14:03 - POLLER: Poller[1] PID[2907192] Device[1] DS[5] TT[10.59] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'SwapFree:', output: 102396
pi@raspberrypi:/var/www/html/cacti/log $ grep linux_mem cacti.log | tail -10
17.11.2022 09:49:04 - POLLER: Poller[1] PID[2903193] Device[1] DS[4] TT[10.57] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'MemFree:', output: 556316
17.11.2022 09:50:03 - POLLER: Poller[1] PID[2903358] Device[1] DS[5] TT[10.61] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'SwapFree:', output: 102396
17.11.2022 09:55:03 - POLLER: Poller[1] PID[2904138] Device[1] DS[4] TT[10.7] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'MemFree:', output: 557824
17.11.2022 09:56:03 - POLLER: Poller[1] PID[2904314] Device[1] DS[5] TT[10.58] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'SwapFree:', output: 102396
17.11.2022 10:01:03 - POLLER: Poller[1] PID[2905086] Device[1] DS[4] TT[10.6] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'MemFree:', output: 556464
17.11.2022 10:02:03 - POLLER: Poller[1] PID[2905244] Device[1] DS[5] TT[10.54] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'SwapFree:', output: 102396
17.11.2022 10:07:03 - POLLER: Poller[1] PID[2906002] Device[1] DS[4] TT[10.58] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'MemFree:', output: 555344
17.11.2022 10:08:03 - POLLER: Poller[1] PID[2906152] Device[1] DS[5] TT[10.61] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'SwapFree:', output: 102396
17.11.2022 10:13:03 - POLLER: Poller[1] PID[2907036] Device[1] DS[4] TT[10.6] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'MemFree:', output: 547284
17.11.2022 10:14:03 - POLLER: Poller[1] PID[2907192] Device[1] DS[5] TT[10.59] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/linux_memory.pl 'SwapFree:', output: 102396
pi@raspberrypi:/var/www/html/cacti/log $ grep load_avg cacti.log | tail -5
pi@raspberrypi:/var/www/html/cacti/log $ grep loadavg_ cacti.log | tail -5
17.11.2022 09:47:03 - POLLER: Poller[1] PID[2902892] Device[1] DS[2] TT[11.13] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/loadavg_multi.pl, output: 1min:0.16 5min:0.16 10min:0.17
17.11.2022 09:53:03 - POLLER: Poller[1] PID[2903823] Device[1] DS[2] TT[11.08] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/loadavg_multi.pl, output: 1min:0.06 5min:0.11 10min:0.15
17.11.2022 09:59:03 - POLLER: Poller[1] PID[2904753] Device[1] DS[2] TT[14.71] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/loadavg_multi.pl, output: 1min:0.17 5min:0.14 10min:0.15
17.11.2022 10:05:04 - POLLER: Poller[1] PID[2905694] Device[1] DS[2] TT[11.15] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/loadavg_multi.pl, output: 1min:0.00 5min:0.05 10min:0.09
17.11.2022 10:11:03 - POLLER: Poller[1] PID[2906684] Device[1] DS[2] TT[11.13] SCRIPT: perl /var/www/html/cacti-1.2.22/scripts/loadavg_multi.pl, output: 1min:0.01 5min:0.04 10min:0.07
pi@raspberrypi:/var/www/html/cacti/log $
So I took a closer look at the poller_item table and took a screenshot of these poller items every minute (loadavg, unix processes, unix users, linux_memory, ...).
I realized that the poller writes an initial value of 300 into the column "rrd_next_step" for the 5-minute interval items. This number is decreased by 60 every minute, and the job seems to run in the next poller cycle once the value in this column is 0. But it takes 6 steps/minutes to count down from 300 to 0 by 60!
(300 - 240 - 180 - 120 - 60 - 0 --> job runs)
Could this be the whole problem?
Within the one-minute poller items, the value in column rrd_next_step is always 0. Could the solution be to set the initial value of "rrd_next_step" to 240 instead of 300?
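The off-by-one is easy to see in a quick simulation of just the countdown arithmetic described above (nothing Cacti-specific; it assumes the decrement-then-run-at-zero behaviour I observed):
# Count how many one-minute ticks pass before a seeded countdown lets the job run.
for seed in 300 240; do
    v=$seed; ticks=0
    while [ "$v" -gt 0 ]; do v=$((v - 60)); ticks=$((ticks + 1)); done
    # the job fires on the cycle *after* the value reaches 0
    echo "seed=$seed: hits 0 after $ticks minutes, job runs on minute $((ticks + 1))"
done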
kind regards
Bruno
- Attachments: Screenshot 2022-11-17 100158.jpg, Screenshot 2022-11-17 100326.jpg, Screenshot 2022-11-17 100419.jpg, Screenshot 2022-11-17 100520.jpg, Screenshot 2022-11-17 100632.jpg
Re: 5 minute poller interval
That has to be it, I would say.
Before history, there was a paradise, now dust.