Every single poller run on one of my servers is emailing errors/alerts about exceeding the 60-second timeout. Here's the weird thing: all the graphs are updating correctly, with no gaps and no missing data, but the system load is also very high (3+).
cat /proc/cpuinfo | grep processor | wc -l
4
PHP 8.1.30 (cli) (built: Sep 27 2024 04:07:29) (NTS)
df2d7f96a (HEAD -> develop, origin/develop, origin/HEAD) Merge branch 'develop' of https://github.com/Cacti/cacti into develop
Code:
2024-11-28 08:31:31 - SYSTEM STATS: Time:30.7458 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:30:32 - SYSTEM STATS: Time:31.4839 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:29:33 - SYSTEM STATS: Time:31.9367 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:28:37 - SYSTEM STATS: Time:36.1866 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:27:52 - SYSTEM STATS: Time:23.8041 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:27:00 - SYSTEM STATS: Time:58.9428 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:25:38 - SYSTEM STATS: Time:37.7470 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:24:34 - SYSTEM STATS: Time:32.9448 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:23:37 - SYSTEM STATS: Time:36.1345 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:22:27 - SYSTEM STATS: Time:27.2584 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:21:30 - SYSTEM STATS: Time:27.8365 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:20:36 - SYSTEM STATS: Time:35.4239 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:19:36 - SYSTEM STATS: Time:34.4630 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:19:04 - SYSTEM STATS: Time:25.1903 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:18:40 - SYSTEM STATS: Time:99.0168 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1428
2024-11-28 08:16:36 - SYSTEM STATS: Time:34.6589 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:15:39 - SYSTEM STATS: Time:38.0872 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:14:30 - SYSTEM STATS: Time:29.8767 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:13:31 - SYSTEM STATS: Time:28.9734 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:12:23 - SYSTEM STATS: Time:22.2418 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:11:37 - SYSTEM STATS: Time:33.7727 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
2024-11-28 08:10:34 - SYSTEM STATS: Time:33.0161 Method:spine Processes:4 Threads:15 Hosts:169 HostsPerProcess:43 DataSources:2071 RRDsProcessed:1458
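For reference, the runtimes can be pulled straight out of the log with something like this (the log path assumes a default install location, so adjust it to yours):

Code:
# list the ten slowest recent poller runtimes from cacti.log
# (log path is an assumption based on a stock /var/www/html/cacti install)
grep 'SYSTEM STATS' /var/www/html/cacti/log/cacti.log | grep -oP 'Time:\K[0-9.]+' | sort -n | tail -10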
Examples of the alert emails:
Code:
08:34
Maximum runtime of 58 seconds exceeded for Poller[Main Poller]. Exiting.
WARNING: There are 1 processes detected as overrunning a polling cycle for Poller[Main Poller], please investigate.
WARNING: There are 1 processes detected as overrunning a polling cycle for Poller[Main Poller], please investigate.
WARNING: Cacti Polling Cycle Exceeded Poller Interval by 39.18 seconds
Maximum runtime of 58 seconds exceeded for Poller[Main Poller]. Exiting.
08:28
WARNING: There are 1 processes detected as overrunning a polling cycle for Poller[Main Poller], please investigate.
WARNING: Cacti Polling Cycle Exceeded Poller Interval by 26.97 seconds
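For what it's worth, the poller/spine settings Cacti is actually using can be dumped straight from the database with something like this (the settings table layout and the cactiuser credentials are assumptions based on a stock install, so adjust for your setup):

Code:
# dump the poller- and spine-related settings from the cacti database
# (username "cactiuser" and database "cacti" are placeholders for your own)
mysql -u cactiuser -p cacti -e "SELECT name, value FROM settings WHERE name LIKE 'poller%' OR name LIKE 'concurrent%' OR name LIKE 'max_threads' OR name LIKE 'path_spine';"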
So I decided to move everything to a new machine entirely (16 cores, 192 GB RAM) via a backup and restore, and I'm actually seeing the exact same thing, which has me really confused.
Is there anything deeper I can dig into to work out why this is happening?
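In case it points at anything, a polling cycle's worth of system activity can be captured with standard tools along these lines (nothing Cacti-specific; iostat needs the sysstat package):

Code:
# watch roughly one 60-second polling cycle: CPU, run queue, IO wait, and top CPU consumers
vmstat 5 12
iostat -x 5 12
ps -eo pid,etime,pcpu,comm --sort=-pcpu | head -20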