Waiting on 1 of 2 pollers.

xefil
Cacti User
Posts: 233
Joined: Tue Jun 20, 2006 2:48 am
Location: Italy
Contact:

Waiting on 1 of 2 pollers.

Post by xefil »

Hello,

I'm migrating my setup from 0.8.8a on a CentOS 5.8 server to a new CentOS 7.3 server with Cacti 1.1.6. I'm running into various issues. Right now I'm getting this error from the poller:

Code: Select all

Waiting on 1 of 2 pollers.
How can I debug which poller is blocked?
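
So far I've only checked, as a rough sketch, whether rows are piling up on the database side; the table names are the ones that show up in the poller query and tuning output further down, so adjust if your schema differs:

Code: Select all

-- Rough check: are the poller output tables backing up between runs?
-- (poller_output / poller_output_boost as referenced further below.)
SELECT COUNT(*) AS pending_rows FROM poller_output;
SELECT COUNT(*) AS boost_rows   FROM poller_output_boost;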

Here is some system info:

Code: Select all

Technical Support [Summary]
General Information
Date	Fri, 12 May 2017 08:23:42 +0000
Cacti Version	1.1.6
Cacti OS	unix
RSA Fingerprint	db:41:be:f1:f6:e7:7a:32:1b:10:73:f8:5d:27:f0:d6
NET-SNMP Version	NET-SNMP version: 5.7.2
RRDtool Version	RRDTool 1.4.x
Devices	3100
Graphs	15309
Data Sources	Script/Command: 1206
SNMP Get: 2478
SNMP Query: 12590
Script Query: 316
Script Server: 62
Total: 16652

Poller Information
Interval	300
Type	SPINE 1.1.6 Copyright 2004-2017 by The Cacti Group
Items	Action[0]: 27550
Action[1]: 1766
Action[2]: 61
Total: 29377
Concurrent Processes	1
Max Threads	100
PHP Servers	6
Script Timeout	10
Max OID	10
Last Run Statistics	Time:298.5382 Method:spine Processes:1 Threads:100 Hosts:3026 HostsPerProcess:3026 DataSources:29377 RRDsProcessed:9704

System Memory
MemTotal	15,77 K MB
MemFree	1,37 K MB
Buffers	0,74 MB
Cached	11,97 K MB
Active	4,98 K MB
Inactive	8,37 K MB
SwapTotal	8,00 K MB
SwapFree	8,00 K MB

PHP Information
PHP Version	7.1.4
PHP OS	Linux
PHP uname	Linux cacti 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64
PHP SNMP	Installed
max_execution_time	30
memory_limit	512M

MariaDB Tuning (/etc/my.cnf) - [ Documentation ] Note: Many changes below require a database restart
Variable	Current Value	Recommended Value	Comments
version	10.1.23-MariaDB	>= 5.6	MySQL 5.6+ and MariaDB 10.0+ are great releases, and are very good versions to choose. Make sure you run the very latest release, though, which fixes a long-standing low-level networking issue that was causing spine many issues with reliability.
collation_server	utf8mb4_general_ci	utf8mb4_unicode_ci	When using Cacti with languages other than English, it is important to use the utf8mb4_unicode_ci collation type as some characters take more than a single byte.
character_set_client	utf8mb4	utf8mb4	When using Cacti with languages other than English, it is important to use the utf8mb4 character set as some characters take more than a single byte.
max_connections	500	>= 100	Depending on the number of logins and use of spine data collector, MariaDB will need many connections. The calculation for spine is: total_connections = total_processes * (total_threads + script_servers + 1), then you must leave headroom for user connections, which will change depending on the number of concurrent login accounts.
max_heap_table_size	800M	>=770M	If using the Cacti Performance Booster and choosing a memory storage engine, you have to be careful to flush your Performance Booster buffer before the system runs out of memory table space. This is done two ways, first reducing the size of your output column to just the right size. This column is in the tables poller_output, and poller_output_boost. The second thing you can do is allocate more memory to memory tables. We have arbitrarily chosen a recommended value of 10% of system memory, but if you are using SSD disk drives, or have a smaller system, you may ignore this recommendation or choose a different storage engine. You may see the expected consumption of the Performance Booster tables under Console -> System Utilities -> View Boost Status.
max_allowed_packet	33554432	>= 16777216	With Remote polling capabilities, large amounts of data will be synced from the main server to the remote pollers. Therefore, keep this value at or above 16M.
tmp_table_size	64M	>= 64M	When executing subqueries, having a larger temporary table size keeps those temporary tables in memory.
join_buffer_size	64M	>= 64M	When performing joins, if they are below this size, they will be kept in memory and never written to a temporary file.
innodb_file_per_table	ON	ON	When using InnoDB storage it is important to keep your table spaces separate. This makes managing the tables simpler for long time users of MariaDB. If you are running with this currently off, you can migrate to the per file storage by enabling the feature, and then running an alter statement on all InnoDB tables.
innodb_buffer_pool_size	4096M	>=3850M	InnoDB will hold as many tables and indexes in system memory as possible. Therefore, you should make the innodb_buffer_pool large enough to hold as much of the tables and indexes in memory as possible. Checking the size of the /var/lib/mysql/cacti directory will help in determining this value. We are recommending 25% of your system's total memory, but your requirements will vary depending on your system's size.
innodb_doublewrite	OFF	OFF	With modern SSD type storage, this operation actually degrades the disk more rapidly and adds a 50% overhead on all write operations.
innodb_additional_mem_pool_size	128M	>= 80M	This is where metadata is stored. If you had a lot of tables, it would be useful to increase this.
innodb_lock_wait_timeout	50	>= 50	Rogue queries should not force the database to go offline to others. Kill these queries before they kill your system.
innodb_flush_log_at_timeout	4	>= 3	As of MariaDB 10.1.23, you can control how often MariaDB flushes transactions to disk. The default is 1 second, but on high-I/O systems setting it to a value greater than 1 can allow disk I/O to be more sequential.
innodb_read_io_threads	64	>= 32	With modern SSD type storage, having multiple read io threads is advantageous for applications with high io characteristics.
innodb_write_io_threads	16	>= 16	With modern SSD type storage, having multiple write io threads is advantageous for applications with high io characteristics.
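
Side note on the max_connections row above: with the poller settings shown (1 process, 100 threads, 6 PHP script servers), the formula from that table gives roughly 1 × (100 + 6 + 1) = 107 connections plus headroom for logins, so the configured 500 should be plenty. To double-check connection usage anyway, the standard status variable can be read like this:

Code: Select all

-- High-water mark of concurrent connections since the last restart;
-- compare against max_connections (500 on this system).
SHOW GLOBAL STATUS LIKE 'Max_used_connections';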
In the MySQL debug log I'm seeing this SELECT running in a loop:

Code: Select all

51502 Query     SELECT po.output, po.time,  UNIX_TIMESTAMP(po.time) as unix_time, po.local_data_id, dl.data_template_id,  pi.rrd_path, pi.rrd_name, pi.rrd_num  FROM poller_output AS po  INNER JOIN poller_item AS pi  ON po.local_data_id=pi.local_data_id  AND po.rrd_name=pi.rrd_name  INNER JOIN data_local AS dl  ON dl.id=po.local_data_id  ORDER BY po.local_data_id  LIMIT 40000
Run manually, it gives me an empty result.
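
That query looks like the normal poller_output processing loop, so I'm also watching for blocked or long-running statements while the poller runs; a sketch using the standard MariaDB process list and InnoDB transaction view:

Code: Select all

-- Everything currently executing, with state and runtime.
SHOW FULL PROCESSLIST;

-- Open InnoDB transactions, oldest first; anything stuck in
-- LOCK WAIT here points at the blocker.
SELECT trx_id, trx_state, trx_started, trx_query
FROM information_schema.INNODB_TRX
ORDER BY trx_started;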

Any help?
I would like to identify where the poller is getting stuck :(

Thanks,

Simon
xefil
Cacti User
Posts: 233
Joined: Tue Jun 20, 2006 2:48 am
Location: Italy
Contact:

Re: Waiting on 1 of 2 pollers.

Post by xefil »

Here are my last stats:

Code: Select all

2017-05-12 09:50:00 - SYSTEM STATS: Time:299.3939 Method:spine Processes:1 Threads:100 Hosts:3026 HostsPerProcess:3026 DataSources:29377 RRDsProcessed:9703
2017-05-12 09:55:00 - SYSTEM STATS: Time:299.6104 Method:spine Processes:1 Threads:100 Hosts:3026 HostsPerProcess:3026 DataSources:29377 RRDsProcessed:9703
2017-05-12 10:00:00 - SYSTEM STATS: Time:299.3964 Method:spine Processes:1 Threads:100 Hosts:3026 HostsPerProcess:3026 DataSources:29377 RRDsProcessed:9701
2017-05-12 10:05:00 - SYSTEM STATS: Time:298.4682 Method:spine Processes:1 Threads:100 Hosts:3026 HostsPerProcess:3026 DataSources:29377 RRDsProcessed:9704
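
Every run sits at ~299 s against a 300 s polling interval, so the poller is using the whole window. To see which devices are eating the time I'm looking at the per-host polling statistics; a rough sketch, assuming the cur_time/avg_time columns the device list page displays (verify with DESCRIBE host):

Code: Select all

-- Rough sketch: slowest devices by current and average poll time (ms).
-- Column names assumed from the device list page; verify with DESCRIBE host.
SELECT id, description, hostname, cur_time, avg_time, failed_polls
FROM host
ORDER BY cur_time DESC
LIMIT 20;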
Osiris
Cacti Guru User
Posts: 1424
Joined: Mon Jan 05, 2015 10:10 am

Re: Waiting on 1 of 2 pollers.

Post by Osiris »

100 threads is excessive. Bring it down to something like 30, and enable Boost's on-demand updating, as disk I/O may be killing you.
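
To confirm the new values actually took effect after saving them in the GUI, a loose query against the settings table works; exact setting names differ between Cacti versions, so the LIKE patterns below are only a starting point:

Code: Select all

-- Loose check of stored poller/boost settings (name/value pairs);
-- setting names vary by version, so match broadly.
SELECT name, value
FROM settings
WHERE name LIKE '%threads%' OR name LIKE '%boost%'
ORDER BY name;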
Before history, there was a paradise, now dust.