Boost Plugin 5.1 & Poller Performance
Moderators: Developers, Moderators
-
- Cacti User
- Posts: 79
- Joined: Mon Jun 22, 2009 12:58 am
- Location: India
Boost Plugin 5.1 & Poller Performance
Hi,
In one of my recent installations I have enabled the Boost plugin to improve the performance of polling.
Here are my installation details:
Cacti 0.8.8a, OS Windows 2003, NET-SNMP 5.6.1.1, RRDTool 1.4.x, PHP 5.3.17, Spine 0.8.8a, Boost 5.1, Hosts 46, DS 3993, Graphs 1019
Every 5-minute polling cycle takes close to 200 seconds, so I have enabled the Boost plugin. Boost is working fine with the following configuration:
Enable On Demand RRD Updating - Enabled
How Often Should Boost Update All RRD's - 30 Minutes
Maximum Records - 1000000
Maximum Data Source Items Per Pass - 10000 Data source Items
Maximum Argument Length - 2000
Memory Limit for Boost and Poller - 1GB
Maximum RRD Update Script Run Time - 20 Minutes
Enable direct population of poller_output_boost table by spine - Enabled
Enable Image Caching - Enabled
Location for Image Files - C:/Apache2/htdocs/nms/plugins/boost/cache
My doubt here is that after enabling Boost the polling time should drop below the roughly 200 seconds it was taking before, but there is no reduction in the poller time.
Before Enabling boost
04/30/2014 11:38:30 AM - SYSTEM STATS: Time:209.4441 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:1019
04/30/2014 11:33:30 AM - SYSTEM STATS: Time:209.8830 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:1019
04/30/2014 11:28:29 AM - SYSTEM STATS: Time:209.1939 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:1019
04/30/2014 11:23:31 AM - SYSTEM STATS: Time:210.4459 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:1019
After Enabling Boost
04/30/2014 01:23:30 PM - SYSTEM STATS: Time:209.6098 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 01:18:31 PM - SYSTEM STATS: Time:210.5634 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 01:13:32 PM - SYSTEM BOOST STATS: Time:1.7300 RRDUpdates:16823
04/30/2014 01:13:30 PM - SYSTEM STATS: Time:209.6888 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 01:08:29 PM - SYSTEM STATS: Time:208.6273 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 01:03:29 PM - SYSTEM STATS: Time:209.1124 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 12:58:30 PM - SYSTEM STATS: Time:209.9066 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 12:53:29 PM - SYSTEM STATS: Time:208.6427 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 12:48:29 PM - SYSTEM STATS: Time:208.8120 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 12:43:30 PM - SYSTEM STATS: Time:209.8460 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 12:38:33 PM - SYSTEM BOOST STATS: Time:2.5500 RRDUpdates:24029
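A quick way to compare before/after runs is to extract the Time field from these SYSTEM STATS lines with a short script. This is just a sketch; the two sample lines are copied from the log above, and the format string is assumed to be stable across Cacti versions:

```python
import re
from statistics import mean

# Sample SYSTEM STATS lines exactly as they appear in the Cacti log above.
log = """\
04/30/2014 01:23:30 PM - SYSTEM STATS: Time:209.6098 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 01:18:31 PM - SYSTEM STATS: Time:210.5634 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
"""

# Pull the Time:<seconds> value out of every SYSTEM STATS line.
times = [float(m.group(1))
         for m in re.finditer(r"SYSTEM STATS: Time:([\d.]+)", log)]

print(f"cycles={len(times)} avg={mean(times):.1f}s max={max(times):.1f}s")
```

Pointing the same pattern at the full cacti.log makes it easy to see whether a settings change actually moved the average.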
Please help me identify where to check and how to reduce the poller time.
Regards
Soma
- phalek
- Developer
- Posts: 2838
- Joined: Thu Jan 31, 2008 6:39 am
- Location: Kressbronn, Germany
- Contact:
Re: Boost Plugin 5.1 & Poller Performance
Hi
Boost will only reduce the polling time significantly if you have large I/O operations on the disk. From what Boost reports, your RRD updates take 2 to 3 seconds to complete, so I/O is not your issue.
Looking at the numbers I can see that you have only a few hosts, but quite a large number of data sources. You should look into changing the following two settings to decrease the polling time:
- Increase the Maximum OID's Per Get Request of the hosts ( see here: http://realworldnumbers.com/cacti-tunin ... t-request/ )
- Increase the number of threads for each host ( can be set in the devices/host page )
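As a rough illustration of why these two knobs matter (the figures below are assumptions derived from the stats above, not measurements): the number of SNMP get requests needed per host shrinks roughly linearly as the max-OIDs-per-request setting grows, and extra threads let those requests run concurrently across hosts:

```python
import math

def snmp_requests_per_cycle(oids: int, max_oids_per_get: int) -> int:
    """Rough lower bound on SNMP get requests needed to fetch `oids` values."""
    return math.ceil(oids / max_oids_per_get)

# Illustrative only: ~87 OIDs per host (3993 data sources / 46 hosts),
# assuming data sources map one-to-one to OIDs, which they may not.
oids_per_host = math.ceil(3993 / 46)
for max_oids in (10, 50, 75):
    print(max_oids, snmp_requests_per_cycle(oids_per_host, max_oids))
```

The real savings depend on how the devices cope with larger PDUs, which is why this has to be tuned per installation.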
Greetings,
Phalek
---
Need more help ? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support ? Look here Cacti Workshop
Need professional Cacti support ? Look here CereusService
---
Plugins : CereusReporting
-
- Cacti User
- Posts: 79
- Joined: Mon Jun 22, 2009 12:58 am
- Location: India
Re: Boost Plugin 5.1 & Poller Performance
Hi phalek,
Thanks for the update.
Increase the Maximum OID's Per Get Request of the hosts
The present value configured for this setting is 75.
Increase the number of threads for each host ( can be set in the devices/host page )
The present values are set as Threads: 15, Processes: 1.
Regards
Soma
- phalek
- Developer
- Posts: 2838
- Joined: Thu Jan 31, 2008 6:39 am
- Location: Kressbronn, Germany
- Contact:
Re: Boost Plugin 5.1 & Poller Performance
You're looking at the poller settings for the threads. Edit a device and change the following setting there:
Number of Collection Threads
The number of concurrent threads to use for polling this device. This applies to the Spine poller only.
Also check this setting for the hosts:
Maximum OID's Per Get Request
Specifies the number of OID's that can be obtained in a single SNMP Get request.
Greetings,
Phalek
---
Need more help ? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support ? Look here Cacti Workshop
Need professional Cacti support ? Look here CereusService
---
Plugins : CereusReporting
-
- Cacti User
- Posts: 79
- Joined: Mon Jun 22, 2009 12:58 am
- Location: India
Re: Boost Plugin 5.1 & Poller Performance
I have edited all 43 devices and restarted the poller, but it is still taking 280 seconds.
Number of Collection Threads - 6
Maximum OID's Per Get Request - 50
Present logs
04/30/2014 03:49:57 PM - SYSTEM STATS: Time:294.5295 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 03:44:44 PM - SYSTEM STATS: Time:281.8298 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 03:39:37 PM - SYSTEM STATS: Time:274.6559 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 03:34:28 PM - SYSTEM STATS: Time:265.7377 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 03:29:16 PM - SYSTEM STATS: Time:253.7342 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
04/30/2014 03:24:39 PM - SYSTEM BOOST STATS: Time:1.7500 RRDUpdates:4121
Are there any other parameters that need to be checked?
Regards
Soma
- phalek
- Developer
- Posts: 2838
- Joined: Thu Jan 31, 2008 6:39 am
- Location: Kressbronn, Germany
- Contact:
Re: Boost Plugin 5.1 & Poller Performance
You could increase the Cacti log level to HIGH and check individual systems. The log contains the polling time for each individual host. Then look into which of the graphs/queries cause these high polling times.
The lines you should look for are these:
Code: Select all
04/30/2014 10:25:03 AM - SPINE: Poller[0] Host[58] TH[1] Total Time: 0.54 Seconds
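With the log level raised, the slow hosts can be found by sorting those per-host lines on the Total Time field. A sketch of that, using the line above plus one made-up second line (Host[61] and its 4.12 s figure are invented for illustration):

```python
import re

spine_log = """\
04/30/2014 10:25:03 AM - SPINE: Poller[0] Host[58] TH[1] Total Time: 0.54 Seconds
04/30/2014 10:25:07 AM - SPINE: Poller[0] Host[61] TH[1] Total Time: 4.12 Seconds
"""

# Extract (host id, seconds) pairs and list the slowest hosts first.
pattern = re.compile(r"Host\[(\d+)\] TH\[\d+\] Total Time: ([\d.]+) Seconds")
hosts = sorted(((float(t), h) for h, t in pattern.findall(spine_log)),
               reverse=True)
for total_time, host_id in hosts:
    print(f"Host[{host_id}]: {total_time:.2f}s")
```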
Greetings,
Phalek
---
Need more help ? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support ? Look here Cacti Workshop
Need professional Cacti support ? Look here CereusService
---
Plugins : CereusReporting
-
- Cacti User
- Posts: 79
- Joined: Mon Jun 22, 2009 12:58 am
- Location: India
Re: Boost Plugin 5.1 & Poller Performance
phalek wrote: You could increase the Cacti log level to HIGH and check individual systems. The log contains the polling time for each individual host. Then look into which of the graphs/queries cause these high polling times. The lines you should look for are these:
Code: Select all
04/30/2014 10:25:03 AM - SPINE: Poller[0] Host[58] TH[1] Total Time: 0.54 Seconds
Most of the devices are showing the following log entries.
04/30/2014 04:38:01 PM - SPINE: Poller[0] Host[1928] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:38:01 PM - SPINE: Poller[0] Host[2275] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:37:57 PM - SPINE: Poller[0] Host[2141] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:37:54 PM - SPINE: Poller[0] Host[1928] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:37:54 PM - SPINE: Poller[0] Host[2275] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:37:50 PM - SPINE: Poller[0] Host[2141] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:37:47 PM - SPINE: Poller[0] Host[1928] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:37:47 PM - SPINE: Poller[0] Host[2275] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:37:42 PM - SPINE: Poller[0] Host[2141] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:37:40 PM - SPINE: Poller[0] Host[1928] DEBUG: Exceeded Host Timeout, Retrying
04/30/2014 04:37:40 PM - SPINE: Poller[0] Host[2275] DEBUG: Exceeded Host Timeout, Retrying
But I am able to do an SNMP walk, and through the device menu the SNMP responses are fine for these devices. Should I reduce the number of collection threads from 6 to 3 and try?
- phalek
- Developer
- Posts: 2838
- Joined: Thu Jan 31, 2008 6:39 am
- Location: Kressbronn, Germany
- Contact:
Re: Boost Plugin 5.1 & Poller Performance
Yes, unfortunately it's a trial-and-error thing.
You'll have to play around with these figures; unfortunately there's no silver bullet that solves these things.
Greetings,
Phalek
---
Need more help ? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support ? Look here Cacti Workshop
Need professional Cacti support ? Look here CereusService
---
Plugins : CereusReporting
-
- Cacti User
- Posts: 79
- Joined: Mon Jun 22, 2009 12:58 am
- Location: India
Re: Boost Plugin 5.1 & Poller Performance
The present settings are now as follows:
Number of Collection Threads - 1 default
Maximum OID's Per Get Request - 50
Present Logs
04/30/2014 07:01:06 PM - SYSTEM STATS: Time:63.7940 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3989 RRDsProcessed:0
04/30/2014 06:56:09 PM - SYSTEM STATS: Time:66.8783 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3989 RRDsProcessed:0
04/30/2014 06:51:30 PM - SYSTEM STATS: Time:87.6576 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3989 RRDsProcessed:0
04/30/2014 06:46:39 PM - SYSTEM STATS: Time:96.3083 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3989 RRDsProcessed:0
04/30/2014 06:41:40 PM - SYSTEM BOOST STATS: Time:2.4100 RRDUpdates:23902
04/30/2014 06:41:37 PM - SYSTEM STATS: Time:94.5909 Method:spine Processes:1 Threads:15 Hosts:46 HostsPerProcess:46 DataSources:3993 RRDsProcessed:0
Ping timeout - 3000 ms
SNMP timeout - 3000 ms
But still a lot of SNMP timeout logs are appearing.
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36689] WARNING: SNMP timeout detected [3000 ms], ignoring host
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36689] WARNING: SNMP timeout detected [3000 ms], ignoring host
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36689] WARNING: SNMP timeout detected [3000 ms], ignoring host
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36688] WARNING: SNMP timeout detected [3000 ms], ignoring host
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36688] WARNING: SNMP timeout detected [3000 ms], ignoring host
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36688] WARNING: SNMP timeout detected [3000 ms], ignoring host
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36688] WARNING: SNMP timeout detected [3000 ms], ignoring host
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36687] WARNING: SNMP timeout detected [3000 ms], ignoring host
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36687] WARNING: SNMP timeout detected [3000 ms], ignoring host
04/30/2014 07:06:06 PM - SPINE: Poller[0] Host[2469] TH[1] DS[36687] WARNING: SNMP timeout detected [3000 ms], ignoring host
Any other suggestions, please?
Regards
Soma
- phalek
- Developer
- Posts: 2838
- Joined: Thu Jan 31, 2008 6:39 am
- Location: Kressbronn, Germany
- Contact:
Re: Boost Plugin 5.1 & Poller Performance
Seems to be one host only: Host[2469]
Focus on that one host and check your snmp settings, timeouts, threads, oid requests and things like these for this one host only.
Greetings,
Phalek
---
Need more help ? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support ? Look here Cacti Workshop
Need professional Cacti support ? Look here CereusService
---
Plugins : CereusReporting
-
- Cacti User
- Posts: 79
- Joined: Mon Jun 22, 2009 12:58 am
- Location: India
Re: Boost Plugin 5.1 & Poller Performance
phalek wrote: Seems to be one host only: Host[2469]
Focus on that one host and check your snmp settings, timeouts, threads, oid requests and things like these for this one host only.
No Phalek, that was a sample log; I didn't paste all the logs. Out of the 43 devices, most are giving these log messages randomly.
- phalek
- Developer
- Posts: 2838
- Joined: Thu Jan 31, 2008 6:39 am
- Location: Kressbronn, Germany
- Contact:
Re: Boost Plugin 5.1 & Poller Performance
Well, timeouts are quite unspecific; they can have numerous causes. Network utilization as well as device utilization can cause this, as SNMP tends to have a very low priority on devices.
Nevertheless, your timeouts are way too high:
Ping timeout - 3000 ms
SNMP timeout - 3000 ms
That's a round-trip time equal to 10x Germany -> Sydney, so I would reduce that to e.g. 300, at most 400 ms.
Unless you're in the mobile business or really have round-trip times that long.
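To see why a 3000 ms timeout hurts: an unresponsive host can stall its polling thread for timeout x (retries + 1) per request. A back-of-the-envelope sketch; the retry count of 3 and the 2 requests per host are assumptions, not values read from this installation:

```python
def worst_case_stall_ms(timeout_ms: int, retries: int, requests: int) -> int:
    """Worst-case time one thread spends on an unresponsive host:
    every request waits the full timeout, then is retried `retries` times."""
    return timeout_ms * (retries + 1) * requests

# Assumed figures: 2 get requests per host, 3 retries (check your SNMP settings).
print(worst_case_stall_ms(3000, 3, 2))  # 3000 ms timeout: 24 s per dead host
print(worst_case_stall_ms(300, 3, 2))   # 300 ms timeout: 2.4 s per dead host
```

A handful of unreachable hosts at 3000 ms can therefore dominate a 5-minute cycle, which matches the ~280 s polling times seen above.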
Greetings,
Phalek
---
Need more help ? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support ? Look here Cacti Workshop
Need professional Cacti support ? Look here CereusService
---
Plugins : CereusReporting
-
- Cacti User
- Posts: 79
- Joined: Mon Jun 22, 2009 12:58 am
- Location: India
Re: Boost Plugin 5.1 & Poller Performance
Hi Phalek,
I was trying some combinations and finally arrived at this situation.
05/05/2014 01:25:09 PM - SYSTEM STATS: Time:6.5959 Method:spine Processes:5 Threads:15 Hosts:46 HostsPerProcess:10 DataSources:3989 RRDsProcessed:723
05/05/2014 01:20:09 PM - SYSTEM STATS: Time:6.6398 Method:spine Processes:5 Threads:15 Hosts:46 HostsPerProcess:10 DataSources:3989 RRDsProcessed:723
05/05/2014 01:15:10 PM - SYSTEM STATS: Time:7.2628 Method:spine Processes:5 Threads:15 Hosts:46 HostsPerProcess:10 DataSources:3989 RRDsProcessed:704
05/05/2014 01:10:09 PM - SYSTEM STATS: Time:6.7192 Method:spine Processes:5 Threads:15 Hosts:46 HostsPerProcess:10 DataSources:3989 RRDsProcessed:723
05/05/2014 01:05:10 PM - SYSTEM STATS: Time:7.4322 Method:spine Processes:5 Threads:15 Hosts:46 HostsPerProcess:10 DataSources:3989 RRDsProcessed:723
05/05/2014 01:00:10 PM - SYSTEM STATS: Time:7.5644 Method:spine Processes:5 Threads:15 Hosts:46 HostsPerProcess:10 DataSources:3989 RRDsProcessed:762
05/05/2014 12:55:11 PM - SYSTEM STATS: Time:7.5288 Method:spine Processes:5 Threads:15 Hosts:46 HostsPerProcess:10 DataSources:3989 RRDsProcessed:762
But I am unable to conclude on the SNMP timeouts for most of the devices; in fact the timeout values have now been reduced to 300 ms for both ICMP and SNMP in the host table.
Actually mine is not the default SNMP interface polling; it is OID-based polling that I am doing. Do I need to check anything else?
Thanks
Regards
Soma