How do I decrease poller time?

Post general support questions here that do not specifically fall into the Linux or Windows categories.

Moderators: Developers, Moderators

OneZero
Posts: 14
Joined: Sat Jun 11, 2011 7:31 pm

How do I decrease poller time?

Post by OneZero »

Hi, I'm trying to decrease the poll time while I figure out why some of my graphs get chopped but not others. It looks like it's the higher-numbered graphs, the ones that were created most recently.

Does anyone have suggestions on how to decrease the poll time with this many hosts and RRDs?

Currently,
Max Threads is set to 30
Number of PHP Script Servers is set to 3 (but the log's SYSTEM STATS line shows Processes:1 for Spine?)
Script and Server Timeout is set to 20
Maximum OIDS per SNMP Get Request is set to 10

Here are the stats lines from my logs; I can post others, just tell me what you would like to see.
Thanks

Code: Select all

Log File [Total Lines: 120 - Non-Matching Items Hidden]
06/11/2011 08:27:24 PM - SYSTEM STATS: Time:142.9258 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17185
06/11/2011 08:23:00 PM - SYSTEM STATS: Time:178.5417 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17289
06/11/2011 08:18:40 PM - SYSTEM STATS: Time:219.0665 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17379
06/11/2011 08:15:46 PM - SYSTEM STATS: Time:342.7079 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17015
06/11/2011 08:12:57 PM - SYSTEM STATS: Time:775.2992 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17370
06/11/2011 08:10:46 PM - SYSTEM STATS: Time:344.2370 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:14612
06/11/2011 07:57:41 PM - SYSTEM STATS: Time:160.0772 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17394
06/11/2011 07:52:45 PM - SYSTEM STATS: Time:163.7930 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17503
06/11/2011 07:47:19 PM - SYSTEM STATS: Time:137.5568 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17095
06/11/2011 07:42:20 PM - SYSTEM STATS: Time:138.8153 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17392
06/11/2011 07:37:53 PM - SYSTEM STATS: Time:171.5027 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:16698
06/11/2011 07:34:31 PM - SYSTEM STATS: Time:270.6526 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17420
06/11/2011 07:27:30 PM - SYSTEM STATS: Time:148.5095 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17068
06/11/2011 07:22:48 PM - SYSTEM STATS: Time:167.1925 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17440
06/11/2011 07:17:47 PM - SYSTEM STATS: Time:165.7809 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17436
06/11/2011 07:13:07 PM - SYSTEM STATS: Time:184.9738 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17302
06/11/2011 07:11:53 PM - SYSTEM STATS: Time:411.3652 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17273
06/11/2011 07:04:11 PM - SYSTEM STATS: Time:248.9576 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17413
06/11/2011 06:57:26 PM - SYSTEM STATS: Time:144.6996 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17256
06/11/2011 06:52:27 PM - SYSTEM STATS: Time:146.7530 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17211
06/11/2011 06:47:17 PM - SYSTEM STATS: Time:135.4308 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17494
06/11/2011 06:42:18 PM - SYSTEM STATS: Time:136.7487 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17044
06/11/2011 06:37:59 PM - SYSTEM STATS: Time:177.5213 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17162
06/11/2011 06:35:36 PM - SYSTEM STATS: Time:334.8578 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17206
06/11/2011 06:27:22 PM - SYSTEM STATS: Time:140.7109 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17141
06/11/2011 06:22:24 PM - SYSTEM STATS: Time:142.8719 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17360
06/11/2011 06:17:40 PM - SYSTEM STATS: Time:158.8808 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17417
06/11/2011 06:14:38 PM - SYSTEM STATS: Time:277.3262 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17425
06/11/2011 06:13:32 PM - SYSTEM STATS: Time:510.8682 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17425
06/11/2011 06:13:14 PM - SYSTEM STATS: Time:793.6180 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17388
06/11/2011 05:57:28 PM - SYSTEM STATS: Time:146.3823 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17479
06/11/2011 05:52:16 PM - SYSTEM STATS: Time:134.9967 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17540
06/11/2011 05:47:27 PM - SYSTEM STATS: Time:145.9067 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17565
06/11/2011 05:42:26 PM - SYSTEM STATS: Time:145.4006 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17492
06/11/2011 05:37:46 PM - SYSTEM STATS: Time:164.8803 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17368
06/11/2011 05:34:42 PM - SYSTEM STATS: Time:280.4698 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17482
06/11/2011 05:27:22 PM - SYSTEM STATS: Time:140.3151 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17472
06/11/2011 05:22:22 PM - SYSTEM STATS: Time:140.8714 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17371
06/11/2011 05:17:19 PM - SYSTEM STATS: Time:138.1082 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17297
06/11/2011 05:12:31 PM - SYSTEM STATS: Time:150.0533 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17249
06/11/2011 05:08:18 PM - SYSTEM STATS: Time:196.9539 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17461
06/11/2011 05:07:22 PM - SYSTEM STATS: Time:441.0275 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17513
06/11/2011 04:57:44 PM - SYSTEM STATS: Time:163.4499 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17575
06/11/2011 04:52:20 PM - SYSTEM STATS: Time:139.1823 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17509
06/11/2011 04:47:24 PM - SYSTEM STATS: Time:143.7484 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17478
06/11/2011 04:42:17 PM - SYSTEM STATS: Time:136.2077 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17337
06/11/2011 04:37:50 PM - SYSTEM STATS: Time:167.9995 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17465
06/11/2011 04:34:38 PM - SYSTEM STATS: Time:276.2676 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17415
06/11/2011 04:27:37 PM - SYSTEM STATS: Time:155.0101 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17457
06/11/2011 04:22:17 PM - SYSTEM STATS: Time:135.7284 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17438
06/11/2011 04:17:23 PM - SYSTEM STATS: Time:142.1292 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17436
06/11/2011 04:13:14 PM - SYSTEM STATS: Time:192.7771 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17300
06/11/2011 04:11:23 PM - SYSTEM STATS: Time:382.1793 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17390
06/11/2011 04:10:15 PM - SYSTEM STATS: Time:614.2546 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17411
06/11/2011 03:57:33 PM - SYSTEM STATS: Time:151.3601 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17418
06/11/2011 03:52:13 PM - SYSTEM STATS: Time:131.7212 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17417
06/11/2011 03:47:33 PM - SYSTEM STATS: Time:152.0135 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17304
06/11/2011 03:42:23 PM - SYSTEM STATS: Time:141.9593 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17407
06/11/2011 03:37:20 PM - SYSTEM STATS: Time:139.0158 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17505
06/11/2011 03:33:34 PM - SYSTEM STATS: Time:213.3955 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17440
06/11/2011 03:27:20 PM - SYSTEM STATS: Time:139.3265 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17436
06/11/2011 03:22:27 PM - SYSTEM STATS: Time:145.4531 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17231
06/11/2011 03:17:26 PM - SYSTEM STATS: Time:144.9258 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17366
06/11/2011 03:12:30 PM - SYSTEM STATS: Time:148.8173 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17331
06/11/2011 03:08:00 PM - SYSTEM STATS: Time:177.9883 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17337
06/11/2011 03:03:53 PM - SYSTEM STATS: Time:232.1658 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17315
06/11/2011 02:57:48 PM - SYSTEM STATS: Time:167.2620 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17365
06/11/2011 02:52:20 PM - SYSTEM STATS: Time:138.4657 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17328
06/11/2011 02:47:24 PM - SYSTEM STATS: Time:143.0863 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17257
06/11/2011 02:42:31 PM - SYSTEM STATS: Time:149.8681 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17461
06/11/2011 02:38:05 PM - SYSTEM STATS: Time:183.7768 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17392
06/11/2011 02:34:53 PM - SYSTEM STATS: Time:292.6435 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17394
06/11/2011 02:27:14 PM - SYSTEM STATS: Time:132.8764 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17373
06/11/2011 02:22:13 PM - SYSTEM STATS: Time:132.1911 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17390
06/11/2011 02:17:19 PM - SYSTEM STATS: Time:137.8082 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17394
06/11/2011 02:14:34 PM - SYSTEM STATS: Time:271.7441 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17296
06/11/2011 02:11:39 PM - SYSTEM STATS: Time:697.3512 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17369
06/11/2011 02:11:32 PM - SYSTEM STATS: Time:391.5330 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17394
06/11/2011 01:57:32 PM - SYSTEM STATS: Time:150.9152 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17312
06/11/2011 01:52:28 PM - SYSTEM STATS: Time:147.2721 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17323
06/11/2011 01:47:12 PM - SYSTEM STATS: Time:131.4473 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17413
06/11/2011 01:42:12 PM - SYSTEM STATS: Time:131.4312 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17281
06/11/2011 01:37:34 PM - SYSTEM STATS: Time:152.8147 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17279
06/11/2011 01:33:50 PM - SYSTEM STATS: Time:228.9948 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17256
06/11/2011 01:27:33 PM - SYSTEM STATS: Time:151.8504 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17417
06/11/2011 01:22:27 PM - SYSTEM STATS: Time:145.2164 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17421
06/11/2011 01:17:32 PM - SYSTEM STATS: Time:151.3822 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17484
06/11/2011 01:12:33 PM - SYSTEM STATS: Time:152.4230 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17478
06/11/2011 01:07:21 PM - SYSTEM STATS: Time:139.7027 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17497
06/11/2011 01:03:49 PM - SYSTEM STATS: Time:227.4415 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17426
06/11/2011 12:57:33 PM - SYSTEM STATS: Time:151.4462 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17373
06/11/2011 12:52:37 PM - SYSTEM STATS: Time:155.5055 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17365
06/11/2011 12:47:18 PM - SYSTEM STATS: Time:137.2959 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17390
06/11/2011 12:42:28 PM - SYSTEM STATS: Time:146.4145 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17465
06/11/2011 12:38:33 PM - SYSTEM STATS: Time:201.7113 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17433
06/11/2011 12:34:54 PM - SYSTEM STATS: Time:292.7761 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17329
06/11/2011 12:27:25 PM - SYSTEM STATS: Time:142.8291 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17258
06/11/2011 12:22:12 PM - SYSTEM STATS: Time:131.1168 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17166
06/11/2011 12:17:30 PM - SYSTEM STATS: Time:148.2703 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17284
06/11/2011 12:14:12 PM - SYSTEM STATS: Time:250.6023 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17056
06/11/2011 12:11:09 PM - SYSTEM STATS: Time:367.0419 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:15953
06/11/2011 12:10:39 PM - SYSTEM STATS: Time:638.0086 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:16586
06/11/2011 11:57:24 AM - SYSTEM STATS: Time:142.7991 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:16855
06/11/2011 11:52:36 AM - SYSTEM STATS: Time:154.9485 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17277
06/11/2011 11:47:23 AM - SYSTEM STATS: Time:141.3362 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17308
06/11/2011 11:42:37 AM - SYSTEM STATS: Time:155.2277 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17287
06/11/2011 11:37:36 AM - SYSTEM STATS: Time:155.2020 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17276
06/11/2011 11:34:15 AM - SYSTEM STATS: Time:253.4423 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17283
06/11/2011 11:27:52 AM - SYSTEM STATS: Time:170.5080 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17199
06/11/2011 11:22:32 AM - SYSTEM STATS: Time:150.6699 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17352
06/11/2011 11:17:35 AM - SYSTEM STATS: Time:154.0445 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17352
06/11/2011 11:12:48 AM - SYSTEM STATS: Time:166.2481 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17329
06/11/2011 11:07:33 AM - SYSTEM STATS: Time:151.9714 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17258
06/11/2011 11:04:05 AM - SYSTEM STATS: Time:243.3851 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17257
06/11/2011 10:57:33 AM - SYSTEM STATS: Time:152.4194 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17211
06/11/2011 10:52:37 AM - SYSTEM STATS: Time:155.8460 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17298
06/11/2011 10:47:54 AM - SYSTEM STATS: Time:171.8230 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17125
06/11/2011 10:42:35 AM - SYSTEM STATS: Time:153.7426 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17306
06/11/2011 10:37:56 AM - SYSTEM STATS: Time:174.4300 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17105
06/11/2011 10:34:58 AM - SYSTEM STATS: Time:296.1315 Method:spine Processes:1 Threads:30 Hosts:1001 HostsPerProcess:1001 DataSources:24589 RRDsProcessed:17195 
noname
Cacti Guru User
Posts: 1566
Joined: Thu Aug 05, 2010 2:04 am
Location: Japan

Re: How do I decrease poller time?

Post by noname »

Some suggestions:
- http://forums.cacti.net/viewtopic.php?f=2&t=39399

For example, try to increase concurrent processes.
(But take care not to exceed MySQL "max_connections")

Or, consider using boost plugin.
http://docs.cacti.net/plugin:boost
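To put numbers on the max_connections caveat - a rough sketch, assuming each spine process opens roughly one MySQL connection per thread plus one for itself (the exact count depends on your spine build), with a hypothetical new process count plugged in alongside this thread's thread setting:

```shell
#!/bin/sh
# Worst-case MySQL connection estimate before raising the
# "Maximum Concurrent Poller Processes" setting.
PROCESSES=4    # hypothetical new process count
THREADS=30     # Max Threads from the settings above
NEEDED=$(( PROCESSES * (THREADS + 1) ))   # +1 for each process's main thread
echo "worst-case MySQL connections: $NEEDED"   # -> worst-case MySQL connections: 124
# Compare against the server limit:
#   mysql -e "SHOW VARIABLES LIKE 'max_connections'"
```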
gninja
Cacti User
Posts: 371
Joined: Tue Aug 24, 2004 5:02 pm
Location: San Francisco, CA

Re: How do I decrease poller time?

Post by gninja »

Check your logs and look at what was being polled near the end of the 5-minute interval during one of the runs with excessive times. What usually happens in an erratic situation like this is that some cluster of hosts (often recently added ones) is failing as a group, probably via some kind of script-based query without a built-in timeout.

A little debugging and a little log tailing should help you figure out where your issue lies; then we can help more.
FreeBSD/RHEL
cacti-0.8.7i, spine 0.8.7i, PIA 3.1+boost 5.1
MySQL 5.5/InnoDB
RRDtool 1.2.27, PHP 5.1.6
OneZero
Posts: 14
Joined: Sat Jun 11, 2011 7:31 pm

Re: How do I decrease poller time?

Post by OneZero »

I'm looking into boost; is there any more documentation on how to set it up than the links you posted?

Also, I do have a few shell scripts that have been created. What would I look for in them to ensure they have built-in timeouts?
Thanks for all the help, I'm still trying to weed them out. Also, I'm possibly going to look at moving this to a new server. This one is only a single-core AMD with 2 GB of RAM, which might be part of my issue as well.
gninja
Cacti User
Posts: 371
Joined: Tue Aug 24, 2004 5:02 pm
Location: San Francisco, CA

Re: How do I decrease poller time?

Post by gninja »

OneZero wrote:Also, I do have a few shell scripts that have been created. What would I look for in them to ensure they have built-in timeouts?
Thanks for all the help, I'm still trying to weed them out. Also, I'm possibly going to look at moving this to a new server. This one is only a single-core AMD with 2 GB of RAM, which might be part of my issue as well.
I'm going to go out on a limb and guess that your scripts probably don't have timeouts. In shell (Bash/etc.) it's pretty difficult to get alarm/timeout code working right; if the scripts are Perl or Python, it's quite a bit easier.

What are you doing with your custom scripts?

If you have any ideas as to what script(s) might be causing the longer poller time, you can wrap it in: http://www.cpan.org/authors/id/D/DE/DEX ... ut-0.11.pl
Either use that inside the shell scripts, wrapping whatever command actually collects the data (unless that command has its own timeout option), or update Cacti's reference to your script to look something like:

Code: Select all

<script_path>/usr/bin/perl |path_cacti|/scripts/timeout.pl -9 5 |path_cacti|/scripts/ft_counts.sh</script_path>
That will cause the timeout.pl script to launch my ft_counts.sh script, killing it after a five-second timeout. It's often better to fix the script in question, but if there are external failures out of your control...

It's not great if you can't collect your data, but if you've got a timeout for your scripts, then at least (for example) someone screwing up DNS resolution doesn't cascade and cause Cacti to fall over.
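If fetching the Perl wrapper is a hassle, GNU coreutils ships a `timeout` command that does the same job (assuming it's available on your poller; the snmpget line below is just this thread's example OID, not a prescription):

```shell
#!/bin/sh
# Hedged sketch: kill the collection command after 5 seconds so one
# unresponsive host can't stall the whole poll. GNU "timeout" exits
# with status 124 when the time limit is hit (137 with -s KILL).
timeout 5 snmpget -v1 -c public "$1" .1.3.6.1.4.1.12394.1.1.11.10.0
```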
FreeBSD/RHEL
cacti-0.8.7i, spine 0.8.7i, PIA 3.1+boost 5.1
MySQL 5.5/InnoDB
RRDtool 1.2.27, PHP 5.1.6
OneZero
Posts: 14
Joined: Sat Jun 11, 2011 7:31 pm

Re: How do I decrease poller time?

Post by OneZero »

Actually, after digging into my shell scripts, they are simply snmpgets followed by some cutting. So I added a -t 2 to time out the snmpget after 2 seconds if there's no response. Think it might still be too high?

All of my hosts are wireless devices, and some of them have high pings at times, so I have been trying to adjust my timeouts. I'd rather it just fail on one graph than muck up the whole thing.

Code: Select all

snmpget -t 2 -v1 -Cf -c public $1 .1.3.6.1.4.1.12394.1.1.11.10.0|cut -d: -f4|cut -d- -f2
gninja
Cacti User
Posts: 371
Joined: Tue Aug 24, 2004 5:02 pm
Location: San Francisco, CA

Re: How do I decrease poller time?

Post by gninja »

OneZero wrote:

Code: Select all

snmpget -t 2 -v1 -Cf -c public $1 .1.3.6.1.4.1.12394.1.1.11.10.0|cut -d: -f4|cut -d- -f2
2 seconds for an snmpget should be fine unless the devices have particularly high latency. But since snmpget defaults to a 1-second timeout (unless you passed in different configure options), you probably want to leave the timeout alone and adjust the retries instead; the default is five, so try two. Also, with some output flags you can skip your cuts.

Try:

Code: Select all

snmpget -r 2 -OQv -v1 -Cf -c public $1 .1.3.6.1.4.1.12394.1.1.11.10.0
FreeBSD/RHEL
cacti-0.8.7i, spine 0.8.7i, PIA 3.1+boost 5.1
MySQL 5.5/InnoDB
RRDtool 1.2.27, PHP 5.1.6
gninja
Cacti User
Posts: 371
Joined: Tue Aug 24, 2004 5:02 pm
Location: San Francisco, CA

Re: How do I decrease poller time?

Post by gninja »

gninja wrote:Also, with some output flags you can skip your cuts.
Well, you can skip the first cut; I'm not 100% sure what the second cut does without seeing the snmpget output.
FreeBSD/RHEL
cacti-0.8.7i, spine 0.8.7i, PIA 3.1+boost 5.1
MySQL 5.5/InnoDB
RRDtool 1.2.27, PHP 5.1.6
OneZero
Posts: 14
Joined: Sat Jun 11, 2011 7:31 pm

Re: How do I decrease poller time?

Post by OneZero »

Thanks for the tips on snmpget; I wasn't aware there was an argument to strip the output. The last cut removes the - from the value (-85).
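To spell out what the two cuts do on a sample line (the full output line here is a guess at typical snmpget formatting; only the -85 value comes from this thread):

```shell
#!/bin/sh
# Walk through the original pipeline on a canned snmpget line.
LINE='SNMPv2-SMI::enterprises.12394.1.1.11.10.0 = INTEGER: -85'
echo "$LINE" | cut -d: -f4                 # -> " -85" (fourth colon-delimited field)
echo "$LINE" | cut -d: -f4 | cut -d- -f2   # -> "85"  (minus sign stripped)
# With -OQv the first cut is unneeded; tr drops the sign explicitly:
echo '-85' | tr -d '-'                     # -> 85
```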
gninja
Cacti User
Posts: 371
Joined: Tue Aug 24, 2004 5:02 pm
Location: San Francisco, CA
Contact:

Re: How do I decrease poller time?

Post by gninja »

Let me know what you see after that change, and if you've found anything in your logs that might point to some other issue.

That change should help, if that's the script causing the issue. Your poll times should hopefully stabilize a bit.

As for setting up Boost, just follow the docs. If you get stuck, ask here or in one of the boost threads. Not having local disk I/O impacting your poller time can help out a huge amount - but unless you have very odd disk usage patterns, that file I/O time wouldn't cause the wildly variable poll times you're seeing. Maybe someone's using a microwave near where you're collecting wireless data. :)
FreeBSD/RHEL
cacti-0.8.7i, spine 0.8.7i, PIA 3.1+boost 5.1
MySQL 5.5/InnoDB
RRDtool 1.2.27, PHP 5.1.6
OneZero
Posts: 14
Joined: Sat Jun 11, 2011 7:31 pm

Re: How do I decrease poller time?

Post by OneZero »

Is it possible the problem is not in getting the data (snmpgets from Cacti as well as scripts), but instead in the rrdtool updates? From the debug logs, it looks like the poller only takes about 60 seconds to poll all the devices, but it takes a lot longer to do the rrdtool updates.
gninja
Cacti User
Posts: 371
Joined: Tue Aug 24, 2004 5:02 pm
Location: San Francisco, CA
Contact:

Re: How do I decrease poller time?

Post by gninja »

It is entirely likely that Boost will have a huge impact on your poller times; you absolutely should not underestimate it. I only suspect a data collection issue because your poll times vary so wildly - when you have a 140-second baseline with such huge variance, it's usually data collection causing the variance.

If it is the rrd output causing all your trouble, then you may have some kind of disk issue. Maybe run iostat to look at disk throughput while an update is running? Check out dmesg? If you're running a software RAID, that might also be part of it. Cacti polling is pretty complicated when you get down into all the various things it can do, so the list of possible issues - especially when you start making custom graphs - can be pretty large; narrowing it down sometimes takes a little time.
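For example, a crude synchronous-write check with GNU dd (the path and sizes here are arbitrary; oflag=sync forces each block to disk, which roughly resembles many small rrd updates):

```shell
#!/bin/sh
# Sketch of a small-block synced write test; run it on the filesystem
# that holds the .rrd files while a poll cycle is active and compare
# the reported throughput against a quiet period.
dd if=/dev/zero of=/var/tmp/rrd-disktest bs=4k count=10000 oflag=sync
rm -f /var/tmp/rrd-disktest
# And watch per-device utilization at the same time:
#   iostat -x 5
```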
FreeBSD/RHEL
cacti-0.8.7i, spine 0.8.7i, PIA 3.1+boost 5.1
MySQL 5.5/InnoDB
RRDtool 1.2.27, PHP 5.1.6
OneZero
Posts: 14
Joined: Sat Jun 11, 2011 7:31 pm

Re: How do I decrease poller time?

Post by OneZero »

Well, that's probably it then. I am running software RAID: a RAID 0 and a RAID 1. I'll keep working on getting Boost going, but I guess I'll start working on getting a new server to move all this to as well.
CCNAJ
Posts: 30
Joined: Fri Apr 15, 2011 7:13 am
Location: Australia

Re: How do I decrease poller time?

Post by CCNAJ »

I was having the same problem: at a high-latency location, once the host had more than 10 graphs it started to time out.

I played around and realized that Cacti usually polls each device as a single unit, so as a workaround I split the host.

I had 50 graphs on one host - and timeouts - and only 3 poller processes would run.

Now I have 20, 20, and 10 graphs on three hosts (even though it is the same device) and 6 poller processes... all processes finish within 180 seconds.
++++++ Cacti Newbie ++++++++++++++++++++++++++++++++++++++++++
Cacti 0.8.8f with PIA on a Centos6 VM
Plugins Used: Infopage | RouterConfigs | Spikekill | BOOST | REALTIME | CLOG