Poller runtime varying widely


BSOD2600
Cacti Moderator
Posts: 12171
Joined: Sat May 08, 2004 12:44 pm
Location: USA

Post by BSOD2600 »

Mind posting the other Cacti stats graph, the one that shows the RRD counts?

Also, your signature should say Cacti 0.8.7 -- 0.9.7 won't be released for years ;-).

Are you monitoring the memory usage of the server? Regardless, what are its usage stats? That would help you determine whether more memory would help...
Christian
Posts: 46
Joined: Thu Feb 14, 2008 4:24 am
Location: Oelde/Gütersloh, NRW, Germany

Post by Christian »

Here's everything I've got. The memory graph seems to be broken though.
But I can tell you that there's about 4 MiB - 50 MiB of free memory during the polling process.

I grabbed something out of the log for you:

Code: Select all

03:35:02 PM - SPINE: [...]linux_memory.pl MemFree:, output: 13976
03:30:02 PM - SPINE: [...]linux_memory.pl MemFree:, output: 26656
03:25:01 PM - SPINE: [...]linux_memory.pl MemFree:, output: 21552
03:20:02 PM - SPINE: [...]linux_memory.pl MemFree:, output: 27284
03:15:01 PM - SPINE: [...]linux_memory.pl MemFree:, output: 20540
03:10:01 PM - SPINE: [...]linux_memory.pl MemFree:, output: 41852
03:05:03 PM - SPINE: [...]linux_memory.pl MemFree:, output: 15384
03:00:01 PM - SPINE: [...]linux_memory.pl MemFree:, output: 44480
02:55:02 PM - SPINE: [...]linux_memory.pl MemFree:, output: 27168
02:50:01 PM - SPINE: [...]linux_memory.pl MemFree:, output: 40576
02:45:02 PM - SPINE: [...]linux_memory.pl MemFree:, output: 25484
02:40:02 PM - SPINE: [...]linux_memory.pl MemFree:, output: 42576
02:35:02 PM - SPINE: [...]linux_memory.pl MemFree:, output: 14036
02:30:01 PM - SPINE: [...]linux_memory.pl MemFree:, output: 54192
02:25:02 PM - SPINE: [...]linux_memory.pl MemFree:, output: 58740
02:20:01 PM - SPINE: [...]linux_memory.pl MemFree:, output: 50024
02:15:01 PM - SPINE: [...]linux_memory.pl MemFree:, output: 44884
02:10:01 PM - SPINE: [...]linux_memory.pl MemFree:, output: 46496
02:05:02 PM - SPINE: [...]linux_memory.pl MemFree:, output: 8336
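
For reference, those MemFree numbers are the kilobyte values from /proc/meminfo; a quick way to spot-check the same figure outside the poller (assuming linux_memory.pl simply reads that field, which is how the stock Cacti script works) is:

Code: Select all

# Print the current MemFree value in kB, the same figure the poller output above shows
grep '^MemFree:' /proc/meminfo | awk '{print $2}'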
This data isn't exactly useful if you try to relate the server's lack of memory to the spikes occurring every sixth poller run.

Before the gap Cacti was running cactid; after the gap spine has been plugging away.

The slight change in the RRD count was due to me deleting some unused data sources.
Attachments
poller_statistics.PNG
cacti: 0.8.7g
spine: 0.8.7g
plugins:
specs: 2xIntel Xeon @ 2.40GHz | 6GiB RAM | CentOS 5.5 | PHP 5.1.6 | MySQL 5.0.77 | RRDTool 1.4.4 | Apache/2.2.3
BSOD2600
Cacti Moderator
Posts: 12171
Joined: Sat May 08, 2004 12:44 pm
Location: USA

Post by BSOD2600 »

Christian wrote:But I can tell you that there's about 4 MiB - 50 MiB of free memory during the polling process.
That's it? Yikes. Then I'd have to agree that more memory will probably solve your issue.
aleu
Cacti User
Posts: 216
Joined: Mon Dec 11, 2006 10:17 am

Post by aleu »

BSOD2600 wrote:
Christian wrote:But I can tell you that there's about 4 MiB - 50 MiB of free memory during the polling process.
That's it? Yikes. Then I'd have to agree that more memory will probably solve your issue.
Guys, my $0.02. Not so long ago I upgraded my install to PA 1.4, Cacti 0.8.7a, and spine. The installation seems to be working fine, but...

My poller runtime keeps growing steadily by a second or two a day. Shortly after the upgrade it started at around 75 seconds; today it is around 175 seconds. I have no idea why it is growing, and it does not look like it is going to level off at any point. BTW, I recently rebooted my Cacti server, and immediately after the reboot the poller runtime was back at around 175 seconds. The server load does not change much (4.75, 3.50, 3.30 for 1 min, 5 min, and 15 min).

Do you have any idea how to troubleshoot this? My hardware specs: 2x 3.06 GHz Xeon with 2 GB RAM.
Cacti Version - 0.8.7a
Plugin Architecture - 1.4
Poller Type - Cactid v
Server Info - Linux 2.6.9-5.ELsmp
Web Server - Apache/2.0.52 (Red Hat)
PHP - 4.3.9
PHP Extensions - yp, xml, wddx, tokenizer, sysvshm, sysvsem, standard, sockets, shmop, session, pspell, posix, pcre, overload, mime_magic, iconv, gmp, gettext, ftp, exif, dio, dbx, dba, curl, ctype, calendar, bz2, bcmath, zlib, openssl, apache2handler, ldap, mysql
MySQL - 4.1.7
RRDTool - 1.2.15
SNMP - 5.1.2
Plugins
  • Thresholds (thold - v0.3.9)
  • Network Discovery (discovery - v0.8.3)
  • Global Plugin Settings (settings - v0.3)
  • Update Checker (update - v0.4)
  • Documents (docs - v0.1)
  • Host Info (hostinfo - v0.2)
  • IP Subnet Calculator IPv4 IPv6 (ipsubnet - v.4d)
  • Device Monitoring (monitor - v0.8.2)
  • Network Tools (tools - v0.2)
  • Cycle Graphs (Cycle Graphs - v0.4)
  • RRD File Cleaner (RRD Cleaner - v0.34)
2500 RRDs and around 5000 data sources. The poller is set up as:

4 concurrent poller processes
Max threads per process: 15
2 PHP script servers
Max SNMP OIDs per SNMP GET request: 60
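
One rough way to see whether the runtime growth tracks a growing number of hosts or data sources, or is happening on its own, is to compare the poller's SYSTEM STATS lines over time; a minimal sketch, assuming the default log location under the Cacti install:

Code: Select all

# Each poller run logs a SYSTEM STATS line with the run time and the host/data source counts;
# comparing old and recent lines shows whether the runtime grows along with the data source count.
grep "SYSTEM STATS" /var/www/cacti/log/cacti.log | tail -n 20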


Please advise.
Thanks
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA

Post by TheWitness »

Well, Cactid is not compatible with Cacti 0.8.7x. You need to use spine, and for now you should get the SVN version, as there have been problems reported on 64-bit systems.

Then, do a "ps -ef | grep cactid" to make sure you don't have a load issue.

Finally, do the following:

Code: Select all

cd <path_cacti>/rra
du -k .
How much data do you have? Make sure your server has 1+ GB of RAM more than what is in that directory.
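
As a concrete way to compare the two numbers, something like the following works (the path is only an example; adjust <path_cacti> for your install):

Code: Select all

# Total size of the RRA directory in kilobytes...
du -sk /var/www/cacti/rra
# ...versus total RAM in kilobytes; the rule of thumb above is RAM = RRA size + 1 GB or more
grep MemTotal /proc/meminfo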

TheWitness
True understanding begins only when we realize how little we truly understand...

Life is an adventure, let yours begin with Cacti!

Author of dozens of Cacti plugins and customizations. Advocate of LAMP, MariaDB, IBM Spectrum LSF and the world of batch. Creator of IBM Spectrum RTM, author of quite a bit of unpublished work and most of Cacti's bugs.
_________________
Official Cacti Documentation
GitHub Repository with Supported Plugins
Percona Device Packages (no support)
Interesting Device Packages


For those wondering, I'm still here, but lost in the shadows. Yearning for fewer bugs. Who wants a Cacti 1.3/2.0? Streams anyone?
aleu
Cacti User
Posts: 216
Joined: Mon Dec 11, 2006 10:17 am

Post by aleu »

TheWitness wrote:Well, Cactid is not compatible with Cacti 0.8.7x. You need to use spine, and for now you should get the SVN version, as there have been problems reported on 64-bit systems.

Then, do a "ps -ef | grep cactid" to make sure you don't have a load issue.
I did compile/install spine (the latest version, though not from SVN) and I do not see any cactid entries in the logs. However, the host info plugin reports it as cactid :-)
TheWitness wrote: Finally, do the following:

Code: Select all

cd <path_cacti>/rra
du -k .
How much data do you have? Make sure your server has 1+ GB of RAM more than what is in that directory.
TheWitness
Well, this seems to be a problem here. I have 2.2 GB of RRD data and 2 GB of RAM installed in this server. Do you think this is causing the increasing poller time?
Frizz
Cacti User
Posts: 80
Joined: Sat Mar 05, 2005 5:07 pm
Location: Herne Germany

Post by Frizz »

aleu wrote:
TheWitness wrote:Well, Cactid is not compatible with Cacti 0.8.7x. You need to use spine, and for now you should get the SVN version, as there have been problems reported on 64-bit systems.

Then, do a "ps -ef | grep cactid" to make sure you don't have a load issue.
I did compile/install spine (the latest version, though not from SVN) and I do not see any cactid entries in the logs. However, the host info plugin reports it as cactid :-)
TheWitness wrote: Finally, do the following:

Code: Select all

cd <path_cacti>/rra
du -k .
How much data do you have? Make sure your server has 1+ GB of RAM more than what is in that directory.
TheWitness
Well, this seems to be a problem here. I have 2.2 GB of RRD data and 2 GB of RAM installed in this server. Do you think this is causing the increasing poller time?
I agree completely with TheWitness, as I had the same experience when our RRA directory exceeded the physical RAM (du -h /rra). The 30-minute cycle is the runtime of the first RRD consolidation function (if the default is used), which has to write more data to every RRD file, and the I/O subsystem does not have enough memory cache. We have since reduced our RRD volume and the polling cycle is now running at a constant time (6 GB rra with 8 GB RAM).
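
If you want to see how much consolidation work each file implies, rrdtool can list the consolidation functions (RRAs) stored in an RRD; a small sketch, with the file name being only an example:

Code: Select all

# Show the consolidation function, row count and pdp_per_row of every RRA in one Cacti RRD file;
# more and larger RRAs mean more data touched on each update, which hurts once the working set
# no longer fits in the page cache.
rrdtool info /var/www/cacti/rra/localhost_load_1min_5.rrd | grep -E '^rra\[[0-9]+\]\.(cf|rows|pdp_per_row)'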
Best regards.
Frizz
Cacti 0.8.6j | Cactid 0.8.6j | RRDtool 1.2.23 |
SuSe 9.x | PHP 4.4.4 | MySQL 5.0.27 | IHS 2.0.42.1
Come and join the 3.CCC.eu
http://forums.cacti.net/viewtopic.php?t=27908