Switching from CMD.PHP to SPINE now shows gaps in graphs


jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

The issue is simple here. CMD.PHP was taking upwards of 6-7 minutes to complete for 33 hosts and roughly 15K data sources on this server. So I switched to SPINE, and now polling completes in anywhere from 4-8 seconds for everything! But now I get a substantial number of small gaps in the graphs, as shown below:

[Screenshot: graphs showing small, recurring gaps]
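For anyone wanting to double-check, one way I verify the gaps are really NaN rows in the RRD itself (and not just a rendering glitch) is something like this; the RRD path is just an example from my install, adjust to one of yours:

Code: Select all

# Example only -- point this at one of your own RRDs.
# Any row printing "nan" during a gap means the poller never wrote that interval.
rrdtool fetch /var/lib/cacti/rra/13/3835.rrd AVERAGE --start -2h --end now | grep -i nan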


Here are the settings I'm using:

Code: Select all

### SERVER INFO ###
CPU:   2 x 4 Cores
RAM:   16GB
DISK:   750GB (RRD LUN)
NIC:   10Gbps  

Code: Select all

### CACTI VERSION INFO ###
CACTI VERSION:    0.8.8b
SPINE VERSION:     0.8.8b1
% dpkg -l | grep spine
ii  cacti-spine                         0.8.8b-1                         amd64        Multi-Threading poller for cacti

Code: Select all

### CACTI POLLER BASE SETTINGS ###
POLLER:   SPINE
POLLER INTERVAL:   5 Minutes
CRON INTERVAL:     5 Minutes
Maximum Concurrent Poller Processes:    4
Balance Load Process:    Yes
Maximum Threads Per Process:    8
Number of PHP Script Servers:    10
Script and Script Server Timeout Value:   300
Maximum OIDs Per SNMP Get:     100   (I have tried values ranging from 1 - 100)

Code: Select all

### SYSTEM STATS OUTPUT ###
01/06/2015 10:10:07 AM - SYSTEM STATS: Time:5.7336 Method:spine Processes:4 Threads:8 Hosts:33 HostsPerProcess:9 DataSources:15261 RRDsProcessed:5744

Code: Select all

### DEBUG INFO ###
01/06/2015 10:10:07 AM - SPINE: Poller[0] Time: 5.6914 s, Threads: 8, Hosts: 17   <--- Should be 33 Hosts
01/06/2015 10:10:07 AM - SPINE: Poller[0] DEBUG: Net-SNMP Close Completed
01/06/2015 10:10:07 AM - SPINE: Poller[0] DEBUG: MYSQL Free & Close Completed
01/06/2015 10:10:07 AM - SPINE: Poller[0] DEBUG: Allocated Variable Memory Freed
01/06/2015 10:10:07 AM - SPINE: Poller[0] DEBUG: PHP Script Server Pipes Closed
01/06/2015 10:10:07 AM - SPINE: Poller[0] DEBUG: Thread Cleanup Complete
01/06/2015 10:10:07 AM - SPINE: Poller[0] DEBUG: The Value of Active Threads is 0
01/06/2015 10:10:07 AM - SPINE: Poller[0] Host[186] TH[1] DEBUG: HOST COMPLETE: About to Exit Host Polling Thread Function
01/06/2015 10:10:07 AM - SPINE: Poller[0] Host[186] TH[1] Total Time: 4 Seconds
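Side note on the "Hosts: 17" line above: with 4 concurrent processes, I believe each spine invocation only reports its own share of the hosts, so as a sanity check I grep the per-process summary lines and make sure the counts add up to 33 per cycle. The log path is from my install, adjust to yours:

Code: Select all

# Sanity check: the per-process summary lines for one polling cycle.
# The "Hosts:" values across the 4 processes should sum to 33.
grep "SPINE: Poller\[0\] Time:" /var/log/cacti/cacti.log | tail -n 4
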
First, I've been searching the forums, and I see a few people with this issue. I've been running debug-level logging with the SPINE poller, but absolutely nothing jumps out at me. By all accounts, everything "looks" fine. I can switch back to CMD.PHP, but it overruns the polling interval by minutes with this many data sources. I do have a lot of plugins, but for troubleshooting I've disabled everything except AUTOM8, MONITOR and a few others that don't actually launch PHP scripts or engage poller.php.

Also worth noting: I implemented SPINE following the standard procedure from Gandalf's signature links, and I've also started debugging according to the how-tos. Any thoughts on this? It has to be something basic that I'm missing here. I've never had this problem on any of the dozens of other Cacti servers I've implemented...
-------------------------------------

VERSION: Cacti 0.8.8b
POLLER: Spine
DATA SOURCES: 100,000K and Growing (Multiple Servers)
PLUGINS: AUTOM8, THOLD, DISCOVER, WEATHERMAP, BOOST, CLOG, NECTAR, MACTRACK, FLOWVIEW, SPIKEKILL, INTROPAGE, MONITOR
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

*** EDIT (Removed victory text) ***

Spoke too soon. Problem absolutely still exists.
phalek
Developer
Posts: 2838
Joined: Thu Jan 31, 2008 6:39 am
Location: Kressbronn, Germany

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by phalek »

Try changing the Balance Load Process setting to No.

I've had it skip some hosts/DSes from time to time.
Greetings,
Phalek
---
Need more help? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support? Look here: Cacti Workshop
Need professional Cacti support? Look here: CereusService
---
Plugins : CereusReporting
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

phalek wrote:Try changing the Balance Load Process setting to No.

I've had it skip some hosts/DSes from time to time.

Awesome, I will give this a shot and cross my fingers :)
ClontarfX
Posts: 4
Joined: Tue Jan 06, 2015 2:13 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by ClontarfX »

Also check that your RRDs are being updated every polling interval, and that the spine process has permission to read and modify those files.
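Something along these lines is what I'd look at; the rra path and the poller user are examples (www-data on a stock Debian/Ubuntu Apache setup), adjust to your install:

Code: Select all

# Example checks only -- adjust the rra path and the user your poller runs as.
# 1) Are the RRDs actually being touched every cycle? Anything listed here
#    hasn't been written in over two 5-minute cycles.
find /var/lib/cacti/rra -name '*.rrd' -mmin +10 | head
# 2) Can the poller user write to them?
ls -ld /var/lib/cacti/rra
sudo -u www-data test -w /var/lib/cacti/rra/13/3835.rrd && echo writable || echo NOT writable
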
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

ClontarfX wrote:Also check that your RRDs are being updated every polling interval, and that the spine process has permission to read and modify those files.
Okay, so regarding Phalek's recommendation: no joy there.

As for ClontarfX's suggestion, permissions "appear" to be solid. After switching to SPINE, I dumped the poller cache, and I've even deleted all the previous RRD files and let them rebuild completely from scratch (this install is pre-prod). I did find and correct some errors related to exactly this, which were fixed by running chmod +s on the spine binary, but that's it. The RRDs are being updated normally. However, every now and again the gaps appear. I turned on DEBUG-level logging and let it run through the night, and there are quite a few entries in the ERROR/SQL CALLS log. Screenshot below...

So, regarding those errors, I've done a little googling and searching on these forums (http://forums.cacti.net/viewtopic.php?f ... 8&start=30), but I'm truly not sure what to make of this. I'm far from a SQL guy or a PHP guy... I'm a network guy. So perhaps it's something obvious, but it's lost on me at this point. This is a standard Ubuntu 14.04.1 LTS Debian Cacti install, so I assumed all the versions and settings in the package are solid. Any thoughts?

[Screenshot: repeated SQL call errors in the Cacti log]
phalek
Developer
Posts: 2838
Joined: Thu Jan 31, 2008 6:39 am
Location: Kressbronn, Germany

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by phalek »

Lost connection: maybe your MySQL connection count is reaching a limit. Try the recommendation from BSOD2600 in this posting: http://forums.cacti.net/viewtopic.php?f ... ns#p254413

Error 2006 could also be this case: http://stackoverflow.com/questions/1047 ... -gone-away
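If you want to rule those out quickly, this is roughly what I'd look at on the MySQL side; nothing authoritative, just the variables that usually matter for lost connections and error 2006:

Code: Select all

# Limits and counters that usually matter for "server has gone away" / lost connections
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';
                     SHOW VARIABLES LIKE 'wait_timeout';
                     SHOW VARIABLES LIKE 'max_connections';
                     SHOW STATUS LIKE 'Aborted_clients';
                     SHOW STATUS LIKE 'Aborted_connects';"
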
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

phalek wrote:Lost connection: maybe your MySQL connection count is reaching a limit. Try the recommendation from BSOD2600 in this posting: http://forums.cacti.net/viewtopic.php?f ... ns#p254413

Error 2006 could also be this case: http://stackoverflow.com/questions/1047 ... -gone-away

I suppose that's possible from what BSOD2600 posts, but I actually adjusted that before implementing spine; I had caught it during my googling adventures when I was researching. As for the Stack Overflow post, that value is already sitting at the exact recommended 64M from when I googled heavy-load fine-tuning of the DB side earlier.

To that end, I just saw two things that bother me.

1. The default MySQL value for "max_connections" that ships with the Debian package is 151. Considering what BSOD2600 said above, I'd be WAY over that number based on the threading in place with SPINE. I'm going to bump it to the maximum, which according to http://dev.mysql.com/doc/refman/5.5/en/ ... onnections is 16,384 for MySQL 5.5 (I'll verify actual usage with the queries below).

2. There is a value that handles threading in my.cnf called "thread_cache_size". Based on "recommendations" from various MySQL performance-tuning sites, it should be adjusted under heavy load, so I had it set to 8. Its default is actually 0, which effectively disables the thread cache (shame on me for not restoring my backup of my.cnf for troubleshooting). So I've set it back to 0, and I'm going to increase max_connections as stated above.
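To see whether I'm actually blowing past the connection limit (and whether the thread cache was doing anything), these are the checks I'm running while a polling cycle is in flight; purely my own sanity checks:

Code: Select all

# My own sanity checks, run during a polling cycle:
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';
                     SHOW STATUS LIKE 'Max_used_connections';
                     SHOW VARIABLES LIKE 'thread_cache_size';
                     SHOW STATUS LIKE 'Threads_created';"
# And the [mysqld] changes I'm testing in /etc/mysql/my.cnf:
#   max_connections   = 16384
#   thread_cache_size = 0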

If I stop getting gaps in the graphs, I will then roll the "max_connections" value back to default (commented out, which gives you 151), because I want to know, for future reference, which of the two settings was making spine angry.

I will report back after some time has gone by. Thanks, guys!
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

FYI - I'm now wondering if this might be my issue here:

http://forums.cacti.net/viewtopic.php?f=2&t=46338

I've reverted my number of threads down to 1. If that solves the issue, then I'll have to figure out how to do all this MySQL/Spine juju with recompiling against different library versions and all that fun stuff... I wonder if upgrading MySQL from the default 5.5 to 5.6 would fix this; Spine would then reference the newer libmysqlclient_r.so.XXXX file...
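For my own notes, this is how I'm checking which MySQL client library the packaged spine binary is actually linked against; the path is from the Debian package, adjust if you built spine yourself:

Code: Select all

# Which libmysqlclient is the packaged spine linked against, and which package owns it?
ldd /usr/sbin/spine | grep -i mysql
dpkg -S "$(ldd /usr/sbin/spine | awk '/libmysqlclient/ {print $3}')"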

I really like seeing polling take only 5 seconds for 15K data sources. This particular cluster will eventually house about 500 hosts and probably 150K data sources... so the threading will be vital. I just saw that with a single process and no extra threads, polling is at 37 seconds now. I need to let a few hours pass to see if the gaps go away.
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

jimcjulsonjr wrote:FYI - I'm now wondering if this might be my issue here: http://forums.cacti.net/viewtopic.php?f=2&t=46338 [...]

Okay, I'm not going to write the "Victory Post" yet, but this is promising. I purged the logs after reducing the processes/threads, and all actual errors are gone in full debug mode. No warnings either. Everything is just verbose logging, as it should be. Prior to this, every polling cycle was yielding "some" sort of error or warning related to the original issue.

So it looks like I'm suffering from the same thing as the post in the link above. So, is this a MySQL issue, a SPINE issue, or a Cacti issue? Or a combination thereof? It looks like this has plagued some folks for a couple of years now, or at least since October 2012. I have to believe it's corrected in more recent versions of MySQL...
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

jimcjulsonjr wrote:Okay, I'm not going to write the "Victory Post" yet, but this is promising. [...] So it looks like I'm suffering from the same thing as the post in the link above. [...]
Okay, it's official. I haven't had one "blip" since removing the threading options. Sadly, though, my polling went from 4-7 seconds to 45-60 seconds, and I still have hundreds of devices to add before this thing gets turned over to PROD... So it's looking like a potentially bumpy road ahead in terms of figuring out an actual "fix".
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

jimcjulsonjr wrote:Okay, it's official. I haven't had one "blip" since removing the threading options. [...]
Okay, no joy here. Things are significantly "better", but the problem still exists on the hardest-hit devices. I'm in the process of spinning up another robust VM and going with a custom install on Ubuntu 14.04 with MySQL 5.6, Cacti 0.8.8c and Spine 0.8.8c. I'll see if that behaves any better. I'm still scratching my head on this... these Ubuntu deb packages have always been so nice and easy to deal with. This is a total one-off.
phalek
Developer
Posts: 2838
Joined: Thu Jan 31, 2008 6:39 am
Location: Kressbronn, Germany

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by phalek »

I'd suggest adding only one of the "harder-hit devices" to the new box and checking whether the problem occurs with a one-device Cacti as well, instead of juggling multiple devices.
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

Okay, now I'm REALLY scratching my head here...

Brand new VM on a different physical host, with DOUBLE the RAM and 4 extra CPU cores... running on a 10Gbps NIC as well.

Cacti 0.8.8c
Spine 0.8.8c


I've got 4 hosts added and only about 5,000 data sources in total. I've watched the disk usage, latency, write delay, CPU, memory usage, process spawns, etc. The server is absolutely humming along just fine. Yet this brand new setup just got its first "gaps". I'm completely at a loss here.

I have it set up to run 2 Concurrent Poller Processes, 6 Threads Per Process, 2 PHP Script Servers, the default timeout (25 seconds), and the default OIDs per request (10). Cron is at the default of 5 minutes, and the poller interval is set to 5 minutes too.

We are talking 4 physical boxes: 2 x Cisco 7609s and 2 x Cisco Nexus 5000s. As stated above, there are only about 5,000 DSes between all of them that I'm graphing. Does Spine 0.8.8c suffer from the same problem as 0.8.8b, in that it was compiled against the MySQL client library version that has the threading bug? This is the last thing I have to work out before deploying a LOT of servers. We have some datacenters that will have upwards of probably 5,000-8,000 DSes PER unit, and about 20 units per site... so the threading is critical. I'm about to call it a day and just remove the existing MySQL and install MariaDB or Percona...

Lastly, debug-level logging has been running. There are ZERO SQL call entries and ZERO errors, just the standard warnings about SNMP timeouts for various OIDs (which I don't fully understand, since I'm only using SNMP Interface Statistics data queries for "Bits, Unicast, Errors/Discards" plus standard CPU graphs). Nothing fancy. As for add-ons, this one is running clean, with only Weathermap, Autom8 and a few others that are presently disabled.
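To make sure those SNMP timeout warnings aren't the devices themselves responding slowly, I've been spot-checking a bulkwalk by hand with a longer timeout; <community> and <host> are placeholders for my own values:

Code: Select all

# Spot-check: does a bulkwalk of the 64-bit interface counters complete cleanly?
# <community> and <host> are placeholders.
time snmpbulkwalk -v2c -c <community> -t 5 -r 1 -Cr50 <host> IF-MIB::ifHCInOctets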

What else can I look at? It's worth noting that this was an UPGRADE from 0.8.8b by compiling 0.8.8c into a Debian package. I used the following site as my guide: http://blog.asiantuntijakaveri.fi/2014/ ... cacti.html

I will provide any logs or whatever else is needed. Thanks everyone!
jimcjulsonjr
Posts: 48
Joined: Fri Dec 07, 2012 11:11 am

Re: Switching from CMD.PHP to SPINE now shows gaps in graphs

Post by jimcjulsonjr »

Okay, so I've flipped over to MariaDB, and the problem still exists for me. I have a lot of these entries in the ERROR-LOG, and I can't parse anything out of them that actually tells me "what" the error is...



Code: Select all

01/12/2015 01:43:14 PM - WEBLOG: Poller[0] CACTI2RRD: /usr/bin/rrdtool graph - --imgformat=PNG -c CANVAS#000000 -c FONT#FFFFFF -c BACK#000000 --title="SGU1-CHR02 - Errors & Discards on Te2/4 | (e1-4.sw01.us1b.core)" --start='1421088399' --end='1421094912' --title='SGU1-CHR02 - Errors & Discards on Te2/4 | (e1-4.sw01.us1b.core)' --rigid --base='1000' --height='150' --width='650' --alt-autoscale-max --lower-limit='0' COMMENT:"From 2015/01/12 11\:46\:39 To 2015/01/12 13\:35\:12\c" COMMENT:" \n" --vertical-label='*** ERRORS & DISCARDS ***' --slope-mode --font TITLE:9: --font AXIS:7: --font LEGEND:7: --font UNIT:6: DEF:a='/var/lib/cacti/rra/13/3835.rrd':'discards_in':AVERAGE DEF:b='/var/lib/cacti/rra/13/3835.rrd':'errors_in':AVERAGE DEF:c='/var/lib/cacti/rra/13/3835.rrd':'discards_out':AVERAGE DEF:d='/var/lib/cacti/rra/13/3835.rrd':'errors_out':AVERAGE AREA:a#FF990066:'' LINE1:a#FF9900FF:'Discards In' GPRINT:a:LAST:' Current\:%8.2lf %s' GPRINT:a:AVERAGE:'Average\:%8.2lf %s' GPRINT:a:MAX:'Maximum\:%8.2lf %s\n' AREA:b#FF000066:'' LINE1:b#FF0000FF:'Errors In' GPRINT:b:LAST:' Current\:%8.2lf %s' GPRINT:b:AVERAGE:'Average\:%8.2lf %s' GPRINT:b:MAX:'Maximum\:%8.2lf %s\n' AREA:c#C266C266:'' LINE1:c#C266C2FF:'Discards Out' GPRINT:c:LAST:'Current\:%8.2lf %s' GPRINT:c:AVERAGE:'Average\:%8.2lf %s' GPRINT:c:MAX:'Maximum\:%8.2lf %s\n' AREA:d#FFFF0066:'' LINE1:d#FFFF00FF:'Errors Out' GPRINT:d:LAST:' Current\:%8.2lf %s' GPRINT:d:AVERAGE:'Average\:%8.2lf %s' GPRINT:d:MAX:'Maximum\:%8.2lf %s\n' COMMENT:'Graph Last Updated\:Mon 12 Jan 13\:40\:04 MST 2015\n'


01/12/2015 01:43:14 PM - WEBLOG: Poller[0] CACTI2RRD: /usr/bin/rrdtool graph - --imgformat=PNG -c CANVAS#000000 -c FONT#FFFFFF -c BACK#000000 --title="SGU1-CHR02 - Errors & Discards on Te2/1 | (e2-4-1.sw01.iaas.core.sgu1)" --start='1421088399' --end='1421094912' --title='SGU1-CHR02 - Errors & Discards on Te2/1 | (e2-4-1.sw01.iaas.core.sgu1)' --rigid --base='1000' --height='150' --width='650' --alt-autoscale-max --lower-limit='0' COMMENT:"From 2015/01/12 11\:46\:39 To 2015/01/12 13\:35\:12\c" COMMENT:" \n" --vertical-label='*** ERRORS & DISCARDS ***' --slope-mode --font TITLE:9: --font AXIS:7: --font LEGEND:7: --font UNIT:6: DEF:a='/var/lib/cacti/rra/13/3833.rrd':'discards_in':AVERAGE DEF:b='/var/lib/cacti/rra/13/3833.rrd':'errors_in':AVERAGE DEF:c='/var/lib/cacti/rra/13/3833.rrd':'discards_out':AVERAGE DEF:d='/var/lib/cacti/rra/13/3833.rrd':'errors_out':AVERAGE AREA:a#FF990066:'' LINE1:a#FF9900FF:'Discards In' GPRINT:a:LAST:' Current\:%8.2lf %s' GPRINT:a:AVERAGE:'Average\:%8.2lf %s' GPRINT:a:MAX:'Maximum\:%8.2lf %s\n' AREA:b#FF000066:'' LINE1:b#FF0000FF:'Errors In' GPRINT:b:LAST:' Current\:%8.2lf %s' GPRINT:b:AVERAGE:'Average\:%8.2lf %s' GPRINT:b:MAX:'Maximum\:%8.2lf %s\n' AREA:c#C266C266:'' LINE1:c#C266C2FF:'Discards Out' GPRINT:c:LAST:'Current\:%8.2lf %s' GPRINT:c:AVERAGE:'Average\:%8.2lf %s' GPRINT:c:MAX:'Maximum\:%8.2lf %s\n' AREA:d#FFFF0066:'' LINE1:d#FFFF00FF:'Errors Out' GPRINT:d:LAST:' Current\:%8.2lf %s' GPRINT:d:AVERAGE:'Average\:%8.2lf %s' GPRINT:d:MAX:'Maximum\:%8.2lf %s\n' COMMENT:'Graph Last Updated\:Mon 12 Jan 13\:40\:04 MST 2015\n'


01/12/2015 01:43:14 PM - WEBLOG: Poller[0] CACTI2RRD: /usr/bin/rrdtool graph - --imgformat=PNG -c CANVAS#000000 -c FONT#FFFFFF -c BACK#000000 --title="SGU1-CHR02 - Errors & Discards on Te2/5 | (te3-1.br01.lax1 (INTEGRA:PL/KGWD/763651))" --start='1421088399' --end='1421094912' --title='SGU1-CHR02 - Errors & Discards on Te2/5 | (te3-1.br01.lax1 (INTEGRA:PL/KGWD/763651))' --rigid --base='1000' --height='150' --width='650' --alt-autoscale-max --lower-limit='0' COMMENT:"From 2015/01/12 11\:46\:39 To 2015/01/12 13\:35\:12\c" COMMENT:" \n" --vertical-label='*** ERRORS & DISCARDS ***' --slope-mode --font TITLE:9: --font AXIS:7: --font LEGEND:7: --font UNIT:6: DEF:a='/var/lib/cacti/rra/13/3836.rrd':'discards_in':AVERAGE DEF:b='/var/lib/cacti/rra/13/3836.rrd':'errors_in':AVERAGE DEF:c='/var/lib/cacti/rra/13/3836.rrd':'discards_out':AVERAGE DEF:d='/var/lib/cacti/rra/13/3836.rrd':'errors_out':AVERAGE AREA:a#FF990066:'' LINE1:a#FF9900FF:'Discards In' GPRINT:a:LAST:' Current\:%8.2lf %s' GPRINT:a:AVERAGE:'Average\:%8.2lf %s' GPRINT:a:MAX:'Maximum\:%8.2lf %s\n' AREA:b#FF000066:'' LINE1:b#FF0000FF:'Errors In' GPRINT:b:LAST:' Current\:%8.2lf %s' GPRINT:b:AVERAGE:'Average\:%8.2lf %s' GPRINT:b:MAX:'Maximum\:%8.2lf %s\n' AREA:c#C266C266:'' LINE1:c#C266C2FF:'Discards Out' GPRINT:c:LAST:'Current\:%8.2lf %s' GPRINT:c:AVERAGE:'Average\:%8.2lf %s' GPRINT:c:MAX:'Maximum\:%8.2lf %s\n' AREA:d#FFFF0066:'' LINE1:d#FFFF00FF:'Errors Out' GPRINT:d:LAST:' Current\:%8.2lf %s' GPRINT:d:AVERAGE:'Average\:%8.2lf %s' GPRINT:d:MAX:'Maximum\:%8.2lf %s\n' COMMENT:'Graph Last Updated\:Mon 12 Jan 13\:40\:04 MST 2015\n'


01/12/2015 01:43:14 PM - WEBLOG: Poller[0] CACTI2RRD: /usr/bin/rrdtool graph - --imgformat=PNG -c CANVAS#000000 -c FONT#FFFFFF -c BACK#000000 --title="SGU1-CHR02 - Errors & Discards on Te2/6 | (e2-4-1.sw02.iaas.core.sgu1)" --start='1421088399' --end='1421094912' --title='SGU1-CHR02 - Errors & Discards on Te2/6 | (e2-4-1.sw02.iaas.core.sgu1)' --rigid --base='1000' --height='150' --width='650' --alt-autoscale-max --lower-limit='0' COMMENT:"From 2015/01/12 11\:46\:39 To 2015/01/12 13\:35\:12\c" COMMENT:" \n" --vertical-label='*** ERRORS & DISCARDS ***' --slope-mode --font TITLE:9: --font AXIS:7: --font LEGEND:7: --font UNIT:6: DEF:a='/var/lib/cacti/rra/13/3837.rrd':'discards_in':AVERAGE DEF:b='/var/lib/cacti/rra/13/3837.rrd':'errors_in':AVERAGE DEF:c='/var/lib/cacti/rra/13/3837.rrd':'discards_out':AVERAGE DEF:d='/var/lib/cacti/rra/13/3837.rrd':'errors_out':AVERAGE AREA:a#FF990066:'' LINE1:a#FF9900FF:'Discards In' GPRINT:a:LAST:' Current\:%8.2lf %s' GPRINT:a:AVERAGE:'Average\:%8.2lf %s' GPRINT:a:MAX:'Maximum\:%8.2lf %s\n' AREA:b#FF000066:'' LINE1:b#FF0000FF:'Errors In' GPRINT:b:LAST:' Current\:%8.2lf %s' GPRINT:b:AVERAGE:'Average\:%8.2lf %s' GPRINT:b:MAX:'Maximum\:%8.2lf %s\n' AREA:c#C266C266:'' LINE1:c#C266C2FF:'Discards Out' GPRINT:c:LAST:'Current\:%8.2lf %s' GPRINT:c:AVERAGE:'Average\:%8.2lf %s' GPRINT:c:MAX:'Maximum\:%8.2lf %s\n' AREA:d#FFFF0066:'' LINE1:d#FFFF00FF:'Errors Out' GPRINT:d:LAST:' Current\:%8.2lf %s' GPRINT:d:AVERAGE:'Average\:%8.2lf %s' GPRINT:d:MAX:'Maximum\:%8.2lf %s\n' COMMENT:'Graph Last Updated\:Mon 12 Jan 13\:40\:04 MST 2015\n'


01/12/2015 01:43:14 PM - WEBLOG: Poller[0] CACTI2RRD: /usr/bin/rrdtool graph - --imgformat=PNG -c CANVAS#000000 -c FONT#FFFFFF -c BACK#000000 --title="SGU1-CHR02 - Errors & Discards on Po4 | (po4.sw0x.us1b.core)" --start='1421088399' --end='1421094912' --title='SGU1-CHR02 - Errors & Discards on Po4 | (po4.sw0x.us1b.core)' --rigid --base='1000' --height='150' --width='650' --alt-autoscale-max --lower-limit='0' COMMENT:"From 2015/01/12 11\:46\:39 To 2015/01/12 13\:35\:12\c" COMMENT:" \n" --vertical-label='*** ERRORS & DISCARDS ***' --slope-mode --font TITLE:9: --font AXIS:7: --font LEGEND:7: --font UNIT:6: DEF:a='/var/lib/cacti/rra/13/3847.rrd':'discards_in':AVERAGE DEF:b='/var/lib/cacti/rra/13/3847.rrd':'errors_in':AVERAGE DEF:c='/var/lib/cacti/rra/13/3847.rrd':'discards_out':AVERAGE DEF:d='/var/lib/cacti/rra/13/3847.rrd':'errors_out':AVERAGE AREA:a#FF990066:'' LINE1:a#FF9900FF:'Discards In' GPRINT:a:LAST:' Current\:%8.2lf %s' GPRINT:a:AVERAGE:'Average\:%8.2lf %s' GPRINT:a:MAX:'Maximum\:%8.2lf %s\n' AREA:b#FF000066:'' LINE1:b#FF0000FF:'Errors In' GPRINT:b:LAST:' Current\:%8.2lf %s' GPRINT:b:AVERAGE:'Average\:%8.2lf %s' GPRINT:b:MAX:'Maximum\:%8.2lf %s\n' AREA:c#C266C266:'' LINE1:c#C266C2FF:'Discards Out' GPRINT:c:LAST:'Current\:%8.2lf %s' GPRINT:c:AVERAGE:'Average\:%8.2lf %s' GPRINT:c:MAX:'Maximum\:%8.2lf %s\n' AREA:d#FFFF0066:'' LINE1:d#FFFF00FF:'Errors Out' GPRINT:d:LAST:' Current\:%8.2lf %s' GPRINT:d:AVERAGE:'Average\:%8.2lf %s' GPRINT:d:MAX:'Maximum\:%8.2lf %s\n' COMMENT:'Graph Last Updated\:Mon 12 Jan 13\:40\:04 MST 2015\n'
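In case it helps the next person: the only way I've found to separate real poller errors from that WEBLOG graph-command noise is a grep along these lines; the log path is from my install, adjust to yours:

Code: Select all

# Keep only real errors/warnings, drop the WEBLOG graph-rendering lines.
grep -E "ERROR|WARN" /var/log/cacti/cacti.log | grep -v "WEBLOG" | tail -n 50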