Switching from CMD.PHP to SPINE now shows gaps in graphs
- Posts: 48
- Joined: Fri Dec 07, 2012 11:11 am
Re: Switching from CMD.PHP to SPINE now shows gaps in graphs
Any words of wisdom here? I find it hard to believe that something as stable as Cacti/Spine has trouble with so few data sources, especially on solid hardware with plenty of disk IOPS. I feel like I'm just missing something simple. I wish I were a systems guy right about now.
-------------------------------------
VERSION: Cacti 0.8.8b
POLLER: Spine
DATA SOURCES: 100,000K and Growing (Multiple Servers)
PLUGINS: AUTOM8, THOLD, DISCOVER, WEATHERMAP, BOOST, CLOG, NECTAR, MACTRACK, FLOWVIEW, SPIKEKILL, INTROPAGE, MONITOR
- phalek
- Developer
- Posts: 2838
- Joined: Thu Jan 31, 2008 6:39 am
- Location: Kressbronn, Germany
Re: Switching from CMD.PHP to SPINE now shows gaps in graphs
Yes, log entries that start with the "WEBLOG" token simply come from the web interface itself, i.e. you clicked on a graph or a page refresh took place. Nothing you actually need to take care of.
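If the WEBLOG noise makes the log hard to read while you hunt for poller problems, something along these lines can skim it out. Just a minimal sketch; the log path is an assumption, so point it at your own <cacti_root>/log/cacti.log:
Code:
# Minimal sketch: read cacti.log while ignoring the WEBLOG entries that the
# web interface writes. The log path is an assumption -- adjust it to your
# own <cacti_root>/log/cacti.log.
LOG_FILE = "/var/www/html/cacti/log/cacti.log"

with open(LOG_FILE) as log:
    for line in log:
        if "WEBLOG" in line:
            continue  # web-interface entries, nothing to take care of
        print(line, end="")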
Greetings,
Phalek
---
Need more help? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support? Look here: Cacti Workshop
Need professional Cacti support? Look here: CereusService
---
Plugins: CereusReporting
- phalek
- Developer
- Posts: 2838
- Joined: Thu Jan 31, 2008 6:39 am
- Location: Kressbronn, Germany
Re: Switching from CMD.PHP to SPINE now shows gaps in graphs
Oh, and:
Enable debug logging and then check for entries like the following:
Code:
01/15/2015 05:50:24 AM - SPINE: Poller[0] Host[254] TH[1] Total Time: 15 Seconds
01/15/2015 05:50:19 AM - SPINE: Poller[0] Host[213] TH[1] Total Time: 13 Seconds
01/15/2015 05:50:17 AM - SPINE: Poller[0] Host[253] TH[1] Total Time: 8 Seconds
This should at least give you an idea of how long polling each host took. Maybe some hosts are hitting a timeout from time to time?
It may not be Cacti-related but a bottleneck on the monitored system (e.g. my Linux boxes give SNMP a very(!) low priority and tend to simply ignore incoming requests when the system load goes high).
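If you have a lot of hosts, a small script along these lines can summarise those Total Time entries and list the slowest hosts first. Just a sketch; the log path and the exact line format (taken from the sample above) are assumptions:
Code:
# Minimal sketch: collect the per-host "Total Time" lines that spine logs in
# debug mode and print the slowest hosts first, so potential timeouts stand out.
# The log path and the exact line format (from the sample above) are assumptions.
import re

LOG_FILE = "/var/www/html/cacti/log/cacti.log"
PATTERN = re.compile(
    r"SPINE: Poller\[\d+\] Host\[(\d+)\] TH\[\d+\] Total Time: ([\d.]+) Seconds"
)

worst = {}
with open(LOG_FILE) as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            host_id, seconds = match.group(1), float(match.group(2))
            worst[host_id] = max(worst.get(host_id, 0.0), seconds)

# Slowest hosts first.
for host_id, seconds in sorted(worst.items(), key=lambda item: item[1], reverse=True):
    print(f"Host[{host_id}]: worst poll {seconds:.0f} seconds")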
Greetings,
Phalek
---
Need more help? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support? Look here: Cacti Workshop
Need professional Cacti support? Look here: CereusService
---
Plugins: CereusReporting
- Posts: 48
- Joined: Fri Dec 07, 2012 11:11 am
Re: Switching from CMD.PHP to SPINE now shows gaps in graphs
So it's all network gear that I'm polling, and we have other systems that poll these same devices just fine and graph them. As an example, I graph all of our MPLS ring gear with Observium; it uses a PHP script similar to cmd.php, wrapped in Python, and it is multi-threaded as well. There are no gaps at all in the RRD graphs it produces. Additionally, if I get down to about 30 devices and only about 20K data sources, I can get cmd.php to complete within about 120 seconds, also with no gaps.

phalek wrote:
Oh, and:
Enable debug logging and then check for entries like the following:
Code:
01/15/2015 05:50:24 AM - SPINE: Poller[0] Host[254] TH[1] Total Time: 15 Seconds
01/15/2015 05:50:19 AM - SPINE: Poller[0] Host[213] TH[1] Total Time: 13 Seconds
01/15/2015 05:50:17 AM - SPINE: Poller[0] Host[253] TH[1] Total Time: 8 Seconds
This should at least give you an idea of how long polling each host took. Maybe some hosts are hitting a timeout from time to time?
It may not be Cacti-related but a bottleneck on the monitored system (e.g. my Linux boxes give SNMP a very(!) low priority and tend to simply ignore incoming requests when the system load goes high).
So this is definitely something with Spine itself. With DEBUG output enabled, though, I am seeing very fast per-host queries from Spine compared to cmd.php.
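For what it's worth, one way to double-check those per-host numbers outside of Cacti is to time raw SNMP gets from the poller box itself while the devices are busy. A rough sketch only: the device addresses and community string are placeholders, and it assumes net-snmp's snmpget is installed on the poller.
Code:
# Rough sketch: time a few raw snmpget round-trips from the poller box to see
# whether a monitored device answers slowly or drops requests under load.
# Device addresses and community string are placeholders; assumes net-snmp's
# snmpget is installed on the poller.
import subprocess
import time

HOSTS = ["192.0.2.10", "192.0.2.11"]  # placeholder device addresses
COMMUNITY = "public"                   # adjust to your read community

for host in HOSTS:
    start = time.time()
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-t", "2", "-r", "0",
         host, "1.3.6.1.2.1.1.3.0"],   # sysUpTime.0
        capture_output=True, text=True,
    )
    elapsed = time.time() - start
    status = "ok" if result.returncode == 0 else "timeout/error"
    print(f"{host}: {status} in {elapsed:.2f}s")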
-------------------------------------
VERSION: Cacti 0.8.8b
POLLER: Spine
DATA SOURCES: 100,000K and Growing (Multiple Servers)
PLUGINS: AUTOM8, THOLD, DISCOVER, WEATHERMAP, BOOST, CLOG, NECTAR, MACTRACK, FLOWVIEW, SPIKEKILL, INTROPAGE, MONITOR