Does anyone have any of these devices on their network, and are they having any problems monitoring them?
I have several of these switches in my network and I have tried almost everything I can think of with Cacti. I find that 50% or more of the time these devices don't respond properly to SNMP requests, and the result is that the graphs look terrible in Cacti.
I have attached a sample graph to give you an idea of what the interface graphs look like. I don't have any other switches with this problem in Cacti, nor do I have issues with any of my other Cacti graphs except when my poller runs longer than 298 seconds.
I used the CactiEZ CD to build my install, but I ended up updating the Cacti it installed to a newer version.
Here is my Cacti setup info:
Cacti Version - 0.8.7d
Plugin Architecture - 2.4
Poller Type - Cactid v
Server Info - Linux 2.6.9-78.0.13.plus.c4smp
Web Server - Apache/2.0.63 (CentOS)
PHP - 5.1.6
PHP Extensions - libxml, xml, wddx, tokenizer, sysvshm, sysvsem, sysvmsg, standard, SimpleXML, sockets, SPL, shmop, session, Reflection, pspell, posix, mime_magic, iconv, hash, gmp, gettext, ftp, exif, date, curl, ctype, calendar, bz2, zlib, pcre, openssl, apache2handler, gd, ldap, mysql, mysqli, PDO, pdo_mysql, pdo_sqlite, snmp, eAccelerator
MySQL - 5.0.68
RRDTool - 1.2.23
SNMP - 5.1.2
Plugins
Have you tried manual SNMP requests to the switch in question, just to check that it's responding reliably? We have a dodgy UPS here that, for some unknown reason, only responds to 1 of 3 SNMP requests. This produces a graph very similar to yours.
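One way to quantify this kind of flakiness is to fire the same request repeatedly and count how often the agent answers. Here is a minimal Python sketch; the `query` callable is a placeholder (in practice it would wrap a real snmpget call against your switch), and `flaky_agent` is a hypothetical stub that answers 1 request in 3, like the UPS above:

```python
def snmp_reliability(query, oid, attempts=10):
    """Fire the same SNMP request repeatedly and report the fraction
    that succeed. `query` is any callable returning a value on success
    and None (or raising) on timeout; in practice it would wrap a real
    snmpget call to the device."""
    ok = 0
    for _ in range(attempts):
        try:
            if query(oid) is not None:
                ok += 1
        except Exception:
            pass
    return ok / attempts

# Stand-in agent that answers only every third request (hypothetical
# stub -- no real SNMP involved):
calls = {"n": 0}
def flaky_agent(oid):
    calls["n"] += 1
    return "uptime-ticks" if calls["n"] % 3 == 0 else None

print(snmp_reliability(flaky_agent, "sysUpTime.0", attempts=9))  # ~0.33
```

A healthy switch should score at or very near 1.0; anything like 0.5 would explain the gappy graphs.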
No VM involved. I wish my Cacti could run as a VM, but I had to migrate to a dedicated server almost two years ago because of the number of data sources we were pulling. In the last month I also had to migrate to a more powerful dedicated server.
Using both Boost and Spine, I still have poll times in the 80-second range, and I had to change the data collection interval from the CactiEZ default of 60 seconds to 300 seconds.
If I do a manual snmpwalk of the device, I seem to get reliable responses to the initial part of the request; even if I do it 3 or 4 times, it responds reliably. I notice, though, that snmpwalk gets stuck at a certain point.
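When a walk stalls partway through, it helps to know exactly which OID the agent stops answering after, so you can tell whether it's one broken subtree (and exclude it) or general flakiness. A minimal sketch of that idea, where `getnext` is a placeholder for a real snmpgetnext wrapper rather than an actual SNMP client:

```python
def walk_until_stall(getnext, start_oid, limit=10000):
    """Step through the MIB one GETNEXT at a time and return the last
    OID that answered, pinning down where the agent stops responding.
    `getnext` is any callable returning (next_oid, value) on success or
    None on timeout; in practice it would wrap snmpgetnext."""
    oid, last = start_oid, None
    for _ in range(limit):  # safety cap so a looping agent can't hang us
        resp = getnext(oid)
        if resp is None:
            return last  # the agent stalled right after this OID
        oid, _value = resp
        last = oid
    return last

# Tiny fake MIB to show the idea: the agent dies after 1.3
fake_mib = {"1.1": ("1.2", "a"), "1.2": ("1.3", "b"), "1.3": None}
print(walk_until_stall(fake_mib.get, "1.1"))  # prints 1.3
```

If the stall always lands on the same OID, the problem is likely that subtree on the switch, not Cacti.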
Last edited by locutus233 on Mon Mar 23, 2009 12:15 am, edited 2 times in total.
BTW, I can post more of my Cacti log if you want; I just wanted to put the things related to the problem in the post. I have other issues that are unrelated to my switch problems.
Also, I have other D-Link switches of a different model that don't have this problem.
Setting the value to 500 ms doesn't seem to make any difference.
I have also tried setting the number of OIDs per request to values from 1 to 200; I presently have it set to 10, but it doesn't seem to make any difference. I have also tried SNMP versions 1 and 2, and neither makes any difference.
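For anyone unsure what that setting changes: it controls how many OIDs the poller packs into a single GET request, so lowering it means more, smaller requests on the wire. A rough sketch of the batching (the OIDs below are the standard ifInOctets counters, used purely as an illustration):

```python
def chunk_oids(oids, max_per_request):
    """Split a list of OIDs into GET requests carrying at most
    max_per_request varbinds each -- the behaviour controlled by
    Cacti's 'Maximum OIDs Per Get Request' setting."""
    if max_per_request < 1:
        raise ValueError("need at least 1 OID per request")
    return [oids[i:i + max_per_request]
            for i in range(0, len(oids), max_per_request)]

# 25 interface counters (ifInOctets.1 .. ifInOctets.25):
iface_oids = ["1.3.6.1.2.1.2.2.1.10.%d" % i for i in range(1, 26)]
print(len(chunk_oids(iface_oids, 10)))  # 3 requests (10 + 10 + 5 OIDs)
print(len(chunk_oids(iface_oids, 1)))   # 25 single-OID requests
```

Some limited agents choke on multi-OID GETs entirely, which is why dropping the setting all the way to 1 is sometimes the only thing that works.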
Have you tried manually editing the SNMP timeout value for the device in question? The setting we mentioned won't change anything, as it only applies when you create new devices.
I'm trying to push a 0.8.7d-pre of Spine that corrects an SNMPv3 issue. Also, please note that with some agents, Max OIDs must be set as low as 1 for SNMP to work.
I will put it in the announcements forum.
TheWitness
True understanding begins only when we realize how little we truly understand...