Only half of the data from an interface poll?
I'm trying to poll a 64-bit SNMP interface value from a Foundry switch, but I only seem to get data for outbound traffic on the RRD graph. The inbound just says "nan" and never updates.
I've checked that cmd.php is populating the rrd instance with good data. It seems so:
[cactiuser@alfred rra]$ /var/www/html/cacti/cmd.php | grep 897
snmp: 10.10.254.25, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.897, value: 45094359089258
snmp: 10.10.254.25, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.897, value: 84883795349452
[cactiuser@alfred rra]$ /var/www/html/cacti/cmd.php | grep 113
update /var/www/html/cacti/rra/welles_traffic_in_113.rrd --template traffic_in:traffic_out N:45102467242984:84887015522618
So, it's getting data from port index 897 on the switch, then putting it into the 113 rrd in cacti.
Is this a parsing error in reading the two values? (the N:45102467242984:84887015522618 above)
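In case it helps with diagnosis, here is how one might check whether the inbound DS is actually being written to that rrd (a sketch; the filename is taken from the update line above, and rrdtool is assumed to be in the PATH):
# show the per-DS limits and the last values rrdtool accepted
rrdtool info /var/www/html/cacti/rra/welles_traffic_in_113.rrd | grep -E 'ds\[traffic_(in|out)\]'
# dump the last ~10 minutes of stored data; "nan" here means nothing was written
rrdtool fetch /var/www/html/cacti/rra/welles_traffic_in_113.rrd AVERAGE -s -600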
Thanks for any help,
-Will
Seems to be related to 10-Gig interfaces
This seems to be a bug with 10-Gig interfaces... the 1-Gig ones report fine, despite them all being set up the same way.
Will
Kinda working now
Weird... there is now data for in and out traffic. I'm not sure why I only saw outbound for hours yesterday.
So, yay! I guess there is no bug after all. Patience pays off.
- pestilence
- Cacti User
- Posts: 207
- Joined: Fri Jul 25, 2003 10:37 am
- Location: Athens/Greece
- Contact:
Ethernet Bug...
I have problems measuring my Ethernet bandwidth as well: I have 6 Megs of outgoing traffic, but cacti fails to report the outbound at all (it reports a really small, unrealistic amount of traffic).
Plus, the graphs sometimes lose it: on a 2 Mbit serial interface I got 100 Mbit of inbound traffic reported... that can't be. I think I am going to start checking the source code for what's wrong with this.
A small note: I tried changing the counter to 64-bit on the Ethernet interface, and after that I saw that the serial interfaces started showing outbound traffic on the graphs (which was lower than the inbound, and previously was not graphed at all). Probably there is a mix-up somewhere and the traffic gets lost...
Re: Only half of the data from an interface poll?
Hello, I have exactly the same problem: I'm monitoring some gigabit subinterfaces using 64-bit counters and SNMP v2c, but I only see outbound graphs. There is nothing for the inbound: I read "nan" for inbound traffic, but checking the counters with snmpget I see real counter values...
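For reference, this is the kind of check I mean (a sketch; the community string, host, and ifIndex are placeholders for my real values):
snmpget -v2c -c <community> <host> .1.3.6.1.2.1.31.1.1.1.6.<ifIndex>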
Any news about it? How can I resolve this problem?
Please answer me! Thanks.
gabar
gabar
wmelick wrote:I'm trying to poll a 64-bit SNMP interface value from a Foundry switch, but I only seem to get data for outbound traffic on the RRD graph. The inbound just says "nan" and never updates.
Help in monitoring gigabit ethernet sub interfaces
Hi,
lvm wrote: You're referencing a _very_ old post here. Please try my link on "NaN Debugging". It should help in case you're facing exactly the same error. Else report your findings here.
Reinhard
I need to write again about the same topic because, even after your hints, I still haven't found a solution; and now my coordinator has asked me to resolve the problem, possibly by using mrtg (but I don't want to uninstall cacti and use mrtg).
The problem is that I can't obtain correct values on the gigabit ethernet subinterfaces of a Cisco router (while I succeed in monitoring the physical gigabit ethernet interfaces).
Before posting this message I studied many other related topics on this forum, and I should note upfront that I'm using cacti with 64-bit counters and SNMP v2 (in fact, I do succeed in monitoring the physical gigabit ethernet interfaces).
Now (as you can check by viewing the related graphs below) I'm monitoring a Cisco router with a gigabit ethernet interface configured with three subinterfaces.
As you can see, the total inbound traffic shown on Gi0/1 (about 1 Mbps) appears only on Gi0/1.2 (about 200 Kbps), while no inbound traffic is shown on Gi0/1.1 and Gi0/1.3. So I lose 800 Kbps (1000-200) of inbound traffic. If I check Gi0/1.1 and Gi0/1.3 I see "nan" for inbound traffic, and checking the cacti log and poller cache (in DEBUG mode) I see no errors; all seems to be OK.
My cacti version is the latest release (0.8.6i), and below you can see some related snmpwalk queries.
Please help me! I don't know what else to do.
Thanks in advance
Output of "snmpwalk -c <mycomm> -v2c <myrouter-ip> 1.3.6.1.2.1.2.2.1":
IF-MIB::ifDescr.1 = STRING: GigabitEthernet0/1
IF-MIB::ifDescr.13 = STRING: GigabitEthernet0/1.1-802.1Q vLAN subif
IF-MIB::ifDescr.14 = STRING: GigabitEthernet0/1.2-802.1Q vLAN subif
IF-MIB::ifDescr.15 = STRING: GigabitEthernet0/1.3-802.1Q vLAN subif
IF-MIB::ifType.1 = INTEGER: ethernetCsmacd(6)
IF-MIB::ifType.13 = INTEGER: l2vlan(135)
IF-MIB::ifType.14 = INTEGER: l2vlan(135)
IF-MIB::ifType.15 = INTEGER: l2vlan(135)
IF-MIB::ifSpeed.1 = Gauge32: 1000000000
IF-MIB::ifSpeed.13 = Gauge32: 1000000000
IF-MIB::ifSpeed.14 = Gauge32: 1000000000
IF-MIB::ifSpeed.15 = Gauge32: 1000000000
IF-MIB::ifInOctets.1 = Counter32: 2791766250
IF-MIB::ifInOctets.13 = Counter32: 517381057
IF-MIB::ifInOctets.14 = Counter32: 6637734
IF-MIB::ifInOctets.15 = Counter32: 3497881830
IF-MIB::ifOutOctets.1 = Counter32: 1157411434
IF-MIB::ifOutOctets.13 = Counter32: 195354900
IF-MIB::ifOutOctets.14 = Counter32: 54366119
IF-MIB::ifOutOctets.15 = Counter32: 900303720
(to test Cisco IOS 64-bit counter support)
Output of "snmpwalk -c <mycomm> -v2c <myrouter-ip> .1.3.6.1.2.1.31.1.1.1.6":
IF-MIB::ifHCInOctets.1 = Counter64: 93063187387
IF-MIB::ifHCInOctets.13 = Counter64: 460071689375495764
IF-MIB::ifHCInOctets.14 = Counter64: 6646502
IF-MIB::ifHCInOctets.15 = Counter64: 6277650827964345
IF-MIB::ifHCOutOctets.1 = Counter64: 276460815035
IF-MIB::ifHCOutOctets.13 = Counter64: 39080321585
IF-MIB::ifHCOutOctets.14 = Counter64: 54420211
IF-MIB::ifHCOutOctets.15 = Counter64: 23731889626
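One rough cross-check of those two walks (my own sketch, using the .13 values above; the walks were taken moments apart, so this is only approximate): per IF-MIB, ifInOctets should track the low-order 32 bits of ifHCInOctets.
# low 32 bits of ifHCInOctets.13; prints 2423305812, nowhere near ifInOctets.13 = 517381057
echo $(( 460071689375495764 % 4294967296 ))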
- Attachments
- stats.png (31.76 KiB) Viewed 6546 times
- gandalf
- Developer
- Posts: 22383
- Joined: Thu Dec 02, 2004 2:46 am
- Location: Muenster, Germany
- Contact:
Re: Help in monitoring gigabit ethernet sub interfaces
As if "installing some tool" would resolve problems. I "love" those "coordinators", even if I'm one of that kind, myself.gabar wrote:...and now, my coordinator had ask to me to resolve the problem eventually using mrtg (but I don't want to use mrtg and unistall cacti).
So, for this topic, let's assume you've configured your host as SNMP V2c host within cacti....that I'm using cacti with 64 bit counter and snmp v2 (infact I succeed in monitoring gigabit ethernet interfaces).
So we've a problem with inboud only, outbound works well.As you can see, the total inbound traffic showed in Gi0/1 (about 1 Mbps) is distributed only on Gi0/1.2 (about 200 Kbps) while no inbound traffic is showed on Gi0/1.1 and Gi0/1.3. So I lost 800 Kbps (1000-200) of inbound traffic.
We have to re-visit this step. Please, run cacti in DEBUG mode for one polling cycle again (or take the log you've already taken if you have saved it). Please grep for _all_ host log entries of this host and post all of them. We should not only see HCOutOctets but HCInOctets request there. If not, we'll revist the poller cache.If I check Gi0/1.1 and Gi0/1.3 I see "nan" for inbound traffic and checking cacti log and poller cache (setting DEBUG mode) I see no errors and all seems to be ok.
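Something along these lines (a sketch; the host ID in the brackets and the log path are assumptions, adjust them to your install):
# all poller entries for this host; the HC octet OIDs are .1.3.6.1.2.1.31.1.1.1.6.* (in) and .10.* (out)
grep 'Host\[3\]' /var/www/secure/cacti/log/cacti.log | grep '1\.3\.6\.1\.2\.1\.31\.1\.1\.1\.'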
Don't worry, we'll resolve this one!
Reinhard
Re: Help in monitoring gigabit ethernet sub interfaces
gandalf wrote: As if "installing some tool" would resolve problems. I "love" those "coordinators", even if I'm one of that kind myself.
Yes, you're right!!! I think we would have the same problem with mrtg, and more generally with any other SNMP software.
gandalf wrote: So we have a problem with inbound only; outbound works well.
I think so, but I'm not sure (are the outbound values correct?). Today when I woke up I checked the graphs and saw that, unlike yesterday, strange inbound traffic (with holes) appears on Gi0/1.3... I still have outbound traffic (not shown because the graph scale is bigger)... but it is strange; I didn't do anything. OK, now I post the new graphs.
gandalf wrote: Please grep for _all_ host log entries of this host and post all of them. We should see not only the HCOutOctets but also the HCInOctets requests there. If not, we'll revisit the poller cache.
OK, I posted the grep results as you asked!
gandalf wrote: Don't worry, we'll resolve this one!
Oh, thank you again. This is a very big problem for me, because these statistics will be used to monitor the bandwidth that we pay our internet service provider for.
You are very kind.
- Attachments
- New Graphs
- stats2.png (42.67 KiB) Viewed 6529 times
- cacti-log.txt
- Grep from cacti.log
- (175.9 KiB) Downloaded 192 times
- gandalf
- Developer
- Posts: 22383
- Joined: Thu Dec 02, 2004 2:46 am
- Location: Muenster, Germany
- Contact:
First, let's talk about Gi0/1.1
I've reconstructed your rrd files using your output from the last post (see attached screenshots; both stem from the same time interval). Again, as you stated already, input traffic is missing on Gi0/1.1.
This is due to the "weird" numbers for traffic_in in your log file:
10/28/2006 06:55:04 PM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/www/secure/cacti/rra/ciscogbrouter_traffic_in_446.rrd --template traffic_in:traffic_out N:456441259364798423:38549627024
See the number 456441259364798423? Isn't it a bit huge? So are all of them! So rrdtool clips them off during update, as they exceed the defined RRA's maximum.
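To see the cap rrdtool is enforcing, you could inspect the rrd like this (a sketch; the path is the one from the update line above):
rrdtool info /var/www/secure/cacti/rra/ciscogbrouter_traffic_in_446.rrd | grep '\.max'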
Now, let's turn to the question of why they are so huge. Following my "NaN Debugging" howto, we should now return to the first few steps. Again, DEBUG mode is required (or take the log file which you used for grepping). Then, please find the "snmpget" statements belonging to that host and find the numbers for "traffic_in" (you have to find the correct OID). Or simply grep for the Host[..] statements in that log file and post all of them (this is what I meant in my previous post).
We will then see whether snmpget returned those weird numbers.
Reinhard
- Attachments
- Your routers graphs (2)
- pic2.png (97.98 KiB) Viewed 6525 times
- Your routers graphs (1)
- pic1.png (99.52 KiB) Viewed 6525 times
Help in monitoring gigabit ethernet sub interfaces
I didn't find the "snmpget" keyword in my log files, but searching for my host I found the OIDs that you are looking for (I hope).
The OIDs that I found are mainly:
.1.3.6.1.2.1.31.1.1.1.6.1
.1.3.6.1.2.1.31.1.1.1.6.13
.1.3.6.1.2.1.31.1.1.1.6.14
.1.3.6.1.2.1.31.1.1.1.6.15
I am also attaching my log file so you can check.
Thank you, thank you, thank you.
- Attachments
- log-10-29.txt
- (163.37 KiB) Downloaded 257 times
- gandalf
- Developer
- Posts: 22383
- Joined: Thu Dec 02, 2004 2:46 am
- Location: Muenster, Germany
- Contact:
The numbers found in your log file correspond to those in the rrdtool update statements from the first log. So those "big" numbers really are reported by the device when snmpget-ing the interface data.
I've now changed the MAXimum limits of those RRAs in all 4 rrd files. Please find the new graphs attached. But I really wonder how your router transferred terabytes of data per second.
Perhaps I should consider buying such a rocket fast device
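(In case you want to raise the cap on your live files the same way: rrdtool tune can do it. A sketch, using the filename from the update line above, where "U" means no limit:)
rrdtool tune /var/www/secure/cacti/rra/ciscogbrouter_traffic_in_446.rrd --maximum traffic_in:U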
But please don't ask me why this device reports such weird data. I would recommend using SNMP V1 counters for comparison (you may define a second device with the same IP address/hostname, but with SNMP V1, to have both sets of data in parallel).
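For instance, the 32-bit counters for the Gi0/1.1 subinterface (ifIndex 13) can be read directly (a sketch, using the placeholders from your snmpwalk above):
# ifInOctets.13 and ifOutOctets.13 from the 32-bit ifTable
snmpget -v1 -c <mycomm> <myrouter-ip> .1.3.6.1.2.1.2.2.1.10.13 .1.3.6.1.2.1.2.2.1.16.13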
Reinhard
- Attachments
- Your router's interfaces (2)
- pic2.png (97.26 KiB) Viewed 6506 times
- Your router's interfaces (1)
- pic1.png (94.67 KiB) Viewed 6506 times
gandalf wrote: I would recommend using SNMP V1 counters for comparison (you may define a second device with the same IP address/hostname, but with SNMP V1, to have both sets of data in parallel).
Are you talking about using SNMP v1 with 32-bit counters? I tried to get 64-bit counters with SNMP v1, without success.
So, now the question is:
If I reduce my polling time from 5 minutes to 2 minutes and use SNMP v1 with 32-bit counters, could I solve my problem? In that case, should I only change my crontab configuration from
"*/5 * * * * cactiuser php /var/www/secure/cacti/poller.php > /dev/null 2>&1"
to
"*/2 * * * * cactiuser php /var/www/secure/cacti/poller.php > /dev/null 2>&1" ?
Thank you for your support and for your time.
G.
- gandalf
- Developer
- Posts: 22383
- Joined: Thu Dec 02, 2004 2:46 am
- Location: Muenster, Germany
- Contact:
gabar wrote: Are you talking about using SNMP v1 with 32-bit counters? I tried to get 64-bit counters with SNMP v1, without success.
SNMP V1 does not know about 64-bit counters. But your current traffic values are that low; it should be fine to graph them as 32-bit counters until you hit the 114 Mbps barrier.
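That barrier is just counter-wrap arithmetic: with a 5-minute poll, a 32-bit octet counter can only represent this much throughput before wrapping (a quick check):
# 2^32 bytes * 8 bits, spread over a 300 s polling interval
echo $(( 4294967296 * 8 / 300 ))   # 114532461 bit/s, i.e. ~114.5 Mbps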
gabar wrote: So, now the question is: if I reduce my polling time from 5 minutes to 2 minutes and use SNMP v1 with 32-bit counters, could I solve my problem? In that case, should I only change my crontab configuration from
"*/5 * * * * cactiuser php /var/www/secure/cacti/poller.php > /dev/null 2>&1"
to
"*/2 * * * * cactiuser php /var/www/secure/cacti/poller.php > /dev/null 2>&1" ?
If you want to reduce the polling interval, please refer to the post in the Announcements forum that publishes the patch necessary for this.
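For what it's worth, a crontab change alone isn't enough because the rrd files themselves are created with a 300-second step and a matching heartbeat; this can be confirmed like so (a sketch; the filename is hypothetical):
rrdtool info /var/www/secure/cacti/rra/ciscogbrouter_traffic_in_446.rrd | grep -E '^step|minimal_heartbeat'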
Reinhard
Ok. Thank you again for your support.
I'm thinking of using a hybrid solution: monitoring my physical gigabit interfaces with 64-bit counters/SNMP v2 and my gigabit subinterfaces with 32-bit counters/SNMP v1 (I think 100 Mbps is enough for subinterface traffic). What do you think? Is it a good solution?