NetApp 2.4TB volume

Post support questions that directly relate to Linux/Unix operating systems.

Moderators: Developers, Moderators

pbulteel
Cacti User
Posts: 150
Joined: Fri Sep 05, 2003 9:20 am
Location: London
Contact:

Post by pbulteel »

Well, I get the same thing if I use -v 2c, so I'm not sure if I've configured my snmpd correctly. I'm going to check it.
uname -a

Post by pbulteel »

Bad news I guess -- I get the same results using SNMPv2 :(

rony
Developer/Forum Admin
Posts: 6022
Joined: Mon Nov 17, 2003 6:35 pm
Location: Michigan, USA
Contact:

Post by rony »

Can each of you email me a walk of the devices in question and give me some background on what they are?
[size=117][i][b]Tony Roman[/b][/i][/size]
[size=84][i]Experience is what causes a person to make new mistakes instead of old ones.[/i][/size]
[size=84][i]There are only 3 ways to complete a project: Good, Fast or Cheap, pick two.[/i][/size]
[size=84][i]With age comes wisdom, what you choose to do with it determines whether or not you are wise.[/i][/size]
brunom
Posts: 12
Joined: Tue Apr 05, 2005 7:53 pm
Location: Sydney, Australia
Contact:

Using hrStorageUsed instead seems to work up to 12TB

Post by brunom »

We started using hrStorageUsed (available in the standard template from Cacti under "SNMP - Get Mounted Partitions" in the data queries for a "ucd/Net SNMP host").

It works for larger filesystems; however, it's overflowing for our 12TB volume. I am trying to figure out how to work around that, as we will be rolling out 24TB volumes very soon, and even a 44TB one too :-(
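For reference, hrStorageUsed (HOST-RESOURCES-MIB) is a signed 32-bit Integer32 counted in hrStorageAllocationUnits, so the overflow point depends on the allocation unit size. A minimal sketch of the arithmetic, assuming a common 4 KiB unit (an assumption -- the real unit varies per filesystem):

```python
# hrStorageUsed is a signed 32-bit Integer32 counted in
# hrStorageAllocationUnits, so the largest size it can represent is
# (2**31 - 1) * allocation_unit bytes.

INT32_MAX = 2**31 - 1

def hr_storage_ceiling_bytes(allocation_unit_bytes: int) -> int:
    """Largest volume size hrStorageUsed can report without overflowing."""
    return INT32_MAX * allocation_unit_bytes

# With a 4 KiB unit the ceiling is just under 8 TiB, which is consistent
# with a 12 TB volume wrapping while ~6 TB ones still graph fine.
for unit in (4096, 32768):
    tib = hr_storage_ceiling_bytes(unit) / 2**40
    print(f"unit={unit:6d} bytes -> ceiling ~ {tib:.1f} TiB")
```

Larger allocation units raise the ceiling, which would explain the same data query working on some big filesystems and wrapping on others.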
fmangeant
Cacti Guru User
Posts: 2345
Joined: Fri Sep 19, 2003 8:36 am
Location: Sophia-Antipolis, France
Contact:

Re: Using hrStorageUsed instead seems to work up to 12TB

Post by fmangeant »

brunom wrote:We started using hrStorageUsed (available in the standard template from Cacti under "SNMP - Get Mounted Partitions" in the data queries for a "ucd/Net SNMP host").

It works for larger filesystems; however, it's overflowing for our 12TB volume. I am trying to figure out how to work around that, as we will be rolling out 24TB volumes very soon, and even a 44TB one too :-(
Hi

I've tried the "SNMP - Get Mounted Partitions", but it doesn't work for me. :cry:

I'm trying this on a "NetApp Release 6.5.2R1P16D6: Fri Nov 26 21:10:07 PST 2004". Can you tell me which release you're running ?
[size=84]
[color=green]HOWTOs[/color] :
[list][*][url=http://forums.cacti.net/viewtopic.php?t=15353]Install and configure the Net-SNMP agent for Unix[/url]
[*][url=http://forums.cacti.net/viewtopic.php?t=26151]Install and configure the Net-SNMP agent for Windows[/url]
[*][url=http://forums.cacti.net/viewtopic.php?t=28175]Graph multiple servers using an SNMP proxy[/url][/list]
[color=green]Templates[/color] :
[list][*][url=http://forums.cacti.net/viewtopic.php?t=15412]Multiple CPU usage for Linux[/url]
[*][url=http://forums.cacti.net/viewtopic.php?p=125152]Memory & swap usage for Unix[/url][/list][/size]

hrStorageUsed

Post by brunom »

Hello,

Maybe I should have clarified that I was using that on Linux (not NetApp). To make sure that you have the hrStorage table available you can do:

snmpwalk -v SNMPVERSION -c COMMUNITYNAME HOST hrStorage

where SNMPVERSION is the version you are using, COMMUNITYNAME is whatever name you gave it, HOST ... well, that's descriptive enough.

In my case I get (partial reproduction):

HOST-RESOURCES-MIB::hrMemorySize.0 = INTEGER: 2075484 KBytes
HOST-RESOURCES-MIB::hrStorageIndex.1 = INTEGER: 1
HOST-RESOURCES-MIB::hrStorageIndex.2 = INTEGER: 2
HOST-RESOURCES-MIB::hrStorageIndex.3 = INTEGER: 3
HOST-RESOURCES-MIB::hrStorageIndex.4 = INTEGER: 4
HOST-RESOURCES-MIB::hrStorageIndex.5 = INTEGER: 5
[...]

Post by pbulteel »

Thanks, this worked for me as well - again, this was a Linux system, with a 6.6 TB filesystem. I can see it! Woohoo...

-P
gandalf
Developer
Posts: 22383
Joined: Thu Dec 02, 2004 2:46 am
Location: Muenster, Germany
Contact:

Post by gandalf »

You'll find a template for this by reading the first link of my signature ...
Reinhard
Last edited by gandalf on Fri Nov 30, 2007 6:11 am, edited 1 time in total.
birger
Posts: 2
Joined: Thu Jan 18, 2007 4:50 am
Location: Bergen, Norway

Post by birger »

I am also having problems with this.

snmpget -v 1 on my biggest aggregate (2.7TB) returns a negative number.
snmpget -v 2c to the netapp times out.

snmpget -c public -v 1 netapp .1.3.6.1.4.1.789.1.5.4.1.3.1
SNMPv2-SMI::enterprises.789.1.5.4.1.3.1 = INTEGER: -1318648000

rrdtool info says:
ds[dfKBytesTotal].type = "GAUGE"
ds[dfKBytesTotal].minimal_heartbeat = 600
ds[dfKBytesTotal].min = 0.0000000000e+00
ds[dfKBytesTotal].max = NaN
ds[dfKBytesTotal].last_ds = "UNKN"
ds[dfKBytesTotal].value = NaN
ds[dfKBytesTotal].unknown_sec = 4

I have read the NaN tutorial, but I cannot see what to do. As you may guess, I am a Cacti newbie.

Post by birger »

Ok... I think I solved it.

I set the min values to 'U' for the RRDs in question and then added two new CDEFs:

"convert 32bit signed to unsigned"
cdef=CURRENT_DATA_SOURCE,0,GE,CURRENT_DATA_SOURCE,4294967296,CURRENT_DATA_SOURCE,+,IF

"convert 32bit signed to unsigned, multiply by 1024"
Built on the previous one, with the result multiplied by 1024.

I then applied the last one to all data in the 'netapp space detail' graph template.
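For anyone who wants to check the conversion outside rrdtool, here is the same logic as the first CDEF in plain Python (a sketch; note that the offset for reinterpreting a wrapped 32-bit sample as unsigned is 2^32 = 4294967296):

```python
def signed32_to_unsigned(value: int) -> int:
    """What the CDEF does: keep a non-negative sample as-is, otherwise
    reinterpret the wrapped signed 32-bit value by adding 2**32."""
    return value if value >= 0 else value + 2**32

# The -1318648000 KB reading from the 2.7TB aggregate above comes out as:
kbytes = signed32_to_unsigned(-1318648000)
print(kbytes)                 # 2976319296
print(kbytes * 1024 / 2**40)  # ~2.77 TiB
```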
duckhead
Cacti User
Posts: 59
Joined: Wed Oct 20, 2004 7:41 pm

Post by duckhead »

NetApps are a PITA when it comes to SNMP. They steadfastly refuse to support anything other than SNMPv1. V1 only goes as high as a 32-bit integer, which overflows around 4 TB when counting in KBytes. So, as a workaround, they supply some OIDs in their MIB that let you query the space. You have to read two values from their custom MIB and do some math.

You can read more about it here:

http://now.netapp.com/NOW/cgi-bin/bol?T ... play=80268

The math you need is:

Code:

Total = (High + (Low < 0)) * 2^32 + Low

(that is, add one extra 2^32 back whenever the low word has wrapped negative)
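A sketch of that recombination in Python (the helper names are mine; high/low stand in for the paired dfHigh*/dfLow* KBytes counters):

```python
def netapp_total(high: int, low: int) -> int:
    """Recombine two signed 32-bit SNMP values into the real 64-bit total.
    If the low word has wrapped negative, add one extra 2**32 back."""
    return (high + (1 if low < 0 else 0)) * 2**32 + low

# Round-trip check: split a 64-bit value the way a 32-bit agent reports it.
def split32(value: int) -> tuple[int, int]:
    high, low = value >> 32, value & 0xFFFFFFFF
    if low >= 2**31:          # low word comes back as a negative Integer32
        low -= 2**32
    return high, low

total_kb = 7_294_967_296      # a value well past the 32-bit wrap point
assert netapp_total(*split32(total_kb)) == total_kb
```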
Laffer
Posts: 14
Joined: Fri Aug 24, 2007 8:47 am

Post by Laffer »

I have done the work with the CDEFs here, but the Total Size is still wrong! Any suggestions?
Attachments
cacti_graph_template_netapp_storage_volume_usage.xml
Converting Signed to unsigned for Volumes bigger than 2 TB
(17.38 KiB) Downloaded 778 times

Post by Laffer »

and the associated data template (with negative numbers in the RRA)
Attachments
cacti_data_template_netapp_storage_volume_usage.xml
(8.9 KiB) Downloaded 738 times
NivenHuH
Posts: 4
Joined: Wed Nov 28, 2007 2:07 pm
Location: SF, CA

Post by NivenHuH »

birger,

Thanks for the workaround... it worked for me (until I hit 4 TB.. :) )