NetApp 2.4TB volume
- rony
- Developer/Forum Admin
- Posts: 6022
- Joined: Mon Nov 17, 2003 6:35 pm
- Location: Michigan, USA
- Contact:
Can each of you email me a walk of the devices in question and give me some background on what they are?
[size=117][i][b]Tony Roman[/b][/i][/size]
[size=84][i]Experience is what causes a person to make new mistakes instead of old ones.[/i][/size]
[size=84][i]There are only 3 ways to complete a project: Good, Fast or Cheap, pick two.[/i][/size]
[size=84][i]With age comes wisdom, what you choose to do with it determines whether or not you are wise.[/i][/size]
Using hrStorageUsed instead seems to work up to 12TB
We started using hrStorageUsed (available in the standard template from Cacti under "SNMP - Get Mounted Partitions" in the data queries for a "ucd/Net SNMP host").
It works for larger filesystems; however, it's overflowing for our 12TB volume. I am trying to figure out how to work around that, as we will be rolling out 24TB volumes very soon, and even a 44TB one.
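For reference on where that overflow comes from: hrStorageUsed is an Integer32 counting allocation units, so the largest value it can report is 2^31 - 1 units. A rough sketch of the resulting byte ceiling, assuming a typical 4 KiB hrStorageAllocationUnits (your agent may report a different unit size, which moves the ceiling):

```python
# Sketch: where a signed 32-bit hrStorageUsed wraps, assuming a 4 KiB
# allocation unit (hrStorageAllocationUnits varies per filesystem).
INT32_MAX = 2**31 - 1        # largest value an Integer32 can hold
alloc_unit = 4096            # bytes per allocation unit (assumed)

ceiling_bytes = INT32_MAX * alloc_unit
print(ceiling_bytes)         # 8796093018112 bytes, just under 8 TiB
```

That lines up with the symptom above: a 12TB volume is past the ~8 TiB point where a 4 KiB-unit counter wraps.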
- fmangeant
- Cacti Guru User
- Posts: 2345
- Joined: Fri Sep 19, 2003 8:36 am
- Location: Sophia-Antipolis, France
- Contact:
Re: Using hrStorageUsed instead seems to work up to 12TB
Hibrunom wrote:We started using hrStorageUsed (available in the standard template from Cacti under "SNMP - Get Mounted Partitions" in the data queries for a "ucd/Net SNMP host").
It works for larger filesystems; however, it's overflowing for our 12TB volume. I am trying to figure out how to work around that, as we will be rolling out 24TB volumes very soon, and even a 44TB one.
I've tried the "SNMP - Get Mounted Partitions", but it doesn't work for me.
I'm trying this on a "NetApp Release 6.5.2R1P16D6: Fri Nov 26 21:10:07 PST 2004". Can you tell me which release you're running?
[size=84]
[color=green]HOWTOs[/color] :
[list][*][url=http://forums.cacti.net/viewtopic.php?t=15353]Install and configure the Net-SNMP agent for Unix[/url]
[*][url=http://forums.cacti.net/viewtopic.php?t=26151]Install and configure the Net-SNMP agent for Windows[/url]
[*][url=http://forums.cacti.net/viewtopic.php?t=28175]Graph multiple servers using an SNMP proxy[/url][/list]
[color=green]Templates[/color] :
[list][*][url=http://forums.cacti.net/viewtopic.php?t=15412]Multiple CPU usage for Linux[/url]
[*][url=http://forums.cacti.net/viewtopic.php?p=125152]Memory & swap usage for Unix[/url][/list][/size]
hrStorageUsed
Hello,
Maybe I should have clarified that I was using that on Linux (not NetApp). To make sure that you have the hrStorage table available you can do:
snmpwalk -v SNMPVERSION -c COMMUNITYNAME HOST hrStorage
where SNMPVERSION is the version you are using, COMMUNITYNAME is whatever name you gave it, HOST ... well, that's descriptive enough.
In my case I get (partial reproduction):
HOST-RESOURCES-MIB::hrMemorySize.0 = INTEGER: 2075484 KBytes
HOST-RESOURCES-MIB::hrStorageIndex.1 = INTEGER: 1
HOST-RESOURCES-MIB::hrStorageIndex.2 = INTEGER: 2
HOST-RESOURCES-MIB::hrStorageIndex.3 = INTEGER: 3
HOST-RESOURCES-MIB::hrStorageIndex.4 = INTEGER: 4
HOST-RESOURCES-MIB::hrStorageIndex.5 = INTEGER: 5
[...]
I am also having problems with this.
snmpget -v 1 on my biggest aggregate (2.7TB) returns a negative number.
snmpget -v 2c to the netapp times out.
snmpget -c public -v 1 netapp .1.3.6.1.4.1.789.1.5.4.1.3.1
SNMPv2-SMI::enterprises.789.1.5.4.1.3.1 = INTEGER: -1318648000
rrdtool info says:
ds[dfKBytesTotal].type = "GAUGE"
ds[dfKBytesTotal].minimal_heartbeat = 600
ds[dfKBytesTotal].min = 0.0000000000e+00
ds[dfKBytesTotal].max = NaN
ds[dfKBytesTotal].last_ds = "UNKN"
ds[dfKBytesTotal].value = NaN
ds[dfKBytesTotal].unknown_sec = 4
I have read the NaN tutorial, but I cannot see what to do. As you may guess I am a cacti newbie.
Ok... I think I solved it.
I set the min values to 'U' for the rrd's in question and then added 2 new CDEF's:
"convert 32bit signed to unsigned"
cdef=CURRENT_DATA_SOURCE,0,GE,CURRENT_DATA_SOURCE,4294967295,CURRENT_DATA_SOURCE,+,IF
"convert 32bit signed to unsigned, multiply by 1024"
Built on the previous one, and multiplies by 1024.
I then applied the last one to all data in the 'netapp space detail' graph template.
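In Python, the first CDEF's RPN (value,0,GE,value,4294967295,value,+,IF) behaves like the sketch below. One hedged note: the CDEF adds 4294967295 (2^32 - 1), while a strict two's-complement reinterpretation would add 4294967296 (2^32); the one-count difference is invisible at multi-terabyte scale:

```python
def signed32_to_unsigned(value, offset=4294967295):
    """Mirror of the CDEF: if value >= 0, keep it; else add the offset.

    The CDEF above uses 4294967295 (2**32 - 1); a strict two's-complement
    reinterpretation would add 4294967296 (2**32), one count higher.
    """
    return value if value >= 0 else value + offset

# The negative reading from the 2.7TB aggregate earlier in the thread,
# using the exact 2**32 offset:
kbytes = signed32_to_unsigned(-1318648000, offset=2**32)
print(kbytes)           # 2976319296 KB, roughly 2.77 TiB
print(kbytes * 1024)    # the second CDEF additionally multiplies by 1024
```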
NetApps are a PITA when it comes to SNMP. They steadfastly refuse to support anything other than SNMPv1. v1 only goes as high as a 32-bit integer, which overflows around 4 TB. So, as a workaround, they supply some OIDs in their MIB that let you query the space. You have to read 2 values from their custom MIB and do some math.
You can read more about it here:
http://now.netapp.com/NOW/cgi-bin/bol?T ... play=80268
The math you need is:
Code:
Total = High * ((dfLowBytes < 0) + 1) * 2^32 + Low
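A sketch of combining the two signed 32-bit KB words, assuming standard two's-complement semantics for the NetApp MIB's high/low pairs: when the low word reads negative, its top bit is set, so its unsigned value is low + 2^32, which is the same as carrying 1 into the high word (added to High, not multiplied):

```python
def combine_kbytes(high, low):
    """Combine NetApp's high/low signed 32-bit KB words into one value.

    If the low word reads negative, its unsigned value is low + 2**32,
    which equals adding 1 to the high word:
    total = (high + 1) * 2**32 + low.
    """
    return (high + (1 if low < 0 else 0)) * 2**32 + low

# The negative value from the snmpget above, with a high word of 0:
print(combine_kbytes(0, -1318648000))   # 2976319296 KB, about 2.77 TiB
```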
Here I have done the work with the CDEFs, but the Total Size is still wrong! Any suggestions?
- Attachments
-
- cacti_graph_template_netapp_storage_volume_usage.xml
- Converting Signed to unsigned for Volumes bigger than 2 TB
- (17.38 KiB) Downloaded 778 times
and the associated data template (with negative numbers in the RRA)
- Attachments
-
- cacti_data_template_netapp_storage_volume_usage.xml
- (8.9 KiB) Downloaded 738 times