Request, template to monitor HP LeftHand P4300
Request, template to monitor HP LeftHand P4300
Has anyone successfully managed to plot performance metrics for a HP LeftHand cluster?
If so, could you please share your templates?
Thank you very much,
Otto
Learn how to make them yourself and then publish them for the community. http://docs.cacti.net/manual:087
| Scripts: Monitor processes | RFC1213 MIB | DOCSIS Stats | Dell PowerEdge | Speedfan | APC UPS | DOCSIS CMTS | 3ware | Motorola Canopy |
| Guides: Windows Install | [HOWTO] Debug Windows NTFS permission problems |
| Tools: Windows All-in-one Installer |
-
- Cacti User
- Posts: 234
- Joined: Mon Dec 13, 2004 3:03 pm
I'm building them. I'm having a few problems with some of the SNMP objects. For instance, the operations for the RAID controller are in the 10^12 per second range ... which is clearly impossible. I don't think HP/LeftHand have completely worked out the accuracy for all of the SNMP objects that are available. I have a support case open with HP to try and resolve some of these issues.
At any rate, I'm going to finish building the templates (the way they should be) but certain graphs/stats just plain won't make sense. I'll post here when I've finished them. Should be soon. I have per-volume stats working correctly. I'm finishing up per-cluster stats (IOPs, Latency, Throughput, Cache hits, QDepth). I will NOT be building templates for per-initiator stats ... this would just be way too much to handle, at least in our environment. If there is a specific request for this I'd be able to build them however. The per-node stats are giving me an especially good headache. This is where I'm running into the Teraflop values <g>
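In the meantime, a quick way to sanity-check one of those suspicious counters outside of Cacti is to poll it twice and work out the rate by hand. This is only a rough sketch (Python shelling out to net-snmp's snmpget); the OID below is a placeholder, not the real RAID-controller operations counter, and the node name and community are made up:
Code:
#!/usr/bin/env python3
"""Sanity-check one suspicious LeftHand SNMP counter outside of Cacti."""
import re
import subprocess
import time

HOST = "san-node1"          # hypothetical node name -- use a real one
COMMUNITY = "public"        # your SNMP read community
OID = ".1.3.6.1.4.1.9804.0.0.0"   # PLACEHOLDER: substitute the real counter OID
INTERVAL = 30               # seconds between the two samples

def get_counter(oid):
    """Fetch one OID via net-snmp's snmpget and return its integer value."""
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", HOST, oid],
        text=True,
    )
    match = re.search(r"\d+", out)
    if not match:
        raise ValueError(f"non-numeric reply for {oid}: {out!r}")
    return int(match.group())

first = get_counter(OID)
time.sleep(INTERVAL)
second = get_counter(OID)

# Counters should only increase; a negative delta hints at a wrap or a reset.
delta = second - first
print(f"delta={delta} over {INTERVAL}s -> {delta / INTERVAL:.1f} ops/sec")
If the hand-computed rate is also in the 10^12 range, the agent really is reporting it; if not, the problem is somewhere in the data source or the graph math.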
-
- Cacti User
- Posts: 234
- Joined: Mon Dec 13, 2004 3:03 pm
Got the templates up.
See:
http://forums.cacti.net/viewtopic.php?p=198713
http://docs.cacti.net/usertemplate:host:hp:lefthand
-
- Posts: 2
- Joined: Thu Nov 18, 2010 7:38 am
Re: Request, template to monitor HP LeftHand P4300
@eschoeller
Great job! I just registered to say THANKS for this work!
I would be more than happy and interested if you implemented a per-initiator (IOPs/sec) option to reflect all the viewing capabilities of the CMC. And let us know how it goes with the support call.
Peter
-
- Posts: 2
- Joined: Wed Feb 16, 2011 8:05 pm
Re: Request, template to monitor HP LeftHand P4300
First, I would like to say thanks for all of your work. I have hit a limitation, and I would like to see if I can expand your template, so I am posting to see if you can give me a few pointers...
We use a lot of snapshots, both local and remote, which causes the issue we are having. The graphs do not appear to sum the usage of the snapshots, so whatever space the top layer has is all that gets shown. Have you looked into this? Do you have any suggestions?
-
- Cacti User
- Posts: 234
- Joined: Mon Dec 13, 2004 3:03 pm
Re: Request, template to monitor HP LeftHand P4300
No, I haven't run into this problem, but that is not to say that it doesn't exist. I don't watch this service closely; I merely got this up and running and turned it over to the people who run the service. Most of the complaints I get are about the latency graphs and how they differ from the CMC. I still can't explain that, but I know the graphs represent what the SNMP agents are reporting ... anyway, sorry to ramble.
I only spent a few seconds looking at this, but the issue makes sense. You certainly don't want the volume size reported to reflect both its own size plus the size of the snapshots. This would confuse the heck out of people. But I also understand the need to know how much snapshot space you're using, especially if you're generating lots of snapshots. The max we have is 2. Again, I don't run the service, so I don't know why we don't use more.
There is a whole additional table dedicated to Snapshots ... LEFTHAND-NETWORKS-NSM-CLUSTERING-MIB::clusVolumeSnapshotEntry, or .1.3.6.1.4.1.9804.3.1.1.2.12.101.1. This template set doesn't look at this table at all, but it really should!!
There is some great info in there ... including clusVolumeSnapshotUsedSpace, which is exactly what you want, or even clusVolumeSnapshotClusterUsedPercent or clusVolumeSnapshotProvisionedSpace. Heck, they even have IO and latency information for the snapshots ... so they're essentially treated like additional volumes!
It would not be hard to duplicate the volume data query and adapt it to the snapshot table. Thinking through this in my head, though ... you'd have to create new graphs for each snapshot you create. Then you'd have stats for just *that* snapshot. If you automate your snapshot creation through the CMC CLI (I only know it exists, never used it) then that script could also automate the creation of the graphs ... but this seems like a lot of effort. I imagine you want an aggregate graph that includes the space of all the snapshots for the entire cluster? Or an aggregate of all snapshots for a particular volume? That would require a script server, perhaps, that is capable of reading in all the right OIDs and then spits out the total space used. There is nothing (from what I can see) within the current MIB tree that provides a good overview of snapshot use. This would be a good feature request for HP, I guess. Then it would be trivial (and far more useful ... Nagios would like it too!)
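If anyone wants to experiment with that aggregate idea, here is a rough sketch of what such a script could look like (Python, just shelling out to net-snmp's snmpwalk). The table OID is the one quoted above; the column number for clusVolumeSnapshotUsedSpace is only a guess and has to be checked against the MIB, as do the units the agent reports:
Code:
#!/usr/bin/env python3
"""Sum snapshot used space across the cluster and print a Cacti-friendly line."""
import subprocess
import sys

# clusVolumeSnapshotEntry, as quoted above.
SNAPSHOT_ENTRY = ".1.3.6.1.4.1.9804.3.1.1.2.12.101.1"
# ASSUMPTION: the column number for clusVolumeSnapshotUsedSpace is a guess;
# verify it against the LEFTHAND-NETWORKS-NSM-CLUSTERING MIB before relying on it.
USED_SPACE_COLUMN = SNAPSHOT_ENTRY + ".6"

def walk_integers(host, community, oid):
    """Walk one SNMP column with net-snmp's snmpwalk and yield integer values."""
    out = subprocess.check_output(
        ["snmpwalk", "-v2c", "-c", community, "-Ovq", host, oid],
        text=True,
    )
    for line in out.splitlines():
        line = line.strip()
        if line.isdigit():
            yield int(line)

if __name__ == "__main__":
    host, community = sys.argv[1], sys.argv[2]
    total = sum(walk_integers(host, community, USED_SPACE_COLUMN))
    # One "field:value" pair on stdout, which a Cacti script data source can read.
    print(f"snapshot_used_space:{total}")
Run it as something like ./lefthand_snapshot_space.py <host> <community> (the file name is just an example) and wire the single field:value line it prints up as a script-based data input method in Cacti.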
Hope this helps. Let me know what you think. And sorry for the late reply, I am slammed with work.
-
- Posts: 2
- Joined: Wed Feb 16, 2011 8:05 pm
Re: Request, template to monitor HP LeftHand P4300
Thanks for pointing me in the right direction! Now, if I can just find the time to play with it.