OIDs change after reboot?
Guys,
I can't believe I haven't found anything about this on here. Maybe I didn't search long enough, but I'm pulling my hair out.
It seems that certain OIDs don't persist across reboots. For example, I monitor all my RHEL4-U7 boxes via SNMPv2 and I noticed the following: disk usage for local drives seems to be persistent across reboots, but I also graph mount points we have on a DMX4, and whenever I upgrade a kernel or upgrade PowerPath, some of those mount points change.
Let's say I set up a box for the first time and Cacti snmpwalks all my filesystems. I pick the ones I want and it's all good, e.g. /dev/mapper/VGora-LVarc1, which sits on the EMC. Then I upgrade the kernel, which requires me to reinstall PowerPath against that new kernel. After the reboot, the graph that used to show usage for this device is now graphing some random other device like /export/home/{user}, which is NFS mounted and which I don't even want to graph.
This happens to some, but not all, mount point graphs every time I do some work on them. I read something about SNMP OIDs not being persistent across reboots?
How can that be, though? It doesn't make sense to me that these OIDs wouldn't be sticky. How is any app like Cacti, Nagios, etc. supposed to monitor things reliably when they randomly change?
The only way to recover is to delete the host and start over, since that does another snmpwalk. This makes the entire effort useless, because I can't get long-term historic graphs that way.
Can I tell snmpd to make these persistent?
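To illustrate, here's roughly what the shift looks like in net-snmp's UCD dskTable (host name, community string, and mount points are made up for this post). The numeric dskIndex is just the order the disk entries get registered in, so it can change when devices come up in a different order:

    snmpwalk -v2c -c public myhost UCD-SNMP-MIB::dskPath
    UCD-SNMP-MIB::dskPath.1 = STRING: /
    UCD-SNMP-MIB::dskPath.2 = STRING: /oraarc1            <- the EMC volume is index 2

    (reboot after the kernel/PowerPath work)

    snmpwalk -v2c -c public myhost UCD-SNMP-MIB::dskPath
    UCD-SNMP-MIB::dskPath.1 = STRING: /
    UCD-SNMP-MIB::dskPath.2 = STRING: /export/home/user   <- index 2 now points elsewhere

Anything keyed on dskIndex 2 silently starts graphing the wrong filesystem.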
You likely can't; that's the responsibility of whoever writes the subagent. The important thing is to develop a data query that re-indexes based upon some primary key field, like "Volume Name" for disks or ifName for interfaces. Once you do that, make sure that field's OID is the primary key. Then you can re-index the data query based upon three different re-index methods: Index Count Changed, Uptime Goes Backwards, or None.
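To make that concrete, here is a trimmed sketch of the relevant part of a Cacti SNMP data query XML, loosely based on the stock net-snmp disk query (file layout may differ on your version, so treat this as an illustration, not a drop-in file). The <index_order> line lists the fields Cacti prefers as the re-index key; putting dskPath ahead of dskIndex keys the query on the mount point instead of the volatile row number:

    <interface>
        <name>Get Mounted Partitions Information</name>
        <oid_index>.1.3.6.1.4.1.2021.9.1.1</oid_index>
        <index_order>dskPath:dskDevice:dskIndex</index_order>
        <fields>
            <dskPath>
                <name>Mount Point</name>
                <method>walk</method>
                <source>value</source>
                <direction>input</direction>
                <oid>.1.3.6.1.4.1.2021.9.1.2</oid>
            </dskPath>
            <dskDevice>
                <name>Device Name</name>
                <method>walk</method>
                <source>value</source>
                <direction>input</direction>
                <oid>.1.3.6.1.4.1.2021.9.1.3</oid>
            </dskDevice>
        </fields>
    </interface>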
TheWitness
Witness,
Are you saying the problem is with net-snmp, and everyone using Cacti with net-snmp will have this problem? Or is the problem with the way Cacti's data queries use net-snmp by default?
I'm not very savvy with SNMP at all; I just use the wizard to add more devices.
How is everybody dealing with this?
Can you give an example of how you'd change a data query to index based on volume name instead of OID? That sounds like the solution.
I just don't get why the Cacti templates would index by OID, knowing that doing so creates problems.
stucky
linux labels
How about creating labels for your volumes under RHEL? Using e2label you can label your volumes and then refer to the labels, which are persistent. Could work.
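A quick sketch of that idea (the label name and mount point are made up here; e2label only works on ext2/ext3 filesystems, and for an LVM volume like /dev/mapper/VGora-LVarc1 you'd be labelling the filesystem that sits on the LV):

    # tag the filesystem with a persistent label
    e2label /dev/mapper/VGora-LVarc1 oraarc1

    # /etc/fstab: mount by label instead of by device path
    LABEL=oraarc1  /oraarc1  ext3  defaults  1 2

That keeps the mount itself stable if the device name moves around, though it may not change which dskIndex net-snmp hands out for the mount point.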
To answer my own post: I seem to have stumbled upon a solution by accident.
1. Click on "Data Sources".
2. Pick the host in question.
3. Click on the name of the partition in question.
4. Under "Custom Data", change "Index Type" from "dskIndex" to "dskPath".
I tried this on a host before upgrading PowerPath and the kernel, and indeed the graphs remained unchanged. Yay!
However, I can't seem to find a way to make this the default. I have to click on EVERY single host and every single partition graph to make this change.
I don't see why this wouldn't be the default setting. Why would anyone prefer an index that can randomly change over a sticky, and therefore reliable, disk path?
Anyone?
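In case it helps anyone else who finds this: one way to avoid the per-host clicking (a sketch, assuming the stock net-snmp disk data query; back up the file first, and paths may vary by install) is to edit the query XML under your Cacti directory so dskPath leads the index order, then reload the data query on each host:

    # resource/snmp_queries/net-snmp_disk.xml
    <index_order>dskPath:dskDevice:dskIndex</index_order>

Hosts (re)indexed from that query should then pick the mount point as the index key by default.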
Click on "Data Sources"
Pick the host in question
Click on the name of the partition in question
Under "Custom Data" change "Index Type" from "dskIndex" to "dskPath"
I tried this on a host before upgrading PopwerPath and the kernel and indeed the graphs remained unchanged - YEY !!!
However, I can't seem to find a way to make this the default. I have to click on EVERY single host and every single partition graph to make this change.
I don't why this wouldn't be the default setting. Why would anyone prefer an index that can randomly change versus a sticky and therefore relieable disk path ?
Anyone ?