Monitoring Novell Cluster Services - Resources, nodes, both?

I've been using Cacti for about a week and a half. I've just finished setting up all my SLES servers, and now I'm starting on our NetWare cluster.
The cluster poses an interesting issue.
There are 12 nodes, each with its own IP and a pair of system volumes.
There are 20 resources, each with its own IP, one or more volumes, and multiple services.
The resources move between nodes in the event of failover or manual migration. Therefore, a query of memory or CPU utilization from resource1 will return values from whichever node it happens to be resident on at the time of the query.
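For what it's worth, an easy way to watch this behavior is to poll sysName through each resource IP: whichever node currently holds the resource is the one that answers. A minimal sketch wrapping net-snmp's snmpget from Python; the addresses and community string below are just placeholders, not anything from a real setup:

```python
#!/usr/bin/env python3
# Minimal sketch: show which cluster node is answering behind each
# resource IP by polling sysName through the resource address.
# The IPs and community string are placeholders, not real values.
import subprocess

SYS_NAME_OID = ".1.3.6.1.2.1.1.5.0"   # SNMPv2-MIB::sysName.0
COMMUNITY = "public"                  # replace with your read community

# Placeholder addresses -- substitute your 20 resource IPs.
RESOURCE_IPS = ["10.0.0.101", "10.0.0.102", "10.0.0.103"]

def snmp_get(host, oid):
    """Run net-snmp's snmpget; return the bare value or None on failure."""
    try:
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", host, oid],
            text=True, timeout=5)
        return out.strip()
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return None

for ip in RESOURCE_IPS:
    node = snmp_get(ip, SYS_NAME_OID)
    print("%s -> currently hosted on %s" % (ip, node or "unreachable"))
```

Run it before and after migrating a resource and you'll see the answering node change.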
I'm thinking about the best way to monitor everything with a minimal number of SNMP queries. I could just restrict memory/CPU/sensor monitoring to the nodes and volume/service monitoring to the resources, but the former would still be desirable grouped with the resource graphs...
Has anyone dealt with this situation or similar before? What did you do?
I'm leaning toward just querying everything I want, everywhere I want, since I don't think I have so many resources that multiple queries of the same stats would have a measurable effect on the server.
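To put rough numbers on the "query everything everywhere" idea: 12 node IPs plus 20 resource IPs at a handful of OIDs each is well under a few hundred queries per polling cycle, which is nothing for Cacti's poller. Here's a rough timing sketch; it assumes the NetWare agents expose HOST-RESOURCES-MIB (worth verifying with an snmpwalk first), and every address and the community string are made up:

```python
#!/usr/bin/env python3
# Rough sketch: time one full poll across every node and resource IP to
# see whether duplicate queries of the same stats actually cost anything.
# Addresses, community, and OIDs are placeholders/assumptions; verify the
# NetWare agent exposes HOST-RESOURCES-MIB before relying on these OIDs.
import subprocess
import time

COMMUNITY = "public"
NODE_IPS     = ["10.0.0.%d" % n for n in range(1, 13)]      # 12 nodes
RESOURCE_IPS = ["10.0.0.%d" % n for n in range(101, 121)]   # 20 resources
OIDS = [
    ".1.3.6.1.2.1.25.3.3.1.2",  # hrProcessorLoad (per-CPU table)
    ".1.3.6.1.2.1.25.2.3.1.6",  # hrStorageUsed (per-storage table)
]

start = time.time()
walks = 0
for host in NODE_IPS + RESOURCE_IPS:
    for oid in OIDS:
        subprocess.call(
            ["snmpwalk", "-v2c", "-c", COMMUNITY, "-t", "2", host, oid],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        walks += 1
print("%d walks in %.1f seconds" % (walks, time.time() - start))
```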
In our scenario we have 70 or more NetWare clusters of varying size, and 1300 or so virtual servers. For organizational purposes, I added device records in Cacti for all of the physical servers, and under those I graph things like CPU, FS reads/writes, caching, network performance, etc.
I then add all of the virtual servers as their own devices. Since our virtual servers are pretty much all just volume resources, I add graphs for volume-specific info to the virtual server tree items, e.g. available space, purgeable space, etc. This effectively separates the "server-type stuff" (CPU and so on) from the "resource-type stuff" (volume space), and it works well for us.
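If you want to script a quick check of what volume data is actually visible through a resource IP before building the Cacti graphs, something like the sketch below works. It assumes the agent exposes volumes in hrStorageTable; NetWare may keep volume stats in Novell's own MIBs instead, so walk the tree first to confirm. The address and community are placeholders:

```python
#!/usr/bin/env python3
# Sketch: pull volume descriptions and usage through one resource IP by
# walking hrStorageTable. Assumes the agent exposes volumes there; on
# NetWare the data may live in Novell's own MIBs instead -- check first.
import subprocess

COMMUNITY = "public"          # placeholder read community
RESOURCE_IP = "10.0.0.101"    # placeholder resource address

def walk(oid):
    """Walk a table column with net-snmp's snmpwalk; return bare values."""
    out = subprocess.check_output(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, "-Ovq", RESOURCE_IP, oid],
        text=True)
    return out.splitlines()

descr = walk(".1.3.6.1.2.1.25.2.3.1.3")  # hrStorageDescr
size  = walk(".1.3.6.1.2.1.25.2.3.1.5")  # hrStorageSize (allocation units)
used  = walk(".1.3.6.1.2.1.25.2.3.1.6")  # hrStorageUsed (allocation units)

for d, s, u in zip(descr, size, used):
    print("%-30s size=%s used=%s" % (d, s, u))
```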
As for the cluster IP address itself, I ignore it. I haven't found anything of real value worth keeping an eye on there.