b0fh wrote: Template looks nice, but same problem here with two 6-core Xeon CPUs: user/system times are wrong and idle is "nan".
Did anyone fix this, or does anyone know of another nice, scalable Linux multi-CPU template?
Edit: The template seems to work when most of the cores are under load; "calculated logical CPUs" and the usage/idle percentages are then shown correctly. The problem seems to appear when most of the available cores are not loaded during the polling interval.
Hello.
I'm having exactly the same issue on all machines.
The machines have 24 cores, but the Calculated Logical CPUs value is reported as 2, 3, or 4, so the graphs are badly wrong.
Is there any known fix for this?
The Fridh scripts are awesome; we are using them on our Cacti installation, graphing four hosts, let's call them A, B, C, and D.
Cacti runs on host C, graphing itself using SNMP on localhost.
But I ran into a problem: only two of the hosts, A and B, are graphing CPU usage and I/O stats correctly. A is a CentOS 6.4 installation (NET-SNMP 5.5, 3 cores) and B is an Ubuntu 10.04 installation (NET-SNMP 5.4.2.1, 6 cores). Machine C, on which Cacti is running, is an Ubuntu 12.04 machine with NET-SNMP 5.4.3 and 1 core.
Machine D is another Ubuntu 12.04 machine with 2 cores. C cannot graph the Fridh data sources for C and D. I tried running the poller script with --force to see whether the corresponding .rrd files would be created, but they weren't. Any idea why? What can I do to debug this?
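One way to debug it, as a rough sketch (assuming the template polls the UCD-SNMP systemStats counters over SNMP v2c with a "public" community; the hostname and community below are placeholders, adjust to your setup): walk the relevant subtree from host C and check whether C and D answer at all:

    # UCD-SNMP-MIB::systemStats (ssCpuRawUser, ssCpuRawSystem, ssCpuRawIdle, ...)
    snmpwalk -v2c -c public localhost .1.3.6.1.4.1.2021.11
    snmpwalk -v2c -c public host-d.example.com .1.3.6.1.4.1.2021.11

If the raw counters come back but the .rrd files still aren't created, the problem is more likely on the Cacti side (data query or poller cache) than with the SNMP agents.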
FYI, there is a maximum value of 2000 set on some of the data sources stored in the RRDs. I think this translates to 20 × 100%, which causes the graphs to stop working for any system with more than 20 CPUs. To fix it, just change the nine instances of that 2000 maximum to something higher, or remove the limit entirely.
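If you still have the template XML you imported, a rough way to locate those limits, as a sketch (the file name here is hypothetical and the field name depends on the export format, so check your copy first):

    grep -n 'rrd_maximum' cacti_host_template_fridh.xml | grep 2000

On an already-imported template, the equivalent is to raise or clear the "Maximum Value" field of each affected data source item under Data Templates, and then fix the existing .rrd files with rrdtool tune, as in the next post.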
I had installed this and had been getting wrong information for a while. I wasn't able to get your fix to work in my environment; however, using your discovery, I used rrdtool to update the associated RRD files directly.
This updates every file whose name contains _sscpurawsystem_, setting a new upper bound of NaN for each DS. Setting the maximum of these DSes to NaN means there is no upper bound.
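A sketch of the kind of command meant here, assuming Cacti's RRD files live in /var/www/cacti/rra (adjust the path to your install) and reading the DS names out of each file rather than guessing them:

    cd /var/www/cacti/rra
    for f in *_sscpurawsystem_*.rrd; do
        # list every DS name in the file, then drop its upper bound ("U" = unknown, i.e. no limit)
        for ds in $(rrdtool info "$f" | sed -n 's/^ds\[\(.*\)\]\.max.*/\1/p'); do
            rrdtool tune "$f" --maximum "$ds:U"
        done
    done

The same loop can be pointed at the other ssCpuRaw* files if they carry the same 2000 maximum.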
Has anyone tried to create a threshold on the total CPU usage? I'm trying to wrap my head around this, since you can only create thresholds on an actual data source, and the current template uses a CDEF to compute the total. Does anyone know a way to grab the value of that CDEF?
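For reference, an illustration (not the template's exact definition) of what such a totalling CDEF looks like: Cacti CDEFs are RRDtool RPN, so summing four data sources a through d is

    cdef=a,b,+,c,+,d,+

and because that total is only computed at graph time, there is no stored data source holding it for a threshold to point at.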
Hi,
My localhost CPU graph was working correctly, but after I changed the RRD settings it stopped working!
I have localhost graphs for memory usage, logged-in users, and so on, but the CPU graph only shows NaN values.
How can I fix this?
Bump for catchvjay's question. I'm having the exact same issue with dual Intel Xeons with 6 cores each, giving me 24 threads. My graph looks exactly like his with respect to the values being inaccurate and the CPU count being 0. Fridh's graph works for my other boxes (thanks), which have dual Xeons with 4 cores each (16 threads). Maybe we're exceeding some threshold?
Cacti version is 0.8.7b
I verified that the OIDs in systemstats.xml are correct and double-checked via Graph Management with debugging turned on.
Thanks
Hi,
This works fine for my 64-core servers.
(attachment: px5.png)
But I just realized that one of them shows only 63 CPUs:
(attachment: px6.png)
On the other hand, this doesn't work for my 128-core servers.
One of them is showing 12 CPUs.
chuonthis wrote: FYI, there is a maximum value of 2000 set on some of the data sources stored in the RRDs. I think this translates to 20 × 100%, which causes the graphs to stop working for any system with more than 20 CPUs. To fix it, just change the nine instances of that 2000 maximum to something higher, or remove the limit entirely.
I'm using the original Fridh template (thanks!), and I'd like to add a reading to graph the CPU core with the highest usage.
The system that's being graphed has 48 cores (2x Xeon + Hyper-threading enabled).
It's being used for a site-to-site VPN, and with the pcrypt kernel module we're able to use more than one CPU core for a single site-to-site VPN. However, not all CPU cores are used for the VPN, so the averaged graph doesn't show whether a single CPU core is pegged at 100%. (If pcrypt doesn't work, the kernel will only use a single core, which immediately maxes out.)
I was looking at the CDEF that calculates the average over all the CPU cores, but could not figure out how to graph the maximum across all the cores.
Something like this:
cdef=a,b,c,d,e,f,+,+,+,+,+,ALL_DATA_SOURCES_MAX
(I'm not sure if there is a function I could use to get the max from all the variables)
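One way to get that in plain RRDtool RPN, as a sketch (assuming six per-core sources a through f as in the example above; the template's real source names may differ), is to chain the two-operand MAX operator instead of +:

    cdef=a,b,MAX,c,MAX,d,MAX,e,MAX,f,MAX

Each MAX pops two values and pushes the larger one, so the chain leaves the highest per-core value on the stack. In Cacti you would build this as a custom CDEF, one CDEF item per token; as far as I know there is no ALL_DATA_SOURCES-style shortcut that expands to a maximum rather than a sum.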