Remote Load Average Script
-
- Posts: 12
- Joined: Wed May 29, 2013 6:53 pm
Remote Load Average Script
I have a remote Ubuntu 10.04 machine running as a DNS and mail server and am having problems graphing certain information. I placed the default loadavg_multi.pl script on the remote machine and used SNMP's extend feature to expose its output to the Cacti machine. I am able to get the 5 min and 10 min values to graph, but the 1 min value never seems to show any data.
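For reference, the remote side of this is just net-snmp's extend feature configured in snmpd.conf. Something along these lines is what I mean (the loadavg entry matches the script above; the mailq and sensors entry names and script paths are only illustrative, not my exact config):
# /etc/snmp/snmpd.conf on the remote Ubuntu box
# Each "extend" entry publishes one script's output under the NET-SNMP-EXTEND-MIB tree
extend loadavg /usr/local/bin/loadavg_multi.pl
extend mailq /usr/local/bin/mailq_stats.sh
extend sensors /usr/local/bin/sensor_stats.sh
(Restart snmpd after editing the file so the new extend entries are picked up.)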
Here is the output of the loadavg_multi.pl script on the mail/DNS server
Oakfieldleft-mail:/usr/local/bin$ sudo ./loadavg_multi.pl
1min:0.01 5min:0.02 10min:0.00
Here is the output of the snmpwalk wrapper script run from the Cacti machine
homeserver:/usr/share/cacti/site/scripts$ sudo ./unix_rem_loadavg.pl 192.168.2.xxx
incoming:0 active:0 deferred:0 hold:0
1min:0.00 5min:0.00 10min:0.00
SIOTemp:33.0 VCC:3.3 Vcore:1.4 cpu0_vid:1.5 in0:2.4 in6:1.5 in7:1.7 temp2:36.0 temp3:31.0
That script simply queries remote clients that expose data through SNMP's extend feature; it is posted below. As you can see, a single OID index is used for all of the extend output.
#!/bin/bash
#/usr/share/cacti/site/scripts/rem_snmp.pl
#/usr/bin/snmpwalk -Oqav -v2c -c *communityname* -t30 $1 '.1.3.6.1.4.1.8072.1.3.2.3.1.1' | cut -d '"' -f 2
output=`/usr/bin/snmpwalk -Oqav -v2c -c *communityname* -t30 $1 '.1.3.6.1.4.1.8072.1.3.2.3.1.1' | cut -d '"' -f 2`
printf "$output"
echo
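A slightly more defensive variant of the same wrapper (quoting the host argument and using $() instead of backticks) would look like this; the OID and the *communityname* placeholder are unchanged, only the shell style differs, and the rem_snmp.sh name is just for illustration:
#!/bin/bash
# /usr/share/cacti/site/scripts/rem_snmp.sh - same behaviour as above, tidier quoting
host="$1"
output=$(/usr/bin/snmpwalk -Oqav -v2c -c *communityname* -t30 "$host" '.1.3.6.1.4.1.8072.1.3.2.3.1.1' | cut -d '"' -f 2)
printf '%s' "$output"
echo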
This is the data input method page showing the output names.
This is the actual graph that is not showing the 1 min intervals.
I am using Cacti 0.8.7e on Ubuntu desktop 10.04 amd64. This install was done no less than 3 weeks ago.
Please advise. Thanks!
-
- Posts: 12
- Joined: Wed May 29, 2013 6:53 pm
Re: Remote Load Average Script
Here is some more information on this problem.
The cacti.log output shows that the 5min interval is the only one being polled, even though I have the 1min and 10min values set up to be polled as well.
06/25/2013 03:01:03 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_rem_load_10min_63.rrd --template rem_load_5min 1372143661:0.00
This should be polling all three data sources.
I have tried deleting and recreating the graph several times to no avail.
It seems as though the 10min value will poll only at certain times, but the 1min value never graphs.
I really need some input on this as I cannot get the 1min poll to graph and can't get the 10min interval to poll all the time.
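One way to confirm which data sources the RRD file actually contains (rrdtool info is standard; the file name below comes from the log line above) is:
# List the data sources defined inside the RRD that the poller is updating
sudo rrdtool info /var/lib/cacti/rra/mailserver_rem_load_10min_63.rrd | grep '^ds\['
If the 1min data source is missing from that list, no update template will ever populate it.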
- Attachments
- loadavg2.jpg (187.17 KiB)
- loadavg.jpg (324.88 KiB)
-
- Posts: 12
- Joined: Wed May 29, 2013 6:53 pm
Re: Remote Load Average Script
Since I cannot seem to get any answers from anyone on this, I will post more information.
Looking at the DEBUG output on my Cacti machine, I can tell that it ALWAYS polls the 5 minute interval. Sometimes it will also poll the 10 minute interval, as shown in the DEBUG output below.
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field '5min:0.01' [map 5min->rem_5min] <------ Load Average Poll - 5 Minute
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field '10min:0.00' [map 10min->rem_10min] <----- Load Average Poll - 10 Minute
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'temp2:40.0' [map temp2->mb_temps2]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'temp3:35.0' [map temp3->mb_temps3]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'incoming:0' [map incoming->pf_incoming]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'active:0' [map active->pf_active]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'deferred:0' [map deferred->pf_deferred]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'hold:0' [map hold->pf_hold]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'VCC:3.3' [map VCC->mb_vcc]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'Vcore:1.4' [map Vcore->mb_vcore]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'cpu0_vid:1.5' [map cpu0_vid->gpu_volt]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'in0:2.4' [map in0->mb_volt0]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'in6:1.5' [map in6->mb_volt6]
06/26/2013 01:41:04 AM - POLLER: Poller[0] Parsed MULTI output field 'in7:1.7' [map in7->mb_volt7]
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_rem_5min_68.rrd --template rem_5min:rem_10min 1372225262:0.01:0.00 <------ RRDTool update - this is only updating the 5 and 10 minute intervals, as shown here. If Cacti were correctly polling all three sources, it would look like this:
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_rem_5min_68.rrd --template rem_1min:rem_5min:rem_10min 1372225262:0.01:0.00
----------------------------------------------------------------------------------------------------------------------
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_mb_temps3_67.rrd --template mb_temps2:mb_temps3 1372225262:40.0:35.0
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_pf_deferred_53.rrd --template pf_incoming:pf_active:pf_deferred:pf_hold 1372225262:0:0:0:0
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_mb_volt6_51.rrd --template mb_vcc:mb_vcore:gpu_volt:mb_volt0:mb_volt6:mb_volt7 1372225262:3.3:1.4:1.5:2.4:1.5:1.7
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_ping_35.rrd --template ping 1372225262:0.125
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_unix_processes_31.rrd --template unix_processes 1372225262:94
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_unix_users_30.rrd --template unix_users 1372225262:1
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_unix_icmpechoin_29.rrd --template unix_icmpechoout:unix_icmpechoin 1372225262:0:10330
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_unix_estconn_28.rrd --template unix_estconn 1372225262:2
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_traffic_in_27.rrd --template traffic_out:traffic_in 1372225262:102995834:93795770
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_cpu_26.rrd --template cpu 1372225262:1
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_25.rrd --template hdd_total:hdd_used 1372225262:77268647936:1067106304
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_24.rrd --template hdd_total:hdd_used 1372225262:1523572736:0
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_23.rrd --template hdd_total:hdd_used 1372225262:222818304:222818304
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_22.rrd --template hdd_total:hdd_used 1372225262:519880704:5505024
06/26/2013 01:41:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_21.rrd --template hdd_total:hdd_used 1372225262:519880704:301363200
Sometimes it doesn't poll the 10 minute interval at all, as shown here.
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field '5min:0.02' [map 5min->rem_5min] <---- Load Average Poll - 5 Minute
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'temp2:39.0' [map temp2->mb_temps2]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'temp3:35.0' [map temp3->mb_temps3]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'incoming:0' [map incoming->pf_incoming]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'active:0' [map active->pf_active]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'deferred:0' [map deferred->pf_deferred]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'hold:0' [map hold->pf_hold]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'VCC:3.3' [map VCC->mb_vcc]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'Vcore:1.4' [map Vcore->mb_vcore]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'cpu0_vid:1.5' [map cpu0_vid->gpu_volt]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'in0:2.4' [map in0->mb_volt0]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'in6:1.5' [map in6->mb_volt6]
06/26/2013 01:50:04 AM - POLLER: Poller[0] Parsed MULTI output field 'in7:1.7' [map in7->mb_volt7]
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_rem_5min_68.rrd --template rem_5min 1372225802:0.02 <----- This shows that only the 5 minute interval is being updated by RRDTool. Neither the 10 nor the 1 minute interval is being polled or graphed.
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_mb_temps3_67.rrd --template mb_temps2:mb_temps3 1372225802:39.0:35.0
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_pf_deferred_53.rrd --template pf_incoming:pf_active:pf_deferred:pf_hold 1372225802:0:0:0:0
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_mb_volt6_51.rrd --template mb_vcc:mb_vcore:gpu_volt:mb_volt0:mb_volt6:mb_volt7 1372225802:3.3:1.4:1.5:2.4:1.5:1.7
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_ping_35.rrd --template ping 1372225802:0.121
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_unix_processes_31.rrd --template unix_processes 1372225802:94
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_unix_users_30.rrd --template unix_users 1372225802:1
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_unix_icmpechoin_29.rrd --template unix_icmpechoout:unix_icmpechoin 1372225802:0:10339
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_unix_estconn_28.rrd --template unix_estconn 1372225802:2
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_traffic_in_27.rrd --template traffic_out:traffic_in 1372225802:103076171:93857525
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_cpu_26.rrd --template cpu 1372225802:1
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_25.rrd --template hdd_total:hdd_used 1372225802:77268647936:1067151360
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_24.rrd --template hdd_total:hdd_used 1372225802:1523572736:0
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_23.rrd --template hdd_total:hdd_used 1372225802:222867456:222867456
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_22.rrd --template hdd_total:hdd_used 1372225802:519880704:5505024
06/26/2013 01:50:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_hdd_used_21.rrd --template hdd_total:hdd_used 1372225802:519880704:301363200
I have confirmed that my script is set up correctly and works fine when I run it manually, as seen here.
homeserver:/usr/share/cacti/site/scripts$ sudo ./rem_snmp.pl 192.168.x.xxx
incoming:0 active:0 deferred:0 hold:0
1min:0.00 5min:0.00 10min:0.00
SIOTemp:37.0 VCC:3.3 Vcore:1.4 cpu0_vid:1.5 in0:2.4 in6:1.5 in7:1.7 temp2:39.0 temp3:35.0
Can anyone please give me some kind of update on whether this is a known bug with Cacti 0.8.7e on Ubuntu or a possible issue with Spine? Is there another type of script I should be using to gather the data I need? Any help is appreciated. Thanks.
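For completeness, this is the raw walk the wrapper runs before the cut (same OID and community placeholder as in the script, address redacted as above); comparing its raw lines against the parsed MULTI fields in the log is how I have been checking which fields survive parsing:
# Same query as rem_snmp.pl, minus the cut, to see the raw extend output
/usr/bin/snmpwalk -Oqav -v2c -c *communityname* -t30 192.168.2.xxx '.1.3.6.1.4.1.8072.1.3.2.3.1.1'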
-
- Posts: 12
- Joined: Wed May 29, 2013 6:53 pm
Re: Remote Load Average Script
Here is the script that I am using to gather the data.
#!/bin/bash
#/usr/share/cacti/site/scripts/rem_snmp.pl
#/usr/bin/snmpwalk -Oqav -v2c -c *communityname* -t30 $1 '.1.3.6.1.4.1.8072.1.3.2.3.1.1' | cut -d '"' -f 2
output=`/usr/bin/snmpwalk -Oqav -v2c -c *communityname* -t30 $1 '.1.3.6.1.4.1.8072.1.3.2.3.1.1' | cut -d '"' -f 2`
printf "$output"
-
- Posts: 12
- Joined: Wed May 29, 2013 6:53 pm
Re: Remote Load Average Script
Here is what I get when I try a manual RRDTool update using all three sources.
homeserver:/usr/share/cacti/site/scripts$ sudo rrdtool update /var/lib/cacti/rra/mailserver_rem_5min_68.rrd --template rem_1min:rem_5min:rem_10min 1372225262:0.01:0.00
ERROR: /var/lib/cacti/rra/mailserver_rem_5min_68.rrd: expected 3 data source readings (got 3) from 1372225262
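Note that the command above names three data sources in --template but only supplies two readings after the timestamp. A form where the value count matches the template (the 0.00 supplied for rem_1min here is just an illustrative value) would be:
sudo rrdtool update /var/lib/cacti/rra/mailserver_rem_5min_68.rrd --template rem_1min:rem_5min:rem_10min 1372225262:0.00:0.01:0.00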
- Howie
- Cacti Guru User
- Posts: 5508
- Joined: Thu Sep 16, 2004 5:53 am
- Location: United Kingdom
- Contact:
Re: Remote Load Average Script
Do you get the same thing if you use the standard SNMP graph template for this ("ucd/net - Load Average")?
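(If you want to check first that the host answers the standard UCD load-average OIDs that template typically reads, a quick walk of UCD-SNMP-MIB::laLoad should return the 1, 5 and 15 minute values; the community string and address below are placeholders:)
# UCD-SNMP-MIB::laLoad - 1, 5 and 15 minute load averages
snmpwalk -v2c -c *communityname* 192.168.2.xxx .1.3.6.1.4.1.2021.10.1.3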
Weathermap 0.98a is out! & QuickTree 1.0. Superlinks is over there now (and built-in to Cacti 1.x).
Some Other Cacti tweaks, including strip-graphs, icons and snmp/netflow stuff.
(Let me know if you have UK DevOps or Network Ops opportunities, too!)
-
- Posts: 12
- Joined: Wed May 29, 2013 6:53 pm
Re: Remote Load Average Script
Howie,
I tried using ucd/net - Load Average and it worked for me. I checked my debug information and it shows a separate RRD file for each load average, as shown below.
11/25/2013 02:07:03 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_load_1min_75.rrd --template load_1min 1385363221:0.08
11/25/2013 02:08:04 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_load_15min_76.rrd --template load_15min 1385363282:0.00
11/25/2013 02:09:03 AM - POLLER: Poller[0] CACTI2RRD: /usr/bin/rrdtool update /var/lib/cacti/rra/mailserver_load_5min_77.rrd --template load_5min 1385363342:0.02
The graph is now displaying correctly.
Thank you very much Howie for the advice. I didn't think I was ever going to get an answer.
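For anyone hitting the same thing, a quick way to confirm samples are landing in each of the new per-interval files (paths taken from the log lines above) is rrdtool lastupdate:
# Show the most recent timestamp and value written to each load-average RRD
sudo rrdtool lastupdate /var/lib/cacti/rra/mailserver_load_1min_75.rrd
sudo rrdtool lastupdate /var/lib/cacti/rra/mailserver_load_5min_77.rrd
sudo rrdtool lastupdate /var/lib/cacti/rra/mailserver_load_15min_76.rrd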