Trouble graphing Hard Drive Space for Debian 4.0

Bobby_Tables
Posts: 29
Joined: Mon Nov 22, 2010 12:42 pm

Trouble graphing Hard Drive Space for Debian 4.0

Post by Bobby_Tables »

Hi,

First some Cacti particulars:
Cacti: 0.8.7g
OS: OpenSUSE 11.3 (a Xen domU)

All I want to do is graph the Available and Used disk space on a Debian 4.0 host. This should be absolutely trivial, but apparently some sort of Cacti magic is required. Let's start from the top:

1. I am using the "SNMP hrStorageTable" Data Query; it correctly finds the storage on the host. Output from the "Verbose Query" is available at: http://pastebin.com/KfCsj6zw.

2. I open the Host Template that is applied to the host in question. It clearly lists the "Host MIB Available Disk Space" and "Host MIB hrStorageTable" graph templates. I have attached a screen shot of this.

3. Now I try to create a new graph via "New Graphs". Under "Graph Templates" in the "create" pulldown, there is NO graph template for "Host MIB Available Disk Space" or "Host MIB hrStorageTable", and no "SNMP hrStorageTable" graph template either. Because of this, I simply select the index of the volume I want to graph from the Data Query window. I have attached a screen shot of this.

4. The graph is created, but the values for Total Size and Used Space are both "NaN". The rrd file is not updated with real values either; it contains NaN as well.

5. The data source is number 4925; when I grep the cacti log for " DS[4925] ", I get no results.

At this point I'm pretty lost. I have several other hosts graphing storage utilization just fine, but a couple (including the host in question) simply refuse to graph. Any suggestions? (Yes, I've looked at the Debugging NaN Graphs document; no, it did not help resolve the issue.)
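One thing worth double-checking with that grep: with plain grep, `[4925]` is a regular-expression character class, so the pattern ` DS[4925] ` matches " DS4 ", " DS9 ", etc., but never the literal string " DS[4925] ". Use `grep -F` for a fixed-string match. A minimal sketch with a fabricated sample log line (the DS id, host, and values are hypothetical):

```shell
# Write a sample log line resembling healthy spine output (made-up values).
cat > /tmp/sample_cacti.log <<'EOF'
10/19/2011 03:20:01 PM - SPINE: Poller[0] Host[20] TH[1] DS[4925] SNMP: v2: db01-primary, dsname: hrStorageUsed, oid: .1.3.6.1.2.1.25.2.3.1.6.1, value: 2576384
EOF

# Fixed-string match finds the line; the regex form would miss it because
# [4925] is treated as a character class, not literal brackets.
grep -F ' DS[4925] ' /tmp/sample_cacti.log
```

If `grep -F` still comes up empty on the real log, the data source genuinely never makes it into the poller run, which points at the poller cache rather than at SNMP.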

Greatly looking forward to any and all assistance with this,

Bobby_Tables
Attachments
cacti-Host-Templates.png
cacti-Create-Graph-Data-Query-SNMP-hrStorageTable.png
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by TheWitness »

Note the host id, then run spine as follows:

Code: Select all

./spine -V 3 -R -f <host_id> -l <host_id>
It should report the values for those data sources clearly; if they are not reported correctly, you have a problem. It looks like your template has been modified a bit. You may have imported someone's hosed-up template.

Be careful when bringing stray templates in from the cold. They may have rabies, ticks or fleas.

TheWitness
True understanding begins only when we realize how little we truly understand...

Life is an adventure, let yours begin with Cacti!

Author of dozens of Cacti plugins and customizations. Advocate of LAMP, MariaDB, IBM Spectrum LSF and the world of batch. Creator of IBM Spectrum RTM, author of quite a bit of unpublished work and most of Cacti's bugs.
_________________
Official Cacti Documentation
GitHub Repository with Supported Plugins
Percona Device Packages (no support)
Interesting Device Packages


For those wondering, I'm still here, but lost in the shadows. Yearning for fewer bugs. Who wants a Cacti 1.3/2.0? Streams anyone?
Bobby_Tables
Posts: 29
Joined: Mon Nov 22, 2010 12:42 pm

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by Bobby_Tables »

Hi,

Thanks for the reply. The output of the spine command is below; I did not find any references to the hrStorageTable OIDs. All the data sources it returned values for belong to graphs other than the one in question.

What does this tell us?

Thanks,

Bobby_Tables

PS: You'll notice the spine output says it scanned 2 hosts, but it really only scanned the one. Bug?

Code: Select all

SPINE: Using spine config file [spine.conf]
SPINE: Version 0.8.7g starting
10/19/2011 03:19:50 PM - SPINE: Poller[0] NOTE: Spine did not detect multithreaded device polling.
10/19/2011 03:19:50 PM - SPINE: Poller[0] NOTE: Spine is behaving in a 0.8.7g manner
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] NOTE: There are '51' Polling Items for this Host
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[378] SCRIPT: /usr/bin/php -q /usr/share/cacti/scripts/mysql_stats.php command db01-primary monitor password, output: 0
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[379] SCRIPT: /usr/bin/php -q /usr/share/cacti/scripts/mysql_stats.php status db01-primary monitor password Connections, output: 0
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[380] SCRIPT: /usr/bin/php -q /usr/share/cacti/scripts/mysql_stats.php handler db01-primary monitor password, output: 0
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[381] SCRIPT: /usr/bin/php -q /usr/share/cacti/scripts/mysql_stats.php cache db01-primary monitor password, output: 0
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[382] SCRIPT: /usr/bin/php -q /usr/share/cacti/scripts/mysql_stats.php status db01-primary monitor password Questions, output: 0
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[384] SCRIPT: /usr/bin/php -q /usr/share/cacti/scripts/mysql_stats.php thread db01-primary monitor password, output: 0
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[385] SCRIPT: /usr/bin/php -q /usr/share/cacti/scripts/mysql_stats.php traffic db01-primary monitor password, output: 0
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[394] SCRIPT: /usr/bin/php -q /usr/share/cacti/scripts/mysql_stats.php hitratio db01-primary monitor password, output: 0
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[107] SNMP: v2: db01-primary, dsname: cpu_nice, oid: .1.3.6.1.4.1.2021.11.51.0, value: 4398294
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[108] SNMP: v2: db01-primary, dsname: cpu_system, oid: .1.3.6.1.4.1.2021.11.52.0, value: 1751964
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[109] SNMP: v2: db01-primary, dsname: cpu_user, oid: .1.3.6.1.4.1.2021.11.50.0, value: 1364562
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[110] SNMP: v2: db01-primary, dsname: load_1min, oid: .1.3.6.1.4.1.2021.10.1.3.1, value: 0.03
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[111] SNMP: v2: db01-primary, dsname: load_15min, oid: .1.3.6.1.4.1.2021.10.1.3.3, value: 0.00
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[112] SNMP: v2: db01-primary, dsname: load_5min, oid: .1.3.6.1.4.1.2021.10.1.3.2, value: 0.05
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[113] SNMP: v2: db01-primary, dsname: mem_buffers, oid: .1.3.6.1.4.1.2021.4.14.0, value: 324948
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[114] SNMP: v2: db01-primary, dsname: mem_cache, oid: .1.3.6.1.4.1.2021.4.15.0, value: 81232
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[115] SNMP: v2: db01-primary, dsname: mem_free, oid: .1.3.6.1.4.1.2021.4.6.0, value: 7732484
10/19/2011 03:19:50 PM - SPINE: Poller[0] Host[20] TH[1] DS[1476] SNMP: v2: db01-primary, dsname: read, oid: .1.3.6.1.4.1.2021.79.101.1, value: 160452691
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1477] SNMP: v2: db01-primary, dsname: write, oid: .1.3.6.1.4.1.2021.80.101.1, value: 15351482152
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1478] SNMP: v2: db01-primary, dsname: cpu_idle, oid: .1.3.6.1.4.1.2021.86.101.1, value: 241411826800
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1479] SNMP: v2: db01-primary, dsname: cpu_nice, oid: .1.3.6.1.4.1.2021.84.101.1, value: 70877844
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1480] SNMP: v2: db01-primary, dsname: cpu_system, oid: .1.3.6.1.4.1.2021.85.101.1, value: 24134272
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1481] SNMP: v2: db01-primary, dsname: cpu_user, oid: .1.3.6.1.4.1.2021.83.101.1, value: 20333925
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1482] SNMP: v2: db01-primary, dsname: eth0_recv_err, oid: .1.3.6.1.4.1.2021.111.101.1, value: 0
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1483] SNMP: v2: db01-primary, dsname: eth0_xmit_err, oid: .1.3.6.1.4.1.2021.112.101.1, value: 0
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1484] SNMP: v2: db01-primary, dsname: eth1_recv_err, oid: .1.3.6.1.4.1.2021.113.101.1, value: 0
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1485] SNMP: v2: db01-primary, dsname: eth1_xmit_err, oid: .1.3.6.1.4.1.2021.114.101.1, value: 0
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1486] SNMP: v2: db01-primary, dsname: forks, oid: .1.3.6.1.4.1.2021.63.101.1, value: 143090740
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1487] SNMP: v2: db01-primary, dsname: ctxt, oid: .1.3.6.1.4.1.2021.62.101.1, value: 19363533330
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1488] SNMP: v2: db01-primary, dsname: intr, oid: .1.3.6.1.4.1.2021.61.101.1, value: 26464224576
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1489] SNMP: v2: db01-primary, dsname: mem_active, oid: .1.3.6.1.4.1.2021.75.101.1, value: 14599477088
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1490] SNMP: v2: db01-primary, dsname: mem_apps, oid: .1.3.6.1.4.1.2021.68.101.1, value: 1632862980
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1491] SNMP: v2: db01-primary, dsname: mem_buffers, oid: .1.3.6.1.4.1.2021.70.101.1, value: 13731391314
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1492] SNMP: v2: db01-primary, dsname: mem_cached, oid: .1.3.6.1.4.1.2021.71.101.1, value: 2199393640
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1493] SNMP: v2: db01-primary, dsname: mem_free, oid: .1.3.6.1.4.1.2021.69.101.1, value: 520082777714
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1494] SNMP: v2: db01-primary, dsname: mem_inactive, oid: .1.3.6.1.4.1.2021.76.101.1, value: 1502683538
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1495] SNMP: v2: db01-primary, dsname: mem_page_tables, oid: .1.3.6.1.4.1.2021.66.101.1, value: 7341078
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1496] SNMP: v2: db01-primary, dsname: mem_slab, oid: .1.3.6.1.4.1.2021.64.101.1, value: 2541297664
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1497] SNMP: v2: db01-primary, dsname: mem_swap_cache, oid: .1.3.6.1.4.1.2021.65.101.1, value: 0
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1498] SNMP: v2: db01-primary, dsname: mem_swap_used, oid: .1.3.6.1.4.1.2021.72.101.1, value: 0
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1499] SNMP: v2: db01-primary, dsname: mem_vmalloc_used, oid: .1.3.6.1.4.1.2021.67.101.1, value: 423051392
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1505] SNMP: v2: db01-primary, dsname: swap_in, oid: .1.3.6.1.4.1.2021.81.101.1, value: 0
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[1506] SNMP: v2: db01-primary, dsname: swap_out, oid: .1.3.6.1.4.1.2021.82.101.1, value: 0
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[2510] SNMP: v2: db01-primary, dsname: forks, oid: .1.3.6.1.4.1.2021.63.101.1, value: 143090775
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[2511] SNMP: v2: db01-primary, dsname: procs, oid: .1.3.6.1.4.1.2021.77.101.1, value: 321
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[2799] SNMP: v2: db01-primary, dsname: net_active, oid: .1.3.6.1.4.1.2021.91.101.1, value: 90388
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[2800] SNMP: v2: db01-primary, dsname: net_established, oid: .1.3.6.1.4.1.2021.95.101.1, value: 3
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[2801] SNMP: v2: db01-primary, dsname: net_failed, oid: .1.3.6.1.4.1.2021.93.101.1, value: 309
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[2802] SNMP: v2: db01-primary, dsname: net_passive, oid: .1.3.6.1.4.1.2021.92.101.1, value: 416008
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[2803] SNMP: v2: db01-primary, dsname: net_resets, oid: .1.3.6.1.4.1.2021.94.101.1, value: 87608
10/19/2011 03:19:51 PM - SPINE: Poller[0] Host[20] TH[1] DS[2979] SNMP: v2: db01-primary, dsname: await_sda, oid: .1.3.6.1.4.1.2021.121.101.1, value: 34.89
10/19/2011 03:19:51 PM - SPINE: Poller[0] Time: 0.7494 s, Threads: 4, Hosts: 2
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by TheWitness »

No on the bug. You are missing something in your setup: either the Data Query (/resource/script_server/host_disk.xml) or something serious with the template. However, given that the information is there, I'm a bit perplexed. What is the name of the Data Query?

TheWitness
Bobby_Tables
Posts: 29
Joined: Mon Nov 22, 2010 12:42 pm

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by Bobby_Tables »

The name of the data query is SNMP hrStorageTable. I've attached a screen shot of the data query settings. Additionally, I've pasted both /usr/share/cacti/resource/snmp_queries/host_disk.xml and hrStorageTable.xml below.

Thank you again for your time; I really do appreciate it! This has had me running in circles for some time.

Bobby_Tables

host_disk.xml

Code: Select all

<interface>
        <name>Get Host Partition Information</name>
        <index_order_type>numeric</index_order_type>
        <oid_index>.1.3.6.1.2.1.25.2.3.1.1</oid_index>

        <fields>
                <hrStorageIndex>
                        <name>Index</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>input</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.1</oid>
                </hrStorageIndex>
                <hrStorageDescr>
                        <name>Description</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>input</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.3</oid>
                </hrStorageDescr>
                <hrStorageAllocationUnits>
                        <name>Storage Allocation Units</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>input</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
                </hrStorageAllocationUnits>

                <hrStorageSize>
                        <name>Total Size</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>output</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.5</oid>
                </hrStorageSize>
                <hrStorageUsed>
                        <name>Total Used</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>output</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.6</oid>
                </hrStorageUsed>
                <hrStorageAllocationFailures>
                        <name>Allocation Failures</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>output</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.7</oid>
                </hrStorageAllocationFailures>
        </fields>
</interface>
hrStorageTable.xml

Code: Select all

<interface>
        <name>Get hrStoragedTable Information</name>
        <description>Get SNMP based Partition Information out of hrStorageTable</description>
        <index_order_type>numeric</index_order_type>
        <oid_index>.1.3.6.1.2.1.25.2.3.1.1</oid_index>

        <fields>
                <hrStorageIndex>
                        <name>Index</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>input</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.1</oid>
                </hrStorageIndex>
                <hrStorageType>
                        <name>Type</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>input</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.2</oid>

                </hrStorageType>
                <hrStorageDescr>
                        <name>Description</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>input</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.3</oid>
                </hrStorageDescr>
                <hrStorageAllocationUnits>
                        <name>Allocation Units (Bytes)</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>input</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
                </hrStorageAllocationUnits>
                <hrStorageSize>
                        <name>Total Size (Units)</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>input</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.5</oid>
                </hrStorageSize>
                <hrStorageUsed>
                        <name>Used Space (Units)</name>
                        <method>walk</method>
                        <source>value</source>
                        <direction>input</direction>
                        <oid>.1.3.6.1.2.1.25.2.3.1.6</oid>
                </hrStorageUsed>


        </fields>
</interface>
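For anyone reading these XML files, note that hrStorageSize and hrStorageUsed are reported in allocation units, not bytes, so any graph or CDEF has to multiply by hrStorageAllocationUnits to get real sizes. A quick shell sketch with made-up sample values (a 4096-byte allocation unit on a ~21 GB volume):

```shell
# Hypothetical hrStorageTable values for one volume.
units=4096      # hrStorageAllocationUnits (bytes per unit)
size=5160576    # hrStorageSize (in units)
used=2576384    # hrStorageUsed (in units)

echo "total bytes: $(( size * units ))"       # 21137719296
echo "used bytes:  $(( used * units ))"       # 10552868864
echo "used %:      $(( used * 100 / size ))"  # 49
```

This is also why the graph templates pair each size/used data source with the allocation-units field from the query.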
Attachments
cacti-Data-Queries-SNMP-hrStorageTable.png
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by TheWitness »

Notice the value of the 'direction' tags in the two XML files. The ones in host_disk.xml are correct; the other XML file is incorrect (hrStorageSize and hrStorageUsed should be 'output', not 'input').

You will have to reindex and save the host once you have fixed that.
Bobby_Tables
Posts: 29
Joined: Mon Nov 22, 2010 12:42 pm

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by Bobby_Tables »

Ok, I changed the direction tags for hrStorageSize and hrStorageUsed to "output"; as far as I could tell, those were the only incorrect direction tags. I then ran the poller_reindex_hosts.php script, followed by the spine command from your previous post. In the spine output, the number of data sources did not change (still 51), and I could not find the storage data sources. I'll give it a bit and see if anything changes.

edit:

Still no dice; the data source doesn't show up when running spine, or in the cacti log.

Any further suggestions?

Thanks,

Bobby_Tables
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by TheWitness »

Did you re-index and save the host as I requested?
Bobby_Tables
Posts: 29
Joined: Mon Nov 22, 2010 12:42 pm

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by Bobby_Tables »

I believe so; I clicked the "Save" button on the host's page under "Devices", then ran both "poller_reindex_hosts.php" and "rebuild_poller_cache.php". Re-running the spine command still reports 51 data sources. Additionally, I ran poller_reindex_hosts.php as follows:

Code: Select all

wwwrun@xen-util:/usr/share/cacti/cli> php poller_reindex_hosts.php --id=20 --qid=16 -d 
WARNING: Do not interrupt this script.  Reindexing can take quite some time
DEBUG: There are '1' data queries to run
DEBUG: Data query number '1' host: '20' SNMP Query Id: '16' starting
DEBUG: Data query number '1' host: '20' SNMP Query Id: '16' ending
Is there a different process to save/reindex a host?

Thanks,

Bobby_Tables
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by TheWitness »

Well, by running the poller_reindex_hosts.php script you essentially duplicated effort: reindexing from the UI and saving should already have refreshed the poller cache.
Bobby_Tables
Posts: 29
Joined: Mon Nov 22, 2010 12:42 pm

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by Bobby_Tables »

So then it should have picked up the changes, but it still does not work.

I really don't understand what the problem is, or where it lies; this was all working before a Cacti upgrade a few months ago. Now most of the disk space graphs on most of my hosts are mysteriously broken...

Well, I don't know...any other suggestions?

edit:

I changed /etc/snmp/snmpd.conf on a few of the affected servers to include:

Code: Select all

disk /
disk /home
disk /var
disk /boot
reloaded snmpd, added the "ucd/net Get Monitored Partitions" data query to the host, and those hosts magically started graphing disk space again. Why this wasn't needed before, and what changed to make it necessary, I don't know, but it appears to work.
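For what it's worth, net-snmp's `disk` directive is what populates the UCD dskTable (.1.3.6.1.4.1.2021.9), which is the table the ucd/net partition data query walks; only paths listed with a `disk` line appear there, which is presumably why that query had nothing to index until these lines were added. The directive also accepts an optional minimum-space threshold (per the snmpd.conf man page); a sketch with example values:

```
# /etc/snmp/snmpd.conf -- example thresholds, adjust to taste
disk /      10%       # set dskErrorFlag when less than 10% free
disk /var   100000    # set dskErrorFlag when less than 100000 kB free
```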
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by TheWitness »

That's a different template from the one I thought you were monitoring with.
Bobby_Tables
Posts: 29
Joined: Mon Nov 22, 2010 12:42 pm

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by Bobby_Tables »

What did you think I was monitoring? The template in use is Host MIB - Available Disk Space; that has not changed. When I added the "ucd/net Get Monitored Partitions" query, I didn't have to change or add anything else. Some graphs just suddenly started working again.

This thread is actually related to my other thread you replied to, where all of a sudden Cacti just stopped getting correct data from ss_host_disk.php.

I apologize for how roundabout this all seems, but from my perspective, Cacti was working just fine for quite a while and then suddenly stopped. Now I have to hack away at snmpd.conf files and add different data sources, data queries, and all this other nonsense, when really I just want to figure out why my graphs broke in the first place.
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

Re: Trouble graphing Hard Drive Space for Debian 4.0

Post by TheWitness »

I wish I could lend perspective; however, unless I have access to the systems, it's beyond what I can do.