Spine Progress Reports


TheWitness
Developer
Posts: 16997
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA

>> Poller <<

Post by TheWitness »

You might want to entertain having Andy Blyler write one. He did a pretty good job with the UNIX-only variant. It speeds things up a bit on UNIX, but I haven't tested it on Win32 yet. It is a start anyway...

Larry
giZmo
Posts: 2
Joined: Fri Dec 27, 2002 2:21 am
Location: Marseille - France

Spine

Post by giZmo »

I am beginning to write an evolution of spine for Cacti 0.8.

Eric
raX
Lead Developer
Posts: 2243
Joined: Sat Oct 13, 2001 7:00 pm
Location: Carlisle, PA

Post by raX »

Keep in mind that we already have a working C-based poller for 0.8. It can be found in the CVS tree here:

http://cvs.sourceforge.net/cgi-bin/view ... ev/poller/

I doubt this one will be called spine, though; it will probably be something more generic such as 'cactid'.

-Ian
schittel
Posts: 4
Joined: Mon Jan 20, 2003 8:55 am
Location: Cologne, Germany

feature request for poller

Post by schittel »

Hello,

I have three suggestions for features of the new poller (illustrative sketches follow the list):

- random sampling intervals as in RFC 2330 (Section 11.1)

- include the interface state (SNMP: ifOperStatus) in polling. If an interface
is not operational, NaN should be recorded in the database.

- implement bulk sampling. When snmpgetting *many* interfaces from the same device, it would be *much* more efficient to snmpbulkwalk *all* interfaces and filter the response if necessary.
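
For the first suggestion, a minimal Python sketch of what I have in mind: gaps between samples drawn from an exponential distribution give a Poisson sampling process, per RFC 2330, Section 11.1. The 300-second mean is just an example value, not anything Cacti mandates:

import random

MEAN_INTERVAL = 300.0   # desired mean polling interval in seconds (example)

def next_poll_delay():
    # Exponentially distributed gaps between samples yield a Poisson
    # sampling process (RFC 2330, Section 11.1), which keeps the poller
    # from synchronizing with periodic behaviour in the network.
    return random.expovariate(1.0 / MEAN_INTERVAL)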
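
And a sketch of the other two suggestions combined, assuming the net-snmp command-line tools are available; the host name and community string are placeholders, and 'counters' is a mapping from ifIndex to the sampled value:

import math
import subprocess

HOST = "router1.example.com"   # placeholder host
COMMUNITY = "public"           # placeholder community string

def mask_down_interfaces(counters):
    # One bulk walk over the whole ifOperStatus column instead of a
    # separate snmpget per interface.
    out = subprocess.run(
        ["snmpbulkwalk", "-v2c", "-c", COMMUNITY, HOST,
         "IF-MIB::ifOperStatus"],
        capture_output=True, text=True, check=True)
    up = set()
    for line in out.stdout.splitlines():
        # Lines look like: IF-MIB::ifOperStatus.3 = INTEGER: up(1)
        oid, _, value = line.partition(" = ")
        if "up(1)" in value:
            up.add(int(oid.rsplit(".", 1)[1]))
    # Record NaN for interfaces that are not operational.
    return {idx: (val if idx in up else math.nan)
            for idx, val in counters.items()}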

--- Christoph
ranko
Posts: 9
Joined: Tue Feb 04, 2003 7:07 pm
Location: Nicosia, Cyprus

A way to extend current cmd.php

Post by ranko »

Hi there,

Have you guys considered creating something like a "helper" application for the existing cmd.php?

The idea behind this is to keep the main code that decides when and what to poll in the PHP, and to have this small and simple helper app do the "donkey" jobs very fast.

I see it as a fork from cmd.php that is passed what needs to be done during its run. The helper could support a few basic "commands" to which cmd.php assigns some sort of "job IDs" and for which it expects responses.

Now, the helper would not need to know how to talk to the database or the rest of the "high-level" mumbo jumbo, which might complicate development; its only function would be to provide parallelism in the execution of multiple tasks via threading and forking.

For efficiency's sake, it could handle all of the SNMP communication internally, because most of the data is collected that way anyhow, and optionally other protocols as well, for example ICMP. Everything else would be forks to external programs whose results would be returned to cmd.php as the "job results".

A basic session between cmd.php and the "helper" could look this way: cmd.php starts and generates the list of things for the helper to do in the following format (a rough sketch of a helper that consumes this format appears further below):
1|1:1:1|S|<host1>|<snmp options>|<oid>
2|1:1:2|S|<host1>|<snmp options>|<oid>
3|1:1:3|S|<host1>|<snmp options>|<oid>
4|1:2:1|S|<host2>|<snmp options>|<oid>
5|1:2:2|S|<host2>|<snmp options>|<oid>
6|1:2:3|S|<host2>|<snmp options>|<oid>
7|2:1|F|<cmd1>|<params>
8|2:2|F|<cmd2>|<params>
9|3|F|<cmd3>|<params>
10|4|F|<cmd4>|<params>

Where '|' is the column separator:
Column 1 - job ID
Column 2 - concurrency control
Column 3 - command (here S - SNMP, F - fork)
For S(NMP):
Column 4 - the hostname or IP
Column 5 - SNMP options (community, password, version, ...)
Column 6 - OID
For F(ork):
Column 4 - the command to be executed
Column 5 - optional parameters

In the meantime, the helper immediately starts executing all of the requests, honoring the concurrency control, which could be defined to any depth (just add on another ':').

As soon as a job is finished, the helper would return its result to cmd.php in the format:
<jobid>|<result>

and cmd.php would pass those on to the backend database.
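
To make this concrete, here is a rough Python sketch of the helper's dispatch loop (the real helper would more likely be a small C program). It is only an illustration under assumptions: jobs that share everything before the last ':' of their concurrency key (for example the three 1:1:* jobs on <host1>) are taken to run serially while different groups run in parallel, the exact semantics being open, and shelling out to snmpget stands in for the native SNMP handling:

import subprocess
import threading
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# One lock per concurrency group: jobs in the same group run one at a
# time, jobs in different groups run in parallel.
group_locks = defaultdict(threading.Lock)

def parse_job(line):
    # <jobid>|<concurrency>|<S or F>|<args...>
    fields = line.strip().split("|")
    return fields[0], fields[1], fields[2], fields[3:]

def run_job(job):
    job_id, concurrency, kind, args = job
    group = concurrency.rsplit(":", 1)[0]   # e.g. "1:1:2" -> "1:1"
    with group_locks[group]:
        if kind == "S":
            host, options, oid = args
            # Options reduced to a community string for brevity.
            cmd = ["snmpget", "-v1", "-c", options, "-Oqv", host, oid]
        else:  # "F": fork an external command with optional parameters
            cmd = args[:1] + (args[1].split() if len(args) > 1 else [])
        out = subprocess.run(cmd, capture_output=True, text=True)
        return job_id, out.stdout.strip()

def helper(lines):
    jobs = [parse_job(line) for line in lines if line.strip()]
    with ThreadPoolExecutor(max_workers=10) as pool:
        for job_id, result in pool.map(run_job, jobs):
            # Stream each finished job back to cmd.php as <jobid>|<result>.
            print(f"{job_id}|{result}")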

You could add another level of scalability to this: if there are still too many jobs for one machine to handle, the "helper" could be a daemon running on multiple hosts, with cmd.php connecting to them and assigning the jobs to be executed. The same protocol could apply.

The same helper could be used to parallelize the execution of rrdtool on the machine where the RRD files reside.

Regards,

Ranko