Spine Progress Reports

Important information about Cacti developments that all users should be interested in.


robsweet
Posts: 35
Joined: Fri Mar 22, 2002 7:00 pm
Location: Atlanta, GA

We still need a good poller!

Post by robsweet »

Just to clarify, most of the work that Ian and I are doing relates to the Cacti web-based system itself. We still need an 'industrial strength' poller to really make Cacti ready for prime time.

On that note, I'd like to recommend that the poller not hit the MySQL tables directly but instead use an auto-generated, XML-based config file. There are several reasons for this:

1) It means that you can use any poller that will read the config file
2) If your poller of choice won't read the config file natively, it's easy to convert it into a format that will work.
3) If you've got hundreds of thousands of OIDs to poll (that sounds like an impossible number, but it's becoming more and more likely), the overhead involved in pulling everything out of the database for each polling cycle is a problem. It'd be pretty easy for the poller to note the mod date of the config file when it starts and check that date against the file after every cycle. If the mod date changes, it reloads the config before starting the next cycle (see the sketch below).
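Just to illustrate the mod-date idea, here's a rough Perl sketch; the file path and the two subs are only placeholders, not anything that's been agreed on:

[code]
#!/usr/bin/perl -w
use strict;

my $config_file = '/etc/cacti/poller.xml';      # placeholder path
my $config      = load_config($config_file);
my $last_mtime  = (stat($config_file))[9];      # note the mod date at startup

while (1) {
    run_polling_cycle($config);

    my $mtime = (stat($config_file))[9];
    if ($mtime != $last_mtime) {
        # the web interface rewrote the config; pick it up before the next cycle
        $config     = load_config($config_file);
        $last_mtime = $mtime;
    }
}

sub load_config       { my ($file) = @_; return {}; }    # parse the XML here
sub run_polling_cycle { my ($cfg)  = @_; sleep 300; }    # poll every item once
[/code]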

Thoughts, anyone?

Rob.
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

Polling...Polling...Polling

Post by TheWitness »

Rob,

I am an advocate of optimizing anything that needs it. If this product is to scale well, I would suggest that you make the config static. Then allow changes to an intermediate file via the web interface, set a flag whenever anything changes, and let the user push those changes out to the static configs.

If you take this approach, you could easily use the MRTG executables to manage the RRD update process, which would save you time and money on that very complicated part of the app. (Just a thought.)

This way you can keep the application development focused on the creation of configs based upon device profiles, similar to Lucent's VitalNet. If the system description is for a hub, you manage collisions, uplink utilization, etc. If it's an ATM or Ethernet switch, you manage CPU, memory, discards, errors, and utilization, and allow users to select which ports they wish to monitor. With Cisco, record the port names, card & port, and use checkboxes to select which ports to monitor.

When all is done, click 'update configs'. This will be complicated for some but not for all. Unfortunately, I have a family to raise in the off hours and a full-time job during the day, so I can't offer much more than my opinion. Hope this helps.

Larry
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

What Language do I speak

Post by TheWitness »

I should proofread more often. Please disregard the missing characters. I am beginning to think that some of the keys on my keyboard are not functioning properly.

Larry
integr8er
Posts: 18
Joined: Thu Mar 28, 2002 4:43 pm
Location: Menomonee Falls, Wi. USA
Contact:

Post by integr8er »

Tossing in two cents...

In building any program that parallelizes the data collection task, the author should be aware that just taking the list (like the cron list shown in Cacti) and blindly starting a thread for each entry and letting them run could be excessive. Obviously, appropriate limits and such are needed (see the sketch below). This depends heavily on the nature of what you are polling. My thinking is that this is great, but there needs to be a means to say that some commands should fall in line behind others. It reminds me of the way startup files are arranged on SCO OpenServer Unix systems: they extended the S100scriptname (symlink) concept and added P100scriptname symlinks. A system program would then order the full list, executing the Sxxx scripts serially while the Pxxx ones were parallelized.
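To make the "appropriate limits" part concrete, here's a rough Perl sketch of forking with a cap on how many polls run at once; the cap, the host names, and the snmpget command line are just examples:

[code]
#!/usr/bin/perl -w
use strict;

my $max_children = 10;                          # example cap on simultaneous polls
my @targets      = qw(host-a host-b host-c);    # example target list
my $running      = 0;

foreach my $target (@targets) {
    # throttle: wait for a free slot before starting another poll
    while ($running >= $max_children) {
        wait();
        $running--;
    }

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # child: do the actual poll here
        system('snmpget', '-v1', '-c', 'public', $target, 'sysUpTime.0');
        exit 0;
    }
    $running++;
}

# reap whatever is still running before exiting
1 while wait() != -1;
[/code]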

It appears that others out there are using it to poll large amounts of network gear, and in a case like that, sending out the polling requests in parallel seems fine. In my situation, I'm polling a client that I can only hit one at a time. I'm doing various system performance metric data collection for trending and so on. If I tried to hit a given host with 10-30 queries all at once, it could skew my readings, and besides, like I said, the client I'm querying is single-threaded (at this time) and does not fork off a copy of itself to service each connection (i.e. query). Okay, so you might say - what?! - but that is the way it is, and I want to keep everyone's mind open to the varying ways of usage and the needs of the user base.

I'm referring to the netsaint_statd daemon program, a Perl script that runs as a daemon. It listens for and accepts requests, but does so in the main program - meaning it does not fork itself to make a copy that will handle each request (otherwise I could hit it with more queries at once). But if I hit it with multiple queries, then the performance samples I take could be skewed. Think about hitting a production system with 20 queries consisting of iostat, 4-6 'ps -e xxx' listings, totalling disk space (some systems have hundreds of disks), and more, all at nearly the same instant...
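For anyone who hasn't seen that style of daemon, here's a minimal Perl sketch of the idea (not the actual netsaint_statd code, and the port is made up). Requests are answered one at a time in the main loop, so simultaneous queries simply wait their turn:

[code]
#!/usr/bin/perl -w
use strict;
use IO::Socket::INET;

my $server = IO::Socket::INET->new(
    LocalPort => 1040,       # made-up port
    Listen    => 5,
    Reuse     => 1,
) or die "listen failed: $!";

while (my $client = $server->accept()) {
    my $request = <$client>;             # read one request
    print $client answer($request);      # answer it in-line - no fork
    close $client;                       # only now does the next caller get served
}

sub answer {
    my ($req) = @_;
    return "stats for: $req";            # placeholder for the real collection work
}
[/code]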

I'll say that to date, this program works great for our purposes. In fact, I'm going to email it to a few people who requested it today. I don't know how to make it available to everyone, as rax does not yet have a public repository. Maybe I have to email it to him for posting. It's too big to post the Perl code, so it's a 26K tar.gz file.

Hmmm, I was studying the Perl manual... There is support for threads, although it talks about the feature being experimental and needing to be compiled in. It looks like it was quite a long time ago that it was marked experimental... Anyone know if it's now just a standard feature?
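For what it's worth, my understanding is that the newer 'ithreads' model ships with the 5.8 series, but only when the interpreter is built with it (most vendor builds seem to be). Usage looks roughly like this - the host list is made up:

[code]
#!/usr/bin/perl -w
use strict;
use threads;    # needs a perl compiled with ithreads support

my @workers;
foreach my $host (qw(host-a host-b host-c)) {    # made-up host list
    push @workers, threads->create(
        sub {
            my ($h) = @_;
            # do the poll for $h here
            return "$h polled";
        },
        $host,
    );
}

print $_->join(), "\n" for @workers;    # wait for every thread to finish
[/code]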

Thanks
ablyler
Posts: 40
Joined: Tue Mar 19, 2002 7:00 pm
Location: Ann Arbor, MI
Contact:

cmd.php Forked

Post by ablyler »

Yes, Perl supports forking in all new distributions, and so does Visual Basic .NET. Last night I wrote 4 lines of VB code that forked 50 times. :D

I wrote a forked version of cmd.php in Perl:
http://www.raxnet.net/board/viewtopic.php?t=349

Please note that the .tar is currently down, but I will get it back up by the end of the week.
robsweet
Posts: 35
Joined: Fri Mar 22, 2002 7:00 pm
Location: Atlanta, GA

Post by robsweet »

If I understood correctly, he *didn't* want to fork or thread his data collection as it would skew his results.

If that's the case, I'd say that's a poller config issue. I imagine it'd be pretty easy to write the poller in such a way that you could switch between parallel and serial polling (many at once versus one at a time, for anybody not used to the terms).

And while we're talking about parallel polling, I was also thinking about SNMP polling - I think it's important to be able to set thresholds of some kind to say "If you're polling more than X OIDs from this device in this polling cycle, use a bulk get instead of several small gets" (see the sketch below). I'm also thinking of incorporating 'equipment personality modules' into Cacti to allow for polling and interface discovery on equipment that doesn't follow the 'normal' setup (i.e. I'm told that on some equipment, you might want to graph VPN traffic but the VPN OIDs aren't in the main interfaces table). As part of that personality module, you could set the rules for polling that type of device.
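Just to sketch the threshold idea in Perl using the Net::SNMP module (the cutoff, hostname, and OIDs are made up, and bulk requests need SNMPv2c or later):

[code]
#!/usr/bin/perl -w
use strict;
use Net::SNMP;

my $bulk_threshold = 25;    # made-up per-device cutoff for this cycle
my @oids = ('.1.3.6.1.2.1.2.2.1.10.1',      # ifInOctets.1
            '.1.3.6.1.2.1.2.2.1.16.1');     # ifOutOctets.1

my ($session, $error) = Net::SNMP->session(
    -hostname  => 'router-1',               # made-up device
    -community => 'public',
    -version   => 'snmpv2c',
);
die "SNMP session failed: $error" unless defined $session;

my $result;
if (@oids > $bulk_threshold) {
    # one big PDU: getbulk returns runs of successor varbinds for each OID,
    # so it's well suited to pulling whole interface tables in a single request
    $result = $session->get_bulk_request(
        -nonrepeaters   => 0,
        -maxrepetitions => 20,
        -varbindlist    => \@oids,
    );
} else {
    # just a handful of exact OIDs: plain gets are fine
    $result = $session->get_request(-varbindlist => \@oids);
}
$session->close();
[/code]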

Thoughts?

Rob.
ablyler
Posts: 40
Joined: Tue Mar 19, 2002 7:00 pm
Location: Ann Arbor, MI
Contact:

Results are not skewed.

Post by ablyler »

Forking the code via VB or Perl does not skew any results. The only results it would skew are the processor and memory usage of the Cacti machine itself. In regards to my last comment, that data is not really skewed, but since the forking uses maximum CPU and a lot of memory, it does not paint a nice picture of the system resources.

My solution is only temporary and I support spine completely. By no means do I want this to look like a spine competitor.
drub
Cacti User
Posts: 59
Joined: Thu Jan 31, 2002 7:00 pm
Location: Las Vegas
Contact:

Has there been any recent work on Spine?

Post by drub »

Has there been any recent work on Spine? I haven't seen anything recently. :o
robsweet
Posts: 35
Joined: Fri Mar 22, 2002 7:00 pm
Location: Atlanta, GA

Post by robsweet »

Not that I've seen, and I haven't gotten any response to the email I sent to Jeff. Anybody know him personally? If he's fallen off the planet, I'd really like to get someone else working on the codebase that he started...

Jeff, you out there??

Rob.
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

>> Spinal Tap <<

Post by TheWitness »

Rob,

I've been around this planet for many a moon now and I've seen many a thing, and judging from his earlier posts, he may have done just that - fallen off the face of the earth. Some shooting stars land hard.

I was impressed by the depth of his thought process, but he seemed to be running hard and without oil. Balance is a must if you wish to survive the coding cycles, long sleepless nights, etc. It appeared that his obsession with Spine/Cacti cost him his job. Not good.

Well, anyway, did you say you had source code, pseudo code, or a design document? If you do, just send me a note and I will let you know what, if anything, I can add.

My initial suggestion is that you simply place your scheduling information in a database and then create a daemon that wakes up periodically and hunts for work (see the sketch below). Work elements would involve the execution of an MRTG config or submission to some other form of polling engine. That portion of the design will depend on where you are going with 0.8.
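A bare-bones version of the "wake up and hunt for work" idea, assuming a made-up poller_item table with hostname, oid, scan_cycle (in seconds), and last_polled columns, and made-up MySQL credentials:

[code]
#!/usr/bin/perl -w
use strict;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=cacti', 'cactiuser', 'cactipwd',
                       { RaiseError => 1 });

while (1) {
    # hunt for any item whose scan cycle has elapsed
    my $due = $dbh->selectall_arrayref(
        'SELECT id, hostname, oid FROM poller_item
          WHERE last_polled <= NOW() - INTERVAL scan_cycle SECOND',
        { Slice => {} });

    foreach my $item (@$due) {
        poll_item($item);    # or hand it off to a numbered work queue
        $dbh->do('UPDATE poller_item SET last_polled = NOW() WHERE id = ?',
                 undef, $item->{id});
    }

    sleep 15;    # made-up wake-up interval
}

sub poll_item {
    my ($item) = @_;
    print "would poll $item->{oid} on $item->{hostname}\n";    # placeholder
}
[/code]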

I think this is where 0.6 is today; however, it uses a serial process rather than a pipelined parallel process (i.e. you set the number of work queues in the config and then the daemon would submit work requests to the numbered work queues).

In the database, each element (OID) would have a scan cycle - say, for example, CPU temp every 10 minutes and interfaces every 5 minutes. This cycle would be a part of the device polling config (wizard). Each queue in the pipeline would have to have some form of ICMP or busy flag and send alerts, traps, etc. when queues either stall or fill beyond a certain point. I could go on and on... I think that this design is quite simple yet elegant. Oh well, really looking forward to 0.8.

Larry
robsweet
Posts: 35
Joined: Fri Mar 22, 2002 7:00 pm
Location: Atlanta, GA

Post by robsweet »

As far as Jeff's code is concerned, I don't have anything. No docs, no design stuff, no code - only what's been posted here on the boards. If anybody has any additional Spine info, please forward it along to the group so somebody can take over Spine development (even temporarily), or we may end up with 0.8 ready to release with no poller that will work with it.

Regarding the polling setup in 0.8, I guess it's time to spill what we've got planned:

- You have a polling host with IP, hostname, SNMP, and a few other bits of info.
- You have polling task(s) attached to that host with a polling interval and a few other things.
- You have polling items attached to each task. These items can be SNMP OIDs or script output values.

The idea is that Cacti will be able to kick out an XML-based 'poller config' file. If you're using 'polling zones', it would generate one poller config for each zone. The poller should be a daemon that runs all the time. Periodically, it checks the mod date of the config file to see if it's changed. If it has, it re-reads the config and continues polling. As you can have many different polling intervals, there's really not *one* polling cycle but several, perhaps many.
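Purely as an illustration of the host/task/item layering - none of these element or attribute names are settled - a zone's poller config might look something like:

[code]
<poller_config zone="default">
  <host name="router-1" ip="10.0.0.1" snmp_community="public" snmp_version="1">
    <task interval="300">
      <item type="snmp" oid=".1.3.6.1.2.1.2.2.1.10.1" rrd="router-1_traffic_in.rrd"/>
      <item type="snmp" oid=".1.3.6.1.2.1.2.2.1.16.1" rrd="router-1_traffic_out.rrd"/>
    </task>
    <task interval="600">
      <item type="script" command="/usr/local/bin/disk_free.pl" rrd="router-1_disk.rrd"/>
    </task>
  </host>
</poller_config>
[/code]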

There are more details to this plan but you get the idea.
Comments are welcome as always.
Rob.
TheWitness
Developer
Posts: 17007
Joined: Tue May 14, 2002 5:08 pm
Location: MI, USA
Contact:

>> Poller <<

Post by TheWitness »

Rob,

I don't know if you can get away with it under the GNU license, but what about MRTG's run-as-daemon option? The only downside is that you would end up running one MRTG process for every host...

Larry
robsweet
Posts: 35
Joined: Fri Mar 22, 2002 7:00 pm
Location: Atlanta, GA

Post by robsweet »

Well, we could do that but I could just as easily code a poller in Perl if we wanted a quick kludge. And that just gets us a poller that runs, not one that runs efficiently enough to take advantage of the enhancements we're coding into Cacti for larger enterprises.

While I agree that some poller is better than no poller, I'd really like to see a *good* poller for Cacti.

Thanks,
Rob.
djsloan
Posts: 3
Joined: Wed May 15, 2002 6:43 pm
Location: Australia

Spine

Post by djsloan »

Maybe a section should be created in the SourceForge Cacti project, or an entirely new project created, to accommodate the existing Spine code (if it can be recovered) or even any new start that is made. That would avoid several difficulties: people would have an opportunity to spot any design mistakes or other problems as they appear, and multiple people could work on the project, or at least submit patches and ideas to it.
robsweet
Posts: 35
Joined: Fri Mar 22, 2002 7:00 pm
Location: Atlanta, GA

Post by robsweet »

Sounds fine to me but I think you're putting the cart before the horse.

Rob.