Spine Progress Reports
Hi Guys.
I just replied to the original thread on this stuff and hadn't seen this one yet.
I'll probably end up monitoring a few thousand interfaces so scalability is a big issue for me.
jwiegley, how's Spine coming? Need any help? I'm not a C guy myself but I can probably recruit one from work if we can get this working relatively quickly.
Lemme know.
Rob.
Where the heck have I been?
Here's kind of a heck of an Update.
On March 8th I lost my job. This is pretty much the reason I haven't done anything on
Spine or posted any updates in the past month. I had to take the time off to get my
personal life back on track. But don't worry for me. The loss of that job was a truly
positive event since I pretty much worked for Satan. I don't think I ever need another
job where my "superiors" threaten to throw me through a window if I don't "get it".
Anyhow. There hasn't been much progress with Spine since I haven't coded anything
for the past month. But I do have a laptop now, I finally got all the software on it
running correctly, and now I can code in my underwear while watching Spongebob
Squarepants. So Spine is progressing again.
In the past two days I've made some progress. I've shifted away from the abstract
data types and I've been focused on coding the threading control and process
framework.
As of now Spine starts up and will create the necessary threads (which only exit
right now), but the logging is working, to either syslog or stderr, or both.
The verbosity of logging is controllable via debug_levels and a debug bitmask.
(There can be 32 classes of debug messages.) This is all being done as
a daemon. It can also run as a non-daemon, but the more I code on this, the
less sense it makes: it will run as a non-daemon, but the behavior is identical to
running as a daemon. With the priority queueing of tasks it doesn't make much sense
to process the data sources once and exit, so daemon behavior seems to be
what we're going to have in version 1.0. Maybe I or somebody else can add the
necessary logic to make a later version process each data source once
and then exit. That would produce a "one pass" behavior similar to cmd.php, but I
don't see it as a priority for the first version.
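The debug-bitmask idea described above can be sketched in a few lines of C. This is a minimal illustration, not Spine's actual code; the class names, `set_debug_mask()`, and `debug_log()` are all hypothetical:

```c
#include <stdarg.h>
#include <stdio.h>
#include <syslog.h>

/* Hypothetical message classes: one bit each, up to 32 total. */
#define DBG_THREADS  (1u << 0)   /* thread lifecycle messages      */
#define DBG_CONFIG   (1u << 1)   /* configuration parsing messages */
#define DBG_POLLER   (1u << 2)   /* data-source polling messages   */

static unsigned int debug_mask = 0;  /* which classes are enabled      */
static int log_to_syslog = 0;        /* destinations can be combined   */
static int log_to_stderr = 1;

void set_debug_mask(unsigned int mask)
{
    debug_mask = mask;
}

/* Returns nonzero if messages in the given class would be emitted. */
int debug_enabled(unsigned int class_bit)
{
    return (debug_mask & class_bit) != 0;
}

/* Emit a debug message to stderr and/or syslog, gated by the mask. */
void debug_log(unsigned int class_bit, const char *fmt, ...)
{
    va_list ap;

    if (!debug_enabled(class_bit))
        return;
    if (log_to_stderr) {
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
    }
    if (log_to_syslog) {
        va_start(ap, fmt);
        vsyslog(LOG_DEBUG, fmt, ap);
        va_end(ap);
    }
}
```

Turning a class on or off is then just a matter of setting or clearing its bit in the mask, which maps naturally onto a command-line or config-file debug level.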
Next functionality to implement is to properly handle signals in the main thread.
The goal is to provide the ability to send SIGINT to cause the daemon to exit
gracefully and SIGHUP to cause Spine to suspend the task threads, reload the
global config file, reload the data sources from any cacti database(s), rebuild
the task priority queue and restart the threads.
Once that is done, the next steps are to finalize the thread synchronization
logic and start fleshing out the task structures.
When that is complete, Spine should have reached its first "runnable" version.
It won't process data sources correctly yet, but the difficult logic associated with
concurrent programming should be complete. At this point I would be comfortable
putting what I have into a CVS tree and opening the tree up to anybody who
wants to help write the routines for gathering data and processing individual
data sources.
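The suspend/resume part of that synchronization can be built as a simple mutex/condition-variable gate that worker threads pass through at the top of each loop iteration. A minimal sketch, with illustrative names only:

```c
#include <pthread.h>

static pthread_mutex_t gate_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gate_cond = PTHREAD_COND_INITIALIZER;
static int paused = 0;

/* Coordinator side: raise the gate so workers stop at their next
 * checkpoint (e.g. while a SIGHUP reload rebuilds the task queue). */
void tasks_pause(void)
{
    pthread_mutex_lock(&gate_lock);
    paused = 1;
    pthread_mutex_unlock(&gate_lock);
}

/* Coordinator side: lower the gate and wake every waiting worker. */
void tasks_resume(void)
{
    pthread_mutex_lock(&gate_lock);
    paused = 0;
    pthread_cond_broadcast(&gate_cond);
    pthread_mutex_unlock(&gate_lock);
}

int tasks_paused(void)
{
    int p;

    pthread_mutex_lock(&gate_lock);
    p = paused;
    pthread_mutex_unlock(&gate_lock);
    return p;
}

/* Worker side: call at the top of each loop; blocks while paused.
 * The while loop guards against spurious condvar wakeups. */
void tasks_checkpoint(void)
{
    pthread_mutex_lock(&gate_lock);
    while (paused)
        pthread_cond_wait(&gate_cond, &gate_lock);
    pthread_mutex_unlock(&gate_lock);
}
```

This keeps all the pause/resume complexity in one place, so the task-processing code itself only needs one `tasks_checkpoint()` call per iteration.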
I think I can have signals done by the end of this week and concurrency done
by end of next week.
Thank god spine is still alive
That's really all I have to say.
I was getting worried. I am so close to overlapping cmd processes right now; my boss will have a conniption if I can't get it to scale a bit more here soon.
Drub
I forgot to login
cmd.php forked
Check out: http://www.raxnet.net/board/viewtopic.php?t=349
I have forked cmd.php via Perl. It is a small workaround that should be of help until Spine is developed.
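The idea behind the workaround is language-independent: launch several copies of the poller in parallel and wait for them all to finish. Sketched here in C rather than Perl for illustration; the function name is hypothetical, and how (or whether) your cmd.php version accepts arguments to split the work is something to check before copying this:

```c
#include <sys/wait.h>
#include <unistd.h>

/* Fork nchunks children, each exec'ing one copy of the interpreter
 * on the given script, then reap them all. Returns the number of
 * children that failed (nonzero exit or failed exec), or -1 if a
 * fork itself failed. */
int run_chunks(const char *interp, const char *script, int nchunks)
{
    int i, status, failures = 0;

    for (i = 0; i < nchunks; i++) {
        pid_t pid = fork();
        if (pid < 0)
            return -1;               /* fork failed */
        if (pid == 0) {
            execlp(interp, interp, script, (char *)NULL);
            _exit(127);              /* exec failed in the child */
        }
    }
    while (wait(&status) > 0)        /* reap every child */
        if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
            failures++;
    return failures;
}
```

Running the chunks concurrently hides per-host SNMP latency, which is usually what makes a single serial cmd.php pass overrun the polling interval.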
Spine progress report
For the past week spine has made a small amount of progress.
Spine's ability to parse its command line arguments and/or
a configuration file is about 95% complete.
I actually see this as a good amount of progress. I've been busy trying to
land a new job, a tenure-track faculty position, and that has been
a pretty disappointing effort so far. But mostly, I've always hated having
to code parsers. I've tried to implement as much functionality and
as many configuration items as I thought would be useful to spine, so that all
configuration is handled and out of the way before we start extending
spine to actually handle tasks. I'm just too thorough with parsers to
make coding them a quick or mindless prospect. Once spine is past
beta, hopefully somebody who knows how to use flex/bison could recode
my parser using those more flexible and generic tools. Mine is just
raw C code.
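For a sense of what a hand-rolled "raw C" config parser looks like at its core, here is a sketch of one line-level routine for a key=value format with '#' comments. This is purely illustrative; it is not Spine's parser, and the format is an assumption:

```c
#include <string.h>

/* Parse one config line of the form "key = value".
 * Returns 1 with key/val filled in, 0 for a blank or comment line,
 * and -1 for a malformed or oversized line. */
int parse_line(const char *line, char *key, size_t ksz,
               char *val, size_t vsz)
{
    const char *eq, *v;
    size_t klen, vlen;

    while (*line == ' ' || *line == '\t')       /* skip leading space */
        line++;
    if (*line == '#' || *line == '\n' || *line == '\0')
        return 0;                               /* comment or blank   */

    eq = strchr(line, '=');
    if (eq == NULL)
        return -1;                              /* no '=' separator   */

    klen = (size_t)(eq - line);                 /* trim key           */
    while (klen && (line[klen - 1] == ' ' || line[klen - 1] == '\t'))
        klen--;
    if (klen == 0 || klen + 1 > ksz)
        return -1;
    memcpy(key, line, klen);
    key[klen] = '\0';

    v = eq + 1;                                 /* trim value         */
    while (*v == ' ' || *v == '\t')
        v++;
    vlen = strcspn(v, "\n");
    while (vlen && (v[vlen - 1] == ' ' || v[vlen - 1] == '\t'))
        vlen--;
    if (vlen + 1 > vsz)
        return -1;
    memcpy(val, v, vlen);
    val[vlen] = '\0';
    return 1;
}
```

Even this small routine shows why hand-written parsers get tedious: most of the code is whitespace trimming and bounds checking rather than actual grammar, which is exactly what flex/bison would generate for free.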
Next week I hope to complete the parsing and configuration setup.
I also hope to code all of the threading activity and concurrency controls
by then. That would put spine in a position where it can start being extended
to handle actual tasks.
So road map:
Week of May 3rd: Finish configuration parser.
Week of May 3rd: Implement threading/concurrency controls.
Week of May 10th: Implement priority queue for scheduling tasks.
Week of May 10th: Implement database import/task representation.
Once those are done we should be close to beta testing. Around that time
I'll solicit requests to beta test spine and see what crashes.
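For the priority-queue item on that roadmap, one plausible shape is a binary min-heap keyed on each task's next poll time, so the scheduler can always pop the most urgent task in O(log n). A minimal sketch with a fixed-size array and illustrative names (not Spine's actual structures):

```c
#include <stddef.h>

typedef struct {
    long next_run;   /* next scheduled poll time (e.g. a Unix time) */
    int  task_id;    /* which data source / task this entry is for  */
} task_t;

typedef struct {
    task_t items[256];   /* fixed capacity keeps the sketch simple  */
    size_t count;
} task_queue_t;

static void swap_tasks(task_t *a, task_t *b)
{
    task_t t = *a; *a = *b; *b = t;
}

/* Insert a task, sifting it up until the heap order is restored. */
void queue_push(task_queue_t *q, task_t t)
{
    size_t i = q->count++;

    q->items[i] = t;
    while (i > 0 && q->items[(i - 1) / 2].next_run > q->items[i].next_run) {
        swap_tasks(&q->items[(i - 1) / 2], &q->items[i]);
        i = (i - 1) / 2;
    }
}

/* Remove and return the task with the earliest next_run.
 * Caller must check that count > 0 first. */
task_t queue_pop(task_queue_t *q)
{
    task_t top = q->items[0];
    size_t i = 0;

    q->items[0] = q->items[--q->count];
    for (;;) {                      /* sift the moved element down */
        size_t l = 2 * i + 1, r = 2 * i + 2, m = i;

        if (l < q->count && q->items[l].next_run < q->items[m].next_run)
            m = l;
        if (r < q->count && q->items[r].next_run < q->items[m].next_run)
            m = r;
        if (m == i)
            break;
        swap_tasks(&q->items[i], &q->items[m]);
        i = m;
    }
    return top;
}
```

A scheduler loop would peek at the root, sleep until its next_run, pop it, hand it to a task thread, and push it back with an updated next_run.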
Sounds Great
Sounds great J, I would be willing to Beta test it a bit for ya.
parser beta test....
Should have the Parser portions done Thursday afternoon.
I no longer have access to the dozens of switches and routers that I
used to, since I lost that job (Thank God!)
So don't worry Drub, I've sort of got you targeted as my number one beta tester!
Sounds Good!
Sounds good to me. I am up to around 900 data sources right now, and I'm using 7 instances of cacti on two boxes with a global include to tie the web interfaces together, so users don't have to enter different URLs.
I'm currently looking for a solution to our network monitoring needs, and I'm keen on Cacti.
The collector scaling issues that exist currently are a big problem for us (we have over 5000
items to monitor, most of them SNMP data).
We currently use Cricket; however, adding new data to it is non-trivial, and it doesn't look
anywhere NEAR as pretty as Cacti.
I have a fair bit of C and PHP programming experience, and I would like to help if there is
anything I can do. In addition, I would like to help test the collector when it is ready for a
beta run, as I think I am in a good position to beat the proverbial hell out of it in stress testing
Hi there DJS.
I work for a large company and we ran into pretty much the same scenario. Our solution was to put me on full-time Cacti development - on the clock, even!
I approached Ian about coming on board to help with Cacti development and he's graciously allowed me to pitch in. Ian and I have been working for the past week or so on making lots of scalability changes with many more on the way. I don't want to speak for Ian but I think that he'll be more open to having more Cacti developers after he and I have some more time to get used to developing together and we get some protocols in place for how Cacti developers collaborate. I'll let Ian comment further if he wishes but I will say that Cacti's future with regard to larger networks is looking much brighter.
Please check out the 0.8 roadmap and feel free to request features in the Feature Requests forum.
Thanks,
Rob.
I think Rob summed it up quite well there. He and I are going to work on large portions of 0.8 together, keeping scalability in mind while we are developing it. Once 0.8 is released and stabilizes, I plan on opening up development even further to other members that seem eager to contribute to cacti.
We'll see what happens from there.
-Ian