cacti scalability (was cacti performance considerations)
Yes. Server load will be configurable.
Regarding the threaded command app: anybody have a good name for it? My first thought is "spine"; it goes with the cacti theme and is also representative of a central or core item.
Anyhow, Spine will start by having a configurable thread count. My hope is that later versions will be intelligent enough to control the number of active threads dynamically.
I was able to extend cacti to support multiple cmd.php instances. Granted, I haven't tested it very well; perhaps someone more familiar with the code could stamp their approval. It was actually quite easy: what I did was add a column to rrd_ds which I called Collection, modified cmd.php to accept an argument, and then passed that argument to it in the cron/scheduled task (I'm running on Windows). Here are the modifications I've made:
rrd_ds schema
CREATE TABLE rrd_ds (
ID smallint(5) NOT NULL auto_increment,
Name varchar(50) NOT NULL default '',
DataSourceTypeID smallint(5) NOT NULL default '0',
Heartbeat mediumint(8) default '600',
MinValue mediumint(8) default '0',
MaxValue bigint(12) default '1',
SrcID smallint(5) NOT NULL default '0',
Active char(3) default '1',
DSName varchar(19) default NULL,
DSPath varchar(150) default NULL,
Step smallint(5) NOT NULL default '300',
Collection smallint(5) NOT NULL default '1',  -- new column
PRIMARY KEY (ID),
UNIQUE KEY ID (ID)
) TYPE=MyISAM;
cmd.php
where d.active=\"on\" and d.Collection=$argv[1]",$cnn_id);
cron/scheduled task
php.exe -q httpd\apache\htdocs\cacti\cmd.php 2 <----- pass it the collection id
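In case it helps, here is a rough sketch of how that modified query could sit in cmd.php. The connection details and the selected column list are placeholders; only the rrd_ds table and the Collection column come from the changes above:

<?php
// Sketch only: connection credentials and selected columns are made up.
$cnn_id = mysql_connect("localhost", "cactiuser", "cactipass");
mysql_select_db("cacti", $cnn_id);

// Which collection this instance should poll, taken from the command line.
$collection = isset($argv[1]) ? (int)$argv[1] : 1;

$result = mysql_query(
    "select d.ID, d.Name, d.DSPath from rrd_ds d " .
    "where d.active=\"on\" and d.Collection=$collection", $cnn_id);

while ($row = mysql_fetch_assoc($result)) {
    // ... poll this data source and update its RRD file, exactly as
    // cmd.php already does for every row it fetches ...
}
?>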
One more thing about large amounts of graphs.
I've just implemented a fast search feature in cacti for faster graph access.
It uses a simple 'select .. from .. like ..' SQL statement on graph names, with the resulting page containing links to the actual placement of the found graphs in the hierarchy.
Maybe it would be good for you too, raX.
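A minimal sketch of what such a search page could look like; the rrd_graph table, its Title column, and the graph.php link target are assumptions, so adjust them to your cacti version:

<?php
// Sketch only: table, column, and link target names are assumed.
$cnn_id = mysql_connect("localhost", "cactiuser", "cactipass");
mysql_select_db("cacti", $cnn_id);

// Escape the user's search term before splicing it into the query.
$term = addslashes($_GET["search"]);

// Simple substring match on graph titles.
$result = mysql_query(
    "select ID, Title from rrd_graph where Title like \"%$term%\"", $cnn_id);

// One link per hit, pointing at the graph's place in the hierarchy.
while ($row = mysql_fetch_assoc($result)) {
    printf("<a href=\"graph.php?id=%d\">%s</a><br>\n",
        $row["ID"], htmlspecialchars($row["Title"]));
}
?>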
Igor N Indick
JSC Doris
The Spine logs will probably be handled like this:
Each thread will capture the stderr output of the command into a buffer. If the task itself generates error messages, these will also be stored in this buffer. When the thread completes its current task, it will lock the log mutex, dump the buffer to the open log file, and then unlock the log mutex.
This way the logs will make sense and you will be able to see how a thread executed its task. You won't be able to see it in real time, though, and you won't be able to track progress for threads that take a long time to complete, because the log won't be updated until the task has finished.
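Spine itself will be threaded C, but the buffer-then-flush idea can be sketched in PHP with flock() standing in for the log mutex; the same pattern works for the multiple-cmd.php setups discussed in this thread (the log path and messages below are just examples):

<?php
// Sketch: collect messages in a buffer instead of writing them straight
// to the log as they happen.
$buffer  = "";
$buffer .= "[ds 42] snmp timeout\n";   // example error captured mid-task
$buffer .= "[ds 42] retrying\n";

// When the task finishes: lock, dump the whole buffer, unlock. Other
// instances block on flock(), so log entries never interleave mid-task.
$log = fopen("/tmp/cacti.log", "a");
flock($log, LOCK_EX);
fwrite($log, $buffer);
flock($log, LOCK_UN);
fclose($log);
?>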
On 2002-02-25 17:07, drub wrote:
I really like pyuska's solution. I am gonna try recombining all of my data sources to one install of cacti, then I can add as many threads as needed, and do a little balancing based on the processes. Looks cool, I am gonna try this out.
You'll be able to change the collection through the console if you make these changes:
Line 64 of ds.php
$heartbeat,$minvalue,$maxvalue,$srcid,\"$active\",\"$dsname\",\"$dspath\",$step,$Collection)",$cnn_id);
Line 77 of include/utility_functions.php
minvalue,maxvalue,srcid,active,dsname,dspath,step,Collection) values (0"
. ",\"" . mysql_result($sql_id, $i, "Collection") . "\""
Regarding the Spine Logs:
How about using syslog to record messages? If the app was set up to accept a facility and log level, users could choose which types of messages are important to them and where they go. Logs would no longer be managed by the app, but by syslog. I always use local0 and warning for management applications where I work, but anyone else could configure it to their own liking.
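On a unix box, this could look like the following from PHP; the facility and level are hard-coded here, but they would be the configurable bits (the ident string and message are just examples):

<?php
// Sketch: log through syslog instead of a file the app manages itself.
openlog("cacti", LOG_PID, LOG_LOCAL0);        // facility: local0
syslog(LOG_WARNING, "ds 42: snmp timeout");   // level: warning
closelog();
?>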
I'm checking the progress on "spine", since I've got over 300 interfaces already and cmd.php does not complete each collection cycle within 5 minutes.
So, in the meantime I thought to implement what pyuska proposed in a previous post.
But I tweaked it a little bit to "automatically" load-balance data sources between multiple instances of cmd.php, and without requiring any change to the MySQL DB structure.
What I did is:
* in cmd.php:
in line...
where d.active=\"on\"",$cnn_id);
I added this...
where d.active=\"on\" and d.ID%5 = $argv[1]",$cnn_id);
* then in crontab:
*/5 * * * * apache php -q /var/www/html/cacti/cmd.php 0 >> /tmp/cacti.log 2>&1
*/5 * * * * apache php -q /var/www/html/cacti/cmd.php 1 >> /tmp/cacti.log 2>&1
*/5 * * * * apache php -q /var/www/html/cacti/cmd.php 2 >> /tmp/cacti.log 2>&1
*/5 * * * * apache php -q /var/www/html/cacti/cmd.php 3 >> /tmp/cacti.log 2>&1
*/5 * * * * apache php -q /var/www/html/cacti/cmd.php 4 >> /tmp/cacti.log 2>&1
this way, there will be 5 instances of cmd.php running simultaneously every 5 minutes, each instance handling a (more or less) equal share of the RRD data sources.
one can expand this approach further just by changing the php code to "ID%10 = $argv[1]..." and adding 10 lines to the crontab (from 0 to 9) as above.
I know this is not an elegant solution, but in the meantime it should work.
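For what it's worth, a small sketch (names are illustrative, not from the real cmd.php) that keeps the instance count in one place, so the modulus in the query and the number of crontab lines can't drift apart:

<?php
// Sketch: one constant drives both the modulus and the valid slot range.
define("NUM_INSTANCES", 5);

$slot = isset($argv[1]) ? (int)$argv[1] : -1;
if ($slot < 0 || $slot >= NUM_INSTANCES) {
    die("usage: cmd.php <slot>, slot = 0.." . (NUM_INSTANCES - 1) . "\n");
}

// Each instance polls every NUM_INSTANCES-th data source by ID.
$sql = "select * from rrd_ds d where d.active=\"on\" "
     . "and d.ID % " . NUM_INSTANCES . " = $slot";
?>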
Bye,
On 2002-02-21 19:29, pyuska wrote:
I was able to extend cacti to support multiple cmd.php instances. [...]
____________________________________________
Roberto Carlos Navas
Internet de Telemovil
El Salvador
rcnavas@telemovil.com