Cacti Migration shows graphs not same on OLD/NEW server
Hi,
I have managed to work through my Cacti server migration from the old server to the new one.
I worked through all the earlier issues one by one and finally got everything working.
However, I see that the graphs on the old server (still running) and the new server (now running) differ slightly in the numbers they chart.
The old server shows more data in its charts, and in another case the Disk Space numbers appear to show 6 GB (old) vs 10 GB (new).
I did do the [old] rra/*.rrd => *.xml => [new] rra/*.rrd operation too.
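In case it helps anyone reading later, that dump/restore step was roughly along the lines below. The rra path and the cactiuser owner just reflect my layout and are only examples; the XML step matters because raw .rrd files are not portable between different CPU architectures.
Code: Select all
# on the OLD server: dump every RRD to architecture-independent XML
cd /local/cactiuser/cacti/rra
for f in *.rrd; do rrdtool dump "$f" > "${f%.rrd}.xml"; done
# copy the *.xml files to the new server (scp/rsync), then on the NEW server:
cd /local/cactiuser/cacti/rra
for f in *.xml; do rrdtool restore "$f" "${f%.xml}.rrd"; done
# fix ownership afterwards (run as root)
chown cactiuser:cactiuser *.rrd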
I checked the logs, and since data is being charted there don't appear to be any errors in them.
I will include some samples of what I am referring to.
Any reason why this is?
Where can I look?
BTW, I have rebuilt the poller cache too (Console / System Utilities / Rebuild Poller Cache).
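If it matters, the same rebuild can also be run from the shell with the CLI script that ships with 0.8.x; the path below is just where my install lives:
Code: Select all
sudo -u cactiuser php /local/cactiuser/cacti/cli/rebuild_poller_cache.php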
*I just added a more visible view of the weekly graphs. You can see where the import of the Cacti database takes place and then the graph changes drastically.
- Attachments
- oldsrv1a.PNG
- newsrv1a.PNG
- oldsrv2.PNG
- newsrv2.PNG
- weeklycompare.png (Weekly Old / New cacti servers)
/BestRegards,
-FuRoSh
Re: Cacti Migration shows graphs not same on OLD/NEW server
Does anyone know what to check? I've compared just about everything I can think of in the GUI side by side, and both look the same. I've also checked php.ini.
The old and new servers are Ubuntu.
Anything else I can provide please let me know.
/BestRegards,
-FuRoSh
- gandalf
- Developer
Re: Cacti Migration shows graphs not same on OLD/NEW server
The DNS stuff does not worry me. It may simply be that the new server is seeing different performance here. You may want to test what polling returns as a value by increasing the log level, and perhaps perform some tests from the native CLI.
The disk issue: do both graphs monitor their respective localhost? What does a "du -k" return, then? It is not unlikely that different localhosts (old vs new) have different disk space usage.
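A quick way to cross-check that from the shell on each box, if the disk data source is SNMP-based; the community string and target host below are placeholders:
Code: Select all
# raw storage values as SNMP reports them (compare old vs new server)
snmpwalk -v 2c -c public localhost HOST-RESOURCES-MIB::hrStorageDescr
snmpwalk -v 2c -c public localhost HOST-RESOURCES-MIB::hrStorageUsed
# and what the OS itself reports
df -k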
R.
Re: Cacti Migration shows graphs not same on OLD/NEW server
Hi Gandalf,
Thanks for your response! As for the disk issues, I think there is a separate problem there: I found they were showing filesystems that don't actually exist on some clients, so I'll investigate that further.
However, I am scratching my head about the other charts/graphs. When I did the migration, the charting/graphing on the new server was fine except for the charts being slightly off in range/numbers. Now, for some strange reason, I am actually losing data. This seems to be the common "gaps in graphs" problem when searching; I tried to go through just about *all SOLVED* and other related posts to see if I could figure out why, without much luck.
May I ask what would cause graphs to be displayed correctly when running as cactiuser manually from the command line, yet extremely sporadically via crontab?
I do not see any pertinent errors in cacti.log.
In some cases I get some data but lots of gaps via cron. I'll attach a screenshot of good data when run as cactiuser from the command line.
Also, just about all the other graphs, such as the mounted-partitions and mysql_stats scripts, work fine via cron.
My cron entry for cactiuser:
-----
*/1 * * * * php /local/cactiuser/cacti/poller.php >/dev/null 2>/var/log/cacti-poller-error.log
-----
I have also tried the --force option.
In the screenshot, you can see nice clean data when I ran from the command line. I watched `date` and ran it at about the same time as the cron schedule. I used --force on the command line, and the graphs are fine that way.
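For reference, the manual runs were essentially the following, using the same path as the cron line above (so the only intended difference is cron vs an interactive shell):
Code: Select all
sudo -u cactiuser /usr/bin/php /local/cactiuser/cacti/poller.php --force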
Any suggestions please?
BTW, I've looked at the debugging link in your signature... but I might be missing something.
- Attachments
- cron_vs_manual.PNG
/BestRegards,
-FuRoSh
Re: Cacti Migration shows graphs not same on OLD/NEW server
^ Does anyone know why the above would occur? Why are the graphs consistent when running poller.php manually but sporadic with cron? ^
At this point, I have gone through the entire *debugging* signature link twice. I did find some interesting things and worked through the suggested recommendations. I see some improvements, but I still do not see the graphs the way they were on the old Cacti server I migrated from.
I do agree it's minimal, since they are "dns" resolution values from a pageload-agent.php script. It's just bugging me now. The graphs still have minor gaps, but I think I can live with that.
If anyone else knows why running the scripts manually charts/graphs fine while cron still leaves minor gaps, that would really help me. I have checked all permissions and also compared just about everything between the old and new Cacti servers.
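A concrete way to verify that last point — that the poller user can actually write the .rrd files — is a quick write test; the path below is just an example layout:
Code: Select all
# run a write test in the rra directory as the poller user
sudo -u cactiuser sh -c 'touch /local/cactiuser/cacti/rra/.writetest && echo writable && rm /local/cactiuser/cacti/rra/.writetest'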
/BestRegards,
-FuRoSh
Re: Cacti Migration shows graphs not same on OLD/NEW server
Hello,
I think I'm in a similar situation.
I have two Cacti installations:
- Debian, Cacti 0.8.7g, rrdtool 1.4
- EZcacti on CentOS 6, Cacti 0.8.8a, rrdtool 1.3 (by default... if I change it now, will it cause problems?)
Look at the "same graph" on both nodes:
Old cacti: [screenshot] New cacti: [screenshot]
The OIDs are the same; I checked the data template and graph templates, and they are identical.
The old one seems to be the good one:
[root@XXX cli]# snmpwalk -v 2c -c XXX XXXXXXX 1.3.6.1.4.1.6527.3.1.2.1.1.1.0
SNMPv2-SMI::enterprises.6527.3.1.2.1.1.1.0 = Gauge32: 41
The values that the host returns to the snmpwalk are the same on both nodes.
I'm trying to work out why this happens; I have already checked the data template, graph template, etc.
Comparing the rrdtool graph debug data, the only difference reported by the diff command is:
OLD:
--lower-limit='0' \
NEW:
--lower-limit=0 \
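One way to tell whether this is only a rendering difference between the two rrdtool builds or a real data difference is to compare versions and the stored values directly; the .rrd path below is hypothetical, substitute the data source's actual file:
Code: Select all
rrdtool --version
# fetch the last hour of stored values straight from the RRD (bypasses graphing entirely)
rrdtool fetch /var/www/cacti/rra/example_ds_123.rrd AVERAGE --start -1h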
Where can I keep searching to solve this issue?
Additional information:
- The Realtime plugin on the new machine graphs the values correctly, just like the old one.
- I've got a preproduction EZcacti with the same installation, and it graphs like the new Cacti.
- In the OLD Cacti I erased the 1-minute RRA some time ago; in the new one it is present BUT not active in the data template.
Thanks in advance.
Re: Cacti Migration shows graphs not same on OLD/NEW server
Yeah, I don't get it either, even after going through all the legwork of troubleshooting it. I'm obviously missing something, but I'm not sure what.
Today my new *migrated* server stopped polling in the middle of the night. I found errors and tried to resolve them all.
I also decided to re-enable the old server to verify whether it behaves differently. The old server has no issues graphing all hosts correctly, so it looks like something specific to the new Cacti server. I've checked all files, RRDs, permissions, and paths, both on the system and from the UI, and both servers look the same.
Here's another screenshot showing that when I run poller.php manually as cactiuser it seems fine, but via cron it's gaps galore.
If this symptom is seen, should the fix still be in the debugging link? I would think that if it runs fine manually from the command line, it would be a known reported issue.
I need a sign that points me somewhere, a clue, something please?
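A generic check that sometimes explains "fine by hand, gaps under cron" for script-based data sources: cron gives jobs a much smaller environment (PATH and so on) than an interactive shell. The two environments can be compared like this; the file names are only examples:
Code: Select all
# temporary line in cactiuser's crontab: capture cron's environment once
* * * * * env > /tmp/cactiuser-cron-env.txt
# then, from an interactive shell as cactiuser:
env > /tmp/cactiuser-shell-env.txt
diff /tmp/cactiuser-cron-env.txt /tmp/cactiuser-shell-env.txt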
- Attachments
- 829_Manual_then_Gaps.PNG (manual vs cron polling)
/BestRegards,
-FuRoSh
- phalek
- Developer
Re: Cacti Migration shows graphs not same on OLD/NEW server
alopezdu wrote:
The OIDs are the same; I checked the data template and graph templates, and they are identical.
The old one seems to be the good one:
[root@XXX cli]# snmpwalk -v 2c -c XXX XXXXXXX 1.3.6.1.4.1.6527.3.1.2.1.1.1.0
SNMPv2-SMI::enterprises.6527.3.1.2.1.1.1.0 = Gauge32: 41
The values that the host returns to the snmpwalk are the same on both nodes.
Just to verify: the host that the graph is coming from is the same? So you did an SNMP walk to the remote host from the old Cacti server (oldcacti) and from the new one (newcacti), like this?
Code: Select all
[root@oldcacti cli]# snmpwalk -v 2c -c XXX remotehost 1.3.6.1.4.1.6527.3.1.2.1.1.1.0
SNMPv2-SMI::enterprises.6527.3.1.2.1.1.1.0 = Gauge32: 41
[root@newcacti cli]# snmpwalk -v 2c -c XXX remotehost 1.3.6.1.4.1.6527.3.1.2.1.1.1.0
SNMPv2-SMI::enterprises.6527.3.1.2.1.1.1.0 = Gauge32: 41
Greetings,
Phalek
---
Need more help ? Read the Cacti documentation or my new Cacti 1.x Book
Need on-site support ? Look here Cacti Workshop
Need professional Cacti support ? Look here CereusService
---
Plugins : CereusReporting
- phalek
- Developer
Re: Cacti Migration shows graphs not same on OLD/NEW server
FuRoSh wrote:
When I run poller.php manually as cactiuser it seems fine, but via cron, gaps galore.
Can you confirm that only one poller is running (no second cron job for poller.php exists) AND that the manual poller.php run works when using the same user ID that the poller.php cron job is scheduled under?
Greetings,
Phalek
Re: Cacti Migration shows graphs not same on OLD/NEW server
Hi phalek,
phalek wrote:
Can you confirm that only one poller is running (no second cron job for poller.php exists) AND that the manual poller.php run works when using the same user ID that the poller.php cron job is scheduled under?
That is correct. I do *NOT* have two pollers running. Everything in /etc/cron.d/cacti is actually commented out. The crontab is under cactiuser's own cron (/var/spool/cron/crontabs/cactiuser, edited with 'crontab -u cactiuser -e'):
-----
*/1 * * * * /usr/bin/php -q /local/cactiuser/cacti/poller.php --force 1>/var/www/cacti/log/cacti-poller-out.log 2>>/var/log/cacti-poller-error.log
-----
^ I made some minor changes after yesterday's gaps to see if I get any improvement. /local/cactiuser/cacti is the same location (a symlink) as /var/www/cacti/.
I also verified this by reviewing /var/log/syslog on the server, to confirm that only one user has a cron entry for the Cacti poller (not root as well) and that each 1-minute interval has only one entry, which it does:
------
Aug 30 20:12:01 cactisrv1 CRON[8701]: (cactiuser) CMD (/usr/bin/php -q /local/cactiuser/cacti/poller.php --force 1>/var/www/cacti/log/cacti-poller-out.log 2>>/var/log/cacti-poller-error.log)
Aug 30 20:13:01 cactisrv1 CRON[10513]: (cactiuser) CMD (/usr/bin/php -q /local/cactiuser/cacti/poller.php --force 1>/var/www/cacti/log/cacti-poller-out.log 2>>/var/log/cacti-poller-error.log)
------
The old Cacti server I re-enabled yesterday has exactly the same setup.
/BestRegards,
-FuRoSh
- phalek
- Developer
Re: Cacti Migration shows graphs not same on OLD/NEW server
First of all, you shouldn't use the "--force" parameter, as you may end up with multiple pollers running.
So remove that and let your cacti run for a few minutes.
Then check the STATS messages in your Cacti log, which tell you how long the poller actually runs. If it's 58 seconds or so, then you have an issue with the poller not being able to poll all devices within the 1-minute timeframe.
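For example, assuming the log is in the default location under the Cacti install directory:
Code: Select all
grep "SYSTEM STATS" /var/www/cacti/log/cacti.log | tail -n 5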
You probably need to look into your spine settings then, or use the Boost plugin to improve the poller's performance.
In case you're heavily into using scripts, you may want to think about enhancing those scripts to do some sort of caching ...
Greetings,
Phalek
Re: Cacti Migration shows graphs not same on OLD/NEW server
phalek wrote:
First of all, you shouldn't use the "--force" parameter, as you may end up having multiple pollers running. So remove that and let your cacti run for a few minutes.
Okay, done. I took out --force, and I also checked to verify that no multiple poller.php processes are running; good there.
phalek wrote:
Then check the STATS messages in your Cacti log, which actually tells you how long the poller is running. If it's 58 secs or so, then you have an issue with the poller not being able to poll all devices within that 1 minute timeframe.
I checked STATS on both old-cacti-server and new-cacti-server:
Old:
08/31/2013 01:59:15 AM - SYSTEM STATS: Time:13.8600 Method:spine Processes:1 Threads:5 Hosts:61 HostsPerProcess:61 DataSources:310 RRDsProcessed:232
08/31/2013 02:00:15 AM - SYSTEM STATS: Time:13.2713 Method:spine Processes:1 Threads:5 Hosts:61 HostsPerProcess:61 DataSources:293 RRDsProcessed:229
08/31/2013 02:01:14 AM - SYSTEM STATS: Time:12.8222 Method:spine Processes:1 Threads:5 Hosts:61 HostsPerProcess:61 DataSources:300 RRDsProcessed:230
08/31/2013 02:02:15 AM - SYSTEM STATS: Time:13.8259 Method:spine Processes:1 Threads:5 Hosts:61 HostsPerProcess:61 DataSources:320 RRDsProcessed:231
08/31/2013 02:03:15 AM - SYSTEM STATS: Time:13.8830 Method:spine Processes:1 Threads:5 Hosts:61 HostsPerProcess:61 DataSources:314 RRDsProcessed:232
New:
08/31/2013 02:00:16 AM - SYSTEM STATS: Time:14.6245 Method:spine Processes:1 Threads:5 Hosts:61 HostsPerProcess:61 DataSources:307 RRDsProcessed:223
08/31/2013 02:01:14 AM - SYSTEM STATS: Time:12.8323 Method:spine Processes:1 Threads:5 Hosts:61 HostsPerProcess:61 DataSources:290 RRDsProcessed:225
08/31/2013 02:02:14 AM - SYSTEM STATS: Time:12.8970 Method:spine Processes:1 Threads:5 Hosts:61 HostsPerProcess:61 DataSources:281 RRDsProcessed:223
08/31/2013 02:03:15 AM - SYSTEM STATS: Time:13.8440 Method:spine Processes:1 Threads:5 Hosts:61 HostsPerProcess:61 DataSources:298 RRDsProcessed:226
phalek wrote:
In case you're heavily into using scripts, then you may think about enhancing these scripts to do some sort of caching ...
Yes, a good number of scripts are running for various things: apache, mysql, gerrit, and repo metrics.
However, old-cacti-server is completely fine. Its charts look good and consistent; only new-cacti-server looks bad... (except when running manually).
If it helps, here's another screenshot of the old/new Cacti servers and how the charts look on each:
OLD top / NEW bottom
- Attachments
- old_cacti-server.PNG (old cacti HTTP graphs)
- new_cacti-server.PNG (new cacti HTTP graphs)
/BestRegards,
-FuRoSh
- phalek
- Developer
Re: Cacti Migration shows graphs not same on OLD/NEW server
So it's not all graphs that show this behaviour (can you confirm)? Is it only this DNS graph that looks like this?
Greetings,
Phalek
Re: Cacti Migration shows graphs not same on OLD/NEW server
Hello ,
First of all, thank you for trying to see what is happening.
phalek wrote:
Just to verify: the host that the graph is coming from is the same? So you did an SNMP walk to the remote host from the old Cacti (oldcacti) and from the new one (newcacti)?
The snmpwalk is executed from both installations, polling the same host.
phalek wrote:
And remotehost is obviously not localhost or 127.0.0.1 or any other 127.0.0.xxx IP?
It's not a localhost; it's an Alcatel-Lucent 7750 router, and the same thing happens with all my routers (Alcatel and Juniper).
That's why I think there's something wrong with my rrdtool installation, but I'm not able to debug it.
Thanks again!