Interface statistics off by a lot (Ubuntu 10.04 AMD64)


Post by TvL2386 »

Hi!

I think it's an Ubuntu-related problem. I have an Ubuntu 10.04 AMD64 server, and when I copy something large to another machine at roughly 100 MB/s (1 Gbit link), the graph created by Cacti seems off by a factor of 10. I have been copying for a while now...

I've been running a test for a longer period of time to see it clearly.

The command:
[screenshot of the command]

The test ran for 3233 seconds with an average speed of 33.2 MB/s, which works out to 265.6 Mbit/s.
It doesn't show in the graph:

[screenshot of the graph]

EDIT: I forgot to mention that the server with the graphed interface is Ubuntu 10.04 AMD64. The Cacti host is a CentOS 5.2 x86_64 machine.

Anybody know what could be the problem?
Last edited by TvL2386 on Fri Jun 18, 2010 2:57 am, edited 1 time in total.

Post by TvL2386 »

I've done another test:

Code: Select all

user@ubuntu_host:~$ dd if=/dev/zero count=1M bs=10240 | ssh user@otherhost cat > /dev/null
user@otherhost's password:
1048576+0 records in
1048576+0 records out
10737418240 bytes (11 GB) copied, 293.034 s, 36.6 MB/s

I did an snmpwalk of IF-MIB during this time, extracting 'ifHCOutOctets.6' (a Counter64).

Results:

Code: Select all

td: 31 speed: 998 bytes/sec
td: 30 speed: 13187383 bytes/sec
td: 30 speed: 41962967 bytes/sec
td: 30 speed: 38440179 bytes/sec
td: 30 speed: 39788642 bytes/sec
td: 30 speed: 40642786 bytes/sec
td: 31 speed: 36056132 bytes/sec
td: 30 speed: 38879780 bytes/sec
td: 30 speed: 41439323 bytes/sec
td: 30 speed: 39078419 bytes/sec
td: 30 speed: 39354243 bytes/sec
td: 30 speed: 11636450 bytes/sec
td: 31 speed: 497 bytes/sec
td: 30 speed: 576 bytes/sec
td: 30 speed: 514 bytes/sec
td: 30 speed: 513 bytes/sec
td: 30 speed: 511 bytes/sec
td: 30 speed: 513 bytes/sec
td: 30 speed: 511 bytes/sec
td: 31 speed: 497 bytes/sec
NOTE: td is the time difference between the timestamps. I read that an octet is the same as a byte, so I printed bytes/sec instead of octets/sec.
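
For reference, those numbers came from a loop along these lines (a minimal sketch; SNMP v2c, the community string "public" and interface index 6 are assumptions based on the test above):

Code: Select all

#!/bin/bash
# Sample ifHCOutOctets every 30 seconds and print the byte rate,
# producing the same kind of 'td'/'speed' output as above.
HOST=ubuntu_host
OID=IF-MIB::ifHCOutOctets.6

prev_val=$(snmpget -v2c -c public -Ovq "$HOST" "$OID")
prev_ts=$(date +%s)

while sleep 30; do
    val=$(snmpget -v2c -c public -Ovq "$HOST" "$OID")
    ts=$(date +%s)
    td=$(( ts - prev_ts ))
    echo "td: $td speed: $(( (val - prev_val) / td )) bytes/sec"
    prev_val=$val
    prev_ts=$ts
done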

What Cacti shows me:
[screenshot of the graph]
The last peak is 70 Mbit/s according to the graph, which is wrong: it should be around 288 Mbit/s (36 MB/s * 8).

I'm still at a loss as to where this comes from, but I think I can say it's Cacti-related.

Post by gandalf »

Please post "rrdtool info" of the rrd file you're using.
Are you using 1 min polling or 5 min polling?
R.

rrdtool info

Post by TvL2386 »

According to the graph source, the following rrd is used: /opt/cacti-0.8.7e/rra/213/9117.rrd. I'm using 5-minute polling. The total polling time is 17 seconds.

Code: Select all

rrdtool info /opt/cacti-0.8.7e/rra/213/9117.rrd
filename = "/opt/cacti-0.8.7e/rra/213/9117.rrd"
rrd_version = "0003"
step = 300
last_update = 1276925706
ds[traffic_in].type = "COUNTER"
ds[traffic_in].minimal_heartbeat = 600
ds[traffic_in].min = 0.0000000000e+00
ds[traffic_in].max = 1.0000000000e+07
ds[traffic_in].last_ds = "232675521"
ds[traffic_in].value = 2.2478000000e+02
ds[traffic_in].unknown_sec = 0
ds[traffic_out].type = "COUNTER"
ds[traffic_out].minimal_heartbeat = 600
ds[traffic_out].min = 0.0000000000e+00
ds[traffic_out].max = 1.0000000000e+07
ds[traffic_out].last_ds = "3946522749"
ds[traffic_out].value = 4.6400000000e+01
ds[traffic_out].unknown_sec = 0
rra[0].cf = "AVERAGE"
rra[0].rows = 500
rra[0].cur_row = 106
rra[0].pdp_per_row = 1
rra[0].xff = 5.0000000000e-01
rra[0].cdp_prep[0].value = NaN
rra[0].cdp_prep[0].unknown_datapoints = 0
rra[0].cdp_prep[1].value = NaN
rra[0].cdp_prep[1].unknown_datapoints = 0
rra[1].cf = "AVERAGE"
rra[1].rows = 600
rra[1].cur_row = 412
rra[1].pdp_per_row = 1
rra[1].xff = 5.0000000000e-01
rra[1].cdp_prep[0].value = NaN
rra[1].cdp_prep[0].unknown_datapoints = 0
rra[1].cdp_prep[1].value = NaN
rra[1].cdp_prep[1].unknown_datapoints = 0
rra[2].cf = "AVERAGE"
rra[2].rows = 700
rra[2].cur_row = 406
rra[2].pdp_per_row = 6
rra[2].xff = 5.0000000000e-01
rra[2].cdp_prep[0].value = 3.7498503753e+01
rra[2].cdp_prep[0].unknown_datapoints = 0
rra[2].cdp_prep[1].value = 7.7323090508e+00
rra[2].cdp_prep[1].unknown_datapoints = 0
rra[3].cf = "AVERAGE"
rra[3].rows = 775
rra[3].cur_row = 87
rra[3].pdp_per_row = 24
rra[3].xff = 5.0000000000e-01
rra[3].cdp_prep[0].value = 6.9172113065e+02
rra[3].cdp_prep[0].unknown_datapoints = 2
rra[3].cdp_prep[1].value = 1.3247018764e+02
rra[3].cdp_prep[1].unknown_datapoints = 2
rra[4].cf = "AVERAGE"
rra[4].rows = 797
rra[4].cur_row = 237
rra[4].pdp_per_row = 288
rra[4].xff = 5.0000000000e-01
rra[4].cdp_prep[0].value = 2.4738414400e+03
rra[4].cdp_prep[0].unknown_datapoints = 6
rra[4].cdp_prep[1].value = 4.7658189599e+02
rra[4].cdp_prep[1].unknown_datapoints = 6
rra[5].cf = "MAX"
rra[5].rows = 500
rra[5].cur_row = 398
rra[5].pdp_per_row = 1
rra[5].xff = 5.0000000000e-01
rra[5].cdp_prep[0].value = NaN
rra[5].cdp_prep[0].unknown_datapoints = 0
rra[5].cdp_prep[1].value = NaN
rra[5].cdp_prep[1].unknown_datapoints = 0
rra[6].cf = "MAX"
rra[6].rows = 600
rra[6].cur_row = 233
rra[6].pdp_per_row = 1
rra[6].xff = 5.0000000000e-01
rra[6].cdp_prep[0].value = NaN
rra[6].cdp_prep[0].unknown_datapoints = 0
rra[6].cdp_prep[1].value = NaN
rra[6].cdp_prep[1].unknown_datapoints = 0
rra[7].cf = "MAX"
rra[7].rows = 700
rra[7].cur_row = 551
rra[7].pdp_per_row = 6
rra[7].xff = 5.0000000000e-01
rra[7].cdp_prep[0].value = 3.9168918619e+01
rra[7].cdp_prep[0].unknown_datapoints = 0
rra[7].cdp_prep[1].value = 7.7323090508e+00
rra[7].cdp_prep[1].unknown_datapoints = 0
rra[8].cf = "MAX"
rra[8].rows = 775
rra[8].cur_row = 463
rra[8].pdp_per_row = 24
rra[8].xff = 5.0000000000e-01
rra[8].cdp_prep[0].value = 1.4235130494e+02
rra[8].cdp_prep[0].unknown_datapoints = 2
rra[8].cdp_prep[1].value = 8.5227937127e+00
rra[8].cdp_prep[1].unknown_datapoints = 2
rra[9].cf = "MAX"
rra[9].rows = 797
rra[9].cur_row = 676
rra[9].pdp_per_row = 288
rra[9].xff = 5.0000000000e-01
rra[9].cdp_prep[0].value = 1.4235130494e+02
rra[9].cdp_prep[0].unknown_datapoints = 6
rra[9].cdp_prep[1].value = 8.8499385178e+00
rra[9].cdp_prep[1].unknown_datapoints = 6
I don't know if it's relevant, but I have a few Linux nodes and lots of Cisco devices. The new Ubuntu 10.04 machine is the only one where the graphs are off.
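
One detail worth noting in the output above: both data sources have max = 1.0000000000e+07, i.e. 10^7 bytes/sec, which is only 80 Mbit/s. RRDtool stores counter rates above a data source's max as unknown, so on a gigabit link that cap alone could plausibly produce broken graphs. If that turns out to be the cause, the cap could presumably be raised with rrdtool tune (an untested sketch; 125000000 bytes/sec corresponds to 1 Gbit/s line rate):

Code: Select all

# Raise both data sources' maximum to gigabit line rate
# (1 Gbit/s = 125000000 bytes/sec):
rrdtool tune /opt/cacti-0.8.7e/rra/213/9117.rrd \
    --maximum traffic_in:125000000 \
    --maximum traffic_out:125000000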

Re: rrdtool info

Post by gandalf »

TvL2386 wrote:According to the graph source the following rrd is used: /opt/cacti-0.8.7e/rra/213/9117.rrd. I'm using 5 minute polling. The total polling time is 17 seconds.
That's fine.
TvL2386 wrote:I don't know if it's relevant, but I have a few Linux nodes and lots of Cisco devices. The new Ubuntu 10.04 machine is the only one where the graphs are off.
Not that I'm aware of.
R.

Post by gandalf »

TvL2386 wrote:10737418240 bytes (11 GB) copied, 293.034 s, 36.6 MB/s
Your copy process lasted around 300 seconds. Before and afterwards there was no traffic, so it may have happened that one half of the transfer hit a first polling interval and the rest hit a second polling interval.
Please retry and make sure that the copy process covers a whole polling interval, e.g. by running it for 10 minutes instead of 5.
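For example, something like this should keep the link busy for roughly 20 minutes, covering several complete intervals (a sketch; the byte count assumes the ~36 MB/s you measured above):

Code: Select all

# ~40 GiB at ~36 MB/s takes roughly 19 minutes, i.e. at least
# three complete 5-minute polling intervals:
dd if=/dev/zero bs=1M count=40960 | ssh user@otherhost 'cat > /dev/null'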
R.

Post by TvL2386 »

Hi Gandalf,

Indeed, that's true for my last test, though I started at approximately the polling time.
Please see my first post for a 3000+ second test.
