Steve:
The consolidation function needs to have one step per consolidated step/period. Can you post your RRD create?
RRA:AVERAGE:0.5:1:12672
RRA:AVERAGE:0.5:36:480
RRA:AVERAGE:0.5:288:730
RRA:MAX:0.5:6:600
RRA:MAX:0.5:24:600
RRA:MAX:0.5:288:600
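For context, each RRA's time coverage works out to base step × steps-per-row × rows. A quick sketch of the arithmetic, assuming the usual 300-second base step (the create line itself isn't shown above, so that's an assumption):

```python
# Retention arithmetic for the RRAs above, assuming a 300-second base
# step (one primary data point every 5 minutes).
STEP = 300  # seconds per primary data point (PDP)

rras = [
    ("AVERAGE", 1, 12672),
    ("AVERAGE", 36, 480),
    ("AVERAGE", 288, 730),
    ("MAX", 6, 600),
    ("MAX", 24, 600),
    ("MAX", 288, 600),
]

for cf, steps_per_row, rows in rras:
    row_seconds = STEP * steps_per_row
    span_days = row_seconds * rows / 86400.0
    print(f"{cf}:{steps_per_row}:{rows} -> one row per {row_seconds}s, "
          f"about {span_days:.1f} days of history")
```

So the AVERAGE RRAs keep roughly 44 days at 5-minute resolution, 60 days at 3-hour resolution, and 2 years of daily rows.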
Steve:
No! You still have the Baystacks aloof? LOL
rrdtool create newdata.rrd --step 300 \
DS:upDown:GAUGE:0:U:U \
RRA:AVERAGE:0.5:1:525960
Is what I come up with. I'm not sure why averaged values are ending up in there.
That's ok. Just making sure there is a consensus on how a gauge is supposed to work and deposit primary data into the RRD.
I've checked the actual snmpwalk numerous times and it continues to only give an integer.
We'll have to see if any other RRD gurus wander by.
Steve:
In the mean time, share a laugh- http://www.happyplace.com/9907/david-thorne-makes-co-workers-life-a-living-hell
Steve:
Looks like a normalization issue.
RRD picks the timestamps of the intervals for you using your supplied step size, and it uses its own timestamps, not yours. This means that if samples arrive every 10 seconds, starting 6 seconds after the minute, none of the data points will be recorded exactly as you supplied them.
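A toy model of what's happening. This is a sketch of the time-weighted averaging rrdtool performs, not its actual code, and the 10-second step with updates 6 seconds past each boundary is the hypothetical from the paragraph above:

```python
# Toy model of GAUGE normalization, assuming a 10-second step with
# updates landing 6 seconds past each boundary. For a GAUGE, the "rate"
# over (t_prev, t_now] is simply the newly supplied value; each PDP is
# then the time-weighted average of those rates across its window.
STEP = 10

def normalize(updates, step=STEP):
    """updates: list of (timestamp, value) pairs, sorted by timestamp.
    Returns {window_start: time-weighted value}. Windows at the edges
    of the data are only partially covered, so ignore those."""
    pdps = {}
    for (t0, _), (t1, v) in zip(updates, updates[1:]):
        t = t0
        while t < t1:
            seg_end = min(t1, (t // step + 1) * step)
            window = (t // step) * step
            pdps[window] = pdps.get(window, 0.0) + (seg_end - t) * v / step
            t = seg_end
    return pdps

# The value steps from 1 to 2, but no update ever says "1.4":
pdps = normalize([(6, 1), (16, 1), (26, 2), (36, 2)])
print(pdps[10])  # the window spanning the transition averages out near 1.4
```

The window covering the transition gets a blend of the old and new values, which is exactly the "values I never supplied" effect being discussed.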
Steve:
"You can cause the normalisation to be a null operator by
ensuring that you update on every step boundary - not a second early
or late - and do no consolidation. "
"Don't expect values. Expect rates. Values are used by rrdtool to
compute rates. It is these rates that are normalized, consolidated
and displayed.
Depending on your counter type, and your input, rrdtool will
'modify your values' (compute a rate from them).
If you think you are dealing with values instead of rates, at some
point in time you are going to be disappointed."
Looks like it all comes down to 1:1 step per period and always updating on step/period boundaries.
http://www.vandenbogaerdt.nl/rrdtool/process.php
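One way to convince yourself of the step-boundary point: when every update lands exactly on a step boundary, each PDP window is covered by exactly one update's value, so the time-weighted average is the raw value and normalization becomes a no-op. A sketch under that assumption:

```python
# If GAUGE updates land exactly on 300-second step boundaries, each PDP
# window is covered by a single value and normalization changes nothing.
STEP = 300

def pdp_value(window_start, updates, step=STEP):
    """Time-weighted average over [window_start, window_start + step),
    treating each GAUGE update's value as the rate since the prior update."""
    total = 0.0
    for (t0, _), (t1, v) in zip(updates, updates[1:]):
        lo = max(t0, window_start)
        hi = min(t1, window_start + step)
        if hi > lo:
            total += (hi - lo) * v
    return total / step

on_boundary = [(0, 1), (300, 1), (600, 2), (900, 2)]
print(pdp_value(300, on_boundary))  # exactly 2.0 -- the raw update value
```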
OK, normalization is explained in the link above. Too bad there was no mention of this at all in the Oetiker docs.
GAUGE is set up for actual rates such as speed. Even temperature works, because to get from 70 to 80 the medium being measured must pass through every fraction of every degree in between (not necessarily linearly).
For a binary value this doesn't work, and the normalization produces extremely noticeable values. I think the stock price example is also bad, because a price could go from 50 to 80 in one trade, yet the normalization might give you something like 67 in between - very wrong, since the stock never actually traded at that price.
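To see where an in-between value can come from, here is the weighted blend for a binary 1/2 status. The 120-second update offset is an assumption for illustration; the exact blended value depends on how far past the boundary the updates land:

```python
# A binary GAUGE (1 = down, 2 = up) updated 120 s after each 300 s
# step boundary. The PDP spanning the transition mixes the two values:
# 120 s of "1" followed by 180 s of "2", time-weighted over the window.
STEP, OFFSET = 300, 120

pdp = (OFFSET * 1 + (STEP - OFFSET) * 2) / STEP
print(pdp)  # 1.6 -- a status the interface never actually had
```

With the same offset, a 2-to-1 transition blends the other way, landing below 1.5, which is what makes a simple midpoint threshold workable.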
Just have to do my check as x < 1.5 or x > 1.5, since the normalization is always the same with my values.
James - if that's what you meant by a rate I apologize for being daft. This was the first I heard of normalization.
Steve:
Sorry I don't have a better answer for you.
Nope. Normalization is the answer I was looking for. The solution is to fudge the check: 1->2 is always 1.6, 2->1 is always 1.3, so we just weigh the value against 1.5 instead of testing ==1 or ==2.
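The fudged check itself is then trivial (assuming 1 = down, 2 = up as in this thread):

```python
def link_is_up(normalized):
    """Map a normalized GAUGE reading back to a binary status.
    Anything above the 1.5 midpoint is treated as up (2),
    anything below as down (1)."""
    return normalized > 1.5

print(link_is_up(1.6))  # True  -- the 1->2 transition value
print(link_is_up(1.3))  # False -- the 2->1 transition value
```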
The takeaway is that GAUGE is not suitable for binary types, nor for data with real discontinuities (prices).
Steve:
That's pretty much the case.
From : http://oss.oetiker.ch/rrdtool/tut/rrdtutorial.en.html
[quote]One important feature of RRDtool has not been explained yet: it is virtually impossible to collect data and feed it into RRDtool on exact intervals. RRDtool therefore interpolates the data, so they are stored on exact intervals. [/quote]
In other words: during the ideal polling interval (T) to (T+5min), the interface status moves from 1 to 2. At T+5min+delta, the real polling moment, the RRD is fed the new value, but in order to store it under the T+5min timestamp, rrdtool scales it back to something less than 2.
Could it be because of this ?
I think in this case (depending on other factors, certainly) I might consider not using zenperfsnmp to gather the data - i.e., for a binary value with 2 or 3 possible states, and/or when you don't need to graph it, don't store the values in an RRD. It's probably a hack, but when I want to check on a polled value that is a "yes/no" sort of thing, I usually use a command datasource and, if I need to, store values in a file my script creates.
Not to say that's a good way to do it, but maybe a bit of figuring out a correct tool for the job. I don't know all the context of what you're doing though, so this may be totally useless.
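A minimal sketch of that file-based approach. The file path is a placeholder, and poll_status() stands in for whatever SNMP get the real script would do:

```python
#!/usr/bin/env python
# Sketch of a command-datasource-style check that records a binary
# status outside RRD, sidestepping normalization entirely.
import time

STATE_FILE = "/tmp/ifstatus.log"  # hypothetical path

def poll_status():
    # Placeholder: a real script would do an SNMP query here and
    # return the raw integer status (e.g. 1 = down, 2 = up).
    return 2

def record(path=STATE_FILE):
    """Append a 'timestamp status' line to the state file."""
    status = poll_status()
    with open(path, "a") as f:
        f.write(f"{int(time.time())} {status}\n")
    return status

if __name__ == "__main__":
    record()
```

Because the raw integer is written as-is, the stored status is always exactly what SNMP returned - no interpolated 1.6s.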
Potentially, for some of the community devs who are adept at writing perf daemons, it might be worthwhile putting on a "nice to have" list something that's like zenperfsnmp but gives the option of storing data in (something) other than RRD for non-graphing uses.
--
James Pulver
Information Technology Area Supervisor
LEPP Computer Group
Cornell University
Nillie:
Yes, normalization.
Steve:
Maybe write a daemon that uses the twisted framework to perform the snmp tasks and update statuses/create events rather than storing the data.
Copyright © 2005-2011 Zenoss, Inc.