Archived community.zenoss.org | full text search
105221 Views 12 Replies Latest reply: Nov 16, 2009 9:42 PM by guyverix
jbaird Rank: Green Belt 166 posts since
Sep 18, 2007

Feb 4, 2009 12:13 PM

Graph I/O Performance?

Is anybody monitoring/graphing any I/O related stats on Linux or Unix machines? If so, how are you going about doing this?

Thanks,

Josh
  • Wouter DHaeseleer ZenossMaster 204 posts since
    Jun 22, 2007
    1. Feb 4, 2009 1:53 PM (in response to jbaird)
    RE: Graph I/O Performance?
    I guess you mean things you can view with sar and iostat?

    If so, I would take a look at the Nagios plugins.
    Here is one: http://www.ofn.dk/files/software/check_iostat
  • Wouter DHaeseleer ZenossMaster 204 posts since
    Jun 22, 2007
    3. Feb 4, 2009 2:29 PM (in response to jbaird)
    RE: Graph I/O Performance?

    "jbaird" wrote:

     

    Yeah, something like that.. sucks that it has to be executed on the remote system.. I was hoping something along these lines was available via SNMP.



    I use the nagios_nrpe agent to run remote scripts.
  • gherzbrun Newbie 4 posts since
    Aug 1, 2008
    4. Feb 5, 2009 3:23 PM (in response to Wouter DHaeseleer)
    RE: Graph I/O Performance?
    While you can't get the same detail as sar or iostat, there is a diskIOTable (1.3.6.1.4.1.2021.13.15.1) tree in the Net-SNMP MIBs. It's rather simple and not always fully functional, but it would give you some stats to start with. There are also the generic ssIORawSent and ssIORawReceived counters in the systemStats table, but those are coarser still.

    Similar functionality exists in the free version of the Informant agent for Windows, in their logicalDiskTable (1.3.6.1.4.1.9600.1.1.1); again, not a very complete picture, but at least some stats.

    Good luck
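
    For anyone trying this, the tables above can be queried directly. Host and community string below are placeholders, and the diskio module must be enabled in the agent:

```
# Walk the diskIOTable by the numeric OID given above
snmpwalk -v2c -c public target-host .1.3.6.1.4.1.2021.13.15.1

# The coarser system-wide counters from the systemStats group
snmpget -v2c -c public target-host \
    UCD-SNMP-MIB::ssIORawSent.0 UCD-SNMP-MIB::ssIORawReceived.0
```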
  • c0ns0le Newbie 5 posts since
    Jul 8, 2008
    5. Nov 5, 2009 11:46 AM (in response to jbaird)
    Re: Graph I/O Performance?

    Thought I'd drop a note on this and hopefully get some others to help out with the graphing etc...

     

    One method I've found for iostat collection is to leverage another community's efforts.

     

    1. place the attached iostats.pl file into a directory of your choosing

    2. edit your snmpd.conf adding:
         pass .1.3.6.1.3.1 /usr/bin/perl /path/to/iostat.pl

    3. create a /etc/cron.d/iostat with following content:
         * * * * * iostat -kxd 30 2 > /tmp/io.tmp && mv /tmp/io.tmp /tmp/iostat.cache

    4. restart cron

    5. snmpwalk -v2c -c<your_community> <host>:<port> .1.3.6.1.3.1

    6. voilà, you have iostat stats for all devices via SNMP.

    7. leverage someone else's work to get it added to your hardware tab or perf tab / device

     

    ref: http://www.markround.com/archives/48-Linux-iostat-monitoring-with-Cacti.html
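
    For anyone curious how the pass mechanism above works: snmpd runs the registered script as `script -g OID` for a GET (and `-n OID` for GETNEXT) and expects three lines back — the OID, the type, and the value; empty output means "not handled". A minimal sketch in shell, reading a cache file like the cron job above writes. The OID layout and field positions here are illustrative only, not those of the attached iostat.pl:

```shell
#!/bin/sh
# Toy Net-SNMP "pass" handler sketch (illustrative, not the real iostat.pl).
# Answers GETs for two made-up OIDs out of a whitespace-separated cache file.

CACHE="${IOSTAT_CACHE:-/tmp/iostat.cache}"
BASE=".1.3.6.1.3.1"

pass_get() {
    oid="$1"
    case "$oid" in
        "$BASE.1.1")
            # first device name: column 1 of the first cache line
            dev=$(awk 'NR==1 {print $1}' "$CACHE")
            printf '%s\nstring\n%s\n' "$oid" "$dev"
            ;;
        "$BASE.1.2")
            # a per-device stat: column 2 of the first cache line
            val=$(awk 'NR==1 {print $2}' "$CACHE")
            printf '%s\nstring\n%s\n' "$oid" "$val"
            ;;
        *)
            # print nothing: tells snmpd this OID is not handled here
            ;;
    esac
}

# snmpd calls the script with -g <oid> for a GET
if [ "$1" = "-g" ]; then
    pass_get "$2"
fi
```

    A real handler would also implement `-n` (GETNEXT) so that snmpwalk can traverse the subtree, which is what the attached script does for the whole device table.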

     

    Would be great if we could figure out how to do some mappings similar to yaketystats, in that it's able to discern which physical and logical devices map to each other, i.e. your /dev/mpath or /dev/dm devices mapped to logical volume names etc..

     

    Regards,

    Attachments:
  • c0ns0le Newbie 5 posts since
    Jul 8, 2008
    6. Nov 15, 2009 7:09 PM (in response to c0ns0le)
    Re: Graph I/O Performance?
    bump
  • guyverix ZenossMaster 846 posts since
    Jul 10, 2007
    7. Nov 15, 2009 7:44 PM (in response to c0ns0le)
    Re: Graph I/O Performance?
    I wrote a daemon that does this about 2 years ago.  Let me dust off the code and I will tar it and get it up here.  Do you want a Debian version, or a Red Hat version?

     

    Message was edited by: guyverix

    I will not kid you, the code is simplistic (I was just starting to learn bash scripting, and this was my first daemon), so the exec statement itself defines the disk you want to get the stats on. It does not return a table value per device. It does, however, work very well to get what you are asking for.

     

    Message was edited by: guyverix I will check and see how much of a PITA it would be for this to return a table value per device. 
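
    For context, an snmpd.conf exec entry of the sort described here registers one fixed command whose output lines appear as individual OIDs, which is why it can't return a per-device table without one entry per disk. A hypothetical line (name, path, and argument are placeholders, not the actual attached script):

```
# snmpd.conf fragment -- one exec line per monitored disk (illustrative)
exec iostat-sda /path/to/iostat-reader sda
```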

  • guyverix ZenossMaster 846 posts since
    Jul 10, 2007
    8. Nov 15, 2009 9:11 PM (in response to guyverix)
    Re: Graph I/O Performance?

    I have updated the code, and now all that needs to be defined in your snmpd.conf file is the drive and whether you want the stats from iostat -t or iostat -x.

    The return is in a table format so everything can be graphed.  Here is what you will get as returns on a walk when both t and x are available on one drive:

     

    .1.3.6.1.4.1.8072.1.3.2.3.1.3.1.48 = INTEGER: 6
    .1.3.6.1.4.1.8072.1.3.2.3.1.3.1.49 = INTEGER: 12
    .1.3.6.1.4.1.8072.1.3.2.3.1.4.1.48 = INTEGER: 0
    .1.3.6.1.4.1.8072.1.3.2.3.1.4.1.49 = INTEGER: 0
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.48.1 = STRING: sda5
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.48.2 = STRING: 19.58
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.48.3 = STRING: 18.38
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.48.4 = STRING: 425.97
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.48.5 = STRING: 184
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.48.6 = STRING: 4264
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.1 = STRING: sda5
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.2 = STRING: 0.00
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.3 = STRING: 34.87
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.4 = STRING: 1.40
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.5 = STRING: 18.28
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.6 = STRING: 19.98
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.7 = STRING: 425.97
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.8 = STRING: 22.66
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.9 = STRING: 0.21
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.10 = STRING: 10.80
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.11 = STRING: 1.06
    .1.3.6.1.4.1.8072.1.3.2.4.1.2.1.49.12 = STRING: 2.08

     

    I am using the extend command to create this, so it will also give you a literal string reply if you need it for manipulation:

    .1.3.6.1.4.1.8072.1.3.2.3.1.2.1.48 = STRING: sda5
    19.58
    18.38
    425.97
    184
    4264
    .1.3.6.1.4.1.8072.1.3.2.3.1.2.1.49 = STRING: sda5
    0.00
    34.87
    1.40
    18.28
    19.98
    425.97
    22.66
    0.21
    10.80
    1.06
    2.08
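
    For what it's worth, the OID prefix .1.3.6.1.4.1.8072.1.3.2 above is NET-SNMP-EXTEND-MIB territory, populated by snmpd.conf extend directives; the 48 and 49 in the indexes are the length-1 extend names "0" and "1" encoded as ASCII. A hypothetical registration (script path from the follow-up post, arguments assumed; the attached snmpd.local.conf has the real form):

```
# snmpd.conf fragment (illustrative)
extend 0 /opt/snmp-scripts/iostat-snmp -t sda5
extend 1 /opt/snmp-scripts/iostat-snmp -x sda5
```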

  • guyverix ZenossMaster 846 posts since
    Jul 10, 2007
    9. Nov 15, 2009 9:45 PM (in response to guyverix)
    Re: Graph I/O Performance?

    As promised, here you go.

     

    Create directory /opt/snmp-scripts

    This must contain iostat-snmp and iostat-snmp-v2

    chmod 755 both of the files; they will be retrieving the data from the logs.

     

    copy iostatd and iostatdd into /etc/init.d/ and chmod them to 755 (they also need root ownership in the init.d dir)

    copy snmpd.local.conf to /etc/snmp/

    edit snmpd.local.conf to reflect your hard disks that you want to get values for.

     

    Go into the rc#.d runlevel for the server and type:

    ln -s ../init.d/iostatd S99iostatd (start it at boot)

     

    type /etc/init.d/iostatd   (this calls iostatdd with no args)

    now type /etc/init.d/snmpd reload

     

    Verification:

    the files that are created are in /var/tmp/iostat-<x or t>

    run a tail on them.  They are set to update every 10 seconds.  (this can be changed inside iostatdd itself)

     

     

    now snmpwalk -v2c -c public localhost .1.3.6.1.4.1.8072.1.3.2

     

    If everything is set up correctly, you will have your data.

     

    Message was edited by: guyverix

    Darn, almost forgot.  The init.d daemon currently targets a Debian system.  If it complains at startup, comment out the Debian code and uncomment the RH code that I have commented out.  (Or ask me to upload an RH-exclusive version.)

     

    I would also recommend setting up a cron job that restarts the daemon once a week/month/whatever: there is no log rotation, so when the daemon is started it always wipes the previous file and starts again from scratch.  (Why eat the disk space?)
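
    That periodic restart can be a one-line cron entry; a sketch (path and schedule arbitrary, and assuming the init script accepts a restart argument — otherwise chain stop/start):

```
# /etc/cron.d/iostatd-restart (hypothetical) -- bounce the daemon weekly
0 4 * * 0 root /etc/init.d/iostatd restart
```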

    Attachments:
  • guyverix ZenossMaster 846 posts since
    Jul 10, 2007
    10. Nov 15, 2009 10:32 PM (in response to guyverix)
    Re: Graph I/O Performance?

    As an FYI, I will be redoing this code sometime in the future, and have the log files be on a virtual mount in RAM.

    No need to create disk I/O to record disk I/O, even at minimal levels..  Grin..

  • c0ns0le Newbie 5 posts since
    Jul 8, 2008
    11. Nov 16, 2009 7:36 PM (in response to guyverix)
    Re: Graph I/O Performance?

    guyverix,

     

    The only help I was really looking for is graphing the data in Zenoss on, say, a 'custom' device tab.  I've found the example code for how to create a new ZenPack which adds the additional tab; I just haven't had the time to put it all together.  I did find your post helpful on the ramdisk part, though I wouldn't expect this to be a high-I/O process, so I'm not entirely certain it's needed in my implementation. Here are a few of the links.

     

    Links:

    message/36233#36233

    message/40812#40812

     

    Regards,

    c0ns0le

  • guyverix ZenossMaster 846 posts since
    Jul 10, 2007
    12. Nov 16, 2009 9:42 PM (in response to c0ns0le)
    Re: Graph I/O Performance?
    Grin, the extra perf tab is not hard to code; it is getting the table replies put into the RRD that I still have all kinds of trouble with. Even after reading the howto for creating ZenPacks I don't understand Python well enough to tell it to create datasource bleah with datapoints 1,2,3...x..
