Archived from community.zenoss.org

Dev chat 10/23/2008

VERSION 1 
Created on: Sep 14, 2009 11:18 AM by Noel Brockett - Last Modified:  Sep 14, 2009 11:18 AM by Noel Brockett

[09:56] * Now talking on #zenoss

[09:56] * Topic for #zenoss is: Zenoss Development will be here Thursday 11am EDT (UTC -04:00)

[09:56] * Topic for #zenoss set by mrayzenoss at Mon Oct 20 13:06:50 2008

[09:56] kells Hey Matt!

[09:57] * kneer0w (n=kneer0w@modemcable146.112-70-69.static.videotron.ca) has joined #zenoss

[09:58] * ktwilight has quit ("dead")

[09:58] mrayzenoss Good morning Americas, good afternoon/evening everyone else

[09:59] * mrayzenoss has changed the topic to: Zenoss Development is here

[09:59] * cote (n=cote@adsl-71-145-189-64.dsl.austtx.sbcglobal.net) has joined #zenoss

[09:59] kells G'day, g'day

[10:01] * ktwilight (n=ktwiligh@28.99-66-87.adsl-dyn.isp.belgacom.be) has joined #zenoss

[10:04] * spike_cb (n=root@ec2-75-101-159-17.compute-1.amazonaws.com) has joined #zenoss

[10:04] kells Good morning (おはようございます)

[10:05] spike_cb hi is this the zenoss chat ?

[10:05] kells Yep

[10:05] kells Got a question, comment?

[10:06] spike_cb yeah, we've been using Zenoss for about 3 months now. It's great, but it has its quirks

[10:06] nemo_ d.setGroups(pgroup)

[10:07] nemo_ should that be d.setGroups("/Groups/Something/Something")

[10:07] nemo_ or another format

[10:07] kells Thanks for the kind words :)

[10:08] kells nemo: Could you provide a little more context?  I'm not quite following you....

[10:08] spike_cb We're running the Enterprise version 2.2.1

[10:08] kells Nice!

[10:09] nemo_ kells, question is aimed at those who understand, no real point in explaining

[10:09] nemo_ i read all the docs, did you ? :)

[10:10] spike_cb kells: we've got the problem with /perf/snmp going to some windows servers and complaining that F:\ or Y:\ drive is not mounted, which is kind of annoying

[10:10] kells nemo:  I try to read the code, not the docs  :)

[10:11] * bootay (n=bootay@rrcs-97-77-9-2.sw.biz.rr.com) has joined #zenoss

[10:11] * mcadmin (n=ckeyes@198.136.41.221) has joined #zenoss

[10:11] * cluther (n=cluther@static-72-81-253-234.bltmmd.fios.verizon.net) has joined #zenoss

[10:11] kells spike: This is using the Snmp Informant approach, I take it?

[10:13] spike_cb kells: yes

[10:14] spike_cb kells: is there any way to quickly disable that kind of drive check ?

[10:15] kells If you go to the device's OS tab...

[10:15] kells (going there now)

[10:16] spike_cb kells: you mean deleting them off "File systems" ? they'll just pop right back in

[10:16] kells Nope, there should be another option that you can set to ignore them

[10:16] * mccools (n=root@ec2-75-101-159-17.compute-1.amazonaws.com) has joined #zenoss

[10:16] kells I can't find a windows box in the lab that I've got modeled so I'll just try another server

[10:17] spike_cb hey mccools, its me

[10:18] kells Find the file system under the File Systems table, click on the drive, and then click on the 'Monitor' checkbox

[10:18] kells That should stop it from generating events

[10:19] * ericnewton (n=ecn@static-72-81-253-234.bltmmd.fios.verizon.net) has joined #zenoss

[10:20] spike_cb kells: there's no checkbox for monitor for filesystems .. but if you do click on any drive, you'll get the drive properties and there's the "monitor" option with "true|false"

[10:20] spike_cb kells: is that what you're referring to ?

[10:20] kells Yes
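
(For reference, a minimal zendmd sketch of the 'Monitor' change discussed above. The device name 'winserver1' and the F: drive match are hypothetical, and the exact attribute handling may differ between Zenoss versions.)

    # zendmd: turn off monitoring for one file system on one device
    d = dmd.Devices.findDevice('winserver1')   # hypothetical device name
    for fs in d.os.filesystems():
        if fs.mount.startswith('F:'):          # pick the drive to silence
            fs.monitor = False                 # same effect as setting 'Monitor' to false in the UI
    commit()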

[10:20] cluther spike_cb: You could check out the zFileSystemMapIgnoreNames property. It is a regex that will cause any file systems that match it to not be modeled.

[10:21] kells That's a much better way :)

[10:21] mrayzenoss By my count there are at least 7 Zenoss folks in the channel, so we can probably cover most questions.

[10:21] mrayzenoss I'm working on pushing out a new community beta, finally with stack installers

[10:21] spike_cb cluther: yes thanks. I remember I put /F:/ or something there before but it didn't do anything

[10:22] cluther spike_cb: Leave off the //s

[10:22] cluther spike_cb: And remember to remodel the device after you change that property.

[10:22] spike_cb cluther: I see, so it's just the string to match then

[10:22] kells nemo:  It appears that it does take a list of group names

[10:23] cluther spike_cb: It is a regular expression. You've been using Perl too long if you think regular expressions are always surrounded with forward slashes. :)

[10:23] spike_cb cluther: oh remodel ? I haven't done that one before

[10:23] kells ... or sed, vi....

[10:23] spike_cb cluther: hehe yes, I'm a "use strict" type

[10:25] kells [fFyY]:  should do the trick, then, right?
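
(A minimal zendmd sketch of cluther's zFileSystemMapIgnoreNames suggestion, assuming the property is set on a single device; it could equally be set on a device class such as /Devices/Server/Windows so it applies to everything underneath. The device name is hypothetical.)

    # zendmd: ignore F: and Y: drives during modeling
    d = dmd.Devices.findDevice('winserver1')                  # hypothetical device name
    d.setZenProperty('zFileSystemMapIgnoreNames', '[fFyY]:')  # plain regex, no surrounding slashes
    commit()
    # remodel the device afterwards so the matching file systems are dropped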

[10:26] spike_cb cluther: how do you remodel after changing the zProperties config ?

[10:26] * dorferiferon (n=jeisenbe@74.202.159.54) has joined #zenoss

[10:26] cluther spike_cb: Menu -> Manage -> Model Device

[10:27] kells The remodelling also occurs automatically every xx hours

[10:28] kells 12?
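
(A remodel can also be triggered outside the UI. A minimal sketch, assuming zendmd and a hypothetical device name; running 'zenmodeler run -d <device>' from the shell is the usual command-line equivalent.)

    # zendmd: remodel one device (same effect as Menu -> Manage -> Model Device)
    d = dmd.Devices.findDevice('winserver1')   # hypothetical device name
    d.collectDevice()                          # runs the modeler plugins against this device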

[10:28] mccools Hi all.  First, thanks for hosting this!

[10:28] mccools Second, I'm wondering about best practices for monitoring a hot/cold cluster.  We have a lot of instances where we use ha.d to float a virtual IP and services between two physical machines.  We have modeled this as a device for the "master service" and then two devices for the physical hardware itself, but we keep running into issues where attributes of the clustered service get associated with one of the physical machines only...

[10:29] spike_cb cluther: ah, I see... so do you have to remodel a device every time you change the zProperties? or every time you change anything

[10:30] kells mccools: Is using the 'lock' on those attributes not working for you?

[10:30] mccools Doesn't seem to, particularly locking OS processes

[10:32] mccools kells: honestly it just seems inconsistent, it'll lock and work for a while, then weeks later we'll notice stuff is pinned directly on the device again. Possible we're remodeling and not relocking or something?

[10:33] kells Is this after the migration from one node to another that you notice, or even if nothing is done to the service?

[10:34] kells IIRC, you shouldn't need to re-lock

[10:34] * cgibbons (n=cgibbons@rrcs-97-77-9-2.sw.biz.rr.com) has joined #zenoss

[10:35] mccools kells: After migration is the only time we've noticed

[10:36] kells So when you lock the service, it's on one of the physical nodes, right?  Or does your virtual IP show up as a separate device in the device list?

[10:36] * ke4qqq-afk is now known as ke4qqq

[10:37] mccools kells: The VIP is a separate device, and we try to fully model that one with the 'cluster' resources.  The physical machines are also their own device entries and we strip away cluster resources from the active one (since they're on the VIP)

[10:39] kells Do you have other os processes on the physical machines too?  Chet's suggesting removing the HRSWRunMap from the list of collector plugins on the physicals

[10:40] mccools kells: Nope, only ones we care about are the ones on the VIP.  We're only really concerned with performance/services of the VIP, and the fact that the non-active physical machine is up, so that might work
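
(A rough zendmd sketch of the HRSWRunMap suggestion above, assuming the standard plugin name 'zenoss.snmp.HRSWRunMap' and a hypothetical device name; the same edit could also be made on a device class rather than per device.)

    # zendmd: drop the HRSWRunMap modeler plugin from one physical node
    d = dmd.Devices.findDevice('physical-node-1')   # hypothetical device name
    plugins = [p for p in d.zCollectorPlugins if p != 'zenoss.snmp.HRSWRunMap']
    d.setZenProperty('zCollectorPlugins', plugins)
    commit()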

[10:40] * mcadmin has quit ()

[10:43] * dorferiferon1 (n=jeisenbe@74.202.159.54) has joined #zenoss

[10:45] mccools kells: I'll try to get something I can reproduce and then play around with removing that collector, thanks

[10:45] mccools Are you guys going to be at LISA again this year?

[10:45] mrayzenoss Yes

[10:46] mrayzenoss I'll be there with npmccallum and 1 other Zenossian

[10:46] mrayzenoss I'll blog about it about a week before LISA

[10:46] npmccallum mrayzenoss: maybe ;)

[10:46] nemo_ mrayzenoss, where can i find documentation on setGroups() function ?

[10:46] mccools Great, well maybe I'll see you there; enjoyed the vendor BoF last year (and you got an enterprise sale out of it)

[10:48] mrayzenoss nemo_: not much in the API docs, http://www.zenoss.com/community/docs/zenoss-api-docs/2.2/ZenModel.Device.Device-class.html#setGroups

[10:48] mrayzenoss http://www.zenoss.com/community/docs/zenoss-api-docs/2.2/identifier-index-S.html

[10:48] adytum-bot Title: ZenModel.Device.Device (at www.zenoss.com)

[10:48] adytum-bot Title: Identifier Index (at www.zenoss.com)

[10:48] mrayzenoss oh quick heads up for everyone in the channel, search.zenoss.com is going live soon

[10:48] cgibbons oooh

[10:49] mrayzenoss it's publicly available, haven't hyped it yet. We got ourselves a Google appliance for searching everything

[10:52] * dorferiferon1 has quit (Read error: 54 (Connection reset by peer))

[10:52] nemo_ mrayzenoss, should that be d.setGroups("/Groups/Something/Something")

[10:52] nemo_ it's really not obvious at all.

[10:52] * dorferiferon1 (n=jeisenbe@74.202.159.54) has joined #zenoss

[10:53] nemo_ i even grepped the code to find it :)

[10:53] cluther nemo_: That's it.

[10:53] * dorferiferon1 has quit (Read error: 104 (Connection reset by peer))

[10:54] * mccools has quit ("ircII EPIC4-2.4 -- Are we there yet?")

[10:54] * dorferiferon1 (n=jeisenbe@74.202.159.54) has joined #zenoss

[10:54] cluther nemo_: oops... that isn't it. d.setGroups("/Something/Something")

[10:54] cluther nemo_: Leave off the leading /Groups
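
(Putting the setGroups() pieces together: a minimal zendmd sketch with a hypothetical device name. The path is relative to /Groups, so the leading '/Groups' is left off, and per kells' earlier comment the method also appears to accept a list of such paths.)

    # zendmd: assign a device to an existing group
    d = dmd.Devices.findDevice('someserver')   # hypothetical device name
    d.setGroups('/Something/Something')        # note: no leading /Groups
    commit()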

[10:55] * dorferiferon has quit (Read error: 110 (Connection timed out))

[10:55] * dorferiferon1 has quit (Read error: 104 (Connection reset by peer))

[10:57] spike_cb a general question guys, is it good practice to do a clear heartbeat, push changes, and remodel to make sure that the change you put in takes effect?

[10:57] * kneer0w has quit (lem.freenode.net irc.freenode.net)

[10:57] * near_ has quit (lem.freenode.net irc.freenode.net)

[10:57] * zenoss-logger has quit (lem.freenode.net irc.freenode.net)

[10:57] * EricL has quit (lem.freenode.net irc.freenode.net)

[10:57] * nemo_ has quit (lem.freenode.net irc.freenode.net)

[10:57] * shibby has quit (lem.freenode.net irc.freenode.net)

[10:57] * mf2ng has quit (lem.freenode.net irc.freenode.net)

[10:57] * notorp has quit (lem.freenode.net irc.freenode.net)

[10:57] * bzed has quit (lem.freenode.net irc.freenode.net)

[10:58] * shephard has quit ()

[10:58] * dorferiferon (n=jeisenbe@74.202.159.54) has joined #zenoss

[11:00] kells The heartbeat is for zenhub to other daemon communications, so you shouldn't need to do anything with that for any of your device changes.

[11:00] * kneer0w (n=kneer0w@modemcable146.112-70-69.static.videotron.ca) has joined #zenoss

[11:00] * near_ (n=near@83-153-92-185.rev.libertysurf.net) has joined #zenoss

[11:00] * zenoss-logger (n=zenoss-l@comm1.zenoss.com) has joined #zenoss

[11:00] * EricL (n=eric@jarbeeg.chal.net) has joined #zenoss

[11:00] * nemo_ (n=nemo@213.244.168.131) has joined #zenoss

[11:00] * bzed (n=bzed@devel.recluse.de) has joined #zenoss

[11:00] * shibby (n=jhibbets@nat/redhat/x-1734379f05de003e) has joined #zenoss

[11:00] * notorp (i=samppa@lobster.avenla.fi) has joined #zenoss

[11:00] * mf2ng (n=mf2hd@flood.fi) has joined #zenoss

[11:01] kells The remodel should be necessary and sufficient

[11:01] kneer0w oof.

[11:01] kneer0w split -_-

[11:01] kells Chet's mentioning that the 'push changes' shouldn't be necessary

[11:02] spike_cb kells: I see, thanks!

[11:07] mrayzenoss Any quick questions, before a couple of us duck out?

[11:08] kells I'm told that 'push changes' shouldn't generally be necessary except in a few cases:

[11:08] kells * changes to relations

[11:08] kells * deleting anything except a device

[11:09] kells * possibly for support

[11:09] kells i.e. support finds something strange that is not updating properly, and as a short-term fix asks you to push changes

[11:12] * kneer0w has quit ()

[11:14] mrayzenoss Thanks again to everyone who showed up, we'll have the log up soon and we'll be back officially in 2 weeks, but Zenossians drop in from time to time.

[11:17] * kells (n=kells@S0106000625f7b75b.cg.shawcable.net) has left #zenoss
