From: skyclan@gmx.net
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Re: monit and friends.
Date: Wed, 18 Oct 2017 15:45:23 +0200
Message-ID: <a7d03aee-988d-f100-8ed3-3e6e9bfb4ecb@kiwifrog.net>
In-Reply-To: <e57aa55a-c651-2c39-34a8-ab13a4a981f1@gmail.com>
Hi Alan,
This isn't exactly what you describe for your needs, but have you
considered handling auto-remediation with a dedicated, off-the-shelf
platform? I've been using StackStorm (https://stackstorm.com/) for the
last year in an environment of ~1500 physical servers for this purpose,
and it's been quite successful.
It has been handling cases like restarting SNMP daemons that segfault,
Hadoop instances that lose contact with the ZooKeeper cluster, and
nginx daemons that stop responding to requests (detected by analysing
the last write time of nginx's access logs); the list goes on.
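To give a concrete idea, that nginx check boils down to comparing the
access log's last write time against an idle threshold. A minimal
standalone sketch (the log path and threshold would be whatever suits
your environment; this is illustrative, not the actual StackStorm sensor):

```python
import os
import time

def nginx_is_stale(access_log, max_idle_seconds):
    """Return True if the access log has not been written to within
    max_idle_seconds, which we treat as a sign nginx has stopped
    serving requests and should be restarted."""
    last_write = os.path.getmtime(access_log)
    return (time.time() - last_write) > max_idle_seconds
```

On a box with any real traffic a stale access log is a surprisingly
reliable health signal, and it needs no changes to nginx itself.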
StackStorm is an event-driven platform with many integrations
available, allowing it to interact with internal and external service
providers. It's Python-based and can use SSH to execute remote
commands, which sounds like an acceptable approach since you're
already using Ansible.
Connecting SNMP traps to StackStorm's event bus to trigger automated
responses based on the trap contents would be in line with its common
use cases.
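For a feel of what that looks like, a StackStorm rule ties a trigger
to an action via matching criteria. The trigger name, OID, and host
expression below are hypothetical placeholders (they depend on which
pack feeds traps onto the bus); only the overall trigger/criteria/action
layout and the built-in core.remote SSH action reflect StackStorm itself:

```yaml
---
name: "restart_snmpd_on_trap"
pack: "examples"
description: "Restart snmpd when a matching trap arrives on the event bus."
enabled: true

trigger:
  # Hypothetical trigger name; in practice this comes from whatever
  # pack/sensor feeds SNMP traps into StackStorm's event bus.
  type: "snmp.trap"

criteria:
  trigger.oid:
    type: "equals"
    pattern: "1.3.6.1.4.1.2021.251.1"   # placeholder OID

action:
  ref: "core.remote"   # StackStorm's built-in run-command-over-SSH action
  parameters:
    hosts: "{{ trigger.source_host }}"
    cmd: "systemctl restart snmpd"
```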
Regards,
Carlos
On 16/10/17 17:50, Alan McKinnon wrote:
> Nagios and I go way back, way way waaaaaay back. I now recommend it
> never be used unless there really is no other option. There are just so
> many problems with actually using the bloody thing, but let's not get
> into that :-)
>
> I have a full monitoring system that tracks and reports on the state of
> most things, but as it's a monitoring system it is forbidden to make
> changes of any kind at all, and that includes restarting failed daemons.
> Turns out that daemons that fail for no good reason are becoming more
> and more common in this day and age, mostly because we treat them like
> cattle, not pets, and use virtualization and containers so much. And
> there's our old friend the Linux oom-killer....
>
> What I need here is a small app that will be a constrained,
> single-purpose watchdog. If a daemon fails, the watchdog attempts 3
> restarts to get it going, and records the fact it did it (that goes into
> the big monitoring system as a reportable fact). If the restarts fail,
> then a human needs to attend to it, as it is serious or beyond the
> scope of a watchdog.
>
> Like you, I'm tired of being woken at 2am because something dropped 1
> ping when the nightly database maintenance fired up on the vmware
> cluster:-)
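The restart-three-times-then-escalate logic Alan describes is small
enough to sketch in a few lines. The check, restart, and record
callables here are hypothetical stand-ins for the real health probe,
init-system call, and the hook that reports into the big monitoring
system:

```python
def watchdog(check, restart, record, max_attempts=3):
    """Try to bring a failed daemon back up.

    check()   -> True if the daemon is healthy
    restart() -> attempt one restart
    record(n) -> note that restart attempt n was made (this is the
                 reportable fact that feeds the monitoring system)

    Returns "ok" if the daemon is (or becomes) healthy, or
    "needs-human" if max_attempts restarts did not fix it.
    """
    if check():
        return "ok"
    for attempt in range(1, max_attempts + 1):
        restart()
        record(attempt)
        if check():
            return "ok"
    return "needs-human"
```

The key constraint is baked in: the watchdog only ever restarts and
records; anything it can't fix in three tries is handed to a human
rather than retried forever.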
Thread overview: 9+ messages
2017-10-16 12:11 [gentoo-user] monit and friends Alan McKinnon
2017-10-16 15:08 ` [gentoo-user] " Ian Zimmerman
2017-10-16 15:12 ` Alan McKinnon
2017-10-16 15:41 ` Mick
2017-10-16 15:50 ` Alan McKinnon
2017-10-16 16:10 ` Ralph Seichter
2017-10-16 16:18 ` Alan McKinnon
2017-10-17 0:13 ` Michael Orlitzky
2017-10-18 13:45 ` skyclan [this message]