BigSmoke

Smokes your problems, coughs fresh air.


Exhaling on YouTube

I’ve created a YouTube channel separate from my private account and branded it “BigSmoke”. The channel’s purpose is to breathe some fresh air into online discussions that I follow(ed). It’s the sort of content that I used to want to put more of on BigSmoke, but which I now find better suited to long-form discussions than to laying down my views from some ivory tower.

The first three puffs of fresh air feature my brother Jorrit:

  1. The first puff was recorded shortly after he (and I too, for the nth time) quit caffeine. The cue for me to do that podcast (and to stop putting off this creative endeavor indefinitely) was when he told me that quitting caffeine took a heavier toll on his body and mind than quitting smoking and drinking at the same time had a few months earlier. I thought that was a great story to put into perspective that yes, caffeine really is a serious, addictive drug that can interfere not just with your dreams but actually with your dreaming!

    Production-wise, the worst part of this first podcast is that my face is tiny, because I used Google Meet and neglected to install a browser extension to undo its limited layout support, or even just to click my own face when I was talking.

  2. The 2nd puff of fresh air centered around meaning. Without the caffeine in my system, I was having more trouble than usual finding meaning in my “mostly for money” job that’s really doing nothing to make the world a better, more beautiful place. Also, my self-discipline had declined to a long-time low, leaving too little time and energy around work-work for more creative, meaningful endeavors (such as doing podcasts).

    There was a bit of a production problem with my 2nd puff of fresh air. The one published is a re-take of the same discussion we’d had some days earlier, for which I had recorded the empty audio stream of my docking station’s unconnected microphone input instead of the laptop port that my microphone was actually plugged into. At least I did find Jitsi, which allowed easier side-by-side video frames.

  3. In the 3rd puff of fresh air, we zoom in on some topics that we brushed past in the 2nd without really touching them. I talk about my pain and shame at being mostly just another cooperating cog in the machine that is wreaking planetary-scale havoc and grinding ecosystems all over the world out of existence. Jorrit’s focus is on the harm that our culture (and also the “away with us” culture of which I’m sometimes a part) does to human happiness. We try to make our views on self-discipline collide, but we end up finding more agreement than we expected. Most of our disagreement turns out to be superficially bound to societal structures which we would both rather see transformed than preserved in their current sickly form.

    We switched back to Google Meet for this 3rd recorded conversation, because the free Jitsi server we used was performing shakily that day.

Icinga 2 dependencies, downtimes and host/service unreachability

There are a few gotchas you have to be aware of when working with Icinga 2 dependencies and downtimes.

Gotcha 1a: Downtimes and dependencies are independent of each other

Intuitively, I had expected downtime to always traverse down the parent-child dependency tree. It doesn’t. It’s opt-in. The ScheduledDowntime.child_options attribute can be set to DowntimeTriggeredChildren or DowntimeNonTriggeredChildren to make it so. (These options are called “schedule triggered downtime for all child hosts” and “schedule non-triggered downtime for all child hosts”, respectively, in Icingaweb2.) With one of these options set, runtime downtime objects will also be created for (grand)child hosts (but not services; see Gotcha 1b).
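For example, a ScheduledDowntime that opts in could look roughly like this (a minimal sketch; the author, comment, maintenance window, and assign rule are made up for illustration):

apply ScheduledDowntime "weekly-maintenance" to Host {
  author = "icingaadmin"
  comment = "Weekly maintenance window"
  ranges = {
    saturday = "02:00-04:00"
  }

  # Also create downtime objects for all (grand)child hosts.
  # DowntimeNonTriggeredChildren is the other opt-in value.
  child_options = DowntimeTriggeredChildren

  assign where host.vars.maintenance == true
}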

Gotcha 1b: Downtimes never traverse to child hosts’ services

In Icingaweb2, when you’re scheduling downtime for a host and choose to also “schedule (non-)triggered downtime for all child hosts”, this excludes services on those child hosts. The “All Services” toggle applies only to the current host. There is an (open, as of May 5, 2020) Icingaweb 2 feature request to address this. So far, the only attempt to implement the Icinga 2 side of this was shot down by the Icinga maintainers on the grounds that it would make things too complex; @dnsmichi would prefer rethinking the current options.

If you want to make it easy to schedule downtime for a dependency chain involving numerous hosts and/or services, I recommend using a single HostGroup and/or ServiceGroup, so that you can Shift-select all dependent objects in Icingaweb2 and schedule the downtime in batch. In the worst case, you then have to select all objects in each of the two groups separately to plan the bulk downtime twice. In some cases just a HostGroup will do (because in Icingaweb2 you can include downtime for all of a host’s services), but this won’t be sufficient if you have services that depend directly on other services rather than on hosts.
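A rough sketch of what that could look like (the group names and the vars.chain custom variable are invented for this example):

object HostGroup "shop-chain" {
  display_name = "Shop dependency chain"

  # Hosts join the group automatically via a custom variable.
  assign where host.vars.chain == "shop"
}

object ServiceGroup "shop-chain-services" {
  display_name = "Shop dependency chain services"
  assign where host.vars.chain == "shop"
}

In Icingaweb2, you can then filter on the group, Shift-select everything in it, and schedule the downtime once per group.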

From the configuration, it’s not at all possible to include all of a host’s services in ScheduledDowntime objects. But here it’s not really an issue, because it’s enough to abstract your downtime particulars into a template and apply that to both the Host and Service objects that are to be affected.
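Something along these lines (again a sketch; the names, maintenance window, and custom variable are made up):

template ScheduledDowntime "shop-maintenance-defaults" {
  author = "icingaadmin"
  comment = "Planned shop maintenance"
  ranges = {
    sunday = "01:00-03:00"
  }
}

apply ScheduledDowntime "shop-maintenance" to Host {
  import "shop-maintenance-defaults"
  assign where host.vars.chain == "shop"
}

apply ScheduledDowntime "shop-maintenance" to Service {
  import "shop-maintenance-defaults"
  assign where host.vars.chain == "shop"
}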

Gotcha 2a: Child hosts will (almost) never become UNREACHABLE when the parent host fails and Dependency.disable_checks == true

object Host "A" {
}

object Host "B" {
}

object Dependency "B-needs-A" {
  parent_host_name = "A"
  child_host_name = "B"
  disable_notifications = true
  disable_checks = true
}

In this example, when host A goes down, the B-needs-A dependency is activated and notifications about B are suppressed (because disable_notifications == true). However, because checks are also disabled, host B never becomes UNREACHABLE, unless you manually/explicitly trigger a check via the Icingaweb2 interface.

This means that any service on the child host (B in this example) will still generate notifications, because the (default) host-service dependencies will not be activated until the child host becomes UNREACHABLE. (Of course, any other non-UP state of the child host would also activate the host-service dependencies.) The same goes for grandchild hosts.

So, if you want a child host to become UNREACHABLE when the parent host fails, Dependency.disable_checks must be false. Only once its own check fails will the child host become UNREACHABLE.
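Applied to the example above, the dependency would then look like this:

object Dependency "B-needs-A" {
  parent_host_name = "A"
  child_host_name = "B"
  disable_notifications = true

  # Keep checks enabled, so that B's own check can fail and
  # Icinga can mark B as UNREACHABLE instead of merely DOWN.
  disable_checks = false
}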

Gotcha 2b: Grandchild dependencies don’t become active until the child/parent in between them fails

Dependencies are always between a parent and a child. Icinga never traverses further along the tree to determine that a grandchild should be UNREACHABLE rather than DOWN.

Take the following setup:

object Host "A" {
}

object Host "B" {
}

object Host "C" {
}

object Dependency "B-needs-A" {
  parent_host_name = "A"
  child_host_name = "B"
  disable_notifications = true
}

object Dependency "C-needs-B" {
  parent_host_name = "B"
  child_host_name = "C"
  disable_notifications = true
}

If host A fails, host B doesn’t become UNREACHABLE until its check_command returns a not-OK status. The same goes for hosts B and C. And, despite disable_notifications = true, problems with host C will generate notifications as long as host B is UP. Therefore, to avoid needless notifications, you must make sure that the hosts go down in the order of the dependency chain. You can do this by playing with check_interval, max_check_attempts, and retry_interval, as sketched below. And make sure that disable_checks is always false for any intermediate host or service in the dependency chain!
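A rough sketch of what such tuning could look like (the numbers and addresses are arbitrary; the point is that the parent reaches a hard DOWN state before its children do):

object Host "A" {
  check_command = "hostalive"
  address = "192.0.2.1"
  check_interval = 30s
  retry_interval = 15s
  max_check_attempts = 2   # A reaches a hard DOWN state quickly
}

object Host "B" {
  check_command = "hostalive"
  address = "192.0.2.2"
  check_interval = 1m
  retry_interval = 30s
  max_check_attempts = 5   # B (and, further down, C) fails later than A
}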

Setting up a Zimbra authenticated proxy

On March 18th, Synacor posted about a critical Zimbra security vulnerability (CVE-2019-9670), which was quickly exploited in the wild and subsequently evolved to be harder to eradicate.

I’ve always been wary of the authentication implementations of hosted applications, so I decided to block the Zimbra web mail interface using iptables (firewall) and only allow access through a separately hosted HTTP proxy which requires authentication. This way, no stray requests to API endpoints accidentally left open will be allowed. That is, almost none: I had to add exceptions to allow WebDAV traffic for contact and calendar synchronization. If you don’t use that, the exceptions can be left out.
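The firewall part could look roughly like this (a sketch, not my literal rules; 192.0.2.10 stands in for the proxy’s address, and Zimbra’s web interface is assumed to listen on the standard HTTPS port):

# Let the authenticating proxy (and nothing else) reach Zimbra's HTTPS interface.
iptables -A INPUT -p tcp --dport 443 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP

# Plain HTTP isn't needed at all when everything goes through the proxy.
iptables -A INPUT -p tcp --dport 80 -j DROP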

Below is an example Apache configuration. Apache requires several modules to be enabled, which is an exercise left to the reader. Also, a similar proxy is easily implemented in Nginx; I just happened to have a spare Apache server.
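On a Debian-style Apache, enabling the modules used in the configuration below comes down to something like this (verify against your own distribution and existing setup):

a2enmod ssl rewrite proxy proxy_http
systemctl restart apache2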

Note that it’s best to not make the proxy the default virtual host on the web server. This avoids it being seen by IP probes. If set up properly, there is no trace visible from the outside that you’re using this proxy, and if you make it such that access to it requires the actual domain name (like mywebmail.example.net), it’s very hard for bots to see it (especially if you make the domain name a bit more unguessable).

When you access the web mail page, you first have to authenticate using old-style HTTP authentication:

[Screenshot: zimbra-pre-login HTTP authentication prompt]

Anyway, here’s the proxy config:

<VirtualHost *:80>
        RewriteEngine on
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [L,R]
        ServerName webmail.example.net
</VirtualHost>
 
<VirtualHost *:443>
        ServerName webmail.example.net
        ServerAdmin webmaster@localhost
 
        SSLEngine on
        SSLCertificateFile    /etc/letsencrypt/live/webmail.example.net/cert.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/webmail.example.net/privkey.pem
        SSLCertificateChainFile /etc/letsencrypt/live/webmail.example.net/chain.pem
 
        SSLProxyEngine On
        ProxyPass        / https://mail.example.net/
        ProxyPassReverse / https://mail.example.net/
 
        # For Webdav/carddav/caldav
        <Location /dav>
                Satisfy any
                Require all granted
        </Location>
 
        # For Let's Encrypt
        <Location /.well-known/>
                Satisfy any
                Require all granted
        </Location>
 
        # For Webdav/carddav/caldav
        <Location /principals/>
                Satisfy any
                Require all granted
        </Location>
 
        # For Webdav/carddav/caldav
        <Location /SOGo/>
                Satisfy any
                Require all granted
        </Location>
 
        # For Webdav/carddav/caldav
        <Location /groupdav.php>
                Satisfy any
                Require all granted
        </Location>
 
        <Location />
                AuthType Basic
                AuthName "Zimbra webmail pre-login"
                AuthUserFile /etc/apache2/htpasswd/webmail
                Require valid-user
 
                # Exception IPs: no auth needed (for monitoring for instance)
                Require ip 1.2.3.4
        </Location>
 
        ErrorLog ${APACHE_LOG_DIR}/webmail.example.net/error.log
        CustomLog ${APACHE_LOG_DIR}/webmail.example.net/access.log combined
</VirtualHost>

Git and Tig config base

Here’s just a quick .gitconfig:

[core]
        commentchar = %
[tig "color"]
        date = cyan black bold
        diff-header = cyan black
[tig]
        ignore-case = yes

The colors are there to make Tig readable in Git Bash on Windows, where the default dark blue is impossible to read.

And yes, I still have to set up my dotfiles GitHub stuff.

Attempt to repair short on motherboard

At work, we had a small embedded PC with a short in its mainboard. I attempted a fix, mostly for educational purposes. It wasn’t successful, but I wanted to post the method and result anyway. The method I used is explained elaborately on YouTube, so go hunt there for more details.

First, I wanted to find the short. I opted for the method of connecting a current-limited power supply and slowly cranking up the current until I could feel something heating up. To do so, I first had to solder some wires to the input power jack.

I set the supply to 1 V and started increasing the current. Given that small resistors have a rating of 250 mW, a current of 0.25 to 0.5 A (giving 0.25 to 0.5 W at 1 V) should be safe enough to gently heat up the shorted component. In the end, I had to increase the current to about 1.5 A to detect a rising temperature.

I removed the cap in question:

Then I replaced it with a hack (because I had no SMD parts at hand):

The short was gone, but it still didn’t power up. Then I gave up.


