BigSmoke

Smokes your problems, coughs fresh air.


Getting munin to run every 10 minutes

Munin is kind of inefficient, and on my 2 GHz P4, running it every 5 minutes is too often: the munin processes keep dying because locks already exist. You can’t simply run the munin-cron script at 10-minute intervals either, because then rrdtool will leave gaps in the graphs (munin expects fresh data every 5 minutes).

The munin-cron script is nothing but a wrapper around munin-update, munin-graph and munin-html. I made my own wrappers, which I then run from separate cron jobs:

# cat /usr/local/bin/munin-graph 
#!/bin/bash
 
# file copied from /usr/bin/munin-cron and adjusted.
 
# This used to test if the executables were installed.  But that is
# perfectly redundant and suppresses errors that the admin should see.
 
#/usr/share/munin/munin-update $@ || exit 1
 
# The result of munin-limits is needed by munin-html but not by
# munin-graph.  So run it in the background now, it will be done
# before munin-graph.
 
# When running update at */5 and graph at */10, munin-update and munin-graph
# will be started at the same time, and this sleep is to prevent a
# race condition on the update-running file.
sleep 5
 
while [ -f "/var/run/munin/update-running" ]; do
        sleep 1
done
 
/usr/share/munin/munin-limits "$@" &
 
nice /usr/share/munin/munin-graph --cron "$@" 2>&1 | fgrep -v "*** attempt to put segment in horiz list twice"
 
wait
 
nice /usr/share/munin/munin-html "$@" || exit 1

# cat /usr/local/bin/munin-update-data
#!/bin/bash
 
# file copied from /usr/bin/munin-cron and adjusted.
 
# This used to test if the executables were installed.  But that is
# perfectly redundant and suppresses errors that the admin should see.
 
runfile="/var/run/munin/update-running"
touch "$runfile"
# Make sure the flag file disappears even if munin-update fails;
# otherwise the munin-graph wrapper would wait for it forever.
trap 'rm -f "$runfile"' EXIT
 
/usr/share/munin/munin-update "$@" || exit 1
 
rm "$runfile"
 
# The result of munin-limits is needed by munin-html but not by
# munin-graph.  So run it in the background now, it will be done
# before munin-graph.
 
#/usr/share/munin/munin-limits $@ &
 
#nice /usr/share/munin/munin-graph --cron $@ 2>&1 | fgrep -v "*** attempt to put segment in horiz list twice"
 
#wait
 
#nice /usr/share/munin/munin-html $@ || exit 1 

# cat /etc/cron.d/munin
#
# cron-jobs for munin
#
 
MAILTO=root
 
#*/5 * * * *     munin if [ -x /usr/bin/munin-cron ]; then /usr/bin/munin-cron; fi
*/5 * * * *     munin if [ -x /usr/local/bin/munin-update-data ]; then /usr/local/bin/munin-update-data; fi
*/10 * * * *    munin if [ -x /usr/local/bin/munin-graph ]; then /usr/local/bin/munin-graph; fi
14 10 * * *     munin if [ -x /usr/share/munin/munin-limits ]; then /usr/share/munin/munin-limits --force --contact nagios --contact old-nagios; fi
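To double-check that the 5-minute updates keep rrdtool happy despite the 10-minute graph runs, you can peek into one of the RRD files directly; rows full of NaN would mean gaps. A rough sketch (the exact file name under /var/lib/munin depends on your group, host and plugin names):

rrdtool fetch /var/lib/munin/localdomain/localhost.localdomain-load-load-g.rrd AVERAGE -s -1h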

My zeroth year at university

Maybe my biggest accomplishment to date—maybe my only real accomplishment, if your glasses are so colored by society’s standards—has been to be accepted to the University of Groningen as a full-time biology student. To apply, I had to send my curriculum vitae and a letter of motivation. Which motivation? I wasn’t so sure that I’d like to be a student. Actually, I had been quite certain for most of my adult life that I really did not want to study and waste all that precious time for a few crumbs of knowledge.

But, I overdosed on spacecake and was having a bad trip. I was already depressed. My life hadn’t worked out. I hadn’t turned out to be the type of person that I wanted to be. None of the success. None of the happiness. Little satisfaction. Just some stubborn fantasies about how cool me and my life would be if only…

The physical and mental stress caused by the fear that underlies most major depressions overtook me; I was so terribly afraid of what others—that’s you—might think of me. This sensation wasn’t new. What was new was a lasting awareness of the extent to which this social anxiety directed my life, and a stronger sense of how it might have affected my major life decisions. I felt (more than thought) that, maybe, I could try the normal life of a college student.

At the same time, I was very doubtful, because I had occasionally tried to fit into the constraints of society. It never fit. I always had to give up on the straight path. So why would this work?

I did know that I was interested in biology, and that—by myself—I had never really dug into it, apart from enjoying an Attenborough documentary or two. So, I investigated my options and decided to apply for university.

The next couple of months are a blur of learning, intensifying bouts of depression, despair and the occasional glimmer of hope. Never having finished even the lowest level of high school, I had to face a colloquium doctum (an entrance examination), where my knowledge of mathematics, physics, chemistry and biology would be tested.

During the first examination, only my understanding of biology was sufficient. My math, physics and chemistry were terrible (a 2.5, a 2, and a 1 (out of 10) respectively), just above elementary school level.

I had only two attempts left if I wanted to start studying after the 2011 summer break. The year after, I’d be 30 and no longer eligible for state support as a student.

During the next attempt, I flunked all three remaining subjects (although physics had climbed to a 4). Then the last attempt approached. I was nervous as hell and felt ill-prepared at best. I was high on sleep deprivation during the physics part. Yet, I was confident. Mathematics went terribly. It was mostly calculus, and the statistics part was also much harder than the practice exams that I’d used.

In my head, I had already resigned myself to failure, because I was certain that I had failed math. I decided I wanted to know how much my chemistry had improved since my last attempt, though. (It was so bad then that it wasn’t even graded.) Surprisingly, chemistry went somewhat okay. At least I had made a somewhat informed attempt at an answer on most questions.

Then came my grades for math and physics: a 5.5 and a 5.9. How was this possible? I had already been planning to go back to France to work with my brother. A 5.5 was exactly enough to meet the requirements.

The first semester would start in a week, but I’d have to wait a week for the chemistry grade. This was thrilling, in a good way and a bad way. Finally, the grade came in, just in time for me to know whether it’d make sense to come to university the next day for all the introductions that would take place.

The next day I was sitting in a lecture hall, filled to the brim with hundreds of 18-year-olds. In just a couple of months I had gone from a 0 (that’s a zero) on chemistry to a whopping 7.8!

Adding a clock in screen to keep your SSH sessions from being killed

The world is filled with stupid routers that kill any connection that shows no activity for a while (even a very short while). I keep losing my SSH sessions because of this. To fix it, I added a clock to my GNU screen status bar; since the clock (%c) changes every minute, screen redraws the status line that often, which generates just enough traffic to keep the connection alive:

hardstatus alwayslastline "%= %H | %l | [%c:%s]"

For the record, my entire .screenrc:

multiuser on
caption always "%{= kB}%-Lw%{=s kB}%50>%n%f* %t %{-}%+Lw%<"
vbell off
startup_message off
term linux
hardstatus alwayslastline "%= %H | %l | [%c:%s]"

Trying to reduce MySQL InnoDB disk usage after major reduction of data

So, two days ago, I tried to shrink my MediaWiki database and it almost worked, except the MySQL process wouldn’t shrink along with it.

Of course I tried all the obvious things such as dropping the database, stopping and restarting the process, followed by reloading the database. Optimizing tables, altering tables, all the obvious. But, to no avail, because there’s this bug. Well, technically it’s not a bug. Like most of MySQL, it has been “designed” to be counter-intuitive, non-standard, riddled with special cases, exceptions, work-arounds and duct tape. So, the “bug” is really just a feature request from dumb people much like myself who want stupid things like a database that becomes smaller in size when you delete most of your data.

I should really move to a virtual private server environment, where I can just run a real database (PostgreSQL), but I’m still on NFSN, who (besides the sky-rocketing storage costs as of late) have given me no reason to complain so far.

I thought I’d ask them to help me out.

Recently, due to inattention to spam, one of my wiki databases has grown to well over 10 GiB. After finally getting around to removing most of this spam and tweaking some settings to reduce the table size from over 11 GiB to a couple of MiB, I thought my Last Reported Size would go down accordingly.

But no such luck. Apparently it’s a MySQL issue, with the only solution being to drop the offending database (after a dump, of course), stop the MySQL process, remove the leftover database files, restart the process and then reload the database.

Instead, you could use the innodb_file_per_table option, which is enabled by default on MariaDB.

Without that option set, OPTIMIZE, ALTER TABLE and all that stuff will do nothing to reduce a table’s on-disk size. It’s one of those issues which the MySQL devs are really not interested in solving: http://bugs.mysql.com/bug.php?id=1341

I hope you can help me out with this, either by setting the innodb_file_per_table option or by removing all my database files. In the latter case, I’d hope you’d ask me to green-light this first, so that I can make some other data size reductions in various databases before I make a final backup.

But then I thought better of it, when I learned that—contrary to my expectations—the option really was enabled:

SHOW VARIABLES LIKE 'innodb_file_per_table';

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | ON    | 
+-----------------------+-------+
1 row in set (0.00 sec)

So the option was enabled. I had to look elsewhere for an answer, which made me decide to do the following.

OPT="--host=myhost --user=myuser --password=mypassword --skip-column-names --batch hardwood";
# Rebuild each table (skipping the huge search index) so it ends up in its own .ibd file.
mysql $OPT --execute="show tables" | grep -v mw_searchindex | while read tablename; do
  mysql $OPT --execute="alter table $tablename ENGINE=InnoDB; optimize table $tablename"
done

Needless to say, this didn’t help. So I wrote this support request to my hosting provider:

Recently, due to inattention to spam, one of my wiki databases has grown to well over 11 GiB. After finally getting around to removing most of this spam and tweaking some settings to reduce the table size of the most offending table from over 11 GiB to a couple of MiB, I thought my Last Reported Size would go down accordingly.

Since you have the innodb_file_per_table option enabled (the default in MariaDB and confirmed by “SHOW VARIABLES LIKE 'innodb_file_per_table';”), I’d expect “ALTER TABLE mw_text ENGINE=InnoDB” to shrink the table (the command should move the table to its own file, even if it wasn’t already in its own file before the move to MariaDB). It didn’t. Last Reported Size is still approximately the same. Dropping and reloading the entire database didn’t do much to help either.

I suspect that the problem is that the old shared file still exists and that this file won’t be deleted even when there are no more tables in it, the only solution then being to dump the database, drop it, remove all its files and then reload the dump.

Anyway, I’d like my database to actually shrink when I delete stuff, and the way I understand it, this should be possible thanks to innodb_file_per_table. If it’s the same old story as without innodb_file_per_table, that would just be awful, because then I’d need your intervention every time I try to reduce my database process size.

I hope that you can somehow reload the database and remove the bloated ibdata1 file.

Now, I’m just waiting patiently for their response while my funds wither…

Shrinking/compressing a MediaWiki database

Lately, I haven’t had a lot of time to chase after spammers, so – despite anti-spam captchas and everything – a couple of my wikis have been overgrown with spam. One after the other, I’ve been closing them to anonymous edits, even closing down user registration altogether, but some of them a little too late.

The last couple of months my hosting expenses shot through the roof, because my Timber Investments Wiki database kept expanding to well over 14 GiB. So I kind of went into panic mode and I even made time for another one of my famous spam crackdowns—the first in many, many months.

The awfully inefficient bulk deletion of spam users

Most of this latest tsunami of spam was in the form of “fake” user pages filled with bullshit and links. The only process that I could think of to get rid of it was quite cumbersome. First, I made a special category for all legitimate users. From that, I created a simple text file (“realusers.txt”) with one user page name per line.

Then, I used Special:AllPages to get a list of everything in the User namespace. After struggling through all the paginated horror, I finally found myself with another copy-pasted text file (“unfilteredusers.txt”) that I could filter:

cp unfilteredusers.txt todelete.txt
cat realusers.txt | while read u; do
  sed -i -e "/$u/d" todelete.txt
done

(I’d like to know how I could have done this with less code, by the way.)
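In hindsight, a single grep would probably have done the same filtering in one go (assuming the user page names appear as complete, identical lines in both files):

grep -Fxvf realusers.txt unfilteredusers.txt > todelete.txt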

This filtered list, I fed to the deleteBatch.php maintenance script:

php maintenance/deleteBatch.php -u BigSmoke -r spam todelete.txt

By itself, this would only increase the size of MW’s history, so, as a last step, I used deleteArchivedRevisions.php to delete the full revision history of all deleted pages.
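If I remember correctly, that boils down to one more maintenance script invocation (the --delete flag is what makes it actually delete instead of just reporting; check the script’s help output to be sure):

php maintenance/deleteArchivedRevisions.php --delete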

This work-flow sucked so badly that I missed thousands of pages (I had to copy-paste the listings by hand, as I mentioned above) and had to redo it. This time, the mw_text table size shrank from 11.5 GiB to about 10 GiB. Not enough. Even the complete DB dump was still way over 5 GiB [not to mention the process size, which remained stuck at around 15 GiB, something I wouldn’t be able to solve even with the configuration settings mentioned after this].

Enter $wgCompressRevisions and compressOld.php

The huge size of mw_text was at long last easily resolved by a MW setting that I had never heard of before: $wgCompressRevisions. Setting that, followed by an invocation of the compressOld.php maintenance script, took the mw_text table size down all the way from >10 GiB to a measly few MiB.
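The setting itself is a single line in LocalSettings.php; it tells MediaWiki to gzip the text of newly stored revisions:

$wgCompressRevisions = true;

After that, compressOld.php takes care of the revision text that is already in the database: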

php maintenance/storage/compressOld.php

-- Reported data + index size per database, according to information_schema:
SELECT table_schema 'DB name', sum(data_length + index_length) / 1024 / 1024 "DB size in MiB"
FROM information_schema.TABLES
WHERE table_schema LIKE 'hard%'
GROUP BY table_schema;

+----------+----------------+
| DB name  | DB size in MiB |
+----------+----------------+
| hardhout |    41.88052750 | 
| hardwood |   489.10618973 | 
+----------+----------------+

But it didn’t really shrink, because of sweet, good, decent, old MySQL. 🙁 After all this action, the DB process was still huge (still ~15 GiB), which far exceeded the combined reported database sizes. Apparently, MySQL’s InnoDB engine is much like our economy: it only allows growth, and if you want it to shrink, you have to stop it first, delete everything, and then restart and reload.
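For the record, that full shrink cycle boils down to something like this (a sketch, reusing the made-up connection details from above; the middle step needs access to the server’s data directory, and since ibdata1 is shared, every InnoDB database stored in it has to be dumped and reloaded, not just the bloated one):

mysqldump --host=myhost --user=myuser --password=mypassword hardwood > hardwood.sql
mysql --host=myhost --user=myuser --password=mypassword --execute="DROP DATABASE hardwood"
# on the database server: stop mysqld, delete ibdata1 and the ib_logfile* files, start mysqld
mysql --host=myhost --user=myuser --password=mypassword --execute="CREATE DATABASE hardwood"
mysql --host=myhost --user=myuser --password=mypassword hardwood < hardwood.sql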

Future plans? Controlled access only?

One day I may reopen some wikis to new users with a combination of ConfirmAccount and RevisionDelete and such, but combating spam versus giving up on the whole wiki principle is a topic for some other day.

Converting all tables in MySQL DB to InnoDB

#!/bin/bash
 
# Safety guard: remove this line before actually running the script.
exit 1
 
dbname="eorder"
 
# Dry run: print the ALTER statements for review (column -t just aligns the output;
# mysql picks up its credentials from ~/.my.cnf).
echo 'SHOW TABLES;'  | mysql $dbname  | awk '!/^Tables_in_/ {print "ALTER TABLE `"$0"` ENGINE = InnoDB;"}'  | column -t
# Same pipeline, but this time the statements are fed back into mysql to actually convert the tables.
echo 'SHOW TABLES;'  | mysql $dbname  | awk '!/^Tables_in_/ {print "ALTER TABLE `"$0"` ENGINE = InnoDB;"}'  | column -t | mysql $dbname

My universal remote programming codes

Whenever this remote’s battery has been loose for a while, it forgets its programming. So, here are the codes:

For the computer, use the TV setting: press and hold 1 and 3 for a few seconds; when the light turns on, enter code 0677.

For my amp, you have to set up the aux mode as a second TV. To do that, press and hold 1 and 6 for a while, then press 9, 9, 2. Then press TV, then aux. The next step is to enter the TV code: press aux, then hold 1 and 3, and enter the following code: 0064.

Kart racing scores

I want to keep track of my kart racing scores:

Date         Track        Racer   Fastest time (s)
2012-07-24   Long Beach   Wiebe   55.95
2012-07-24   Long Beach   Wiebe   56.22
2012-07-24   Long Beach   Wiebe   58.84

Apache mod_proxy configuration for The Pirate Bay

I found several Apache mod_proxy configs for setting up a proxy for The Pirate Bay, but none of them worked fully.

You need to enable/install the following modules (a quick way to enable them on a Debian-style install is sketched after the list):

  • mod_proxy
  • mod_rewrite
  • mod_headers
  • mod_proxy_http
  • mod_proxy_html (needed for the ProxyHTML* directives below)
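On a Debian-style Apache install, enabling them looks something like this (a sketch; module and package names can differ per distribution, and mod_proxy_html may have to be installed separately):

a2enmod proxy proxy_http proxy_html rewrite headers
service apache2 restart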

<VirtualHost *:80>
        ServerName tpb.yourdomain.com
 
        # Plausible deniability, and respecting your fellow pirate's privacy.
        Loglevel emerg
        CustomLog /dev/null combined
        ErrorLog /dev/null
 
        <Proxy *>
          Order deny,allow
          Allow from all
        </Proxy>
 
        # Just to fix a few links...
        RewriteEngine On
        RewriteRule \/static\.thepiratebay\.se\/(.*)$ /static/$1 [R=302,L]
 
        ProxyRequests off
 
        # Cookies are important to be able to disable the annoying double-row mode.
        # The . before the domain is required, but I don't know why :)
        ProxyPassReverseCookieDomain .thepiratebay.se tpb.yourdomain.com
 
        ProxyPass / http://thepiratebay.se/
        ProxyPass /static/ http://static.thepiratebay.se/
        ProxyPass /torrents/ http://torrents.thepiratebay.se/
        ProxyHTMLURLMap http://thepiratebay.se /
        ProxyHTMLURLMap http://([a-z]*).thepiratebay.se /$1 R
 
        ProxyHTMLEnable On
 
        <Location /static/>
          ProxyPassReverse /
          SetOutputFilter proxy-html
          ProxyHTMLURLMap / /static/
          RequestHeader unset Accept-Encoding
        </Location>
 
        <Location /torrents/>
          ProxyPassReverse /
          SetOutputFilter proxy-html
          ProxyHTMLURLMap / /torrents/
          RequestHeader unset Accept-Encoding
        </Location>
 
</VirtualHost>
