BigSmoke

Smokes your problems, coughs fresh air.


Listing MySQL table sizes

This query lists the sizes of all tables in MySQL:

SELECT
  TABLE_SCHEMA,
  TABLE_NAME,
  CONCAT(ROUND(data_length / (1024 * 1024), 2), 'MB') DATA,
  CONCAT(ROUND(data_free  / (1024 * 1024), 2), 'MB') FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema');

This query lists the database sizes:

SELECT
  TABLE_SCHEMA,
  CONCAT(ROUND(SUM(data_length) / (1024 * 1024), 2), 'MB') DATA
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema')
GROUP BY TABLE_SCHEMA;
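
Note that data_length only counts row data. information_schema.TABLES also has an index_length column, so if you want the total footprint per database, indexes included, a variant along these lines works:

SELECT
  TABLE_SCHEMA,
  CONCAT(ROUND(SUM(data_length + index_length) / (1024 * 1024), 2), 'MB') TOTAL
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema')
GROUP BY TABLE_SCHEMA
ORDER BY SUM(data_length + index_length) DESC;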

How to test payformystay.com

I haven’t got much experience when it comes to testing web applications. Instead (and more so out of apathy than belief), I’ve always adhered to the ad-hoc test approach. However, the use of pure Postgres unit tests back when I worked on a complicated investment database with Halfgaar did teach me the advantages of test-driven development.

For payformystay, though, unit tests simply won’t cut it. The database design is quite straightforward, with not that many relationships, and the schema’s only complexities arise from it being remarkably denormalized and full of duplication. Besides, contrary to Halfgaar’s and my PostgreSQL project for Sicirec, the business logic doesn’t live all neatly contained on the database level. And I’m not using a clean ORM wrapper either, which I could otherwise use as a unit-test target. And what would be the point, since in typical MySQL/PHP fashion it would be much too easy for any particular function to circumvent it.

What I want for this application is full functional test coverage so that I know that all parts of the website function correctly in different browser versions across operating systems. In other words: I want to know that the various parts are functioning correctly as implied by the fact that the whole is functioning correctly.

But how do you do automated tests from a browser?

At first, I thought I should probably cobble something together myself with jQuery, maybe even using a plugin such as QUnit with the Composite addon.

But how was I going to run the tests for JavaScript independence then? Using curl/wget or one of those hip, headless browsers that seem to be bred for this purpose?

Choices, choices…

Selenium

Then, there’s Selenium, which is a pretty comprehensive set of test tools meant precisely for what I need. Sadly, my wants weren’t easily aligned with my needs. Hence, it took me some time (months, actually) before I was sure that Selenium was right for me.

Selenium provides the WebDriver API (implemented in a number of programming languages) that lets you steer all popular browsers through either the standalone Selenium Server or Selenium Grid. The server executes and controls the browser. Since Selenium 2, it doesn’t even need to inject JavaScript into the browser to do this, which is very interesting for my tests, given my desire to make my AJAX-heavy toy also available to browsers with JavaScript disabled for whatever reason.
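
For reference, the standalone server is a single jar that you download from the Selenium site and run; the version number below is just an example from the 2.x series:

java -jar selenium-server-standalone-2.25.0.jar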

Selenium versus my pipe dream

Selenium IDE is a Firefox extension which lets you develop Selenium scripts by recording your interactions with the browser. It stores its scripts in “Selenese”. This held quite some appeal to me, because my initial testing fantasy revolved around doing it all “client-side”, in the sense that I wouldn’t have to leave my browser to do the testing. I wanted to be able to steer any browser on any machine that I happened to stumble upon, point it at my test site and fire off those tests.

Well, Selenese can be interpreted by some WebDriver API implementations to remotely steer the browser, but it can’t be executed from within the browser, except by loading it into Selenium IDE, which is a Firefox-only extension. Also, driving the browser through JavaScript has been abandoned by Selenium with the move from Selenium RC to WebDriver (which they’re currently trying to push through the W3C standardization process).

With everyone moving away from my precious pipe dream, I remained clinging to some home-grown jQuery fantasy. But how was I going to test my JavaScript-free version? Questions.

Eventually, I had to replace my pipe dream with the question of which WebDriver implementation to use and which testing framework to wrap around it.

PHPUnit

I thought PHPUnit had some serious traction, but seeing that it had “unit” in its name, I thought it might not be suitable for functional testing. The documentation being unit-test-centric, in the sense of recommending that you name your test cases “[ClassYouWannaTest]Test”, didn’t help in clearing up the confusion.

Luckily, I came across an article about acceptance testing using Selenium/PHPUnit (acceptance test = functional test).

I’ve since settled on PHPUnit by Sebastian Bergmann with the Selenium extension, also by Bergmann. His Selenium extension provides two base TestCase classes: PHPUnit_Extensions_SeleniumTestCase and PHPUnit_Extensions_Selenium2TestCase. I chose to use the latter. I hope I won’t be sorry for it, since it uses Selenium 2’s backward-compatible Selenium 1 API. Otherwise, they’ll probably have me running for Facebook’s PHP-WebDriver in the end. (PHP-WebDriver also has the nice feature that it allows you to distribute a Firefox profile to Selenium Server/Grid.)
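
To give an idea of the shape these tests take, here’s a minimal Selenium2TestCase sketch. The browser URL and the CSS selector are made-up assumptions, not actual payformystay code:

<?php
require_once 'PHPUnit/Extensions/Selenium2TestCase.php';

class HomepageTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        // Hypothetical test deployment URL; adjust to taste.
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://test.payformystay.example/');
    }

    public function testHomepageHasSignupLink()
    {
        $this->url('/'); // Navigates relative to the browser URL above.
        $link = $this->byCssSelector('a.signup'); // Assumed selector.
        $this->assertEquals('Sign up', $link->text());
    }
}

You then run it with the phpunit command while Selenium Server is listening on its default port (4444).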

But what about my pipe dream?

If only I’d be able to visit my test site from any browser, click a button and watch all the test scripts run, the failures being filed into the issue tracker (with error log + screenshot) and a unicorn flying over the rainbow…

Anyway, it’s a pipe dream, and the best way to deal with it is probably to put the pipe away, soothe the sore and scratch the itch.

PEAR pain

As is customary for PEAR projects, PHPUnit and its Selenium extension have quite a number of dependencies, meaning that installing and maintaining them manually in my project repo would be quite a pain. I’ve used the pear command to install everything locally, but my hosting provider doesn’t have all these packages installed, so if I want to run tests from there (calling the Selenium Server here at home), I’ll have to manage all that PEAR pain along with my project files.

Doesn’t PEAR offer some way to manage packages in any odd location? I’m not interested in what’s in /usr/share/php/. I want my stuff in ~/php-project-of-the-day/libs/.
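
For what it’s worth, pear config-create seems designed for exactly this: it writes a config file with all paths relative to an alternate root, which the -c flag then points subsequent commands at. A sketch (the paths and the --alldeps choice are mine):

pear config-create $HOME/php-project-of-the-day $HOME/php-project-of-the-day/.pearrc
pear -c $HOME/php-project-of-the-day/.pearrc channel-discover pear.phpunit.de
pear -c $HOME/php-project-of-the-day/.pearrc install --alldeps phpunit/PHPUnit_Selenium

The locally installed php directory (under the new root) then still needs to be added to PHP’s include_path.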

Process pain

So far, I’ve remotely hosted both the production and the development version of payformystay, which is especially nice if you want to share development features with others. Now, it’s difficult to decide what’s more annoying:

  1. Creating a full-fledged, locally hosted version of the website (Apache+PHP+MySQL), so that I can execute the tests locally as well as host the testing version locally. Misleading positive test results are all but assured due to guaranteed differences between software versions and configurations.
  2. Installing all the PEAR packages remotely so that I can run the test from my hosting provider’s shell. This implies having to punch a hole through the NAT wall at home or anywhere I happen to be testing at any moment. Bad idea. I don’t even have the password to all the routers that I pass during the year.
  3. Running the development version of the website remotely, but running the tests locally, so that there are no holes to punch, except that I’ll have to tunnel to my host’s MySQL process because my tests need to set up, look up and check stuff in the database (see the tunnel sketch below this list). At least this way I don’t have to install server software on my development machine and need only the php-cli stuff.
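
For option 3, the tunnel itself is a one-liner; something like this (hostname made up), after which the tests connect to MySQL on 127.0.0.1:3306 as if it were local:

ssh -f -N -L 3306:127.0.0.1:3306 me@webhost.example.com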

Generating an SSL CSR and key

To generate an SSL certificate signing request (CSR) with key you can do this:

openssl req -nodes -newkey rsa:2048 -keyout bla.key -out bla.csr

This syntax does not force you to supply a password, which is convenient.

If you generate a CSR for StartCom, you don’t have to fill in any fields; only the public key from the CSR is used. For other vendors, the common name is important: the domain name must be entered there.
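
If you want to supply the common name (and friends) without the interactive prompts, the -subj switch takes the whole subject on the command line; example values, obviously:

openssl req -nodes -newkey rsa:2048 -keyout bla.key -out bla.csr -subj "/C=NL/O=Example Org/CN=www.example.com"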

Creating a drbd for an existing Xen domain

I needed some VMs to be available on a backup node, which I accomplished with the Distributed Replicated Block Device, or DRBD. My host machine is Debian 6.

This post replaced an older one I made.

First install drbd:

aptitude -P install drbd8-utils

Then make some config files. First adjust /etc/drbd.d/global.conf (I only had to uncomment the notify rules):

global {
        usage-count yes;
        # minor-count dialog-refresh disable-ip-verification
}
 
common {
        protocol C;
 
        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
 
        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb;
 
                # The timeout value when the last known state of the other side was available.
                wfc-timeout 0;
 
                # Timeout value when the last known state was disconnected.
                degr-wfc-timeout 180;
        }
 
        disk {
                # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs   
        }
 
        net {
                # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
                # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
                # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        }
 
        syncer {
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        }
}

Then I made a resource for my existing logical volume:

resource r0
{
  meta-disk internal;
  device /dev/drbd1;
 
  startup
  {
    # The timeout value when the last known state of the other side was available.
    wfc-timeout 0;
 
    # Timeout value when the last known state was disconnected.
    degr-wfc-timeout 180;
  }
 
  syncer
  {
    # This is recommended only for low-bandwidth lines, to only send those
    # blocks which really have changed.
    #csums-alg md5;
 
    # Set to about half your net speed
    rate 8M;
 
    # It seems that this option moved to the 'net' section in drbd 8.4.
    verify-alg md5;
  }
 
  net
  {
    # The manpage says this is recommended only in pre-production (because of its performance), to determine
    # if your LAN card has a TCP checksum offloading bug. 
    #data-integrity-alg md5;
  }
 
  disk
  {
    # Detach causes the device to work over-the-network-only after the
    # underlying disk fails. Detach is not default for historical reasons, but is
    # recommended by the docs.
    # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event...
    on-io-error detach;
 
    # LVM doesn't support barriers, so disabling it. It will revert to flush. Check wo: in /proc/drbd. If you don't disable it, you get IO errors.
    no-disk-barrier;
  }
 
  on top
  {
    disk /dev/universe/lvtest;
    address 192.168.2.6:7789;
  }
 
  on bottom
  {
    disk /dev/universe/lvtest;
    address 192.168.2.7:7790;
  }
}

Copy all config files to the slave machine (and write an rsync script for it…).
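
A minimal version of such a script, using the hostnames from the resource definition above:

rsync -av /etc/drbd.d/ root@bottom:/etc/drbd.d/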

I learned that Linux 3.1 now has write barriers enabled by default for ext3 (they already were for ext4). This causes bugs and IO errors with xen-blkfront, so barriers need to be disabled:

# grep barrier /etc/fstab
/dev/xvda2 / ext3 barrier=0 0 1

I’ll see about finding out if there are bug reports and file them if necessary.

The DRBD metadata is going to be written at the end of the actual LV (that’s what meta-disk internal means), so on the primary node, we need to make space for it (you can also grow the LV):

e2fsck -f /dev/universe/lvtest
resize2fs /dev/universe/lvtest 500M # or whatever size is a tad smaller than the actual LV.
drbdadm create-md r0
drbdadm up r0

On the secondary node, make the device as well:

drbdadm create-md r0
drbdadm up r0

Then we can start syncing and re-grow it. On the primary:

drbdadm -- --overwrite-data-of-peer primary r0 # the -- is necessary because of weird option handling by drbdadm.
resize2fs /dev/drbd1
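
You can follow the synchronization progress in /proc/drbd (the same place to check the wo: flag mentioned earlier):

watch -n1 cat /proc/drbd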

The logical volume has been converted from ext3 to drbd:

# mount /dev/universe/lvtest /mnt/temp
mount: unknown filesystem type 'drbd'

Then, it is recommended you create /etc/modprobe.d/drbd.conf with:

options drbd disable_sendpage=1

I don’t know what it does, but it’s recommended by the DRBD docs when you put Xen domains on DRBD devices.

In Xen, you can configure the disk device of a VM like this (actually, I learned that this doesn’t work with pygrub):

disk = [ 'drbd:resource,xvda,w' ]

DRBD has installed the necessary scripts in /etc/xen/scripts to support this. Xen will now automatically promote a DRBD device to primary when you start a VM.
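
Since the drbd: prefix doesn’t work with pygrub, a workaround (which I haven’t tested in this combination) is to promote the resource by hand on the node that is to start the VM:

drbdadm primary r0

and then reference the DRBD device directly in the domain config:

disk = [ 'phy:/dev/drbd1,xvda,w' ]

You do lose the automatic promotion this way.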

Be warned: because of that, don’t put the VM in the /etc/xen/auto dir on the fallback node; otherwise, whichever machine is faster will start the VM, preventing the other machine from starting it (because you can’t have two primaries).

Then, I noticed that Debian arranges its boot process erroneously, starting xendomains before drbd. I commented on an old bug about this.

You can fix it by adding xendomains to the following lines in /etc/init.d/drbd:

# X-Start-Before: heartbeat corosync xendomains
# X-Stop-After:   heartbeat corosync xendomains 

Mdadm (software RAID) schedules monthly checks of your array. You can do that for DRBD too. You do that on the primary node with a cronjob in /etc/cron.d/:

42 0 * * 0    root    /sbin/drbdadm verify all

One last thing: the docs state that when you perform a verify and it detects an out-of-sync device, all you have to do is disconnect and connect. That didn’t work for me. Instead, I ran the following on the secondary node (the one I had destroyed with dd) to initiate a resync:

drbdadm invalidate r0

Fixing MailScanner insecure dependency

MailScanner cut out on me, without errors in the log. It was only after turning on debug (which prevents backgrounding, after which it only processes one batch) that it showed me the “Insecure dependency” error.

I needed to change the first line of MailScanner in /usr/sbin/Mailscanner, adding Perl’s -U switch (which allows unsafe operations):

#!/usr/bin/perl -I/usr/share/MailScanner/ -U 

Source.

Of course, this is far from optimal, and it bugs me that Debian hasn’t fixed this yet, because it’s an old issue.

Why doesn’t he just…

You know the conversation. You had the conversation:
“Why doesn’t he just …”
“I don’t understand why he can’t simply …”
“If he’d only …”

Usually followed by: “I used to, but I …”
Or: “At least you have (not) …”

Well, I had this conversation, but at least I …
I am writing about it, so that at least you …

You know the truth:
No, he can’t just …
At least, he couldn’t …

Now, you might …
But if it’d be so easy to …
You wouldn’t be congratulating each other that …
You’re just slightly better than him …

How does it feel?
Safe?

If only you would …

Commenting fixed for blog.bigsmoke.us

To my great surprise, thanks to Tobias Sjösten, I found out that commenting was broken on blog.bigsmoke.us. I couldn’t pinpoint the exact problem, but it must have been introduced by some WordPress upgrade somewhere along the line. I never noticed it because commenting did work for logged-in users. (If I really must guess, I suspect a silent ReCaptcha version compatibility problem.)

Upgrading WordPress and wp-recaptcha to their latest versions (3.3.1 and 3.1.4 respectively) seems to have solved the problem.

Psychopathic Saturday

I’m trying to pump myself up to write a piece of text about psychopathy. All three other group members have already written their part. We’re making a scientific poster titled “Is there a psychopath hidden in your brain?” But do I even want to know? It’s all very close to home, with a mother who’s been accusing her ex-husband (my dad) of being a psychopath for, like, forever, and, simultaneously, this monkey in my brain pointing its accusing little finger straight at me.

