Softstarter (ballast start) relay charred after years of use

This is an interesting case. I made a sound system for a friend in 2012. In it is a soft-starter that engages the heavy transformer through a ballast, so that the mains fuse doesn’t trip on the transformer’s inrush current. This is it:

Up until today, I never knew he had to replace the fuse a few times a year. The reason I found out now is that the amp wouldn’t start anymore; it would always blow a fuse. So I was called in to repair it.

This is the relay now, with charred contacts:

One initial thought, of course, is that the amp was switched off and back on so quickly that the relay didn’t have time to disengage, connecting the mains directly to the transformer and bypassing the ballast. However, my softstarter design can handle a quick off-on (although not a super quick one, as I just tested), and he was adamant that it often happened after hours of off-time, even when he was careful not to bounce the switch.

My theory is that the mains AC sometimes arced over the relay contacts on turn-on, so that the ballast would be bypassed, obviously blowing a fuse. This arcing slowly wore out the contacts and probably made them more susceptible to further arcing, until ultimately it arced every time. However, I use the exact same design in my own builds, and I never have this problem.

The relay is rated for 250 Vac, 30 Vdc. But perhaps, with the mains at just the right point in its cycle, the transformer acts as a flyback.

Interestingly, the relay contacts still read near-zero ohms, and there is little to no voltage across them when I test with my 30 V, 3 A bench supply.

I have now replaced it with a nifty relay (Amplimo LRZ) that has a wider contact gap and a tungsten pre-contact:

I’ll report back in a year how it held up.

Why a mature ERP like Exact should use snapshot isolation in their RDBMS

2024-03-20. This blog post was originally posted six years ago—on April 23, 2018—on the blog of my then-employer Ytec, at https://ytec.nl/blog/why-mature-erp-exact-should-use-snapshot-isolation-their-rdbms/. Some former (or subsequent) colleague of mine must however have missed the memo that Cool URLs Don’t Change and my blog post—actually their whole blog—doesn’t seem to have survived their latest website overhaul. Luckily, I could still retrieve my article as it was from the Internet Archive’s Wayback Machine, and now I’m posting it here, because even if Ytec no longer cares about my content, I do still think that the below remains worthwhile.


A client of ours [Ytec] has over the years grown into an ERP (Exact Globe) that stores all its data in an MS SQL Server. This by itself is a nice feature; the database structure is stable and documented quite decently, so that, after becoming acquainted with some of the legacy table and column names (e.g. learning that frhkrg really means ‘invoice headers’), information can often be retrieved easily enough. Indeed, this is very useful, for example when connecting the ERP (as we did) to a custom-made webshop, or when setting up communications to other systems by EDI.

However, contrary to what you’d expect with all the data stored in a big-name SQL server, the data can at times be difficult to access due to deadlocks and other locking time-outs. A lot of time has gone into timing various integration tasks (SSIS, cron jobs and the like) such that they do not interfere with each other, even though most of these tasks only read from the database, writing only to a few task-specific tables (which rarely cause locking issues).

There are some slow functions and views in the database, but it’s running on such a powerful server (24 cores, 200GB RAM) that this really ought not to be a problem. Yet it is, and it continues to be. And the problem can hardly be solved by further optimizations or by throwing yet more iron at it. The problem could be solved by turning on snapshot isolation.

Life without snapshot isolation

Coming from a PostgreSQL background, I was quite surprised to find that snapshot isolation is not turned on by default in MS SQL Server. In Postgres it cannot be turned off (due to its multiversion concurrency control architecture). And why would you want to?

Without snapshot isolation, what you have is a situation where reads can block writes and writes can block reads, except when the transaction isolation level is READ UNCOMMITTED, which isn’t even supported by PostgreSQL (it treats READ UNCOMMITTED as READ COMMITTED). Except for heuristic reporting, READ UNCOMMITTED is best avoided, since it allows dirty reads: reads of uncommitted changes that may well be rolled back later. That’s why READ COMMITTED is the default transaction isolation level in both MSSQL and PostgreSQL (MySQL’s InnoDB, for the record, defaults to REPEATABLE READ). Sometimes there are stricter requirements: for example, that the data used in the transaction should not change except from within the transaction. That can be guaranteed by using the strictest transaction isolation level: SERIALIZABLE. Somewhere in between READ COMMITTED and SERIALIZABLE sits REPEATABLE READ. Anyway, the take-home message is that stricter transaction isolation levels generally require more aggressive locking.
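
To make those levels a bit more tangible, this is roughly what requesting the strictest level looks like in T-SQL (a minimal sketch; the orders table and its order_status column are made up for illustration):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
-- Under SERIALIZABLE, the range of rows matching this predicate stays locked
-- until the transaction ends, so the count cannot change underneath us.
SELECT COUNT(*) FROM orders WHERE order_status = 'open';
-- ... work that depends on that count staying the same ...
COMMIT TRANSACTION;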

Suppose I have an EDI task that has to export shipping orders to a warehouse. In Exact, these orders can be found by querying the frhkrg table (with invoice header lines):

BEGIN TRANSACTION select_unexported_invoices;
-- SET TRANSACTION ISOLATION LEVEL READ COMMITTED; -- is implied

SELECT  faknr -- invoice number
FROM    frhkrg
WHERE   fak_soort = 'V'
        AND NOT EXISTS
        (
            SELECT  *
            FROM    invoices_exported
            WHERE   invoice_number = frhkrg.faknr
        )

Besides the interesting table and column names, this select statement is straightforward enough. What caught me by surprise, as a long-time PostgreSQL user, is that this transaction can be blocked by an innocuous update on the frhkrg table in a parallel transaction. Yes, with snapshot isolation off, writes can block reads, even if the transaction does not even require repeatable reads.

This behaviour is easy to replicate:

CREATE DATABASE Ytec_Test;
 
USE Ytec_Test;
 
--
-- Create test table with test data and index
--
CREATE TABLE locking_test
(
    id INT PRIMARY KEY,
    col1 VARCHAR(32) NOT NULL,
    col2 VARCHAR(32) NOT NULL,
    col3 VARCHAR(32) NOT NULL
)
;
INSERT INTO locking_test (id, col1, col2, col3)
SELECT 1, 'Aap', 'Boom', 'A'
UNION ALL
SELECT 2, 'Noot', 'Roos', 'B'
UNION ALL
SELECT 3, 'Mies', 'Vis', 'C'
UNION ALL
SELECT 4, 'Wim', 'Vuur', 'D'
;
CREATE NONCLUSTERED INDEX bypass_lock
    ON locking_test (col1)
    INCLUDE (col2, id)
;

With the test data set up, it’s easy to block a read operation:

BEGIN TRANSACTION WRITE1;
UPDATE locking_test SET col1 = 'Aap-je' WHERE id=1;

As long as transaction WRITE1 is open, most selects will be blocked (except the last one below, which can be answered entirely from the index):

BEGIN TRANSACTION READ1;
 
-- SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- is implied
 
SELECT id FROM locking_test; -- WITH (READCOMMITTED) -- is implied (as also below)
-- Time-out
 
SELECT id, col2 FROM locking_test WHERE col3='D'; -- with a non-indexed column in the predicate
-- Time-out
 
SELECT id, col3 FROM locking_test WHERE col1='Aap'; -- with a non-indexed column in the select list
-- Time-out
 
SELECT id,col2 FROM locking_test WHERE col1='Noot'; -- with only indexed columns
-- id    col2
-- 2    Roos 

One trick that can be gleaned from this example is that you can use indices to bypass locks, but only to the extent that you’re not trying to select a row that is currently locked. The following doesn’t work while WRITE1 is open:

BEGIN TRANSACTION READ2;
SELECT id FROM locking_test WHERE col1='Aap'; -- doesn't work 

Another way to partially work around aggressive locking is to use the table hint READPAST:

SELECT id FROM locking_test WITH (READPAST) ORDER BY id;
-- id
-- 2
-- 3
-- 4 

But, as you can see, this is limited in its applicability, since the locked row won’t be included in the results, which may or may not suit you.

Life with snapshot isolation

It will make your life as a developer a lot easier to simply turn on snapshot isolation. In PostgreSQL, you don’t have to, because as I mentioned earlier: it won’t allow you to turn it off. But, in MSSQL, you really do have to. Yes, you do.

ALTER DATABASE Ytec_Test SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE Ytec_Test SET ALLOW_SNAPSHOT_ISOLATION ON;

Welcome, MSSQL users, to the wonderful world of snapshot isolation! Now, let’s open WRITE1 again and retry READ1, and we’ll see that something wonderful has happened: we were able to read the data, including the row that was at that moment being updated.
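
To make that concrete, here is roughly what the retried READ1 session looks like (a sketch that also selects col1, so that the row versioning is visible; WRITE1 is assumed to still hold its uncommitted update to row 1):

BEGIN TRANSACTION READ1;

SELECT id, col1 FROM locking_test; -- no time-out this time
-- id    col1
-- 1     Aap     <- the last committed value, not the uncommitted 'Aap-je'
-- 2     Noot
-- 3     Mies
-- 4     Wim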

That’s why turning on snapshot isolation can make a huge difference. If your legacy applications depend on MSSQL, it’s definitely worth testing them with snapshot isolation turned on. Snapshot isolation was first introduced in Microsoft SQL Server in 2005. That’s a long time for such an essential feature to go unused! I hope that Exact Software is listening and will announce official support for turning on snapshot isolation in Exact Globe’s database; then, eventually, they could even stop performing all their read operations at transaction isolation level READ UNCOMMITTED.
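
If you want to check what a given database is currently set to, both options are exposed in the sys.databases catalog view:

SELECT name,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM   sys.databases
WHERE  name = 'Ytec_Test';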

Finally, a quick word of advice: if you’re at the start of a new project and not yet stuck with a particular database legacy, I would recommend looking into PostgreSQL first. Chances are, you won’t ever look back. In a future article [on the now, as of 2024-03-20, extinct Ytec blog], I may go into the reasons why.

Writing bootable disk images (.iso, .img, etc.) to a USB stick from Windows

Because Windows doesn’t have dd, and I want to write the latest Mint LTS release to a USB stick, I had to face the unpleasant task of finding a Windows tool to perform what’s a basic Unix operation. The good news is that I found one, and it’s open source: Win32 Disk Imager. It even has a version ≥ 1, titled: “Holy cow, we made a 1.0 Release”.

Win32 Disk Imager at work, writing Linux Mint 18.3 MATE 64bit to my SanDisk USB stick.

I found another open-source tool, UNetbootin, but that tool didn’t recognize my non-MS-formatted USB stick (which already held the installer for a previous Mint release).

In the end, Win32 Disk Imager also choked on the funky partition table left by the previous boot image, so I had to find out how to reset the USB disk’s partition table in Windows:

C:\WINDOWS\system32>diskpart

Microsoft DiskPart version 10.0.16299.15

Copyright (C) Microsoft Corporation.
On computer: YTHINK

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          238 GB      0 B        *
  Disk 1    Online           29 GB    28 GB

DISKPART> select disk 1

Disk 1 is now the selected disk.

DISKPART> list partition

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Primary           1706 MB  1024 KB
  Partition 2    Primary           2368 KB  1707 MB

DISKPART> select partition 2

Partition 2 is now the selected partition.

DISKPART> delete partition

DiskPart successfully deleted the selected partition.

DISKPART> select partition 0

The specified partition is not valid.
Please select a valid partition.

There is no partition selected.

DISKPART> select partition 1

Partition 1 is now the selected partition.

DISKPART> delete partition

DiskPart successfully deleted the selected partition.

DISKPART> create partition primary

DiskPart succeeded in creating the specified partition.

DISKPART> exit

Leaving DiskPart...

C:\WINDOWS\system32>
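
In hindsight, DiskPart’s clean command would probably have achieved the same in one step, since it removes all partitions from the selected disk at once (do double-check which disk is selected before running it):

DISKPART> select disk 1
DISKPART> clean
DISKPART> create partition primary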

Web print is still shit, even for CSS2 print features

Having spent ten years out of the loop, I had somehow expected browser makers to take some time out of their favorite hobby—moving knobs and settings around—to implement CSS printing support. I’m all for saving paper and all, but requiring me to pipe my HTML through LaTeX to produce halfway decent documents doesn’t feel very 2017ish to me. In 2007, it already didn’t even feel very 2007ish to me.

I’m trying to make the articles on www.sapiensthabitat.com nicely printable. The good news is that I can finally style my headings so that they do not end up all alone on the bottom of a page. page-break-after: avoid is finally supported, except that it isn’t in Firefox. Well, I’m still happy. Back in 2007, only Opera supported this.
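
Concretely, the rule in question is just something like this in the print stylesheet (a sketch; adjust the selector to whichever heading levels you use):

h1, h2, h3 {
  page-break-after: avoid; /* keep headings attached to the text that follows them */
}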

Next stop: I wanted to replace the standard header and footer slapped on the page with something nicer. It turned out that, yes, @page {} is supported now, which makes this rather easy:

@page {
  margin: 0;
}

Except, then I wanted to add the page number, preferably in the form n/N, to the footer, which turned out to be impossible.
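
For completeness: the CSS Paged Media spec does define page-margin boxes and page counters that would express exactly this, roughly as below, but browsers don’t implement that part of the spec:

@page {
  @bottom-center {
    content: counter(page) " / " counter(pages);
  }
}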

Then, I thought: since my publication pipeline starts with Markdown, I might as well convert that to PDF through LaTeX and then hint to the browser to use the PDF version for printing:

<link rel="alternate" media="print" type="application/pdf" href="print.pdf" />

Never mind. Why did I even for one second think that this would be supported?

Hike for Pie – clear lakes

There has been much talk of the “Hike for Pie” in my family-in-law, and we finally figured out for sure where it is. It’s the trail to Clear Lake; the trailhead parking lot is at GPS coordinates:

  • Latitude: 44.39407007
  • Longitude: -122.00163555