
Tag: CSV

PHP fgetcsv() behavior on empty lines

The PHP documentation for fgetcsv() states that “a blank line in a CSV file will be returned as an array comprising a single null field, and will not be treated as an error.” Here’s a quick demonstration of this behavior.

fgetcsv.php:

<?php

// Read semicolon-delimited records from standard input and dump
// each one; fgetcsv() returns false on EOF, which ends the loop.
while ($fields = fgetcsv(STDIN, 0, ';'))
  print_r($fields);

exit(0);

Execute the script and feed it some CSV with empty lines:

php -q fgetcsv.php
"Veld 1";"Veld 2";"Veld 3";;"Veld 5"
 
"Field 1";;"Field 3";"Field 4";
;;;;
;"Campo 2";;;"Campo 5"

After pressing Ctrl+D, I’m presented with the following output:

Array
(
    [0] => Veld 1
    [1] => Veld 2
    [2] => Veld 3
    [3] => 
    [4] => Veld 5
)
Array
(
    [0] => 
)
Array
(
    [0] => Field 1
    [1] => 
    [2] => Field 3
    [3] => Field 4
    [4] => 
)
Array
(
    [0] => 
    [1] => 
    [2] => 
    [3] => 
    [4] => 
)
Array
(
    [0] => 
    [1] => Campo 2
    [2] => 
    [3] => 
    [4] => Campo 5
)
Array
(
    [0] => 
)

This behaviour on empty lines is a little bit annoying if you want to test whether a line is empty using empty():

// fgetcsv() represents a blank line as array(null), but empty()
// considers an array with one element to be non-empty:
$a = array(null);
print_r($a);

if ( empty($a) )
  echo '$a is empty';
else
  echo '$a is not empty';

echo "\n";

This code will print:

Array
(
    [0] => 
)
$a is not empty

Hence, I wrote the following function:

/**
 * This function tests whether the given array (as returned by fgetcsv())
 * is the result of an empty line in the CSV file.
 *
 * It does not work for lines that contain only delimiters.
 * From the POV of this function, these are simply records with
 * many empty fields.
 */
function fgetcsv_empty_line($row_array) {
  // fgetcsv() returns a blank line as array(null): exactly one field,
  // and that field is null. Testing with is_null() instead of empty()
  // avoids misdetecting a single-column record containing just "0",
  // which empty() would also consider empty.
  return ( !isset($row_array[1]) and is_null($row_array[0]) );
}

Now, I change the call to empty() in my test to a call to fgetcsv_empty_line():
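$a = array(null);

if ( fgetcsv_empty_line($a) )
  echo '$a is empty';
else
  echo '$a is not empty';

echo "\n";

This time, the output is: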

$a is empty
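To use the helper during actual CSV processing, it can be combined with the read loop from the top of this post; a minimal sketch that skips blank lines:

<?php

while ($fields = fgetcsv(STDIN, 0, ';')) {
  // Skip the array(null) rows that blank input lines produce.
  if ( fgetcsv_empty_line($fields) )
    continue;

  print_r($fields);
}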

Moved from Mnemosyne to FlashcardDB

When I was studying Spanish last year, I had to choose a flashcard program to memorize new words. At the time, I couldn’t find any on-line program that just did the job and did it well. In a comment on my blog post from last year, however, Jeff pointed me to his amazing FlashcardDB.

The program I ended up with last year was Mnemosyne. Mnemosyne is not based on your regular Leitner system, but rather on a concept where, after each card, you have to indicate for yourself how well you remembered it. I found that, in the end, having to tell the system in which box to put the card, instead of just saying whether my answer was right or wrong, took more effort than the actual recollection of the information. Also, as someone who rarely remains in one place for very long, a desktop program just isn’t as practical for me as an online program.

With Mnemosyne, I had to constantly remind myself of a complicated grading system.

Now, on to FlashcardDB. The site is pretty social, which means that you can study (and sometimes even edit) card sets made by other users. When you sign up, you can also create card sets yourself. Card sets can be tagged, and you can study these tags instead of individual card sets if you wish. If you already have cards somewhere else, importing them is easy as well.

The user interface is very slick, especially for such a new program. Thoughtful use of AJAX means that you’re never distracted by page reloads when these would interrupt your flow of thought. Simple key bindings make studying an easier affair than in most desktop programs: the right arrow shows the answer, the up arrow (thumbs up) marks the answer as correct, the down arrow (thumbs down) marks it as incorrect, and the left arrow goes back to the previous card. The interface for adding cards is also very pleasant: it’s just a matter of filling in the front of the card, pressing Tab, filling in the back of the card, pressing Tab, then Enter, and on to the next card.

Before going on to the conclusion, I want to add that the Leitner system is also very well implemented in FlashcardDB, including pretty diagrams that make it instantly clear to everyone how the system works. Now for my conclusion: if you ever need to make flashcards yourself, my advice is to take a good look at FlashcardDB before looking at anything else.

Finally, the following Ruby code is a quick hack I used to convert Mnemosyne’s XML export to CSV data that can be imported into FlashcardDB:

#!/usr/bin/ruby

require 'rexml/document'
require 'csv'

# Parse the Mnemosyne XML export from standard input.
xmldoc = REXML::Document.new($stdin)

# Write one CSV row per card: question first, answer second.
# (CSV::Writer is the old Ruby 1.8 CSV API.)
CSV::Writer.generate($stdout) do |csv|
  xmldoc.each_element('//item') do |el|
    csv << [ el.elements[1, 'Q'].text, el.elements[1, 'A'].text ]
  end
end
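With the script saved as, say, mnemosyne2csv.rb (the file names here are invented), the conversion boils down to:

ruby mnemosyne2csv.rb < mnemosyne-export.xml > flashcarddb-import.csv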

Web scraping in Ruby: why I had to use scrAPI instead of WWW::Mechanize and Hpricot

Thursday evening: so, I had written myself a nice little script using Aaron Patterson’s WWW::Mechanize and why’s Hpricot to extract some data from a popular web-based airport directory.

Hpricot logo

I was warmed up for Hpricot by the promise of XPath and CSS selector support (and a very cool logo, of course). As a long-time XPath user, I started banging out some crispy XPath expressions, until I realized that XPath support was only very partial. I kept trying expressions that should have worked, even bowing down to expressions that, according to the wiki, would work, but differently. Come on guys, either support a standard or plainly ignore it, please! 😡 Because I couldn’t figure out how to integrate why’s fork of the XPath spec into my expressions, I decided to stick with why’s fork of the CSS selectors instead.

Then it was time to execute my code. I had estimated that it would take about two hours to finish downloading and parsing the approximately 10,000 pages that contained the data I was interested in. So, I executed my script, detached my screen session and went to bed, trusting that I would find a nice, handy CSV file in the morning.

Friday morning, I was disappointed to find that my script had been killed, and I was left wondering what could have killed it. I decided to restart the script at the countries starting with the letter b (it had died somewhere halfway through the list of countries starting with a b). Soon, the script was happily appending data to the existing CSV file again.

Disclaimer: why is a much more prolific Ruby coder than I’ll ever be, so please take my comments with a grain of salt. No, actually, rather take them with a few spoonfuls of salt.

Later, I talked about the spontaneous death of the script with Wiebe. Curious, he looked at the memory usage of my script and saw that it was happily munching away on hundreds of megs of memory on our server. And its memory usage was still growing! With crucial server processes at risk of running out of memory, and with me having to build a fence around the vegetable garden to protect it from a bunch of brawling chickens, Wiebe was friendly enough to drop in and take a look at my spaghetti code to see if he could fix the leak. He couldn’t, because the leak didn’t appear to be in my code. I wasn’t the first to be bugged by a leak in Hpricot.

That news didn’t make me very happy, because it implied I had to redo the script using different tools. I knew that WWW::Mechanize had been inspired by the Perl package of the same name, so I started by looking at that. After installing WWW::Mechanize, I explored CPAN’s WWW namespace a bit further and noticed that the Perl crowd also had two other good scrapers at their fingertips: WWW::Extractor and WWW::Scraper. Once again, I was reminded that Perl, despite its funky syntax, is still the king of all scripting languages when it comes to the availability of quality modules. 🙁 After a few deep breaths, I set my rusty Perl skills into (slow) motion. Hell, this was supposed to be a quick script. Why was this taking so much time? (Yeah, yeah; cue all the jokes about developer incompetence. 😕 )

I was almost trampled by a horde of camels, each with a name more syntactically confusing than the last. Just before I was crushed, I came across a reference to a Ruby scraper with decent support for CSS3 selectors: scrAPI. Credits for this discovery go to the documentors of scRUBYt, a feature-rich scraper layered on top of WWW::Mechanize, who were friendly enough to help their users by including a link to the competition.

It took me some time to rewrite the script using scrAPI, partially because it was hard to find any documentation that was more comprehensive than a few blog posts and a cheat sheet, and less of a hassle than reading the source. But when Assaf answered my need by pointing me to the online API docs, I was happy.
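For a taste of what the migration looked like, here’s a minimal scrAPI sketch; the CSS selector and URL are invented, since I’m not naming the airport directory:

require 'rubygems'
require 'scrapi'
require 'uri'

# Hypothetical scraper: collect airport names from a listing page.
airport_names = Scraper.define do
  array :names                                # accumulate every match
  process "td.airport-name", :names => :text  # CSS selector, text extractor
  result :names                               # scrape() returns only :names
end

puts airport_names.scrape(URI.parse("http://example.com/airports"))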

Another reason why it was hard to migrate from WWW::Mechanize/Hpricot to scrAPI was that Hpricot starts element offsets for XPath predicates and CSS selectors at zero, instead of at one, where they should start. And of course, I had to rid myself of the weird crossbreed of CSS and XPath selectors.
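To illustrate that off-by-one with a made-up document (the zero-based indexing shown is Hpricot’s, as just described):

require 'rubygems'
require 'hpricot'

doc = Hpricot('<table><tr><td>first</td></tr><tr><td>second</td></tr></table>')

# Standard XPath is one-based: //tr[1] selects the FIRST row.
# Hpricot counted from zero, so its //tr[0] selected the first row,
# meaning every index had to be shifted when porting selectors to a
# standards-compliant engine.
puts doc.search('//tr[0]').inner_text   #=> "first" (Hpricot only)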

I was surprised that the script using scrAPI ran about twice as fast as the Hpricot-based script. That was including a cumulative sleep() time between requests of almost an hour (roughly a third of a second per request over some 10,000 pages), because the speed during testing had made me worry about over-exerting their web server. Knowing that one of the popular features of Hpricot is its speed, this was very unexpected, although I have to admit that Hpricot did fill my memory very quickly.
