To install Skype on a Debian unstable machine:
Now we need to fix a bunch of dependencies:
Blahhhhhh. It used to be easier... Sometimes I really despise myself for using this closed-source software.
The other day I needed a small xml parser to convert an xml document into a different format. First I tried xml-light. This is a simple parser, written entirely in OCaml, that stores the parsed xml document in an OCaml data structure. This data structure can be used to access various fields of the xml document. It does not offer a DOM-like interface, but I actually consider this a feature. Unfortunately, xml-light is terribly slow: parsing a 30K-plus-line xml document takes far too long for it to be considered for my application.
The next logical choice was to try Expat, which is an event-based parser and extremely fast. Since using an event-based parser can be a bit cumbersome (and I had already written a bit of code using xml-light), I decided to write a small wrapper around Expat to provide an xml-light interface to it.
The code is pretty simple and the main idea is taken from the cduce xml loader.
First we provide a small data structure to hold the xml document as we examine it. Nothing deep here. Notice that we use
String as we descend the tree and
Element as we unwind the stack.
Then we need to provide Expat handlers to store xml fragments on the stack as we go down. Note that we have a handler for CDATA, but not a handler for PCDATA, as it is the default.
At the end we just register all the handlers with the Expat parser and return the root of the xml document.
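As a rough sketch of the same stack-based idea, here it is transposed to Python's stdlib Expat bindings (all the names here are mine, not taken from the original OCaml code):

```python
# Stack-based tree building with an event parser: push a frame on each
# start tag, pop and attach the finished element on each end tag.
import xml.parsers.expat

def parse(data):
    stack = [("#root", {}, [])]             # (tag, attrs, children) frames

    def start(tag, attrs):
        stack.append((tag, attrs, []))      # descend: open a new frame

    def end(tag):
        tag, attrs, children = stack.pop()  # unwind: close the frame and
        stack[-1][2].append(("Element", tag, attrs, children))

    def chars(text):
        if text.strip():                    # skip ignorable whitespace
            stack[-1][2].append(("PCData", text))

    p = xml.parsers.expat.ParserCreate()
    p.StartElementHandler = start
    p.EndElementHandler = end
    p.CharacterDataHandler = chars
    p.Parse(data, True)
    return stack[0][2][0]                   # the root element
```

Running `parse('<a x="1"><b>hi</b></a>')` yields a nested `("Element", tag, attrs, children)` tuple, roughly what xml-light's Element constructor holds.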
I've copied the xml-light methods used to access the document into a different file. I've also made everything lazy, to save a bit of computing time when only a part of a huge xml document needs to be accessed.
The complete code can be found here:
git clone https://www.mancoosi.org/~abate/repos/xmlparser.git
The other day I was made aware that this parser has a serious bug when used on a 32-bit machine. The problem is that the maximal string size on a 32-bit machine is equal to Sys.max_string_length, which is roughly 16Mb. If we read and parse a big document all at once with
IO.read_all, we immediately get an exception. The solution is to parse the document incrementally using the new function
parser_ch below, which takes a channel instead of a string and runs the Expat parser incrementally:
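Transposed to Python's stdlib Expat bindings again, the incremental idea looks like this (parse_channel and the chunk size are illustrative, mirroring the role of parser_ch):

```python
# Feed the parser fixed-size chunks from a channel (file object) instead
# of reading the whole document into a single huge string first.
import io
import xml.parsers.expat

def parse_channel(ch, chunk_size=64 * 1024):
    p = xml.parsers.expat.ParserCreate()
    seen = []
    p.StartElementHandler = lambda tag, attrs: seen.append(tag)
    while True:
        chunk = ch.read(chunk_size)
        if not chunk:
            p.Parse(b"", True)   # tell Expat the input is complete
            break
        p.Parse(chunk, False)    # more input will follow
    return seen

tags = parse_channel(io.BytesIO(b"<a><b/><c/></a>"))
```

The same handlers as before can be plugged in; only the feeding loop changes, so no intermediate string ever has to hold the whole document.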
During the weekend I upgraded my laptop to squeeze. I usually track unstable pretty closely, but during the transition I gave myself a bit of slack, to avoid getting caught up in the GNOME transition. The result is ok: NetworkManager Just Works!!!, the new kernel seems pretty snappy, and I finally get the power status for my network card.
My laptop is an old Dell Latitude X200. I've always had problems with the graphics card and Xorg. With this upgrade I finally motivated myself to find a solution. Not surprisingly, it was quite easy. I've added these options to my xorg.conf:
What I've still got left to figure out is how to fix the hibernate function, which is not very reliable yet: it works 8 times out of 10.
After 1.3Gb of updates, I'm happy to be surfing the unstable wave again.
This morning I spent some time trying to understand how the GPS subsystem on the FreeRunner works. I'm using SHR unstable. This information might be incomplete or wrong, so take it with a grain of salt.
Ok, in the beginning clients were connecting to the GPS device either directly:
or using gpsd
So far so good. In the end, an application like tangogps or navit only needs a fix, which is not that difficult to obtain from the raw device input.
But what if I want to handle and gather information from multiple GPS devices? The idea here is to add an additional layer of indirection to make life easier for clients. On top of that, since the FreeRunner uses dbus to communicate, with frameworkd as a communication broker, we now have two different players.
From what I could piece together, this is the story:
What is Gypsy? Gypsy is a GPS multiplexing daemon/protocol that allows multiple clients to access GPS data from multiple GPS sources concurrently.
Now, my point was to use the GPS information collected by these two fantastic projects, opencellid and cellhunter. In order to do that, I would need to add a "fake" GPS device to feed ogpsd with information retrieved from the cell database.
If the architecture I've described here is correct, it should not be too difficult to add the missing bit to ogpsd...
UPDATE: It seems there is already an implementation [8,9] of AGPS fetching data from agps.u-blox.com and based on gllin, but you will need a data connection to use this one.
Recently I had a few problems with an svn repository that is shared between multiple ssh users. I followed the instructions in the svn book and, to solve the problem once and for all, I recreated the repo from scratch. Briefly:
Hopefully this is going to work. Hopefully; otherwise I guess I'm missing something very basic!
Today I've finished the long-due migration to Drupal 6. With Drupal 7 almost ready, it was kind of important not to stay too far behind the latest version. I have to say that Drupal is getting better and better. This new stable version has a lot of eye candy (web 2.0 style) and improved functionality. The update of the Drupal core modules was almost painless (apart from a stupid mistake that corrupted my database...). The module upgrade took a bit more time, but in the end I managed to get back everything I had before (and to remove a lot of old modules).
The only thing I want to mention is the interference of the pearwiki filter with the geshi filter for code highlighting. This is the relevant bug, which contains a patch for the pearwiki module.
This is the final list of modules that I use (and upgraded):
Arghhh, today I've discovered, reading this bug report, that strings specified as
RawStr() in storm are actually stored as blobs in sqlite3. The very bad side effect is that string comparison does not work!!!
The right way to store strings with storm is to use the
Unicode() data type instead and to wrap all your strings with the
unicode function. If you need "utf-8", you can pass it as an optional argument. Now string comparisons are 10 times faster!!!!!!!! Argggg
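The underlying SQLite behaviour is easy to reproduce with the stdlib sqlite3 module alone, no storm required:

```python
# A value stored as a BLOB never compares equal to a TEXT literal in
# SQLite, which is exactly why RawStr columns break string comparison.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v)")
conn.execute("INSERT INTO t VALUES (?)", (b"hello",))   # stored as BLOB
conn.execute("INSERT INTO t VALUES (?)", ("hello",))    # stored as TEXT

as_text = conn.execute("SELECT count(*) FROM t WHERE v = 'hello'").fetchone()[0]
as_blob = conn.execute("SELECT count(*) FROM t WHERE v = x'68656c6c6f'").fetchone()[0]
# each literal only matches the row of its own type: as_text == 1, as_blob == 1
```

So a query comparing a blob column against a Python string silently matches nothing, even though both hold the same bytes.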
A while ago I wrote about enabling the sqlite3 extension with storm. This is how you do it with the Django ORM. The collation is the same and all the details are in the old post. The only tricky part is to establish the connection with
cursor = connection.cursor() before calling the function to enable the extension. Failing to do so will result in an error, as the connection object will be null.
Consider the following example:
What I want is to write a query that gives, for each (name, num), the start and end edges of the interval given by the table time:
First we create a simple view to unclutter the query statement.
Then the SQL query is pretty straightforward...
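Since the original schema and statements aren't reproduced here, this is a hypothetical reconstruction (in Python/sqlite3, assuming the table time has columns name, num and t):

```python
# Assumed schema: time(name, num, t); the view gives, for each
# (name, num) group, the start and end edges of its interval.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE time (name TEXT, num INTEGER, t INTEGER);
    INSERT INTO time VALUES ('a', 1, 10), ('a', 1, 20), ('a', 1, 30),
                            ('b', 2, 5),  ('b', 2, 7);

    -- the view only exists to unclutter the final statement
    CREATE VIEW intervals AS
      SELECT name, num, MIN(t) AS "start", MAX(t) AS "end"
      FROM time GROUP BY name, num;
""")
rows = conn.execute("SELECT * FROM intervals ORDER BY name").fetchall()
# rows == [('a', 1, 10, 30), ('b', 2, 5, 7)]
```

Note that "end" is an SQL keyword, hence the quotes around the column aliases.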
Today I started learning how to write web forms in Django. My quest was to write a simple search form where I could specify multiple criteria. Django is a very nice and flexible framework written in Python, and it is also reasonably well documented.
I don't feel like writing much today. This is the code:
The tricky part was understanding how to re-display the view and add a new field. This is easily accomplished in Django using the formset class, which allows displaying more than one form together. In this case the logic is simple. First we display an empty form with two submit buttons, Add and Search. Search brings the obvious result. Add takes the data that was submitted, validates it, and re-displays the form with an additional field. Note also that the default validation functions are used transparently to validate the input of each sub-form.
The forms file describes the form logic to be displayed.
This is the template :