## awesome, GTD and other cool tools

Sometimes I think I'm a bit too lazy to change my habits.

I've been looking at tiling window managers for a while, but I always failed to adopt one because of the big shift in habits it would have implied. This time, thanks to zack's applet, I decided to jump on the awesome bandwagon. And I have to say that I'm really happy about it. Until now I had never noticed how annoying it was to move windows around. It is true that in the last year I've used guake as my main console. Since it is a drop-down, tabbed console, it effectively saved me the stress of placing a new window every time I needed a new console. However, there were still a lot of applications popping up windows everywhere, and I had no choice but to find a place for them.

Awesome solves all these problems. Windows are positioned automatically, you can easily change from one layout to another using a key combination, it is very flexible (its configuration file is a program written in Lua!) and now, thanks to zack's applet, it is perfectly integrated with my gnome desktop. Out of the box it replaces metacity (the default gnome window manager), doing just what a window manager is meant to do: manage windows. You can disable all the extra features, like the awesome panel and menus, and keep using gnome for everything else.

The other great tool I've just discovered, thanks to an article on arstechnica, is gtg. This is also a very handy tool. I've been using sticky notes for quite a while (both electronic and real) but I'm far from satisfied with them. I refuse to use tomboy, as it uses mono and I prefer to avoid that (on religious grounds). GTG follows the Getting Things Done (GTD) methodology. In the end it is just a friendly note-taking tool with a lot of plug-ins, designed to be used with a keyboard. I've just started to use it, and despite it having already crashed on me a few times, I like it a lot.

The third and last (for this blog post) tool is a mind mapping tool. I've done mind mapping for a while: let's say that I'm not a compulsive mind mapper, but I enjoy putting things in place when I have time. In the past I've used freemind, which is nice, but it crashed on me too many times. Alas, it is written in Java, and given my allergy to this language, I think this is just bad karma flowing in both directions... A couple of weeks ago I stumbled upon another mind mapping tool, vym, that is much more stable and a real pleasure to use. It hasn't crashed once yet, it is very usable from the keyboard (essential when you are taking notes!) and it has a nice look & feel and a rich feature set. I'm happy with it.

## Chromium fails for me ...

I tried out chromium over the last 3 weeks. It is clearly faster than iceweasel/firefox. On one hand this is because of the architecture of chromium itself: people at google worked very hard to develop a competitive and lean browser. On the other hand, I think it is also because on the firefox side I use a lot of extensions that can cripple performance and make the comparison a bit unfair. I should try to start from scratch with iceweasel using a clean profile and see how it goes.

Since everybody is trumpeting about this new browser I felt obliged to give it a try. Well... it doesn't cut it for me.

• The first big problem for me is that there is no easy way to clean up the cache, browsing history, cookies, etc. on exit. Call me paranoid, but I think this is an essential feature for a browser. Yes, it is true that this can be done manually, and there is even a (windows-only) extension to clean up everything for you, but I find it ridiculous that this simple feature cannot be enabled by default. The people at google justify this by saying that since chrome has a very aggressive cache strategy, cleaning up the cache every time can lead to a slower user experience. Granted, this can be true, but I think an option should be there nonetheless, letting the user choose whether they want a clean start every time or would rather leave cookies and stuff lying around on disk. Other people simply say that google doesn't want such a feature in the browser, as it would cripple google's tracking tools and lower their advertising revenue. I can see that this might be true, and I hope the community will step in and patch (if not fork) the main trunk to add more sensible defaults. For a while I worked around this by browsing in incognito mode, but this adds other nuisances...
• The second problem I've encountered is the lack of a good ad blocker. There are a few around, but none of them are yet up to speed with the quality of the ad blockers in the firefox world. I lived in an ad-free world for a while, and going back to horribly blinking and colorful websites, where finding the information you are looking for is harder than navigating a maze, is not acceptable for me.
• The third problem is the Ctrl-F problem. There is a feature request that is marked as WontFix just because the forward slash (which for me is the more natural way to search in a page) conflicts with gmail. Again, it's a choice. But adding the possibility to change the default behavior would be a more sensible thing to do than marking the request as a won't fix and ignoring the outcry of the community.
• Then there is the lack of extensions. Well, to be fair, the extensions that I was looking for are maybe not very google friendly. In particular, on firefox there is a google anonymity extension that does a nice job of removing all sorts of cookies and avoiding tracking through the google services. Another extension that I'm missing is Ubiquity, which, despite being abandoned by mozilla labs, I find extremely useful and powerful to speed up my contextual searches.

In conclusion, I think chromium is not there for me yet. There are too many downsides to justify the move. I'm sure I will reconsider my position in 6 months, and I really hope that the google developers will show a bit more support for the community.

## distcheck vs edos-debcheck

This is the second post about distcheck. I want to give a quick overview of the differences between edos-distcheck and the new version. First, despite using the same sat solver and encoding of the problem, distcheck has been re-written from scratch. Dose2 has several architectural problems and is not very well documented. Adding new features had become too difficult and error-prone, so a rewrite was a natural choice (at least for me). Hopefully Dose3 will survive the Mancoosi project and provide a base for dependency reasoning. The framework is well documented and the architecture pretty modular. It is written in OCaml, so, sadly, I don't expect many people to join the development team, but we'll be very open to it.

These are the main differences with edos-debcheck.

## Performances

distcheck is about two times faster than edos-debcheck (from dose2), but it is a "bit" slower than debcheck (the original debcheck), the tool written by Jerome Vouillon that was later superseded in debian by edos-debcheck. The original debcheck was an all-in-one tool that did the parsing, encoding and solving without converting the problem to any intermediate format. distcheck trades a bit of speed for generality. Since it is based on Cudf, it can handle different formats and can be easily adapted to a range of situations just by changing the encoding of the original problem to cudf.

Below are a couple of tests I've performed on my machine (debian unstable). The numbers speak for themselves.

$ time cat tmp/squeeze.packages | edos-debcheck -failures > /dev/null
Completing conflicts...                                            * 100.0%
Conflicts and dependencies...                                      * 100.0%
Solving                                                            * 100.0%

real    0m19.515s
user    0m19.193s
sys     0m0.276s

$ time ./distcheck.native -f deb://tmp/squeeze.packages > /dev/null

real    0m10.859s
user    0m10.669s
sys    0m0.172s

## Input

The second big difference is the input format. In fact, at the moment, we have two different tools in debian, edos-debcheck and edos-rpmcheck. Despite using the same underlying library, these two tools have different code bases. distcheck is basically a multiplexer that converts different inputs to a common format and then uses it (agnostically) to solve the installation problem. It can be called in different ways (via symlinks) to behave similarly to its predecessors.
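A common way to implement this kind of symlink multiplexer is to dispatch on the name the binary was invoked under, busybox-style. Here is a minimal sketch of the idea in Python (dose3 itself is written in OCaml, and the mapping below is illustrative, not its actual table):

```python
import os

# Map the name the binary was invoked under (argv[0]) to a default
# input scheme: one binary, several symlinked personalities.
PERSONALITIES = {
    "debcheck": "deb://",
    "rpmcheck": "hdlist://",
    "eclipsecheck": "eclipse://",
}

def default_scheme(argv0):
    """Return the input scheme implied by the invocation name,
    falling back to the native cudf format."""
    name = os.path.basename(argv0)
    return PERSONALITIES.get(name, "cudf://")
```

A single executable, installed once and symlinked as debcheck, rpmcheck, and so on, then picks the right parser with no extra configuration.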

At the moment we are able to handle 5 different formats:

1. deb:// the 822 Packages format used by debian based distributions
2. hdlist:// a binary format used by rpm based distributions
3. synth:// a simplified format to describe rpm based package repositories
4. eclipse:// an 822 based format that encodes OSGi plugin metadata
5. cudf:// the native cudf format

distcheck handles gz and bz2 compressed files transparently. However, if you care about performance, you should decompress your input file first and then parse it with distcheck: it often takes more time to decompress the file on the fly than to run the installability test itself. There is also an experimental database backend that is not compiled by default at the moment.

## Output

Regarding the output, I've already explained the main differences in an old post. As a quick reminder, the old edos-debcheck had two output options. The first was a human readable, unstructured output that was a handy source of information when running the tool interactively. The second was an xml based format (without a dtd or a schema, I believe) that was used for batch processing.

distcheck has only one output type, in the YAML format, that aims to be both human and machine readable. Hopefully this will cater for both needs. Moreover, just recently I've added to the output of distcheck a summary of what is breaking what. The output of edos-debcheck was basically a map from packages to the reasons for their breakage. In addition to this information, distcheck also gives a map from each reason (a missing dependency or a conflict) to the list of packages that are broken by this problem. This additional info is off by default, but I think it can be nice to know which missing dependency is responsible for the majority of problems in a distribution...
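Conceptually, this summary just inverts the package-to-reason map into a reason-to-packages map. A minimal sketch of the inversion, assuming the YAML report has already been parsed into Python dicts shaped like distcheck's output (the sample data is abridged and illustrative):

```python
from collections import defaultdict

def summarize(report):
    """Invert a package -> reasons report into a ranked
    missing-dependency -> broken-packages map."""
    by_dep = defaultdict(list)
    for entry in report:
        for reason in entry["reasons"]:
            if "missing" in reason:
                dep = reason["missing"]["missingdep"]
                by_dep[dep].append(entry["package"])
    # The biggest culprit (most packages broken) comes first.
    return sorted(by_dep.items(), key=lambda kv: -len(kv[1]))

# Abridged sample shaped like distcheck's report.
report = [
    {"package": "enna", "reasons": [
        {"missing": {"missingdep": "libevas-svn-05-engines-x (>= 0.9.9.063)"}}]},
    {"package": "enna-dbg", "reasons": [
        {"missing": {"missingdep": "libevas-svn-05-engines-x (>= 0.9.9.063)"}}]},
    {"package": "libosgal1", "reasons": [
        {"missing": {"missingdep": "libopenscenegraph56 (>= 2.8.1)"}}]},
]
```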

For example, calling distcheck with --summary:

$ ./distcheck.native --summary deb://tests/sid.packages
background-packages: 29589
foreground-packages: 29589
broken-packages: 143
missing-packages: 138
conflict-packages: 5
unique-missing-packages: 52
unique-conflict-packages: 5
summary:
 -
  missing:
   missingdep: libevas-svn-05-engines-x (>= 0.9.9.063)
   packages:
    -
     package: enna-dbg
     version: 0.4.0-4
     architecture: amd64
     source: enna (= 0.4.0-4)
    -
     package: enna
     version: 0.4.0-4
     architecture: amd64
     source: enna (= 0.4.0-4)
 -
  missing:
   missingdep: libopenscenegraph56 (>= 2.8.1)
   packages:
    -
     package: libosgal1
     version: 0.6.1-2+b3
     architecture: amd64
     source: osgal (= 0.6.1-2)
    -
     package: libosgal-dev
     version: 0.6.1-2+b3
     architecture: amd64
     source: osgal (= 0.6.1-2)

Below I give a small example of the edos-debcheck output compared to the new YAML based output.

$ cat tests/sid.packages | edos-debcheck -failures -explain
Completing conflicts...                                            * 100.0%
Conflicts and dependencies...                                      * 100.0%
Solving                                                            * 100.0%
zope-zms (= 1:2.11.1-03-1): FAILED
zope-zms (= 1:2.11.1-03-1) depends on missing:
- zope2.10
- zope2.9
zope-tinytableplus (= 0.9-19): FAILED
zope-tinytableplus (= 0.9-19) depends on missing:
- zope2.11
- zope2.10
- zope2.9
...

And here is an extract from the distcheck output (the order is different; I cut and pasted parts of the output here...)

$ ./distcheck.native -f -e deb://tests/sid.packages
report:
 -
  package: zope-zms
  version: 1:2.11.1-03-1
  architecture: all
  source: zope-zms (= 1:2.11.1-03-1)
  status: broken
  reasons:
   -
    missing:
     pkg:
      package: zope-zms
      version: 1:2.11.1-03-1
      architecture: all
      missingdep: zope2.9 | zope2.10
 -
  package: zope-tinytableplus
  version: 0.9-19
  architecture: all
  source: zope-tinytableplus (= 0.9-19)
  status: broken
  reasons:
   -
    missing:
     pkg:
      package: zope-tinytableplus
      version: 0.9-19
      architecture: all
      missingdep: zope2.9 | zope2.10 | zope2.11
...

## Future

The roadmap to release version 1.0 of distcheck is as follows:

1. Add background and foreground package selection. This feature will allow the user to specify a larger universe (background packages), but check only a subset of it (foreground packages). This should allow users to select packages using grep-dctrl and then pipe them to distcheck. At the moment we can select individual packages on the command line, or use an expression like bash (<= 2.7) to check all versions of bash in the universe that satisfy this constraint.
2. Code cleanup and a bit of refactoring between distcheck and buildcheck (a frontend for distcheck that allows us to report broken build dependencies).
3. Consider essential packages while performing the installation test. There are a few things we still have to understand here, but the idea is to detect possible problems related to the implicit presence of essential packages in the distribution. At the moment, distcheck performs the installation test in an empty universe, while ideally the universe should contain all essential packages.
4. Finish the documentation. The effort is underway and we hope to finalize it shortly and release the debian package in experimental.

## bypassing the apt-get solver

Here at mancoosi we have been working for quite a while to promote and advance solver technology for FOSS distributions.
We are almost at the end of the project and it is important to make the mancoosi technology relevant for the community. One goal of the project is to provide a prototype, based on the results of mancoosi, that can be used to install/remove/upgrade packages on a user machine. We certainly don't want to create yet another meta-installer: this would be very time consuming and would certainly go beyond the scope of the project. The idea is to create a prototype that can work as an apt-get drop-in replacement and that will allow everybody to play with different solvers and installation criteria.

A very first integration step is a small shell script, apt-mancoosi, that tries to put together different tools we have implemented during the project. Roberto wrote extensively about his experience with apt-mancoosi a while ago, showing that the mancoosi tools are already usable, as a proof of concept, to experiment with all the solvers participating in the Misc competition.

One notable obstacle we encountered with apt-mancoosi is how to pipe the result of an external solver to apt-get to effectively install the packages proposed as a solution. Apt-mancoosi fails to be a drop-in replacement for apt-get exactly for this reason. The "problem" is quite simple: the idea at the beginning was to pass to apt-get, on the command line, a request that effectively represents a complete solution. We expected that, since this was already a locked-down solution, apt-get would just install all the packages without any further modification to the proposed installation set. Of course, since apt-get is designed to satisfy a user request, and not just to install packages, we quickly realized that our evil plan was doomed to failure. The only option left was to use libapt directly, but the idea of programming in C++ quickly made me desist.
After a bit of research (not that much, after all), I finally found a viable solution to our problems in `python-apt`, a low-level binding and wrapper around libapt. This definitely made my day.

Now the juicy details. The problem was to convince apt to completely bypass the solver part and just call the installer. First a small intro. python-apt has extensive documentation with a couple of tutorials, and using it is actually pretty easy (a snippet from the python-apt documentation):

```python
import apt

# First of all, open the cache
cache = apt.Cache()
# Now, lets update the package list
cache.update()
```

Here we open the cache (apt.Cache is a wrapper around the low-level bindings in the apt_pkg module), then we update the package list. This is equivalent to apt-get update. Installing a package is equally easy:

```python
import apt

cache = apt.Cache()
pkg = cache['python-apt']
# Mark python-apt for install
pkg.mark_install()
# Now, really install it
cache.commit()
```

Now, the mark_install method of the package module will effectively run the solver to resolve and mark all the dependencies of the package python-apt. This is the default behavior when apt-get is used on the command line. This method, however, has three optional arguments that are just what I was looking for, namely autoFix, autoInst and fromUser. The explanation from the python-apt documentation is quite clear:

    mark_install(*args, **kwds)
        Mark a package for install.
        If autoFix is True, the resolver will be run, trying to fix broken
        packages. This is the default.
        If autoInst is True, the dependencies of the packages will be
        installed automatically. This is the default.
        If fromUser is True, this package will not be marked as automatically
        installed. This is the default. Set it to False if you want to be
        able to automatically remove the package at a later stage when no
        other package depends on it.

What we want is to set autoFix and autoInst to False to completely bypass the solver.
So imagine that an external solver can give us a string of the form bash+ dash=1.4 baobab- that basically asks to install bash at the newest version, install dash at version 1.4 and remove baobab. Suppose also that this is a complete solution, that is, all dependencies are satisfied and there are no conflicts. The work flow of mpm (the mancoosi package manager) is as follows:

• init apt-get,
• convert all package lists + status into a cudf description,
• pass this cudf to an external solver,
• get the result and set all packages to add/remove in the apt.cache of python-apt,
• download the packages,
• commit the changes (effectively, run dpkg).

We already have a first prototype on the mancoosi svn. It's not released yet, as we are waiting to do more testing, add more options and make it stable enough. Maybe one day this will be uploaded to debian.

This is the trace of a successful installation of a package in a lenny chroot. The solver used here is the p2 solver:

dev:~/mpm# ./mpm.py -c apt.conf install baobab
Running p2cudf-paranoid-1.6 solver ...
Validate solution ...
loading CUDF ...
loading solution ...
Summary of proposed changes:
 new: 30
 removed: 0
 replaced: 0
 upgraded: 0
 downgraded: 0
 unsatisfied recommends: 8
 changed: 30
 uptodate: 322
 notuptodate: 116

New packages: baobab (2.30.0-2) dbus-x11 (1.2.24-3) gconf2 (2.28.1-5) gconf2-common (2.28.1-5) gnome-utils-common (2.30.0-2) libatk1.0-0 (1.30.0-1) libcairo2 (1.8.10-6) libdatrie1 (0.2.4-1) libdbus-glib-1-2 (0.88-2) libgconf2-4 (2.28.1-5) libgtk2.0-0 (2.20.1-2) libgtk2.0-common (2.20.1-2) libgtop2-7 (2.28.1-1) libgtop2-common (2.28.1-1) libidl0 (0.8.14-0.1) libjasper1 (1.900.1-7+b1) liborbit2 (1:2.14.18-0.1) libpango1.0-0 (1.28.3-1) libpango1.0-common (1.28.3-1) libpixman-1-0 (0.16.4-1) libthai-data (0.1.14-2) libthai0 (0.1.14-2) libtiff4 (3.9.4-5) libxcb-render-util0 (0.3.6-1) libxcb-render0 (1.6-1) libxcomposite1 (1:0.4.2-1) libxcursor1 (1:1.1.10-2) libxrandr2 (2:1.3.0-3) psmisc (22.11-1) shared-mime-info (0.71-3)
Removed packages:
Replaced packages:
Upgraded packages:
Selecting previously deselected package libatk1.0-0.
(Reading database ... 28065 files and directories currently installed.)
Unpacking libatk1.0-0 (from .../libatk1.0-0_1.30.0-1_i386.deb) ...
Selecting previously deselected package libpixman-1-0.
Unpacking libpixman-1-0 (from .../libpixman-1-0_0.16.4-1_i386.deb) ...
Selecting previously deselected package libxcb-render0.
Unpacking libxcb-render0 (from .../libxcb-render0_1.6-1_i386.deb) ...
Selecting previously deselected package libxcb-render-util0.
Unpacking libxcb-render-util0 (from .../libxcb-render-util0_0.3.6-1_i386.deb) ...
Selecting previously deselected package libcairo2.
Unpacking libcairo2 (from .../libcairo2_1.8.10-6_i386.deb) ...
Selecting previously deselected package libdbus-glib-1-2.
Unpacking libdbus-glib-1-2 (from .../libdbus-glib-1-2_0.88-2_i386.deb) ...
Selecting previously deselected package libidl0.
Unpacking libidl0 (from .../libidl0_0.8.14-0.1_i386.deb) ...
Selecting previously deselected package liborbit2.
Unpacking liborbit2 (from .../liborbit2_1%3a2.14.18-0.1_i386.deb) ...
Selecting previously deselected package gconf2-common.
Unpacking gconf2-common (from .../gconf2-common_2.28.1-5_all.deb) ...
Selecting previously deselected package libgconf2-4.
Unpacking libgconf2-4 (from .../libgconf2-4_2.28.1-5_i386.deb) ...
Selecting previously deselected package libgtk2.0-common.
Unpacking libgtk2.0-common (from .../libgtk2.0-common_2.20.1-2_all.deb) ...
Selecting previously deselected package libjasper1.
Unpacking libjasper1 (from .../libjasper1_1.900.1-7+b1_i386.deb) ...
Selecting previously deselected package libpango1.0-common.
Unpacking libpango1.0-common (from .../libpango1.0-common_1.28.3-1_all.deb) ...
Selecting previously deselected package libdatrie1.
Unpacking libdatrie1 (from .../libdatrie1_0.2.4-1_i386.deb) ...
Selecting previously deselected package libthai-data.
Unpacking libthai-data (from .../libthai-data_0.1.14-2_all.deb) ...
Selecting previously deselected package libthai0.
Unpacking libthai0 (from .../libthai0_0.1.14-2_i386.deb) ...
Selecting previously deselected package libpango1.0-0.
Unpacking libpango1.0-0 (from .../libpango1.0-0_1.28.3-1_i386.deb) ...
Selecting previously deselected package libtiff4.
Unpacking libtiff4 (from .../libtiff4_3.9.4-5_i386.deb) ...
Selecting previously deselected package libxcomposite1.
Unpacking libxcomposite1 (from .../libxcomposite1_1%3a0.4.2-1_i386.deb) ...
Selecting previously deselected package libxcursor1.
Selecting previously deselected package libxrandr2.
Unpacking libxrandr2 (from .../libxrandr2_2%3a1.3.0-3_i386.deb) ...
Selecting previously deselected package shared-mime-info.
Unpacking shared-mime-info (from .../shared-mime-info_0.71-3_i386.deb) ...
Selecting previously deselected package libgtk2.0-0.
Unpacking libgtk2.0-0 (from .../libgtk2.0-0_2.20.1-2_i386.deb) ...
Selecting previously deselected package libgtop2-common.
Unpacking libgtop2-common (from .../libgtop2-common_2.28.1-1_all.deb) ...
Selecting previously deselected package libgtop2-7.
Unpacking libgtop2-7 (from .../libgtop2-7_2.28.1-1_i386.deb) ...
Selecting previously deselected package psmisc.
Unpacking psmisc (from .../psmisc_22.11-1_i386.deb) ...
Selecting previously deselected package dbus-x11.
Unpacking dbus-x11 (from .../dbus-x11_1.2.24-3_i386.deb) ...
Selecting previously deselected package gconf2.
Unpacking gconf2 (from .../gconf2_2.28.1-5_i386.deb) ...
Selecting previously deselected package gnome-utils-common.
Unpacking gnome-utils-common (from .../gnome-utils-common_2.30.0-2_all.deb) ...
Selecting previously deselected package baobab.
Unpacking baobab (from .../baobab_2.30.0-2_i386.deb) ...
Processing triggers for man-db ...
Setting up libatk1.0-0 (1.30.0-1) ...
Setting up libpixman-1-0 (0.16.4-1) ...
Setting up libxcb-render0 (1.6-1) ...
Setting up libxcb-render-util0 (0.3.6-1) ...
Setting up libcairo2 (1.8.10-6) ...
Setting up libdbus-glib-1-2 (0.88-2) ...
Setting up libidl0 (0.8.14-0.1) ...
Setting up liborbit2 (1:2.14.18-0.1) ...
Setting up gconf2-common (2.28.1-5) ...
Creating config file /etc/gconf/2/path with new version
Setting up libgconf2-4 (2.28.1-5) ...
Setting up libgtk2.0-common (2.20.1-2) ...
Setting up libjasper1 (1.900.1-7+b1) ...
Setting up libpango1.0-common (1.28.3-1) ...
Cleaning up font configuration of pango...
Updating font configuration of pango...
Cleaning up category xfont..
Updating category xfont..
Setting up libdatrie1 (0.2.4-1) ...
Setting up libthai-data (0.1.14-2) ...
Setting up libthai0 (0.1.14-2) ...
Setting up libpango1.0-0 (1.28.3-1) ...
Setting up libtiff4 (3.9.4-5) ...
Setting up libxcomposite1 (1:0.4.2-1) ...
Setting up libxcursor1 (1:1.1.10-2) ...
Setting up libxrandr2 (2:1.3.0-3) ...
Setting up shared-mime-info (0.71-3) ...
Setting up libgtk2.0-0 (2.20.1-2) ...
Setting up libgtop2-common (2.28.1-1) ...
Setting up libgtop2-7 (2.28.1-1) ...
Setting up psmisc (22.11-1) ...
Setting up dbus-x11 (1.2.24-3) ...
Setting up gconf2 (2.28.1-5) ...
update-alternatives: using /usr/bin/gconftool-2 to provide /usr/bin/gconftool (gconftool) in auto mode.
Setting up gnome-utils-common (2.30.0-2) ...
Setting up baobab (2.30.0-2) ...
Broken: 0
InstCount: 30
DelCount: 0
dev:~/mpm#

I think we'll keep working on this python prototype for a while, but this is certainly not what we want to propose to the community. The mancoosi package manager is probably going to be written in OCaml and integrated with dose3 and libcudf. This will allow us to gain speed and have a solid language to develop with (nothing against python, but we don't feel that a scripting language is suitable for such an essential component as a package manager). Time will tell. For the moment this is just vapor-ware ...

## alphabetic filter with generic views

The other day I decided to add a small alphabetic filter to search among the broken packages in debian weather. Searching the net for a nice solution I found a few snippets, but none of them struck me as particularly flexible for my needs. I also found a django module, but it seemed overly complicated for such a simple thing. I had a look at the code and generalized the _get_available_letters function that, given a table and a field, gives you back the list of letters used in the table for that specific field. I generalized the code to integrate better with the django relational model. Instead of acting directly on a table (using raw sql), I plug the raw sql statement UPPER(SUBSTR(%s, 1, 1)) into the django query using the extra function. The result is pretty neat, as you don't need to know the underlying model and you can use this method with an arbitrary queryset. This is of course possible thanks to django's laziness in performing sql queries...
```python
def alpha(request, obj, field):
    alphabet = u'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    s = 'UPPER(SUBSTR(%s, 1, 1))' % field
    q = obj.distinct().extra(select={'_letter': s}).values('_letter')
    letters_used = set([x['_letter'] for x in q])
    default_letters = set([x for x in alphabet])
    all_letters = list(default_letters | letters_used)
    all_letters.sort()
    alpha_lookup = request.GET.get('sw', '')
    choices = [{
        'link': '?sw=%s' % letter,
        'title': letter,
        'active': letter == alpha_lookup,
        'has_entries': letter in letters_used,
    } for letter in all_letters]
    all_letters = [{
        'link': '?sw=all&page=all',
        'title': 'All',
        'active': '' == alpha_lookup,
        'has_entries': True,
    }]
    return all_letters + choices
```

This function also gets a request object in order to select the active letter. This is related to the template used to display the result of this view.

```python
queryset = Entry.objects.all()
# defaults: pager + all letters
element_by_page = None
letter = request.GET.get('sw', 'all')
if (letter != 'all') and (len(letter) == 1):
    queryset = queryset.filter(myfield__istartswith=letter)
if request.GET.get('page', None) != 'all':
    element_by_page = ELEMENT_BY_PAGE
```

In my specific case I wanted to have, by default, all letters with pagination, but then be able to switch off pagination and select a specific letter. I use two variables to control all this. The first variable, page, comes with the generic view list_detail. It is usually a number from 0 to the last page and it is used to control pagination. I've added a value all to switch off pagination altogether by setting element_by_page to None. The second variable is sw, which I use to select a specific letter to display.
```python
params = {'choices': alpha(request, Entry.objects, "myfield")}
return list_detail.object_list(
    request,
    paginate_by=element_by_page,
    queryset=queryset,
    template_name='details.html',
    extra_context=params)
```

If at the end of your view you return a generic view as above, the only thing you need is to add a choices field in your template to display the alphabetic filter; it will look something like this:

```html
<link rel="stylesheet" href="{{ MEDIA_URL }}/css/alphabet.css" type="text/css" />
{% if choices %}
<br class="clear" />
<ul class="alphabetfilter">
{% for choice in choices %}
  <li>
  {% if choice.has_entries %}
    <a href="{{ choice.link }}">
    {% if choice.active %}
      <span class="selected">{{ choice.title }}</span>
    {% else %}
      <span class="inactive">{{ choice.title }}</span>
    {% endif %}
    </a>
  {% else %}
    <span class="inactive">{{ choice.title }}</span>
  {% endif %}
  </li>
{% endfor %}
</ul>
<br class="clear" />
{% endif %}
```

This is pretty standard, as it iterates over the list of letters, linking the ones with content. You need to associate a small css file to display the list horizontally. Put this in a file and embed it where you want with an include statement: {% include "forecast/alphabet.html" %}. The code for my application is here if you want to check out more details. You can have a look at the result for debian here.

## dose3 distcheck

A while ago I wrote about the new distcheck tool upcoming in dose3. I've recently updated the proposal on the debian wiki to reflect recent changes in the yaml data structure. The idea was to remove redundant information, to make it easier to read, and at the same time to include enough details to make it easy to use from a script. I'll write down a small example to explain the format. A package can be broken because of a missing package or because of a conflict.
For a missing package we'll have a stanza like this:

package: libgnuradio-dev
version: 3.2.2.dfsg-1
architecture: all
source: gnuradio (= 3.2.2.dfsg-1)
status: broken
reasons:
 -
  missing:
   pkg:
    package: libgruel0
    version: 3.2.2.dfsg-1+b1
    architecture: amd64
    missingdep: libboost-thread1.40.0 (>= 1.40.0-1)
   paths:
    -
     depchain:
      -
       package: libgnuradio-dev
       version: 3.2.2.dfsg-1
       architecture: all
       depends: libgnuradio (= 3.2.2.dfsg-1)
      -
       package: libgnuradio
       version: 3.2.2.dfsg-1
       architecture: all
       depends: libgnuradio-core0
      -
       package: libgnuradio-core0
       version: 3.2.2.dfsg-1+b1
       architecture: amd64
       depends: libgruel0 (= 3.2.2.dfsg-1+b1)

The first part gives details about the package libgnuradio-dev, specifying its status, source and architecture. The second part is the reason for the problem. In this case it is a missing package that is essential to install libgnuradio-dev. missingdep is the dependency of the package libgruel0 that cannot be satisfied, in this case libboost-thread1.40.0 (>= 1.40.0-1). The paths component gives all possible depchains from the root package libgnuradio-dev to libgruel0. Notice that we do not include the last node in the dependency chain to avoid a useless repetition. Of course there might be more than one path to reach libgruel0; distcheck will unroll all of them. Because of the structure of debian dependencies there are usually not so many paths.

The other possible cause of a problem is a conflict. Consider the following:

package: a
version: 1
status: broken
reasons:
 -
  conflict:
   pkg1:
    package: e
    version: 1
   pkg2:
    package: f
    version: 1
   depchain1:
    -
     depchain:
      -
       package: a
       version: 1
       depends: b
      -
       package: b
       version: 1
       depends: e
   depchain2:
    -
     depchain:
      -
       package: a
       version: 1
       depends: d
      -
       package: d
       version: 1
       depends: f

This is the general case of a deep conflict. I use an artificial example here instead of a concrete one, since this case is not very common and I was not able to find one.
To put everything in context, this is the example I've used (it's in cudf format, but I think you get the gist of it):

package: a
version: 1
depends: b, d

package: b
version: 1
depends: e

package: d
version: 1
depends: f

package: f
version: 1
conflicts: e

package: e
version: 1
conflicts: f

The first part of the distcheck report is as before, with details about the broken package. Since this is a conflict, and all conflicts are binary, we first give the two packages involved in the conflict. Packages f and e are in conflict, but they are not direct dependencies of package a. For this reason, we output the two paths that from a lead to f or e. All dependency chains for each conflict are grouped together. Again, since there might be more than one way from a to reach the conflicting packages, we can have more than one depchain.

Another important upcoming change in distcheck (to be implemented soon) is the ability to check if a package is in conflict with an Essential package. In the past, edos-debcheck always checked the installability of a package in the empty universe. This assumption is actually not true for debian, as all essential packages should always be installed. For this reason, distcheck will now check the installability problem not in an empty universe, but in a universe with all essential packages installed. This check is not going to be foolproof, though. Because of the semantics of essential packages, despite it not being possible to remove an essential package tout court, an essential package can be replaced by a non-essential package via the replace mechanism. For example, poking with this feature I noticed that the package upstart in sid replaces sysvinit and is in conflict with it. This is perfectly fine, as it gives a mechanism to upgrade and replace essential components of the system. At the same time, this does not fit the edos-debcheck philosophy of checking packages for installation problems in the empty universe (or in a universe with all essential packages installed).
At the moment we are still thinking about how to address this problem (the long-term solution will be to add the replace semantics to distcheck), but for now we will just provide an option to check packages w.r.t. essential packages, conscious that this can lead to false positives. This work is of course done in collaboration with the mancoosi team in Paris. Dose3 is still not ready for prime time. We are preparing debian packages and we plan to upload them to experimental in the near future.

## latex tables from csv files

While writing scientific papers, we often feel the need to add evidence and data to our claims. This can be done in different ways: tables, graphs, or nice pictures (or something else if you feel creative). The point is that to produce this data, I often end up writing ad-hoc scripts to analyze my results, involving a million invocations of awk, sed, sort, uniq, etc. What I want is a more productive work flow to streamline the boring pipeline Producer | Analyzer | latex.

First I need a suitable output format to collect data from my experiments. In the past I often collected raw data in a non-structured format, then used some kind of parser to extract the important information for a particular figure. Printing non-structured data is a plain bad idea and a pain, as it has to be parsed again before it can be used. Moreover, reusing an old parser is often difficult, as the nature of the experiment, and hence the format of the output, can be completely different. The solution to this problem is to adopt a structured format to print your results. This cuts the need to write a new parser every time, and also helps me to be more consistent across all my experiments. The format itself is not very important. It can be xml, for example, or, if you are less of a masochist, something following the json or yaml standards. I've chosen yaml, a meta language designed to be at the same time human and machine readable.
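For instance, a minimal structured printer in python might look like this (a sketch: the field names are made up for illustration, and since yaml is a superset of json, the stdlib json module is enough to produce output any yaml parser can read back):

```python
# Emit experiment results in a structured format instead of ad-hoc text.
# The output is JSON, which is also valid YAML, so the analyzer step can
# read it back with either a json or a yaml parser.
import json

results = [
    {"package": "gcc-4.3", "broken": 20079, "impactset": 20128},
    {"package": "perl", "broken": 1678, "impactset": 7898},
]

print(json.dumps(results, indent=2))
```

The point is that the producer never prints free-form text: whatever the experiment, the analyzer always starts from the same parser.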
Yaml is fairly easy to produce and very well supported in many programming languages. In particular, yaml is a superset of json, so for simple data structures you can also think of reusing a json printer if you don't have a yaml printer.

The second step is to parse and analyze the experimental data. I often accomplish this step in python. The choice here is quite simple: mangling text with python is very easy, there are a lot of libraries (both native and bindings), and a very nice parser for yaml. If this is not enough, python-numeric, python-matplotlib and python-stats should convince you to adopt it for this task. Surely perl is another choice, but my sense of aesthetics doesn't allow me to go that way.

The third and final step is to convert everything to latex. Yes, it is true that I could generate latex-compatible output directly with python, but this would make the pipeline a bit less flexible, as I might want to use the same data in a web page, for example, without having to write a second printer for html. The solution is to have a generic csv printer and then perform the final conversion with an off-the-shelf tool. For latex, for example, and actually the entire post is about this, I've discovered the module '''datatool'''. This module is a pretty neat solution to embed csv tables (and I think it supports other formats as well) directly into your latex document, taking care of the formatting directly in the document. For example, consider this sample data in csv format:

```
gcc-4.3 (= 4.3.2-1.1) | gcc-4.3-base (= 4.3.2-1.1) | > 4.3.2-1.1 | 20079 | 20128 | 13757
gcc-4.3 (= 4.3.2-1.1) | libstdc++6 (= 4.3.2-1.1) | > 4.3.2-1.1 | 14951 | 14964 | 10573
gcc-4.3 (= 4.3.2-1.1) | cpp-4.3 (= 4.3.2-1.1) | > 4.3.2-1.1 | 2200 | 2226 | 1566
perl (= 5.10.0-19) | perl-modules (= 5.10.0-19) | > 5.10.0-19 | 1678 | 7898 | 1488
perl (= 5.10.0-19) | perl (= 5.10.0-19) | > 6 | 1678 | 7898 | 1488
perl (= 5.10.0-19) | perl (= 5.10.0-19) | 5.10.0-19 < . < 6 | 1678 | 7898 | 1488
python-defaults (= 2.5.2-3) | python (= 2.5.2-3) | > 3 | 1079 | 2367 | 897
python-defaults (= 2.5.2-3) | python (= 2.5.2-3) | 2.06 < . < 3 | 1075 | 2367 | 894
gtk+2.0 (= 2.12.11-4) | libgtk2.0-0 (= 2.12.11-4) | > 2.12.11-4 | 796 | 2694 | 624
glibc (= 2.7-18) | libc6 (= 2.7-18) | > 2.7-18 | 567 | 20126 | 471
```

It is a simple '|'-separated file with 6 columns and no header (I don't like commas). The datatool latex package is part of texlive-latex-extra in debian, and to use it you just need to add \usepackage{datatool} to your preamble. To produce a nice looking latex table, you first need to load the file with the \DTLloaddb command (and you have to specify a proper separator). Without further hesitation, you can then just use the \DTLdisplaydb{table1} command to produce the table. Awesome!

```latex
\DTLsetseparator{|}
\DTLloaddb[noheader,keys={source,package,target,brokenpkg,impactset,brokensource}]{table1}{table1.csv}

\begin{table}[htbp]
\caption{}
\centering
\DTLdisplaydb{table1}
\end{table}
```

But this is not very nice, as there are fields that you don't want to display. The datatool package is actually pretty flexible, and this is how you print a table with only three columns:

```latex
\begin{table}[htbp]
\caption{}
\centering
\begin{tabular}{lll}
{\bf Package} & {Target Version} & {Broken}%
\DTLforeach{table1}{%
\package=package,\target=target,\brokensource=brokensource}{%
\\ \package & $\target$ & \brokensource}
\end{tabular}
\end{table}
```

There are a lot of nice short-cuts to print your table. Looking at the documentation, it looks like a very powerful tool to have. This made my day.

## Enforcing Ads on apple products

I just stumbled on this patent application from apple. The content is quite hilarious and scary at the same time:

"Apple can further determine whether a user pays attention to the advertisement.
The determination can include performing, while the advertisement is presented, an operation that urges the user to respond; and detecting whether the user responds to the performed operation. If the response is inappropriate or nonexistent, the system will go into lock down mode in some form or other until the user complies. In the case of an iPod, the sound could be disconnected rendering it useless until compliance is met. For the iPhone, no calls will be able to be made or received."

I would say that the future of apple products is not for the faint of heart... Maybe I'm a control freak, but if I buy a product from somebody, I would like to decide how and when to use it, rather than let somebody else enforce any kind of behavior on me... ahahah, and I'm sure apple fan boys will just swallow this as a new fantastic advancement in technology and design :)

## connected components with graphviz

I just discovered the gvpr transformation language that comes with graphviz. Up until now, I spent far too much time manipulating dot graphs for various reasons. gvpr is an awk-like language to manipulate dot graphs. It seems pretty complete, and allows you to do a lot of simple operations in one line. One small example is to split a graph into its connected components, writing each component to a separate file. The one-liner is:

```
ccomps -x dominators.dot | gvpr -f split.gvpr
```

where ccomps is a tool (also part of the graphviz suite) that computes the connected components of a dot graph. The option -x creates a digraph per connected component (by default it creates a graph with a lot of subgraphs). The result of this command is piped to gvpr, the graphviz language interpreter. The program itself (split.gvpr) is very simple:

```
BEGIN { int n; }
BEG_G {
  n = nNodes($G);
  if (n > 2) writeG($G, $G.name);
}
END {}
```

For each graph, if the graph has more than two nodes, write the graph to a file. And voila !
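If you don't have graphviz at hand, the same split can be sketched in plain python over an edge list (a toy version: it ignores dot syntax entirely and, like ccomps, treats edges as undirected):

```python
# Split a graph into connected components and keep only those with
# more than two nodes, mirroring `ccomps -x ... | gvpr -f split.gvpr`.
from collections import defaultdict

edges = [("a", "b"), ("b", "c"), ("d", "e"), ("f", "g"), ("g", "h"), ("h", "f")]

# Build an undirected adjacency map.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def components(adj):
    """Return the connected components as a list of node sets."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, todo = set(), [start]
        while todo:
            n = todo.pop()
            if n in comp:
                continue
            comp.add(n)
            todo.extend(adj[n])
        seen |= comp
        comps.append(comp)
    return comps

# Same filter as the gvpr program: keep components with more than 2 nodes.
big = [c for c in components(adj) if len(c) > 2]
```

Of course the gvpr version also preserves node and edge attributes, which is exactly why it is the better tool for real dot files.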

## hidden ssh service via tor

We live in a nat-ed / firewall-ed world. Almost all DSL providers don't give public IPs, and when they do, they are often behind a draconian firewall. In this context, having an emergency remote shell, even if slow and not public, is very handy. A simple way to get one is to create a hidden service on the tor network, and then access the shell from anywhere in the world without caring about changing IPs, routing, dns or anything else.

On debian you can just install tor from the official repository. Since tor is not available in ubuntu, on that distribution we need to get it directly from the tor website. There is a nice write-up on the ubuntu site: https://help.ubuntu.com/community/Tor . And these are the details on the tor website.

So we add this line to our apt sources:

deb http://deb.torproject.org/torproject.org lucid main

and then we aptitude install tor. The package will install and run the tor daemon by default. The next step is to edit /etc/tor/torrc to configure the hidden service:

HiddenServiceDir /var/lib/tor/ssh/
HiddenServicePort 22 127.0.0.1:22

Remember also to install the openssh-server package if you don't have it already. And that's it. In the directory /var/lib/tor/ssh/ you will find a file with the hostname on the tor network that you have to use to connect to your new hidden server.

On the client side, we need to aptitude install connect-proxy. It's a simple tool to tunnel ssh through a socks5 connection. Now you are ready to test. In your ~/.ssh/config you can simply add something like:

Host *.onion
ProxyCommand connect -R remote -5 -S 127.0.0.1:9050 %h %p

and then ssh youronionhost.onion. The connection will be veeeeery slow, since you are going through several layers of encryption and indirection. You should also check the host key of your server before connecting and before dropping in a pub-key, as you should never trust your friendly tor providers (US govt, Chinese govt, Iranian govt, etc ...).

For emergencies it is actually pretty handy. For anything else it will make you die of boredom ...

## Update

Ahhh. It seems I had assumed that since tor was not available on ubuntu, the same was true on debian. Tor is definitely available on debian, but not on ubuntu. Blah... check before asserting wrong information ! Post fixed.