dose has a new git repository and mailing lists!

Recently we did a bit of cleanup of our git repositories and now, thanks to Roberto's efforts, we have a shiny new git repository on the INRIA forge and two mailing lists to discuss development and user questions.

If you are a user, or interested in dose development, please sign up to these mailing lists:

  • dose-discuss: http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/dose-discuss
  • dose-devel: http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/dose-devel

If you already have a copy of the git repository, you can change the upstream repository by issuing the following command:

     git config remote.origin.url git://gforge.inria.fr/dose/dose.git
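To see the effect of that command, here is the same change exercised in a throwaway repository (the old URL is made up for the demo); in a real clone you would only run the `git config` line:

```shell
# demo in a throwaway repository so it is safe to run anywhere
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git remote add origin git://example.org/old-dose.git
# point origin at the new repository on the inria forge
git config remote.origin.url git://gforge.inria.fr/dose/dose.git
url=$(git config remote.origin.url)
```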

If you are curious, you can clone dose (git clone git://gforge.inria.fr/dose/dose.git) and let us know what you think about it.

The API documentation is available here. The man pages of the various applications developed on top of the dose library are available here. We are still actively working on the documentation and any contribution is very much welcome. I’m working on a nice homepage…

You can get the latest tarball release here: https://gforge.inria.fr/frs/?group_id=4395. Old releases will be left on the old forge.

Update

And now we are even social! Follow us on identi.ca: http://identi.ca/group/dose3


Package Managers Comparison - take 2

One year ago, we (the mancoosi team) published a comparison study of the state of the art of dependency solving in debian. As a few people noticed, the data presented had a few glitches that I promised to fix. So we have repeated our tests using exactly the same data we used one year ago, but now with the latest versions of all package managers as available in debian unstable.

During the last year, three out of the four solvers that we evaluated released a major upgrade, so I expected many improvements in performance and accuracy.

  • apt-get 0.8.10 -> 0.9.7
  • aptitude 0.6.3 -> 0.6.7
  • smart 1.3-1 -> 1.4
  • cupt 1.5.14.1 -> 2.5.6

Mpm, our test bench for new technologies, changed quite a bit under the hood as a consequence of the evolution of apt-cudf and the recent work done in apt-get to integrate external cudf dependency solvers.

Overall the results of our study have not changed. All solvers but mpm, which is based on aspcud, fail to scale as the number of packages (and alternatives) grows. Smart is the solver that does not give up, but as a consequence it incurs a timeout (fixed at 60 seconds) most of the time. Aptitude is the solver that always tries to give you a solution, no matter what, and as a result it often provides solutions that do not satisfy the user request for one reason or another. Apt-get does surprisingly well, but it gives up pretty often, showing the incomplete nature of its internal solver. Cupt sometimes times out, sometimes gives up, but when it is able to provide an answer it is usually optimal, and it is very fast… Mpm consistently finds an optimal solution, but sometimes it takes a really long time to do it. Since mpm is written in python and not optimized for speed, this is not a big problem for us. The technology used by mpm is now integrated in apt-get and I hope this will alleviate the problem.

All the details of our study can be found on the Mancoosi website, as usual with plenty of data. For example, here you can find the results when mixing four major releases: sarge-etch-lenny-squeeze.

Comments are more than welcome.


apt-get with external solvers: call for testers

Last year we invited David to work with us for a few days to add a generic interface to apt for calling external solvers. After a few iterations, this patch finally landed in master and recently (about 3 months ago) in debian unstable.

    [ David Kalnischkies ]
    * [ABI-Break] Implement EDSP in libapt-pkg so that all front-ends which
      use the internal resolver can now be used also with external ones as
      the usage is hidden in between the old API
    * provide two edsp solvers in apt-utils:
      - 'dump' to quickly output a complete scenario and
      - 'apt' to use the internal as an external resolver

Today the new version of apt-cudf was uploaded to unstable and with it the latest bug fixes that make it ready for daily use. I’ve used it quite a lot myself to upgrade my machine and it seems to be working pretty well so far… The most important difference with the old version is the support for multi-arch enabled machines.

This marks an important milestone in our efforts to integrate external solvers, built using different technologies, directly into apt. From a user perspective, this means that they will have the possibility to check whether there exists a better (best?) solution to an installation problem than the one proposed by the internal apt solver. Moreover, even if apt-get gives very satisfactory answers, there are occasions where it fails miserably, leaving the user wondering how to unravel the complex web of dependencies to accomplish their task. The cudf solvers available in debian at the moment are: aspcud, mccs and packup.

From an architectural point of view this is accomplished by abstracting the installation problem via a simple textual protocol (EDSP) and using an external tool to do the heavy-duty translation. Since the solvers now available in debian are not debian-specific, using them involves a two-step translation. The EDSP protocol specification is for the moment “hidden” in the apt documentation. I hope to find a better place for it soon: it would be cool if other package managers such as smart or cupt could add an implementation of EDSP in their code so as to automatically benefit from this technology.
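To give a flavour of the protocol: an EDSP scenario is a deb822-style text file, with a request stanza followed by one stanza per package. The sketch below is illustrative only (the version string, APT-ID and package data are invented; check the spec shipped with apt for the exact field set):

```
Request: EDSP 0.5
Install: gnome

Package: gnome
Version: 1:3.4+7
Architecture: amd64
APT-ID: 42
Depends: gnome-core
```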

To make it happen, apt first creates an EDSP file that is then handed to apt-cudf, which takes care of the translation to cudf and back into EDSP, which is then read back by apt. Apt-cudf is the bridge between EDSP and the external solvers: it takes care of the bookkeeping and of selecting the right optimization criteria.

Roberto recently wrote a very nice article explaining how to use apt-get with an external solver.

In a nutshell, if you want to try this out you just need to install apt-cudf and one external solver, such as aspcud from the University of Potsdam, and then call apt-get using the --solver option (which is not yet documented #67442). For example:

apt-get install -s gnome --solver aspcud

This will install gnome using an optimization criterion that tries to minimize the changes on the system. Various other optimization criteria for all the apt-get default actions can be specified in the apt-cudf configuration file /etc/apt-cudf.conf.
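As a sketch of what that configuration can look like (the criteria strings follow the aspcud/cudf optimization language; field names here are from memory and should be checked against the apt-cudf documentation):

```
# /etc/apt-cudf.conf (illustrative sketch)
solver: aspcud
criteria: -removed,-changed
criteria.upgrade: -removed,-notuptodate,-unsat_recommends
```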

I hope the new release of apt-cudf makes it into testing before the freeze. Time to test!


managing puppet manifests with gitolite

Managing your puppet manifests with a vcs is a best practice and there is a lot of material about it on the web. The easiest way to do it is to use git directly in the directory /etc/puppet together with a simple synchronization strategy with an external repo, either to publish your work or simply to keep a backup somewhere.

Things are a bit more complicated when you would like to co-administer the machine with multiple people. Setting up user accounts, permissions and everything else can be a pain in the neck. Moreover, working from your desktop is always more comfortable than logging in as root on a remote system and making changes there…

The solution I’ve chosen to make my life a bit easier is gitolite, a simple git gateway that uses ssh public keys for authentication and does not require the creation of local users on the server machine. Gitolite is available in debian and installing it is super easy: apt-get install gitolite .

If you use puppet already you might be tempted to use puppet to manage your gitolite installation. This is all good, but I don’t advise you to use modules like this one http://forge.puppetlabs.com/gwmngilfen/gitolite/1.0.0 as it is going to install gitolite from source, which on debian is not necessary… For my purposes, I didn’t find it necessary to manage gitolite with puppet, as all the default config options were good enough for me.

Once the debian package is installed, in order to initialize your repositories you just need to pass to gitolite the admin public key, that is your .ssh/id_rsa.pub key, and then run this command:

sudo -H -u gitolite gl-setup /tmp/youruser.pub

This will create the admin and testing repos in /var/lib/gitolite/repositories and set up a few other things. At this point you are ready to test your gitolite installation by cloning the admin repo:

git clone gitolite@example.org:gitolite-admin.git

Gitolite is engineered so that only the gitolite user is used to manage all your repositories. To add more repositories and users you should have a look at the documentation and then edit the file conf/gitolite.conf to add your new puppet repository.
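For example, granting two (hypothetical) users access to a new puppet repository looks like this; the user names must match public key files keydir/alice.pub and keydir/bob.pub in the gitolite-admin repo:

```
# conf/gitolite.conf
repo    gitolite-admin
        RW+     =   admin

repo    puppet
        RW+     =   alice bob
```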

At this point, you can go two ways. If you already use git to manage your puppet directory, you can just make a copy of it somewhere and then add gitolite as a remote:

git remote add origin gitolite@example.org:puppet.git

If you didn’t use git before, you can just copy the manifests into your new git repository, make a first commit and push it to the server.

git push origin master
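The whole first-push flow can be rehearsed locally; here a throwaway bare repository stands in for gitolite@example.org:puppet.git (all paths and names are examples):

```shell
# throwaway demo of the initial import and push
tmp=$(mktemp -d)
git init -q --bare "$tmp/puppet.git"          # stand-in for the gitolite repo
mkdir "$tmp/work"
cd "$tmp/work"
git init -q .
echo "node default {}" > site.pp              # a minimal manifest
git add site.pp
git -c user.name=demo -c user.email=demo@example.org commit -qm "import manifests"
git remote add origin "$tmp/puppet.git"
git push -q origin HEAD:master
commits=$(git --git-dir="$tmp/puppet.git" rev-list --count master)
```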

Every authorized user can now use git to clone your puppet repository, hack, commit, push…

git clone gitolite@example.org:puppet

One last step is to add a small post-receive hook on the server to synchronize your gitolite repository with the puppet directory in /etc. This will sync your main puppet directory and trigger the changes on the nodes at the next puppetd run. First I created a small shell script in /usr/local/bin/puppet-post-receive-hook.sh :

#!/bin/bash
umask 0022
cd /etc/puppet
git pull -q origin master

This script presupposes that your git repo in /etc/puppet has the gitolite repo as origin … Then I added a simple hook in the gitolite git repo that calls this script using sudo :

sudo /usr/local/bin/puppet-post-receive-hook.sh
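For the sudo call to work non-interactively, the gitolite user needs a sudoers entry for exactly that script. A sketch, assuming the debian package runs gitolite as the user "gitolite":

```
# /etc/sudoers.d/gitolite-puppet (sketch)
gitolite ALL=(root) NOPASSWD: /usr/local/bin/puppet-post-receive-hook.sh
```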

And while you are at it, you should also add a pre-commit hook to check the manifest syntax. This will save you a lot of useless commits.

If you have a more complicated puppet setup using environments (I’m not there yet, and I don’t think my setup will evolve in that direction in the near future), you can use puppet-sync, which seems like a neat script for the job.

For the moment this setup works pretty well. I’m tempted to explore mcollective to trigger puppet runs on my nodes, but I’m not there yet…


configuring a local apt repository for puppet

Puppet has built-in functionality to serve small files to its clients. However, for my internal use I sometimes find it easier to create a custom debian package to install a specific component than to write a puppet recipe and copy files around.

To create a local debian repository I use the package reprepro. It is a simple tool that creates and manages apt repositories; it is easy to configure and so far it has fully lived up to my expectations.

First of all you need to create a configuration file where you describe your distribution. In this case I chose /var/www/debian/conf/distributions and added the following content:

Origin: PCPool
Label: PCPool
Suite: stable
Codename: pcpool
Version: 3.0
Architectures: i386 amd64
Components: contrib
Description: puppet support package repository
SignWith: D3CF695E

Notice that since reprepro wants to sign your repository, you need to provide a gpg keyid for it.

Adding a package to the repository is straightforward:

reprepro -Vb /var/www/debian/ includedeb pcpool /tmp/msm_1-2_all.deb

As I said, since the repository is signed, we need a way to add the keyid to the known keys of the target machine. In order to achieve this, we add the following puppet recipe:

class apt {
    #local repo sign key
    $keyid = "D3CF695E"

    exec { "apt-update":
        command => "/usr/bin/apt-get update",
        refreshonly => true;
    }

    file { "/etc/apt/trusted.gpg.d/pcpool.gpg":
        source => "puppet://$server/etc/apt/trusted.gpg.d/pcpool.gpg"
    }

#    file { "/root/pcpool.key":
#       source => "puppet://$server/files/root/pcpool.key"
#    }

#    exec { "apt-key":
#        path        => '/bin:/usr/bin',
#        environment => 'HOME=/root',
#        command     => "apt-key add /root/pcpool.key",
#        unless      => "apt-key list | grep $keyid",
#        subscribe   => File["/root/pcpool.key"]
#    }

    file { "/etc/apt/sources.list.d/puppet.list":
        content => "deb http://puppet/debian/ pcpool contrib\n",
        owner   => root,
        group   => root,
        mode    => 0644,
        notify  => Exec["apt-update"]
    }
}

class msm {
    package { "msm": ensure => installed }
}

First we copy the key that we have stored in the puppet file bucket to the client, then we exec the apt-key command (in the commented-out variant above). Note that since puppet does not guarantee the order in which resources are applied, we must specify an execution order using the attributes subscribe and notify. Similarly, as soon as the file /etc/apt/sources.list.d/puppet.list is added to the machine, we run apt-get update to refresh the apt cache.

The last stanza simply installs the package that we added to the local repository.
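To actually apply these classes, a node declaration along these lines (the hostname is an example) goes in the site manifest:

```puppet
# site.pp (illustrative)
node 'client.example.org' {
    include apt
    include msm
}
```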

Update

There is a better way to add a gpg key: put it in the /etc/apt/trusted.gpg.d directory. Thanks for the suggestion!