Jenkins + VirtualBox

Developing software while working at university invariably puts you in an uncomfortable position. On one side, academia is one of the driving forces behind good software development practices: we study, analyze, test, defend and sometimes attack different development methodologies, and more importantly we teach students what they should do once outside academia. On the other hand, precisely because our primary job is to do all of the above, it is sometimes difficult to follow these best practices when developing software ourselves. Sometimes it is a matter of mindset, sometimes a matter of resources and time.

Today I invested a bit of time to configure and install virtualbox and run a jenkins instance on it. I prefer not to litter my laptop with jenkins, as I know I won't run it all the time and I don't want hundreds of MBs of unused dependencies lying around.

Installing virtualbox is pretty easy: it's in the debian repos, just one apt-get away. Once installed, you need to create a virtual machine. For this purpose I simply downloaded the netinstall CD and used it in the VB GUI as the installation CD. Everything went smoothly and my VM was up and running in no time.

By default VB sets up a NAT network on the first adapter (eth0). This is nice and easy if you want a machine that does not need to be reached from the outside world. On the other hand, if you want to connect to this machine, you need to do a bit more work. To this end I added a host-only network between the guest VM and the host. The catch is that you first need to create a host adapter on the host machine: simply go to File -> Preferences -> Network and create a new interface. This is the interface that will appear on your host. On the guest side, configure the second adapter (eth1) as a host-only network and select the interface that you just created.

The first time you run the VM, the NAT connection should work straightaway, while the second interface will not. To fix this problem you need to edit the file /etc/network/interfaces and set eth1 to be configured automatically using dhcp.
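On a debian guest this amounts to adding two lines (eth1 is the second adapter configured above):

```
auto eth1
iface eth1 inet dhcp
```

Bring the interface up with ifup eth1 or simply reboot the guest.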

Once this is all done, we need to install jenkins. This is pretty easy as well.

The jenkins wiki gives all the explanations you need.
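For reference, at the time of writing the wiki instructions for debian boil down to adding the jenkins apt repository and its key, then installing the package; double-check the URLs below against the wiki before using them:

```shell
# add the jenkins repository key and apt source, then install
wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | apt-key add -
echo "deb http://pkg.jenkins-ci.org/debian binary/" > /etc/apt/sources.list.d/jenkins.list
apt-get update
apt-get install jenkins
```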

Once this is done, point your browser on the host to port 8080 on the guest's host-only address, et voila! You can start playing with jenkins!


configuring a local apt repository for puppet

Puppet has built-in functionality to serve small files to its clients. However, for my internal use I sometimes find it easier to create a custom debian package to install a specific component than to write a puppet recipe that copies files around.

To create a local debian repository I use the package reprepro. It is a simple tool that creates and manages apt repositories; it is easy to configure and so far it has fully lived up to my expectations.

First of all you need to create a configuration file describing your distribution. In this case I chose /var/www/debian/conf/distributions and added the following content:

Origin: PCPool
Label: PCPool
Suite: stable
Codename: pcpool
Version: 3.0
Architectures: i386 amd64
Components: contrib
Description: puppet support package repository
SignWith: D3CF695E

Notice that since reprepro signs your repository, you need to provide a gpg key id (SignWith) for it.

Adding a package to the repository is straightforward:

reprepro -Vb /var/www/debian/ includedeb pcpool /tmp/msm_1-2_all.deb
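To double-check that the package actually landed in the repository, reprepro can list the content of a distribution (pcpool is the codename defined above):

```shell
# show all packages currently in the pcpool distribution
reprepro -b /var/www/debian/ list pcpool
```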

As I said, since the repository is signed, we need a way to add the key to the known keys of the target machine. To achieve this, we add the following puppet recipe:

class apt {
    # local repo sign key
    $keyid = "D3CF695E"

    exec { "apt-update":
        command     => "/usr/bin/apt-get update",
        refreshonly => true
    }

    file { "/etc/apt/trusted.gpg.d/pcpool.gpg":
        source => "puppet://$server/etc/apt/trusted.gpg.d/pcpool.gpg"
    }

#    file { "/root/pcpool.key":
#       source => "puppet://$server/files/root/pcpool.key"
#    }

#    exec { "apt-key":
#        path        => '/bin:/usr/bin',
#        environment => 'HOME=/root',
#        command     => "apt-key add /root/pcpool.key",
#        unless      => "apt-key list | grep $keyid",
#        subscribe   => File["/root/pcpool.key"]
#    }

    file { "/etc/apt/sources.list.d/puppet.list":
        content => "deb http://puppet/debian/ pcpool contrib\n",
        owner   => root,
        group   => root,
        mode    => 0644,
        notify  => Exec["apt-update"]
    }
}

class msm {
    package { "msm": ensure => installed }
}

First we copy the key that we have stored in the puppet file bucket to the root directory of the client, then we exec the apt-key command. Note that since puppet does not guarantee the order in which resources are applied, we must specify an execution order explicitly using the subscribe and notify attributes. Similarly, as soon as the file /etc/apt/sources.list.d/puppet.list is added to the machine, we run apt-get update to refresh the apt cache.

The last stanza simply installs the package that we added to the local repository.


There is a better way to add a gpg key: put it in the /etc/apt/trusted.gpg.d directory. Thanks for the suggestion!
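Exporting the key in the binary format apt expects is a one-liner (D3CF695E is the key id used above; where you put the exported file so puppet can serve it depends on your setup):

```shell
# export the public key in binary format for /etc/apt/trusted.gpg.d/
gpg --export D3CF695E > pcpool.gpg
```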

bootstrap puppet with ganeti

Third post about ganeti.

Ganeti-debootstrap-instance contains a nice set of scripts to create a debian (or derivative) image using debootstrap. Images can be configured and customized by writing simple hook scripts that modify various aspects of the default installation. However, writing these scripts is not really fun, and pushing them too far can lead to long messy scripts, losing the overall benefit of automatic configuration.

Puppet is my configuration management tool of choice, but installing puppet on a new machine requires a few magic incantations that the user must perform manually, or in semi-automatic mode (autosign=true), to make it work. My goal is to install puppet automatically on the newly created instance so that it will run and configure the new instance at first boot. From that moment on I'll forget about ganeti and configure all remaining services of my new VM using puppet.

In order to do so, we need to install puppet (after an apt-get update/upgrade), create the ssl certificates for the client and enable the puppet daemon on the client. We add another hook in /etc/ganeti/instance-debootstrap/hooks/ :

if [ -z "$TARGET" -o ! -d "$TARGET" ]; then
  echo "Missing target directory"
  exit 1
fi

# puppet wants a fqdn; $DOMAIN is not set by ganeti, adapt it to your setup
instance="$INSTANCE_NAME.$DOMAIN"

chroot "$TARGET" apt-get -y --force-yes update
chroot "$TARGET" apt-get -y --force-yes upgrade

# install puppet on the client
chroot "$TARGET" apt-get -y --force-yes install puppet

echo "Installing puppet certificates for $instance"
puppetca clean $instance
puppetca -g $instance

mkdir -p $TARGET/etc/puppet
mkdir -p $TARGET/var/lib/puppet/ssl/private_keys/
mkdir -p $TARGET/var/lib/puppet/ssl/certs/

cp /var/lib/puppet/ssl/private_keys/$instance.pem $TARGET/var/lib/puppet/ssl/private_keys/
rm -f $TARGET/var/lib/puppet/ssl/public_keys/$instance.pem

cp /var/lib/puppet/ssl/certs/$instance.pem $TARGET/var/lib/puppet/ssl/certs/
cp /var/lib/puppet/ssl/certs/ca.pem $TARGET/var/lib/puppet/ssl/certs/

chown root. $TARGET/var/lib/puppet/ssl/private_keys/$instance.pem
chmod 0400 $TARGET/var/lib/puppet/ssl/private_keys/$instance.pem

chown root. $TARGET/var/lib/puppet/ssl/certs/$instance.pem
chmod 0640 $TARGET/var/lib/puppet/ssl/certs/$instance.pem

chown root. $TARGET/var/lib/puppet/ssl/certs/ca.pem
chmod 0641 $TARGET/var/lib/puppet/ssl/certs/ca.pem

#echo "server=puppet" >> /etc/puppet/puppet.conf

echo "START=yes" > $TARGET/etc/default/puppet
echo "DAEMON_OPTS=\"\"" >> $TARGET/etc/default/puppet

This script uses puppetca to create the client key on the puppet (and ganeti) server, sign it, and then copies it to the target machine. Notice that we create the certificate for a fqdn $INSTANCE_NAME.$DOMAIN, otherwise puppet will complain loudly. This is not strictly needed, but if you want to do otherwise you'll need to fiddle with the puppet configuration a bit more. The procedure to create a puppet certificate server-side is well documented on the puppet website, so if you are curious about the details, duck-duck-it.


add swap hook for ganeti-debootstrap-instance

Second post about ganeti. This time I'll talk about adding a swap partition to an instance created with ganeti-debootstrap-instance. Browsing the web, it seems that an old version of the ganeti debootstrap script allowed the creation of a swap partition from the command line. The current version in sid does not, so if you want to add a swap partition you need to write a small hook in /etc/ganeti/instance-debootstrap/hooks/.

Part of the code below is taken from the instance-debootstrap script.

if [ $DISK_COUNT -lt 2 -o -z "$DISK_1_PATH" ]; then
    log_error "Skip swap creation"
    exit 0
fi

# the second disk passed to gnt-instance add becomes the swap device
swapdev="$DISK_1_PATH"

# Make sure we're not working on the root directory
if [ -z "$TARGET" -o "$TARGET" = "/" ]; then
    echo "Invalid target directory '$TARGET', aborting." 1>&2
    exit 1
fi

if [ "$(mountpoint -d /)" = "$(mountpoint -d "$TARGET")" ]; then
    echo "The target directory seems to be the root dir, aborting." 1>&2
    exit 1
fi

if [ -f /sbin/blkid -a -x /sbin/blkid ]; then
  VOL_ID="/sbin/blkid -o value -s UUID"
  VOL_TYPE="/sbin/blkid -o value -s TYPE"
else
  for dir in /lib/udev /sbin; do
    if [ -f $dir/vol_id -a -x $dir/vol_id ]; then
      VOL_ID="$dir/vol_id -u"
      VOL_TYPE="$dir/vol_id -t"
    fi
  done
fi

if [ -z "$VOL_ID" ]; then
  log_error "vol_id or blkid not found, please install udev or util-linux"
  exit 1
fi

if [ -n "$swapdev" ]; then
  mkswap $swapdev
  swap_uuid=$($VOL_ID $swapdev || true)
fi

[ -n "$swapdev" -a -n "$swap_uuid" ] && cat >> $TARGET/etc/fstab <<EOF
UUID=$swap_uuid   swap            swap    defaults        0       0
EOF

This script does two things. First it checks whether the user passed a second disk argument to the gnt-instance add call; I decided arbitrarily that the second disk is going to be used as the swap disk. Second, it figures out the uuid of this disk, creates the swap partition and writes an entry in the fstab. All in all it's a straightforward procedure, but I love it when I can cut and paste easy scripts :)

The call to create the instance is as follows, using a disk of 5G for the system and a disk of 1G for swap.

gnt-instance add -t plain --disk 0:size=5G --disk 1:size=1G -B memory=1024 -o debootstrap+unstable --no-ip-check --no-name-check node1


how to create VMs with ganeti / xen and dnsmasq

I'll start here a small series of posts about ganeti, xen and puppet. For my work I run a few servers sitting on xen, and it has always been a bit of a pain to create a new instance and keep it up to date. Up to now I've used the excellent xen-create-image tool to create my VMs, but I wanted to try something new and more sexy... Last week I finally found some time (and a spare box to run my experiments) to learn how to use ganeti. Ganeti is the only tool I tried, but it seems to fit the bill for my use and it looks like a polished and mature project to me... Moreover, I've seen a presentation about it at every FLOSS conference I've attended in the last few years and I thought it was time to give it a try.

Installing and configuring ganeti is fairly easy and there is a lot of documentation available, so this post is not going to be about installing it, but rather about how to create a new bare instance with ganeti-debootstrap-instance. There is also a way to create a new instance from an image, but I haven't gone that way yet.

This first post is about the first problem I've encountered: how to automatically assign a network address and a name to each new instance created by gnt-instance add. Since all my instances should be able to communicate on the same subnet, I've decided to configure xen to create a NATted private network and to add each new instance to it.

The first step is to create an interface in /etc/network/interfaces .

auto xen-br0
iface xen-br0 inet static
    bridge_stp off
    bridge_fd 0
    bridge_ports none

This is the standard debian way, but since xen uses a different naming convention (ganeti's xen-br0 vs xen's default xenbr0), I need to tell xen what I intend to do by adding these lines in /etc/xen/xend-config.sxp :

(network-script 'network-virtual bridgeip="" brnet="" bridge="xen-br0"')
(vif-script     vif-bridge)

Next I have to connect my real network interface to the private network using a few iptables rules in /etc/rc.local (there is probably a better place to do this...):

echo 1 > /proc/sys/net/ipv4/ip_forward
/sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
/sbin/iptables -A FORWARD -i eth0 -o xen-br0 -m state --state RELATED,ESTABLISHED -j ACCEPT
/sbin/iptables -A FORWARD -i xen-br0 -o eth0 -j ACCEPT

The xen setup is now complete and every new instance should have a vif connected to the subnet; this setup corresponds to the physical wiring of the network. The next step is to configure each instance so that they can communicate on this subnet. Since I build my VMs using ganeti-debootstrap-instance, and by default debootstrap does not configure the network, we need to add a new hook in the directory /etc/ganeti/instance-debootstrap/hooks/.

if [ -z "$TARGET" -o ! -d "$TARGET" ]; then
  echo "Missing target directory"
  exit 1
fi

if [ ! -d "$TARGET/etc/network" ]; then
  echo "Missing target network directory"
  exit 1
fi

if [ -z "$NIC_COUNT" ]; then
  echo "Missing NIC COUNT"
  exit 1
fi

if [ "$NIC_COUNT" -gt 0 ]; then

  cat > $TARGET/etc/network/interfaces <<EOF
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
EOF

fi

DAEMON_PID_FILES="/var/run/ /var/run/dnsmasq/"

instance="$INSTANCE_NAME"
[ -n "$instance" ] || exit 1
# ganeti exports the mac of nic <n> as NIC_<n>_MAC
nic_count=$((NIC_COUNT - 1))
mac_var="NIC_${nic_count}_MAC"
mac=$(eval echo \$$mac_var)
echo $mac_var
echo $nic_count
echo $mac
echo "dhcp-host=$mac,$instance" > /etc/dnsmasq.d/$instance.conf

This hook does two things. First it configures the interfaces of the new instance to be set up via dhcp; second, it adds an entry to the dnsmasq configuration to make the instance known to the world. This basically boils down to adding a file in /etc/dnsmasq.d/ with the mac address of the new instance and its designated name. Dnsmasq will then provide an ip address for this instance and add it to the dns.


Configuring dnsmasq is pretty easy as well. First I want it to answer dhcp queries only on the internal network; second I want to configure my clients with the right nameserver and gateway. You can just add a few lines in /etc/dnsmasq.d/general to get it going.
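A minimal sketch of such a configuration (the interface name matches the bridge above; the subnet and addresses are illustrative assumptions, adapt them to your network) would be something like:

```
interface=xen-br0
dhcp-range=192.168.1.10,192.168.1.100,12h
dhcp-option=option:router,192.168.1.1
```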


To create your new instance you can just run the following command :

gnt-instance add -t plain -s 5g -B memory=1024 -o  debootstrap+unstable --no-ip-check --no-name-check node1

If you are running your dom0 on debian squeeze, before running this command you should configure ganeti to pass the right xen parameters to the newly created instances:

gnt-cluster modify --hypervisor-parameter xen-pvm:root_path='/dev/xvda1'
gnt-cluster modify --hypervisor-parameter xen-pvm:initrd_path='/boot/initrd-2.6-xenU'

I use --no-ip-check and --no-name-check to skip the ip and dns checks performed by ganeti and to avoid a sort of chicken-and-egg problem: the name and address of the new instance are not yet known to dnsmasq, and node1 is the name that will be used by the hook to add an entry to the dnsmasq configuration. debootstrap+unstable is a variant of the default configuration; you need to add it to the list of variants used by ganeti-debootstrap-instance.

That should be it. The new instance should come up with a dynamically assigned ip address, able to talk to the outside world and automatically known to all the other machines on the subnet via dns.

The next post will be about how to add a swap hook for ganeti-debootstrap-instance.


How to convince cupt and smartpm to ignore the gpg signature of a Release file

I'm blogging about this small configuration issue because it took me some time to figure out how to configure cupt and smart to solve this problem. The reason I'm playing with cupt and smartpm is that I'm working to compare again a number of package managers in debian against the state-of-the-art cudf solvers using mpm, and I'm suffering quite a bit to configure my virtual environment. Last year I promised to revise and fix our results. I didn't forget my promise, but it seems it took longer than expected.

Anyway, back to the main topic. The problem arises because the key used to sign sarge (which I'm using as a baseline for my experiments) has long expired. If you try to retrieve sarge from the archive, you will find that it is signed with the key A70DAF536070D3A1, and apt-get will complain loudly if you try to use an archive signed with an expired key.


For cupt this is documented in the man page; there are a number of options to add either to /etc/apt/apt.conf or to cupt's own configuration file. Then cupt will happily accept the sarge Packages file and let you run update.

cupt::cache::release-file-expiration::ignore "true";
cupt::update::check-release-files "false";
cupt::update::keep-bad-signatures "true";


For smart I could not find this information anywhere but in the source code (thank god it's python!). To cut it short, you need to set the keyring to an empty value for a specific channel. On the command line you get something like:

smart channel --set  aptsync-614482cb2c7e08d5722af3498232ba52 keyring= --config-file=/root/var/lib/smart/config

where aptsync-614482cb2c7e08d5722af3498232ba52 is the channel name corresponding to sarge in my conf. Since I'm using a simulated environment, I save the result of this option in a non-default config file in my chroot.


unburden my home dir

Today I installed unburden-home-dir and I'm very pleased with it. It's a simple script that moves your temporary files outside your home directory. The main reason I installed it is to minimize the number of reads/writes done by iceweasel. My favorite browser is apparently the culprit of 80% of the read/write operations on my disk, even when it is idle... By moving the cache to tmpfs, I hope to reduce the IO on disk and to extend my battery life. Using an SSD I haven't noticed any remarkable performance benefits, but I hope I'll manage to squeeze a bit more from my battery.

Installing and configuring unburden-home-dir is straightforward. It is packaged for debian (in experimental at the time of writing) and it is very easy to configure. Remember that if you want to have your cache on tmpfs, you need to either mount a tmpfs file system somewhere or enable RAMTMP=yes in /etc/default/rcS (the default in wheezy).
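If you go the manual route, mounting a tmpfs is one line in /etc/fstab (the mount point and size limit here are an example, pick whatever suits your machine):

```
tmpfs   /tmp   tmpfs   defaults,size=512M   0   0
```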

After installing unburden-home-dir iotop shows a delightful page full of zeros :)

ps: if you use duplicity, remember to use the --archive-dir option to move the duplicity cache somewhere else...


Learning from the Future of Component Repositories - CBSE 2012

Learning from the Future of Component Repositories ( Pietro Abate, Roberto Di Cosmo, Ralf Treinen and Stefano Zacchiroli ) has been accepted to be presented at CBSE 2012 (26-28 June, Bertinoro, Italy)


  An important aspect of the quality assurance of large component repositories
  is the logical coherence of component metadata. We argue that it is possible
  to identify certain classes of such problems by checking relevant properties
  of the possible future repositories into which the current
  repository may evolve. In order to make a complete analysis of all possible
  futures effective however, one needs a way to construct a finite set of
  representatives of this infinite set of potential futures. We define a class
  of properties for which this can be done.

  We illustrate the practical usefulness of the approach with two quality
  assurance applications: (i) establishing the amount of ``forced upgrades''
  induced by introducing new versions of existing components in a repository,
  and (ii) identifying outdated components that need to be upgraded in order to
  ever be installable in the future. For both applications we provide
  experience reports obtained on the Debian distribution.
The tools presented in this paper (outdated and challenges) are already in Debian as part of the 'dose-extra' package.


For the second year in a row, our paper won the Best Paper Award at the CBSE 2012 conference!

Update 2

I presented this paper at cbse2012. The slides of my presentations are attached.
Attachment: main.pdf (375.94 KB)

terminator terminal

Despite the fact that I can't stop talking about xfce4, what I have been missing lately is the functionality of gnome-terminal... I know I could install gnome-terminal and use it, but that would imply a lot of unwanted dependencies and I prefer not to go that way. Chatting on #xfce on IRC, it was suggested that I give terminator a try. I've used it for a couple of days now and I feel very comfortable with it. It has a lot of configuration options, the possibility to have multiple profiles and to group windows together; it has tabs and all the bells and whistles you can ask for.

The only thing I didn't like about the default settings is the red title bar on top... to remove it you need to edit the config file ~/.config/terminator/config and add show_titlebar = False. This is my config file:

[global_config]
  geometry_hinting = False
  dbus = True
  focus = mouse
  borderless = True
[profiles]
  [[default]]
    scrollbar_position = hidden
    show_titlebar = False
    use_system_font = False
    font = Monospace 13
[layouts]
  [[default]]
    [[[child1]]]
      profile = default
      type = Terminal
      parent = window0
    [[[window0]]]
      type = Window
      parent = ""

Something else I'd like to do is reduce the size and the font of the tabs on top of the terminal. Since this is a gtk application, I should hack my gtk theme in order to style the tabs, but I don't know how to do that for only one application. If somebody has an answer, I'm all ears.


mancoosi tools in debian testing

Finally the dose3 libraries and tools landed in testing this weekend. We solved a couple of bugs already and it seems nobody has complained too loudly. If you used the edos tools in the past, you might be interested in checking out our new tools in the package dose-extra.

Actually, @mancoosi we will be delighted to hear about your experience with our tools and how to make them better and more useful. Please drop me a line!

The next major release of dose will be multi-arch aware and provide performance improvements and other minor features.

If you missed it, and you are now curious, I delivered a talk at fosdem regarding our tools:

  • QA tools for FOSS distributions (FOSDEM 2012) - Pietro Abate (video)

A big thanks to ralf of course for packaging everything !
