From FusionForge Wiki

buildbot.fusionforge.org hosts a Jenkins instance that handles the different build queues.


  • Physical box: miromesnil.gnurandal.net
  • KVM VM: vladimir.gnurandal.net
  • SSH port: 10022
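
To reach the VM from outside, connect via the non-standard SSH port (the user name below is a placeholder, not an actual account):

```shell
# SSH into the build VM; replace "youruser" with your actual account
ssh -p 10022 youruser@vladimir.gnurandal.net
```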

Host configuration

  • user: jenkins ($HOME=/var/lib/jenkins)
  • ~jenkins/jobs contains the different job configurations; for instance ~jenkins/jobs/fusionforge-master-src-debian8/config.xml runs fusionforge-build-and-test-src-deb.sh
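
To check which script a job runs without opening the Jenkins UI, you can extract the <command> element from its config.xml. A minimal sketch on a sample file (the real files live under ~jenkins/jobs/; the XML layout shown here is a simplified assumption):

```shell
# Create a simplified sample of what a job's config.xml contains
mkdir -p /tmp/demo-job
cat > /tmp/demo-job/config.xml <<'XML'
<project>
  <builders>
    <hudson.tasks.Shell>
      <command>fusionforge-build-and-test-src-deb.sh</command>
    </hudson.tasks.Shell>
  </builders>
</project>
XML

# Extract the shell build step (line-by-line; single-line commands only)
sed -n 's/.*<command>\(.*\)<\/command>.*/\1/p' /tmp/demo-job/config.xml
```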


Common installation setup (note: this uses libvirt for simplicity, while the live buildbot uses a hand-tuned dhcpd configuration):

# Prepare some space
# - /var/lib/lxc/ : 1GB
# - /var/cache/lxc/ : 1GB
# - /var/lib/jenkins/ : 1GB

# Create user
useradd jenkins -m -d /var/lib/jenkins -s /bin/bash

# Grab the code for the VM tools and wrapper
apt-get install git
git clone git://fusionforge.org/fusionforge/fusionforge.git
apt-get install make
make -C fusionforge/tests/buildbot/
apt-get install createrepo  # for push-packages-to-repositories.sh

# VMs networking
apt-get install lxc debootstrap
apt-get install avahi-daemon libnss-mdns
sed -i.bak -e 's/^hosts:.*/hosts:          files mdns4_minimal [NOTFOUND=return] dns/' /etc/nsswitch.conf
# and make sure you accept mdns traffic: iptables -A INPUT -i virbr0 -p udp --dport 5353 -j ACCEPT
apt-get install libvirt-bin dnsmasq ebtables
service dnsmasq stop
update-rc.d dnsmasq remove
virsh net-autostart default
service libvirtd restart
cat > /etc/lxc/default.conf <<'EOF'
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
EOF

# Prepare bot workspace and sudo access
apt-get install sudo
mkdir ~jenkins/reports/
chown -R jenkins: ~jenkins/
(cd fusionforge/tests/buildbot/ && ./init-jenkins.sh)
# possibly add your SSH public key in ~jenkins/.ssh/id_rsa.pub to access the VMs

# possibly add your configuration, e.g.:
(cd fusionforge/tests/buildbot/config/ && sed 's/^KEEPVM=.*/KEEPVM=true/' default > $(hostname))
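
The override mechanism above just rewrites variables in a copy of the default config; a self-contained sketch of the same transformation (the sample file contents are assumptions, not the real defaults):

```shell
# Demonstrate the per-host override on a sample config file
mkdir -p /tmp/bb-config
printf 'KEEPVM=false\nOTHER=value\n' > /tmp/bb-config/default   # sample contents
sed 's/^KEEPVM=.*/KEEPVM=true/' /tmp/bb-config/default > "/tmp/bb-config/$(hostname)"
grep '^KEEPVM' "/tmp/bb-config/$(hostname)"   # → KEEPVM=true
```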

# SSH host keys cache
mkdir -m 700 /var/cache/lxc/ssh/
mkdir -m 700 /var/cache/lxc/ssh/debian8.local/
mkdir -m 700 /var/cache/lxc/ssh/centos7.local/
cp -a .../existing_vm/rootfs/etc/ssh/ssh_host_* /var/cache/lxc/ssh/debian8.local/
cp -a .../existing_vm/rootfs/etc/ssh/ssh_host_* /var/cache/lxc/ssh/centos7.local/
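
If there is no existing VM to copy keys from, a fresh set can be generated instead (a sketch; the key types to generate depend on the OpenSSH version in the template):

```shell
# Generate host keys directly into the cache for one template
d=/var/cache/lxc/ssh/debian8.local
mkdir -p "$d"
for t in rsa ecdsa ed25519; do
    ssh-keygen -q -t "$t" -N '' -f "$d/ssh_host_${t}_key"
done
```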

Debian additional setup for 5.3

# - Compile source package outside of the container
apt-get install cowbuilder
apt-get install devscripts debhelper quilt lintian
# - Create a local repository
apt-get install reprepro
# Make sure the hostname is properly configured (should print a FQDN)
hostname -f
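
If `hostname -f` errors out or prints only a short name, an /etc/hosts entry along these lines fixes it (host and domain names are placeholders):

```shell
# Map the machine's FQDN; replace the names with your actual host/domain
echo '127.0.1.1   buildhost.example.org buildhost' >> /etc/hosts
hostname -f   # should now print the FQDN
```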

# Add sudo support for pdebuild and cowbuilder
cat <<EOF >> /etc/sudoers.d/ci
Defaults env_keep += "HOME"
jenkins ALL= NOPASSWD: /usr/sbin/cowbuilder
EOF

su - jenkins

# - Prepare cowbuilder root
mkdir -p ~/builder/cow/
mkdir -p ~/builder/buildplace/
mkdir -p ~/builder/result/
cd fusionforge/
DISTROLIST=wheezy tests/scripts/manage-cowbuilder.sh
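
With the cowbuilder root in place, the source package can be built in the chroot roughly like this (the paths and base name are assumptions; the exact invocation lives in the build scripts):

```shell
# Build the Debian source package inside the cowbuilder chroot (run as jenkins)
cd ~/fusionforge/src/
pdebuild --pbuilder cowbuilder -- \
    --basepath ~/builder/cow/base-wheezy.cow \
    --buildplace ~/builder/buildplace \
    --buildresult ~/builder/result
```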


Manual test

Now we can run the script manually for testing:

su - jenkins
  • FusionForge master:
(cd ~/fusionforge/ && bash -ex ./tests/buildbot/fusionforge-build-and-test.sh debian8.local deb; echo $?)
(cd ~/fusionforge/ && bash -ex ./tests/buildbot/fusionforge-build-and-test.sh centos7.local rpm; echo $?)
sudo /usr/local/sbin/lxc-wrapper destroy centos7
(cd ~/fusionforge/ && bash -ex ./tests/buildbot/fusionforge-build-and-test.sh centos7.local src; echo $?)

Current network configuration at buildbot.fusionforge.org

It uses a historically more complex setup involving a manual bridge and ISC DHCPd with explicit DNS servers.

cat <<EOF >> /etc/network/interfaces
auto br0
iface br0 inet static
       bridge_stp off
       bridge_maxwait 5
       post-up echo 1 > /proc/sys/net/ipv4/ip_forward
       post-up iptables -t nat -A POSTROUTING -s '' -o eth0 -j MASQUERADE
       post-up service isc-dhcp-server restart
       post-down iptables -t nat -D POSTROUTING -s '' -o eth0 -j MASQUERADE
EOF

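The wiki export stripped the address fields from the stanza above; for reference, a complete version looks like this (every address is a placeholder, not the live buildbot value):

```shell
# Placeholder addresses only; substitute your real subnet
cat <<'EOF' >> /etc/network/interfaces
auto br0
iface br0 inet static
       address 192.168.100.1
       netmask 255.255.255.0
       bridge_ports none
       bridge_stp off
       bridge_maxwait 5
       post-up echo 1 > /proc/sys/net/ipv4/ip_forward
       post-up iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o eth0 -j MASQUERADE
       post-up service isc-dhcp-server restart
       post-down iptables -t nat -D POSTROUTING -s '192.168.100.0/24' -o eth0 -j MASQUERADE
EOF
```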
apt-get install isc-dhcp-server
sed -i -e 's/^INTERFACES=.*/INTERFACES="br0"/' /etc/default/isc-dhcp-server
cat <<EOF >> /etc/dhcp/dhcpd.conf
subnet netmask {
       option routers;
       option domain-name "local";
       option domain-name-servers,;
}
EOF
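
The address fields above were likewise lost in the wiki export; a complete subnet declaration looks like this (all addresses are placeholder examples, not the live values):

```shell
# Placeholder addresses only; substitute your real subnet and DNS servers
cat <<'EOF' >> /etc/dhcp/dhcpd.conf
subnet 192.168.100.0 netmask 255.255.255.0 {
       range 192.168.100.50 192.168.100.150;
       option routers 192.168.100.1;
       option domain-name "local";
       option domain-name-servers 192.168.100.1, 8.8.8.8;
}
EOF
```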

LXC templates

When you update one of our template scripts, remove the cached rootfs, e.g.:

/usr/local/sbin/lxc-wrapper emptycache centos7

Jenkins configuration

  • Manage Jenkins > Plugins > Advanced > Check Now (update plugins list)
  • Available > "Git Plugin" > Download and Install after restart

Sample build configuration

Example with fusionforge-60-src-debian8:

In Jenkins, create a new job with a shell build step that runs:

./tests/buildbot/fusionforge-func_tests.sh debian8.local src
  • Save

Jenkins currently checks out the repository in /var/lib/jenkins/jobs/fusionforge-60-src-debian8/workspace.

We also upgrade the VMs weekly:


# $os is the distro template name (e.g. debian8 or centos7)
cd $WORKSPACE/tests
sudo /usr/local/sbin/lxc-wrapper stop $os || true

sudo /usr/local/sbin/lxc-wrapper emptycache $os
sudo /usr/local/sbin/lxc-wrapper prepare $os

# Not sure about re-introducing common-vm and ./start_vm just for this:
#./buildbot/start_vm $os.local
sudo /usr/local/sbin/lxc-wrapper stop $os || true


  • Document changes to the stock distro LXC templates (see tests/buildbot/lxc/lxc-wrapper); among other things, these changes overwrite SSH host keys and set up root's authorized_keys file
  • Automate task creation maybe using dsl plugin
    • improve the Jenkins plugin to pilot/setup Jenkins
    • put/visualize result in FRS
  • Use standard 'virbr0' from libvirt also at vladimir
  • Use master/slave Jenkins capability to enable parallel builds
  • Try Buildbot as a Jenkins alternative
  • Document ~/.mini-dinstall.conf, which is used for deb repository uploads

See also