Recycling your old speaker system into an Airplay-enabled device

I have this old beast in my bathroom, with an iPod dock (old connector), a radio antenna (which doesn’t receive anything inside the apartment), and (luckily) a jack input, which I plug into every morning.

The thing can’t seem to die, and is quite powerful, so instead of trashing it, I’ll turn it into a connected speaker.

Hardware

This is where all our money will go. We want something cheap and small but powerful, with wifi capabilities. The obvious choice is… the Raspberry Pi Zero W, for 10€.

You will also need a micro SD card. The Samsung Evo+ (32GB, more than enough), at something like 8€, achieves a really great price-performance ratio, but others would do (Sandisk, Samsung Pro, …); you can find benchmarks here.

However, as great as this board is, it lacks the jack output of the classic Raspberry boards, so we will need to add one.

You could obviously use a Raspberry Pi 3 or 4 instead, but they feel too big to me.

I see 4 possibilities :

  • HDMI to Jack dongle. Just buy the thing for a few bucks, plug it in, done. Unfortunately, it means you will probably have a big dongle hanging out of your Raspberry.
  • USB DAC dongle, which can be expensive.
  • The Raspberry PWM pins. You will have to add your own audio filter, which can be done quite cheaply.
  • An I2S audio Pi hat. I personally chose the Pimoroni PHat DAC, but there are many others. This one costs 15€. Adafruit’s I2S board (which looks a lot like Pimoroni’s) costs only $10, so it could be a good alternative.

You will also need a micro-USB power supply, which should cost less than 10€, maybe nothing if you have an old phone charger lying around.

So we end up with a bill of about 40€, shipping not included.

The PHat will have to be soldered to the board, I will post pictures when this is done.

Software

We are going to use :

  • A classic Raspbian (lite), the standard Raspberry distribution
  • Shairport-sync, some software that implements a receiver for the Airplay protocol.

Preparing your Raspbian image

Download the image “Raspbian Buster Lite” from the RaspberryPi download page, unzip the .zip file, and write it to your microSD card.

wget "http://downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2019-09-30/2019-09-26-raspbian-buster-lite.zip"
unzip 2019-09-26-raspbian-buster-lite.zip
dd if=2019-09-26-raspbian-buster-lite.img of=/dev/mmcblk0 bs=32M
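
Before running the dd, double-check that /dev/mmcblk0 really is your SD card (that device name is an assumption; through a USB reader, the card may show up as /dev/sdX instead):

lsblk -o NAME,SIZE,MODEL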

Before booting the system, we are going to pre-configure the wifi and enable the SSH server, so we can connect to the Pi when it starts (remember, we have no keyboard plugged in).

mkdir /mnt/raspboot
mount /dev/mmcblk0p1 /mnt/raspboot

# enable the wifi, adjust country, ssid and psk accordingly
cat <<EOF>/mnt/raspboot/wpa_supplicant.conf
country=FR
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
  ssid="your-box-ssid"
  psk="your-wifi-password"
}
EOF

# enable SSH
touch /mnt/raspboot/ssh

umount /mnt/raspboot

Boot

Now you are ready to boot: put the card in the Pi and power it up. It should connect to your wifi network.

Find the IP and SSH into it, or if you have Avahi/Bonjour enabled, you should be able to just use the raspberrypi.local hostname :

# default password is raspberry
ssh pi@raspberrypi.local
# change it to something better as soon as you connect
passwd
# change the network/airplay name of your board
cat "airplay-myroom" | sudo tee /etc/hostname

PHat DAC setup

First, I want to set up the audio driver for the PHat DAC. The installation is a one-liner, as explained in Pimoroni’s tutorial:

curl https://get.pimoroni.com/phatdac | bash

Building the software

It’s time to build the software. As we only have to build 2 small packages, we will compile on the Raspberry itself, but if we had more, we would probably cross-compile on a real computer.

First, install the dependencies. If you don’t want to use convolution, you can remove libsndfile-dev.

sudo apt update
sudo apt upgrade
sudo apt install -y  \
  git autoconf libtool libpopt-dev libconfig-dev \
  libssl-dev libavahi-client-dev libasound2-dev  \
  libsndfile-dev

Next we are going to build ALAC, the Apple Lossless Audio Codec, as it is a dependency of shairport-sync and unfortunately not provided as a Raspbian package.

cd ~/
git clone https://github.com/mikebrady/alac/
cd alac
autoreconf -fi
./configure && make && sudo make install
# that way shairport-sync will be able to find the library
sudo ldconfig

Getting closer, now we build shairport-sync

cd ~/
git clone https://github.com/mikebrady/shairport-sync/
cd shairport-sync
autoreconf -fi
# here we are enabling:
# - systemd, as raspbian uses systemd and not System V init
# - alsa, as raspbian does not use PulseAudio by default
# - openssl, the other option is mbedtls
# - the ALAC codec
# - convolution, not required, but can be useful to apply effects
# - avahi, to be discoverable on the network
./configure --with-alsa --with-avahi \
            --with-ssl=openssl --with-apple-alac \
            --with-convolution --with-systemd
make && sudo make install
# copy the config to /etc
sudo cp scripts/shairport-sync.conf /etc/
# start shairport-sync
sudo systemctl start shairport-sync
# enable shairport-sync on boot
sudo systemctl enable shairport-sync

Speaker is now ready!

Well, it works!
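
By default, shairport-sync advertises itself under the hostname. If you would rather have a friendlier AirPlay name, you can set it in /etc/shairport-sync.conf (the file we copied above); a minimal sketch, the name being whatever you fancy:

// /etc/shairport-sync.conf
general =
{
  name = "Bathroom Speaker";
};

Apply it with sudo systemctl restart shairport-sync.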

Let’s Encrypt : Free SSL certificates for everyone

A new Certificate Authority appears : Let’s Encrypt

If you’ve not heard about it yet, a new Certificate Authority appeared recently, called “Let’s Encrypt”.

Backed by the Internet Security Research Group (ISRG), it aims at securing the web by providing SSL certificates for everyone.

After a bit of teasing, they just entered public beta, which means you can use it now (if you’re able to follow instructions and download a git repository)!

If you’re the brave owner of an SSL-enabled domain, remember how painful it has always been…

Remember when you had to sell a liver because multi-domain certificates are so expensive?

Well that’s over. Let’s Encrypt provides free certificates.

Remember when you switched to CACert because it was free? And how much trouble that brought you.

That sounded like a good idea, until you realized all your visitors got scary warnings because CACert is not recognized as a safe authority (well, they give away free certificates — that’s shady as fuck).

That’s over too. Let’s Encrypt is recognized as a safe authority.

Remember when you had to jump through hoops each time you needed a new certificate?

Provide an ID, fill in forms, manually identify your domain, give them your firstborn son…

Yep, over. With Let’s Encrypt, you run their authentication program on your server, and it takes care of automatically verifying that you own the domain you want a certificate for.

Remember the pains of configuring your server for SSL?

Right. Unlike Apache, Nginx needs a certificate file that contains the whole validation chain (your certificate and the authority’s), and you’d better put them in the right order.

Well, that’s not over. Not completely. Let’s Encrypt can configure/deploy automatically with Apache, but Nginx is not supported yet. On the other hand, it does directly generate a full-chain certificate to use with Nginx, so that’s one less hassle.

How does it work?

The principle of Let’s Encrypt is this :

  1. You run the application on your server.
    1. If you’re on Apache, it reads the Apache configuration files, finds your VirtualHosts and the domains associated with them.
    2. If you’re on another web server, you can specify the domains, and give the root directory of each on your server, so that it can create its authentication files.
  2. The Certificate Authority then validates those domains (by giving a token to the server, to be put at a given location, and a nonce that needs to be signed with the server key to verify it) and gives you your certificate.
  3. The application either installs the certificates on its own (with Apache), or just deploys them to a location (/etc/letsencrypt/…).

The certificates are only valid for 90 days, but you can easily renew them by re-running the generation command (do that in a cron job and you’re done for life – you can do five renewals per week, so don’t abuse it)
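
If you are not on Apache, the manual flow is short enough. A sketch of what it looks like with the beta-era client (domain and webroot path are examples):

# obtain a certificate using the "webroot" authenticator:
# the client drops its validation token under the webroot you give it
./letsencrypt-auto certonly --webroot -w /var/www/example -d example.com

# the certificates (including the fullchain.pem that Nginx wants)
# end up in /etc/letsencrypt/live/example.com/

Re-run the same command from cron to renew.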

The principle of the automatic authentication is explained here, and you can find the full details of the protocol (called ACME – Automatic Certificate Management Environment) in this RFC.

What’s after Beta?

As far as I’ve read, the objectives of Let’s Encrypt are :

  • to support Nginx like Apache (currently experimental)
  • automatic renewal
  • Python3 support (currently Python2.7)

A minimal system for Kernel testing with QEmu

When I went to Kernel Recipes earlier this year, I watched a very interesting presentation on using QEmu for Kernel development.

A few hours ago, working on the Eudyptula challenge, I was getting a bit annoyed:

The nvidia module was not working on my custom kernel, I needed to run mkinitcpio each time I compiled a kernel based on the Archlinux config, and occasionally my development module would crash the system. So I went on a quest to get QEmu working.

The subject is fairly easy, but unfortunately, documentation is sparse, so here’s a Howto that will allow you to get running in just a few minutes.

If you really are in a hurry, jump to the end of this article. I provide the finished scripts to build the system.

Principles

What people usually do when building a custom system is reuse a build system, like debootstrap (it builds a Debian system inside a directory), or openembedded, buildroot, ptxdist (mostly used in the embedded world). In our case however, we want a really, really small system, so even those build systems are too much. We want something completely bare.

So what we are going to create is a build script that will :

  • Compile your Kernel and modules
  • Build an initramfs/initrd image, of about 3-4MB containing :
    • busybox
    • your kernel modules
    • a custom init script

Building your kernel

I am not going to explain how to configure your kernel. If you don’t want to worry too much, just reuse your distribution’s “.config” file; it is usually available as /proc/config.gz.

Compiling is easy :

ROOT_DIR=$(pwd)

# adjust accordingly
KERNEL_DIR=$ROOT_DIR/linux

# Compile your kernel
cd $KERNEL_DIR
make
make modules
cd $ROOT_DIR

Building an Initramfs

First a little reminder : what is an initramdisk?

An initramdisk is a (small) CPIO archive that is loaded in RAM as the / filesystem.

It is usually used in distros for a 2-stage boot. The first stage loads the vital modules, mounts the real file system (probably from a disk) and starts the second stage from the disk.

This allows distros to provide everyone with a precompiled linux kernel that builds everything as modules (including motherboard drivers), and to generate at installation time an initrd that provides the modules your specific system needs for a basic boot (i.e. your “/” filesystem, essential motherboard drivers, …).

In our case, we don’t care about that second stage, we just want to boot into a shell.

Installing kernel modules

Make modules_install will install your modules in /lib/modules/<kernel_name>/…

You can just set INSTALL_MOD_PATH, and the modules will be installed in $INSTALL_MOD_PATH/lib/modules/<kernel_name>/… instead

INITRAMFS_DIR=$ROOT_DIR/initramfs

# remove old modules
rm -fr $INITRAMFS_DIR/lib/modules
# install new modules
cd $KERNEL_DIR
make INSTALL_MOD_PATH=$INITRAMFS_DIR modules_install
cd $ROOT_DIR

Installing Busybox

You can just install Busybox on your host system (provided you want to emulate the same architecture), and run

busybox --install $INITRAMFS_DIR/bin/

This will install all binaries that busybox can emulate into /bin of your initramfs. You only need to do this once, so no need to include it in your build script.

If you want, you can compile Busybox with specific options, to tell it which special commands you want available. I find the one provided by Archlinux sufficient.
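
Note that busybox --install creates hard links to the host binary by default, which fails if your initramfs directory lives on another filesystem. An alternative sketch that copies a (preferably statically linked) busybox and creates one relative symlink per applet (the binary path is an assumption; check `which busybox`):

mkdir -p $INITRAMFS_DIR/bin
# the busybox binary itself must be inside the initramfs
cp /usr/bin/busybox $INITRAMFS_DIR/bin/
# one relative symlink per applet, all pointing at ./busybox
for applet in $(busybox --list); do
    ln -sf busybox $INITRAMFS_DIR/bin/$applet
done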

Your Init script

Here things start to get interesting. After the kernel boots, it will launch “/init” as PID 1. On your distribution, this is probably systemd.

We don’t need something that huge, so we are just going to write our own as a shell script (remember busybox: it provides us with /bin/sh!)

What do you want to do in your init?

Initializing the system

We want to mount our /proc and /sys (add debugfs if you do kernel debugging), and populate /dev a bit (create /dev/zero, ttys, and other required devices)

#!/bin/sh
# Don't forget the shebang on the first line

/bin/mount -t proc none /proc
/bin/mount -t sysfs sysfs /sys

echo "> Populating /dev/"
/bin/mdev -s

Loading modules

Well obviously, modules are not going to load themselves, right? Use /bin/modprobe (provided by Busybox)

# ehci = qemu usb 2.0
# uhci = qemu usb 1.1
# add other modules as you see fit
MODULES="ehci-pci ehci-hcd uhci-hcd"

for module in $MODULES; do
    echo "> Loading module $module"
    /bin/modprobe $module
done

Give you control

Let’s not forget the most important part, right? If the init script exits, the kernel panics.

/bin/sh
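
If that shell complains about job control (no Ctrl-C, “can’t access tty”), busybox has the setsid and cttyhack applets for exactly this case; a sketch, assuming both applets are enabled in your busybox build:

exec setsid cttyhack /bin/sh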

Generating your Initramfs

We need the cpio command

cd $INITRAMFS_DIR
find . -print0 | cpio --null -ov --format=newc | gzip - > $ROOT_DIR/initramfs.img
cd $ROOT_DIR

This will list all files in your ramfs directory, send them to cpio, and compress the image generated by cpio with gzip.

Launching QEmu

qemu-system-x86_64 \
-m 1024 \
-kernel linux/arch/x86_64/boot/bzImage \
-initrd initramfs.img \
-append 'console=ttyS0' \
-nographic \
-usb \
-device usb-ehci,id=ehci \
-device usb-host,bus=usb-bus.0,vendorid=0x046d,productid=0xc52b \
-device usb-tablet

Let’s inspect each line :

  • -m 1024 will give 1GB of RAM to the system. This is definitely not required, ignore it or adapt it as you want
  • -kernel and -initrd are obvious: you want to provide the generated kernel and initramfs
  • -append 'console=ttyS0' and -nographic make QEmu not create a new window, and redirect all the output to the terminal you used to launch it. If you want to access the QEmu console, use the shortcut <Ctrl-a c>
  • -usb enables usb. It will create a “usb-bus” hub to connect your USB 1.1 devices to.
  • -device usb-ehci,id=ehci will create a “ehci” hub to connect your USB 2.0 devices to.
  • -device usb-tablet creates a QEmu special “tablet” pointer. It will connect to your ehci hub automatically
  • -device usb-host,bus=usb-bus.0,vendorid=0x046d,productid=0xc52b will pass control of a USB device connected to your host to the virtual machine. We provide the vendorid and productid of the device (as returned by lsusb), and tell QEmu to connect it to the first port of the usb-bus (USB 1.1) hub.

If your computer is recent enough and provides virtualization instructions, you can add the -enable-kvm option. Even without it, the system takes about 4 seconds to boot on my computer.

TL;DR

initrd build script and init

Unit testing in Python 3

The necessity of unit testing in Python

As you may know, Python is a dynamically typed language. Unlike some functional languages like Haskell or F#, which have this beautiful thing called Hindley-Milner type inference, Python has: duck typing.

If it flies like a duck, quacks like a duck, swims like a duck, then it probably is a duck.

In practice, this pretty much means “yeah, we’ll sort this typing-mess at runtime. If the object does not have the quack method we’re trying to call, we’ll just throw an exception”.

What could possibly go wrong?

Well first, not knowing what kind of argument you need to pass.

Was it “3” or 3? Because “3”*3 is “333” and 3*3 is 9. That’s not exactly the same result. Now you need to look back at previous code to be sure.

Then you have Refactoring Hell. You have changed the parameter order, or the parameter names, and now you have broken calls to your API.

Of course, you don’t know that yet. You’ll discover it the next time you trigger the broken code path. Maybe that’s a month after deployment. Too bad.

This dynamism makes unit testing in Python not a mere addition, but a requirement for your sanity.

A friendly reminder about unit tests

What were those exactly?

[Image: the V-model]
Remember this? Yay, the good ol’ V-model.

You’re wondering why I put this here. Nobody uses the V-model anymore, it’s tedious, Agile, yada yada… Well, I agree. I hate the V-model, but it bears a very important reminder:

In the V-model, Unit Testing validates that your code fits the Low Level Specification (a 400-page Word document that nobody reads, except a traceability program that tells your manager that 93% of your High Level requirements are linked to a Low Level one). But I digress.

In Agile? Well, since you probably don’t have a spec, Unit Tests WILL be your spec, your guarantee that :

  • even after that refactor, all your calls are still correct.
  • every branch of the function works as expected, not just the main one.
  • the painful merge you applied did not bring back a regression from the dead.

Issues with Unit Tests

It’s not Functional testing

Well, duh. Unit tests are not a silver bullet, they won’t test your software “globally”. That’s what functional testing is for. You could probably automate that a bit, or just hire very patient people that will take care of doing it. Again. And again. And again and again and again…

There are some drawbacks

  • you spend time writing them, sometimes more than you spent coding the feature.
  • you won’t see the need for them until they detect something broke and save your ass.
  • rewriting dozens of tests just because you did a little refactor that touched lots of classes can be a pain.

Now to the practice.

First, here’s the code we’re going to test. As you can see it includes a few things to test :

  • call of an external function (subprocess.call)
  • use of a builtin function (open/read)
  • call of an internal function

import yaml
import subprocess


class MyClass:
    def __init__(self, conf_file):
        self._conf_file = conf_file
        self._config_keys = ["key1", "key2", "key3"]

    def get_conf(self):
        """ parse config file using yaml """
        with open(self._conf_file, "r") as f:
            return yaml.load(f)

    def check_conf(self):
        """ check conf file contains all the config keys """
        config = self.get_conf()
        for key in self._config_keys:
            if key not in config:
                raise Exception("missing key : {}".format(key))
        return True

    def execute_key1(self, config):
        subprocess.call([config["key1"], "--some-arg", config["key2"]])

Test Cases

The skeleton for a Test is always the same :

setUp and tearDown functions will be called at the beginning and end of each test, independently of whatever happens in the test (success, failure, exception, …).

Then you have a bunch of test* methods that will be called one after the other. Each of those is a unit test.

Unless you are using a specific runner, like nosetests (which I personally don’t find very useful), you will need to add a line to run unittest.main(). This will take care of running the tests in the file.

import unittest
from my_class import MyClass

class TestMyClass(unittest.TestCase):
    def setUp(self):
        """ Executed before each test """
        pass

    def tearDown(self):
        """ Executed after each test """
        pass

    def test0000_something(self):
        pass


# execute the tests if called directly
if __name__ == "__main__":
    unittest.main()
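
To run them, assuming you saved the file as test_my_class.py:

python3 test_my_class.py -v

or let unittest discover every test file in the directory:

python3 -m unittest discover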

Mocks

Quite often, when you try to write a test, you run into the issue of calling code from other objects. At this point, you’re not sure what you are testing anymore: is it the calling code or the called code?

Unit tests are just that: their only scope is the object you are testing (and often even smaller: the function). So you need to be sure that your object is correct, not the objects it uses. It is, in fact, easier to simulate the behavior of those objects case by case. This is extremely easy in Python 3, but the documentation does not reflect that.

Mocks have a few interesting properties :

  • They only live during the duration of your test (as a decorator), or even less (using a with statement)
  • They replace functions, methods or complete objects
  • They are inexpensive to create
  • They can be used to be sure some code is called

Basic Mocking

Returning a value

@patch('method_to_replace', return_value=3)

Raising an exception

@patch('method_to_replace', side_effect=Exception("awe"))

Returning different values at each call

@patch('method_to_replace', side_effect=["first call return value", "second call return value"])
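
To see one of these in action, here is a tiny self-contained sketch (os.getcwd is just a stand-in for whatever function you want to replace):

import os
from unittest.mock import patch

with patch("os.getcwd", return_value="/fake/dir"):
    print(os.getcwd())  # the mock is active: prints /fake/dir
print(os.getcwd())      # the real function is back

Note that with a list as side_effect, the mock raises StopIteration once the list is exhausted.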

Mocking object methods

#!/usr/local/bin/python3

import unittest
from unittest.mock import patch
from unittest.mock import mock_open

from my_class import MyClass


class TestMyClass(unittest.TestCase):
    def setUp(self):
        self.obj = MyClass("/tmp/test_file.yaml")
        self.working_conf = {"key1": 1, "key2": 2, "key3": 3}

    def test_000_check_conf_works_with_all_keys(self):
        """ Check that our function works with a correct conf
        
        Notable things here :
        - we use a with statement
        """
        with patch.object(MyClass, "get_conf", return_value=self.working_conf):
            self.assertTrue(self.obj.check_conf())

    @patch.object(MyClass, "get_conf")
    def test_001_check_conf_raises_exception_on_missing_key(self, get_conf_method):
        """ Check that for each key, we raise an exception if that key is missing
        
        Notable things here :
        - we use a decorator this time
        - we use subTest to regroup tests that are similar (new in Python 3.4).
          This ensures that all iterations are run even if the first fails.
          We also get debug information if the subtest fails
        - we re-assign the output of our mock method for each iteration
        """
        source = {"key1": 1, "key2": 2, "key3": 3}
        for key in self.obj._config_keys:
            test_conf = source.copy()
            del test_conf[key]
            get_conf_method.return_value = test_conf
            with self.subTest(conf=test_conf):
                with self.assertRaises(Exception):
                    self.obj.check_conf()


if __name__ == '__main__':
    unittest.main()

Mocking file I/O with mock_open()

Very often, you’ll find you need to test code that reads/writes a file on disk. The most instinctive way is to use setUp()/tearDown() to create a file (probably in /tmp; even better if you use tempfile.NamedTemporaryFile), write the data to it, then delete the file in tearDown().

Then you realize you need to do 3, maybe 5 tests with different sets of data, and all your motivation goes to shambles.

Fear not: you can just mock open() and read() in one line (one that is incredibly hard to find on the net, unfortunately).

    def test_002_get_conf_returns_decoded_yaml_data(self):
        """ Check that we decode yaml and return it directly

        Notable things here :
        - mock_open is used to patch open() to avoid failing to open a
        real file, but also to return special data upon read()!
        - we replace my_class.open, which means open is replaced only in the
        scope of the "my_class" module (from which we imported MyClass)
        - we provide create=True, because open() is a builtin function (not imported).
        This is not needed anymore as of Python 3.5
        """
        with patch('my_class.open', mock_open(read_data='["qwe"]'), create=True):
            self.assertEqual(self.obj.get_conf(), ["qwe"])

Mocking File as an iterator

Sometimes your code uses a file descriptor as an iterator :

with open("file) as f:
    for line in file:
        do_stuff()

Unfortunately, mock_open does not support this behavior yet, but you can implement it with two lines:

m_open = mock_open(read_data='some data \n new lines \n')
m_open.return_value.__iter__ = lambda self: self
m_open.return_value.__next__ = lambda self: self.readline()
with patch('my_class.open', m_open, create=True):
    ...
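
Put together, a self-contained sketch (patching builtins.open here so it runs standalone; in a real test you would patch your module’s open as above):

from unittest.mock import mock_open, patch

m_open = mock_open(read_data='some data \n new lines \n')
m_open.return_value.__iter__ = lambda self: self
m_open.return_value.__next__ = lambda self: self.readline()

with patch('builtins.open', m_open):
    with open("whatever") as f:
        for line in f:
            print(line, end='')  # prints the two fake lines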

Mocking file writes

We saw how easy it is to mock reading a file, but you’re probably wondering how to verify that data has been written to a file. To be fair, it works a bit differently:

By calling the fake open() function returned by mock_open(), you retrieve the same mock file descriptor that your object used.

This (mock) File object has the classic file methods, like file.write()… which are also mocks! (Yeah, mocks all the way down!) On this mock, you can call assert_called_with, assert_has_calls, … to be sure the data you want has been written (see the part “Mocking an external function (and checking its values)” for more information).

m_open = mock_open(read_data='some data \n new lines \n')
with patch('my_class.open', m_open, create=True):
    ...

# verify write has been called with argument
file_desc = m_open()
file_desc.write.assert_called_once_with("data we wrote")

Mocking an external function (and checking its values)

Here’s a use case: you don’t want your method to really call a function. Maybe the library is not installed on the system that runs the tests, or maybe you’re executing a subprocess whose executable is not on that system.

In that case, you want to mock the call to that function, but you also want to know if the parameters correspond to what you expect.

Good news : Mocks remember when they are called and how!

    @patch('my_class.subprocess.call')
    def test_003_execute_key1_executes_correct_command(self, sp_call):
        """ Check that subprocess.call is called, with the expected arguments 
        
        Notable things here :
        - we use patch('my_class.subprocess.call') to patch subprocess.call
        only inside the my_class module, objects outside of that module will
        not be affected by our mock. We could mock subprocess.call to mock
        all instances instead
        - we check the mock is called with specific arguments
        """
        self.obj.execute_key1({"key1": "command", "key2": "argument"})
        sp_call.assert_called_with(["command", "--some-arg", "argument"])

Other functions are at your disposal to check how the mock has been called, like assert_has_calls, which takes a list of unittest.mock.call(arguments).
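
For example, a sketch continuing the test above:

from unittest.mock import call

sp_call.assert_has_calls([
    call(["command", "--some-arg", "argument"]),
])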

Mocking a complete object

Sometimes you want to mock more than a method, and do a full object emulation. Here’s how you can do it :

The first step is to create a fake object that does whatever you want. You can either build it manually, or use Mock/MagicMock, which accept the same kind of keyword configuration as patch. For example, this one-liner will create an object with

  • a method method() that returns 5
  • another method method2() that returns a different integer each time it is called
  • an attribute attr with a value of 5

from unittest.mock import Mock
m = Mock(**{'method.return_value': 5, 'attr': 5, 'method2.side_effect': [1, 2, 3]})

Next, you just need to set this mock as the return_value of the class (think of it that way: when you “call” the class, it returns a class instance).

with patch("namespace.MyClass", return_value=m):
    ...
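
Inside the with block, code that instantiates namespace.MyClass gets our hand-made mock back (the class and its namespace are placeholders):

obj = MyClass("any", "args")  # the patched class returns m
obj.method()     # -> 5
obj.method2()    # -> 1, then 2, then 3 (then StopIteration)
obj.attr         # -> 5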

Proxying Transmission Web interface with nginx

0. Why do this?

Easy for me : I use a PCH box as my torrent client.

It’s really nice, but it cannot :

  • Use IPv6 (I don’t want to forward ports when it can be avoided)
  • Protect the Transmission web interface with a password

On the other hand

  • My macbook is always on (though I’ll replace this with a Guruplug… if I ever receive the one I ordered)
  • I want to access the torrent administration interface when I’m not at home
  • I wanted to tinker with nginx 😉

1. The easy solution

server {
    listen       8080;
    server_name  bt.coding-badger.net;
    location / {
        proxy_pass  http://192.168.0.2:9091/;
    }
}

Ok. We’re done. Redirect all requests to bt.coding-badger.net:8080 to the popcorn hour box on transmission’s port. KTHXBYE

2. Less subdirectories, moar fun!

Wait, of course we aren’t. That would be no fun at all. I don’t really like having to access this page through /transmission/web/. We’re already on a dedicated vhost, so I want my bt page at the root!

server {
    listen       8080;
    server_name  bt.coding-badger.net;
    location /transmission {
        proxy_pass  http://192.168.0.2:9091/transmission;
    }
    location / {
        proxy_pass  http://192.168.0.2:9091/transmission/web/;
    }
}

What happens there? First you have to know what transmission does :

  • “/” requests are redirected to “/transmission/web” with a 301 redirect
  • /transmission/web/ contains javascript, css, pages, etc…
  • /transmission/upload is used to upload a torrent
  • /transmission/rpc is used to update the window

What we do is redirect all requests that hit / to /transmission/web/ on the transmission server (that way we can be on the / page while transmission thinks we’re on /transmission/web and won’t attempt to redirect), and pass all other /transmission/* requests through to /transmission/*.

You can tweak this to your liking, but you have to remember :

NEVER. EVER hit “/” on the transmission web interface through your proxy, because it will redirect the browser to /transmission/web. You could probably handle this with a “proxy_redirect” directive in the nginx configuration, but it’s a bit tricky to get right.

3. IPv6

On OSX, I use brew as my package manager. Unfortunately, it does not compile nginx with ipv6 support by default!

$ brew edit nginx

Go to the install function, and add --with-ipv6 to the args array

$ brew install nginx

On debian, it should be compiled with ipv6 by default. You can check by running

$ nginx -V
nginx version: nginx/0.7.67
TLS SNI support enabled
configure arguments: --prefix=/usr/local/Cellar/nginx/0.7.67 --with-http_ssl_module --with-pcre --conf-path=/usr/local/etc/nginx/nginx.conf --pid-path=/usr/local/var/run/nginx.pid --lock-path=/usr/local/var/nginx/nginx.lock --with-ipv6

If you’ve got --with-ipv6 in there, you’re all good!

Then we have to update the “listen” configuration of the server to tell it to use IPv4 and IPv6

server {
        listen       [::]:8080; # this enables ipv6
        server_name  bt.coding-badger.net;
        location /transmission {
                proxy_pass      http://192.168.0.2:9091/transmission;
        }
        location / {
                proxy_pass      http://192.168.0.2:9091/transmission/web/;
        }
}

4. Authentication

We don’t want our transmission client to be accessible to just anyone! nginx can provide authentication through htpasswd files

$ sudo htpasswd -c /usr/local/etc/nginx/nginx.passwd <username>

Enter the password you wish, then set up nginx to request a password:

server {
        listen       [::]:8080;
        server_name  bt.coding-badger.net;
        location /transmission {
                proxy_pass      http://192.168.0.2:9091/transmission;
        }
        location / {
                proxy_pass      http://192.168.0.2:9091/transmission/web/;
        }
        auth_basic            "Restricted";
        auth_basic_user_file  /usr/local/etc/nginx/nginx.passwd;
}

Of course, if you are not using brew, you may want to use a more “traditional” location for the passwd file (like /etc/nginx instead of /usr/local/etc/nginx… use the same directory as your nginx config file)

Edje Messages vs Signals

We saw in the previous Edje post how to send signals through the Edje elements. We’re now going to see another way of communicating between the application and the Edje theme: Messages.

1. Difference between Messages and Signals

That was my first question when I discovered Messages : “what is this stuff for, since we have signals?”

The answer is :
Signals allow you to send discrete (punctual) information. They fit really well in an “action” type of communication: “do this”, “hey, this event happened”, …

Now what if you want, for example, to send data (which could be anything: a bunch of strings, …) to the theme, or the other way around? This doesn’t really fit the signal+source mold. This is where messages come in handy.

For a concrete example, you could see the existing E “alarm” widget. (I will try to find the URL when I have time)

2. Structure of a message

Three things define a message :
– its Type
– its ID
– its data

The first one is obvious and tells the type of data contained. In C, the existing types are defined by EDJE_MESSAGE_* (see Edje.h), and in Embryo as Edje_Type:MSG_*.
The second one is an integer that lets you know what the message is for. Use whatever you want, and put the defined values in some header shared between your code and your edc files.
The last one is the data itself.

Now, about the data, you have a few defined types:
– Edje_Message_String is just one string
– Edje_Message_Int is just one int (ok, that was easy)
– Edje_Message_Float oh well, you guessed it…
– Edje_Message_String_Set is more interesting: a variable number of strings (in an array); the number of strings is a count field in the struct.
– Edje_Message_Int_Set and Edje_Message_Float_Set, same as the previous one but with ints and floats
– Edje_Message_String_Int and Edje_Message_String_Float, a string with an integer (or a float), could be used as a key => value pair for example
– Edje_Message_String_Int_Set and Edje_Message_String_Float_Set, a string and an array of integers/floats (with the additional count field).

Note that the set is defined in the struct as an int[1], float[1], or char *[1]. This means you have to allocate your structure with a bigger size, depending on the number of elements.


/* allocate a struct for 4 integers */
Edje_Message_String_Int_Set *msg = malloc(sizeof(Edje_Message_String_Int_Set) + (4 - 1) * sizeof(int));
/* allocate a struct for 10 string pointers */
Edje_Message_String_Set *msg2 = malloc(sizeof(Edje_Message_String_Set) + (10 - 1) * sizeof(char *));

3. Sending a message from Edje to the application

3.1 Binding a message callback

#define MSG_ID_GET_TIME 0
#define MSG_ID_GET_FLOAT_AND_STR 1

/* your callback */
void message_cb(void *data, Evas_Object *obj, Edje_Message_Type type, int id, void *msg)
{
    if (id == MSG_ID_GET_TIME && type == EDJE_MESSAGE_INT_SET) {
        Edje_Message_Int_Set *m = msg;
        /* check we got the right number of integers */
        if (m->count != 3)
            return;
        int hours = m->val[0];
        int minutes = m->val[1];
        int seconds = m->val[2];
        /* do whatever processing you want with your values */
        ...
    } else if (id == MSG_ID_GET_FLOAT_AND_STR && type == EDJE_MESSAGE_STRING_FLOAT) {
        Edje_Message_String_Float *m = msg;
        float f = m->val;
        char *str = m->str;
        /* processing again */
        ...
    }
}

int main()
{
    /* initialization and all */
    ...
    edje_object_message_handler_set(my_edje_object, &message_cb, NULL);
    ...
    /* more stuff after that; you could set the handler at any time anyway */
}

3.2 Sending the message using Embryo

This probably has to be defined in the “top” group loaded as an Edje object; I don’t know what would happen otherwise.

group {
    programs {
        program {
            name: "my_program";
            signal: "whatever";
            source: "same";
            script {
                send_message(MSG_INT_SET, MSG_ID_GET_TIME, 5, 10, 3);
                send_message(MSG_STRING_FLOAT, MSG_ID_GET_FLOAT_AND_STR, 5.2, "hello world");
            }
        }
    }
}

4. Sending a message from the application to Edje

Now, if you want to do it the other way around:

4.1 Create an embryo callback in the skin

group {
    name: "mygroup";
    script {
        /* this function is automatically bound
          * you may have noticed we don't pass a structure but a variable
          * number of arguments */
        public message(Message_Type:type, int id, ...) {
            if (type == MSG_INT_SET && id == MSG_ID_GET_TIME) {
                /* look in the embryo doc if you need to count the arguments
                  * the "count" variable is not provided in the arguments */
                int sec = getarg(2);
                int min = getarg(3);
                int hour = getarg(4);
                /* do whatever you want in embryo with it */
            } else if (type == MSG_STRING_FLOAT && id == MSG_ID_GET_FLOAT_AND_STR) {
                float f = getfarg(2); /* get a float */
                new str[128];
                getsarg(3, str, 128);
                /* do whatever you want in embryo with it */
            }
        }
    }
}

4.2 Sending the message in the code

/* allocate a struct for 3 integers */
Edje_Message_Int_Set *msg = malloc(sizeof(Edje_Message_Int_Set) + (3 - 1) * sizeof(int));
msg->count = 3;
msg->val[0] = 10;
msg->val[1] = 42;
msg->val[2] = 31;
edje_object_message_send(my_edje, EDJE_MESSAGE_INT_SET, MSG_ID_GET_TIME, msg);
/* allocate a struct for string & float */
Edje_Message_String_Float *msg2 = malloc(sizeof(Edje_Message_String_Float));
msg2->str = "I can haz cheezburger";
msg2->val = 0.7777;
edje_object_message_send(my_edje, EDJE_MESSAGE_STRING_FLOAT, MSG_ID_GET_FLOAT_AND_STR, msg2);

Migrating a domain name from 1and1

I thought I would share this information, since the steps you have to follow are not obvious at all:

1. Allow the migration of your account

– log in to your account
– select your domain name
– set it to an “unlocked” state

2. Cancel your contract

Now, this feels completely idiotic, but to migrate your domain name, you have to follow the same workflow as a cancellation. And it is called a cancellation at every step. Maybe 1and1 wants people to fear losing their domain name when they try to migrate, so they’ll just keep their contract?

– go to cancel.1and1.com (or contrat.1and1.fr if you use the French version)
– log in
– cancel your domain or pack. You will have to answer a “customer satisfaction” poll
– at the end, just before confirming, you will have the ability to choose :
1. when to apply this action (10 days, 10 days + 1 month, 10 days + 2 months) – just pick whatever fits you
2. what you want to do. THIS IS THE IMPORTANT PART, choose the option to migrate your domain to another provider.
3. when to cancel – right now or at the end of the contract. You don’t care about that, since the migration option will replace it with As Soon As Possible.

Now, you’re going to get a code; keep it, since it will be necessary for the migration.

You will now have to validate your cancellation to 1and1 by email.

3. Register with your new provider

– Follow the steps. You will be asked for the migration code you received when cancelling. This is used to ensure nobody is trying to steal your domain name.
– Your new provider will also probably ask for an email confirmation using the email in the whois
– now you just have to wait 🙂

Edje Signals, Callbacks and propagation.

As you may already know, interaction with an edje file is done mostly with signals.

You can set up a group with a part, and a program that emits a signal when the part has been clicked:

group {
  name: "my_group"
  parts {
    part {
      name: "button";
      type: RECT;
      description {
        state: "default" 0.0;
        color: 255 0 0 255;
        min: 50 50;
      }
    }
  }
  programs {
    program {
      signal: "mouse,up,*";
      source: "button";
      action: SIGNAL_EMIT "button_clicked" "";
    }
  }
}

When the red rectangle is clicked, it will emit a signal {“button_clicked”, “”}

Imagine you want to do something in your program when the button is clicked. You have to add a callback for this :

{
  Evas_Object *evas_obj = edje_object_add(evas);
  edje_object_file_set(evas_obj, "my_theme.edj", "my_group");
  /* needed evas resize/move/show */
  evas_object_resize(evas_obj, 800, 480);
  evas_object_move(evas_obj, 0, 0);
  evas_object_show(evas_obj);

  /* add a callback */
  edje_object_signal_callback_add(evas_obj, "button_clicked", "", &my_callback, NULL);
  /*  simulate a mouse signal on the button to see if it works */
  edje_object_signal_emit(evas_obj, "mouse,up,acme", "button");
}
...
void my_callback(void *data, Evas_Object *o, const char *emission, const char *source)
{
  if (strcmp(emission, "button_clicked") == 0 && strcmp(source, "") == 0) {
    /* do something */
  }
}

As you may notice, there are 4 arguments to the callback :
– data corresponds to the last field of callback_add. You can pass whatever you want in it, but be sure to have it allocated on the heap. You don’t know when the callback will be launched, so if you pass a pointer to a variable allocated on the function stack, you’re going to have problems.
– the Evas_Object is the object that emitted the signal (it will correspond to evas_obj here)
– emission and source correspond to the signal and the source. You can use the same callback for many signals and dispatch however you want after that.

This is basic stuff you will see in any tutorial, and it may lead you to a bad habit: allocating an Evas_Object for each group of your theme and integrating those in the code.
The ideal goal in an edje application is to have as little code as possible, and to move the logic into the edje. That way, you can make a completely different interface (including the way it works, not just the graphics) for the same program.
Instead of loading each group in an Evas_Object and using evas to show, hide, move and resize all the elements, just do one big group with subparts.

group {
  name: "main";
  parts {
    part {
      name: "instance_of_my_group1";
      type: GROUP;
      source: "my_group";
      description {
        state: "default" 0.0;
      }
    }
    part {
      name: "instance_of_my_group2";
      type: GROUP;
      source: "my_group";
      description {
        state: "default" 0.0;
      }
    }
  }
}

What this does is create two instances of the same group, loaded into the “main” group. You only need to instantiate main (edje_object_add and edje_object_file_set) to create those two buttons.
Your next question will be: but if I don’t have an Evas_Object for each group, how do I emit signals to them, and how do I add callbacks?

Signal. Propagation.

{
  Evas_Object *main_obj = edje_object_add(evas);
  edje_object_file_set(main_obj, "my_theme.edj", "main");
  /* needed evas resize/move/show */
  evas_object_resize(main_obj, 800, 480);
  evas_object_move(main_obj, 0, 0);
  evas_object_show(main_obj);

  /* add a callback; note the subgroup prefix on the source */
  edje_object_signal_callback_add(main_obj, "button_clicked", "instance_of_my_group1:", &my_callback, NULL);
  /* simulate a mouse signal on the button to see if it works */
  edje_object_signal_emit(main_obj, "instance_of_my_group1:mouse,up,acme", "button");
}
...
void my_callback(void *data, Evas_Object *o, const char *emission, const char *source)
{
  if (strcmp(emission, "button_clicked") == 0 && strcmp(source, "instance_of_my_group1:") == 0) {
    /* do something */
  }
}

Understand this important thing :

If you want to emit a signal to a subgroup of an edje object, you have to prefix the signal with the subgroup instance name.
{ “mysubgroup:signal”, “source” }

If you want to add a callback to a signal emitted by a subgroup of an edje object, you have to prefix the source with the subgroup instance name.
{ “signal”, “mysubgroup:source” }

Edje will take care of dispatching those signals automagically. This works with any number of group levels. You could emit {“group:subgroup:subsubgroup:signal”, “source”}.

Now, if you have elements put in a table, or a box, you cannot (at least for now, but it would be a good addition to edje) send signals to, or get signals from, the sub-elements. No “mybox[4]:signal” … for now.

Edit (18/05/2010) :

When you send a signal from the object (if the item was inserted in Edje through box.items{}), I think the object name is skipped, as if the container itself had sent the signal.

I’ve submitted a patch that would allow sending signals to box elements using this syntax:
{“boxpartname:idx:signal”, “source”} where idx is the index of the element you want to send the signal to. Let’s hope it gets accepted.

Edit (20/05/2010) :

Add the evas_object_resize/move/show when initializing the edje object.