Posts in category Linux:
Python hug with Apache mod_wsgi
I found it surprisingly difficult to find a plain explanation of setting up hug and Apache mod_wsgi. So here is how I did it.
Minimal hug API
Writing an API with hug is really easy. You can write a "hello world" in four lines of code:
import hug

@hug.get('/hello')
def hello(name: str) -> str:
    return "Hello " + name
Running this on a devserver is also trivial:
hug -f hello.py
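With the dev server running, the endpoint can be tried out with curl (assuming hug's default port 8000):

curl 'http://localhost:8000/hello?name=World'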
mod_wsgi
Setting up mod_wsgi is not too difficult either. You need to install the module and enable it, which on Ubuntu was as easy as:
sudo apt install libapache2-mod-wsgi-py3
Note the 'py3' part, however. I wasted quite a bit of time before realizing the module uses a specific version of Python. Since hug is Python 3 only (a shame), a Python 3 build is needed.
The actual host configuration looks something like:
<VirtualHost *:80>
    ServerName hello.example.com
    DocumentRoot /var/www/hello/

    WSGIDaemonProcess hello threads=1
    WSGIScriptAlias / /var/www/api/hello.py

    <Directory /var/www/hello>
        WSGIProcessGroup hello
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>
(This is on Ubuntu 16.04 with Apache 2.4.18, running it as the Apache user.)
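Assuming the configuration is saved as /etc/apache2/sites-available/hello.conf, enabling it is the usual routine:

sudo a2ensite hello
sudo systemctl reload apache2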
The WSGI script
The WSGI entry point expected by mod_wsgi is named 'application', while hug exposes one under 'hug_wsgi'. There must be some way to change it from the above configuration (how?), but I ended up adding this to hello.py:
application = hug_wsgi
If the application was installed as a package, it would be cleaner to have a separate shim like wsgi.py with just a one-liner:
from mypackage import hug_wsgi as application
Cloning a Linux Partition
Whenever I want to test a new release of Ubuntu I find myself cloning the root partition so I can safely upgrade and still keep my old install.
However, I usually forget some step and end up needing to debug grub boot. So this is a summary of what needs to be done in order to clone the partition and make it work. (Almost all of the below need to be run as root of course.)
First you need the file systems in question to be unmounted, so likely you will want to boot from a USB stick or some third partition.
- Cloning with dd --
The simplest way to clone the whole FS is to just dd the partition into another equally sized partition (I keep a few root-sized ones in reserve). That is, something like:
dd if=/dev/sdd1 of=/dev/sdd2 bs=$((1024*1024*16))
That copies the contents of /dev/sdd1 into /dev/sdd2 with a large enough block size to get the most out of SSDs.
- Changing the UUID --
Since there are now two partitions with the same UUID, you need to rename one. The command depends on the file system in question, but for ext4 it is:
tune2fs -U random /dev/sdd2
Now you can check that the UUID changed and record the values:
blkid
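The output should now show different UUIDs for the two partitions, along these lines (hypothetical values):

/dev/sdd1: UUID="0c9e4280-5b26-4a1c-8d3a-7f21e1a3b9d0" TYPE="ext4"
/dev/sdd2: UUID="4f82a1d7-93c5-42e0-b1af-2e6c88d417a5" TYPE="ext4"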
- Updating the UUID in /etc/fstab --
The next step is to update the fstab in the changed filesystem. Mount the FS somewhere and edit the root UUID manually or just use sed:
sed -i 's/<old_uuid>/<new_uuid>/g' ./etc/fstab
- Updating grub --
Next you can boot into the normal system (which for me is usually the clone's original copy, i.e. /dev/sdd1 above), chroot into the clone FS (/dev/sdd2) and update grub from there. Go into the root of the clone and run:
mount --bind /dev ./dev
mount --bind /proc ./proc
mount --bind /sys ./sys
chroot .
update-grub
Finally, run update-grub outside the chroot, in whichever install owns the default drive's grub.
Remote Control Rhythmbox from Command Line
I have a mostly-headless HTPC without a keyboard, so it is easier to control it remotely. VNC lets you do anything, but for simple tasks the command line is faster.
My media player of choice is Rhythmbox, so there's an easy way to control it: rhythmbox-control. However, it requires an X session, and ssh -X opens a new one. Here's how to use it to control a Rhythmbox instance that is already running on the main display:
[code]alias rhythmbox-remote="env DISPLAY=:0.0 rhythmbox-control"[/code]
For example:
[code]ssh -p 1234 computer.local rhythmbox-remote --next[/code]
Merging Persistence Changes in Ubuntu Live USB
When using an Ubuntu Live USB there is the option of using persistence to store changes on the USB. It uses aufs to layer a read-write filesystem on top of a read-only squashfs compressed filesystem. This is great, because it fits over 2GB worth of stuff into a c. 700MB default live system and still allows changes to be stored.
The bad news is that updates quickly eat the rw space because old versions of applications cannot be removed to free space. Also, removing useless applications (like OO.o/LO in my case) doesn't free space, but can actually consume it.
Here's the process I used to merge changes back to the ro filesystem, in order to hopefully claim back some space. This is based on information on the Live CD Customization page in Ubuntu community documentation and various man pages.
Let's assume we have a directory on the hard drive where we've copied the casper/filesystem.squashfs file on the USB as fs.ro and the casper-rw file as fs.rw. First we mount the aufs by layering these:
mkdir -p tmp-ro tmp-rw tmp-aufs
sudo mount -o loop fs.ro tmp-ro/
sudo mount -o loop fs.rw tmp-rw/
sudo mount -t aufs -o br:tmp-rw:tmp-ro none tmp-aufs/
Now tmp-ro shows the squashfs, tmp-rw shows the changes stored in casper-rw and tmp-aufs shows the layered filesystem as the live OS would see it.
Next we can generate the new squashfs using mksquashfs (from squashfs-tools):
sudo mksquashfs tmp-aufs/ filesystem.squashfs
Unfortunately, the mksquashfs from Natty doesn't seem to support lzma, so it's gzip compression only.
We can also generate new manifests based on the packages installed. I suppose this only matters if you mean to use the USB for installing to a HDD, but I did it anyway:
sudo chroot tmp-aufs/ dpkg-query -W --showformat='${Package} ${Version}\n' \
    > filesystem.manifest
cp filesystem.manifest filesystem.manifest-desktop
sed -i -e '/ubiquity/d' -e '/casper/d' filesystem.manifest-desktop
Now we don't need the original filesystems anymore:
sudo umount tmp-aufs/
sudo umount tmp-rw/
sudo umount tmp-ro/
rm -rf tmp-rw tmp-aufs
Instead we temporarily mount the new squashfs to get the filesystem size:
sudo mount -o loop filesystem.squashfs tmp-ro/
printf $(sudo du -sx --block-size=1 tmp-ro | cut -f1) > filesystem.size
sudo umount tmp-ro/
rmdir tmp-ro
Finally, we can make a new casper-rw of whatever size we need (I used 1GB):
dd if=/dev/zero of=casper-rw bs=1M count=1000
mkfs.ext3 -F casper-rw
Now the resulting files can be copied to the USB (back-up the old ones first!) and all should work. This procedure can also be used to easily customize USB installations from within the live environment.
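For instance, copying back might look like this (a sketch assuming the USB is mounted at /media/usb, with the filesystem.* files under casper/ and casper-rw at the root of the stick):

cp filesystem.squashfs filesystem.manifest filesystem.manifest-desktop filesystem.size /media/usb/casper/
cp casper-rw /media/usb/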
Reboot to Another Kernel (or Windows) with grub2
Update: Updated below for 11.04/Natty.
So I had some trouble getting grub-reboot to work, since it only seems to like numbers, but I figured it out. I made a script to reboot to another kernel or OS (Windows for me) once, leaving the default unchanged.
First one must set GRUB_DEFAULT=saved in /etc/default/grub and run update-grub and grub-set-default to set the default kernel (usually '0' for the most recent Linux kernel).
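Put together, the setup looks something like this (a sketch; entry 0 assumed to be the preferred default):

# in /etc/default/grub:
GRUB_DEFAULT=saved

# then regenerate the config and save the default entry:
sudo update-grub
sudo grub-set-default 0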
Next, here's the script to immediately reboot to another menu entry from those normally displayed on reboot. It shows a list of available entries and asks which to reboot to. It's for Ubuntu, but may work with other sudo-using distros.
#!/bin/bash
# Read lines as fields
IFS="
"

# Find the grub entries hack
select opt in `grep menuentry /boot/grub/grub.cfg | sed "s/[^\"']*[\"']\([^\"']*\).*/\1/"`
do
    # Numbering starts with 0
    sudo grub-reboot $((REPLY - 1))
    # Remove the next line to only set the kernel, not actually reboot
    sudo reboot
    break
done
Running it from a terminal reboots to the chosen kernel or OS, while leaving the default as it was.
Unfortunately grub seems to sometimes discard the old default if one reboots twice in a row to the secondary OS, but I haven't been able to debug it enough to report a bug yet.
Update: In Ubuntu 11.04 a grub2 feature called submenus is enabled by default and breaks the script. Editing /etc/grub.d/10_linux and commenting out the lines regarding submenus is a workaround.
Using ssh with an Encrypted Home in Ubuntu
The encrypted home directory feature of Ubuntu is especially useful with laptops, where, if the machine is lost, one probably doesn't want others to have access to the data. Unfortunately it messes up ssh access with public keys.
Key authentication is a very useful feature of ssh that I use all the time. It lets me avoid constantly typing in passwords and is more secure than password authentication. It works by storing authorized public keys (by default) in ~/.ssh/authorized_keys, which of course gets encrypted with the rest. Therefore one can only log in using ssh if a local session already exists and has mounted the encrypted directory.
The easiest way to avoid this is to set up ssh access as usual, with the keyfile inside the encrypted home, then copy the file over to the unencrypted directory. Thereafter ssh logins are possible with or without other open sessions. (Just remember to update both files if they need changes.)
To copy the file one needs to log out of any graphical sessions and have an open command line either through ssh or a local prompt (e.g. using Ctrl+Alt+F1). The keyfile can then be copied outside the home:
cp ~/.ssh/authorized_keys /tmp/
The encrypted home can be unmounted (but there should be no programs running that need it):
umount.ecryptfs_private && cd
The latter command moves to the mostly empty directory that holds the encrypted filesystem. The directory was write protected on my system, so writes need to be enabled next:
chmod +w ~
With that out of the way, the .ssh subdirectory can be created and the keyfile can be copied over:
mkdir .ssh
chmod 700 .ssh
mv /tmp/authorized_keys .ssh/
Now the encrypted filesystem can be remounted - though a log out + log in would also do the trick:
mount.ecryptfs_private && cd
Radeon Power Management in Ubuntu 10.10 (Maverick)
One of the changes in the recent Ubuntu 10.10 (Maverick) is the upgrade of the Linux kernel from version 2.6.32 (in Lucid) to 2.6.35. The change introduces several new features, of which I especially like Radeon power management (for use with the open drivers).
Unfortunately the default power profile only saves power when the screen is off (or on battery I think). I also know of no GUI tools to manipulate the power profile. Here is the command I found to change the profile using sysfs:
sudo -i 'echo low > /sys/class/drm/card0/device/power_profile'
The change is not persistent, but the quoted part can be added to e.g. /etc/rc.local to get it every boot. (I'm unsure about suspend, which I don't use.)
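For example (card0 assumed; on Ubuntu the line must go before the final "exit 0"):

# excerpt from /etc/rc.local
echo low > /sys/class/drm/card0/device/power_profile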
My Radeon HD 4850 has enough power for desktop effects at the lowest power level; lower end cards may work better with the mid level profile.
To see current power and voltage info, here's another command:
cat /sys/kernel/debug/dri/0/radeon_pm_info
Asus Eee PC 1001PX and Ubuntu Linux
To make the wireless of my 1001PX work in Ubuntu Lucid I had to install the 2.6.35 kernel from the Kernel Team PPA. The package name is linux-meta-lts-backport-maverick and it should eventually become available in the Lucid package archives.
Unfortunately, the touchpad didn't support multi-touch OOTB and the option in Mouse Settings was disabled. Here are the xinput commands I put in /etc/rc.local to get multi-touch scrolling and click emulation. They should work in other Linux OSes too:
TOUCHPAD="SynPS/2 Synaptics TouchPad"
xinput set-prop "$TOUCHPAD" "Synaptics Two-Finger Pressure" 4
xinput set-prop "$TOUCHPAD" "Synaptics Two-Finger Scrolling" 1 1
xinput set-prop "$TOUCHPAD" "Synaptics Tap Action" 0 3 0 2 1 3 2
The three-finger-click doesn't work very well, so I have the bottom corners also emulating different clicks.
Now the only problem is getting the fan under control. It almost never spins up in Windows, but is constantly on in Ubuntu.
Ubuntu Lucid Lynx 10.04 Customizations
I finally have Ubuntu Lucid looking the way I want it to. First a couple of screenshots, then some links to the relevant packages etc. Here's the desktop with Firefox open.
And here I've marked some customizations in the image. (Apologies for ugly text.)
Firstly, I want my panel (singular) vertical. That is unfortunately buggy with the Ambiance theme (Bug #534582) and indicators (Bug #533439). The latter does not look too bad, but the former had to be fixed with the instructions from Bug #160311: remove the image from the theme gtkrc, set the scaling properties in gconf-editor and set the background image manually.
Instead of the menu bar, which does not like a vertical panel, I use Cardapio from the PPA. It has lots of great plugins for e.g. Google and Wikipedia searches, though I haven't really used them.
Replacing both the window list applet and launchers I have DockbarX from the PPA. The theme is Minimalistic and I have enabled Compiz previews and Opacify.
In the lower right corner I have Conky (from Universe), which I use to monitor the computer.
Firefox is still 3.6 (the default), since I'm refraining from Firefox 4 beta testing until they get their Linux version up to speed. Addons I use include at least:
- About:Me
- App Tabs
- Hide Menubar
- Link Target Display
- Stop-and-Reload
- StumbleUpon
I look forward to trying Lucid out on the laptop I just ordered.
Aliasing Hostnames on Linux (Ubuntu)
When Googling for hostname aliasing on Linux, the most prominent answers were for SSH hostname aliasing, which is simple. For a general purpose solution the common answer was "install a DNS server", which is way over the top. Here's what I managed to figure out.
The objective is to have something like /etc/hosts, except with entries like "hostname hostname2" instead of "address hostname".
My solution is to use the HOSTALIASES variable, which should point to a host alias file. In my case I did:
echo 'hostname hostname2' > .hostaliases
echo 'export HOSTALIASES=$HOME/.hostaliases' >> .bashrc
Thereafter, all my bash sessions use the host alias file and apps such as ftp, wget and Firefox can all resolve the alternative hostname correctly.
Unfortunately not all tools seem to resolve hostnames in a way that supports host aliases. E.g. ping does not.
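A quick way to check which tools honor the alias file (hypothetical names from the example above):

wget -q --spider http://hostname2/ && echo "resolved via HOSTALIASES"
ping -c 1 hostname2   # expect this to fail, since ping bypasses HOSTALIASES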
Splitting Files in-Place on Linux
Ok, so I have a 160GB disk image I need to split into small pieces (so I can compress those individually in the background later), but I only have a few GB free on the disk containing the image. Unfortunately, split doesn't work in place. Here's what I managed to put together using Bash on Ubuntu:
#!/bin/bash
file=$1
size=$2
count=$3

# Work from the end of the file: copy the tail into a piece,
# then truncate the original to chop that tail off.
for i in `seq $((count-1)) -1 0`
do
    dd if=$file of=$file.$i bs=4096 skip=$((size*i/4096))
    truncate --size=$((size*i)) $file
    echo $((count-i))/$count
done
Admittedly it isn't exactly in place either, but it does only need space for one piece at a time, which was good enough for me.
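For example, splitting a 160 GiB image into 160 pieces of 1 GiB each might look like this (hypothetical names; the piece size should be a multiple of the 4096-byte block size):

./split-in-place.sh disk.img $((1024*1024*1024)) 160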
Trust Slimline Sketch Tablet on Ubuntu Karmic
Update: See the bottom for Ubuntu Lucid
I just installed my new Trust Slimline Sketch graphics tablet on Ubuntu Karmic 9.10. It was reported as "UC-LOGIC Tablet WP8060U" by xinput. Here's how I made it work (based on the Ubuntu wiki and some Googling):
1. Install wizardpen from Michael Owens' PPA by adding ppa:doctormo/xorg-wizardpen and installing the package.
2. Create an .fdi file using sudo gedit /etc/hal/fdi/policy/99-x11-wizardpen.fdi. Copy the below with the product name tweaked if necessary (use xinput list to get the name):
<?xml version="1.0" encoding="ISO-8859-1" ?>
<deviceinfo version="0.2">
  <device>
    <match key="info.product" contains="UC-LOGIC Tablet WP8060U">
      <merge key="input.x11_driver" type="string">wizardpen</merge>
      <merge key="input.x11_options.SendCoreEvents" type="string">true</merge>
      <merge key="input.x11_options.TopX" type="string">1762</merge>
      <merge key="input.x11_options.TopY" type="string">2547</merge>
      <merge key="input.x11_options.BottomX" type="string">31006</merge>
      <merge key="input.x11_options.BottomY" type="string">30608</merge>
      <merge key="input.x11_options.MaxX" type="string">31006</merge>
      <merge key="input.x11_options.MaxY" type="string">30608</merge>
    </match>
  </device>
</deviceinfo>
3. Connect/reconnect the device. It should work now.
4. Run sudo wizardpen-calibrate /dev/input/by-id/usb-UC-LOGIC_Tablet_WP8060U-event-mouse (or whatever the proper device path is) to get corner coordinates you like.
5. Re-edit the .fdi file and insert the coordinates there. Reconnect the device.
To use it with GIMP I just enabled it from extended input preferences. The only thing missing is support for the programmable buttons around the draw area.
Update: I now use Ubuntu Lucid (10.04), where the driver works without configuration once installed from the PPA. That is, steps 1 and 3 are the only mandatory steps.
Update: And moving to Maverick (10.10), no configuration required. Driver install from PPA and GIMP input select and everything works, including pressure. Even the defaults are good enough without calibration.
Running Folding@Home on Ubuntu without Straining the CPU
With my new quad-core I decided to install Folding@Home on my Ubuntu Karmic. It is very easy using the origami installer. I just ran the following to install with team Ubuntu as my team:
sudo apt-get install origami && sudo origami install -t 45104 -u USERNAME
Unfortunately, while it is niced by default, it keeps the CPU running at 100%. This means the CPU uses more electricity and runs hotter. Most annoyingly, IMHO, it also keeps the fan at full speed, meaning additional noise.
My solution was to change the ignore_nice_load setting of the ondemand frequency governor from 0 to 1. In Ubuntu Karmic I had to do it by editing the special boot script in /etc/init.d/ondemand. This script changes the frequency governor from performance to ondemand after the system has booted. I simply added the following at line 29:
echo -n 1 > /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice_load
In a multi-core setup the value seems to propagate to other cores automagically. Now my processor scales back to the minimum 800 MHz while idle, but uses any excess power to fold proteins in the grid.
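To check that the setting stuck and the clock actually scales down, the same sysfs tree can be read back:

cat /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice_load
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq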
Update: I found that 800 MHz isn't fast enough to complete the work units in time for their deadlines (my computer isn't on 24/7). I switched to gravitational waves and QMC (Quantum Monte Carlo) using the BOINC client instead. It's a bit more of a hassle to set up, but seems to be running fine.
Increasing Display Resolution in Virtual Machine Manager
With KVM you can use a high resolution by passing kvm the -vga std parameter (or -std-vga before KVM-77). When using virt-manager to manage virtual machine guests, there is no option to change the display resolution. However, it is possible to work around this as follows.
First, create a bash script that adds the argument to a kvm call:
#!/bin/bash
kvm "$@" -vga std
Put it at, say, /usr/bin/qemu-kvm and make it executable:
chmod +x qemu-kvm
sudo install qemu-kvm /usr/bin/
Then make virt-manager (libvirt) use that instead of kvm. Unfortunately, I haven't found a way to change it system-wide, but it can be changed for individual virtual machines by editing the corresponding XML file in ~/.libvirt/qemu. Or for all at once:
for i in ~/.libvirt/qemu/*.xml; do sed -i "s|/usr/bin/kvm|/usr/bin/qemu-kvm|" "$i"; done
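To confirm the change, the emulator element in each file should now point at the wrapper:

grep emulator ~/.libvirt/qemu/*.xml
# expected (sketch): <emulator>/usr/bin/qemu-kvm</emulator>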
(I think I read most of this from somewhere, but can no longer find the reference.)