Update: See the bottom for Ubuntu Lucid
I just installed my new Trust Slimline Sketch graphics tablet on Ubuntu Karmic 9.10. It was reported as "UC-LOGIC Tablet WP8060U" by xinput. Here's how I made it work (based on the Ubuntu wiki and some Googling):
1. Install wizardpen from Martin Owens' PPA: add ppa:doctormo/xorg-wizardpen and install the package.
2. Create an .fdi file with
sudo gedit /etc/hal/fdi/policy/99-x11-wizardpen.fdi
and copy in the below, with the product name tweaked if necessary (use xinput list to get the name):
<?xml version="1.0" encoding="ISO-8859-1" ?>
<deviceinfo version="0.2">
  <device>
    <match key="info.product" contains="UC-LOGIC Tablet WP8060U">
      <merge key="input.x11_driver" type="string">wizardpen</merge>
      <merge key="input.x11_options.SendCoreEvents" type="string">true</merge>
      <merge key="input.x11_options.TopX" type="string">1762</merge>
      <merge key="input.x11_options.TopY" type="string">2547</merge>
      <merge key="input.x11_options.BottomX" type="string">31006</merge>
      <merge key="input.x11_options.BottomY" type="string">30608</merge>
      <merge key="input.x11_options.MaxX" type="string">31006</merge>
      <merge key="input.x11_options.MaxY" type="string">30608</merge>
    </match>
  </device>
</deviceinfo>
3. Connect/reconnect the device. It should work now.
4. Run
sudo wizardpen-calibrate /dev/input/by-id/usb-UC-LOGIC_Tablet_WP8060U-event-mouse
(or whatever the proper device path is) to get corner coordinates you like.
5. Re-edit the .fdi file and insert the coordinates there. Reconnect the device.
To use it with GIMP I just enabled it from extended input preferences. The only thing missing is support for the programmable buttons around the draw area.
Update: I now use Ubuntu Lucid (10.04), where the driver works without configuration once installed from the PPA. That is, steps 1 and 3 are the only mandatory steps.
Update: And moving to Maverick (10.10), no configuration required. Driver install from PPA and GIMP input select and everything works, including pressure. Even the defaults are good enough without calibration.
With my new quad-core I decided to install Folding@Home on my Ubuntu Karmic. It is very easy using the origami installer. I just ran the following to install with team Ubuntu as my team:
sudo apt-get install origami && sudo origami install -t 45104 -u USERNAME
Unfortunately, while it is niced by default, it keeps the CPU running at 100%. This means the CPU uses more electricity and runs hotter. Most annoyingly, IMHO, it also keeps the fan at full speed meaning additional noise.
My solution was to change the ignore_nice_load setting of the ondemand frequency governor from 0 to 1. In Ubuntu Karmic I had to do it by editing the special boot script /etc/init.d/ondemand. This script changes the frequency governor from performance to ondemand after the system has booted. I simply added the following at line 29:
echo -n 1 > /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice_load
In a multi-core setup the value seems to propagate to other cores automagically. Now my processor scales back to the minimum 800 MHz while idle, but uses any excess power to fold proteins in the grid.
Update: I found that 800 MHz isn't fast enough to complete the work units in time for their deadlines (my computer isn't on 24/7). I switched to gravitational waves and <abbr title="Quantum Monte Carlo">QMC</abbr> using the BOINC client instead. It's a bit more of a hassle to set up, but seems to be running fine.
The code for my table generator for 5 card poker evaluation is included. It is a BlitzMax file, which requires the "Cactus Kev's Evaluator" to run. Another evaluator can be used instead with some rewriting of code.
It uses a backtracking depth-first search of the problem space and merges nodes as it finds them, using a binary tree for ordering. This is fast enough for five cards - running the generator takes less than half a minute - and generates an optimal <abbr title="directed acyclic graph">DAG</abbr>. The resulting tables (after some conversions to 16-bit integers) are here.
Unfortunately, a direct conversion to seven cards using Kev's eval_7hand would take several days to run, so the 7 card problem requires a more messy approach. I'm still working on that code.
Download: gen_eval5.bmx.tar (8 KB) License: MIT (included) See also: generated tables
I found a bug in my 7-card poker evaluator generator, which means that the tables I posted earlier, while valid and usable, are not optimal. I haven't yet rewritten the code, but the difference should be on the order of a couple of percent. Meanwhile, I used the same procedure I outlined to create a table-based representation for a five card evaluator.
The tables and a BlitzMax file are included in a compressed archive. Again, the code is not yet clean enough for me to post it here, but I'm working on it.
Download: eval5.7z (138 KB 7-zip archive) License: public domain (none)
Over the past few days I've looked at poker hand evaluation, specifically 7-card evaluation for games like Texas Hold'em. What I did is based on three evaluators I've studied: Cactus Kev's 5-card evaluator, Paul Senzee's 7-card hash function and the "2+2 evaluator" by Ray Wotton et al.
First, a little background: there are (52 choose 7) ≈ 134 million 7-card combinations, but fewer than 7,500 hands with a unique value as ranked by Kevin Suffecool in the above link. The problem at hand is getting from a representation of seven individual cards to a value that can be compared to other hands - and doing that fast.
Of the three evaluators, I especially like what Paul Senzee did: he created a perfect minimal hash function that maps a 52-bit integer with seven bits set to a unique number. This can be used for a single table lookup from a 134 million element table of hand values. The downsides are that the table is huge and that the hash function is slow in comparison.
The "Cactus Kev" evaluator is a 5-card evaluator, but can naively be used to find the maximum over the 21 five-card hands in a 7-card hand. That is of course even slower, but can be used in the table generation phase of another evaluator. His code is also the easiest to paste into another project. The code files are available at the link I gave.
In practical use, the 2+2 evaluator works like magic: seven table lookups and you are done. However, the table generation code is a mess and the table is rather large at over 100 MB. In reality, the one table represents a directed acyclic graph of seven layers, with each link representing a card. This is as close as one can come to "solving" 7-card poker.
Constructing a graph for a card sequence would mean 52!/(52-n)! nodes at the nth layer; however, order does not matter in poker hands. The idea is the same as behind Paul Senzee's method: the nodes representing Ah7s and 7sAh can be joined. Do the math and you again have (52 choose n) nodes at layer n.
However, there are also other nodes that are equivalent. The trick to the 2+2 evaluator is in the MakeID function of the code I linked to: suits don't always matter! For example, take the four-card hand AcKdQhJs. None of the suits matter, because a flush is no longer possible. Joining such nodes results in dramatic space savings.
If it were a 5 card evaluator, I believe the 2+2 evaluator (a bit tweaked) would generate an optimal DAG representation for the problem. However, as it is, it ignores some redundancy: take any ready flush - the off suit cards don't matter. Similarly, some other made hands have redundancies. A more significant memory loss is due to concatenation of all tables into one.
The way I would like to generate the DAG is: First, create it for 52 choose n, taking all cards into account. Second, apply a generic procedure to join any nodes that can be joined. This would again result in an optimal DAG, though the representation can be improved. Keeping the layers as separate tables allows for more efficient storage of those tables that don't require a full 32-bit integer for their values.
Unfortunately, the final layer would require several gigabytes (12?) at first, so it is not yet practical. I had to keep the 2+2 suit hack in, ugly though it may be. After that, layers 1 to 5 can be optimized no further, but the last two can (on the order of ten percent). As far as I can tell, the algorithm for doing that has to be at least of second order with respect to the number of nodes. I did the straightforward thing and it is indeed slow, but only has to be run once.
Storing tables 2-6 as 32-bit Ints and table 7 as 16-bit Shorts, the total space taken is 80 MB, in contrast to over 120 MB for the original 2+2 evaluator. The first table simply contains the index multiplied by 52, so it isn't really needed. (It annoys me greatly that table 2 almost fits in Shorts.) The 7-zip compressed file I've attached contains the tables, which can be used from any programming language, and samples in C and BlitzMax. I'll publish the generating code once it looks presentable.
I'm rather confident an array-based DAG is the fastest way to do an exhaustive iteration of hands in order - minimal testing shows all 134 million ordered 7-card hands can be evaluated in less than a second. However, I'm unsure whether a heavily memory-dependent algorithm is the best for a Monte Carlo simulation. It would seem that a random order would favor something with fewer lookups and thus fewer cache misses. After all, tables 5-7 are all too large to fit in current-generation processor caches of up to a few MB (my previous generation dual-core has 3 MB total).
Download: eval7.7z (7.8 MB 7-zip compressed archive) License: public domain (none)