# Canonical Voices

abeato

## Analysis and Plots of Solutions to Complex Powers

In chapter 5 of his mind-blowing “The Road to Reality”, Penrose devotes a section to complex powers, that is, to the solutions to

$$w^z~~~\text{with}~~~w,z \in \mathbb{C}$$

In this post I develop a bit further what he exposes and explore what the solutions look like with the help of some simple Python scripts. The scripts can be found in this GitHub repo, and all the figures in this post can be replicated by running

    git clone https://github.com/alfonsosanchezbeato/exponential-spiral.git
    cd exponential-spiral; ./spiral_examples.py


The scripts make use of numpy and matplotlib, so make sure those are installed before running them.

Now, let’s develop the math behind this. The values for $$w^z$$ can be found by using the exponential function as

$$w^z=e^{z\log{w}}=e^{z~\text{Log}~w}e^{2\pi nzi}$$

In this equation, “log” is the complex natural logarithm multi-valued function, while “Log” is one of its branches, concretely the principal value, whose imaginary part lies in the interval $$(−\pi, \pi]$$. In the equation we reflect the fact that $$\log{w}=\text{Log}~w + 2\pi ni$$ with $$n \in \mathbb{Z}$$. This shows the remarkable fact that, in the general case, we have infinite solutions for the equation. For the rest of the discussion we will separate $$w^z$$ as follows:

$$w^z=e^{z~\text{Log}~w}e^{2\pi nzi}=C \cdot F_n$$

with constant $$C=e^{z~\text{Log}~w}$$ and the rest being the sequence $$F_n=e^{2\pi nzi}$$. Since $$C$$ is a complex constant that multiplies $$F_n$$, its only influence is to rotate and scale all solutions equally. Noticeably, $$w$$ appears only in this constant, which shows that the values of $$z$$ are what really determines the number and general shape of the solutions. Therefore, we will concentrate on analyzing the behavior of $$F_n$$, by seeing what solutions we find when we restrict $$z$$ to different domains.
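Before restricting $$z$$, it is worth checking numerically that every branch of the multi-valued logarithm really maps back to $$w$$. A minimal sketch (my own check, not one of the repo's scripts):

```python
import cmath

# Every branch Log(w) + 2*pi*n*i of the multi-valued log exponentiates
# back to w; cmath.log gives the principal branch Log.
w = 3 + 4j
branches = [cmath.log(w) + 2j * cmath.pi * n for n in range(-3, 4)]
for log_w in branches:
    assert abs(cmath.exp(log_w) - w) < 1e-9
```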

Starting by restricting $$z$$ to the integers ($$z \in \mathbb{Z}$$), it is easy to see that there is only one resulting solution, as the factor $$F_n=e^{2\pi nzi}=1$$ for all $$n$$ (the exponent just rotates the solution $$2\pi$$ radians an integer number of times, leaving it unmodified). As expected, a complex number raised to an integer power has only one value.
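A quick numerical sketch of this (values chosen here for illustration): for an integer exponent, every branch of the logarithm yields the same value of $$w^z$$.

```python
import cmath

# For integer z (here z = 3), F_n = e^{2*pi*n*z*i} = 1 for every n,
# so all branches collapse to a single value of w**z.
w, z = 2 + 1j, 3
values = set()
for n in range(-5, 6):
    v = cmath.exp(z * (cmath.log(w) + 2j * cmath.pi * n))
    values.add((round(v.real, 6), round(v.imag, 6)))

assert values == {(2.0, 11.0)}   # (2+i)**3 == 2 + 11i, one single value
```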

If we let $$z$$ be a rational number ($$z=p/q$$, with $$p$$ and $$q$$ coprime integers, i.e. the canonical form), we obtain

$$F_n=e^{2\pi\frac{pn}{q} i}$$

which makes the sequence $$F_n$$ periodic with period $$q$$, that is, there are $$q$$ solutions for the equation. So we have two solutions for $$w^{1/2}$$, three for $$w^{1/3}$$, etc., as expected, since that is the number of solutions for square roots, cube roots and so on. The values will be the vertices of a regular polygon in the complex plane. For instance, in figure 1 the solutions for $$2^{1/5}$$ are displayed.
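This is easy to reproduce numerically. A sketch (not the repo's plotting script) computing the five values of $$2^{1/5}$$ from $$C$$ and $$F_n$$:

```python
import cmath

# The q solutions of w**(p/q) are C * F_n with C = e^{(p/q) Log w}
# and F_n = e^{2*pi*n*p*i/q}, for n = 0..q-1.
w, p, q = 2, 1, 5
C = cmath.exp((p / q) * cmath.log(w))   # cmath.log is the principal Log
solutions = [C * cmath.exp(2j * cmath.pi * n * p / q) for n in range(q)]

# All five are genuine fifth roots of 2, and all share the same modulus
# (the vertices of a regular pentagon).
for s in solutions:
    assert abs(s ** q - w) < 1e-9
    assert abs(abs(s) - abs(C)) < 1e-12
```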

If $$z$$ is real but irrational, $$e^{2\pi nzi}$$ is not periodic anymore: it takes infinitely many different values, all of them on the unit circle, and therefore $$w^z$$ has infinitely many values that lie on a circle of radius $$|C|$$.

In the more general case, $$z \in \mathbb{C}$$, we can write $$z=a+bi$$ with $$a$$ and $$b$$ real numbers, and we have

$$F_n=e^{-2\pi bn}e^{2\pi ani}.$$

There is now a scaling factor, $$e^{-2\pi bn}$$, that makes the modulus of the solutions vary with $$n$$, scattering them across the complex plane, while $$e^{2\pi ani}$$ rotates them as $$n$$ changes. The result is an infinite number of solutions for $$w^z$$ that lie on an equiangular spiral in the complex plane. The spiral can be seen if we extend the domain of $$F$$ to $$\mathbb{R}$$, that is

$$F(t)=e^{-2\pi bt}e^{2\pi ati}~~~\text{with}~~~t \in \mathbb{R}.$$

In figure 2 we can see one example which shows some solutions to $$2^{0.4-0.1i}$$, plus the spiral that passes over them.
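A hedged sketch of how figure 2's data could be generated ($$w$$ and $$z$$ taken from the text; the repo's script does the actual plotting):

```python
import cmath
import math

# The plotted points are C * F(n) for integer n, and the spiral through
# them is the same F evaluated over real t.
w, z = 2, 0.4 - 0.1j
b = z.imag
C = cmath.exp(z * cmath.log(w))   # cmath.log is the principal branch Log

def F(t):
    # F(t) = e^{-2*pi*b*t} * e^{2*pi*a*t*i}, written as one exponential
    return cmath.exp(2j * cmath.pi * z * t)

solutions = [C * F(n) for n in range(-3, 4)]

# The modulus changes geometrically with t: the signature of an
# equiangular (logarithmic) spiral.
assert abs(abs(F(1)) - math.exp(-2 * math.pi * b)) < 1e-9
```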

In fact, in Penrose’s book it is stated that these values are found in the intersection of two equiangular spirals, although he leaves finding them as an exercise for the reader (problem 5.9).

Let’s see then if we can find more spirals that cross these points. We are searching for functions that have the same value as $$F(t)$$ when $$t$$ is an integer. We can easily verify that the family of functions

$$F_k'(t)=F(t)e^{2\pi kti}~~~\text{with}~~~k \in \mathbb{Z}$$

are compatible with this restriction, as $$e^{2\pi kti}=1$$ in that case (integer $$t$$). Figures 3 and 4 represent again some solutions to $$2^{0.4-0.1i}$$, $$F(t)$$ (which is the same as the spiral for $$k=0$$), plus the spirals for $$k=-1$$ and $$k=1$$ respectively. We can see there that the solutions do indeed lie at the intersection of two spirals.
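This is easy to verify numerically; a small sketch (mine, not from the post's repo):

```python
import cmath

# Adding the factor e^{2*pi*k*t*i} leaves every integer-t value
# untouched, so each F'_k passes through all the solutions.
z = 0.4 - 0.1j

def F(t, k=0):
    return cmath.exp(2j * cmath.pi * z * t) * cmath.exp(2j * cmath.pi * k * t)

for n in range(-3, 4):                      # integer t: all spirals agree
    assert abs(F(n, k=1) - F(n)) < 1e-9
    assert abs(F(n, k=-1) - F(n)) < 1e-9

assert abs(F(0.5, k=1) - F(0.5)) > 0.1      # between solutions they separate
```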

If we superpose these 3 spirals, the ones for $$k=1$$ and $$k=-1$$ also cross at points other than the complex powers, as can be seen in figure 5. But if we choose two consecutive values of $$k$$, the two spirals cross only at the solutions to $$w^z$$. See, for instance, figure 6, where the spirals for $$k=\{-2,-1\}$$ are plotted. We see that any such pair of spirals fulfills Penrose's description.

In general, the number of points at which two spirals cross depends on the difference between their $$k$$-numbers. If we have, say, $$F_k'$$ and $$F_l'$$ with $$k>l$$, they will cross when

$$t=\ldots,0,\frac{1}{k-l},\frac{2}{k-l},\ldots,\frac{k-l-1}{k-l},1,1+\frac{1}{k-l},\ldots$$

That is, they will cross when $$t$$ is an integer (at the solutions to $$w^z$$) and also at $$k-l-1$$ points between consecutive solutions.
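A quick numerical sketch of the crossing count (again my own check, with the $$z$$ from the figures):

```python
import cmath

# F'_k(t) = F'_l(t) exactly when e^{2*pi*(k-l)*t*i} = 1, i.e. at
# t = m/(k-l); with k - l = 3 that gives two extra crossings between
# consecutive solutions.
z = 0.4 - 0.1j

def F_prime(t, k):
    return cmath.exp(2j * cmath.pi * z * t) * cmath.exp(2j * cmath.pi * k * t)

k, l = 1, -2
crossings = [m / (k - l) for m in range(4)]      # 0, 1/3, 2/3, 1
for t in crossings:
    assert abs(F_prime(t, k) - F_prime(t, l)) < 1e-9
```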

Let’s see now another interesting special case: when $$z=bi$$, that is, when it is pure imaginary. In this case $$e^{2\pi ati}$$ equals $$1$$, and there is no turn in the complex plane as $$t$$ grows. The spiral $$F(t)$$ degenerates into a half-line that starts at the origin (which is approached as $$t \to \infty$$ if $$b>0$$). This can be appreciated in figure 7, where the line and the spirals for $$k=-1$$ and $$k=1$$ are plotted for $$20^{0.1i}$$. The two spirals are mirrored around the half-line.

Digging further into this case, it turns out that a pure imaginary number raised to a pure imaginary power can produce real results. For instance, for $$i^{0.1i}$$, we see in figure 8 that the solutions lie on the positive real half-line.
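A numerical check of this (a sketch, not the repo's script):

```python
import cmath

# With w = i and z = 0.1i, both C = e^{z Log i} = e^{-pi/20} and
# F_n = e^{-pi*n/5} turn out to be real and positive, so every value
# of i**(0.1i) lands on the positive real half-line.
w, z = 1j, 0.1j
C = cmath.exp(z * cmath.log(w))          # Log(i) = i*pi/2

values = [C * cmath.exp(2j * cmath.pi * n * z) for n in range(-3, 4)]
for v in values:
    assert abs(v.imag) < 1e-9 and v.real > 0
```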

That something like this can produce real numbers is a curiosity that has historically intrigued mathematicians ($$i^i$$ takes real values too!). And with this I finish the post. It is really amusing to play with the values of $$w$$ and $$z$$; if you want to do so, you can use the Python scripts I pointed to at the beginning of the post. I hope you enjoyed reading it as much as I did writing it.

Dustin Kirkland

## The Golden Ratio calculated to a record 2 trillion digits, on Ubuntu, in the Cloud!

The Golden Ratio is one of the oldest and most visible irrational numbers known to humanity.  Pi is perhaps more famous, but the Golden Ratio is found in more of our art, architecture, and culture throughout human history.

I think of the Golden Ratio as sort of "Pi in 1 dimension".  Whereas Pi is the ratio of a circle's circumference to its diameter, the Golden Ratio is the ratio of a whole to its larger part, when that ratio equals the ratio of the larger part to the smaller.

Visually, this diagram from Wikipedia helps explain it:

We find the Golden Ratio in the architecture of antiquity, from the Egyptians to the Greeks to the Romans, right up to the Renaissance and even modern times.

While the bases of the pyramids are squares, the Golden Ratio can be observed in the base and the hypotenuse of a basic triangular cross section, like so:

The floor plan of the Parthenon has a width/depth ratio matching the Golden Ratio...

For the first 300 years of printing, nearly all books were printed on pages whose length to width ratio matched that of the Golden Ratio.

Leonardo da Vinci used the Golden Ratio throughout his works.  I'm told that his Vitruvian Man displays the Golden Ratio...

From school, you probably remember that the Golden Ratio is approximately 1.6 (and change).
There's a strong chance that your computer or laptop monitor has a 16:10 aspect ratio.  Does 1280x800 or 1680x1050 sound familiar?

That ~1.6 number is only an approximation, of course.  The Golden Ratio is in fact an irrational number and can be calculated to much greater precision through several different representations, including:

You can plug that number into your computer's calculator and crank out a dozen or so significant digits.
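Two of the standard representations can be sketched in a few lines of Python (these may not be the exact ones pictured above): the closed form $(1 + \sqrt{5})/2$, and the limit of ratios of consecutive Fibonacci numbers.

```python
from math import sqrt

# Closed form of the Golden Ratio.
phi = (1 + sqrt(5)) / 2                 # ~1.6180339887...

# Ratios of consecutive Fibonacci numbers converge to the same value.
a, b = 1, 1
for _ in range(40):                     # 40 steps is plenty for float64
    a, b = b, a + b
approx = b / a

assert abs(approx - phi) < 1e-12
```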

However, if you want to go much farther than that, Alexander Yee has created a program called y-cruncher, which has been used to calculate most of the famous constants to world record precision.  (Sorry, free software readers of this blog -- y-cruncher is not open source code...)

I came across y-cruncher a few weeks ago when I was working on the mprime post, demonstrating how you can easily put any workload into a Docker container and then produce both Juju Charms and Ubuntu Snaps that package easily.  While I opted to use mprime in that post, I saved y-cruncher for this one :-)

Also, while doing some network benchmark testing of The Fan Networking among Docker containers, I experimented for the first time with some of Amazon's biggest instances, which have dedicated 10 Gbps network links.  While I had a couple of those instances up, I did some small-scale benchmarking of y-cruncher.

Presently, none of the mathematical constant records are even remotely approachable with CPU and memory alone.  All of them require multiple terabytes of disk, which act as a sort of swap space for temporary files, as bits are moved in and out of memory while the CPU crunches.  As such, approaching these records is overwhelmingly an I/O-bound task -- not CPU- or memory-bound, as you might imagine.

After a variety of tests, I settled on the AWS d2.2xlarge instance size as the most affordable instance size to break the previous Golden Ratio record (1 trillion digits, by Alexander Yee on his gaming PC in 2010).  I say "affordable", in that I could have cracked that record "2x faster" with a d2.4xlarge or d2.8xlarge, however, I would have paid much more (4x) for the total instance hours.  This was purely an economic decision :-)

Let's geek out on technical specifications for a second...  So what's in a d2.2xlarge?
• 8x Intel Xeon CPUs (E5-2676 v3 @ 2.4GHz)
• 60GB of Memory
• 6x 2TB HDDs
First, I arranged all 6 of those 2TB disks into a RAID0 with mdadm, and formatted it with xfs (which performed better than ext4 or btrfs in my cursory tests).

    $ sudo mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=6 /dev/xvd?
    $ sudo mkfs.xfs /dev/md0
    $ df -h /mnt
    /dev/md0        11T   34M   11T   1% /mnt

Here's a brief look at raw read performance with hdparm:

    $ sudo hdparm -tT /dev/md0
     Timing cached reads:   21126 MB in  2.00 seconds = 10576.60 MB/sec
     Timing buffered disk reads: 1784 MB in  3.00 seconds = 593.88 MB/sec

The beauty here of RAID0 is that each of the 6 disks can be used to read and/or write simultaneously, perfectly in parallel.  600 MB/sec is pretty quick reads by any measure!  In fact, when I tested the d2.8xlarge, I put all 24x 2TB disks into the same RAID0 and saw nearly 2.4 GB/sec read performance across that 48TB array!

With /dev/md0 mounted on /mnt and writable by my ubuntu user, I kicked off y-cruncher with these parameters:

    Program Version:       0.6.8 Build 9461 (Linux - x64 AVX2 ~ Airi)
    Constant:              Golden Ratio
    Algorithm:             Newton's Method
    Decimal Digits:        2,000,000,000,000
    Hexadecimal Digits:    1,660,964,047,444
    Threading Mode:        Thread Spawn (1 Thread/Task)  ? / 8
    Computation Mode:      Swap Mode
    Working Memory:        61,342,174,048 bytes  ( 57.1 GiB )
    Logical Disk Usage:    8,851,913,469,608 bytes  ( 8.05 TiB )

Byobu was very handy here, being able to track in the bottom status bar my CPU load, memory usage, disk usage, and disk I/O, as well as connecting and disconnecting from the running session multiple times over the 4 days of running.

And approximately 79 hours later, it finished successfully!

    Start Date:            Thu Jul 16 03:54:11 2015
    End Date:              Sun Jul 19 11:14:52 2015
    Computation Time:      221548.583 seconds
    Total Time:            285640.965 seconds
    CPU Utilization:           315.469 %
    Multi-core Efficiency:     39.434 %
    Last Digits:
    5027026274 0209627284 1999836114 2950866539 8538613661  :  1,999,999,999,950
    2578388470 9290671113 7339871816 2353911433 7831736127  :  2,000,000,000,000

Amazingly, another person (whom I don't know), named Ron Watkins, performed the exact same computation and published his results within 24 hours, on July 22nd/23rd.  As such, Ron and I are "sharing" credit for the Golden Ratio record.

Now, let's talk about the economics here, which I think are the most interesting part of this post.

Look at the above chart of records, which are published on the y-cruncher page: the vast majority of them have been calculated on physical PCs -- most of them seem to be gaming PCs running Windows.

What's different about my approach is that I used Linux in the Cloud -- specifically Ubuntu in AWS.  I paid hourly (actually, my employer, Canonical, reimbursed me for that expense, thanks!).  It took right at 160 hours to run the initial calculation (79 hours) as well as the verification calculation (81 hours), at the current rate of $1.38/hour for a d2.2xlarge, for a grand total of $220!
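A back-of-the-envelope check of the figures quoted above (rate and hours are the ones from this post):

```python
# 160 instance-hours at $1.38/hour, for one trillion new record digits
# beyond the previous 1-trillion-digit record.
hours = 160                              # computation + verification
rate = 1.38                              # $/hour for a d2.2xlarge
cost = hours * rate                      # ~ $220

new_digits = 2_000_000_000_000 - 1_000_000_000_000
digits_per_dollar = new_digits / cost    # ~4.5 billion
digits_per_hour = new_digits / 80        # ~12.5 billion (the ~80 h initial run)
```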

$220 is a small fraction of the cost of 6x 2TB disks, 60 GB of memory, or 8 Xeon cores, not to mention the electricity and cooling required to run a system of this size (~750W) for 160 hours.

If we say the first trillion digits were already known from the previous record, that comes out to approximately 4.5 billion record-digits per dollar, and 12.5 billion record-digits per hour!

Hopefully you find this as fascinating as I do!

Cheers,
:-Dustin

niemeyer

## Taking the Gopher for a spin

As originally shared on Google+, and as a follow-up to the previous post covering OpenGL on Go QML, a new screencast was published to demonstrate the latest features introduced around OpenGL support in Go QML.

References:

niemeyer

## IEEE-754 brain teaser

Here is a small programming brain teaser for the weekend:

Assume uf is an unsigned integer with 64 bits that holds the IEEE-754 representation for a binary floating point number of that size. The questions are:

1. How to tell if uf represents an integer number?
2. How to serialize the absolute value of such an integer number in the minimum number of bytes possible, using big-endian ordering and the 8th bit as a continuation flag?

For example, float64(1<<70 + 3<<21) serializes as:

    "\x81\x80\x80\x80\x80\x80\x80\x83\x80\x80\x00"

The background for this problem is that the current draft of the strepr specification mentions that serialization. Some languages, such as Python and Ruby, implement transparent arbitrary-precision integers, which makes implementing the specification easier. For example, here is a simple Python interactive session that arrives at the result provided above by exploring the native integer representation:

    >>> f = float((1<<70) + (3<<21))
    >>> v = int(f)
    >>> l = [v&0x7f]
    >>> v >>= 7
    >>> while v > 0:
    ...     l.append(0x80 | (v&0x7f))
    ...     v >>= 7
    ...
    >>> l.reverse()
    >>> "".join("%02x" % i for i in l)
    '8180808080808083808000'

Python makes the procedure simpler because it is internally converting the float into an integer of appropriate precision via standard C functions, and then offering bit operations on the resulting value. The suggested brain teaser can be efficiently solved using just the IEEE-754 representation, though, and it's relatively easy because the problem is constrained to the integer space. A link to an implementation will be provided next week.

UPDATE: The logic is now available as part of the reference implementation of strepr.

niemeyer

## strepr v1 (draft2)

Note: This is a candidate version of the specification. This note will be removed once v1 is closed, and any changes will be described at the end. Please get in touch if you're implementing it.

## Contents

## Introduction

This specification defines strepr, a stable representation that enables computing hashes and cryptographic signatures out of a defined set of composite values that is commonly found across a number of languages and applications.

Although the defined representation is a serialization format, it isn't meant to be used as a traditional one. It may not be seen entirely in memory at once, or written to disk, or sent across the network. Its role is specifically in aiding the generation of hashes and signatures for values that are serialized via other means (JSON, BSON, YAML, HTTP headers or query parameters, configuration files, etc).

The format is designed with the following principles in mind:

Understandable — The representation must be easy to understand to increase the chances of it being implemented correctly.

Portable — The defined logic works properly when the data is being transferred across different platforms and implementations, independently from the choice of protocol and serialization implementation.
Unambiguous — As a natural requirement for producing stable hashes, there is a single way to process any supported value being held in the native form of the host language.

Meaning-oriented — The stable representation holds the meaning of the data being transferred, not its type. For example, the number 7 must be represented in the same way whether it's being held in a float64 or in an uint16.

## Supported values

The following values are supported:

• nil: the nil/null/none singleton
• bool: the true and false singletons
• string: raw sequence of bytes
• integers: positive, zero, and negative integer numbers
• floats: IEEE754 binary floating point numbers
• list: sequence of values
• map: associative value→value pairs

## Representation

nil = 'z'

The nil/null/none singleton is represented by the single byte 'z' (0x7a).

bool = 't' / 'f'

The true and false singletons are represented by the bytes 't' (0x74) and 'f' (0x66), respectively.

unsigned integer = 'p' <value>

Positive and zero integers are represented by the byte 'p' (0x70) followed by the variable-length encoding of the number. For example, the number 131 is always represented as {0x70, 0x81, 0x03}, independently from the type that holds it in the host language.

negative integer = 'n' <absolute value>

Negative integers are represented by the byte 'n' (0x6e) followed by the variable-length encoding of the absolute value of the number. For example, the number -131 is always represented as {0x6e, 0x81, 0x03}, independently from the type that holds it in the host language.

string = 's' <num bytes> <bytes>

Strings are represented by the byte 's' (0x73) followed by the variable-length encoding of the number of bytes in the string, followed by the specified number of raw bytes. If the string holds a list of Unicode code points, the raw bytes must contain their UTF-8 encoding.
For example, the string hi is represented as {0x73, 0x02, 'h', 'i'}.

Due to the complexity involved in Unicode normalization, it is not required for the implementation of this specification. Consequently, Unicode strings that would be equal if normalized may have different stable representations.

binary float = 'd' <binary64>

32-bit or 64-bit IEEE754 binary floating point numbers that are not holding integers are represented by the byte 'd' (0x64) followed by the big-endian 64-bit IEEE754 binary floating point encoding of the number.

There are two exceptions to that rule:

1. If the floating point value is holding a NaN, it must necessarily be encoded by the following sequence of bytes: {0x64, 0x7f, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}. This ensures all NaN values have a single representation.

2. If the floating point value is holding an integer number it must instead be encoded as an unsigned or negative integer, as appropriate. Floating point values that hold integer numbers are defined as those where floor(v) == v && abs(v) != ∞.

For example, the value 1.1 is represented as {0x64, 0x3f, 0xf1, 0x99, 0x99, 0x99, 0x99, 0x99, 0x9a}, but the value 1.0 is represented as {0x70, 0x01}, and -0.0 is represented as {0x70, 0x00}.

This distinction means all supported numbers have a single representation, independently from the data type used by the host language and serialization format.

list = 'l' <num items> [<item> ...]

Lists of values are represented by the byte 'l' (0x6c), followed by the variable-length encoding of the number of items in the list, followed by the stable representation of each item in the list in the original order. For example, the value [131, -131] is represented as {0x6c, 0x02, 0x70, 0x81, 0x03, 0x6e, 0x81, 0x03}.

map = 'm' <num pairs> [<item key> <item value> ...]
Associative maps of values are represented by the byte 'm' (0x6d) followed by the variable-length encoding of the number of pairs in the map, followed by an ordered sequence of the stable representation of each key and value in the map. The pairs must be sorted so that the stable representation of the keys is in ascending lexicographical order. A map must not have multiple keys with the same representation. For example, the map {"a": 4, 5: "b"} is always represented as {0x6d, 0x02, 0x70, 0x05, 0x73, 0x01, 'b', 0x73, 0x01, 'a', 0x70, 0x04}.

## Variable-length encoding

Integers are variable-length encoded so that they can be represented in short space and with unbounded size. In an encoded number, the last byte holds the 7 least significant bits of the unsigned value, and zero as the eighth bit. If there are remaining non-zero bits, the previous byte holds the next 7 bits, and the eighth bit is set on to flag the continuation to the next byte. The process continues while there are non-zero bits remaining. The most significant bits end up in the first byte of the encoded value, which must necessarily not be 0x80. For example, the number 128 is variable-length encoded as {0x81, 0x00}.

## Reference implementation

A reference implementation is available, including a test suite which should be considered when implementing the specification.

## Changes

draft1 → draft2:

• Enforce the use of UTF-8 for Unicode strings and explain why normalization is being left out.
• Enforce a single NaN representation for floats.
• Explain that map key uniqueness refers to the representation.
• Don't claim the specification is easy to implement; floats require attention.
• Mention reference implementation.

niemeyer

## Baby feeding statistics with R

Our son Otávio was born recently. Right in the first few days, we decided to keep tight control on the feeding times for a while, as it is an intense routine pretty unlike anything else, and obviously critical for the health of the baby.
I imagined that it wouldn't be hard to find an Android app that would do that in a reasonable way, and indeed there are quite a few. We went with Baby Care, as it has a polished interface and more features than we'll ever use. The app also includes some basic statistics, but not enough for our needs. Luckily, though, it is able to export the data as a CSV file, and post-processing that file with the R language is easy, and allows extracting some fun facts about what the routine of a healthy baby can look like in the first month, as shown below.

The first thing to do is to import the raw data from the CSV file. It is a one-liner in R:

    > info = read.csv("baby-care.csv", header=TRUE)

Then, this file actually comes with other events that won't be processed now, so we'll slice it and grab only the rows and columns of interest:

    > feeding <- info[info$Event.type == "Breast",
                      c("Event.subType", "Start.Time", "End.Time", "Duration")]

 

This is what it looks like:

    > feeding[100:103,]
        Event.subType       Start.Time         End.Time Duration
    129          Left 2013/01/04 13:45 2013/01/04 14:01    00:16
    132          Left 2013/01/04 16:21 2013/01/04 16:30    00:09
    134         Right 2013/01/04 17:46 2013/01/04 17:54    00:08

 

Now things get more interesting. Let’s extract that duration column into a more useful vector, and do some basic analysis:

 

    > duration <- as.difftime(as.vector(feeding$Duration), "%H:%M")
    > length(duration)
    [1] 365
    > total = sum(duration)
    > units(total) = "hours"
    > total
    Time difference of 63.71667 hours
    > mean(duration)
    Time difference of 10.47397 mins
    > sd(duration)
    [1] 5.937172

A total of 63 hours surprised me, but the mean time of around 10 minutes per feeding is within the recommendation, and the standard deviation looks reasonable. It may be more conveniently pictured as a histogram:

    > hist(as.numeric(duration), breaks="FD", col="blue", main="", xlab="Minutes")

Another point we were interested in is whether both sides are properly balanced:

    > sides <- c(" Right", " Left")
    > tapply(duration, feeding$Event.subType, mean)[sides]
        Right     Left
     10.72283 10.22099

 

Looks good.

All of the analysis so far goes over the whole period, but how has the daily intake changed over time? We'll need an additional vector to compute this and visualize it in a chart:
 

    > day <- format(strptime(feeding$Start.Time, "%Y/%m/%d %H:%M"), "%Y/%m/%d")
    > perday <- tapply(duration, day, sum)
    > mean(perday)
    [1] 136.5357
    > sd(perday)
    [1] 53.72735
    > sd(perday[8:length(perday)])
    [1] 17.49735
    > plot(perday, type="h", col="blue", xlab="Day", ylab="Minutes")

The mean looks good, with about two hours every day. The standard deviation looks high at first, but it's actually not that bad if we leave out the first few days. Looking at the graph shows why: the slope on the left-hand side, which is expected as there's less milk and the baby has more trouble right after birth.

The chart shows a red flag, though: one day seems well below the mean. This is something to be careful about, as babies can get into a loop where they sleep too much and miss being hungry, the lack of feeding causes hypoglycemia, which causes more sleep, and it doesn't end well. A rule of thumb is to wake the baby up every two hours in the first few days, and at most every four hours once he stabilizes, for the following weeks.

So this was another point of interest: what are the intervals between feedings?

    > start = strptime(feeding$Start.Time, "%Y/%m/%d %H:%M")
    > end = strptime(feeding$End.Time, "%Y/%m/%d %H:%M")
    > interval <- start[-1]-end[-length(end)]
    > hist(as.numeric(interval), breaks="FD", col="blue", main="", xlab="Minutes")

Seems great, with most feedings well under two hours apart. There's a worrying outlier, though, of more than 6 hours. Unsurprisingly, it happened overnight:

    > feeding$End.Time[interval > 300]
    [1] 2013/01/07 00:52

 

It wasn't a significant issue, but we don't want that happening often while his body isn't yet ready to hold enough energy for a full night of sleep. That's the kind of reason we've been monitoring him, and it is important because our own bodies are eager for full nights of sleep, which opens the door for unintended slack. As a reward for that kind of control, we've had the chance to enjoy not only his health, but also an admirable mood.

Gustavo Niemeyer

The underlying concept is very simple: spreadsheets are a way to organize text, numbers, and formulas into what might be seen as a natively numeric environment: a matrix. So what would happen if we loosened some of the bolts of the numeric-oriented organization, and tried to reuse the same concepts in a more formatting-oriented environment which is naturally collaborative: a wiki?

While I do encourage you to answer this with some fantastic new online service (please provide me with an account and the best e-book reader device available once you're rich), I had a try at answering the question myself a while ago by writing the Calc macro for Moin.

Basically, the Calc macro allows extracting values found in a wiki page into lists (think columns or rows), and applying formulas and further formatting as wanted.

I believe there’s a lot of potential in the basic concept, and the prototype, even though functional and useful, surely has a lot of room to evolve, so I’ve published the project on Launchpad to make contributions easier. I actually apologize for not publishing it earlier. There was hope that more features would be implemented before releasing, but now it’s clear that it won’t get many improvements from me anytime soon. If you do decide to improve it, please try to prepare patches which are mostly ready for integration, including full testing, since I can’t dedicate much time to it myself in the foreseeable future.