This short post is intended to summarize some work on a simple exercise that may be useful to keep in mind when arithmetizing purely functional programs.

The topic of discussion is the hylomorphism: a recursive function made up of four parts.

An endofunctor, $F$. This is the same thing as a functor in the Haskell sense. There should be a map function that turns a function from $A$s to $B$s into a function from $F(A)$s to $F(B)$s, for all $A$ and all $B$.

A "coalgebra" over $F$; that is, a function that turns $A$s into $F(A)$s, for a fixed $A$.

An "algebra" over $F$; that is, a function that turns $F(B)$s into $B$s, for a fixed $B$.

A recursive equation that defines the hylomorphism. If $a$ is the algebra, $c$ is the coalgebra, and $h$ is the hylomorphism, the following equation holds: $h = a \circ F(h) \circ c$.

If you've never seen a hylomorphism before, it may not be clear either what $h$ is doing or why this pattern is important.

Briefly, we can think of $c$ as constructing a single layer of a tree. If $F$ is, for example, something like $X \mapsto 1 + X \times X$, then $c$ will take an $A$ and turn it into something in the shape $1 + A \times A$; that is, it will either be a leaf, or a pair of $A$s. The $F(h)$, being aware of this layer structure, will map $h$ into the $A$s created by $c$. In the case of a leaf, there will be no created $A$s, so no new calls to $h$ will be created. Alternatively, if a branch is encountered then there will be one new $h$ call per branch. $h$ will produce new $c$s, and so we can imagine a tree whose layers are $F$ shaped taking form.

$a$ will deconstruct these layers. It will take an $F(B)$ and produce a $B$. In the case of a leaf, it will produce a $B$ wholesale; otherwise it may combine the $B$s in the branches. As $h$ recurses, it will spawn exactly one $a$ for each $c$, exactly one layer consumer for each layer producer. Ultimately, the $a$s will consume the whole tree. In the end, $h$ will be a recursive function that takes an $A$ and produces a $B$.

A commonplace example of a hylomorphism is the quicksort algorithm. $c$ will take an input list and split it into

the first element of the list.

all the elements of the list's tail that are less than or equal to the first.

all the elements of the list's tail that are greater than the first.

Recursively applying this will produce a tree that stores numbers on its branches such that all these numbers are sorted. $a$ then only needs to collapse the tree back into the list. We can infer that $F$ will be $X \mapsto 1 + \mathbb{N} \times X \times X$. Additionally, both $A$ and $B$ will be $List(\mathbb{N})$. Using this as a guide, we can implement this as a hylomorphism. I will use Mathematica, but this should be doable, maybe with more or less elegance, in most languages.
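The Mathematica implementation isn't reproduced in this chunk; as a language-neutral illustration, here is a minimal Python sketch of quicksort as a hylomorphism. The functor $F(X) = 1 + \mathbb{N} \times X \times X$ is encoded as `None` for the leaf case and a `(pivot, left, right)` triple for the branch; all names are mine, not the post's.

```python
def fmap(f, layer):
    # Map f over the recursive positions of one F-layer.
    if layer is None:
        return None
    pivot, left, right = layer
    return (pivot, f(left), f(right))

def hylo(alg, coalg):
    # The defining equation: h = alg . F(h) . coalg
    def h(x):
        return alg(fmap(h, coalg(x)))
    return h

def qs_coalg(xs):
    # Split a list into a pivot, the tail elements <= pivot,
    # and the tail elements > pivot.
    if not xs:
        return None
    pivot, rest = xs[0], xs[1:]
    return (pivot, [x for x in rest if x <= pivot],
                   [x for x in rest if x > pivot])

def qs_alg(layer):
    # Collapse one layer back into a sorted list.
    if layer is None:
        return []
    pivot, left, right = layer
    return left + [pivot] + right

quicksort = hylo(qs_alg, qs_coalg)
```

Running `quicksort([9, 4, 0, 5, 3, 2, 7, 8, 6, 1])` returns the list sorted in ascending order.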

The reason why hylomorphisms are important boils down to their ubiquity and expressiveness. The vast majority of complex algorithms either are or can be cast as a hylomorphism. They also have an elaborate algebraic theory that allows them to be manipulated as purely mathematical objects for the purpose of program calculation and optimization. This is common in approaches in the vein of the algebra of programming.

The other topic of this post is trace arithmetization. There is a lot of work taking whole programming languages, execution models, or virtual machines and creating circuits for verifying the execution of a program. By arithmetizing hylomorphisms, we cover a large swath of programs without having to commit to a language. Now, is this actually better? Intuitively, I would say no; at least not on its face. However, I found this approach at least enlightening to contemplate.

The trace of a hylomorphism will have the same shape as the recursive call structure of the function. This is exactly the tree produced by the coalgebra. By using the identity function as the algebra, we can get this tree as $tree = F(tree) \circ c$. With a bit of formatting, we can see this in our previous implementation.
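The formatted Mathematica output isn't shown here; the same idea can be sketched in Python (continuing the illustrative encoding from before): with the identity in place of the algebra, the hylomorphism returns its own call tree.

```python
def fmap(f, layer):
    # Map f over the recursive positions of one F-layer.
    if layer is None:
        return None
    pivot, left, right = layer
    return (pivot, f(left), f(right))

def qs_coalg(xs):
    # The quicksort coalgebra: pivot plus two filtered sub-lists.
    if not xs:
        return None
    pivot, rest = xs[0], xs[1:]
    return (pivot, [x for x in rest if x <= pivot],
                   [x for x in rest if x > pivot])

def call_tree(xs):
    # tree = F(tree) . c  -- each node keeps its F-shape.
    return fmap(call_tree, qs_coalg(xs))

# call_tree([3, 1, 2]) == (3, (1, None, (2, None, None)), None)
```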

This is not quite our trace, but it forms its skeleton. What we need to do is decorate each constructor with the $A$s and $B$s present there. Let's start with the $A$s produced by the coalgebra. At each node, we won't have an $F$ layer, but, instead, an $A \times F$ layer. Some readers may recognize this as a cofree comonad over $F$; if you don't, then you can ignore that. To produce these layers, we merely need to copy each $A$, leave one copy alone, and apply $c$ to the other. This can be accomplished using the universal property of $\times$, which I will denote $\triangle$. $\triangle$ will take two functions, one from $X$s to $Y$s and the other from $X$s to $Z$s, and produce a function from $X$s to $(Y \times Z)$s by copying the input and applying each function respectively. With this in hand, we can define the coalgebra part of the trace as
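The Mathematica definition isn't reproduced here; a Python sketch of the same construction, with `split` playing the role of $\triangle$ (names are illustrative):

```python
def fmap(f, layer):
    if layer is None:
        return None
    pivot, left, right = layer
    return (pivot, f(left), f(right))

def split(f, g):
    # The triangle combinator: copy the input, apply f and g.
    return lambda x: (f(x), g(x))

def qs_coalg(xs):
    if not xs:
        return None
    pivot, rest = xs[0], xs[1:]
    return (pivot, [x for x in rest if x <= pivot],
                   [x for x in rest if x > pivot])

def coalg_trace(xs):
    # Each node stores the input A next to the F-layer made from it,
    # giving layers of shape A x F(...).
    a, layer = split(lambda x: x, qs_coalg)(xs)
    return (a, fmap(coalg_trace, layer))
```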

At this point, we have something which can be collapsed into a list of constraints. At each layer, we have something of the shape $A \times F(A \times \ldots)$. By mapping $\pi_1$, the first projection, we get a pair $A \times F(A)$. Such a pair should be an input-output pair for $c$. $\pi_1$ is First in Mathematica.
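This flattening step can be sketched in Python as well (illustrative names; the trace encoding is the one from the previous sketches). Each collected pair `(a, layer)` is a constraint asserting that `c` applied to `a` yields `layer`.

```python
def fmap(f, layer):
    if layer is None:
        return None
    pivot, left, right = layer
    return (pivot, f(left), f(right))

def coalg_constraints(trace):
    # At each node (a, layer), project the stored A out of each child
    # to recover the (input, output) pair that c must have produced.
    a, layer = trace
    pairs = [(a, fmap(lambda child: child[0], layer))]
    if layer is not None:
        _, left, right = layer
        pairs += coalg_constraints(left) + coalg_constraints(right)
    return pairs
```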

The algebra trace is only a bit more complicated. We do need $c$ to produce the tree. After that, each algebra will see, not an $F(B)$, but, instead, an $F(B \times F(\ldots))$, and it must produce a $B \times F(B \times F(\ldots))$. This requires copying the input, mapping $\pi_1$, applying $a$ to one of those copies, and leaving the other unmodified.
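A Python sketch of the algebra trace, in the same illustrative encoding: the tree is rebuilt with the coalgebra, and each node is decorated with the $B$ the algebra computes there.

```python
def fmap(f, layer):
    if layer is None:
        return None
    pivot, left, right = layer
    return (pivot, f(left), f(right))

def qs_coalg(xs):
    if not xs:
        return None
    pivot, rest = xs[0], xs[1:]
    return (pivot, [x for x in rest if x <= pivot],
                   [x for x in rest if x > pivot])

def qs_alg(layer):
    if layer is None:
        return []
    pivot, left, right = layer
    return left + [pivot] + right

def alg_trace(xs):
    # Recurse first, then apply the algebra to the first projections,
    # producing layers of shape B x F(B x F(...)).
    layer = fmap(alg_trace, qs_coalg(xs))
    b = qs_alg(fmap(lambda child: child[0], layer))
    return (b, layer)
```

The first component of `alg_trace(xs)` is the fully sorted list; the rest of the structure records every intermediate output of the algebra.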

We could combine this into a single full trace by storing both $A$ and $B$ at each constructor, but there's no benefit in doing that here, so I'll keep them separate.

This is all well and good, but, presumably, one would like to put this in a format that can ultimately be placed within a circuit. The algebra and coalgebra of quicksort are themselves recursive; they are also hylomorphisms, so we can produce their traces the same way. To facilitate this, it's prudent to do a bit of generic programming. We can define generic constructors for polynomial endofunctors and automatically generate functions using them. For example, we can define generic maps:
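The Mathematica version of the generic map `M[F]` isn't shown in this chunk; a Python sketch of the same idea, where functor expressions are built from `Id`, `Const`, `Sum`, and `Prod` (all names here are mine):

```python
class Id: pass       # marks the recursive position X
class Const: pass    # marks a constant position (e.g. N)

def Sum(F, G):  return ('sum', F, G)
def Prod(F, G): return ('prod', F, G)

# Values of Sum are tagged ('inl', v) / ('inr', v); values of Prod are pairs.
def M(F):
    # Generic functorial map: lift f over the Id positions of F.
    def mapper(f):
        def go(v):
            if F is Id:
                return f(v)
            if F is Const:
                return v
            tag, G, H = F
            if tag == 'sum':
                side, w = v
                return (side, M(G)(f)(w) if side == 'inl' else M(H)(f)(w))
            if tag == 'prod':
                x, y = v
                return (M(G)(f)(x), M(H)(f)(y))
        return go
    return mapper
```

For the quicksort functor `Sum(Const, Prod(Const, Prod(Id, Id)))`, mapping a function only touches the two recursive positions of a branch and leaves a leaf untouched.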

And we can define a single hylomorphism function which we'll use for all future hylos.

hylo[F_, a_, c_][x_] := a@M[F][hylo[F, a, c]]@c@x

Focusing on the coalgebra, we must define a function which pulls off the first element of a list before splitting the list into two filtered sub-lists. This requires us to define the coalgebra in two pieces. The first piece is non-recursive; it tries splitting off an element and returns nothing if there is no first element. I'll call this piece $listCoalg$. It takes a $List(A)$ and returns a $1 + (A \times List(A))$.

Assuming we were able to split off the head of the list, we can now define the family of hylomorphisms which take a parameter and a list and produce a pair of lists.

With this, we can define the second piece of the coalgebra, a recursive function I'll call filter. That, along with the coalgebra itself, can be defined as:
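The Mathematica definitions aren't reproduced here; a Python sketch of the same two-piece construction, with illustrative names (`list_coalg` for the non-recursive splitter, `filter_hylo` for the filtering hylomorphism family):

```python
def list_coalg(xs):
    # Non-recursive piece: try to split off the head,
    # returning None when the list is empty.
    return None if not xs else (xs[0], xs[1:])

def fmap_list(f, layer):
    # F(X) = 1 + N x X, the list functor's base.
    return None if layer is None else (layer[0], f(layer[1]))

def filter_alg(p):
    # Route the head into the first or second output list.
    def alg(layer):
        if layer is None:
            return ([], [])
        h, (le, gt) = layer
        return ([h] + le, gt) if h <= p else (le, [h] + gt)
    return alg

def filter_hylo(p):
    # The family of hylomorphisms indexed by the pivot p.
    def h(xs):
        return filter_alg(p)(fmap_list(h, list_coalg(xs)))
    return h

def qs_coalg(xs):
    # The full quicksort coalgebra: 1 + N x List x List.
    if not xs:
        return None
    le, gt = filter_hylo(xs[0])(xs[1:])
    return (xs[0], le, gt)
```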

With that, all the recursive parts of quicksort have been factored out. To turn this into a more useful set of constraints, we can implement generic versions of our trace functions. Part of that is implementing a function I called "collapse" which will take an $F(Set(A))$ and produce a $Set(A)$ by unioning together all the sets which appear in the expression.
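A Python sketch of collapse over the tuple encoding used in the earlier sketches, where Python sets of constraints sit at the recursive positions (illustrative, not the post's Mathematica version):

```python
def collapse(layer):
    # F(Set(A)) -> Set(A): union every set found in the expression.
    if layer is None:
        return set()
    if isinstance(layer, (set, frozenset)):
        return set(layer)
    if isinstance(layer, tuple):
        out = set()
        for part in layer:
            out |= collapse(part)
        return out
    return set()  # bare constants (e.g. the pivot) carry no constraints
```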

At this point, the goal would be to get a list of constraints that are sufficiently primitive that we can implement them in a simpler system. This should basically be the check for filterAlg, concatAlg, and the non-recursive parts of the other functions.

For filter, the coalgebra just converts the standard representation of lists into a generic representation. In other words, it's essentially a glorified identity function. As such, we can ignore the coalgebra when describing the filter constraints.

I've named the yet-to-be-implemented check for filterAlg filterAlgCheck.

The coalgebra has two cases that produce two different sets of constraints. The only additional constraint on top of what filterCheck produces ensures that the second argument of the check is the appropriate output.

We can ignore the coalgebra constraints for concat for the same reason we ignored them for filter. The structure of the constraints is, overall, similar.

If we run qsCheck on $\{9, 4, 0, 5, 3, 2, 7, 8, 6, 1\}$, we will get a total of 87 constraints to check.

For the rest of this post, I will try compiling those constraints to Vamp-IR, a language for representing arithmetic circuits.

In order to perform all the checks, we need to be able to check equality between elements of polynomial endofunctors. Vamp-IR has built-in tuples, but not coproducts. We can, however, implement coproducts via higher-order functions.

def inl x l r = l x;
def inr x l r = r x;

We can then implement equality checks. These are parameterized by whatever method is necessary to check equality of the underlying data. Each will return $1$ if the elements are equal, and $0$ otherwise.

def isZero x = {
    def xi = fresh (1 | x);
    x * (1 - xi * x) = 0;
    1 - xi * x
};
def intEq a b = isZero (a - b);
def unitEq a b = 1;
def emptyEq a b = 0;
def prodEq eqX eqY (x1, y1) (x2, y2) =
    eqX x1 x2 * eqY y1 y2;
def sumEq eqX eqY a b =
    a (fun x1 { b (fun x2 { eqX x1 x2 }) (fun y2 { 0 }) })
      (fun y1 { b (fun x2 { 0 }) (fun y2 { eqY y1 y2 }) });
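The isZero gadget is the standard inverse-witness trick: the prover supplies xi = 1/x (anything when x = 0), and the constraint x * (1 - xi * x) = 0 forces the output 1 - xi * x to be exactly the indicator of x = 0. A Python sketch of the logic over a small stand-in prime field (the modulus 101 is arbitrary; Vamp-IR's actual field is far larger):

```python
P = 101  # stand-in field modulus

def is_zero(x):
    # The prover's "fresh (1 | x)": the inverse of x, or anything if x = 0.
    xi = pow(x, P - 2, P) if x % P != 0 else 0
    out = (1 - xi * x) % P
    assert (x * out) % P == 0  # the circuit's constraint
    return out
```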

We can write some functions to assist in automatically generating Vamp-IR code. This code will convert syntax trees to strings of Vamp-IR code and also automatically generate equality checkers based on a functor.

For several checks, we need to check equality between lists. This isn't possible in general, as lists can be arbitrarily long. We can, however, create a check that works for every list up to a specific length. For our example, the limit is length 10, which has 11 constructors including the empty list at the end. As such, we can automatically compute this and add it as a definition.

It's quite a long definition, so I've truncated it here. From here, we can start implementing each of the checks. filterAlgCheck is the most technically sophisticated. Firstly, it needs a method to verify that one number is less than another. Since none of our numbers are over 9, I'll only make a range check big enough for 0-15.
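The range-check and comparison definitions are not shown in this chunk; the standard technique, sketched here in Python, is to bit-decompose a value to prove it lies in 0..15, and to compare a and b by range-checking b - a - 1. The exact convention of the circuit's less4 (strict versus non-strict) is an assumption here.

```python
def range16(x):
    # Decompose into 4 bits and check the recomposition equals x.
    bits = [(x >> i) & 1 for i in range(4)]
    assert all(b * (1 - b) == 0 for b in bits)  # each bit is 0 or 1
    return sum(b << i for i, b in enumerate(bits)) == x

def less4(a, b):
    # For a, b already in 0..15: a < b iff b - a - 1 is also in 0..15.
    assert range16(a) and range16(b)
    return 1 if range16(b - a - 1) else 0
```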

With these, the check can be implemented directly.

def or b1 b2 = 1 - (1 - b1) * (1 - b2);
def filterAlgCheck p a (b1, b2) =
    a (fun x { listEq b1 (inl 0) * listEq b2 (inl 0) })
      (fun (h, (a1, a2)) {
        or (less4 h p * listEq (inr (h, a1)) b1 * listEq a2 b2)
           ((1 - less4 h p) * listEq a1 b1 * listEq (inr (h, a2)) b2)
      });

We can then automatically generate Vamp-IR constraints in Mathematica if we implement functions for converting elements of polynomial endofunctors and use them to generate the full check.

concatAlgCheck is much simpler, but still requires some basic work.

def concatAlgCheck p a b =
    a (fun x { listEq p b })
      (fun (n, l) { listEq (inr (n, l)) b });

The corresponding Mathematica generator is

concatAlgCheck[p_, a_, b_] :=
    eq[ap["concatAlgCheck",
        {listConv[p],
         funConv[$1 ⊕ ($C ⊗ $L[$C])][a],
         listConv[b]
        }], 1]

The remaining equality checks are trivial so long as we know their types.

I've compiled the full Vamp-IR program into this gist, if you want to see it. It does, indeed, produce a valid proof, but I'm not fully satisfied with it. You can also find a Mathematica notebook with the code that was used to produce it here.

There is an argument to be made that what I did here was not a principled way to arithmetize data-structure manipulations. The compiled circuit will throw away most data-structure details and only really check field element manipulations. Arithmetizing the data-structure manipulations themselves would be more involved, and would require a few design choices. The latter approach is related to what GEB is doing. I also had the thought, while writing this post, that these constructions could be made even more generic by describing all the functions through the interface of a distributive category. This would allow the arithmetizations for the primitive functions to be more automatic. This is even more like what GEB does. Perhaps this idea should be revisited there.