Type Theory


A common occurrence in category theory is the adjoint triple. This is a pair of adjunctions relating three functors:

F ⊣ G ⊣ H
F ⊣ G, G ⊣ H

Perhaps part of the reason they are so common is that (co)limits form one:

colim ⊣ Δ ⊣ lim

where Δ : C -> C^J is the diagonal functor, which takes objects in C to the constant functor returning that object. A version of this shows up in Haskell (with some extensions) and dependent type theories, as:

∃ ⊣ Const ⊣ ∀
Σ ⊣ Const ⊣ Π

where, if we only care about quantifying over a single variable, existential and sigma types can be seen as left adjoints to a diagonal functor that maps types to constant type families (over * for the Haskell triple, or over some other type for the dependently typed one), while universal and pi types can be seen as right adjoints to the same.
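
For concreteness, here is a sketch of the first triple in Haskell, quantifying over a single variable (assuming RankNTypes and ExistentialQuantification; the names Exists, Forall, and the witnesses below are mine):

{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- ∃: some instantiation of f
data Exists f = forall x. Exists (f x)

-- ∀: every instantiation of f
newtype Forall f = Forall { runForall :: forall x. f x }

-- ∃ ⊣ Const: maps out of an existential are polymorphic maps
fromExists :: (Exists f -> b) -> (forall x. f x -> b)
fromExists k fx = k (Exists fx)

toExists :: (forall x. f x -> b) -> (Exists f -> b)
toExists k (Exists fx) = k fx

-- Const ⊣ ∀: polymorphic maps into f are maps into the universal
toForall :: (forall x. b -> f x) -> (b -> Forall f)
toForall k b = Forall (k b)

fromForall :: (b -> Forall f) -> (forall x. b -> f x)
fromForall k b = runForall (k b)

Here Const b is the constant family \x -> b, left implicit in the types of the witnesses.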

It's not uncommon to see the above information in type theory discussion forums. But, there are a few cute properties and examples of adjoint triples that I haven't really seen come up in such contexts.

To begin, we can compose the two adjunctions involved, since the common functor ensures things match up. Calculating with the hom-set isomorphisms, we see:

Hom(FGA, B)     Hom(GFA, B)
    ~=              ~=
Hom(GA, GB)     Hom(FA, HB)
    ~=              ~=
Hom(A, HGB)     Hom(A, GHB)

So there are two ways to compose the adjunctions, giving two induced adjunctions:

FG ⊣ HG,  GF ⊣ GH

And there is something special about these adjunctions. Note that FG is the comonad for the F ⊣ G adjunction, while HG is the monad for the G ⊣ H adjunction. Similarly, GF is the F ⊣ G monad, and GH is the G ⊣ H comonad. So each adjoint triple gives rise to two adjunctions between monads and comonads.

The second of these has another interesting property. We often want to consider the algebras of a monad, and coalgebras of a comonad. The (co)algebra operations with carrier A have type:

alg   : GFA -> A
coalg : A -> GHA

but these types are isomorphic according to the GF ⊣ GH adjunction. Thus, one might guess that GF monad algebras are also GH comonad coalgebras, and that in such a situation, we actually have some structure that can be characterized both ways. In fact, this is true for any monad left adjoint to a comonad [0]; but all adjoint triples give rise to these.

The first adjunction actually turns out to be more familiar for the triple examples above, though. (Edit: [2]) If we consider the Σ ⊣ Const ⊣ Π adjunction, where:

Σ, Π : (A -> Type) -> Type
Const : Type -> (A -> Type)

we get:

ΣConst : Type -> Type
ΣConst B = A × B
ΠConst : Type -> Type
ΠConst B = A -> B

So this is the familiar adjunction:

A × - ⊣ A -> -
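
This adjunction is witnessed concretely by (un)currying; a minimal sketch (names mine):

-- Hom(A × B, C) ≅ Hom(B, A -> C)
leftAdjunct :: ((a, b) -> c) -> b -> a -> c
leftAdjunct f b a = f (a, b)

rightAdjunct :: (b -> a -> c) -> (a, b) -> c
rightAdjunct g (a, b) = g b a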

But, there happens to be a triple that is a bit more interesting for both cases. It refers back to the categories of functors vs. bare type constructors mentioned in previous posts. So, suppose we have a category called Con whose objects are (partially applied) type constructors (f, g) of kind * -> *, and whose arrows are polymorphic functions with types like:

 
forall x. f x -> g x
 

And let us further imagine that there is a similar category, called Func, except its objects are the things with Functor instances. Now, there is a functor:

U : Func -> Con

that 'forgets' the functor instance requirement. This functor is in the middle of an adjoint triple:

F ⊣ U ⊣ C
F, C : Con -> Func

where F creates the free functor over a type constructor, and C creates the cofree functor over a type constructor. These can be written using the types:

 
data F f a = forall e. F (e -> a) (f e)
newtype C f a = C (forall r. (a -> r) -> f r)
 

and these types will also serve as the types involved in the composite adjunctions:

FU ⊣ CU : Func -> Func
UF ⊣ UC : Con -> Con
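
As a sanity check that F f and C f really land in Func for an arbitrary type constructor f, their Functor instances follow directly from the types (a sketch):

instance Functor (F f) where
  fmap g (F h x) = F (g . h) x

instance Functor (C f) where
  fmap g (C e) = C (\k -> e (k . g))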

Now, CU is a monad on functors, and the Yoneda lemma tells us that it is actually the identity monad. Similarly, FU is a comonad, and the co-Yoneda lemma tells us that it is the identity comonad (which makes sense, because identity is self-adjoint; and the above is why F and C are often named (Co)Yoneda in Haskell examples).
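
The isomorphisms in question are easy to write down (a sketch; the Functor constraint is exactly what lets us collapse the (co)free structure):

-- Yoneda: C (U f) ≅ f for a Functor f
toC :: Functor f => f a -> C f a
toC fa = C (\k -> fmap k fa)

fromC :: C f a -> f a
fromC (C e) = e id

-- co-Yoneda: F (U f) ≅ f for a Functor f
toF :: f a -> F f a
toF = F id

fromF :: Functor f => F f a -> f a
fromF (F k fe) = fmap k fe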

On the other hand, UF is a monad on type constructors (note, U isn't represented in the Haskell types; F and C just play triple duty, and the constraints on f control what's going on):

 
eta :: f a -> F f a
eta = F id
 
transform :: (forall x. f x -> g x) -> F f a -> F g a
transform tr (F g x) = F g (tr x)
 
mu :: F (F f) a -> F f a
mu (F g (F h x)) = F (g . h) x
 

and UC is a comonad:

 
epsilon :: C f a -> f a
epsilon (C e) = e id
 
transform' :: (forall x. f x -> g x) -> C f a -> C g a
transform' tr (C e) = C (tr . e)
 
delta :: C f a -> C (C f) a
delta (C e) = C $ \h -> C $ \g -> e (g . h)
 

These are not the identity (co)monad, but this is the case where we have algebras and coalgebras that are equivalent. So, what are the (co)algebras? If we consider UF (and unpack the definitions somewhat):

 
alg :: forall e. (e -> a, f e) -> f a
-- required laws (read as equations, not pattern matches):
alg (id, x) = x
alg (g . h, x) = alg (g, alg (h, x))
 

and for UC:

 
coalg :: f a -> forall r. (a -> r) -> f r
-- required laws:
coalg x id = x
coalg x (g . h) = coalg (coalg x h) g
 

In other words, (co)algebra actions of these (co)monads are (mangled) fmap implementations, and the commutativity requirements are exactly what is required to be a law-abiding instance. So the (co)algebras are exactly the Functors. [1]
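
A sketch of the correspondence (assuming RankNTypes; names mine): an algebra for UF is exactly the data of an fmap, and dually a coalgebra for UC:

{-# LANGUAGE RankNTypes #-}

fmapFromAlg :: (forall x. F f x -> f x) -> (a -> b) -> f a -> f b
fmapFromAlg alg g fa = alg (F g fa)

algFromFmap :: (forall x y. (x -> y) -> f x -> f y) -> F f a -> f a
algFromFmap fm (F g fe) = fm g fe

coalgFromFmap :: (forall x y. (x -> y) -> f x -> f y) -> f a -> C f a
coalgFromFmap fm fa = C (\k -> fm k fa)

Under this translation, the algebra laws above become precisely the fmap identity and composition laws.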

There are, of course, many other examples of adjoint triples. And further, there are even adjoint quadruples, which in turn give rise to adjoint triples of (co)monads. Hopefully this has sparked some folks' interest in finding and studying more interesting examples.

[0]: Another example is A × - ⊣ A -> -, where the A in question is a monoid. (Co)monad (co)algebras of these correspond to actions of the monoid on the carrier set.
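
For instance (a toy example of my own), the additive integers acting on Int by addition:

import Data.Monoid (Sum(..))

-- an algebra of the (Sum Int, -) monad; the action laws are the
-- algebra laws
act :: (Sum Int, Int) -> Int
act (Sum n, x) = n + x
-- act (mempty, x) = x
-- act (p <> q, x) = act (p, act (q, x))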

[1]: This shouldn't be too surprising, because having a category of (co)algebraic structures that is equivalent to the category of (co)algebras of the (co)monad that comes from the (co)free-forgetful adjunction is the basis for doing algebra in category theory (with (co)monads, at least). However, it is somewhat unusual for a forgetful functor to have both a left and right adjoint. In many cases, something is either algebraic or coalgebraic, and not both.

[2]: Urs Schreiber informed me of an interesting interpretation of the ConstΣ ⊣ ConstΠ adjunction. If you are familiar with modal logic and the possible worlds semantics thereof, you can probably imagine that we could model it using something like P : W -> Type, where W is the type of possible worlds, and propositions are types. Then values of type Σ P demonstrate that P holds in particular worlds, while values of type Π P demonstrate that it holds in all worlds. Const turns these types back into world-indexed 'propositions,' so ConstΣ is the possibility modality and ConstΠ is the necessity modality.

I recently attended RDP in Warsaw, where there was quite a bit of work on Homotopy Type Theory, including a special workshop organized to present recent and ongoing work. The organizers of all the events did a fantastic job, and there was a great deal of exciting work. I should add that I will not be able to go to RDP next year, as the two constituent central conferences (RTA — Rewriting Techniques and Applications and TLCA — Typed Lambda Calculus and Applications) have merged and changed names. Next year it will be called FSCD — Formal Structures for Computation and Deduction. So I very much look forward to attending FSCD instead.

In any case, one of the invited speakers was Vladimir Voevodsky, who gave a talk on his recent work relating to univalent foundations, titled “From Syntax to Semantics of Dependent Type Theories — Formalized”. This was a very clear talk that helped me understand his current research direction and the motivations for it. I also had the benefit of some very useful conversations with others involved in collaboration with some of this work, who patiently answered my questions. The notes below are complementary to the slides from his talk.

I had sort of understood what the motivation for studying “C-Systems” was, but I had not taken it on myself to look at Voevodsky’s “B-Systems” before, nor had I grasped how his research programme fit together. Since I found this experience enlightening, I figured I might as well write up what I think I understand, with all the usual caveats. Also note, in all the below, by “type theory” I invariably mean the intensional sort. So all the following is in reference to the B-systems paper that Voevodsky has posted on arXiv (arXiv:1410.5389).

That said, if anything I describe here strikes you as funny, it is more likely that I am not describing things right than that the source material is troublesome — i.e. take this with a grain of salt. And bear in mind that I am not attempting to directly paraphrase Voevodsky himself or others I spoke to, but rather I am giving an account of where what they described resonated with me, and filtered through my own examples, etc. Also, if all of the “why and wherefore” is already familiar to you, feel free to skip directly to the “B-Systems” section where I will just discuss Voevodsky’s paper on this topic, and my attempts to understand portions of it. And if you already understand B-Systems, please do reply and explain all the things I’m sure I’m missing!

(more...)

Emil Axelsson and Koen Claessen wrote a functional pearl last year about Using Circular Programs for Higher-Order Syntax.

About 6 months ago I had an opportunity to play with this approach in earnest, and realized we can speed it up a great deal. This has kept coming up in conversation ever since, so I've decided to write up an article here.

In my bound library I exploit the fact that monads are about substitution to make a monad transformer that manages substitution for me.
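
The idea in one toy example (not the bound API itself; Exp is illustrative): for a syntax tree with variables of type a, (>>=) is simultaneous substitution:

import Control.Monad (ap)

data Exp a = Var a | App (Exp a) (Exp a)

instance Functor Exp where
  fmap f (Var a)   = Var (f a)
  fmap f (App l r) = App (fmap f l) (fmap f r)

instance Applicative Exp where
  pure  = Var
  (<*>) = ap

instance Monad Exp where
  Var a   >>= f = f a                       -- replace the variable
  App l r >>= f = App (l >>= f) (r >>= f)   -- push substitution down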

Here I'm going to take a more coupled approach.

To have a type system with enough complexity to be worth examining, I'll adapt Dan Doel's UPTS, which is a pure type system with universe polymorphism. I won't finish the implementation here, but from where we get to, it should be obvious how to finish the job.

(more...)

As requested, here are the slides from Dan Doel's excellent presentation on Homotopy and Directed Type Theory from this past Monday's Boston Haskell.

Ok, I decided to take a step back from my flawed approach in the last post and play with the idea of power series of functors from a different perspective.

I dusted off my copy of Herbert Wilf's generatingfunctionology and switched goals to try to see some well-known recursive functors or species as formal power series. It appears that we can pick out a few things about the generating functions of polynomial functors.

As an example:

 
Maybe x = 1 + x
 

Ok. We're done. Thank you very much. I'll be here all week. Try the veal...

For a more serious example, the formal power series for the list [x] is just a geometric series:
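
Sketching the standard computation (a list being either empty or an element paired with another list):

[x] = 1 + x · [x]
    = 1 / (1 − x)
    = 1 + x + x² + x³ + ...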

(more...)

The post below will only compile on a version of GHC >= 6.9, since it uses type families.

(more...)

Recently Eric Kidd and Dan Piponi have used a bit of type hackery by Oleg Kiselyov and -fno-implicit-prelude to build some interesting restricted monads, like the Wadler Set and Bag monads.

There is another interesting monad variation - a parameterized monad - where the monad carries around an additional parameter at the type level, such as a type-level set of effects. One really good example of this is the separation logic monad in Hoare Type Theory. The pre- and post-conditions can be viewed as the parameter carried around on that monad. Wadler and Thiemann, Jean-Christophe Filliâtre, and others have explored this notion for encoding effects.
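
To give the flavor, here is a minimal sketch of such an interface (PMonad and PState are names of my own choosing, not the actual Hoare Type Theory presentation); the extra indices thread the pre- and post-conditions through bind:

class PMonad m where
  preturn :: a -> m i i a
  pbind   :: m i j a -> (a -> m j k b) -> m i k b

-- a state monad whose state type can change, with the change tracked
-- in the indices
newtype PState i j a = PState { runPState :: i -> (a, j) }

instance PMonad PState where
  preturn a = PState (\s -> (a, s))
  pbind (PState m) f = PState (\s -> let (a, s') = m s
                                     in runPState (f a) s')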

(more...)

Was reading Castagna, Ghelli, and Longo's 1995 paper on "A Calculus for Overloaded Functions with Subtyping" today and in it they have to jump through some hoops to index their '&' types to keep them well behaved under β-reduction.

It seems to me, at least from my back-of-the-envelope scribblings, that if you CPS transform the calculus first, the main technical innovation (overloaded functions using the tighter run-time type information) remains intact, but the need for this technical trick goes away. In this case you know what the reduction will evaluate to regardless of call-by-value or call-by-need (just bottom), and if the specification changes during evaluation it is still sound, so there is no need for an index.

 \inference{\Gamma \vdash M:W_1 \leq \lbrace\neg U_i\rbrace_{i\leq(n-1)} & \Gamma \vdash N : W_2 \leq \neg U_n}{\Gamma \vdash (M \binampersand N) : \lbrace \neg U_i \rbrace_{i \leq n }}[$\lbrace\rbrace$-I]

 \inference{\Gamma \vdash M : \lbrace \neg U_i \rbrace_{i \in I} & \Gamma \vdash N : U & U_j = \min_{i \in I} \lbrace U_i \vert U \leq U_i \rbrace } {\Gamma \vdash M \bullet N : \perp }[$\lbrace\rbrace$-E]

The above would then require explicit continuations and might interfere with rederiving tupling from the overloading mechanism alone, but it seems to eliminate some of the barriers they mention to the higher-order case. However, I'm not convinced it is a net win regardless, because it would require a notion of typecase.