It is often said that Foldable is effectively the toList class. However, this turns out to be wrong. The real fundamental member of Foldable is foldMap (which should look suspiciously like traverse, incidentally). To understand exactly why this is, it helps to understand another surprising fact: lists are not free monoids in Haskell.
This latter fact can be seen relatively easily by considering another list-like type:
data SL a = Empty | SL a :> a

instance Monoid (SL a) where
  mempty = Empty
  mappend ys Empty     = ys
  mappend ys (xs :> x) = (mappend ys xs) :> x

single :: a -> SL a
single x = Empty :> x
So, we have a type SL a of snoc lists, which are a monoid, and a function that embeds a into SL a. If (ordinary) lists were the free monoid, there would be a unique monoid homomorphism from lists to snoc lists. Such a homomorphism (call it h) would have the following properties:
h [] = Empty
h (xs <> ys) = h xs <> h ys
h [x] = single x
And in fact, this (together with some general facts about Haskell functions) should be enough to define h for our purposes (or any purposes, really). So, let's consider its behavior on two values:
h [1] = single 1

h [1,1..] = h ([1] <> [1,1..])   -- [1,1..] is an infinite list of 1s
          = h [1] <> h [1,1..]
This second equation can tell us what the value of h is at this infinite value, since we can consider it the definition of a possibly infinite value:
x = h [1] <> x
x = fix (single 1 <>)

h [1,1..] = x
(single 1 <>) is a strict function, so the fixed point theorem tells us that x = ⊥.
This is a problem, though. Considering some additional equations:
[1,1..] <> [n] = [1,1..]   -- true for all n

h [1,1..] = ⊥

h ([1,1..] <> [1]) = h [1,1..] <> h [1]
                   = ⊥ <> single 1
                   = ⊥ :> 1
                   ≠ ⊥
So, our requirements for h are contradictory, and no such homomorphism can exist.
The issue is that Haskell types are domains. They contain these extra partially defined values and infinite values. The monoid structure on (cons) lists has infinite lists absorbing all right-hand sides, while the snoc lists are just the opposite.
This also means that finite lists (or any method of implementing finite sequences) are not free monoids in Haskell. They, as domains, still contain the additional bottom element, and it absorbs all other elements, which is incorrect behavior for the free monoid:
pure x <> ⊥ = ⊥

h ⊥ = ⊥

h (pure x <> ⊥) = [x] <> h ⊥
                = [x] ++ ⊥
                = x : ⊥
                ≠ ⊥
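That x : ⊥ genuinely differs from ⊥ is observable in GHC. A small self-contained sketch, using the exception thrown by undefined as a stand-in for ⊥:

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- [x] ++ ⊥ is not ⊥: its head is perfectly defined...
  print (head ([1] ++ undefined :: [Int]))
  -- ...whereas forcing ⊥ itself blows up, which we catch to observe it.
  r <- try (evaluate (head (undefined :: [Int]))) :: IO (Either SomeException Int)
  putStrLn (either (const "bottom") show r)
```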
So, what is the free monoid? In a sense, it can't be written down at all in Haskell, because we cannot enforce value-level equations, and because we don't have quotients. But, if conventions are good enough, there is a way. First, suppose we have a free monoid type FM a. Then for any other monoid m and embedding a -> m, there must be a monoid homomorphism from FM a to m. We can model this as a Haskell type:
forall a m. Monoid m => (a -> m) -> FM a -> m
Where we consider the Monoid m constraint to be enforcing that m actually has valid monoid structure. Now, a trick is to recognize that this sort of universal property can be used to define types in Haskell (or, GHC at least), due to polymorphic types being first class; we just rearrange the arguments and quantifiers, and take FM a to be the polymorphic type:
newtype FM a = FM { unFM :: forall m. Monoid m => (a -> m) -> m }
Types defined like this are automatically universal in the right sense. [1] The only thing we have to check is that FM a is actually a monoid over a. But that turns out to be easily witnessed:
embed :: a -> FM a
embed x = FM $ \k -> k x

instance Monoid (FM a) where
  mempty = FM $ \_ -> mempty
  mappend (FM e1) (FM e2) = FM $ \k -> e1 k <> e2 k
Demonstrating that the above is a proper monoid delegates to instances of Monoid being proper monoids. So as long as we trust that convention, we have a free monoid.
However, one might wonder what a free monoid would look like as something closer to a traditional data type. To construct that, first ignore the required equations, and consider only the generators; we get:
data FMG a = None | Single a | FMG a :<> FMG a
Now, the proper FM a is the quotient of this by the equations:
None :<> x = x = x :<> None
x :<> (y :<> z) = (x :<> y) :<> z
One way of mimicking this in Haskell is to hide the implementation in a module, and only allow elimination into Monoids (again, using the convention that Monoid ensures actual monoid structure) using the function:
unFMG :: forall a m. Monoid m => FMG a -> (a -> m) -> m
unFMG None _       = mempty
unFMG (Single x) k = k x
unFMG (x :<> y) k  = unFMG x k <> unFMG y k
This is actually how quotients can be thought of in richer languages; the quotient does not eliminate any of the generated structure internally, it just restricts the way in which the values can be consumed. Those richer languages just allow us to prove equations, and enforce properties by proof obligations, rather than conventions and structure hiding. Also, one should note that the above should look pretty similar to our encoding of FM a using universal quantification earlier.
Now, one might look at the above and have some objections. For one, we'd normally think that the quotient of the above type is just [a]. Second, it seems like the type is revealing something about the associativity of the operations, because defining recursive values via left nesting is different from right nesting, and this difference is observable by extracting into different monoids. But aren't monoids supposed to remove associativity as a concern? For instance:
ones1 = embed 1 <> ones1
ones2 = ones2 <> embed 1
Shouldn't we be able to prove these are the same, because of an argument like:
ones1 = embed 1 <> (embed 1 <> ...)
      ... reassociate ...
      = (... <> embed 1) <> embed 1
      = ones2
The answer is that the equation we have only specifies the behavior of associating three values:
x <> (y <> z) = (x <> y) <> z
And while this is sufficient to nail down the behavior of finite values, and finitary reassociating, it does not tell us that infinitary reassociating yields the same value back. And the "... reassociate ..." step in the argument above was decidedly infinitary. And while the rules tell us that we can peel any finite number of copies of embed 1 to the front of ones1 or the end of ones2, it does not tell us that ones1 = ones2. And in fact it is vital for FM a to have distinct values for these two things; it is what makes it the free monoid when we're dealing with domains of lazy values.
Finally, we can come back to Foldable. If we look at foldMap:
foldMap :: (Foldable f, Monoid m) => (a -> m) -> f a -> m
we can rearrange things a bit, and get the type:
Foldable f => f a -> (forall m. Monoid m => (a -> m) -> m)
And thus, the most fundamental operation of Foldable is not toList, but toFreeMonoid, and lists are not free monoids in Haskell.
[1]: What we are doing here is noting that (co)limits are objects that internalize natural transformations, but the natural transformations expressible by quantification in GHC are already automatically internalized using quantifiers. However, one has to be careful that the quantifiers are actually enforcing the relevant naturality conditions. In many simple cases they are.
About 6 months ago I had an opportunity to play with this approach in earnest, and realized we can speed it up a great deal. This has kept coming up in conversation ever since, so I've decided to write up an article here.
In my bound library I exploit the fact that monads are about substitution to make a monad transformer that manages substitution for me.
Here I'm going to take a more coupled approach.
To have a type system with enough complexity to be worth examining, I'll adapt Dan Doel's UPTS, which is a pure type system with universe polymorphism. I won't finish the implementation here, but from where we end up it should be obvious how to finish the job.
Unlike Axelsson and Claessen I'm not going to bother to abstract over my name representation.
To avoid losing the original name from the source, we'll just track names as strings with an integer counting the number of times it has been 'primed'. The name is purely for expository purposes; the real variable identifier is the number. We'll follow the Axelsson and Claessen convention of having the identifier assigned to each binder be larger than any one bound inside of it. If you don't need the original source names you can cull them from the representation, but they can be useful if you are representing a syntax tree for something you parsed and/or that you plan to pretty print later.
data Name = Name String Int
  deriving (Show,Read)

hint :: Name -> String
hint (Name n _) = n

nameId :: Name -> Int
nameId (Name _ i) = i

instance Eq Name where
  (==) = (==) `on` nameId

instance Ord Name where
  compare = compare `on` nameId

prime :: String -> Int -> Name
prime n i = Name n (i + 1)
So what is the language I want to work with?
type Level = Int

data Constant
  = Level
  | LevelLiteral {-# UNPACK #-} !Level
  | Omega
  deriving (Eq,Ord,Show,Read,Typeable)

data Term a
  = Free a
  | Bound {-# UNPACK #-} !Name
  | Constant !Constant
  | Term a :+ {-# UNPACK #-} !Level
  | Max [Term a]
  | Type !(Term a)
  | Lam   {-# UNPACK #-} !Name !(Term a) !(Term a)
  | Pi    {-# UNPACK #-} !Name !(Term a) !(Term a)
  | Sigma {-# UNPACK #-} !Name !(Term a) !(Term a)
  | App !(Term a) !(Term a)
  | Fst !(Term a)
  | Snd !(Term a)
  | Pair !(Term a) !(Term a) !(Term a)
  deriving (Show,Read,Eq,Ord,Functor,Foldable,Traversable,Typeable)
That is perhaps a bit paranoid about remaining strict, but it seemed like a good idea at the time.
We can define capture avoiding substitution on terms:
subst :: Eq a => a -> Term a -> Term a -> Term a
subst a x y = y >>= \a' -> if a == a' then x else return a'
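To see why (>>=) is exactly substitution on free variables, here is a minimal, self-contained sketch with a hypothetical two-constructor expression type (not the full Term above):

```haskell
-- A tiny syntax tree: variables and application only.
data Exp a = V a | App (Exp a) (Exp a)
  deriving (Show, Eq)

instance Functor Exp where
  fmap f (V a)     = V (f a)
  fmap f (App x y) = App (fmap f x) (fmap f y)

instance Applicative Exp where
  pure = V
  f <*> x = f >>= \g -> fmap g x

-- (>>=) replaces each variable with a whole subtree.
instance Monad Exp where
  V a     >>= f = f a
  App x y >>= f = App (x >>= f) (y >>= f)

-- Substitution is just bind with a one-variable environment.
subst :: Eq a => a -> Exp a -> Exp a -> Exp a
subst a x y = y >>= \a' -> if a == a' then x else return a'

main :: IO ()
main = print (subst "x" (App (V "y") (V "z")) (App (V "x") (V "w")))
```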
Now we finally need to implement Axelsson and Claessen's circular programming trick. Here we'll abstract over terms that allow us to find the highest bound value within them:
class Bindable t where
  bound :: t -> Int
and instantiate it for our Term type:
instance Bindable (Term a) where
  bound Free{}     = 0
  bound Bound{}    = 0 -- intentional!
  bound Constant{} = 0
  bound (a :+ _)      = bound a
  bound (Max xs)      = foldr (\a r -> bound a `max` r) 0 xs
  bound (Type t)      = bound t
  bound (Lam b t _)   = nameId b `max` bound t
  bound (Pi b t _)    = nameId b `max` bound t
  bound (Sigma b t _) = nameId b `max` bound t
  bound (App x y)     = bound x `max` bound y
  bound (Fst t)       = bound t
  bound (Snd t)       = bound t
  bound (Pair t x y)  = bound t `max` bound x `max` bound y
As in the original pearl we avoid traversing into the body of the binders, hence the _'s in the code above.
Now we can abstract over the pattern used to create a binder in the functional pearl, since we have multiple binder types in this syntax tree, and the code would get repetitive.
binder :: Bindable t => (Name -> t) -> (Name -> t -> r) -> String -> (t -> t) -> r
binder bd c n e = c b body where
  body = e (bd b)
  b = prime n (bound body)

lam, pi, sigma :: String -> Term a -> (Term a -> Term a) -> Term a
lam s t   = binder Bound (`Lam` t) s
pi s t    = binder Bound (`Pi` t) s
sigma s t = binder Bound (`Sigma` t) s
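The knot-tying in binder is subtle, so here is a miniature, self-contained version of the same trick for a hypothetical untyped lambda calculus: the binder's identifier is computed from the very body that mentions it, and laziness makes the circular definition terminate.

```haskell
-- A tiny term type, standing in for the full Term above.
data T = Var Int | Lam Int T | App T T
  deriving (Show, Eq)

bound :: T -> Int
bound (Var _)   = 0          -- occurrences don't count (as in the pearl)
bound (Lam b _) = b          -- a binder's id dominates everything inside it
bound (App x y) = bound x `max` bound y

-- Tie the knot: the binder's id is one more than anything bound in the
-- body, while the body is built from a variable carrying that very id.
lam :: (T -> T) -> T
lam e = Lam b body where
  body = e (Var b)
  b    = bound body + 1

main :: IO ()
main = print (lam (\x -> lam (\y -> App x y)))
```

The outer binder receives a larger identifier than the inner one, matching the Axelsson and Claessen convention.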
We may not always want to give names to the variables we capture, so let's define:
lam_, pi_, sigma_ :: Term a -> (Term a -> Term a) -> Term a
lam_   = lam "_"
pi_    = pi "_"
sigma_ = sigma "_"
Now, here's the interesting part. The problem with Axelsson and Claessen's original trick is that every substitution is being handled separately. This means that if you were to write a monad for doing substitution with it, it'd actually be quite slow. You have to walk the syntax tree over and over and over.
We can fuse these together by making a single pass:
instantiate :: Name -> t -> IntMap t -> IntMap t
instantiate = IntMap.insert . nameId

rebind :: IntMap (Term b) -> Term a -> (a -> Term b) -> Term b
rebind env xs0 f = go xs0 where
  go = \case
    Free a      -> f a
    Bound b     -> env IntMap.! nameId b
    Constant c  -> Constant c
    m :+ n      -> go m :+ n
    Type t      -> Type (go t)
    Max xs      -> Max (fmap go xs)
    Lam b t e   -> lam   (hint b) (go t) $ \v -> rebind (instantiate b v env) e f
    Pi b t e    -> pi    (hint b) (go t) $ \v -> rebind (instantiate b v env) e f
    Sigma b t e -> sigma (hint b) (go t) $ \v -> rebind (instantiate b v env) e f
    App x y     -> App (go x) (go y)
    Fst x       -> Fst (go x)
    Snd x       -> Snd (go x)
    Pair t x y  -> Pair (go t) (go x) (go y)
Note that the Lam, Pi and Sigma cases just extend the current environment.
With that, we can now upgrade the pearl's encoding to allow for an actual Monad in the same sense as bound.
instance Applicative Term where
  pure = Free
  (<*>) = ap

instance Monad Term where
  return = Free
  (>>=) = rebind IntMap.empty
To show that we can work with this syntax tree representation, let's write an evaluator from it to weak head normal form:
First we'll need some helpers:
apply :: Term a -> [Term a] -> Term a
apply = foldl App

rwhnf :: IntMap (Term a) -> [Term a] -> Term a -> Term a
rwhnf env stk     (App f x)   = rwhnf env (rebind env x Free:stk) f
rwhnf env (x:stk) (Lam b _ e) = rwhnf (instantiate b x env) stk e
rwhnf env stk     (Fst e)     = case rwhnf env [] e of
  Pair _ e' _ -> rwhnf env stk e'
  e'          -> Fst e'
rwhnf env stk     (Snd e)     = case rwhnf env [] e of
  Pair _ _ e' -> rwhnf env stk e'
  e'          -> Snd e'
rwhnf env stk e = apply (rebind env e Free) stk
Then we can start off the whnf by calling our helper with an initial starting environment:
whnf :: Term a -> Term a
whnf = rwhnf IntMap.empty []
So what have we given up? Well, bound automatically lets you compare terms for alpha equivalence by quotienting out the placement of "F" terms in the syntax tree. Here we have a problem in that the identifiers we get assigned aren't necessarily canonical.
But we can get the same identifiers out by just using the monad above:
alphaEq :: Eq a => Term a -> Term a -> Bool
alphaEq = (==) `on` liftM id
It makes me a bit uncomfortable that our monad is only up to alpha equivalence and that liftM swaps out the identifiers used throughout the entire syntax tree, and we've also lost the ironclad protection against exotic terms.
But overall, this is a much faster version of Axelsson and Claessen's trick and it can be used as a drop-in replacement for something like bound in many cases, and unlike bound, it lets you use HOAS-style syntax for constructing lam, pi and sigma terms.
With pattern synonyms you can prevent the user from doing bad things as well. Once 7.10 ships you'd be able to use a bidirectional pattern synonym for Pi, Sigma and Lam to hide the real constructors behind. I'm not yet sure of the "best practices" in this area.
Here's the code all in one place:
Happy Holidays,
-Edward
You’ve recently entered the world of strongly typed functional programming, and you’ve decided it is great. You’ve written a program or two or a library or two, and you’re getting the hang of it. You hop on IRC and hear new words and ideas every day. There are always new concepts to learn, new libraries to explore, new ways to refactor your code, new typeclasses to make instances of.
Now, you’re a social person, and you want to go forth and share all the great things you’ve learned. And you have learned enough to distinguish some true statements from some false statements, and you want to go and slay all the false statements in the world.
Is this really what you want to do? Do you want to help people, do you want to teach people new wonderful things? Do you want to share the things that excite you? Or do you want to feel better about yourself, confirm that you are programming better, confirm that you are smarter and know more, reassure yourself that your adherence to a niche language is ok by striking out against the mainstream? Of course, you want to do the former. But a part of you probably secretly wants to do the latter, because in my experience that part is in all of us. It is our ego, and it drives us to great things, but it also can hold us back, make us act like jerks, and, worst of all, stand in the way of communicating with others about what we truly care about.
Haskell wasn’t built on great ideas, although it has those. It was built on a culture of how ideas are treated. It was not built on slaying others’ dragons, but on finding our own way; not tearing down rotten ideas (no matter how rotten) but showing by example how we didn’t need those ideas after all.
In functional programming, our proofs are not by contradiction, but by construction. If you want to teach functional programming, or preach functional programming, or just to even have productive discussions as we all build libraries and projects together, it will serve you well to learn that ethic.
You know better than the next developer, or so you think. This is because of something you have learned. So how do you help them want to learn it too? You do not tell them this is a language for smart people. You do not tell them you are smart because you use this language. You tell them that types are for fallible people, like we all are. They help us reason and catch our mistakes, because while software has grown more complex, we’re still stuck with the same old brains. If they tell you they don’t need types to catch errors, tell them that they must be much smarter than you, because you sure do. But even more, tell them that all the brainpower they use to not need types could turn into even greater, bigger, and more creative ideas if they let the compiler help them.
This is not a language for clever people, although there are clever things that can be done in this language. It is a language for simple things and clever things alike, and sometimes we want to be simple, and sometimes we want to be clever. But we don’t give bonus points for being clever. Sometimes, it’s just fun, like solving a crossword puzzle or playing a tricky Bach prelude, or learning a tango. We want to keep simple things simple so that tricky things are possible.
It is not a language that is “more mathematical” or “for math” or “about math”. Yes, in a deep formal sense, programming is math. But when someone objects to this, this is not because they are a dumb person, a bad person, or a malicious person. They object because they have had a bad notion of math foisted on them. “Math” is the thing that people wield over them to tell them they are not good enough, that they cannot learn things, that they don’t have the mindset for it. That’s a dirty lie. Math is not calculation — that’s what computers are for. Nor is math just abstract symbols. Nor is math a prerequisite for Haskell. If anything, Haskell might be what makes somebody find math interesting at all. Our equation should not be that math is hard, and so programming is hard. Rather, it should be that programming can be fun, and this means that math can be fun too. Some may object that programming is not only math, because it is engineering as well, and creativity, and practical tradeoffs. But, surprisingly, these are also elements of the practice of math, if not the textbooks we are given.
I have known great Haskell programmers, and even great computer scientists who know only a little linear algebra maybe, or never bothered to pick up category theory. You don’t need that stuff to be a great Haskell programmer. It might be one way. The only thing you need category theory for is to take great categorical and mathematical concepts from the world and import them back to programming, and translate them along the way so that others don’t need to make the same journey you did. And you don’t even need to do that, if you have patience, because somebody else will come along and do it for you, eventually.
The most important thing, though not hardest, about teaching and spreading knowledge is to emphasize that this is for everyone. Nobody is too young, too inexperienced, too old, too set in their ways, too excitable, insufficiently mathematical, etc. Believe in everyone, attack nobody, even the trolliest.* Attacking somebody builds a culture of sniping and argumentativeness. It spreads to the second trolliest, and so forth, and then eventually to an innocent bystander who just says the wrong thing to spark bad memories of the last big argument.
The hardest thing, and the second most important, is to put aside your pride. If you want to teach people, you have to empathize with how they think, and also with how they feel. If your primary goal is to spread knowledge, then you must be relentlessly self-critical of anything you do or say that gets in the way of that. And you don’t get to judge that — others do. And you must just believe them. I told you this was hard. So if somebody finds you offputting, that’s your fault. If you say something and somebody is hurt or takes offense, it is not their fault for being upset, or feeling bad. This is not about what is abstractly hurtful in a cosmic sense; it is about the fact that you have failed, concretely, to communicate as you desired. So accept the criticism, apologize for giving offense (not just for having upset someone but also for what you did to hurt them), and attempt to learn why they feel how they feel, for next time.
Note that if you have made somebody feel crummy, they may not be in a mood to explain why or how, because their opinion of you has already plummeted. So don’t declare that they must or should explain themselves to you, although you may politely ask. Remember that knowledge does not stand above human behavior. Often, you don't need to know exactly why a person feels the way they do, only that they do, so you can respect that. If you find yourself demanding explanations, ask yourself, if you knew this thing, would that change your behavior? How? If not, then learn to let it go.
Remember also that they were put off by your actions, not by your existence. It is easy to miss this distinction and react defensively. "Fight-or-flight" stands in the way of clear thinking and your ability to empathize; try taking a breath and maybe a walk until the adrenaline isn't derailing your true intentions.
Will this leave you satisfied? That depends. If your goal is to understand everything and have everybody agree with regards to everything that is in some sense objectively true, it will not. If your goal is to have the widest, nicest, most diverse, and most fun Haskell community possible, and to interact in an atmosphere of mutual respect and consideration, then it is the only thing that will leave you satisfied.
If you make even the most modest (to your mind) mistake, be it in social interaction or technical detail, be quick to apologize and retract, and do so freely. What is there to lose? Only your pride. Who keeps track? Only you. What is there to gain? Integrity, and ultimately that integrity will feel far more fulfilling than the cheap passing thrills of cutting somebody else down or deflecting their concerns.
Sometimes it may be, for whatever reason, that somebody doesn’t want to talk to you, because at some point your conversation turned into an argument. Maybe they did it, maybe you did it, maybe you did it together. It doesn’t matter, learn to walk away. Learn from the experience how to communicate better, how to avoid that pattern, how to always be the more positive, more friendly, more forward-looking. Take satisfaction in the effort in that. Don’t talk about them behind their back, because that will only fuel your own bad impulses. Instead, think about how you can change.
Your self-esteem doesn’t need your help. You may feel you need to prove yourself, but you don't. Other people, in general, have better things to do with their time than judge you, even when you may sometimes feel otherwise. You know you’re talented, that you have learned things, and built things, and that this will be recognized in time. Nobody else wants to hear it from you, and the more they hear it, the less they will believe it, and the more it will distract from what you really want, which is not to feed your ego, not to be great, but to accomplish something great, or even just to find others to share something great with. In fact, if anyone's self-esteem should be cared for, it is that of the people you are talking to. The more confident they are in their capacity and their worth, the more willing they will be to learn new things, and to acknowledge that their knowledge, like all of ours, is limited and partial. You must believe in yourself to be willing to learn new things, and if you want to cultivate more learners, you must cultivate that self-belief in others.
Knowledge is not imposing. Knowledge is fun. Anyone, given time and inclination, can acquire it. Don’t only lecture, but continue to learn, because there is always much more than you know. (And if there wasn’t, wow, that would be depressing, because what would there be to learn next?) Learn to value all opinions, because they all come from experiences, and all those experiences have something to teach us. Dynamic typing advocates have brought us great leaps in JIT techniques. If you’re interested in certain numerical optimizations, you need to turn to work pioneered in C++ or Fortran. Like you, I would rather write in Haskell. But it is not just the tools that matter but the ideas, and you will find they come from everywhere.
In fact, we have so much to learn that we direct our learning by setting up barriers — declaring certain tools, fields, languages, or communities not worth our time. This isn’t because they have nothing to offer, but it is a crutch for us to shortcut evaluating too many options all at once. It is fine, and in fact necessary, to narrow the scope of your knowledge to increase its depth. But be glad that others are charting other paths! Who knows what they will bring back from those explorations.
If somebody is chatting about programming on the internet, they’re already ahead of the pack, already interested in craft and knowledge. You may not share their opinions, but you have things to learn from one another, always. Maybe the time and place aren’t right to share ideas and go over disputes. That’s ok. There will be another time and place, or maybe there won’t be. There is a big internet full of people, and you don’t need to be everybody’s friend or everybody’s mentor. You should just avoid being anybody’s enemy, because your time and theirs is too precious to waste it on hard feelings instead of learning new cool stuff.
This advice is not a one-time proposition. Every time we learn something new and want to share it, we face these issues all over again -- the desire to proclaim, to overturn received wisdom all at once -- and the worse the received wisdom, the more vehemently we want to strike out. But if we are generous listeners and attentive teachers, we not only teach better and spread more knowledge, but also learn more, and enjoy ourselves more in the process. To paraphrase Rilke’s “Letter to a Young Poet”: Knowledge is good if it has sprung from necessity. In this nature of its origin lies the judgement of it: there is no other.
Thanks to the various folks in and around the Haskell world who have helped me refine this article. I don't name you only because I don't want to imply your endorsement, or give what is still, at base, a very personal take, any particular sort of imprimatur of a broader group of people, all of whom I suspect will disagree among themselves and with me about various specifics.
*: It has been pointed out to me that this advice is not universal. Clearly there are some things that deserve more pointed responses. Bigotry, outright harassment and poisonous behavior, etc. So please read this paragraph only as it applies to talking about technical issues, not as regards to many other things, where there are people better equipped than me to give advice.
The annual CUFP workshop is a place where people can see how others are using functional programming to solve real world problems; where practitioners meet and collaborate; where language designers and users can share ideas about the future of their favorite language; and where one can learn practical techniques and approaches for putting functional programming to work.
If you have experience using functional languages in a practical setting, we invite you to submit a proposal to give a talk at the workshop. We're looking for two kinds of talks:
Experience reports are typically 25 minutes long, and aim to inform participants about how functional programming plays out in real-world applications, focusing especially on lessons learned and insights gained. Experience reports don't need to be highly technical; reflections on the commercial, management, or software engineering aspects are, if anything, more important.
Technical talks are also 25 minutes long, and should focus on teaching the audience something about a particular technique or methodology, from the point of view of someone who has seen it play out in practice. These talks could cover anything from techniques for building functional concurrent applications, to managing dynamic reconfigurations, to design recipes for using types effectively in large-scale applications. While these talks will often be based on a particular language, they should be accessible to a broad range of programmers.
We strongly encourage submissions from people in communities that are underrepresented in functional programming, including but not limited to women; people of color; people in gender, sexual and romantic minorities; people with disabilities; people residing in Asia, Africa, or Latin America; and people who have never presented at a conference before. We recognize that inclusion is an important part of our mission to promote functional programming. So that CUFP can be a safe environment in which participants openly exchange ideas, we abide by the SIGPLAN Conference Anti-Harassment Policy.
If you are interested in offering a talk, or nominating someone to do so, please submit your presentation before 27 June 2014 via the CUFP 2014 Presentation Submission Form.
You do not need to submit a paper, just a short proposal for your talk! There will be a short scribe's report of the presentations and discussions but not of the details of individual talks, as the meeting is intended to be more a discussion forum than a technical interchange.
Nevertheless, presentations will be videotaped and presenters will be expected to sign an ACM copyright release form.
Note that we will need all presenters to register for the CUFP workshop and travel to Gothenburg at their own expense.
For more information on CUFP, including videos of presentations from previous years, take a look at the CUFP website at cufp.org. Acceptance and rejection letters will be sent out by July 16th.
Focus on the interesting bits: Think about what will distinguish your talk, and what will engage the audience, and focus there. There are a number of places to look for those interesting bits.
Let's do some counting exercises. Product Identity Identity holds exactly two things. It is therefore isomorphic to ((->) Bool), or if we prefer, ((->) (Either () ())). That is to say that a pair that holds two values of type a is the same as a function that takes a two-valued type and yields a value of type a. A product of more functors in turn is isomorphic to the reader of the sum of each of the datatypes that "represent" them. E.g. Product (Product Identity Identity) (Product (Const ()) Identity) is isomorphic to ((->) (Either (Either () ()) ())), i.e. a reader from a data type with three possible inhabitants. In making this move we took Product to Either -- multiplication to sum. We can pull a similar trick with Compose. Compose (Product Identity Identity) (Product Identity Identity) goes to ((->) (Either () (), Either () ())). So again we took Product to a sum type, but now we took Compose to a pair -- a product type! The intuition is that composition multiplies the possibilities of spaces in each nested functor.
Hmm.. products go to sums, composition goes to multiplication, etc. This should remind us of something -- these rules are exactly the rules for working with exponentials. x^n * x^m = x^(n + m). (x^n)^m = x^(n*m). x^0 = 1, x^1 = x.
Seen from the right standpoint, this isn't surprising at all, but almost inevitable. The functors we're describing are known as "representable," a term which derives from category theory. (See appendix on representable functors below).
In Haskell-land, a "representable functor" is just any functor isomorphic to the reader functor ((->) a)
for some appropriate a. Now if we think back to our algebraic representations of data types, we call the arrow type constructor an exponential. We can "count" a -> x
as x^a, since e.g. there are 3^2 distinct functions that inhabit the type 2 -> 3. The intuition for this is that for each input we pick one of the possible results, so as the number of inputs goes up by one, the number of functions goes up by multiplying through by the set of possible results. 1 -> 3 = 3, 2 -> 3 = 3 * 3, (n + 1) -> 3 = 3 * (n -> 3).
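We can sanity-check this count mechanically by enumerating the graphs of all functions from a two-valued type to a three-valued type (a quick sketch; allFns is my name for it):

```haskell
-- Each function Bool -> Ordering is determined by its graph: the list of
-- outputs at [False, True]. 'sequence' over the list monad enumerates every
-- such graph, so we expect 3^2 = 9 of them.
allFns :: [[Ordering]]
allFns = sequence [ [LT, EQ, GT] | _ <- [False, True] ]
```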
Hence, if we "represent" our functors by exponentials, then we can work with them directly as exponentials as well, with all the usual rules. Edward Kmett has a library encoding representable functors in Haskell.
Meanwhile, Peter Hancock prefers to call such functors "Naperian" after John Napier, inventor of the logarithm (See also here). Why Naperian? Because if our functors are isomorphic to exponentials, then we can take their logs! And that brings us back to the initial discussion of type mathematics. We have some functor F, and claim that it is isomorphic to -^R for some concrete data type R. Well, this means that R is the logarithm of F. E.g. (R -> a, S -> a) =~ Either R S -> a
, which is to say that if log F = R and log G =~ S, then log (F * G) = log F + log G. Similarly, for any other data type n, again with log F = R, we have n -> F a =~ n -> R -> a =~ (n * R) -> a
, which is to say that log (F^n) =~ n * log F.
This gives us one intuition for why the sum functor is not generally representable -- it is very difficult to decompose log (F + G) into some simpler compound expression of logs.
So what functors are Representable? Anything that can be seen as a fixed shape with some index. Pairs, fixed-size vectors, fixed-size matrices, any nesting of fixed vectors and matrices. But also infinite structures of regular shape! However, not things whose shape can vary -- not lists, not sums. Trees of fixed depth or infinite binary trees, therefore, but not trees of arbitrary depth or with ragged structure, etc.
Representable functors turn out to be extremely powerful tools. Once we know a functor is representable, we know exactly what its applicative instance must be, and that its applicative instance will be "zippy" -- i.e. acting pointwise across the structure. We also know that it has a monad instance! And, unfortunately, that this monad instance is typically fairly useless (in that it is also "zippy" -- i.e. the monad instance on a pair just acts on the two elements pointwise, without ever allowing anything in the first slot to affect anything in the second slot, etc.). But we know more than that. We know that a representable functor, by virtue of being a reader in disguise, cannot have effects that migrate outwards. So any two actions in a representable functor are commutative. And more than that, they are entirely independent.
This means that all representable functors are "distributive"! Given any functor f and any data type r, we have
distributeReader :: Functor f => f (r -> a) -> (r -> f a)
distributeReader fra = \r -> fmap ($ r) fra
That is to say, given an arrow "inside" a functor, we can always pull the arrow out, and "distribute" application across the contents of the functor. A list of functions from Int -> Int
becomes a single function from Int
to a list of Int
, etc. More generally, since all representable functors are isomorphic to reader, given g representable, and f any functor, then we have: distribute :: (Functor f, Representable g) => f (g a) -> g (f a).
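For instance, using distributeReader as defined above (repeated here so the sketch stands alone), a list of Int -> Int functions becomes a single function from Int to a list of Ints:

```haskell
distributeReader :: Functor f => f (r -> a) -> (r -> f a)
distributeReader fra = \r -> fmap ($ r) fra

-- Pull a list of functions out into one function yielding a list.
example :: Int -> [Int]
example = distributeReader [(+ 1), (* 2), subtract 3]
```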
This is pretty powerful sauce! And if f and g are both representable, then we get the transposition isomorphism, witnessed by flip
! That's just the beginning of the good stuff. If we take functions and "unrepresent" them back to functors (i.e. take their logs), then we can do things like move from ((->) Bool)
to pairs, etc. Since we're in a pervasively lazy language, we've just created a library for memoization! This is because we've gone from a function to a data structure we can index into, representing each possible argument to this function as a "slot" in the structure. And the laziness pays off because we only need to evaluate the contents of each slot on demand (otherwise we'd have a precomputed lookup table rather than a dynamically-evaluated memo table).
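Here is a toy version of that idea, specialized to the smallest interesting representable functor, the pair. The name memoBool is mine; libraries such as Conal Elliott's MemoTrie generalize the same trick to arbitrary representable shapes:

```haskell
-- "Unrepresent" a function Bool -> a into its log: a lazy pair indexed by
-- Bool. Each slot of the pair is computed at most once, on demand, so
-- repeated calls at the same argument reuse the earlier result.
memoBool :: (Bool -> a) -> (Bool -> a)
memoBool f = look
  where
    table = (f False, f True)  -- the memo table: one lazy slot per argument
    look False = fst table
    look True  = snd table
```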
And now suppose we take our representable functor in the form s -> a
and paired it with an "index" into that function, in the form of a concrete s
. Then we'd be able to step that s
forward or backwards and navigate around our structure of a
s. And this is precisely the Store Comonad! And this in turn gives a characterization of the lens laws.
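A minimal standalone sketch of that pairing (the full, newtype-laden version lives in Edward Kmett's comonad package as Store):

```haskell
-- A representable functor in its reader form, s -> a, together with a
-- concrete index s into it.
data Store s a = Store (s -> a) s

-- Read off the value at the current index.
extract :: Store s a -> a
extract (Store f s) = f s

-- Navigate: move the index somewhere else without touching the function.
seek :: s -> Store s a -> Store s a
seek s' (Store f _) = Store f s'
```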
What this all gives us a tiny taste of, in fact, is the tremendous power of the Yoneda lemma, which, in Haskell, is all about going between values and functions, and in fact captures the important universality and uniqueness properties that make working with representable functors tractable. A further tiny taste of Yoneda comes from a nice blog post by Conal Elliott on memoization.
Extra Credit on Sum Functors
There in fact is a log identity on sums. It goes like this:
log(a + c) = log a + log (1 + c/a)
Do you have a useful computational interpretation of this? I've got the inklings of one, but not much else.
Appendix: Notes on Representable Functors in Hask.
The way to think about this is to take some arbitrary category C, and some category that's basically Set (in our case, Hask. In fact, in our case, C is Hask too, and we're just talking about endofunctors on Hask). Now, we take some functor F : C -> Set, and some object A of C. The set of morphisms originating at A (denoted by Hom(A,-)) constitutes a functor called the "hom functor." For any object X in C, we can "plug it in" to Hom(A,-), to then get the set of all arrows from A to X. And for any morphism X -> Y in C, we can derive a morphism from Hom(A,X) to Hom(A,Y), by composition. This is equivalent to, in Haskell-land, using a function f :: x -> y
to send g :: a -> x
to a -> y
by writing "functionAToY = f . g".
So, for any A in C, we have a hom functor on C, which is C -> Set, where the elements of the resultant Set are morphisms in C. Now, we have this other arbitrary functor F, which is also C -> Set. If there is an isomorphism of functors between F and Hom(A,-), then we say F is "representable." A representable functor is thus one that can be worked with entirely as an appropriate hom functor.
]]>I was incredibly honored and I figured that if that many people (they had 30 or so registered attendees and 10 presentations) were going to spend that much time going over software that I had written, I should at least offer to show up!
I'd like to apologize for any errors in the romanization of people's names or misunderstandings I may have in the following text. My grasp of Japanese is very poor! Please feel free to send me corrections or additions!
Sadly, my boss's immediate reaction to hearing that there was a workshop in Japan about my work was to quip that "You're saying you're huge in Japan?" With him conspicuously not offering to fly me out here, I had to settle for surprising the organizers and attending via Google Hangout.
@nushio was very helpful in getting me connected, and while the speakers gave their talks I sat on the irc.freenode.net #haskell-lens channel and Google Hangout and answered questions and provided a running commentary with more details and references. Per freenode policy the fact that we were logging the channel was announced -- well, at least before things got too far underway.
Here is the IRC session log as a gist. IKEGAMI Daisuke @ikegami__ (ikeg
in the IRC log) tried to keep up a high-level running commentary in the log about what was happening in the video, which may be helpful if you are trying to follow along retroactively.
Other background chatter and material is strewn across Twitter under the #ekmett_conf hashtag and on a Japanese Twitter aggregator named Togetter.
The 1PM start time in Shibuya, Tokyo, Japan translates to midnight at the start of Easter here in Boston, which meant ~6 hours later when we reached the Q&A session, I was a bit loopy from lack of sleep, but they were incredibly polite and didn't seem to mind my long rambling responses.
Thanks to the organizers, we have video of the vast majority of the event! There was no audio for the first couple of minutes, and the recording machine lost power for the last talk and the Q&A session at the end as we ran somewhat longer than they had originally scheduled! -- And since I was attending remotely and a number of others flitted in and out over the course of the night, they were nice enough to put most of the slides and background material online.
Liyang Hu (@liyanghu) started the session off with a nicely self-contained crash course on my profunctors package, since profunctors are used fairly heavily inside the implementation of lens and machines, with a couple of detours into contravariant and bifunctors.
His presentation materials are available interactively from the new FP Complete School of Haskell. You can also watch the video recording of his talk on ustream.
This talk was followed by a much more condensed version of very similar content in Japanese by Hibino Kei (@khibino) His talk was more focused on the relationship between arrows and profunctors, and the slides are available through slideshare.
Once the necessary background material was out of the way, the talk on lens -- arguably the presentation that most of the people were there for -- came early.
@its_out_of_tune gave an incredibly dense overview of how to use the main parts of the lens package in Japanese. His slides are available online and here is a recording of his talk.
Over the course of a half hour, he was able to cram in a great cross-section of the library, including material that I hadn't been able to get to even with 4x the amount of time available during my New York talk: how to use the lens template-haskell code to automatically generate lenses for user data types, and how to use the lens Action machinery.
Next up, was my free package and the neat free-game engine that Kinoshita Fumiaki (@fumieval) built on top.
The slides were in English, though the talk and humor were very Japanese. ^_^
That said, he had some amazingly nice demos, including a live demo of his tetris clone, Monaris, which is visible about 10 minutes into the video!
@nebutalab, like me, joined the session remotely through Google Hangout, and proceeded to give a tutorial on how forward mode automatic differentiation works through my AD package.
His slides were made available before the talk and the video is available in two parts due to a technical hiccup in the middle of the recording.
I'm currently working to drastically simplify the API for ad with Alex Lang. Fortunately almost all of the material in this presentation will still be relevant to the new design.
Next up, Murayama Shohei (@yuga) gave an introduction to tables, which is a small in memory data-store that I wrote a few months back to sit on top of lens.
Video of @yuga's talk and his slides are available, which I think makes this the first public talk about this project. -_^
Yoshida Sanshiro (@halcat0x15a) gave a nice overview of the currently released version of machines including a lot of examples! I think he may have actually written more code using machines just for demonstrations than I have written using it myself.
Video of his talk is available along with his slide deck -- just tap left or right to move through the slides. He has also written a blog post documenting his early explorations of the library, and some thoughts about using it with attoparsec.
I've recently been trying to redesign machines with coworker Paul CHIUSANO @pchiusano and we've begun greatly simplifying the design of machines based on some work he has been doing in Scala, so unfortunately many of the particulars of this talk will be soon outdated, but the overall 'feel' of working with machines should be preserved across the change-over. Some of these changes can be seen in the master branch on github now.
There were 4 more sessions, but alas, I'm out of time for the moment! I'll continue this write-up with more links to the source material and my thoughts as soon as I can tomorrow!
]]>It's well known that if you have any Functor F a
, you can take its "fixpoint", creating a structure of infinitely nested Fs, like so. F (F (F (...) ) )
Since we can't have infinite types directly in Haskell, we introduce the Fix newtype:
newtype Fix f = Fix (f (Fix f))
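For example, with the usual base functor for lists (ListF, Nil, and Cons are conventional names, not something defined earlier in this post), Fix recovers ordinary lists:

```haskell
newtype Fix f = Fix (f (Fix f))  -- repeated so this sketch is self-contained

-- The base functor for lists: one layer of list structure, with the
-- recursive position abstracted out as r.
data ListF a r = Nil | Cons a r

toList :: Fix (ListF a) -> [a]
toList (Fix Nil)         = []
toList (Fix (Cons x xs)) = x : toList xs

exampleList :: Fix (ListF Int)
exampleList = Fix (Cons 1 (Fix (Cons 2 (Fix Nil))))
```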
This "wraps up" the recursion so that GHC accepts the type. Fix f
is a Fix
constructor, containing an "f" of Fix f
inside. Each in turn expands out, and so forth. Fixpoints of functors have fixpoints of functors inside 'em. And so on, and so on, ad infinitum.
(Digression: We speak of "algebraic data types" in Haskell. The "algebra" in question is an "F-algebra", and we can build up structures with fixpoints of functors, taking those functors as initial or terminal objects and generating either initial algebras or terminal coalgebras. These latter two concepts coincide in Haskell in the Fix description given above, as greatest and least fixpoints of data types in Haskell turn out to be the same thing. For more background, one can go to Wadler's "Recursive Types for Free," or Jacobs and Rutten's "Tutorial on (Co)Algebras and (Co)Induction" for starters.)
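The standard fold over such an initial algebra is the catamorphism; here is a small sketch, consuming Fix Maybe (Peano-style naturals) as a demonstration:

```haskell
newtype Fix f = Fix (f (Fix f))  -- repeated so this sketch is self-contained

-- Fold a fixpoint down one layer at a time, using an F-algebra f a -> a.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)

-- Fix Maybe is the natural numbers: Nothing is zero, Just is successor.
two :: Fix Maybe
two = Fix (Just (Fix (Just (Fix Nothing))))

toInt :: Fix Maybe -> Int
toInt = cata (maybe 0 (+ 1))
```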
The family of functors built from our friends Const, Sum, Product, and Reader (exponentiation) are known as polynomial functors. If we take the closure of these with a proper fixpoint construct (one that lets us build infinite structures), we get things that are variously known as Containers, Shapely Types, and Strictly Positive types.
One irritating thing is that the fixpoint of a functor as we've written it is no longer itself a functor. The type constructor Fix is of kind (* -> *) -> *
, which says it takes an "f" which takes one argument (e.g. "Maybe" or "Identity" or etc.) and returns a proper type (i.e. a value at the type level of kind *).
We want a fixpoint construction that gives back something of kind * -> *
— i.e. something that is a type constructor representing a functor, and not just a plain old type. The following does the trick.
newtype FixF f a = FixF (f (FixF f) a)

deriving instance (Show (f (FixF f) a)) => Show (FixF f a)
(I learned about FixF from a paper by Ralf Hinze, but I'm sure the origins go back much further).
FixF is of kind ((* -> *) -> * -> *) -> * -> *
. It takes the fixpoint of a "second-order Functor" (a Functor that sends a Functor to another Functor, i.e. an endofunctor on the functor category of hask), to recover a standard "first order Functor" back out. This sounds scary, but it isn't once you load it up in ghci and start playing with it. In fact, we've encountered second order functors just recently. Product, Sum, and Compose are all of kind (* -> *) -> (* -> *) -> * -> *
. So they all send two functors to a third functor. That means that Product Const
, Sum Identity
and Compose Maybe
are all second-order functors, and things appropriate to take our "second-order fixpoint" of.
Conceptually, "Fix f" took a value with one hole, and we filled that hole with "Fix f" so there was no room for a type parameter. Now we've got an "f" with two holes, the first of which takes a functor, and the second of which is the hole of the resulting functor.
Unlike boring old "Fix", we can write Functor and Applicative instances for "FixF", and they're about as simple and compositional as we could possibly hope.
instance Functor (f (FixF f)) => Functor (FixF f) where
    fmap f (FixF x) = FixF $ fmap f x

instance Applicative (f (FixF f)) => Applicative (FixF f) where
    pure x = FixF $ pure x
    (FixF f) <*> (FixF x) = FixF (f <*> x)
But now we run into a new problem! It seems like this "a" parameter is just hanging out there, doing basically nothing. We take our classic functors and embed them in there, and they still only have "one hole" at the value level, so don't actually have any place to put the "a" type we now introduced. For example, we can write the following:
-- FixF . Compose . Just . FixF . Compose $ Nothing
-- > FixF (Compose (Just (FixF (Compose Nothing))))

-- :t FixF (Compose (Just (FixF (Compose Nothing))))
-- > FixF (Compose Maybe) a
We now introduce one new member of our basic constructions — a second order functor that acts like "const" on the type level, taking any functor and returning Identity.
newtype Embed (f :: * -> *) a = Embed a deriving (Show)

instance Functor (Embed f) where
    fmap f (Embed x) = Embed $ f x

instance Applicative (Embed f) where
    pure x = Embed x
    (Embed f) <*> (Embed x) = Embed (f x)
Now we can actually stick functorial values into our fixpoints:
-- FixF $ Embed "hi"
-- > FixF (Embed "hi")

-- fmap (++ " world") $ FixF (Embed "hi")
-- > FixF (Embed "hi world")

-- FixF . Product (Embed "hi") .
--     FixF . Product (Embed "there") . FixF $ undefined
-- > FixF (Product (Embed "hi")
--     (FixF (Product (Embed "there")
--     (FixF *** Exception: Prelude.undefined
You may have noticed that we seem to be able to use "product" to begin a chain of nested fixpoints, but we don't seem able to stick a "Maybe" in there to stop the chain. And it seems like we're not even "fixing" where we intend to be:
-- :t FixF . Product (Embed "hi") . FixF . Product (Embed "there") . FixF $ undefined
-- > FixF (Product (Embed f)) [Char]
That's because we're applying Product, which takes and yields arguments of kind * -> *
in a context where we really want to take and yield second-order functors as arguments — things of kind (* -> *) -> * -> *
. If we had proper kind polymorphism, "Product" and "ProductF" would be able to collapse (and maybe, soon, they will). But at least in the ghc 7.4.1 that I'm working with, we have to write the same darn thing, but "up a kind".
data ProductF f g (b :: * -> *) a = ProductF (f b a) (g b a) deriving Show

instance (Functor (f b), Functor (g b)) => Functor (ProductF f g b) where
    fmap f (ProductF x y) = ProductF (fmap f x) (fmap f y)

instance (Applicative (f b), Applicative (g b)) => Applicative (ProductF f g b) where
    pure x = ProductF (pure x) (pure x)
    (ProductF f g) <*> (ProductF x y) = ProductF (f <*> x) (g <*> y)
We can now do the following properly.
yy = FixF . ProductF (Embed "foo") $ InL $ Const ()

xx = FixF . ProductF (Embed "bar") . InR .
     FixF . ProductF (Embed "baz") . InL $ Const ()
So we've recovered proper lists in Haskell, as the "second-order fixpoint" of polynomial functors. And the types look right too:
-- :t yy
-- > FixF (ProductF Embed (Sum (Const ()))) [Char]
Because we've built our Applicative instances compositionally, we have an applicative for our list construction automatically:
-- (++) <$> yy <*> xx
-- > FixF (ProductF (Embed "foobar") (InL (Const ())))
This is precisely the "ZipList" applicative instance. In fact, our applicative instances from this "functor toolkit" are all "zippy" — matching up structure where possible, and "smashing it down" where not. This is because Sum, with its associated natural transformation logic, is the only way to introduce a disjoint choice of shape. Here are some simple examples with Sum to demonstrate:
-- liftA2 (++) (InL (Identity "hi")) $
--     InR (Product (Identity " there") (Const [12::Int]))
-- > InL (Identity "hi there")

-- liftA2 (++) (InR (Identity "hi")) $
--     InL (Product (Identity " there") (Const [12::Int]))
-- > InL (Product (Identity "hi there") (Const [12]))
We're always "smashing" towards the left. So in the first case, that means throwing away half of the pair. In the second case, it means injecting Const mempty into a pair, and then operating with that.
In any case, we now have infinite and possibly infinite branching structures. And not only are they Functors, but they're also Applicatives, and in a way that's uniform and straightforward to reason about.
In the next post, we'll stop building out our vocabulary of "base" Functors (though it's not quite complete) and instead push forward on what else these functors can provide "beyond" Applicative.
]]>liftM2 foo a b
when we could instead write foo <$> a <*> b
? But we seldom use the Applicative as such — when Functor is too little, Monad is too much, but a lax monoidal functor is just right. I noticed lately a spate of proper uses of Applicative —Formlets (and their later incarnation in the reform library), OptParse-Applicative (and its competitor library CmdTheLine), and a post by Gergo Erdi on applicatives for declaring dependencies of computations. I also ran into a very similar genuine use for applicatives in working on the Panels library (part of jmacro-rpc), where I wanted to determine dependencies of a dynamically generated dataflow computation. And then, again, I stumbled into an applicative while cooking up a form validation library, which turned out to be a reinvention of the same ideas as formlets.
Given all this, it seems a post on thinking with applicatives is in order, showing how to build them up and reason about them. One nice thing about the approach we'll be taking is that it uses a "final" encoding of applicatives, rather than building up and then later interpreting a structure. This is in fact how we typically write monads (pace operational, free, etc.), but since we more often only determine our data structures are applicative after the fact, we often get some extra junk lying around (OptParse-Applicative, for example, has a GADT that I think is entirely extraneous).
So the usual throat clearing:
{-# LANGUAGE TypeOperators, MultiParamTypeClasses, FlexibleInstances,
             StandaloneDeriving, FlexibleContexts, UndecidableInstances,
             GADTs, KindSignatures, RankNTypes #-}

module Main where

import Control.Applicative hiding (Const)
import Data.Monoid hiding (Sum, Product)
import Control.Monad.Identity

instance Show a => Show (Identity a) where
    show (Identity x) = "(Identity " ++ show x ++ ")"
And now, let's start with a classic applicative, going back to the Applicative Programming With Effects paper:
data Const mo a = Const mo deriving Show

instance Functor (Const mo) where
    fmap _ (Const mo) = Const mo

instance Monoid mo => Applicative (Const mo) where
    pure _ = Const mempty
    (Const f) <*> (Const x) = Const (f <> x)
(Const
lives in transformers as the Constant
functor, or in base as Const
)
Note that Const
is not a monad. We've defined it so that its structure is independent of the `a` type. Hence if we try to write (>>=)
of type Const mo a -> (a -> Const mo b) -> Const mo b
, we'll have no way to "get out" the first `a` and feed it to our second argument.
One great thing about Applicatives is that there is no distinction between applicative transformers and applicatives themselves. This is to say that the composition of two applicatives is cleanly and naturally always also an applicative. We can capture this like so:
newtype Compose f g a = Compose (f (g a)) deriving Show

instance (Functor f, Functor g) => Functor (Compose f g) where
    fmap f (Compose x) = Compose $ (fmap . fmap) f x

instance (Applicative f, Applicative g) => Applicative (Compose f g) where
    pure = Compose . pure . pure
    (Compose f) <*> (Compose x) = Compose $ (<*>) <$> f <*> x
(Compose
also lives in transformers)
Note that Applicatives compose two ways. We can also write:
data Product f g a = Product (f a) (g a) deriving Show

instance (Functor f, Functor g) => Functor (Product f g) where
    fmap f (Product x y) = Product (fmap f x) (fmap f y)

instance (Applicative f, Applicative g) => Applicative (Product f g) where
    pure x = Product (pure x) (pure x)
    (Product f g) <*> (Product x y) = Product (f <*> x) (g <*> y)
(Product
lives in transformers as well)
This lets us now construct an extremely rich set of applicative structures from humble beginnings. For example, we can reconstruct the Writer Applicative.
type Writer mo = Product (Const mo) Identity

tell :: mo -> Writer mo ()
tell x = Product (Const x) (pure ())
-- tell [1] *> tell [2]
-- > Product (Const [1,2]) (Identity ())
Note that if we strip away the newtype noise, Writer turns into (mo,a)
which is isomorphic to the Writer monad. However, we've learned something along the way, which is that the monoidal component of Writer (as long as we stay within the rules of applicative) is entirely independent from the "identity" component. However, if we went on to write the Monad instance for our writer (by defining >>=
), we'd have to "reach in" to the identity component to grab a value to hand back to the function yielding our monoidal component. Which is to say, we would destroy this nice separation of "trace" and "computational content" afforded by simply taking the product of two Applicatives.
Now let's make things more interesting. It turns out that just as the composition of two applicatives may be a monad, so too the composition of two monads may be no stronger than an applicative!
We'll see this by introducing Maybe into the picture, for possibly failing computations.
type FailingWriter mo = Compose (Writer mo) Maybe

tellFW :: Monoid mo => mo -> FailingWriter mo ()
tellFW x = Compose (tell x *> pure (Just ()))

failFW :: Monoid mo => FailingWriter mo a
failFW = Compose (pure Nothing)
-- tellFW [1] *> tellFW [2]
-- > Compose (Product (Const [1,2]) (Identity (Just ())))

-- tellFW [1] *> failFW *> tellFW [2]
-- > Compose (Product (Const [1,2]) (Identity Nothing))
Maybe over Writer gives us the same effects we'd get in a Monad — either the entire computation fails, or we get the result and the trace. But Writer over Maybe gives us new behavior. We get the entire trace, even if some computations have failed! This structure, just like Const, cannot be given a proper Monad instance. (In fact if we take Writer over Maybe as a Monad, we get only the trace until the first point of failure).
This separation of a monoidal trace from computational effects (either entirely independent of a computation [via a product] or independent between parts of a computation [via Compose]) is the key to lots of neat tricks with applicative functors.
Next, let's look at Gergo Erdi's "Static Analysis with Applicatives" that is built using free applicatives. We can get essentially the same behavior directly from the product of a constant monad with an arbitrary effectful monad representing our ambient environment of information. As long as we constrain ourselves to only querying it with the takeEnv function, then we can either read the left side of our product to statically read dependencies, or the right side to actually utilize them.
type HasEnv k m = Product (Const [k]) m

takeEnv :: (k -> m a) -> k -> HasEnv k m a
takeEnv f x = Product (Const [x]) (f x)
If we prefer, we can capture queries of a static environment directly with the standard Reader applicative, which is just a newtype over the function arrow. There are other variants of this that perhaps come closer to exactly how Erdi's post does things, but I think this is enough to demonstrate the general idea.
data Reader r a = Reader (r -> a)

instance Functor (Reader r) where
    fmap f (Reader x) = Reader (f . x)

instance Applicative (Reader r) where
    pure x = Reader $ pure x
    (Reader f) <*> (Reader x) = Reader (f <*> x)

runReader :: (Reader r a) -> r -> a
runReader (Reader f) = f

takeEnvNew :: (env -> k -> a) -> k -> HasEnv k (Reader env) a
takeEnvNew f x = Product (Const [x]) (Reader $ flip f x)
So, what then is a full formlet? It's something that can be executed in one context as a monoid that builds a form, and in another as a parser. So the top level must be a product.
type FormletOne mo a = Product (Const mo) Identity a
Below the product, we read from an environment and perhaps get an answer. So that's reader with a maybe.
type FormletTwo mo env a = Product (Const mo) (Compose (Reader env) Maybe) a
Now if we fail, we want to have a trace of errors. So we expand out the Maybe into a product as well to get the following, which adds monoidal errors:
type FormletThree mo err env a = Product (Const mo) (Compose (Reader env) (Product (Const err) Maybe)) a
But now we get errors whether or not the parse succeeds. We want to say either the parse succeeds or we get errors. For this, we can turn to the typical Sum functor, which currently lives as Coproduct in comonad-transformers, but will hopefully be moving as Sum to the transformers library in short order.
data Sum f g a = InL (f a) | InR (g a) deriving Show

instance (Functor f, Functor g) => Functor (Sum f g) where
    fmap f (InL x) = InL (fmap f x)
    fmap f (InR y) = InR (fmap f y)
The Functor instance is straightforward for Sum, but the applicative instance is puzzling. What should "pure" do? It needs to inject into either the left or the right, so clearly we need some form of "bias" in the instance. What we really need is the capacity to "work in" one side of the sum until compelled to switch over to the other, at which point we're stuck there. If two functors, F and G are in a relationship such that we can always send f x -> g x
in a way that "respects" fmap (that is to say, such that fmap f . fToG == fToG . fmap f
), then we call this a natural transformation. The action that sends f to g is typically called "eta". (We actually want something slightly stronger, called a "monoidal natural transformation," that respects not only the functorial action fmap
but the applicative action <*>
, but we can ignore that for now).
Now we can assert that as long as there is a natural transformation between g and f, then Sum f g can be made an Applicative, like so:
class Natural f g where
    eta :: f a -> g a

instance (Applicative f, Applicative g, Natural g f) => Applicative (Sum f g) where
    pure x = InR $ pure x
    (InL f) <*> (InL x) = InL (f <*> x)
    (InR g) <*> (InR y) = InR (g <*> y)
    (InL f) <*> (InR x) = InL (f <*> eta x)
    (InR g) <*> (InL x) = InL (eta g <*> x)
The natural transformation we'll tend to use simply sends any functor to Const.
instance Monoid mo => Natural f (Const mo) where
    eta = const (Const mempty)
However, there are plenty of other natural transformations that we could potentially make use of, like so:
instance Applicative f => Natural g (Compose f g) where
    eta = Compose . pure

instance (Applicative g, Functor f) => Natural f (Compose f g) where
    eta = Compose . fmap pure

instance (Natural f g) => Natural f (Product f g) where
    eta x = Product x (eta x)

instance (Natural g f) => Natural g (Product f g) where
    eta x = Product (eta x) x

instance Natural (Product f g) f where
    eta (Product x _) = x

instance Natural (Product f g) g where
    eta (Product _ x) = x

instance Natural g f => Natural (Sum f g) f where
    eta (InL x) = x
    eta (InR y) = eta y

instance Natural Identity (Reader r) where
    eta (Identity x) = pure x
In theory, there should also be a natural transformation that can be built magically from the product of any other two natural transformations, but that will just confuse the Haskell typechecker hopelessly. This is because we know that different "paths" of typeclass choices will often be isomorphic, but the compiler has to actually pick one "canonical" composition of natural transformations to compute with, although multiple paths will typically be possible.
For similar reasons of avoiding overlap, we can't both have the terminal homomorphism that sends everything to "Const" and the initial homomorphism that sends "Identity" to anything like so:
-- instance Applicative g => Natural Identity g where
--     eta (Identity x) = pure x
We choose to keep the terminal transformation around because it is more generally useful for our purposes. As the comments below point out, it turns out that a version of "Sum" with the initial transformation baked in now lives in transformers as Lift.
In any case we can now write a proper Validation applicative:
```haskell
type Validation mo = Sum (Const mo) Identity

validationError :: Monoid mo => mo -> Validation mo a
validationError x = InL (Const x)
```
This applicative will yield either a single result, or an accumulation of monoidal errors. It exists on hackage in the Validation package.
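Putting the pieces above together, here is a self-contained sketch of the Sum-based Validation applicative in action. The demo values (`demo1`, `demo2`) are names invented here for illustration:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

data Sum f g a = InL (f a) | InR (g a)

class Natural f g where
  eta :: f a -> g a

-- The terminal transformation: collapse anything to Const mempty.
instance Monoid mo => Natural f (Const mo) where
  eta = const (Const mempty)

instance (Functor f, Functor g) => Functor (Sum f g) where
  fmap h (InL x) = InL (fmap h x)
  fmap h (InR y) = InR (fmap h y)

instance (Applicative f, Applicative g, Natural g f)
      => Applicative (Sum f g) where
  pure = InR . pure
  InL f <*> InL x = InL (f <*> x)
  InR g <*> InR y = InR (g <*> y)
  InL f <*> InR x = InL (f <*> eta x)
  InR g <*> InL x = InL (eta g <*> x)

type Validation mo = Sum (Const mo) Identity

validationError :: Monoid mo => mo -> Validation mo a
validationError = InL . Const

-- Two failures accumulate monoidally; two successes combine pointwise.
demo1, demo2 :: Validation [String] Int
demo1 = (+) <$> validationError ["no x"] <*> validationError ["no y"]
demo2 = (+) <$> pure 1 <*> pure 2
```

Here `demo1` ends up on the `Const` side carrying both error messages, while `demo2` succeeds with the combined result.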
Now, based on the same principles, we can produce a full Formlet.
type Formlet mo err env a = Product (Const mo) (Compose (Reader env) (Sum (Const err) Identity)) a
All the type and newtype noise looks a bit ugly, I'll grant. But the idea is to think with structures built with applicatives, which gives guarantees that we're building applicative structures, and furthermore, structures with certain guarantees in terms of which components can be interpreted independently of which others. So, for example, we can strip away the newtype noise and find the following:
type FormletClean mo err env a = (mo, env -> Either err a)
Because we built this up from our basic library of applicatives, we also know how to write its applicative instance directly.
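For instance, here is a hedged sketch of that direct instance, with the type wrapped in a newtype so we can declare instances (`FormletClean`/`runFormlet` as written here are illustrative names). Note that the error side accumulates monoidally, as in the Sum applicative the type was derived from, rather than stopping at the first `Left` as Either's standard Applicative would:

```haskell
-- A sketch, not the canonical instance: the Either component follows the
-- Validation-style (error-accumulating) applicative from the post.
newtype FormletClean mo err env a =
  FormletClean { runFormlet :: (mo, env -> Either err a) }

instance Functor (FormletClean mo err env) where
  fmap f (FormletClean (m, g)) = FormletClean (m, fmap f . g)

instance (Monoid mo, Monoid err) => Applicative (FormletClean mo err env) where
  pure x = FormletClean (mempty, const (Right x))
  FormletClean (m1, ff) <*> FormletClean (m2, fx) =
    FormletClean (m1 <> m2, \env -> case (ff env, fx env) of
      (Left e1, Left e2) -> Left (e1 <> e2)  -- both sides failed: combine errors
      (Left e1, _      ) -> Left e1
      (_      , Left e2) -> Left e2
      (Right f, Right x) -> Right (f x))
```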
Now that we've gotten a basic algebraic vocabulary of applicatives, and especially now that we've produced this nifty Sum applicative (which I haven't seen presented before), we've gotten to where I intended to stop.
But lots of other questions arise, on two axes. First, what other typeclasses beyond applicative do our constructions satisfy? Second, what basic pieces of vocabulary are missing from our constructions — what do we need to add to flesh out our universe of discourse? (Fixpoints come to mind).
Also, what statements can we make about "completeness" — what portion of the space of all applicatives can we enumerate and construct in this way? Finally, why is it that monoids seem to crop up so much in the course of working with Applicatives? I plan to tackle at least some of these questions in future blog posts.
Probably the best way to categorize the difference, for the purpose of where we're eventually going, is that natural deduction focuses on the ways to build proof terms up from their constituent parts. This comes in the form of introduction and elimination rules for the various propositions. For instance, the rules for conjunction are:
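In standard natural-deduction notation, the introduction and elimination rules for conjunction read:

```latex
\frac{A \qquad B}{A \wedge B}\;(\wedge\text{-I})
\qquad
\frac{A \wedge B}{A}\;(\wedge\text{-E}_1)
\qquad
\frac{A \wedge B}{B}\;(\wedge\text{-E}_2)
```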
This spartan style gets a bit annoying (in my opinion) for the hypothetical premises of the implication introduction, but this can be solved by adding contexts:
This is the style most commonly adopted for presenting type theories, except we reason about terms with a type, rather than just propositions. The context we added for convenience above also becomes fairly essential for keeping track of variables:
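For conjunction, the term-annotated versions of the rules are standardly written:

```latex
\frac{\Gamma \vdash M : A \qquad \Gamma \vdash N : B}{\Gamma \vdash (M, N) : A \times B}
\qquad
\frac{\Gamma \vdash P : A \times B}{\Gamma \vdash \mathsf{fst}\; P : A}
\qquad
\frac{\Gamma \vdash P : A \times B}{\Gamma \vdash \mathsf{snd}\; P : B}
```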
As can be seen, all the rules involve taking terms from the premise and building on them in the conclusion.
The other type of system in question, sequent calculus, looks very similar, but represents a subtle shift in focus for our purposes (sequent calculi are a lot more obviously different when presenting classical logics). First, the inference rules relate sequents, which look a lot like our contextual judgments above, and I'll write them the same way. The difference is that not all rules operate on the conclusion side; some operate just on the context. Generally, introduction rules stay similar to natural deduction (and are called right rules), while elimination rules are replaced by manipulations of the context, and are called left rules. For pairs, we can use the rules:
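In standard sequent notation, the right rule and the combined left rule for conjunction are:

```latex
\frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B}\;(\wedge R)
\qquad
\frac{\Gamma, A, B \vdash C}{\Gamma, A \wedge B \vdash C}\;(\wedge L)
```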
We could also have two separate left rules:
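In the same notation, these are:

```latex
\frac{\Gamma, A \vdash C}{\Gamma, A \wedge B \vdash C}\;(\wedge L_1)
\qquad
\frac{\Gamma, B \vdash C}{\Gamma, A \wedge B \vdash C}\;(\wedge L_2)
```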
But these two different sets are equivalent as long as we're not considering substructural logics. Do note, however, that we're moving from A (or B) on the top left to A ∧ B on the bottom left, using the fact that A ∧ B is sufficient to imply A (or B, respectively). That is, projections apply contravariantly to the left.
It turns out that almost no type theory is done in this style; natural deduction is far and away more popular. There are, I think, a few reasons for this. The first is: how do we even extend the left rules to type theory (eliminations are obvious, by contrast)? I know of two ways. The first is to introduce pattern matching into the contexts, so our left rule becomes:
This is an acceptable choice (and may avoid some of the pitfalls in the next option), but it doesn't gel with your typical lambda calculus. It's probably more suited to a pattern calculus of some sort (although, even then, if you want to bend your brain, go look at the left rule for implication and try to figure out how it translates into such a theory; I think you probably need higher-order contexts of some sort). Anyhow, I'm not going to explore this further.
The other option (and one that I've seen in the literature) is that left rules actually involve a variable substitution. So we come up with the following rule:
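For pairs, such a substitution-based left rule is standardly rendered as:

```latex
\frac{\Gamma, x : A, y : B \vdash M : C}{\Gamma, p : A \times B \vdash M[\mathsf{fst}\,p/x,\; \mathsf{snd}\,p/y] : C}
```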
And with this rule, it becomes (I think) more obvious why natural deduction is preferred over sequent calculus, as implementing this rule in a type checker seems significantly harder. Checking the rules of natural deduction involves examining some outer-most structure of the term, and then checking the constituents of the term, possibly in an augmented context, and which rule we're dealing with is always syntax directed. But this left rule has no syntactic correspondent, so it seems as though we must nondeterministically try all left rules at each step, which is unlikely to result in a good algorithm. This is the same kind of problem that plagues extensional type theory, and ultimately results in only derivations being checkable, not terms.
However, there are certain problems that I believe are well modeled by such a sequent calculus, and one of them is type class checking and associated dictionary translations. This is due mainly to the fact that the process is mainly context-directed term building, rather than term-directed type checking. As far as the type class algorithm goes, there are two interesting cases, having to do with the following two varieties of declaration:
```haskell
class Eq a => Ord a where
  ...

instance (Eq a, Eq b) => Eq (a, b) where
  ...
```
It turns out that each of these leads to a left rule in a kind of type class sequent calculus:
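Reading the constraints as left-hand formulas, the two rules can be rendered as:

```latex
\frac{\Gamma, \mathrm{Eq}\,a \vdash M : T}{\Gamma, \mathrm{Ord}\,a \vdash M : T}
\qquad
\frac{\Gamma, \mathrm{Eq}\,(a, b) \vdash M : T}{\Gamma, \mathrm{Eq}\,a, \mathrm{Eq}\,b \vdash M : T}
```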
That is:
- If Eq a is a sufficient constraint for M : T, then the stronger constraint Ord a is also sufficient, so we can discharge the Eq a constraint and use Ord a instead.
- An Eq (a, b) constraint can be discharged using two constraints, Eq a and Eq b, together with an instance telling us how to do so.

This also works for instances without contexts, giving us rules like:
Importantly, the type inference algorithm for type classes specifies when we should use these rules based only on the contexts we're dealing with. Now, these look more like the logical sequent rules, but it turns out that they have corresponding type theory-like versions when we consider dictionary passing:
And this kind of substituting into dictionary variables produces exactly the evidence passing translation we want.
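A hedged Haskell sketch of that evidence translation (the names `EqD`, `OrdD`, `superEq`, `pairEq`, and `intEq` are invented here for illustration; GHC's actual dictionary representation differs in detail):

```haskell
-- A class becomes a record of its methods.
data EqD a = EqD { eq :: a -> a -> Bool }

data OrdD a = OrdD
  { ordEq :: EqD a               -- superclass dictionary
  , cmp   :: a -> a -> Ordering
  }

-- Superclass rule: any Ord a dictionary yields an Eq a dictionary, so an
-- Eq a dictionary variable can be substituted by (superEq d).
superEq :: OrdD a -> EqD a
superEq = ordEq

-- Instance rule: instance (Eq a, Eq b) => Eq (a, b) becomes a function from
-- the premise dictionaries to the conclusion dictionary.
pairEq :: EqD a -> EqD b -> EqD (a, b)
pairEq da db = EqD (\(x1, y1) (x2, y2) -> eq da x1 x2 && eq db y1 y2)

-- A context-free instance is simply a dictionary constant.
intEq :: EqD Int
intEq = EqD (==)
```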
Another way to look at the difference in feasibility is that type checking involves moving bottom-to-top across the rules; in natural deduction, this is always easy, and we need look only at the terms to figure out which we should do. Type class checking and dictionary translation moves from top-to-bottom, directed by the left hand context, and produces terms on the right via complex operations, and that is a perfect fit for the sequent calculus rules.
I believe this corresponds to the general opinion among those who have studied sequent calculi with regard to type theory. A quick search revealed mostly papers on proof search, rather than type checking, and type classes rather fall into that realm (they're a very limited form of proof search).
In category theory, the codensity monad of a functor f : C → D is given by the rather frightening expression:

T^f a = ∫_b [D(a, f b), f b]
Where the integral notation denotes an end, and the square brackets denote a power, which allows us to take what is essentially an exponential of the objects of D by objects of the enriching category V, where D is enriched in V. Provided the above end exists, T^f is a monad regardless of whether f has an adjoint, which is the usual way one thinks of functors (in general) giving rise to monads.
It also turns out that this construction is a sort of generalization of the adjunction case. If we do have an adjunction F ⊣ U, this gives rise to a monad U∘F. But, in such a case, the codensity monad of U is U∘F, so the codensity monad is the same as the monad given by the adjunction when it exists, but codensity may exist when there is no adjunction.
In Haskell, this all becomes a bit simpler (to my eyes, at least). Our category is always Hask, which is enriched in itself, so powers are just function spaces. And all the functors we write will be rather like functors on Hask (objects will come from kinds we can quantify over), so ends of functors will look like forall r. F r r, where F is contravariant in its first argument and covariant in its second. Then:
newtype Codensity f a = Codensity (forall r. (a -> f r) -> f r)
As mentioned, we've known for a while that we can write a Monad instance for Codensity f without caring at all about f.
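For reference, a self-contained sketch of that f-agnostic instance chain (the kan-extensions package provides an equivalent Codensity with these instances):

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Codensity f a =
  Codensity { runCodensity :: forall r. (a -> f r) -> f r }

-- None of these instances place any constraint on f.
instance Functor (Codensity f) where
  fmap f (Codensity m) = Codensity (\k -> m (k . f))

instance Applicative (Codensity f) where
  pure x = Codensity (\k -> k x)
  Codensity mf <*> Codensity mx = Codensity (\k -> mf (\f -> mx (k . f)))

instance Monad (Codensity f) where
  Codensity m >>= f = Codensity (\k -> m (\a -> runCodensity (f a) k))

-- Getting back out only needs pure; f still need not be a Monad.
lowerCodensity :: Applicative f => Codensity f a -> f a
lowerCodensity (Codensity m) = m pure
```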
As for the adjunction correspondence, consider the currying adjunction between products and exponentials, (- × s) ⊣ (s ->).
This gives rise to the monad M a = s -> (a, s): the state monad. According to the facts above, we should have that Codensity (s ->) (excuse the sectioning) is the same as state, and if we look, we see:
forall r. (a -> s -> r) -> s -> r
which is the continuation passing, or Church (or Boehm-Berarducci) encoding of the monad.
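A quick sketch of the two directions of this encoding (`toState` and `fromState` are names invented here):

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Codensity f a =
  Codensity { runCodensity :: forall r. (a -> f r) -> f r }

-- Instantiating the continuation at the pairing function reads off an
-- ordinary state transformer.
toState :: Codensity ((->) s) a -> s -> (a, s)
toState (Codensity m) = m (\a s -> (a, s))

-- Conversely, any state transformer can be CPS-encoded.
fromState :: (s -> (a, s)) -> Codensity ((->) s) a
fromState act = Codensity (\k s -> let (a, s') = act s in k a s')
```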
Now, it's also well known that for any monad, we can construct an adjunction that gives rise to it. There are multiple ways to do this, but the most accessible in Haskell is probably via the Kleisli category. So, given a monad M on Hask, there is a Kleisli category with the same objects, but where the arrows from a to b are the Haskell functions a -> M b. The identity for each object is return
and composition of arrows is:
(f >=> g) x = f x >>= g
Our two functors are:
```haskell
F a = a
F f = return . f

U a = M a
U f = (>>= f)
```
Verifying that U∘F = M requires only that U (F f) = fmap f; but U (F f) is just (>>= return . f), which equals fmap f by the monad laws, a triviality. Now we should have that Codensity U = M.
So, one of the simplest monads is reader, M a = e -> a. Now, U just takes objects in the Kleisli category (which are objects in Hask) and applies M to them, so we should have that Codensity (e ->) is reader. But earlier we had that Codensity (e ->) was state. So reader is state, right?
We can actually arrive at this result another way. One of the most famous pieces of category theory is the Yoneda lemma, which states that for any functor F : C → Set, natural transformations from C(a, -) to F correspond to elements of F a. This also works for any functor into Hask and looks like:
F a ~= forall r. (a -> r) -> F r
for F an endofunctor on Hask. But we also have our functor U from the Kleisli category of M, which should look more like:
U a ~= forall r. (a -> M r) -> U r
M a ~= forall r. (a -> M r) -> M r
So, we fill in M = (e ->) and get that reader is isomorphic to state, right? What's going on?
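Before answering, it helps to see concretely that the quantified type really does contain state actions. A small illustrative example (`bump` is a name invented here), at e = Int:

```haskell
-- bump inhabits the shape forall r. (a -> e -> r) -> e -> r (at a = e = Int),
-- but it hands the continuation a *modified* environment; no reader action
-- can do that.
bump :: (Int -> Int -> r) -> Int -> r
bump k s = k s (s + 1)
```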
To see, we have to take a closer look at natural transformations. Given two categories C and D, and functors F, G : C → D, a natural transformation t from F to G is a family of maps t_a : F a → G a, one for each object a, such that for every arrow f : a → b the following diagram commutes:
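Written equationally, the commuting square says that for every f : a → b:

```latex
t_b \circ F f \;=\; G f \circ t_a \;:\; F a \to G b
```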
The key piece is what the morphisms look like. It's well known that parametricity ensures the naturality of t :: forall a. F a -> G a
for F and G endofunctors on Hask, and it also works when the source is the opposite category of Hask. It should also work for a category which has Haskell types as objects, but where the morphisms from a to b are pairs of functions (a -> b, b -> a), which is the sort of category that newtype Endo a = Endo (a -> a) is a functor from. So we should be at liberty to say:
Codensity Endo a = forall r. (a -> r -> r) -> r -> r ~= [a]
However, hom types for the Kleisli category are not merely made up of Hask hom types on the same arguments, so naturality turns out not to be guaranteed. A functor U must take a Kleisli arrow f : a -> M b to an arrow U f : M a -> M b, and transformations must commute with that mapping. So, if we look at our use of Yoneda, we are considering transformations t from the Kleisli hom functor at a to U:
Now, the Kleisli homs from a to b are just the functions a -> M b, and U b = M b. So
t :: forall r. (a -> M r) -> M r
will get us the right type of maps. But the above commutative square corresponds to the condition that, for all f :: b -> M c:
t . (>=> f) = (>>= f) . t
So, if we have h :: a -> M b, Kleisli composing it with f and then feeding the result to t is the same as feeding h to t and then binding the result with f.
Now, if we go back to reader, we can consider the reader morphism:
f = const id :: a -> e -> e
For all relevant m and g, m >>= f = id and g >=> f = f. So the naturality condition here states that t f = id.
Now, t :: forall r. (a -> e -> r) -> e -> r. The general form of these is a state action (I've split e -> (a, e) into two pieces):
```haskell
t f e = f (rd e) (st e)
  where
    rd :: e -> a
    st :: e -> e
```
If f = const id, then:
```haskell
t (const id) e = st e
  where
    st :: e -> e
```
But our naturality condition states that this must be the identity, so we must have st = id. That is, the naturality condition selects t for which the corresponding state action does not change the state, meaning it is equivalent to a reader action! Presumably the definition of an end (which involves dinaturality) enforces a similar condition, although I won't work through it, as it'd be rather more complicated.
However, we have learned a lesson. Quantifiers do not necessarily enforce (di)naturality for every category with objects of the relevant kind. It is important to look at the hom types, not just the objects. In this case, the point of failure seems to be the common, extra s. Even though the type contains natural transformations for the similar functors over Hask, they can (in general) still manipulate the shared parameter in ways that are not natural for the domain in question.
I am unsure of how exactly one could enforce the above condition in (Haskell's) types. For instance, if we consider:
forall r m. Monad m => (a -> m r) -> m r
This still contains transformations of the form:
t k = k a >> k a
And for this to be natural would require:
(k >=> f) a >> (k >=> f) a = (k a >> k a) >>= f
Which is not true for all possible instantiations of f. It seems as though leaving m unconstrained would be sufficient, as all that could happen is t feeding a value to k and yielding the result, but it seems likely to be over-restrictive.