Fri 24 Jun 2011
Free Monads for Less (Part 3 of 3): Yielding IO
Posted by Edward Kmett under Algorithms, Category Theory, Comonads, Data Structures, Haskell, Kan Extensions, Monads
Last time, I said that I was going to put our cheap new free monad to work, so let's give it a shot.
Yield for Less
Last month at TPDC 2011, Roshan James and Amr Sabry presented Yield: Mainstream Delimited Continuations.
Without calling it such, they worked with the free monad of the indexed store comonad. Ignoring the comonad and just looking at the functor, we can see that
data Store i o r = Store (i -> r) o deriving (Functor)
admits the operation
class Functor y => Yieldable y i o | y -> i o where
  yield :: o -> y i

instance Yieldable (Store i o) i o where
  yield = Store id
The free monad of Store i o is a nice model for asymmetric coroutines.
type Yield i o = Free (Store i o)

liftFree :: Functor f => f a -> Free f a
liftFree = Free . fmap Pure

instance Yieldable y i o => Yieldable (Free y) i o where
  yield = liftFree . yield
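Here Free and Pure are the constructors of the ordinary ADT free monad from the earlier parts of this series; restated for reference (on a modern GHC you would also supply the corresponding Functor and Applicative instances):

data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Monad (Free f) where
  return = Pure
  Pure a >>= f = f a
  Free g >>= f = Free (fmap (>>= f) g)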
With its Monad instance, you can write computations like:
foo :: Num o => Yield () o ()
foo = do
  yield 1
  yield 2
  yield 3
or to streamline one of James and Sabry's examples
walk :: Traversable f => f o -> Yield i o (f i)
walk = traverse yield
is an asymmetric coroutine that yields each of the elements in a traversable container in turn, replacing them with the responses from whatever is driving the coroutine.
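To make that reading concrete, here is a minimal driver for the naive encoding. It is not from James and Sabry's paper, and runYield is just an illustrative name: it answers each yielded o with f o and collects the outputs seen along the way.

runYield :: (o -> i) -> Yield i o a -> ([o], a)
runYield _ (Pure a) = ([], a)
runYield f (Free (Store k o)) =
  let (os, a) = runYield f (k (f o)) in (o : os, a)

For example, runYield (+1) (walk [10,20,30]) returns ([10,20,30],[11,21,31]).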
James and Sabry called this the naive frame grabbing implementation. It is inefficient for the same reasons that we discussed before about retraversing the common trunk in free monads in general. Note that the unchanging trunk here isn't the data structure that we're traversing, but instead the chain of Store i o actions we took to get to the current instruction.
James and Sabry then proceeded to optimize it by hitting it with Codensity.
type Iterator i o = Codensity (Yield i o)

instance (Monad y, Yieldable y i o) => Yieldable (Codensity y) i o where
  yield = liftCodensity . yield
But we've now seen that we can get away with something smaller and get the same benefits.
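Here F is the Church-encoded free monad from Part 2; restated for reference:

newtype F f a = F { runF :: forall r. (a -> r) -> (f r -> r) -> r }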
liftF :: Functor f => f a -> F f a
liftF f = F (\kp kf -> kf (fmap kp f))

instance Yieldable y i o => Yieldable (F y) i o where
  yield = liftF . yield
Flattened, and with the store untupled, the new optimized representation looks like:

newtype Iterator i o a = Iterator
  { runIterator :: forall r. (a -> r) -> (o -> (i -> r) -> r) -> r }
and provides the same performance improvements for asymmetric coroutines as the Codensity version used by James and Sabry, which would flatten to the much larger and less satisfying:
newtype RSIterator i o a = RSIterator
  { runRSIterator :: forall r. (a -> (o -> (i -> r) -> r) -> r)
                             ->       (o -> (i -> r) -> r) -> r
  }
They proceed to give an encoding of delimited continuations into this type and vice versa, but the rest of their material is of no further use to us here.
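To see that the small representation really does carry its own Monad, here is one way the instances and yield can be written directly against the flattened Iterator above. This is a sketch; names like yieldI are mine, not from the paper.

instance Functor (Iterator i o) where
  fmap f (Iterator m) = Iterator (\kp kf -> m (kp . f) kf)

instance Applicative (Iterator i o) where
  pure a = Iterator (\kp _ -> kp a)
  Iterator mf <*> Iterator ma = Iterator (\kp kf -> mf (\f -> ma (kp . f) kf) kf)

instance Monad (Iterator i o) where
  return = pure
  Iterator m >>= f = Iterator (\kp kf -> m (\a -> runIterator (f a) kp kf) kf)

-- yield an o and resume with the i supplied by whoever is driving the coroutine
yieldI :: o -> Iterator i o i
yieldI o = Iterator (\kp kf -> kf o kp)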
As an aside, the performance benefits of encoding Oleg's iteratees in continuation-passing style arise for much the same reason. The resulting encoding is a right Kan extension!
Who Needs the RealWorld?
As Runar recently tweeted, we have put this to good use here at ClariFI. (Yes, we are hiring! If the contents of my blog make sense to you then email me and let's talk.)
At ClariFI we have a strongly typed functional language that bears a strong resemblance to Haskell, with rank-n types and a number of other interesting type system features that are particularly suited to our problem domain.
However, as with Haskell, we needed a story for how to deal with IO.
Now, GHC models IO with the type
newtype IO a = IO (State# RealWorld -> (# a, State# RealWorld #))
where they model IO by working in a strict state monad, passing around a real world that they promise not to mutate or copy. (In practice, the world is passed around as a 0-byte token.)
This is somewhat problematic semantically, for a number of reasons.
First, there is always the risk of copying it or plumbing it through backwards, so we carefully hide the State# RealWorld from the end user. So this model really wants some notion of uniqueness or linear typing to render it perfectly safe. Heck, the entire Clean language arose from just trying to address this concern.
Second, you don't really get to pass the real world around! We have multiple cores working these days. Stuff is happening in the back end, and as much as you might want it to be, your program isn't responsible for everything that happens in the RealWorld!
Third, if in some sense all bottoms are the same, then forever (putStrLn "Hello World") and undefined are the same in that sense, despite the slew of side-effects that arise from the first one. Now, in Haskell you are allowed to catch some bottoms in the IO monad, and thereby escape from certain doom, but it is still a reasonable objection.
One alternate model for talking about IO is to view it as a free monad of some set of operations. This approach was taken by Wouter Swierstra's Functional Pearl: Data Types a la Carte.
You can then supply some sort of external interpreter that pumps that tree structure, performing the individual actions.
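For concreteness, a tiny sketch of that style. The operation set and interpreter here are illustrative, not taken from Swierstra's paper: we fix a closed functor of requests and pump the resulting tree from the outside.

data Op r = GetLine (String -> r) | PutStrLn String r
  deriving Functor

type Prog = Free Op

-- an external interpreter that pumps the tree, performing each action
-- (using Haskell's ordinary IO just to run the demonstration)
runProg :: Prog a -> IO a
runProg (Pure a)              = return a
runProg (Free (GetLine k))    = getLine >>= runProg . k
runProg (Free (PutStrLn s k)) = putStrLn s >> runProg k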
This is unsatisfying because of two things:
First, the performance is abysmal using the common ADT encoding of a free monad. Janis Voigtländer of course showed that this can be rectified by using the Codensity monad.
Second, the set of FFI operations is closed.
What we've done instead is to define our primitive IO actions externally as some FFI type:
type FFI o i -- external, side-effecting computation taking o, returning i
In practice, these are obtained by reflection from our foreign import statements, since we run on the JVM.
Then we looked at the free monad of
data OI a = forall o i. OI (FFI o i) o (i -> a) deriving Functor
where OI is the indexed store comonad used as the building block above, yielding arguments to FFI of type o, and representing a computation that would resume with a value of type i to obtain a result of type a.
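For intuition, the derived Functor instance just post-composes with that resumption, and a single foreign call lifts into the free monad over OI like so (callOI is an illustrative name, not something from our implementation):

-- what 'deriving Functor' amounts to here:
-- instance Functor OI where
--   fmap f (OI ffi o k) = OI ffi o (f . k)

-- one foreign call, lifted into Free OI
callOI :: FFI o i -> o -> Free OI i
callOI f o = Free (OI f o Pure)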
In some sense this yields a more useful notion than Richard Kieburtz's novel, but largely unimplementable, OI comonad from Codata and Comonads in Haskell.
Flattening Free OI would yield the naive
-- data FIO a where
--   Return :: a -> FIO a
--   FIO :: FFI o i -> o -> (i -> FIO a) -> FIO a
which would be interpreted by the runtime system.
But once we've converted to our Church-encoded Free monad and flattened, we obtain:

newtype IO a = IO
  (forall r. (a -> r) -> (forall i o. FFI o i -> o -> (i -> r) -> r) -> r)
with the Functor and Monad instances defined above.
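Under this representation, injecting a single primitive call is direct. A hedged sketch (ffiCall is my name for it, not necessarily what our implementation calls it):

ffiCall :: FFI o i -> o -> IO i
ffiCall f o = IO (\kp kf -> kf f o kp)

This is just the yield from the coroutine half of the post, now carrying the operation alongside its argument.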
This then gives us a number of choices on how we implement the runtime system:
We can use the machinery described earlier to convert from IO a to Free OI a or FIO a, and then have the runtime system pattern match on that structure in our main method, taking the FFI actions and their arguments and passing the results in to the language.

Or we can invert control, and implement things more directly by just defining

type FFI = (->)

while letting the FFI'd methods have side-effects, and then defining
unsafePerformIO :: IO a -> a
unsafePerformIO (IO m) = m id (\oi o ir -> ir (oi o))
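As a quick sanity check of that interpreter under the FFI = (->) reading (using the ffiCall sketch from above), a primitive call reduces to plain function application:

-- unsafePerformIO (ffiCall (+1) 41)
--   = (\kp kf -> kf (+1) 41 kp) id (\oi o ir -> ir (oi o))
--   = (\oi o ir -> ir (oi o)) (+1) 41 id
--   = id ((+1) 41)
--   = 42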
But regardless of how FFI is implemented, this model provides a clear structural difference between forever (putStrLn "Hello") and undefined, and does not require us to believe the pleasant fiction that we can get our hands on the real world and pass it around.
Our actual IO representation is only slightly more complicated than the one presented here, in order to plumb through an extra continuation for Java exceptions, but the substance of this approach isn't changed by that addition.
[Edit: incorporated a minor typographical fix into Iterator from Max Bolingbroke]
[Edit: fixed Store to be data, a liftM that should have been an fmap, added the missing Functor constraint that was present in my actual implementation but didn't make it to the web, and a couple of typos in the implementation of RSIterator, all noted by Clumsy.]
June 24th, 2011 at 10:07 am
Unspeakably awesome series of posts: the idea density is so high :-). This has answered many questions for me, but raised many more that I now need to think about..
June 24th, 2011 at 11:36 pm
Your language’s IO type is extremely reminiscent of the old pre-monad Haskell days of main :: [Response]->[Request]. Except that with the coroutine structure you ensure that requests and responses can’t get out of sync. That’s quite elegant. Much nicer than the RealWorld baton in a lot of ways.
Have you considered how parallelism plays into this model of I/O?
June 25th, 2011 at 12:31 am
@Wren: parallelism works fine and was a large portion of the motivation for this approach. A program exists in a quiescent state. From the outside (a thread), I can ask a term to evaluate itself, apply itself to an argument, and do case analysis upon the result. We apply that to main and keep pumping. Any of those operations can be performed in parallel; the answers to them are all referentially transparent! Hence we can have as many threads running as we want demanding those answers. We aren’t passing a world. The evaluator just responds transparently to demands from the RTS, potentially many such demands simultaneously.
June 25th, 2011 at 9:13 am
After being clumsy for a while, I reached the end. Great post.
Reordering some arguments in IO to
(forall i o. (i -> r) -> FFI o i -> o -> r)
gives the more concise
unsafePerformIO (IO m) = m id fmap
which for simple FFI functions is
unsafePerformIO (IO m) = m id (.)
June 25th, 2011 at 1:24 pm
@Chris: Nice!
September 7th, 2012 at 1:24 am
I’m not clear on what you mean by the set of FFI operations being closed with the “Data Types a la Carte” approach. It seems to me the point of that approach is to not limit operations. You just define new operations and interpreters for those operations.
I’m also not clear on how the proposed FFI type you’ve presented here works. How is it not limited? How would you define such a thing in Haskell? (Very curious about this approach because right now I’m planning on using the approach from “Data Types a la Carte”, but am always interested in learning about/using something better.)
September 7th, 2012 at 8:53 am
The difference is that in order to support new operations with the usual free monad/a la carte approach, you need to define a new data type or a new constructor, or tuple up more and more stuff and pay a higher and higher cost when accessing the next constructor.
We use this in a language we have here at S&P Capital IQ called Ermine as our only FFI mechanism. It isn’t something you could bolt into, say, GHC without changing out pretty much everything in the language, but in a new Haskell implementation, it has the benefit that you don’t need to deal with magic RealWorld tokens and then restrict transformations that may attempt to commute past them.
This enables you to transform the language more easily; in exchange, we denote operations with this separate FFI type, rather than primops, and view the runtime system as an external agent driving an asymmetric coroutine.
October 6th, 2012 at 6:44 pm
Ok, I understand the cost of tupling up more and more and that being undesirable. Also, you’re right that adding new constructors breaks the whole “a la Carte” notion.
I’m just looking for alternatives to the typical way of building FP apps of using a monad transformer stack with ReaderT and StateT and type-classes for all the data you want to inject (config, db connections, etc.). I was looking at the a la Carte approach as a way of doing that. Although, to be fair my distaste for this approach comes from a Scala app where it became a PITA to manage everything and if it had been written in Haskell the experience might have been different.
October 6th, 2012 at 10:38 pm
The irony with that is that it is particularly easy to add cases to a ‘Free’/a la carte monad in scala. Just don’t seal the class. ;)
To mitigate the boilerplate in Haskell at least, the typeclasses for config, db connection, etc. can be autogenerated by something like makeClassy from my lens package.