Coding: Fleeting Thoughts

A place to discuss the implementation and style of computer programs.

Moderators: phlip, Moderators General, Prelates

commodorejohn
Posts: 958
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA
Contact:

Re: Coding: Fleeting Thoughts

Postby commodorejohn » Wed May 25, 2016 8:45 pm UTC

chridd wrote:• Some sort of way to generate a function given code, or some representation of code (e.g., an object-oriented language could have a way to generate a function given a list of Statement objects, or a LISP variant could have a way to generate a function given a list). The only programming language I can think of that has this feature is JavaScript, where you can make a function using something like

Code: Select all

new Function("x", "return x+1;")
but JavaScript also supports proper closures and I'm not sure how widely used the new Function thing is. (Also technically any language with eval has this feature.)

Didn't you yourself just bring up Lisp? I admit I'm not super-fluent in it, but isn't that exactly what defun does?
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

User avatar
chridd
Has a vermicelli title
Posts: 763
Joined: Tue Aug 19, 2008 10:07 am UTC
Location: ...Earth, I guess?
Contact:

Re: Coding: Fleeting Thoughts

Postby chridd » Wed May 25, 2016 8:57 pm UTC

commodorejohn wrote:
chridd wrote:• Some sort of way to generate a function given code, or some representation of code (e.g., an object-oriented language could have a way to generate a function given a list of Statement objects, or a LISP variant could have a way to generate a function given a list). The only programming language I can think of that has this feature is JavaScript, where you can make a function using something like

Code: Select all

new Function("x", "return x+1;")
but JavaScript also supports proper closures and I'm not sure how widely used the new Function thing is. (Also technically any language with eval has this feature.)

Didn't you yourself just bring up Lisp? I admit I'm not super-fluent in it, but isn't that exactly what defun does?
I'm pretty sure the code given to defun has to be constant at compile time. That is, if you have a variable "code" containing the list (+ x 1) (and which could have something else depending on the program's input), I don't think there's a way to pass that list into defun and get a function that adds 1 to x (unless you use "eval", but that goes under "technically any language with eval has this feature"). Whereas in JavaScript, you could ask the user for a number and then do new Function("x", "return x+" + number + ";") or something, and have the function be different each time the program runs. (Of course, depending on where you got the number, this could be a security hole.)
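
In concrete terms (a sketch of my own, not chridd's code), the "function that's different each time the program runs" idea looks like:

```javascript
// Sketch of the idea described above: the function body is assembled
// from a value that only exists at runtime, so each run of the program
// can produce a different function. The variable "number" is made up
// here; imagine it came from user input. (As noted above, building
// code from untrusted input this way is a security hole.)
var number = 7;
var addN = new Function("x", "return x + " + number + ";");
addN(10); // 17
```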
~ chri d. d. /tʃɹɪ.di.di/ (Phonotactics, schmphonotactics) · they (for now, at least) · Forum game scores
mittfh wrote:I wish this post was very quotable...
flicky1991 wrote:In both cases the quote is "I'm being quoted too much!"

EvanED
Posts: 4324
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI
Contact:

Re: Coding: Fleeting Thoughts

Postby EvanED » Wed May 25, 2016 11:23 pm UTC

Right.

Code: Select all

(defun add-one (x) (+ 1 x))

is analogous to

Code: Select all

function add_one(x) { return 1 + x; }
var add_one = function() { return 1 + x; }; // (or perhaps to this)

whereas

Code: Select all

var add_one = new Function("x", "return x + 1;")

is constructing a function from a string at runtime. I'm not actually sure how you'd do this in Lisp; maybe read it into an S-exp and then eval it.

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Coding: Fleeting Thoughts

Postby headprogrammingczar » Wed May 25, 2016 11:43 pm UTC

To use the specific term: not being able to construct arbitrary function bodies is what makes it impossible to write closures in C. You can hack it with lambda lifting, but it's obviously not pretty.
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

commodorejohn
Posts: 958
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA
Contact:

Re: Coding: Fleeting Thoughts

Postby commodorejohn » Wed May 25, 2016 11:57 pm UTC

EvanED wrote:I'm not actually sure how you'd do this in Lisp; maybe read it into an S-exp and then eval it.

Well, based on what I'm reading just poking around the Internet, the read function reads a token from an input stream and converts it to a single-item S-expression, so hypothetically you could just concatenate those and then have a list that eval could work with. From a quick test, (eval `(defun double (x) (+ x x))) in GNU Common Lisp successfully constructs the function you'd expect, so I imagine you could take that notion just about as far as you want.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Fri May 27, 2016 7:15 am UTC

chridd wrote:C has function pointers, which one can pass around as data, but there's no way to (portably) create new functions at runtime, so I'm not sure if that counts as first-class functions.

I don't think being able to create new functions at runtime is a good definition for "having first class functions"; neither is it a commonly used one. It would make "having first class functions" equivalent to having eval(). And languages like Haskell would not fall into this category if you refused to use the System.Eval.Haskell package.

The better definition is: A programming language has first class functions if it can operate on functions the same way it can operate on other values like numbers, objects and arrays. C certainly satisfies this definition: You can store functions in variables, you can pass them to other functions (i.e. C has higher-order functions), return them from other functions and you can perform the natural function operation on variables that hold functions, i.e. you can invoke them through the variable.

I don't think "having first class functions" is about syntax alone, in particular I don't think having to write an asterisk sometimes prevents C from having first-class functions.

I also claim that the only natural operation that can be done on a function is invoking it. One might argue that there are other natural operations, e.g. partial application that first-class functions should provide. My response would be that this would make "having first class functions" equivalent to "having closures".

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Coding: Fleeting Thoughts

Postby headprogrammingczar » Fri May 27, 2016 10:37 am UTC

Eval isn't what was meant by creating new functions at runtime. You can't define these "appliers" in C without writing at least 10000 lines of code.

Code: Select all

(#) = \x -> \f -> f x
appliers = map (#) [1..10000]


An analogous situation would be: you can store numbers in variables, you can pass them to functions, you can return them from functions, but you can only use numbers that have already been defined in a global variable.
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Fri May 27, 2016 11:23 am UTC

No, this is really not analogous. The point is that by allowing only "global" numbers you would forbid the natural "add" operation of the numbers data type. My reasoning is that partial application (which is what you're doing in your code snippet) is not a natural operation on first class functions, at least if you want a definition of first class functions that does not entail closures. (Because partial application is equivalent to closures modulo minor syntax differences)
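
The equivalence korona mentions is easy to see in JavaScript (an illustrative sketch, not code from the thread): partially applying a two-argument function is nothing more than building a closure over the supplied argument.

```javascript
// Partial application implemented as a closure. The helper name
// "partial" is made up for this sketch.
function add(a, b) { return a + b; }

function partial(f, a) {
  // "Partially applying" f to a is just closing over a.
  return function (b) { return f(a, b); };
}

var addOne = partial(add, 1);
addOne(41); // 42
```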

commodorejohn
Posts: 958
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA
Contact:

Re: Coding: Fleeting Thoughts

Postby commodorejohn » Fri May 27, 2016 2:20 pm UTC

headprogrammingczar wrote:Eval isn't what was meant by creating new functions at runtime.

Um, I think it was. Unless I'm missing something, eval can be used to do exactly the thing that chridd was mentioning when the subject first came up.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Coding: Fleeting Thoughts

Postby headprogrammingczar » Fri May 27, 2016 7:27 pm UTC

korona wrote:No, this is really not analogous. The point is that by allowing only "global" numbers you would forbid the natural "add" operation of the numbers data type. My reasoning is that partial application (which is what you're doing in your code snippet) is not a natural operation on first class functions, at least if you want a definition of first class functions that does not entail closures. (Because partial application is equivalent to closures modulo minor syntax differences)

I didn't write a partial application, I wrote a function that returns another function. Lest you think I am cheating, let's write this in some other languages that have first-class functions:

Code: Select all

;; lisp
(lambda (x) (lambda (f) (f x)))

// javascript
function (x) {
  return function (f) {
    return f(x);
  };
}

# php
function($x) {
  return function($f) use ($x) {
    return $f($x);
  };
}


Why should addition be a natural operation on numbers, while closures are not a natural operation on functions?
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Fri May 27, 2016 7:54 pm UTC

The OP wanted a definition of first-class functions that did not entail closures so demanding first-class functions to support closures (or the equivalent partial application) would be pointless.

User avatar
Diadem
Posts: 5649
Joined: Wed Jun 11, 2008 11:03 am UTC
Location: The Netherlands

Re: Coding: Fleeting Thoughts

Postby Diadem » Fri Jun 03, 2016 12:26 pm UTC

Dear colleague,

Yes, I am aware that my function to calculate the current position of an object gives the wrong output if you give it wrong input for the initial position of the object. I am deeply, deeply sorry for this. I truly wish I were able to write a function that gives correct output even with incorrect input, but alas, I am not that skilled a programmer.

I humbly request that you forgive me my inability to bend the laws of mathematics, so we can move forward with this project.
It's one of those irregular verbs, isn't it? I have an independent mind, you are an eccentric, he is round the twist
- Bernard Woolley in Yes, Prime Minister

User avatar
Xenomortis
Not actually a special flower.
Posts: 1397
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Fri Jun 03, 2016 12:32 pm UTC

The answer is obvious.

Code: Select all

int do_maths(int input_x) {
   return 42; // pre-calculated using correct input_x for performance
}

User avatar
You, sir, name?
Posts: 6974
Joined: Sun Apr 22, 2007 10:07 am UTC
Location: Chako Paul City
Contact:

Re: Coding: Fleeting Thoughts

Postby You, sir, name? » Fri Jun 03, 2016 2:21 pm UTC

Diadem wrote:Dear colleague,

Yes, I am aware that my function to calculate the current position of an object gives the wrong output if you give it wrong input for the initial position of the object. I am deeply, deeply sorry for this. I truly wish I were able to write a function that gives correct output even with incorrect input, but alas, I am not that skilled a programmer.

I humbly request that you forgive me my inability to bend the laws of mathematics, so we can move forward with this project.


What did you do, simulate a double pendulum?
I edit my posts a lot and sometimes the words wrong order words appear in sentences get messed up.

User avatar
Yakk
Poster with most posts but no title.
Posts: 11045
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Coding: Fleeting Thoughts

Postby Yakk » Fri Jun 03, 2016 2:43 pm UTC

So I think I'm starting to get Haskell functors.

It is like the dual of a vector space. (T->U)(T) is U, and ([T])(T->U) is U. It is an object that knows how to invoke functions that take a given type.

The variadic extension is roughly:

Code: Select all

template<class...Ts>
auto functor(std::tuple<Ts...> tup) {
  return [tup = std::move(tup)](auto&& f) mutable -> decltype(auto) {
    return std::experimental::apply( decltype(f)(f), std::move(tup) );
  };
}

(the above is one-shot -- writing one that works both efficiently one-shot and many-call is a bit more work)
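
For comparison, a rough JavaScript rendering of the same idea (my translation, not Yakk's code): an object that holds some arguments and knows how to invoke any function on them.

```javascript
// Hypothetical JS analogue of the C++ functor above: capture a tuple of
// arguments, return something that applies an arbitrary function to them
// (the analogue of std::experimental::apply on the stored tuple).
function functor() {
  var args = Array.prototype.slice.call(arguments);
  return function (f) { return f.apply(null, args); };
}

var pair = functor(2, 3);
pair(function (a, b) { return a * b; }); // 6
```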

Then a monad conceptually is like a functor, but instead of "returning" the unwrapped result, it returns the result still wrapped in the same monadic wrapping. It "passes functions through". There are a bunch of details involving functions that themselves take monads, or return monads, and the bind/then/fmap operations, the process of extracting the value out of the monad and putting one in, but those are (in my opinion) not core to the concept.

A wrapper of a type T that accesses T in a thread-safe manner is a functor.

A wrapper of a type T that accesses T in a dedicated thread, and allows you to queue up actions and later wait for results, is more of a monad.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

User avatar
Xanthir
My HERO!!!
Posts: 5215
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Coding: Fleeting Thoughts

Postby Xanthir » Fri Jun 03, 2016 11:51 pm UTC

I'm *fascinated* by how other people approach understanding of functors and their variants.

I've spent a lot of time digesting them, and finally came to what I think is a really simple way of understanding them - a functor is a wrapper object around some arbitrary value(s) that has a well-behaved .map() method. That's it! A closely-related understanding of it is that it's a way to add an "interceptor" for function calls, altering the behavior of how functions work in a well-behaved way. (This arises from the fact that you can partially apply map() by just giving it the function to call; this returns a new function that expects a particular type of wrapped value, does stuff to the value, and gives you a new wrapper.)
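
As a minimal sketch of that "wrapper with a well-behaved .map()" description (the Box name is made up for illustration, not a standard API):

```javascript
// Minimal functor: a Box holding one value, with a .map() that applies
// a function to the held value and re-wraps the result.
function Box(value) { this.value = value; }
Box.prototype.map = function (f) { return new Box(f(this.value)); };

var boxed = new Box(20).map(function (x) { return x + 1; });
// boxed.value is 21, still wrapped in a Box
```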

A monadic functor is just a functor that knows how to smoosh nested instances of itself together, usually denoted through a .join() method. These arise when the mapped function returns another instance of the functor; since you just blindly stuff the mapped function's return value back into the original wrapper (or an identical new instance), you end up with a double-wrapped value. That ends up being inconvenient, because now to call something on the plain value you have to map *twice*. Calling .join() afterwards smushes them together into a single wrapper, in a well-behaved way that preserves what each of the original wrappers meant. (This is usually done all at once, through a method called .bind() or .chain() that does a .map() and .join() at the same time for you.)
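
A toy version of that smooshing (again with a made-up Box type; the .join() here assumes the held value is itself a Box):

```javascript
// Box with .join() to flatten Box(Box(x)) into Box(x), and
// .chain() = map-then-join, as described above.
function Box(value) { this.value = value; }
Box.prototype.map = function (f) { return new Box(f(this.value)); };
Box.prototype.join = function () { return this.value; }; // assumes this.value is a Box
Box.prototype.chain = function (f) { return this.map(f).join(); };

var nested = new Box(5).map(function (x) { return new Box(x * 2); }); // Box(Box(10))
nested.join().value;                                                  // 10
new Box(5).chain(function (x) { return new Box(x * 2); }).value;      // 10
```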

An applicative functor is a multi-arg version of a normal functor. The easier way for me to think of it is that it's a functor that knows how to smoosh together two *independent* versions of itself (siblings, not nested wrappers). This is often very similar to how you smoosh nested things, but not always; sometimes there are multiple well-behaved ways to smoosh together siblings and not all of them work for nested stuff. (Example: the most natural way to smoosh nested Lists is to flatten them into a single master List, and the analogous way to smoosh together sibling lists is to cross-product them, creating a list of tuples containing every pair of items from the two siblings. (Showing that this is analogous is interesting, if you don't see it immediately.) There's another way to smoosh together sibling Lists, tho - pair up their values index-by-index, so you get a list whose first value is a tuple of the two siblings' first values, whose second value is a tuple of the two siblings' second values, etc. This is traditionally called a ZipList, and it actually can't be generally extended to nested lists. Again, interesting to show why.) This lets you map variadic functions over a functor - if all the arguments are in the same type of functor, and it's applicative, you can smoosh them together into a single functor holding a tuple of the argument values, and then map the function over that.
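
The two sibling-smooshings can be sketched for plain arrays (illustrative helpers, not a library API): cross product (the usual List applicative) versus index-by-index pairing (the ZipList style).

```javascript
// Cross product: every pair of items from the two siblings.
function cross(xs, ys) {
  var out = [];
  xs.forEach(function (x) {
    ys.forEach(function (y) { out.push([x, y]); });
  });
  return out;
}

// ZipList-style: pair up values index-by-index, stopping at the shorter list.
function zip(xs, ys) {
  var n = Math.min(xs.length, ys.length);
  var out = [];
  for (var i = 0; i < n; i++) out.push([xs[i], ys[i]]);
  return out;
}

cross([1, 2], [3, 4]); // [[1,3],[1,4],[2,3],[2,4]]
zip([1, 2], [3, 4]);   // [[1,3],[2,4]]
```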

However, it's traditionally implemented and understood in a different way, both because this second way is more natural in Haskell and related heavy-functional languages, and because it avoids having to deal with tuples ever. Instead, you put *the function you want to map* in an instance of the functor, and then call another method (traditionally called .ap()), passing it the argument functor. .ap() knows how to reach into each of the functors, pull out the function and value and call them, then put the result in a smooshed-together functor made from the earlier two wrappers (as I described above). If you curry the function, so being called with the first arg just produces another function that expects the second arg, you can repeat this until all the args are filled in and you're left with a single functor containing the result. Multi-arg mapping functions without ever having to actually acknowledge that functions can *have* multiple args!
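
The .ap() mechanism can be sketched with arrays as the functor (the helper names here are made up): put a curried function in a list, then feed it argument lists one at a time.

```javascript
// ap: apply every function in fs to every value in xs
// (the cross-product smooshing for the List functor).
function ap(fs, xs) {
  var out = [];
  fs.forEach(function (f) {
    xs.forEach(function (x) { out.push(f(x)); });
  });
  return out;
}

// A curried two-argument function: each call with one arg returns
// a function expecting the next arg.
var curriedAdd = function (a) { return function (b) { return a + b; }; };

// Wrap the function in the functor, then ap in the arguments one list
// at a time, exactly as described above.
ap(ap([curriedAdd], [1, 2]), [10, 20]); // [11, 21, 12, 22]
```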

(As it turns out, "monadic" is a subclass of "applicative" - if you can do nested-smooshing, there's a clever trick you can use to automatically get the analogous sibling-smooshing behavior. However, sibling-smooshing can often be done more efficiently if implemented directly.)

Then there's "traversable" functors, which are another subclass of "applicative" and have a .traverse() method that gives you a powerful way to iterate a mapping function over a "structured" functor (like Lists or Trees) without destroying the structure. Through ~clever tricks~, they let you automatically *swap* nested functors, so if your value is like A<B<value>>, you can automatically swap it to B<A<value>> as long as A is traversable and B is at least applicative, without having to know anything about what B is. This lets you get around a major weakness of monadic functors, which is that they're not composable - if you try to compose two monads (say you have a List of Options), then if you call them in the monadic way (the mapped function returns another instance of the composed monad), you'll end up with the value wrapped up like A<B<A<B<value>>>>, and neither A's nor B's .join() method can work automatically. If B is traversable, tho, then you can map the swapper into the middle, giving you A<A<B<B<value>>>>, then call A's join once on the outside, getting A<B<B<value>>>, then map B's join in one level to get A<B<value>>, exactly where you wanted to be!
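
A tiny sketch of the swap for the simplest case (my own illustration): arrays as the traversable, with null standing in for Option's None. Turning a list of maybe-values into maybe-a-list is exactly the A<B<value>> to B<A<value>> swap.

```javascript
// sequence: [nullable values] -> nullable [values].
// If any element is missing, the whole result is missing.
function sequence(maybes) {
  var out = [];
  for (var i = 0; i < maybes.length; i++) {
    if (maybes[i] === null) return null; // one missing value sinks the whole list
    out.push(maybes[i]);
  }
  return out;
}

sequence([1, 2, 3]);    // [1, 2, 3]
sequence([1, null, 3]); // null
```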
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Coding: Fleeting Thoughts

Postby headprogrammingczar » Sat Jun 04, 2016 2:19 am UTC

It's really hard to read this with no types or laws, and with functions and methods being totally mixed up.

A type `F` is a functor if there exists a function with the type `F<B> fmap(Function<A, B> f, F<A> a)`, obeying the following laws:
For all a, fmap({x => x}, a) = a
For all f and g and a, fmap({x => f(g(x))}, a) = fmap({x => f(x)}, fmap({x => g(x)}, a))

Maybe that function is implemented in a particular language as a method on F<A> of type F<B> fmap(Function<A, B> f), or you might use type classes as Haskell does.
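
Those two laws can be spot-checked against the array functor (Array.prototype.map playing the role of fmap):

```javascript
var a = [1, 2, 3];
var f = function (x) { return x * 2; };
var g = function (x) { return x + 1; };

// identity law: mapping the identity function changes nothing
var id = a.map(function (x) { return x; }); // [1, 2, 3]

// composition law: mapping f.g equals mapping g then mapping f
var composed = a.map(function (x) { return f(g(x)); }); // [4, 6, 8]
var separate = a.map(g).map(f);                         // [4, 6, 8]
```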
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

User avatar
Flumble
Yes Man
Posts: 1945
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Coding: Fleeting Thoughts

Postby Flumble » Sat Jun 04, 2016 12:19 pm UTC

So "the functor", "the monad" and all those other things are the types, whenever a function(s) exists for such a type that adheres to the corresponding rules? Or is "the functor" etc. the combination of the type and the particular function(s) following the rules? (or is it not defined at all and do people use these things arbitrarily?)

In any case, a functor clearly requires a map function (with laws) and I guess there's no alternate way of approaching it.
How about the applicative functor structure? I know it from Haskell as the thing that requires a pure and an ap function (with laws) in addition to map. But are these functions all required? Is there another way to approach this structure?
And how about monads? There's the Haskell way with its bind function and alternatively there's the join function that seems more appealing to the (mathematical) understanding. And in both those cases it seems to me the ap from applicatives becomes superfluous. :?

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Coding: Fleeting Thoughts

Postby headprogrammingczar » Sun Jun 05, 2016 12:55 pm UTC

In math, "the functor" is the combination of the type and the definition of map, which you combine in a tuple. For instance the notation for defining a monad is (T, η, μ), with T being the type and η/μ being pure/join.

Applicative theoretically would only require ap, and then pure would be point from pointed and there would be laws about that. However, it was found a few years ago that the laws for pointed and applicative are free theorems, so pointed became useless. So, it does require both pure and ap to be defined, in addition to map.

For monad, either bind or join is needed, and with one of them plus map you can implement the other. With pure and join/bind you can implement ap, so in that sense monad does overshadow applicative a bit, but it really just shows the significant leap in what is computable that happens by adding join. Incidentally, with applicative's ap and pure you can also directly write map.

Code: Select all

map f a = ap (pure f) a
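
The same derivation, sketched with arrays as the applicative (pure wraps a value in a singleton list; helper names are made up):

```javascript
// pure: wrap a value in the functor (a one-element array).
function pure(x) { return [x]; }

// ap: apply every function in fs to every value in xs.
function ap(fs, xs) {
  var out = [];
  fs.forEach(function (f) {
    xs.forEach(function (x) { out.push(f(x)); });
  });
  return out;
}

// map f a = ap (pure f) a
function map(f, xs) { return ap(pure(f), xs); }

map(function (x) { return x * 2; }, [1, 2, 3]); // [2, 4, 6]
```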
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

User avatar
Yakk
Poster with most posts but no title.
Posts: 11045
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Coding: Fleeting Thoughts

Postby Yakk » Mon Jun 06, 2016 2:40 pm UTC

Xanthir wrote:I've spent a lot of time digesting them, and finally came to what I think is a really simple way of understanding them - a functor is a wrapper object around some arbitrary value(s), that has a well-behaved .map() method.

The values need not be there.

Hmm, I think I'm wrong on functor: I was reading it as [T]->(T->U)->U, not [T]->(T->U)->[U] (the second of which is map?)

What is [T]->(T->U)->U, just a bound argument without the function?

For some reason, I thought a functor was sort of a dual of the value, such that ft = [t]f = u for f:T->U. Looks like I was wrong.
A monadic functor is just a functor that knows how to smoosh nested instances of itself together, usually denoted through a .join() method.

So it has join which does [[T]]->[T]? (sufficiently nicely)

I've done .chain or .bind a few times in C++, but not by defining .join.

An applicative functor is a multi-arg version of a normal functor. The easier way for me to think of it is that it's a functor that knows how to smoosh together two *independent* versions of itself (siblings, not nested wrappers). This is often very similar to how you smoosh nested things, but not always; sometimes there are multiple well-behaved ways to smoosh together siblings and not all of them work for nested stuff. (Example: the most natural way to smoosh nested Lists is to flatten them into a single master List, and the analogous way to smoosh together sibling lists is to cross-product them, creating a list of tuples containing every pair of items from the two siblings. (Showing that this is analogous is interesting, if you don't see it immediately). There's another way to smoosh together sibling Lists, tho - pair up their values index-by-index, so you get a list who first value is a tuple of the two siblings' first values, second value is a tuple of the two siblings' second values, etc. This is traditionally called a ZipList, and it actually can't be generally extended to nested lists. Again, interesting to show why.) This lets you map variadic functions over a functor - if all the arguments are in the same type of functor, and it's applicative, you can smoosh them together into a single functor holding a tuple of the argument values, and then map the function over that.

The cross product case seems computationally expensive and impractical.

Simple operations that generate exponential work ... are usually best avoided?
However, it's traditionally implemented and understood in a different way, both because this second way is more natural in Haskell and related heavy-functional languages, and because it avoids having to deal with tuples ever. Instead, you put *the function you want to map* in an instance of the functor, and then call another method (traditionally called .ap()), passing it the argument functor. .ap() knows how to reach into each of the functors, pull out the function and value and call them, then put the result in a smooshed-together functor made from the earlier two wrappers (as I described above). If you curry the function, so being called with the first arg just produces another function that expects the second arg, you can repeat this until all the args are filled in and you're left with a single functor containing the result. Multi-arg mapping functions without ever having to actually acknowledge that functions can *have* multiple args!

And, in the above case, it maybe generates exponential amounts of list output? Well, not for non-list functors naturally.
(As it turns out, "monadic" is a subclass of "applicative" - if you can do nested-smooshing, there's a clever trick you can use to automatically get the analogous sibling-smooshing behavior. However, sibling-smooshing can often be done more efficiently if implemented directly.)


Then there's "traversable" functors, which are another subclass of "applicative" and have a .traverse() method that gives you a powerful way to iterate a mapping function over a "structured" functor (like Lists or Trees) without destroying the structure. Through ~clever tricks~, they let you automatically *swap* nested functors, so if your value is like A<B<value>>, you can automatically swap it to B<A<value>> as long as the A is traversable and the B is at least applicative, without having to know anything about what B is. This lets you get around a major weakness of monadic functors, which is that they're not composable - if you try to compose two monads (say you have a List of Options), then if you call them in the monadic way (the mapped function returns another instance of the composed monad), you'll end up with the value wrapped up like A<B<A<B<value>>>>, and neither A nor B's .join() method can work automatically. If B is traversable, tho, then you can map the swapper into the middle, giving you A<A<B<B<value>>>>, then call A's join once on the outside, getting A<B<B<value>>>, then map B's join in one level to get A<B<value>>, exactly where you wanted to be!

So the point of these axioms, within Haskell, is to have "kinds" of "type operations" (a list of X is a kind of type operation here?). If the type operation supports the right axioms, certain algorithms work on the type operations without knowing any more about the type operation, and the result of the algorithm can be described and has certain properties (which someone might desire).
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Mon Jun 06, 2016 3:25 pm UTC

Maybe the best way to teach you, Yakk, what Monads (or applicative functors etc.) are is to translate them into C++, because you are already quite proficient in C++. The correct way to translate them into C++ is as a concept and not as a type.

Take the following concept definition:

A template (or template-alias) M is called a Monad if the following conditions are true:
  • For every type X the expression M<X> refers to a type.
  • M<X> has a constructor that takes an instance of X as its single non-defaulted argument, i.e. std::is_constructible_v<M<X>, X> is true.
  • If X is a complete type, m is an instance of M<X>, f is an instance of a Callable type F and std::result_of_t<F(X)> equals M<Y> for some type Y, then m * f is an expression of type M<Y>.
  • A few other conditions hold (the axioms). I'm pretty sure you can translate them yourself into this language.


In Haskell the constructor is called the return-operation whereas the overloaded operator* would be called the bind-operation.

Using this definition it's easy to turn std::optional into a Monad. (modulo undefined behavior that we get by adding an operator to namespace std)

Code: Select all

template<typename X, typename F>
std::result_of_t<F(X)> operator* (std::optional<X> m, F f) {
    if(m)
        return f(*m);
    return std::nullopt;
}

// one could optionally (no pun intended) do some std::enable_if magic to make sure std::result_of_t<F(X)> is std::optional<Y> for some Y; but that would require a struct to pattern match the type and obscure the example


This allows us to do

Code: Select all

std::make_optional(21) * [] (int x) { return std::make_optional(x * 2); }


Note that the Monad concept is not very useful on its own (just like the Callable concept is not very useful on its own: you know that you can invoke a Callable type but you don't know what invoking it actually does). But it allows you to categorize types and to write some generic algorithms (like std::bind for Callable types).

The reason why the axioms are needed is the same reason why we need axioms for ForwardIterator or any other concept. They make sure that the type behaves in some predictable way (e.g. for ForwardIterators the axioms make sure that it++ and ++it are compatible).

User avatar
Yakk
Poster with most posts but no title.
Posts: 11045
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Coding: Fleeting Thoughts

Postby Yakk » Mon Jun 06, 2016 3:46 pm UTC

Why:
If X is a complete type, m is an instance of M<X>, f is an instance of a Callable type and std::result_of_t<F(X)> equals M<Y> for some type Y then m * f is an expression of type M<Y>.

And not:
If X is a complete type, m is an instance of M<X>, f is an instance of a Callable type and std::result_of_t<F(X)> equals Y for some type Y then m * f is an expression of type Y.


The second is what I implemented when I translated the functor concept over to C++. Is there a reason why the return value of the callable need be restricted to M<Y>?

Ok, you are talking about monad. Which is different. I want to talk about functor first, which I appear to be more confused about.

( As an aside, I've been using `operator ->*` for my "and then do something" operation. )

The operation:

If X is a complete type, m is an instance of M<X>, f is an instance of a Callable type and std::result_of_t<F(X)> equals Y for some type Y then m * f is an expression of type M<Y>.

also seems easier to use than the one you describe, even on optionals. With join it becomes:

If X is a complete type, m is an instance of M<X>, f is an instance of a Callable type and std::result_of_t<F(X)> equals Y for some type Y then m * f is an expression of type M<Y>. If, however, Y is itself M<U> for some type U, then m*f is instead of type M<U>.

which I thought corresponded to bind/chain/whatever. A map followed by a join?
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

User avatar
Flumble
Yes Man
Posts: 1945
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: Coding: Fleeting Thoughts

Postby Flumble » Mon Jun 06, 2016 5:26 pm UTC

Yakk wrote:Hmm, I think I'm wrong on functor: I was reading it at [T]->(T->U)->U, not [T]->(T->U)->[U] (the second of which is map?)

What is [T]->(T->U)->U, just a bound argument without the function?

Ooh, I think I can explain this one. Kind of.
If the "map" operation of a functor were to be F<T> -> (T->U) -> U (haskellish: f t -> (t->u) -> u), i.e. it would extract a result from the functor by applying a function, then (besides obviously breaking the definition of the map operation) it would break the composition law. And not even the description of the composition law, but just the way the types interact: namely, you should be able to feed a composed function g.f to a functor (map (negate . (*2)) [1,2,3,5]) as well as feed a function g to a functor to which you've fed a function f (((map negate) . (map (*2))) [1,2,3,5]). To be able to do that, the map operation surely needs to return a functor of some kind, not strip it off and return a value held within.

TIL (again?): there are also functor classes that have a function f t -> t, called copointed functors (and above: comonads and such). But apparently they're so unusual or unusable that they don't get their own typeclass. Rather, some types that are monadic define a function to extract a value, like fromJust (extract a value from an Optional or crash the whole program with no survivors) or head (extract the first element of a list or crash when it's empty).
Also, can anyone enlighten me whether normal thingies and co-thingies can be merged and which ones have meaningful laws (like czar said, pointed functors don't add laws)?

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Coding: Fleeting Thoughts

Postby headprogrammingczar » Mon Jun 06, 2016 6:27 pm UTC

Yakk wrote:Why:
If X is a complete type, m is an instance of M<X>, f is an instance of a Callable type and std::result_of_t<F(X)> equals M<Y> for some type Y then m * f is an expression of type M<Y>.

And not:
If X is a complete type, m is an instance of M<X>, f is an instance of a Callable type and std::result_of_t<F(X)> equals Y for some type Y then m * f is an expression of type Y.


The second is what I implemented when I translated the functor concept over to C++. Is there a reason why the return value of the callable need be restricted to M<Y>?


To see why it has to be M<Y> and not Y (this example is for functor but you can extend it yourself for monad), let's let M = an array-like container.

Code: Select all

Y map(Array<X> arr, Function<X, Y> f);


Completely ignoring everything else about the functor laws, it's not even possible to write this function! It cannot produce a result of type Y if arr is empty, because there's nothing there to apply f to.

Code: Select all

Array<Y> map(Array<X> arr, Function<X, Y> f);


This is much more doable. The result array has the same length as the input array, and result[n] = f(input[n]) for all n. With an empty array, you just return another empty array.
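In C++ this second signature is only a few lines, and the empty case falls out naturally (map_vec is a hypothetical name for this sketch):

Code: Select all

```cpp
#include <cassert>
#include <vector>

// Array<Y> map(Array<X>, Function<X, Y>), sketched for std::vector:
// result[n] = f(input[n]) for all n; an empty input yields an empty output.
template<typename X, typename F>
auto map_vec(const std::vector<X>& arr, F f) {
    std::vector<decltype(f(arr[0]))> r;
    for (const auto& x : arr)
        r.push_back(f(x));
    return r;
}
```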
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

User avatar
You, sir, name?
Posts: 6974
Joined: Sun Apr 22, 2007 10:07 am UTC
Location: Chako Paul City
Contact:

Re: Coding: Fleeting Thoughts

Postby You, sir, name? » Mon Jun 06, 2016 10:01 pm UTC

Code: Select all

Y map(Array<X> arr, Function<X, Y> f);


Smells weird for other reasons too, since it implies some sort of hidden algebra for combining Y:s. Maybe easy to imagine for integers, but harder for the True/False/FileNotFound enumeration.
I edit my posts a lot and sometimes the words wrong order words appear in sentences get messed up.

User avatar
Xanthir
My HERO!!!
Posts: 5215
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Coding: Fleeting Thoughts

Postby Xanthir » Mon Jun 06, 2016 11:37 pm UTC

headprogrammingczar wrote:It's really hard to read this with no types or laws, and with functions and methods being totally mixed up.

A type `F` is a functor if there exists a function with the type `F<B> fmap(Function<A, B> f, F<A> a)`, obeying the following laws:
For all a, fmap({x => x}, a) = a
For all f and g and a, fmap({x => f(g(x))}, a) = fmap({x => f(x)}, fmap({x => g(x)}, a))

Maybe that function is implemented in a particular language as a method on F<A> of type F<B> fmap(Function<A, B> f), or you might use type classes as Haskell does.

Functions and methods *can* be mixed up in practice. Maybe map() is a method of the functor that takes a callback and returns a new functor. Maybe it's a function that takes another function and upgrades it to take a functor instead of a normal value (and returns a functor instead of a normal value).

I've always found the laws-based approach to be *spectacularly* unhelpful for learning the concepts. For me they helped firm up the corners of my understanding after I had a decent grasp of what everything else actually did, but they were just distracting noise before that. (Particularly for applicative and traversable functors, which are more complex/non-obvious in their law-structure.)
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
Xanthir
My HERO!!!
Posts: 5215
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Coding: Fleeting Thoughts

Postby Xanthir » Tue Jun 07, 2016 12:10 am UTC

Yakk wrote:
Xanthir wrote:I've spent a lot of time digesting them, and finally came to what I think is a really simple way of understanding them - a functor is a wrapper object around some arbitrary value(s), that has a well-behaved .map() method.

The values need not be there.

Sure, but getting into the more abstract things you can represent with functors isn't great for early learning. At least, it confused the *hell* out of me when I was first learning. A *pure* container-based approach is also misleading, but at least starting from that direction gives an easier learning path.

Hmm, I think I'm wrong on functor: I was reading it at [T]->(T->U)->U, not [T]->(T->U)->[U] (the second of which is map?)

Right, it's the latter. *All* of the functor variants have a characteristic operation that upgrades some sort of function into an [A]->[B] function.

For plain functors, map upgrades "plain" functions: A->B into [A]->[B].
For monadic functors, bind upgrades ones that return the functor: A->[B] into [A]->[B]
(Comonadic functors do the opposite: extend upgrades [A]->B into [A]->[B].)
For applicative functors, ap upgrades "wrapped" functions: [A->B] into [A]->[B]
Traversable functors are a little more complex.
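The first two rows of that table can be sketched for std::optional as the functor (omap/obind are hypothetical names for this sketch, not standard C++):

Code: Select all

```cpp
#include <cassert>
#include <optional>

// map: upgrades a plain A -> B so that it acts on optional<A>.
template<typename X, typename F>
auto omap(const std::optional<X>& m, F f) -> std::optional<decltype(f(*m))> {
    if (m)
        return f(*m);
    return std::nullopt;
}

// bind: upgrades an A -> optional<B>, i.e. f already returns the functor.
template<typename X, typename F>
auto obind(const std::optional<X>& m, F f) -> decltype(f(*m)) {
    if (m)
        return f(*m);
    return std::nullopt;
}
```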

What is [T]->(T->U)->U, just a bound argument without the function?

That's nothing, within the functor algebra. Just some function that consumes a container and a callback.

For some reason, I thought a functor was sort of a dual of the value, such that ft = [t]f = u for f:T->U. Looks like I was wrong.


A monadic functor is just a functor that knows how to smoosh nested instances of itself together, usually denoted through a .join() method.

So it has join which does [[T]]->[T]? (sufficiently nicely)

Yeah.

I've done .chain or .bind a few times in C++, but not by defining .join.

Yeah, it's actually awkward to *use* join() in practice; bind() has much better ergonomics in general usage. But it can be very useful pedagogically to think of the monadic functor in terms of its join operation. Similarly, just directly combining two applicative functors is annoying in practice; you want to use the ap operation and do the funky function application, but *thinking* in terms of direct combination is easier when understanding a new structure.

An applicative functor is a multi-arg version of a normal functor. The easier way for me to think of it is that it's a functor that knows how to smoosh together two *independent* versions of itself (siblings, not nested wrappers). This is often very similar to how you smoosh nested things, but not always; sometimes there are multiple well-behaved ways to smoosh together siblings, and not all of them work for nested stuff.

(Example: the most natural way to smoosh nested Lists is to flatten them into a single master List, and the analogous way to smoosh together sibling Lists is to cross-product them, creating a list of tuples containing every pair of items from the two siblings. (Showing that this is analogous is interesting, if you don't see it immediately.) There's another way to smoosh together sibling Lists, tho - pair up their values index-by-index, so you get a list whose first value is a tuple of the two siblings' first values, whose second value is a tuple of the two siblings' second values, etc. This is traditionally called a ZipList, and it actually can't be generally extended to nested lists. Again, interesting to show why.)

This lets you map variadic functions over a functor: if all the arguments are in the same type of functor, and it's applicative, you can smoosh them together into a single functor holding a tuple of the argument values, and then map the function over that.
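The two ways of smooshing sibling Lists can be sketched directly for std::vector (cross and zip are illustrative names for this sketch):

Code: Select all

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Cross-product smoosh: every pair from the two siblings (the "List" way).
template<typename A, typename B>
std::vector<std::pair<A, B>> cross(const std::vector<A>& xs,
                                   const std::vector<B>& ys) {
    std::vector<std::pair<A, B>> r;
    for (const auto& x : xs)
        for (const auto& y : ys)
            r.push_back({x, y});
    return r;
}

// Index-by-index smoosh: the "ZipList" way.
template<typename A, typename B>
std::vector<std::pair<A, B>> zip(const std::vector<A>& xs,
                                 const std::vector<B>& ys) {
    std::vector<std::pair<A, B>> r;
    for (std::size_t i = 0; i < xs.size() && i < ys.size(); ++i)
        r.push_back({xs[i], ys[i]});
    return r;
}
```

Both are well-behaved ways to combine siblings, but they clearly produce different results on the same inputs.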

The cross product case seems computationally expensive and impractical.

Simple operations that generate exponential work ... are usually best avoided?

It's just nested loops!

Code: Select all

ap([func], [a,b,c,...], [1,2,3,...])


is morally equivalent to

Code: Select all

for first in [a,b,c,...]:
  for second in [1,2,3,...]:
    result.append(func(first,second))


However, it's traditionally implemented and understood in a different way, both because this second way is more natural in Haskell and related heavy-functional languages, and because it avoids having to deal with tuples ever. Instead, you put *the function you want to map* in an instance of the functor, and then call another method (traditionally called .ap()), passing it the argument functor. .ap() knows how to reach into each of the functors, pull out the function and value and call them, then put the result in a smooshed-together functor made from the earlier two wrappers (as I described above). If you curry the function, so being called with the first arg just produces another function that expects the second arg, you can repeat this until all the args are filled in and you're left with a single functor containing the result. Multi-arg mapping functions without ever having to actually acknowledge that functions can *have* multiple args!

And, in the above case, it maybe generates exponential amounts of list output? Well, not for non-list functors naturally.

Right. For example, the Maybe monad (representing either a value, or a failure) is easy to smoosh - if both Maybes have values, just use those; if either of them are failure, the whole thing is automatically failure.
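That Maybe-style smooshing is only a few lines over std::optional (smoosh is an illustrative name for this sketch):

Code: Select all

```cpp
#include <cassert>
#include <optional>

// Smooshing two *sibling* optionals with a binary function:
// both present -> apply f; either absent -> the whole result is absent.
template<typename A, typename B, typename F>
auto smoosh(const std::optional<A>& a, const std::optional<B>& b, F f)
    -> std::optional<decltype(f(*a, *b))> {
    if (a && b)
        return f(*a, *b);
    return std::nullopt;
}
```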

Then there's "traversable" functors, which are another subclass of "applicative" and have a .traverse() method that gives you a powerful way to iterate a mapping function over a "structured" functor (like Lists or Trees) without destroying the structure. Through ~clever tricks~, they let you automatically *swap* nested functors, so if your value is like A<B<value>>, you can automatically swap it to B<A<value>> as long as the A is traversable and the B is at least applicative, without having to know anything about what B is.

This lets you get around a major weakness of monadic functors, which is that they're not composable: if you try to compose two monads (say you have a List of Options), then if you call them in the monadic way (the mapped function returns another instance of the composed monad), you'll end up with the value wrapped up like A<B<A<B<value>>>>, and neither A nor B's .join() method can work automatically. If B is traversable, tho, then you can map the swapper into the middle, giving you A<A<B<B<value>>>>, then call A's join once on the outside, getting A<B<B<value>>>, then map B's join in one level to get A<B<value>>, exactly where you wanted to be!
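The functor-swapping operation is short enough to sketch when specialized to a List of Maybes: turning vector<optional<T>> into optional<vector<T>>, with any missing element collapsing the whole result (this is essentially Haskell's sequence, restated as a sketch):

Code: Select all

```cpp
#include <cassert>
#include <optional>
#include <vector>

// Swap the two functors: all elements present -> an optional holding the
// plain values; any element absent -> the whole result is nullopt.
template<typename T>
std::optional<std::vector<T>> sequence(const std::vector<std::optional<T>>& xs) {
    std::vector<T> r;
    for (const auto& x : xs) {
        if (!x)
            return std::nullopt;
        r.push_back(*x);
    }
    return r;
}
```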

So the point of these axioms, within Haskell, is to have "kinds" of "type operations" (a list of X is a kind of type operation here?). If the type operation supports the right axioms, certain algorithms work on the type operations without knowing any more about the type operation, and the result of the algorithm can be described and has certain properties (which someone might desire).

Yeah, exactly. And these kinds in particular (a) can be used to model a whole bunch of things, and (b) have a lot of useful abstract operations associated with them.

It's nothing to do with Haskell, Haskell just popularized it first and has good static typing support for it. For example, Dr. Boolean's Mostly Adequate Guide shows how to use functors well in JS, and has several useful and compelling examples strewn thruout.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
Wildcard
Candlestick!
Posts: 251
Joined: Wed Jul 02, 2008 12:42 am UTC
Location: Outside of the box

Re: Coding: Fleeting Thoughts

Postby Wildcard » Tue Jun 07, 2016 4:41 am UTC

This is probably heretical, but here goes my fleeting thought.

So, I just read the clearest explanation of monads I've ever encountered, after wading through the second clearest explanation of monads I've encountered.

Coming from a shell scripting background (with lots of text processing and extensive use of the more powerful features of grep, sed, and awk) I think I see how I'd relate monads to shell scripting.

Monads handle piping data between functions in typed languages where the types otherwise wouldn't match up.

In shell scripting, everything is a string—or an implicitly newline delimited array (e.g. in sed or awk) complete with implicit looping—and thus the distinction between data structures becomes mostly irrelevant. If you want some extra info to be passed along (in a shell pipeline), you mash it onto the end of each line of data and un-mash it with grep or sed at the other end before using the value.

Okay, how bad did I do? :)
There's no such thing as a funny sig.

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Tue Jun 07, 2016 7:32 am UTC

Yakk wrote:The operation:

If X is a complete type, m is an instance of M<X>, f is an instance of a Callable type and std::result_of_t<F(X)> equals Y for some type Y then m * f is an expression of type M<Y>.

also seems easier to use than the one you describe, even on optionals. With join it becomes:

If X is a complete type, m is an instance of M<X>, f is an instance of a Callable type and std::result_of_t<F(X)> equals Y for some type Y then m * f is an expression of type M<Y>. If, however, Y is of type M<U> for some U, then m*f is instead of type M<U>.

which I thought corresponded to bind/chain/whatever. A map followed by a join?

The second operation is more complex, and it can always be recovered from the first by composing return() after the callable before the bind(), so functional programmers prefer the first operation and define it as bind().

Functors are much simpler than Monads. A functor in C++ would be:
A template (or template-alias) F is called a Functor if the following conditions are true:
  • For every type X the expression F<X> refers to a type.
  • If g is an instance of a Callable type G and m is an instance of F<X>, then fmap(g, m) is an expression of type F<std::result_of_t<G(X)>>.
  • Certain axioms hold


The following code turns std::vector into a Functor:

Code: Select all

template<typename X, typename G>
std::vector<std::result_of_t<G(X)>> fmap(G g, const std::vector<X> &m) {
    std::vector<std::result_of_t<G(X)>> r;
    std::transform(m.begin(), m.end(), std::inserter(r, r.begin()), g);
    return r;
}
Last edited by korona on Tue Jun 07, 2016 7:54 am UTC, edited 2 times in total.

User avatar
phlip
Restorer of Worlds
Posts: 7543
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

Re: Coding: Fleeting Thoughts

Postby phlip » Tue Jun 07, 2016 7:45 am UTC

korona wrote:
The following code turns std::vector into a Functor:

Code: Select all

template<typename X, typename G>
std::vector<std::result_of_t<G(X)>> fmap(G g, const std::vector<X> &m) {
    std::vector<std::result_of_t<G(X)>> r;
    std::transform(m.begin(), m.end(), std::inserter(r, r.begin()));
    return r;
}

I'm not super familiar with C++ template nonsense, but I'm at least 70% sure you don't actually call g at any point here?

Code: Select all

enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}
[he/him/his]

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Tue Jun 07, 2016 7:50 am UTC

Oh, thanks for pointing that out. I fixed the post above.

User avatar
Xanthir
My HERO!!!
Posts: 5215
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Coding: Fleeting Thoughts

Postby Xanthir » Tue Jun 07, 2016 12:32 pm UTC

Wildcard wrote:This is probably heretical, but here goes my fleeting thought.

So, I just read the clearest explanation of monads I've ever encountered, after wading through the second clearest explanation of monads I've encountered.

Coming from a shell scripting background (with lots of text processing and extensive use of the more powerful features of grep, sed, and awk) I think I see how I'd relate monads to shell scripting.

Monads handle piping data between functions in typed languages where the types otherwise wouldn't match up.

In shell scripting, everything is a string—or an implicitly newline delimited array (e.g. in sed or awk) complete with implicit looping—and thus the distinction between data structures becomes mostly irrelevant. If you want some extra info to be passed along (in a shell pipeline), you mash it onto the end of each line of data and un-mash it with grep or sed at the other end before using the value.

Okay, how bad did I do? :)

"Carrying extra data between functions that otherwise wouldn't know how to handle it" is one purpose of functors (including monadic functors), yeah. It's one way to use the Writer monad, like your first link talks about.

And more generally, monadic functors make functions composable in one situation when they otherwise wouldn't be - they upgrade functions of the form "A -> monad(B)" to "monad(A) -> monad(B)", so you can chain two such functions together.

But the concept of "what a functor does" is much wider than that. (Which is why I stress the functional aspects rather than the "meaningful"/abstract ones.) A functor is a container that holds a value and lets you map functions over that value without them having to be aware of what the container is and how it works. The various elaborations of functors perform variations of this for slightly more complex situations. Because this is so abstract, you can model a really surprising amount of things with it. Any single metaphorical model will fail on at least some types of functors; only a mechanical understanding will take you thru every one.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
Xeio
Friends, Faidites, Countrymen
Posts: 5086
Joined: Wed Jul 25, 2007 11:12 am UTC
Location: C:\Users\Xeio\
Contact:

Re: Coding: Fleeting Thoughts

Postby Xeio » Tue Jun 07, 2016 2:22 pm UTC

So I can either add a configuration to the "Environment" page where it's easily visible and comparable* between environments, but I have to duplicate around 10 settings over dozens of environments and maintain them individually. Or I can add it to the environment tree structure, where they're completely hidden unless you go looking for them, but at least it's maintainable... at least if you know to look there.

*Ok, well, once they fix the compare button some distant time in the future, woo vendor bugs. Is this what it's like when we ship broken software to our customers?

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Tue Jun 07, 2016 4:09 pm UTC

Xanthir wrote:But the concept of "what a functor does" is much wider than that. (Which is why I stress the functional aspects rather than the "meaningful"/abstract ones.) A functor is a container that holds a value and lets you map functions over that value without them having to be aware of what the container is and how it works. The various elaborations of functors perform variations of this for slightly more complex situations. Because this is so abstract, you can model a really surprising amount of things with it. Any single metaphorical model will fail on at least some types of functors; only a mechanical understanding will take you thru every one.

A functor does not have to be a data structure. From a category theory POV a Haskell functor is an endofunctor on the category of Haskell types. That means that it is a mapping F that maps each type A to a different type F(A) and also maps functions f between two types A -> B to functions F(f) between F(A) -> F(B). It does that while preserving the structure of the category, i.e. it commutes with function composition.

An example of a functor that is not a data structure would be a map that takes any type and maps it to a type that only has a single instance.

User avatar
Xanthir
My HERO!!!
Posts: 5215
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Coding: Fleeting Thoughts

Postby Xanthir » Tue Jun 07, 2016 4:23 pm UTC

Yes, yes, we can go all abstract and ephemeral. But that's not helpful. That's why functors have taken so long to start bleeding out of Haskell and other heavy-functional languages.

Functors are data structures. They are implemented in a computer as an object with a particular interface - namely, a map() method. There are a few other methods that constitute the various useful specializations of functors. In the most common examples the functor literally represents a container; in other cases it represents something more abstract (like an IO operation, holding "the state of the world"), but it's always implemented as a container holding some value (in the case of IO, just a function-with-side-effects you haven't evaluated yet).

Handwaving about abstract algebra is useful when *developing* these concepts, and once you get to advanced level it can be useful to work in those terms, but it's absolutely terrible for teaching people and for evangelizing the concepts.

If that functor you describe is actually *implemented*, it would be a data structure.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Tue Jun 07, 2016 4:43 pm UTC

Xanthir wrote:Functors are data structures. They are implemented in a computer as an object with a particular interface - namely, a map() method. There are a few other methods that constitute the various useful specializations of functors. In the most common examples the functor literally represents a container; in other cases it represents something more abstract (like an IO operation, holding "the state of the world"), but it's always implemented as a container holding some value (in the case of IO, just a function-with-side-effects you haven't evaluated yet).

The point of my post was not handwaving over abstract algebra. The functor concept is broader than the container / data structure concept. IO is a perfect non-trivial example for a functor that is not implemented as a container.

(If you want call a function a data structure just because it is stored as machine instructions in RAM then of course IO is implemented as a container. But I doubt that most people would call a function a data structure.)

I come from a mathematical background and learned about functors in math before I learned about them in Haskell; knowing both sides certainly helps to understand the intention behind them. Even if you're not interested in algebra it might be useful to understand that a functor is a higher-order function (a function on types and not values) and not necessarily a container.

User avatar
Wildcard
Candlestick!
Posts: 251
Joined: Wed Jul 02, 2008 12:42 am UTC
Location: Outside of the box

Re: Coding: Fleeting Thoughts

Postby Wildcard » Tue Jun 07, 2016 5:21 pm UTC

Xanthir wrote:But the concept of "what a functor does" is much wider than that.

I know there is an (evidently heated) discussion of the niceties of functors in full play here, much of it over my head at present. I was interjecting an actual "Fleeting Thought" into the flow of things. ;) My comment comparing monads to shell scripting filters (grep, sed, awk, etc.) made no mention of functors at all.

I'm still fuzzy on functors and it's unlikely that the discussion here will clear it up for me (which is fine). But, how accurate (or how bad) was my comparison regarding monads? Just monads, not functors. :)
There's no such thing as a funny sig.

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Coding: Fleeting Thoughts

Postby headprogrammingczar » Tue Jun 07, 2016 6:15 pm UTC

Wildcard wrote:
Xanthir wrote:But the concept of "what a functor does" is much wider than that.

I know there is an (evidently heated) discussion of the niceties of functors in full play here, much of it over my head at present. I was interjecting an actual "Fleeting Thought" into the flow of things. ;) My comment comparing monads to shell scripting filters (grep, sed, awk, etc.) made no mention of functors at all.

I'm still fuzzy on functors and it's unlikely that the discussion here will clear it up for me (which is fine). But, how accurate (or how bad) was my comparison regarding monads? Just monads, not functors. :)


I did a brief skim of your first link, https://blog.jcoglan.com/2011/03/05/tra ... ever-read/, and it doesn't seem to have any inaccuracies. In your shell scripting comment you have probably described a particular implementation, but it's hard to tell just from that, and a lot of "hints" get lost by not having a type system to guide you. That link has this weakness as well, though it seems to do a good job explaining around it. You might just get more out of reading the sigfpe blog post that it links to.
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

User avatar
Xanthir
My HERO!!!
Posts: 5215
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Coding: Fleeting Thoughts

Postby Xanthir » Tue Jun 07, 2016 6:53 pm UTC

I'm still fuzzy on functors and it's unlikely that the discussion here will clear it up for me (which is fine). But, how accurate (or how bad) was my comparison regarding monads? Just monads, not functors. :)

A monad is a functor; "monad" is shorthand for "monadic functor". (headprogrammingczar actually answered your question, so i'll leave it at that.)

korona wrote:The point of my post was not handwaving over abstract algebra. The functor concept is broader than the container / data structure concept. IO is a perfect non-trivial example for a functor that is not implemented as a container.

(If you want call a function a data structure just because it is stored as machine instructions in RAM then of course IO is implemented as a container. But I doubt that most people would call a function a data structure.)


IO *is* implemented as a container, though. IO and Reader are *literally the exact same thing*, and both of them are *literally the exact same thing* as Function. In functor form, they're all a container holding a function that you'll evaluate later when you want to extract the "value" from them. The only reason they're distinguished is because there are several distinct semantics that can all be represented by the same internal representation, and giving things good names that represent their usage is important.

The abstract talk about what IO and Reader each "represent" prevented me from understanding them at more than a minimal surface level for a long time. When I finally got an explanation of what they actually *were*, implementation-wise, I was angry at how simple they were compared to the complicated descriptions of them. (Then I was happy about how clever they were, capturing such useful semantics in such simple implementations.)

This is nowhere near "machine instructions in RAM" level. It's just how you actually write the danged thing in a computer language.

(In a language like JS, they're all literally container objects with an internal property storing a function, because you need that structure to make methods work. In languages with better type support for multimethods, like Haskell, they can instead just be literal functions, tagged appropriately so you can invoke the correct map() implementation or whatever.)
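A minimal sketch of that "container object with an internal property storing a function" in C++, assuming a Reader whose map() just composes another function on the way out (names are illustrative, not any library's API):

Code: Select all

```cpp
#include <cassert>
#include <functional>
#include <utility>

// A Reader as described: literally a container whose one internal property
// is a function; nothing runs until you call .run() with an environment.
template<typename Env, typename A>
struct Reader {
    std::function<A(Env)> run;

    // map() composes f onto the stored function, producing a new Reader.
    template<typename F>
    auto map(F f) const {
        auto r = run;  // capture the inner function by value
        using B = decltype(f(r(std::declval<Env>())));
        return Reader<Env, B>{[r, f](Env e) { return f(r(e)); }};
    }
};
```

Extracting the "value" means finally supplying the environment to the stored function, e.g. building up a pipeline with map() and then calling run(20) at the end.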

I come from a mathematical background and learned about functors in math before I learned about them in Haskell; knowing both sides certainly helps to understand the intention behind them. Even if you're not interested in algebra it might be useful to understand that a functor is a higher-order function (a function on types and not values) and not necessarily a container.

A whole lot of people who like monads and such come from a math background. Like I said, I think that's done a lot to advance these elegant abstractions developmentally, but I feel like it's been a massive drag on the spread of the knowledge. To almost anyone who hasn't *studied* abstract algebra, an AA "explanation" of something is worse than useless; it obfuscates the concept and makes it seem like something complicated and difficult, even if it's really quite trivial and easy to explain by just looking at code.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Tue Jun 07, 2016 9:43 pm UTC

Xanthir wrote:
I come from a mathematical background and learned about functors in math before I learned about them in Haskell; knowing both sides certainly helps to understand the intention behind them. Even if you're not interested in algebra it might be useful to understand that a functor is a higher-order function (a function on types and not values) and not necessarily a container.

A whole lot of people who like monads and such come from a math background. Like I said, I think that's done a lot to advance these elegant abstractions developmentally, but I feel like it's been a massive drag on the spread of the knowledge. To almost anyone who hasn't *studied* abstract algebra, an AA "explanation" of something is worse than useless; it obfuscates the concept and makes it seem like something complicated and difficult, even if it's really quite trivial and easy to explain by just looking at code.

I think the reason why so many people have a hard time understanding monads, functors etc. (both in math and in Haskell) is that they try to study the abstract concept without studying concrete examples first. It's hard to grasp monads when you ask "What is a monad?". It's much easier to ask "What is Maybe? What is IO? What is Reader?". After a while you realize "Hey, all those types have something in common".

