## 1270: Functional

This forum is for the individual discussion thread that goes with each new comic.

Moderators: Moderators General, Prelates, Magistrates

orthogon
Posts: 2934
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

### Re: 1270: Functional

I think you've just invented Forth.

The order of operations is the point. The processor can only do one very simple thing at a time and has to do the complex thing by doing lots of the simple things in turn. But to assemble the results of the simple things into the complex thing requires storage for some of those intermediate results. So the processor is forever loading from memory into registers, performing operations, and storing the results. It's particularly the case when your functions are all nested, and especially if they're nested in a more general way than your example, such as add(mul(a,b), mul(c, div(d,e))). You need to do the multiplication a*b, keep the result behind your ear, then divide d by e, multiply the result by c and then add that onto the result you stored earlier. The simplest of languages, like machine code, will require you to spell this out in all its glory, and that's how you end up with the imperative style, since you're telling the processor exactly what to do, step by step.
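The flattening described above can be sketched in Python (names and data layout invented for illustration): a compiler walks the nested expression and emits one simple step per operation, inventing a temporary for each intermediate result.

```python
# A minimal sketch of how a compiler flattens add(mul(a,b), mul(c, div(d,e)))
# into step-by-step instructions, each parking an intermediate result
# in a temporary. Expression trees are tuples like ('add', x, y).

def flatten(expr, code, counter=[0]):
    """Turn a nested expression into a list of three-address steps."""
    if not isinstance(expr, tuple):        # a leaf: just a variable name
        return expr
    op, lhs, rhs = expr
    l = flatten(lhs, code, counter)
    r = flatten(rhs, code, counter)
    counter[0] += 1
    tmp = f"t{counter[0]}"                 # fresh temporary "register"
    code.append((op, l, r, tmp))
    return tmp

code = []
result = flatten(('add', ('mul', 'a', 'b'),
                         ('mul', 'c', ('div', 'd', 'e'))), code)
for op, l, r, dst in code:
    print(f"{dst} = {op} {l} {r}")
```

The emitted order matches the prose: multiply a by b first, then divide d by e, multiply by c, and only then add the stored result back in.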

Adding subroutines, procedures or function calls shifts some of the work onto the language, letting you write your nested functions out as you would prefer, but requires more sophisticated parsing and a more complex run-time environment: in particular for arbitrary levels of nesting you need a stack. Arithmetic expressions were probably one of the first things to be implemented in this way, because originally we thought computers were going to be used for doing big calculations (rather than distributing images of lesbian sex).
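Since the post opens with "you've just invented Forth": here is a hedged Python sketch of the stack idea, evaluating that same expression in postfix (Forth) order with an explicit stack instead of named temporaries.

```python
# Hedged sketch: evaluating  a b * c d e / * +  (postfix order)
# with an explicit stack, the way a stack machine or Forth would.

def eval_rpn(tokens, env):
    ops = {'+': lambda x, y: x + y, '*': lambda x, y: x * y,
           '/': lambda x, y: x / y}
    stack = []
    for tok in tokens:
        if tok in ops:
            y = stack.pop()          # note the order: y is popped first
            x = stack.pop()
            stack.append(ops[tok](x, y))
        else:
            stack.append(env[tok])   # push a variable's value
    return stack.pop()

env = {'a': 2, 'b': 3, 'c': 4, 'd': 10, 'e': 5}
# add(mul(a,b), mul(c, div(d,e))) in postfix:
print(eval_rpn(['a', 'b', '*', 'c', 'd', 'e', '/', '*', '+'], env))  # → 14.0
```

The stack here is exactly the "storage for intermediate results" the post talks about: nesting depth in the source becomes stack depth at run time.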
xtifr wrote:... and orthogon merely sounds undecided.

moiraemachy
Posts: 190
Joined: Wed Jan 04, 2012 9:47 pm UTC

### Re: 1270: Functional

Pfhorrest wrote:Yes, that is a major fact which my previous (now uncertain, but returning thanks to you) impression of how computers work hinged upon. And every operation a processor can perform is composed of a bunch of those boolean functions wired together, correct? E.g. Addition of two 8-bit numbers is a complex of boolean functions taking 16 boolean values (which together encode two 8-digit binary numbers) and doing a bunch of logical functions on them in a way that outputs 8 boolean values which encode the sum of the two numbers encoded in the input booleans. And a program written in assembly is equivalent to a complex of those processor-operation functions.

Except at this level it seems to stop looking like a single complex function and instead gets written as a series of imperative commands. And a program written in C, which ultimately gets translated to this, continues to look like a series of imperative commands. And on up, until at some point someone decided that it would be nice to start writing functions instead of commands again, except now all our tools are built on top of tools built in the framework of issuing commands to the processor rather than handing it a function with a bunch of arguments to evaluate, even though that's what it's really been doing all along.

Here is the thing: CPUs are imperative. If you have N different additions in your code, you will not rewire the CPU to have N adders, or rewire the inputs of the various adders in the CPU. What happens is that you have one* adder that is being reused all the time, and a gatekeeper circuit that decides what goes in and what goes out. Now, this gatekeeper (which is actually much more complex than the circuit that adds) is imperative: it sequentially, being controlled by the clock, reads bytes (from the memory) that tell it what to do. In sequence. Imperatively. That is why the Turing machine abstraction is so important: you don't care about logic gates anymore. You only care about what commands you send to the gatekeeper circuit.
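The adder being reused really is a pure boolean function, though. A hedged Python sketch (bit layout and helper names invented for illustration) of an 8-bit ripple-carry adder built from nothing but NOR:

```python
# Everything below is derived from a single NOR gate, to make the point
# that addition is "just" a big boolean function of the input bits.

def NOR(a, b): return int(not (a or b))
def NOT(a):    return NOR(a, a)
def OR(a, b):  return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

def add8(xs, ys):
    """Add two 8-bit numbers given as lists of bits, least significant first."""
    out, carry = [], 0
    for a, b in zip(xs, ys):
        s = XOR(XOR(a, b), carry)                     # sum bit
        carry = OR(AND(a, b), AND(carry, XOR(a, b)))  # carry out
        out.append(s)
    return out  # the final carry is dropped, like real 8-bit overflow

def to_bits(n):    return [(n >> i) & 1 for i in range(8)]
def from_bits(bs): return sum(b << i for i, b in enumerate(bs))

print(from_bits(add8(to_bits(100), to_bits(55))))  # → 155
```

Note there is no clock and no memory here: given the same 16 input bits, the same 8 output bits always come out. The imperative part is everything around this circuit, not the circuit itself.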

To clarify how the gatekeeper circuit works, here is everything a command will specify**, in a simple architecture:
- From which two memory addresses do I get the bytes for my operation?
- To which logic operation circuit do I send these bytes: the adder, the AND, the OR?
- To which memory position do I write the result back? (It is only written once the gatekeeper circuit is sure enough time has passed for the output of the logic operation circuit to be correct; the clock makes this call.)
- From which memory position do I get the next command? (It is only loaded into the CPU once the current instruction has completed, or inputs and outputs could get mixed up. Clock again.)

In this example, with 64KB of memory (16 bits to address every byte), your instructions could be 11 bytes: 4 for the input addresses, 2 for the output address, 1 for the operation code (which decides which operation to perform on the inputs), and 4 for the next-instruction addresses. Why 4? Because then you have two different destination addresses, and the operation code can decide which one to go to: say, #1 if the result is nonzero, #2 if it is zero.

*Maybe more
** This is one possible architecture which I consider quite simple to understand, but it is absolutely not the only one. The core idea is the same everywhere, though: instructions being read in sequence.

Hope it is clear. In case it's still a mess, the main point is: yes, it is just logic gates. But the presence of memory makes it more complex than a chain of truth tables.
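The gatekeeper described above can be sketched in a few lines of Python (field layout and opcodes invented for illustration): each instruction names two input addresses, an operation, an output address, and two candidate next-instruction addresses, chosen by whether the result was zero.

```python
# Hedged sketch of the "gatekeeper" architecture: a fetch-execute loop
# over instructions of the form
#   (in1, in2, opcode, out, next_if_nonzero, next_if_zero)

def run(program, mem, pc):
    ops = {'add': lambda x, y: x + y, 'sub': lambda x, y: x - y}
    while pc is not None:
        in1, in2, op, out, nxt_nonzero, nxt_zero = program[pc]
        mem[out] = ops[op](mem[in1], mem[in2])   # one trip through the ALU
        pc = nxt_zero if mem[out] == 0 else nxt_nonzero
    return mem

# mem[0]: counter, mem[1]: the constant 1, mem[2]: running total
mem = {0: 5, 1: 1, 2: 0}
program = {
    10: (2, 0, 'add', 2, 20, 20),    # total += counter, then go to 20
    20: (0, 1, 'sub', 0, 10, None),  # counter -= 1; loop back unless zero
}
print(run(program, mem, 10)[2])      # 5+4+3+2+1 → 15
```

Notice there is no "loop" construct anywhere: looping is nothing but the next-instruction address pointing backwards, exactly as in real machine code.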

Ninja'd by orthogon

Kit.
Posts: 1079
Joined: Thu Jun 16, 2011 5:14 pm UTC

### Re: 1270: Functional

Pfhorrest wrote:Let me ask a question to try to clarify the issue of how computers work at the lower level. Say you had nothing but hardware NOR gates, some kind of array of bits to store outputs into and read inputs in from, and the ability for a user to read and write to those bits himself to send input into and receive output from the system. Could you, in principle, build a dedicated hardware version of, say, Photoshop?

Yes, and it's called Intel Pentium PC. As long as you run nothing but Photoshop on it.

Pfhorrest wrote:If so, it seems that the code for Photoshop (and in principle any other program) boils down to a very complex logical function, built ultimately out of nested NORs but which could be abstracted to a nest of much higher-level logical expressions without the need for any imperative-style programming language along the way. The program itself could be seen as one large function, every subroutine of it a sub-function, every command used by each subroutine a sub-sub-function, every processor op a sub-sub-sub-function, on down to the individual logic gates.

Is that not how it really works?

It's not. The reason is that "a tree of logic gates as a function" language has neither recursion (which has no direct mapping onto hardware logic gates) nor iteration (which has direct mapping onto hardware logic gate loops, but has no direct mapping into your language).

Writing Photoshop in such a language is something that no programmer would want to do (unless paid per line of code and able to use code generators... then I'll be the first in line).

Besides, your language lacks higher-order functions. Which makes it impossible to write Photoshop plugins.

Surfer
Posts: 54
Joined: Sat Aug 11, 2012 6:26 pm UTC

### Re: 1270: Functional

Pfhorrest wrote:
Mikeski wrote:the "sandal instead" reference is two NOR gates wired as a flip-flop.

Ah, that is interesting, though not (I think) necessary to the point I was getting at. Can you explain how that works?

Here's a way of building a NOR flip flop that might be clearer. You've been imagining NOR gates wired together in a feedforward network, with ones and zeros flowing along the wires. But sometimes they're wired to form loops instead of just feeding forward. If a wire forms a loop, then either a 0 flows around in a circle forever or a 1 flows around in a circle. So the loop forms a memory of one bit. If you insert two NOR gates into that loop, with each gate having the loop wire as one input and output, and an external 0 feeding into the other input, then you again get a single bit of memory: the loop will either have a 0 flowing from the first NOR to the second and a 1 flowing around the other half, or vice versa. So there are two possible states: 01 or 10. That's one bit of memory.

Now, if you momentarily change one of those outside inputs from a 0 to a 1, then change it back to 0, you will force the loop to contain a certain state. If you do the change to one NOR gate, you'll set it one way, and if you change the input to the other one, you'll set it the other way. In this way, we have a long-term way to store one bit of information. We can read it by having the loop split off and become an input to some other circuit. And we can set it by setting one or the other input to 1 briefly. This is a simple flip flop. There are many other kinds of flip flops that are used. By the way, it's more common to use NAND gates than NOR gates, but either can be used to do anything.

Getting back to the topic: feedforward circuits of NOR gates are "functional", but feedback circuits are not. The output of a flip flop is not just a function of its current inputs, because there's also an internal state, that changes over time. Computers use gates that are connected both acyclically and cyclically, so computers at the lowest level are imperative, not functional.
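The cross-coupled NOR loop described above can be simulated directly. A hedged Python sketch (the iteration stands in for signals propagating around the loop until it settles):

```python
# Two cross-coupled NOR gates: Q = NOR(R, Qbar), Qbar = NOR(S, Q).
# We re-evaluate both gates until the feedback loop reaches a fixed point.

def NOR(a, b): return int(not (a or b))

def settle(S, R, Q, Qbar):
    """Iterate the two gates until the loop state stops changing."""
    for _ in range(10):               # more than enough rounds to stabilize
        Q_new    = NOR(R, Qbar)
        Qbar_new = NOR(S, Q)
        if (Q_new, Qbar_new) == (Q, Qbar):
            break
        Q, Qbar = Q_new, Qbar_new
    return Q, Qbar

state = (0, 1)
state = settle(1, 0, *state)   # pulse S=1: set the latch
state = settle(0, 0, *state)   # inputs back to 0: the bit is remembered
print(state)                   # → (1, 0)
state = settle(0, 1, *state)   # pulse R=1: reset
state = settle(0, 0, *state)
print(state)                   # → (0, 1)
```

The key point is visible in the code: with both inputs at 0, the output depends on the *previous* state, not on the inputs. That is exactly what makes it memory rather than a function.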

Pfhorrest wrote:And then it would seem higher-level languages should essentially be made up of a bunch of programs (functions) written in that level of code. For a very simple example you could have a slightly higher-level function which adds X to A "in place" as it were, call it the "add-to" function, where add-to(A,X) = store(A, add(load(A), X)). You could then piece those higher-level functions together into still higher-level functions until you have functions on the level of simple shell commands, and piece those together still into the realm of full-featured end-user programs. And at every stage the functional nature of the program would be clearly evident.

No, doing something "in place" is not functional. It's imperative. Your command store(A, add(load(A), X)) assumes there's a variable A that can change over time. In functional programming, there are no variables that can change. The "add" and "load" commands are functional, but the "store" command is imperative.

It might seem like being functional would prevent you from writing certain programs. But it turns out that anything you can do imperatively, you can also do functionally. You just have to think in a very different way. But at the lowest levels, computer CPUs are definitely imperative, because everything works using flip flops and registers whose contents can change.

There's a higher-level example that might make this clearer. Mathematica allows both functional and imperative programming. If x is a list of a million numbers, then the command x[[1]]=42 will change the first number on the list to be a 42. That's imperative. The command ReplacePart[x,1->42] makes a brand new list that is a copy of all million numbers, except that the first one in the new list is a 42 instead of a copy of the old first element. And then the function ReplacePart returns the new list of a million numbers. It doesn't change x. It doesn't change any variables. That's functional, because the original list didn't change. It was just copied to a new list that had a slight difference.

Of course, it would be horrendously inefficient to actually do all that copying on a real computer, because computers are inherently imperative. But Mathematica is smart enough to not really copy the whole list. It just maintains in memory an acyclic directed graph of nodes with reference counters, in such a way that it LOOKS like it made a copy of all million numbers, even though it actually did something else. And that's typical of functional languages. The compilers are smart enough to translate functional operations into equivalent imperative operations that are efficient on real hardware.
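The same contrast in hedged Python terms (the helper name `replace_part` is invented; this is an analogy to ReplacePart, not Mathematica itself):

```python
# Functional update: build a new tuple, leaving the original untouched.
def replace_part(xs, i, v):
    return xs[:i] + (v,) + xs[i + 1:]

x = tuple(range(1_000_000))
y = replace_part(x, 0, 42)
print(x[0], y[0])   # → 0 42  (the original is unchanged)

# Imperative update, for contrast: the original IS changed.
z = list(range(10))
z[0] = 42
print(z[0])         # → 42
```

Here the functional version really does copy a million elements; as the post says, real functional runtimes avoid that by sharing structure between the old and new values behind the scenes.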

heatsink
Posts: 86
Joined: Fri Jun 30, 2006 8:58 am UTC

### Re: 1270: Functional

Diadem wrote:But the question is, is [functional programming] better than imperative programming? I've not seen a single argument for this. I have seen many convincing arguments against.

Even in an imperative programming language, functional programming style makes your code easier to debug, extend, and maintain. Programmers certainly understand the idea of side effects, yet our limited mental faculties lead to lots of bugs when we rely on them too much. That's why project leaders want their C++ programmers to write functional code.

Haskell doesn't need side effects to be "the world's finest imperative programming language". Although imperative programmers are constantly writing commands, imperative programming languages are bad at manipulating commands. In Haskell, you can define new command functionality such as backtracking (useful for parsing). You can define functions that transform commands (useful for control flow abstractions and resource management). You can put commands in variables or data structures (useful in concurrent programs when thread X wants to tell thread Y what to do). Imperative workarounds use functions, objects, or data structures as proxies for commands, bloating programs and making bugs hard to find. Heck, some workarounds have turned into language extensions, which is pretty much an admission that the language isn't up to the task. *cough*c++11*cough*
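The "commands in data structures" point can be sketched in Python (a rough analogy, not Haskell: here a "command" is just a closure, and the queue plumbing is invented for illustration):

```python
# Thread X puts commands (plain values!) into a queue; thread Y pulls them
# out and runs them. Building a command and running it are separate steps.

import queue
import threading

commands = queue.Queue()
results = []

def worker():
    while True:
        cmd = commands.get()      # receive a command as a value
        if cmd is None:           # sentinel: no more commands
            break
        results.append(cmd())     # running it is a second, separate step

t = threading.Thread(target=worker)
t.start()
commands.put(lambda: 2 + 2)       # thread X tells thread Y what to do
commands.put(lambda: "done")
commands.put(None)
t.join()
print(results)                    # → [4, 'done']
```

Haskell's IO values push this much further (transforming and composing commands, not just queueing them), but the core idea is the same: a command is data until something chooses to execute it.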

Kit.
Posts: 1079
Joined: Thu Jun 16, 2011 5:14 pm UTC

### Re: 1270: Functional

orthogon wrote:
Kit. wrote:
orthogon wrote:
DR6 wrote: ... Another benefit is that you can do "imperative programming"(aka monads) with things other than IO...

A monad sounds like what you have if you suffer from monorchism. And you know who else had that? That's right. Adolf Hitler, and here's a picture to prove it:
(And thus was created the Grand Unified Post, which satisfies Godwin's Law and Rule 34 in the same sentence.)
Spoiler:
Actually, I haven't gone looking for a picture of a single-occupancy scrotum. But if anyone has one I'll be happy to edit in a link. After all, Rule 34 only requires that the porn of it either exist already or come into existence as a result of its being mentioned.

How is that porn?

I was pretending that the spoiler contained a pornographic picture of a Hitler lookalike with his scrotum on display. I didn't want to actually go looking for such a picture, so I left it to your imagination. I suspect that's where the best porn is, anyway.

I'm not sure my imagination is where the best porn is. I'm not really into porn.

So, can we just say that we've found an exception to Rule 34?

Besides, Godwin's Law is morally obsolete. I suggest icanhascheezburger law.


Hafting
Posts: 57
Joined: Thu Feb 24, 2011 11:23 am UTC

### Re: 1270: Functional

Pfhorrest wrote:This is also why difficulties with threading programs for multiprocessing always struck me as counterintuitive, because in my mind -- not really understanding how exactly imperative languages get translated into the series of boolean logical functions that everything boils down to on the hardware level -- it seems like the computer itself must be inherently functional, so why doesn't it automatically just evaluate all the first tier of sub-expressions in parallel, then evaluate all the next tier of sub-expressions in parallel, and so on.

Well, is this clever humour?

In case you don't know, the computer is not inherently functional. It is inherently imperative. The reason we have so many imperative languages, which are also older than any of the functional ones, is that the imperative languages are much closer to how the microprocessor works. A processor does not reason the way a functional language does. It executes simple commands one after another, mostly in the order they appear. Commands like "add these two numbers", "store result in this memory location", "jump to another part of the code if the last result was bigger than 10".

Imperative languages were not so hard to make from building blocks like that. Functional languages are also built from the same primitive operations, but that is harder to do. A processor cannot "reason about mathematical statements", so lots of these primitive operations are needed to implement that reasoning. Hence, a program in a functional language may be short and elegant, but executing it can still be slow because the underlying hardware doesn't really work that way.

Earthling on Mars
Morning
Posts: 669
Joined: Sat Apr 06, 2013 6:35 pm UTC
Location: Somewhere in the Solar System

### Re: 1270: Functional

Could I make a suggestion to those who say they don't like functional languages? Or, for that matter, those who just haven't bothered to try?

Go try learning one.

When you have time, go download Haskell or another functional language, find a tutorial (I used this one), and work through it. I found it fun to learn to write things in a functional way. Now, I never actually got very far; I went through that tutorial quite a while ago and haven't touched Haskell since. I'm still an imperative programmer, and though I did manage to grasp the concept of monads, I never quite understood (or bothered to understand) how to actually use them in Haskell programs. But it was fun, and I think it helped me to program better in imperative languages; for example, I've been trying more and more to limit side effects lately.

By the way, I'd also recommend learning some other non-mainstream languages: namely Lisp (multi-paradigm) and Smalltalk (object-oriented). I found Smalltalk before I found any functional language, and Smalltalk's "blocks" helped me to understand lambda functions when I got to Haskell.

DR6
Posts: 171
Joined: Thu Nov 01, 2012 1:44 pm UTC

### Re: 1270: Functional

Hafting wrote:The reason we have so many imperative languages, which are also older than any of the functional ones

What? Lisp is way older than almost all modern languages: it's from 1958, while for instance C is from 1972.
And even if you consider Lisp a class of languages rather than an actual language, that still means functional languages have been around for a long time.

Hafting wrote:A processor cannot "reason about mathematical statements", so lots of these primitive operations are needed to implement them. Hence, a program in a functional language may be short and elegant, but executing it can still be slow because the underlying hardware doesn't really work that way.

That's just not true either. The processor can't "reason about mathematical statements", but the compiler totally can, with good results: in fact, compilers can reason about functional code far better than about imperative code. It's pretty normal in Haskell to write an algorithm using a "linked list" and have it compiled to something that doesn't even allocate, by using general functions like map, filter and foldl.
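A loose Python analogy of that fusion idea (not what GHC actually does internally, just the visible effect): generator pipelines stream one element at a time, so the "intermediate list" between the filter, the map, and the fold never materializes.

```python
# sum of the squares of the even numbers up to a million, written as a
# filter -> map -> fold pipeline. Each stage is lazy, so no intermediate
# million-element list is ever allocated.

nums    = range(1, 1_000_001)
evens   = (n for n in nums if n % 2 == 0)   # filter, lazily
squares = (n * n for n in evens)            # map, lazily
total   = sum(squares)                      # fold, consuming the stream
print(total)
```

In Haskell the compiler performs this kind of transformation automatically on ordinary list code; here we had to reach for generators explicitly, which is rather the Haskell poster's point.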

For instance (this is not supposed to be a complete proof, but an example), check the Computer Language Benchmarks Game (I don't think I can post links, the url is: " h t t p ://benchmarksgame.alioth.debian.org/"). Haskell, Erlang and Clojure rank quite high; the only mainstream languages above Haskell are C, C++, C# with Mono, and Dart, and ATS is second, that is, above C. (And Haskell probably isn't even using its LLVM bindings, which would make it faster.)

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

### Re: 1270: Functional

Hmm, I wasn't sure, but it appears you're right:

Paul Graham wrote:The language he defined in 1960 was missing a lot. It has no side-effects, no sequential execution (which is useful only with side effects anyway), no practical numbers, and dynamic scope. But these limitations can be remedied with surprisingly little additional code. Steele and Sussman show how to do it in a famous paper called "The Art of the Interpreter."

Most programming languages may well be based on the computer's primitive operations (an example from around the same time as Lisp was FORTRAN). Lisp kind of did it backward by starting off as math, which is why it has this timeless air about it; as PG said, "math doesn't get stale". Admittedly, not having side-effects seems to have been a bit of an accident at first (or the simplest form of Lisp, depending on how you look at it).

EverVigilant
Posts: 13
Joined: Mon Oct 10, 2011 12:15 pm UTC

### Re: 1270: Functional

Just checked in, I realize it's been a while.

Pfhorrest wrote:But at a much lower level than that, every instruction of a processor is a series of logic gates that executes a function taking in a number of boolean values and outputting the value of that function. In principle every single operation a processor does could be emulated by a bunch of nested NORs or NANDs.

This is the big confusing mystery which has made serious programming unapproachable to me. I have some higher-level hands-on experience with what kinds of code I can write to get the computer to do something, but from my perspective that is essentially just operating a preexisting program. E.g. I can write Javascript to make a web browser do something, or Ruby to make a web server do something, but that seems fundamentally little different from clicking tools and dragging them across the screen to make Photoshop or Illustrator do something (or automating that with a batch script): I still have no idea how it's doing what it's doing. But then on the other end I understand boolean logic and the fact that processors are made up of logic gates and how to combine a bunch of simple gates to get a complex operation. What I don't understand is what connects that low-level functionality to the high-level stuff, so when working at the high level stuff I just feel blind, like I know that if I do this it will get the result I want, but I have no idea why it will.

Well, yes, it's all just gates at bottom, but the first thing that's been done is organize those gates in a way that performs sequential instructions. Look up Arithmetic Logic Unit (ALU). That's where all the calculations are performed. It's a bunch of gates, ingeniously put together so that when you give it two data inputs (sets of bits specifying numbers to do an operation on) and one control input (specifying the basic operation to be performed, be it addition, multiplication, etc.), it outputs the result of that operation.
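The ALU idea sketched above, in hedged Python (opcode names and the 8-bit width are invented for illustration): a pile of independent circuits, with the control input selecting whose output counts.

```python
# Two data inputs, one control input, one output: a toy ALU.

def alu(opcode, a, b, width=8):
    mask = (1 << width) - 1          # model fixed-width registers
    ops = {
        'ADD': (a + b) & mask,
        'SUB': (a - b) & mask,
        'AND': a & b,
        'OR':  a | b,
        'XOR': a ^ b,
    }
    # In hardware, every operation is computed simultaneously by its own
    # block of gates; the opcode merely steers one result to the output.
    return ops[opcode]

print(alu('ADD', 200, 100))          # → 44, because 300 wraps in 8 bits
print(alu('AND', 0b1100, 0b1010))    # → 8
```

The dict lookup plays the role of the multiplexer that the opcode drives in a real ALU.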

If you look at assembly languages, you will see this in action. Each line contains an opcode (the control input) and one or two data arguments. (Not all of these use the ALU. If the opcode is a memory fetch, for example, the processor will read data at the specified location from the RAM and place it in one of the processor's internal registers.) If the opcode is an arithmetic operation, it will take data from the registers and feed it through the ALU and place the result in another register. Then the processor continues, executing the next instruction in sequence. (Another stored value called the program counter, which points to the memory location of the current instruction in the program, is incremented. Then the process starts over, with the processor reading the next instruction from memory based on the program counter. Control statements like loops and function calls directly modify the program counter itself).

What I have just described is the oversimplified, computer-engineering-undergrad version of a processor. Modern processors with pipelining and branch prediction and parallelism of course complicate matters immensely. But they do not change the fact that a program is still a logical sequence, and all these things are just designed to speed up the process.

So yes, at bottom it's all logic gates and whatnot. But those gates are specifically put together to make a machine that processes things in sequence, and that machine is the fundamental baseline of a computer. You can build a functional evaluation device out of logic gates, but it's not a computer as we know it. Computers all start with a processor which is a device for executing a sequential program. Any functional programming languages running on such a machine are compiled into such a sequential program before execution.

PM 2Ring
Posts: 3646
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Mid north coast, NSW, Australia

### Re: 1270: Functional

Would you classify FRACTRAN as a functional language?

I've played with Haskell a bit. I admit it's easier to write programs in than FRACTRAN, and although I do use some functional stuff in Python, I still find it rather hard to wrap my head around the Haskell way of doing things.

OTOH, I enjoy creating elaborate structures in Conway's Game of Life and some of those have a feel reminiscent of functional programming, especially the ones that do simple binary arithmetic.
Spoiler:
For example, I've made patterns that generate Fibonacci numbers and other quadratic number sequences; my most ambitious structure of this type is a Collatz sequence generator.

The full-sized version of this image of a Collatz sequence generator in Life can be loaded into Golly and other modern Game of Life applications.

orthogon
Posts: 2934
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

### Re: 1270: Functional

lalop wrote:Monads can be used to abstract the I/O of a program. To quote wikipedia:

We can think of a value of type IO as a function that takes as its argument the current state of the world, and will return a new world where the state has been changed according to the function's return value. The state created in this way can be passed to another function, thus defining a series of functions which will be applied in order as steps of state changes. This process is similar to how a temporal logic represents the passage of time using only declarative propositions.

I spent a few hours trying to understand that Wikipedia page, and feel I should warn fellow imperativistas who would like to be more functional that it's not a great place to start. It looks as though it's been edited to death: notation is used before being introduced and the whole thing is pretty hard to understand unless you're already fluent in Haskell and Category Theory. The Talk page is actually more enlightening, and goes a long way to explain why the article has ended up like it is through the well-intentioned efforts of some very clever people. Also, it contains a link to this very helpful tutorial.

After working through that, I think I sort-of get it. However, I can't help feeling that the "syntactic sugar" introduced at the end is effectively admitting that some problems are better expressed as a series of imperative steps. It seems to be saying "if you get your head around these monad things, use a couple of built-in monads, and then this syntactic sugar, you can make Haskell look a bit like BASIC. But don't worry, it all gets translated into Haskell, so you're not really cheating." The trouble with syntactic sugar is that the compiler is likely to give you errors related to the sugar-free version. (See also: templates in C++.)
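For fellow imperativistas, here is a rough Python analogy (not Haskell, and all names invented) of what that sugar desugars into: each "imperative-looking step" is really a function handed to a bind operator, and bind decides how the steps chain. This one implements Maybe-style failure propagation.

```python
# bind chains a step onto a value, short-circuiting on None
# (the stand-in for Haskell's Nothing).

def bind(value, func):
    return None if value is None else func(value)

def safe_div(x, y):
    return None if y == 0 else x / y

# The "sugared" version would read like three imperative statements:
#   a <- safe_div 10 2;  b <- safe_div a 5;  return b + 1
# which desugars into nested binds:
result = bind(safe_div(10, 2),
              lambda a: bind(safe_div(a, 5),
                             lambda b: b + 1))
print(result)                                   # → 2.0
print(bind(safe_div(10, 0), lambda a: a + 1))   # → None: failure propagates
```

The imperative-looking surface syntax is bookkeeping; the nested lambdas are what the program "really" is, which is exactly the sense in which do-notation isn't cheating.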

Actually that's the point that I've been meaning to make for a while: I'm all for writing the program in a way that looks more like the problem domain than the solution domain, and it seems to me that while some problem domains are certainly best expressed functionally, others look more OO-like and others still simply are imperative-shaped. A powerful language should make it easy to write in the style appropriate to the problem. I can see that ideas from functional languages have a lot to offer in this regard. But having imperative programming as a bolt-on achieved by at least two layers of mind-bending theory and obscure syntax doesn't seem like the obvious way forward.

ETA: I note that the difficulty of explaining monads seems to be becoming a field of study in its own right with terminology including "the monad tutorial fallacy".
xtifr wrote:... and orthogon merely sounds undecided.

ctdonath
Posts: 198
Joined: Wed Feb 08, 2012 2:40 pm UTC

### Re: 1270: Functional

Pfhorrest wrote:But instead, apparently, magic happens...

The magic you're missing is scale. You grasp every construct involved, because at its own level it is created and packaged in a manageable size, leveraging the packaging of comparable-looking lower-level constructs. And that's the magic: packaging manageable-sized constructs so they can be functional components of the next-layer construct...repeated and packaged and nested and reused and duplicated a bewildering number of times, such that the highest level looks like magic precisely because there is so freaking much going on below it, all packaged into manageable chunks.

We toss around metrics like gigahertz and terabytes with little consideration of how utterly freaking huge those numbers are. I mean we're talking computers running so fast that in the time it takes for light to go from the screen to your eye, the CPU has just processed another dozen or so instructions. In one second flat, no less than three billion instructions have been performed. On a MicroSD card the size of your pinky nail, you can store all the TV/movies you'll see this year; on a pocket-sized 1TB drive, we could store a first-person recording of your entire life.

Now, take all that capacity for data & processing, and divide it all up into manageable chunks - say, groups of 10. (You may be able to juggle a few more than that, but one way or another 10 is about the upper limit for the number of chunks your brain can handle at once.) Take each chunk and likewise divide that up into a manageable 10 pieces. ...and so on. That division process works out to 12 layers of groupings to break down 1TB of data to individual numbers. 12 conceptual layers is a lot, when taking into account that much breadth to each layer - but it's still humanly manageable. But wait, those numbers are bytes, 8 bits, round to 10 for some overhead you pick up along the way. Processing that 1TB uses something like 10 billion transistors; with the same reasoning, that breaks down to another 10 layers of encapsulation & abstraction. Put it all together from "bare metal" to "play movie", and you've got some 22 major layers of abstraction corresponding to 22 orders of magnitude of capacity. And I haven't even touched on GPUs, networking, etc. nor even thought about "horizontal" abstractions.

Point is: while you grasp how components are constructed at any level, you haven't followed that through from one extreme end to the other, grabbing the ends but looking at the middle as "...and then a miracle occurs". Just keep thinking it through and you'll see how the "magic" works.

Blaisorblade
Posts: 13
Joined: Sun May 01, 2011 2:08 pm UTC

### Re: 1270: Functional

Great Justice wrote:
Kit. wrote:If you are developing a product that does alpha-blending in video, you don't want it to crash with a stack overflow after a mere gigabyte of data.

So, any fp language that pretends to be both practical and universal, is better to have tail recursion as a language feature. Like Scheme does.

You can write functional style in a language/compiler that doesn't support tail recursion.

Sure you can write it, but it will cause a stack overflow once the input gets too big.
Without a tail-recursive compiler, tail-recursive programs will use stack space proportional to the input size. And what most people consider to be functional style involves a lot of recursion. (In Haskell, however, you usually call predefined recursion schemes and provide the pieces which vary between applications.)

Also, proper functional languages support general tail calls — which turn arbitrary tail calls into gotos, and allow writing state machines running in constant stack space in a functional language.
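Python itself is a handy demonstration of the point, since it does not eliminate tail calls (a deliberate design choice). Sketch below: the recursive version is tail-recursive *in form*, but each call still grows the stack, while the loop a tail-call-optimizing compiler would effectively produce runs in constant stack space.

```python
def sum_rec(n, acc=0):
    """Tail-recursive in form, but every call still takes a stack frame here."""
    return acc if n == 0 else sum_rec(n - 1, acc + n)

def sum_loop(n):
    """What a TCO compiler turns the above into: a goto back to the top."""
    acc = 0
    while n != 0:
        acc, n = acc + n, n - 1
    return acc

print(sum_loop(1_000_000))       # fine: 500000500000
try:
    sum_rec(1_000_000)           # blows the stack without tail calls
except RecursionError:
    print("stack overflow, as predicted")
```

This is exactly the alpha-blending scenario from the quote: same algorithm, but whether it survives a gigabyte of input depends on whether the language guarantees tail calls.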

Blaisorblade
Posts: 13
Joined: Sun May 01, 2011 2:08 pm UTC

### Re: 1270: Functional

orthogon wrote:The trouble with syntactic sugar is that the compiler is likely to give you errors related to the sugar-free version. (See also: templates in C++).

*Likely* things don't always happen. The standard Haskell compiler (GHC) does error checking before any desugaring. (Occasionally you'll still see an exception, but those cases are accepted as genuine bugs and fixed.)

Now, a lot of the Haskell community simply does a bad job at explaining monads, but it's mostly a "marketing" failure. They're also in F# under another name and they scare exactly nobody.

Myself, I mostly program Scala and mix functional, imperative and OO programming with much flexibility.

Eternal Density
Posts: 5545
Joined: Thu Oct 02, 2008 12:37 am UTC
Contact:

### Re(1270(Functional))

Blaisorblade wrote:Also, proper functional languages support general tail calls — which turn arbitrary tail calls into gotos, and allow writing state machines running in constant stack space in a functional language.
So essentially tail recursion is an attempt at protecting us from raptor attacks. But is it enough?

[edit: Swiftkey m*starded up some words]
Last edited by Eternal Density on Fri Oct 11, 2013 12:49 am UTC, edited 1 time in total.
Play the game of Time! castle.chirpingmustard.com Hotdog Vending Supplier But what is this?
In the Marvel vs. DC film-making war, we're all winners.

orthogon
Posts: 2934
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

### Re: 1270: Functional

Blaisorblade wrote:
orthogon wrote:The trouble with syntactic sugar is that the compiler is likely to give you errors related to the sugar-free version. (See also: templates in C++).

*Likely* things don't always happen. The standard Haskell compiler (GHC) does error checking before any desugaring. (Occasionally you'll still see an exception, but those cases are accepted as genuine bugs and fixed.)

Now, a lot of the Haskell community simply does a bad job at explaining monads, but it's mostly a "marketing" failure. They're also in F# under another name and they scare exactly nobody.

Myself, I mostly program Scala and mix functional, imperative and OO programming with much flexibility.

That's encouraging, thanks - I admit I was speculating based on bad experiences with compilers in the past. I just found out that my company offers Scala and Clojure courses so I'm thinking of signing up if I can come up with an excuse to use them...
xtifr wrote:... and orthogon merely sounds undecided.

tormeh
Posts: 7
Joined: Sat May 19, 2012 1:26 pm UTC

### Re: 1270: Functional

Just a thing I would like to clarify for those who think that OOP and FP are mutually exclusive:

OOP is about code and data organization.

They are orthogonal concepts. I think pure FP and OOP can be mutually exclusive because of state, though. Not sure.

orthogon wrote:
[...]

That's encouraging, thanks - I admit I was speculating based on bad experiences with compilers in the past. I just found out that my company offers Scala and Clojure courses so I'm thinking of signing up if I can come up with an excuse to use them...

If you are already able to program in a C-style language then I would recommend Clojure. Clojure will teach you a new style as well as forcing you to program functionally (which Scala doesn't). I think you'll be a better Scala programmer if you can also program in Clojure, since you'll be better able to use Scala's functional tools.
I've dabbled in Scala and I can report that it's really great. With reservations. Scala is to Java as C++ is to C, except that it cleans up Java's syntax rather than the other way around. You can translate a Java program to Scala line for line; nothing in Scala forces you not to program in Java with Scala syntax. What Scala does is add functional programming and lots of concurrency features. All Java (mutable) data types get an immutable equivalent, all sequences get parallel equivalents, and the list of such duplication goes on. Scala is basically a jungle of features, with a lot of ways to do anything, and it's quite possible to live in your part of the jungle unaware that the perfect tool for the job is in another part. Coming from a C-style language you probably know one subset of Scala pretty well already; Clojure forces you to learn the other parts. The rest is just syntax.

I'm not saying that Clojure is only suited as an introduction to Scala's functional side, btw. Many Scala developers aren't shy about their admiration of Clojure, and I'd assume they know what they're talking about, but Scala has a lot more momentum in both open-source use and job advertisements. Also, Scala doesn't constrain you in any way: it will always accept an imperative solution if an algorithm becomes awkward in the functional style. Other than that I don't really know enough about Clojure to compare them.

orthogon
Posts: 2934
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

### Re: 1270: Functional

tormeh wrote:Just a thing I would like to clarify for those who think that OOP and FP are mutually exclusive:

OOP is about code and data organization.

They are orthogonal concepts. I think pure FP and OOP can be mutually exclusive because of state, though. Not sure.
[...]
If you are already able to program in a C-style language then I would recommend Clojure. [...]

A very clear summary, thanks.

Regarding the OO vs functional thing, I do sense a real tension between them. Functional programming, as far as I understand it, places great importance on avoiding side effects, whereas one of the really powerful things about OO is precisely the way that a whole cascade of side effects can be brought about without the client code needing to be aware. You call a method on a simple, clean interface, but the real object providing the interface can be a complex construction with all sorts of links to other objects. Your call to update a value might write the new value to a database, log the activity in a log file and notify each of several subscribers of the change. The subscribers could send an XML update to a remote web client, redraw the relevant parts of an on-screen bar chart, and sound a siren if the value exceeds some threshold. The ability to put these things together at run-time is what enables many of the complex software systems we use today, and I have difficulty seeing how it can be reconciled with the functional approach. (It would require the Mother of all Monads - the Moam - not to be confused with the Mome, which I think is BlitzGirl). As I understand it, the thinking is that those complex OO systems are actually too complex, too hard to predict and therefore too bug-prone, in large part because of the side effects.
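(The cascade described above is essentially the observer pattern. A toy Python sketch, with all names invented, of how an innocent-looking setter triggers hidden effects:)

```python
class Value:
    """A mutable value that notifies subscribers on change (classic OO style)."""
    def __init__(self, initial=0):
        self._value = initial
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set(self, new_value):
        self._value = new_value
        # The caller sees a simple setter; these side effects are hidden behind it.
        for callback in self._subscribers:
            callback(new_value)

log = []
v = Value()
v.subscribe(lambda x: log.append("db write: %d" % x))
v.subscribe(lambda x: log.append("siren!" if x > 9000 else "ok"))
v.set(9001)
# log is now ["db write: 9001", "siren!"]
```

The client that calls `set` never sees the database write or the siren; that opacity is exactly what the functional camp objects to.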
[Edit: acronym fail]
xtifr wrote:... and orthogon merely sounds undecided.

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

### Re: 1270: Functional

Is it still OOP if the objects are immutable? It seems to me that immutable OOP would solely be about "where the functions are"/"what the function calls look like", which should be perfectly compatible with FP.

In this case, a method that, in mutable OOP, would modify the object, would instead return a new copy of the object with that field modified.
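(For what it's worth, Python's `namedtuple` supports exactly that copy-on-"modify" style out of the box - a small sketch, class and method names invented:)

```python
from collections import namedtuple

class Point(namedtuple("Point", ["x", "y"])):
    # Immutable: fields can't be reassigned after construction.
    def moved(self, dx, dy):
        # The "mutating method": returns a fresh Point, self is untouched.
        return self._replace(x=self.x + dx, y=self.y + dy)

p = Point(1, 2)
q = p.moved(3, 0)
# q == Point(4, 2); p is still Point(1, 2)
```

Method dispatch and encapsulation survive unchanged; only in-place mutation is gone.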

Kit.
Posts: 1079
Joined: Thu Jun 16, 2011 5:14 pm UTC

### Re: 1270: Functional

lalop wrote:It seems to me that immutable OOP would solely be about "where the functions are"/"what the function calls look like",

That would be "modular programming".

There is no point in "immutable OOP" as the whole idea of OOP is to split the total program state into the islands of nearly-independent micro-states to make program state modifications as local as possible.

King Author
Posts: 736
Joined: Sun Apr 12, 2009 12:30 pm UTC
Location: Pennsylvania, USA

### Re: 1270: Functional

I've read (okay, I'll be honest -- casually skimmed) a handful of (two) articles (Yahoo Answers) describing functional programming and still don't get it. Then again, I can never learn programming academically, only by examining code and doing stuff myself.

Can someone give me a really simple example of functional vs not functional programming? I'll give you an easy task, and you can give me the code in either Javascript or Python (the only two languages I know even moderately): make a program that increments a variable, let's call it "blargh", every time an incrementer equals five, and have it stop at 100, then print the result.

In Javascript that'd be something like...

Code: Select all

blargh = 0;

for (i = 0; i <= 100; i++)
{
    if (i % 5 == 0)
    {
        blargh += 1;
    }
}

if (i == 100)
{
    document.write(blargh)
}

That's probably pretty awful, but I'm tired and you get what I mean. How would that be done functionally? And what is what I typed called (besides sloppy)? "Nonfunctional?"
I have signitures disabled. If you do, too...you can't read this, so nevermind >_>

airdrik
Posts: 238
Joined: Wed May 09, 2012 3:08 pm UTC

### Re: 1270: Functional

Functional means that your primary abstraction for decomposing the problem is the function, while non-functional styles seek other ways to decompose the problem - most commonly using objects to structure the data.
Functional programming also has implications for style: you will have higher-order functions, which accept other functions and perform operations on or using them, and functions which return other functions (yes, in the most sublime and pure functional programming languages, it is functions all the way down). Also, your functions don't modify any state; they only return new values (which may be the new state).

Code: Select all

#non-functional
def increment_and_print(blargh):
    for i in range(0, 101):
        if i % 5 == 0:
            blargh += 1
    if i == 100: #not that this is entirely necessary.  i is guaranteed to be 100 at the end of the loop
        print(blargh)

Code: Select all

#functional
def divisible_by(n):
    def divisible_by_n(i):
        return i % n == 0
    return divisible_by_n

#In a 'better' functional language, these would be functions of two arguments and we would use currying to bind one of the arguments
#e.g. def less_than(n, i): return i < n; less_than_101 = curry(less_than, 101); or something like that.
def less_than(n):
    def less_than_n(i):
        return i < n
    return less_than_n

def increment_and_perform_target_action(blargh, increment_blargh, continue_incrementing, target_action):
    i = 0
    while continue_incrementing(i):
        if increment_blargh(i):
            blargh += 1
        i += 1
    #this isn't purely functional, as target_action is assumed to produce side effects, which are prohibited in strict functional style;
    #but we're using python and this is just a demonstration
    target_action(blargh)

def increment_every_5_and_print(blargh):
    return increment_and_perform_target_action(blargh, divisible_by(5), less_than(101), print)

increment_every_5_and_print(0)

While the functional version looks a lot more verbose, you can get just as verbose in OO where each of those functions could be a class whose duty is to perform the same action. The benefit is that each of those components may be reused in other places to help solve different problems.

King Author
Posts: 736
Joined: Sun Apr 12, 2009 12:30 pm UTC
Location: Pennsylvania, USA

### Re: 1270: Functional

Thanks for taking the time to type all that up.

Since I learn by doing, instead of by reading theory, I sorta understand the code (again -- novice here) but I don't really understand the explanations. For example, I don't know what a state is, or what currying is. Although the important part is, I can see how the functional example is fundamentally different from the nonfunctional. As you said -- it's all functions.

To see if I've got this right, in other words, with functional programming, you can do anything anytime, the program never has to be in a certain location in the game loop (I mostly do game stuff, I just always call it a game loop :p) -- you can just call any function anytime. I guess that's what a state is?

Can anyone give an even simpler example of nonfunctional vs functional? I kinda set up too complex an example, because I kinda can't see how anything any less complex can be different between functional and nonfunctional. For example, just get input and reprint it. Like, prompt "What is your name?", let the user enter their name, then print, "Nice to meet you, ____." How could that be done differently in functional and nonfunctional ways? It's pretty much literally just: get input, print output.
I have signitures disabled. If you do, too...you can't read this, so nevermind >_>

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

### Re: 1270: Functional

The instructions require that you mutate a variable, which is already non-functional. I'll try to provide alternatives.

Assuming blargh is supposed to be initialized as 0, equivalent output code would be:

Code: Select all

print len(filter(lambda i : i % 5 == 0, range(101)))

filter(function,list) returns a sub-list of only those elements i for which function(i) is True. So I filter out all except the sublist where i % 5 == 0, and take that sublist's length.

More generally, if blargh is supposed to be initialized as blargh_init, the code would be:

Code: Select all

print len(filter(lambda i : i % 5 == 0, range(101))) + blargh_init

If you need the value of blargh for later (and haven't yet initialized it):

Code: Select all

blargh = len(filter(lambda i : i % 5 == 0, range(101))) + blargh_init

print blargh

Note that, though it is okay to use blargh, mutating it is considered non-functional.

Here's another version that's somewhat closer to the original in spirit, though it's harder to understand:

Code: Select all

print reduce(lambda blargh, i : blargh + 1 if i % 5 == 0 else blargh, range(101), blargh_init)

From the docs:

reduce(function, sequence[, initial]) -> value

Apply a function of two arguments cumulatively to the items of a sequence,
from left to right, so as to reduce the sequence to a single value.
For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
((((1+2)+3)+4)+5). If initial is present, it is placed before the items
of the sequence in the calculation, and serves as a default when the
sequence is empty.

Functionally, reduce may be thought of like this (it's not actually implemented this way for performance reasons):

Code: Select all

def reduce(function, list, initial):
    if len(list) == 0:
        return initial
    else:
        return reduce(function, list[1:], function(initial, list[0]))

Or imperatively, like this (I don't think it's really defined this way either):

Code: Select all

def reduce(function, list, initial):
    value = initial
    for i in list:
        value = function(value, i)

    return value

In other words, setting (function, list, initial) to be (lambda blargh, i : blargh + 1 if i % 5 == 0 else blargh, range(101), blargh_init), you end up with code that does the same thing as:

Code: Select all

value = blargh_init
for i in range(101):
    value = value + 1 if i % 5 == 0 else value

return value

or, more descriptively,

Code: Select all

blargh = blargh_init
for i in range(101):
    blargh = blargh + 1 if i % 5 == 0 else blargh

return blargh

King Author
Posts: 736
Joined: Sun Apr 12, 2009 12:30 pm UTC
Location: Pennsylvania, USA

### Re: 1270: Functional

lalop wrote:The instructions require that you mutate a variable, which is already non-functional. I'll try to provide alternatives.
[...]

Whoa, thanks.

So, there's things you fundamentally can't do functionally. I wasn't expecting that. I tend to think of different styles of programming as necessarily being applicable in a general sense.

It also looks like functional programming is highly, highly dependent on the language having keywords that enable you to program that way. You couldn't possibly program functionally in, say, ASM or Basic or Lisp.

So, can you give me the long short of what the point of programming functionally is? If there's stuff you fundamentally can't do, and it's so language-dependent, what's functional programming useful for? Just pure math stuff?

I can't imagine programming, say, a videogame, or a spreadsheet application this way.
I have signitures disabled. If you do, too...you can't read this, so nevermind >_>

Kit.
Posts: 1079
Joined: Thu Jun 16, 2011 5:14 pm UTC

### Re: 1270: Functional

King Author wrote:So, there's things you fundamentally can't do functionally. I wasn't expecting that. I tend to think of different styles of programming as necessarily being applicable in a general sense.

There are no (known) things that can be done imperatively but cannot be done functionally, unless there is a specific resource limitation (such as a need to fit your logic into 16 bytes of machine code). There are ways that are done differently, and there may be syntactic sugar that makes functional code look like imperative code, although still being functional in its principles.

For example, a function that communicates with the outside world takes the original world as an argument and returns the modified world as part of its return value. If you give that returned world to another function, you have effectively implemented imperative programming using the functional approach. Some languages (like Haskell) can hide this passing and returning from you as part of their syntactic sugar, if you indicate in the signature of your function that it communicates with the outside world - in exactly the same way as OOP languages hide the passing of the 'this' parameter.
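A toy version of that world-passing idea (a simulated "world" record in Python, not real Haskell-style I/O; every name here is invented for illustration):

```python
def read_line(world):
    # Consume one line of pending input; return it plus the *new* world.
    line, rest = world["stdin"][0], world["stdin"][1:]
    new_world = {"stdin": rest, "stdout": world["stdout"]}
    return line, new_world

def write_line(world, text):
    # Never mutate: build a world whose output has one more line.
    return {"stdin": world["stdin"], "stdout": world["stdout"] + [text]}

def greet(world):
    name, world = read_line(world)
    return write_line(world, "Nice to meet you, %s." % name)

start = {"stdin": ["Megan"], "stdout": []}
end = greet(start)
# start is untouched; end["stdout"] == ["Nice to meet you, Megan."]
```

Chaining such functions gives the step-by-step feel of imperative code while every individual function stays pure.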

King Author wrote:It also looks like functional programming is highly, highly dependent on the language having keywords that enable you to program that way.

Nope. Those keywords are just syntactic sugar hiding tail recursion from your direct sight. You can implement the same functionality on your own using more basic language features, but it may look less pretty.

Or it may look exactly the same, depending on the power of the language you use.

King Author wrote:You couldn't possibly program functionally in, say, ASM or Basic or Lisp.

Are you trolling or what?

King Author wrote:So, can you give me the long short of what the point of programming functionally is? If there's stuff you fundamentally can't do, and it's so language-dependent, what's functional programming useful for? Just pure math stuff?

I can't imagine programming, say, a videogame, or a spreadsheet application this way.

I don't see how you can write a good spreadsheet application without at least part of its code (the domain logic) being intentionally functional.

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

### Re: 1270: Functional

I'm just going to give a few supplemental answers.

King Author wrote:It also looks like functional programming is highly, highly dependent on the language having keywords that enable you to program that way.

You need first-class functions. I think that's it. map, reduce, etc can be defined, imperatively if need be, kinda like what I did above.
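For instance, map needs nothing beyond the ability to pass a function as an argument - it can be defined imperatively, like the reduce definition above (name invented to avoid shadowing the built-in):

```python
def my_map(function, sequence):
    # Build a new list by applying `function` to each element;
    # first-class functions are the only language feature required.
    result = []
    for item in sequence:
        result.append(function(item))
    return result

# my_map(lambda x: x * 2, [1, 2, 3]) == [2, 4, 6]
```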

King Author wrote:You couldn't possibly program functionally in...Lisp.

What makes you say that?

King Author wrote:So, can you give me the long short of what the point of programming functionally is?

The main purpose is to minimize state. Having state is like having moving parts in your program, which makes it harder to test or reason about. For example, in the imperative "blargh" program, exactly what would happen if you tried to do something with the variable blargh (say, printed it) would vary, depending on how far through the loop you are. Though this may not seem like much, real-world imperative programs can have hundreds or thousands of variables changing near-simultaneously, making them very hard to test (certain bugs may only ever appear in rare states which you might not be able to reproduce) or reason about.

As secondary benefits, functional programs are often more concise and allow for a high degree of composability. Even basic structures like map, reduce work for a variety of use-cases, since they accept functions that can do a variety of things, rather than just data forced to go through whatever is pre-programmed.
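(A tiny illustration of that reuse, using the standard reduce - two quite different computations from one structure:)

```python
from functools import reduce  # a built-in in Python 2; lives here in Python 3

nums = [3, 1, 4, 1, 5]
total = reduce(lambda acc, x: acc + x, nums, 0)                        # sum: 14
biggest = reduce(lambda acc, x: x if x > acc else acc, nums, nums[0])  # max: 5
```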
Last edited by lalop on Fri Nov 29, 2013 1:20 pm UTC, edited 1 time in total.

King Author
Posts: 736
Joined: Sun Apr 12, 2009 12:30 pm UTC
Location: Pennsylvania, USA

### Re: 1270: Functional

Kit. wrote:There are no (known) things that can be done imperatively but cannot be done functionally, unless there is a specific resource limitation (such as a need to fit your logic into 16 bytes of machine code).
[...]

Crap, this always happens. You and lalop clearly both know way more about programming than me, but he says one thing (certain things can't be done "purely" functionally) and you say another (they can). I don't know enough to decide who's right.

Kit. wrote:Are you trolling or what?

Nope, just dumb.

lalop wrote:The main purpose is to minimize state. Having state is like having moving parts in your program, which makes it harder to test or reason about.
[...]

Ohh, that makes sense.

To me, functional programming just looks like you cram a whole section of code into a single function. Which would be really useful, like you said, for making the program less state-dependent. Even I can see that, limited though my theoretical knowledge be.

Lemme ask a really important question I should've started with. Very few programs are 100% functional, correct? I imagine even when a programmer is striving for functional programming, there's sections of code that aren't, either because it's easier/simpler to write or because it's unnecessary to squeeze out those extra few drops of efficiency.

FYI, when doing game-related stuff, I typically think of my program as a bunch of variables, a bunch of functions, and the game loop. And possibly objects, though I'm just starting to understand what OOP is, let alone use it. Primitive, I know, but I'm a novice.
I have signitures disabled. If you do, too...you can't read this, so nevermind >_>

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

### Re: 1270: Functional

King Author wrote:To me, functional programming just looks like you cram a whole section of code into a single function.

That's actually more imperative programming, since in that paradigm you can manipulate states as long as you well please. A purely functional.. erm, function, should really be only one statement (though it could be a pretty long and intricate statement, e.g. my recursive "reduce" definition), expressing a return value. Multiple statements serve no point except to manipulate some kind of.. erm, state. This encourages a style of many short, composable functions, rather than any single long function.

King Author wrote: I imagine even when a programmer is striving for functional programming, there's sections of code that aren't, either because it's easier/simpler to write or because it's unnecessary to squeeze out those extra few drops of efficiency.

Sure, why not. The trap, though, is if you make too many "it's unnecessary" excuses and end up with a program that's hard to debug anyway. I prefer to avoid imperative programming unless a section is "naturally imperative", which you sort of learn through experience. And sometimes, in limited languages like Java, you have little choice but to use imperative programming.

King Author wrote:FYI, when doing game-related stuff, I typically think of my program as a bunch of variables, a bunch of functions, and the game loop.

Functionally, you might have a "loop" function that takes in the game state (some data structure containing all game info) and user input, and returns a new game state, which is then displayed to the user.
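A minimal sketch of that shape (state layout, input names and movement rules all invented):

```python
def step(state, user_input):
    # Pure: returns a brand-new state, never mutates the old one.
    x, y = state["player"]
    dx = {"left": -1, "right": 1}.get(user_input, 0)
    return {"player": (x + dx, y), "frame": state["frame"] + 1}

state = {"player": (0, 0), "frame": 0}
for event in ["right", "right", "left"]:
    state = step(state, event)   # the loop threads state through, each old state is discarded
# state == {"player": (1, 0), "frame": 3}
```

The driver loop is the only "impure" part; everything the game does per frame lives in one testable function.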

Admittedly, I'm not a game programmer, so I don't know much. There have been games programmed in Haskell, however.

Kit.
Posts: 1079
Joined: Thu Jun 16, 2011 5:14 pm UTC

### Re: 1270: Functional

King Author wrote:Crap, this always happens. You and lalop clearly both know way more about programming than me, but he says one thing (certain things can't be done "purely" functionally) and you say another (they can). I don't know enough to decide who's right.

There are not so many useful absolutely-purely-functional languages. Most (otherwise) functional languages don't allow input-output to be done in a purely functional way, so the part of the code that needs to perform input-output becomes imperative. But as shown with Haskell, it's not a limitation of the functional approach itself.

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

### Re: 1270: Functional

What I said was, you aren't allowed to mutate variables - that would be a non-functional, imperative style of programming.

What Kit said was that this does not limit the theoretical expressiveness of functional programming: given any imperative program, you can write a functional program that does the same thing.

I think this follows from Haskell being Turing-complete. I could be wrong, though.
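For a tiny concrete case of that equivalence, here's the same computation written both ways (a made-up Python illustration, not anyone's real code):

```python
# The same sum, imperatively (mutating a variable) and functionally
# (the "mutation" becomes a new argument to a recursive call).

def sum_imperative(xs):
    total = 0          # mutable state
    for x in xs:
        total += x     # mutated on every iteration
    return total

def sum_functional(xs, acc=0):
    # No rebinding: each call gets a fresh accumulator value.
    return acc if not xs else sum_functional(xs[1:], acc + xs[0])
```

Both return the same result on every input; the functional version just carries the evolving value explicitly instead of overwriting it.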

Kit.
Posts: 1079
Joined: Thu Jun 16, 2011 5:14 pm UTC

### Re: 1270: Functional

Turing-completeness only tells you about computability. It says nothing about how input-output could be organized. Interacting with a pure Turing machine is not for a typical iPad user.

Haskell is Turing-complete, but that's not the point. Haskell lets you write imperative-looking code that still has a clear purely-functional meaning. And Haskell is a lazily evaluated language, so you can technically write your own lazily evaluated, fully functional(*) world emulator, say, to automatically test your code.

*) Pun intended.

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

### Re: 1270: Functional

Kit. wrote:Turing-completeness only tells you about computability. It says nothing about how input-output could be organized.

Quite right. What I meant was that, given that there's a fully functional language out there (including I/O), and it's Turing-complete, that should imply that any program can be implemented in a fully functional language. My theory's very rusty, so please correct any misconceptions you notice, but my thinking was as follows:

1. Any program can be represented as a Turing machine with an input tape populated from user input, and an output tape. For example, the input tape could be appended to every 1ms, either with a "user does nothing" symbol or with, say, a "user presses i and j" symbol. The output tape can contain symbols like "display color #ffffff to pixel 482583".

Alternatively (this might be a more faithful representation, since IIRC the input tape is supposed to be fixed), a Turing machine that's "refreshed" every 1ms. On each "refresh", the input tape contains the previous state of the program (not the state machine's state, but rather the contents of its variables), as well as any user input from the last 1ms. The output tape will contain the current state, as well as symbols representing any updates to the screen.

2. Since Haskell is Turing-complete, it can simulate any TM.

3. Haskell is purely functional (including the I/O, as you say).

Therefore, any program can be implemented in a purely functional language.
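The "refreshed every 1ms" machine in point 1 can be sketched as a pure transition function from (previous program state, input this tick) to (next state, screen updates). This is a hypothetical toy, not a real Turing machine, but it shows the shape of the argument:

```python
# Toy version of the "refresh" model: each tick is a pure function.
# The state and the output symbols are stand-ins for tape contents.

def refresh(state, user_input):
    counter = state["counter"]
    if user_input == "press":
        new_state = {"counter": counter + 1}
        output = ["display counter = %d" % (counter + 1)]
    else:  # the "user does nothing" symbol
        new_state = state
        output = []
    return new_state, output

def run_ticks(initial_state, input_stream):
    # Driver that threads the state through successive refreshes
    # and collects the "output tape" symbols.
    state, screen = initial_state, []
    for tick_input in input_stream:
        state, out = refresh(state, tick_input)
        screen.extend(out)
    return state, screen
```

Each refresh only ever reads its two arguments and produces its two results, which is exactly the fixed-input/fixed-output discipline a TM tape imposes.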

PM 2Ring
Posts: 3646
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Mid north coast, NSW, Australia

### Re: 1270: Functional

Kit. wrote:Turing-completeness only tells you about computability. It says nothing about how input-output could be organized. Interacting with a pure Turing machine is not for a typical iPad user.

The traditional Turing machine design isn't interactive: all the input data must exist on the tape before the computation starts.

lalop wrote:What I meant was that, given that there's a fully functional language out there (including I/O), and it's Turing-complete, that should imply that any program can be implemented in a fully functional language. My theory's very rusty, so please correct any misconceptions you notice, but my thinking was as follows:

1. Any program can be represented as a Turing machine with an input tape populated from user input, and an output tape. For example, the input tape could be appended to every 1ms, either with a "user does nothing" symbol or with, say, a "user presses i and j" symbol. The output tape can contain symbols like "display color #ffffff to pixel 482583".

Strictly speaking, there's only one tape. Also, the data on it is in binary - each tape "cell" is either marked or unmarked. But of course, the TM could decode / encode hex data on the tape.

See http://en.wikipedia.org/wiki/Turing_machine

@King Author
It's not easy to compare & contrast traditional imperative programming with strict functional programming - standard imperative operations like getting input, modifying some variables and displaying output are not particularly suited to the pure functional approach.

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

### Re: 1270: Functional

PM 2Ring wrote:Strictly speaking, there's only one tape. Also, the data on it is in binary - each tape "cell" is either marked or unmarked. But of course, the TM could decode / encode hex data on the tape.

IIRC (and I think it's pretty obvious) that's equivalent in power to having an input and output tape, say by having input as evens, output as odds (in the case of fixed input, it's not even that complicated: just write to the output tape instead of the same input tape). Also, I'm pretty darn sure a binary alphabet is not required, though it doesn't change anything anyway.

But yeah, I'm not sure how well my turing suggestions correspond to programs with I/O.
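The evens/odds trick is easy to show concretely (hypothetical Python sketch, using "_" as the blank symbol):

```python
# Two tapes packed onto one: tape A on even cells, tape B on odd cells.

def interleave(tape_a, tape_b, blank="_"):
    n = max(len(tape_a), len(tape_b))
    a = list(tape_a) + [blank] * (n - len(tape_a))  # pad with blanks
    b = list(tape_b) + [blank] * (n - len(tape_b))
    combined = []
    for x, y in zip(a, b):
        combined += [x, y]
    return combined

def read_tape(combined, which):
    # which = 0 for tape A (even cells), 1 for tape B (odd cells)
    return combined[which::2]
```

A machine working on the combined tape just moves in steps of two instead of one, which is why the encodings are equivalent in power.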

airdrik
Posts: 238
Joined: Wed May 09, 2012 3:08 pm UTC

### Re: 1270: Functional

lalop wrote:
PM 2Ring wrote:Strictly speaking, there's only one tape. Also, the data on it is in binary - each tape "cell" is either marked or unmarked. But of course, the TM could decode / encode hex data on the tape.

IIRC (and I think it's pretty obvious) that's equivalent in power to having an input and output tape, say by having input as evens, output as odds (in the case of fixed input, it's not even that complicated: just write to the output tape instead of the same input tape). Also, I'm pretty darn sure a binary alphabet is not required, though it doesn't change anything anyway.

But yeah, I'm not sure how well my turing suggestions correspond to programs with I/O.

Strictly speaking, a Turing machine has one tape, and each cell on the tape contains a character from some finite alphabet (not restricted to 1s and 0s - that was an imposition for the busy beaver problem; and while any data may be encoded as 1s and 0s, the specification only requires a finite alphabet). However, there are TM equivalents which have multiple tapes, and a multi-tape machine may in turn be encoded onto a single-tape machine.

lalop is mostly correct about how a TM could handle user input and screen output: they're just additional tapes with different sets of symbols on them. The symbols on the input tape indicate the actions performed, possibly including symbols that mean "no action performed for n seconds"; from the TM's perspective, all input is already on the tape (it just might take longer to read, but that's not a problem, since time is just another symbol on the input tape that the TM can read before deciding either to go do something else for a bit or to wait while the next symbol is retrieved from the user). Similarly, other I/O operations may be represented as additional tapes (one for the network, one for disk, even one for each file used by a given program, etc.)

rmsgrey
Posts: 3406
Joined: Wed Nov 16, 2011 6:35 pm UTC

### Re: 1270: Functional

The trouble comes when the Turing Machine's input depends on its output. It's not clear whether the user can be modelled as a second Turing Machine operating on the same tapes but with input and output switched; if so, then the combination can be modelled by a third Turing Machine; if not, then introducing a user breaks the model...

Kit.
Posts: 1079
Joined: Thu Jun 16, 2011 5:14 pm UTC

### Re: 1270: Functional

A Turing machine can interact with a user by halting before the user input and restarting after the input. The tape content can be arranged in the following way:

at halt:
persistent-internal-status | machine-output

at restart:
persistent-internal-status | user-input

Each interaction will be a halt/restart cycle.
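As a toy rendering of that scheme (a hypothetical sketch, with the "tape" collapsed to a pair of values rather than a real cell array):

```python
# Each halt/restart cycle: the machine consumes
# (persistent-internal-status | user-input) at restart and produces
# (persistent-internal-status | machine-output) at halt.

def machine_step(persistent_status, user_input):
    # Pure function from the restart tape to the halt tape.
    new_status = persistent_status + [user_input]
    machine_output = "seen %d inputs so far" % len(new_status)
    return new_status, machine_output

def session(inputs):
    status = []
    transcript = []
    for user_input in inputs:   # each iteration: restart ... halt
        status, out = machine_step(status, user_input)
        transcript.append(out)
    return transcript
```

The persistent status survives across cycles on the tape; only the input/output half of the tape is swapped out between halt and restart.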