## Coding: Fleeting Thoughts

A place to discuss the implementation and style of computer programs.

Moderators: phlip, Moderators General, Prelates

You, sir, name?
Posts: 6983
Joined: Sun Apr 22, 2007 10:07 am UTC
Location: Chako Paul City
Contact:

### Re: Coding: Fleeting Thoughts

http://pythonsweetness.tumblr.com/post/ ... 45-minutes

So much cringe. Not that it should be surprising. Financial IT is so rife with bad practices and general cowboy development, it's surprising it hasn't happened before.
I edit my posts a lot and sometimes the words wrong order words appear in sentences get messed up.

troyp
Posts: 557
Joined: Thu May 22, 2008 9:20 pm UTC
Location: Lismore, NSW

### Re: Coding: Fleeting Thoughts

You, sir, name? wrote:http://pythonsweetness.tumblr.com/post/64740079543/how-to-lose-172-222-a-second-for-45-minutes

So much cringe. Not that it should be surprising. Financial IT is so rife with bad practices and general cowboy development, it's surprising it hasn't happened before.

It's hard to believe they're that careless with so much money at stake - over timeframes too small for humans to respond to.

skeptical scientist wrote:No, that's not it at all. I'm saying that if foo_ is defined by

Code: Select all

void foo_(int *x, int y, int z) {
    *x = foo(y, z);
}
then the lines "x = foo(y, z);" and "foo_(&x, y, z);" are formally equivalent, two notations for the same operation. It makes no difference whether the function's return value is stored in the standard return register or at some address specified by the calling function. So these functions should also be considered equally pure.

But I do consider them equally pure - equally pure to the assignment "x = foo(y, z);", which is impure.

I agree that those two lines are equivalent (at least AFAICT, given my limited understanding of C++ semantics). But being equivalent to an assignment of a function call's result is a far cry from being equivalent to the function call itself. The assignment is impure.

I don't see why you are saying the function is conditionally pure; if you think of x as being just another return value for the function, the fact that the return value is assigned to a global variable does not make the function foo_ impure, although the line where the calling function uses foo_ to modify a global variable obviously makes that calling function impure (same as if it had assigned the return value of foo to a global variable).

Well, in the assignment, the function call is pure and the assignment is impure. In this case, the assignment is part of the function. Where exactly is the impurity attributed? There's nothing there but the call. And if a function call causes side effects, that means the function itself is impure.

I think you're using a subtly different definition of purity, here. For a function to be pure, it can't have side effects, no matter what arguments it's called with. I agree with jareds that maybe "idempotent" would be a better description for these functions.*
* You can probably guess, but in the context of imperative languages, idempotence is defined relative to sequencing rather than function composition. ie. it means f(args),f(args) == f(args) where == denotes semantic equivalence (not just return value equality).
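A minimal C++ sketch of that distinction (the names foo/foo_ follow the quoted example; the bodies are my own illustration): foo_ has a side effect, so it isn't pure, yet running it twice in sequence with the same arguments leaves the program in the same state as running it once.

```cpp
#include <cassert>

// Pure: the result depends only on the arguments; no side effects.
int foo(int y, int z) { return y + z; }

// Impure (it writes through x) but idempotent in the sequencing sense:
// foo_(&x, y, z); foo_(&x, y, z); leaves the program in the same state
// as a single call.
void foo_(int *x, int y, int z) { *x = foo(y, z); }
```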

Note that defining pure functions this way means purity is no longer preserved under function composition. This makes it a lot harder to work out what functions are pure in the first place. Normally, a pure function is identified using a simple recursive process: a function is pure if its body uses only constants and pure functions/expressions (and maybe local assignment). If a pure function could write to a pointer, you'd have to check the function signature of every "pure" function used and check that any pointer arguments are used correctly throughout the scope.

How can you guarantee that value won't change after the function writes it?
You can't. But again, that is true for ANY variable used to store a function return value, whether that variable is a pointer argument to the function or the left-hand-side of an assignment statement.

No it isn't. You could call a pure function and assign the result to a const variable. That would maintain purity. You can't do that if you're passing in a pointer, since you have to declare the variable first and then pass it in to be written to. (It would work if you had a "write once" variable - if you had that and could place the specifier in the function signature, that could probably be considered pure.)
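In C++ terms, a small sketch of the asymmetry (foo and foo_ follow the quoted example; the wrapper functions are my own illustration): the const binding happens exactly once, while the out-parameter must exist as a mutable variable before the call and stays mutable afterwards.

```cpp
#include <cassert>

int foo(int y, int z) { return y + z; }              // a pure function
void foo_(int *x, int y, int z) { *x = foo(y, z); }  // out-parameter version

int with_const() {
    const int a = foo(1, 2);  // result bound once; a can never be reassigned
    return a;
}

int with_pointer() {
    int b = 0;       // must be declared mutable before the call...
    foo_(&b, 1, 2);  // ...so it can be written through the pointer
    b = 7;           // ...and nothing prevents later mutation
    return b;
}
```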

Plus, a function call need not be assigned at all. Its result might be passed to another function, or otherwise used directly in an expression.

These don't make the function impure, and are concerns that you need to think about when you call any function, pure or not.

They're things you need to think about when you use the result of a function call (same as any value). But with a pure function, you have a guarantee that the function call itself won't produce side effects, no matter what arguments it's passed.

Even with "good values", the function wouldn't really work like a true pure function.
It works exactly the same, as long as you think of the pointer arguments as additional return values. Instead of substituting the one return value, you have to substitute each of the return values.

I wouldn't say it's exactly the same. One of those "return values" is not a value but an assignment, which has an effect on the rest of the scope. And you can only do that after you determine which, if any, pointer parameters are actually acting as "return values".

That said, I was probably wrong to connect this to purity. Multiple return values would make things more complicated, but they're probably not impure, per se. I should have said "wouldn't really work like a normal pure function".

The real issue with purity is the fact that there's no way to ensure the pointer is local and only written to once. The fact of multiple return values itself is tricky, but probably peripheral.

sleepingdrone
Posts: 2
Joined: Fri Oct 18, 2013 4:55 pm UTC

### Re: Coding: Fleeting Thoughts

korona wrote:Some languages have break 2; constructs that break out of two nested loops. You could replace some instances of goto with such statements, but it is questionable whether that justifies a new language construct.
Funny coincidence. I saw exactly this suggestion just a few days ago on the Python Issue Tracker (issue19318)

Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: Coding: Fleeting Thoughts

In C++,

Code: Select all

`a = b`

can *depend on the prior state of a*. As does

Code: Select all

`*a = b`

However

Code: Select all

`auto a = b`

does *not* depend on the prior state of a, as it had no prior state.
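For class types this is more than a theoretical point: `operator=` runs on the existing object, so it can observe prior state (std::string reusing its old capacity is a common real example). A toy sketch of my own making that makes the dependence visible:

```cpp
#include <cassert>

struct Tracker {
    int value = 0;
    int assignments = 0;  // prior state that operator= can observe

    Tracker& operator=(int v) {
        ++assignments;    // depends on how many times *this was assigned before
        value = v;
        return *this;
    }
};
```

By contrast, `auto a = b;` invokes a constructor on a fresh object, so there is no prior state for it to read.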

There is no way to pass in an uncreated location of a type to be created by a function, other than as part of the return parameters, or by lying (passing in a T* to what is not actually a T), or by doing relatively low level type gymnastics (passing in a void* or char* pointing to a buffer of correctly aligned and sized memory that will be interpreted as a T after the function completes).

If C++ had the ability to change the type of a variable via a statement, then the above would be workable.

The function would take a variable of type "uninitialized<T>*" and guarantee that the type would be "T*" when the function finishes. At the call site, you'd create an uninitialized<T>, pass a pointer to it, and on every line after the function call the variable would be a T*. (There are other complications that make this idea much trickier, like what to do with failure branches).

Then we could pass pointer to uninitialized data into [[pure]] functions and have them write to them, producing initialized pointers afterwards. Repeated calls to the [[pure]] function with the "same" arguments would be ruled out by the type system, as the type of the pointers used would be changed by the call.

However, we lack this capability.

I guess if we could detect or describe "operator=" that does not depend on the state of the thing being assigned to? It seems like a knotty problem.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

skeptical scientist
closed-minded spiritualist
Posts: 6142
Joined: Tue Nov 28, 2006 6:09 am UTC
Location: San Francisco

### Re: Coding: Fleeting Thoughts

I think the problem here really is C/C++ are designed to be too close to what processors *actually do*, where everything is really a global variable, and any function can in principle depend on the complete state of the computer's memory, even though some of that memory is supposedly "private". Pointers and pointer arithmetic are particularly thorny in this regard as they can be used to access bits of memory that you have no business accessing. For truly safe pure functions, you probably need to disallow pointers entirely (which is problematic because some functions simply can't be computed without the ability to allocate arbitrary amounts of memory—although maybe you could use the stack instead) or use a language with a more controlled execution environment. Another alternative might be a processor architecture designed for secure execution which enforces strict constraints on what memory different functions are allowed to access, which would allow a compiler to impose a runtime restriction on pure functions that they only access separate memory pages which are guaranteed to be uninitialized.
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.

"With math, all things are possible." —Rebecca Watson

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

### Re: Coding: Fleeting Thoughts

Yeah, pointers (or references however you call them) are a fundamental problem for pure functions. Even when pointer arithmetic is not allowed and there is a garbage collector. Local state is not a problem and can be removed by applying SSA transformations. It would be incredibly easy to extend Haskell (or any other pure functional language) with mutable variables and for/while/if constructs while preserving purity. But once you throw in pointers everything breaks.

Haskell has the ST monad, which allows you to simulate references. (Personally I hate the term "monad" because it just describes that there is a combination operator but doesn't say anything about what that operator does; I think the way IO works in Haskell would be easier to understand for beginners if the term monad did not exist.)

It allows you to allocate and access memory through references in a controlled environment, but it suffers from two great problems (just like the IO monad, which can be implemented on top of ST):
1) The ST (or IO) monad enforces a total order on reference accesses. This is kind of what it was invented for, but a partial order instead of a total one would be much more useful. It would improve optimization because the compiler would be allowed to reorder some memory accesses (without magic). It also makes more sense from a semantic point of view.
2) The ST (or IO) monad depends on compiler magic for correctness. Evaluating an ST monad twice could break correctness.

If anyone has an idea how to implement safe references (or a useful restriction of general references) in a pure language and in a way that does not suffer from those problems I would be very interested to hear about that.

heatsink
Posts: 86
Joined: Fri Jun 30, 2006 8:58 am UTC

### Re: Coding: Fleeting Thoughts

If C++ had the ability to change the type of a variable via a statement, then the above would be workable.

There's been research into typestate systems that allow an object's type to change. This is hard to support because a system that allows an object's type to be updated should ensure that when an object's type changes, all references to it also change their type, which means the system must keep track of pointer aliasing. This is a difficult problem.

Aaeriele
Posts: 2127
Joined: Tue Feb 23, 2010 3:30 am UTC
Location: San Francisco, CA

### Re: Coding: Fleeting Thoughts

You, sir, name? wrote:http://pythonsweetness.tumblr.com/post/64740079543/how-to-lose-172-222-a-second-for-45-minutes

So much cringe. Not that it should be surprising. Financial IT is so rife with bad practices and general cowboy development, it's surprising it hasn't happened before.

Heh. That especially makes me cringe, and also makes me glad that I work for Google SRE. The policies and practices here are so much better.
Vaniver wrote:Harvard is a hedge fund that runs the most prestigious dating agency in the world, and incidentally employs famous scientists to do research.

afuzzyduck wrote:ITS MEANT TO BE FLUTTERSHY BUT I JUST SEE AAERIELE! CURSE YOU FORA!

troyp
Posts: 557
Joined: Thu May 22, 2008 9:20 pm UTC
Location: Lismore, NSW

### Re: Coding: Fleeting Thoughts

skeptical scientist wrote:For truly safe pure functions, you probably need to disallow pointers entirely (which is problematic because some functions simply can't be computed without the ability to allocate arbitrary amounts of memory—although maybe you could use the stack instead) or <snip>

I guess if you wanted heap memory, you could use new and then either delete the object before the function returns or pass it out with some sort of smart pointer? Maybe...

korona wrote:Personally I hate the term "monad"

because it just describes that there is a combination operator but doesn't say anything about what that operator does; I think the way IO works in Haskell would be easier to understand for beginners if the term monad did not exist)

A beginner doesn't necessarily have to care about the term monad. But if they do, it does say something important. In particular, it says "once you turn something into IO, you can never get it out. You can combine IO and non-IO to make new IO, and if you IO something that's already IO'd that's okay - you can take it out of the 'second IO', but you can never take it out of the first."

1) the ST (or IO) monad enforces a total order on reference accesses. This is kind of what it was invented for, but a partial order instead of a total one would be much more useful. It would improve optimization because the compiler would be allowed to reorder some memory accesses (without magic). It also makes more sense from a semantic point of view.

If that's a great problem, it's a great problem that's shared by almost all programming languages. The IO and ST monads are used for imperative programming, and imperative programming represents instructions as a total order.

The Arrow type class is a more general representation of computation than monads, and should be able to do what you want. You could probably make an arrowized ST, although I don't know if anyone has. You wouldn't have the "do" syntactic sugar, of course (and if you did have a syntax, it would have to be more complex). You could also search for "dataflow programming in Haskell" and see what turns up.

2) the ST (or IO) monad depends on compiler magic for correctness. Evaluating an ST monad twice could break correctness.

It relies on stuff like mutable variable support, but that's going to be necessary in any efficient implementation of references. What do you mean by "break correctness"?

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Contact:

### Re: Coding: Fleeting Thoughts

troyp wrote:
because it just describes that there is a combination operator but doesn't say anything about what that operator does; I think the way IO works in Haskell would be easier to understand for beginners if the term monad did not exist)

A beginner doesn't necessarily have to care about the term monad. But if they do, it does say something important. In particular, it says "once you turn something into IO, you can never get it out. You can combine IO and non-IO to make new IO, and if you IO something that's already IO'd that's okay - you can take it out of the 'second IO', but you can never take it out of the first."
The flip side of that is I was confused for a long time, because I made exactly that inference, and then applied it to all monads. And of course this is horrendously confusing once people start talking about how Maybe and such are monads. But that property is not true of monads, but just the special case of IO -- every other monad you can "escape" from.

The fact that was not true was one of two big epiphanies I had re. monads that at one point made me understand them a lot more. Alas, I forget the second one... I don't program in Haskell, so I don't know if that's because I've just internalized whatever my second revelation was, or have forgotten it and am worse off by that amount.

troyp
Posts: 557
Joined: Thu May 22, 2008 9:20 pm UTC
Location: Lismore, NSW

### Re: Coding: Fleeting Thoughts

Yeah, I see what you mean. Although in this respect, the IO monad is only "special" by being unusually "general" (in the sense of being a monad and nothing more). Not being able to escape a monad is like not having an identity element for a semigroup. It doesn't preclude it, it just doesn't guarantee it. If you're not doing IO, there's probably no reason not to offer that added functionality. I guess that could confuse a newbie if they took a monad to be something inherently designed to restrict rather than to generalise.
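A loose C++ analogy of the "escapable" monads (std::optional standing in for Maybe; the hand-rolled `bind` helper and `half` function are my own sketch, not part of the standard library, though C++23's `and_then` plays the same role):

```cpp
#include <cassert>
#include <optional>

// Maybe-style bind: chain a computation that may fail onto an optional value.
template <typename T, typename F>
auto bind(std::optional<T> m, F f) -> decltype(f(*m)) {
    if (m) return f(*m);
    return std::nullopt;
}

// Halve an even number; fail on an odd one.
std::optional<int> half(int n) {
    if (n % 2 == 0) return n / 2;
    return std::nullopt;
}
```

The escape hatch is something like `value_or`: it pulls the value back out of the context, which is exactly what IO never offers.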

I don't program in Haskell either. I do really like it, though. I've been meaning to go back to it. Maybe even do something practical in it (although really, I just want to play with it).

edit: "no reason not to offer that added functionality" ? What am I saying...you generally have to provide it or there's nothing you can do with your results! Of course, in many cases, the runX function that extracts a value is only used once, at the very end of your monadic computations - eg.

Code: Select all

runX $ do
    -- statements
    -- performed
    -- in X monad
    ...

roflwaffle
Posts: 360
Joined: Wed Jul 01, 2009 6:25 am UTC

### Re: Coding: Fleeting Thoughts

troyp wrote:
You, sir, name? wrote:http://pythonsweetness.tumblr.com/post/64740079543/how-to-lose-172-222-a-second-for-45-minutes

So much cringe. Not that it should be surprising. Financial IT is so rife with bad practices and general cowboy development, it's surprising it hasn't happened before.

It's hard to believe they're that careless with so much money at stake - over timeframes too small for humans to respond to.

They aren't explicitly careless. In my experience (QA), this tends to happen when companies go through busts and booms, gut departments, and in general don't properly staff/allocate resources. My company does the exact same thing, although my business unit couldn't incur a loss like that, at least not directly, and we have more than a few back-stops on the business side.

You, sir, name?
Posts: 6983
Joined: Sun Apr 22, 2007 10:07 am UTC
Location: Chako Paul City
Contact:

### Re: Coding: Fleeting Thoughts

roflwaffle wrote:
troyp wrote:
You, sir, name? wrote:http://pythonsweetness.tumblr.com/post/64740079543/how-to-lose-172-222-a-second-for-45-minutes

So much cringe. Not that it should be surprising. Financial IT is so rife with bad practices and general cowboy development, it's surprising it hasn't happened before.

It's hard to believe they're that careless with so much money at stake - over timeframes too small for humans to respond to.

They aren't explicitly careless. In my experience (QA), this tends to happen when companies go through busts and booms, gut departments, and in general don't properly staff/allocate resources. My company does the exact same thing, although my business unit couldn't incur a loss like that, at least not directly, and we have more than a few back-stops on the business side.

There are more things that lead to this type of neglect. I think it's almost a prerequisite not to have software as a primary product (e.g. financial institutions that do their own internal development). That way you can Peter Principle all sorts of people with no understanding of proper software development into managerial roles.
I edit my posts a lot and sometimes the words wrong order words appear in sentences get messed up.

Xanthir
My HERO!!!
Posts: 5425
Joined: Tue Feb 20, 2007 12:49 am UTC
Contact:

### Re: Coding: Fleeting Thoughts

EvanED wrote:
troyp wrote:
because it just describes that there is a combination operator but doesn't say anything about what that operator does; I think the way IO works in Haskell would be easier to understand for beginners if the term monad did not exist)

A beginner doesn't necessarily have to care about the term monad. But if they do, it does say something important. In particular, it says "once you turn something into IO, you can never get it out. You can combine IO and non-IO to make new IO, and if you IO something that's already IO'd that's okay - you can take it out of the 'second IO', but you can never take it out of the first."
The flip side of that is I was confused for a long time, because I made exactly that inference, and then applied it to all monads. And of course this is horrendously confusing once people start talking about how Maybe and such are monads. But that property is not true of monads, but just the special case of IO -- every other monad you can "escape" from.

The fact that was not true was one of two big epiphanies I had re. monads that at one point made me understand them a lot more. Alas, I forget the second one... I don't program in Haskell, so I don't know if that's because I've just internalized whatever my second revelation was, or have forgotten it and am worse off by that amount.

Same here. It took me quite a while to understand the semantics of the various "runX" functions, because I'd internalized the notion that a monad was something that only had an entrance, not an exit. I understood things way better when I realized that the "monad" concept just describes an *aspect* of a type (specifically, the mapping and flattening part, and by convention the lifting part), and that pulling things out of a "monad" is unrelated to the monad-ness; it's just an ability that a given container/context type may or may not provide.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

troyp
Posts: 557
Joined: Thu May 22, 2008 9:20 pm UTC
Location: Lismore, NSW

### Re: Coding: Fleeting Thoughts

@EvanED, Xanthir:
Did you guys learn about monads before you learned about type classes in general? Because I suspect that might be one of the things that makes monads confusing. Most monad tutorials I've seen discuss the details of the Monad class, but don't discuss what a type class is (or even mention the term "type class").

The idea of a type class is quite simple to explain, really: it's just an interface that a type can register to provide. But without an explanation, a reader is likely to assume that the restrictions on the monadic interface are actually restrictions on the type itself.
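A rough C++ analogy of "an interface a type can register to provide" (C++ has no type classes; the `Monoid` trait template below is my own approximation of the opt-in step, using mempty/mappend/mconcat names borrowed from Haskell):

```cpp
#include <cassert>
#include <string>
#include <vector>

// The "type class": an interface described separately from any type.
// No primary definition, so types must opt in by specializing it.
template <typename T>
struct Monoid;

// Existing types "register" for the interface after the fact:
template <>
struct Monoid<int> {
    static int mempty() { return 0; }
    static int mappend(int a, int b) { return a + b; }
};

template <>
struct Monoid<std::string> {
    static std::string mempty() { return ""; }
    static std::string mappend(std::string a, std::string b) { return a + b; }
};

// Generic code written against the interface alone.
template <typename T>
T mconcat(const std::vector<T>& xs) {
    T acc = Monoid<T>::mempty();
    for (const auto& x : xs) acc = Monoid<T>::mappend(acc, x);
    return acc;
}
```

The point of the analogy: implementing `Monoid<int>` constrains nothing else about `int`, just as a Monad instance constrains nothing else about the type that provides it.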

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

### Re: Coding: Fleeting Thoughts

A monad is just a type that provides an operator (>>=) :: M a -> (a -> M b) -> M b.
That is, a function that takes a value of type M a and a function that maps from type a to another type M b, and combines them into a value of type M b.
What that operation does varies from monad to monad. (Yes, to fulfill the formal definition you also need a return :: a -> M a operation, and those two operations have to fulfill some axioms.)
I think that the term monad is so general that it is confusing for beginners.

I think the right approach to learning IO in Haskell is not "IO is done by something called a monad, which is a type equipped with certain operations". Most intro texts seem to present it that way. A better approach would be: Haskell is a pure language, so we cannot have functions with side effects. What can we do if we actually want to do IO? Well, for each type t we define a new type IO t that represents a computation, possibly involving side effects, that returns a value of type t.

So putStrLn has absolutely no side effects and is a pure function. However it does not print anything.

Now we change the semantics of main: main takes no arguments but returns IO (). The semantics are that main is evaluated, and after main completes we execute the computation it returned.

Now you can introduce the >>= operator and say that it corresponds to "run procedures in sequence". After you've learned all the IO stuff you can say: "Hey, there are more types that have a similar >>= operator and obey the same laws. Let's call them monads." I wouldn't even mention the term monad until IO has been learned.

Xanthir
My HERO!!!
Posts: 5425
Joined: Tue Feb 20, 2007 12:49 am UTC
Contact:

### Re: Coding: Fleeting Thoughts

troyp wrote:@EvanED, Xanthir:
Did you guys learn about monads before you learned about type classes in general? Because I suspect that might be one of the things that makes monads confusing. Most monad tutorials I've seen discuss the details of the Monad class, but don't discuss what a type class is (or even mention the term "type class").

The idea of type class is quite simple to explain, really: it's just an interface that a type can register to provide. But without an explanation, a reader is likely to assume that the restrictions on the monadic interface are actually restrictions on the type itself.

Before, and yeah, I agree that it's probably a major source of the confusion. Most monad explanations sound like they're about "monad subclasses" or something like that, if you're an uninformed reader, when it's really about it being a typeclass.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

heatsink
Posts: 86
Joined: Fri Jun 30, 2006 8:58 am UTC

### Re: Coding: Fleeting Thoughts

korona wrote:A monad is just a type that provides an operator (>>=) :: M a -> (a -> M b) -> M b.
[...]
What that operation does varies from monad to monad. (Yes, to fulfill the formal definition you also need a return :: a -> M a operation and those two operations have to fulfill some axioms)
I think that the term monad is so general that it is confusing for beginners.

I think that the class of monads has to be learned through specific examples of monads. After using IO, parsers, lists, Maybe, Reader, and so forth, you eventually reach the point where you can apply your knowledge to monads that you haven't seen before. The problem with explaining monads is that useful, intuitive analogies like "containers", "computations", "burritos", and so forth aren't useful in all situations. Meanwhile, the technically accurate explanation, which is what you said up there, doesn't tell you what monads are good for.

While monads are especially difficult, the need to generalize from concrete examples holds for other algebraic structures too. Until you've seen enough functors and monoids to get a feel for what the terms mean, it's not so obvious what 'fmap' or 'mconcat' is supposed to do.

bluebambue
An der schönen blauen Donau
Posts: 900
Joined: Wed Oct 03, 2007 5:14 am UTC

### Re: Coding: Fleeting Thoughts

Random newbie question that I don't think is deserving of its own thread.

I want to learn how to develop an app using node.js, which requires GCC, which I think requires UNIX. I'm on a Windows machine. Would installing Cygwin meet my needs?
https://github.com/joyent/node/wiki/Installation

Background that I don't think is relevant: I've started an overly ambitious project that would be a web app that runs node.js that is connected to a MongoDB source. I have never developed another app and expect to be thoroughly confused by obvious stuff often.

Edit: oh, hmm, the GCC is only needed for building and not for regular installing, I think?
Edit2: yep. Ignore me and my tendency to ask questions too quickly.

Nyktos
Posts: 138
Joined: Mon Mar 02, 2009 4:02 pm UTC

### Re: Coding: Fleeting Thoughts

If you did want to build it, though, Cygwin would probably work.

Edit: Though now that I check there are Windows build instructions on there that don't require it anyway.

Thesh
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC

### Re: Coding: Fleeting Thoughts

I never understood the religious devotion to avoiding nulls in a database. Sorry, but a date of '1900-01-01' is much more of a PITA to deal with than NULL. I've never had a real problem caused by NULLs; I have had problems caused by data being set to invalid values. Yes, I have had exceptions caused by NULLs when working in a language like C#, but it's better to have the exception than a silent failure.
Summum ius, summa iniuria.

Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

### Re: Coding: Fleeting Thoughts

The religious devotion is rather bananas, but the practical side of it is when you have data that must always exist. I don't want it to be an error at select-time when I am processing a surprise NULL, I want it to be an error at insert-time when the field has to be provided in the first place. That said, if data is optional, allow NULL. That's what it is there for. Using out-of-band values in a database is cargo-culting.
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Cheese> I love you

Thesh
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC

### Re: Coding: Fleeting Thoughts

Of course, allowing null is something that should only be done when it makes sense for the column, but putting dummy data in a column is something that should be avoided as much as possible.
Summum ius, summa iniuria.

Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

### Re: Coding: Fleeting Thoughts

Is it wrong that I wrote my first set of real unit tests the other day, whilst completing a pre-interview programming screen?
Is it wrong that the first time I actually worked with any code that utilised inheritance was at an interview?

I wonder what the basic thing I'll be doing for the first time tomorrow will be, at interview of course.

Aaeriele
Posts: 2127
Joined: Tue Feb 23, 2010 3:30 am UTC
Location: San Francisco, CA

### Re: Coding: Fleeting Thoughts

Xenomortis wrote:Is it wrong that I wrote my first set of real unit tests the other day, whilst completing a pre-interview programming screen?
Is it wrong that the first time I actually worked with any code that utilised inheritance was at an interview?

I wonder what the basic thing I'll be doing for the first time tomorrow will be, at interview of course.

Run-time analysis?
Vaniver wrote:Harvard is a hedge fund that runs the most prestigious dating agency in the world, and incidentally employs famous scientists to do research.

afuzzyduck wrote:ITS MEANT TO BE FLUTTERSHY BUT I JUST SEE AAERIELE! CURSE YOU FORA!

Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

### Re: Coding: Fleeting Thoughts

As in time complexity, etc?
That could be fun.

Negrebskoh
Posts: 139
Joined: Fri Mar 01, 2013 11:49 pm UTC
Location: The Netherlands

### Re: Coding: Fleeting Thoughts

Could someone explain to me the use of 'constexpr' in C++? I've heard it explained by a few people now, but... I honestly don't get what makes it so useful. Same goes for 'nullptr'.

Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: Coding: Fleeting Thoughts

`constexpr` says that a given function is intended to be evaluated at compile time for certain types of arguments, and in those cases its output can be used in compile-time constant situations.

The situations in which such a function is compile-time evaluable are defined by the standard, so you can know cross-platform when it should work.

It can be a "maybe" situation, where some inputs result in a compile-time evaluated expression path and others do not. Naturally, this decision is made at compile time.

This lets you write more traditional code and use it as a template argument or an array size argument, instead of having to rely on template meta-programming to do so.

The ability for the same function to be evaluated at compile time or, conditional on its input, at run time means that you get uniformity of behavior between compile time and run time. It also means you don't have to "accept all inputs": the function can assert or fail, which at compile time causes it to fail to evaluate.

In theory, you could eliminate constexpr from the language and state that any function meeting the requirements currently imposed on constexpr functions should be evaluated at compile time (as many optimizers already do). The downside to that approach is that you aren't marking your interface as "intended to be used at compile time" by the end user, so implementation-detail changes could cause someone else's code to break at compile time. It isn't a huge downside.

I.e., imagine constexpr did not exist as a keyword, and you had a stub function whose body read "return 0;". Someone could then evaluate that function at compile time. Later, you replace it with a non-stub implementation, and their build breaks.

constexpr functions can still experience this, but when writing such functions you are saying that your compile-time vs. non-compile-time paths are observable behavior of the function call: so, with the constexpr keyword, the implementation is exposed only if you say it is.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

You, sir, name?
Posts: 6983
Joined: Sun Apr 22, 2007 10:07 am UTC
Location: Chako Paul City
Contact:

### Re: Coding: Fleeting Thoughts

I've been using a lot of exceptions in my code recently. Exceptions are tricky, because when you use them the wrong way, you end up with highly annoying code that provokes people into handling them with an "e.printStackTrace();", or "throw new RuntimeException(e);". I don't think anything in programming is more annoying than using an interface that declares a wide range of exceptions and expects you to handle them. In 99.95% of cases, the code is only actually interested in "did it break?".

So, I've found this type of code to be a decent balance.

Code: Select all

```java
public class MyClassImpl implements MyClass {
  @Override
  public void doStuff() {
    try {
      doInternalStuff();
      doOtherInternalCrap();
    } catch (MyClassException e) {
      log("the error");
    }
  }

  @Override
  public List<Stuff> getStuff() {
    try {
      List<Stuff> allStuff = new ArrayList<Stuff>();
      for (InternalStuff stuff : mInternalStuff) {
        allStuff.add(doYetMoreInternalThings(stuff));
      }
      return allStuff;
    } catch (MyClassException e) {
      log("the error");
      return new ArrayList<Stuff>();
    }
  }

  private void doInternalStuff() throws MyClassException { /* ... */ }

  private void doOtherInternalCrap() throws MyClassException { /* ... */ }

  private Stuff doYetMoreInternalThings(InternalStuff stuff) throws MyClassException { /* ... */ return null; }

  private Thing getThingFromStuff(InternalStuff internal) throws MyClassException {
    if (null == internal) throw new MyClassException("stuff is null");
    if (null == internal.thing) throw new MyClassException("stuff is missing thing");
    return internal.thing;
  }

  public static class MyClassException extends Exception implements InternalExceptionIf {
    // standard boilerplate in here
  }
}
```

And then just spam the fuck out of exception-throwing assertions in the private methods. If something could conceivably be null, you bet I've written a get-wrapper for it that throws MyClassException. I've found this to strike a very nice balance between code that's robust, yet not annoying to use.

There are more benefits to this. It makes it easier to resist making private methods public, which in my experience is often an ill-conceived hack designed to save time through code design-incest.

Occasionally, it makes sense to actually let the caller handle the exception, but then I'll declare it as "InternalExceptionIf" unless I have a seriously good reason to let my caller see the concrete type.

Xenomortis wrote:Is it wrong that I wrote my first set of real unit tests the other day, whilst completing a pre-interview programming screen?
Is it wrong that the first time I actually worked with any code that utilised inheritance was at an interview?

There are some exceptions based on language and application, but in general, yes, that is bad.

With unit tests, you can "measure twice, cut once" in computer programming. It can be tempting to cut corners and skimp on testing (especially with tight deadlines and what have you), but it will come back and bite you down the line.
I edit my posts a lot and sometimes the words wrong order words appear in sentences get messed up.

skeptical scientist
closed-minded spiritualist
Posts: 6142
Joined: Tue Nov 28, 2006 6:09 am UTC
Location: San Francisco

### Re: Coding: Fleeting Thoughts

Whenever I see code that looks like

Code: Select all

```python
try:
    # do stuff
    return 0
except:
    # do other stuff
    return 1
```

part of me wants to append

Code: Select all

```python
finally:
    return 2
```

* * *

Xenomortis wrote:Is it wrong that I wrote my first set of real unit tests the other day, whilst completing a pre-interview programming screen?
Is it wrong that the first time I actually worked with any code that utilised inheritance was at an interview?

I think you're doing well. I didn't do either of those things until after I got hired. My entire prior programming experience consisted of 1 class in high school, 2 classes in college, and 130 Project Euler problems.
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.

"With math, all things are possible." —Rebecca Watson

letterX
Posts: 535
Joined: Fri Feb 22, 2008 4:00 am UTC
Location: Ithaca, NY

### Re: Coding: Fleeting Thoughts

skeptical scientist wrote:
Xenomortis wrote:Is it wrong that I wrote my first set of real unit tests the other day, whilst completing a pre-interview programming screen?
Is it wrong that the first time I actually worked with any code that utilised inheritance was at an interview?

I think you're doing well. I didn't do either of those things until after I got hired. My entire prior programming experience consisted of 1 class in high school, 2 classes in college, and 130 Project Euler problems.

Yeah, but you also started life as a mathematician. Not that, as a CS PhD student, my own experience of several coding 'standard practices' hasn't been exceedingly theoretical... But yeah, I think there are several companies that value 'ability to do research' (which comes with a PhD), many companies that expect you to be a good software engineer (which means knowing about unit tests and inheritance), and then a few companies that train their own coders from scratch. (I've known a couple of people get hired with essentially zero coding experience, with the expectation that the company would teach them everything they need to know. And then actually follow through on that, which is rarer.)

Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

### Re: Coding: Fleeting Thoughts

skeptical scientist wrote:
Xenomortis wrote:Is it wrong that I wrote my first set of real unit tests the other day, whilst completing a pre-interview programming screen?
Is it wrong that the first time I actually worked with any code that utilised inheritance was at an interview?

I think you're doing well. I didn't do either of those things until after I got hired. My entire prior programming experience consisted of 1 class in high school, 2 classes in college, and 130 Project Euler problems.

Maybe I should have started with "I've been working as a .NET developer for close to 18 months now".
My programming experience prior to that was a couple of months of Pascal/Delphi programming for AS Computing (and like, 20 PE problems).

You, sir, name? wrote:With unit tests, you can "measure twice, cut once" in computer programming. It can be tempting to cut corners and skimp on testing (especially with tight deadlines and what have you), but it will come back and bite you down the line.

I don't doubt that. But overall, "best practices" are not something I witness much at my current job. Hell, I don't even know what "best practices" are most of the time.
Although I don't think they include leaving buggy code that "has never worked right, and crashes something else when it fails (which is literally every time it runs)" untouched for several years (until I get told to "stop this other thing from crashing").
And I remember seeing a bubble sort in the middle of a 1000-line method somewhere. When I asked why, I got the answer "it was easiest this way". It probably wasn't.

You, sir, name?
Posts: 6983
Joined: Sun Apr 22, 2007 10:07 am UTC
Location: Chako Paul City
Contact:

### Re: Coding: Fleeting Thoughts

Xenomortis wrote:
You, sir, name? wrote:With unit tests, you can "measure twice, cut once" in computer programming. It can be tempting to cut corners and skimp on testing (especially with tight deadlines and what have you), but it will come back and bite you down the line.

I don't doubt that. But overall, "best practices" are not something I witness much at my current job. Hell, I don't even know what "best practices" are most of the time.
Although I don't think they include leaving buggy code that "has never worked right, and crashes something else when it fails (which is literally every time it runs)" untouched for several years (until I get told to "stop this other thing from crashing").
And I remember seeing a bubble sort in the middle of a 1000-line method somewhere. When I asked why, I got the answer "it was easiest this way". It probably wasn't.

The thing is, good coding practices have to be the developer's prerogative; they won't come from management. All management typically cares about is getting deliverables out the door ASAP. Which is why leeway for testing and the like must be part of any time estimates you give. Writing tests for code should be considered part of writing the code: if you've written the code but not the tests, you aren't done.

Skimping on good practices may get those deliverables out slightly faster. But then the next batch of deliverables will take more time, because the code they build on is falling apart (and you end up spending a ton of time fixing bugs that should never have been there in the first place).

It will make a difference in the long run. I have some co-workers who put in way more hours than I do every week, stressed as hell, constantly rushing their development effort to make the deadline (unit tests are at best an afterthought). Yet I always finish my workload a week or more ahead of them. Come test week, I sit twiddling my thumbs, trying to push my 85%+ test coverage into the 90s, while they're in a mad dash to keep their code from falling apart, swimming in unsolved bugs and null pointer exceptions. I am not better at programming than they are. The difference is that I do not cut corners, which lets me do the same amount of work patiently, working normal-length weeks, and end the sprint with 10% of the bug reports.

What these people all have in common is that they are very upset with the code quality of the project. Everyone is an idiot for building such a stupid and unmaintainable system; it's everyone's fault except theirs. I would argue that their consistent lack of good practices makes them the architects of their own misery.

Ok, this is turning into an undirected rant. I'd better leave it here.
I edit my posts a lot and sometimes the words wrong order words appear in sentences get messed up.

brant
Posts: 7
Joined: Wed Oct 03, 2007 3:42 am UTC
Location: CT

### Re: Coding: Fleeting Thoughts

+1000 for unit tests. You're really shooting yourself in the foot if you don't write any. Not only for the reasons above, but they also allow you to refactor your code later on with confidence that you haven't introduced additional bugs. Also, if someone writes a 1000-line function, I'm pretty sure you have the right to punch them in the face.

Thesh
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC

### Re: Coding: Fleeting Thoughts

Code: Select all

`f.write(hex(int((e-2)*16**16384)))`

I figure by the time I figure out a better way, it will have finished executing...
Summum ius, summa iniuria.

PM 2Ring
Posts: 3715
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Sydney, Australia

### Re: Coding: Fleeting Thoughts

Thesh wrote:

Code: Select all

`f.write(hex(int((e-2)*16**16384)))`

I figure by the time I figure out a better way, it will have finished executing...

If that e is the base of natural logarithms, you could modify the e program I posted in viewtopic.php?f=17&t=101315#p3324491 (which calculates e in factorial base) so that it gives output in hex instead of decimal. The program is reasonably fast in Python, but obviously somewhat faster in C.

Thesh
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC

### Re: Coding: Fleeting Thoughts

Yeah, it's the constant e... And I'll probably have to find a new method...

hex(int((e-2)*16**4987)) - a couple of seconds
hex(int((e-2)*16**4988)) - don't know; it hasn't completed, and I suspect it won't any time soon.

So yeah, probably have to implement my own base16 system to calculate it. Pity, I didn't want to spend too much time on this, I was hoping to just find it on the internet... It's probably there but "e in hexadecimal" does not exactly return relevant results.

EDIT: I ended up getting it to work in mpmath, pretty quick too (using internal precision of 81920 bits).

hex(int((exp(1)-2)*(mpf(16)**mpf(16384))))
Attachments
e.gz
Summum ius, summa iniuria.

PM 2Ring
Posts: 3715
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Sydney, Australia

### Re: Coding: Fleeting Thoughts

Why didn't you say you had mpmath?

FWIW, mpmath has e built-in: mp.e, which is probably optimized to be a bit faster than mp.exp(1).

Thesh
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC

### Re: Coding: Fleeting Thoughts

PM 2Ring wrote:Why didn't you say you had mpmath?

Summum ius, summa iniuria.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Contact:

### Re: Coding: Fleeting Thoughts

OK, this is a stupid problem. Suppose I have a function `void f(int a);` that I want to call from within a function template

Code: Select all

```cpp
template<typename SomeInteger>
void g(SomeInteger x)
{
    f(x);
}
```

If g is instantiated such that SomeInteger is a larger integer type than int, then with the above code MSVC produces a warning that the truncation can lose information, which is then turned into an error because of the equivalent of -Werror.

OK, that's easy enough to deal with; I add an explicit cast. The number should be small enough, but just to be sure, I add an assertion

Code: Select all

```cpp
template<typename SomeInteger>
void g(SomeInteger x)
{
    assert(x >= INT_MIN && x <= INT_MAX);
    f(static_cast<int>(x));
}
```

and now when g is instantiated with an int, GCC produces a warning that the condition in the assertion is always true (thanks, GCC) because of the limited range of the data type.

I can think of a couple ways to fix this, but they're all somewhat ugly. Any suggestions?