Static vs. Dynamic Typing


Postby Strilanc » Tue Apr 24, 2007 12:20 pm UTC

Rysto: Convert 4 integers to a rect. Do you use left,right,top,bottom; x1,y1,x2,y2; or x,y,width,height? There's no 'obvious' best choice in this case, which makes access to the parameter names very important. In fact, this comes up A LOT when writing any program.
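Strilanc's point can be made concrete with a minimal Java sketch (the class and factory-method names here are invented for illustration, not from any real API):

```java
// Hypothetical Rect class: the common 4-integer conventions are
// indistinguishable by type alone, so named factory methods carry the meaning.
class Rect {
    final int left, top, width, height;

    private Rect(int left, int top, int width, int height) {
        this.left = left;
        this.top = top;
        this.width = width;
        this.height = height;
    }

    // Interprets the four ints as x, y, width, height.
    static Rect fromOriginAndSize(int x, int y, int w, int h) {
        return new Rect(x, y, w, h);
    }

    // Interprets the four ints as left, top, right, bottom.
    static Rect fromEdges(int left, int top, int right, int bottom) {
        return new Rect(left, top, right - left, bottom - top);
    }

    int right()  { return left + width; }
    int bottom() { return top + height; }
}
```

All four parameters are plain ints, so no static type checker can tell the conventions apart; only the names at the call site do.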
Don't pay attention to this signature, it's contradictory.
Strilanc
 
Posts: 646
Joined: Fri Dec 08, 2006 7:18 am UTC

Postby Rysto » Tue Apr 24, 2007 12:42 pm UTC

I said that the types were often enough, not always. It's not like you can't put the names in the prototypes if you want them there.
Rysto
 
Posts: 1443
Joined: Wed Mar 21, 2007 4:07 am UTC

Postby Yakk » Tue Apr 24, 2007 2:45 pm UTC

torne wrote:
Yakk wrote:On the other hand, I write reasonably complex programs that are designed to fail if I screw up the static type checking.

When programming in a dynamically typed language, I'm forced to manually verify all of the things I would statically check using language constructs.

I've never written code to verify things that a typing system could've verified for me; what would be the point? If those things needed verifying, I would've used types. Unit tests pass, thus all statically checkable problems must have already been eliminated. I write programs that don't pass their test suite if I've made any error, statically checkable or otherwise. Seems easier. :)


This results in more complex unit tests that need to be kept in sync with the interface documentation of the procedure. The static type restrictions give you a language-understood interface specification, and a language-enforced compile-time test of both calling and called code.

Given infinite effort and care, all you need are bits. Programming constructs more advanced than pushing bits are still useful, because they reduce the effort and care required to write correct programs.

And you mean to tell me you write perfect test suites? You are joking, right?
Yakk
Poster with most posts but no title.
 
Posts: 10448
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Postby Strilanc » Tue Apr 24, 2007 3:15 pm UTC

Yakk wrote:
torne wrote:
Yakk wrote:On the other hand, I write reasonably complex programs that are designed to fail if I screw up the static type checking.

When programming in a dynamically typed language, I'm forced to manually verify all of the things I would statically check using language constructs.

I've never written code to verify things that a typing system could've verified for me; what would be the point? If those things needed verifying, I would've used types. Unit tests pass, thus all statically checkable problems must have already been eliminated. I write programs that don't pass their test suite if I've made any error, statically checkable or otherwise. Seems easier. :)


This results in more complex unit tests that need to be kept in sync with the interface documentation of the procedure. The static type restrictions give you a language-understood interface specification, and a language-enforced compile-time test of both calling and called code.

Given infinite effort and care, all you need are bits. Programming constructs more advanced than pushing bits are still useful, because they reduce the effort and care required to write correct programs.

And you mean to tell me you write perfect test suites? You are joking, right?


I think he meant he would never write tests for stuff a static checker should do for him, not that his tests covered static checks.
Don't pay attention to this signature, it's contradictory.
Strilanc
 
Posts: 646
Joined: Fri Dec 08, 2006 7:18 am UTC

Postby torne » Tue Apr 24, 2007 3:27 pm UTC

Rysto wrote:What kind of toy programs are you writing?

I'm mostly a hard-realtime embedded kernel developer, though I work on a lot of other stuff too in my own time. The kernel's public APIs are documented in great detail (as they are for the customer to use, who may not have the source) - internal APIs should be self-explanatory from the code. Writing any documentation for them is at best a waste of time, and at worst a barrier to future maintenance.

Yakk wrote:This results in more complex unit tests that need to be kept in sync with the interface documentation of the procedure. The static type restrictions give you a language-understood interface specification, and a language-enforced compile-time test of both calling and called code.

I don't see why the unit tests need to be more complex; they cover all the cases the code has to handle, whether you're statically or dynamically typed. As for keeping them in sync with the interface documentation, I make every effort to write executable interface documentation in the first place, but surely you aren't implying that a typed function prototype explains the interface to a human being who has no access to the source?

Unit tests are the interface specification. Any non-machine-readable documentation is a secondary source that has to be maintained by hand regardless of typing system.

Given infinite effort and care, all you need are bits. Programming constructs more advanced than pushing bits are still useful, because they reduce the effort and care required to write correct programs.

Yah. The advanced construct I favour most is dynamic/duck typed object orientation. It saves me the most effort and care of any tool I have available to me. That's kinda been my point throughout.

And you mean to tell me you write perfect test suites? You are joking, right?

If the test is wrong, the test fails, and I'll notice and fix the test.
torne
 
Posts: 98
Joined: Wed Nov 01, 2006 11:58 am UTC
Location: London, UK

Postby davean » Tue Apr 24, 2007 4:03 pm UTC

torne wrote:
And you mean to tell me you write perfect test suites? You are joking, right?

If the test is wrong, the test fails, and I'll notice and fix the test.


What about the ones that pass when they shouldn't?

If you want to get it right, you need a proof of your code.

Personally, I'd prefer to see formal methods applied to any kernel stuff instead of just fallible unit tests. Of course, testing every combination of branches is a good start, but I seriously doubt you actually test EVERY combination. And yes, for an if statement with multiple conditions, each condition counts separately.
davean
Site Ninja
 
Posts: 2438
Joined: Sat Apr 08, 2006 7:50 am UTC

Postby EvanED » Tue Apr 24, 2007 4:24 pm UTC

torne wrote:
Rysto wrote:What kind of toy programs are you writing?

I'm mostly a hard-realtime embedded kernel developer, though I work on a lot of other stuff too in my own time. The kernel's public APIs are documented in great detail (as they are for the customer to use, who may not have the source) - internal APIs should be self-explanatory from the code. Writing any documentation for them is at best a waste of time, and at worst a barrier to future maintenance.


Really? So you have function names like DoSomething_mustBeCalledWithIOLockHeldAtIRQLPassiveMightSleep(void* nonnullThingy)?
EvanED
 
Posts: 4141
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI

Postby torne » Wed Apr 25, 2007 10:12 am UTC

davean wrote:What about the ones that pass when they shouldn't?

Those are always the most amusing. :)

If you want to get it right, you need a proof of your code.

Well, yes, but this is rarely practical.

Personally, I'd prefer to see formal methods applied to any kernel stuff instead of just fallible unit tests. Of course, testing every combination of branches is a good start, but I seriously doubt you actually test EVERY combination. And yes, for an if statement with multiple conditions, each condition counts separately.

I can't comment with a great deal of specificity on our kernel's testing procedures in case someone from work reads it and realises I'm giving the game away ;) Formal methods are unlikely, though; there's rather a lot of code.

My comments on unit testing are referring to my own personal projects. The point is not to test everything that appears in the code, the point is to test everything which is a requirement. Any code which doesn't get executed by the unit testing process can be removed as it's not required. The tests are the specification, and while yes, they can be wrong, it's much less likely than the code being wrong, as the tests are simple by comparison.

EvanED wrote:Really? So you have function names like DoSomething_mustBeCalledWithIOLockHeldAtIRQLPassiveMightSleep(void* nonnullThingy)?

No, I have function names like DoSomething(void* thingy), which start with __CHECK_IOLOCKHELD(); __CHECK_IRQL(Levels::Passive); __ASSERT_ALWAYS(thingy != NULL); and probably some more precondition checks that examine the calling context to ensure that sleeping won't be a problem. Thus, writing documentation for them is a waste of time; they already explain in great detail what you have to do to call them correctly.

I said self-explanatory from the code, not from the prototype. That's the difference between internal and external APIs.
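torne's __CHECK_* macros are C constructs, but the idea of executable preconditions translates to Java; this is a rough sketch of the pattern, with Precondition, Device, and the check messages all being invented names rather than torne's actual code:

```java
// Sketch of executable preconditions: the calling contract is stated as the
// first lines of the method body, so the code documents itself.
final class Precondition {
    static void check(boolean condition, String contract) {
        if (!condition) {
            throw new IllegalStateException("precondition violated: " + contract);
        }
    }
}

class Device {
    private boolean ioLockHeld = false;

    void acquireIoLock() { ioLockHeld = true; }

    void doSomething(Object thingy) {
        // These checks play the role of __CHECK_IOLOCKHELD() etc.: they are
        // both documentation and a runtime test of every caller.
        Precondition.check(ioLockHeld, "I/O lock must be held");
        Precondition.check(thingy != null, "thingy must be non-null");
        // ... actual work ...
    }
}
```

A caller that forgets the lock fails immediately and loudly, rather than corrupting state later.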
Last edited by torne on Wed Apr 25, 2007 12:36 pm UTC, edited 1 time in total.
torne
 
Posts: 98
Joined: Wed Nov 01, 2006 11:58 am UTC
Location: London, UK

Postby tendays » Wed Apr 25, 2007 12:24 pm UTC

Just wanted to put my two cents:

As soon as you have a moderately expressive type system, type checking becomes undecidable. So a type checker will either be sound but incomplete (anything it accepts is guaranteed to work but it will reject perfectly valid programs) or complete but unsound (if it rejects something then it's guaranteed to be bad - things that go through might fail at runtime). (Also, some are neither sound nor complete, obviously)

The first kind will tend to whine about you doing "dangerous" things even when you know they would work. I'm thinking, e.g., of javac, which complains when I'm not covering all possible cases in a switch statement, even though all the cases that can actually happen at runtime are covered. So I have to add bogus case statements everywhere.
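One concrete form of this complaint in Java (an assumption about which javac behaviour tendays means, but a common one) is the definite-return analysis: even when every value that can occur at runtime is covered by a case, the compiler demands an exit path it can prove exists:

```java
class Exhaustive {
    // direction is only ever 0 or 1 at runtime, but javac cannot know that.
    static String name(int direction) {
        switch (direction) {
            case 0: return "left";
            case 1: return "right";
        }
        // Without this line javac rejects the method with
        // "missing return statement", so a "bogus" fallback is required.
        throw new IllegalArgumentException("unreachable: " + direction);
    }
}
```

The checker is sound (it never lets an un-returning path through) but incomplete: it rejects this perfectly valid program until the dead fallback is added.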

The second kind will let you make stupid mistakes that degenerate into a runtime failure only in very special cases, so you don't find out you made a typing error until it's compiled, shipped, millions of people are using it, and the rare case finally occurs somewhere.

OTOH I am currently coding something (amusingly, it is a type system checker) in Java 1.4, which does not have generics, and every line I write makes me regret not having done it in caml or something. E.g. I have huge Sets and Maps and it already happened once that I put an object of a wrong type in there so debugging was uncool. Not to mention the map() or fold() methods that I had to emulate by hand ...
tendays
 
Posts: 957
Joined: Sat Feb 17, 2007 6:21 pm UTC
Location: HCMC

Postby torne » Wed Apr 25, 2007 12:46 pm UTC

tendays wrote:As soon as you have a moderately expressive type system, type checking becomes undecidable. So a type checker will either be sound but incomplete (anything it accepts is guaranteed to work but it will reject perfectly valid programs) or complete but unsound (if it rejects something then it's guaranteed to be bad - things that go through might fail at runtime). (Also, some are neither sound nor complete, obviously)

Indeed, though I wouldn't say "guaranteed to work", but rather "guaranteed to be correctly typed", for obvious reasons.

There is a whole spectrum of type systems in 'real' languages, with varying degrees of soundness and completeness, and my experience has been that even a small amount of incompleteness is incredibly awkward and frustrating, and can take a lot of time to work around in the least unpleasant way. So I'll take a complete type system instead; the problem being that, as far as 'real' languages go, these tend to accept an awfully large proportion of incorrectly typed programs.

As a result, I've concluded that it's unlikely to be worth the effort and that type checking has a very low value to me.
torne
 
Posts: 98
Joined: Wed Nov 01, 2006 11:58 am UTC
Location: London, UK

Postby Rysto » Wed Apr 25, 2007 8:38 pm UTC

tendays wrote:OTOH I am currently coding something (amusingly, it is a type system checker) in Java 1.4, which does not have generics, and every line I write makes me regret not having done it in caml or something. E.g. I have huge Sets and Maps and it already happened once that I put an object of a wrong type in there so debugging was uncool. Not to mention the map() or fold() methods that I had to emulate by hand ...

There's an option you can give javac that will allow it to accept code that uses generics and still output 1.4-compatible bytecode.
Rysto
 
Posts: 1443
Joined: Wed Mar 21, 2007 4:07 am UTC

Postby Rysto » Thu Apr 26, 2007 12:34 am UTC

tendays wrote:Not to mention the map() or fold() methods that I had to emulate by hand ...

There shouldn't be any problem implementing a version of map() in Java. I can't think of any way to implement a clean version of fold(), though. However, it sounds like you're trying to program in a functional style. That's not going to work out for you in Java.
Rysto
 
Posts: 1443
Joined: Wed Mar 21, 2007 4:07 am UTC

Postby Strilanc » Thu Apr 26, 2007 12:56 am UTC

Rysto wrote:
tendays wrote:Not to mention the map() or fold() methods that I had to emulate by hand ...

There shouldn't be any problem implementing a version of map() in Java. I can't think of any way to implement a clean version of fold(), though. However, it sounds like you're trying to program in a functional style. That's not going to work out for you in Java.


I agree this would be ridiculous in 1.4, but in 1.5 generics really do let you "accomplish" it by jumping through some hoops:

Code: Select all
import java.util.ArrayList;
import java.util.Iterator;
//how many lines of code for map/reduce? WAY TOO MANY
interface Functionalable<T> { // 'public' dropped: only one public type is allowed per source file
   Iterator<T> iterator();   
   T reduce(T t1, T t2);
   boolean filter(T t);
}
class Func {
   public <T, S extends Functionalable<T>> T reduce(S s) {
      T t;
      Iterator<T> i = s.iterator();
      
      //get first
      if (i.hasNext())
         t = i.next();
      else
         return null;
      
      //reduce
      while (i.hasNext())
         t = s.reduce(t, i.next());
      
      return t;
   }
   public <T, S extends Functionalable<T>> ArrayList<T> filter(S s) {
      ArrayList<T> tt = new ArrayList<T>();
      Iterator<T> i = s.iterator();
      T t;
      
      //filter into array list
      while (i.hasNext()) {
         t = i.next();
         if (s.filter(t))
            tt.add(t);
      }
      
      return tt;      
   }
}
class FuncTest implements Functionalable<Integer> {
   ArrayList<Integer> ii = new ArrayList<Integer>();
   public FuncTest(int... list) {
      for (int i = 0; i < list.length; i++)
         ii.add(list[i]);   
   }
   public boolean filter(Integer t) {
      return (t >= 10 && t <= 20);
   }
   public Iterator<Integer> iterator() {
      return ii.iterator();
   }
   public Integer reduce(Integer t1, Integer t2) {
      return t1 + t2;
   }
}


I wish they would allow passing of functions (without requiring entire new classes) in Java. It would make this whole thing a lot less verbose, and probably faster.
Don't pay attention to this signature, it's contradictory.
Strilanc
 
Posts: 646
Joined: Fri Dec 08, 2006 7:18 am UTC

Postby Rysto » Thu Apr 26, 2007 1:05 am UTC

You know what? Scratch that; I forgot what fold actually does. It shouldn't be that difficult to implement either map() or fold() in Java.
Rysto
 
Posts: 1443
Joined: Wed Mar 21, 2007 4:07 am UTC

Postby Rysto » Thu Apr 26, 2007 1:27 am UTC

Code: Select all
public interface MapFunction<D, R> {
    public R apply(D x);
}

public static <D, R> List<R> map(MapFunction<D, R> f, List<D> list) {
    List<R> out = new ArrayList<R>(list.size()); //ok, I'll admit this isn't optimal

    for(D x : list) {
        out.add(f.apply(x));
    }

    return out;
}

public interface FoldFunction<T> {
    public T apply(T x, T y);
}

public static <T> T foldl(FoldFunction<T> f, T init, Iterable<T> list) {
    for(T x : list)
        init = f.apply(init, x);

    return init;
}

public static <T> T foldr(FoldFunction<T> f, T init, Iterable<T> list) {
    return foldr_helper(f, init, list.iterator());
}

private static <T> T foldr_helper(FoldFunction<T> f, T init, Iterator<T> it) {
    if(!it.hasNext())
        return init;

    return f.apply(it.next(), foldr_helper(f, init, it)); // Java does guarantee left-to-right argument evaluation, so next() runs before the recursive call
}
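For concreteness, here is a self-contained, compilable variant of the above with a usage example (Folds and FoldsDemo are my names; note that java.util.List has size() rather than length(), and foldr needs a return). The anonymous classes show how verbose the call sites get without function literals:

```java
import java.util.ArrayList;
import java.util.List;

// The interfaces mirror the ones in the post above.
interface MapFunction<D, R> {
    R apply(D x);
}

interface FoldFunction<T> {
    T apply(T x, T y);
}

class Folds {
    static <D, R> List<R> map(MapFunction<D, R> f, List<D> list) {
        List<R> out = new ArrayList<R>(list.size()); // List has size(), not length()
        for (D x : list) {
            out.add(f.apply(x));
        }
        return out;
    }

    static <T> T foldl(FoldFunction<T> f, T init, Iterable<T> list) {
        for (T x : list) {
            init = f.apply(init, x);
        }
        return init;
    }
}

class FoldsDemo {
    public static void main(String[] args) {
        List<Integer> xs = new ArrayList<Integer>();
        for (int i = 1; i <= 4; i++) {
            xs.add(i);
        }

        // Each call site needs a whole anonymous class: verbose, but it works.
        List<Integer> doubled = Folds.map(new MapFunction<Integer, Integer>() {
            public Integer apply(Integer x) { return x * 2; }
        }, xs);

        Integer sum = Folds.foldl(new FoldFunction<Integer>() {
            public Integer apply(Integer a, Integer b) { return a + b; }
        }, 0, xs);

        System.out.println(doubled + " " + sum); // prints: [2, 4, 6, 8] 10
    }
}
```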
Rysto
 
Posts: 1443
Joined: Wed Mar 21, 2007 4:07 am UTC

Postby bitwiseshiftleft » Thu Apr 26, 2007 2:34 am UTC

tendays wrote:As soon as you have a moderately expressive type system, type checking becomes undecidable. So a type checker will either be sound but incomplete (anything it accepts is guaranteed to work but it will reject perfectly valid programs) or complete but unsound (if it rejects something then it's guaranteed to be bad - things that go through might fail at runtime). (Also, some are neither sound nor complete, obviously).


Or it could just not always terminate. Or maybe it could always terminate and be sound, but not be as expressive as you want, unless you pass a flag, like, say, -fallow-undecidable-instances.
bitwiseshiftleft
 
Posts: 295
Joined: Tue Jan 09, 2007 9:07 am UTC
Location: Stanford

Postby tendays » Thu Apr 26, 2007 2:06 pm UTC

About map and friends:
I know it's possible - it's just ugly and heavy.
This is the code I use:
The Tools.map() function:
Code: Select all
    public static List map(List l, Func f) {
        List a = new ArrayList();
        for (Iterator i = l.iterator(); i.hasNext(); )
            a.add(f.f(i.next()));
        return a;
    }

This part is not too ugly. But. This is one of the numerous cases where I use that function in my program:
Code: Select all
Tools.map(local, new Func() {
    public Object f(Object o) {
        return ((DepSt) o).clean();
    }
});

Instead of
Code: Select all
local.map( o -> o.clean() )

like it would be in a proper functional language with generic/parametric types.
Of course, as a consequence, the directory with compiled classes is littered with stuff like Cname$1.class up to Cname$412312.class, because every single time I use a map or a filter or whatever I create an inner class (okay, maybe not 412312, but still too many).
My fingers are getting tired of typing these "new Func(){public Object f(Object o)". Maybe I should use a preprocessor, actually??

Rysto: I'll investigate about that javac option for generics. Thanks.
tendays
 
Posts: 957
Joined: Sat Feb 17, 2007 6:21 pm UTC
Location: HCMC

Postby Rysto » Thu Apr 26, 2007 2:44 pm UTC

Apparently, it's not documented, but if you do:
javac -source 1.5 -target jsr14

Then you get access to the following 1.5 features:
- generics
- varargs
- for-each loop over arrays and Collections
- autoboxing (not as efficient as 1.5 autoboxing)
Rysto
 
Posts: 1443
Joined: Wed Mar 21, 2007 4:07 am UTC

Why can't we have both?

Postby cdfh » Thu Apr 26, 2007 3:23 pm UTC

Hmm, apologies if I'm gatecrashing the party.

I like the static typing which Haskell provides. However, I also like the flexibility of languages like Lisp/Ruby/Perl, and duck typing.

I sometimes find Haskell is a bit too strict, however. For example, maybe I have a list of $things, where I have no idea what type each item will be. Maybe they are all objects I would like to kill (and eat).

Excuse my pseudo-ruby;

# Pop'n'kill all the things in the list
until (killstack.empty?) { |item| item.kill }

The thing is, of course, what if +item+ doesn't respond to kill()? That would be annoying, but we could tell much earlier, when push()ing things onto the stack, that they would cause problems.

Could we not have a nice syntax for a run-time type-checker (do we already?)?

list.elements_obey(:kill, :eat, :sleep)

But of course, some of this could be done at compile-time, like in Haskell, although it would be much harder to do in a non-static, non-functional language.

While variables (and even classes) can change within the program, there are times when we know exactly what a class will look like.

For example;

x=5

We know x is an Integer at this point. We can trace x, and notice any point where it could potentially change type, etc (note: this should not create a halting problem, since we're not running the code - we're just looking at the code). At the points where x does change type, we may still be able to decide which types it could potentially be:

if ... then x = "String" else x+= 1;

After the execution of that statement, x will either be a String, or a type which responds to +=.

Consider:

if foo() then x.delete("bar") else x+=1;

This gives us a bit more information. It tells us that after evaluating foo(), x must either be something that responds to deleting (with an argument which is a string - we would trace the argument for the delete function to check that a string would not violate any of its typing), or += (with an argument which is an integer).

Realistically, I don't know how useful this would be. I think the compiler could gather a lot of information about the possible states of all the variables at different places in the code, but it would take a fair time to compute (probably an unreasonable length of time for general-purpose Ruby programs; it would probably want to save the compiled code).

It's quite possible that, even in small programs, each variable could potentially be so many things that the extra information is just redundant (s/possible/likely/).

However, for small functions, it might be very useful. Frequently, in Haskell, I get type-errors from returning the car of a list, when I really wanted to return a singleton list containing the car (if that doesn't make sense, try s/car/head/). With the above type-checking, dynamic languages could often tell me at compile time that I'm being silly. Normally, I just don't code the same way in dynamic languages as I do in Haskell - if I did, I would never leave the debugging stage.

Anyway, that's my $0.02 - I suspect most of that is nonsense. Criticise, flame, and shoot me where appropriate :-)
cdfh
 
Posts: 2
Joined: Thu Apr 26, 2007 2:32 pm UTC
Location: Glasgow, Scotland (UK)

Postby Yakk » Thu Apr 26, 2007 4:16 pm UTC

What I want is the ability to say:

This list should only contain objects that have a kill method.

and have the compiler try to check it statically at compile time. When the result is ambiguous, it tells me.

I can then assert that the object has a kill method, optionally with a branching construct that tells me at run-time if the assertion failed.

This takes a fair amount of work, but it is what the generic library of C++0x is attempting to make easy.

One places a number of requirements on what the list needs from its elements.

One writes a wrapper that takes an arbitrary object, uses the compiler-understood requirements, and trusses it up in a run-time construct that wraps the underlying object in the interface that the list is requesting.

Now someone can come along and add a chicken to the list. If the chicken has a kill method, it works. If the chicken has a slay() method, it fails.

The person who wants to add the chicken goes "Hmm", writes compiler-understood instructions that say "chickens match the 'can_be_killed' requirement by calling the slay() method", and now when you add the chicken to the list, it wraps the chicken in a translator that calls the slay() method.

Notice that this still does compile-time checking of types. So anything in the list is killable, even if it has no kill() method!

I find that kind of automatic code generation based off of explicit instructions by the programmer to be very interesting.

Naturally it cannot do everything. But that is why dynamic_cast exists. :)
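Yakk is describing C++0x concepts plus concept maps, which have no direct Java equivalent, but the shape of the wrapper idea can be hand-sketched with an adapter (all names here are invented); the key difference is that in C++0x the adapter would be generated by the compiler from the concept map rather than written by hand:

```java
import java.util.ArrayList;
import java.util.List;

// The requirement the list places on its elements.
interface Killable {
    void kill();
}

// A chicken has no kill() method, only slay().
class Chicken {
    boolean dead = false;
    void slay() { dead = true; }
}

// The programmer-supplied "concept map": chickens satisfy the
// can_be_killed requirement by calling slay().
class ChickenAsKillable implements Killable {
    private final Chicken chicken;
    ChickenAsKillable(Chicken chicken) { this.chicken = chicken; }
    public void kill() { chicken.slay(); }
}

class KillList {
    private final List<Killable> items = new ArrayList<Killable>();
    void add(Killable k) { items.add(k); }          // checked at compile time
    void killAll() { for (Killable k : items) k.kill(); }
}
```

Everything in the list is statically guaranteed to be killable, even though Chicken itself has no kill() method.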
Yakk
Poster with most posts but no title.
 
Posts: 10448
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Postby evilbeanfiend » Thu Apr 26, 2007 4:22 pm UTC

that does sound pretty sweet. hopefully the library/compiler vendors are a little more ahead of the game with the standardisation process now, so we won't have to wait too long (although convincing the company to upgrade mid-project might be a task)
in ur beanz makin u eveel
evilbeanfiend
 
Posts: 2650
Joined: Tue Mar 13, 2007 7:05 am UTC
Location: the old world

Re: Why can't we have both?

Postby taylor_venable » Fri Apr 27, 2007 2:01 am UTC

cdfh wrote:I sometimes find Haskell is a bit too strict, however. For example, maybe I have a list of $things, where I have no idea what type each item will be. Maybe they are all objects I would like to kill (and eat).

But Haskell is statically-typed, so you know the type of everything all the time. And besides, lists can only contain elements of the same type. If you want to make sure you can kill and eat them, you can use a type class to ensure that whatever particular type we're looking at supports those functions.

cdfh wrote:The thing is, of course, what if +item+ doesn't respond to kill()? That would be annoying, but we could tell much earlier, when push()ing things onto the stack, that they would cause problems.

Could we not have a nice syntax for a run-time type-checker (do we already?)?

list.elements_obey(:kill, :eat, :sleep)

For Ruby, yes. See the Object#respond_to? method, which can (of course) be applied to any object of any type, presuming it's not overridden. In Python, you can always check the method dictionary for the class, working your way up the hierarchical chain (I don't know if there's a "nice" way to do this like in Ruby, though).

cdfh wrote:But of course, some of this could be done at compile-time, like in Haskell - although, it would be much harder to do in a non-static, non-functional language.

This makes me think how cool it would be to have an annotation mechanism that would be the equivalent of type classes (from Haskell) or interfaces (from Java) in a language like Ruby. Maybe some way to codify that some variable in the current scope will only hold values that correspond to types that support certain features. Of course, in Ruby you've got built-in support to find out whether a message is supported by a given receiver, so you might as well extend this to a generalized invariant system, like a contract system:
Code: Select all
invariant x.responds_to?('kill')
Now if x ever gets assigned a value which doesn't support kill(), the program aborts. Of course, this check would have to run whenever x gets assigned, which could be quite frequently. But it would be safer.
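A rough Java approximation of that per-assignment invariant (Checked and Invariant are names of mine, sketching the idea rather than any real library) is a holder that re-runs a predicate on every assignment:

```java
// Sketch: a holder that enforces an invariant on every assignment, the Java
// analogue of the hypothetical  invariant x.responds_to?('kill')  above.
class Checked<T> {
    interface Invariant<V> { boolean holds(V value); }

    private final Invariant<T> invariant;
    private T value;

    Checked(Invariant<T> invariant) { this.invariant = invariant; }

    void set(T v) {
        // The check runs on each assignment, as in the proposed contract system.
        if (!invariant.holds(v)) {
            throw new IllegalArgumentException("invariant violated: " + v);
        }
        value = v;
    }

    T get() { return value; }
}
```

As noted above, the cost is paid on every assignment, so hot loops would feel it most.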

cdfh wrote:(note: this should not create a halting problem, since we're not running the code - we're just looking at the code)

...

if foo() then x.delete("bar") else x+=1;

(with an argument which is a string - we would trace the argument for the delete function to check that a string would not violate any of its typing)

Unfortunately, this has to be done at runtime in a language like Ruby, because the definition of delete() can be altered at runtime. And besides, the fact that the parameter is a string isn't important: just that whatever it is, it responds to whatever messages the parameter is sent in the course of the execution of delete().

cdfh wrote:Realistically, I don't know how useful this would be. I think the compiler could gather a lot of information about the possible state of all the variables at different places in the code, but it would require a fair time to compute (probably an unreasonable length of time for general purpose ruby programs - it would probably want to safe the compiled code).

I think it would be useful, but (as you say) quite expensive, and in some cases completely unknowable. I think runtime invariants are a better idea, but again there's a performance problem. Although... it may not be as bad as static type checking and inference for a dynamically typed language, and it's spread out during the execution of the program, so it wouldn't be as noticeable except in places of extreme assignment activity (e.g. loops). If you did use some kind of static type checking or inference, you'd want to just compile that code to bytecode right then, or else you'd have to do the checking each time you ran the script.
My website: http://real.metasyntax.net:2357/
"I have never let my schooling interfere with my education." (Mark Twain)
taylor_venable
 
Posts: 30
Joined: Mon Apr 09, 2007 7:22 pm UTC
Location: United States

Yes, but do more at compile-time

Postby cdfh » Mon Apr 30, 2007 1:22 am UTC

Hello Taylor,

Yes, I knew about respond_to?, however that is executed at runtime. I think it would be nice if Ruby had a preprocessor to look at these things at compile-time.

You make a valid point that methods (delete) can be overridden at runtime, but if we trace the code, we can tell whether or not it's possible for the method to have been overridden (and whenever we call eval $str, the compiler sighs and forgets everything it knows, since anything (and possibly everything) may have changed. It would then only be able to continue type checking for that scope at runtime).

Anyway, yes - I think we agree that it would be nice, but (alas) it is probably unrealistic. There are many, many things that the compiler would need to keep track of, and a single eval() breaks most of that (though with a modular design the damage would be kept to a minimum).

Maybe we should just focus on making better interfaces between the two languages? There are some functions which just make sense to write in Haskell, and much less so in Ruby|Lisp|etc, and vice versa (I won't supply examples, since I'm sure you know plenty yourselves - and if not, I'm probably wrong :-) ).

Anyway, I must work!

Cheers :-)
cdfh
 
Posts: 2
Joined: Thu Apr 26, 2007 2:32 pm UTC
Location: Glasgow, Scotland (UK)

Postby Alan » Thu May 03, 2007 10:40 pm UTC

Static typing is a proof framework that makes it easier for the compiler to prove that certain aspects of your program work the way they are intended to work. A drawback is that you must write your program within the constraints of that proof framework.

As general proof system (such as ACL2 - http://www.cs.utexas.edu/users/moore/acl2/) technology advances, languages will be able to have the flexibility of dynamic typing while maintaining the compile-time correctness checking (and then some) of static typing.
Alan
 
Posts: 39
Joined: Thu Feb 08, 2007 9:09 pm UTC

Postby Yakk » Fri May 04, 2007 3:23 pm UTC

But, C++ has the flexibility of dynamic typing.

You just call dynamic_cast<> whenever you need to dynamically type, and use a "single root object" object-model. When you want run-time only checks, you can do them.

I'll admit there is boiler-plate code you need to write, and it needs to be less verbose...

A dynamically typed language is basically a language that disallows static types. And I find that annoying.
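Yakk's point translates directly to Java, which already has the single-root object model: type the variable as Object and opt into runtime checks locally, with instanceof plus a cast playing the role of dynamic_cast<>. A sketch (DynamicStyle is an invented name), not a recommendation:

```java
// Dynamic typing opted into locally, inside a statically typed language:
// the parameter is Object, and all type checks happen at runtime.
class DynamicStyle {
    static String describe(Object o) {
        // The moral equivalent of dynamic_cast<>: instanceof plus a cast.
        if (o instanceof Integer) {
            return "int: " + (((Integer) o) + 1);
        }
        if (o instanceof String) {
            return "string: " + ((String) o).length();
        }
        return "unknown";
    }
}
```

The boilerplate cost Yakk mentions is visible here: every dynamic dispatch point needs an explicit check-and-cast pair.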
Yakk
Poster with most posts but no title.
 
Posts: 10448
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Postby fortyseventeen » Sat May 05, 2007 6:10 pm UTC

Sorry to join late, but I found this discussion incredibly amusing. I observe three things:

• Both statically and dynamically typed languages have done a decent job at imitating each other's benefits. e.g. C++ has templates, interfaces and dynamic casts, each of which emulates a particular aspect of dynamic/duck typing, while Ruby is completely duck-typed and does not require variable declarations, but strictly enforces naming conventions to disambiguate constants, members, locals, and globals.

• This looks like a two-sided battle, but only a few have come to the conclusion that both static and dynamic typing have their purpose. Having used a good number of languages from both sides, I can say with some certainty that the deciding factor for using one or the other type of language in a project is runtime efficiency vs. programmer efficiency.

• No one has come to the root of many of the described problems: documentation, or the lack thereof. It can completely disambiguate the meaning and format of method arguments. Variable names can be a part of this documentation. It can't fix mistakes, but it can prevent a great deal of them.
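The first point can be sketched concretely: C++ templates give a compile-time flavor of duck typing, where any type with the right members is accepted and no shared base class is needed (make_noise, Duck, and Robot are invented examples):

```cpp
#include <cassert>
#include <string>

// Compile-time duck typing via templates: any T with a suitable speak()
// compiles, with no shared base class or interface. A type without
// speak() is rejected when the template is instantiated, not at run time.
template <typename T>
std::string make_noise(const T& t) {
    return t.speak();
}

struct Duck  { std::string speak() const { return "quack"; } };
struct Robot { std::string speak() const { return "beep"; } };
```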
Last edited by fortyseventeen on Sat May 05, 2007 6:20 pm UTC, edited 1 time in total.
Quick, what's schfifty-five minus schfourteen-teen?
User avatar
fortyseventeen
Ask for a lame title, receive a lame title
 
Posts: 88
Joined: Fri Mar 02, 2007 3:41 am UTC
Location: SLC, UT, USA

Postby Yakk » Sat May 05, 2007 6:20 pm UTC

fortyseventeen, I didn't know that runtime typed languages had faster runtime performance!
User avatar
Yakk
Poster with most posts but no title.
 
Posts: 10448
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Postby fortyseventeen » Sat May 05, 2007 6:21 pm UTC

Yakk wrote:fortyseventeen, I didn't know that runtime typed languages had faster runtime performance!


Explain please. It's well-known that dynamically typed languages generally improve programmer efficiency at the cost of speed and memory.

If you're referring to the fact that all type errors can be found at once during compilation, rather than one at a time during interpretation, I don't see that as much of an advantage. CS instructors and coder friends of mine all agree that it is best to fix one bug at a time, since errors can cascade very easily, and bugfixes can very easily make bugs.
Quick, what's schfifty-five minus schfourteen-teen?
User avatar
fortyseventeen
Ask for a lame title, receive a lame title
 
Posts: 88
Joined: Fri Mar 02, 2007 3:41 am UTC
Location: SLC, UT, USA

Postby fortyseventeen » Sat May 05, 2007 6:45 pm UTC

Yakk wrote:A dynamically typed language is basically a language that disallows static types. And I find that annoying.


Although that's true of languages that call themselves "dynamically typed", it's not necessarily true of all languages that contain dynamic typing constructs. It's conceivable that one could write a Java library that declares an interface for every method, uses nothing but Objects as method arguments, and uses interface casts in order to access their methods. This would be a fairly verbose, but complete, duck-typing system. You could then begin to mix dynamic and static elements together, since the language natively supports static typing.

Personally, I would find such a system overly complicated, but it's one possible way of combining the two paradigms, out of the many ways that already exist.

I would prefer a new language that started from the assumption that dynamic typing is desired, but that allows types to be explicitly specified. Still, what reasons would one have to use such a language?
Quick, what's schfifty-five minus schfourteen-teen?
User avatar
fortyseventeen
Ask for a lame title, receive a lame title
 
Posts: 88
Joined: Fri Mar 02, 2007 3:41 am UTC
Location: SLC, UT, USA

Postby Yakk » Sat May 05, 2007 8:56 pm UTC

Explain please. It's well-known that dynamically typed languages generally improve programmer efficiency at the cost of speed and memory.


I do not find that dynamically typed languages generally improve programmer efficiency.

Dynamically typed languages do make it easier to toss out a short program by an unskilled programmer. Programs of any reasonable size (say, that take more than 1 programmer-year to build) are aided by static type contracts.

Scripting languages can improve programmer efficiency at the cost of speed and memory and precision of instruction.

Many scripting languages are dynamically typed: i.e., they lack static type-checking facilities. This is unfortunate, and a flaw in the language in question.

If you're referring to the fact that all type errors can be found at once during compilation, rather than one at a time during interpretation, I don't see that as much of an advantage. CS instructors and coder friends of mine all agree that it is best to fix one bug at a time, since errors can cascade very easily, and bugfixes can very easily make bugs.


Type errors and contracts can be found and enforced statically, making for simpler tests and more reliable runtime execution.

Although that's true of languages that call themselves "dynamically typed", that's not necessarily true of all languages that contain dynamic typing constructs.


I am not aware of a serious language that lacks dynamic typing constructs. There are, effectively, no serious "pure static typed languages".

As this thread seems to be about the difference between C++ and other "static typed languages" and python/perl and other "dynamic typed languages", I stand by my assertion: what makes a language "dynamic typed" in common parlance is a lack of static type contracts.

It's conceivable that one could write a Java library that declares an interface for every method, uses nothing but Objects as method arguments, and uses interface casts in order to access their methods. This would be a fairly verbose, but complete, duck-typing system. You could then begin to mix dynamic and static elements together, since the language natively supports static typing.


Yes. You can do this in C++, you can do this in C, you can do this in every "statically typed" language I can remember programming in.

You cannot, meanwhile, use static type contracts in perl, python or (as far as I know) ruby.

Personally, I would find such a system overly complicated, but it's one possible way of combining the two paradigms, out of the many ways that already exist.


I suppose the difference is, a "statically typed language" presumes that wanting to bypass the type system is the exception, not the rule. As such, it makes the required syntax non-trivial.

I would prefer a new language that started from the assumption that dynamic typing is desired, but that allows types to be explicitly specified. Still, what reasons would one have to use such a language?


I would prefer a language with a better ability to describe and generate static types. :)

It is a ridiculously rare event that someone actually wants a container that holds anything. It is almost always the case that the thing that goes into the container, and the thing that comes out, is actually a specific kind of thing.

That is what types are. A way of describing what kind of thing is wanted.
User avatar
Yakk
Poster with most posts but no title.
 
Posts: 10448
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Postby fortyseventeen » Sun May 06, 2007 12:38 am UTC

Yakk wrote:I do not find that dynamically typed languages generally improve programmer efficiency.

Dynamically typed languages do make it easier to toss out a short program by an unskilled programmer. Programs of any reasonable size (say, that take more than 1 programmer-year to build) are aided by static type contracts.


Have you built many large projects in scripting languages? I find that cases for code reuse are encountered far more frequently than cases for type contracts. Any project can be broken up into individually verifiable pieces, as small as needed to ensure reliability and compatibility. If anything, scripting languages are far more scalable, even though the community around them (notably Perl) may give them a bad name.

Scripting languages can improve programmer efficiency at the cost of speed and memory and precision of instruction.


Very true, and this is why C and C++ are called "systems" languages. Where and when precision is critical, they should be used.

Many scripting languages are dynamically typed: ie, they lack static type-checking facilities. This is unfortunate, and a flaw in the language in question.


I'm sorry that you think of it as a flaw. It's simply an alternate paradigm, with its own uses. Maybe you are not clear on its pros/cons.

Type errors and contracts can be found and enforced staticly, thus making simpler tests and more reliable runtime execution.


Static and dynamic typing are not the same as type safety, or strong typing, which is what I think you are referring to when you say that it improves testability and reliability.

C/C++ are not strongly typed, simply because of the way casts are handled. A cast tries to fit whatever data is in the variable into the specification of some type, regardless of its original type. This allows data to be interpreted in arbitrarily many ways, regardless of whether those interpretations are meaningful or even legal, which may result in erroneous data or pointer errors. This actually has the potential to make testing quite a pain. C++'s polymorphic OO model along with dynamic_cast can prevent many possible invalid casts, making it substantially better than weakly typed, but it will still allow explicit casts that could possibly be invalid. Such casts are still used in practice, because they are more efficient, in both senses of the word. It's the case of "enough rope to hang yourself."
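A small illustration of that weakness (bits_of is an invented helper; std::memcpy is the well-defined way to reinterpret bytes in modern C++, where the old-style pointer cast would be undefined behavior):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// C and C++ let you reinterpret the bytes of one type as another,
// whether or not the reinterpretation is meaningful. The type system
// does not stop you; it just assumes you meant it.
uint32_t bits_of(float f) {
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;  // the float's raw IEEE-754 bit pattern, now "an integer"
}
```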

Java's typing is strong, since it allows only polymorphic casts that are known to be legal (i.e. if the types are "equivalent", as determined by the inheritance tree). This is possible because all objects carry reflective type information, just as in Ruby. It does not, however, carry information about generic types, but its (complicated) generics syntax lets it verify types statically, while limiting some functionality.

Ruby is strongly typed for the same reasons, but instead of casts, all type conversion is performed by methods, a model that supports and is supported by duck typing. If an integer value is expected from a method argument, the method will call obj.to_i and let the object decide what its integer representation is. In cases not dealing with basic types, though, duck typing itself is enough to eliminate the need for type conversion, as well as generics/templates.

Python and Perl are not strongly typed, but I don't feel a need to discuss them here; I only need point out that the consequences are evident.

I am not aware of a serious language that lacks dynamic typing constructs. There are, effectively, no serious "pure static typed languages".

...

Yes. You can do this in C++, you can do this in C, you can do this in every "staticly typed" language I can remember programming in.


I suppose that if you wanted to write a dynamic typing library for C, this would be true, but dynamic languages are already written (most of them in C) for this purpose. I would still argue, though, that C's types are in fact completely static. It's the weakness of C types that would allow any possible dynamic functionality. (In other words, it fakes its way around.)

You cannot, meanwhile, use static type contracts in perl, python or (as far as I know) ruby.


Correct. I have personally found no reason for there to be such contracts in said scripting languages, though, outside of what we have already discussed (speed, reliability, etc.). It's a simple trade-off, with no lack of functionality, which is why there are different languages across the perpendicular spectrums of type dynamics and safety.

I suppose the difference is, in a "staticly typed language", it presumes wanting to bypass the type system is an exception, not a rule. As such, it makes the syntax required non-trivial.


Exactly! This is the trade-off! This is why I was miserable coding a Half-Life 2 mod completely in C++, but why I loved writing an encryption system in the same language. It's also why writing a sudoku generator in Ruby is less than exciting (very slow to run), but why a Rails app can be one of your most amazing coding experiences.

I would prefer a language with a better ability to describe and generate static types. :)


I'm ignorant of any such advances in current static languages (they seem to be doing pretty well already), but I know that Ruby 2.0 will feature a metacompiler and runtime machine, so that cases in which object (duck) type can be predetermined, will be. e.g. if an object is known not to contain a quack method at the time that it is sent the 'quack' message, the compiler will be able to report this before execution begins. Again, this is only an issue of efficiency.

It is a ridiculously rare event that someone actually wants a container that holds anything. It is almost always the case that the thing that goes into the container, and the thing that comes out, is actually a specific kind of thing.


Yes, and that is why duck typing even exists in scripting languages. It enforces the assumption that an object is capable of receiving the message that it is sent, thus narrowing the definition of the object's type. In this way, it only verifies the parts of the definition necessary to provide the desired functionality. This is its strength; it provides complete polymorphism, without the need for conversion or explicit type checking, without becoming mutually exclusive to either of them.

I still feel that any doubts you might have about the usefulness of dynamic languages are tied to a misunderstanding of their purpose and context. I haven't even touched on the many uses of metaprogramming.

Finally, I apologize for giving the impression of a Ruby evangelist, but it really is the best example of these paradigms, since it's the only language that I know of that was written with all of them in mind.
Quick, what's schfifty-five minus schfourteen-teen?
User avatar
fortyseventeen
Ask for a lame title, receive a lame title
 
Posts: 88
Joined: Fri Mar 02, 2007 3:41 am UTC
Location: SLC, UT, USA

Postby Yakk » Sun May 06, 2007 3:28 pm UTC

fortyseventeen wrote:
Yakk wrote:I do not find that dynamically typed languages generally improve programmer efficiency.

Dynamically typed languages do make it easier to toss out a short program by an unskilled programmer. Programs of any reasonable size (say, that take more than 1 programmer-year to build) are aided by static type contracts.


Have you built many large projects in scripting languages? I find that cases for code reuse are encountered far more frequently than cases for type contracts. Any project can be broken up into individually verifiable pieces, as small as needed to ensure reliability and compatibility. If anything, scripting languages are far more scalable, even though the community around them (notably Perl) may give them a bad name.


No. The largest project I've played with in a scripting language was about 6 programmer-months in size. It had already gotten unwieldy.

Scripting languages can improve programmer efficiency at the cost of speed and memory and precision of instruction.


Very true, and this is why C and C++ are called "systems" languages. Where and when precision is critical, they should be used.


I said can, not will. :)

Many scripting languages are dynamically typed: i.e., they lack static type-checking facilities. This is unfortunate, and a flaw in the language in question.


I'm sorry that you think of it as a flaw. It's simply an alternate paradigm, with its own uses. Maybe you are not clear on its pros/cons.


How can lacking an additional feature, for which I can find explicit uses, not be a flaw?

Type errors and contracts can be found and enforced statically, making for simpler tests and more reliable runtime execution.


Static and dynamic typing are not the same as type safety, or strong typing, which is what I think you are referring to when you say that it improves testability and reliability.


No, I'm not talking about strong typing. Strong typing seems more like a pipe dream than anything practical.

If I had meant strong typing, I'd have said strong typing. :)

I am not aware of a serious language that lacks dynamic typing constructs. There are, effectively, no serious "pure static typed languages".

...

Yes. You can do this in C++, you can do this in C, you can do this in every "statically typed" language I can remember programming in.


I suppose that if you wanted to write a dynamic typing library for C, this would be true, but dynamic languages are already written (most of them in C) for this purpose. I would still argue, though, that C's types are in fact completely static. It's the weakness of C types that would allow any possible dynamic functionality. (In other words, it fakes its way around.)


Your use of weakness above is, I presume, formal?

Dynamic typing in C is pretty common. You toss metadata into a struct and use it to determine which of the byte-compatible structs the struct actually is.

C is little more than an ASM language, so it does get ugly. :)
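That idiom looks something like this (a sketch in C-style C++; Tag, Value, and as_number are invented names):

```cpp
#include <cassert>

// C-style dynamic typing: a tag field tells you which of the
// byte-compatible layouts the value actually holds.
enum Tag { TAG_INT, TAG_FLOAT };

struct Value {
    Tag tag;          // metadata describing the "real" type at run time
    union {
        int   i;
        float f;
    } as;
};

// A "dynamically typed" accessor: dispatch on the tag at run time.
double as_number(const Value& v) {
    return v.tag == TAG_INT ? (double)v.as.i : (double)v.as.f;
}
```

Nothing stops you from reading the wrong union member; the discipline is entirely manual, which is why it gets ugly.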

You cannot, meanwhile, use static type contracts in perl, python or (as far as I know) ruby.


Correct. I have personally found no reason for there to be such contracts in said scripting languages, though, outside of what we have already discussed (speed, reliability, etc.). It's a simple trade-off, with no lack of functionality, which is why there are different languages across the perpendicular spectrums of type dynamics and safety.


I'm aware of the Turing tar-pit: anything one language can do can be done in another. Bringing up "no lack of functionality" in the context of a programming-language difference is not a useful comment.

However, the inability to express type requirements does reduce the ability of the language to express some concepts. I'm all for a language that can dynamically type everything: but the fact that such languages do not provide the option to enforce types is annoying. It is an entire category of useful and pithy automatic code generation that is tossed out with the bathwater.

The reason I find type safety useful has little to do with speed. ASM is an untyped language for the most part, yet is much faster than C or C++. C's type system is poor, and yet it runs faster than C++ in many contexts.

So long as you are hung up on "dynamic typing is better, but results in slower code", you are not understanding my point at all. This isn't about the tradeoff between runtime speed and programmer time -- rather, this is about a failure of some languages to provide entire classes of tools that save programmer time.

I have no problem with languages that allow you to create "any type" variables. I have problems with languages that don't let me create "this variable must be of type Z" explicit contracts, or containers that contain variables of type Z. I want to be able to enforce explicit conversion on people who place anytype variables into a variable of type Z, or who convert a variable of type Z to an anytype variable, or allow implicit conversion one way or another. This typing should be duck-typed, or explicit-type, or inheritance-based, as I choose.

Why? So when I create a data structure containing Z-type things, I can write internal code that operates on the Z-type things. Any internal code that acts on the Z-type things as something other than a Z-type thing should raise flags, or require explicit programmer actions to recast the Z-type thing to a broader type.
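In C++ terms, that contract is just a typed container plus functions written against it (Z and sum are invented for illustration):

```cpp
#include <cassert>
#include <vector>

// A static type contract: a container of Z holds only Z.
struct Z { int n; };

int sum(const std::vector<Z>& zs) {
    int total = 0;
    for (const Z& z : zs) total += z.n;  // internal code can rely on Z-ness
    return total;
}

// zs.push_back("hello") or sum of a vector of strings would be rejected
// at compile time; the contract is checked before the program ever runs.
```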

None of this has anything at all to do with efficiency of the running code. Not one iota. It has to do with, among other things, catching mistakes earlier in the development cycle, enforcing simple contracts, and having a partially self-documenting interface.

I would prefer a language with a better ability to describe and generate static types. :)


I'm ignorant of any such advances in current static languages (they seem to be doing pretty well already), but I know that Ruby 2.0 will feature a metacompiler and runtime machine, so that cases in which object (duck) type can be predetermined, will be. e.g. if an object is known not to contain a quack method at the time that it is sent the 'quack' message, the compiler will be able to report this before execution begins. Again, this is only an issue of efficiency.


Which is, again, completely missing the point as to why I want a better type system.

The ability for compilers to figure out "oh, X is always an integer" is cute, but it only leads to a limited amount of speedup, and does nothing to make writing a program easier.
User avatar
Yakk
Poster with most posts but no title.
 
Posts: 10448
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Postby torne » Tue May 08, 2007 11:58 am UTC

Yakk wrote:However, the inability to express type requirements does reduce the ability of the language to express some concepts. I'm all for a language that can dynamically type everything: but the fact that such languages do not provide the option to enforce types is annoying. It is an entire category of useful and pithy automatic code generation that is tossed out with the bathwater.

The problem is, you are conflating two separate issues - the capabilities of static and dynamic typing systems, and the capabilities of actual languages which implement those systems.

As you mention, it's perfectly possible to do dynamic typing in C++, or Java, or whatever - just cast stuff all over the place and use Object references and so on. It's also perfectly possible to do static type checking in Python, or Ruby - just write a tool that does the required analysis, and introduce some syntax for specifying types if you want that too (function decorations, or docstring magic, or whatever). The point you miss is that this sucks, either way. There does not exist today a widely supported language which has the capability to conveniently and easily enforce static typing contracts, and also has the capability to conveniently and easily dynamically type those things it is useful and desirable to dynamically type. Of course I'd like both, if such were available! C++0x looks promising, even.

So, unless one is willing to invent such a language, the options are limited to those tools which actually exist today. The only argument I have been trying to make (which I believe is shared by the other proponents of dynamic typing) is that given that a tradeoff must be made, I would rather lose the ability to conveniently enforce static typing contracts than the ability to ducktype freely without inserting casts, extra interfaces, etc all over the place. The reason I choose to make the tradeoff this way is because in my experience of programming operating systems, compilers, web applications, network servers, embedded graphics engines, and deviant sexual toys, it has universally been the case that very few of my programming errors have ever been caught by the use of static typing contracts. Thus, I choose dynamic typing because I find it quicker to code that way - if nothing else, there are simply fewer keystrokes involved :)

If your experience is different then yes, you probably will choose a different tool. Perhaps your abilities are different to mine. I can do without the tools of static typing, because they don't save me time at all.
User avatar
torne
 
Posts: 98
Joined: Wed Nov 01, 2006 11:58 am UTC
Location: London, UK

Postby evilbeanfiend » Tue May 08, 2007 12:33 pm UTC

The only argument I have been trying to make (which I believe is shared by the other proponents of dynamic typing) is that given that a tradeoff must be made, I would rather lose the ability to conveniently enforce static typing contracts than the ability to ducktype freely without inserting casts, extra interfaces, etc all over the place. The reason I choose to make the tradeoff this way is because in my experience of programming operating systems, compilers, web applications, network servers, embedded graphics engines, and deviant sexual toys, it has universally been the case that very few of my programming errors have ever been caught by the use of static typing contracts. Thus, I choose dynamic typing because I find it quicker to code that way - if nothing else, there are simply fewer keystrokes involved :)


and the only argument i make is that you favour dynamic typing because of the areas you work in. if you wrote critical software for hospital equipment you might insist on static typing (and possibly static logic analysis) to guarantee your code won't kill anyone. both are useful, for a specific job one may be more useful than another, but any discussion about one being better in general is pointless. (essentially i think we are all in heated agreement on this)
in ur beanz makin u eveel
User avatar
evilbeanfiend
 
Posts: 2650
Joined: Tue Mar 13, 2007 7:05 am UTC
Location: the old world

Postby fortyseventeen » Tue May 08, 2007 5:03 pm UTC

evilbeanfiend wrote:
The only argument I have been trying to make (which I believe is shared by the other proponents of dynamic typing) is that given that a tradeoff must be made, I would rather lose the ability to conveniently enforce static typing contracts than the ability to ducktype freely without inserting casts, extra interfaces, etc all over the place. The reason I choose to make the tradeoff this way is because in my experience of programming operating systems, compilers, web applications, network servers, embedded graphics engines, and deviant sexual toys, it has universally been the case that very few of my programming errors have ever been caught by the use of static typing contracts. Thus, I choose dynamic typing because I find it quicker to code that way - if nothing else, there are simply fewer keystrokes involved :)


and the only argument i make is that you favour dynamic typing because of the areas you work in. if you wrote critical software for hospital equipment you might insist on static typing (and possibly static logic analysis) to guarantee your code won't kill anyone. both are useful, for a specific job one may be more useful than another, but any discussion about one being better in general is pointless. (essentially i think we are all in heated agreement on this)


Quite so. I suppose I'm just poor at getting the point across. :lol:

I'm sorry, Yakk, for inflaming the conversation. My intent wasn't prideful, but my arguments were.
Quick, what's schfifty-five minus schfourteen-teen?
User avatar
fortyseventeen
Ask for a lame title, receive a lame title
 
Posts: 88
Joined: Fri Mar 02, 2007 3:41 am UTC
Location: SLC, UT, USA

Postby yy2bggggs » Wed May 09, 2007 1:06 am UTC

Yakk:

To the credit of dynamically typed languages, the very lack of a capability to perform static typing actually does provide additional capability. You're glossing over the issue too quickly by conceiving of it as simply the lack of an optional feature.

The contracts you establish in a statically typed language are always enforced. The implication is that if you don't meet every contract in your code from beginning to end, you can't execute any of the code. This is how static typing works, and it does let you cover all of the contracts, but not being able to execute the code is in fact a price to pay.

By performing all of the typing dynamically, you are allowed to execute your code earlier. The disadvantage of doing this is that you could miss some of your contracts (by not executing them); thus, you could have a bug. But the ability to run your code without fulfilling all of the contracts in itself is a benefit; it allows you to test your code before you have all of your t's crossed and your i's dotted, so if your overall design doesn't work, you don't need to go further. The advantages are comparable to prototyping, only the prototype is your actual code.

Honestly, both statically typed and dynamically typed languages have their place. It's no wonder they both exist.
User avatar
yy2bggggs
 
Posts: 1261
Joined: Tue Oct 17, 2006 6:42 am UTC

Postby Rysto » Wed May 09, 2007 2:05 am UTC

yy2bggggs wrote:By performing all of the typing dynamically, you are allowed to execute your code earlier. The disadvantage of doing this is that you could miss some of your contracts (by not executing them); thus, you could have a bug. But the ability to run your code without fulfilling all of the contracts in itself is a benefit; it allows you to test your code before you have all of your t's crossed and your i's dotted, so if your overall design doesn't work, you don't need to go further. The advantages are comparable to prototyping, only the prototype is your actual code.

Wait, so now it's an advantage to find bugs later in the development cycle?
Rysto
 
Posts: 1443
Joined: Wed Mar 21, 2007 4:07 am UTC

Postby yy2bggggs » Wed May 09, 2007 2:23 am UTC

Rysto wrote:Wait, so now it's an advantage to find bugs later in the development cycle?

No.
User avatar
yy2bggggs
 
Posts: 1261
Joined: Tue Oct 17, 2006 6:42 am UTC

Postby fortyseventeen » Wed May 09, 2007 3:58 am UTC

yy2bggggs wrote:
Rysto wrote:Wait, so now it's an advantage to find bugs later in the development cycle?

No.


Of course it's not. It's a compromise. You lose the precision and reliability of finding all data-type contract failures early on in exchange for indefinite code reuse and shorter development cycles.
Quick, what's schfifty-five minus schfourteen-teen?
User avatar
fortyseventeen
Ask for a lame title, receive a lame title
 
Posts: 88
Joined: Fri Mar 02, 2007 3:41 am UTC
Location: SLC, UT, USA

Postby bitwiseshiftleft » Wed May 09, 2007 7:30 am UTC

Code reuse has come up, like, what, a million times here. But how much better is code reuse in dynamic languages? If I have a lookup method on a collection class that has type, say,
Code: Select all
Monad m => key -> coll key value -> m value
or a fold method that has type
Code: Select all
(key -> value -> b -> b) -> coll key value -> b -> b
then how would a fully dynamic method be more reusable? Can a dynamic language even do this much?

EDIT: In this example, there's always the whole "Haskell vs Side Effects" thing. But that's another debate.

I mean, yes, there are instances where it can be difficult. They occasionally require refactoring, adding additional classes, or plain old copy-paste to reuse the code; usually this is when you have some new object that is like your old one but doesn't share its methods. But they sometimes require refactoring in dynamically-typed languages too; does it happen that much more often than in statically-typed ones?

Do you people have a good example where type classes and parametric polymorphism just totally fail it?
User avatar
bitwiseshiftleft
 
Posts: 295
Joined: Tue Jan 09, 2007 9:07 am UTC
Location: Stanford
