Coding: Fleeting Thoughts

A place to discuss the implementation and style of computer programs.

Moderators: phlip, Moderators General, Prelates

User avatar
PM 2Ring
Posts: 3715
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Sydney, Australia

Re: Coding: Fleeting Thoughts

Postby PM 2Ring » Wed Oct 15, 2014 10:37 am UTC

phlip wrote:
PM 2Ring wrote:Kids these days have it too easy. Back [...] then you'd have to wait a couple of minutes [...]

Excuse me while I go fetch my mother, who did a lot of programming back in the day for batch processors (for which you'd write your code, put it in the queue, and come back the next day to receive your result and/or error) to laugh at you.


What do you think I did on that IBM 360 back in 1973? I think I even have some of my old punch cards hiding in deep storage at my mother's place. Maybe assembler, but more likely PL/I. The machine was at the old Museum of Applied Arts & Sciences in Sydney, the precursor of the Powerhouse Museum.

User avatar
Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Wed Oct 15, 2014 10:43 am UTC

Careful, you're dating yourself. :P

Wonderbolt
Posts: 212
Joined: Thu Mar 13, 2014 12:11 pm UTC

Re: Coding: Fleeting Thoughts

Postby Wonderbolt » Wed Oct 15, 2014 12:42 pm UTC

korona wrote:I recommend anyone who really wants to understand how computers work to write a toy kernel / operating system. The x86 architecture is very well documented so this is not as hard as it sounds. Doing so will certainly make you understand the design decisions that went into C.

The fact that the documentation is good doesn't change that the architecture is all kinds of terrible. Admittedly, you have to deal with most of this nonsense (the A20 gate, protected mode, *long* mode on x64, unreal mode, *all* the annoying tables, etc.) when trying to write a bootloader, not a kernel, but... x86, ew.

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Wed Oct 15, 2014 1:57 pm UTC

Wonderbolt wrote:
korona wrote:I recommend anyone who really wants to understand how computers work to write a toy kernel / operating system. The x86 architecture is very well documented so this is not as hard as it sounds. Doing so will certainly make you understand the design decisions that went into C.

The fact that the documentation is good doesn't change that the architecture is all kinds of terrible. Admittedly, you have to deal with most of this nonsense (the A20 gate, protected mode, *long* mode on x64, unreal mode, *all* the annoying tables, etc.) when trying to write a bootloader, not a kernel, but... x86, ew.

Just use some existing boot loader like GRUB and you won't have to deal with most of the nasty bits. Tables for paging and protection are a necessary evil, even though the x86 segmentation tables are a bit anachronistic. Yes, the x86 architecture is full of legacy stuff but it seems that x86 is here to stay, there is not much one can do about that :D. It also seems that Intel does a very good job of optimizing x86 processors so there is not much desire to change the architecture.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI
Contact:

Re: Coding: Fleeting Thoughts

Postby EvanED » Wed Oct 15, 2014 2:52 pm UTC

Hey, fun fact: address translation on x86 is Turing complete.

That's right: it's possible to perform computation on the x86's MMU without ever dispatching a real x86 instruction.

User avatar
Xeio
Friends, Faidites, Countrymen
Posts: 5101
Joined: Wed Jul 25, 2007 11:12 am UTC
Location: C:\Users\Xeio\
Contact:

Re: Coding: Fleeting Thoughts

Postby Xeio » Wed Oct 15, 2014 2:56 pm UTC

Code: Select all

private static object BuildRtmSecurityHeader(Object objSecurity)
{
    Type type = objSecurity.GetType();
    PropertyInfo[] properties = type.GetProperties();
    var eleXml = new XmlElement[1];
    eleXml[0] = CreateElement();
    properties[0].SetValue(objSecurity, eleXml, null);
    return objSecurity;
}

You might go "Why use object?" when there is exactly one place where this method is called, and only one type that it operates on. Or, as a follow-up, "Why are we casting this right back to the source type from object?".

You might think "Hey, wait, reflection? Is it a good idea to rely on the first property of an object not changing?".

You might ask why it returns the object you passed in. Or why you pass in "new X()" as the parameter to the method.

I can assure you, there are no good answers.

User avatar
Thesh
Made to Fuck Dinosaurs
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC
Location: Colorado

Re: Coding: Fleeting Thoughts

Postby Thesh » Wed Oct 15, 2014 3:13 pm UTC

Or why you would prefix it with "obj" - What is this? VBScript?
Summum ius, summa iniuria.

User avatar
Xeio
Friends, Faidites, Countrymen
Posts: 5101
Joined: Wed Jul 25, 2007 11:12 am UTC
Location: C:\Users\Xeio\
Contact:

Re: Coding: Fleeting Thoughts

Postby Xeio » Wed Oct 15, 2014 3:28 pm UTC

Thesh wrote:Or why you would prefix it with "obj" - What is this? VBScript?
I always found it a bit weird that the convention in C# appears to be to use Hungarian notation for UI controls, while it's extremely frowned upon anywhere else.

Wonderbolt
Posts: 212
Joined: Thu Mar 13, 2014 12:11 pm UTC

Re: Coding: Fleeting Thoughts

Postby Wonderbolt » Wed Oct 15, 2014 4:13 pm UTC

korona wrote:Yes, the x86 architecture is full of legacy stuff but it seems that x86 is here to stay, there is not much one can do about that :D. It also seems that Intel does a very good job of optimizing x86 processors so there is not much desire to change the architecture.

I'm not so sure about that. ARM has gotten a lot of traction, though admittedly x86 still seems to be what's almost exclusively used for PCs.

I mean, feel free to write kernels for it to learn the architecture, I won't stop you. :P I simply feel it'd be much easier to learn to understand computer architecture without all the crapitude of x86 by - for example - writing a kernel for a RasPi.

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Wed Oct 15, 2014 5:58 pm UTC

I don't know about the current ARM processors but I remember that programming some of the older ones (I think it was ARMv3?) was not much better than programming x86. They also have multiple operation modes IIRC. I don't know how ARM handles backwards compatibility; if they keep their processors backwards compatible then today's ARM processors might be as bad as x86 from a systems programming perspective. But maybe someone who knows more about ARM than I do could comment on this?

The main problem with x86 is backwards compatibility. There are a few instructions and a number of CPU modes and tables nobody uses anymore. Those tables were introduced in the 286 which only supported 16-bit modes. The structures had to be extended to 32-bit and then to 64-bit so today their layout is really horrible (e.g. 64-bit pointers are broken up into 8-, 16- or 32-bit parts that are distributed into multiple words of those tables).
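
For a concrete picture, the 64-bit interrupt gate is one of those structures: the handler address ends up scattered across three fields just so the old 16- and 32-bit layouts could stay where they were. Roughly (a sketch from memory, field names are mine and not from any particular kernel):

Code: Select all

#include <cstdint>

// x86-64 IDT interrupt gate (16 bytes). The 64-bit handler address is
// split into three pieces because the 16-bit and 32-bit descriptor
// layouts had to remain backwards compatible.
struct IdtGate64 {
    uint16_t offset_low;   // handler address bits 0..15
    uint16_t selector;     // code segment selector
    uint8_t  ist;          // interrupt stack table index (bits 0..2)
    uint8_t  type_attr;    // gate type, DPL, present bit
    uint16_t offset_mid;   // handler address bits 16..31
    uint32_t offset_high;  // handler address bits 32..63
    uint32_t reserved;
};

// Recovering the pointer means stitching the pieces back together.
inline uint64_t handler_address(const IdtGate64& g) {
    return static_cast<uint64_t>(g.offset_low)
         | (static_cast<uint64_t>(g.offset_mid)  << 16)
         | (static_cast<uint64_t>(g.offset_high) << 32);
}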

However, x86 dominates the TOP500; there are only very few RISC machines left on the list. It seems that the choice of an instruction set doesn't make a great difference anymore when processors use techniques like microcode and register renaming nowadays (but I'm not a CPU designer, so I don't know how much complexity things like 16-bit mode introduce in a modern processor and how much space one could save on the die if those features were removed; I guess it won't be that much, because that stuff is probably handled by the microcode). Yes, it would be nice to dump things like 16-bit modes and the horrible segmentation tables, but setting those things up and entering 64-bit mode takes less than 200 lines of assembler, so there is not much pressure to change the architecture.

There have already been attempts to get rid of x86: Intel tried to promote its Itanium architecture, which is much more modern, but that attempt apparently failed. Many people are using programs that cannot easily be ported to other platforms, e.g. because their vendors simply have no intention of porting them. Furthermore, developing a new CPU architecture is very expensive; billions of dollars have been put into processors like the Intel Core line. That's why I think we will stick with x86 for the next few decades. But of course I might be wrong, people smarter than me have been wrong on such topics before :D.

EvanED wrote:Hey, fun fact: address translation on x86 is Turing complete.

That's right: it's possible to perform computation on the x86's MMU without ever dispatching a real x86 instruction.

Wow, that is really a fun fact :D.

User avatar
Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Wed Oct 15, 2014 6:45 pm UTC

Wonderbolt wrote:
korona wrote:Yes, the x86 architecture is full of legacy stuff but it seems that x86 is here to stay, there is not much one can do about that :D. It also seems that Intel does a very good job of optimizing x86 processors so there is not much desire to change the architecture.

I'm not so sure about that. ARM has gotten a lot of traction, though admittedly x86 still seems to be what's almost exclusively used for PCs.

x86 is in basically all consumer desktops, laptops and consoles?

korona wrote:That's why I think we will stick with x86 for the next few decades.

Just think, we'll still be able to run Doom in 20 years' time.

User avatar
Sizik
Posts: 1260
Joined: Wed Aug 27, 2008 3:48 am UTC

Re: Coding: Fleeting Thoughts

Postby Sizik » Wed Oct 15, 2014 6:51 pm UTC

Xenomortis wrote:
Wonderbolt wrote:
korona wrote:Yes, the x86 architecture is full of legacy stuff but it seems that x86 is here to stay, there is not much one can do about that :D. It also seems that Intel does a very good job of optimizing x86 processors so there is not much desire to change the architecture.

I'm not so sure about that. ARM has gotten a lot of traction, though admittedly x86 still seems to be what's almost exclusively used for PCs.

x86 is in basically all consumer desktops, laptops and consoles?


The only x86 console is the PS4.

Edit: Huh, so is the Xbox One. I guess it was a bigger deal when Sony announced it.
Last edited by Sizik on Wed Oct 15, 2014 6:56 pm UTC, edited 1 time in total.
she/they
gmalivuk wrote:
King Author wrote:If space (rather, distance) is an illusion, it'd be possible for one meta-me to experience both body's sensory inputs.
Yes. And if wishes were horses, wishing wells would fill up very quickly with drowned horses.

User avatar
Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Wed Oct 15, 2014 6:54 pm UTC

Sizik wrote:The only x86 console is the PS4.

XBox One is x86 too.
I guess the Wii U is still PowerPC though.

User avatar
Thesh
Made to Fuck Dinosaurs
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC
Location: Colorado

Re: Coding: Fleeting Thoughts

Postby Thesh » Wed Oct 15, 2014 6:55 pm UTC

The time to move away from x86 was with the move to 64-bit, but unfortunately we messed that one up and just piled onto x86.
Summum ius, summa iniuria.

Ubik
Posts: 1016
Joined: Thu Oct 18, 2007 3:43 pm UTC

Re: Coding: Fleeting Thoughts

Postby Ubik » Wed Oct 15, 2014 6:56 pm UTC

As far as I know, Xbox One also has an x86-64 processor. Wii U has some kind of Power chip in it, though.

User avatar
Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Wed Oct 15, 2014 7:02 pm UTC

Thesh wrote:The time to move away from x86 was with the move to 64-bit, but unfortunately we messed that one up and just piled onto x86.

Intel did try with the Itanium; but it didn't run Doom.
AMD then made a 64 bit processor that did run Doom.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI
Contact:

Re: Coding: Fleeting Thoughts

Postby EvanED » Wed Oct 15, 2014 7:15 pm UTC

korona wrote:I don't know about the current ARM processors but I remember that programming some of the older ones (I think it was ARMv3?) was not much better than programming x86. They also have multiple operation modes IIRC.
Still not the mess of x86 modes though. I only know of two modes1, the main ARM instruction set and Thumb, the latter of which looks more like, say, Java bytecode than a traditional three-address code.

1I'd expect to know of more if there were more, but I also don't feel confident that there aren't. For example, I am pretty familiar with straight ARM assembly, but I know next to nothing about Thumb. I also don't know if AArch64 is backwards compatible.

korona wrote:It seems that the choice of an instruction set doesn't make a great difference anymore when processors use techniques like microcode and register renaming nowadays
There's a recent paper in some arch conference about RISC vs CISC (I think by some U Wisconsin people but I'm not sure); I didn't read it, but the capsule summary I've heard is that it basically doesn't make a difference any more for power consumption or performance.

Edit: http://research.cs.wisc.edu/vertical/pa ... uggles.pdf

"Our study suggests that at performance levels in the range of A8 and higher, RISC/CISC is irrelevant for performance, power, and energy."

korona wrote:There have already been attempts to get rid of x86: Intel tried to promote its Itanium architecture, which is much more modern, but that attempt apparently failed.
To be fair, that was only partially because it wasn't x86; partially it was because it was very atypical even for a non-x86 chip (VLIW instead of traditional RISC) and partially because Intel didn't do it well.

I think you only need to look at the Apple PPC->x86 transition which, while not seamless, went quite well and allowed running legacy apps inside of a VM. Had Intel handled Itanium better, it's very conceivable that that transition would have been more successful. Or maybe not.

I have a slightly anti-x86 position, coming to it from a very weird perspective. But from my position some of the big problems have to do with things like the effect the CISCyness has on instruction semantics rather than performance/power (they're more complicated to reason about).

User avatar
Thesh
Made to Fuck Dinosaurs
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC
Location: Colorado

Re: Coding: Fleeting Thoughts

Postby Thesh » Wed Oct 15, 2014 7:40 pm UTC

Xenomortis wrote:
Thesh wrote:The time to move away from x86 was with the move to 64-bit, but unfortunately we messed that one up and just piled onto x86.

Intel did try with the Itanium; but it didn't run Doom.
AMD then made a 64 bit processor that did run Doom.

Doom has been ported to non-x86 systems, including ARM and PowerPC.
Summum ius, summa iniuria.

User avatar
Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Wed Oct 15, 2014 7:42 pm UTC

Thesh wrote:
Xenomortis wrote:
Thesh wrote:The time to move away from x86 was with the move to 64-bit, but unfortunately we messed that one up and just piled onto x86.

Intel did try with the Itanium; but it didn't run Doom.
AMD then made a 64 bit processor that did run Doom.

Doom has been ported to non-x86 systems, including ARM and PowerPC.

But not the copy of Doom I already have.
Which was my point.

User avatar
Thesh
Made to Fuck Dinosaurs
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC
Location: Colorado

Re: Coding: Fleeting Thoughts

Postby Thesh » Wed Oct 15, 2014 8:16 pm UTC

That can be solved with virtualization/emulation. Yes, it can be difficult to migrate to new stuff, that doesn't mean you don't try.
Summum ius, summa iniuria.

Ubik
Posts: 1016
Joined: Thu Oct 18, 2007 3:43 pm UTC

Re: Coding: Fleeting Thoughts

Postby Ubik » Thu Oct 16, 2014 6:46 am UTC

The Mill processor thingy left a positive impression on me a while ago, but then again I know hardly anything about processors, so it could just be a case of well-done marketing. And even if it actually is really good, that doesn't automatically mean anything will come out of it.

KnightExemplar
Posts: 5494
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: Coding: Fleeting Thoughts

Postby KnightExemplar » Sat Oct 18, 2014 6:31 pm UTC

Thesh wrote:That can be solved with virtualization/emulation. Yes, it can be difficult to migrate to new stuff, that doesn't mean you don't try.


Itanium had x86 compatibility via emulation. Eventually, it also got the emulation software to run MIPS and SPARC code as well.

The problem with Itanium is that it was closer to a GPU architecture than a CPU one. VLIW works in GPUs due to the constant onslaught of parallel processing. But when running "typical" code, it really wasn't much faster than x86 anyway. Today, highly-parallel programs can run VLIW-like code on GPUs while running normal code on x86.

The reason Apple did a PowerPC -> x86 transition successfully is that the Apple iMac G5 "felt" slower than the Intel Mac Pro. x86 was soooo much faster than PowerPC that even with the emulation penalty, x86 chips were a clear upgrade over PowerPC. Itanium was only somewhat faster than x86, and only in certain tasks. AMD Opterons were both faster and cheaper than Itanium in typical benchmarks.

If you look closely at the graph above, you'll note that the four-way Opteron nearly equals an 8-way Itanium 2 in terms of integer performance! That, in and of itself, says a mouthful about this architecture.


Why would you transition to another instruction set, when in practice that instruction set was slower and more expensive? AMD had a machine that scaled better, was cheaper, and was more performant than the expensive, non-backwards-compatible Itanium 2. Besides, VLIW exists today with GPUs and heterogeneous computing. Run serial code on x86-64, and run parallel code on GPUs.
First Strike +1/+1 and Indestructible.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI
Contact:

Re: Coding: Fleeting Thoughts

Postby EvanED » Sat Oct 18, 2014 9:25 pm UTC

KnightExemplar wrote:Itanium had x86 compatibility via emulation. ... The reason Apple did a PowerPC -> x86 transition successfully is that the Apple iMac G5 "felt" slower than the Intel Mac Pro. x86 was soooo much faster than PowerPC that even with the emulation penalty, x86 chips were a clear upgrade over PowerPC.
But, as that link points out, the Itanium emulation was waaay too slow. And the difference wasn't just (x86 vs Itanium) vs (PPC vs x86), it was that Apple did a very good job with Rosetta and Intel seems to have done a very bad job with Itanium.

The problem with Itanium is that it was closer to a GPU architecture than a CPU one. VLIW works in GPUs due to the constant onslaught of parallel processing. But when running "typical" code, it really wasn't much faster than x86 anyway. Today, highly-parallel programs can run VLIW-like code on GPUs while running normal code on x86.
To be honest, I disagree here. I was going to say I saw almost no relationship between the two at all, but I took the time to do a search and saw that AMD was using VLIW for a while. But I would say this is more a case of "VLIW was well-suited to graphics processing" than a fundamental aspect of the GPU's architecture. GPUs make me think of SIMDish, embarrassingly-parallel tasks... neither of which I have associated with VLIW, which I view as largely just an instruction scheduling problem.

To say a little more about that -- I don't view VLIW as a way to provide "parallelism" more than a traditional ISA, because desktop chips are already usually scheduling your instructions in parallel; probably at the superscalar level, almost certainly at the pipeline level, and quite possibly at the out-of-order level. VLIW to me is just moving that responsibility onto the compiler. By contrast, to get a good GPU program you probably have to completely rearchitect your computation.

KnightExemplar
Posts: 5494
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: Coding: Fleeting Thoughts

Postby KnightExemplar » Sun Oct 19, 2014 6:17 am UTC

EvanED wrote:
KnightExemplar wrote:Itanium had x86 compatibility via emulation. ... The reason Apple did a PowerPC -> x86 transition successfully is that the Apple iMac G5 "felt" slower than the Intel Mac Pro. x86 was soooo much faster than PowerPC that even with the emulation penalty, x86 chips were a clear upgrade over PowerPC.
But, as that link points out, the Itanium emulation was waaay too slow. And the difference wasn't just (x86 vs Itanium) vs (PPC vs x86), it was that Apple did a very good job with Rosetta and Intel seems to have done a very bad job with Itanium.


2001 was when emulators first started coming out. By 2003, software emulators reached ~65% performance typically, and better on synthetic benchmarks.

http://www.realworldtech.com/x86-translation/8/

In another example cited in [16], a 1.5 GHz Itanium 2 achieves 105%, 99%, and 133% of the x86 performance of a 1.6 GHz Xeon processor when running SPECint2k, SPECfp2k, and the Internet content creation portion of Sysmark 2002 respectively.


By then, AMD64 had been released. Opteron demonstrated that extending the x86 instruction set could improve performance significantly. All the media hype that argued that x86's instruction set was inferior was suddenly demonstrated to be false... as a much smaller company managed to manufacture an x86 platform that beat Itanium in a number of benchmarks (not only in software-emulated x86 code, but also in practical benchmarks). Basically, the x86 instruction set never really gave AMD a disadvantage against Itanium.
First Strike +1/+1 and Indestructible.

User avatar
You, sir, name?
Posts: 6983
Joined: Sun Apr 22, 2007 10:07 am UTC
Location: Chako Paul City
Contact:

Re: Coding: Fleeting Thoughts

Postby You, sir, name? » Sun Oct 19, 2014 3:14 pm UTC

You know when you've stumbled into a bad neighborhood when you see things like

Code: Select all

// what does this even do? I asked around and nobody would tell me - some developer
type calculateThingX(too,many,arguments,for,sanity) {
 // 400 lines of harrowing spaghetti code inside
}

type calculateThingXWithoutSideEffects(too,many,arguments,for,sanity) {
 // 500 lines of harrowing spaghetti code inside
}
I edit my posts a lot and sometimes the words wrong order words appear in sentences get messed up.

User avatar
Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Wed Oct 22, 2014 1:02 pm UTC

Code: Select all

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Double

That just seems fundamentally wrong.

But anyway, what generated that error.

Code: Select all

protected Object getAThing( Object inputThing ) {
  return new double[] { (Double) inputThing };
}

Oh...
When would that work? Other than the trivial case of course.

Edit:
I know why it doesn't work.
It's just that, on the surface, it seems silly.
Last edited by Xenomortis on Wed Oct 22, 2014 1:16 pm UTC, edited 1 time in total.

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Wed Oct 22, 2014 1:11 pm UTC

Xenomortis wrote:

Code: Select all

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Double

That just seems fundamentally wrong.

Well, Double and Integer are classes; they behave like any other class does. As Integer does not inherit from Double or vice versa, that cast is illegal.

Xenomortis wrote:But anyway, what generated that error.

Code: Select all

protected Object getAThing( Object inputThing ) {
  return new double[] { (Double) inputThing };
}

Oh...
When would that work? Other than the trivial case of course.

Never, because Double is final.

User avatar
Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Wed Oct 22, 2014 1:27 pm UTC

korona wrote:Well Double and Integer are classes; they behave like any other class does. As Integer does not inherit from Double or vice versa that cast is illegal.

Yeah, I know. The situation still looks silly though.
I guess I'm just complaining that Java doesn't allow classes to define valid conversions (which is basically operator overloading I guess).

User avatar
jestingrabbit
Factoids are just Datas that haven't grown up yet
Posts: 5967
Joined: Tue Nov 28, 2006 9:50 pm UTC
Location: Sydney

Re: Coding: Fleeting Thoughts

Postby jestingrabbit » Fri Oct 24, 2014 6:28 am UTC

I dunno, there are several ways to convert values from one to the other. If you want to map one to the other, it makes sense to DIY it, so that you know what ends up where.
ameretrifle wrote:Magic space feudalism is therefore a viable idea.

schapel
Posts: 244
Joined: Fri Jun 13, 2014 1:33 am UTC

Re: Coding: Fleeting Thoughts

Postby schapel » Fri Oct 24, 2014 11:33 am UTC

Xenomortis wrote:I guess I'm just complaining that Java doesn't allow classes to define valid conversions (which is basically operator overloading I guess).

Java does it with method calls. The toString method will convert any object into a String.

I suppose what you mean is that casting a primitive can make a new primitive, but casting a reference always results in a reference that points to the original object, so certain reference casts are not allowed. Come to think of it, it is strange to use one kind of syntax to mean these two very different things -- casting a primitive is a completely different kind of operation than casting a reference. Actually, the JLS points out that the casting syntax provides three distinct operations, one of which happens at compile time rather than run time!
A cast expression converts, at run time, a value of one numeric type to a similar value of another numeric type; or confirms, at compile time, that the type of an expression is boolean; or checks, at run time, that a reference value refers to an object whose class is compatible with a specified reference type.

User avatar
Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Fri Oct 24, 2014 11:42 am UTC

schapel wrote:
Xenomortis wrote:I guess I'm just complaining that Java doesn't allow classes to define valid conversions (which is basically operator overloading I guess).

Java does it with method calls. The toString method will convert any object into a String.

Doesn't extend though, particularly with custom types.
The compiler can't reasonably know about a "toMyTypeA()" method (well, it could...).

schapel wrote:I suppose what you mean is that casting a primitive can make a new primitive, but casting a reference always results in a reference that points to the original object, so certain reference casts are not allowed. Come to think of it, it is strange to use one kind of syntax to mean these two very different things -- casting a primitive is a completely different kind of operation than casting a reference. Actually, the JLS points out that the casting syntax provides three distinct operations, one of which happens at compile time rather than run time!

That's actually an angle I hadn't thought of - C++ has tainted my thinking.
One would expect a reference to the original object when casting a class to its base class (right?).
But if it was a straight user-defined conversion then it would have to be a new object, which might be confusing (especially if the cast was implicit).
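
To spell out what I mean in C++ terms (a toy sketch; the Celsius/Fahrenheit types are made up for illustration): an upcast gives you the same object through a base-class view, while a user-defined conversion manufactures a new one.

Code: Select all

#include <iostream>

struct Base { int x = 1; };
struct Derived : Base {};

struct Fahrenheit { double degrees; };

struct Celsius {
    double degrees;
    // user-defined conversion: constructs a brand-new Fahrenheit
    operator Fahrenheit() const { return Fahrenheit{ degrees * 9.0 / 5.0 + 32.0 }; }
};

int main() {
    Derived d;
    Base& b = d;                    // upcast: same object, viewed as a Base
    b.x = 42;
    std::cout << d.x << '\n';       // prints 42 -- the write landed on d itself

    Celsius c{ 100.0 };
    Fahrenheit f = c;               // implicit conversion: a new, independent object
    f.degrees = 0.0;                // does not affect c
    std::cout << c.degrees << '\n'; // still 100
}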

C# allows custom conversions right? Does it do anything special there?

User avatar
Xeio
Friends, Faidites, Countrymen
Posts: 5101
Joined: Wed Jul 25, 2007 11:12 am UTC
Location: C:\Users\Xeio\
Contact:

Re: Coding: Fleeting Thoughts

Postby Xeio » Fri Oct 24, 2014 3:19 pm UTC

Xenomortis wrote:C# allows custom conversions right? Does it do anything special there?
Well, you have to define it. And mark it as explicit or implicit. I don't think that's anything really special though...

Code: Select all

class Foo
{
    public static explicit operator Foo(Bar b)
    {
        return new Foo(b);
    }
}
That allows casting like "Foo foo = (Foo)someBar;".

Code: Select all

class Foo
{
    public static implicit operator Foo(Bar b)
    {
        return new Foo(b);
    }
}
Which allows direct assignment "Foo foo = someBar;".

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

Re: Coding: Fleeting Thoughts

Postby korona » Fri Oct 24, 2014 3:57 pm UTC

I'm not sure if I like it that a cast involves a memory allocation and object construction and that if x is of type A then (A)(B)x != x.

User avatar
Xenomortis
Not actually a special flower.
Posts: 1455
Joined: Thu Oct 11, 2012 8:47 am UTC

Re: Coding: Fleeting Thoughts

Postby Xenomortis » Fri Oct 24, 2014 3:59 pm UTC

korona wrote:I'm not sure if I like it that a cast involves a memory allocation and object construction and that if x is of type A then (A)(B)x != x.

Perhaps then you're obliged to overload == and != ?
(This gets messy quite quick...)

User avatar
Thesh
Made to Fuck Dinosaurs
Posts: 6598
Joined: Tue Jan 12, 2010 1:55 am UTC
Location: Colorado

Re: Coding: Fleeting Thoughts

Postby Thesh » Fri Oct 24, 2014 6:01 pm UTC

korona wrote:I'm not sure if I like it that a cast involves a memory allocation and object construction and that if x is of type A then (A)(B)x != x.


Like how (Double)(Int32)x != x? Yes, casts can cause data loss, that's why they should be explicit casts. As for the memory, well, unless you are casting to an inherited type it has to make a new copy for a reference type, no matter what, because until GC runs it can't assume there's not another reference to it. If it's a value type, it's getting copied anyway.
Summum ius, summa iniuria.

User avatar
You, sir, name?
Posts: 6983
Joined: Sun Apr 22, 2007 10:07 am UTC
Location: Chako Paul City
Contact:

Re: Coding: Fleeting Thoughts

Postby You, sir, name? » Sun Oct 26, 2014 5:09 pm UTC

What is a good way to arrange C++ code? Seems like anything involving templates forces me to keep an ungodly amount of implementation details in the headers, which isn't very nice :-/

Some example stuff from my latest project, with some additional classes that were also in the header cut out for brevity. I find it offensively messy.

Code: Select all

typedef uint32_t TokenIndex;
#define TokenIndexNotSet UINT32_MAX

template<typename T>
class UnitVector {
    T next;
public:
    UnitVector(T next) : next(next) {};
    T fwd() {
        return next;
    }
};

template<typename T>
class InvertibleUnitVector {
    T prev;
public:
    InvertibleUnitVector(T prev) : prev(prev) {};
    T rev() {
        return prev;
    }
    T up() {
        return prev;
    }

};

template<typename T>
class VerticalInvertibleUnitVector {
    T paren;
public:
    VerticalInvertibleUnitVector(T paren) : paren(paren) {};
    T up() {
        return paren;
    }
};

struct TokenLocality {
    TokenIndex self;
    TokenIndex prev;
    TokenIndex next;

    TokenIndex left;
    TokenIndex right;
    TokenIndex parent;

    struct Left : public UnitVector<TokenIndex>, public VerticalInvertibleUnitVector<TokenIndex> {
        Left(const TokenLocality& loc) : UnitVector(loc.left), VerticalInvertibleUnitVector(loc.parent) {};
    };
    struct Right : public UnitVector<TokenIndex>, public VerticalInvertibleUnitVector<TokenIndex> {
        Right(const TokenLocality& loc) : UnitVector(loc.right), VerticalInvertibleUnitVector(loc.parent) {};
    };
    struct Prev : public UnitVector<TokenIndex>, public InvertibleUnitVector<TokenIndex> {
        Prev(const TokenLocality& loc) : UnitVector(loc.prev), InvertibleUnitVector(loc.next) {};
    };
    struct Next : public UnitVector<TokenIndex>, public InvertibleUnitVector<TokenIndex> {
        Next(const TokenLocality& loc) : UnitVector(loc.next), InvertibleUnitVector(loc.prev) {};
    };
    struct Parent : public UnitVector<TokenIndex> {
        Parent(const TokenLocality& loc) : UnitVector(loc.parent) {};
    };

    struct PrevMutator : public UnitVector<TokenIndex&>, public InvertibleUnitVector<TokenIndex&> {
        PrevMutator(TokenLocality& loc) : UnitVector(loc.prev), InvertibleUnitVector(loc.next) {};
    };
    struct NextMutator : public UnitVector<TokenIndex&>, public InvertibleUnitVector<TokenIndex&> {
        NextMutator(TokenLocality& loc) : UnitVector(loc.next), InvertibleUnitVector(loc.prev)  {};
    };
    struct LeftMutator : public UnitVector<TokenIndex&>, public VerticalInvertibleUnitVector<TokenIndex&> {
        LeftMutator(TokenLocality& loc) : UnitVector(loc.left), VerticalInvertibleUnitVector(loc.parent)  {};
    };
    struct RightMutator : public UnitVector<TokenIndex&>, public VerticalInvertibleUnitVector<TokenIndex&> {
        RightMutator(TokenLocality& loc) : UnitVector(loc.right), VerticalInvertibleUnitVector(loc.parent)  {};
    };
    TokenLocality();
};

class TokenGraphIterator;

class TokenGraph {
    std::vector<Token> tokens;

    void createFlatTokenList();


    template<typename T> T location(TokenIndex idx) {
        return T(tokens[idx].loc);
    }
    template<typename T> T location(TokenIndex idx) const {
        return T(tokens[idx].loc);
    }
public:
    TokenGraph(const std::vector<Token>& tokens);
    TokenGraph(std::vector<Token>&& tokens);
    TokenGraph(TokenGraph&& other);

    bool consistencyCheck() const;

    void removeTokenFromList(TokenIndex idx);

    template<typename Mutator> void removeTokenFromList(TokenIndex idx) {
        Mutator mut = location<Mutator>(idx);
        if (mut.fwd() != TokenIndexNotSet) {
            location<Mutator>(mut.fwd()).rev() = mut.rev();
        }
        if (mut.rev() != TokenIndexNotSet) {
            location<Mutator>(mut.rev()).fwd() = mut.fwd();
        }
        mut.fwd() = TokenIndexNotSet;
        mut.rev() = TokenIndexNotSet;
    }

    void removeTokenRangeFromList(TokenIndex first, TokenIndex last);

    template<typename Mutator>
    void set(TokenIndex dest, TokenIndex source) {
        location<Mutator>(dest).fwd() = source;
        location<Mutator>(source).up() = dest;
    }

    template<typename Mutator>
    void move(TokenIndex dest, TokenIndex source) {
        removeTokenFromList<TokenLocality::NextMutator>(source);
        set<Mutator>(dest, source);
    }

    template<typename Mutator>
    void move(TokenIndex dest, TokenIndex sourceBegin, TokenIndex sourceEnd) {
        cutRange<TokenLocality::NextMutator>(sourceBegin, sourceEnd);
        set<Mutator>(dest, sourceBegin);
    }

    template<typename Mutator>
    void cutRange(TokenIndex first, TokenIndex last) {
        Mutator firstMutator = location<Mutator>(first);
        Mutator lastMutator = location<Mutator>(last);

        TokenIndex lastPreceding = firstMutator.rev();
        TokenIndex firstSucceeding = lastMutator.fwd();

        if (lastPreceding != TokenIndexNotSet) {
            location<Mutator>(lastPreceding).fwd() = firstSucceeding;
        }

        if (firstSucceeding != TokenIndexNotSet) {
            location<Mutator>(firstSucceeding).rev() = lastPreceding;
        }


        firstMutator.rev() = TokenIndexNotSet;
        lastMutator.fwd() = TokenIndexNotSet;
    }

    template<typename V>
    void forEach(std::function<void(const Token&)> visitor, TokenIndex begin, TokenIndex end) const;

    template<typename V>
    TokenGraphIterator findFirst(std::function<bool(const Token&)> predicate, TokenIndex begin, TokenIndex end) const;

    template<typename V>
    TokenGraphIterator findLast(std::function<bool(const Token&)> predicate, TokenIndex begin, TokenIndex end) const;

    template<typename V>
    unsigned int count(std::function<bool(const Token&)> predicate, TokenIndex begin, TokenIndex end) const;

    const Token& at(TokenIndex index) const;

    const std::vector<Token>& getTokens() const;

    void print(TokenIndex) const;
    std::ostream& print(TokenIndex, std::ostream& o) const;

    TokenGraphIterator begin() const;
    TokenGraphIterator last() const;
    TokenGraphIterator iterFor(TokenIndex index) const;
    TokenGraphIterator end() const;
};

class TokenGraphIterator {
    TokenIndex idx;
    const TokenGraph& graph;

public:
    operator TokenIndex() const;

    const Token& operator*() const;
    const Token* operator->() const;
    bool operator==(const TokenGraphIterator& other) const;
    bool operator!=(const TokenGraphIterator& other) const;

    TokenGraphIterator& operator=(const TokenGraphIterator& other);
    TokenGraphIterator(const TokenGraphIterator& other);

    template<typename V>
    TokenGraphIterator advance() const {
        if (idx == TokenIndexNotSet) {
            throw std::runtime_error("Traversing graph iterator outside of domain");
        }

        return TokenGraphIterator(graph, V(graph.at(idx).loc).fwd());
    }

    template<typename V>
    unsigned int size() const {
        unsigned int size = 0;
        TokenIndex curr = idx;
        while (curr != TokenIndexNotSet) {
            size ++;
            curr = V(graph.at(curr).loc).fwd();
        }

        return size;
    }

    TokenGraphIterator next() const;
    TokenGraphIterator prev() const;
    TokenGraphIterator parent() const;
    TokenGraphIterator left() const;
    TokenGraphIterator right() const;

protected:
    TokenGraphIterator(const TokenGraph& pool, TokenIndex idx);

    friend class TokenGraph;

};

template<typename V>
inline void TokenGraph::forEach(std::function<void(const Token&)> visitor, TokenIndex begin, TokenIndex end) const  {
    TokenIndex idx = begin;
    while (idx != end) {
        visitor(at(idx));
        idx = location<V>(idx).fwd();
    }
}

template<typename V>
inline TokenGraphIterator TokenGraph::findFirst(std::function<bool(const Token&)> predicate, TokenIndex begin, TokenIndex end) const {
    TokenIndex idx = begin;
    while (idx != end) {
        if (predicate(at(idx))) {
            return iterFor(idx);
        }
        idx = location<V>(idx).fwd();
    }
    return this->end();
}

template<typename V>
inline TokenGraphIterator TokenGraph::findLast(std::function<bool(const Token&)> predicate, TokenIndex begin, TokenIndex end) const {
    TokenIndex idx = begin;
    TokenIndex lastHit = TokenIndexNotSet;
    while (idx != end) {
        if (predicate(at(idx))) {
            lastHit = idx;
        }
        idx = location<V>(idx).fwd();
    }
    return iterFor(lastHit);
}


template<typename V>
inline unsigned int TokenGraph::count(std::function<bool(const Token&)> predicate, TokenIndex begin, TokenIndex end) const
{
    TokenIndex idx = begin;
    unsigned int hits = 0;
    while (idx != end) {
        if (predicate(at(idx))) {
            hits ++;
        }
        idx = location<V>(idx).fwd();
    }
    return hits;
}
I edit my posts a lot and sometimes the words wrong order words appear in sentences get messed up.

Ubik
Posts: 1016
Joined: Thu Oct 18, 2007 3:43 pm UTC

Re: Coding: Fleeting Thoughts

Postby Ubik » Sun Oct 26, 2014 6:45 pm UTC

Maybe split the header into two parts, one public and one that's not intended to be looked at? The private part still naturally has to be included by the public header, but at least users who are interested in looking at the headers only need to skim through something that reads like a normal non-template header.
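
Something like this, say (just a sketch with made-up file names, reusing the UnitVector from your header):

Code: Select all

// unit_vector.h -- the public-facing header: declarations only
#pragma once

template<typename T>
class UnitVector {
    T next;
public:
    UnitVector(T next);
    T fwd();
};

// pull in the template bodies; users never have to open this file
#include "unit_vector_impl.h"


// unit_vector_impl.h -- implementation details, not meant for reading
template<typename T>
UnitVector<T>::UnitVector(T next) : next(next) {}

template<typename T>
T UnitVector<T>::fwd() {
    return next;
}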

User avatar
Jplus
Posts: 1721
Joined: Wed Apr 21, 2010 12:29 pm UTC
Location: Netherlands

Re: Coding: Fleeting Thoughts

Postby Jplus » Sun Oct 26, 2014 7:09 pm UTC

You, sir, name? wrote:What is a good way to arrange C++ code? Seems like anything involving templates forces me to keep an ungodly amount of implementation details in the headers, which isn't very nice :-/

Yes. Unfortunately that's an unsolved problem in C++. Ubik's suggestion works, though.
"There are only two hard problems in computer science: cache coherence, naming things, and off-by-one errors." (Phil Karlton and Leon Bambrick)

coding and xkcd combined

(Julian/Julian's)

User avatar
Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Coding: Fleeting Thoughts

Postby Yakk » Sun Oct 26, 2014 8:53 pm UTC

Code: Select all

typedef uint32_t TokenIndex;
#define TokenIndexNotSet UINT32_MAX

template<class T>
class UnitVector {
    T next;
public:
    UnitVector(T next);
    T fwd();
};
template<class T>
UnitVector<T>::UnitVector(T next) : next(next) {};

template<class T>
T UnitVector<T>::fwd() {
    return next;
}

as the basic step. Moving the bodies to their own _impl header file is also good.

Code: Select all


    template<typename V>
    void forEach(std::function<void(const Token&)> visitor, TokenIndex begin, TokenIndex end) const;

    template<typename V>
    TokenGraphIterator findFirst(std::function<bool(const Token&)> predicate, TokenIndex begin, TokenIndex end) const;

    template<typename V>
    TokenGraphIterator findLast(std::function<bool(const Token&)> predicate, TokenIndex begin, TokenIndex end) const;

    template<typename V>
    unsigned int count(std::function<bool(const Token&)> predicate, TokenIndex begin, TokenIndex end) const;

What is V, and why are these member algorithms at all?
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

User avatar
You, sir, name?
Posts: 6983
Joined: Sun Apr 22, 2007 10:07 am UTC
Location: Chako Paul City
Contact:

Re: Coding: Fleeting Thoughts

Postby You, sir, name? » Sun Oct 26, 2014 9:13 pm UTC

Yakk wrote:What is V, and why are these member algorithms at all?


V is a UnitVector, the axis in the graph the algorithms are traversing. Each node has up to five neighbors, which when populated satisfy certain identities (e.g. self->next->prev = self, self->left->parent = self). Next and Prev offer list semantics; Left, Right and Parent allow navigation as though in a binary tree.

Although you are quite right in that it ought to be possible to move the algorithms outside of the class if I have them eat iterators instead of raw indices.
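
Something like this, maybe (a rough sketch; findFirstAlong and the usage below are made-up names, and it assumes the iterator keeps an advance<V>() member like in the header above):

Code: Select all

// Free-function version of findFirst: walks any iterator-like type along
// the axis V and returns the first element matching the predicate.
template<typename V, typename Iter, typename Pred>
Iter findFirstAlong(Iter it, Iter end, Pred predicate) {
    while (it != end) {
        if (predicate(*it)) {
            return it;
        }
        it = it.template advance<V>();
    }
    return end;
}

// Hypothetical usage:
//   auto it = findFirstAlong<TokenLocality::Next>(graph.begin(), graph.end(),
//                 [](const Token& t) { return isInteresting(t); });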
I edit my posts a lot and sometimes the words wrong order words appear in sentences get messed up.

