Peaceful Whale wrote: Is this how?
(I understand this *kinda*)
It's a way of doing one thing. It might not be the thing you want to do, and it might not be the exact way you'd want to do it. You could perhaps make the call to Assess(x, y, z) check and correct out-of-range values (with a TrueVal() function) before passing to RealAssess(validX, validY, validZ), so as not to bother your function. Also, instead of dX running from -1 to 1 and offsetting the 'core' X, you could have a lookX that runs from (X-1) to (X+1) and do the 'is it corner, edge, face or core?' test by directly checking whether lookX==coreX, lookY==coreY, lookZ==coreZ, but that's going to be an eight-leafed triple-branching check such as (I think!):
Code:
...insert whitespace to taste, as you go.
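To make the lookX/lookY/lookZ idea concrete, here's a hedged sketch (my own, not the elided code above): rather than spelling out the full eight-leafed if/elif tree, it classifies each cell in the 3x3x3 block by counting how many of the three coordinates match the core's. The function name `classify` and the label strings are illustrative choices, not from the original post.

```python
from collections import Counter

def classify(coreX, coreY, coreZ):
    """Label every cell in the 3x3x3 block around the core by how many
    of its coordinates equal the core's: 3 matches = the core itself,
    2 = face neighbour, 1 = edge neighbour, 0 = corner neighbour."""
    labels = {3: "core", 2: "face", 1: "edge", 0: "corner"}
    result = {}
    for lookX in range(coreX - 1, coreX + 2):
        for lookY in range(coreY - 1, coreY + 2):
            for lookZ in range(coreZ - 1, coreZ + 2):
                matches = (lookX == coreX) + (lookY == coreY) + (lookZ == coreZ)
                result[(lookX, lookY, lookZ)] = labels[matches]
    return result

print(Counter(classify(0, 0, 0).values()))
# 1 core, 6 face, 12 edge and 8 corner cells: 27 in all
```

Counting matches sidesteps the branching entirely, which is much the same trick as the Manhattan-distance version discussed next.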
Thus I quite like the dX,dY,dZ way of doing this kind of thing, as a way of extracting and combining the unit displacements to determine the Manhattan Distance from the origin to the check-coords.

And then I made an obvious error. Please note this as you transcribe it. I couldn't recall what the signum function was, in Python. (Look for signum(), sgn() or sign() functions/methods. It's a neat monolithic way of getting +1, 0 or -1.) What I then meant to write was that signum(x)*x would give 1 for x=±1 and zero for zero, to add to the similar results for y and z, giving 0 in total at the core cell, 1 in total at a face-on-face neighbour (six of them), 2 for edge-centres (12 of them) and 3 for the corners (8 of those, bringing the total to the 27 you'd expect).
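As it happens, Python's standard library has no built-in signum; a common one-liner (an assumption of this sketch, not a stdlib function) does the job:

```python
# Python has no built-in signum; this comparison trick is a common
# substitute: True/False subtract as 1/0, giving +1, 0 or -1.
def sgn(x):
    return (x > 0) - (x < 0)

# sgn(d) * d recovers the distance-from-origin along one axis:
print([sgn(d) * d for d in (-1, 0, 1)])  # [1, 0, 1]
```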
(If you do this correctly, you can also plan to use it for next-to-neighbour assessment (dX=-2..+2), which is beyond your scope but worth considering, since signum(-2)*(-2) => +2 (two away from origin, regardless of direction, and the reason I was doing signum*original at all, rather than just using the signum), and so classify weightings in a useful manner for all 125 cubes in an up-to-2-distance offsetting of the origin cube.)

And then I realised that I'd completely overcomplicated things. Look for the abs() function, 'absolute'. That turns -n straight into n, no sgn() needed. Doh! But now you see where I was going, in a rush as the battery rapidly depleted.
In fact, given you're only going ±1 from the origin, just do a square. dX*dX (etc). ±1 => +1 and 0 => 0, without fuss. It's only for values beyond 1 (of either sign), or non-zero fractions of 1, that this might be wrong for your purposes, but those don't apply to you!
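Putting the whole dX,dY,dZ loop together with the squaring trick (a sketch of my own, assuming only the -1..+1 range discussed above, where dX*dX agrees with abs(dX) and sgn(dX)*dX):

```python
from collections import Counter

# For offsets in {-1, 0, 1}, d*d equals abs(d), so the sum over the
# three axes is the Manhattan distance from the core cell. That one
# number classifies the cell: 0 core, 1 face, 2 edge, 3 corner.
tally = Counter()
for dX in (-1, 0, 1):
    for dY in (-1, 0, 1):
        for dZ in (-1, 0, 1):
            tally[dX*dX + dY*dY + dZ*dZ] += 1

print(sorted(tally.items()))
# [(0, 1), (1, 6), (2, 12), (3, 8)] -- core, faces, edges, corners
```

One arithmetic expression per cell, no branching at all, and the 1+6+12+8=27 tallies fall out as expected.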
Would I just use this?
By the Gods, absolutely not! I did my best to make it easier to translate my pseudocode to the Python than if I'd written it in valid Pascal, Ada, Perl, COBOL, etc, which I would have had more confidence in.
You need to run through the lines, assume they are all comments, and then rewrite the bits in your own favourite style of actual Python. (Noting the above.)
Is there a different way?
Always! I'm not even saying that the above way is the best way. It's one I'd favour, off the bat, but testing and development might force a rewrite because of an issue, whether that be merely style, flexibility or because I've erred, like I might have done had I been doing the live programming myself!
Could I make it so it’s always the smallest array possible? Or just 3 bigger?
I'm not entirely sure what you mean by this. In a wrap-around 'world', the array is the size it is (change it, and you shoehorn cells in there, between neighbours to make them no longer neighbours).
If you're doing "move the edges out before they become significant", so that you have effectively infinite space (only not from the start), you need (at the simplest) to tack on a test each time a cell is set as 'live' to make sure that once you get to xmax (or within a decent range of it) you resize/reallocate the array to give you a new, higher xmax to move into. Same with y and z, and ditto with xmin/ymin/zmin at the lower limit.
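That resize-before-the-edge test might be sketched like this (a minimal illustration with nested lists; the names `grow_if_needed`, `margin` and `pad` are mine, not from the post, and a real implementation would likely also track the coordinate offset it introduces):

```python
# Sketch: pad a 3D nested-list grid with dead cells on every side
# whenever a live cell appears within `margin` of any bound, so the
# pattern never actually reaches the edge.
def grow_if_needed(grid, live_cells, margin=1, pad=2):
    xmax, ymax, zmax = len(grid), len(grid[0]), len(grid[0][0])
    near_edge = any(
        x < margin or y < margin or z < margin or
        x >= xmax - margin or y >= ymax - margin or z >= zmax - margin
        for (x, y, z) in live_cells
    )
    if not near_edge:
        return grid, live_cells
    # Rebuild the grid, shifted by `pad` in every direction.
    new = [[[0] * (zmax + 2 * pad) for _ in range(ymax + 2 * pad)]
           for _ in range(xmax + 2 * pad)]
    for (x, y, z) in live_cells:
        new[x + pad][y + pad][z + pad] = 1
    return new, {(x + pad, y + pad, z + pad) for (x, y, z) in live_cells}

grid = [[[0] * 3 for _ in range(3)] for _ in range(3)]
grid[0][1][1] = 1  # a live cell touching the x-min face
grid, live = grow_if_needed(grid, {(0, 1, 1)})
print(len(grid))  # 7: the 3-wide grid grew by pad=2 on each side
```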
Beware 'explosions'. Even if you also regularly track deaths (and no births) of cells to raise your min values and lower your max ones, if you've got a pattern that sends elements out in all directions (even if it leaves the centre and most of the other directions devoid of live cells), the array will just resize larger and larger and larger. By doing a 'smart review' you might be able to identify a fragment travelling right+forward+up (with no 'debris' left behind) and a fragment travelling left+backwards+down (ditto) that can each be self-contained in their own very small box. But that's part of the "dictionary store of references to arrays" thing that I'm not sure I explained properly. (Sort of mentioned along the way in Tub's link, an interesting read indeed, but that won't help you.)
It's why I thought you might prefer the wrap-around (static) array. Simpler, though functionally different from the non-wrapping version that just keeps expanding its shoulder-room whensoever required.