The real question is how many other things it's missing.
Reading the process in TFA, it's very much dependent on the comprehensiveness of the testing framework. And apparently, the tests never built a lobby in the bottom left corner...
Anything else it didn't try is probably also not documented and not implemented.
With the growing use of AI in reverse engineering, we might need to shift our goals to more strongly verifiable ones, such as matching decompilation.
Yeah the simulation also seems to have some bugs. I saw a person get stuck waiting for an elevator for hours when all the elevators were idle. He got more and more pissed off until he eventually despawned overnight.
I agree that you need to be able to produce source code that matches the original binary before you can start porting things.
Fluent means different things to different people (and in different languages!).
As I understand it, B2 means one has a solid, functional proficiency in the language. They can converse, listen, read, and write in diverse situations, without needing to switch to a different language or to prepare in advance.
They're very likely, however, to make mistakes, say things in non-idiomatic ways, etc., although this is expected to be minor enough not to affect the ability to understand them.
To get to C1 and above, one needs a deeper understanding of the language - phrases, idioms, connotations, registers, etc. - and a broader set of situations they can handle, e.g., a philosophical discussion. And of course, errors are expected to be rarer.
So, literally speaking, B2 is rather fluent, since the language is "flowing" out of them and they're not stopping to think every other word (which is, as far as I understand, a common interpretation of flüssig in German).
But as "fluent" speakers should know, words come with expectations beyond the literal meaning :P
I'm back to searching for numbers that are palindromes both in decimal and in binary. [0]
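For reference, the brute-force version of the search fits in a few lines (a minimal sketch; the real search builds palindromes from the outside in rather than testing every integer):

```python
def is_pal(s: str) -> bool:
    return s == s[::-1]

# Numbers palindromic in both decimal and binary (OEIS A007632, excluding 0).
dual = [n for n in range(1, 1000)
        if is_pal(str(n)) and is_pal(bin(n)[2:])]
print(dual)  # [1, 3, 5, 7, 9, 33, 99, 313, 585, 717]
```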
I had an insight the other day: as I fix the n least (and most - it's a palindrome!) significant decimal digits, I also fix the number's remainder mod 5^n. Let's call it R. Since by that point I've also fixed a bunch of least (and most) significant bits, I can subtract their contribution mod 5^n from R, to get the remainder mod 5^n that the still-unknown bits must produce. The thing is, maybe it's not possible to reach that specific remainder with the unknown bits, because there are too few of them.
So, I can prepare in advance a table of size 5^n (for one or more values of n) which tells me how many bits from the middle of the palindrome I need to get a remainder of <index mod 5^n>.
Then when I get to the aforementioned situation, all I need to do is compare the number in the table to the number of unknown bits. If the number in the table is bigger, I can prune the entire subtree.
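To illustrate the table-building idea, here is a toy sketch. It deliberately simplifies: the unknown bits are treated as free low-order bits, ignoring both the mirrored pairing a real binary palindrome imposes and the positional offset of the middle bits, so it shows the shape of the idea rather than the exact table from the real search:

```python
def build_prune_table(n: int, max_bits: int = 24) -> dict[int, int]:
    """Map each residue r mod 5**n to the minimum number of free bits
    needed to reach some value x with x % 5**n == r (toy model)."""
    mod = 5 ** n
    table = {0: 0}  # zero free bits already give residue 0
    for w in range(1, max_bits + 1):
        for x in range(2 ** (w - 1), 2 ** w):  # values needing exactly w bits
            table.setdefault(x % mod, w)
        if len(table) == mod:  # every residue reached; stop early
            break
    return table

table = build_prune_table(2)  # residues mod 25
# Pruning check: with `free` unknown bits and required residue R,
# the whole subtree is dead if table[R] > free.
assert table[24] == 5         # smallest x with x % 25 == 24 is 24 = 0b11000
```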
From a little bit of testing, this seems to work, and it seems to complement my current lookup tables and not prune the same branches. It won't make a huge difference, but every little bit helps.
The important thing, though, is that I'm just happy there are still algorithmic improvements! For a long while I've been only doing engineering improvements such as more efficient tables and porting to CUDA, but since the problem is exponential, real breakthroughs have to come from a better algorithm, and I almost gave up on finding one.
I did some manual golfing with nand2tetris assembly and developed similar hacks to the max() implementation, where one appropriates an arbitrary, conveniently placed, memory address.
After reading the article, though, I feel like I definitely need a superoptimiser, to see what could be improved :)
> for this game you can throw the usual tools away...
> The reason is that Starflight was written in Forth
I recently reverse-engineered another game from the '80s and ran into a similar issue, because it was written in Turbo Pascal.
The different tools didn't like quirks such as data stored inline between the instructions or every library function having a different calling convention.
Turns out the simplest way to handle this was to write my own decompiler, since Turbo Pascal 3 didn't do any optimisations.
In academia, seeing your research being published by someone else sucks, but the consolation prize is that you know you had the right instincts choosing what to research.