Why I’m not a materialist anymore

Introduction

A few years ago, I wrote an article called “Why I’m not Catholic anymore”, detailing the reasons I had left the Catholic faith. I stopped practicing about 5 years ago, for reasons I laid out in that 2011 article on my Neowin blog (which is now dead, of course). Many discoveries since then, however, have changed my position. I can no longer sit on the fence: naturalism, convenient as it is to hold in this day and age, is just too absurd on too many levels.

There isn’t one single reason but several, all pointing in the same direction, and together making a case where a leap of faith remains necessary yet is the obvious and natural thing to do. I’ll articulate my thoughts around four points.

1. The historical evidence for Jesus and the Resurrection

It’s incredible how powerful certain popular myths are. Even some of the most erudite people I’ve talked to, for instance, thought that flat-earth belief was common in the Middle Ages, which is plain wrong; people have known the earth is round since classical antiquity. Similarly, many people (even among Christians) assume that History has little to say about Jesus and that the whole thing could very well have been a myth.

Yet cursory research on Wikipedia tells us that the Christ Myth Theory has been effectively refuted, that Jesus’ baptism and crucifixion are firmly established historical events, that central figures of the New Testament such as Pontius Pilate and Paul of Tarsus undeniably existed and match the descriptions given of them, and so on. Current-day NT scholarship includes leading figures such as E. P. Sanders, Craig S. Keener and N. T. Wright, for whom the historical Jesus is not very different from the one revered by Christians. Reading Keener’s The Historical Jesus of the Gospels was a real eye-opener in that regard.

It seems clear that History puts us in front of a Galilean Jew who died on the cross, whose tomb was then found empty, whom many people, believers and non-believers alike, reported seeing alive in bodily form after his death, and some of whom went on to announce his resurrection all around the pagan world, often at the cost of their lives. The resurrection of Jesus may escape historical certainty, but it certainly seems to be the best explanation of the data we have.

2. The arguments for the existence of God

Even as an agnostic, I always leaned more towards theism than atheism. The main arguments for God’s existence (why is there something rather than nothing? what is the origin of the universe? why does the physical world obey abstract laws and mathematics? how come these laws are precisely those that allow life to appear?) have never really been refuted, and are still defended today by some of the world’s best philosophers, such as William Lane Craig and Alvin Plantinga. It also turns out that there aren’t any good arguments against the existence of God, except for the problem of Evil; but the resurrection of Jesus seems to me precisely a very good answer to the problem of Evil: Evil exists, but God cares about it, and it won’t have the last word.

3. The existence of souls

While I was an agnostic, I read quite a bit of neuroscience, including a book called “The User Illusion: Cutting Consciousness Down to Size” by Tor Nørretranders. We now know that consciousness, i.e. subjective experience, is the result of an incredible work of filtering and synthesis done by the brain on several levels, which gives the impression of one coherent, synchronous reality. Furthermore, conscious choices appear to be determined by the brain several hundred milliseconds before we become aware of making any “decision”. Advances like these seem to support the idea that we don’t need souls to explain anything, and that consciousness and free will are indeed, as naturalism implies, just useful illusions.

That said, it has become increasingly clear to me that, on a fundamental level, an objective description of a physical process can never express, much less explain, subjectivity itself. I have no problem admitting that the contents of experience may be scientifically described; but it is an epistemological impossibility to jump from there to postulating a subject having these experiences, since whatever you study, you study as an object. Indeed, the only reason we think other human beings have subjective experiences is that each of us knows, individually, that we are subjects, and we assume other human beings must be too; technically, nothing prevents us from believing that we are the only subject of experience and that everyone else is just an advanced biochemical robot, which acts as if it were seeing and feeling but doesn’t really have any subjective experience. Such a robot would be objectively indistinguishable, down to the neurological level, from what we conceive as a human being.

It follows that either we’re more than physical objects, or our own subjectivity is an illusion. But the latter is contrary to our immediate experience. Therefore we are souls, i.e. some immaterial principle, and naturalism is false.

4. The absurdity of naturalism

The fourth point is that a naturalistic worldview, i.e. one that posits that only particles exist, has profound and highly repugnant consequences: consciousness, free will and the mind are illusions (as detailed above); human life is without hope or objective meaning; humanity is probably utterly insignificant with regard to reality as a whole. We may affirm such things, but no one lives as if they were true. In a sense, naturalism condemns us to live in either ignorance or hypocrisy. I’m not saying atheists can’t have ethics or meaningful lives, only that they can’t ask the question of objective meaning. If a naturalistic ethics is possible, it must suffer the same ontological reduction as subjectivity and free will: it is a useful illusion at best.

Watching a recent debate between Alex Rosenberg and William Lane Craig on the existence of God was very fruitful, not only for revisiting the arguments for and against God’s existence, but also for seeing what a naturalistic worldview entails. Rosenberg maintained (as he does in his book The Atheist’s Guide to Reality), among other patently absurd propositions, that we don’t really think about things at all, since in a materialistic perspective there’s no such thing as “aboutness”; there are just things, i.e. material objects.

Atheists should spend less time talking about Pastafarianism and more time reading books like this one, which spell out what their belief (or non-belief, as some like to put it) entails; I’m not sure many would follow Rosenberg all the way down that rabbit hole.

Conclusion

I do not claim to have solved every question in my mind; Christianity is itself full of questions, not simply a set of ready-made answers as some seem to think. I don’t know what life after death will be like; I don’t even really know what Christ meant when he said “take up your cross daily and follow me”. All I know, as a recent convert, is that I should take this stuff seriously and try to figure out what to do about it.

No, we don’t actually think all atheists go to hell

Luke Muehlhauser of commonsenseatheism.org asks (well, asked, in 2011): “Do Christians REALLY Believe?” To put things in logical form, his point basically goes like this:

  • Christians believe that atheists go to hell
  • Hell is eternal torture
  • Any charitable person who thinks someone might endure eternal torture will do what they can to prevent it
  • Christians don’t seem overly concerned with preventing atheists from going to hell
  • Therefore Christians don’t really believe atheists go to hell

Penn Jillette makes a similar point in a video of his.

As someone who’s been a Christian for most of his life, here’s what I would answer.

First, Luke is basically correct in his conclusion! Christians don’t really think atheists go to hell. Christian doctrine on salvation is based on our gospel sources for what Jesus said 2,000 years ago. In particular, Mark 16:15-16 records that after his resurrection, Jesus said to his disciples:

“Go into all the world and proclaim the gospel to the whole creation. Whoever believes and is baptized will be saved, but whoever does not believe will be condemned.”

What’s remarkable here is that Jesus said nothing about those to whom the gospel has not been proclaimed. And regardless of how some sarcastic criticism pictures theology, theologians don’t just make stuff up. So the reality is that we don’t really know what happens to people who have never been preached the gospel; we can only hope (more on that later) that God doesn’t just send them all to hell.

The silence of Jesus on those unable to receive baptism has fueled many centuries of debate, for example on the existence of limbo. The current theological consensus, recently emphasized under Pope Benedict XVI, is that limbo doesn’t exist, and that we should instead hope that unbaptized babies who die go to heaven. Not everything in theology is dogma; as in any other field of investigation, there are degrees of certitude attached to various beliefs.

Back to the topic at hand: we don’t really know what happens to people who have never been preached the gospel, but we believe God is fair and merciful, so we reasonably hope that these people are not sent to eternal torture. What does that make of atheists, then? What of atheists who vaguely heard of the gospel but didn’t really give a damn? What of atheists who carefully studied the question and still think it’s all noodly appendages and tooth fairies? Well, in short, some of them will certainly go to hell, but most of them probably won’t.

First, most atheists have only very vague and skewed notions of what the gospel actually proclaims, which shows that nobody properly preached it to them. So they fall into the category of people who have never been preached the gospel, whose post-death fate is unknown yet unlikely to be hell.

Indeed, most arguments I hear against Christianity are based on utter misconceptions of what it teaches: for instance, all the arguments based on Old Testament Mosaic Law (“Do you guys really stone homosexuals?”), or on our ultimate destiny (“If our ultimate destiny is to live as disembodied souls, why should we care about our bodies?”), and so forth. Anyone using such arguments clearly has a terribly inaccurate conception of the message of Christ and therefore cannot be considered to have been preached the gospel.

The only category of people on whom Jesus’ condemnation falls really seems to be those who have understood the message of the gospel yet refused to believe: either because, deep down, they don’t want to accept that someone out there cares about them, for whatever personal reasons (the emotional impact of the argument from evil is the most common cause of apostasy, after all), or because they don’t want to take on the changes to their lives that following Christ would entail.

Even though they might put up a facade of rationality around their unbelief, the crux of the issue is really the rejection of God for what he is and for what faith would entail; and according to everything Jesus said, with such motives one just doesn’t enter the Kingdom of God.

So much for what Christians believe.

Now, many self-proclaimed “Christians” don’t actually believe in Jesus at all! In Canada, a large percentage of people who answer “yes” to the question “Are you Catholic?” answer “no” to the question “Do you believe in God?”. In my own experience, most self-proclaimed Catholics don’t even believe in hell, or if they do, they don’t think God sends anyone there. These people should read the gospel a bit more, or stop proclaiming conformity to a doctrine they disagree with.

To summarize:

  1. A lot of self-proclaimed Christians don’t believe in God or hell, so their attitude on the matter is irrelevant.
  2. Real Christians (those who actually believe in Jesus) don’t, for the most part, think atheists go to hell, although some atheists might.

Now, I think the fact that even some atheists might go to hell for their unbelief should be motivation enough to proselytize, and I find it regrettable that even convinced Christians are so timid. But what can I say? We’re just humans, after all; we’re as bad at standing up for our beliefs as everyone else. God help us.

The fastest sum in F#

In this post I will explore the question of how to compute the sum of a collection in F# as fast as possible; that is, as close to optimised C++ as possible. This should give us some insight into how the F# compiler and the JIT optimize code.

So first let’s look at C++ in Visual Studio 2012. I’ve enabled optimizations, but not vector instructions (SSE) for the sake of comparison with the CLR JIT, which never vectorizes anything. That is to say, C++ can generate much more sophisticated assembly than this, but this is a good starting point from our perspective.

#include <array>
#include <iostream>

using namespace std;

template<size_t size>
int sum(array<int, size>& arr) {
	int total = 0;
	for (size_t i = 0; i < arr.size(); ++i) {
		total += arr[i];
	}
	return total;
}

int main() {
	array<int, 1000> arr = {};
	auto total = sum(arr);
	cout << total;
}

Of course the compiler inlines the function, but that doesn’t change the loop. It is compiled to the following assembly (comments mine):

001A129E  xor ecx,ecx                     // sum = 0
001A12A0  xor eax,eax                     // i = 0
001A12A2  xor edx,edx                     // sum2 = 0

001A12A4  add ecx,dword ptr [esp+eax*4]   // sum += arr[i]
001A12A7  add edx,dword ptr [esp+eax*4+4] // sum2 += arr[i + 1]
001A12AB  add eax,2                       // i += 2

001A12AE  cmp eax,3E8h                    // if i < 1000
001A12B3  jb main+34h (01A12A4h)          // goto 001A12A4  
 
001A12B5  add ecx,edx                      // sum = sum + sum2

Here the C++ compiler has decided to unroll the loop once, maintaining two running totals, and adding the two at the end. This is an interesting optimisation but it’s not very important; a strictly minimal loop could be reduced to 4 instructions:

add ecx,dword ptr [esp+eax*4] // sum += arr[i]
inc eax                       // i++
cmp eax,3E8h                  // if i < 1000
jb 1                          // goto 1

Since the JIT compiler doesn’t do much in terms of advanced optimisations, this minimal loop is what we’d expect if it simply did its job correctly. Let’s see what we can get out of F#. We’ll compare four approaches: “functional” using Array.fold, “succinct” using Array.sum, “imperative” using a for loop like we did in C++, and finally the “for .. in” loop (foreach in C#).

Note that in order to see the optimized JIT assembly, you need to compile in Release mode and uncheck “Suppress JIT optimization on module load” and “Enable Just My Code” in the debugger options. This is in 32-bit mode as well.

[<EntryPoint>]
let main argv = 

    let arr = Array.zeroCreate 100

    let sum1 = Array.fold(fun acc elem -> acc + elem) 0 arr

    let sum2 = Array.sum arr

    let mutable sum3 = 0
    for i in 0 .. arr.Length - 1 do
        sum3 <- sum3 + arr.[i]
    
    let mutable sum4 = 0
    for elem in arr do
        sum4 <- sum4 + elem
    
    printfn "%d" sum1
    printfn "%d" sum2
    printfn "%d" sum3
    printfn "%d" sum4
    0

One might point out that since the array contains only zeroes, the compiler could optimize all sums away and simply print out zeroes, but fortunately for our study, it’s not smart enough to do that.

Let’s start with Array.fold. I won’t post all the assembly code because there’s quite a long prologue and epilogue which don’t really matter. The inner function is compiled to this:

00000000  mov         eax,edx 
00000002  mov         edx,dword ptr [esp+4] 
00000006  add         eax,edx 
00000008  ret         4 

This is quite good; it simply loads the next element of the array, adds it to the running total, and returns it.

The inner loop of the “fold” function is compiled to this:

0000002c  cmp         esi,ebx 
0000002e  jae         00000081 
00000030  push        dword ptr [edi+esi*4+8] 
00000034  mov         ecx,dword ptr [ebp-14h] 
00000037  mov         eax,dword ptr [ecx] 
00000039  mov         eax,dword ptr [eax+28h] 
0000003c  call        dword ptr [eax+14h] 
0000003f  mov         edx,eax 
00000041  inc         esi 
00000042  mov         eax,dword ptr [ebp-10h] 
00000045  inc         eax 
00000046  cmp         esi,eax 
00000048  jne         0000002C

I’m not going over this in detail; it basically loads the arguments to the function into registers, calls the function, increments its counter, advances the position in the array, and loops until it hits the end of the array. This is not bad assembly code at all; it’s about as terse and efficient as you could expect a straightforward implementation of fold to be. That said, it’s a lot of code compared to what we actually need, and it’s somewhat disappointing that such a short lambda function isn’t inlined, with further optimisations applied after that.

Let’s look at Array.sum now:

00000040  add         ebx,dword ptr [edi+esi*4+8] 
00000044  jo          000002DB 
0000004a  inc         esi 
0000004b  cmp         edx,esi 
0000004d  jg          00000040 

Whoa, now we’re talking. This is essentially our “minimal” 4-instruction loop, with an additional “jo” instruction after the “add”, which checks for overflow. If we force a jump to the 000002DB address, an OverflowException is thrown by the runtime. This makes a lot of sense; a sum of an arbitrary number of integers could easily overflow, and silently wrapping around is probably not what you want in general.

The imperative loop is compiled to this:

0000005a  add         ebx,dword ptr [edi+esi*4+8] 
0000005e  inc         esi 
0000005f  cmp         edx,esi 
00000061  jg          0000005A 

Nice! This is as efficient as we could hope for from the JIT. No overflow check is performed, though, so an overflow will silently give incorrect results. And if you’re really concerned about performance, your array is presumably large; and if your array is large, overflow is quite likely. So in general I think Array.sum makes more sense than bypassing the overflow check to save one instruction, but it’s nice to see that you can avoid it if you really want to.

Finally let’s take a look at the for .. in loop:

0000006b  mov         eax,dword ptr [edi+ecx*4+8] 
0000006f  add         esi,eax 
00000071  inc         ecx 
00000072  cmp         edx,ecx 
00000074  jg          0000006B 

This is essentially the same thing, except that the “fetch array element and add to running total” operation is done in two steps instead of one, adding an unnecessary instruction. This is a bit disappointing but it certainly doesn’t change much in terms of performance.

In conclusion, we have seen that the fastest sum in F# is simply the for [index] in 0 .. [length – 1] loop, but that Array.sum gives us overflow checking and thus makes a lot more sense in general. The for .. in loop is ever so slightly less efficient, and Array.fold incurs substantial overhead and should be reserved for more complex computations. We’ve also seen that the C++ compiler can do interesting optimisations beyond what the JIT is capable of, but that the JIT can nonetheless generate quite good, if unsophisticated, assembly.

Re: 8 Most common mistakes C# developers make

A recent blog article by Pawel Bejger lists some common mistakes made by C# developers. As noted by several in the comments, some of this information can be misleading or incorrect. Let’s review the problematic points:

String concatenation instead of StringBuilder

The author argues that string concatenation is inefficient compared to using StringBuilder. While this is true in the general case, it must be taken with a grain of salt. For instance, if you just want to concatenate a list of strings as in the example presented, the easiest and fastest way is String.Concat(), or String.Join() if you need to insert something between each pair of strings.

In addition, concatenations expressed at compile time are automatically translated by the compiler into the appropriate calls to String.Concat(). The use of StringBuilder should therefore be reserved for building complex strings at runtime; don’t go blindly replacing every string concatenation with a StringBuilder.
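To illustrate (a quick C# sketch of mine, not code from Pawel’s article):

using System;
using System.Collections.Generic;
using System.Text;

class ConcatDemo
{
    static void Main()
    {
        var words = new List<string> { "alpha", "beta", "gamma" };

        // Simplest and fastest for this scenario: a single framework call.
        string joined = string.Join(", ", words);
        string glued = string.Concat(words);

        // StringBuilder shines when building a complex string incrementally at runtime.
        var sb = new StringBuilder();
        for (int i = 0; i < 100; i++)
            sb.Append("line ").Append(i).AppendLine();

        // A concatenation expression like this compiles to a single
        // String.Concat(a, b, c) call; a StringBuilder would gain nothing here.
        string a = "alpha", b = "beta", c = "gamma";
        string abc = a + b + c;

        Console.WriteLine(joined);
        Console.WriteLine(glued);
        Console.WriteLine(abc);
        Console.WriteLine(sb.Length);
    }
}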

Casting with “(T)” instead of “as (T)”

The author argues that if there’s the slightest probability a cast could fail, “as (T)” should be used instead of the regular cast. This is a very common myth, and misleading advice. First, let’s review what each cast does:

  • “(T)” works with both value and reference types. It casts the object to T, and throws an InvalidCastException if the cast isn’t valid.
  • “as (T)” works only with reference types. It returns the object cast to T if it succeeds, and null if the cast isn’t valid.

Each cast expresses a different intent. “(T)” means that you fully expect this cast to succeed, and that if it doesn’t, this is an error in the code. This is the simple and general case. “as (T)” means that you fully expect this cast NOT to succeed at least some of the time, that this is a normal occurrence, and that you will take care of manually handling it via a null check after.

The real mistake I often see is “as (T)” not followed by a null check. The developer fully expects the cast to succeed, so doesn’t bother to write the null check; but the day something goes awry, no exception is thrown on the invalid cast, no null check is performed, and a hard-to-track bug has found its way into the code base. So my advice is: always use the regular cast “(T)”, unless you intend to handle the invalid cast yourself via “as (T)” and a null check.
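In code form, here’s a small sketch of my own contrasting the two intents:

using System;

class CastDemo
{
    // "(T)": I fully expect the cast to succeed; if it doesn't,
    // the InvalidCastException points straight at the bug.
    static int GetLength(object obj)
    {
        string s = (string)obj;
        return s.Length;
    }

    // "as (T)": failure is a normal occurrence, and the null check handles it.
    static int GetLengthOrZero(object obj)
    {
        string s = obj as string;
        if (s == null)
            return 0; // not a string: a normal, handled case
        return s.Length;
    }

    static void Main()
    {
        Console.WriteLine(GetLength("hello"));   // 5
        Console.WriteLine(GetLengthOrZero(42));  // 0
        try
        {
            Console.WriteLine(GetLength(42));    // throws InvalidCastException
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("bug exposed immediately");
        }
    }
}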

Using “foreach” instead of “for” for anything else than collections

The author claims that to iterate over “anything that is not a collection (so through e.g. an array)”, the “for” loop is much more efficient than the “foreach” loop.

As with any claims regarding performance, this is easily verified. In this case, the verification proves the author wrong.

As the (very outdated) page he links to shows, the generated IL for a “foreach” loop is slightly larger than that for a “for” loop. This says nothing about performance, however: the speed of code doesn’t correlate with its size, and IL isn’t executed directly anyway but compiled just-in-time to machine code, where further optimizations may happen. A very simple test performing the sum of an array of ints shows that the machine code generated for a foreach loop is in fact shorter, and its execution time shorter, than for the for loop. Results on my machine (x86 release mode with optimisations, no debugger attached):

SumForEach: 482 ms

SumFor: 503 ms

As you can see, the difference is in fact very small, so in general I would advise using whichever loop is most idiomatic or readable, without worrying too much about the performance difference. But if you’re writing performance-sensitive code, don’t rely on someone else’s benchmark: test things yourself. You may find that code that is visually more compact or more C-like doesn’t necessarily perform any better.
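For reference, a micro-benchmark along these lines looks like this (a sketch, not my exact test; the array size, iteration count and “sink” variable are arbitrary choices, the latter keeping the results observable so the JIT can’t discard the sums):

using System;
using System.Diagnostics;

class LoopBenchmark
{
    const int Iterations = 200000;

    static int SumFor(int[] arr)
    {
        int total = 0;
        for (int i = 0; i < arr.Length; i++)
            total += arr[i];
        return total;
    }

    static int SumForEach(int[] arr)
    {
        int total = 0;
        foreach (int x in arr)
            total += x;
        return total;
    }

    static void Main()
    {
        var arr = new int[10000];
        int sink = SumFor(arr) + SumForEach(arr); // warm up: JIT-compile both methods before timing

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
            sink += SumForEach(arr);
        Console.WriteLine("SumForEach: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < Iterations; i++)
            sink += SumFor(arr);
        Console.WriteLine("SumFor: {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(sink);
    }
}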

Darkstone’s MTF file format, part 2

Following my first attempt at unraveling Darkstone’s MTF file format, I suddenly found a ton of information on the format on the web. Well, so to speak. It is vaguely described at Xentax Game File Format Central, and there’s even a working open-source extractor: Dragon UnPACKer. The relevant code is in drv_default.dpr. It’s Pascal (Delphi, going by the .dpr extension), a language I had never looked at before, but that didn’t present a problem as it’s highly readable.

Remember I said some entries had invalid sizes? Well, they didn’t: the specified size was simply the uncompressed size, and I had assumed no file compression. I had noticed that most files in DATA.MTF began with AF BE or AE BE even though they were of completely different formats, but it hadn’t occurred to me that this was because they were compressed. Not that guessing would have helped; I couldn’t have figured out the decompression technique on my own, and even with the description at Xentax plus working source code in front of me, writing my own implementation was tricky.

My extractor now does at least as well as Dragon UnPACKer, so I’m pretty confident I’ve nailed the file format. Rather than try to explain it in words, I’ll refer you to my code on pastebin, which has enough comments that it should be clear even if you don’t know C#.

Now that I at least extract all files properly, the internal file formats are starting to make more sense.

O3D: a file format for static meshes, can be opened at least with makeo3d.exe. The “Flyff” 3d converter (o3d2obj.exe) appears not to work with these.

DAT: mainly ASCII strings separated by long strings of 0s or other repeated numbers. I still have no real clue as to what they mean, but at least they’re partially legible now.

CDF: references O3D files by their names, separated by long strings of 0s. Maybe level descriptions?

Anyway… this is probably going nowhere, as I have neither infinite time nor much experience with reverse-engineering game files, but at least I get to enjoy the music knowing I extracted it myself! Ha.

Darkstone’s MTF file format

One of my guilty pleasures as a programmer is writing extractors for weird/old/undocumented file formats. I’m not very good at it, but I try. My latest victim was the “.MTF” format used by Delphine Software’s Darkstone (1998), the first RPG I ever owned.

All the data files except for movies are in these obscure binary archives:

  • DATA.MTF
  • dsp001.MTF
  • dsp004.MTF
  • MUSIC.MTF
  • VOICES.MTF

I searched Google in vain for any information on the format. No, this is not the same thing as Microsoft Tape Format. The only helpful thing I found was a utility called dsxtract.exe, which extracted all the mp2 music files from MUSIC.MTF. It didn’t run on Windows 7 x64 (and of course the source code is nowhere to be found), but DosBox did the trick.

Now let’s take a look at MUSIC.MTF in a hex editor:

Ugh, not even an ASCII header. We can see that starting at byte 9, there’s an ASCII string, so the first 8 bytes are probably integers.

First things first: integers are usually stored little-endian. This means that if you blindly paste the first 4 bytes, 19 00 00 00, into calc.exe and convert from hex to decimal, you’ll get 419430400, a number that has no obvious meaning. The trick is to reverse the order of the bytes: 00 00 00 19, i.e. hex 19, is 25 in decimal. Does “25” make any sense?

Well when I ran dsxtract.exe, it produced 25 mp2 tracks. So it would appear that this first integer is the number of entries in the file.

So let’s look at the next integer: 0D 00 00 00. 0xD is 13 in decimal. Hmm, OK?

What about that next string, “MUSIC\22.MP2”? That looks like a path name. And it’s 12 characters long. Hmm, almost 13… wait! The next byte is 00, the null character, so this is a 13-byte null-terminated string, and the integer before it was its length! Every string at the beginning of this file has 12 characters followed by 00, and each is prefixed with the number 13. To confirm this hypothesis, I also looked at DATA.MTF, which has paths of various lengths, and each one is prefixed with its number of characters + 1. So, there we go.

Between “MUSIC\22.MP2” and the next string, there are 8 bytes:

75 02 00 00 93 FC 15 00

This is most likely two integers: 629 and 1440915. They might somehow indicate where this file is located within the archive. Let’s see what we have at offset 629:

FF FD 90 04 53 33 11 11 11 11 11 11 11 11 11 24

Well, I know nothing of the MP2 file format, but if this is the start of “MUSIC\22.MP2” then it probably looks similar to any other MP2 file. Let’s look at one of the extracted files in a hex editor:

FF FD 90 04 55 22 11 11 11 11 11 11 11 11 11 24

Sweet! Now let’s look at the file size of 22.MP2: 1 440 915 bytes. That’s exactly the second number following the path name, so that would give its size.

Note that I make this look very easy; in fact it took me about 3 hours to figure all this out. Anyway.

At this point we can write down the spec for Darkstone’s MTF file format:

4 bytes – numFiles: integer; number of files in the archive.

This is followed by [numFiles] data entries. Each is structured like so:

4 bytes – pathLength: integer; length in bytes of the next string.

[pathLength] bytes – null-terminated ASCII string; path of the entry.

4 bytes – offset: integer; absolute offset of the data in the archive.

4 bytes – size: integer; size of the data in the archive.

Then follows all the data.
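To make this concrete, here’s what reading the entry table might look like in C# (a sketch with names of my own, no error handling; BinaryReader conveniently reads little-endian integers):

using System.Collections.Generic;
using System.IO;
using System.Text;

class MtfEntry
{
    public string Path;
    public int Offset;
    public int Size;
}

static class MtfReader
{
    // Reads the entry table at the start of an MTF archive, per the spec above.
    // Usage: ReadEntries(new BinaryReader(File.OpenRead("MUSIC.MTF")))
    public static List<MtfEntry> ReadEntries(BinaryReader reader)
    {
        int numFiles = reader.ReadInt32();
        var entries = new List<MtfEntry>(numFiles);
        for (int i = 0; i < numFiles; i++)
        {
            int pathLength = reader.ReadInt32();
            byte[] pathBytes = reader.ReadBytes(pathLength);
            // pathLength includes the terminating null byte, so drop it here.
            string path = Encoding.ASCII.GetString(pathBytes, 0, pathLength - 1);
            int offset = reader.ReadInt32();
            int size = reader.ReadInt32();
            entries.Add(new MtfEntry { Path = path, Offset = offset, Size = size });
        }
        return entries;
    }
}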

I then wrote a little C# application that read an entry, fetched the corresponding data, put it in a file of the same name, and proceeded to the next entry until they were all done. That almost worked. It would throw exceptions while reading DATA.MTF, because some of the specified sizes there are invalid and result in reading past the end of the file. Ugh.

So I had to resort to a more involved approach. Instead of processing one entry at a time, I start by reading all the entries (path, offset, size), make a list of them, sort it by offset, and then go over that list to fetch the corresponding data. For each entry, I check whether the size makes sense; if it doesn’t, I use the next entry’s offset to calculate the “real” size. Note that the real size could actually be smaller (there may be unused bytes in between), but I suppose that’s the best I can do.
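In code, the fix-up amounts to something like this (again a sketch, reusing the hypothetical MtfEntry from the snippet above):

// Clamp any entry whose declared size would read past the end of the archive.
static void FixSizes(List<MtfEntry> entries, long archiveLength)
{
    // Sort by offset so each entry's successor marks where its data must end.
    entries.Sort((a, b) => a.Offset.CompareTo(b.Offset));

    for (int i = 0; i < entries.Count; i++)
    {
        long nextOffset = (i + 1 < entries.Count) ? entries[i + 1].Offset : archiveLength;
        if ((long)entries[i].Offset + entries[i].Size > archiveLength)
            entries[i].Size = (int)(nextOffset - entries[i].Offset);
    }
}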

This worked well. Extracting these archives reveals the many file formats used by Darkstone (a lot more fun in store!):

  • MP2 – well-known sound format, used for all music and speech
  • WAV – well-known sound format, used for most sounds
  • DAT – could be anything; used by a few relatively small files: “SND.DAT”, “LANGUAGE.DAT”, etc.
  • AND – looks related to 3D models, I don’t know.
  • B3D – maybe this? I don’t know.
  • BRM – no clue
  • CLD – very repetitive (FC 01 FC 01 FC C0 FC 00 01 FC 01 FC C0 FC 01 FC 01 FC C0…), but I don’t know.
  • MBR – obscure binary format, I haven’t got a clue
  • MDL – idem
  • SKA – idem
  • O3D – seems used for meshes, maybe it’s Objective-3D, about which no one knows apparently.

Damn.

Here’s the full listing if anyone wants to use it. This must be compiled with the /UNSAFE compiler option:

http://pastebin.com/hpFN93Um