Oh! How could I be so stupid! Not resetting n is the main reason why this gives such weird values!
Dodicat: Thanks. I'm really surprised. I knew it was much better to perform the test several times, but I didn't expect it to give such a huge difference. Why would it be so slow the first time in comparison to the others?
Joshy: Wow! Fixed point with all the functions! Well, if I do this with fixed point, it won't be a whole program using it; it'd be just one routine doing its calculations in fixed point. It'd be a few multiplications and a couple of divisions, but it'd be called many, many times, so instead of building functions I'd probably just write the fixed-point math inline. From your numbers, I understand that switching to fixed point is only worth it if I use the same word width as my compiled code? (I imagine that a 32-bit ELF running under a 64-bit GNU would still run fast because it'd be running in a 32-bit code memory region, but if I want to use Longs I have to compile in 32-bit, and if I want to use LongInts I have to compile in 64-bit, or else just use Integers, right?)
Paul Doe: Thanks! Yep, that's more or less what I was starting to think. On today's computers it looks like fixed-point math may still be worth it, but you have to take a lot of precautions: just using fixed point instead of floating point at any random place in the program will likely make it slower. I'll run a benchmark of Singles vs Doubles.
jj2007: As a matter of fact, I may later translate that single routine to assembly and call it from my program, but as you say, I'd have to be quite skilled to use packed singles. I'll first try the routine with Doubles and see how slow it really is; then I'll try optimising it with Singles, some fixed point, etc. If that looks worth it, I'll translate it to assembly and study packed floats to see if I can get them working. I don't want to complicate my code more than necessary, but if it looks like it'll be necessary, I will.