# The Floppy Disk of Floating Point

July 12, 2020

What, old acquaintance!
Could not all this flesh
Keep in a little life?

Henry IV, Part 1

After spending the most recent week of my life squeezing the last of my long doubles into regular doubles – this is a bit like fitting a jar of pickles into a money-clip – I write to pay tribute to x87, the 80-bit-capable ISA that’s been hanging around Intel’s neck for some forty years now, and that will soon go the way of the floppy disk.

If you look up a list of x86 assembly instructions, there’s an early interlude of mnemonics that begin with the letter F: FABS, FADD, FDIV (yes, that FDIV). These are the dedicated x87 floating-point instructions – originally designed to run on a dedicated hardware unit (the 8087), a sort of high-tech scalar sidecar that required the CPU to FWAIT on its results.

On that list you’ll discover instructions that, to this day, you can’t find anywhere else. The infallible trifecta of triangles is fully represented: FSIN, FCOS, FPTAN. You’ll find F2XM1, which computes in a swoop $$2^x - 1$$ – a kind of pauper’s fused multiply-add, some twenty-five years ahead of its time. There’s even FYL2XP1, for all those occasions when you need to compute $$y \log_2(x+1)$$ on the double.

Nowadays, these sorts of functions are delegated to standard math libraries, freeing up microcode writers to get back to the basics of LSR (loadin’, storin’, and ’rithmetic). But there was a brief period, before the graphical era, when hardware designers assumed that Personal Computers would be used to Compute Things, when sine and cosine and partial arc-tangent were deemed worthy of dedicated hardware support – and not with a mere 32 bits of floating-point precision, like you’ll find in today’s GPUs, or even 64 bits, large enough to roughly represent a googol – but with massive 80-bit stack registers, 64 bits of pure mantissa, and enough exponent to raise the Eddington number to the 50th power.

Nineteen decimal digits of floating-point precision – in the days when integers topped out around 65 grand.

There’s a kind of sweet innocence about x87. William Kahan, of compensated summation fame, drafted the first IEEE floating-point standard, and assisted Intel with that first coprocessor. He said this about the 80-bit registers:

For now the 10-byte Extended format is a tolerable compromise between the value of extra-precise arithmetic and the price of implementing it to run fast; very soon two more bytes of precision will become tolerable, and ultimately a 16-byte format…

Ultimately a 16-byte format. Ha! If the ancients could only witness our debaucheries. I will note in passing that all this took place before the invention of the GIF.

Pure science, as a market force, has given way to Deep Learning – which requires more data, and less precision, than its predecessor. Heavy computations have moved to the graphics cards. Desktop processing, if Apple’s recent example will be followed, is transitioning to ARM. Industry triumphantly announces not a 16-byte floating-point number, but a 16-bit one. Professor Kahan was last seen ranting about benchmarks. 80-bit is all but dead. And thus x87, once a model of numerical engineering and a harbinger of significant digits to come, will soon check into a suite next to the ENIAC and the PDP-10 and the Difference Engine and Fortran 66, in the echoing, whirring hallway of computations past.

You’re reading evanmiller.org, a random collection of math, tech, and musings.