Why we (still) need Fortran, and why this won't change
By joe
I saw a link on /. to an article on applying Wodehouse's ideas about writing prose to refactoring code. For those not in the know, code refactoring is the process of rewriting code to make it simpler, more efficient, or more expressive of its intent. What has this to do with Fortran, and in the bigger picture, HPC? Everything.
Fortran has not been in vogue in CS departments in this century, nor for the latter portion of the last. It is hard to find Fortran classes. C users decry Fortran as old technology; C++ users sniff contemptuously at it. But Fortran code exists, in massive amounts, in HPC and in scientific and engineering computing circles.

Fortran itself has not stood still. It has evolved from a difficult-to-replace tool of computing into a modern OO language (see the sketch at the end of this post), capable of doing the many things required of it for new code development. F2003 and beyond are not by any stretch the same language we may have learned and used in the 80s. But compilers for them will still compile the codes written then, and that is the critical aspect. HPC codes developed twenty, thirty, and forty years ago can (and do) compile and run on modern systems. Critical research and computing pathways depend upon them.

These codes represent huge sunk costs. As the article points out, old and crusty code is battle-scarred. It has withstood the test of time and the rigors of use. Replace it with a modern C/C++ implementation, and you throw all that value away, along with all the bug fixes and corner-case handling. That vastly increases the cost of the rewrite, and it certainly increases the number of test cases one would have to write.

Think about this the next time an accelerator developer proposes that you rewrite your libraries and code base in their native C/C++ dialect to exploit their functionality. As many customers have told me recently, they would seriously consider accelerators when the accelerators come with a decent Fortran compiler. Until then, accelerators are not on the agenda. This was an eye opener for me personally, but I do understand it. And it represents a huge potential opportunity for the first group to deliver one. Since CUDA is based upon the Open64 compilers, my guess is that they will be first.

There is little doubt that accelerators are the future of HPC; there is no other way to execute larger numbers of instructions per unit time, more efficiently. Unfortunately, without lowering the barriers to entry, it will be very hard for existing users to make use of them. Demanding that users port their code, change their language, and alter what they do in order to use your great new technology is a non-starter in most cases. It is not a viable business model. Don't believe me? Ask the customers. Tell them the value prop and the requirements.
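As a footnote to the OO point above, here is a minimal sketch of what F2003-style object orientation looks like: a derived type with a type-bound procedure. This is my own illustrative example (the `particle` type and `advance` procedure are hypothetical names), not code from any project discussed here.

```fortran
! A minimal F2003 OO sketch: a derived type with a type-bound procedure.
module particle_mod
  implicit none
  private
  public :: particle

  type :: particle
     real :: x = 0.0   ! position
     real :: v = 0.0   ! velocity
   contains
     procedure :: advance   ! type-bound procedure, called as p%advance(dt)
  end type particle

contains

  ! Advance the particle position by one explicit Euler step.
  subroutine advance(self, dt)
    class(particle), intent(inout) :: self
    real,            intent(in)    :: dt
    self%x = self%x + self%v * dt
  end subroutine advance

end module particle_mod

program demo
  use particle_mod
  implicit none
  type(particle) :: p

  p%v = 2.0
  call p%advance(0.5)
  print *, 'position after one step:', p%x   ! prints 1.0
end program demo
```

The point is that this module compiles with the same compilers, in the same build, as the FORTRAN 77 codes written decades ago. New development gets modern language features; the old battle-scarred codes keep running untouched.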