
Forget about programming languages, long live the compiler

What programming languages do you speak? And how important do you think that is? At the end of the day, you are trying to talk to a computer system and the hardware understands as little about C# and Java as it does about Python or R. The computer you are programming for comprehends only one thing: machine language or binary code.

1. 0 and 1, or Hello World

Fortunately, today you do not need to know machine language to program. You write code in a high-level language that is then translated down to the machine's level. After all, print("Hello World") is much easier to type than a sequence of zeros and ones.
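As a minimal sketch of that translation, you can peek at what the friendly one-liner becomes, using Python's standard dis module. Strictly speaking, CPython translates your source into bytecode for its own virtual machine rather than into hardware machine code, but the idea of translating downward is the same.

import dis

# Compile the one-liner and show the lower-level instructions it turns into.
# Note: this is CPython bytecode, one step down from the source, not the
# zeros and ones the hardware ultimately executes.
source = 'print("Hello World")'
code_object = compile(source, "<example>", "exec")
dis.dis(code_object)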

Different languages have different objective and subjective advantages. That’s how it is everywhere. While English seems suitable for international communication in aviation, French holds the vocabulary for expressing love. In the context of computers, no matter what language you choose, you need a translator. This translator converts your work into ones and zeros, and that’s where the magic happens.

2. Shut up and translate

Today, the compiler is not getting the love it deserves. The programmer writes code, the compiler does not complain, and the world is beautiful. The tool does what it is supposed to do, but no one wonders what goes on behind the scenes.
The compiler was invented by Grace Hopper, who completed the first one, the A-0 system, in 1952. She is directly responsible for the fact that you can write programs in various flavours of near-English that end up running on a multitude of binary systems.

Compilers make or break ecosystems. It is the compiler that decides whether your code can run smoothly on x86 systems, ARM smartphones or RISC-V servers. Thanks to the compiler, you can write in your favorite language and, at the end of the workday, an executable file emerges that works on different systems. If the compiler is a poor translator that turns your elegant algorithms into slow code, the language or hardware it serves will die a quiet death.

3. Composable

Compilers today generally share the same structure. They consist of three main parts: the front end, the optimizer, and the back end. The front end reads your code and produces a first, rudimentary translation into an intermediate form. The optimizer then improves that intermediate form, and finally the back end reworks it into exactly the right format for the system you are compiling for.
This trinity is essential to the success of modern ecosystems. The modularity of a modern compiler means that you only need to change the front end when you want to introduce a new programming language to an existing ecosystem. Conversely, a compiler for a particular language can be retargeted to a new binary architecture simply by swapping the back end.
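As a rough sketch of that division of labour, the toy pipeline below "compiles" constant additions: a front end that parses the source into a tiny intermediate representation, an optimizer that folds constants, and a back end that emits instructions for two made-up targets. The IR and the "instruction sets" are invented for illustration; real compilers are vastly more elaborate, but the three stages are recognisable.

import ast

def front_end(source):
    """Front end: parse the source language into a simple intermediate representation."""
    tree = ast.parse(source, mode="eval")
    def lower(node):
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
            return ("add", lower(node.left), lower(node.right))
        if isinstance(node, ast.Constant):
            return ("const", node.value)
        raise NotImplementedError("toy front end only handles + and constants")
    return lower(tree.body)

def optimize(ir):
    """Middle stage: constant folding, independent of source language and target."""
    if ir[0] == "add":
        left, right = optimize(ir[1]), optimize(ir[2])
        if left[0] == "const" and right[0] == "const":
            return ("const", left[1] + right[1])
        return ("add", left, right)
    return ir

def back_end(ir, target):
    """Back end: emit (made-up) instructions for a specific target architecture."""
    if ir[0] != "const":
        raise NotImplementedError("toy back end only emits fully folded constants")
    if target == "toy-x86":
        return [f"mov eax, {ir[1]}", "ret"]
    if target == "toy-riscv":
        return [f"li a0, {ir[1]}", "ret"]
    raise ValueError(f"unknown target {target}")

ir = optimize(front_end("1 + 2 + 39"))
for target in ("toy-x86", "toy-riscv"):
    print(target, "->", back_end(ir, target))

Swapping in a different front end lets a new language reuse the optimizer and back ends unchanged, and adding a back end retargets every language that already has a front end.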

4. Post Von Neumann

Today this is more important than ever. The era of Moore's Law, and with it the reign of x86 and the Von Neumann architecture, is over. There are physical limits to the clock speed and transistor density of chips, and those limits are gradually coming into sight. The solution on the hardware side is a heterogeneous compute architecture.

That means in part that you use the right architecture for the job (such as ARM for mobile applications), but also that you combine architectures. A processor today is assisted by a GPU, a DPU and perhaps an FPGA that accelerates a specific function.

5. A little bit of love

How easy it will be to program for those heterogeneous systems is largely in the hands of compilers. The new hardware will not immediately give birth to a new language that replaces all others. Existing languages will remain popular and new ones will emerge, but to be successful they must be able to handle a multitude of hardware targets. Modular compilers are going to make that possible, and they therefore deserve a little love from time to time.

Is your love for compilers (almost) as great as your love for programming? Check out our job openings or send an open application today.
