Ever since Apple decided to move its MacBook and Mac mini lines onto its own chips, the ARM vs Intel, or rather ARM vs x86, war has intensified. Earlier, the two dominated their respective turfs: ARM in the low-power smartphone and tablet sector, and x86 in the laptop, desktop, and server sector. But now, the fastest supercomputer in existence, Fugaku, is based on the ARM architecture.
And with ARM now encroaching on the laptop sector, we can expect a turf war.
In fact, it has already begun. Soon after Apple announced its M1 MacBook, Intel published a point-by-point comparison between Intel laptops and the M1 MacBook.
But what exactly is the difference between ARM and x86? Let’s have a look.
The ARM architecture is a CPU architecture developed by Arm Ltd. As mentioned in the introduction, ARM has mostly been used in smartphones and other small-form-factor devices; its low power consumption and heat generation suit that use case.
Arm Ltd was initially a subsidiary of Acorn Computers, and ARM stood for Acorn RISC Machines. It now stands for Advanced RISC Machines.
The x86 architecture was initially developed by Intel and has been used mostly in laptops, desktop computers, and servers. Intel did try to enter the smartphone market with its x86-based Atom series, but it was soon discontinued. Currently, around four companies hold licenses for the x86 architecture, but only AMD and Intel are actively manufacturing chips. x86 chips prioritize raw performance without much care for power consumption or heat generation.
The x86 architecture traces back to the 8086 microprocessor, released by Intel in 1978. Its design follows the CISC philosophy.
To know the difference between ARM and x86, you have to know the difference between RISC and CISC
If you google ARM vs x86, this is something you'd come across quite a lot, and rightfully so. RISC and CISC are two CPU design philosophies: ARM is based on RISC, and x86 on CISC.
RISC stands for Reduced Instruction Set Computing and CISC stands for Complex Instruction Set Computing. As the name suggests, CISC is based on complex instructions and RISC is based on simple instructions. Let’s try to figure out exactly what this means.
A given CPU understands only a given set of instructions. You can picture it as a robot dog that understands only a few commands, and you can’t teach it anything more. So if you want your dog to do something, you’ll have to give a series of commands from this one set of commands. Same for a CPU. And this set of instructions that a CPU understands is called an instruction set architecture.
So imagine you’re designing a CPU. You can either make a CPU with a set of simple instructions each of which carries out simple things, or you can make one with a set of instructions each of which carries out very complex things with a single instruction.
Taking our robotic dog analogy, you can teach a dog to run to the gate, to bite the newspaper, and to run back to you. And every time you want your dog to get the newspaper to you, you can repeat these instructions to it. That would be the RISC approach. In the CISC approach, teaching your dog to fetch the newspaper would be just one instruction.
As you can tell from this oversimplified analogy, both approaches have their advantages. In the RISC approach, you have to give many instructions, while in the CISC approach fewer instructions do the trick. For the person writing the instructions, CISC looks easier, as there's less to write. But designing a CPU that executes individual complex instructions can be tricky.
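The newspaper analogy can be sketched in code. This is a toy interpreter, not a real instruction set: the opcodes are made up, but they show how the same job takes four simple RISC-style instructions (load, load, add, store) versus one complex CISC-style memory-to-memory instruction.

```python
# Toy illustration (not a real ISA): the same task expressed as several
# simple RISC-style instructions vs. one complex CISC-style instruction.

def run(program, state):
    """Execute a list of (opcode, *operands) tuples on registers and memory."""
    for op, *args in program:
        if op == "LOAD":              # RISC-style: memory -> register
            reg, addr = args
            state[reg] = state["mem"][addr]
        elif op == "ADD":             # RISC-style: register + register -> register
            dst, a, b = args
            state[dst] = state[a] + state[b]
        elif op == "STORE":           # RISC-style: register -> memory
            reg, addr = args
            state["mem"][addr] = state[reg]
        elif op == "ADDM":            # CISC-style: memory + memory -> memory
            dst, a, b = args
            state["mem"][dst] = state["mem"][a] + state["mem"][b]
    return state

# RISC: four simple instructions to add two numbers held in memory.
risc = [("LOAD", "r1", 0), ("LOAD", "r2", 1),
        ("ADD", "r3", "r1", "r2"), ("STORE", "r3", 2)]

# CISC: the same work folded into a single complex instruction.
cisc = [("ADDM", 2, 0, 1)]

a = run(risc, {"mem": [5, 7, 0]})
b = run(cisc, {"mem": [5, 7, 0]})
print(a["mem"][2], b["mem"][2], len(risc), len(cisc))  # 12 12 4 1
```

Both programs land on the same result; the trade-off is purely between program length and the complexity each instruction demands of the hardware.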
But the advantages and disadvantages of RISC and CISC depend on the use case and the available technology.
Earlier, when people wrote assembly programs by hand instead of using compilers, CISC was the better option: you didn't have to write a whole bunch of code when one instruction did the job. That advantage faded as compilers got better.
When you read about the evolution of RISC and CISC, it becomes apparent how technological advances shifted the balance to RISC.
Other differences between RISC and CISC
Intel and AMD led the charge with CISC, and ARM was on the opposite side with RISC. To stay competitive, both sides have borrowed/appropriated/stolen ideas from the other. Because of this, some experts argue that the differences between RISC and CISC are non-existent.
But differences still exist. A key one is that RISC instructions are all the same size, which gives RISC an advantage in pipelining: the CPU knows where each instruction begins without having to decode the previous one. RISC also relies on a large number of registers; the idea was to reduce memory accesses, which slow things down. RISC CPUs also had, and have, larger caches. As you may be aware, caches are a form of high-speed memory, and because RISC instructions are simpler, they require fewer transistors to decode, leaving room for a larger cache.
And RISC instructions (and therefore ARM instructions) are classically designed to complete in one clock cycle, while CISC (x86) instructions may take multiple clock cycles to complete.
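The pipelining point above can be made concrete with a small sketch (the byte widths are illustrative; real x86 instructions range from 1 to 15 bytes). With fixed-size instructions, the start of every upcoming instruction is simple arithmetic, so a pipelined front end can fetch several at once. With variable-length instructions, each one must be at least partially decoded before the next one's address is even known.

```python
# Toy sketch: why fixed-size instructions help a pipelined front end.

def boundaries_fixed(stream_len, width=4):
    """RISC-style: instruction start offsets are known immediately,
    they are just multiples of the fixed width."""
    return list(range(0, stream_len, width))

def boundaries_variable(lengths):
    """CISC-style: start offsets emerge one at a time, because each
    instruction's length must be decoded before advancing."""
    offsets, pos = [], 0
    for n in lengths:          # sequential dependency between instructions
        offsets.append(pos)
        pos += n
    return offsets

print(boundaries_fixed(16))               # [0, 4, 8, 12]
print(boundaries_variable([2, 5, 1, 8]))  # [0, 2, 7, 8]
```

The first function could hand all four addresses to parallel fetch units in one step; the second is forced into a serial walk, which is one reason decoding x86 takes more hardware effort.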
The differences between RISC and CISC play into ARM vs x86. But the differences between Arm Ltd and Intel also define the advantages of one over the other.
Arm Ltd and Intel
The differences between the companies behind the two architectures, and how they operate, also define the differences between ARM and x86.
Intel is a chip manufacturer. It sells and distributes fully fabricated chips to laptop and desktop OEMs. Same with AMD. The companies decide what goes on the chips, what features to add on, etc. So if an OEM wants something more, maybe specialised hardware for security, they’ll have to add it separately, and not on the chip.
Intel does offer customizations on their chips for some of their large clients. But even in this case, the final design and manufacturing will be done by Intel. And unlike some other semiconductor companies, Intel fabricates its chips in its facilities. The company even fabricates chips for other semiconductor companies.
This strategy has made Intel the biggest name in CPUs; you could even argue that CPU equals Intel. Device OEMs have a straightforward supply of CPUs: they get fully manufactured chips and simply have to build a device around them.
And that’s exactly where Arm differs from Intel. Arm is not a chip manufacturer. They don’t have a fabrication facility. Arm licences its architecture to partners who develop and manufacture their chips around ARM cores. These partners include Qualcomm, Samsung, Apple, and just about anyone who makes chips for smartphones.
This means partners license the ARM architecture, build their systems, and manufacture their chips to use in their own products. Or, in some cases, like Qualcomm, they design products around ARM CPU cores, manufacture them using third-party foundries, and sell the chips to OEMs.
SoC and heterogeneous computing
Due to the nature of the licenses, partners can use Arm IP to build their own custom chips. This has led to the rise of SoCs and heterogeneous computing. SoCs, or Systems on a Chip, are entire computers on a single chip. They can include GPUs, modems, and specialized blocks for AI, video processing, or anything else the manufacturer wants on the device.
This is one of the reasons why ARM has complete dominance over smartphones. Because of their small form factor, there isn't much space to add separate hardware for a specialized task, unlike a laptop or a desktop. By bringing it all onto one chip, smartphone OEMs can improve the performance of their devices for security, image processing, or other tasks. For example, Qualcomm's Snapdragon 865+ has a Prime core, three high-performance ARM Cortex-A77 cores, and four power-saving ARM Cortex-A55 cores.
The same SoC also has a 5G modem, modules for WiFi 6, integrated GPU, DSP, and a lot of other things. With increased applications of AI and IoT, OEMs are also developing chips with specific modules for these.
The ARM architecture made heterogeneous computing the standard in smartphones with the introduction of big.LITTLE, which brings together high-performance cores and power-efficient cores. Intel has introduced the same idea in its Lakefield chip, combining one high-performance Core-class CPU with four Atom-class efficiency cores.
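The big.LITTLE idea can be sketched as a scheduling decision. This is a toy model, not how a real kernel scheduler works: the core names echo the Snapdragon 865+ clusters mentioned above, but the load threshold and round-robin placement are made up for illustration.

```python
# Toy sketch of big.LITTLE scheduling (threshold and placement are
# illustrative): demanding tasks go to the high-performance "big"
# cluster, light tasks to the power-efficient "LITTLE" cluster.
from itertools import count

BIG_CORES = ["Cortex-A77-0", "Cortex-A77-1", "Cortex-A77-2"]
LITTLE_CORES = ["Cortex-A55-0", "Cortex-A55-1", "Cortex-A55-2", "Cortex-A55-3"]

_next = count()  # naive round-robin placement within the chosen cluster

def assign_core(task_name, estimated_load, threshold=0.5):
    """Pick a core for a task based on its estimated load (0.0 to 1.0).

    A real scheduler also weighs current utilisation, thermals, and
    energy models; this only captures the big-vs-LITTLE split.
    """
    cluster = BIG_CORES if estimated_load >= threshold else LITTLE_CORES
    return cluster[next(_next) % len(cluster)]

print(assign_core("game render loop", 0.9))  # lands on a Cortex-A77 core
print(assign_core("email sync", 0.1))        # lands on a Cortex-A55 core
```

The payoff is battery life: background chores never spin up a power-hungry big core, while bursty foreground work still gets full performance.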
Of course, AMD also has its share of SoCs, used in the Xbox and the PlayStation 5. But Arm partners, like Qualcomm with its Snapdragon line, have taken this much further.
The Arm licensing model also allows better integration between hardware and software
Aside from the raw performance of SoCs, developing your own hardware has other benefits. By building your own chips for your own devices, you have complete control over the package: you choose the features, and you make everything play well together. This is an advantage Arm partners enjoy.
And Apple recognized this long ago.
ARM and Apple
Since 2010, Apple has been making its own chips for iPhones, iPads, and the Apple Watch. Apple has always had the "whole widget" philosophy, and compared to other smartphone OEMs, it has the unique advantage of controlling its OS. By designing its own chips, Apple controls everything that goes into its devices.
Needless to say, they’ve had a huge success with this. The chips have played a huge role in maintaining Apple’s position in the smartphone segment.
And now Apple is bringing this success to its MacBook and Mac mini line-ups. Ever since their launch, reviewers have been raving about the M1 chip. It has blown away the competition in almost all benchmarks.
Apple wasn't the first to bring ARM to a laptop; Microsoft beat them to it. In collaboration with Qualcomm, Microsoft developed the SQ1, used in the Surface Pro X. While Microsoft did build an emulator to deal with x86 app compatibility issues on ARM, it really didn't work out well.
And app compatibility is one reason why the M1 MacBook is such a success. All of Apple's own apps already run natively on the M1, and native support for more apps is expected soon. Until those native versions are available, Apple's Rosetta 2 translation layer runs x86 apps on Apple silicon. If an app ships only x86 instructions, Rosetta launches automatically and translates it for the M1. There may be a delay the first time you launch such an app, but subsequent launches should work just fine.
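The launch-time decision described above can be sketched as a small function. This is an illustrative model, not Apple's actual implementation: the architecture names (`arm64`, `x86_64`) are real, but the function and its return strings are invented for the example. macOS "universal" binaries ship slices for more than one architecture, which is why the input is a set.

```python
# Toy sketch of the decision Rosetta 2 automates at app launch
# (illustrative logic, not Apple's implementation).

def launch_plan(app_archs, host_arch="arm64"):
    """Given the CPU architectures an app binary ships, decide how
    to run it on the given host architecture."""
    if host_arch in app_archs:
        return "run natively"              # a native slice is available
    if host_arch == "arm64" and "x86_64" in app_archs:
        return "translate with Rosetta 2"  # x86-only app on Apple silicon
    return "cannot run"

print(launch_plan({"arm64", "x86_64"}))  # run natively
print(launch_plan({"x86_64"}))           # translate with Rosetta 2
```

The first-launch delay the article mentions corresponds to the translation branch: Rosetta does much of its work up front, then caches the translated code for later launches.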
As the Microsoft SQ1 showed, emulators don't have the best track record, but Apple has clearly invested in Rosetta 2, and it's paying off. Most translated apps work as well on the M1 as they do on Intel processors, and some perform even better.
Moving forward, Apple appears committed to the M1. While it still sells Intel-powered Macs, it will eventually transition completely to ARM. The future looks bright for ARM. x86 will be around for a while and may never become completely obsolete, but the ARM vs x86 battle will definitely go on for a while.