The future of mobile devices: Apple and Microsoft are pursuing ARM

Right now, somewhere in Apple’s top-secret lab, there’s most likely a machine running macOS on an ARM-based processor. Why is this such a big deal? Let me explain.

First, a little background. Apple’s computers have long been known to be comparatively fast, efficient and reliable (attributed mostly to tight software integration). However, we’re approaching the peak of what modern Intel-powered x86 laptops can do. If you’re a tech-head, you’ve most likely heard about the ‘space-heater’ Core i9 MacBook Pro, which ran so hot under load that it throttled itself down to the point where the regular i7 model was faster. To get around these limits of modern PCs, Google, Samsung, Microsoft, Apple and others are pursuing an entirely different CPU architecture for their devices.

All of Apple’s devices will switch to ARM-based CPUs in the next few years.

A brief introduction to ARM

ARM has actually been around for quite some time. It was created in the 1980s by Acorn Computers (the name originally stood for Acorn RISC Machine), and today Arm Holdings licenses the architecture to companies like Qualcomm, Samsung and Apple, with Qualcomm spearheading its adoption in mobile devices. Unlike the CISC-based x86 architecture, ARM is a RISC design: its instruction set is small and simple, with fixed-length instructions that are easy to decode, so it carries far less bloat. From the beginning, the development focus was also efficiency per transistor, instead of aiming for more raw power from the entire chip every generation. This makes ARM CPUs far more power-efficient. In fact, the first ARM processor, developed by Acorn Computers, was expected to be low power, but when the engineers tested it they discovered it was running even though they had forgotten to connect it to the power supply. The chip drew so little current, about a tenth of a watt, that it could run off leakage through the signal inputs from the test board.

What does ARM do that x86 doesn’t?

Where x86 excels is at handling workloads where tasks are just thrown at it, without a specific way to do them. It’s like someone telling you to clean your room, and that’s it: one complex instruction, details left to the hardware. ARM, on the other hand, has to be told what to do and how to do it. In our ‘clean your room’ example, that would mean a supervisor standing behind you while you’re cleaning, telling you where to put things, which mop to use, and so on. Counterintuitively, this makes ARM more efficient: because every instruction is small and predictable, the hardware that decodes and executes it can be simpler, and each instruction completes in fewer clock cycles. ARM uses RISC (‘Reduced Instruction Set Computing’), which avoids the bloated instruction set of x86’s CISC (‘Complex Instruction Set Computing’) architecture, making it more efficient in applications compiled for ARM.
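To make the ‘clean your room’ analogy concrete, here’s a toy Python sketch (not real machine code, just an illustration of the two styles): the same addition expressed CISC-style as one read-modify-write instruction, versus RISC-style as an explicit load/operate/store sequence.

```python
# Toy model: memory and registers as dictionaries.
memory = {"total": 10}
registers = {}

# CISC-style: a single "add to memory" instruction does everything.
def cisc_add_mem(addr, value):
    memory[addr] += value          # read, modify and write in one instruction

# RISC-style: the same work spelled out as three simple steps.
def risc_load(reg, addr):
    registers[reg] = memory[addr]  # load from memory into a register

def risc_add(reg, value):
    registers[reg] += value        # operate on registers only

def risc_store(reg, addr):
    memory[addr] = registers[reg]  # store the result back to memory

cisc_add_mem("total", 5)           # one complex instruction...
risc_load("r0", "total")           # ...versus three simple ones
risc_add("r0", 5)
risc_store("r0", "total")

print(memory["total"])             # both additions applied: 20
```

Both styles get the same answer; the difference is that each RISC step is trivial for the hardware to decode and execute, which is where the efficiency comes from.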

Intel Confirms Apple Macs Will Switch to Arm CPUs by 2020, Says Report – Tom’s Hardware

ARM in today’s market

Apple claims the iPad Pro is faster than the vast majority of laptops sold today. Photo by Daniel Korpai on Unsplash

Qualcomm plans to release the Snapdragon 8cx, an ARM CPU made for laptops, in 2019, which is expected to rival Intel’s i5 series of mobile CPUs. If you want an ARM product right now, chances are you’ve already got one. Arm Holdings licenses CPU core designs and the instruction set to companies so they can build their own chips. Most Android phones use Qualcomm’s Snapdragon series, while all iPhones and iPads use Apple’s proprietary A-series chips; both implement the same ARM instruction set, but Apple designs its own custom cores under an architectural license rather than using off-the-shelf ones. This is also why the new iPad Pro with the A12X CPU can, in some benchmarks, outperform the i7 MacBook Pro: it’s using an ARM-based CPU. To put things in perspective, the newest i7-8565U used in ultrabooks like the Razer Blade Stealth, ZenBook, Dell XPS and Microsoft Surface lineup still trails the A12X chip used in the iPad Pro in many benchmarks, while the A12X consumes far less power.
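If you’re curious which family the machine you’re reading this on belongs to, Python’s standard library can tell you (the exact strings returned vary by OS, so the mapping below is a best-effort sketch):

```python
import platform

# e.g. "x86_64"/"AMD64" on Intel or AMD, "arm64"/"aarch64" on ARM machines
machine = platform.machine()

if machine.lower() in ("arm64", "aarch64") or machine.lower().startswith("arm"):
    family = "ARM"
elif machine.lower() in ("x86_64", "amd64", "i386", "i686"):
    family = "x86"
else:
    family = "unknown"

print(f"{machine} -> {family}")
```

On a 2019-era ultrabook this prints an x86 identifier; on an iPad or most Android phones the same check would report ARM.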

ARM is also being tested extensively in server farms, where vendors claim you save about two-thirds of the upfront cost of a traditional x86 server farm of the same caliber. The ARM server CPUs also draw far less power, reportedly around 50% of the x86 servers’ consumption.

Why x86 is struggling

ARM and x86 are CPU architectures. The most common in laptops in 2019 is Intel’s x86 platform. The x86 platform is tried and true, but it’s starting to show the limits of its capabilities. It was originally designed decades ago for simple, serial workloads: do one task, then move on to the next. That design has proven inefficient for the continuous, parallel workloads modern computers take on. To keep x86 competitive, a long series of extensions has been bolted onto the original architecture while preserving backward compatibility, making it easier for programs to target. But every addition makes the instruction set more bloated, and in turn the CPU itself less efficient, because the hardware spends more work decoding and more clock cycles on each task.

The underlying issue with the x86 architecture is that the only real way to make it faster is to switch more transistors per second. Transistors are like tiny on-off switches, billions of which are crammed into a smaller and smaller space, and the norm has been to jam in more of them every generation. This is where the problem lies. Switching more transistors every clock cycle, or increasing the switching speed of the transistors, requires more power. This power draw has been increasing steadily with the number of transistors, but has been kept in check by the constant shrinking of the manufacturing process: a chip built on a 10nm process is more energy-efficient than one built on an older 22nm process. That escape route is closing, as shrinking the transistors has become so hard that Intel is struggling to release a stable 10nm version of its own processors, halting the steady performance increase we’ve had for the last several years.

More on the differences between x86 and ARM:

Read Abhinav Choudhury’s answer to “What’s the difference between ARM and x86 processors?” on Quora

Video about ARM and the future of mobile computing:

ARM as Fast as Possible: