Ever since Apple launched its M1 processor and showed it running fast and cool in new MacBooks, the tech community has been abuzz, benchmarking the SoC and drawing comparisons to see where the M1 stands in performance and efficiency against its Intel and AMD counterparts.

Needless to say, the comparison isn't a straight line when Intel and AMD run x86 applications while the M1 runs native Arm code and can also translate x86. Some will dismiss the M1 as relevant only to Apple devices (true), while others may see "magic" in Apple delivering a fast laptop with iPad-like battery life on its first attempt (also true).

In this article, we’d like to share a few of our thoughts on why Apple M1 is a very relevant development in the world of computer hardware. For us, this is akin to Intel joining the GPU wars in 2021. It’s simply the kind of thing that doesn’t happen every day, or every year. And now Apple has effectively entered the mainstream CPU market, to rival the likes of Intel, AMD and Qualcomm.

The transition

The M1 marks the Mac's biggest architectural transition since 2006, when Apple scrapped PowerPC in favor of Intel processors. Now the Cupertino giant is betting its entire future on Arm-based chips developed fully in-house, leaving Intel behind and becoming more technologically self-sufficient.

The first devices powered by the Apple M1 include the MacBook Air, MacBook Pro 13 and Mac mini. This is relevant because the MacBook Air is their least expensive and most popular laptop. The Air is now also fanless.

Inside the MacBook Air: no fans. Image: iFixit

These first M1 computers are not performance-oriented models. Apple’s breakup with Intel kickstarts a two-year migration process, meaning the entire Mac lineup (MacBook Pro, iMac, Mac Pro) will move to Arm-based custom silicon.

Leaving Intel behind

Intel has been struggling with manufacturing after years of relentless advances. Apple saw this coming years ahead and started working on its own desktop chip before it really needed one. The vertical integration Apple is achieving goes back to its roots and how it has always conceived of computers.

The biggest benefits Apple will get from the switch to Arm are system integration and efficiency. With Intel x86, Apple could only choose from a handful of offerings: basically whatever Intel thought would be a good idea. If Apple wanted to tweak something, like adding more GPU performance or removing unused parts of a processor, that wasn't possible. Arm, on the other hand, is nearly infinitely customizable. What Arm sells are blueprints and small pieces of intellectual property, like a buffet where you pick and choose only the things you want. The switch lets Apple's engineers design chips that perfectly fit their needs rather than settling for one of Intel's off-the-shelf parts.

Intel makes great CPUs, but nothing can match the performance and efficiency of a fully-custom design. Apple was supposedly the “number one filer of problems in the [x86] architecture” according to one of their former engineers. Quality issues with Skylake finally pushed Apple over the edge to decide to just build their own CPUs. The decision will hurt Intel’s bottom line, but not that much. Apple only accounts for around 3% of Intel’s sales.

Not a CPU, an SoC

The initial M1 hardware is not only capable, it's also very efficient. And it's a true SoC: processing, graphics, I/O, and system memory all in the same package. It's likely Apple had a lesser version of the M1 ready over a year ago, but it waited until it could leapfrog the rest of the industry in performance per watt.

It's also clear that Apple has leveraged its decade of experience building specialized hardware for the iPhone. By applying those principles to desktop hardware, it has added hardware-level optimizations for typical workloads, which means the M1 can be extremely fast at certain tasks: JavaScript, encoding/decoding, image processing, encryption, AI, and (very cleverly) even x86 emulation. This reminds us of Intel's MMX extensions of yesteryear, but on steroids.

Power and cooling have long limited how fast processors can go: you can only build a chip as fast as you can safely cool and power it. The preliminary performance and efficiency numbers are where the M1 deserves the most praise. Keep in mind that the M1 is essentially a beefed-up iPhone A14, but that's only the beginning. It can't compete with high-end CPUs in performance, but it isn't trying to yet. This is the first generation of what will likely be a long line of processors.

The M1’s performance and energy efficiency compared to other low-power CPUs is great and is the biggest benefit of switching Macs over to Apple silicon.

Love and hype for Apple?

As tech enthusiasts, we have nothing but admiration for the engineering teams at chip makers like Intel, AMD, Nvidia and Qualcomm. The fact that Apple has been able to join the fray, building a world-class team capable of surpassing the likes of Qualcomm and other mobile makers first, and now playing the same game as AMD and Intel is impressive.

Or a not so impressive view…

At the same time, this isn't necessarily as big of a deal as the hype makes it seem. Apple didn't invent anything new or particularly novel. To grossly over-simplify, what Apple has done is build a beefed-up iPhone CPU and put it in a laptop. Remember that Apple has been building iPhone SoCs in-house for over a decade, so they aren't exactly new to the game. That's not to say that Apple isn't worthy of praise for their accomplishments. To pull this off, they've gambled potentially billions of dollars in R&D on the hope that this switch will be beneficial in the long term.

What’s the deal with UMA?

Unified Memory Architecture, or UMA, is one area where Apple has the potential to greatly improve performance and efficiency. UMA means the CPU and GPU work together and share the same memory. In a traditional system, the CPU uses system RAM while the graphics card has its own dedicated video memory. Imagine you're trying to send your crush a message. The traditional approach to CPU and GPU memory is like putting a letter in the mail and waiting for it to be delivered: slow, since every message has to go through the post office. To speed this up, a technology called Direct Memory Access, or DMA, lets one device directly access the memory of another. That's like being given a key to their house so you can stop by and drop off the message yourself: faster, but you still have to travel and get inside. UMA is the equivalent of moving in and sharing the same house; there's no need to wait or travel anywhere to send a message.
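The difference can be sketched in a few lines of Python (a toy model, not a real graphics API): in the discrete model, data the GPU needs must be copied into its own VRAM over a bus, while under UMA both processors simply share the same buffer.

```python
# Toy model of discrete vs. unified memory. Function names are
# illustrative only -- this is not how any real driver is written.

def discrete_transfer(system_ram: bytearray) -> bytearray:
    """Traditional model: the GPU has its own VRAM, so data must be
    copied from system RAM over a bus (e.g. PCIe)."""
    vram = bytearray(system_ram)  # explicit copy: costs time and power
    return vram

def unified_access(system_ram: bytearray) -> bytearray:
    """UMA model: CPU and GPU address the same physical pool, so the
    'transfer' is just sharing a reference. No copy is made."""
    return system_ram

data = bytearray(b"frame pixels")

copied = discrete_transfer(data)
shared = unified_access(data)

copied[0] = 0  # the GPU's private copy diverges from system RAM
assert data[:1] == b"f"

shared[0] = 0  # under UMA, CPU and GPU see the very same bytes
assert data[:1] == b"\x00"
```

The point of the sketch is only that the unified path eliminates the copy entirely, which is where the space and power savings come from.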

UMA is great for low-power applications where you want maximum integration to save space and power. However, it has performance limits: there's a reason high-end dedicated graphics cards are far faster than integrated graphics, and you can only fit so much on one chip. Other issues arise with resource contention. If a very GPU-intensive task is using up lots of memory, you don't want it to choke out the CPU. Apple has done an excellent job of managing this to ensure a resource hog in one area doesn't bring down the entire system.
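One simple policy for keeping a hog in check is to cap how much of the shared pool any single client may claim. Apple hasn't published how its memory arbitration actually works, so the sketch below (names and the 75% figure are made up) is purely illustrative of the idea:

```python
# Purely illustrative contention policy for a shared memory pool:
# no single client (CPU, GPU, ...) may hold more than a fixed share,
# so a runaway GPU workload cannot starve the CPU.

PER_CLIENT_CAP = 0.75  # hypothetical: max 75% of the pool per client

class SharedPool:
    def __init__(self, total_mb: int):
        self.total = total_mb
        self.used = {}  # client name -> MB currently held

    def allocate(self, client: str, mb: int) -> bool:
        cap = int(self.total * PER_CLIENT_CAP)
        held = self.used.get(client, 0)
        free = self.total - sum(self.used.values())
        if held + mb > cap or mb > free:
            return False  # denied: would starve the other clients
        self.used[client] = held + mb
        return True

pool = SharedPool(8192)
assert pool.allocate("gpu", 6000)      # big GPU allocation succeeds
assert not pool.allocate("gpu", 1000)  # would exceed the 75% cap
assert pool.allocate("cpu", 2000)      # the CPU still has headroom
```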

Not just hardware, but software

Moving macOS to Arm so seamlessly is no small feat; Microsoft has struggled with the same for years. Apple ported macOS and all first-party apps to Arm, developed Rosetta translation for x86 compatibility, and built the developer tools that will ease the transition for everyone already invested in the Mac ecosystem.

Apple had been using Intel x86 CPUs in its Mac line since 2006. Before that it used PowerPC, and Motorola 68k earlier still. Each architecture switch comes with a long list of pros and cons, but the biggest problem is that all software must be recompiled.

It's as if the operating system speaks English while the processor speaks French: they have to match or nothing works. That's easy to fix for a few apps, but very difficult across an entire ecosystem. The benefits of switching architectures can include increased efficiency, lower cost, higher performance, and more.

x86, Rosetta and compatibility

We said earlier that the switch to Arm means Macs will speak another language. Rosetta translates applications from x86 to Arm. It can either perform this translation ahead of time when an application is installed or in real time while an application is running. This is no easy task considering the complexity and latency requirements.

The fact that Apple has even beaten Intel hardware running the same code in certain circumstances deserves a big round of applause for the Rosetta team. It's not perfect, though: some programs run at half the speed they would on native x86 hardware, and some just don't work at all. That's not the end of the world. Rosetta is simply meant to ease the transition, keeping x86 apps running until developers have ported their code to Arm.
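Conceptually, ahead-of-time translation rewrites a program's instructions for one architecture into equivalents for another before it ever runs. The toy Python sketch below does this for two made-up stack-machine instruction sets; real x86-to-Arm binary translation is vastly harder, but the principle is the same.

```python
# Toy sketch of ahead-of-time binary translation. Both instruction
# sets here are invented for illustration -- this is the idea behind
# Rosetta's AOT mode, not its implementation.

# Source ISA: ("PUSH", n) and ("ADD",).  Target ISA: ("LOAD", n), ("SUM",).
TRANSLATION_TABLE = {"PUSH": "LOAD", "ADD": "SUM"}

def translate(program):
    """AOT pass: map each source opcode to its target equivalent,
    keeping operands intact."""
    return [(TRANSLATION_TABLE[op[0]], *op[1:]) for op in program]

def run_target(program):
    """Minimal interpreter standing in for the target CPU."""
    stack = []
    for op in program:
        if op[0] == "LOAD":
            stack.append(op[1])
        elif op[0] == "SUM":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

source = [("PUSH", 2), ("PUSH", 3), ("ADD",)]
assert run_target(translate(source)) == 5  # same result, new "ISA"
```

Real translators also have to handle self-modifying code, differing memory models, and tight latency budgets, which is why Rosetta's just-in-time path exists alongside the install-time one.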

Apple hasn’t reinvented the wheel with M1, but they have more or less started producing their own custom modified wheels. Intel and AMD will still dominate the high performance CPU market for years to come, but Apple isn’t necessarily that far behind. You can’t just crank this stuff out overnight, so it will take some time.

PC gamers won’t care

In the short to mid term, gamers, enthusiasts, and PC builders are entirely unaffected. It's going to take Apple one or two more release cycles to match the best you can buy on a desktop today, but even when/if they do, Apple's ecosystem is not where gamers live. And for every user who will buy only Apple, there's at least one who will never buy Apple.

What were chip makers doing all this time?

A fairly typical question we've seen asked in the past month: why haven't AMD or Intel been doing this or that? How has Apple suddenly come up with a novel way to integrate memory into the CPU and become more efficient?

Remember that if not for AMD, the desktop PC space would have been stagnant this past half decade. But just like AMD has been working hard on building the Zen architecture for desktop, workstation, and server workloads, Apple has been doing the same but building from a more constrained, mobile scope.

Image: iFixit

There's still much to learn about how far Apple can push the M1, its successors, and UMA toward a more complex chip that can scale to more cores and memory.

How the PC industry can benefit

Engineers have been able to optimize software to run better on given hardware for a long time. Since Apple is now designing their own desktop processors, they can also optimize the hardware to run the software better.

That's a genuine threat to the Windows PC ecosystem, and staying behind is not an option. We wouldn't be surprised if key players in that space (Microsoft, AMD, Intel, Nvidia, HP, Dell, Lenovo, etc.) start working together on similar hardware/software optimizations to make PCs faster, better, or more efficient.

A prime example is the next-gen gaming consoles getting fast storage and I/O thanks to tightly integrated hardware and software. Nvidia was keen to announce that RTX graphics cards could provide a similar path to low latency and faster storage with RTX I/O, while a more direct equivalent to the Xbox Series X's solution will arrive as a DirectX 12 feature called DirectStorage.

It’s been characteristic of the hardware industry that when a new player or technology enters the market, it does so disrupting the status quo. Apple’s M1 has done just that.
