Is a Computing Revolution on the Way?

When Apple launched its M1 processor, it shocked the world – though not everyone in the loop over in Silicon Valley was surprised. Intel is years behind schedule with its 10nm chip designs, which were supposed to have shipped… well, forever ago now. More than five years, that's for sure. Apple needed Intel to get its chips down to a smaller node – Apple's latest phones were made possible by TSMC's 5nm process, so why couldn't Intel get something similar running for the Macs and MacBooks?

It’s fascinating to wonder at which moment Tim Cook had the epiphany… why can’t we put one of those A14 Bionic chips into a MacBook? Hell, why can’t we build a super-sized version of it and make a Mac Pro? Just the image Apple now projects by designing the chips in every one of its devices gives it more “cutting edge” capital than any PC maker could ever dream of.

The PC Side

Two of the best makers of portable and desktop PCs are Dell and Lenovo – the latter acquired IBM’s ThinkPad line-up back in 2005 and has kept to the same methodology as IBM: build fantastic, high-spec machines for lovers of secure, well-built workstations designed for doing real work. You’ll even see ThinkPads in use on the International Space Station – that’s how tough they are, although I suspect NASA made just one or two little change requests before sending them up in a SpaceX Dragon cargo ship.

As for Dell, their Precision line of laptops has always been at the forefront of enthusiast design (I’m talking professional work here – not gaming). Sure, they have the XPS Ultrabooks and the Latitude series of gorgeous, small laptops, but it says a lot that so many Precision M6800 machines are still in use today despite running on technology that is years old by this point. And as for the latest 5750 machines with Xeon processors, a Quadro RTX 3000, four M.2 sockets, and four RAM slots for up to 128GB of RAM… these are serious machines, with serious price tags to match. But here comes the problem…

Apple Demonstrates the M1 CPU

When Apple first demonstrated its M1 processor to the world, people were sceptical – this is a smartphone chip, right? How can I put that in my laptop and expect anything like the performance I had before? The thing is, these were heavily re-engineered smartphone chips, with real GPUs integrated onto the die – as well as 8GB or 16GB of “unified” memory, meaning the same pool is available equally to every part of the chip. It’s like merging a PC’s RAM with the GPU’s RAM and removing all the bottlenecks and housekeeping instructions that have to run just to shuffle data between them.
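To make the unified-memory point concrete, here is a minimal Swift sketch using Metal’s shared storage mode. The buffer contents and sizes are purely illustrative, but the API calls are the standard ones: the CPU writes an allocation and the GPU can read that very same allocation, with no staging copy into separate VRAM.

```swift
import Metal

// Minimal sketch: on Apple silicon, a buffer created with .storageModeShared
// lives in the single pool of memory that both the CPU and GPU can see,
// so there is no separate "upload to VRAM" step.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

let samples: [Float] = (0..<1_024).map { Float($0) }

// The CPU writes the data once; the GPU can read the same allocation directly.
let buffer = device.makeBuffer(bytes: samples,
                               length: samples.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// On a PC with a discrete GPU, the equivalent step usually involves a staging
// buffer in system RAM plus a copy across PCIe into the card's own VRAM.
print("Unified buffer of \(buffer.length) bytes – no copy to VRAM required")
```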

The performance was spectacular – the benchmarks that PC techies love to run, such as Cinebench, just breezed through on these new chips, and we haven’t even gotten to the best part: Rosetta 2. Apple knew that if it wanted to move successfully from Intel to its own silicon it would need to offer a stunning translation layer, just as it had with the original Rosetta when moving from PowerPC to Intel. But here’s what people simply could not believe: x86 software ran faster under translation on the M1 than it did natively on a real Mac sporting Intel’s very best chip.
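Rosetta 2 itself is a black box, but you can at least see when it is at work: Apple documents a sysctl key that tells a process whether it is running natively or being translated. Here is a short sketch of that check – it shows how to detect translation, not how the translation itself works.

```swift
import Darwin

// Ask the kernel whether the current process is running natively on Apple
// silicon or being translated by Rosetta 2. Apple documents the
// "sysctl.proc_translated" key for exactly this purpose.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    // Returns -1 on systems that have no translation layer at all.
    let result = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
    return result == 0 && translated == 1
}

print(isRunningUnderRosetta() ? "x86_64 binary translated by Rosetta 2"
                              : "Running natively (or no translation layer present)")
```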

I do not doubt that some serious software development wizardry is at work here, don’t get me wrong – I’m a developer myself, and I know what a difficult task converting code from one instruction set to another in real time can be. Just imagine the number of apps and games you could spend all year testing for compatibility if your translation layer weren’t near-perfect – it doesn’t bear thinking about!

In truth though, I believe a huge part of Apple’s gains here come from that 5nm node – a smaller node does more work whilst generating the same amount of heat… only we are talking about two very different heat curves here. The whole reason Apple got fed up waiting for Intel is that its Macs were constantly hamstrung by thermal throttling: the CPU would hit 100% load and then have to slow itself down to protect the chip (and the rest of the computer). Think about this – we’ve gone from 14nm to 5nm in one fell swoop. The engineers had a tonne of headroom to work with, and they did a fantastic job with it.
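If you want a feel for why a node shrink buys so much headroom, the textbook dynamic-power relation P ≈ C·V²·f tells most of the story. The numbers in this little sketch are invented for illustration – they are not measurements from any real Intel or Apple part – but the shape of the result is the point.

```swift
// Back-of-the-envelope sketch of why a node shrink buys thermal headroom.
// Uses the textbook dynamic-power relation P ≈ C · V² · f with purely
// illustrative numbers -- these are NOT measured values for any real chip.
func dynamicPower(capacitance c: Double, voltage v: Double, frequencyGHz f: Double) -> Double {
    return c * v * v * f
}

// Hypothetical "older node" core: higher switched capacitance and voltage.
let oldNode = dynamicPower(capacitance: 1.0, voltage: 1.00, frequencyGHz: 3.0)

// Hypothetical "smaller node" core: lower capacitance and voltage at the
// same clock, so it dissipates far less heat for the same work...
let newNodeSameClock = dynamicPower(capacitance: 0.6, voltage: 0.85, frequencyGHz: 3.0)

// ...which leaves room to raise the clock (or add cores) before hitting the
// thermal ceiling that used to trigger throttling.
print(String(format: "old: %.2f  new at same clock: %.2f  (%.0f%% of the old power)",
             oldNode, newNodeSameClock, 100 * newNodeSameClock / oldNode))
```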

The M1 Mac can complete the Cinebench R23 test roughly 33% faster than the last Intel 16-inch MacBook Pro, and the story repeats in Geekbench, Speedometer, and Blender. But there are a few tasks where Intel still reigns supreme – Blackmagic RAW and GFXBench 5.0 being the two culprits I found on benchmark lists. I suspect Apple’s engineers are already working on their Rosetta 2 code to ensure a clean sweep!

Worries on the PC/x86 Side

“Pah!” said some on the PC side. “We’ve had 7nm in our desktops for years, and 5nm since last year!” – this would be the AMD crowd talking, of course, whose amazing comeback with the Ryzen and Threadripper series of processors has been nothing short of stunning – no matter which team you support. The trouble is, the other features Apple has built into the M1 aren’t just for fun – they are legitimately useful tools.

If you are editing video every day, then having a powerful GPU on the same chip that controls the computer – with direct access to the data it needs, no bus frequencies or MegaTransfers/sec to worry about, it’s just THERE – is going to increase your productivity by a considerable margin. You can’t simply patch the x86 architecture to get around these issues. AMD has been leading the charge, first by putting the memory controller on the CPU die and then by making sure dual-channel configurations actually behave like two independent banks of memory – it’s amazing the lengths computer companies will go to in order to pull the wool over your eyes, but not AMD.
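For a sense of scale, here is a rough back-of-the-envelope bandwidth comparison. The DDR4 figures below are typical laptop values rather than any specific Dell or Lenovo configuration, and the M1 figure is the commonly quoted one – treat both as illustrative.

```swift
// Quick sketch of where memory-bandwidth figures come from:
// bandwidth = transfers per second × bytes per transfer × channels.
// The DDR4 numbers are typical laptop values, used purely for illustration.
func bandwidthGBps(megaTransfersPerSec mt: Double, busWidthBits: Double, channels: Double) -> Double {
    return mt * 1_000_000 * (busWidthBits / 8) * channels / 1_000_000_000
}

// A common dual-channel DDR4-3200 laptop configuration: about 51 GB/s,
// and that pool serves the CPU only -- the discrete GPU has its own VRAM.
let ddr4 = bandwidthGBps(megaTransfersPerSec: 3200, busWidthBits: 64, channels: 2)

// The M1's unified memory is commonly quoted at roughly 68 GB/s, and that
// single pool feeds the CPU, GPU, and Neural Engine with no copy over PCIe.
print(String(format: "Dual-channel DDR4-3200: ~%.0f GB/s, CPU-side only", ddr4))
```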

But the truth is, even the best-specced Dell available today cannot hold a candle to the M1 in terms of photo editing, video editing, file encoding, programming, or CAD.

And Finally…

Apple has spent month after month demonstrating upgraded versions of its M1 – most recently with the Mac Studio, boasting an M1 Ultra: a 20-core CPU, a 64-core GPU, a 32-core Neural Engine, and unified memory configurable up to 128GB. Apple even goes as far as to state that this setup will transcode video to ProRes 5.6x faster than the two-year-old $40,000 Mac Pro with an Afterburner card. There’s no getting around it – none of us wants to see our ability to upgrade memory disappear, but we are going to have to get used to buying the most we could ever possibly need at purchase time, and soon – the architecture is just that much better.

And then… do you remember that moment when Lisa Su came on stage and announced Threadripper to everyone’s surprise? AMD was already wiping the floor with Intel in the PC marketplace, so why release yet another chip? Answer: because they could. And people bought them. Lots of them. Who wouldn’t want a 16-, 32-, or 64-core workstation with more PCIe lanes than you can find a use for?

Well, Tim Cook decided to do a Lisa Su and announce that the M1 was to have a very short shelf life, introducing the M2. This was only the basic version – an 8-core CPU with 10-core graphics. These cores aren’t designed like the ones in our current machines – they were built to work on tasks together, rather than just occupying the same die and expecting developers to work the rest out. Then you have the high-efficiency cores, which use roughly a tenth of the power for “weedy” tasks such as web browsing or writing documents – that’s how the battery lasts an astonishing 20 hours of screen-on time.
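Developers don’t target the efficiency cores directly, by the way – on macOS you tag work with a quality-of-service class and the scheduler decides where it lands. A minimal sketch (the example tasks are made up for illustration):

```swift
import Dispatch

// On Apple silicon the scheduler steers low-priority work toward the
// high-efficiency cores. Apps never pick a core directly; they declare a
// quality-of-service class and the kernel decides the placement.
let group = DispatchGroup()

// Background QoS (housekeeping, syncing, indexing) is the kind of "weedy"
// work that typically lands on the efficiency cores.
DispatchQueue.global(qos: .background).async(group: group) {
    print("low-priority task: a candidate for the E-cores")
}

// User-interactive QoS is prioritised for the performance cores.
DispatchQueue.global(qos: .userInteractive).async(group: group) {
    print("latency-sensitive task: a candidate for the P-cores")
}

group.wait()
```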

The M2 packs 20 billion transistors – around 25% more than the M1 – and 100GB/s of memory bandwidth, roughly 50% more. Apple pegs it at up to 1.4x the speed of the launch M1 in some workloads, the GPU is 35% more powerful, and the performance cores even receive a clock-speed boost to 3.49GHz. The Metal benchmark showed the greatest improvement – 30,627 versus 21,001.
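Worth noting: run the arithmetic on those Metal scores and the uplift works out to roughly 46%, comfortably ahead of the headline 35% GPU claim.

```swift
// The Metal scores quoted above, and the uplift they imply.
let m1Metal = 21_001.0
let m2Metal = 30_627.0
print(String(format: "M2 vs M1 (Metal): %.1f%% faster", (m2Metal / m1Metal - 1) * 100))
// Prints roughly "M2 vs M1 (Metal): 45.8% faster"
```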

In short, this was just Apple showing off – we all know that Pro, Max, and Ultra variants of this chip will be coming in short order, and when the Apple Silicon-based Mac Pro gets here, you had better believe it: we could be about to see the biggest revolution in computer design, technology, and architecture since the launch of x86.