Apple Rumored to Make 3nm Four-Die CPUs — Whatever The Heck That Means!


Just kidding, I do know what it means — probably

Mac mini M1 logic board compared with the A13 SoC of an iPhone 11. By Sonic8400 — Own work, CC BY-SA 4.0

A number of tech news outlets have reported that Apple is working on 3nm chips for 2022 that may use as many as four dies, which could mean 40 CPU cores. As Intel isn’t planning 3nm chips until later, this is a coup for Apple.

That's not anything most of us would ever need, is it? Still, what does all of this mean for ordinary folks?


The "3nm" refers to the density of the transistors on the chip: the lower the number, the more transistors fit in the same space. Transistors make the hardware's decisions: this circuit has high voltage, that circuit doesn't, so I'll let current flow to that other place. Arranging logic like that into more complicated circuits lets the computer do math and much more. More transistors, more power.
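To make that concrete, here's a toy sketch (my own illustration, nothing Apple-specific) of how a few simple transistor-style logic gates combine into an adder, the kind of building block chips use to do math:

```python
# Toy model: each "gate" is a tiny voltage decision, modeled here as 0 or 1.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """Add three one-bit inputs, producing a sum bit and a carry bit."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_4bit(x, y):
    """Chain four full adders together to add two 4-bit numbers."""
    result, carry = 0, 0
    for bit in range(4):
        a, b = (x >> bit) & 1, (y >> bit) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << bit
    return result

print(add_4bit(5, 6))  # 11
```

A real chip does this with billions of physical transistors instead of Python functions, but the principle is the same: more gates, more math per tick.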


A die is what puts all those transistors together. A die might have one CPU, but nowadays it probably has many. You'll see that expressed as "cores": for example, 8 high-performance cores plus 4 energy-efficient ones. The high-performance cores are for things like processing videos and other large data sets; the energy-efficient cores are for you and me pecking at our keyboards and doing other stuff that would bore the high-performance cores to sleep (literally).

Interestingly, four dies is a regression in some sense. If you Google for “CPU dies”, you may find statements like “processor packages can be on one die or two dies” and they may go on to explain that older systems might have had more.

They had more because manufacturers wanted to offer more power, but chip manufacturing technology couldn't fit as many cores as they wanted onto a single die.

But the problem with multiple dies is coherency, particularly in caches. A cache is local high-speed storage, a place to hold data that the CPU will probably need soon. If the cache is on the same die as the CPU cores, they can access it quickly and easily. But if a core on the second die needs something that is cached on the first die, it takes longer to go get it. Lower nanometers mean more cores per die and less need for multiple dies, hence better performance.
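Here's a deliberately simplified toy model of that cost (my own invention for illustration; the cycle counts and the design are made up, not Apple's): two dies, each with its own cache, where a local hit is cheap and a cross-die fetch pays a latency penalty.

```python
# Toy model: a local cache hit is cheap, a cross-die fetch is expensive.
# These cycle counts are invented for illustration only.
LOCAL_HIT_CYCLES = 4
REMOTE_FETCH_CYCLES = 40

class Die:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # address -> value

def read(core_die, other_die, address):
    """Return (value, cost_in_cycles) for a read by a core on core_die."""
    if address in core_die.cache:
        return core_die.cache[address], LOCAL_HIT_CYCLES
    if address in other_die.cache:
        # Cross-die trip: copy the cache line over and pay the penalty.
        value = other_die.cache[address]
        core_die.cache[address] = value
        return value, REMOTE_FETCH_CYCLES
    raise KeyError("not cached anywhere in this toy model")

die0, die1 = Die("die0"), Die("die1")
die1.cache[0x1000] = 42               # the data lives in die1's cache

_, cost = read(die0, die1, 0x1000)    # first read crosses dies: expensive
print(cost)                           # 40
_, cost = read(die0, die1, 0x1000)    # now cached locally: cheap
print(cost)                           # 4
```

Real coherency protocols also have to worry about one die *writing* data another die has cached, which is far messier than this read-only sketch.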

If you must use multiple dies, these issues can be helped by smarter design and smarter operating systems. You might put all the efficiency cores on one die, as they are less likely to need anything from the high-performance cores. At the operating system level, you might try to schedule a program's threads (the individually schedulable parts of its work) on one die's cores whenever possible. That would give the program better performance than letting it run on any available core.
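That kind of placement boils down to CPU affinity. Here's a minimal Linux sketch using Python's `os.sched_setaffinity` (macOS handles this differently, and mostly automatically); the idea that cores 0–3 share a die is my assumption for illustration, since real systems expose topology elsewhere:

```python
import os

# Linux-specific sketch: pin the current process to a chosen set of cores.
# On a hypothetical multi-die part, cores 0-3 might share one die; keeping
# a task there avoids cross-die cache traffic. (The core numbering is an
# assumption; real topology lives under /sys/devices/system/cpu.)
available = os.sched_getaffinity(0)          # CPUs we may currently run on
one_die = {cpu for cpu in available if cpu < 4} or available

os.sched_setaffinity(0, one_die)             # restrict to that subset
print(os.sched_getaffinity(0) == one_die)    # True
```

A scheduler that is aware of the die layout can do this automatically, which is presumably what Apple's XNU kernel does for its performance and efficiency clusters.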

You can be quite sure that Apple doesn't need me to tell them that. The only reason I know even this much about it is that I had to pass a Sun certification test back in the mid-nineties. The test was heavily loaded with questions about Solaris kernel internals, cache coherency, and similar esoteric knowledge. It was so loaded that soon after I had passed the exam, they changed it to remove the most difficult parts. That ticked me off because I had spent weeks studying!

I’ve forgotten almost all of that by now, but the “four dies” bit triggered some faint memories. Faint enough that I needed to do a bit of googling to refresh my tired old brain and that made me want to write this up for other folks who either never knew or have forgotten as I have.
