Stupid computer question
-
Think of it as additional processors that can do work in parallel.
They don't automatically speed things up, though. Things get faster by adding cores if you run different applications in parallel, or if you have applications that have been programmed to make use of multiple cores (which isn't an easy thing to do). But if you run only a single CPU-intensive application and it hasn't been programmed to use multiple cores (and sometimes this isn't possible, not even in theory), then additional cores won't improve anything.
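To make that concrete, here is a minimal Python sketch (the `burn` function and the workload sizes are made up for illustration): the plain loop uses one core no matter how many the machine has, while the process pool spreads the same jobs across all of them.

```python
# Minimal sketch: a CPU-bound task run on one core vs. spread across several.
import time
from multiprocessing import Pool


def burn(n):
    """Deliberately CPU-bound busywork."""
    total = 0
    for i in range(n):
        total += i * i
    return total


if __name__ == "__main__":
    jobs = [5_000_000] * 8

    start = time.perf_counter()
    serial = [burn(n) for n in jobs]  # one core, one job at a time
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with Pool() as pool:  # one worker process per core, by default
        parallel = pool.map(burn, jobs)
    print(f"parallel: {time.perf_counter() - start:.2f}s")

    assert serial == parallel
```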
Modern graphics cards have even more cores than CPUs do, but these cores are a rather different matter. You cannot run independent programs on them the way you can on CPU cores. Rather, essentially every core has to execute exactly the same program in lock-step, but each of them works on a different region of memory (e.g., different regions of your screen - but these days graphics cards are also quite popular for general-purpose computing).
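A rough way to picture that model in ordinary Python (this is an illustration of the idea, not real GPU code):

```python
# Pure-Python illustration of the GPU execution model: every "core" runs
# the same kernel, distinguished only by its index, and each index maps
# to a different slice of memory.


def kernel(idx, inp, out):
    """The one program every core executes; only `idx` differs per core."""
    out[idx] = inp[idx] * 2.0  # e.g. brighten one pixel


inp = [0.1, 0.2, 0.3, 0.4]
out = [0.0] * len(inp)

# On a GPU these iterations would all run simultaneously, in lock-step.
for core_id in range(len(inp)):
    kernel(core_id, inp, out)

print(out)  # [0.2, 0.4, 0.6, 0.8]
```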
-
Think of # of cores like # of surgeons in an OR.
In theory, more surgeons can work on multiple parts of the same patient and/or on multiple patients simultaneously, allowing the OR to clear more complex cases and/or more patients more quickly.
But in practice there are limits: when the surgeons share the same gas passer or have a limited number of specialized instruments, when one surgeon cannot perform a certain step until another surgeon has completed a different step, when different specialists are called in to do different steps, when the patient can only take so much at any point in time, etc.
The computing equivalents of all those limitations would be things like: a "shared resource," where only one core can use a particular I/O port or a particular memory region at a time; necessary serialization, when some computations depend on the results of other computations; different types of cores optimized for different things (e.g., "general purpose core" vs. "graphics core" vs. "neural engine accelerator," or "energy-efficient core" vs. "high-performance core"); and overall chip limits, such as the absolute total thermal load the chip cannot exceed without shutting down or melting down.
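As a sketch of the first of those, the shared-resource case (the timings and the "device" are made up): four worker processes can compute in parallel, but a lock around the shared device forces them to take turns for that part.

```python
# Sketch of the "shared resource" limit: four workers, but a lock around
# the shared device means they take turns, so extra cores buy little.
import time
from multiprocessing import Lock, Process


def worker(device_lock):
    # Parallel part: independent computation, scales with core count.
    sum(x * x for x in range(2_000_000))
    # Serialized part: only one process may touch the shared "device".
    with device_lock:
        time.sleep(0.5)  # stand-in for exclusive I/O on the shared resource


if __name__ == "__main__":
    lock = Lock()
    procs = [Process(target=worker, args=(lock,)) for _ in range(4)]
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # However many cores you have, the locked section alone costs 4 * 0.5s.
    print(f"elapsed: {time.perf_counter() - start:.2f}s")
```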
-
@mik said in Stupid computer question:
Very good responses.
Agreed.
Each core is in essence a CPU that shares cache memory, the memory controller, and other I/O to the outside world. A 2-core processor is not 2x as fast as a single core. The boost is usually a percentage of increased performance that depends on various prerequisites being met.
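One common way to put a number on that ceiling is Amdahl's law (the 95% figure below is a made-up illustration): if only a fraction of the work can run in parallel, the serial remainder limits the speedup no matter how many cores you add.

```python
# Amdahl's law: best-case speedup with n cores when a fraction p of the
# work can run in parallel and the rest (1 - p) stays serial.


def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)


# Even with 95% of the work parallelizable, 2 cores are well short of 2x:
for cores in (1, 2, 4, 8, 64):
    print(cores, round(amdahl_speedup(0.95, cores), 2))
# 1 -> 1.0, 2 -> 1.9, 4 -> 3.48, 8 -> 5.93, 64 -> 15.42
```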
First, the OS and the application need to take advantage of the multiple cores/threads. Not all applications do this, or they do it in various ways depending on how many things they need to do at once.
Games and audio processing demand a fast single-core thread rather than more cores beyond about 8. Audio processing does take advantage of more cores, but not as much as 3D rendering, for example.
3D animation and rendering, video processing, and scientific applications such as weather simulation produce results much faster, and the more cores/threads available, the better.
Modern OSes are fully multi-threaded and, as such, perform better with more cores, since system-related processes can run much more efficiently on multi-core systems.
Virtual machines (Hyper-V, etc.): the more cores the merrier!
For your home web browsing/shopping/video streaming, etc., 4 cores is plenty.
-
Just got around to creating a couple of videos using iMovie.
From monitoring the numbers reported by Activity Monitor (basically the graphical version of Unix's "top" utility), it does not look like iMovie does much parallel processing. The M1 chip supposedly has eight general computing cores plus eight graphics cores, but when transcoding video for export it looked like no more than two general computing cores were used and no graphics cores were used, which is somewhat disappointing, seeing that the encoding of prerecorded video is supposedly the sort of computing task that is more easily parallelized.
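For anyone who wants to watch the same thing from a script, here is a small sketch of sampling per-core load, assuming the third-party psutil package (roughly the numbers Activity Monitor and top report):

```python
# Rough equivalent of watching per-core load in Activity Monitor / top.
# Assumes the third-party `psutil` package (pip install psutil).
import psutil

# One reading per core, averaged over a 1-second sample window.
per_core = psutil.cpu_percent(interval=1, percpu=True)
for core, pct in enumerate(per_core):
    print(f"core {core}: {pct:5.1f}%")
```
-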
@klaus said in Stupid computer question:
They don't automatically speed things up, though. Things get faster by adding cores if you run different applications in parallel, or if you have applications that have been programmed to make use of multiple cores (which isn't an easy thing to do).
Unless parallel operations are built into the language. Then it's pretty easy.
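For example, here is a sketch using Python's standard library as a stand-in (languages with parallel constructs built in make this even more direct; the `work` function is made up): turning a sequential map into a parallel one is a one-line change.

```python
# Sketch: when the parallel construct is provided for you, using several
# cores is a one-line change. `work` is a made-up CPU-bound function.
from concurrent.futures import ProcessPoolExecutor


def work(n):
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    jobs = [1_000_000] * 8
    results = list(map(work, jobs))  # sequential
    with ProcessPoolExecutor() as pool:
        results_par = list(pool.map(work, jobs))  # parallel, same shape
    assert results == results_par
```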
-
Cores are also an easy concept to market. There are lots of things in computer architecture that contribute to performance: clock speed (GHz), cache, pipeline depth, memory I/O, etc.
Then there's the matter of how well optimized the software is for a particular type of architecture.
But cores are discrete and easy to understand. And to be fair, they are correlated with performance for many application types.
"This one has two little processors in it... this one has six... this one eight.."
-
@horace said in Stupid computer question:
@klaus said in Stupid computer question:
They don't automatically speed things up, though. Things get faster by adding cores if you run different applications in parallel, or if you have applications that have been programmed to make use of multiple cores (which isn't an easy thing to do).
Unless parallel operations are built into the language. Then it's pretty easy.
Try to program a parallel sorting procedure that is faster than a good sequential algorithm. Not easy, regardless of language support.
Or here’s another one that is most likely impossible: write a simulator for a single core that uses multiple cores, such that doubling the number of cores would approximately double the performance of the simulation (it can be 100x slower on 8 cores, say; that’s not the point, only the speed-up matters). If you can do that, I promise you’ll win every prize computer science has to offer and likely become a billionaire. In CS circles, this is known as the “NC = P” problem (not to be confused with the better-known but completely different “P = NP” problem).
There’s much more to parallel programming than “parallel for loops”.
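To make the sorting example concrete, here is a naive attempt as a sketch (not a tuned implementation): split the list across worker processes, sort the chunks in parallel, then merge. The copying between processes and the serial merge step frequently eat the gains, so on many machines this will not beat the built-in sorted().

```python
# Naive parallel sort sketch: sort chunks in worker processes, merge at
# the end. The inter-process copying and the serial merge frequently eat
# the gains, which is the point being made above.
import heapq
import random
import time
from multiprocessing import Pool


def parallel_sort(data, workers=4):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i : i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        sorted_chunks = pool.map(sorted, chunks)  # parallel chunk sorts
    return list(heapq.merge(*sorted_chunks))  # serial merge step


if __name__ == "__main__":
    data = [random.random() for _ in range(2_000_000)]

    start = time.perf_counter()
    a = sorted(data)
    print(f"sorted():      {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    b = parallel_sort(data)
    print(f"parallel sort: {time.perf_counter() - start:.2f}s")

    assert a == b
```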
-
@klaus said in Stupid computer question:
@horace said in Stupid computer question:
@klaus said in Stupid computer question:
They don't automatically speed things up, though. Things get faster by adding cores if you run different applications in parallel, or if you have applications that have been programmed to make use of multiple cores (which isn't an easy thing to do).
Unless parallel operations are built into the language. Then it's pretty easy.
Try to program a parallel sorting procedure that is faster than a good sequential algorithm. Not easy, regardless of language support.
Or here’s another one that is most likely impossible: write a simulator for a single core that uses multiple cores, such that doubling the number of cores would approximately double the performance of the simulation (it can be 100x slower on 8 cores, say; that’s not the point, only the speed-up matters). If you can do that, I promise you’ll win every prize computer science has to offer and likely become a billionaire. In CS circles, this is known as the “NC = P” problem (not to be confused with the better-known but completely different “P = NP” problem).
There’s much more to parallel programming than “parallel for loops”.
The claim I was responding to was that using multiple cores wasn’t easy. I don’t disagree with your new claim that some problems are difficult to parallelize.