There seems to be a widespread idea that more threads imply more context switching, but I'm not really sold on that idea -- at least it isn't immediately obvious to me why that would be the case.
As I understand it, each CPU is at any given moment assigned a set of threads to run -- its run-queue. In the modern Linux kernel (CFS), each run-queue is actually a red-black tree, so enqueueing or dequeueing a task imposes tree operations on the order of O(lg n). From this perspective, it seems that if the system has lots of threads running around, on average the n in that O(lg n) will also increase -- but that doesn't increase the context-switch count, only the cost of each context switch.
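To put rough numbers on that O(lg n) term, here is a quick sketch (assuming, as an illustration, that each enqueue touches on the order of log2(n) tree nodes, where n is the number of runnable tasks on that CPU's queue):

```python
import math

# Hypothetical per-CPU run-queue sizes (number of runnable tasks).
for n in (8, 64, 1024):
    # A red-black tree insertion touches O(log n) nodes, so the
    # per-switch bookkeeping grows only logarithmically with n.
    print(f"{n:5d} runnable -> ~{math.log2(n):.1f} tree levels per enqueue")
```

Going from 8 to 1024 runnable tasks only roughly triples the per-switch tree work, which is why this feels like a second-order cost rather than more switching.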
From every other perspective, I don't see why having more or fewer threads would increase context switching. It may increase memory usage (at least one user-space and one kernel-space stack per thread, plus perhaps thread-local or at least CPU-local memory pools), but not the number of context switches.
Let's imagine I have a program with an embarrassingly parallel workload that runs for hours, with plenty of RAM to go around. What would be the difference between having, say, 8 entirely CPU-bound threads (on an 8-core machine) and 64? The only reason I could see for a difference would be if the Linux scheduler assigned smaller time slices (down to some floor, of course) as the number of threads in its run-queue increases.
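One way to state that hypothesis concretely: if the scheduler tried to run every runnable task within a fixed target latency window, the per-task slice would shrink as threads are added, down to a minimum granularity. A sketch of that idea, with illustrative parameter values (the names `target_latency_ms` and `min_granularity_ms` and their values are my assumptions, not actual kernel tunables read from this machine):

```python
def timeslice_ms(nr_running, target_latency_ms=6.0, min_granularity_ms=0.75):
    """Hypothesized CFS-style slice: split a fixed latency target among
    the runnable tasks on one CPU, but never go below a minimum slice."""
    return max(target_latency_ms / nr_running, min_granularity_ms)

# 8 CPU-bound threads on 8 cores -> ~1 runnable task per CPU queue;
# 64 threads on 8 cores -> ~8 runnable tasks per CPU queue.
for per_cpu in (1, 8):
    print(f"{per_cpu} runnable per CPU -> {timeslice_ms(per_cpu):.2f} ms slice")
```

Under this model, 8 threads on 8 cores would give each thread the full latency window, while 64 threads would shrink each slice toward the floor -- i.e., more context switches per second, which is exactly the effect I'm asking whether the real scheduler has.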
What am I missing here? Thanks!