Processor Affinity - Part 1
A thread of execution will typically run until it has used up its quantum (aka time slice), at which point it joins the back of the run queue to wait to be re-scheduled when a processor core next becomes available. While running, the thread will have accumulated a significant amount of state in the processor, including instructions and data in its caches. If the thread is re-scheduled onto the same core as last time, it can benefit from all that accumulated state. A thread may equally fail to run to the end of its quantum because it has been pre-empted, or has blocked on IO or a lock; when it is ready to run again, the same reasoning holds.
There are numerous techniques available for pinning threads to a particular core. In this article I'll illustrate the use of the taskset command on two threads exchanging IP multicast messages via a dummy interface. I've chosen this as the first example because, in a low-latency environment, multicast is the preferred IP protocol. For simplicity, I've also chosen not to involve the physical network while introducing the concepts. In the next article I'll expand on this example and the issues involving a real network.
1. Create the dummy interface
$ su -
# modprobe dummy
# ifconfig dummy0 172.16.1.1 netmask 255.255.255.0
# ifconfig dummy0 multicast
2. Get the Java files (Sender and Receiver) and compile them
$ javac *.java
3. Run the tests without CPU pinning
$ java MultiCastReceiver 224.0.0.1 dummy0
$ java MultiCastSender 224.0.0.1 dummy0 20000000
4. Run the tests with CPU pinning
$ taskset -c 2 java MultiCastReceiver 224.0.0.1 dummy0
$ taskset -c 4 java MultiCastSender 224.0.0.1 dummy0 20000000
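The article links to the Sender and Receiver sources rather than listing them. As a rough sketch of what such a pair might look like (the class name, port, payload size, and reporting format here are my assumptions, not the original code):

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.MulticastSocket;
import java.net.NetworkInterface;

// Hedged sketch of a multicast sender/receiver pair in the spirit of the
// MultiCastSender/MultiCastReceiver tests; port and payload size are assumed.
public class MultiCastSketch
{
    private static final int PORT = 4446;       // assumed port
    private static final int PAYLOAD_SIZE = 8;  // assumed payload

    // Receiver: join the group on the named interface and print, once per
    // second, how many datagrams arrived in that second.
    public static void receive(final String group, final String ifaceName)
        throws Exception
    {
        final InetAddress groupAddress = InetAddress.getByName(group);
        final NetworkInterface iface = NetworkInterface.getByName(ifaceName);
        try (MulticastSocket socket = new MulticastSocket(PORT))
        {
            socket.joinGroup(new InetSocketAddress(groupAddress, PORT), iface);
            final DatagramPacket packet =
                new DatagramPacket(new byte[PAYLOAD_SIZE], PAYLOAD_SIZE);
            long count = 0;
            long nextReport = System.currentTimeMillis() + 1000;
            while (true)
            {
                socket.receive(packet);
                ++count;
                final long now = System.currentTimeMillis();
                if (now >= nextReport)
                {
                    System.out.println("received/sec: " + count);
                    count = 0;
                    nextReport = now + 1000;
                }
            }
        }
    }

    // Sender: blast the requested number of datagrams at the group via the
    // named interface (pass null to use the default interface).
    public static void send(final String group, final String ifaceName, final long messages)
        throws Exception
    {
        final InetAddress groupAddress = InetAddress.getByName(group);
        try (MulticastSocket socket = new MulticastSocket())
        {
            if (ifaceName != null)
            {
                socket.setNetworkInterface(NetworkInterface.getByName(ifaceName));
            }
            final byte[] payload = new byte[PAYLOAD_SIZE];
            for (long i = 0; i < messages; i++)
            {
                socket.send(new DatagramPacket(payload, PAYLOAD_SIZE, groupAddress, PORT));
            }
        }
    }

    public static void main(final String[] args) throws Exception
    {
        // e.g. java MultiCastSketch receive 224.0.0.1 dummy0
        //      java MultiCastSketch send 224.0.0.1 dummy0 20000000
        if ("receive".equals(args[0]))
        {
            receive(args[1], args[2]);
        }
        else
        {
            send(args[1], args[2], Long.parseLong(args[3]));
        }
    }
}
```

Note that taskset does nothing special to the JVM here: it simply constrains the CPU set the process (and all its threads) may be scheduled on before `main` is entered, which is why it can be applied to an unmodified Java program.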
The tests output, once per second, the number of messages they have managed to send and receive. A typical example run is charted in Figure 1 below.
The interesting thing I've observed is that the unpinned tests follow a step-function pattern of unpredictable throughput. Across many runs I've seen different patterns, but all share this step-function nature. The pinned tests, by contrast, deliver consistent throughput with no step pattern, and always the greatest throughput.
This test is not particularly CPU intensive, nor does it touch a physical network device, yet it shows how critical processor affinity is not just to high performance but also to predictable performance. In the next article of this series I'll introduce a network hop and the issues arising from interrupt handling.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)