Any assistance on this mystery would be greatly appreciated.
We have two servers that we use to run a single-threaded process, which means we can either let Windows Server handle the load distribution or set the affinity for the exe ourselves. This single process queries an Access database and outputs text files.
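For clarity, by "set the affinity" I mean pinning the exe to a single core, e.g. through Task Manager or programmatically. Here is a minimal Win32 sketch of the programmatic equivalent (the PID is hypothetical):

```c
/* Sketch: pin an already-running process to core 0, the programmatic
 * equivalent of Task Manager's "Set Affinity". The PID is hypothetical. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234;  /* hypothetical PID of the worker exe */
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (h == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    /* The affinity mask has one bit per logical processor; 0x1 = first core only. */
    if (!SetProcessAffinityMask(h, 0x1))
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
    CloseHandle(h);
    return 0;
}
```

The two machines: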
Server 1:
Dual AMD Opteron 2378
RAID 10, 15k rpm disks
Windows Server 2008 64-bit

Server 2:
Dual AMD Opteron 6174
RAID 10, Intel SSDs
Windows Server 2008 R2 64-bit
We found that on both servers, if we set the affinity on the main process, it runs faster than if we let Windows decide. The other processes associated with the application are left to the Windows scheduler. When I watch the CPU usage on these two machines, I see one core at ~100% and minimal load on the rest. But there is a key difference between the two computers: the remaining cores on the Opteron 2378 sit at almost 0%, while on the Opteron 6174 the other 23 cores spike to about 40%, and the load seems to alternate between cores. The Opteron 2378 runs about 23% faster.
Interestingly, if I boot the Opteron 6174 with only 2 cores enabled, I get a 17% increase in performance. But that's pretty wasteful of two processors.
By setting the affinity, I get a 30% boost on the Opteron 2378 and a 50% boost on the Opteron 6174.
Ideally, I would like Windows and the processors to run the application using all cores fully without setting an affinity. Unless someone can suggest how to do that, I will have to settle for running multiple instances of the application and setting the affinity of each instance to a unique core.
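If it comes to that, the fallback would be a small launcher along these lines: start one instance per core in a suspended state, set its affinity mask to a unique core, then resume it. This is only a sketch under my own assumptions; the worker path is hypothetical and error handling is abbreviated.

```c
/* Sketch: launch one worker instance per logical processor, each pinned
 * to its own core. The exe path is hypothetical. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);  /* dwNumberOfProcessors = logical processor count */

    for (DWORD core = 0; core < si.dwNumberOfProcessors; core++) {
        STARTUPINFOW sui = { sizeof(sui) };
        PROCESS_INFORMATION pi;
        wchar_t cmd[] = L"C:\\app\\worker.exe";  /* hypothetical path */

        /* Start suspended so the affinity is in place before the
         * process runs its first instruction. */
        if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
                            CREATE_SUSPENDED, NULL, NULL, &sui, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            continue;
        }

        /* One bit per logical processor: this instance runs on core N only
         * (a single mask like this covers up to 64 logical processors). */
        SetProcessAffinityMask(pi.hProcess, (DWORD_PTR)1 << core);
        ResumeThread(pi.hThread);

        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}
```

That would automate what I'm doing by hand now, but it still feels like a workaround rather than letting the scheduler do its job.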
The software installed is the same on both machines aside from drivers and the R2 edition of the OS. Here are some screenshots of the CPU usage.