Don't make the mistake of thinking that a CPU bottleneck means that the CPU is slowing the GPU down.
All that means is that the rate at which the CPU can calculate snapshots of the virtual world is lower than the rate at which the GPU can render those snapshots into images.
The CPU's workload is not affected by screen resolution, so the frame rate you see at a low resolution like 1024x768 (I note in passing that I'm old enough to still find calling that "low" odd - many moons ago I paid over $500 for a graphics card capable merely of displaying 1024x768 pixels in 24-bit color, with its huge 4MB frame buffer) is basically the maximum speed the CPU is capable of running that game at.
As resolution goes up, the GPU has more and more work to do per frame, and if the resolution climbs high enough (or you enable sufficiently demanding modes of anti-aliasing and anisotropic filtering), the GPU will no longer be able to keep up with the CPU.
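To make the shape of that concrete, here's a minimal sketch of the idea in Python. The CPU rate and GPU fill rate are invented numbers for illustration, not measurements from any real hardware, and real GPU cost scales with far more than raw pixel count:

```python
# Toy model: the effective frame rate is capped by the slower of the two
# processors. All constants below are made up for illustration.

CPU_FPS = 120               # hypothetical rate at which the CPU produces snapshots
GPU_PIXELS_PER_SEC = 150e6  # hypothetical GPU fill rate

for width, height in [(1024, 768), (1600, 1200), (2560, 1600)]:
    gpu_fps = GPU_PIXELS_PER_SEC / (width * height)  # GPU rate falls as pixels rise
    fps = min(CPU_FPS, gpu_fps)                      # the slower side sets the pace
    limiter = "CPU" if CPU_FPS < gpu_fps else "GPU"
    print(f"{width}x{height}: {fps:.0f} fps ({limiter}-bound)")
```

With these made-up numbers the game is CPU-bound at 1024x768 (the GPU could manage ~190 fps but only gets 120 snapshots to draw) and GPU-bound at the two higher resolutions, which is exactly the crossover described above.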
Things get complicated when you start offloading to the GPU work that's typically done by the CPU. There could easily be situations where the GPU has capacity to spare at low resolutions, so the game runs faster with offloading enabled, while at higher resolutions performance would be better with the offloaded work done on the CPU instead, leaving the GPU more time to render the additional pixels.
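The same toy model shows why, assuming (purely for illustration) a fixed chunk of work that costs 4 ms on the CPU or 2 ms on the GPU; none of these timings come from a real game:

```python
# Toy model of offloading: moving a chunk of work from the CPU to the GPU
# helps while the GPU has headroom, and hurts once the GPU is the bottleneck.
# All timings are invented for illustration (milliseconds per frame).

CPU_BASE_MS = 8.0   # CPU work per frame, excluding the offloadable chunk
TASK_CPU_MS = 4.0   # the chunk costs this much when done on the CPU...
TASK_GPU_MS = 2.0   # ...and this much when done on the GPU

for label, gpu_render_ms in [("low res", 3.0), ("high res", 14.0)]:
    # Frame time is set by whichever processor finishes last.
    on_cpu = max(CPU_BASE_MS + TASK_CPU_MS, gpu_render_ms)
    on_gpu = max(CPU_BASE_MS, gpu_render_ms + TASK_GPU_MS)
    print(f"{label}: task on CPU -> {1000/on_cpu:.0f} fps, "
          f"task on GPU -> {1000/on_gpu:.0f} fps")
```

At the low resolution the offload wins (125 fps vs. 83 fps in this sketch) because the GPU was idling anyway; at the high resolution it loses (62 fps vs. 71 fps) because every millisecond spent on the offloaded task is a millisecond stolen from rendering pixels.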
The results referred to aren't great for demonstrating all this, since both the aspect ratio and the AA settings change at the lowest resolutions.