Let's respond backwards.
Crossfire used to use the lowest common denominator for clock speed, many years ago. It no longer does that. Each card in a CF setup can run at different clock speeds.
Temperature is not directly the issue. TDP is the issue - that stands for Thermal Design Power. Something with a TDP of 200W requires a cooling solution capable of removing 200W of heat while keeping the chip below its maximum allowed temperature (let it get hot enough, and even simple radiation will dissipate 200W), to prevent it from overheating.
Maybe a bit more detail on the relationship between clock speed and TDP will help.
Before the 6900 series, ATI/AMD cards were given a maximum clock speed at which power draw was known never to exceed the TDP under any load. Any load other than Furmark, anyway - they were looking at real-world loads, not artificial space-heater benchmarks. That maximum clock would be well below what the cooling solution could handle under many load conditions, so you were basically stuck with a card operating well below its potential a good portion of the time.
Starting with the 6900 series, AMD ditched that concept entirely. Instead of picking a safe maximum clock speed, the card was given a safe TDP. If the load on the card exceeds that TDP, the clock speed is reduced to lower heat output. The result is that the GPU works at pretty close to its maximum potential in a given thermal design envelope.
And rather than make that TDP a rigid quantity, they added the ability to tune it upwards or downwards to accommodate greater or lesser cooling ability. If you adjust the power limit upwards, the card will be allowed to generate more heat before reducing its clock speed.
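The mechanism described above can be sketched as a simple control loop. To be clear, this is a toy model I'm writing for illustration, not AMD's actual firmware - the power formula, wattages, and step sizes are all made-up numbers, and the real hardware estimates power from activity counters rather than a formula like this:

```python
# Toy sketch of a PowerTune-style governor. All constants and the power
# model are illustrative assumptions, not real AMD values.

MAX_CLOCK_MHZ = 1100
MIN_CLOCK_MHZ = 500
STEP_MHZ = 50
TDP_WATTS = 250  # assumed board power budget

def estimated_power(clock_mhz, activity):
    """Toy power model: dynamic draw scales with clock and workload.
    'activity' is 0.0-1.0 - how hard the workload drives the chip."""
    idle_watts = 40  # assumed static/idle draw
    return idle_watts + activity * clock_mhz * 0.25  # arbitrary coefficient

def next_clock(clock_mhz, activity, power_limit_pct=0):
    """One governor tick: throttle if over budget, otherwise recover.
    power_limit_pct is the user-adjustable slider, e.g. +20 or -20."""
    budget = TDP_WATTS * (1 + power_limit_pct / 100)
    if estimated_power(clock_mhz, activity) > budget:
        return max(MIN_CLOCK_MHZ, clock_mhz - STEP_MHZ)
    return min(MAX_CLOCK_MHZ, clock_mhz + STEP_MHZ)
```

With these made-up numbers, a Furmark-like load (activity 1.0) at +0% settles around 800MHz, a typical game load stays at the full 1100MHz, and raising the limit to +20% lets the Furmark-like load sustain a higher clock - the same pattern as the behavior described in this post.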
As I said, it's about heat output, not actual temperature. At high temperature, the card will simply shut down to protect itself. Adjusting clock speed based on thermal output is a way to avoid thermal shutdown. Keep in mind that with insufficient cooling, you can go from 30C to 100C in a fraction of a second - too short a time for a clock-speed reduction to prevent it.
Armed with that understanding, it should be clear now that if your card drops at times to between 500MHz and 800MHz, it does so regardless of the initial clock speed, and regardless of current temperature. It's a function of thermal output. If you're confident that your cooling is up to it, increase the power limit. Start with +5%, and watch your temperatures under full load. They should go up, and the clock-speed throttling should lessen.
As for 2560x1600 with AA versus lower resolutions without it, that comes down to my earlier point that not all full-load situations are the same. The resources of the card may be fully occupied in different ways, with some generating more heat than others. The task of rendering a series of anti-aliased 2560x1600 frames can certainly generate less heat than rendering a series of aliased smaller frames more frequently. And it's the amount of heat generation that matters.
And finally, maybe a point of comparison may help. I have a pair of water-cooled 7970's at 1100/1450 each. The power limit is set at +20% for both cards.
Here's what I get when running the 1920x1080 15-minute benchmark in Furmark, stopped after almost five minutes to get the whole graph in. For the first 15 seconds or so, both cards are at 99%, and remain at 1100MHz. After that, the second card (a Sapphire 7970 OC) starts to downclock. I saw the number get down into the 800's. The load on the first card (a Gigabyte 7970 OC) goes down correspondingly, since it has less work to do to keep up with the lower-clocked secondary card, which remains at 99% the whole time.
And here's what happens when I do two short runs at 2560x1600, first without AA, then with 4xAA. Without AA, you can see the same pattern - starts out normally, then the clock speed starts fluctuating. With AA, the load on the cards is different, and the clock speed doesn't need to go down at all.
If I turn the power limit to -20%, the first GPU stays at 500MHz for the Furmark test, with the expected performance hit. Set to +0%, the primary GPU clocks down almost immediately to the 800's, with a ~16% decrease in performance.
So a higher power limit means higher performance, with the expectation that your cooling solution will be able to handle the extra heat output.
Curiously, the Sapphire card seems to behave exactly the same at +0% as at +20%, so I wonder if there isn't a BIOS bug in that card. But it behaves as expected at -20%, and Furmark isn't representative of any actual game. I never see a reduction in clock speed in normal games.