Not really. Burnout only happens in that time period with extreme overvolting, and those who do extreme overclocks generally replace their gear too frequently for it to burn out.
For overclocking with adequate cooling (if it keeps the part at reasonable temperatures at stock speed, it's adequate here) and no voltage increases (this last part is especially important), average part lifetime doesn't seem to be any worse than with no overclocking at all. It certainly wouldn't make sense for a significant difference to exist: you don't see old high-end processors dying all over the place just because they were clocked faster than slower parts from the same generation. (And yeah, there really aren't any other factors there. Speed binning is not terribly complex or elegant.)
For overclocking with sane voltage increases and adequate cooling (adequate being enough to keep up with the voltage increases here, nothing fancy needed), average part lifetime might decrease, but computer hardware is usually designed with at least a small margin of voltage tolerance-- in other words, feeding a CPU 105% of stock voltage is not likely to make it catch fire. Remember that many manufacturers have to raise the voltage for a processor as they ramp speeds; if the cores couldn't handle this, you would be hearing about it.
What about not-so-sane voltage increases? Well, I'm wary of increasing voltage for CPUs by more than 10-12%, mainly because of SNDS and STDS (Sudden Northwood/Thoroughbred Death Syndrome-- SNDS is more common, dunno why). If you see CPUs burning out randomly due to voltage increases, something is definitely not right, and I doubt that the surviving chips will get very close to the average lifetime of chips running at stock speeds. I would imagine that the same applies to other hardware, but SDRAM TSOPs and the other ICs likely to be overvolted seem more robust than CPUs, evidenced by the fact that many people feed their (2.5v) SDRAM upwards of 3.0v without experiencing Sudden DDR Death Syndrome or whatever you want to call it. Significant lifetime degradation for memory probably starts somewhere upwards of 2.8v or 2.9v, which is conveniently also where the point of diminishing returns is located.
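To put those rules of thumb side by side, here's a quick sketch (my own illustration, not anything official-- the stock voltages and the 12% ceiling are just the ballpark figures from above) showing the arithmetic: 3.0v on 2.5v SDRAM is a 20% overvolt, well past what I'd give a CPU.

```python
# Illustrative only: express the post's rules of thumb as percentages.
# Stock voltages and the ~10-12% CPU ceiling are ballpark figures, not specs.

def overvolt_pct(stock_v, actual_v):
    """Percent increase of actual voltage over stock voltage."""
    return (actual_v / stock_v - 1.0) * 100.0

CPU_CEILING = 12.0  # upper end of the ~10-12% comfort zone for CPUs

# CPU at 105% of stock (the "sane" case): +5%
cpu = overvolt_pct(1.50, 1.575)

# 2.5v DDR SDRAM fed 3.0v: +20%, far beyond the CPU ceiling
ddr = overvolt_pct(2.5, 3.0)

print(f"CPU: +{cpu:.1f}% ({'within' if cpu <= CPU_CEILING else 'beyond'} the CPU comfort zone)")
print(f"DDR: +{ddr:.1f}% ({'within' if ddr <= CPU_CEILING else 'beyond'} the CPU comfort zone)")
```

The point being that memory shrugging off a 20% overvolt while CPUs get twitchy past 10-12% really does suggest the memory ICs are the tougher parts.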
It's pretty hard to find solid data on this sort of thing for one simple reason: most of the people who overclock replace their hardware fairly frequently. A three-year upgrade cycle is considered sluggish in the hardware world, but it's still more than short enough to ensure that collecting data on the long-term effects of overvolting (because it's overvolting that's truly dangerous) is difficult. My 10-12% figure is a long way from being exact; I just use it because it "seems to work." The only thing I'm completely sure of is that overclocking with no voltage increase is safe, provided that your cooling setup is adequate.
DEC Pentium X2 5200+ w/ HyperCache (Ezra core)
Asus M7NCD-MAX3 (OPTi Vendetta 82C760)
6x Generic 32MB PC2700 RDRAM (50ns SIMMs)
2x nVidia Millennium X1800 Duo (SLI mode)
12x (daisy-chained) Quantum Medalist 180GXP (w/ separate SCSI-1 adapter)
"Oh, you started a Rube Goldberg machine. A Rube Goldberg machine... called JUSTICE." <a href='http://www.boomspeed.com/old_camper/amdforumsircchat.ht