"To be honest, it’s be hard to distinguish between two blindly-configured platforms—one Intel, one AMD—in most tasks. The exception is gaming, where AMD’s Trinity design really shines."
Considering that the cheapest mobile Ivy Bridge i7 costs OEMs $378, if AMD prices these chips to compete against Ivy Bridge i3s and i5s, this could be a win for them in performance per dollar and may even spark a price war. That's great news for consumers everywhere.
That's exactly what I was thinking as I read it. Sure, Intel kicks AMD's ass at processor tasks, BUT remember that these are laptops, and these are APUs, and the majority of what people with APUs will do is word processing, internet browsing, email, video watching, Facebooking, IMing, and video chatting. People will also use them for ultra-small-form-factor HTPCs, like they do with Llano. You can game on them too, and most of the games played on them will be either casual games or MMOs.
As far as pricing goes, a lot will depend on the OEMs, and despite the antitrust settlements they still favor Intel over AMD: even when they do offer AMD chips, they offer only the lower end, and they're able to price the Intel units lower and make it up in volume.
Going back to the benchmarks: notice that AMD sent their Trinity laptop with a 128 GB SSD, and the same drive was used in all the laptops to prevent bias. As we know, though, most laptop manufacturers will ship a 320 GB or 500 GB 5400 RPM HDD as stock, which, in tasks like media transcoding, file compression/encryption, or basically any other CPU-intensive workload, will act as a bottleneck and help equalize the field.
Something else of note: Piledriver's performance exceeds that of the Phenom-based cores in Llano, which is a pretty darn good improvement, and once it makes its way to the desktop I can see a 15-25% gain over Bulldozer.
I don't see a price war, but I do see the OEMs finally starting to realize that AMD has a competitive product, one that shouldn't take a direct "AMD based laptops" internet search to find. What it's going to take is AMD sending teams around to college campuses and showing AMD Trinity vs. Intel Ivy Bridge side-by-side comparisons in the student unions. Students make up a sizable chunk of yearly laptop sales, and Trinity's lower power consumption in typical student tasks, plus its superior gaming performance, will appeal to them most of all.
And yes, I did see that little blip on the power consumption graph during video playback. I think something else was at play there, such as the GPU not being put into the right C-state or TurboCore cranking up the clocks for some reason, but my guess would be the GPU, due to early (immature) drivers.
I do feel bad for the people who built Llano desktops, since they can't reuse their old boards with Trinity, but at least that's down to the Northbridge change and not, as with Intel, a desire to make more money.
Sure, the $300+ i7s kick AMD's ass, but notice that Tom's compared mobile Sandy Bridge i5s to the A10, since they're probably going to be priced about the same.
As you can see on the conclusion page, the performance difference favors Intel in productivity (16%) and in content creation/encoding (17%). In gaming, however, AMD dominates by 42%.
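Just as a toy illustration of how those category deltas can net out for a given user, here's a weighted geometric mean of the three ratios. The three percentages are the ones quoted from Tom's conclusion page; the usage weights are entirely made up for the sake of the example:

```python
# Hedged sketch: Intel-vs-AMD performance ratios from the quoted Tom's
# conclusion-page deltas, combined with hypothetical usage weights.
productivity = 1.16      # Intel 16% ahead
content = 1.17           # Intel 17% ahead
gaming = 1 / 1.42        # AMD 42% ahead, so Intel at roughly 0.70x

# Hypothetical mix for a student/casual user (not from the review):
weights = {"productivity": 0.4, "content": 0.2, "gaming": 0.4}

# Weighted geometric mean of the ratios; below 1.0 means AMD comes out ahead.
overall = (productivity ** weights["productivity"]
           * content ** weights["content"]
           * gaming ** weights["gaming"])
print(f"Intel vs. AMD under this mix: {overall:.2f}x")  # ≈ 0.95x
```

Under a gaming-heavy mix like this one, the A10 ends up slightly ahead overall despite losing both CPU-bound categories; shift the weights toward productivity and the result flips.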
Granted, Ivy Bridge will narrow that gap, but mostly in gaming. Actually, in Anandtech's Trinity review you can see the A10 trump an Ivy Bridge i7 based notebook. My guess is that's due to Trinity's added graphics horsepower and Intel's relative lack of graphics experience compared to AMD.
Regarding the 15-25% gain vs. Bulldozer: keep in mind that in Tom's review the A10 runs at 2.3 GHz base / 3.2 GHz max while the Llano runs at 1.5 GHz base / 2.4 GHz max, and the Piledriver cores are about 20% faster on average, so clock for clock I'd expect roughly a 10% average gain over Bulldozer. However, the mere fact that the A10 is clocked about 50% higher with CPU power consumption in mobile i5 territory, all on the same 32nm process, plus the fact that turbo is supposed to work better, is a great indicator that Piledriver will probably put Bulldozer to shame out of the box (thanks to the new turbo implementation and possibly higher stock clocks). I also hope it will allow greater overclocks with greater gains than Bulldozer while keeping power consumption and heat in check.
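A quick back-of-the-envelope split of that A10-vs-Llano speedup into clock and per-clock components, assuming performance scales linearly with frequency (a first-order simplification; the clocks and the ~20% figure are the ones quoted from Tom's review above):

```python
# Hedged sketch: decompose Trinity's average speedup over Llano into
# a clock component and an implied per-clock component.
llano_base = 1.5    # GHz, Llano base clock per Tom's review
a10_base = 2.3      # GHz, A10 base clock per Tom's review
avg_speedup = 1.20  # Piledriver cores ~20% faster on average

clock_ratio = a10_base / llano_base          # ≈ 1.53, a ~53% base-clock bump
per_clock_ratio = avg_speedup / clock_ratio  # ≈ 0.78 per clock vs. Llano's cores

print(f"clock ratio: {clock_ratio:.2f}")
print(f"implied per-clock ratio: {per_clock_ratio:.2f}")
```

In other words, under this simple model nearly all of the speedup comes from the higher clocks rather than per-clock throughput, which is exactly why the new turbo and the higher stock clocks matter so much for the desktop parts.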
Desktop Piledriver chips are probably not even being fabbed yet, so who knows how much last-minute tweaking AMD will do. One can only hope that when Trinity's successor comes out it will have second-gen GCN (as in the HD 8000 series) instead of first-gen (as in the HD 7000 series). Just imagine what those OpenCL tests would look like.
Originally posted by: black_zion Desktop Piledriver chips are probably not even being fabbed out yet, so who knows how much additional last minute tweaking AMD will do. One can only hope as well that when Trinity's successor comes up that it will have second gen GCN (as will be in the HD 8000 series) instead of first gen (as in the HD 7000 series). Just imagine what those OpenCL tests would look like
There's always room for hope, but AFAIK AMD has a track record of moving slowly (meaning: not moving as fast as Intel does, because they have 1/100000 the money and employees). I've learned that they make a serious move or two every year, tops. This is their first serious move this year (CPU division); their second is working hard with software developers on OpenCL applications.
I wouldn't be surprised if they put all their R&D effort into making some even more serious moves next year, even at the price of standing still for now.
I think I'll hold out for DirectX 12 before upgrading, which should hopefully be the HD 8000 series but could slip to the HD 9000 series; the HD 9000 series should also be on the next die shrink. My 5970 plays everything I want at 1920x1200 just fine, though I am looking forward to the day I can get rid of that noise machine...
I'd like a Corsair H series cooler as well, but I'd have to cut the fan controller leads for my rear fans if I do.
Yeah, I know, GPU noise sucks. Have you tried undervolting it a bit? I successfully undervolted my 4870 at stock clocks and temps dropped significantly, from over 90°C with a screaming fan to the mid-70s with way less fan noise.
My guess is that DirectX 12 will only arrive 2+ years from now in Windows 9, since right now Microsoft's main focus is getting their OS onto low-power machines. I believe there isn't even a single DirectX 11-only title yet, probably thanks to the aging consoles. The new consoles plus a new OS (giving people more reasons to upgrade) should be enough to push that tech forward.
Personally, I'd rather buy a refreshed design on a mature manufacturing process than a design that's just been moved to a new process, since a refresh usually brings significant optimizations in performance and perf/watt, plus better thermals. That's the same reason I'm looking forward to seeing how the desktop Piledriver cores fare compared to the Phenom II and Bulldozer; hopefully it'll be enough reason for me to go back to visiting this forum more often.
The HD 5970 is already two undervolted HD 5870s at 1.05 V, and the fan noise isn't too bad in most situations, especially now that it's summer and I drop my overclocks (summer in the South, evil humidity); that, and a custom profile in MSI Afterburner.
The thing about CPUs, though, is that they really hit a speed ceiling... three years ago, maybe more. Not in the sense that there's no more performance to be had, but that what we have is sufficient for gaming: at current graphics card speeds and detail levels, and with the crappy coding and direct console ports in many cases, a Phenom II or first-gen i7 isn't even fully loaded. What I'd love, and what AMD and Intel are basically working on, is mainly energy efficiency. Supercomputers can now achieve massive performance using GPUs (I believe one or two of the Top 10 supercomputers are just a few hundred CUDA cards), and the rest just use massive numbers of CPUs in parallel. With APUs gaining speed and performance, it won't be much longer before heavy CPU-intensive tasks can leverage the GPU for assistance via OpenCL as well, reducing the need for an ultrafast CPU even more...
One wonders how much longer before it's like most any sci-fi movie, with just one unified computer processor and a few specialized subprocessors to aid in specialized tasks (like the cryptography subprocessor VIA puts in theirs).
Yeah summer is a bi*ch, it's hot and humid in my country as well.
SemiAccurate published an interesting article on Trinity's perf/watt (link). The mere fact that AMD can put up a fight against Intel's latest-and-greatest at 32nm vs. 22nm is quite impressive (although to be fair, when the author quoted TechReport's power-consumption analysis, he failed to mention that the Ivy Bridge notebook had a larger display).
There's no doubt that traditional CPUs and GPUs will soon be phased out in some markets. You can already see how AMD's inferior CPU can match Ivy Bridge by leveraging OpenCL on its IGP, and that is no small feat.
My favorite game right now is Starcraft 2, and since the game is coded so poorly that it utilizes no more than two cores, single-threaded performance matters as much as GPU speed. However, my 3.8 GHz Phenom II is doing just fine even in 4v4 after disabling some CPU-intensive eye candy (which is barely noticeable), and my HD 4870 is by far the bigger bottleneck in gaming, so I'm with you on this one.