Topic Title: Using Disabled GPU for Floating Point
Topic Summary: Using a Trinity APU's GPU for floating point calculations
Created On: 05/23/2013 01:53 AM
 05/23/2013 01:53 AM
gunner916
Peon

Posts: 2
Joined: 05/23/2013

OK, I'm new to the forums and have an idea I came up with while playing Borderlands 2. I'd like to run it by all you mods and senior forum members to see if it is 1) feasible in any way and 2) an all-around good idea.

The details. Before I get into it too much, here's my system overview: an A10-4800k desktop APU with a Radeon HD 7790 (love it) as the primary GPU. This, obviously, disables the integrated 7660D on the APU.

The idea. NVIDIA uses their GPUs to calculate PhysX. Those of us on pure AMD setups get stuck running those heavy floating point calculations on the CPU. I was playing at an easy 50-60 FPS on ultra settings at 1080p when I hit an area heavy with projectiles, cloth, etc., and my water-cooled A10 @ 4.6 GHz and 2 GB 7790 dropped to around 22 FPS, maybe even less; lots of lag in any case. So I figured it was Google time, and from what I read, with PhysX set to high, the load was most likely so high that the CPU, which I'd presume the game alone takes at least 75% of, simply can't run the game and all those intense floating point physics calculations at the same time.

Since I already know that programs designed for it can run floating point calculations on the GPU, I figured we could do something similar (kind of) with the unused 7660D on the APU. This would 1) run without being hindered by CPU load (to an extent, since memory and socket bandwidth are shared, right?) and 2) I'd assume it could calculate more than the CPU's four cores even with the CPU unloaded (pure speculation on that one). Plus, it's guaranteed to be present since it's part of the processor, so there aren't millions of different hardware combinations to support if a driver needed to be written for it, which I assume would be the case. This would enable PhysX in games without heavy CPU overclocking and without an NVIDIA card (which is good for sales, right?).

So my big overall question: is this something that is possible to do, either through software or maybe driver modification? I know it seems far-fetched, but that additional power would make APUs just that much better, even next to a quad core. I have never worked with drivers; making or modifying them is total no-man's-land for me, and I'd have no clue where to start. I know OpenCL utilizes the GPU, but I know little about it and can't see virtual hardware being made with it, or the system using such software even if it could be made. Any input at this point is useful; I don't think it's crazy, and I find it a useful idea.

Thanks for reading, and sorry it was rambling (newbie madness, haha). If someone can point me to tutorials, e.g. for writing drivers, to make this possible, I'd more than happily attempt it, since I have programmed, and still do, in C++ and C#, so it won't be too foreign.
 05/23/2013 02:26 PM
Dr_Asik
Peon

Posts: 6
Joined: 04/01/2011

Sure, if an OpenCL implementation of PhysX existed and game developers took advantage of it, I don't see why the on-die GPU couldn't be put to use. The fact that it's not connected to a display or crossfired with the main GPU isn't an impediment as far as I'm aware.

As for whether anyone could somehow hack an existing game to run its PhysX code on the APU: that's extremely unlikely. You'd basically have to intercept the function calls and rewrite their implementation entirely, which would require godlike reverse-engineering skills, and it would break every time the developers release a patch, like most exe hacks.

 05/23/2013 04:44 PM
black_zion
80 Column Mind

Posts: 12458
Joined: 04/17/2008

PhysX is proprietary x87 code, and it is being replaced by OpenCL, DirectCompute, and Havok. Once the PS4 and Xbox One are released (both powered by AMD APUs, running conventional x86/x64 code with no PhysX at all), PhysX should disappear.

-------------------------
ASUS Sabertooth 990FX/Gen3 R2, FX-8350 w/ Corsair H60, 8 GiB G.SKILL RipjawsX DDR3-2133, XFX HD 7970 Ghz, 512GB Vertex 4, 256GB Vector, 240GB Agility 3, Creative X-Fi Titanium w/ Creative Gigaworks S750, SeaSonic X750, HP ZR2440w, Win 7 Ultimate x64
 05/23/2013 11:38 PM
gunner916
Peon

Posts: 2
Joined: 05/23/2013

This I wasn't sure of. Seeing more games come out with PhysX, I thought it would stay around as NVIDIA's way to compete, even though I'd think it useless in that case. My main goal wasn't modding PhysX games to use OpenCL, but creating a virtual FPU using OpenCL, so that PhysX software, or any program heavy in floating point calculations, would benefit from an available and currently unused resource. Windows manages which threads use which resources, and I believe a virtual FPU or co-processor would be listed in Windows, so if creating virtual hardware is possible, I'd assume that would be the best way to do this. Honestly, though, I'm not sure where to even start when it comes to making virtual hardware (like Elby CloneDrive) in C++ 2010. I appreciate your responses; I learned a little, and I'm glad to see this idea isn't completely nuts, lol.
 05/24/2013 01:50 AM
black_zion
80 Column Mind

Posts: 12458
Joined: 04/17/2008

GPUs already assist in computations (see also: GPGPU), and as time goes on more applications will start to take advantage of them. Process nodes will shrink, allowing APUs to become as powerful as traditional CPUs and eventually replace them, which will, in many cases, effectively turn the on-chip GPU into a co-processor for computationally intensive tasks.

 05/24/2013 08:24 AM
Vegan
Elite

Posts: 1226
Joined: 01/31/2010

I use NVIDIA, and not many new games are using PhysX anymore. Most now use Havok or the Unreal 3 engine.

idTech 5, as seen in Rage, is a newer effort, but I'm awaiting Doom 4 to see what the engine can really do.

The Frostbite 2 engine uses everything under the sun. It's very demanding on bandwidth, since the whole game map can be blown to bits in real time; BF3 is a good example.

Now, my current rig has an older CPU. If there were some extra CPU feature I would upgrade, but with dual graphics cards, an APU does not seem all that useful.

The APU is best suited for mobile applications; it's not much use for games on my box.
