OK, I'm new to the forums, and I have an idea I came up with while playing Borderlands 2 that I'd like to run by all you mods and senior forum members, to see if it's 1) feasible in any way and 2) an all-around good idea.

The details: before I get into it too much, here's my system overview. A10-5800K desktop APU with a Radeon HD 7790 (love it) as the primary GPU, which of course disables the integrated 7660D on the APU.

The idea: NVIDIA uses the GPU to calculate PhysX. Those of us on pure AMD setups get stuck running those heavy floating-point calculations on the CPU. I was playing at an easy 50-60 FPS on ultra settings at 1080p when I hit an area dense with projectiles, cloth, etc., and my A10 @ 4.6 GHz (water-cooled) with the 2 GB 7790 dropped to around 22 FPS, maybe even less; lots of lag either way. So I figured it was Google time, and everything I read said that with PhysX set to high, the load was most likely just too much: the CPU, which I'd presume the game itself already takes at least 75% of, can't run the game and all those intense floating-point physics calculations at the same time.

Since we already have the ability, in programs designed for it, to run floating-point calculations on the GPU, I figured we could do the same (kind of) with the unused 7660D on the APU. This would 1) run without being hindered by CPU load (to an extent, since memory is shared and so is bandwidth to the socket, right?), and 2) I'd assume it could calculate more than the CPU's four cores even with no other load (pure speculation on that one). Plus it's guaranteed to be present, since it's part of the processor, so it's not like millions of different hardware combinations would exist if a driver needed to be written for it, which I assume would be the case. This would enable PhysX in games without heavy CPU overclocking and without an NVIDIA card (which is good for sales, right?). So my big overall question: is this something that's possible to do, either in software or maybe through driver modification?

I know it seems far-fetched, but that additional power would make the APUs, even used as plain quad cores, just that much better. I've never worked with drivers, making them or modifying them, so that's total no-man's-land; I'd have no clue where to start. I know OpenCL utilizes the GPU, but I know little about it, and I can't see virtual hardware being made with it, or the system using that hardware even if such software could be made. Any input at this point is useful; I don't think it's crazy, and I find it a useful idea.

Thanks for looking through this, and sorry it was rambling (newbie madness, haha). If anyone can guide me to tutorials for things like driver development, I'd happily give it a try, since I program in C++ and C#, so it won't be too foreign.
Sure, if an OpenCL implementation of PhysX existed and game developers took advantage of it, I don't see why the on-die GPU couldn't be put to use. The fact that it's not connected to a display or crossfired with the main GPU isn't an impediment as far as I'm aware.
As for whether anyone could somehow hack an existing game to run its PhysX code on the APU, that's extremely unlikely. You'd basically have to intercept the function calls and rewrite their implementation entirely, which would require godlike reverse-engineering skills, and it would break every time the developers released a patch, like most EXE hacks.
PhysX is proprietary code built on x87 instructions, and it's being displaced by OpenCL, DirectCompute, and Havok. Once the PS4 and Xbox One are released (both powered by AMD APUs and running traditional x86/x64 code without any PhysX at all), PhysX should disappear.
This I wasn't sure of; seeing more games come out with PhysX, I thought it would stay around as NVIDIA's way to compete, even though I'd think it useless in this case. My main goal wasn't modding PhysX games to use OpenCL, but creating a virtual FPU using OpenCL, so that PhysX software, or any program heavy in floating-point calculations, could benefit from an available and currently unused resource. Windows manages which threads use which resources, and I believe a virtual FPU or co-processor would be listed in Windows, so I'd assume that if creating virtual hardware is possible, it would be the best way to do this. Honestly, though, I'm not sure where to even start when it comes to making virtual hardware (like Elby CloneDrive) in C++ 2010. I appreciate your responses; I learned a little, and I'm glad to see this idea isn't completely nuts, lol.
GPUs already assist in computations (see also: GPGPU), and as time goes on, more applications will start to take advantage of them. Process nodes will shrink, allowing APUs to become as powerful as traditional CPUs and eventually replace them, which will, in many cases, effectively let the on-chip GPU act as a co-processor for computationally intensive tasks.