Forcing NVIDIA GPU Programmatically in Optimus Laptops

The Optimus whitepaper at http://www.nvidia.com/object/LO_optimus_whitepapers.html is unclear about exactly what it takes before a switch to the GPU is made. The whitepaper says that DX, DXVA, and CUDA calls are detected and will cause the GPU to be turned on. But in addition, the decision is based on profiles maintained by NVIDIA, and of course one does not yet exist for your game.

One thing to try would be to make a CUDA call, for instance cuInit(0). Unlike DX and DXVA, there is no way for the Intel integrated graphics to handle that, so it should force a switch to the GPU.
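For example, a minimal sketch (assuming the CUDA toolkit headers are available and the program links against the CUDA driver library; the helper name is just illustrative):

// Sketch: touch the CUDA driver API early in startup so Optimus powers up
// the discrete GPU. Requires cuda.h and the CUDA driver library (nvcuda).
#include <cuda.h>
#include <cstdio>

void WakeDiscreteGpu() {
    // cuInit(0) initializes the CUDA driver; the Intel integrated GPU cannot
    // service this, so on an Optimus system the NVIDIA GPU is brought up.
    CUresult result = cuInit(0);
    if (result != CUDA_SUCCESS) {
        std::fprintf(stderr, "cuInit failed (%d): no CUDA-capable GPU available?\n",
                     static_cast<int>(result));
    }
}

Call something like this before creating any rendering context, so the driver has already activated the discrete GPU by the time the context is made.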

Force system with nVidia Optimus to use the real GPU for my application?

"From what I've read, Delphi does not support export of variables."

That statement is incorrect. Here's the simplest example that shows how to export a global variable from a Delphi DLL:

library GlobalVarExport;

uses
  Windows;

var
  NvOptimusEnablement: DWORD;

exports
  NvOptimusEnablement;

begin
  NvOptimusEnablement := 1;
end.

I think your problem is that you wrote it like this:

library GlobalVarExport;

uses
  Windows;

var
  NvOptimusEnablement: DWORD = 1;

exports
  NvOptimusEnablement;

begin
end.

And that fails to compile with this error:


E2276 Identifier 'NvOptimusEnablement' cannot be exported

I don't understand why the compiler doesn't like the second version. It's probably a bug. But the workaround in the first version is just fine.

How to force exe file to run on Nvidia GPU on Windows

As far as I know, there is no easy or automatic way to compile general C++ code to be executed on a GPU. You have to use a specific GPU-computing API, or write the code in a way that allows some tool to generate the GPU code for you automatically.

The C++ APIs that I know for GPU programming are:

  • CUDA: intended for NVIDIA GPUs only, and relatively easy to use. There is a good introduction here: https://developer.nvidia.com/blog/even-easier-introduction-cuda/

  • OpenCL: can be used with most of the GPUs on the market, but is not as easy to use as CUDA. An interesting feature of OpenCL is that the generated code can also run on a CPU. There is an example here: https://www.eriksmistad.no/getting-started-with-opencl-and-gpu-computing/

  • SYCL: a relatively recent API for heterogeneous programming built on top of OpenCL. SYCL greatly simplifies GPU programming; this tutorial shows how easy it is to use: https://tech.io/playgrounds/48226/introduction-to-sycl/introduction-to-sycl-2

Unfortunately, you will need to rewrite some parts of your code if you want to use one of these options. I believe SYCL will be the easiest choice and the one that requires the fewest modifications to the C++ code (I really encourage you to take a look at the tutorial; SYCL is pretty easy to use). Also, it seems that Visual Studio already supports SYCL: https://developer.codeplay.com/products/computecpp/ce/guides/platform-support/targeting-windows
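To give a feel for that option, here is a minimal SYCL 2020 sketch of a vector addition on a GPU device; it assumes a SYCL 2020 implementation such as DPC++ is installed, and the sizes and values are only illustrative:

// Minimal SYCL 2020 sketch: add two vectors on a GPU device.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // gpu_selector_v throws if the SYCL runtime finds no GPU device.
    sycl::queue q{sycl::gpu_selector_v};

    constexpr size_t N = 1024;
    // Unified shared memory is accessible from both host and device.
    float* a = sycl::malloc_shared<float>(N, q);
    float* b = sycl::malloc_shared<float>(N, q);
    float* c = sycl::malloc_shared<float>(N, q);
    for (size_t i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // The lambda body is the part that actually executes on the GPU.
    q.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
    }).wait();

    std::cout << "c[0] = " << c[0] << '\n';  // expect 3

    sycl::free(a, q);
    sycl::free(b, q);
    sycl::free(c, q);
    return 0;
}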

Forcing Machine to Use Dedicated Graphics Card?

Does it use NVIDIA dedicated graphics? AFAIK, the process of automatically switching from integrated to dedicated graphics is based on application profiles. Your application is not in the driver's list of known 3D applications, so the user has to switch to the dedicated GPU manually.

Try changing the executable name of your application to something the driver looks for. For example "Doom3.exe". If that works, then you've found your problem.

If that didn't help, try following the instructions on how to make the driver insert your application in its list of 3D apps:

https://nvidia.custhelp.com/app/answers/detail/a_id/2615/~/how-do-i-customize-optimus-profiles-and-settings

But the above is only for verifying whether this is indeed your problem. For an actual solution, you should check with the graphics driver vendors (AMD and NVIDIA) on the best way to add a profile for your application to their lists. NVIDIA provides NVAPI and AMD has ADL and AGS. They're definitely worth a look.
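To give a rough idea of the NVAPI route, the sketch below uses the Driver Settings (DRS) part of NVAPI to create an application profile. Treat it as an outline only: the profile and executable names are made up, error handling is omitted, and the specific setting IDs that prefer the dedicated GPU still have to be taken from NvApiDriverSettings.h in the NVAPI SDK.

// Outline: create a driver profile for a hypothetical "MyGame.exe" via NVAPI DRS.
// Requires the NVAPI SDK; error handling omitted for brevity.
#include <nvapi.h>
#include <NvApiDriverSettings.h>
#include <cwchar>

// NvAPI_UnicodeString is a fixed-size array of NvU16; on Windows wchar_t is
// also 16 bits wide, so a plain wide-string copy is enough for this sketch.
static void CopyToNvString(NvAPI_UnicodeString dst, const wchar_t* src) {
    std::wcsncpy(reinterpret_cast<wchar_t*>(dst), src, NVAPI_UNICODE_STRING_MAX - 1);
}

bool CreateGameProfile() {
    if (NvAPI_Initialize() != NVAPI_OK) return false;

    NvDRSSessionHandle session = nullptr;
    NvAPI_DRS_CreateSession(&session);
    NvAPI_DRS_LoadSettings(session);                 // load the current settings database

    NVDRS_PROFILE profile = {};
    profile.version = NVDRS_PROFILE_VER;
    CopyToNvString(profile.profileName, L"MyGame Profile");   // hypothetical profile name
    NvDRSProfileHandle hProfile = nullptr;
    NvAPI_DRS_CreateProfile(session, &profile, &hProfile);

    NVDRS_APPLICATION app = {};
    app.version = NVDRS_APPLICATION_VER;
    CopyToNvString(app.appName, L"MyGame.exe");                // hypothetical executable name
    NvAPI_DRS_CreateApplication(session, hProfile, &app);

    // The setting that selects the dedicated GPU would be applied here with
    // NvAPI_DRS_SetSetting, using the IDs documented in NvApiDriverSettings.h.

    NvAPI_DRS_SaveSettings(session);                 // persist the new profile
    NvAPI_DRS_DestroySession(session);
    NvAPI_Unload();
    return true;
}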

Programmatically setting NvOptimusEnablement for Python-based OpenGL Programs

The problem is that those flags have to be known to the system when the process is created, which makes sense because that is the point at which GPU resources can start being allocated. They only work when exported from the executable that is run, or from a library that is statically linked into it; a dynamically linked library won't work.

Therefore, there is no other way than to run a Python interpreter that has been compiled with this flag, or to configure which graphics processor Python.exe should use from the NVIDIA Control Panel.

However, when you publish an application using PyInstaller, it is possible to compile the bootloader with these flags exported, since the bootloader is the entry point of your program. You just have to add this to main.c (or use my fork of PyInstaller: https://github.com/pvallet/pyinstaller):

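/* Exported symbols read by the NVIDIA and AMD drivers when the process is
   created; a nonzero value requests the discrete, high-performance GPU. */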
__declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
__declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;

This post explains how to do it: "How to recompile the bootloader of Pyinstaller".


