How to Get the Nvidia Driver Version from the Command Line

How to get the nvidia driver version from the command line?

Using nvidia-smi should tell you that:

bwood@mybox:~$ nvidia-smi 
Mon Oct 29 12:30:02 2012
+------------------------------------------------------+
| NVIDIA-SMI 3.295.41   Driver Version: 295.41         |
|-------------------------------+----------------------+----------------------+
| Nb.  Name                     | Bus Id        Disp.  | Volatile ECC SB / DB |
| Fan   Temp   Power Usage /Cap | Memory Usage         | GPU Util. Compute M. |
|===============================+======================+======================|
| 0.  GeForce GTX 580           | 0000:25:00.0  N/A    |        N/A       N/A |
|  54%   70 C  N/A   N/A /  N/A |  25%  383MB / 1535MB |  N/A      Default    |
|-------------------------------+----------------------+----------------------|
| Compute processes:                                               GPU Memory |
|  GPU  PID     Process name                                       Usage      |
|=============================================================================|
|  0.           Not Supported                                                 |
+-----------------------------------------------------------------------------+
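
If you only need the driver version for scripting, nvidia-smi also supports a query mode (available on reasonably recent driver versions) that prints just that field:

nvidia-smi --query-gpu=driver_version --format=csv,noheader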

How to get the CUDA version?

As Jared mentions in a comment, from the command line:

nvcc --version

(or /usr/local/cuda/bin/nvcc --version) gives the CUDA compiler version (which matches the toolkit version).

From application code, you can query the runtime API version with

cudaRuntimeGetVersion()

or the driver API version with

cudaDriverGetVersion()
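
For example, here is a minimal sketch (assuming the CUDA toolkit is installed; the file name cuda_versions.cu is arbitrary) that prints both values:

// cuda_versions.cu -- compile with: nvcc cuda_versions.cu -o cuda_versions
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int runtime_ver = 0, driver_ver = 0;
    cudaRuntimeGetVersion(&runtime_ver);  // version of the CUDA runtime (libcudart) in use
    cudaDriverGetVersion(&driver_ver);    // latest CUDA version supported by the installed driver
    // Both values are encoded as 1000 * major + 10 * minor, e.g. 11040 means 11.4
    std::printf("Runtime API version: %d.%d\n", runtime_ver / 1000, (runtime_ver % 100) / 10);
    std::printf("Driver  API version: %d.%d\n", driver_ver / 1000, (driver_ver % 100) / 10);
    return 0;
}

Note that cudaDriverGetVersion() reports 0 if no CUDA-capable driver is installed.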

As Daniel points out, deviceQuery is an SDK sample app that queries the above, along with device capabilities.
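
If you want to run deviceQuery yourself, older toolkits install the sample sources under /usr/local/cuda/samples (newer toolkits moved the samples to GitHub), so on such an install something like this should work (you may need sudo, or copy the samples somewhere writable first):

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
make
./deviceQuery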

As others note, you can also check the contents of version.txt (e.g., on Mac or Linux):

cat /usr/local/cuda/version.txt

However, be careful if more than one CUDA toolkit is installed: version.txt reports the toolkit that /usr/local/cuda is symlinked to, while nvcc reports whichever toolkit comes first in your PATH, so the two can disagree. Use both with that caveat in mind.
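
On typical linux installs, /usr/local/cuda is just a symlink to a versioned toolkit directory, so you can also check which toolkit it points at directly:

ls -l /usr/local/cuda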

How do I run nvidia-smi on Windows?

Nvidia-SMI is stored by default in the following location:

C:\Windows\System32\DriverStore\FileRepository\nvdm*\nvidia-smi.exe

Where nvdm* is a directory whose name starts with nvdm followed by a driver-specific string of characters.

Note: Older installs may have it in C:\Program Files\NVIDIA Corporation\NVSMI

You can move to that directory and then run nvidia-smi from there. However, if you simply double-click the executable, the command prompt window closes as soon as it finishes, making it very difficult to read the output. It is also awkward to work out which nvdm* directory is the right one, since the name changes between driver versions and several directories of this form may exist. And unlike on linux, the executable is not on the PATH by default, so you cannot just type nvidia-smi from an arbitrary command prompt. It is better to find the exact location and create a shortcut that runs it periodically.
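
If you prefer a command prompt over File Explorer, a recursive search of the driver store (assuming the default location above) should also turn up the full path:

where /r C:\Windows\System32\DriverStore\FileRepository nvidia-smi.exe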

To find your exact location

  1. Open File Explorer (File Folder Icon on your Task Bar, Near Start / Cortana / Task View buttons).
  2. In the left Pane, click 'This PC'.
  3. In the main viewer, just to the top of the Icons, is a search bar. Type nvidia-smi.exe and hit enter. It will come up after some time.
  4. Right-click and choose 'Open File Location' and continue with the below instructions to make a desktop shortcut, or double click to run once (not recommended, as it runs and closes the window once complete, making it hard to see the information).

Make a shortcut that runs nvidia-smi and refreshes periodically

  1. Follow the above steps under 'To find your exact location'.
  2. Right-click on nvidia-smi.exe (it may just say nvidia-smi in the view pane) and choose 'Create shortcut'. It will likely tell you that it can't create a shortcut here and ask whether you want to put it on the desktop instead. Hit Yes.
  3. Now, on the desktop, right-click the shortcut you just created, choose Properties, and under Shortcut > Target append -l <time in seconds you want it to refresh> to the path.

For example, modify:

C:\Windows\System32\DriverStore\FileRepository\nvdm*\nvidia-smi.exe

to

C:\Windows\System32\DriverStore\FileRepository\nvdm*\nvidia-smi.exe -l 5

Then hit "Apply", and then "OK".

In this example, when you open the shortcut, it will keep the command prompt open and allow you to watch your work as nvidia-smi refreshes every five seconds.

How do I obtain the _actual_ CUDA driver version?

You can get it as a string using NVML's nvmlSystemGetDriverVersion() function:

#include <stdio.h>
#include <nvml.h>

int main(void) {
    if (nvmlInit() != NVML_SUCCESS)   // NVML must be initialized before any query
        return 1;
    char version_str[NVML_SYSTEM_DRIVER_VERSION_BUFFER_SIZE];
    nvmlReturn_t retval = nvmlSystemGetDriverVersion(version_str, sizeof(version_str));
    if (retval != NVML_SUCCESS) {
        fprintf(stderr, "%s\n", nvmlErrorString(retval));
        nvmlShutdown();
        return 1;
    }
    printf("Driver version: %s\n", version_str);
    nvmlShutdown();
    return 0;
}

This will result in something like:

Driver version: 470.57.02
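
To build a small program like this on linux, include nvml.h (shipped with the CUDA toolkit) and link against the NVML library installed by the driver; assuming the source is saved as get_driver_version.c (an arbitrary name), something like:

gcc get_driver_version.c -I/usr/local/cuda/include -lnvidia-ml -o get_driver_version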

nVidia driver version from WMI is not what I want

You can do this using NVML from nVidia's Tesla Deployment Kit. You can retrieve the internal driver version (the one you're accustomed to seeing for an nVidia driver) with code like this:

#include <iostream>
#include <stdexcept>
#include <string>
#include <stdlib.h>
#include <nvml.h>
#include <windows.h>

namespace {
    typedef nvmlReturn_t (*init)();
    typedef nvmlReturn_t (*shutdown)();
    typedef nvmlReturn_t (*get_version)(char *, unsigned);

    class NVML {
        init nvmlInit;
        shutdown nvmlShutdown;
        get_version nvmlGetDriverVersion;

        std::string find_dll() {
            std::string loc(getenv("ProgramW6432"));
            loc += "\\Nvidia Corporation\\nvsmi\\nvml.dll";
            return loc;
        }

    public:
        NVML() {
            HMODULE lib = LoadLibrary(find_dll().c_str());
            if (lib == NULL)
                throw(std::runtime_error("Unable to load nvml.dll"));

            nvmlInit = (init)GetProcAddress(lib, "nvmlInit");
            nvmlShutdown = (shutdown)GetProcAddress(lib, "nvmlShutdown");
            nvmlGetDriverVersion = (get_version)GetProcAddress(lib, "nvmlSystemGetDriverVersion");

            if (NVML_SUCCESS != nvmlInit())
                throw(std::runtime_error("Unable to initialize NVML"));
        }

        std::string get_ver() {
            char buffer[81];
            nvmlGetDriverVersion(buffer, sizeof(buffer));
            return std::string(buffer);
        }

        ~NVML() {
            // Don't throw from a destructor; just attempt a clean shutdown.
            nvmlShutdown();
        }
    };
}

int main() {
    std::cout << "nVidia Driver version: " << NVML().get_ver();
}

Note that if you're writing this purely for your own use on a machine where you're free to edit the PATH, you can simplify this quite a bit. Most of the code deals with the fact that this uses NVML.DLL, which lives in a directory that's not normally on the PATH, so the code loads it dynamically and uses GetProcAddress to find the functions it needs. In this case we're only using three functions, so it's not all that difficult to deal with, but it still drastically increases the length of the code.

If we could ignore all that nonsense, the real code would just come out to something on this general order:

char result[NVML_SYSTEM_DRIVER_VERSION_BUFFER_SIZE];

nvmlInit();
nvmlSystemGetDriverVersion(result, sizeof(result));
std::cout << result;
nvmlShutdown();

Anyway, to build it, you'll need a command line something like:

 cl -Ic:\tdk\nvml\include nv_driver_version.cpp

...assuming you've installed the Tesla Deployment Kit at c:\tdk.

In any case, yes, I've tested this to at least some degree. On my desktop it prints out:

nVidia Driver version: 314.22

...which matches what I have installed.

Get the CUDA and cuDNN version on Windows with Anaconda installed

You could also run conda list from the anaconda command line:

conda list cudnn

# packages in environment at C:\Anaconda2:
#
# Name                    Version                   Build  Channel
cudnn                     6.0                           0
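
Similarly, the CUDA toolkit installed through Anaconda is usually packaged as cudatoolkit, so its version can be checked the same way:

conda list cudatoolkit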

Different CUDA versions shown by nvcc and NVIDIA-smi

CUDA has 2 primary APIs, the runtime API and the driver API. Both have a corresponding version (e.g. 8.0, 9.0, etc.).

The necessary support for the driver API (e.g. libcuda.so on linux) is installed by the GPU driver installer.

The necessary support for the runtime API (e.g. libcudart.so on linux, and also nvcc) is installed by the CUDA toolkit installer (which may also have a GPU driver installer bundled in it).

In any event, the (installed) driver API version may not always match the (installed) runtime API version, especially if you install a GPU driver independently from installing CUDA (i.e. the CUDA toolkit).

The nvidia-smi tool gets installed by the GPU driver installer, and generally has the GPU driver in view, not anything installed by the CUDA toolkit installer.

Recently (somewhere between 410.48 and 410.73 driver version on linux) the powers-that-be at NVIDIA decided to add reporting of the CUDA Driver API version installed by the driver, in the output from nvidia-smi.

This has no connection to the installed CUDA runtime version.

nvcc, the CUDA compiler-driver tool that is installed with the CUDA toolkit, will always report the CUDA runtime version that it was built to recognize. It doesn't know anything about what driver version is installed, or even if a GPU driver is installed.

Therefore, by design, these two numbers don't necessarily match, as they are reflective of two different things.

If you are wondering why nvcc -V displays a version of CUDA you weren't expecting (e.g. it displays a version other than the one you think you installed), or doesn't display anything at all version-wise, it may be because you haven't followed the mandatory instructions in step 7 (prior to CUDA 11; step 6 in the CUDA 11 linux install guide) of the CUDA linux install guide.

Note that although this question mostly has linux in view, the same concepts apply to windows CUDA installs. The driver has a CUDA driver version associated with it (which can be queried with nvidia-smi, for example). The CUDA runtime also has a CUDA runtime version associated with it. The two will not necessarily match in all cases.

In most cases, if nvidia-smi reports a CUDA version that is numerically equal to or higher than the one reported by nvcc -V, this is not a cause for concern. That is a defined compatibility path in CUDA (newer drivers/driver API support "older" CUDA toolkits/runtime API). For example, if nvidia-smi reports CUDA 10.2 and nvcc -V reports CUDA 10.1, that is generally not a cause for concern. It should just work, and it does not necessarily mean that you "actually installed CUDA 10.2 when you meant to install CUDA 10.1".

If the nvcc command doesn't report anything at all (e.g. Command 'nvcc' not found...) or if it reports an unexpected CUDA version, this may also be due to an incorrect CUDA install, i.e. the mandatory steps mentioned above were not performed correctly. You can start to figure this out by using a linux utility like find or locate (use man pages to learn how, please) to find your nvcc executable. Assuming there is only one, the path to it can then be used to fix your PATH environment variable. The CUDA linux install guide also explains how to set this. You may need to adjust the CUDA version in the PATH variable to match your actual CUDA version desired/installed.
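
For example, if /usr/local/cuda is symlinked to the toolkit you want (adjust the path if you use a versioned directory such as /usr/local/cuda-X.Y), the PATH fix from the install guide amounts to something like:

export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}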

Similarly, when using docker, the nvidia-smi command will generally report the driver version installed on the base machine, whereas other version methods like nvcc --version will report the CUDA version installed inside the docker container.
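
A quick way to see this in practice is to run nvidia-smi inside a container via the NVIDIA Container Toolkit; the image tag below is a placeholder, so substitute one that actually exists for your setup:

docker run --rm --gpus all nvidia/cuda:<tag> nvidia-smi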

Similarly, if you have used another installation method for the CUDA "toolkit" such as Anaconda, you may discover that the version indicated by Anaconda does not "match" the version indicated by nvidia-smi. However, the above comments still apply. Older CUDA toolkits installed by Anaconda can be used with newer versions reported by nvidia-smi, and the fact that nvidia-smi reports a newer/higher CUDA version than the one installed by Anaconda does not mean you have an installation problem.

Here is another question that covers similar ground. The above treatment does not in any way indicate that this answer is only applicable if you have installed multiple CUDA versions intentionally or unintentionally. The situation presents itself any time you install CUDA. The version reported by nvcc and nvidia-smi may not match, and that is expected behavior and in most cases quite normal.


