Recently I have been studying the NVIDIA CUDA Toolkit compilation environment, and the development version of xmake 2.1.10 now supports the CUDA compilation environment: *.cu code can be compiled directly.
For CUDA Toolkit instructions and installation guides, please refer to the official documentation: CUDA Toolkit Documentation.
After downloading and installing the CUDA SDK, it is installed by default to the /Developer/NVIDIA/CUDA-x.x directory on macOS. On Windows, the SDK directory can be found through the CUDA_PATH environment variable, and on Linux it is installed by default under /usr/local/cuda.
When xmake compiles *.cu code, it tries to detect these default installation directories and then calls the nvcc compiler to compile the CUDA program directly. In most cases, you only need to execute:
$ xmake
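For reference, the project description needed for this can be as small as the following sketch (the target name demo and the src directory here are illustrative assumptions, not something xmake mandates):

```lua
-- minimal xmake.lua for a CUDA project (names are examples)
target("demo")
    set_kind("binary")      -- build an executable
    add_files("src/*.cu")   -- compile all CUDA sources with nvcc
```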
Creating and compiling a CUDA project
Before compiling, we can create an empty CUDA project with xmake, for example:
$ xmake create -l cuda test
$ cd test
$ xmake
The -l parameter specifies the language of the project to create; here a CUDA project named test is created. The build output looks like this:
[00%]: ccache compiling.release src/main.cu
[100%]: linking.release test
We can also try to run this CUDA program directly:
$ xmake run
Then let's take a look at the xmake.lua file of this CUDA project:
-- define target
target("test")

    -- set kind
    set_kind("binary")

    -- add include directories
    add_includedirs("inc")

    -- add files
    add_files("src/*.cu")

    -- generate SASS code for each SM architecture
    for _, sm in ipairs({"30", "35", "37", "50", "52", "60", "61", "70"}) do
        add_cuflags("-gencode arch=compute_" .. sm .. ",code=sm_" .. sm)
        add_ldflags("-gencode arch=compute_" .. sm .. ",code=sm_" .. sm)
    end

    -- generate PTX code from the highest SM architecture to guarantee forward-compatibility
    sm = "70"
    add_cuflags("-gencode arch=compute_" .. sm .. ",code=compute_" .. sm)
    add_ldflags("-gencode arch=compute_" .. sm .. ",code=compute_" .. sm)
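To make the loop concrete: each iteration appends one -gencode pair for one SM architecture. For the first architecture in the list, it adds the equivalent of:

```lua
-- what the loop adds for sm = "30" (same pattern for the other architectures)
add_cuflags("-gencode arch=compute_30,code=sm_30")
add_ldflags("-gencode arch=compute_30,code=sm_30")
```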
Most of this is similar to a C/C++ project description. The only difference is that add_cuflags is used to set compilation options specific to CUDA code; this part of the configuration can be adjusted according to the user's needs. For more instructions on add_cuflags, you can read the official xmake documentation.
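As an illustrative sketch of adjusting this configuration, other nvcc-specific options can be passed the same way (these particular flags are common nvcc options chosen as examples, not something xmake requires):

```lua
-- illustrative: extra nvcc options for the CUDA sources of this target
add_cuflags("-O3")                -- optimization level
add_cuflags("--use_fast_math")    -- faster, less precise math intrinsics
add_cuflags("-Xcompiler -fPIC")   -- forward -fPIC to the host compiler
```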
CUDA compilation environment configuration
By default, xmake detects the CUDA SDK installed on the system, and the user does not need to do any additional configuration. If it is not detected, you can also specify the CUDA SDK path manually:
$ xmake f --cuda_dir=/usr/local/cuda
$ xmake
This tells xmake where your current CUDA SDK installation directory is.
If you want to test xmake's detection of the current CUDA environment, you can run:
$ xmake l detect.sdks.find_cuda_toolchains
{
    linkdirs =
    {
        /Developer/NVIDIA/CUDA-9.1/lib
    }
,   bindir = /Developer/NVIDIA/CUDA-9.1/bin
,   includedirs =
    {
        /Developer/NVIDIA/CUDA-9.1/include
    }
,   cudadir = /Developer/NVIDIA/CUDA-9.1
}
Based on the detection results, you can even contribute to the related detection code in find_cuda_toolchains.lua to help improve xmake's detection process.
Other notes
Note: CUDA support has only just been completed and has not yet been officially released. For more information on xmake's CUDA support and progress, see issue #158.
If you want to try this feature, you can download and install the latest master version, or download the Windows 2.1.10-dev installation package.
Original source: http://tboox.org/cn/2018/03/09/support-cuda/