Mathematica has some predefined CUDA functions, like CUDAMap, CUDAFourier, etc. My question is: how can one call functions that are part of a CUDA library but are not among the predefined Mathematica functions? For example, CUDA 6.0 ships with a large math library; if I wanted to call cuFFT from that library instead of using CUDAFourier, how would I go about doing this? Do I have to write C code that calls the CUDA library function and then call that C function from Mathematica? That seems a roundabout way of doing it. I could not quite find this in the documentation. Thanks very much!
Answer
CUDALink allows you to write custom kernels, but unfortunately it doesn't let you use its CUDAMemory in other functions. I found two methods to deal with this.
Documented method
You can use LibraryLink and call CUDA functions as in a regular CUDA program. For example, you can write something like fourier.cu. Then you can compile it and load "fourier_single" or "fourier", depending on your GPU architecture:
SetDirectory@NotebookDirectory[];
Needs["CUDALink`"]
CreateLibrary[{"fourier.cu"}, "libfourier",
"Compiler" -> NVCCCompiler, "Libraries" -> "cufft"];
AppendTo[$LibraryPath, NotebookDirectory[]];
fourier = LibraryFunctionLoad["libfourier", "fourier", {{Complex, 1}}, {Complex, 1}];
I won't consider this method in detail because it has a big disadvantage: you have to copy your data to the GPU and back every time you call the function.
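The fourier.cu mentioned above is not reproduced here; the following is a minimal sketch of what such a LibraryLink wrapper might look like (my own version: double precision, Z2Z transform, error checking omitted), with the exported name fourier matching the LibraryFunctionLoad call above.

/* fourier.cu -- sketch of a LibraryLink wrapper around cuFFT (double precision) */
#include "WolframLibrary.h"
#include <cuda_runtime.h>
#include <cufft.h>

EXTERN_C DLLEXPORT mint WolframLibrary_getVersion( ) { return WolframLibraryVersion; }
EXTERN_C DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) { return LIBRARY_NO_ERROR; }
EXTERN_C DLLEXPORT void WolframLibrary_uninitialize(WolframLibraryData libData) { return; }

/* Takes a rank-1 complex tensor and returns its transform; note the
   host-to-device and device-to-host copies made on every call. */
EXTERN_C DLLEXPORT int fourier(WolframLibraryData libData, mint Argc, MArgument *Args, MArgument Res) {
    MTensor in = MArgument_getMTensor(Args[0]);
    mint n = libData->MTensor_getFlattenedLength(in);

    /* mcomplex is two consecutive doubles, the same layout as cufftDoubleComplex */
    cufftDoubleComplex *dev;
    cudaMalloc((void **)&dev, n * sizeof(cufftDoubleComplex));
    cudaMemcpy(dev, libData->MTensor_getComplexData(in),
               n * sizeof(cufftDoubleComplex), cudaMemcpyHostToDevice);

    cufftHandle plan;
    cufftPlan1d(&plan, (int)n, CUFFT_Z2Z, 1);
    /* CUFFT_INVERSE corresponds to the forward Fourier with FourierParameters -> {1, 1} */
    cufftExecZ2Z(plan, dev, dev, CUFFT_INVERSE);
    cufftDestroy(plan);

    /* allocate the result tensor and copy the data back to the host */
    MTensor out;
    mint dims[1] = {n};
    libData->MTensor_new(MType_Complex, 1, dims, &out);
    cudaMemcpy(libData->MTensor_getComplexData(out), dev,
               n * sizeof(cufftDoubleComplex), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    MArgument_setMTensor(Res, out);
    return LIBRARY_NO_ERROR;
}

Loaded as shown above, fourier works on ordinary Mathematica lists; the price is the two host-device copies on every call.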
Undocumented method
Let us look through the system files of Mathematica. There are brilliant examples in $InstallationDirectory/SystemFiles/Links/CUDALink/CSource/! They use the undocumented WolframGPULibrary (WGL). This library allows you to interact directly with CUDAMemory by its ID. I adapted these examples to demonstrate the usage of a 1D single-precision CUFFT (say, "fourier.cu" in the same directory as the notebook):
#include <wgl.h>              /* WGL headers, located in GPUTools`Internal`$GPUToolsSystemResourcesPath */
#include <wgl_cuda_runtime.h>
#include <cufft.h>            /* cuFFT API: cufftPlan1d, cufftExecC2C, ... */

#define wglState (wglData->state)
#define wglErr (wglData->getError(wglData))
#define WGL_SuccessQ (wglErr->code == WGL_Success)
#define WGL_FailQ (!WGL_SuccessQ)
#define WGL_Type_RealQ(mem) ((mem)->type == WGL_Real_t)
#define WGL_SAFE_CALL(stmt, jmp) stmt; if (WGL_FailQ) { goto jmp; }

WolframGPULibraryData wglData = NULL;

/* 1D single-precision CUFFT from the input CUDAMemory to the output CUDAMemory */
static int iFourier(WGL_Memory_t input, WGL_Memory_t output) {
    int err;

    err = LIBRARY_FUNCTION_ERROR;

    if (input->type != output->type) {
        return LIBRARY_TYPE_ERROR;
    } else if (input->flattenedLength != output->flattenedLength) {
        return LIBRARY_DIMENSION_ERROR;
    }

    /* bind the CUDAMemory buffers so that their data is available on the device */
    WGL_SAFE_CALL(CUDA_Runtime_setMemoryAsInput(wglState, input, wglErr), cleanup);
    WGL_SAFE_CALL(CUDA_Runtime_setMemoryAsOutput(wglState, output, wglErr), cleanup);

    cufftHandle plan;
    cufftPlan1d(&plan, input->flattenedLength, CUFFT_C2C, 1);
    cufftExecC2C(
        plan,
        (cufftComplex *)CUDA_Runtime_getDeviceMemoryAsComplexFloat(input),
        (cufftComplex *)CUDA_Runtime_getDeviceMemoryAsComplexFloat(output),
        CUFFT_INVERSE
    );
    cufftDestroy(plan);

    err = LIBRARY_NO_ERROR;

cleanup:
    /* release the buffers, marking the output as valid only if everything succeeded */
    if (WGL_SuccessQ) {
        CUDA_Runtime_setMemoryAsValidOutput(wglState, output, wglErr);
    } else {
        CUDA_Runtime_setMemoryAsInvalidOutput(wglState, output, wglErr);
    }
    CUDA_Runtime_unsetMemoryAsInput(wglState, input, wglErr);

    if (WGL_SuccessQ && err == LIBRARY_NO_ERROR) {
        return LIBRARY_NO_ERROR;
    } else {
        return LIBRARY_FUNCTION_ERROR;
    }
}

/* LibraryLink entry point: receives the integer IDs of two CUDAMemory objects */
EXTERN_C DLLEXPORT int oFourier(WolframLibraryData libData, mint Argc, MArgument * Args, MArgument Res) {
    WGL_Memory_t input, output;
    mint inputId, outputId;
    int err = LIBRARY_FUNCTION_ERROR;

    inputId = MArgument_getInteger(Args[0]);
    outputId = MArgument_getInteger(Args[1]);

    WGL_SAFE_CALL(wglData->setWolframLibraryData(wglData, libData), cleanup);
    WGL_SAFE_CALL(input = wglData->findMemory(wglData, inputId), cleanup);
    WGL_SAFE_CALL(output = wglData->findMemory(wglData, outputId), cleanup);

    err = iFourier(input, output);

cleanup:
    return err;
}

EXTERN_C DLLEXPORT int WolframGPULibrary_initialize(WolframGPULibraryData wglData0) {
    wglData = wglData0;
    return LIBRARY_NO_ERROR;
}

EXTERN_C DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) {
    return LIBRARY_NO_ERROR;
}

EXTERN_C DLLEXPORT void WolframLibrary_uninitialize( ) {
    return;
}
Let's go back to Mathematica, compile the source, and load the function:
SetDirectory@NotebookDirectory[];
Needs["CUDALink`"]
CreateLibrary[{"fourier.cu"}, "libfourier.so",
"TargetDirectory" -> ".", "Compiler" -> NVCCCompiler,
"Defines" -> "CONFIG_USE_CUDA",
"Libraries" -> FileNameJoin@{$CUDALinkLibraryPath, "libCUDALink_Single.so"}];
fourier = CUDAFunctionLoad[{FileNameJoin@{NotebookDirectory[], "libfourier.so"}},
   "oFourier", {{_Integer, "Input"}, {_Integer, "Output"}}, 256,
   TargetPrecision -> "Single"];
Notes:
- The CONFIG_USE_CUDA flag is necessary for using CUDA in WGL. The other flag is CONFIG_USE_OPENCL.
- Initialization of WGL fails without "Libraries" -> FileNameJoin@{$CUDALinkLibraryPath, "libCUDALink_Single.so"}. However, everything will work if you evaluate any built-in CUDA* function beforehand.
- For double precision, change Single to Double, Float to Double, C2C to Z2Z and cufftComplex to cufftDoubleComplex everywhere above and below (see the sketch after these notes).
- CUFFT_INVERSE corresponds to the forward Fourier with FourierParameters -> {1, 1}.
- Input and output are _Integer because they transfer integer IDs, not the data itself.
- The WGL headers are located in GPUTools`Internal`$GPUToolsSystemResourcesPath.
- Built-in functions like CUDAFourier are loaded in $CUDALinkPath/CUDALink.m, which can be used as an example of loading WGL functions.
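To make the double-precision substitutions concrete, here is a sketch of how the cuFFT section of iFourier would change. I am assuming the device-pointer getter follows the same naming pattern as the ComplexFloat one used above (CUDA_Runtime_getDeviceMemoryAsComplexDouble); on the Mathematica side you would link against libCUDALink_Double.so and load with TargetPrecision -> "Double".

    /* double-precision (Z2Z) variant of the cuFFT calls in iFourier */
    cufftHandle plan;
    cufftPlan1d(&plan, input->flattenedLength, CUFFT_Z2Z, 1);
    cufftExecZ2Z(
        plan,
        (cufftDoubleComplex *)CUDA_Runtime_getDeviceMemoryAsComplexDouble(input),
        (cufftDoubleComplex *)CUDA_Runtime_getDeviceMemoryAsComplexDouble(output),
        CUFFT_INVERSE
    );
    cufftDestroy(plan);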
Tests (change 2^24 to an appropriate value if you don't have enough GPU memory):
x = RandomComplex[{-1 - I, 1 + I}, 2^24];
ByteCount[x]
(* 268435600 *)
f1 = Fourier[x]; // AbsoluteTiming
(* {0.404416, Null} *)
gx = CUDAMemoryLoad[x, TargetPrecision -> "Single"]
gf = CUDAMemoryAllocate[Complex, Length[x], TargetPrecision -> "Single"]
(* CUDAMemory["<587248096>", "ComplexFloat"] *)
(* CUDAMemory["<1373924031>", "ComplexFloat"] *)
fourier[gx, gf]; // AbsoluteTiming
f2 = CUDAMemoryGet@gf/Sqrt@Length@x;
Norm[f2 - f1]/Norm[f1]
(* {0.010405, Null} *)
(* 2.64293*10^-7 *)
CUDAMemoryUnload[gf];
gf = CUDAFourier[gx]; // AbsoluteTiming
f3 = CUDAMemoryGet@gf;
Norm[f3 - f1]/Norm[f1]
(* {0.183712, Null} *)
(* 2.25279*10^-8 *)
CUDAMemoryUnload[gf, gx]
It works! I tested it with a GTX 560 (2.0 architecture, shown above) and a 9600M GT (1.1 architecture, without double precision) on Linux. Both tests show that our fourier is much faster than the CPU and faster than the built-in CUDAFourier. On the GTX 560, CUDAFourier is relatively slow because it uses double precision, and I don't know how to switch it to single precision.
I would appreciate any tests of WGL on other systems!