FindCUDA: Fix literal block formatting

Fix locations of '::' manually to group literal blocks as desired.
This commit is contained in:
Brad King 2014-01-29 14:25:29 -05:00
parent bbc82d85e5
commit 1f8eb5db1c
1 changed file with 4 additions and 161 deletions


@@ -32,9 +32,7 @@
 # script (in alphabetical order). Note that any of these flags can be
 # changed multiple times in the same directory before calling
 # CUDA_ADD_EXECUTABLE, CUDA_ADD_LIBRARY, CUDA_COMPILE, CUDA_COMPILE_PTX
-# or CUDA_WRAP_SRCS.
-#
-# ::
+# or CUDA_WRAP_SRCS::
 #
 # CUDA_64_BIT_DEVICE_CODE (Default matches host bit size)
 # -- Set to ON to compile for 64 bit device code, OFF for 32 bit device code.
@@ -43,19 +41,11 @@
 # nvcc in the generated source. If you compile to PTX and then load the
 # file yourself, you can mix bit sizes between device and host.
 #
-#
-#
-# ::
-#
 # CUDA_ATTACH_VS_BUILD_RULE_TO_CUDA_FILE (Default ON)
 # -- Set to ON if you want the custom build rule to be attached to the source
 # file in Visual Studio. Turn OFF if you add the same cuda file to multiple
 # targets.
 #
-#
-#
-# ::
-#
 # This allows the user to build the target from the CUDA file; however, bad
 # things can happen if the CUDA source file is added to multiple targets.
 # When performing parallel builds it is possible for the custom build
@@ -68,44 +58,24 @@
 # this script could detect the reuse of source files across multiple targets
 # and turn the option off for the user, but no good solution could be found.
 #
-#
-#
-# ::
-#
 # CUDA_BUILD_CUBIN (Default OFF)
 # -- Set to ON to enable an extra compilation pass with the -cubin option in
 # Device mode. The output is parsed, and register and shared memory usage
 # are printed during build.
 #
-#
-#
-# ::
-#
 # CUDA_BUILD_EMULATION (Default OFF for device mode)
 # -- Set to ON for Emulation mode. -D_DEVICEEMU is defined for CUDA C files
 # when CUDA_BUILD_EMULATION is TRUE.
 #
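As a minimal sketch (project and file names hypothetical), the switches above are ordinary CMake variables set before any of the CUDA_* macros run:

```cmake
# Hypothetical project; variable names come from the documentation above.
cmake_minimum_required(VERSION 2.8)
project(EmuDemo)
find_package(CUDA REQUIRED)

set(CUDA_BUILD_EMULATION ON)  # CUDA C files get -D_DEVICEEMU
set(CUDA_BUILD_CUBIN OFF)     # skip the extra -cubin reporting pass

cuda_add_executable(emu_demo main.cu)
```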
-#
-#
-# ::
-#
 # CUDA_GENERATED_OUTPUT_DIR (Default CMAKE_CURRENT_BINARY_DIR)
 # -- Set to the path you wish to have the generated files placed. If it is
 # blank, output files will be placed in CMAKE_CURRENT_BINARY_DIR.
 # Intermediate files will always be placed in
 # CMAKE_CURRENT_BINARY_DIR/CMakeFiles.
 #
-#
-#
-# ::
-#
 # CUDA_HOST_COMPILATION_CPP (Default ON)
 # -- Set to OFF for C compilation of host code.
 #
-#
-#
-# ::
-#
 # CUDA_HOST_COMPILER (Default CMAKE_C_COMPILER, $(VCInstallDir)/bin for VS)
 # -- Set the host compiler to be used by nvcc. Ignored if -ccbin or
 # --compiler-bindir is already present in the CUDA_NVCC_FLAGS or
@@ -113,19 +83,11 @@
 # $(VCInstallDir)/bin is a special value that expands out to the path when
 # the command is run from within VS.
 #
-#
-#
-# ::
-#
 # CUDA_NVCC_FLAGS
 # CUDA_NVCC_FLAGS_<CONFIG>
 # -- Additional NVCC command line arguments. NOTE: multiple arguments must be
 # semi-colon delimited (e.g. --compiler-options;-Wall)
 #
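A sketch of the semicolon-delimited flag convention described above (the flag choices are illustrative only):

```cmake
# Assumes find_package(CUDA) has already run in this directory.
# A space-separated string would reach nvcc as one argument;
# semicolons make each item a separate argument.
set(CUDA_NVCC_FLAGS --compiler-options;-Wall;-O2)
set(CUDA_NVCC_FLAGS_DEBUG -g;-G)  # applies only to Debug configurations
```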
-#
-#
-# ::
-#
 # CUDA_PROPAGATE_HOST_FLAGS (Default ON)
 # -- Set to ON to propagate CMAKE_{C,CXX}_FLAGS and their configuration
 # dependent counterparts (e.g. CMAKE_C_FLAGS_DEBUG) automatically to the
@@ -137,10 +99,6 @@
 # CUDA_ADD_LIBRARY, CUDA_ADD_EXECUTABLE, or CUDA_WRAP_SRCS. Flags used for
 # shared library compilation are not affected by this flag.
 #
-#
-#
-# ::
-#
 # CUDA_SEPARABLE_COMPILATION (Default OFF)
 # -- If set, this will enable separable compilation for all CUDA runtime object
 # files. If used outside of CUDA_ADD_EXECUTABLE and CUDA_ADD_LIBRARY
@@ -148,38 +106,22 @@
 # CUDA_COMPUTE_SEPARABLE_COMPILATION_OBJECT_FILE_NAME and
 # CUDA_LINK_SEPARABLE_COMPILATION_OBJECTS should be called.
 #
-#
-#
-# ::
-#
 # CUDA_VERBOSE_BUILD (Default OFF)
 # -- Set to ON to see all the commands used when building the CUDA file. When
 # using a Makefile generator the value defaults to VERBOSE (run make
 # VERBOSE=1 to see output), although setting CUDA_VERBOSE_BUILD to ON will
 # always print the output.
 #
-#
-#
-# The script creates the following macros (in alphabetical order):
-#
-# ::
+# The script creates the following macros (in alphabetical order)::
 #
 # CUDA_ADD_CUFFT_TO_TARGET( cuda_target )
 # -- Adds the cufft library to the target (can be any target). Handles whether
 # you are in emulation mode or not.
 #
-#
-#
-# ::
-#
 # CUDA_ADD_CUBLAS_TO_TARGET( cuda_target )
 # -- Adds the cublas library to the target (can be any target). Handles
 # whether you are in emulation mode or not.
 #
-#
-#
-# ::
-#
 # CUDA_ADD_EXECUTABLE( cuda_target file0 file1 ...
 # [WIN32] [MACOSX_BUNDLE] [EXCLUDE_FROM_ALL] [OPTIONS ...] )
 # -- Creates an executable "cuda_target" which is made up of the files
@@ -193,43 +135,23 @@
 # nvcc. Such flags should be modified before calling CUDA_ADD_EXECUTABLE,
 # CUDA_ADD_LIBRARY or CUDA_WRAP_SRCS.
 #
-#
-#
-# ::
-#
 # CUDA_ADD_LIBRARY( cuda_target file0 file1 ...
 # [STATIC | SHARED | MODULE] [EXCLUDE_FROM_ALL] [OPTIONS ...] )
 # -- Same as CUDA_ADD_EXECUTABLE except that a library is created.
 #
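A hedged usage sketch (target and source names hypothetical); the resulting target behaves like any other CMake library:

```cmake
# Assumes find_package(CUDA) has already run.
cuda_add_library(kernels STATIC kernels.cu helpers.cu)

add_executable(app main.cpp)
target_link_libraries(app kernels)  # linked like an ordinary library target
```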
-#
-#
-# ::
-#
 # CUDA_BUILD_CLEAN_TARGET()
 # -- Creates a convenience target that deletes all the dependency files
 # generated. You should make clean after running this target to ensure the
 # dependency files get regenerated.
 #
-#
-#
-# ::
-#
 # CUDA_COMPILE( generated_files file0 file1 ... [STATIC | SHARED | MODULE]
 # [OPTIONS ...] )
 # -- Returns a list of generated files from the input source files to be used
 # with ADD_LIBRARY or ADD_EXECUTABLE.
 #
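A sketch of the CUDA_COMPILE flow under hypothetical file names: the macro fills a variable with generated sources that a plain ADD_EXECUTABLE can consume:

```cmake
# Assumes find_package(CUDA) has already run.
cuda_compile(GENERATED_SRCS kernel0.cu kernel1.cu OPTIONS -O2)

add_executable(app main.cpp ${GENERATED_SRCS})
target_link_libraries(app ${CUDA_LIBRARIES})  # runtime linked by hand here
```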
-#
-#
-# ::
-#
 # CUDA_COMPILE_PTX( generated_files file0 file1 ... [OPTIONS ...] )
 # -- Returns a list of PTX files generated from the input source files.
 #
-#
-#
-# ::
-#
 # CUDA_COMPUTE_SEPARABLE_COMPILATION_OBJECT_FILE_NAME( output_file_var
 # cuda_target
 # object_files )
@@ -242,10 +164,6 @@
 # automatically for CUDA_ADD_LIBRARY and CUDA_ADD_EXECUTABLE. Note that
 # this is a function and not a macro.
 #
-#
-#
-# ::
-#
 # CUDA_INCLUDE_DIRECTORIES( path0 path1 ... )
 # -- Sets the directories that should be passed to nvcc
 # (e.g. nvcc -Ipath0 -Ipath1 ... ). These paths usually contain other .cu
@@ -253,17 +171,9 @@
 #
 #
 #
-#
-#
-# ::
-#
 # CUDA_LINK_SEPARABLE_COMPILATION_OBJECTS( output_file_var cuda_target
 # nvcc_flags object_files)
 #
-#
-#
-# ::
-#
 # -- Generates the link object required by separable compilation from the given
 # object files. This is called automatically for CUDA_ADD_EXECUTABLE and
 # CUDA_ADD_LIBRARY, but can be called manually when using CUDA_WRAP_SRCS
@@ -273,91 +183,51 @@
 # specified by CUDA_64_BIT_DEVICE_CODE. Note that this is a function
 # instead of a macro.
 #
-#
-#
-# ::
-#
 # CUDA_WRAP_SRCS ( cuda_target format generated_files file0 file1 ...
 # [STATIC | SHARED | MODULE] [OPTIONS ...] )
 # -- This is where all the magic happens. CUDA_ADD_EXECUTABLE,
 # CUDA_ADD_LIBRARY, CUDA_COMPILE, and CUDA_COMPILE_PTX all call this
 # function under the hood.
 #
-#
-#
-# ::
-#
 # Given the list of files (file0 file1 ... fileN) this macro generates
 # custom commands that generate either PTX or linkable objects (use "PTX" or
 # "OBJ" for the format argument to switch). Files that don't end with .cu
 # or have the HEADER_FILE_ONLY property are ignored.
 #
-#
-#
-# ::
-#
 # The arguments passed in after OPTIONS are extra command line options to
 # give to nvcc. You can also specify per configuration options by
 # specifying the name of the configuration followed by the options. General
 # options must precede configuration-specific options. Not all
 # configurations need to be specified, only the ones provided will be used.
 #
-#
-#
-# ::
-#
 # OPTIONS -DFLAG=2 "-DFLAG_OTHER=space in flag"
 # DEBUG -g
 # RELEASE --use_fast_math
 # RELWITHDEBINFO --use_fast_math;-g
 # MINSIZEREL --use_fast_math
 #
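The configuration block above can be passed directly in a macro call; a sketch with a hypothetical target:

```cmake
# Assumes find_package(CUDA) has already run. General options come first,
# then per-configuration options keyed by configuration name.
cuda_add_executable(sim sim.cu
  OPTIONS -DFLAG=2 "-DFLAG_OTHER=space in flag"
  DEBUG -g
  RELEASE --use_fast_math)
```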
-#
-#
-# ::
-#
 # For certain configurations (namely VS generating object files with
 # CUDA_ATTACH_VS_BUILD_RULE_TO_CUDA_FILE set to ON), no generated file will
 # be produced for the given cuda file. This is because when you add the
 # cuda file to Visual Studio it knows that this file produces an object file
 # and will link in the resulting object file automatically.
 #
-#
-#
-# ::
-#
 # This script will also generate a separate cmake script that is used at
 # build time to invoke nvcc. This is for several reasons.
 #
-#
-#
-# ::
-#
 # 1. nvcc can return negative numbers as return values which confuses
 # Visual Studio into thinking that the command succeeded. The script now
 # checks the error codes and produces errors when there was a problem.
 #
-#
-#
-# ::
-#
 # 2. nvcc has been known to not delete incomplete results when it
 # encounters problems. This confuses build systems into thinking the
 # target was generated when in fact an unusable file exists. The script
 # now deletes the output files if there was an error.
 #
-#
-#
-# ::
-#
 # 3. By putting all the options that affect the build into a file and then
 # making the build rule dependent on the file, the output files will be
 # regenerated when the options change.
 #
-#
-#
-# ::
-#
 # This script also looks at optional arguments STATIC, SHARED, or MODULE to
 # determine when to target the object compilation for a shared library.
 # BUILD_SHARED_LIBS is ignored in CUDA_WRAP_SRCS, but it is respected in
@@ -366,27 +236,17 @@
 # <target_name>_EXPORTS is defined when a shared library compilation is
 # detected.
 #
-#
-#
-# ::
-#
 # Flags passed into add_definitions with -D or /D are passed along to nvcc.
 #
 #
 #
-# The script defines the following variables:
-#
-# ::
+# The script defines the following variables::
 #
 # CUDA_VERSION_MAJOR -- The major version of cuda as reported by nvcc.
 # CUDA_VERSION_MINOR -- The minor version.
 # CUDA_VERSION
 # CUDA_VERSION_STRING -- CUDA_VERSION_MAJOR.CUDA_VERSION_MINOR
 #
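A sketch of gating logic on these version variables (the 3.2 threshold mirrors the "CUDA version 3.2+" note later in this file):

```cmake
# Assumes find_package(CUDA) has already run and defined CUDA_VERSION.
if(CUDA_VERSION VERSION_LESS "3.2")
  message(STATUS "CUDA ${CUDA_VERSION_STRING} found; some variables need 3.2+")
endif()
```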
-#
-#
-# ::
-#
 # CUDA_TOOLKIT_ROOT_DIR -- Path to the CUDA Toolkit (defined if not set).
 # CUDA_SDK_ROOT_DIR -- Path to the CUDA SDK. Use this to find files in the
 # SDK. This script will not directly support finding
@@ -427,32 +287,15 @@
 # Only available for CUDA version 3.2+.
 # Windows only.
 #
-#
-#
-#
-#
-# ::
-#
 # James Bigler, NVIDIA Corp (nvidia.com - jbigler)
 # Abe Stephens, SCI Institute -- http://www.sci.utah.edu/~abe/FindCuda.html
 #
-#
-#
-# ::
-#
 # Copyright (c) 2008 - 2009 NVIDIA Corporation. All rights reserved.
 #
-#
-#
-# ::
-#
 # Copyright (c) 2007-2009
 # Scientific Computing and Imaging Institute, University of Utah
 #
-#
-#
-# ::
-#
 # This code is licensed under the MIT License. See the FindCUDA.cmake script
 # for the text of the license.