This command allows a user to quickly see the list of all available
test labels. The labels are also printed in verbose show-only mode,
alongside their corresponding tests.
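For illustration, a minimal sketch of labels such a listing would report (the test names and labels here are hypothetical; in current CTest the listing option is spelled --print-labels, and treating that as the name of this command is an assumption):

  add_test(NAME UnitFoo COMMAND foo_tests)   # hypothetical tests
  add_test(NAME UnitBar COMMAND bar_tests)
  set_tests_properties(UnitFoo UnitBar PROPERTIES LABELS "Unit;Nightly")
  # List every label without running anything:
  #   ctest --print-labels
  # See the labels next to each test in verbose show-only mode:
  #   ctest -N -V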
Pass the test when "Submission problem" appears in the output. This is
at least applicable to XML-RPC. The full error message is below:
------
Submission problem: Curl failed to perform HTTP POST request. curl_easy_perform() says: <url> malformed (-504)
.
Problems when submitting via XML-RPC
------
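One way this pass condition can be expressed on the test, as a minimal sketch (the test name and the combined regular expression are illustrative, not the actual Tests/CMakeLists.txt entry):

  set_tests_properties(CTestTestFailedSubmit-xmlrpc PROPERTIES
    PASS_REGULAR_EXPRESSION
      "(Problems when submitting via XML-RPC|Submission problem)")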
38c762c Merge 'remove-CTestTest3' into ctest-file-checksum
46df0b4 Activate retry code on any curl submit failure.
8705497 Checksum test should use CMAKE_TESTS_CDASH_SERVER
d0d1cdd Mock checksum failure output for old CDash versions
af5ef0c Testing for CTest checksum
86e81b5 CTest should resubmit in the checksum failed case
d6b7107 Fix subscript out of range crash
082c87e Cross-platform fixes for checksum/retry code
e525649 Checksums on CTest submit files, and retry timed out submissions.
The CTestTestFailedSubmit-http test was failing on the
hut11 Experimental dashboards with "Empty reply from
server" due to a localhost settings change.
At this point, CTestTest3 causes more problems than it's worth.
It uses CVS to grab a remote (over the network) copy of kwsys
code for testing. This causes some sort of problem nearly every
night on the nightly CMake dashboards. Worse: it causes problems
on different machines on different nights; then the next day it is
fine again. So: remove this test and monitor the coverage.
If we lose a significant portion of code coverage, I will revert
this commit and re-activate the test. However, if we do not lose
a significant portion of code coverage, I will remove the code
for the test as well as its entry in the CMakeLists.txt file.
Brad King and I discussed this over the last few weeks, and we both
think we have sufficient coverage of all the checkout and update code
in other locally based (non-network) tests.
On the other hand, even if we do take a mild hit on coverage temporarily,
it should be relatively easy to increase our coverage again by adding
bits to those other locally based tests.
The bootstrap script works under MSYS, so test it. Use a launcher batch
file because ctest (invoked with --build-and-test) is a native Windows
program and will not honor the shebang line in the script.
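A minimal sketch of how the test might be wired up, assuming a hypothetical launcher bootstrap.bat that simply invokes the MSYS shell on the bootstrap script (paths and option values are illustrative):

  add_test(BootstrapTest ${CMAKE_CTEST_COMMAND}
    --build-and-test ${CMake_SOURCE_DIR} ${CMake_BINARY_DIR}/Tests/BootstrapTest
    --build-nocmake
    --build-noclean
    --build-generator "${CMAKE_GENERATOR}"
    --build-makeprogram ${CMake_BINARY_DIR}/Tests/bootstrap.bat
    --test-command ${CMake_BINARY_DIR}/Tests/BootstrapTest/Bootstrap.cmk/cmake
    )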
Even though this test is checking that the ctest running it can handle
test output without newlines, we should run the just-built CMake binary.
This allows the MemCheck test mode to check the correct CMake.
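A minimal sketch of the idea, assuming the newline-free output is produced with the just-built cmake's "-E echo_append" and that the binary lives under the build tree's bin directory (both are assumptions; the real test may differ):

  set(CMAKE_CMAKE_COMMAND ${CMake_BINARY_DIR}/bin/cmake)  # just-built binary (assumed path)
  add_test(CTestTestNoNewline
    ${CMAKE_CMAKE_COMMAND} -E echo_append "output with no trailing newline")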
Add a LinkFlags test series to check that these properties work. Since
no link flag is accepted everywhere, we test for the presence of the flags
by adding a bad flag and looking for the complaint in the test output.
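A minimal sketch of the inner test project for one such case (names and the bad flag are illustrative):

  cmake_minimum_required(VERSION 2.8)
  project(LinkFlags C)
  add_executable(LinkFlags LinkFlags.c)
  # Request an intentionally bad link flag; the outer test then passes only
  # when the build output matches a complaint about it, e.g.
  #   set_tests_properties(<test> PROPERTIES PASS_REGULAR_EXPRESSION "BADFLAG")
  set_target_properties(LinkFlags PROPERTIES LINK_FLAGS "-BADFLAG")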
Some Mac linkers produce the message
"file was built for unsupported file format which is not the
architecture being linked"
for this test. Update the test output regex to match it.
If defined and non-empty, the value of CMAKE_TESTS_CDASH_SERVER should point
to a CDash server willing to accept submissions for a project named
PublicDashboard. On machines that also run a CDash dashboard, set this
variable to "http://localhost/CDash-trunk-Testing" so that the CMake tests
that submit dashboards do not have to send those submissions over the wire.
The CTestSubmitLargeOutput test runs a dashboard with a test that produces
a very large amount of output on stdout/stderr. Since we do not even want to
attempt to send such large output over the wire, this test is off by default
unless the CMAKE_TESTS_CDASH_SERVER server is localhost. This test is expected
to cause a submission failure when sent to CDash. It passes if the submit
results contain error output. It fails if the submit succeeds.
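A minimal sketch of the guard described above (names, paths, and the exact pass expression are illustrative):

  if(CMAKE_TESTS_CDASH_SERVER MATCHES "^http://localhost/")
    add_test(CTestSubmitLargeOutput ${CMAKE_CTEST_COMMAND}
      -S ${CMAKE_CURRENT_BINARY_DIR}/CTestSubmitLargeOutput/test.cmake -V)
    # The submission is expected to fail; pass only when the submit step
    # reports errors.
    set_tests_properties(CTestSubmitLargeOutput PROPERTIES
      PASS_REGULAR_EXPRESSION "Error")
  endif()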
CMAKE_TESTS_CDASH_SERVER: CDash server used by CMake/Tests.
If not defined or "", this variable defaults to the server at
http://www.cdash.org/CDash.
If set explicitly to "NOTFOUND", curl tests and ctest tests that use the
network are skipped.
If set to something starting with "http://localhost/", the server is expected
to be an instance of CDash used for CDash testing, pointing to a
cdash4simpletest database. In these cases, the CDash dashboards should be
run first.
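For illustration, one way a dashboard machine might set the variable when configuring CMake's own build, e.g. in an initial-cache script passed with "cmake -C" (the mechanism is the machine maintainer's choice, not mandated here):

  set(CMAKE_TESTS_CDASH_SERVER "http://localhost/CDash-trunk-Testing"
      CACHE STRING "CDash server used by CMake/Tests.")
  # Or, to skip the curl and ctest tests that need the network:
  #   set(CMAKE_TESTS_CDASH_SERVER NOTFOUND CACHE STRING "")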
We teach ADD_TEST_MACRO to transform names of the form "Namespace.Name"
to the directory "Namespace/Name" and the project name "Name". This
will allow new tests to be better organized.
We create test FortranC.Flags to try passing per-language flags from a
project into its FortranCInterface detect/verify checks. We wrap the
compilers with scripts that enforce presence of expected flags.
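A minimal sketch of the "Namespace.Name" transformation (illustrative, not the macro's actual body):

  set(name "FortranC.Flags")
  string(REPLACE "." "/" dir "${name}")             # -> FortranC/Flags (test directory)
  string(REGEX REPLACE "^.*\\." "" proj "${name}")  # -> Flags (project name)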
CMake does not enable Fortran for its own build, but it needs to find a
Fortran compiler to know if it is possible to enable Fortran tests.
Previously we searched for a hard-coded list of Fortran compilers which
was duplicated from the CMakeDetermineFortranCompiler.cmake module. We
now run CMake on a small test project that enables the Fortran language
and reports the compiler it found. This represents a more realistic
check of whether the Fortran tests will be able to find a compiler.
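A minimal sketch of such a probe project (file and variable names are illustrative, not the files CMake actually ships):

  cmake_minimum_required(VERSION 2.8)
  project(CheckFortran Fortran)
  # Record which compiler the Fortran language enablement found.
  file(WRITE "${CMAKE_CURRENT_BINARY_DIR}/result.cmake"
    "set(CMAKE_Fortran_COMPILER \"${CMAKE_Fortran_COMPILER}\")\n")

The main build can then configure this project with execute_process, check the return code, and include() the generated result.cmake to decide whether the Fortran tests can be enabled.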
We create option CMake_TEST_INSTALL to enable a new CMake.Install test.
It tests running the "make install" target to install CMake itself into
a test directory. We enable the option by default for dashboard builds.
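A minimal sketch of the option and test registration (illustrative; the real test also redirects the installation into a scratch directory inside the build tree rather than the configured prefix):

  option(CMake_TEST_INSTALL "Test the 'make install' target of CMake itself" OFF)
  if(CMake_TEST_INSTALL)
    add_test(CMake.Install
      ${CMAKE_COMMAND} --build ${CMake_BINARY_DIR} --target install)
  endif()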
We configure an EnforceConfig.cmake script to load at CTest time.
Previously we loaded it from Tests/CTestTestfile.cmake, but now we load
it from the top level so it applies to all tests.
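A minimal sketch, assuming the script is hooked in through the TEST_INCLUDE_FILE directory property so that CTest include()s it from the generated top-level CTestTestfile.cmake (the property is real; whether it is the exact mechanism used here is an assumption):

  configure_file(${CMake_SOURCE_DIR}/Tests/EnforceConfig.cmake.in
                 ${CMake_BINARY_DIR}/Tests/EnforceConfig.cmake @ONLY)
  set_directory_properties(PROPERTIES
    TEST_INCLUDE_FILE "${CMake_BINARY_DIR}/Tests/EnforceConfig.cmake")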
This test requires that the dashboard script it drives be invoked with
"ctest -C <config> -S ...". We create a "CTestTest_CONFIG" variable to
hold a configuration selected at test time. We use the configuration
given to the outer CTest, if any, and then default to either Debug or
the CMAKE_BUILD_TYPE.
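A minimal sketch of the selection logic in a configured script (the @CMAKE_BUILD_TYPE@ substitution and the ordering of the fallbacks are assumptions):

  if(CTEST_CONFIGURATION_TYPE)
    # Configuration passed to the outer "ctest -C <config>".
    set(CTestTest_CONFIG "${CTEST_CONFIGURATION_TYPE}")
  elseif(NOT "@CMAKE_BUILD_TYPE@" STREQUAL "")
    set(CTestTest_CONFIG "@CMAKE_BUILD_TYPE@")
  else()
    set(CTestTest_CONFIG "Debug")
  endif()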
We extend the CTestTestTimeout test to check that when a test times out
its children (grandchildren of ctest) are killed. Instead of running
the timeout executable directly, we run it through a cmake script that
redirects the timeout executable output to a file. A second test later
runs and verifies that the timeout executable was unable to complete and
write data to the log file. Only if the first inner test times out and
the second inner test passes (log is empty) does the CTestTestTimeout
test pass.
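A minimal sketch of the redirecting wrapper script, run as "cmake -DTimeout_EXE=<path> -DTimeout_LOG=<path> -P TimeoutScript.cmake" (all names are illustrative):

  # Run the timeout executable and send everything it prints to the log file.
  # If ctest kills the whole test process tree on timeout, the executable
  # never completes its write, and the later verification test finds the
  # log empty.
  execute_process(COMMAND "${Timeout_EXE}"
    OUTPUT_FILE "${Timeout_LOG}"
    ERROR_FILE  "${Timeout_LOG}"
    )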