CTest: Add support for test fixtures

Add new test properties:

* FIXTURES_SETUP
* FIXTURES_CLEANUP
* FIXTURES_REQUIRED

to specify the roles and dependencies of tests providing/using
test fixtures.
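A typical usage pattern, as a minimal sketch (the test and fixture
names below are illustrative only, not part of this change):

    add_test(NAME setupDb COMMAND initDb)
    add_test(NAME useDb COMMAND runQueries)
    add_test(NAME cleanupDb COMMAND dropDb)
    set_tests_properties(setupDb PROPERTIES FIXTURES_SETUP Db)
    set_tests_properties(useDb PROPERTIES FIXTURES_REQUIRED Db)
    set_tests_properties(cleanupDb PROPERTIES FIXTURES_CLEANUP Db)

CTest then runs setupDb before useDb and cleanupDb after it, and adds the
setup/cleanup tests automatically when useDb is run on its own.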
Craig Scott 2016-09-07 12:04:07 +08:00 committed by Brad King
parent 6b8812c27e
commit 73f47c9e46
28 changed files with 646 additions and 3 deletions

@@ -304,6 +304,9 @@ Properties on Tests
/prop_test/DEPENDS
/prop_test/ENVIRONMENT
/prop_test/FAIL_REGULAR_EXPRESSION
/prop_test/FIXTURES_CLEANUP
/prop_test/FIXTURES_REQUIRED
/prop_test/FIXTURES_SETUP
/prop_test/LABELS
/prop_test/MEASUREMENT
/prop_test/PASS_REGULAR_EXPRESSION

@@ -3,4 +3,8 @@ DEPENDS
 Specifies that this test should only be run after the specified list of tests.
-Set this to a list of tests that must finish before this test is run.
+Set this to a list of tests that must finish before this test is run. The
+results of those tests are not considered, the dependency relationship is
+purely for order of execution (i.e. it is really just a *run after*
+relationship). Consider using test fixtures with setup tests if a dependency
+with successful completion is required (see :prop_test:`FIXTURES_REQUIRED`).
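For contrast, a pure ordering relationship (the test names here are
hypothetical, not from this commit) would be expressed as:

.. code-block:: cmake

  # b always runs after a, even if a fails
  set_tests_properties(b PROPERTIES DEPENDS a)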

@@ -0,0 +1,46 @@
FIXTURES_CLEANUP
----------------

Specifies a list of fixtures for which the test is to be treated as a cleanup
test.

Fixture cleanup tests are ordinary tests with all of the usual test
functionality. Setting the ``FIXTURES_CLEANUP`` property for a test has two
primary effects:

- CTest will ensure the test executes after all other tests which list any of
  the fixtures in their :prop_test:`FIXTURES_REQUIRED` property.

- If CTest is asked to run only a subset of tests (e.g. using regular
  expressions or the ``--rerun-failed`` option) and the cleanup test is not in
  the set of tests to run, it will automatically be added if any tests in the
  set require any fixture listed in ``FIXTURES_CLEANUP``.

A cleanup test can have multiple fixtures listed in its ``FIXTURES_CLEANUP``
property. It will execute only once for the whole CTest run, not once for each
fixture. A fixture can also have more than one cleanup test defined. If there
are multiple cleanup tests for a fixture, projects can control their order with
the usual :prop_test:`DEPENDS` test property if necessary, as in the sketch
below.
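As an illustration only (the test and fixture names here are hypothetical,
not part of this commit), two cleanup tests for the same fixture can be
ordered like this:

.. code-block:: cmake

  add_test(NAME closeConnections COMMAND closeAllDbConnections)
  add_test(NAME dropDb COMMAND destroyDatabase)

  # Both tests clean up the Db fixture; DEPENDS makes dropDb run last.
  set_tests_properties(closeConnections PROPERTIES FIXTURES_CLEANUP Db)
  set_tests_properties(dropDb PROPERTIES FIXTURES_CLEANUP Db
                                         DEPENDS closeConnections)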
A cleanup test is allowed to require other fixtures, but not any fixture listed
in its ``FIXTURES_CLEANUP`` property. For example:

.. code-block:: cmake

  # Ok: Dependent fixture is different to cleanup
  set_tests_properties(cleanupFoo PROPERTIES
    FIXTURES_CLEANUP Foo
    FIXTURES_REQUIRED Bar
  )

  # Error: cannot require same fixture as cleanup
  set_tests_properties(cleanupFoo PROPERTIES
    FIXTURES_CLEANUP Foo
    FIXTURES_REQUIRED Foo
  )

Cleanup tests will execute even if setup or regular tests for that fixture fail
or are skipped.

See :prop_test:`FIXTURES_REQUIRED` for a more complete discussion of how to use
test fixtures.

@@ -0,0 +1,94 @@
FIXTURES_REQUIRED
-----------------

Specifies a list of fixtures the test requires. Fixture names are case
sensitive.

Fixtures are a way to attach setup and cleanup tasks to a set of tests. If a
test requires a given fixture, then all tests marked as setup tasks for that
fixture will be executed first (once for the whole set of tests, not once per
test requiring the fixture). After all tests requiring a particular fixture
have completed, CTest will ensure all tests marked as cleanup tasks for that
fixture are then executed. Tests are marked as setup tasks with the
:prop_test:`FIXTURES_SETUP` property and as cleanup tasks with the
:prop_test:`FIXTURES_CLEANUP` property. If any of a fixture's setup tests fail,
none of the tests listing that fixture in their ``FIXTURES_REQUIRED`` property
will be executed. The cleanup tests for the fixture will always be executed,
even if some setup tests fail.

When CTest is asked to execute only a subset of tests (e.g. by the use of
regular expressions or when run with the ``--rerun-failed`` command line
option), it will automatically add any setup or cleanup tests for fixtures
required by any of the tests that are in the execution set.

Since setup and cleanup tasks are also tests, they can have an ordering
specified by the :prop_test:`DEPENDS` test property just like any other tests.
This can be exploited to implement setup or cleanup using multiple tests for a
single fixture, modularising the setup or cleanup logic (see the sketch below).
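A minimal sketch of modularised setup (again with hypothetical names, not
taken from this commit), where a fixture's setup is split across two ordered
tests:

.. code-block:: cmake

  add_test(NAME createSchema COMMAND initSchema)
  add_test(NAME loadTestData COMMAND importData)

  # Both tests set up the Db fixture; the data is only loaded once the
  # schema exists.
  set_tests_properties(createSchema PROPERTIES FIXTURES_SETUP Db)
  set_tests_properties(loadTestData PROPERTIES FIXTURES_SETUP Db
                                               DEPENDS createSchema)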
The concept of a fixture is different to that of a resource specified by
:prop_test:`RESOURCE_LOCK`, but they may be used together. A fixture defines a
set of tests which share setup and cleanup requirements, whereas a resource
lock has the effect of ensuring a particular set of tests do not run in
parallel. Some situations may need both, such as setting up a database,
serialising test access to that database and deleting the database again at the
end. For such cases, tests would populate both ``FIXTURES_REQUIRED`` and
:prop_test:`RESOURCE_LOCK` to combine the two behaviours. Names used for
:prop_test:`RESOURCE_LOCK` have no relationship with names of fixtures, so note
that a resource lock does not imply a fixture and vice versa.

Consider the following example which represents a database test scenario
similar to that mentioned above:

.. code-block:: cmake

  add_test(NAME testsDone   COMMAND emailResults)
  add_test(NAME fooOnly     COMMAND testFoo)
  add_test(NAME dbOnly      COMMAND testDb)
  add_test(NAME dbWithFoo   COMMAND testDbWithFoo)
  add_test(NAME createDB    COMMAND initDB)
  add_test(NAME setupUsers  COMMAND userCreation)
  add_test(NAME cleanupDB   COMMAND deleteDB)
  add_test(NAME cleanupFoo  COMMAND removeFoos)

  set_tests_properties(setupUsers PROPERTIES DEPENDS createDB)
  set_tests_properties(createDB   PROPERTIES FIXTURES_SETUP   DB)
  set_tests_properties(setupUsers PROPERTIES FIXTURES_SETUP   DB)
  set_tests_properties(cleanupDB  PROPERTIES FIXTURES_CLEANUP DB)
  set_tests_properties(cleanupFoo PROPERTIES FIXTURES_CLEANUP Foo)
  set_tests_properties(testsDone  PROPERTIES FIXTURES_CLEANUP "DB;Foo")

  set_tests_properties(fooOnly    PROPERTIES FIXTURES_REQUIRED Foo)
  set_tests_properties(dbOnly     PROPERTIES FIXTURES_REQUIRED DB)
  set_tests_properties(dbWithFoo  PROPERTIES FIXTURES_REQUIRED "DB;Foo")

  set_tests_properties(dbOnly dbWithFoo createDB setupUsers cleanupDB
                       PROPERTIES RESOURCE_LOCK DbAccess)

Key points from this example:

- Two fixtures are defined: ``DB`` and ``Foo``. Tests can require a single
  fixture as ``fooOnly`` and ``dbOnly`` do, or they can depend on multiple
  fixtures like ``dbWithFoo`` does.

- A ``DEPENDS`` relationship is set up to ensure ``setupUsers`` happens after
  ``createDB``, both of which are setup tests for the ``DB`` fixture and will
  therefore be executed before the ``dbOnly`` and ``dbWithFoo`` tests
  automatically.

- No explicit ``DEPENDS`` relationships were needed to make the setup tests run
  before or the cleanup tests run after the regular tests.

- The ``Foo`` fixture has no setup tests defined, only a single cleanup test.

- ``testsDone`` is a cleanup test for both the ``DB`` and ``Foo`` fixtures.
  Therefore, it will only execute once regular tests for both fixtures have
  finished (i.e. after ``fooOnly``, ``dbOnly`` and ``dbWithFoo``). No
  ``DEPENDS`` relationship was specified for ``testsDone``, so it is free to
  run before, after or concurrently with other cleanup tests for either
  fixture.

- The setup and cleanup tests never list the fixtures they are for in their own
  ``FIXTURES_REQUIRED`` property, as that would result in a dependency on
  themselves and be considered an error.
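Following from the example above, asking CTest to run only ``dbOnly`` (e.g.
with ``ctest -R dbOnly``) should automatically pull in ``createDB``,
``setupUsers``, ``cleanupDB`` and ``testsDone`` as well, since those are the
setup and cleanup tests for the ``DB`` fixture that ``dbOnly`` requires.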

@@ -0,0 +1,47 @@
FIXTURES_SETUP
--------------

Specifies a list of fixtures for which the test is to be treated as a setup
test.

Fixture setup tests are ordinary tests with all of the usual test
functionality. Setting the ``FIXTURES_SETUP`` property for a test has two
primary effects:

- CTest will ensure the test executes before any other test which lists the
  fixture(s) in its :prop_test:`FIXTURES_REQUIRED` property.

- If CTest is asked to run only a subset of tests (e.g. using regular
  expressions or the ``--rerun-failed`` option) and the setup test is not in
  the set of tests to run, it will automatically be added if any tests in the
  set require any fixture listed in ``FIXTURES_SETUP``.

A setup test can have multiple fixtures listed in its ``FIXTURES_SETUP``
property. It will execute only once for the whole CTest run, not once for each
fixture. A fixture can also have more than one setup test defined. If there are
multiple setup tests for a fixture, projects can control their order with the
usual :prop_test:`DEPENDS` test property if necessary.

A setup test is allowed to require other fixtures, but not any fixture listed
in its ``FIXTURES_SETUP`` property. For example:

.. code-block:: cmake

  # Ok: dependent fixture is different to setup
  set_tests_properties(setupFoo PROPERTIES
    FIXTURES_SETUP Foo
    FIXTURES_REQUIRED Bar
  )

  # Error: cannot require same fixture as setup
  set_tests_properties(setupFoo PROPERTIES
    FIXTURES_SETUP Foo
    FIXTURES_REQUIRED Foo
  )

If any of a fixture's setup tests fail, none of the tests listing that fixture
in their :prop_test:`FIXTURES_REQUIRED` property will be run. Cleanup tests
will, however, still be executed.

See :prop_test:`FIXTURES_REQUIRED` for a more complete discussion of how to use
test fixtures.

@@ -5,3 +5,6 @@ Specify a list of resources that are locked by this test.
If multiple tests specify the same resource lock, they are guaranteed
not to run concurrently.

See also :prop_test:`FIXTURES_REQUIRED` if the resource requires any setup or
cleanup steps.

@@ -0,0 +1,8 @@
test-fixtures
-------------

* CTest now supports test fixtures through the new :prop_test:`FIXTURES_SETUP`,
  :prop_test:`FIXTURES_CLEANUP` and :prop_test:`FIXTURES_REQUIRED` test
  properties. When using regular expressions or ``--rerun-failed`` to limit
  the tests to be run, a fixture's setup and cleanup tests will automatically
  be added to the execution set if any test requires that fixture.

@@ -137,6 +137,16 @@ void cmCTestMultiProcessHandler::StartTestProcess(int test)
  testRun->SetIndex(test);
  testRun->SetTestProperties(this->Properties[test]);

  // Find any failed dependencies for this test. We assume the more common
  // scenario has no failed tests, so make it the outer loop.
  for (std::vector<std::string>::const_iterator it = this->Failed->begin();
       it != this->Failed->end(); ++it) {
    if (this->Properties[test]->RequireSuccessDepends.find(*it) !=
        this->Properties[test]->RequireSuccessDepends.end()) {
      testRun->AddFailedDependency(*it);
    }
  }

  std::string current_dir = cmSystemTools::GetCurrentWorkingDirectory();
  cmSystemTools::ChangeDirectory(this->Properties[test]->Directory);
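The user-visible effect of a failed dependency appears in the ``wontRun``
case in the new RunCMake tests further down: a test whose fixture setup
failed is reported as ``Not Run`` with a ``Failed test dependencies:``
message instead of being executed.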

@@ -177,7 +177,8 @@ bool cmCTestRunTest::EndTest(size_t completed, size_t total, bool started)
    passIt;
  bool forceFail = false;
  bool outputTestErrorsToConsole = false;
-  if (!this->TestProperties->RequiredRegularExpressions.empty()) {
+  if (!this->TestProperties->RequiredRegularExpressions.empty() &&
+      this->FailedDependencies.empty()) {
    bool found = false;
    for (passIt = this->TestProperties->RequiredRegularExpressions.begin();
         passIt != this->TestProperties->RequiredRegularExpressions.end();
@@ -201,7 +202,8 @@ bool cmCTestRunTest::EndTest(size_t completed, size_t total, bool started)
      }
      reason += "]";
    }
-  if (!this->TestProperties->ErrorRegularExpressions.empty()) {
+  if (!this->TestProperties->ErrorRegularExpressions.empty() &&
+      this->FailedDependencies.empty()) {
    for (passIt = this->TestProperties->ErrorRegularExpressions.begin();
         passIt != this->TestProperties->ErrorRegularExpressions.end();
         ++passIt) {
@@ -437,6 +439,23 @@ bool cmCTestRunTest::StartTest(size_t total)
  this->TestResult.Name = this->TestProperties->Name;
  this->TestResult.Path = this->TestProperties->Directory;

  if (!this->FailedDependencies.empty()) {
    this->TestProcess = new cmProcess;
    std::string msg = "Failed test dependencies:";
    for (std::set<std::string>::const_iterator it =
           this->FailedDependencies.begin();
         it != this->FailedDependencies.end(); ++it) {
      msg += " " + *it;
    }
    *this->TestHandler->LogFile << msg << std::endl;
    cmCTestLog(this->CTest, HANDLER_OUTPUT, msg << std::endl);
    this->TestResult.Output = msg;
    this->TestResult.FullCommandLine = "";
    this->TestResult.CompletionStatus = "Not Run";
    this->TestResult.Status = cmCTestTestHandler::NOT_RUN;
    return false;
  }

  if (args.size() >= 2 && args[1] == "NOT_AVAILABLE") {
    this->TestProcess = new cmProcess;
    std::string msg;

@@ -49,6 +49,11 @@ public:
  int GetIndex() { return this->Index; }

  void AddFailedDependency(const std::string& failedTest)
  {
    this->FailedDependencies.insert(failedTest);
  }

  std::string GetProcessOutput() { return this->ProcessOutput; }
  bool IsStopTimePassed() { return this->StopTimePassed; }

@@ -106,6 +111,7 @@ private:
  // The test results
  cmCTestTestHandler::cmCTestTestResult TestResult;
  int Index;
  std::set<std::string> FailedDependencies;
  std::string StartTime;
  std::string ActualCommand;
  std::vector<std::string> Arguments;

@@ -33,6 +33,7 @@
#include <cmsys/RegularExpression.hxx>
#include <functional>
#include <iomanip>
#include <iterator>
#include <set>
#include <sstream>
#include <stdio.h>
@@ -762,6 +763,9 @@ void cmCTestTestHandler::ComputeTestList()
    it->Index = cnt; // save the index into the test list for this test
    finalList.push_back(*it);
  }

  UpdateForFixtures(finalList);

  // Save the total number of tests before exclusions
  this->TotalNumberOfTests = this->TestList.size();
  // Set the TestList to the final list of all test
@@ -791,6 +795,8 @@ void cmCTestTestHandler::ComputeTestListForRerunFailed()
    finalList.push_back(*it);
  }

  UpdateForFixtures(finalList);

  // Save the total number of tests before exclusions
  this->TotalNumberOfTests = this->TestList.size();
@@ -800,6 +806,169 @@ void cmCTestTestHandler::ComputeTestListForRerunFailed()
  this->UpdateMaxTestNameWidth();
}

void cmCTestTestHandler::UpdateForFixtures(ListOfTests& tests) const
{
  cmCTestOptionalLog(this->CTest, HANDLER_VERBOSE_OUTPUT,
                     "Updating test list for fixtures" << std::endl,
                     this->Quiet);

  // Prepare some maps to help us find setup and cleanup tests for
  // any given fixture
  typedef std::set<ListOfTests::const_iterator> TestIteratorSet;
  typedef std::map<std::string, TestIteratorSet> FixtureDependencies;
  FixtureDependencies fixtureSetups;
  FixtureDependencies fixtureDeps;

  for (ListOfTests::const_iterator it = this->TestList.begin();
       it != this->TestList.end(); ++it) {
    const cmCTestTestProperties& p = *it;

    const std::set<std::string>& setups = p.FixturesSetup;
    for (std::set<std::string>::const_iterator depsIt = setups.begin();
         depsIt != setups.end(); ++depsIt) {
      fixtureSetups[*depsIt].insert(it);
      fixtureDeps[*depsIt].insert(it);
    }

    const std::set<std::string>& cleanups = p.FixturesCleanup;
    for (std::set<std::string>::const_iterator depsIt = cleanups.begin();
         depsIt != cleanups.end(); ++depsIt) {
      fixtureDeps[*depsIt].insert(it);
    }
  }

  // Prepare fast lookup of tests already included in our list of tests
  std::set<std::string> addedTests;
  for (ListOfTests::const_iterator it = tests.begin(); it != tests.end();
       ++it) {
    const cmCTestTestProperties& p = *it;
    addedTests.insert(p.Name);
  }

  // This is a lookup of fixture name to a list of indices into the
  // final tests array for tests which require that fixture. It is
  // needed at the end to populate dependencies of the cleanup tests
  // in our final list of tests.
  std::map<std::string, std::vector<size_t> > fixtureRequirements;

  // Use integer index for iteration because we append to
  // the tests vector as we go
  size_t fixtureTestsAdded = 0;
  std::set<std::string> addedFixtures;
  for (size_t i = 0; i < tests.size(); ++i) {
    if (tests[i].FixturesRequired.empty()) {
      continue;
    }
    // Must copy the set of fixtures because we may invalidate
    // the tests array by appending to it
    const std::set<std::string> fixtures = tests[i].FixturesRequired;
    for (std::set<std::string>::const_iterator fixturesIt = fixtures.begin();
         fixturesIt != fixtures.end(); ++fixturesIt) {

      const std::string& requiredFixtureName = *fixturesIt;
      if (requiredFixtureName.empty()) {
        continue;
      }

      fixtureRequirements[requiredFixtureName].push_back(i);

      // Add dependencies to this test for all of the setup tests
      // associated with the required fixture. If any of those setup
      // tests fail, this test should not run. We make the fixture's
      // cleanup tests depend on this test case later.
      FixtureDependencies::const_iterator setupIt =
        fixtureSetups.find(requiredFixtureName);
      if (setupIt != fixtureSetups.end()) {
        for (TestIteratorSet::const_iterator sIt = setupIt->second.begin();
             sIt != setupIt->second.end(); ++sIt) {
          const std::string& setupTestName = (**sIt).Name;
          tests[i].RequireSuccessDepends.insert(setupTestName);
          if (std::find(tests[i].Depends.begin(), tests[i].Depends.end(),
                        setupTestName) == tests[i].Depends.end()) {
            tests[i].Depends.push_back(setupTestName);
          }
        }
      }

      // Append any fixture setup/cleanup tests to our test list if they
      // are not already in it (they could have been in the original
      // set of tests passed to us at the outset or have already been
      // added from a previously checked test). A fixture isn't required
      // to have setup/cleanup tests.
      if (!addedFixtures.insert(requiredFixtureName).second) {
        // Already added this fixture
        continue;
      }
      FixtureDependencies::const_iterator fixtureIt =
        fixtureDeps.find(requiredFixtureName);
      if (fixtureIt == fixtureDeps.end()) {
        // No setup or cleanup tests for this fixture
        continue;
      }
      const TestIteratorSet& testIters = fixtureIt->second;
      for (TestIteratorSet::const_iterator depsIt = testIters.begin();
           depsIt != testIters.end(); ++depsIt) {
        ListOfTests::const_iterator lotIt = *depsIt;
        const cmCTestTestProperties& p = *lotIt;

        if (!addedTests.insert(p.Name).second) {
          // Already have p in our test list
          continue;
        }

        // This is a test not yet in our list, so add it and
        // update its index to reflect where it was in the original
        // full list of all tests (needed to track individual tests
        // across ctest runs for re-run failed, etc.)
        tests.push_back(p);
        tests.back().Index =
          1 + static_cast<int>(std::distance(this->TestList.begin(), lotIt));

        ++fixtureTestsAdded;

        cmCTestOptionalLog(this->CTest, HANDLER_VERBOSE_OUTPUT, "Added test "
                             << p.Name << " required by fixture "
                             << requiredFixtureName << std::endl,
                           this->Quiet);
      }
    }
  }

  // Now that we have the final list of tests, we can update all cleanup
  // tests to depend on those tests which require that fixture
  for (ListOfTests::iterator tIt = tests.begin(); tIt != tests.end(); ++tIt) {
    cmCTestTestProperties& p = *tIt;
    const std::set<std::string>& cleanups = p.FixturesCleanup;
    for (std::set<std::string>::const_iterator fIt = cleanups.begin();
         fIt != cleanups.end(); ++fIt) {
      const std::string& fixture = *fIt;
      std::map<std::string, std::vector<size_t> >::const_iterator cIt =
        fixtureRequirements.find(fixture);
      if (cIt == fixtureRequirements.end()) {
        // No test cases require the fixture this cleanup test is for.
        // This cleanup test must have been part of the original test
        // list passed in (which is not an error)
        continue;
      }

      const std::vector<size_t>& indices = cIt->second;
      for (std::vector<size_t>::const_iterator indexIt = indices.begin();
           indexIt != indices.end(); ++indexIt) {
        const std::string& reqTestName = tests[*indexIt].Name;
        if (std::find(p.Depends.begin(), p.Depends.end(), reqTestName) ==
            p.Depends.end()) {
          p.Depends.push_back(reqTestName);
        }
      }
    }
  }

  cmCTestOptionalLog(this->CTest, HANDLER_VERBOSE_OUTPUT, "Added "
                       << fixtureTestsAdded
                       << " tests to meet fixture requirements" << std::endl,
                     this->Quiet);
}
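In outline, UpdateForFixtures makes three passes: it first maps every fixture
name to its setup and cleanup tests, then walks the selected tests and, for
each required fixture, records success dependencies on that fixture's setup
tests and appends any missing setup/cleanup tests to the run, and finally
makes each cleanup test depend on every test that requires its fixture so it
runs last.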
void cmCTestTestHandler::UpdateMaxTestNameWidth()
{
  std::string::size_type max = this->CTest->GetMaxTestNameWidth();

@@ -1829,6 +1998,24 @@ bool cmCTestTestHandler::SetTestsProperties(
          rtit->LockedResources.insert(lval.begin(), lval.end());
        }

        if (key == "FIXTURES_SETUP") {
          std::vector<std::string> lval;
          cmSystemTools::ExpandListArgument(val, lval);
          rtit->FixturesSetup.insert(lval.begin(), lval.end());
        }
        if (key == "FIXTURES_CLEANUP") {
          std::vector<std::string> lval;
          cmSystemTools::ExpandListArgument(val, lval);
          rtit->FixturesCleanup.insert(lval.begin(), lval.end());
        }
        if (key == "FIXTURES_REQUIRED") {
          std::vector<std::string> lval;
          cmSystemTools::ExpandListArgument(val, lval);
          rtit->FixturesRequired.insert(lval.begin(), lval.end());
        }

        if (key == "TIMEOUT") {
          rtit->Timeout = atof(val.c_str());
          rtit->ExplicitTimeout = true;

@@ -139,6 +139,10 @@ public:
    std::vector<std::string> Environment;
    std::vector<std::string> Labels;
    std::set<std::string> LockedResources;
    std::set<std::string> FixturesSetup;
    std::set<std::string> FixturesCleanup;
    std::set<std::string> FixturesRequired;
    std::set<std::string> RequireSuccessDepends;
  };

  struct cmCTestTestResult

@@ -251,6 +255,11 @@ private:
  // based on LastTestFailed.log
  void ComputeTestListForRerunFailed();

  // add required setup/cleanup tests not already in the
  // list of tests to be run and update dependencies between
  // tests to account for fixture setup/cleanup
  void UpdateForFixtures(ListOfTests& tests) const;

  void UpdateMaxTestNameWidth();

  bool GetValue(const char* tag, std::string& value, std::istream& fin);

@@ -192,6 +192,7 @@ add_RunCMake_test(ctest_start)
add_RunCMake_test(ctest_submit)
add_RunCMake_test(ctest_test)
add_RunCMake_test(ctest_upload)
add_RunCMake_test(ctest_fixtures)
add_RunCMake_test(file)
add_RunCMake_test(find_file)
add_RunCMake_test(find_library)

@@ -0,0 +1,81 @@
cmake_minimum_required(VERSION 3.6.2)
project(ctest_fixtures LANGUAGES NONE)
include(CTest)

# Creates a test that always passes (compares a file with itself)
macro(passTest testName)
  set(someFile "${CMAKE_CURRENT_SOURCE_DIR}/test.cmake")
  add_test(NAME ${testName}
           COMMAND ${CMAKE_COMMAND} -E compare_files "${someFile}" "${someFile}")
endmacro()

# Creates a test that always fails (compares a file with a name that
# does not exist)
macro(failTest testName)
  set(someFile "${CMAKE_CURRENT_SOURCE_DIR}/test.cmake")
  add_test(NAME ${testName}
           COMMAND ${CMAKE_COMMAND} -E compare_files "${someFile}" "${someFile}xxx")
endmacro()

# Intersperse actual tests among setup/cleanup tests so that we don't
# define them in the same order as they need to be executed. Numbers
# at the end of each line correspond to the test numbers ctest will
# use for each test.
passTest(one)            # 1
passTest(setupBoth)      # 2
passTest(setupFoo)       # 3
passTest(setupMeta)      # 4
passTest(cleanupFoo)     # 5
passTest(two)            # 6
passTest(cleanupBar)     # 7
passTest(three)          # 8
failTest(setupFails)     # 9
passTest(wontRun)        # 10
passTest(cyclicSetup)    # 11
passTest(cyclicCleanup)  # 12

# Define fixture dependencies and ordering
set_tests_properties(setupFoo   PROPERTIES FIXTURES_SETUP   "Foo")
set_tests_properties(cleanupFoo PROPERTIES FIXTURES_CLEANUP "Foo")
set_tests_properties(setupBoth  PROPERTIES FIXTURES_SETUP   "Foo;Bar")
set_tests_properties(cleanupBar PROPERTIES FIXTURES_CLEANUP "Bar")
set_tests_properties(setupMeta  PROPERTIES FIXTURES_SETUP   "Meta"
                                           FIXTURES_REQUIRED "Foo;Bar")
set_tests_properties(setupBoth  PROPERTIES DEPENDS setupFoo)
set_tests_properties(setupFails PROPERTIES FIXTURES_SETUP "Fails")

set_tests_properties(one     PROPERTIES FIXTURES_REQUIRED "Other;Foo")
set_tests_properties(two     PROPERTIES FIXTURES_REQUIRED "Bar")
set_tests_properties(three   PROPERTIES FIXTURES_REQUIRED "Meta;Bar")
set_tests_properties(wontRun PROPERTIES FIXTURES_REQUIRED "Fails")

@CASE_CMAKELISTS_CYCLIC_CODE@
# These are the cases verified by the main cmake build
#
#   Regex:         Test case list (in order)
#   one            3, 2, 1, 5
#   two            2, 6, 7
#   three          3, 2, 4, 5, 8, 7
#   setupFoo       3
#   wontRun        9, 10
#   cyclicSetup    -NA- (configure fails)
#   cyclicCleanup  -NA- (configure fails)
#
# In the case of asking for just setupFoo, since there are
# no tests using the Foo fixture, we do NOT expect cleanupFoo
# to be executed. It is important not to pull in cleanupFoo
# if setupFoo is explicitly requested and no other test requires
# the Foo fixture, otherwise it would not be possible to run
# just a setup or cleanup test in isolation (likely to be
# needed during initial creation of such test cases).
#
# For the wontRun case, test 9 fails and test 10 should not run.
# The result of the set of tests should be failure, which is
# verified by the main cmake build's tests.
#
# For the two cyclic test cases invoked by the main cmake build,
# FIXTURES_... properties are added to the relevant test at the
# location marked with CASE_CMAKELISTS_CYCLIC_CODE. This creates
# a self-dependency which causes the configure step to fail.

@@ -0,0 +1 @@
set(CTEST_PROJECT_NAME "CTestTestFixtures.@CASE_NAME@")

@@ -0,0 +1,36 @@
include(RunCTest)

# Isolate our ctest runs from external environment.
unset(ENV{CTEST_PARALLEL_LEVEL})
unset(ENV{CTEST_OUTPUT_ON_FAILURE})

function(run_ctest_test CASE_NAME)
  set(CASE_CTEST_FIXTURES_ARGS "${ARGN}")
  run_ctest(${CASE_NAME})
endfunction()

#------------------------------------------------------------
# CMake configure will pass
#------------------------------------------------------------
run_ctest_test(one      INCLUDE one)
run_ctest_test(two      INCLUDE two)
run_ctest_test(three    INCLUDE three)
run_ctest_test(setupFoo INCLUDE setupFoo)
run_ctest_test(wontRun  INCLUDE wontRun)

#------------------------------------------------------------
# CMake configure will fail due to cyclic test dependencies
#------------------------------------------------------------
set(CASE_CMAKELISTS_CYCLIC_CODE [[
  set_tests_properties(cyclicSetup PROPERTIES
                       FIXTURES_SETUP    "Foo"
                       FIXTURES_REQUIRED "Foo")
]])
run_ctest(cyclicSetup)

set(CASE_CMAKELISTS_CYCLIC_CODE [[
  set_tests_properties(cyclicCleanup PROPERTIES
                       FIXTURES_CLEANUP  "Foo"
                       FIXTURES_REQUIRED "Foo")
]])
run_ctest(cyclicCleanup)

@@ -0,0 +1 @@
(-1|255)

@@ -0,0 +1,3 @@
Error: a cycle exists in the test dependency graph for the test "cyclicCleanup".
Please fix the cycle and run ctest again.
No tests were found!!!

@@ -0,0 +1 @@
Test project .*/Tests/RunCMake/ctest_fixtures/cyclicCleanup-build$

@@ -0,0 +1 @@
(-1|255)

@@ -0,0 +1,3 @@
Error: a cycle exists in the test dependency graph for the test "cyclicSetup".
Please fix the cycle and run ctest again.
No tests were found!!!$

@@ -0,0 +1 @@
Test project .*/Tests/RunCMake/ctest_fixtures/cyclicSetup-build$

@@ -0,0 +1,13 @@
Test project .*/Tests/RunCMake/ctest_fixtures/one-build
Start 3: setupFoo
1/4 Test #3: setupFoo +\.+ +Passed +[0-9.]+ sec
Start 2: setupBoth
2/4 Test #2: setupBoth +\.+ +Passed +[0-9.]+ sec
Start 1: one
3/4 Test #1: one +\.+ +Passed +[0-9.]+ sec
Start 5: cleanupFoo
4/4 Test #5: cleanupFoo +\.+ +Passed +[0-9.]+ sec
+
100% tests passed, 0 tests failed out of 4
+
Total Test time \(real\) = +[0-9.]+ sec$

@@ -0,0 +1,7 @@
Test project .*/Tests/RunCMake/ctest_fixtures/setupFoo-build
Start 3: setupFoo
1/1 Test #3: setupFoo +\.+ +Passed +[0-9.]+ sec
+
100% tests passed, 0 tests failed out of 1
+
Total Test time \(real\) = +[0-9.]+ sec$

@@ -0,0 +1,16 @@
cmake_minimum_required(VERSION 3.6.2)
set(CTEST_SITE "test-site")
set(CTEST_BUILD_NAME "test-build-name")
set(CTEST_SOURCE_DIRECTORY "@RunCMake_BINARY_DIR@/@CASE_NAME@")
set(CTEST_BINARY_DIRECTORY "@RunCMake_BINARY_DIR@/@CASE_NAME@-build")
set(CTEST_CMAKE_GENERATOR "@RunCMake_GENERATOR@")
set(CTEST_CMAKE_GENERATOR_PLATFORM "@RunCMake_GENERATOR_PLATFORM@")
set(CTEST_CMAKE_GENERATOR_TOOLSET "@RunCMake_GENERATOR_TOOLSET@")
set(CTEST_BUILD_CONFIGURATION "$ENV{CMAKE_CONFIG_TYPE}")
set(ctest_fixtures_args "@CASE_CTEST_FIXTURES_ARGS@")
ctest_start(Experimental)
ctest_configure()
ctest_test(${ctest_fixtures_args})

@@ -0,0 +1,17 @@
Test project .*/Tests/RunCMake/ctest_fixtures/three-build
Start 3: setupFoo
1/6 Test #3: setupFoo +\.+ +Passed +[0-9.]+ sec
Start 2: setupBoth
2/6 Test #2: setupBoth +\.+ +Passed +[0-9.]+ sec
Start 4: setupMeta
3/6 Test #4: setupMeta +\.+ +Passed +[0-9.]+ sec
Start 5: cleanupFoo
4/6 Test #5: cleanupFoo +\.+ +Passed +[0-9.]+ sec
Start 8: three
5/6 Test #8: three +\.+ +Passed +[0-9.]+ sec
Start 7: cleanupBar
6/6 Test #7: cleanupBar +\.+ +Passed +[0-9.]+ sec
+
100% tests passed, 0 tests failed out of 6
+
Total Test time \(real\) = +[0-9.]+ sec$

@@ -0,0 +1,11 @@
Test project .*/Tests/RunCMake/ctest_fixtures/two-build
Start 2: setupBoth
1/3 Test #2: setupBoth +\.+ +Passed +[0-9.]+ sec
Start 6: two
2/3 Test #6: two +\.+ +Passed +[0-9.]+ sec
Start 7: cleanupBar
3/3 Test #7: cleanupBar +\.+ +Passed +[0-9.]+ sec
+
100% tests passed, 0 tests failed out of 3
+
Total Test time \(real\) = +[0-9.]+ sec$

@@ -0,0 +1,14 @@
Test project .*/Tests/RunCMake/ctest_fixtures/wontRun-build
Start 9: setupFails
1/2 Test #9: setupFails +\.+\*\*\*Failed +[0-9.]+ sec
Start 10: wontRun
Failed test dependencies: setupFails
2/2 Test #10: wontRun +\.+\*\*\*Not Run +[0-9.]+ sec
+
0% tests passed, 2 tests failed out of 2
+
Total Test time \(real\) = +[0-9.]+ sec
+
The following tests FAILED:
.* +9 - setupFails \(Failed\)
.* +10 - wontRun \(Not Run\)$