Unique Test Names
Summary
When working with NCrunch, it is extremely important that the names of your tests be distinctive, consistent and unique. This includes test names that are automatically derived from theory or test case parameters. Test names that are not unique between tests or are derived from randomly generated parameters cannot be reliably executed by NCrunch and will cause considerable problems for the engine.
Why?
Under NCrunch, each test has a lifespan that goes far beyond that of the physical process in which it is constructed. Tests first come to life when they are discovered by NCrunch during an analysis step, which involves loading the test assembly and interrogating it for test methods through integration with a test framework. Once a test is discovered, it is stored within NCrunch's cache system (and later in a .cache file) along with all relevant data, such as the time taken to execute it, code coverage and performance information, trace output, etc.
For such data to retain meaning, and for the test to be passed to a test framework for execution, there must be a reliable way to identify the test and separate it from other tests in the same assembly. Because of the ever-changing state of source code, the only way to identify a test is by its physical name, which is usually a string derived from its fixture, method and parameters.
NUnit has its own internal method of identifying tests using a sequencing system, where the ID of a test is determined by its position in the assembly. This is inadequate for NCrunch, because when source code changes, the sequence changes too. Inserting a test partway into an assembly would invalidate all subsequent identifiers, losing important data. NCrunch therefore does not piggyback on this system, and instead constructs its own test name from the various components of the test (fixture, method, parameters).
Why don't other runners have this problem?
Other test runners usually only show results from a single run through your test assembly. They discover the tests, hold them in memory, execute them, report the results, then discard everything on completion. This allows them to hold individual test cases in memory and identify them by memory address. Because there is no requirement to store results across multiple runs, or to split execution over several processes simultaneously, there is no need to identify tests outside the process in which they are first discovered.
These additional constraints on test distinctiveness and uniqueness exist precisely because of the features NCrunch provides.
Is this ever going to be fixed?
No. Unlike functional defects or temporary compatibility issues, problems caused by tests not being uniquely identifiable are derived from technical limitations rather than oversight or incorrect design. Features of test frameworks that allow such tests to exist are fundamentally incompatible with next generation test runners such as NCrunch. The only way to make these tests work would be to remove all the features that make NCrunch worth using.
Common Examples of Problematic Test Naming
NUnit TestCaseSource With User Defined Type
Consider the following code.
```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

public class Fixture
{
    public static IEnumerable<TestCaseParameter> TestCases
    {
        get
        {
            return new[] { new TestCaseParameter(1), new TestCaseParameter(2) };
        }
    }

    [Test, TestCaseSource("TestCases")]
    public void Test(TestCaseParameter parameter)
    {
        if (parameter.Value == 1)
            Console.WriteLine("First");
        if (parameter.Value == 2)
            Console.WriteLine("Second");
    }
}

public class TestCaseParameter
{
    public TestCaseParameter(int value)
    {
        Value = value;
    }

    public int Value;
}
```
The above code contains two test cases that have different instances of the same user defined type passed as parameters. The user defined type does not implement .ToString(), so there is no way to tell the difference between any of its instances outside of the process in which they are created. This code results in two test cases constructed with the same visible and internal name. The correct solution to this problem is to implement .ToString() on the user defined type and ensure all relevant data is included in the result.
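As a minimal sketch of this fix, the parameter type could override .ToString() to include its distinguishing data (the exact format string here is illustrative; any format that makes instances distinct and stable will do):

```csharp
using System;

public class TestCaseParameter
{
    public TestCaseParameter(int value)
    {
        Value = value;
    }

    public int Value;

    // Include all data that distinguishes one test case from another,
    // so a stable, unique test name can be derived from the parameter.
    public override string ToString()
    {
        return "TestCaseParameter(" + Value + ")";
    }
}
```

With this override in place, the two test cases are named after their parameter values and no longer collide.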
NUnit Test Case With Inconsistent Name
Consider the following code.
```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

public class Fixture
{
    public static IEnumerable<long> TestCases
    {
        get
        {
            return new[] { DateTime.Now.Ticks };
        }
    }

    [Test, TestCaseSource("TestCases")]
    public void Test(long value)
    {
        Console.WriteLine(value);
    }
}
```
The above code contains a single test case that uses an inconsistent value for its sole parameter. Every time the test case is discovered, the parameter has a different value. NCrunch is unable to correlate data for this test case because it does not have a consistent name. This means that every time the code is compiled, the test is completely rediscovered and is treated as an entirely new test, with all existing coverage and result data discarded.
Avoid using unstable test case parameters that do not give a consistent value. A test case parameter must be fixed to a single distinctive value to be fed into a name that can uniquely identify the test.
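For example, the unstable DateTime.Now.Ticks source above could be replaced with a fixed literal (the class name and value below are illustrative):

```csharp
using System.Collections.Generic;

public static class StableTestCases
{
    // A fixed literal produces the same test case name on every
    // discovery run, so results can be correlated between runs.
    public static IEnumerable<long> TestCases
    {
        get { return new long[] { 637500000000000000L }; }
    }
}
```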
NUnit Random Attribute
Consider the following code.
```csharp
using System;
using NUnit.Framework;

public class Fixture
{
    [Test]
    public void Test([Random(1)] long value)
    {
        Console.WriteLine(value);
    }
}
```
This code uses NUnit's Random Attribute to automatically generate a parameter for a test method. Every time the test is discovered, it has a different parameter value. This results in NCrunch being unable to retain data for the test because it does not have a consistent name. Every time the code is compiled, the test is completely rediscovered and treated as a new test. All data is thus discarded.
Avoid using NUnit's Random Attribute. This attribute is not supported by NCrunch.
Xunit Theory With AutoFixture
AutoFixture is a popular library that can be used to provide automatic substitution of theory test case parameters where these parameters are not considered relevant to the test case. Parameters are substituted using randomly generated values.
Consider the following code.
```csharp
using Ploeh.AutoFixture;
using System.Collections.Generic;
using Xunit;

namespace XUnitAutoFixture
{
    public class TestFixture
    {
        private static readonly Fixture Fixture = new Fixture();

        public static IEnumerable<object[]> SomeTestData()
        {
            yield return new object[] { Fixture.Create<long>() };
            yield return new object[] { Fixture.Create<long>() };
        }

        [Theory, MemberData(nameof(SomeTestData))]
        public void Test(object value)
        {
        }
    }
}
```
The above code creates two test cases using randomly generated parameters. AutoFixture encapsulates the random generation, so it is not visible here. Every time these test cases are created, they have different parameter values, so the test cases receive a different name each time they are discovered by NCrunch. The names also change inside the subsequent test processes used for execution, preventing NCrunch from reliably instructing Xunit to target them. The result is an error every time the tests are executed by NCrunch.
Avoid using random generation to produce test case parameters when working with any framework or toolset. If necessary, hard code some values instead.
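As a sketch of this advice applied to the example above, the data source could yield hard-coded values in place of the Fixture.Create<long>() calls (the specific values chosen here are arbitrary):

```csharp
using System.Collections.Generic;

public class TestFixture
{
    // Hard-coded values keep the generated test case names stable
    // between discovery runs, unlike randomly generated parameters.
    public static IEnumerable<object[]> SomeTestData()
    {
        yield return new object[] { 42L };
        yield return new object[] { 1234L };
    }
}
```

Both test cases now receive the same name on every discovery and execution run.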