Integration Testing
While the most obvious use for NCrunch is for rapidly running unit tests, it can also be used for integration testing.
Integration testing is much more challenging territory for a concurrent test runner, so NCrunch needs to be correctly configured before it can properly add value in this area. However, running your integration tests continuously is enormously valuable and can save considerable development time.
Integration Testing Challenges
Before explaining the various ways in which integration tests can be run within NCrunch, it's first worth examining the key elements that make them different from unit tests:
Integration tests often take a long time to run
This is often the single biggest concern for running integration tests in NCrunch. It's not unusual for integration tests to take many minutes (or even hours) to run. This creates some interesting challenges around delivering meaningful feedback quickly without tying up engine resources for an extended period of time.
Integration tests can have side-effects
Because they often interact with resources outside the host process, integration tests can easily leave areas of a database or file system in an inconsistent state. This can be an irritation, as the test runner may be manipulating these resources while you are trying to use them for other tasks (e.g. running your application through its UI).
Integration tests can have issues with concurrency
The interaction between integration tests and external resources can cause some interesting situations when tests are being run concurrently. An example could be two tests that both try to write to the same file on the file system at the same time.
Dealing With Long Running Tests
NCrunch tends to schedule long running tests later in the test pipeline. Once a long running test has been kicked off, it continues to run in the background against the source code as it existed when the test was started. It is quite normal for many changes to be made to the source code while the test is running. NCrunch tracks each of these changes individually and uses them to extract meaningful information from the long running test when it completes. This means that regardless of how out of date the source code was when the test started executing, NCrunch will always make the best use of any relevant information the test provides.
The main complication with long running tests lies in the fact that they can block up NCrunch's processing queue. With the default configuration, NCrunch will only run one test at a time. This means that a long running test can stop NCrunch from reporting any information on faster (and sometimes more relevant) tests for the duration of its execution.
Because of this, it is absolutely essential that you make use of NCrunch's ability to run tests in parallel.
However, even with parallel execution enabled, the NCrunch engine will not interrupt a test that is partway through executing. This means it is still possible for a number of long running tests to collectively block the processing queue. To prevent this from happening, where possible you should keep a processing thread reserved for executing fast tests only by making use of the Fast Lane Threads configuration option.
Dealing With Test Side-Effects
There are many techniques you can use to engineer your integration tests so that they do not have side-effects. These include correct use of database transactions, randomisation of file names, and isolation of test activities on the file system.
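As a sketch of the file-system isolation idea (the helper and file names here are illustrative, not part of NCrunch): each test run works against a uniquely named file, so concurrently running tests never touch the same path on disk.

```csharp
using System;
using System.IO;

public class FileIsolationExample
{
    // Give each test run its own uniquely named file so that tests
    // running concurrently (or side by side with manual work) never
    // collide on the same path.
    public static string CreateIsolatedTestFile(string contents)
    {
        string path = Path.Combine(
            Path.GetTempPath(),
            "integration_test_" + Guid.NewGuid().ToString("N") + ".txt");
        File.WriteAllText(path, contents);
        return path;
    }

    public static void Main()
    {
        string path = CreateIsolatedTestFile("test data");
        try
        {
            // ... exercise the code under test against 'path' ...
            Console.WriteLine(File.ReadAllText(path));
        }
        finally
        {
            File.Delete(path); // clean up the side-effect
        }
    }
}
```

The same pattern applies to database work, where wrapping each test in a transaction that is rolled back on completion achieves equivalent isolation.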
Where it is impossible to remove side-effects, they can often still be managed unobtrusively.
The most common irritating side-effect experienced when continuously running integration tests against a database is that they can constantly delete or manipulate test data held in the database that may actually be in use by someone performing manual testing.
An elegant solution to this problem is to ensure the database dependent tests are only run continuously against a different database or database schema. This can be achieved by introducing alternative behaviour in tests that are run by NCrunch vs when they are run with a manual test runner.
NCrunch will religiously set the environment variable 'NCrunch' equal to '1' inside each of its task runner processes. This applies to both build tasks and test tasks. You can make use of this environment variable to redirect your continuous tests to a different schema/database, for example:
if (Environment.GetEnvironmentVariable("NCrunch") == "1")
{
    connectionString = "server=localhost;database=myDatabase_ncrunch;integrated security=SSPI";
}
else
{
    connectionString = "server=localhost;database=myDatabase;integrated security=SSPI";
}
You can also make use of the NCrunchEnvironment.NCrunchIsResident() method, which checks this same environment variable internally. There are also other ways to introduce alternative build and test behaviour for code being executed by NCrunch.
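As a sketch, assuming the NCrunch.Framework package is referenced by the test project (the database names mirror the example above):

```csharp
using NCrunch.Framework;

// NCrunchIsResident() returns true when the code is running inside an
// NCrunch task runner process, so continuous test runs are redirected
// to a dedicated database.
string connectionString = NCrunchEnvironment.NCrunchIsResident()
    ? "server=localhost;database=myDatabase_ncrunch;integrated security=SSPI"
    : "server=localhost;database=myDatabase;integrated security=SSPI";
```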
Dealing With Concurrency Issues
When running integration tests concurrently, it's important to attribute your tests with ExclusivelyUsesAttribute and InclusivelyUsesAttribute where appropriate. This prevents the testing engine from running mutually exclusive tests at the same time. An easier but less performant alternative is to use SerialAttribute.
It is also possible to engineer your tests in such a way that parallel execution does not cause concurrency issues.
Consider that with the behaviour of NCrunch's test pipeline, it's possible for a test to be run concurrently with another version of itself (unless the Allow Tests To Run In Parallel With Themselves setting is changed from its default). Therefore you should make use of the above attributes for all tests that make use of resources outside the test application domain, regardless of whether these tests share resources with other tests.
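A sketch of how such tests might be attributed (the resource name "CustomersTable" and the NUnit-style test class are illustrative; the NCrunch attributes live in the NCrunch.Framework package):

```csharp
using NCrunch.Framework;
using NUnit.Framework;

public class CustomerTableTests
{
    // Declares exclusive use of the named resource: NCrunch will not run
    // this test at the same time as any other test that declares use of
    // "CustomersTable" (including a concurrent instance of itself).
    [Test, ExclusivelyUses("CustomersTable")]
    public void ImportCustomers_WritesToCustomersTable()
    {
        // ... test body that writes to the shared table ...
    }

    // Declares shared use of the resource: this test can run alongside
    // other InclusivelyUses tests, but not alongside a test that
    // ExclusivelyUses the same resource.
    [Test, InclusivelyUses("CustomersTable")]
    public void CustomerReport_ReadsCustomersTable()
    {
        // ... test body that only reads the shared table ...
    }
}
```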
Partially Continuous or Manual Execution
Sometimes it's simply easier to avoid running some integration tests continuously, and to run them manually instead. The best way to do this with NCrunch is to create a new custom engine mode with criteria set to exclude integration tests that you don't want to run continuously.
The engine mode customisation window allows you to add a filter that excludes tests by their category. You can then categorise your tests according to whether they should be run continuously. Note: tests that are not run continuously must still be run manually on a regular basis to ensure their code coverage and performance information stays up to date.
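For example, integration tests could be tagged with a category that a custom engine mode then excludes (the category name "Integration" is arbitrary; the example uses NUnit's CategoryAttribute):

```csharp
using NUnit.Framework;

public class OrderPipelineTests
{
    // A custom engine mode configured to exclude the "Integration"
    // category will skip this test during continuous runs, leaving it
    // to be run manually.
    [Test, Category("Integration")]
    public void OrderPipeline_ProcessesEndToEnd()
    {
        // ... long running end-to-end test body ...
    }
}
```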
RDI Considerations
Large integration tests executed with RDI enabled can produce large log files that may consume a lot of disk space. However, the benefits of RDI scale very well with large tests, as the relevant parts of the execution run become so easy to access using the RDI overlays.
If you don't have sufficient resources to run big tests using RDI, you might find it worthwhile to use EnableRdiAttribute to selectively switch it off.
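As a sketch of what this might look like (the attribute lives in the NCrunch.Framework package; the assumption here is that it accepts a boolean to enable or disable RDI for the decorated test, so check the EnableRdiAttribute reference page for the exact signature):

```csharp
using NCrunch.Framework;
using NUnit.Framework;

public class BulkImportTests
{
    // Assumed usage: switch RDI off for this one very large test while
    // leaving it enabled for the rest of the project.
    [Test, EnableRdi(false)]
    public void BulkImport_ProcessesMillionsOfRows()
    {
        // ... resource-heavy test body ...
    }
}
```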