Integration Testing Frameworks for Testing a Distributed System

Distributing unit testing across virtual machines

I have had similar requirements, but I come from the Java side of the world. What you can easily do is manage a distributed set of nodes/machines using jGroups.

Once you understand how it works, you can build a distributed system of nodes in just 100 lines of code. With this system you can spawn and control child processes on each of those machines and check their output yourself. It should only cost you a day to take a jGroups example and get this running.
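To give a feel for how little code that takes, here is a minimal sketch of a node joining a cluster, written against the jGroups 4.x API (the cluster name "test-cluster" and the message payload are placeholders I made up; you need the jgroups jar on the classpath):

```java
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class TestNode {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();          // default UDP protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                // e.g. a command to spawn a process, or a test report from a peer
                System.out.println(msg.getSrc() + ": " + msg.getObject());
            }
        });
        // Every process that connects with the same cluster name
        // automatically becomes a member and can send/receive messages.
        channel.connect("test-cluster");

        // Broadcast to all members (null destination = everyone)
        channel.send(new Message(null, "node up: " + channel.getAddress()));

        Thread.sleep(60_000);                       // stay in the cluster for a while
        channel.close();
    }
}
```

Run the same class on each machine and you have your node management layer; everything else (process control, reporting) is messages on top of this.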

Once you have the infrastructure to copy code to a node and execute it as an independent process, control is easy. Then take some of those nodes, bring up Selenium, drive a number of browser windows, and execute your test scripts (or use Sikuli). Since the Selenium process is again Java, you can generate all kinds of reports, print them to the console, or send them straight to the cluster, since those processes can join the cluster via jGroups too.
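The "spawn a child process and check its output" part needs nothing beyond the JDK. A sketch using ProcessBuilder (here "java -version" is just a stand-in for whatever test process a node would actually launch):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ChildProcessCheck {

    // Launch a command, capture its merged stdout/stderr into `out`,
    // and return the process exit code.
    static int runAndCapture(String[] cmd, StringBuilder out) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true);               // fold stderr into stdout
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        return p.waitFor();
    }

    public static void main(String[] args) throws Exception {
        StringBuilder out = new StringBuilder();
        int exit = runAndCapture(new String[] { "java", "-version" }, out);
        System.out.println("exit code: " + exit);
        System.out.print(out);
    }
}
```

A node runs this, then sends the exit code and captured output back over the cluster channel as its test report.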

Such a system can be implemented in a week, and it is really under your control. Very simple to do and very extensible.

You can also provide plugins for Jenkins, Jira, or Quality Center to interact with it, triggering test execution and managing results.

Is there a .net framework that manages unit AND integration testing?

Yes. :)

In VS2008, when you create a Test Project, Visual Studio also generates a test metadata (.vsmdi) file. A solution may have only one metadata file. This file is a manifest of all tests across all Test Projects in the solution. Opening the metadata file opens the Test List Editor - a GUI for editing and executing the file.

From the Test List Editor, you may create Test Lists [e.g. UnitTestList, IntegrationTestList] and assign individual tests to a specific Test List. By default, Test List Editor shows an "All Loaded Tests" list and a "Tests Not in a List" list to help in assigning tests. Use these to find or assign groups of tests to lists. Remember, a test may belong to only one list.

There are two ways to invoke a Test List:

  • From Visual Studio, each list may be invoked individually from Test List Editor.
  • From command line, MSTest may be invoked with a specific list.
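The command-line form looks like this (the solution name here is a placeholder, and the list names match the example lists suggested above):

```shell
REM Developer / CI build: run only the fast unit tests
mstest /testmetadata:MySolution.vsmdi /testlist:UnitTestList

REM Promotion build: also run the slower integration list
mstest /testmetadata:MySolution.vsmdi /testlist:IntegrationTestList
```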

One option is good for developers in their everyday workflow; the other is good for automated build processes.

I set up something similar on the last project I worked on.


This feature is very valuable*.

Ideally, we would like to run every conceivable test whenever we modify our code base. This provides us the best response to our changes as we make them.

In practice, however, running every test in a suite often adds minutes or hours to build times [depending on the size of the code base and the build environment] - which is prohibitively expensive for both a developer and a Continuous Integration [CI] environment, each of which requires rapid turnaround to provide relevant feedback.

The ability to specify explicit Test Lists allows the developer, the CI environment, and the final build environment to selectively target bits of functionality without sacrificing quality control or impacting overall productivity.


Case in point: I was working on a distributed application. We wrote our own Windows services to handle incoming requests and leveraged Amazon's web services for storage. We did not want to run our suite of Amazon tests on every build because

  1. Amazon was not always up
  2. We were not always connected
  3. Response times could be measured in hundreds of milliseconds, which in a batch of test requests can easily balloon our test suite execution times

We wanted to retain these tests, however, since we needed a suite to verify behaviour. If, as a developer, I had doubts about our integration with Amazon, I could execute these tests from my dev environment on an as-needed basis. When it came time to promote a final build to QA, Cruise Control could also execute these tests to ensure someone in another functional area had not inadvertently broken the Amazon integration.

We placed these Amazon tests into an Integration Test list, which was available to every developer and executed on the build machine when Cruise Control was invoked to promote a build. We maintained another Unit Test list, also available to every developer, which executed on every single build. Since all of these tests were in-memory [and well written :] and executed in about as long as it took to build the project, they did not impact individual build operations and provided excellent, timely feedback from Cruise Control.

*=valuable == important. "value" is word of the day :)

JavaScript Integration Testing Framework

We used Chutzpah to test our JavaScript logic and also our SignalR server hub API.

Our JavaScript tests were created using QUnit (Chutzpah also supports Jasmine).
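A minimal QUnit test of the kind Chutzpah discovers might look like this (the file names and the `add` function are made up for illustration; the `/// <reference ... />` comment is Chutzpah's convention for pulling the code under test into the headless browser):

```javascript
// math.js -- production logic under test (hypothetical)
function add(a, b) {
    return a + b;
}

// math.tests.js -- a QUnit test that Chutzpah finds and runs
/// <reference path="math.js" />
QUnit.test("add sums two numbers", function (assert) {
    assert.equal(add(2, 3), 5, "2 + 3 should be 5");
});
```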

Chutzpah's test runner lets you run your JS tests inside Visual Studio by leveraging PhantomJS, a headless browser. You can exercise your server and JS logic and verify the results from within Visual Studio. We also self-hosted our hub using OWIN from SignalR, which worked great for our integration tests.

Chutzpah provides other capabilities, so I suggest checking it out to see what works best for you.

I would also check out how JabbR runs its tests. They also use Chutzpah, along with some more sophisticated techniques that may work well for you.


