How to Run My Performance Tests More Than Ten Times

How can I run my performance tests more than ten times?

In Xcode 11.0 and later you don't need swizzling to change the iteration count. Use the following function:

func measure(options: XCTMeasureOptions, block: () -> Void)

This will allow you to specify XCTMeasureOptions, which has an iterationCount property.
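
For example, a minimal sketch of a test using it (the 100-iteration count and the sorting workload are arbitrary illustrations, not anything prescribed by the API):

    import XCTest

    final class SortPerformanceTests: XCTestCase {
        func testSortPerformance() {
            // Ask for 100 recorded iterations instead of the default 10.
            let options = XCTMeasureOptions()
            options.iterationCount = 100

            measure(options: options) {
                // Arbitrary workload: sort a shuffled array.
                let input = (0..<10_000).shuffled()
                _ = input.sorted()
            }
        }
    }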

An interesting note from the docs:

A performance test runs its block iterationCount+1 times, ignoring the first iteration and recording metrics for the remaining iterations. The test ignores the first iteration to reduce measurement variance associated with “warming up” caches and other first-run behavior.

Testing with JMeter: how to run N requests per second

As with any network test, there are always going to be problems, especially with latency: even if you could send exactly 6 requests per second, they're going to be sent sequentially (that's just how packets get sent) and may not all hit within that second, plus there's processing time.

Generally, when performance requirements specify x per second, it's measured over a period of time. Your API may even have a buffer: you could technically send 6 per second while it processes 5 per second, with a buffer of 20, meaning it would be fine for 20 seconds of traffic, as you'd have sent 120 requests, which take 120/5 = 24 seconds to process. Any more than that would overflow the buffer. So just sending exactly 6 in a single second is an insufficient test.
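
As a back-of-the-envelope check on those numbers, a toy model in Swift (the 6/s, 5/s, and buffer-of-20 figures are just the example values above, not properties of any real API):

    let sendRate = 6.0      // requests sent per second
    let processRate = 5.0   // requests processed per second
    let bufferSize = 20.0   // requests the server can queue

    // The backlog grows by (sendRate - processRate) requests per second,
    // so the buffer overflows after bufferSize / (sendRate - processRate) s.
    let secondsUntilOverflow = bufferSize / (sendRate - processRate)
    print(secondsUntilOverflow)  // 20.0 seconds of sustained traffic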

In the thread group, you're right to set the number of threads (users) to 6. Then run it looping forever (tick the option, or put it in a While Controller) and add listeners such as an Aggregate Report and a View Results Tree. The results tree lets you check that the right requests are being sent and responded to (assuming you validate the responses), and in the aggregate report you can see how much of each activity happens per unit of time (multiply a per-second rate by 3600 for the hourly figure). Because of the inaccuracy described above, it's best to run the test for a good length of time.

The initial load test can now be run, and as a more accurate test you can leave it running for longer (a soak test) to see whether any other problems surface: buffer overflows, memory leaks, or other unexpected events.

When doing performance testing, why are the initial iterations consistently slower than the average?

You want to take a look at Eric Lippert's series on performance tests:

Mistake #6: Treat the first run as nothing special when measuring
average performance.

In order to get a good result out of a benchmark test in a world with
potentially expensive startup costs due to jitting code, loading
libraries and calling static constructors, you've got to apply some
careful thought about what you're actually measuring.

If, for example, you are benchmarking for the specific purpose of
analyzing startup costs then you're going to want to make sure that
you measure only the first run. If on the other hand you are
benchmarking part of a service that is going to be running millions of
times over many days and you wish to know the average time that will
be taken in a typical usage then the high cost of the first run is
irrelevant and therefore shouldn't be part of the average. Whether you
include the first run in your timings or not is up to you; my point
is, you need to be cognizant of the fact that the first run has
potentially very different costs than the second.

...

Moreover, it's important to note that different jitters give different
results on different machines and in different versions of the .NET
framework. The time taken to jit can vary greatly, as can the amount
of optimization generated in the machine code. The jit compilers on
the Windows 32 bit desktop, Windows 64 bit desktop, Silverlight
running on a Mac, and the "compact" jitter that runs when you have a
C# program in XNA on XBOX 360 all have potentially different
performance characteristics.

In short, JITing is expensive, and you shouldn't factor it into your tests unless that is what you want to measure. It depends on typical usage: if your code is going to start up once and stay up for long periods, discard the first runs; but if it will mostly start and stop, the first run is important.
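
A minimal sketch of that idea in Swift; the harness, the workload, and the iteration counts are illustrative assumptions, not anything from the article:

    import Dispatch

    // Times `block` over `iterations` runs and returns the average in
    // seconds, optionally discarding the first run as warm-up.
    func averageSeconds(iterations: Int,
                        discardFirstRun: Bool,
                        block: () -> Void) -> Double {
        var samples: [Double] = []
        for _ in 0..<iterations {
            let start = DispatchTime.now().uptimeNanoseconds
            block()
            let end = DispatchTime.now().uptimeNanoseconds
            samples.append(Double(end - start) / 1_000_000_000)
        }
        if discardFirstRun { samples.removeFirst() }
        return samples.reduce(0, +) / Double(samples.count)
    }

    // Steady-state figure for a long-running service: drop the warm-up run.
    let steadyState = averageSeconds(iterations: 11, discardFirstRun: true) {
        _ = (0..<100_000).map { $0 * 2 }.reduce(0, +)
    }

    // Start-and-stop usage: keep the first run, since users pay that cost too.
    let coldStartIncluded = averageSeconds(iterations: 11, discardFirstRun: false) {
        _ = (0..<100_000).map { $0 * 2 }.reduce(0, +)
    }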

Testing custom ORM solution performance overhead - how to?

Have you seen ORMBattle.NET? See the FAQ there; it contains some ideas related to measuring the performance overhead introduced by a particular ORM tool. The test suite is open source.

Concerning your results:

  • Some ORM tools automatically batch statement sequences (i.e. send several SQL statements together). If this feature is implemented well in the ORM, it's easy to beat plain ADO.NET by 2-4 times on CRUD operations when the ADO.NET test doesn't involve batching; the tests on ORMBattle.NET cover both cases. (A rough cost model follows this list.)
  • A lot depends on how you establish transaction boundaries there. Please refer to the ORMBattle.NET FAQ for details.
  • CRUD tests aren't the best performance indicator at all. In general it's pretty easy to approach peak possible performance here, since the RDBMS must do much more work than the ORM in this case.
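
To make the batching point above concrete, a rough round-trip cost model in Swift (the latency and per-statement figures are invented for illustration, not measurements):

    // Total time is dominated by network round trips when statements
    // are sent one at a time.
    let statements = 1_000
    let roundTripMs = 1.0        // assumed network round-trip latency
    let perStatementMs = 0.1     // assumed server-side execution cost

    // One round trip per statement.
    let unbatched = Double(statements) * (roundTripMs + perStatementMs)

    // 25 statements per round trip.
    let batchSize = 25
    let trips = Double((statements + batchSize - 1) / batchSize)
    let batched = trips * roundTripMs + Double(statements) * perStatementMs

    print(unbatched / batched)   // roughly 8x faster in this toy model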

P.S. I'm one of the ORMBattle.NET authors, so if you're interested in details or possible contributions, you can contact me directly (or join the ORMBattle.NET Google Group).

Limiting XCTests measure() to one run only

This is quite annoying.

In Xcode 11 you can specify XCTMeasureOptions when calling self.measure { }, i.e. measure(options: XCTMeasureOptions, block: () -> Void), and in the options you can set iterationCount to 1.

BUT!

With iterationCount set to 1, Xcode will actually run the measure block twice, according to the doc for iterationCount:

A performance test runs its block iterationCount+1 times, ignoring the first iteration and recording metrics for the remaining iterations. The test ignores the first iteration to reduce measurement variance associated with “warming up” caches and other first-run behavior.

And it did run twice in practice.
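
A quick throwaway check that shows the behaviour (illustrative, not part of any real suite):

    func testHowManyTimesTheBlockRuns() {
        let options = XCTMeasureOptions()
        options.iterationCount = 1
        var runs = 0
        measure(options: options) { runs += 1 }
        print(runs)  // prints 2: one ignored warm-up run + one recorded run
    }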

Meanwhile, setting iterationCount to 0 produces no results because, according to the doc:

... iterationCount+1 times, ignoring the first iteration and recording metrics for the remaining iterations

I don't know whether the runtime-swizzling way of changing the measurement count can get rid of the warm-up run, though.

Workaround

Set up a counter to either:

  1. skip the first run:

    func testYourFunction() {
        let option = XCTMeasureOptions()
        option.iterationCount = 1
        var count = 0
        self.measure(options: option) {
            if count == 0 {
                count += 1
                return
            }
            // Your test code...
        }
    }

  2. roll back the change after the first run:

    func testYourFunction() {
        let option = XCTMeasureOptions()
        option.iterationCount = 1
        var count = 0
        self.measure(options: option) {
            defer {
                if count == 0 {
                    // Call `setUpWithError` manually,
                    // or run your custom rollback logic.
                    try? setUpWithError()
                }
                count += 1
            }
            // Your test code...
        }
    }

How to use multiple scenarios in jmeter

You can use multiple regular Thread Groups and, in the Test Plan, select "Run Thread Groups consecutively".