C1 or C2 Coverage Tool for Ruby

Does C1 code coverage analysis exist for Ruby?

At the moment, there are no C1 coverage tools for Ruby. In fact, there aren't any coverage tools other than RCov.

Until recently, it was only possible to write tools like this by patching or extending the MRI interpreter in C. For roughly two years now it has also been possible to extend JRuby in Java, and as of last month there is actually a port of RCov for JRuby. However, this requires knowledge of both Ruby and C, and pretty deep knowledge at that, because fiddling around with the internals of MRI is not for the faint of heart.

But only with Rubinius will it be possible to write dynamic analysis tools such as code coverage tools in Ruby itself, making tool writing accessible to a much larger portion of the Ruby community. Many major IDE vendors are either working on or have already introduced Ruby IDEs, including CodeGear (formerly Borland), IntelliJ, NetBeans, Eclipse, SapphireSteel (Ruby in Steel for Visual Studio) and even Microsoft. My hope is that this accessibility, coupled with the substantial financial backing of those tool vendors, will lead to rapid innovation in the Ruby tooling space in 2009, and that we will see things like C1 and C2 coverage, NPath complexity, much more fine-grained profiling and so on.

Until then, the only idea I have is to use Java tools. The JRuby guys try to emit the proper magic metadata to make their generated bytecode at least penetrable by the Java tools, so maybe it is possible to use Java coverage tools with JRuby. However, I have no idea whether that actually works, or even whether it is supposed to.
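If you want to experiment with that idea, the mechanics would be attaching a JVM coverage agent when launching JRuby. Purely as an illustration (JaCoCo is just one example of such an agent, the jar paths are placeholders, and whether the resulting report maps back to your Ruby source in any useful way is exactly the open question):

    # Attach a JVM coverage agent, then run a Ruby script under JRuby.
    # Paths and file names here are placeholders, not a tested recipe.
    java -javaagent:jacocoagent.jar=destfile=jacoco.exec \
         -jar jruby-complete.jar my_script.rb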

Testing code coverage for JRuby

To question #1: as far as I know, code coverage for JRuby 1.6.x running in 1.9 mode is blocked by this open defect: JRuby 6106

If you're running in 1.8 mode, you should be able to use the rcov gem.
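For reference, a typical invocation looks something like this (a sketch; the test file layout is an assumption, and by default rcov writes an HTML report into a coverage/ directory):

    # Run the test files under rcov and generate the coverage report
    rcov test/*_test.rb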

Advantage to using describe/it over feature/scenario in specs? (besides syntactic sugar)

The question is a bit broad, but I can offer some advice and opinions based on my own experience.

  1. Is there any advantage to writing tests with describe/it over feature/scenario? (Besides syntactic sugar)

Not as far as I know. However, you may find some convenient test framework features are easier to implement in one scheme than another.
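To make the "syntactic sugar" point concrete: with capybara/rspec loaded, feature is essentially an alias for describe ..., type: :feature and scenario an alias for it, so the two specs below are equivalent (the page and selectors are invented for the example):

    # Plain RSpec style
    describe 'Signing in', type: :feature do
      it 'lets a registered user sign in' do
        visit '/sign_in'
        fill_in 'Email', with: 'user@example.com'
        click_button 'Sign in'
        expect(page).to have_content 'Welcome'
      end
    end

    # Capybara feature/scenario style; same behaviour, different keywords
    feature 'Signing in' do
      scenario 'lets a registered user sign in' do
        visit '/sign_in'
        fill_in 'Email', with: 'user@example.com'
        click_button 'Sign in'
        expect(page).to have_content 'Welcome'
      end
    end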


  2. By using Capybara's feature/scenario, does it slow down the test suite? (Compared to using RSpec's keywords)

The keywords themselves will not be a large factor in processing speed. The kind of web driver you use, and how the application under test is hosted, will have a much larger impact.
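For example, Capybara's default rack_test driver talks to the Rack application in-process, which is fast but cannot execute JavaScript, while a real browser driver is much slower. A typical arrangement looks like this (a sketch for an RSpec setup; adjust driver names to whatever you have installed):

    # spec/spec_helper.rb (sketch)
    require 'capybara/rspec'

    # Fast in-process driver for the bulk of the suite (no JavaScript)
    Capybara.default_driver = :rack_test

    # Real browser only for examples tagged js: true
    Capybara.javascript_driver = :selenium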


  3. What exactly are the tests I am writing (as explained in the code block)? Acceptance, unit, a combination?

I would call them acceptance tests. However, there is not always a clear dividing line, and you need to look at how the tests will be run, and how they will be used in your development process.

A mature development pipeline may have two or three separate test suites used for different purposes, probably implemented using different test frameworks. For instance, you might want a set of very fast tests (usually unit tests) that runs as a quick automated check on every new code commit.
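With RSpec, one common way to carve a single code base into such suites is metadata tags; a sketch (the :slow tag is my own convention, not an RSpec built-in):

    # Tag the expensive, browser-driven specs...
    describe 'Checkout flow', :slow do
      it 'completes an order' do
        # ...slow acceptance steps here...
      end
    end

The suite can then be split at the command line:

    # Commit stage: run everything except the slow specs
    rspec --tag ~slow

    # Nightly or release stage: run the full suite
    rspec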


  4. Would writing tests like the above alone achieve higher coverage? (Our next goal is >80%)

The tests can exercise any user-accessible feature of the application, and any of your own code that is exercised can be considered covered. It is likely you can get higher than 80% C0 coverage (Ruby coverage tools don't usually provide deeper details such as C1), provided you do not have a lot of utility scripts or other code that is not user-accessible.
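As an aside, Ruby 1.9 ships a stdlib Coverage module that reports exactly this kind of C0 (line) data, and tools such as SimpleCov build on top of it. A minimal sketch (the required file is invented for the example):

    require 'coverage'

    Coverage.start                    # must run before the code under test is loaded
    require_relative 'lib/calculator' # invented file, standing in for your app code

    # ... exercise the code here, e.g. by running your tests ...

    # Returns e.g. { ".../lib/calculator.rb" => [1, 2, 0, nil] }:
    # one execution count per line, nil for non-executable lines.
    p Coverage.result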


I suspect using a specific test framework's keywords will have minimal impact. However, using Capybara to acceptance test the application via the web interface is going to be much slower than running lower-level unit tests of individual modules.

Test speeds can vary by orders of magnitude. For tight unit tests around a fast module, I might expect to run 100 examples per second. On a web development project, I typically see 10-20 examples per second for unit tests, but perhaps 1 example per second for acceptance tests (roughly the ballpark you are seeing here). When using Capybara via a browser driver against a hosted copy of a site, I might expect one example every 10 seconds, so a suite of over 100 such tests is reserved for critical-path checks, e.g. run against release candidates.

Unit testing for shell scripts

UPDATE 2019-03-01: My preference is bats now. I have used it for a few years on small projects. I like the clean, concise syntax. I have not integrated it with CI/CD frameworks, but its exit status does reflect the overall success/failure of the suite, which is better than shunit2 as described below.
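For a taste of the syntax, a minimal bats file looks like this (greet.sh is an invented script under test):

    #!/usr/bin/env bats

    @test "greet prints the given name" {
      run ./greet.sh World            # run captures exit status and output
      [ "$status" -eq 0 ]             # $status is the command's exit code
      [ "$output" = "Hello, World" ]  # $output is its combined stdout/stderr
    }

Running the file with bats exits nonzero if any test fails, which is what makes CI integration straightforward.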


PREVIOUS ANSWER:

I'm using shunit2 for shell scripts related to a Java/Ruby web application in a Linux environment. It's been easy to use, and not a big departure from other xUnit frameworks.
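For context, a minimal shunit2 test file looks roughly like this (the path to the shunit2 script is an assumption about your installation):

    #!/bin/sh

    testEquality() {
      assertEquals "values should match" 1 1
    }

    testArithmetic() {
      assertTrue "34 should exceed 12" "[ 34 -gt 12 ]"
    }

    # Source shunit2 last so it discovers and runs the test* functions;
    # the relative path is an assumption about where shunit2 lives.
    . ./shunit2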

I have not tried integrating with CruiseControl or Hudson/Jenkins, but in implementing continuous integration via other means I've encountered these issues:

  • Exit status: When a test suite fails, shunit2 does not use a nonzero exit status to communicate the failure. So you either have to parse the shunit2 output to determine pass/fail of a suite (see the wrapper sketch after this list), or change shunit2 to behave as some continuous integration frameworks expect and communicate pass/fail via the exit status.
  • XML logs: shunit2 does not produce a JUnit-style XML log of results.
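The workaround I use for the exit-status issue is a small wrapper that inspects the report text. A sketch (run_tests.sh is an invented suite entry point, and it assumes the summary line contains the word FAILED, which is what my copy of shunit2 prints on failure):

    #!/bin/sh
    # Run the suite, keeping the full output for the CI log...
    ./run_tests.sh | tee shunit2.log

    # ...then convert the textual verdict into a proper exit status.
    if grep -q 'FAILED' shunit2.log; then
      exit 1
    fi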

