Using AI Code Assistants to Generate Unit Tests and Maximize Coverage
October 06, 2025 by Otso Virtanen
Code coverage measures which parts of the source code have been executed by tests. The metric can count the functions, lines, or branches executed, or capture more complex criteria such as the independent conditions within a statement. Depending on your code coverage tooling, the tests can be a combination of unit tests, functional tests, or even manual tests. High code coverage is desirable, as it often indicates well-tested software. However, writing test cases to achieve high code coverage can be laborious and expensive.
In this article, we demonstrate by example how to use AI code assistants and code coverage tools together to generate unit tests that increase code coverage. We also give the AI code assistant the means to verify that the coverage metric has actually increased, by executing the newly generated unit tests and analyzing the resulting code coverage report.
The example in this article uses Microsoft’s GitHub Copilot in Visual Studio Code with Coco, a code coverage solution for C/C++, C#, Tcl and Qt’s QML. This setup is generic and can be adapted for other code coverage tools and AI code assistants, such as Cursor and Claude Code. Similarly, while our example generates CppUnit tests, other unit test frameworks like GoogleTest or Catch2 would also work.
The Setup
We are building on an existing Coco example documented here. The example implements a simple calculator built on an expression parser and comes with baseline unit tests. It includes detailed steps for using Coco, from basic code coverage to more advanced topics like patch analysis. Install Coco and the parser example source code from here, and see the developer documentation for more information.
New to code coverage? In a nutshell, you’ll be using a Coco-instrumented build, which acts like a bean counter: it keeps track of the lines executed and matches those lines with your test cases. Other than that, the functionality of the application doesn’t change, and you can use the same instrumented build for all your tests—including manual and functional tests. That’s it!
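In practice, producing an instrumented build usually means routing compilation through Coco's CoverageScanner compiler wrappers instead of the plain compiler. The Makefile fragment below is a hedged sketch: the wrapper name `csg++` and the `--cs-` option prefix come from Coco's tooling, while the file and target names are purely illustrative and not taken from the parser example.

```make
# Hypothetical Makefile fragment: build the same sources twice, once
# normally and once instrumented with Coco's CoverageScanner wrapper.
CXX          = g++
CXX_COVERAGE = csg++     # CoverageScanner wrapper around g++

SRC = parser.cpp calculator.cpp   # illustrative file names

release: $(SRC)
	$(CXX) -o calculator $(SRC)

coverage: $(SRC)
	# CoverageScanner options use the --cs- prefix; see the Coco
	# documentation for the available instrumentation settings.
	$(CXX_COVERAGE) -o calculator_coverage $(SRC)
```

The instrumented binary behaves like the regular one, so the same test suite can be run against either target; only the `coverage` build records execution data.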
We start from the parser_v4 version of the example, with Copilot and Coco set up and the ability to compile and run the parser and its accompanying unit tests.
The image below shows the setup we have in place for improving the unit tests: