
muttest 0.2.0: More Mutators, Better Reporting, and Parallel Execution

[This article was first published on jakub::sobolewski, and kindly contributed to R-bloggers].

Your tests pass. Coverage is high. Everything looks fine, until someone finds a bug in production that your tests didn't catch, all because of a weak assertion.

Code coverage tells you which lines ran. It says nothing about whether those lines are actually tested. You can delete every assertion in your test suite, run covr, and still see 100%. Coverage is a measure of execution, not correctness. That gap is exactly what {muttest} was built to close — and 0.2.0 makes it much more capable than the previous version.

📝 See the full changelog here.

What Is Mutation Testing?

Mutation testing asks a harder question than coverage: if this code were subtly wrong, would your tests notice?

It works by making small, deliberate changes to your source code — swapping > for >=, flipping TRUE to FALSE, replacing && with || — and then running your test suite against each modified version. Each modified version is called a mutant. If your tests fail, the mutant is killed: your tests noticed the change. If your tests pass, the mutant survived: your tests are blind to that kind of bug.
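The kill-or-survive loop can be sketched in a few lines of plain R. This is only an illustration of the idea, not muttest's API; the function and suite below are made up for the example:

```r
# Hand-rolled illustration of the mutation-testing loop, not muttest's API.
is_positive <- function(x) x > 0        # original code
mutant      <- function(x) x >= 0       # one mutant: ">" swapped for ">="

# A tiny "test suite": returns TRUE when all assertions pass
test_suite <- function(f) {
  isTRUE(f(5)) && identical(f(-5), FALSE)
}

test_suite(is_positive)  # TRUE: the suite passes on the original
test_suite(mutant)       # TRUE: the suite also passes, so the mutant survives
# An assertion at the boundary, identical(f(0), FALSE), would kill this mutant.
```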

The result is a mutation score:

$$\text{Mutation Score} = \frac{\text{Killed Mutants}}{\text{Total Mutants}} \times 100\%$$

Unlike coverage, this score reflects assertion quality, not just execution. A test suite full of expect_true(is.numeric(x)) checks will hit 100% coverage while missing every meaningful failure. Mutation testing exposes that.
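To see why, here is a minimal sketch of such a type-only check next to a value check. The discount function is hypothetical; the point is that the type check passes for the correct code and for an arithmetic mutant alike:

```r
# A type-only assertion cannot tell correct code from a mutant.
discount <- function(price) price * 0.9   # original: 10% off
mutant   <- function(price) price / 0.9   # "*" mutated to "/"

type_check <- function(f) is.numeric(f(100))  # mimics expect_true(is.numeric(x))
type_check(discount)  # TRUE, passes
type_check(mutant)    # TRUE, also passes: the mutant survives

# A value assertion kills it:
value_check <- function(f) isTRUE(all.equal(f(100), 90))
value_check(discount)  # TRUE
value_check(mutant)    # FALSE: mutant killed
```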

Why You Should Care

Here is the canonical example. The function is_adult has a boundary condition:

# R/is_adult.R
is_adult <- function(age) {
  age >= 18
}

And these tests give 100% coverage:

# tests/testthat/test-is_adult.R
test_that("is_adult returns TRUE for adults", {
  expect_true(is_adult(25))
})

test_that("is_adult returns FALSE for minors", {
  expect_false(is_adult(10))
})

Both tests pass. Both would still pass if >= were accidentally replaced with >. The boundary value 18 is never tested, so a mutant at that boundary survives:

# R/is_adult.R — mutant 1: ">=" → ">"
is_adult <- function(age) {
  age > 18
}

Imagine this bug makes it to production. An 18-year-old user tries to sign up, and the system rejects them. The bug is real, but your tests never saw it coming.

Running muttest exposes this immediately:

library(muttest)

plan <- muttest_plan(
  source_files = "R/is_adult.R",
  mutators = comparison_operators()
)
muttest(plan)

The progress table shows one survivor. The fix is a single test:

test_that("is_adult returns TRUE at the boundary age", {
  expect_true(is_adult(18))  # kills the >= → > mutant
})

This surviving mutant is not a problem to fix — it’s a specification you forgot to write.

The LLM Test Problem

Many developers now use LLMs to generate tests. Who likes to write tests themselves anyway?

LLMs are fast and produce syntactically correct code, but they tend to cover only the obvious cases, miss boundary values, or test incidental properties of the code rather than its behavior. The is_adult test suite above is exactly what a language model might produce: structurally fine, semantically incomplete.

Mutation testing gives you an objective signal for how strong tests actually are, whether you wrote them yourself or they were generated by an LLM. A low mutation score doesn’t mean the LLM did a bad job — it means you now know exactly where to strengthen the assertions. LLM-generated tests need external validation just as much as human-written tests do.

muttest provides tools to help with this validation.


What’s New in 0.2.0

Expanded Mutator Library

The biggest addition in this release is a full roster of new mutators, organized into individual mutators and ready-made preset collections.

New individual mutators:

- operator(): swap one specific operator for another, e.g. operator("+", "-")
- boolean_literal(): replace one boolean literal with another, e.g. boolean_literal("TRUE", "FALSE")
- remove_negation(): drop a ! negation

New preset collections, each a single call that returns the full set of relevant mutators:

- condition_mutations()
- numeric_literals()
- boolean_literals()
- na_literals()
- string_literals()
- index_mutations()

The three operator presets from 0.1.0 are still there — arithmetic_operators(), comparison_operators(), logical_operators() — and now they have company.

A practical starting configuration covers most of what you’d want to catch in business logic:

plan <- muttest_plan(
  source_files = "R/my_file.R",
  mutators = c(
    arithmetic_operators(),
    comparison_operators(),
    logical_operators(),
    condition_mutations(),
    numeric_literals(),
    list(remove_negation())
  )
)

Layer in boolean_literals(), na_literals(), string_literals(), or index_mutations() based on what your code actually does.

Mutators Are Now Parametrized

Individual mutators accept configuration arguments. operator("+", "-") and boolean_literal("TRUE", "FALSE") let you define exactly which token to replace and with what — so you can express the mutations that matter for your domain without writing a custom mutator from scratch. The Mutator base class is also now exported for cases where you want to go further and build an entirely custom mutator.
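As a sketch, a plan built from parametrized mutators might look like this. The file name is hypothetical, and the constructor signatures follow the calls shown above:

```r
library(muttest)

plan <- muttest_plan(
  source_files = "R/pricing.R",          # hypothetical file
  mutators = list(
    operator("+", "-"),                  # mutate every + into -
    operator("*", "/"),                  # and every * into /
    boolean_literal("TRUE", "FALSE")     # flip TRUE literals to FALSE
  )
)
muttest(plan)
```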

Survived Mutants Are Now Reported

The ProgressMutationReporter previously showed you only killed and total mutant counts. In 0.2.0, it now reports survived mutants — the ones your tests missed.

This is the signal that matters. Survivors are not noise; each one represents a real gap in your test suite. Seeing them surfaced directly in the progress output makes the feedback loop tighter: run muttest, read the survivors, add a test, repeat.

i Mutation Testing
  |   K |   S |   E |   T |   % | Mutator  | File
v |   1 |   0 |   0 |   1 | 100 | > → <    | shipping.R
x |   1 |   1 |   0 |   2 |  50 | > → >=   | shipping.R
-- Survived Mutants -----------------------------------------------
shipping.R  > → >=
2-   if (weight_kg > 5) 15.00 else 5.00
2+   if (weight_kg >= 5) 15.00 else 5.00
-- Results --------------------------------------------------------
[ KILLED 1 | SURVIVED 1 | ERRORS 0 | TOTAL 2 | SCORE 50.0% ]

Timeouts and Improved Error Handling

Mutation testing works by running your test suite once per mutant. Some mutations produce code that hangs — an infinite loop, a blocking call, a computation that never completes. In 0.1.0 that would stall your entire run.

In 0.2.0, muttest() supports per-mutant timeouts. Set a timeout and any mutant whose test run exceeds it is marked as errored. The rest of the run continues unaffected.
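For instance, a run with a per-mutant limit might look like this. The argument name is an assumption here; check ?muttest for the exact signature:

```r
# Assumption: a per-mutant timeout in seconds; see ?muttest for the real name.
# Any mutant whose test run exceeds the limit is marked as errored.
muttest(plan, timeout = 60)
```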

Error handling in general has been improved. When test execution fails unexpectedly, errors are now captured and reported cleanly rather than surfacing as unhandled conditions that stop the whole run. This makes mutation testing more robust in real projects where test environments are not always perfectly controlled.

Parallel Execution

The 0.1.0 release ran mutants sequentially. In large files with many mutants, that adds up. muttest() now supports parallel execution with {mirai} under the hood: mutants can be run concurrently across multiple workers, cutting run time on larger repositories.
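A sketch of a parallel run, assuming the standard {mirai} pattern of starting daemons before the call:

```r
library(mirai)
library(muttest)

daemons(4)      # start four background {mirai} workers
muttest(plan)   # mutants are distributed across the workers
daemons(0)      # shut the workers down afterwards
```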


Getting Started

Install from CRAN:

install.packages("muttest")

Pick one file with meaningful logic — branching, comparisons, arithmetic. Define a plan:

library(muttest)

plan <- muttest_plan(
  source_files = "R/your_file.R",
  mutators = comparison_operators()
)

muttest(plan)

Read the output. Find the survivors. Add the tests they imply. Repeat.

Start with one file and one mutator preset. Aim for a meaningful score improvement each iteration rather than chasing 100% immediately. A score of 80%+ on critical business logic is a strong starting target.

Try it on a file where you suspect the tests are weak. The survivors will tell you exactly what to add.


I’d Love to Hear From You

{muttest} is still young, and its features and interface may change. The new mutator library covers a wide range of patterns, but there are certainly mutations specific to your domain that aren't covered yet. If you run into a case where the right mutation is missing, an existing mutator behaves unexpectedly, or something in the output is hard to interpret, please open an issue on GitHub.

Feature requests are equally welcome. If there’s a kind of code change you’d want to test for and there’s no good way to express it yet, please drop an issue in the repository.
