

Testing Our Shared ESLint Configs

It started out as a string of Fiddler on the Roof jokes about our favorite JavaScript linting tool:

Then Nicholas C. Zakas took me seriously:

When the creator of ESLint calls you out, you have to oblige, and so I did:

We then joked about the usefulness of the formatter:

All of this was fun to play around with, but getting a chance to see ESLint’s custom formatters in action spurred an idea for an actual improvement to InVision’s shared ESLint configs.

Stepping Back

When we first put our ESLint configs together, we included a basic test to verify everything worked as expected. To do this, we created a sample file (we'll name it pass.js) showing how we'd like our code to look, and then ran that file through our ESLint rules. It was nothing more complicated than running eslint.
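To give a rough sense of the idea, a stripped-down pass.js might look something like this (an illustrative sketch only; the real file exercises far more of our rules, and the exact style depends on the config):

// examples/pass.js -- illustrative sketch, not our actual file
var greetings = [ "hello", "world" ];

greetings.forEach( function( greeting ) {
	console.log( greeting );
} );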

If the file failed, it meant one of our rules was misconfigured. This check proved helpful when upgrading ESLint, especially when we transitioned to 1.x.

Having this check was great, but I couldn't help but remember a quote from Edsger W. Dijkstra that a former co-worker once shared with me:

“Testing shows the presence, not the absence of bugs.”

False Positives

There was one distinct issue with our test suite: a passing result looks exactly the same as the result of the test not running at all.

In other words, when we run ESLint, we're looking for a "0 errors/warnings" result, which can also happen if our rules are misconfigured (e.g. they aren't catching errors they should).

To work around this, we needed a "negative" test: a way to say "look at this file and validate that errors are caught." So we created a doppelgänger of our pass.js file called fail.js.

Fail.js

This file contains code that should fail an ESLint check. I have to admit, it was kind of fun to write. Mismatched indentation, unused variables; we broke all the rules.
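To give a flavor of it, a trimmed-down sketch might look like this (the real fail.js is longer and breaks many more rules than shown here):

// examples/fail.js -- trimmed-down sketch of deliberately broken code
var unusedVariable = "never read";

function greet( name ) {
	  console.log( 'hello ' + name )
}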

The only problem was that the errors in the file would mean the test run would report as a failure. To invert the results, I found a clever little command line hack:

eslint examples/fail.js -f compact && echo 'Error: failures not caught' && exit 1 || exit 0

This script essentially reverses the status code returned from ESLint, exiting with a failure if ESLint passes, and passing if ESLint fails. It’s a bit backwards, but it serves our purpose of validating that ESLint is catching errors.

Getting Specific

This solution worked for a while, but it was pretty fragile. It didn't actually count the number of errors returned, only that at least one error was caught. So even if only 9 out of 10 expected errors were caught, the test would still show as passing.

This is where we hark back to the conversation that opened this post. Seeing how a custom formatter can handle ESLint's results in any way we like gave me an idea for improving our failure test.

Instead of blindly accepting that any sort of failure is a good thing, what if we checked the specific number of errors and warnings against an expected result? Here’s what our new script looks like:

ERRORS=13 WARNINGS=1 eslint examples/fail.js -f ./failure-reporter.js

A little shorter, and a little less complex. We're passing in two environment variables that define the number of errors and warnings we expect to see. We also tell ESLint to use a custom formatter, which looks like this:

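// Note: ERRORS and WARNINGS arrive from process.env as strings, so the
// loose != comparison below handles the string-to-number coercion for us.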
function validateCountMatch(expected, actual, type) {
	if (expected != actual) {
		console.log("Expected " + expected + " " + type + " but found " + actual);
		return false;
	}

	return true;
}

module.exports = function( results ) {
	results = results || [ ];

	// accumulate the errors and warnings
	var summary = results.reduce( function( seq, current ) {
		seq.errors += current.errorCount;
		seq.warnings += current.warningCount;
		return seq;
	}, { errors: 0, warnings: 0 } );

	var errorCountMatches = validateCountMatch(process.env.ERRORS, summary.errors, "errors");
	var warningCountMatches = validateCountMatch(process.env.WARNINGS, summary.warnings, "warnings");

	if ( errorCountMatches && warningCountMatches ) {
		process.exit(0);
	} else {
		process.exit(1);
	}
};

The reporter is pretty straightforward. It tallies the number of errors and warnings found, then checks those counts against the expected values, logging a failure message if they differ. It then calls process.exit with the proper exit code to complete the script. (Note: we do have to call process.exit(0) explicitly; otherwise the run would exit with ESLint's failure code, triggered by the intentional errors in fail.js.)
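Putting the two halves together, the whole check can run as a single line, roughly like this (the exact wiring of our actual test script may differ):

eslint examples/pass.js && ERRORS=13 WARNINGS=1 eslint examples/fail.js -f ./failure-reporter.js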

Here’s what the output looks like if our tests don’t pass:

Expected 1 warnings but found 2

With our new formatter, we can now be more certain that updates don't unexpectedly disable rules or miss failures.

If you haven't checked out ESLint yet, hopefully the flexibility shown here convinces you to invest some time in exploring it.

Kevin is a Sr. Front End Engineer at InVision.
