Developers don’t really do proper TDD… do they??

Photo credit: Kevin Ku

As a long-time IT contractor and occasional trainer/consultant, I’ve worked on a lot of projects with a whole lot of different development teams. Within each company I’ve also talked to other teams (hey, I’m a friendly guy!) and as a result formed a pretty clear opinion about unit testing attitudes and practices in the industry. At least in London, that is.

So the admittedly ad-hoc conclusion I’ve reached is (drum roll)…

The majority of developers do write at least some unit tests, but don’t often follow a strict Test Driven Development/test-first approach.

So far, this is neither a good nor a bad thing… in my mind it’s the existence and effectiveness of the tests that really counts. Usually, this means tests that cover failure scenarios as well as “happy path” success scenarios.

But that’s the other thing I’ve noticed…

With some admirable exceptions, the vast majority of tests have tended to be “happy path” tests.

Very few of the tests I’ve seen cover “unhappy path” scenarios — i.e. how does the system react when things go wrong?
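To make the distinction concrete, here's a minimal sketch in Python (the `divide` function and both tests are hypothetical, purely for illustration). Note that the happy-path test alone would still pass even if the zero-divisor handling were deleted:

```python
def divide(a, b):
    """Divide a by b, raising ValueError on a zero divisor."""
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b

def test_divide_happy_path():
    # "Happy path": valid input produces the expected result.
    assert divide(10, 2) == 5

def test_divide_unhappy_path():
    # "Unhappy path": how does the code react when things go wrong?
    try:
        divide(10, 0)
    except ValueError:
        pass  # the failure case behaves as specified
    else:
        raise AssertionError("expected ValueError for a zero divisor")

test_divide_happy_path()
test_divide_unhappy_path()
```

The second test is the one most project suites I've seen leave out.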

This problem can be attributed to any number of factors — schedule pressure, timeboxed activities, team policy or culture, project context (prototypes get a free pass), the priority of time-to-market, and so on. But one major factor, I believe, is today's emphasis on code coverage metrics ("what % of code do our tests cover?") rather than test coverage ("what proportion of our test plan — which in turn is based on the requirements, including non-functional and failure handling — do the tests cover?").

I believe this is simply because the former is more easily measured. Tool vendors’ marketing depts, start your engines!

I’ll refer to test coverage as requirements coverage in future articles, as I believe (as do others) that people tend to confuse test coverage with its stablemate code coverage.
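To illustrate the gap between the two numbers, here's a deliberately tiny Python sketch (the `parse_port` function and its requirements are hypothetical). One test achieves 100% line coverage, while the requirements coverage tells a very different story:

```python
def parse_port(text):
    """Parse a TCP port number from a string."""
    return int(text)

def test_parse_port():
    # Executes every line of parse_port: 100% code coverage.
    assert parse_port("8080") == 8080

test_parse_port()

# A coverage tool would report 100% for parse_port, but a test plan
# derived from the requirements would also demand, for example:
#   - non-numeric input ("abc") produces a clear error
#   - out-of-range values ("99999") are rejected
# Neither "unhappy path" is exercised (and the second isn't even
# handled in the code), so requirements coverage is far lower than
# the code coverage figure suggests.
```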

But let’s talk numbers…

Photo credit: Lukas Blazek

Relatively speaking, the conclusion I reached was pretty anecdotal. Viewed as a kind of informal survey, the sample size (“projects I have recently seen”), while large enough to form a reasonable opinion, isn’t nearly large enough to be considered a true or accurate picture.

So it makes sense to turn to academic research to find some meaningful quantitative insight on the subject of unit testing adoption…

Photo credit: Mohammad Metri (photo unceremoniously cropped by me)

Ah. Yes.

As it turns out, not a lot of up-to-date research has been done into unit testing practices.

But let’s work with what we’ve got…

Back in 2006, Per Runeson at Lund University conducted a survey on the subject. Of course, testing tools, understanding and strategies have improved massively since then; however, it’s still notable that the issue of unit identification (that is, deciding what exactly to write a test for) was called out as both a strength and a difficulty of unit testing.

More recently, a 2014 survey conducted by Ermira Daka and Gordon Fraser at the University of Sheffield paints a more up-to-date picture, though the sample size wasn’t huge. One fairly major take-home was this:

Respondents mostly agreed that the biggest difficulty with unit testing is identifying what code to test.

Even though both surveys are now rather dated, I still believe this is a problem today. Given that the “unit” under test can range from a single function to a whole class, just how much code to cover with one test isn’t always an obvious or intuitive choice. This difficulty could be another reason for the apparent paucity of “unhappy path” tests in projects.

(Another, perhaps more worrying, reason may be a lack of thought going into dealing with failure cases in the code itself. But that’s a story for another day).

But with all of that said, the above should still be taken with a grain of salt due to the small sample size overall, and the studies’ respective ages.

So how about some bigger, more recent numbers?

I know there are other surveys out there, e.g. this one from 2017 which is interesting in its own right but (for our purposes) doesn’t zero in on developer-written tests. If you know of some recent, more relevant studies, do post a link in the comments.

Meanwhile, in my own humble attempt to gather more up-to-date data, I’ve put together a survey on unit testing practices via good old Survey Monkey.

So here’s the invitation — if you’re a software developer, whether or not you’re a fan of unit tests, please do take a minute to complete the survey. You’ll be helping to form a more realistic view on unit testing attitudes and practices in 2019. And don’t forget to share the link with your network. Thanks!

Assuming the survey has a decent sized response, I’ll publish the results in a near-future article — and will update this article with the link.

It would also be great to hear about your experiences with teams’ attitudes toward unit testing — and acceptance testing etc — in the comments.
