What is Test Driven Development (TDD)?


Test Driven Development or TDD is a technical practice that is often used by people doing Agile software development. It is a different approach to both testing and development. It can be difficult and cumbersome at first, but many people believe that it has big payoffs in the long term. I’ll talk about what those payoffs are later. For now, I’ll explain what it is and how it works.

So what exactly is Test Driven Development (TDD)?

Test Driven Development is an approach to programming that insists that it should be the tests that drive the programming, not the programming that drives the tests. To help explain that, let’s think about the traditional approach to software development. Someone (a product owner, UX designer, business analyst, could be anyone) comes up with some sort of requirements or user story (although remember, I’d like it if we stopped talking about “requirements”). Then the developer writes code that meets the acceptance criteria of the user story. Then the code is merged and tested, maybe by a tester in a QA or SIT environment. The tester (or anyone, really) might find defects, in which case they would send the code back for someone to fix up.

The point here is that there is basically a cycle: write code, run tests, then fix up the code. Test Driven Development turns this cycle on its head. It says that we should do tests first, then write code, then fix up the code.

Write tests before you write the code

In Test Driven Development, you write and run the tests first, then you write the code (until the tests pass), then you refactor the code. You might be thinking “This doesn’t make any sense! You run tests before you even have the code? What’s the point?”. The point is that you write code from the perspective of getting the tests to pass.

It goes: Red, Green, Refactor

The basic cycle is Red, Green, Refactor. That means you start with writing some tests and they fail (Red). Then write code until the tests pass (Green). Then you refactor your code. That is, you improve it, without affecting the behaviour. And because you have tests in place, you can quickly run them again and be sure that your refactoring hasn’t broken anything.

You will need to refactor regularly as part of your cycle, because you start off by putting in just enough code to get all the tests to pass. That code isn’t likely to be good quality. Getting into the regular habit of refactoring (and having the ability to automatically test your changes at every step) will help improve your code and build good habits.
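
To make the cycle concrete, here is a minimal sketch of one Red-Green-Refactor loop using Python’s built-in unittest module (which the examples later in this post also use). The full_name function is made up purely for illustration.

import unittest

class FullNameTest(unittest.TestCase):

    def test_full_name(self):
        # Red: this test is written (and fails) before full_name even exists
        self.assertEqual(full_name("John", "Smith"), "John Smith")

# Green: the simplest code that makes the test pass
def full_name(first, last):
    return first + " " + last

# Refactor: tidy up without changing behaviour (for example, switch to an
# f-string), then re-run the test to confirm it still passes:
# def full_name(first, last):
#     return f"{first} {last}"

if __name__ == '__main__':
    unittest.main()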

What are the types of tests?

There is actually more than one type of test (and I’m here just talking about automated functional tests; there are different types of manual tests, and there are a whole bunch of types of non-functional tests, but they are for another blog post). Most people break them down into roughly three types: Unit tests, Integration tests, and Acceptance Tests (also sometimes called Functional Tests or End to End Tests). These types of tests form the Test Pyramid.

(Image: the test pyramid)
As you can see, the idea is that most of your tests should be unit tests, with a smaller number of integration tests, and a smaller number still of acceptance tests. Google recommends roughly a 70 / 20 / 10 split in this article on End-to-end Tests.

Unit Tests

Unit tests are isolated to a specific component of code. You can think of it as code “testing itself”. A unit test should have no dependency on any other component or system. If your unit test relies on a response from a database or an API, it is not a unit test, it is an integration test. An example of a unit test might be a test to check the output of a constructor. Here is an example in Python:

import unittest

class ConstructorTest(unittest.TestCase):

    def test_create_customer(self):
        # Create a customer, then check each property was assigned correctly
        test_customer = Customer("12345", "John Smith", "Active")
        self.assertEqual(test_customer.id, "12345")
        self.assertEqual(test_customer.name, "John Smith")
        self.assertEqual(test_customer.status, "Active")

class Customer:

    def __init__(self, id, name, status):
        self.id = id
        self.name = name
        self.status = status

if __name__ == '__main__':
    unittest.main()

This simple code consists of a Customer class that creates objects with three string properties: an ID, a name and a status. It also has one test class, ConstructorTest (inheriting from the TestCase class of the Python unittest library). Its test method creates a customer with some properties, then makes three assertions (via the assertEqual method inherited from TestCase) that the object has been created with the correct properties.

If you were doing Test Driven Development, you would start with just the ConstructorTest code, run it (it would fail, not being able to create a Customer object since no such class exists). Then you would create a plain Customer class with an empty constructor taking three parameters (the test would now fail the assertions since it wouldn’t have the right properties), then you would fill out the constructor so that Customer objects have the assigned properties.
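
To make those intermediate stages concrete, here is roughly what the Customer class would look like at each step (a sketch only; the finished version is the one shown above):

# Step 1: no Customer class at all. Running ConstructorTest fails with a
# NameError because Customer doesn't exist yet (Red).

# Step 2: a skeleton constructor that accepts the parameters but ignores them.
# The test still fails, because the expected properties are never assigned.
class Customer:

    def __init__(self, id, name, status):
        pass

# Step 3: assign the properties so the assertions pass (Green), then refactor.
class Customer:

    def __init__(self, id, name, status):
        self.id = id
        self.name = name
        self.status = status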

If you’re wondering whether there is much value in this unit test, you’re right, there isn’t. But it’s a start, and it will help avoid problems where someone makes breaking changes to the constructor further down the track. A more valuable unit test would be one where the constructor has some default values. You could then add a unit test that doesn’t pass in all the arguments, but checks that values are still assigned. Here’s an example of a second test method added to the ConstructorTest class that illustrates this (the Customer class constructor has also been changed to assign a default status of “Active”).

class ConstructorTest(unittest.TestCase):

    def test_create_customer_without_status(self):
        # No status argument is passed in, so the default should be applied
        test_customer = Customer("12345", "John Smith")
        self.assertEqual(test_customer.id, "12345")
        self.assertEqual(test_customer.name, "John Smith")
        self.assertEqual(test_customer.status, "Active")

class Customer:

    def __init__(self, id, name, status="Active"):
        self.id = id
        self.name = name
        self.status = status

If you wanted to then save customer objects to a database via an ORM, and test that they were saved, you would need an integration test, since a database is a separate component beyond the Customer class.
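
This post’s Customer class doesn’t use an ORM, but here is a rough sketch of what such an integration test could look like using Python’s built-in sqlite3 module. The table layout and the save_customer / load_customer helpers are invented for illustration, and a real project would more likely go through an ORM such as SQLAlchemy.

import sqlite3
import unittest

# (assumes the Customer class from above is defined in this module)

def save_customer(conn, customer):
    # Hypothetical helper that persists a Customer to the database
    conn.execute("INSERT INTO customers (id, name, status) VALUES (?, ?, ?)",
                 (customer.id, customer.name, customer.status))

def load_customer(conn, customer_id):
    # Hypothetical helper that reads a Customer back out of the database
    row = conn.execute("SELECT id, name, status FROM customers WHERE id = ?",
                       (customer_id,)).fetchone()
    return Customer(*row)

class CustomerPersistenceTest(unittest.TestCase):

    def setUp(self):
        # An in-memory database keeps the test fast and self-contained
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE customers (id TEXT, name TEXT, status TEXT)")

    def tearDown(self):
        self.conn.close()

    def test_customer_round_trip(self):
        save_customer(self.conn, Customer("12345", "John Smith", "Active"))
        loaded = load_customer(self.conn, "12345")
        self.assertEqual(loaded.name, "John Smith")
        self.assertEqual(loaded.status, "Active")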

Don’t forget refactoring! There are some quick improvements that you could make to both the tests (defining string constants instead of repeating string literals for example) and the Customer class (throwing exceptions if someone tries to create a Customer without an ID or name — that would also be a good example of a negative test actually!). After this refactoring (and possible further tests), you can quickly run your tests and make sure your changes haven’t broken anything else.
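
As a sketch of what part of that refactoring might look like, continuing the example above (the exact validation rule and constant names are just an illustration):

class Customer:

    def __init__(self, id, name, status="Active"):
        # Refuse to create a customer without the essentials
        if not id or not name:
            raise ValueError("A customer needs both an ID and a name")
        self.id = id
        self.name = name
        self.status = status

class ConstructorTest(unittest.TestCase):

    CUSTOMER_ID = "12345"        # string constants instead of repeated literals
    CUSTOMER_NAME = "John Smith"

    def test_create_customer_without_id_fails(self):
        # A negative test: creating a customer with no ID should now raise
        with self.assertRaises(ValueError):
            Customer("", self.CUSTOMER_NAME)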

Integration Tests

Integration tests are tests involving two or more separate components in a system. For example, you could write a test that checked that the Customer objects you created have been saved correctly to a database. The boundaries of the integration tests could be inside your application (i.e. testing your application’s web-tier versus the application’s database), or outside your application or even your organisation (i.e. testing the connection between an edge API and a platform, or between a web application and an external API hosted by another company). Some components may have responses stubbed or mocked (i.e. fake pre-canned responses rather than real live responses).
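
For the stubbed or mocked case, Python’s unittest.mock module is one common option. Here is a rough sketch that stubs out a call to a hypothetical external customer API; the URL and the fetch_customer function are made up, and the requests library is assumed to be installed.

import unittest
from unittest.mock import patch

import requests  # third-party HTTP library, assumed to be installed

# (assumes the Customer class from earlier is defined in this module)

def fetch_customer(customer_id):
    # Code under test: calls an external (hypothetical) customer API
    response = requests.get(f"https://api.example.com/customers/{customer_id}")
    data = response.json()
    return Customer(data["id"], data["name"], data["status"])

class FetchCustomerTest(unittest.TestCase):

    @patch("requests.get")
    def test_fetch_customer_with_stubbed_api(self, mock_get):
        # Replace the real HTTP call with a fake pre-canned response
        mock_get.return_value.json.return_value = {
            "id": "12345", "name": "John Smith", "status": "Active"}
        customer = fetch_customer("12345")
        self.assertEqual(customer.name, "John Smith")
        mock_get.assert_called_once_with(
            "https://api.example.com/customers/12345")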

Acceptance Tests

Acceptance tests (also known as functional tests or end-to-end tests) are the final layer, and test a system all the way from the interface layer and through all components. For a website or web application, these are often done using a browser testing tool such as Selenium. Acceptance tests can be seen as the “ultimate” or “final” tests, since they are testing the real system in its entirety as customers use it. However, there are some big problems with acceptance tests:

  • they are typically very slow. Unit and Integration tests often run in a few milliseconds. A full suite of acceptance tests can take hours to run.
  • they can be “flaky” and provide lots of false positives or false negatives, due to the large number of intermediate systems and dependencies involved, and the complexity of the client applications used e.g. browsers or mobile apps
  • they are fragile and difficult to maintain, since any changes to the UI of a system can break the tests (even if the UI components and business logic are all still sound).

If you are doing proper Test Driven Development and running your tests frequently, you can’t afford to have a huge suite of slow, flaky acceptance tests. Hence the testing pyramid: focus more on the unit and integration tests.
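
Still, a small number of acceptance tests is worth having. To give a feel for what one looks like, here is a very rough Selenium sketch for a hypothetical login page; the URL, element IDs and page title are all invented, and you would need the selenium package plus a browser driver installed for it to run.

import unittest

from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginAcceptanceTest(unittest.TestCase):

    def setUp(self):
        # Starts a real browser, which is part of what makes acceptance tests slow
        self.driver = webdriver.Chrome()

    def tearDown(self):
        self.driver.quit()

    def test_customer_can_log_in(self):
        driver = self.driver
        driver.get("https://example.com/login")  # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # After logging in, the customer should land on their inbox
        self.assertIn("Inbox", driver.title)

Even this tiny test depends on a running environment, a real browser and stable element IDs, which is exactly why acceptance tests end up slow and fragile compared to the unit tests above.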

How to do Test Driven Development: the details

You might still be confused at this point about the actual workflow for Test Driven Development. There are two main approaches, and I’ll describe each of them here.

Unit and Integration Test Driven Development aka TDD

The original and “classic” TDD focuses mainly on unit and integration tests. The core activity is the creation and running of unit tests, but you start with the layer above: the integration test. Then you get the integration test passing via code changes, and those code changes need unit tests around them (which are of course written first). Then you jump back up a level and check that the integration test passes. So the workflow goes something like this:

  1. Write an integration test
  2. Run it, watch it fail
  3. Think of some code that would allow the integration test to pass
  4. Write a unit test for that code. Watch it (the unit test) fail
  5. Write just enough code for the unit test to pass
  6. Run the integration test again, it should pass
  7. Refactor, run tests to make sure nothing broke
  8. Do some more refactoring, run some more tests

Done!

There is actually an optional further step: think of some more tests. Just because you’ve got a handful of tests to pass doesn’t mean you’re done testing. What about boundary cases? Unhappy paths? Exception handling? Concurrency and thread-safety issues? There are usually some more tests you can think of beyond just getting a basic integration test to pass. You can think of these extra tests as part of your refactoring, however, because they are extra tests on top of the original functionality or user story you started with, rather than a whole new user story.

Sometimes you add these extra tests and code at the time you build the original feature, and sometimes you go back and add them later. Going back and adding extra tests and code is part of maintaining a well-crafted codebase and paying down technical debt. Every software team needs to do it.

Acceptance Test Driven Development aka ATDD

There is another more modern approach: ATDD or Acceptance Test Driven Development. This approach adds an extra layer on top of the process above: acceptance tests. So you start with an acceptance test and watch it fail. For example, click on an Inbox link on a webpage (with nothing behind it). Then you think of some integration-level code that would allow that to pass: a service call to retrieve messages. Then you write an integration test for that call, which fails. Now you need to build the implementation, which of course starts with a unit test that fails. Then you go through the cycle: code until the unit test passes, refactor, move on. Then jump back up, get the integration test to pass, then jump back up, get the acceptance test to pass.
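
As a rough sketch of the middle (integration) and bottom (unit) layers for that hypothetical inbox feature (the messages service, its URL and the payload format are all invented here):

import unittest

import requests  # third-party HTTP library, assumed to be installed

def parse_messages(payload):
    # Unit-level code: turn a raw service payload into message subjects
    return [item["subject"] for item in payload.get("messages", [])]

def get_inbox(customer_id):
    # Integration-level code: call the (hypothetical) messages service
    response = requests.get(f"https://test.example.com/inbox/{customer_id}")
    return parse_messages(response.json())

class ParseMessagesUnitTest(unittest.TestCase):
    # Inner loop: written first to drive the implementation, no dependencies

    def test_parse_messages(self):
        payload = {"messages": [{"subject": "Welcome"}, {"subject": "Invoice"}]}
        self.assertEqual(parse_messages(payload), ["Welcome", "Invoice"])

class GetInboxIntegrationTest(unittest.TestCase):
    # Middle loop: needs a test instance of the messages service to be up,
    # so it is slower and stays red until the real service call works

    def test_get_inbox_returns_a_list_of_subjects(self):
        self.assertIsInstance(get_inbox("12345"), list)

# Outer loop: the acceptance test that clicks the Inbox link in a browser
# sits on top of all this, much like the Selenium sketch earlier.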

This approach is a bit slower and more complex than the traditional approach, but it has its advantages:

  • You are guaranteed to build up a small set of acceptance tests as you go. Remember you don’t want a lot of these, but having some is important.
  • It encourages teams to start off by thinking of the user perspective when implementing stories, rather than the code perspective. That is, it moves the focus away from implementation details (unit tests) and towards the business / customer experience (acceptance tests).

What about Behaviour Driven Development aka BDD?

BDD is similar to ATDD but taken a step further. It is a discipline that attempts to produce a complete set of “executable specifications” for a system. Any development work begins with defining and implementing functional / acceptance tests in a human-readable, customer-focused, domain-specific language. This often takes the form of GWT (“Given… When… Then…”) acceptance criteria, for example: Given a customer with no messages, When they open their Inbox, Then they see an empty inbox notice. While writing acceptance criteria is nothing new to many agile practitioners, BDD turns these criteria directly into automated tests via a language like Gherkin (part of the Cucumber BDD testing framework).

Behaviour Driven Development requires a significant mind shift and a new approach to software development. I would only recommend it for more mature agile teams with a strong grounding in Test Driven Development.

Test Driven Development and Agile

Some of you may be wondering if there is a relationship between TDD and Agile. There certainly is! In fact, the people from Extreme Programming claim that you’re not really Agile if you’re not doing some form of Test Driven Development. I’m not sure if I would go quite that far, but it is certainly worth considering.

Test Driven Development enables short development cycles

Because TDD gives you a suite of fast automated unit tests, it enables developers to move quickly in short bursts. You can make a small change, build, and check within a few seconds whether your change has broken anything. If it has, you can quickly fix it and just as quickly confirm that your fix has worked and hasn’t broken anything further.

Test Driven Development encourages thinking about tests and testability

Testing and quality are enormously important in Agile, especially because of the rate of change to the codebase. TDD gets everyone (not just developers, but testers, product owners, analysts, everyone) thinking about tests, test suites, test coverage and the testability of code from the very beginning, even before anyone writes code. The more testable the code is, the better the codebase is likely to be.

Test Driven Development enables and encourages refactoring

Refactoring is important in general but is especially important to Agile. That is because there are often a large number of teams making frequent changes to a codebase that grew organically without much upfront design. Refactoring helps keep the codebase healthy and technical debt under control. TDD not only enables easy refactoring, by ensuring that anyone can run a large suite of regression tests on demand, it actively encourages it (in fact, it mandates it). If you’re not refactoring, you’re not doing Test Driven Development.

Conclusion

I hope you enjoyed this article on Test Driven Development and found it helpful. It ended up being a lot longer than I originally planned, but that’s not really a bad thing! I’m also quite new to TDD and am learning more every day. I’m going to update this post as I learn more so check back every now and then and see what I’ve changed. And if you have any feedback I’d love to hear about it in the comments!
