The Big TDD Misunderstanding
💡 Rumor has it that the term "unit" in "unit test" originally referred to the test itself, not to a unit of the system under test. The idea was that the test could be executed as one unit and did not rely on other tests running first (see here and here).

The Evolution of TDD: A Historical Perspective
Test-Driven Development wasn't born in a vacuum. It emerged gradually through the practices of programmers who discovered that writing tests before implementation led to better design decisions.
Kent Beck is often credited with "rediscovering" TDD in the late 1990s while working on the Chrysler Comprehensive Compensation System. The core idea, however, is much older: NASA's Project Mercury in the 1960s used a test-first approach in which programmers wrote test cases before writing the code.
As software development matured through the 1970s and 1980s, testing remained largely an afterthought—something done after implementation. The waterfall model reinforced this sequential approach: requirements, design, implementation, verification (testing), and maintenance.
The agile movement of the early 2000s catalyzed TDD's popularity. Beck's inclusion of TDD as a core practice in Extreme Programming (XP) brought it mainstream attention. The publication of his book "Test-Driven Development: By Example" in 2002 codified the red-green-refactor cycle that became TDD's signature rhythm.
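To make that rhythm concrete, here is a minimal sketch of one red-green-refactor iteration in Python using the standard unittest module. The leap-year example and the function name is_leap_year are illustrative choices, not taken from Beck's book.

```python
import unittest


# Red: these tests are written first. Before is_leap_year exists,
# running them fails, which is the "red" step of the cycle.
def is_leap_year(year: int) -> bool:
    # Green: the simplest implementation that makes all tests pass.
    # The refactor step would then clean this up without changing behavior.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class LeapYearTest(unittest.TestCase):
    def test_year_divisible_by_four_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_year_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))


if __name__ == "__main__":
    unittest.main()
```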
Over time, interpretations of TDD diverged. Some developers emphasized isolation and mocking of dependencies (the "London school"), while others preferred more integrated tests (the "Detroit/Chicago school"). This divergence led to confusion about what constitutes a proper "unit" in unit testing.
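The practical difference between the two schools is easiest to see in code. Below is a minimal sketch, assuming a hypothetical PriceService that depends on a TaxCalculator: the London-style test replaces the collaborator with a mock and verifies the interaction, while the Detroit/Chicago-style test exercises the real objects together and checks only the result.

```python
import unittest
from unittest.mock import Mock


class TaxCalculator:
    def tax_for(self, amount: float) -> float:
        return amount * 0.19  # flat 19% tax, purely illustrative


class PriceService:
    def __init__(self, tax_calculator: TaxCalculator):
        self.tax_calculator = tax_calculator

    def gross_price(self, net: float) -> float:
        return net + self.tax_calculator.tax_for(net)


class LondonStyleTest(unittest.TestCase):
    def test_gross_price_with_mocked_collaborator(self):
        # London school: isolate PriceService by mocking its dependency
        # and asserting on the interaction with the collaborator.
        calculator = Mock(spec=TaxCalculator)
        calculator.tax_for.return_value = 19.0
        service = PriceService(calculator)
        self.assertEqual(service.gross_price(100.0), 119.0)
        calculator.tax_for.assert_called_once_with(100.0)


class DetroitStyleTest(unittest.TestCase):
    def test_gross_price_with_real_collaborator(self):
        # Detroit/Chicago school: use the real collaborator and
        # assert only on the observable outcome.
        service = PriceService(TaxCalculator())
        self.assertEqual(service.gross_price(100.0), 119.0)


if __name__ == "__main__":
    unittest.main()
```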
The misunderstanding persists today: many developers believe that TDD mandates testing components in isolation, when its original focus was incremental design through executable specifications.