30 Best Practices for Software Development and Testing

Software development best practices

Joining a new company with established programming practices and culture can be intimidating. When I joined the Ansible team, I decided to compile a list of software engineering principles and practices that I have learned and strive to follow. These principles, although not definitive or exhaustive, should be applied with wisdom and flexibility.

As a passionate advocate for testing, I strongly believe that good testing practices not only ensure a minimum quality standard often lacking in many software products but also guide and shape development itself. Many of these principles are related to testing practices and ideals. While some of them are Python-specific, most are applicable to any programming language. For Python developers, it’s essential to refer to PEP 8 for programming style and guidelines.

It’s worth noting that programmers are often opinionated, and strong opinions usually reflect great passion. Feel free to disagree with these points, as we can engage in discussions and debates in the comments.

Development and Testing Best Practices

1. YAGNI: “You Ain’t Gonna Need It”

Don’t write code that you think you might need in the future but don’t need yet. Coding for imaginary future use cases often leads to dead code or the need for rewriting because the envisioned use case always turns out to be slightly different. In code reviews, I question the presence of code meant for future use. Design APIs to permit future use cases, but only if necessary. For agile programming, I highly recommend Kent Beck’s “Extreme Programming Explained.”

2. Tests Don’t Need Testing

Tests should exercise the code you write, not other people’s code or external libraries, unless that’s absolutely necessary. The one exception: infrastructure, frameworks, and libraries used for testing do need tests of their own, since everything else depends on them. This keeps your test suite focused and efficient.

3. Extract Reusable Code after the Third Time

When you find yourself writing the same piece of code for the third time, it’s a clear sign that it should be extracted into a general-purpose helper function. By breaking out reusable code, you gain more clarity about the general-purpose problem you’re solving. Remember to write tests for these helper functions as they become reused.
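As a minimal sketch (the names here are hypothetical, not from the article), the same cleanup logic that appeared at three call sites gets pulled into one tested helper:

```python
# Hypothetical example: the same string-cleanup logic was written three
# times, so it is extracted into a single general-purpose helper.
def normalize_name(raw: str) -> str:
    """Strip surrounding whitespace, collapse internal runs of spaces,
    and lowercase the result."""
    return " ".join(raw.split()).lower()

# Call sites that previously duplicated the logic now share the helper.
def greet(user_input: str) -> str:
    return f"Hello, {normalize_name(user_input)}!"
```

Once the helper exists, it gets its own unit tests, and every call site benefits from any fix made to it.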

4. Design APIs for Simplicity and Flexibility

For both external-facing and object APIs, prioritize simplicity for common cases while allowing for complexity and flexibility in more advanced use cases. Start with a design that requires minimal configuration and parameterization. Add options or additional methods as needed for more complex scenarios.
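One way to sketch this in Python (an illustrative function, not a real API) is keyword-only parameters with sensible defaults: the common case needs a single argument, while advanced callers opt into complexity explicitly:

```python
# Illustrative API sketch: simple by default, flexible on request.
def make_request_config(url, *, timeout=10.0, retries=0, headers=None):
    """Build a request configuration dict with sensible defaults."""
    return {
        "url": url,
        "timeout": timeout,
        "retries": retries,
        "headers": dict(headers or {}),
    }

simple = make_request_config("https://example.com")              # common case
tuned = make_request_config("https://example.com",
                            timeout=2.5, retries=3)              # advanced case
```

The keyword-only marker (`*`) keeps the advanced options from being passed positionally, so the simple call stays simple to read.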

5. Fail Fast and Clearly

Check input and fail promptly on nonsensical values or invalid state, using exceptions or error responses to communicate the exact problem to the caller. At the same time, don’t be overly strict: avoid type-checking every argument when duck typing will do, and leave room for uses of your code you didn’t anticipate.
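A small sketch of the idea (a hypothetical `percentile` helper): validate the inputs up front and raise with a message that names the exact problem, instead of letting bad input surface as a confusing error deeper in the code:

```python
def percentile(values, q):
    """Return the q-th percentile (nearest-rank) of a sequence.

    Fails fast on nonsensical input rather than returning garbage later.
    """
    if not values:
        raise ValueError("values must be a non-empty sequence")
    if not 0 <= q <= 100:
        raise ValueError(f"q must be between 0 and 100, got {q!r}")
    ordered = sorted(values)
    index = round((q / 100) * (len(ordered) - 1))
    return ordered[index]
```

Note that the checks validate values and ranges, not types: any sortable sequence works, which keeps duck typing intact.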

6. Test Units of Behavior, Not Implementation

Unit tests should focus on the behavior of your code rather than its specific implementation. Treating test objects as black boxes and testing through the public API results in more modular and maintainable code. In cases where you need to test specific complex states, it may be necessary to call private methods. Writing tests first can help enforce better code structure and behavior. For an excellent introduction to test-driven development, I recommend Kent Beck’s “Test Driven Development by Example.”
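As a minimal sketch (a toy class, not from the article), the test below exercises only the public API, so the internal storage can change from a list to anything else without touching the test:

```python
# A toy class tested as a black box: the test goes through the public
# API (add/total) and never inspects the private attribute.
class ShoppingCart:
    def __init__(self):
        self._items = []            # implementation detail; tests don't touch it

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


def test_total_reflects_added_items():
    cart = ShoppingCart()
    cart.add("tea", 3.50)
    cart.add("mug", 8.00)
    assert cart.total() == 11.50    # asserts behavior, not implementation
```

If `_items` later becomes a dict or a database call, this test keeps passing as long as the observable behavior is preserved.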

7. Aim for 100% Test Coverage

For unit tests, including test infrastructure tests, strive for 100% coverage. While it’s impossible to cover all permutations and combinations of state, it’s important to test all code paths. Leaving code paths untested should only be done for valid reasons, such as genuinely untestable code or scenarios covered elsewhere. Measure coverage and reject any changes that reduce the coverage percentage to ensure progress in the right direction.

8. Write Less Code and Delete Unnecessary Code

Code is prone to errors and requires maintenance. Writing less code and deleting unnecessary code should be a mantra. Avoid writing code that isn’t needed, and regularly remove code that is no longer necessary.

9. Strive for Readable Code, Not Just Comments

While code comments can be useful, they often become outdated and misleading over time. Instead, focus on writing readable and self-documenting code through good naming practices and a consistent programming style. Only comment code that can’t be made obvious, such as code working around obscure bugs, unlikely conditions, or necessary optimizations. Comment the intent of the code rather than just describing what it does. Kernighan and Pike, authors of “The Practice of Programming,” share this viewpoint.

10. Think About Possible Errors and Failure Points

Always consider potential failures, invalid input, and other scenarios that might go wrong. This mindset helps catch many bugs before they occur.

11. Separate Stateful and Side-Effect-Filled Code

Logic that is stateless and side-effect free is easier to unit test. Separate logic from stateful and side-effect-filled code into smaller functions. This makes mocking and unit testing easier. However, stateful and side-effect-filled code still requires testing, ideally with a single comprehensive test and mocking in other tests.
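A sketch of the separation (illustrative names): the logic is a pure function over any iterable of lines, which is trivial to unit test; the side-effecting file access lives in a thin wrapper that needs only one integration test:

```python
# Pure logic: no I/O, no state; easy to unit test with plain lists.
def count_code_lines(lines):
    """Count non-empty lines that are not comments."""
    return sum(
        1
        for line in lines
        if line.strip() and not line.lstrip().startswith("#")
    )

# Thin side-effecting wrapper: one comprehensive test (or a mock) suffices.
def count_code_lines_in_file(path):
    with open(path) as handle:
        return count_code_lines(handle)
```

Because the pure function accepts any iterable of strings, the unit tests never need a temporary file or a mock.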

12. Prefer Functions over Types and Objects

Avoid globals whenever possible; they create hidden coupling and make code hard to test. When no state needs to persist between calls, prefer plain functions over custom types and objects: a function is easier to read, test, and reuse than a class wrapped around it.

13. Utilize Python’s Built-In Types for Better Performance

Using Python’s built-in types and methods tends to be faster than creating custom types, unless you’re writing code in C. Whenever possible, leverage the standard built-in types for improved performance.
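For example, instead of writing a custom word-counting class, the standard library’s `collections.Counter` (built on the optimized `dict` machinery) does the job in a couple of lines:

```python
from collections import Counter

# Counting with a built-in type instead of a hand-rolled counting class.
words = "the quick brown fox jumps over the lazy dog the end".split()
counts = Counter(words)

# Counter also provides common operations for free.
top_word, top_count = counts.most_common(1)[0]
```

Beyond speed, built-ins like `Counter`, `defaultdict`, and `deque` are already well tested and instantly familiar to other Python programmers.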

14. Embrace Dependency Injection for Clear Dependencies

Adopt the dependency injection pattern to clearly define and manage your dependencies. Have objects and methods receive their dependencies as parameters rather than instantiating new objects themselves. While this might increase API complexity, it helps maintain clarity and prevents methods from becoming overloaded with excessive dependencies. Martin Fowler’s “Inversion of Control Containers and the Dependency Injection Pattern” is a definitive resource on this topic.
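A minimal sketch of the pattern (a hypothetical rate limiter): the clock is injected as a parameter with a sensible default, so production code uses real time while tests substitute a controllable fake without any monkey-patching:

```python
import time

# Dependency injection sketch: the clock is passed in, not created inside.
class RateLimiter:
    def __init__(self, max_calls, period, clock=time.monotonic):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock          # injected dependency
        self._calls = []

    def allow(self):
        """Return True if a call is allowed within the current window."""
        now = self.clock()
        self._calls = [t for t in self._calls if now - t < self.period]
        if len(self._calls) < self.max_calls:
            self._calls.append(now)
            return True
        return False

# In tests, inject a fake clock the test controls directly:
fake_time = [0.0]
limiter = RateLimiter(max_calls=2, period=60, clock=lambda: fake_time[0])
```

The default argument keeps the common case simple (per point 4) while the parameter keeps the dependency explicit and swappable.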

15. Aim for Small and Focused Unit Tests

Strive for smaller, tightly scoped unit tests that provide specific information when they fail. These tests pinpoint what went wrong, making debugging easier. Tests should execute quickly, ideally taking less than 0.1 seconds. Tightly scoped unit tests, combined with higher-level integration and functional tests, ensure proper cooperation between units of code. Treat your tests as a de facto specification for your code to make it more understandable. Gary Bernhardt’s “Fast Test, Slow Test” is an excellent presentation on unit testing practices.

16. Carefully Design External-Facing APIs

Designing well-thought-out external-facing APIs is crucial. Consider future use cases and design with simplicity in mind. Changing APIs is challenging and can lead to compatibility issues. Focus on making the simple things simple.

17. Break Up Long Functions and Modules into Smaller Parts

If a function or method exceeds 30 lines of code, consider breaking it up into smaller, more focused parts. For modules, aim for a maximum size of around 500 lines. While test files can be longer, smaller, more modular units tend to be more maintainable.

18. Avoid Work in Object Constructors and __init__.py

Object constructors are difficult to test and often result in surprises. Avoid performing work in constructors. Similarly, avoid adding code to __init__.py files, except for imports for namespacing. Code in __init__.py is typically unexpected by other programmers.
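One common way to keep a constructor work-free (an illustrative sketch, not the article’s code) is to have `__init__` only store values and move the expensive or side-effecting work into a named classmethod:

```python
import json

# The constructor only assigns; reading and parsing a file lives in a
# separate classmethod, so tests can build a Config directly in memory.
class Config:
    def __init__(self, settings):
        self.settings = settings        # no I/O, no parsing, no surprises

    @classmethod
    def from_file(cls, path):
        """Alternate constructor that does the file work explicitly."""
        with open(path) as handle:
            return cls(json.load(handle))

    def get(self, key, default=None):
        return self.settings.get(key, default)
```

Unit tests construct `Config({...})` directly and never touch the filesystem; only one test needs to cover `from_file`.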

19. Prioritize Test Readability over DRY Principle

In tests, readability of individual test files matters more than maintainability. While reusability is important, excessive focus on the DRY (Don’t Repeat Yourself) principle can sometimes hinder readability. Strive for readable tests that provide valuable information in isolation.

20. Refactor Code to Match the Problem Domain

Refactor your code when necessary to align with the problem domain. Programming is about creating abstractions, and the closer your abstractions match the problem you’re solving, the easier your code will be to understand and maintain. As systems grow, their abstractions and structure may need to change to accommodate expanding use cases. Neglecting refactoring results in technical debt, which becomes more costly and problematic over time. Michael Feathers’ “Working Effectively with Legacy Code” is an excellent resource on refactoring and testing.

21. Prioritize Correctness over Performance

When addressing performance issues, focus on making your code correct first before optimizing for speed. Profile your code to identify true bottlenecks rather than making assumptions. Writing complex, obscure code for the sake of performance should only be done after profiling and confirming its worth. Include timing tests within your test suite to prevent performance regressions. Remember that adding timing code can impact performance characteristics.

22. Smaller Unit Tests Provide Clearer Insight

Smaller, more tightly scoped unit tests offer valuable information when they fail: they point at the specific problem, making it easier to identify and fix the issue. Keep them fast as well; a suite that is slow to run stops being run at all. Fast, focused unit tests also act as a de facto specification for your code.

23. Embrace the “Not Invented Here” Mindset

The “Not Invented Here” mindset is not as detrimental as some claim. When you write the code yourself, you understand its behavior, can maintain it better, and have the freedom to extend and modify it as needed. This principle aligns with YAGNI, where specific code for required use cases is preferred over general-purpose code with unnecessary complexity. However, be cautious of owning more code than necessary to avoid unnecessary burden.

24. Foster Shared Code Ownership

Shared code ownership should be a goal; siloed knowledge creates bottlenecks and hampers collaboration. Discuss and document design and implementation decisions in the open. Where possible, review designs before implementation: catching a design mistake at the design stage is far cheaper than catching it in code review, though raising it in code review is still better than leaving it unaddressed.

25. Leverage the Power of Generators

Generators are often shorter and easier to understand compared to stateful objects for iteration or repeated execution. Take advantage of generators in your code for improved clarity and simplicity. For an excellent introduction to generators, explore David Beazley’s “Generator Tricks for Systems Programmers.”
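As a small illustration, a sliding-window generator keeps its state in local variables instead of in the instance attributes a hand-rolled iterator class would need:

```python
from collections import deque

# A generator replaces a small stateful iterator object: the state
# (the current window) lives in local variables, not on `self`.
def sliding_window(iterable, size):
    """Yield successive overlapping tuples of length `size`."""
    window = deque(maxlen=size)
    for item in iterable:
        window.append(item)
        if len(window) == size:
            yield tuple(window)
```

The equivalent class would need `__init__`, `__iter__`, and `__next__` plus careful bookkeeping; the generator expresses the same iteration in a few lines.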

26. Think like an Engineer and Design Robust Systems

Let’s approach programming as engineering. Focus on designing and implementing robust systems rather than letting them grow organically and become unwieldy. Programming is a balancing act, and over-engineering can be as detrimental as under-designed code. Robert Martin’s “Clean Architecture: A Craftsman’s Guide to Software Structure and Design” is highly recommended, as is “Design Patterns” by Gamma, Helm, Johnson, and Vlissides.

27. Eliminate Intermittently Failing Tests

Intermittently failing tests decrease the value of your test suite. Over time, if tests consistently fail, they become ignored or assumed to be failing regardless. Fix or remove intermittently failing tests, even though it can be a challenging task. The effort is worth the improvement in reliability.

28. Prefer Specific Conditional Changes over Arbitrary Sleeps

In tests, wait for a specific condition to change rather than using arbitrary sleep durations. Sleeps make tests harder to understand and slow down your test suite. Waiting for a specific condition improves the reliability and efficiency of your tests.
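A minimal polling helper for tests (illustrative, not from the article) makes the pattern concrete: instead of `time.sleep(5)` and hoping, poll the condition and fail with a clear error if it never holds:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it is truthy; raise TimeoutError otherwise.

    Tests return as soon as the condition holds instead of always paying
    the worst-case sleep, and a timeout produces an explicit failure.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")
```

A fixed `sleep` either wastes time (too long) or flakes (too short); polling with a deadline does neither, and the `TimeoutError` names the failure explicitly.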

29. Always Verify Test Failures

To ensure that your tests are effective, intentionally introduce bugs and verify that the tests fail. Running tests before completing the feature under test or deliberately adding bugs ensures that your tests are actually testing something. Accidentally writing tests that don’t test anything or can never fail is a common pitfall.

30. Foster a Healthy Development Environment

Developing software solely based on constant feature delivery is not sustainable. Allow developers to take pride in their work, address technical debt, and strive for high-quality, bug-free products. Ignoring technical debt slows down development and leads to a suboptimal end product.

Thanks to the Ansible team, especially Wayne Witzel, for their valuable comments and suggestions in refining these principles.
