- Unchecked complexity leads to system failure. Without proactive management and clear boundaries, complexity makes systems unmaintainable.
- Engineers, not just systems, have complexity limits. Even structured software becomes unmanageable if developers can’t process the interactions and dependencies efficiently.
- More engineers don’t mean faster delivery. Scaling teams without better structure increases complexity, not velocity.
- Cognitive biases distort complexity management. Teams tend to underestimate risks and overvalue past decisions.
Introduction: Why Complexity Matters
Complexity isn’t just an unavoidable reality of software development—it’s the reality. As systems grow, so does the cognitive load on the humans maintaining them. If we don’t actively manage complexity, it manages us, leading to unmaintainable code, burned-out engineers, and systems that collapse under their own weight.
This document attempts to frame our approach to measuring, understanding, and controlling complexity—not just in code, but in the relationships between teams, systems, and ideas.
Dunbar’s Number and the Limits of Human Connection
Dunbar’s Number, typically cited as ~150, refers to the cognitive limit on the number of stable relationships a human can maintain.¹ While this originated in anthropology, it has direct implications in software engineering:
- Team Scaling – A development team of 5-10 works differently than a division of 200. Beyond a certain point, formal coordination (standups, meetings, Slack channels) starts breaking down.
- Service Boundaries – If a single service requires too many teams to stay in sync, it’s probably too big.
- Mental Models – A developer can hold a limited number of interacting components in their head. When a system exceeds that threshold, it becomes impossible to reason about without extensive documentation.
The Mythical Man-Month and Team Scaling
Frederick P. Brooks’ classic book The Mythical Man-Month² established that adding manpower to a late software project makes it later. This principle is directly relevant when considering how complexity affects teams and software. Simply scaling up engineering effort does not reduce complexity—it often exacerbates it.
Practical Implication: To avoid cognitive overload, keep teams, services, and responsibilities bounded.
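Brooks attributes much of this to coordination overhead: with n engineers there are n(n−1)/2 potential pairwise communication channels, so overhead grows quadratically while headcount grows linearly. A quick sketch of the arithmetic:

```python
def communication_channels(n: int) -> int:
    """Potential pairwise coordination paths among n engineers."""
    return n * (n - 1) // 2

# Headcount grows linearly; coordination paths grow quadratically.
for team_size in (5, 10, 50, 150):
    print(f"{team_size:>3} engineers -> {communication_channels(team_size):>6} channels")
```

A team of 5 has 10 channels; a division of 150 has 11,175—which is why doubling a team rarely doubles its output.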
Functional Size and Cognitive Load
A system’s functional size is one way to quantify complexity. Function Points (ISO/IEC 20926:2009) attempt to measure this based on user-facing functionality, not lines of code.
“Productivity falls for all types of projects as they exceed 1,000 Function Points.”³
This aligns with the idea that as a system grows past a certain threshold, human understanding drops off a cliff.
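To make the measure concrete, here is a minimal sketch of an unadjusted function point count using the standard IFPUG average weights. The billing-service numbers are purely hypothetical, and a real count would also rate each component low/average/high and apply a value adjustment factor:

```python
# Average IFPUG weights per component type (per ISO/IEC 20926).
AVG_WEIGHTS = {
    "external_input": 4,
    "external_output": 5,
    "external_inquiry": 4,
    "internal_logical_file": 10,
    "external_interface_file": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum each component count times its average weight."""
    return sum(AVG_WEIGHTS[kind] * n for kind, n in counts.items())

# A hypothetical billing service:
ufp = unadjusted_function_points({
    "external_input": 20,          # 20 * 4  = 80
    "external_output": 15,         # 15 * 5  = 75
    "external_inquiry": 10,        # 10 * 4  = 40
    "internal_logical_file": 8,    #  8 * 10 = 80
    "external_interface_file": 4,  #  4 * 7  = 28
})
print(ufp)  # → 303
```

At 303 UFP this hypothetical service is still well under the 1,000-point cliff; the point of counting is to notice when you’re approaching it.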
Cyclomatic Complexity
Thomas McCabe introduced Cyclomatic Complexity (CC) in 1976⁴ as a way to measure the number of linearly independent paths through a program. In plain English, it’s a way to quantify how much of a nightmare your code is to understand.
Rules of Thumb
- CC ≤ 10: Simple, understandable code.
- CC 11–20: Moderate complexity; may require refactoring.
- CC 21–50: Difficult to test, maintain, and reason about.
- CC > 50: Unmanageable.
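In practice CC is computed as one plus the number of decision points. The sketch below is a deliberately simplified counter over Python’s AST—real tools such as radon handle more node types and edge cases—but it shows the core idea:

```python
import ast
import textwrap

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe CC: 1 + the number of decision points in the code."""
    tree = ast.parse(textwrap.dedent(source))
    cc = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While,
                             ast.ExceptHandler)):
            cc += 1  # each branch adds one independent path
        elif isinstance(node, ast.BoolOp):
            cc += len(node.values) - 1  # 'and'/'or' short-circuit branches
        elif isinstance(node, ast.comprehension):
            cc += len(node.ifs)  # filter clauses inside comprehensions
    return cc

src = """
def classify(x):
    if x < 0 and x != -1:
        return "negative"
    for _ in range(3):
        while x > 10:
            x -= 1
    return "done"
"""
print(cyclomatic_complexity(src))  # → 5: base 1 + if + and + for + while
```

Even this toy function lands at 5; production functions that accrete conditionals for years blow past the thresholds above without anyone noticing, which is why measuring beats guessing.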
Hidden Complexity of the Tech Stack
- Every new tool, framework, and integration adds more mental overhead.
- People assume we can “just add another microservice” without recognizing that each new service adds complexity debt.
The reality is that we’re not just building software; we’re managing cognitive load.
Cognitive Biases
Human brains are not wired to manage massive complexity. A few relevant biases:
- The IKEA Effect – If we built it, we think it’s great, even if it’s terrible.
- Hindsight Bias – “We should have known this would happen” (no, we shouldn’t have, because complexity hides these failures in advance).
- The Law of Instrument – “We have Kubernetes, so everything is a Kubernetes problem.”
- Optimism Bias – “We can refactor it later” (no, you won’t).
- Availability Heuristic – We judge the likelihood of events based on how easily examples come to mind, leading to underestimation of rare but critical risks.
- Confirmation Bias – We seek out and prioritize information that supports what we already believe, ignoring contradicting evidence.
For more information – or if you’re in need of a soporific – see the List of Cognitive Biases at Wikipedia.
References
- Dunbar, R. I. M. (1992). Neocortex size as a constraint on group size in primates. Journal of Human Evolution, 22(6), 469-493.
- Brooks, F. P. (1975). The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley.
- Jones, C. (2008). Applied Software Measurement: Global Analysis of Productivity and Quality. McGraw-Hill.
- McCabe, T. J. (1976). A Complexity Measure. IEEE Transactions on Software Engineering, SE-2(4), 308-320.