Is software development considered an art or science? According to the world of academia, the answer is science since my diploma clearly states “Bachelor of Science.” Respectfully, however, I disagree; in my opinion, well-engineered and useful software is realized through a combination of art and science.
When developing software systems, sure, there are certain rules one must follow – syntax, patterns, architecture and tools (this is the stuff that ChatGPT has promised to be quite good at). But on the other side, there is the very human element of framing a problem and using our accumulated knowledge of the world to build a system that solves that problem.
So application development is a creative process that requires skill and ingenuity. If it is a process, then perhaps it can be broken into subprocesses. If each subprocess is defined and bounded, then it can be measured.
"Many things can be measured, but very few should be."
I recently came across a quote which read: “Many things can be measured, but very few should be.” In general, we lean on metrics because they provide a feedback loop which we can use to express and measure progress.
The concept of measuring the development process, or developer productivity, has been around for a while. There are lots of articles and academic papers – even books – on the topic. There are even companies that build products with a goal of increasing productivity, a noble cause to be certain.
The challenge is: how do you measure something that is part art and part science? Of all the things we can measure, which should we measure? The truth is there is no universal answer here. It really depends on your desired outcome.
There are a handful of measures that most folks in the DevOps industry agree are worth paying attention to, popularized by the DevOps Research and Assessment (DORA) program:
- Deployment Frequency measures how often increments of code are deployed to staging, testing and production;
- Mean Time to Restore (MTTR) measures the time it takes to restore service after a failure in production. MTTR is an important component for systematic incident management;
- Lead Time to Change measures the time code takes to go from committed to successfully running in production. This is effectively a measure which provides visibility into the cycle time from development through to release; and
- Change Failure Rate identifies the percentage of deployments that cause a failure in production and the overall risk this poses to development.
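As a rough sketch of how these four metrics might be computed from raw delivery data, consider the following. The record shapes and field meanings here are hypothetical; a real pipeline would pull this data from CI/CD and incident-management tooling.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, caused_failure)
deployments = [
    (datetime(2024, 1, 1, 9),  datetime(2024, 1, 1, 15), False),
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 3, 11), True),
    (datetime(2024, 1, 4, 8),  datetime(2024, 1, 4, 12), False),
    (datetime(2024, 1, 5, 9),  datetime(2024, 1, 5, 18), False),
]
# Hypothetical incident records: (outage_start, service_restored)
incidents = [
    (datetime(2024, 1, 3, 11), datetime(2024, 1, 3, 13)),
]

# Deployment Frequency: deployments per day over the observed window
days = (deployments[-1][1].date() - deployments[0][1].date()).days + 1
deployment_frequency = len(deployments) / days

# Lead Time to Change: average time from commit to running in production
lead_times = [deploy - commit for commit, deploy, _ in deployments]
lead_time_to_change = sum(lead_times, timedelta()) / len(lead_times)

# Mean Time to Restore: average time from outage to restored service
mttr = sum((end - start for start, end in incidents),
           timedelta()) / len(incidents)

# Change Failure Rate: share of deployments that caused a failure
change_failure_rate = sum(
    failed for _, _, failed in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Lead time to change:  {lead_time_to_change}")
print(f"MTTR:                 {mttr}")
print(f"Change failure rate:  {change_failure_rate:.0%}")
```

The point of the sketch is that each metric reduces to simple arithmetic once the underlying events are captured consistently; the hard part in practice is instrumenting the pipeline so those events exist at all.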
The important thing to keep in mind is that even with all the research behind these four metrics, they are not the be-all and end-all for measuring success. They are referred to as lagging metrics, or output measurements: they capture outcomes such as speed, throughput, quality and stability. No single metric is inherently more important than the rest. Together they create a balance to ensure that intense focus on speed and throughput does not come at the expense of stability and quality. Lagging metrics are the end; they do not measure the means to that end.
It is also important to keep in mind that lagging measures are hard to improve without leading measures to complement them. Leading metrics are inputs; they look ahead to try to predict future outcomes. One example of a leading/lagging pair might be code coverage for quality; another might be pull request turnaround time for speed. The key takeaway here is that leading indicators help us influence lagging metrics (outcomes).
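The pull request turnaround example above can be sketched in the same style. The PR records here are hypothetical; in practice they would come from a source-control API.

```python
from datetime import datetime, timedelta

# Hypothetical pull request records: (opened, merged)
pull_requests = [
    (datetime(2024, 1, 2, 9),  datetime(2024, 1, 2, 16)),
    (datetime(2024, 1, 3, 10), datetime(2024, 1, 5, 10)),
    (datetime(2024, 1, 4, 14), datetime(2024, 1, 4, 17)),
]

# Average time from a PR being opened to being merged -- a leading
# indicator: long review queues today predict long lead times tomorrow.
turnaround = sum((merged - opened for opened, merged in pull_requests),
                 timedelta()) / len(pull_requests)
print(f"Average PR turnaround: {turnaround}")
```

Tracked over time, a rising turnaround is an early warning that the lagging Lead Time to Change metric will soon degrade, which is exactly the leading-to-lagging influence described above.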
For the time being, software is designed and engineered by humans, and any efforts toward efficiency should also invest heavily in improving the developer experience. Understanding and addressing the pain points that slow development will not only enhance productivity but also attract and retain the best and brightest talent.
Software development is a craft, and every artisan with a love of their craft needs the best tools to perfect their work. While it is essential to take measurements on the process side of development work, it is equally important to measure the human side. Being thoughtful and deliberate in setting targets or drawing conclusions can lead to optimal desired outcomes and happy development teams that realize their fullest potential.
The best approach is to collect and review data through a process of continuous analytics, identify opportunities, implement structural improvements, rinse and repeat. DevSecOps tools and processes are invaluable resources in achieving these goals. They often provide the hooks necessary to collect important data points that feed into leading/lagging metrics.
So is software development art or science? I say both.