
March 23, 2008

Software Design: Quality vs. Performance

Last week I taught a course on software design--meaning low-level architectural design, not user-visible design.

In the course we talk about various ways to design your software so that it is easier to develop, easier to maintain, easier to test, and so on. The basic rule is to encapsulate variation, meaning to hide things that are different behind a common programming facade so you can treat them as the same. If you look at, say, the way a networking stack is built, your network transport uses an encapsulation that treats all network card drivers the same, and then the network redirector uses an encapsulation that treats all transports the same, and then the rest of the system uses an encapsulation that treats all file systems (which is what the redirector exposes itself as at the top) the same. If you burrow into the details of any of these components, they are using various encapsulations to treat timers and user data buffers and configuration settings and all that the same.
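The "encapsulate variation" idea can be sketched in a few lines. This is a hypothetical illustration, not code from any real networking stack: the class and method names (`NetworkDriver`, `send`, the vendor details) are all made up to show the shape of the pattern, where different implementations hide behind one common facade so the layer above treats them all the same.

```python
# Hypothetical sketch of "encapsulate variation": two network card
# drivers with different internals, hidden behind one common facade
# so the transport layer can treat them identically.
from abc import ABC, abstractmethod

class NetworkDriver(ABC):
    """The common facade: the transport only ever sees this interface."""
    @abstractmethod
    def send(self, frame: bytes) -> int:
        """Send a frame; return the number of bytes put on the wire."""

class VendorADriver(NetworkDriver):
    def send(self, frame: bytes) -> int:
        # Vendor A needs a 4-byte hardware header prepended (made-up detail).
        return len(b"\x00" * 4 + frame)

class VendorBDriver(NetworkDriver):
    def send(self, frame: bytes) -> int:
        # Vendor B sends the frame as-is (made-up detail).
        return len(frame)

def transport_send(driver: NetworkDriver, payload: bytes) -> int:
    # The transport treats every driver the same; the variation is hidden.
    return driver.send(payload)
```

The transport never needs to know which vendor's card is underneath, which is exactly what lets you swap drivers, or stack the same trick at the transport and file-system layers above, without touching the callers.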

This is all fine and dandy, and if you can come up with a design that nicely encapsulates your variation, you will have a piece of software that you can explain to someone else AND feel proud while doing it.

Then performance rears its ugly head.

It's not that performance is a bad aspect of design; in fact it's really the only aspect of your internal design that a user will notice. The problem is that performance directly contradicts ALL of the other criteria that make software "good". Software that is nicely encapsulated simply runs slower than software that breaks encapsulation. If you start out with a clean design and then make it run faster, then the result will be harder to develop, harder to maintain, harder to test, and harder to explain to someone else. It will be "worse" by any measure you can think of--except for speed. Now, speed used to be about the only metric we had for measuring software quality--but the industry has matured a bit. People program in C# instead of C, acknowledging that the result will run slower, but that all the other benefits are worth the tradeoff.

So that is the basic tension in software design: performance vs. everything else. Experience in software design is not so much about knowing design patterns and all that jazz; it's about knowing when to break your encapsulation to make it run faster, and when not to do that.
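The tension can be made concrete with a toy sketch (the `Accumulator` class and its internals are invented for illustration, not taken from the post): the clean version goes through the facade one call at a time, while the fast version reaches into the object's internal list, trading encapsulation for speed.

```python
# Toy illustration of breaking encapsulation for performance.
class Accumulator:
    def __init__(self):
        self._values = []          # intended to be private

    def add(self, v: float) -> None:
        self._values.append(v)

    def total(self) -> float:
        return sum(self._values)

def total_clean(acc: Accumulator, data) -> float:
    # Respects the facade: one method call per element.
    for v in data:
        acc.add(v)
    return acc.total()

def total_fast(acc: Accumulator, data) -> float:
    # Breaks encapsulation: touches the internal list directly and
    # uses a bulk operation, avoiding the per-element call overhead.
    acc._values.extend(data)
    return sum(acc._values)
```

Both functions compute the same answer, but every caller of `total_fast` is now coupled to the internal representation: change `_values` to a different structure and the "fast" code breaks, which is precisely the maintainability cost described above.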

You would think, perhaps, that cleaner design would run faster. It seems like that should be the case--but it never is. This is one of the reasons that people rail against the perils of premature optimization. The nominal reason is that time spent optimizing early is wasted if the thing you are optimizing doesn't wind up as a performance bottleneck; but I think underlying that is the realization, conscious or unconscious, that optimization means making the code worse, so you should avoid it until you have evidence it is necessary. Of course, many programmers think optimization is "cool", both on general principle and because people can see the results, while not giving two figs for any other aspect of code quality...you can guess the result.

Posted by AdamBa at March 23, 2008 07:24 PM


Comments

Hi Adam,

I humbly disagree. Proper encapsulation (at the object level) allows you to try different architectures that can dramatically improve performance while maintaining encapsulation and readability. New algorithms & data structures are not of more or less quality; they are just different. Proper encapsulation (i.e. quality) is what allows you to make the really big performance gains.

Poor quality comes from poor design, where internal structures and methods are exposed, resulting in complicated inter-dependencies that grow worse over time. The bad design choices were not made for performance reasons, but from inexperience and time-to-market pressure. On larger projects, quality suffers from the lack of a single architect who maintains and enforces design coherence (re: mythical man month).

Posted by: Peter Barszczewski at March 24, 2008 06:45 AM

Hey Peter! I agree that having encapsulation can help you make selected changes to your algorithms. However I would consider that a maintainability issue. Once you have your new algorithm in place, it would still be faster without the encapsulation.

I'll point out--I think encapsulation is GOOD, and most people who violate it do so without an actual performance need. And of course some people make bad choices due to not being good developers. But if you look at some Microsoft projects that ran into trouble and wound up with an unmaintainable pile of slop, their big issue was that they had a nice clean design, but it was too slow...and then their problems began.

- adam

Posted by: Adam Barr at March 25, 2008 07:35 AM