May 16, 2008
Breaking Training

Yesterday I took an all-day training course based on Barry Oshry's theories on organizations. This is the same stuff that Steven Sinofsky blogged about a few years ago: the idea that organizations have tops, middles, and bottoms. Tops "own" delivery of something, bottoms do the actual work, and middles are those who are neither tops nor bottoms. It's not that people are always fixed on one role (they rarely are), but that at various times people act in those roles (or "occupy those spaces" in the vernacular of the course).
The learning of the course comes from the realization that people in each space have predictable conditions acting on them. Tops deal with complexity and responsibility; middles suffer "tearing" (rhymes with caring--it means being pulled between multiple goals); and bottoms suffer vulnerability and disregard. Like any such classification system, the goal is to recognize other people's situations and work to ease their condition (this is what personality classification systems like Myers-Briggs or the Insights colour wheel are trying to drive; Oshry's work is about how people's position in an organization affects them, but the facilitator did comment that about 20% of how people behave is based on their underlying personality, not their top/middle/bottomness).
I agree with Sinofsky when he says that three days is too long for this material; I thought one day was too long (there was also some stuff about customers, who suffer from neglect, that appeared bolted-on and didn't strike me as particularly perceptive). I think 2 hours would be fine, or you could read one of Oshry's many books on the subject (if it has the word "system", "organization", or "middle" in the title, it's about this). I'll emphasize that I DID find it to be insightful and I know one group of middles, in particular, that I am going to try to "integrate" (which is what middles need to do).
More interesting was Sinofsky's description of what happened when he went through a 3-day version of the course (called the "Power Lab") at an offsite in Cape Cod, with a group of other Microsoft senior leaders: "Without going into too many details, suffice it to say that a group of Microsoft people managed to 'break' the simulation. We had the 'facilitators' in tears and ended the game two days early. It was torture." Sinofsky, to his credit (and with the benefit, at the time, of eight years of hindsight), appears to have mixed feelings about the fact that the Microsoft people broke the simulation, but I suspect that others were quite proud of the fact.
As it happens, I had read Sinofsky's blog entry (linked to above) the day before I took the course; I found it when searching for information beforehand. When I read it, my assumption was that "breaking" the course really meant "obnoxiously refusing to fully engage in the exercises"--not particularly difficult to do if you care to try, and nothing to be proud of unless your role model is a three-year-old. When I got to the course yesterday, I learned that the facilitator had actually been involved with the infamous 1997 Power Lab that Sinofsky was at, and he confirmed my suspicions. Evidently the assembled Microsoft crew quickly decided that they were the cleverest people anywhere in Massachusetts, and set out to prove it to themselves by behaving in an incredibly condescending manner towards everyone that was helping to facilitate the exercise. Given this, the facilitators called off the training partway through. Luckily, a few of the participants realized later that the arrogance displayed in Cape Cod MIGHT JUST POSSIBLY bleed over into how other people perceived Microsoft, and started an initiative to change the culture within Microsoft, which ultimately wound up being the true lasting effect from the Power Lab. So in the end, the Lab "broke" the Microsofties, not the other way around.
The reason I find this fascinating is because there really is something in the culture of Microsoft that inspires people to try to "win" any training simulation they participate in (I suppose using brainteaser questions in your interviews for years and years might bias in favor of hiring people like that). We see this even in the benign exercises we do in Engineering Excellence training--people trying to figure out the gotcha, and being so obsessed with showing off their intelligence and not making "mistakes" in front of others that they refuse to fully participate. The goal of trying to outwit the designers of the training then becomes a meta-exercise, which reduces the learning from the real exercise (we try to mitigate this by giving the typical "you only get out what you put in" disclaimer ahead of time, but it doesn't seem to help much). I have observed that this gets worse the more technical the audience is--when you have a bunch of sales or HR people in the room they seem perfectly content to follow wherever the instructor chooses to lead them, whereas an audience of devs is prone to make snarky comments like, "I see, this is your opening exercise--OK, I'll play along with it this time."
Now, I do have to say that the tops/middles/bottoms training in general is somewhat in-your-face and amplifies any such latent tendencies to want to break the training. Ours was relatively mild--we were led to believe that our performance before lunch would affect the quality of the lunch we were allowed to eat, which I think is inappropriate, but not particularly life threatening (as it happens I enjoyed the peanut-butter-on-flannel-bread sandwich that I actually ate more than I think I would have enjoyed the chicken and rice dish that was later made available to me). In addition, the exercises involved groups of people and the discussion afterwards had an undertone of "tell me what you didn't like about how another group behaved" which made it more confrontational than I thought necessary, but again only enough to make some people a bit uncomfortable, not enough to flip their "disengage" switch. You do hear stories of very extreme versions of this training, where the participants are classified as elites, middle class, and immigrants (or similar) and by the end of the first day they are attacking each other with handmade weapons (see the Stanford Prison Experiment website for another example of this). This would seem to be overkill, since as I mentioned, the basic idea is pretty simple and can be driven home, if necessary, with a half-hour exercise involving zero threats to your physiological needs. I don't know how extreme the Cape Cod Power Lab was, but from Sinofsky's comments about not showering and starving, it was at least somewhat immersive. In such an environment I might also be inspired to try to break the game--not necessarily to show how smart I am, but just to get a decent meal.
May 14, 2008
Things That Everybody Knows

I saw an article today about how the Smart ForTwo (that tiny car you see around) had earned top marks in safety tests conducted by the Insurance Institute for Highway Safety. Despite this, the Institute decided to disqualify the car from potentially earning its "Top Safety Pick" designation because it is just too dang small. "All things being equal in safety, bigger and heavier is always better," says the president of the Institute.
The idea that bigger, heavier cars are safer is something that everybody just "knows". The fact that it's actually false doesn't seem to matter much. Malcolm Gladwell pointed this out a few years ago (he even quoted Clotaire Rapaille!). It's true that in a collision between a small car and a large car it is safer to be in the large car, but small cars do so much better at avoiding accidents that you are much safer in a small car. The result is that the driver of a Volkswagen Jetta, say, is about half as likely to die in an accident as the driver of a Ford Explorer. So now we have the president of the Insurance Institute for Highway Safety, who even a hardened cynic would presume is trying to save lives, instead giving advice that will cost lives, just because he is going with a fact that everybody "knows".
Here's another fact that everybody knows: applications are migrating from the desktop to the web. It must be true because I keep reading it everywhere, for example in this Business Week article about Microsoft's battles with Google: "So far, the shift to online software is more of a drip than a flood. The programs often don't work as smoothly as, say, Microsoft Office, and they can require some tech savvy to use. But the shift seems sure to accelerate in the years ahead." The shift is sure to accelerate--just like it's sure that big heavy cars are safer than small light ones.
Let's draw an analogy. Imagine that public transit were free, like it should be. Now consider public transit vs. cars. Public transit (in this scenario) doesn't cost anything because it is paid for by a combination of ads and other sources of money that exist for their own purposes (such as government subsidies designed to reduce road maintenance costs). Public transit also doesn't require the user to do a lot of upfront planning, such as buying a car--they just hop on when they need to go somewhere. And at first blush, public transit looks a lot like what cars provide--it takes people places without undue exposure to the elements. This starts to look like a classic Innovator's Dilemma, with the car being pushed hopelessly upmarket by current owners' incessant demand for better cupholders, allowing the "good enough" public transit to displace it.
So free public transit will displace cars, right? Of course people know it won't, and Toyota would not respond to the notion of free public transit by investing heavily in its own bus service; executives know that although on the surface free public transit looks better than a private car, if you dig a bit deeper you will find a variety of ways in which cars are better--enough that you know that in most cities, public transit adoption will hit an upper limit fairly quickly.
So now you have free software, which is supported by ads and other money sources, and doesn't require any upfront planning--the user can go to the website whenever they want (they can also save their data online, but this is a feature that a standalone app could offer fairly trivially). And on the surface, a word processor on the web looks a lot like a standalone word processor. But my feeling is that when you dig deeper, you discover a bunch of reasons why a standalone word processor is actually vastly superior to a web-based one, and although the web-based one will gain some adoption, it will never replace the standalone application. But, as often happens with the tech industry, people writing about software don't seem to have the equivalent of the gut feel that would cause any automotive writer to reject the idea that public transit could displace the private car. The hue and cry about the lightweight-application-with-data-stored-centrally replacing the standalone application is now well into its second decade; I expect it to continue to be something that everybody "knows" for the foreseeable future.
May 06, 2008
"Hard Code" Kerfuffle

As I mentioned a little while ago, Eric Brechner's "Hard Code" column is now a public blog. This means you get a bit of a glimpse inside Microsoft in a public forum. Eric writes in his own inimitable style in order to provoke discussion, and he has certainly succeeded with his most recent column about recovering from errors.
First of all, ANY discussion of errors/exceptions/asserts/etc will generate controversy because it's an area where everybody seems to have an opinion on the right way to do it. Like all programming opinions, it's based in large part on the previous formative or traumatic experiences of the individual. Since we've all had different experiences we all have different opinions, and since we're programmers we have zero inclination to believe that differing opinions have any merit.
If you want to follow the argument along at home, it helps to know who the people involved are:
- Eric Brechner: aka I.M. Wright, Director of Learning and Development for Engineering Excellence, which means he owns the various different discipline excellence teams (as in Dev Excellence, Test Excellence, etc).
- Alan Page: Manages the Test Excellence team, reports to Eric.
- Alan Auerbach: Works on the Dev Excellence team.
- Larry Osterman: Way oldtime Microsoft employee and blogger.
- Kinshuman: (I assume it's the same guy) Works on Watson at Microsoft.
- Various other people: Don't know who they are.
There's also me, who manages the Dev Excellence team, thus reports to Eric and is Alan Auerbach's manager.
OK, so the fun started when Eric wrote his column saying that letting Watson catch exceptions was bad, instead you should handle them and crash. Larry blogged that this was a really stupid thing to say, and Kinshuman concurred in a comment. Alan Auerbach jumped in to defend Eric and also state that asserts are bad, Alan Page replied and said that asserts are good, then Alan and Alan got into a brief back-and-forth on that.
Most of the arguments are of the ships-passing-in-the-night variety. Larry is saying it's bad to catch all exceptions; Eric is saying it's bad to let all exceptions through. These aren't contradictory positions. If you have spent a lot of your career working in an error-code-returning environment (like Larry, or Joel Spolsky, or Raymond Chen, or me) you probably have a natural bias against structured exceptions, but they are a fact of life in some environments (like .NET). But the more relevant fact here is that most people seem to have an argument pro or con exceptions that they deploy whenever they get a whiff of a discussion on the topic, and not much Socratic dialogue ensues.
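The two positions can actually coexist in code: handle the failure modes you understand, and let everything else propagate so a tool like Watson captures the original state. Here's a minimal sketch in Python (the function and its behavior are invented for illustration; neither column contains this code):

```python
def save_document(path: str, data: str) -> bool:
    """Save a document, handling only the failures we can reason about."""
    try:
        with open(path, "w") as f:
            f.write(data)
        return True
    except OSError as e:
        # Expected and recoverable: disk full, bad permissions, missing
        # folder. Program state is still sound, so report and carry on.
        print(f"Save failed: {e}")
        return False
    # Any other exception is a bug or corrupted state: let it propagate
    # uncaught, so the crash reporter sees the real stack instead of a
    # swallowed error followed by a later mystery failure.
```

Larry's objection is to the bare catch-everything wrapper; Eric's is to a program that never anticipates any failure; a sketch like this does neither.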
Eric made a side comment about asserts in his article ("It's like the logic behind asserts—the moment you realize you are in a bad state, capture that state and abort") which misrepresents what asserts are for ("capturing the state" yes, "abort"--with the implication that it's similar to throwing an exception--no) although I think he threw that in there without really thinking about it. But it led to an interesting argument between Alan and Alan: are asserts good or not? I always liked asserts because I worked on networking code and an error might only occur once in a blue moon, so I wanted to break into the debugger when it happened; Alan (Auerbach's) assertion that you don't need asserts because you can set a breakpoint only holds true if you work on reproducible bugs, and I used to scoff at people like that--how hard can fixing your bugs be if every one of them repros on demand? But now that I think about it, relying on any kind of stress failure debugging to catch your errors is pretty outmoded. If I were writing a network protocol today, the first thing I would do is write a fake version of the layer below me that did odd things on demand, and next I would write a fake version of the layer above me that did odd things on demand, and then I would beat on my protocol with this in ever-more-interesting configurations. In such an environment all of my crashes *would* be reproducible and I could set breakpoints as needed. It's funny because I definitely thought of writing automated tools for stress (when I wrote my first NT network card driver in 1990 I also wrote a packet-blasting-and-counting protocol to help test it, which wound up becoming part of the network driver development kit) but never for causing the unexpected timing and dropped packets that lead to those hard-to-debug problems in protocols. I guess I have learned something in 20 years.
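The fake-layer harness described above can be sketched in a few lines. Everything here (the class name, the single send interface, the failure rates) is hypothetical, just to show how seeding the faults makes the once-in-a-blue-moon failures reproducible:

```python
import random

class FlakyLowerLayer:
    """Stand-in for the layer below a protocol driver: it drops or
    duplicates packets on demand, driven by a seeded RNG so the same
    "odd things" happen in the same order on every run."""

    def __init__(self, seed: int, drop_rate: float = 0.1, dup_rate: float = 0.05):
        self.rng = random.Random(seed)   # fixed seed => deterministic faults
        self.drop_rate = drop_rate
        self.dup_rate = dup_rate
        self.delivered = []              # what the "wire" actually carried

    def send(self, packet) -> None:
        r = self.rng.random()
        if r < self.drop_rate:
            return                       # silent drop: the rare case, on demand
        self.delivered.append(packet)
        if r < self.drop_rate + self.dup_rate:
            self.delivered.append(packet)  # duplicate delivery
```

Because the faults are seeded, a crash under this harness replays identically on the next run, which is exactly the property that lets a breakpoint substitute for an assert.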