« October 2006 | Main | December 2006 »

November 28, 2006

The Annoying Need for "Notepad Paste"

I guess this is an example of software trying to do too much. Most of the time when I cut-and-paste, I just want the text, not the formatting. I know the software is trying to do the right thing, but I usually don't trust it anyway. When I paste from Word to PowerPoint, say, I have little faith that the data is really the way it appears. Even if it LOOKS right, I'm suspicious that there is some magic code embedded in there such that when I do something unexpected (like, say, hit delete at the end of the previous line) the whole thing is going to turn purple. Yes, I know Ctrl-space is supposed to remove formatting, but I don't really believe that either. And I don't want to hunt around for the little icon with paste options, only to find that it is not offering "Paste text only" as an option.

So I do the "notepad paste". You probably know what I mean. You run notepad, copy the data in the first app, paste it into notepad, reselect it in notepad, and paste that into the other app. It works because notepad is so simple that all it provides in the source data is the text itself. So the paste operation automatically takes on the formatting, bulleting, fonting, etcing of the target document, which is what I want. It works perfectly every time.

Hopefully the next version of Office will make it a goal to eliminate the notepad paste. If not, they should make it a standard keyboard shortcut. Ctrl-N could be "notepad paste". Ship it.

Posted by AdamBa at 12:25 PM | Comments (12) | TrackBack

November 27, 2006

Content Creation - Top Down or Bottom Up?

I was working on a slide deck for a course, and one of the slides was the "Objectives" one near the beginning. I sent a rough draft out for review and one of my colleagues said that my objectives were a little weak.

Which is true; I was mainly interested in the main content, and viewed the Objectives slide like the table of contents. Not that it WAS the table of contents, just that it's the kind of thing you fill in later, once you have the bulk of the content laid out the way you want. He replied that he had found it was very important to get the objectives correct first, then use that to guide you when you write the presentation.

This sounds right, doesn't it? I mean, you don't start writing the slides before you know the topic, and since the objectives state what the audience will learn, it would seem logical that you would want to know them before you wrote the rest of the deck.

The only problem is that I don't work that way. For example, when I wrote Proudly Serving My Corporate Masters, I started out by brainstorming all the stuff I wanted to talk about. The points I wanted to make, the stories I wanted to tell, the mysteries I wanted to elucidate. There were more than a hundred of these. Then, I worked to collect these together into logical groups, and organized them into chapters with the various bits ordered in such a way as to allow an orderly story to be told. Then I started writing, basically elaborating on each set piece and filling in the connective tissue as needed.

If you've read the book, you might say that it does jump back and forth a bit too much, with no over-arching theme except "Here are some thoughts about Microsoft." When I initially organized the pieces I wound up with 6 chapters, which I later split into 17 (one of those got cut later), and you can see where the seams are. But, certainly, you would not think that there were 100+ different sections of the book; I was able to knit most of my original brainstorming together in a reasonably cohesive way.

I do presentations the same way. For my recent interviewing presentation, I had a bunch of things I wanted to talk about. I got them all down on (electronic) paper, then looked for a through line so I could tell a story. In the end I wound up with 2 sections plus a "miscellaneous topics" section, so I didn't do a perfect job of this. But certainly it was only after I had this all organized that I could go back and write the objectives. And in fact for that talk I wound up not having the objectives laid out as such, instead I just had a "What this talk will cover" slide--but even that I would not have been able to write before I had the rest done.

I was actually inspired in this creation method by an interview I read with the band Metallica. They said that they would collect riffs--little bits of music--as they were touring and whatnot. Then, when it came time to write songs, they would listen to all the riffs, decide which ones went well together, and piece them into a song. Metallica in recent years has been coming out with "normal" songs, but if you go back and listen to their pre-"Black Album" phase, you will recognize this; the songs dart around, switch tempo, wander into the weeds for a while, then abruptly resurface for one more charge through the chorus ("The Frayed Ends of Sanity" is a good example of this; as a bonus it has that excellent "Wizard of Oz" sample). After reading this, I figured I could do it that way also.

Now, I'm not talking about just blindly writing with no goal in mind. For the interviewing talk, I knew that the overall audience goal would be becoming a better interviewer. Although, the reason I gave a talk on interviewing was because I had been noodling around in my head on the topic. So perhaps even that was somewhat bottom-up. But overall I would say that I start with a very high-level idea, jump into the details, and only later does the mid-level organization emerge.

You could, of course, draw an analogy to how software is designed. One thing Microsoft has been pushing is that our software releases have a more top-down plan: that each release have a value proposition, essentially the 10-second "why should I buy this" pitch for customers. So, which comes first, the proposition or the value?

I would suspect that in the past, to the extent that these have been done, the details have come first. That is, for some version of Windows there has been one team doing something with USB device support and another team doing something with web publishing and another team doing something with graphics and another team doing something with some partner companies, and out of that you can brew up a scenario involving publishing your digital camera pictures, which can then become part of your value proposition.

I don't know if this is the right way, or if the right way for a presentation is the same as the right way for a large software product. Maybe my presentations should be entirely top-down, and maybe our software should also. But for the moment, I'm sticking with my top-bottom-middle spiral.

Posted by AdamBa at 12:35 PM | Comments (0) | TrackBack

November 23, 2006

Tagging O.J.

In the category of "Things that make you go [vomit noise]", O.J. Simpson was planning to release a book called "If I Did It", in which he gave a theoretical confession to killing his ex-wife and Ron Goldman. In an unexpected gesture of common sense the book was cancelled, but for a while it was being pre-sold on Amazon.

Amazon now lets people tag books. So what were the tags for "If I Did It"? I checked a few days before it was cancelled and the tags with more than one vote were (in order, number in parentheses): shameful (11), boycott (7), disgusting (7), murderer (5), pathetic (4), repulsive (4), controversial (2), domestic violence (2), guilty (2), narcissistic personality disorder (2), scum (2), sick (2), and tasteless (2).

Not too much dispute there...after it got cancelled, the two-or-more tag list was: boycott (35), disgusting (28), shameful (19), murderer (12), pathetic (11), repulsive (9), guilty (8), sick (8), scum (7), boycott regan books (6), evil (6), blood money (5), liar (5), shame on amazon (5), killer (4), sociopath (4), controversial (3), domestic violence (3), horrible (3), narcissistic personality disorder (3), no conscience (3), wife killer (3), boycott amazon (2), butcher (2), delusions of grandeur (2), dont hate the playa (2), hang him (2), revolting (2), shameless (2), and tasteless (2). Also, the following tags all had one vote: "amazon dont sell this", "amazon is sick to sell", "amazon should be ashamed", "amazon should be embarrased", "amazon shouldnt sell", and "amazon shouldnt sell this book".

That was a few days ago. I just checked and the listing is still there, although the cover image is gone. For some unknown reason, it is replaced with an image of the word "rock star". Oh, I see, that was a "customer image" which they evidently decided to show since there was nothing better. In the tagging department, "boycott" is still in first place, having extended its lead over "disgusting" to 50 to 31.

Meanwhile over on eBay, 4 people are selling copies of the book (it was sent to some bookstores before being recalled), ranging in price (right now) from $3550 to $21,100. One of the sellers was clever enough to paint in the word "I Did" on O.J.'s forehead in the book cover image. There are also a couple of people squatting on the term in unrelated auctions, like the "MINT Dell DJ 20 GB MP3 Like Ipod Nano, if i did it" and the cleverly-worded "Neil Diamond '92 Shirt,if you did buy it,I 'd miss it!" (that seller also has several other auctions whose titles contain the word "if", "I", "did" and "it").

Well, I thought it was interesting. Too lazy to post links to all this stuff however. The book is toast, so that's something to be thankful for on this day.

Posted by AdamBa at 10:37 PM | Comments (0) | TrackBack

November 20, 2006

Potential Benefits of Distributed Development

One difference between Google and Microsoft is that Google is actively embracing remote development sites. Microsoft is actually getting more distributed than we used to be. For example we bought a company called Vexcel that is headquartered in Boulder, Colorado, and unlike in the early years when we likely would have moved them to Redmond, they are staying put in the shadow of the Rockies (that linked-to page on Vexcel's site, plus other ones like this, are funny because someone obviously did a search-and-replace of "Vexcel" with "Microsoft", leading to off-kilter phrases like "Microsoft's worldwide presence is magnified by its Australian and Chinese sales offices" and "Microsoft Corporation's headquarters in Boulder, Colorado"). But Google seems to be racing to establish as many remote sites as possible, independent of acquisitions.

Google is doing this, apparently, to hire as many people as they can who don't want to relocate to Silicon Valley. And I don't know whether they plan to segment their products to certain sites (Talk over here, Search over there) or mingle them all together, although the rumors I hear are about mingling. The mingling fits in with a theory I am incubating, which is that distributed development is actually a good thing for a product.

I don't mean the obvious ways, such as a distributed team likely being more diverse, or being able to work on the product for more hours each day. What I am talking about is a distributed team being forced to use techniques that result in higher-quality software.

When we talk to teams about quality, a lot of what we emphasize is up-front planning. Meaning that you really think about your design, and your interfaces, and how pieces fit together. And there's another area we need to focus on that is coming out of some of the analyses of Windows bugs being done: the time between design and coding. These are things that are too minor to be part of a design document and design review, but need to get figured out before you start coding. Like knowing precisely what that API parameter or registry key means before using it. For lack of a better term, we'll call it "personal design review".

My concern is that having your team all in one place makes you lazy about this kind of thing because you figure that you can always ask the person next door for details when you need them, and if you get some part of your design wrong you can have a quick hallway meeting to sort it out. With a distributed team, especially one that is time-shifted, you know that it is hard to fix these things later, so you take the effort to design and document them right to begin with (open-source development creates essentially the same effect). So, in this case, it could indeed be that "absence makes the code grow better" (or something like that).

Posted by AdamBa at 09:01 PM | Comments (0) | TrackBack

November 16, 2006

Programmers Who Can't Write Strlen()

A few days ago I gave a talk at Microsoft on "Choosing Technical Interview Questions". The talk was quite popular: there were 180 people registered and 508 on the waitlist. A timely topic, it appears. 180 was the official room capacity but there wound up being 241 people in the audience (people scan in with their cardkey so we get an exact count). And the talk went pretty well if I do say so myself. The video should be up soon (go to mylearning and search by the title, if there is a "View" button then the recording is there).

One of the subjects I touched on was what kind of coding question was too easy to bother asking. The C standard library has a function called strlen() which calculates the length of a string (if you've been hiding under a rock and/or programming in one of them "modern" languages, I'll explain that a "string" in C is just an array of characters, terminated by a character with value zero. So the question could be generalized as "You have an array of elements. Count the number of non-null elements at the beginning of the array"). My feeling is that strlen() is too easy a question to ask during an interview.

When I state this, someone will usually say, "But you'd be amazed how many candidates can't do strlen()". To which my standard reply is, sure, maybe they can't, but you might as well ask a harder question, because the strlen()-inept will also fail the harder one, and you'll be setting the bar a bit higher.

But then I got to thinking. Is there anybody who has, say, a Computer Science degree, that can't code up strlen()? I mean it really is an easy problem in any language. You walk the array until you find a null, and keep track of how many elements it takes to find one. C has pointer arithmetic so if you've got some ninja skillz you don't need to keep a separate counter, you can just figure out how far the null is from the beginning of the array. Which leads to the only possible error I can think of someone making, that is, having an off-by-one error in their calculation at the end. I wouldn't call that "can't do strlen()" however, I would call that "having a bug in their strlen()".
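For what it's worth, here is a minimal sketch of the two approaches, the counter walk and the pointer-arithmetic version (illustrative only, not any particular C runtime's implementation; the function names are mine):

```c
#include <stddef.h>

/* Counter version: walk the array until the terminating zero. */
size_t my_strlen_counter(const char *s)
{
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

/* Pointer-arithmetic version: find the null, subtract the start.
   The final subtraction is where the off-by-one temptation lives:
   p stops ON the null, one past the last character, so p - s is
   already the count and needs no +1 or -1 adjustment. */
size_t my_strlen_pointer(const char *s)
{
    const char *p = s;
    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}
```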

Someone pointed me to Steve Yegge's column on phone screens (from back when he worked at Amazon) where he advocates asking candidates a trivial programming problem on the phone. Now he doesn't come out and say "You'd be amazed at some candidates who just can't write code", but he implies it by having that as a "weeder" question. Actually while I would consider some of his examples to be trivial (print odd numbers from 1 to 99, find largest number in an array), I would say that some of the others aren't (reverse a string, sum integers in a file). And I have serious issues with some of the other things he writes in that post, which I won't get into. Suffice it to say that he also seems to be (or seemed to be) in the camp that some programmers just can't write code, although he doesn't actually state that he has seen that.

I asked around the Development Engineering Excellence team and basically got consensus that yes, nobody had actually ever seen someone completely fail at strlen(), it was more that they made small mistakes. OK, that's fine. But for now I will assume that the stories of the strlen() disaster are an urban legend.

Posted by AdamBa at 11:09 PM | Comments (9) | TrackBack

November 13, 2006

History is Written by the Winners

I didn't make that quote up, of course, but I was thinking about it the other day. We were discussing a guy named Eliyahu Goldratt, who wrote a book called The Goal. It's a novel that is really a business book about the Theory of Constraints, a previous Goldratt invention (and previous Goldratt book). And it may or may not be autobiographical. But the story goes that Goldratt was working at some software company and invented this theory, and decided it was a better idea than the software he was working on. So he made the bold decision to jettison his company and go out on his own as an author and consultant. Needless to say he is now richer and happier than ever before.

The lesson here is clear. If you have a dream, you need to abandon your current well-paying job and pursue the dream. There will be risk, but there is no doubt that in the end you will be rich and happy, just like Goldratt. Right?

Well, it depends. The problem is you are only getting one side of the story. You don't hear about the people who have a dream and drop everything to go after it and wind up poor and miserable, wishing they had just stayed put. This is because nobody writes books called I Tried to Go After My Dream But I Messed Up and Now I'm a Loser. I doubt anybody would want to write such a book, and in any case they wouldn't wind up telling their story on Oprah. I'm sure such people exist, you just don't hear about them.

The same half-message is one of the factors leading to helicopter parenting. You hear about the kid who wins the Olympic gold medal, who explains how her parents nudged her to devote herself to her sport, how it took her a while to come to enjoy it, but now all her hard work has paid off. What you don't hear about is the other kids, the ones who made 99% of the sacrifices but received 0% of the rewards, who may now resent their parents and pine for a lost childhood.

And you see this same effect elsewhere, especially on the non-fiction bestseller list. Generals who win their battles, heirs to the original title quote; but also stock market pickers who get it right; actresses who hit the big time; politicians who win their election; bloggers who hit the A-List. Heck, this even happens at Microsoft: a team will ship their software on time, and whatever they did will become the accepted wisdom for a few years. Is it true genius, or just temporary luck? You hear about the success stories, and it's fine to celebrate success. The message they inevitably promulgate, that if you do what they did you will have the results they did, is an easy one to believe, but it may be a hard one to justify.

Posted by AdamBa at 09:52 PM | Comments (2) | TrackBack

November 10, 2006

Golden Stairs

Johnny Sain died the other day. He was a major-league baseball pitcher in the 1940s and 1950s, but I know him from the book Ball Four. Although he is only a peripheral character, he acts as a mentor of sorts to Jim Bouton, and comes across as one of the few wise people in baseball at the time.

His most famous advice was on asking for more money. This was back in the days before free agency, when baseball contracts essentially bound players to their teams for whatever salary they cared to offer, but Sain was adamant that you could always go ask for a raise if you deserved it. He told Bouton, "Now, don't be afraid to climb those golden stairs. Go in there and get what you're worth."

Those golden stairs. In honor of Johnny Sain, I'll keep that in mind the next time reviews come around.

Posted by AdamBa at 10:47 PM | Comments (1) | TrackBack

November 06, 2006

Programming the Univac I in 1956

For today's post, we have a guest blogger. He's Michael Barr, Peter Redpath Emeritus Professor of Pure Mathematics at McGill University (but NOT the embedded systems expert of the same name), and more importantly my father:
My first experience programming was about 50 years ago, although the program I wrote was never debugged or run. I think the basic idea was sound, but the machine cost about $5 a minute to use and ran at kHz speeds. Also the MTBF was about 5 minutes, so the only program you could run was a serious one.

The Univac I was one of the first, if not the first, commercially available computers. You plunked down your million 1950 dollars, found a space large enough for a basketball court, got an electrical supply sufficient for a small village and you could get started computing. Of course there were details like a room full of vacuum tubes since the ones in the machine were continually burning out. That was why the arithmetic operations were done in triplicate; if two of them gave the same answer, it was assumed correct. Otherwise the machine stopped on error. The entire memory contents were dumped to tape every minute or so in order not to have to restart from the beginning. Input was by paper tape, although I think they eventually added a punch card reader.

The University of Pennsylvania needed the room (essentially the basement of the Math and Physics building), the electric power, the service technicians, but not the million dollars. That is because the Moore School at Penn had been the home of EE professors Eckert and Mauchly, who built the ENIAC, one of the very first computers anywhere, and then went off to found the Eckert-Mauchly division of Remington-Rand that sold the Univac I. About to put the Univac II on the market, which I believe used discrete transistor logic, they had one obsolete machine left and donated it to Penn. I don't know what the business tax rate was, but the personal tax rate was very high (up to 91% on taxable income above, IIRC, $200,000) and it probably was worth more to them as a charitable deduction than as a heavily discounted obsolete machine.

As a student I worked at something called the Johnson Foundation at Penn. (Originally founded in honor of Eldridge Reeves Johnson, who may have founded the Victor Talking Machine company, but was the one who sold it to RCA.) They had a rather large analog computer (the JFEAC--Johnson Foundation Electronic Analog Computer) that was used to solve the kind of chemical rate equations that some of the researchers at the JF worked on. I was, in fact, the operator of that machine. It was programmed by attaching modules to one another by phone cables. When the Univac came on campus, they decided to switch to that. In the meantime (as it happened, a very long meantime), they kept the analog computer running, so I was not involved in programming it. Truth to tell, it was hardly ever used, partly because tubes were always burning out and partly because no one ever actually came to me with a chemical kinetic problem to solve.

Ah, the memory. There were two kinds. The "fast" memory consisted of 100 tanks of mercury that had an electric to sound transducer at one end and a sound to electric transducer at the other. Think of it as a tube through which 10 words of memory were coursing. At any one time, one was just coming into the entry and one was exiting and the other eight were trapped at various places in the tube. I guess you could describe it as time-multiplexed. So of the 1000 memory locations, 900 were in the tanks at any one time and the remaining hundred were available for use. To read/store from memory, the machine had to wait for the right location to come around. One of the optimizations the programmers sometimes did for important parts of the program was called "minimal latency coding", trying to work out where to store something so that the machine didn't have to wait long for that location to become available. People had tables of average instruction time. I don't remember actually seeing the mercury tanks but I would guess they were about a foot long. The speed of sound in mercury is 1450 meters/second, so it would take perhaps 200 microseconds for the signal to travel the length of the tube.

The slow memory consisted of 12 magnetic tape readers. They were definitely state of the art, but were not random access no matter what. They were each about eight feet high, the size of a refrigerator. The program the JF eventually came up with used 80,000 words and about half of it consisted of "overlay" instructions, that is instructions to read the next routine to be used from the tape into the internal memory.

A word (really a double word, but the smallest addressable unit) consisted of 72 bits divided into 12 bytes of 6 bits each. It is now commonly thought that a byte is 8 bits (and the French word for byte is octet), but various machines had byte sizes from 6 to 9 bits. Naturally, they used only uppercase letters, numbers, and punctuation. I don't recall if the remaining 20 or so characters had any meaning. I don't think anyone had anticipated text processing.

The machine was hard wired to interpret a kind of primitive assembly language. There were two instructions per word and they had the format "A  324", which meant to add the contents of memory location 324 to the accumulator (essentially the arithmetic register), "M  324", which meant multiply the contents of location 324 by the accumulator and so on. In all cases the result was stored in the accumulator from which it could be written to memory. Multiply and divide used a second register. So there must have been instructions to move between the second register and the accumulator, and also tape instructions, but I forget them. There was talk of a "three memory instruction" machine, which would be able to, say, add two memory locations and store the result in a third, but this wasn't such a machine. Everything was accumulator and memory. The instructions were stored as written, so to store "A  324" it would store the encoding of an "A" (it used some encoding, not ASCII), then the encoding for a space, then the encoding for a second space (I recall some of the instructions were two characters), then the BCD encodings of 3, 2, and 4. This storage system was of course very wasteful of memory.

On overflow, the machine executed the instruction found at memory location 000, which was usually an unconditional jump. If it was anything other than such a jump, it resumed execution where it had been. Notice, incidentally, how inefficient the storage was, using 36 bits for each instruction. They used the extremely limited memory very inefficiently. Of course, they probably felt that they had as much storage as necessary on the magnetic tapes. And if that was slow by our standards, the computer was so much faster than the alternative that it certainly did not seem slow. But note that all programming was done in absolute memory locations. Later, Univac released a real assembler that allowed you to use relative addressing and also linked subroutines automatically, with a mechanism for inputting parameters. However the idea of a re-entrant program had not occurred to anyone, so unless you were using a parameter-free subroutine, you had to have as many copies of it in memory (or overlaid) as there were uses in your program.

The machine was in a big room, with the control console, the paper punch, the reader, and the tape servos. The whole thing was built with vacuum tubes, not transistors. It was the largest vacuum tube computer ever built in the US; I heard the Soviets built some larger ones. The actual tubes were off in a separate room. Then there was a side room with a giant capacitor. You know capacitance is typically measured in micro-farads or micro-micro-farads. This thing had a one farad capacitor. Two thousand electrolytic capacitors in parallel, each 500 micro-farads, which occupied an entire room. I assume it was to deliver power to the machine.

I was curious about the computer and I had access to the programming manuals so I decided to try my hand at programming on my own, without any expectation that I would ever be able to run it. They had programming sheets (on a pad) that were laid out with spaces for (line) numbers on the left and then 12 boxes, into which you put two successive instructions. Of course these were written longhand and then someone else punched them into the paper tape. The program I wrote was to do double precision arithmetic. I only wrote it out and tried to check the logic, I never had it punched on tape or run. About 20 years later, I actually wrote such a program on a Wang minicomputer that my department bought in 1975 (and retired seven or eight years later). I never finished debugging that program either, incidentally. The arithmetic on the Univac was BCD. Pure binary would have seemed just too weird, especially to businessmen. Even today, many people are convinced that binary arithmetic is inherently less accurate than decimal, when actually the opposite is true. Since six bits were used to store each digit and another six for the sign, the result is that the range of numbers representable in single precision was plus or minus 10^{11}, while pure binary representation could have represented numbers up to plus or minus 2^{71}, about 2*10^{21}.

Anyway, the idea was independent of such considerations. To do double precision arithmetic, you have to work with numbers of the form a + tb, where t is a number of the form 10^{-k} or 2^{-k}, depending on the coding, and k is the length of a single precision number. Let us represent such a number by (a,b). Then (a,b) + (c,d) = (a+c,b+d) or (a+c+1,b+d) depending on whether there is an overflow in the addition of b+d. Of course, there is also a potential overflow in the addition of a+c, but let us ignore that since that leads to an error state. A mechanism for dealing with overflow (more precisely, for allowing the programmer to write an overflow handler) was hard-wired in the machine.
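This is Adam interjecting: the carry rule can be sketched in modern C, with 32-bit binary halves standing in for the Univac's 11-digit decimal words (the struct and names are my own, purely to illustrate the formula):

```c
#include <stdint.h>

/* A double-precision value a + t*b, where t = 2^-32 here
   (on the Univac, t was 10^-11, one single-precision word). */
struct dp { uint32_t hi; uint32_t lo; };

/* (a,b) + (c,d) = (a+c, b+d), or (a+c+1, b+d) when b+d overflows:
   the overflow of the low halves carries into the high halves. */
struct dp dp_add(struct dp x, struct dp y)
{
    struct dp r;
    r.lo = x.lo + y.lo;
    r.hi = x.hi + y.hi + (r.lo < x.lo);  /* carry if the low sum wrapped */
    /* overflow of the high halves is ignored here; on the Univac it
       trapped to the handler at location 000 */
    return r;
}
```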

Subtraction is handled in the same way as addition. Multiplication is more complicated, but the essential formula is that (a + tb)*(c + td) = ac + t(ad+bc) + t^2(bd), but the last term is too small to be saved. Actually, the machine did save the insignificant digits of the product, else this multiplication would not have been readily possible. The true formula used has to be (a,b)*(c,d) = ((ac)_h, (ac)_l + (ad)_h + (bc)_h), where _h and _l stand for the high and low halves of the product. The understanding was that all numbers were less than 1, so multiplication could not overflow, although the additions could; this was a problem, but a manageable one.
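Adam again: the _h/_l bookkeeping translates naturally to any machine that hands you the full double-length product. A hedged sketch, again with 32-bit halves instead of 11-digit decimal words (names mine):

```c
#include <stdint.h>

struct dp { uint32_t hi; uint32_t lo; };

/* Full 32x32 -> 64-bit product; its top and bottom words play the
   role of (xy)_h and (xy)_l in the formula. */
static uint64_t prod(uint32_t x, uint32_t y) { return (uint64_t)x * y; }

/* (a,b)*(c,d) ~= ((ac)_h, (ac)_l + (ad)_h + (bc)_h), dropping the
   t^2 term bd entirely. The additions of the low-half terms can
   carry into the high word, just as in double-precision addition. */
struct dp dp_mul(struct dp x, struct dp y)
{
    uint64_t ac = prod(x.hi, y.hi);
    uint32_t hi = (uint32_t)(ac >> 32);  /* (ac)_h */
    uint32_t lo = (uint32_t)ac;          /* (ac)_l */
    uint32_t t;

    t = lo + (uint32_t)(prod(x.hi, y.lo) >> 32);  /* + (ad)_h */
    hi += (t < lo);                               /* carry */
    lo = t;
    t = lo + (uint32_t)(prod(x.lo, y.hi) >> 32);  /* + (bc)_h */
    hi += (t < lo);
    lo = t;

    struct dp r = { hi, lo };
    return r;
}
```

Treating each pair as a fraction less than 1 (hi/2^32 + lo/2^64), 0.5 times 0.5 comes out as 0.25, with the cross terms feeding the low word.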

Division is more complicated. The basic formula is (a+tb)/(c+td) = (a+tb)*(c-td)/(c^2-t^2d^2). Since t^2 is insignificant, this can be replaced by (a+tb)*(c-td)/c^2. The multiplication routine can be used to compute the numerator, say (e+tf), and then you can divide twice by c. Since (e+tf)/c^2 = e/c^2 + t(f/c^2), this can be done in single precision and the results combined. You have to divide e by c and add the low precision digits of the result to f and then divide the new f by c. Repeat. Sort out all the possibilities of overflow. The result was not pretty and I rather doubt I had all the details worked out. High precision arithmetic is not for the faint of heart. My program was probably around 100 lines (so 200 instructions), although it would have been longer if debugged. I forgot about incorporating the lower bits from multiplication and division.

There is another possibility. When I was in school, I learned long division. We divided one digit at a time. It was slow, but it worked and there was no built-in limit to length. In binary, it is even easier, since the only possible next bit is 0 or 1. Try 1 and if it is too large, the next bit has to be 0. It works and it might be faster. I do know that in the 80s, I wrote a square root program along those lines in assembly language using Forth assembler. It was much faster than the square root supplied by my Forth vendor, based on a Newton iteration that used the rather slow division of the 8088 chip. I sent it to LMI and it was eventually included in their Forth distribution. There is life in those old algorithms yet.
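Adam here: the bit-at-a-time scheme described above is ordinary restoring division, and it is short enough to sketch (a toy version of mine, not the Forth code mentioned):

```c
#include <stdint.h>

/* Restoring binary long division, one quotient bit at a time:
   at each step, bring down the next dividend bit, "try 1" by
   comparing the remainder against the divisor, and keep the 1
   only if it fits -- the schoolbook method, base 2. */
uint32_t long_div(uint32_t dividend, uint32_t divisor, uint32_t *rem)
{
    uint32_t q = 0, r = 0;
    int i;
    for (i = 31; i >= 0; i--) {
        r = (r << 1) | ((dividend >> i) & 1);  /* bring down next bit */
        if (r >= divisor) {                    /* try 1: does it fit? */
            r -= divisor;
            q |= 1u << i;
        }
    }
    *rem = r;
    return q;
}
```

There is no built-in limit to the word length here either; extending the remainder to multiple words gives arbitrary precision, just as with pencil and paper.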

This is Adam again. I was thinking about this and realized that this story is from 1956, and the original IBM PC came out in 1981, which is exactly halfway between then and now. Of course the PC was a general-purpose computer. But the PC had about 64K of memory and a 1 MHz (or maybe it was 2 MHz) bus. So it had about 64 times as much memory as the Univac I (technically less, since the Univac I used those 72-bit words, but it stored stuff so inefficiently that we'll call it a wash). Meanwhile today's computers have roughly 16,000 times as much memory as the original PC. And those mercury tanks might have taken an average of 100 microseconds to access memory, but the jump in bus speed from the original PC to today's machines may be an even larger factor. I don't know enough about how buses really operated to convert between a machine that could read 72 bits in 100 microseconds, one that transferred 16 bits at a time over a 1 MHz bus, and today's machines that transfer 32 bits over a 533 MHz bus, but if you just do the obvious calculation you get a bandwidth of 90K/sec for the Univac I, 2M/sec for the IBM PC, and about 2 gig/sec for today's machines, so again the jump is much greater from 1981 to 2006 than it was from 1956 to 1981. The Wikipedia article linked to above, incidentally, has some good information and links on the Univac I.
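The obvious calculation, spelled out (using the figures from the paragraph above as assumptions, and ignoring everything real buses do):

```python
# Back-of-the-envelope bandwidth comparison:
univac = 72 / 8 / 100e-6   # 72 bits per 100-microsecond access -> bytes/sec
pc     = 16 / 8 * 1e6      # 16 bits per cycle at 1 MHz
modern = 32 / 8 * 533e6    # 32 bits per cycle at 533 MHz
print(f"Univac I: {univac / 1e3:.0f} KB/s")   # 90 KB/s
print(f"IBM PC:   {pc / 1e6:.0f} MB/s")       # 2 MB/s
print(f"2006 PC:  {modern / 1e9:.1f} GB/s")   # 2.1 GB/s
print(f"1956->1981: {pc / univac:.0f}x; 1981->2006: {modern / pc:.0f}x")
```

So the first 25-year jump is about a factor of 22, and the second about a factor of 1,066.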

There is one aspect of this story that I find fascinating, which is how my father wound up at the Johnson Foundation to begin with. He grew up in Philadelphia in a family he described as "barely lower middle class". When he graduated from high school (in 1954) there was nothing saved for college. Tuition at Penn (which was just a few miles from their house) was $700. There was a vague notion that he could get a scholarship, but nothing was forthcoming. Then, what he describes as "a miracle" happened. He was looking for a summer job in June 1954 and saw an ad in the paper. It turned out not to be a summer job, but instead an opportunity to work full-time at the Johnson Foundation while allowing him to take classes at Penn's night school, as well as summer classes, and earn a degree in 5 years. He would be paid a salary and also be able to attend Penn for half-price, as an employee. He took the job (after first convincing somebody in the Pennsylvania government that it was OK for him to work in a lab before he turned 18). He wound up working there for 3 years, then switched to being a regular student with a partial scholarship and loan, graduating after 4 1/2 years. After that he got a Ph.D. in mathematics from Penn and went on to become a professor at McGill. But it's really astonishing to wonder where he would have wound up had he not gotten that job at the Johnson Foundation.

Posted by AdamBa at 10:26 PM | Comments (0) | TrackBack

November 04, 2006

Bad Website

My wife had her birthday recently and I decided to buy her a gift certificate from Forth & Towne. F&T is a spinoff from the Gap, which focuses on chic clothes for women over a certain age. I happened to read in the paper that they had just opened a store at Pacific Place in downtown Seattle. So I went to their website to get the phone number, to check if they sold gift certificates (in particular I wanted to make sure the cards they issued were snooty-looking Forth & Towne ones, not generic Gap ones).

First of all, they have one of those annoying Flash websites that takes forever to load and doesn't allow deep linking. But worse, they didn't have the phone number on the site. They did have a listing for the store, noting that it was scheduled to open on October 25 (this was about October 30, when I was looking). But no phone number. In fact, today they still have no phone number listed. And they supposedly opened another store at Alderwood Mall on November 1, but it's still shown as a future event, and also has no phone number. So how likely am I to actually plan a trip there based on what is on the website?

It's annoying because the most basic function of such a website would seem to be giving people the phone number and directions to a store. I mean, if all you do is buy a Web domain for your store and put up a one-page, basic HTML site, you will have your address and phone number, and probably a link to a map site to show your address. But amidst all the fluff on the site (and this site is extra-fluffy), they neglected this basic information.

In my case I really did want to go there, so I plowed ahead. I first tried superpages.net, but it couldn't find it. Then I went to the Gap website to get the phone number of the Gap store downtown, but when I called they didn't know the Forth & Towne number. They advised me to call 411. Which I did, and miraculously they had the number. Which (and I can't believe I'm even providing the free pub by telling you this, but it does complete the paragraph nicely) if you care, is 206 223-2704. And they did indeed have gift cards and they did indeed look like Bellagio hotel keys. And I did buy one and my wife was happy and she will one day transport herself to the plush confines of the store and spend far more than the value of the gift certificate. The system has routed around the problem and is operating normally.

As an aside, I find the name "Forth & Towne" amusing. I can only imagine how many hours of consultant time were billed to produce it. It is trying to appear upscale and swanky, with the "Forth" part alluding to "Fourth" which makes you think of "Saks Fifth Avenue". I guess they didn't want to use "Fourth" because it might imply an actual street name (the Seattle store is actually near Sixth & Olive, which doesn't sound too bad). But while doing this they managed to include the name of a programming language. What's next, Cobol & Country?

P.S. While doing research for that last part, I clicked on the directions link for the Seattle store on the F&T website. Somewhat mind-bogglingly, they botched that one too. The link you click on sends you to a Google map of 1500 4th Ave, which is near the Pacific Place mall, but not quite there (and far enough away that you wouldn't be able to see the mall from that spot). So they managed to get both the phone number and the address wrong! Impressive work.

Posted by AdamBa at 09:32 AM | Comments (5) | TrackBack

November 02, 2006

La Poutine de Chez Nous

Do you know what this is?

The gastronomically curious and the maple syrup addled may recognize this as poutine, that weirdoid emulsion of french fries, cheese, and gravy that emerged from the wilds of Quebec around the same time as Rene Simard. The astonishing thing is that this picture was taken right here on Microsoft's campus, in the Building 34 cafeteria.

It turns out that starting this month, Cafe 34 will be offering poutine for the first week of every month. A guy in a chef's uniform said it was "for the Canadians". So today is day 4 of the poutine invasion. I asked if anybody was actually eating it and the guy said they couldn't make it fast enough (no word on whether anybody was ordering it a second time, but it is Thursday and people are still lining up).

The actual product wasn't bad. The cheese lumps were much too large, so they didn't half-melt like they are supposed to, but it tasted reasonable (by this I mean "it tastes like poutine from Montreal", I am not making an editorial comment on whether it should be considered food, etc). Also, this presumably means you can order fries with gravy, which will be heartening on those cold gray Seattle winter days.

Posted by AdamBa at 03:16 PM | Comments (3) | TrackBack