Disclaimer: The material here is essentially unedited, except here and there to improve readability. It in no way resembles an organized corpus or FAQ.
Reply: Not so much a provocative issue as an exercise that should provoke the reader into exclaiming at some point, "Holy smoke, so that's how Smalltalk (or objects or reuse or inheritance or whatever) works. I wouldn't have realized it if I hadn't fooled around with this damn exercise."
Reply: First piece of advice: Chapter 4 alone is harder and more demanding than 7 or 8 chapters of a whole lot of other books, so don't get discouraged. It can take a long time, even for very sharp people, to work through that one chapter. The exercises are supposed to make you scratch your head a little and not just be a bunch of cookbook-like steps. Learning Smalltalk properly requires a little of this sitting around and wondering and stumbling.
Second piece of advice: Ask! If you spend too long alone working on any problem, you can reach the point of diminishing returns. So ask someone. If you don't have a Smalltalk expert down the hall, ask on Usenet in comp.lang.smalltalk. Or find contacts through a local Smalltalk Users Group. You can ask me too; it might take me longer to get around to answering, but I'll do my best.
Last piece of advice: If you're referring to the last section (4.5), those exercises should be considered more challenging than the rest. We often tell people that they are "optional" so that they do not become unduly discouraged.
Reply: First, congratulations on graduating. You don't say when you would start work, but if it's immediately after graduation then that's probably not enough time to plunge far into the subject. If, on the other hand, you actually have a little time for self-study before you start your job, I would suggest a somewhat radical course of action: Get yourself a Smalltalk system and start playing with it. All the commercial ones come with beginner tutorials. This will by no means make you proficient in OOP or Smalltalk, but it will give you an inkling -- I mean a right-brain rather than a left-brain inkling -- of what the subject is really about. Then start trying to write a simple-minded application. I mean something really simple, as simple an app as you can think up, perhaps inspired by some elementary exercise you already solved early in your college programming career. You will blunder a lot and write inelegant code, and you will probably wish you had an expert around who could answer your dumb questions. No matter, you are still bound to learn something useful (and you'll presumably find plenty of experts once you start work).
Now, how does someone on a student budget afford a Smalltalk system? There are educational discounts, but there are also freeware Smalltalks. Some upcoming Smalltalks are available as betas and can be downloaded from the Web. (Tune into comp.lang.smalltalk on Usenet.) But their tutorials are unlikely to be polished yet. So for a basic, rock-solid Smalltalk, consider Smalltalk Express from Smalltalk Systems. They have packaged up a reliable old Smalltalk as freeware.
Finally, after you've played with Smalltalk for a few hours, or days, or weeks, then have a look at the books. They will make a lot more sense after your Smalltalk experience (even if you were to become a C++ or Java programmer instead of a Smalltalk one). And they can help you answer some of your questions. And your questions are likely to be good, concrete ones, instead of the airy ones that someone without any OO programming exposure would have. Which books? I'm going to suggest just a few. Designing Object-Oriented Software by Wirfs-Brock et al. is an old standby, obviously more concerned with design than Smalltalk per se, but with a strong Smalltalk bias nonetheless. Smalltalk: The Language by David Smith (an IBMer at Watson Research) has lots of meaty programming examples, but makes no pretense of helping with design. Smalltalk with Style by Skublics/Klimas/Thomas is a tiny but genuinely useful collection of tips. And my book, which you already know about. All of these suggestions have a Smalltalk bias; there are of course interesting OO, non-Smalltalk books, but that's getting away from your original query.
Reply: The free PPD (ParcPlace Digitalk) Smalltalk (aka Smalltalk Express) is good. It comes with Window Builder which is a nice, widely-used, relatively lightweight (that's "lightweight" in the good sense) GUI builder. Hadn't heard of Squeak, but it's nice that Dan Ingalls is still around. (Added later: Squeak is the freeware back-to-basics brainchild of several Smalltalk luminaries, including Alan Kay, who are now working at Walt Disney Systems. It's Smalltalk written in Smalltalk itself.)
Reply: Does ownership work? Sometimes, sometimes not. Like any tool, you can hurt yourself if you're not careful. Do multiple developers work on a single class? This is rare. If it happens a lot, then it's a sign that something needs refactoring. Maybe the team should get their noses out of the code and look at the design and the application partitioning from a little farther away. The problem is probably not entirely avoidable -- vacations and illness being examples -- but if it happens to the extent that you describe then it's not healthy. Then you mention "classes" in the plural, and that sounds particularly unhealthy. I've heard of one project where the Envy factorization is so hopeless that it's too expensive to redo it. Here's a philosophical perspective that I think will resonate with you, because I know you are sensitive to the costs of bad design: Envy is Smalltalk. Everything is an object. Applications are objects. It's hard to design objects well, and people generally don't invest enough time in designing them. Conclusion: applications end up being poorly designed, which is just as catastrophic as any other poor design.
Reply: Yup. There are 2 schools of thought. We know you belong to the anal-retentive one, else you wouldn't have asked. The anal-retentive school actually versions these junk editions with names like "Junk1.4". The California school just lets them be. They're not versions, they're editions, so who cares? BTW, if you ever clone your repository to save space, the editions get left behind. This is probably a moot point, because in practice one rarely clones the repository.
Reply: I doubt I can provide much insight, but here's what comes to mind: As a general rule, finding good customers is the name of the game. The best customers are repeat or long-term ones. In any case, finding customers is the challenging part, and can be tougher than putting together or offering a high-quality service. Or, I guess what I'm trying to say is that you can be the best consultant or educator in the world, but that doesn't make customers automatically come banging at your door. I was lucky to have good contacts in the industry.
As for Java, both supply and demand for it exceed those for Smalltalk now, which probably means it'll be easier to find Java work, but you won't make as much money at it. In other words, the money is still in Smalltalk, but it's harder than before to find the work.
Reply: It happens a lot. I'm used to it, and I never feel insulted. One relatively famous bookstore chain also catalogs the book under my first name. Oh well.
Reply: It occurred to me that there's something about objects I should have emphasized. Namely, the most important thing is not the "software engineering" aspects but the cognitive aspects. The central point is that an object approach is appropriate for many problems because the objects form a human interface between, on the one side, the people who have a concrete problem but are not technicians, and on the other, the programmers. The justification for this claim is the motivation underlying the book.
Nevertheless, the widespread view is that the technical aspects are central. In part that is of course true. Without the technical aspects you get nowhere. (So every book must deal thoroughly with these aspects.) But people forget, or never notice, that they are mere supports for the cognitive purpose. It is as if one worked through GB and PSG (Note: GB = Government and Binding, PSG = Phrase Structure Grammar) and so on without ever asking what they have to do with people or with thinking.
This cognitive emphasis is stressed here and there in the book. Those passages may well be more interesting to you than the rest, because cognitive questions are at the center of our seminar.
Oh! One more small suggestion. In general the so-called "Commentary" sections are less specialized. They are attempts to point out connections between the purely technical material and outside fields. For that reason they may be of more interest to you.
Reply: I read Milner's lecture this morning and skimmed lightly over the Wegner, which seemed to resemble his CACM article. Some shallow ruminations:
Whether they mean the same thing when they say "interaction," I'm not sure.
Milner's work resembles sub-atomic physics: it's kind of nice to read about, but I can't imagine dedicating my professional energies to it. (Maybe when I was young and idealistic.) Or, back to the theme of language understanding: it's a level of explanation (those are his own words, "level of explanation") like how mitochondria power brain cells instead of, say, how/whether/when people fit an excerpt they've just read into a frame or script.
Wegner talks about the non-Goedel-completeness of interaction machines, but this strikes me as a red herring. At least I don't see why it's an important observation. Arithmetic is incomplete, yet we still do OK with balancing checkbooks and modelling rocket ships. What then is so profound about objects or interaction machines being incomplete?
I don't think, though, that Milner's paper sheds much light on what Wegner is saying.
Reply: The business about variables-being-objects or assignments-being-messages -- the resonance between your interpreter and Milner's thinking -- is tempting me to renounce my orthodox view. Actually, Eric Clayberg started me down this path already, so I guess that puts you into yet more good company, or vice-versa. He took me to task for emphasizing that assignments are _not_ messages. (See the footnote on page 34 of my book for a hint of the debate. You can tell there how stubbornly I cling to orthodoxy.)
Not only are Milner and Wegner on somewhat different wavelengths, I'll bet Milner would find Wegner to be a loose cannon. Wegner's very premise is that the software worth thinking about hasn't a prayer of being reduced to the orderliness that Milner seeks. It would be fun to see them together on some kind of panel discussion.
To the matter of what technically distinguishes a traditional Turing machine from a Wegner interaction machine, I'm still unsure. Why is the, as you say, _non-determinism_ of the sequence of inputs relevant, rather than the infinite-ness of the number of inputs (i.e. interactions)? Suppose we are limited to 4 interactions, but they are "non-deterministic". If the machine can handle any tape, then it ought to be able to handle a tape with any 4 interactions inscribed on it. In particular it ought to be able to handle a tape with these specific 4 interactions, even though the non-determinism implies that I don't know enough to inscribe this particular tape before beginning. In other words, the machine can handle many tapes, one of which represents the actual 4 interactions, but I as the experimenter who sets tapes against the machine can't know enough before the experiment to select this particular tape off my shelf of tapes. This argument demonstrates that Turing machines can handle a finite number of interactions. But not infinitely many. I suspect I'm not understanding some basic idea which would deflate this argument. If you want to point me to some good clarifying literature, please do. (BTW, I'm attaching a web page I found which helps me think about the Chomsky hierarchy.)
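The shelf-of-tapes argument above can be made concrete with a toy sketch. Everything here (the machine's rule, the alphabet, the names) is my own invented illustration, not anything from Wegner or Milner; the point is only that a deterministic machine defined over all finite tapes necessarily handles the one tape that happens to match the 4 interactions that actually occurred:

```python
import itertools

# A toy "machine": a deterministic rule from a finite input tape to outputs.
def machine(tape):
    # Echo each interaction with its position -- any deterministic rule would do.
    return [f"{i}:{symbol}" for i, symbol in enumerate(tape)]

alphabet = ["a", "b"]
# The shelf of tapes: every possible sequence of 4 interactions, pre-inscribed.
shelf = list(itertools.product(alphabet, repeat=4))

# Whatever 4 interactions actually occur (the "non-deterministic" run) ...
actual_run = ("b", "a", "a", "b")
# ... were already inscribed on one of the tapes sitting on the shelf.
assert actual_run in shelf
print(machine(actual_run))  # prints: ['0:b', '1:a', '2:a', '3:b']
```

The experimenter's problem is only that they cannot know *which* of the 16 tapes to pull off the shelf before the run; the machine itself handles every one of them.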
Having said all this, I agree with what you say about the essential advantage of interaction machines. I.e. Turing machines are impractical for real problems while interaction machines suggest a practical way to solve real problems.
Reply: Let's see, I could change the part about Alan Kay's observations to read: "Alan Kay remarks that computing has been moving in the direction of later and later binding. Java, by this measure, takes a step backwards from Smalltalk. Not quite all the way back, that is to say not as far back as C++." Just kidding. I have nothing against Java, other than that it gets overhyped, but I guess that happens to everything. I'm not ready yet to convert the book, although one of these days I may give that a try too. Probably I'll wait until the Java boom is over, so that I can keep my reputation for bad timing intact.
Reply: Book pricing is, I've learned, too complicated a subject for mere authors to comprehend. So don't blame me. (Note: Amazon later removed its discount.)
That leads up to a related topic. The spectrum of commercially viable Smalltalks seems pretty small: IBM and ParcPlace/Digitalk/ObjectShare. The latter has a pretty confusing product line right now. Are there any others? The environments are also pretty pricey. The IBM Visual Age for Smalltalk Pro is $3500; for C++ it's $370. I suppose this reflects the size of the market and the need to amortize costs over a much smaller developer base. I'm curious because I'm wondering if learning Smalltalk has any long-term commercial significance, i.e. could I ever base a product on Smalltalk? Is Smalltalk going to end up like APL or Prolog?
Reply: Your comments on Smalltalk all seem reasonable. I've heard that IBM has been eating PPD's lunch ever since they practically self-destructed. On the other hand, one has to wonder whether it's enough to be a big fish in a small pond. There have been a host of other commercial Smalltalks (Enfin had one, for example) in the last few years, but I don't know that any became big factors. And two other new ones were beta-ing about 6 months or a year ago, one from England. Sorry, I've forgotten the names.
Prognosis and advice? I'm the guy whose investments always turn out badly, so either pay me no heed or listen carefully and do the opposite. First of all, watch the piece of PPD (Smalltalk Systems) that went to Eric Clayberg. Eric (a) is superb technically, (b) knows what the heck it means to build a product, and (c) knows what it means to provide customer service. He was the driving force behind the original ObjectShare, which earned a solid reputation, which is why PPD changed its name to it. The odds are probably stacked against him, but he strikes me as someone who might salvage something out of the PPD rubble. Second, it may happen that Smalltalk goes the way of APL and Prolog (and Lisp?). But it really differs from them paradigmatically, and has gone farther than any of them in terms of client-server capabilities. This guarantees absolutely nothing; it just says its fate need not logically follow the fate of the others. Third, products. Where Smalltalk's edge manifests itself substantively isn't in mass market Windows products -- its footprint is too big. Rather it's in specialized, industry-specific, unusually complex problems. Problems where you might not ever be able to finish a shippable release if you didn't have a programming environment that could keep up with your cognitive discoveries about the problem domain. You know how it goes: Understanding what the problem really is, and all its facets and ramifications, is the big part of the battle, because the people who want the software to be built don't have it worked out either. Businesses are fuzzy entities and their critical problems are fuzzier than they themselves imagine. On the other hand, even though Smalltalk seems like a rational and preferable approach to these problems, in the end it goes awry just like everything else. The bad news for Smalltalk has been the number of spectacular failed projects, run and staffed by enthusiasts who were inexperienced or lacked common sense.
The good news these days is that a lot of that chaff has been shaken out. I think there aren't nearly as many Smalltalk projects starting up as there were 2 or 3 years ago, but they are being staffed by generally higher quality people. Net: I think the prognosis isn't great, but it's not clearly terminal either.
Reply: Well, how's this for one. People worry about two Smalltalk negatives: (1) their PC has to have enough speed and storage to execute Smalltalk; and (2) Smalltalk is "slower" than hand-optimized C code. Client-server apps then fit like a glove. (1) They aren't intended for mass market home computers, but for critical apps, where a moderately powerful workstation is cost-justifiable. (2) They have built-in network latencies -- as a coarse rule of thumb, say 0.5 second -- which is enough to swamp any response-time degradation in a good Smalltalk design. Meanwhile, you get Smalltalk's benefits: close association of problem and programming, rapid development environment, tractable design. I guess another way to say this is that everyone would like the benefits, and in client-server computing they don't have a good excuse not to have them.
Reply: Maybe. This was a huge drawback in Smalltalk-80, but nowadays in, say, IBM Smalltalk, the building blocks are actually the underlying platform controls / widgets. So you get native look and feel.
Reply: Here's a sort-of answer. Some platform-specific controls are there (remember I'm only talking about the IBM Smalltalk products). Like trees. They are in a different part of the class hierarchy, prefixed by Ew (for ExtendedWidget) as opposed to Cw (Motif CommonWidget). The bad news: They emulate instead of directly calling a Windows control. (They are built using Cw's.) The consoling news: they port automatically among at least OS/2, NT, and 95.
Reply: A roundabout, thinking-out-loud answer: The state-of-the-art computer linguistic approach to natural language processing seems to be something called HPSG (Head-Driven Phrase Structure Grammar), not to be confused with GPSG, or Government and Binding, or the Minimalist Program, or TGs (Transformational Grammars), etc. I don't know squat about it, except that it's extremely lexicon-driven. That is, grammar rules virtually disappear and the entries in the lexicon are very smart, knowing how they recombine with other words or kinds of words. What does that sound like? Objects of course. So "Chomsky" would, as far as HPSG is concerned, be a lot like Smalltalk. Or maybe SELF. Maybe CLOS multimethods would be nice, but possibly even they are overkill. "Chomsky" wouldn't be like Prolog.
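To make the "smart lexical entries" analogy concrete, here is a toy sketch in Python (the class, the categories, and the combination rule are all invented for illustration; real HPSG feature structures are far richer). Each entry is an object that itself knows which complements it selects, so combination looks like sending a message rather than applying an external grammar rule:

```python
class LexicalEntry:
    """Toy HPSG-flavored entry: the word itself knows what it combines with."""
    def __init__(self, form, category, expects=()):
        self.form = form
        self.category = category
        self.expects = list(expects)   # categories of complements it selects

    def combine(self, other):
        # The word consumes a complement it expects -- no separate rule base.
        if self.expects and other.category == self.expects[0]:
            return LexicalEntry(f"{self.form} {other.form}",
                                self.category, self.expects[1:])
        raise ValueError(f"{self.form!r} does not select {other.category!r}")

# A transitive verb selects one NP; the noun phrase selects nothing.
devours = LexicalEntry("devours", "V", expects=["NP"])
pizza = LexicalEntry("pizza", "NP")
vp = devours.combine(pizza)
print(vp.form, vp.expects)  # prints: devours pizza []
```

The grammar has "virtually disappeared" into the entries, which is the resemblance to objects (or to valences, for that matter).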
But if I look at a wider set of problems, I see perhaps other needs. In speech processing one needs the FFT (Fast Fourier Transform) to make spectrograms and linear regression to calculate the fundamental frequency. And to translate between languages one could use either statistical techniques or knowledge representation techniques. Question: Is Smalltalk the right language for mathematical algorithms, statistics, and knowledge representation to boot? The answer is not obviously yes to me. At least the objects don't leap out and grab me the way they do in so many problem domains. My brain thinks of algorithms functionally rather than object-ly. It's all doable in objects of course, without jumping through any hoops. (Class SoundSignal has methods produceSpectrogramWithFft or extractFundamentalFrequency and so forth. One would probably soon reify produceSpectrogramWithFft into class FFT and so forth.) But this doesn't somehow have the satisfying feel of class BusinessObject or LetterOfCredit. And also in these mathematical domains, an orthodox arithmetic operator precedence might be more comfortable than Smalltalk's.
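As a minimal sketch of the reification the parenthetical imagines -- in Python rather than Smalltalk, with hypothetical class and method names, and a naive DFT standing in for a real FFT -- the signal delegates its spectral work to an algorithm object:

```python
import cmath

class SoundSignal:
    """Toy sketch: a signal object that delegates spectral work to an FFT object."""
    def __init__(self, samples):
        self.samples = list(samples)

    def spectrum(self):
        # The algorithm has been reified into its own class, as the text suggests.
        return FFT().transform(self.samples)

class FFT:
    def transform(self, samples):
        # Naive discrete Fourier transform -- O(n^2), just to keep the sketch short.
        n = len(samples)
        return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
                for k in range(n)]

sig = SoundSignal([1, 0, -1, 0])   # one coarse cycle of a cosine
mags = [abs(c) for c in sig.spectrum()]
print(mags)  # energy concentrates in bins 1 and 3
```

It works, but as the text says, the decomposition feels imposed rather than discovered -- nothing here leaps out the way a LetterOfCredit does.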
Reply: It's a semi-long story, but I had been hearing so many Latin examples that were meaningless to me that I thought maybe I needed to learn some Latin. Then I talked to someone who suggested that Ancient Greek or Chinese would be more useful. I thought a little about some of the things I'm trying deep down to understand--language universals for example--and I figured I'm better off getting away from the Indo-European family. On top of that, the little Chinese I once knew has been seriously blocked by German, to the point where it has been frustrating trying to talk to people like my aunt, with whom I have no other way to communicate. Hence Chinese. It'll set back my German, but one can't have one's cake and eat it too. Some day I'd like to learn a little of one of Turkish, Japanese, or Hungarian, all of which are agglutinating I think, and each of which represents a different non-IE family.
Reply: Re: those so-called critical years of language acquisition, ages 2-4 or whatever. We are amazed at what the child can say when he's 4 years old. But 100% of his linguistic life is spent on one language. He knows no other. Hypothesize an adult learner, age 20, who is thrown into a pure L2 (second language) environment for 2 years. 2/22 or 9% of his linguistic life is spent on L2. 91% of his linguistic life has been spent on L1. Is it any wonder that he appears and feels so linguistically stunted in L2?

Back to the child. Why aren't we amazed at what he can't say? He can't do relative clauses and lots of other complex constructions. He can't understand the NBC nightly news. (Even 10 year-old kids don't understand the TV news.)

I guess I'm in one of my phases when UG rankles me. The theory is just too airy. Some things don't stand up to scrutiny. To have phrases, UG assumes that a language has parts of speech. But the parts of speech in different languages are all over the floor. Westerners expect verbs, and the verbs have valences (some verbs expect subjects, some subjects and one direct object, etc.). Valences are what Chomsky-ites need to talk about the roles of nominal actors. (I think this part of the orthodox theory is called Theta-roles.) But a couple of weeks ago I just read up a little on Tagalog. The verb-like parts of speech there have no valences. There are no obligatory nominals. There go Theta-roles, right out the window.

UG also leaves some pretty crucial stuff unexplained: 50% of UG is the mental lexicon. But UG just asserts that we have such a thing with, say, 60,000 entries, each one containing a lot of morpho-syntactic, let alone semantic, information. It doesn't explain how we acquire or build it. (It isn't so foolhardy that it asserts that we get it during those critical years up to age 4.) I just have this feeling that if you poke at the individual ideas in UG, you can break them one-by-one, and end up with nothing left.