Information Processing

Just another weblog

Archive for September 2007

Blade Runner returns

It’s the 25th anniversary of Blade Runner! Interestingly, Blade Runner was a money loser; the summer of 1982 was dominated by the Spielberg blockbuster E.T.

The director’s cut came out 15 years ago. This new release is a lovingly crafted digitized version with improved special effects.

Wired Q&A with director Ridley Scott. Full transcript (long, with audio). Apparently Scott never finished the Philip K. Dick novel Do Androids Dream of Electric Sheep? on which the screenplay was based.

Deckard is a replicant (Wired interview):

Scott: The whole point of Gaff was — the guy who makes origami and leaves little matchstick figures around, right? The whole point of Gaff, the whole point in that direction at the very end, if Gaff is an operator for the department, then Gaff is also probably an exterminator. Gaff, at the end, doesn’t like Deckard, and we don’t really know why. And if you take for granted for a moment that, let’s say, Deckard is Nexus 7, he probably has an unknown life span and therefore is starting to get awfully human. Gaff, just at the very end, leaves a piece of origami, which is a piece of silver paper you might find in a cigarette packet. And it’s of a unicorn, right? So, the unicorn that’s used in Deckard’s daydream tells me that Deckard wouldn’t normally talk about such a thing to anyone. If Gaff knew about that, it’s Gaff’s message to say, “I’ve basically read your file, mate.” Right? So, that file relates to Deckard’s first speech to Rachael when he says, “That isn’t your imagination, that’s Tyrell’s niece’s daydreams.” And he describes a little spider on a bush outside the kitchen door. Do you remember that?

Wired: I don’t remember the — oh, the spider. Yeah.

Scott: Well, the spider is an implanted piece of imagination. And therefore Deckard has imagination and even history implanted in his head. He even has memories of his mother and father in his head, maybe a brother or sister in his head. So if you want to make a Nexus that really believes they’re human, then you’re going to have to think about their past, and you’re going to have to put that in their mind.

Wired: Why didn’t the unicorn dream sequence appear in either the work print or the original release?

Scott: As I said, there was too much discussion in the room. I wanted it. They didn’t want it. I said, “Well, it’s a fundamental part of the story.” And they said, “Well, isn’t it obvious that he’s a replicant here?” And I said, “No. No more obvious than he’s not a replicant at the end. So, it’s a matter of choice, isn’t it?”

Wired: As a fan reading people’s comments about this, I’ve come across statements of Harrison Ford saying that he was not a replicant.

Scott: I know.

Wired: And watching the director’s cut, it seemed to me when Ford picks up the origami unicorn at the end of the movie —

Scott: And he nods.

Wired: The look on his face says, “Oh, so Gaff was here, and he let Rachael live.” It doesn’t say, “Oh my God! Am I a replicant?”

Scott: No? Yeah, but then you — OK. I don’t know. Why is he nodding when he looks at this silver unicorn? It’s actually echoing in his head when he has that drunken daydream at the piano; he’s staring at the pictures that Roy Batty had in his drawer. And he can’t fathom why Roy Batty’s got all these pictures about. Why? Family, background, that’s history. Roy Batty’s got no history, so he’s fascinated by the past. And he has no future. All those things are in there to tap into if you want it. But Deckard, I’m not going to have a balloon go up. Deckard’s look on his face, look at it again now that I’ve told you what it was about. Deckard, again, it’s like he had a suspicion that doing the job he does, reading the files he reads on other replicants, because — remember — he’s, as they call them, a blade runner. He’s a replicant moderator or even exterminator. And if he’s done so many now — and who are the biggest hypochondriacs? Doctors. So, if he’s a killer of replicants, he may have wondered at one point, can they fiddle with me? Am I human, or am I a replicant? That’s in his innermost thoughts. I’m just giving the fully fleshed-out possibility to justify that gleaming look at the end where he kind of glints and kind of looks angry, but it’s like, to me, an affirmation. That look confirms something. And he nods, he agrees. “Ah hah, Gaff was here.” And he goes for the elevator door. And he is a replicant getting into an elevator with another replicant.

Wired: And why does Harrison Ford think otherwise?

Scott: You mean that he may not be or that he is?

Wired: Well, he is on record saying that, as far as he’s concerned, Deckard is not a replicant.

Scott: Yeah, but that was, like, probably 20 years ago.

Wired: OK, but —

Scott: He’s given up now. He’s said, “OK, mate. You win, you win. Anything, anything, just put it to rest.”

Written by infoproc

September 30, 2007 at 2:36 pm

Live in the UK…

Sorry for the lack of posts — it’s the first week of the fall term here and I’ve been too busy.

For those of you who are masochistic enough to want to view my seminar Curved space, monsters and black hole entropy at the Newton Institute, follow this link.

Written by infoproc

September 26, 2007 at 6:01 pm

Posted in physics

Paul Graham against philosophy and literary theory

A good friend of mine did a PhD in philosophy at Stanford, specializing in language and mind and all things Wittgenstein. After several years as a professor at two leading universities, he left the field to earn a second PhD in neuroscience, working in a wet lab. I have a feeling he might agree with much that Paul Graham writes in the essay quoted below. In numerous conversations over the years about his dissertation research, I could never quite see the point…

Outside of math there’s a limit to how far you can push words; in fact, it would not be a bad definition of math to call it the study of terms that have precise meanings. Everyday words are inherently imprecise. They work well enough in everyday life that you don’t notice. Words seem to work, just as Newtonian physics seems to. But you can always make them break if you push them far enough.

I would say that this has been, unfortunately for philosophy, the central fact of philosophy. Most philosophical debates are not merely afflicted by but driven by confusions over words. Do we have free will? Depends what you mean by “free.” Do abstract ideas exist? Depends what you mean by “exist.”

Wittgenstein is popularly credited with the idea that most philosophical controversies are due to confusions over language. I’m not sure how much credit to give him. I suspect a lot of people realized this, but reacted simply by not studying philosophy, rather than becoming philosophy professors.

…Curiously, however, the works they produced continued to attract new readers. Traditional philosophy occupies a kind of singularity in this respect. If you write in an unclear way about big ideas, you produce something that seems tantalizingly attractive to inexperienced but intellectually ambitious students. Till one knows better, it’s hard to distinguish something that’s hard to understand because the writer was unclear in his own mind from something like a mathematical proof that’s hard to understand because the ideas it represents are hard to understand. To someone who hasn’t learned the difference, traditional philosophy seems extremely attractive: as hard (and therefore impressive) as math, yet broader in scope. That was what lured me in as a high school student.

This singularity is even more singular in having its own defense built in. When things are hard to understand, people who suspect they’re nonsense generally keep quiet. There’s no way to prove a text is meaningless. The closest you can get is to show that the official judges of some class of texts can’t distinguish them from placebos. [10]

And so instead of denouncing philosophy, most people who suspected it was a waste of time just studied other things. That alone is fairly damning evidence, considering philosophy’s claims. It’s supposed to be about the ultimate truths. Surely all smart people would be interested in it, if it delivered on that promise.

Because philosophy’s flaws turned away the sort of people who might have corrected them, they tended to be self-perpetuating. Bertrand Russell wrote in a letter in 1912:

Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject. [11]

His response was to launch Wittgenstein at it, with dramatic results.

I think Wittgenstein deserves to be famous not for the discovery that most previous philosophy was a waste of time, which judging from the circumstantial evidence must have been made by every smart person who studied a little philosophy and declined to pursue it further, but for how he acted in response. [12] Instead of quietly switching to another field, he made a fuss, from inside. He was Gorbachev.

The field of philosophy is still shaken from the fright Wittgenstein gave it. [13] Later in life he spent a lot of time talking about how words worked. Since that seems to be allowed, that’s what a lot of philosophers do now. Meanwhile, sensing a vacuum in the metaphysical speculation department, the people who used to do literary criticism have been edging Kantward, under new names like “literary theory,” “critical theory,” and when they’re feeling ambitious, plain “theory.” The writing is the familiar word salad:

Gender is not like some of the other grammatical modes which express precisely a mode of conception without any reality that corresponds to the conceptual mode, and consequently do not express precisely something in reality by which the intellect could be moved to conceive a thing the way it does, even where that motive is not something in the thing as such. [14]

The singularity I’ve described is not going away. There’s a market for writing that sounds impressive and can’t be disproven. There will always be both supply and demand. So if one group abandons this territory, there will always be others ready to occupy it.

[10] Sokal, Alan, “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity,” Social Text 46/47, pp. 217-252.

Abstract-sounding nonsense seems to be most attractive when it’s aligned with some axe the audience already has to grind. If this is so we should find it’s most popular with groups that are (or feel) weak. The powerful don’t need its reassurance.

[11] Letter to Ottoline Morrell, December 1912. Quoted in:

Monk, Ray, Ludwig Wittgenstein: The Duty of Genius, Penguin, 1991, p. 75.

[12] A preliminary result, that all metaphysics between Aristotle and 1783 had been a waste of time, is due to I. Kant.

[13] Wittgenstein asserted a sort of mastery to which the inhabitants of early 20th century Cambridge seem to have been peculiarly vulnerable—perhaps partly because so many had been raised religious and then stopped believing, so had a vacant space in their heads for someone to tell them what to do (others chose Marx or Cardinal Newman), and partly because a quiet, earnest place like Cambridge in that era had no natural immunity to messianic figures, just as European politics then had no natural immunity to dictators.

[14] This is actually from the Ordinatio of Duns Scotus (ca. 1300), with “number” replaced by “gender.” Plus ça change.

Wolter, Allan (trans), Duns Scotus: Philosophical Writings, Nelson, 1963, p. 92.

Written by infoproc

September 22, 2007 at 1:23 pm

Posted in sad but true

The world is our laboratory

Here is a nice profile of Myron Scholes that originally appeared in the journal Quantitative Finance. It was written by, of all people, statistical physicist Cosma Shalizi.

Below is a compact summary of the Black-Scholes result for option pricing, emphasizing the importance of perfect hedging. With perfect hedging you can price the option as long as you know the future probability distribution for the underlying — it doesn’t have to be log-normal or have fixed variance.

The solution, in hindsight, is wonderfully simple. The proper price to put on an option should equal the expected value of exercising the option. If you have the option, right now, to sell one share of a stock for $10, and the current price is $8, the option is worth exactly $2, and the option price tracks the current stock price one-for-one. If you knew for certain that the share price would be $8 a year from now, the present value of the option would be $2, discounted by the cost of holding money, risklessly, for a year — say $1. Every $2 change in the stock price a year hence changes the option price now by $1. If you knew the probability of different share prices in the future, you could calculate the expected present value of the option, assuming you were indifferent to risk, which few of us are. Here is the crucial trick: a portfolio of one share and two such options is actually risk-free, and so, assuming no arbitrage, must earn the same return as any other riskless asset. Since we’re assuming you already know the probability distribution of the future stock price, you know its risk and returns, and so have everything you need to know to calculate the present value of the option! Of course, the longer the time horizon, the more we discount the future value of exercising the option, and so the more options we need to balance the risk out of our portfolio. This fact suffices to give the Black-Scholes formula, provided one is willing to assume that stock price changes will follow a random walk with some fixed variance, an assumption which “did not seem onerous” to him at the time, but which he would now be more inclined to qualify.
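For concreteness, here is a minimal sketch of the closed-form price that falls out of this hedging argument for a European call, under the log-normal, fixed-variance assumption mentioned above (variable names are my own, and this uses only the standard library):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: current stock price, K: strike price, T: years to expiry,
    r: riskless interest rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call, one year out, 5% rates, 20% volatility:
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
```

Note that the hedge ratio of the riskless portfolio is itself N(d1), so the "how many options per share" count in the argument above changes continuously as the stock moves — the hedging must be dynamic.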

Scholes on models, mathematics and computers:

Starting from an economic issue and looking for parsimonious models, rather than building mathematical tools and looking for a problem to solve with them, has been a hallmark of Scholes’s career. “The world is our laboratory”, he says, and the key thing is that it confirm a model’s predictive power. There is a delicate trade-off between realism and simplicity; “tact” is needed to know what is a first-order effect and what is a second-order correction, though that is ultimately an empirical point.

The evaluation of such empirical points has itself become a delicate issue, he says, especially since the rise of computerized data-mining. While he by no means objects to computer-intensive data analysis — he has been hooked on programming since encountering it in his first year of graduate school — it raises very subtle problems of selection bias. The world may be our laboratory, but it is an “evolutionary” rather than an “experimental” lab; “we have only one run of history”, and it is all too easy to devise models which have no real predictive power. In this connection, he tells the story of a time he acted as a statistical consultant for a law firm. An expert for the other side presented a convincing-looking regression analysis to back up their claims; Scholes, however, noticed that the print-out said “run 89”, and an examination of the other 88 runs quickly undermined the credibility of the favorable regression. Computerization makes it cheap to do more runs, to create more models and evaluate them, but it “burns degrees of freedom”. The former cost and tedium of evaluating models actually imposed a useful discipline, since it encouraged the construction of careful, theoretically grounded models, and discouraged hunting for something which gave results you liked — it actually enhanced the meaning and predictive power of the models people did use!
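The “run 89” effect is easy to reproduce. Here is a toy sketch of my own (not from the profile): regress pure noise on many unrelated random predictors, and the best fit out of 89 runs looks far more impressive than a typical one, even though nothing is being predicted.

```python
import random

random.seed(0)
n = 30  # observations of pure noise: y has no real predictors
y = [random.gauss(0, 1) for _ in range(n)]

def r_squared(x, y):
    """R^2 of a one-variable linear regression of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# 89 "runs": each tries a fresh, unrelated random predictor.
runs = [r_squared([random.gauss(0, 1) for _ in range(n)], y)
        for _ in range(89)]
best, typical = max(runs), sorted(runs)[len(runs) // 2]
# Reporting only `best` overstates predictive power: every run
# "burned" a degree of freedom, even though only one is shown.
```

The discipline Scholes describes amounts to accounting for all 89 runs, not just the one on the final print-out.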

Written by infoproc

September 21, 2007 at 8:40 pm

Information theory, inference and learning algorithms

I’d like to recommend the book Information theory, inference and learning algorithms by David MacKay, a Cambridge professor of physics. I wish I’d had a course on this material from MacKay when I was a student! Especially nice are the introductory example on Bayesian inference (Ch. 3) and the discussion of Occam’s razor from a Bayesian perspective (Ch. 28). I’m sure I’ll find other gems in this book, but I’m still working my way through.
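As a taste of the Bayesian style the book works in, here is a toy example of my own (not taken from the book): grid-based posterior inference for a coin’s unknown bias after observing some flips, starting from a flat prior.

```python
def coin_posterior(heads, tails, grid_size=101):
    """Posterior over a coin's bias p on a uniform grid,
    starting from a flat prior."""
    ps = [i / (grid_size - 1) for i in range(grid_size)]
    # Unnormalized posterior: flat prior times binomial likelihood.
    w = [p ** heads * (1 - p) ** tails for p in ps]
    z = sum(w)
    return ps, [x / z for x in w]

ps, post = coin_posterior(heads=7, tails=3)
p_map = ps[post.index(max(post))]          # posterior mode: 7/10
p_mean = sum(p * w for p, w in zip(ps, post))  # mean: (7+1)/(10+2)
```

With a flat prior the posterior mode coincides with the maximum-likelihood estimate, while the posterior mean is pulled slightly toward 1/2 — a small example of how the prior tempers the data.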

I learned about the book through Nerdwisdom, which I also recommend highly. Nerdwisdom is the blog of Jonathan Yedidia, a brilliant polymath (theoretical physicist turned professional chess player turned computer scientist) with whom I consumed a lot of French wine at dinners in the Harvard Society of Fellows.

Written by infoproc

September 20, 2007 at 3:06 pm

Our brilliant leaders

In case you haven’t followed, Greenspan has made a number of controversial comments in his new book and in related press interviews. I agree with his criticisms of the Bush administration for fiscal profligacy, but like everyone else I find his argument for the necessity of the Iraq war to be nutty. I like the following comments from ParaPundit, found via Steve Sailer.

ParaPundit: …I try to be polite about individuals. But the invasion of Iraq was a huge mistake and any prominent figure who makes lame arguments about the invasion must not go unchallenged. Saddam was moving towards control of the Strait of Hormuz? I’d be embarrassed to say something so obviously wrong. One doesn’t need to do fancy calculations or read tons of history books or follow complex theories to know that Saddam was not moving toward control of the Strait of Hormuz. That’s nuts. But where is this coming from? If Greenspan had this view 20 years ago then one can’t blame it on senility. So what is going on? Can someone explain this? Is Greenspan overrated in general? Or is he only good at some narrow specialty and foolish about much else?

Greenspan is another example of a general problem we face: We are poorly led. We give our elites – especially our political elites – far too much respect and deference. These people are nowhere near as competent as they make themselves out to be. The really talented people in America are in investment banks and Silicon Valley start-ups. [OK, this is an exaggeration, and I hope he means I-banks broadly defined.] They aren’t in Washington DC in high government positions. Though I bet there are some smart people on K Street manipulating the yahoos in government.

We mostly are better off if the sharpest people are in venture capital-funded start-ups and investment banks. The private sector generates the wealth. But we need some small handful of sharpies in key positions of power who can recognize when nonsense is being spoken and say no to stupid policies.

Written by infoproc

September 19, 2007 at 2:26 pm

Crisis in American Science

The Chronicle of Higher Education has a long article about the bleak job prospects facing academic scientists these days. I’m interviewed in the piece, which ran with the picture on the right. The Chronicle must have a big budget because they sent a photographer to my office for several hours to get the shot! The editor wanted a geeky guy reading the Wall Street Journal, and I guess I’m your man 🙂

The article covers a lot of ground, but one thing that I think could have been emphasized more is that, no matter how dismal the career path becomes for US scientists, there will still be foreigners from India, China and eastern Europe willing to try their luck, as well as a sprinkling of American-born obsessives (like me) who should know better. However, a significant number of talented Americans will simply choose to do something else.

The graphic below from the article shows physics job prospects since 1979. I notice my postdoc career coincided with the global minimum — the worst period in 30 years 😦

The Real Science Crisis: Bleak Prospects for Young Researchers

Tight budgets, scarce jobs, and stalled reforms push students away from scientific careers


It is the best of times and worst of times to start a science career in the United States.

Researchers today have access to powerful new tools and techniques — such as rapid gene sequencers and giant telescopes — that have accelerated the pace of discovery beyond the imagination of previous generations.

But for many of today’s graduate students, the future could not look much bleaker.

They see long periods of training, a shortage of academic jobs, and intense competition for research grants looming ahead of them. “They get a sense that this is a really frustrating career path,” says Thomas R. Insel, director of the National Institute of Mental Health.

So although the operating assumption among many academic leaders is that the nation needs more scientists, some of the brightest students in the country are demoralized and bypassing scientific careers.

The problem stems from the way the United States nurtures its developing brainpower — the way it trains, employs, and provides grants for young scientists. For decades, blue-ribbon panels have called for universities to revise graduate doctoral programs, which produced a record-high 27,974 Ph.D.’s in science and engineering in 2005. No less a body than the National Academy of Sciences has, in several reports, urged doctoral programs to train science students more broadly for jobs inside and outside academe, to shorten Ph.D. programs, and even to limit the number of degrees they grant in some fields.

Despite such repeated calls for reform, resistance to change has been strong. Major problems persist, and some are worsening. Recent data, for example, reveal that:

Averaged across the sciences, it takes graduate students a half-year longer now to complete their doctorates than it did in 1987.

In physics, nearly 70 percent of newly minted Ph.D.’s go into temporary postdoctoral positions, whereas only 43 percent did so in 2000.

The number of tenured and tenure-track scientists in biomedicine has not increased in the past two decades even as the number of doctorates granted has nearly doubled.

Despite a doubling in the budget of the National Institutes of Health since 1998, the chances that a young scientist might win a major research grant actually dropped over the same period.

…Stephen D.H. Hsu is just the type of scientist America hopes to produce. A professor of physics at the University of Oregon, Mr. Hsu is at the forefront of scholarship on dark energy and quantum chromodynamics. At the same time, he has founded two successful software companies — one of which was bought for $26-million by Symantec — that provide the sorts of jobs and products that the nation’s economy needs to thrive.

Despite his successes, Mr. Hsu sees trouble ahead for prospective scientists. He has trained four graduate students so far, and none of them have ended up securing their desired jobs in theoretical physics. After fruitless attempts to find academic posts, they took positions in finance and in the software industry, where Mr. Hsu has connections. “They often ask themselves,” he says, “Why did I wait so long to leave? Why did I do that second or third postdoc?” By and large, he says, the students are doing pretty well but are behind their peers in terms of establishing careers and families.

The job crunch makes science less appealing for bright Americans, and physics departments often find their applications for graduate slots dominated by foreign students who are in many cases more talented than the homegrown ones. “In the long run, I think it’s bad for the nation,” he says. “It will become a peripheral thought in the minds of Americans, that science is a career path.”

Melinda Maris also sees hints of that dark future at the Johns Hopkins University. Ms. Maris, assistant director of the office of preprofessional programs and advising, says the brightest undergrads often work in labs where they can spot the warning signs: Professors can’t get grants, and postdocs can’t get tenure-track jobs.

Such undergraduates, she says, “are really weighing their professional options and realize that they’re not going to be in a strong financial position until really their mid-30s.” In particular, those dim prospects drive away Americans with fewer financial resources, including many minority students.

…Almost every project aimed at improving graduate education suggests that departments should expose students to the breadth of jobs beyond academe, but faculty members still resist. When Mr. Hsu, the University of Oregon physicist, brings his former students back to talk about their jobs in finance or the software industry, it rankles some other professors.

Doctoral students pick up on that bias. “It was kind of a taboo topic,” says Ms. Maris, the career adviser at Johns Hopkins, who recently earned a Ph.D. in genetics at Emory University and did one year of a postdoc at Hopkins before she decided to leave research.

Bruce Alberts, a former president of the National Academy of Sciences, says universities and the nation must take better care of young scientists. Now a professor of biochemistry at the University of California at San Francisco, Mr. Alberts says the current system of demoralized and underemployed Ph.D.’s cannot be sustained. “We need to wake up to what the true situation is.”

Students may be quietly starting to lead the way — to recognize that they need to look beyond traditional ways of using their Ph.D.’s. When Mr. Alberts’s colleagues polled second-year doctoral students last year, a full quarter of them expressed interest in jobs such as patent law, journalism, and government — jobs that their professors would not consider “science.”

Of course, students might not be willing to share those desires yet with their mentors. The poll was anonymous.

Written by infoproc

September 17, 2007 at 8:52 pm