
Andrew Jaffe: Leaves on the Line

Andrew Jaffe's blog: cosmology, Bayesianism, politics, music, ...



JSONfeed

Posted on 2017-05-19 14:46, modified on 2017-05-23 23:27

More technical stuff, but I’m trying to re-train myself to actually write on this blog, so here goes…

For no good reason other than it was easy, I have added a JSONfeed to this blog. It can be found at http://andrewjaffe.net/blog/feed.json, and accessed from the bottom of the right-hand sidebar if you’re actually reading this at andrewjaffe.net.

What does this mean? JSONfeed is an idea for a sort-of successor to something called RSS, which may stand for really simple syndication, a format for encapsulating the contents of a blog like this one so it can be indexed, consumed, and read in a variety of ways without explicitly going to my web page. RSS was created by developer, writer, and all around web-and-software guru Dave Winer, who also arguably invented — and was certainly part of the creation of — blogs and podcasting. Five or ten years ago, so-called RSS readers were starting to become a common way to consume news online. NetNewsWire was my old favourite on the Mac, although its original versions by Brent Simmons were much better than the current incarnation by a different software company; I now use something called Reeder. But the most famous one was Google Reader, which Google discontinued in 2013, thereby killing off most of the RSS-reader ecosystem.

But RSS is not dead: RSS readers still exist, and it is still used to store and transfer information between web pages. Perhaps most importantly, it is the format behind subscriptions to podcasts, whether you get them through Apple or Android or almost anyone else.

But RSS is kind of clunky, because it’s built on something called XML, an ugly but readable format for structuring information in files (HTML, used for the web, with all of its < and > “tags”, is a close cousin). Nowadays, people use a simpler family of formats called JSON for many of the same purposes as XML, but it is quite a bit easier for humans to read and write, and (not coincidentally) quite a bit easier to create computer programs to read and write.

So, finally, two more web-and-software developers/gurus, Brent Simmons and Manton Reece, realised they could use JSON for the same purposes as RSS. Simmons is behind NetNewsWire, and Reece’s most recent project is an “indie microblogging” platform (think Twitter without the giant company behind it), so they both have an interest in these things. And because JSON is so comparatively easy to use, there is already code that I could easily add to this blog so it would have its own JSONfeed. So I did it.
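
Consuming a JSONfeed is correspondingly easy. Here’s a minimal sketch in Python (it assumes the feed URL above is still being served; the field names come from the JSON Feed spec, in which only “version”, “title”, and each item’s “id” are required):

    import json
    from urllib.request import urlopen

    # Fetch this blog's JSON feed and print one line per entry.
    with urlopen("http://andrewjaffe.net/blog/feed.json") as response:
        feed = json.load(response)

    print(feed["title"])
    for item in feed.get("items", []):
        # "id" is the only required item field; "title", "url", and
        # "date_published" are all optional in the spec.
        print(item.get("date_published"), item.get("title"), item.get("url"))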

So it’s easy to create a JSONfeed. What there aren’t — so far — are newsreaders like NetNewsWire or Reeder that can ingest them. (In fact, Maxime Vaillancourt apparently wrote a web-based reader in about an hour, but it may already be overloaded…) Still, I’m looking forward to seeing what happens.


Python Bug Hunting

Posted on 2017-05-19 14:44, modified on 2017-07-20 08:45

This is a technical, nerdy post, mostly so I can find the information if I need it later, but possibly of interest to others using a Mac with the Python programming language, and also since I am looking for excuses to write more here. (See also updates below.)

It seems that there is a bug in the latest (mid-May 2017) release of Apple’s macOS Sierra 10.12.5 (ok, there are plenty of bugs, as there are in any sufficiently complex piece of software).

It first manifested itself (to me) as an error when I tried to load the jupyter notebook, a web-based graphical front end to Python (and other languages). When the command is run, it opens up a browser window. However, after updating macOS from 10.12.4 to 10.12.5, the browser didn’t open. Instead, I saw an error message:

    0:97: execution error: "http://localhost:8888/tree?token=<removed>" doesn't understand the "open location" message. (-1708)

A little googling found that other people had seen this error, too. I was able to figure out a workaround pretty quickly: this behaviour only happens when you want to use the “default” browser, which is set in the “General” tab of the “System Preferences” app on the Mac (I have it set to Apple’s own “Safari” browser, but you can use Firefox or Chrome or something else). Instead, there’s a text file you can edit to explicitly set the browser that you want jupyter to use, located at ~/.jupyter/jupyter_notebook_config.py, by including the line

c.NotebookApp.browser = u'Safari'

(although an unrelated bug in Python means that you can’t currently use “Chrome” in this slot).

But it turns out this isn’t the real problem. I went and looked at the code in jupyter that is run here, and it uses a Python module called webbrowser. Even outside of jupyter, trying to use this module to open the default browser fails, with exactly the same error message (though I’m picking a simpler URL at http://python.org instead of the jupyter-related one above):

>>> import webbrowser
>>> br = webbrowser.get()
>>> br.open("http://python.org")
0:33: execution error: "http://python.org" doesn't understand the "open location" message. (-1708)
False

So I reported this as an error in the Python bug-reporting system, and hoped that someone with more experience would look at it.

But it nagged at me, so I went and looked at the source code for the webbrowser module. There, it turns out that the programmers use a macOS command called “osascript” (which is a command-line interface to Apple’s macOS automation language “AppleScript”) to launch the browser, with a slightly different syntax for the default browser compared to explicitly picking one. Basically, the command is osascript -e 'open location "http://www.python.org/"'. And this fails with exactly the same error message. (The similar code osascript -e 'tell application "Safari" to open location "http://www.python.org/"' which picks a specific browser runs just fine, which is why explicitly setting “Safari” back in the jupyter file works.)
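
If you want to try that comparison yourself, the two invocations can be reproduced from Python as well. This is just a sketch, assuming a Mac running the affected macOS 10.12.5:

    import subprocess

    # The failing form: ask AppleScript to open a URL in the default browser.
    # This is essentially what the webbrowser module does under the hood.
    subprocess.run(["osascript", "-e",
                    'open location "http://www.python.org/"'])
    # -> fails with the same "(-1708)" execution error shown above

    # The working form: name a specific browser explicitly.
    subprocess.run(["osascript", "-e",
                    'tell application "Safari" to open location "http://www.python.org/"'])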

But there is another way to run the exact same AppleScript command. Open the Mac app called “Script Editor”, type open location "http://python.org" into the window, and press the “run” button. From the experience with “osascript”, I expected it to fail, but it didn’t: it runs just fine.

So the bug is very specific, and very obscure: it depends on exactly how the offending command is run, so it appears to be a proper bug, and not some sort of security patch from Apple (and it certainly doesn’t appear in the 10.12.5 release notes). I have filed a bug report with Apple, but these are not publicly accessible, and are purported to be something of a black hole, with little feedback from the still-secretive Apple development team.

Updates:


Knightian Uncertainty

Posted on 2017-05-03 15:21, modified on 2017-05-18 11:33

[Update: I have fixed some broken links, and modified the discussion of QBism and the recent paper by Chris Fuchs — thanks to Chris himself for taking the time to read and find my mistakes!]

For some reason, I’ve come across an idea called “Knightian Uncertainty” quite a bit lately. Frank Knight was an economist of the free-market conservative “Chicago School”, who considered various concepts related to probability in a book called Risk, Uncertainty, and Profit. He distinguished between “risk”, which applies to events to which we can assign a numerical probability, and “uncertainty”, which applies to events about which we know so little that we don’t even have a probability to assign, or indeed to events whose possibility we didn’t even contemplate until they occurred. In Rumsfeldian language, “risk” applies to “known unknowns”, and “uncertainty” to “unknown unknowns”. Or, as Nassim Nicholas Taleb put it, “risk” is about “white swans”, while “uncertainty” is about those unexpected “black swans”.

(As a linguistic aside, to me, “uncertainty” seems a milder term than “risk”, and so the naming of the concepts is backwards.)

Actually, there are a couple of slightly different concepts at play here. The black swans or unknown-unknowns are events that one wouldn’t have known enough about to even include in the probabilities being assigned. This is much more severe than those events that one knows about, but for which one doesn’t have a good probability to assign.

And the important word here is “assign”. Probabilities are not something out there in nature, but in our heads. So what should a Bayesian make of these sorts of uncertainty? By definition, they can’t be used in Bayes’ theorem, which requires specifying a probability distribution. Bayesian theory is all about making models of the world: we posit a mechanism and possible outcomes, and assign probabilities to the parts of the model that we don’t know about.
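
To see why, recall what Bayes’ theorem actually demands. For a hypothesis (or model parameters) θ and data D,

    P(θ|D) = P(D|θ) P(θ) / P(D)

and both the likelihood P(D|θ) and the prior P(θ) are probability distributions that we must write down explicitly. Knightian uncertainty, of either kind, is precisely the situation in which we can’t.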

So I think the two different types of Knightian uncertainty have quite a different role here. In the case where we know that some event is possible, but we don’t really know what probabilities to assign to it, we at least have a starting point. If our model is broad enough, then enough data will allow us to measure the parameters that describe it. For example, in recent years people have started to realise that the frequencies of rare, catastrophic events (financial crashes, earthquakes, etc.) are very often well described by so-called power-law distributions. These assign much greater probabilities to such events than more typical Gaussian (bell-shaped curve) distributions; the shorthand for this is that power-law distributions have much heavier tails than Gaussians. As long as our model includes the possibility of these heavy tails, we should be able to make predictions based on data, although very often those predictions won’t be very precise.
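
Roughly speaking, the difference in the tails looks like this: for a standardised Gaussian, the probability of exceeding some large threshold x falls off like

    P(X > x) ~ exp(−x²/2)    (Gaussian)

while for a power-law distribution p(x) ∝ x^(−α) with α > 1 it falls off only as

    P(X > x) ~ (x/x_min)^(1−α)    (power law)

an enormously slower decay, which is what “heavy tails” means in practice.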

But the “black swan” problem is much worse: these are possibilities that we don’t even know enough about to consider in our model. Almost by definition, one can’t say anything at all about this sort of uncertainty. But what one must do is be open-minded enough to adjust our models in the face of new data: we can’t predict the black swan, but we should expand the model after we’ve seen the first one (and perhaps revise our model for other waterfowl to allow more varieties!). In more traditional scientific settings, involving measurements with errors, this is even more difficult: a seemingly anomalous result, not allowed in the model, may be due to some mistake in the experimental setup or in our characterisation of the probabilities of those inevitable errors (perhaps they should be described by heavy-tailed power laws, rather than Gaussian distributions as above).

I first came across the concept as an oblique reference in a recent paper by Chris Fuchs, writing about his idea of QBism (or see here for a more philosophically-oriented discussion), an interpretation of quantum mechanics that takes seriously the Bayesian principle that all probabilities are about our knowledge of the world, rather than the world itself (which is a discussion for another day). He tentatively opined that the probabilities in quantum mechanics are themselves “Knightian”, referring not to a reading of Knight himself but to some recent, and to me frankly bizarre, ideas from Scott Aaronson, discussed in his paper, The Ghost in the Quantum Turing Machine, and an accompanying blog post. Aaronson tries to base something like “free will” (a term he explicitly does not apply to this idea, however) on the possibility of our brains having so-called “freebits”: quantum states whose probabilities are essentially uncorrelated with anything else in the Universe. This arises from what is to me a mistaken desire to equate “freedom” with complete unpredictability. My take on free will is instead aligned with that of Daniel Dennett, at least the version from his Consciousness Explained from the early 1990s (I haven’t yet had the chance to read his recent From Bacteria to Bach and Back): a perfectly deterministic (or quantum mechanically random, even allowing for the statistical correlations that Aaronson wants to be rid of) version of free will is completely sensible, and indeed may be the only kind of free will worth having.

Fuchs himself tentatively uses Aaronson’s “Knightian Freedom” to refer to his own idea

that nature does what it wants, without a mechanism underneath, and without any “hidden hand” of the likes of Richard von Mises’s Kollective or Karl Popper’s propensities or David Lewis’s objective chances, or indeed any conception that would diminish the autonomy of nature’s events,

which I think is an attempt (and which I admit I don’t completely understand) to remove the probabilities of quantum mechanics entirely from any mechanistic account of physical systems, despite the incredible success of those probabilities in predicting the outcomes of experiments and other observations of quantum mechanical systems. I’m not quite sure this is what either Knight or Aaronson had in mind with their use of “uncertainty” (or “freedom”), since at least in quantum mechanics, we do know what probabilities to assign, given certain other personal (as Fuchs would have it) information about the system. My Bayesian predilections make me sympathetic with this idea, but then I struggle to understand what, exactly, quantum mechanics has taught us about the world: why do the predictions of quantum mechanics work?

When I’m not thinking about physics, for the last year or so my mind has been occupied with politics, so I was amused to see Knightian Uncertainty crop up in a New Yorker article about Trump’s effect on the stock market:

Still, in economics there’s a famous distinction, developed by the great Chicago economist Frank Knight, between risk and uncertainty. Risk is when you don’t know exactly what will happen but nonetheless have a sense of the possibilities and their relative likelihood. Uncertainty is when you’re so unsure about the future that you have no way of calculating how likely various outcomes are. Business is betting that Trump is risky but not uncertain—he may shake things up, but he isn’t going to blow them up. What they’re not taking seriously is the possibility that Trump may be willing to do things—like start a trade war with China or a real war with Iran—whose outcomes would be truly uncertain.

It’s a pretty low bar, but we can only hope.


SOLE Survivor

Posted on 2017-01-24 11:57, modified on 2017-01-25 09:17

I recently finished my last term lecturing our second-year Quantum Mechanics course, which I taught for five years. It’s a required class, a mathematical introduction to one of the most important sets of ideas in all of physics, and really the basis for much of what we do, whether that’s astrophysics or particle physics or almost anything else. It’s a slightly “old-fashioned” course, although it covers the important basic ideas: the Schrödinger Equation, the postulates of quantum mechanics, angular momentum, and spin, leading almost up to what is needed to understand the crowning achievement of early quantum theory: the structure of the hydrogen atom (and other atoms).

A more modern approach might start with qubits: the simplest systems that show quantum mechanical behaviour, and the study of which has led to the revolution in quantum information and quantum computing.

Moreover, the lectures rely on the so-called Copenhagen interpretation, which is the confusing and sometimes contradictory way that most physicists are taught to think about the basic ontology of quantum mechanics: what it says about what the world is “made of” and what happens when you make a quantum-mechanical measurement of that world. Indeed, it’s so confusing and contradictory that you really need another rule so that you don’t complain when you start to think too deeply about it: “shut up and calculate”. A more modern approach might also discuss the many-worlds approach, and — my current favourite — the (of course) Bayesian ideas of QBism.

The students seemed pleased with the course as it is — at the end of the term, they have the chance to give us some feedback through our “Student On-Line Evaluation” system, and my marks have been pretty consistent. Of the 200 or so students in the class, only about 90 bothered to give their evaluations, which is disappointingly few. But it’s enough (I hope) to get a feeling for what they thought.

[Chart: SOLE 2016 evaluation results]

So, most students Definitely/Mostly Agree with the good things, although it’s clear that our students are most disappointed in the feedback that they receive from us (this is a general issue for Physics at Imperial and for the discipline more broadly, and it may partially explain why most of them are unwilling to feed back to us through this form).

But much more fun and occasionally revealing are the “free-text comments”. Given the numerical scores, it’s not too surprising that there were plenty of positive ones:

  • Excellent lecturer - was enthusiastic and made you want to listen and learn well. Explained theory very well and clearly and showed he responded to suggestions on how to improve.

  • Possibly the best lecturer of this term.

  • Thanks for providing me with the knowledge and top level banter.

  • One of my favourite lecturers so far, Jaffe was entertaining and cleary very knowledgeable. He was always open to answering questions, no matter how simple they may be, and gave plenty of opportunity for students to ask them during lectures. I found this highly beneficial. His lecturing style incorporates well the blackboards, projectors and speach and he finds a nice balance between them. He can be a little erratic sometimes, which can cause confusion (e.g. suddenly remembering that he forgot to write something on the board while talking about something else completely and not really explaining what he wrote to correct it), but this is only a minor fix. Overall VERY HAPPY with this lecturer!

But some were more mixed:

  • One of the best, and funniest, lecturers I’ve had. However, there are some important conclusions which are non-intuitively derived from the mathematics, which would be made clearer if they were stated explicitly, e.g. by writing them on the board.

  • I felt this was the first time I really got a strong qualitative grasp of quantum mechanics, which I certainly owe to Prof Jaffe’s awesome lectures. Sadly I can’t quite say the same about my theoretical grasp; I felt the final third of the course less accessible, particularly when tackling angular momentum. At times, I struggled to contextualise the maths on the board, especially when using new techniques or notation. I mostly managed to follow Prof Jaffe’s derivations and explanations, but struggled to understand the greater meaning. This could be improved on next year. Apart from that, I really enjoyed going to the lectures and thought Prof Jaffe did a great job!

  • The course was inevitably very difficult to follow.

And several students explicitly commented on my attempts to get students to ask questions in as public a way as possible, so that everyone can benefit from the answers and — this really is true! — because there really are no embarrassing questions!

  • Really good at explaining and very engaging. Can seem a little abrasive at times. People don’t like asking questions in lectures, and not really liking people to ask questions in private afterwards, it ultimately means that no questions really get answered. Also, not answering questions by email makes sense, but no one really uses the blackboard form, so again no one really gets any questions answered. Though the rationale behind not answering email questions makes sense, it does seem a little unnecessarily difficult.

  • We are told not to ask questions privately so that everyone can learn from our doubts/misunderstandings, but I, amongst many people, don’t have the confidence to ask a question in front of 250 people during a lecture.

  • Forcing people to ask questions in lectures or publically on a message board is inappropriate. I understand it makes less work for you, but many students do not have the confidence to ask so openly, you are discouraging them from clarifying their understanding.

Inevitably, some of the comments were contradictory:

  • Would have been helpful to go through examples in lectures rather than going over the long-winded maths to derive equations/relationships that are already in the notes.

  • Professor Jaffe is very good at explaining the material. I really enjoyed his lectures. It was good that the important mathematics was covered in the lectures, with the bulk of the algebra that did not contribute to understanding being left to the handouts. This ensured we did not get bogged down in unnecessary mathematics and that there was more emphasis on the physics. I liked how Professor Jaffe would sometimes guide us through the important physics behind the mathematics. That made sure I did not get lost in the maths. A great lecture course!

And also inevitably, some students wanted to know more about the exam:

  • It is a difficult module, however well covered. The large amount of content (between lecture notes and handouts) is useful. Could you please identify what is examinable though as it is currently unclear and I would like to focus my time appropriately?

And one comment was particularly worrying (along with my seeming “a little abrasive at times”, above):

  • The lecturer was really good in lectures. however, during office hours he was a bit arrogant and did not approach the student nicely, in contrast to the behaviour of all the other professors I have spoken to

If any of the students are reading this, and are willing to comment further on this, I’d love to know more — I definitely don’t want to seem (or be!) arrogant or abrasive.

But I’m happy to see that most students don’t seem to think so, and even happier to have learned that I’ve been nominated “multiple times” for Imperial’s Student Academic Choice Awards!

Finally, best of luck to my colleague Jonathan Pritchard, who will be taking over teaching the course next year.


Electoral woes and votes

Posted on 2016-11-22 12:41, modified on 2017-01-14 20:54

Like everyone else in my bubble, I’ve been angrily obsessing about the outcome of the US Presidential election for the last two weeks. I’d like to say that I’ve been channelling that obsession into action, but so far I’ve mostly been reading and hoping (and being disappointed). And trying to parse all the “explanations” for Trump’s election.

Mostly, it’s been about what the Democrats did wrong (imperfect Hillary, ignoring the white working class, not visiting Wisconsin, too much identity politics), and what the Republicans did right (imperfect Trump, dog whistles, focusing on economics and security).

But there has been an ongoing strain of purely procedural complaint: that the system is rigged, but (ironically?) in favour of Republicans. In fact, this is manifestly true: liberals (Democrats) are more concentrated — mostly in cities — than conservatives (Republicans) who are spread more evenly and dominate in rural areas. And the asymmetry is more true for the sticky ideologies than the fungible party affiliations, especially when “liberal” encompasses a whole raft of social issues rather than just left-wing economics. This has been exacerbated by a few decades of gerrymandering. So the House of Representatives, in particular, tilts Republican most of the time. And the Senate, with its non-proportional representation of two per state, regardless of size, favours those spread-out Republicans, too (although party dominance of the Senate is less of a stranglehold for the Republicans than that of the House).

But one further complaint that I’ve heard several times is that the Electoral College is rigged, above and beyond those reasons for Republican dominance of the House and Senate: as we know, Clinton has won the popular vote, by more than 1.5 million as of this writing — in fact, my own California absentee ballot has yet to be counted. The usual argument goes like this: the number of electoral votes allocated to a state is the sum of its number of representatives in the House (proportional to the population) and its number of senators (two), giving a total of five hundred and thirty-eight. For the most populous states, the addition of two electoral votes doesn’t make much of a difference. New Jersey, for example, has 12 representatives, and 14 electoral votes, about a 15% difference; for California it’s only about 4%. But the least populous states (North and South Dakota, Montana, Wyoming, Alaska) have only one congressperson each, but three electoral votes, increasing the share relative to population by a factor of 3 (i.e., 300%). In a Presidential election, the power of a Wyoming voter is more than three times that of a Californian.

This is all true, too. But it isn’t why Trump won the election. If you changed the electoral college to allocate votes equal to the number of congressional representatives alone (i.e., subtract two from each state), Trump would have won 245 to 191 (compared to the real result of 306 to 232).[1] As a further check, since even the representative count is slightly skewed in favour of small states (since even the least populous state has at least one), I did another version where the electoral vote allocation is exactly proportional to the 2010 census numbers, but it gives the same result. (Contact me if you would like to see the numbers I use.)
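
For concreteness, here is a sketch in Python of that first recalculation. The table is only a toy subset of states for illustration; reproducing the 245 to 191 total quoted above needs all fifty states plus DC, with the 2016 winner of each:

    # Toy version of the "subtract the two Senate votes" recalculation.
    # Each state maps to (electoral votes, 2016 winner); only four states
    # are listed here as an illustration.
    results = {
        "California": (55, "Clinton"),
        "New Jersey": (14, "Clinton"),
        "Texas": (38, "Trump"),
        "Wyoming": (3, "Trump"),
    }

    def electoral_totals(results, subtract_senators=False):
        tally = {}
        for state, (ev, winner) in results.items():
            if subtract_senators:
                ev -= 2  # keep only the House-proportional votes
            tally[winner] = tally.get(winner, 0) + ev
        return tally

    print(electoral_totals(results))                          # actual rules
    print(electoral_totals(results, subtract_senators=True))  # House seats only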

Is the problem (I admit I am very narrowly defining “problem” as “responsible for Trump’s election”, not the more general one of fairness!), therefore, not the skew in vote allocation, but instead the winner-take-all results in each state? Maine and Nebraska already allocate their two “Senatorial” electoral votes to the statewide winner, and one vote to the winner of each congressional district, and there have been proposals to expand this nationally. Again, this wouldn’t solve the “problem”. Although I haven’t crunched the numbers myself, it appears that ticket-splitting (voting different parties for President and Congress) is relatively low. Since the Republicans retained control of Congress, their electoral votes under this system would be similar to their congressional majority of 239 to 194 (there are a few results outstanding), and would only get worse if we retain the two Senatorial votes per state. Indeed, with this system, Romney would have won in 2012.

So the “problem” really does go back to the very different geographical distribution of Democrats and Republicans. Almost any system which segregates electoral votes by location (especially if subjected to gerrymandering) will favour the more widely dispersed party. So perhaps the solution is just to use nationwide popular voting for Presidential elections. This would also eliminate the importance of a small number of swing states and therefore require more national campaigning. (It could be enacted by a Constitutional amendment, or a scheme like the National Popular Vote Interstate Compact.) Alas, it ain’t gonna happen.


  1. I have assumed Trump wins Michigan, and I have allocated all of Maine to Clinton and all of Nebraska to Trump; see below.


The Sick Rose

Posted on 2016-06-24 10:42

[Image: “The Sick Rose”, Songs of Innocence and of Experience, page 39 (Fitzwilliam copy)]

O Rose thou art sick.
The invisible worm,
That flies in the night
In the howling storm:

Has found out thy bed
Of crimson joy:
And his dark secret love
Does thy life destroy.

—William Blake, Songs of Experience


Wussy (Best Band in America?)

Posted on 2016-05-05 22:07, modified on 2016-12-29 09:26

It’s been a year since the last entry here. So I could blog about the end of Planck, the first observation of gravitational waves, fatherhood, or the horror (comedy?) of the US Presidential election. Instead, it’s going to be rock ’n’ roll, though I don’t know if that’s because it’s too important, or not important enough.

It started last year when I came across Christgau’s A+ review of Wussy’s Attica and the mentions of Sonic Youth, Nirvana and Television seemed compelling enough to make it worth a try (paid for before listening even in the streaming age). He was right. I was a few years late (they’ve been around since 2005), but the songs and the sound hit me immediately. Attica was the best new record I’d heard in a long time, grabbing me from the first moment, “when the kick of the drum lined up with the beat of [my] heart”, in the words of their own description of the feeling of first listening to The Who’s “Baba O’Riley”. Three guitars, bass, and a drum, over beautiful screams from co-songwriters Lisa Walker and Chuck Cleaver.


And they just released a new record, Forever Sounds, reviewed in Spin Magazine just before its release:

To certain fans of Lucinda Williams, Crazy Horse, Mekons and R.E.M., Wussy became the best band in America almost instantaneously…

Indeed, that list nailed my musical obsessions with an almost google-like creepiness. Guitars, soul, maybe even some politics. Wussy makes me feel almost like the Replacements did in 1985.


So I was ecstatic when I found out that Wussy was touring the UK, and their London date was at the great but tiny Windmill in Brixton, one of the two or three venues within walking distance of my flat (where I had once seen one of the other obsessions from that list, The Mekons). I only learned about the gig a couple of days before, but tickets were not hard to get: the place only holds about 150 people, but there were far fewer on hand that night — perhaps because Wussy had also played the night before as part of the Walpurgis Nacht festival. But I wanted to see a full set, and this night they were scheduled to play the entire new Forever Sounds record. I admit I was slightly apprehensive — it’s only a few weeks old and I’d only listened a few times.

But from the first note (and after a good set from the third opener, Slowgun) I realised that the new record had already wormed its way into my mind — a bit more atmospheric, less song-oriented, than Attica, but now, obviously, as good or nearly so. After the 40 or so minutes of songs from the album, they played a few more from the back catalog, and that was it (this being London, even after the age of “closing time”, most clubs in residential neighbourhoods have to stop the music pretty early). Though I admit I was hoping for, say, a cover of “I Could Never Take the Place of Your Man”, it was still a great, sloppy, loud show, with enough of us in the audience to shout and cheer (but probably not enough to make very much cash for the band, so I was happy to buy my first band t-shirt since, yes, a Mekons shirt from one of their tours about 20 years ago…). I did get a chance to thank a couple of the band members for indeed being the “best band in America” (albeit in London). I also asked whether they could come back for an acoustic show some time, so I wouldn’t have to tear myself away from my family and instead could bring my (currently) seven-month-old baby to see them some day soon.

They did say UK tours might be a more regular occurrence, and you can follow their progress on the Wussy Road Blog. You should just buy their records, support great music.


Atheism, naturalism, and the way things ought to be

Posted on 2015-03-09 11:48, modified at 15:24

In an occasionally thoughtful but mostly silly attempted takedown of the so-called New Atheists (Dawkins, Dennett, Harris and such), philosopher John Gray writes that

there is an irresolvable contradiction between viewing religion naturalistically — as a human adaptation to living in the world — and condemning it as a tissue of error and illusion.

— John Gray, What Scares the New Atheists

No, there’s not.

There are lots of human adaptations that are useless or outmoded. Racism, sexism, and other forms of bigotry have at least some naturalistic explanation in terms of evolution, but we certainly ought to condemn them despite this history. This is of a piece with what I understand to be Gray’s general opposition to a sort of Whiggish belief in progress and humanism. But Gray’s argument seems to be another, somewhat disguised and inverted, attempt to derive “ought” from “is”: we are certainly the product of biological and cultural evolution but that doesn’t give us any insight into how we should run the society in which we find ourselves (even though our society is the product of that evolution).


Oscillators, Integrals, and Bugs

Posted on 2014-11-24 12:48, modified on 2015-01-15 16:40

[Update: The bug seems fixed in the latest version, 10.0.2.]

I am in my third year teaching a course in Quantum Mechanics, and we spend a lot of time working with a very simple system known as the harmonic oscillator — the physics of a pendulum, or a spring. In fact, the simple harmonic oscillator (SHO) is ubiquitous in almost all of physics, because we can often represent the behaviour of some system as approximately the motion of an SHO, with some corrections that we can calculate using a technique called perturbation theory.

It turns out that in order to describe the state of a quantum SHO, we need to work with the Gaussian function, essentially the combination exp(-y²/2), multiplied by another set of functions called Hermite polynomials. These latter functions are just, as the name says, polynomials, which means that they are just sums of terms like ayⁿ where a is some constant and n is 0, 1, 2, 3, … Now, one of the properties of the Gaussian function is that it dives to zero really fast as y gets far from zero, so fast that multiplying by any polynomial still goes to zero quickly. This, in turn, means that we can integrate polynomials, or the product of polynomials (which are just other, more complicated polynomials) multiplied by our Gaussian, and get nice (not infinite) answers.

Unfortunately, Wolfram Inc.’s Mathematica (the most recent version 10.0.1) disagrees:

[Screenshot: Mathematica evaluating a Gauss-Hermite integral incorrectly]

The details depend on exactly which Hermite polynomials I pick — 7 and 16 fail, as shown, but some combinations give the correct answer, which is in fact zero unless the two numbers differ by just one. In fact, if you force Mathematica to split the calculation into separate integrals for each term, and add them up at the end, you get the right answer.
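
For what it’s worth, the analogous integral comes out fine in Python’s SymPy. I haven’t reproduced the exact integrand from the screenshot above, but an integral with this structure (two Hermite polynomials and one factor of y against the Gaussian) vanishes unless the indices differ by exactly one, matching the behaviour described:

    import sympy as sp

    y = sp.symbols('y')

    def overlap(m, n):
        # Integral of H_m(y) * y * H_n(y) * exp(-y^2) over the real line;
        # by the Hermite recursion relation it vanishes unless |m - n| = 1.
        integrand = sp.hermite(m, y) * y * sp.hermite(n, y) * sp.exp(-y**2)
        return sp.integrate(integrand, (y, -sp.oo, sp.oo))

    print(overlap(7, 16))  # 0: the indices differ by more than one
    print(overlap(7, 8))   # nonzero: the indices differ by exactly one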

I’ve tried to report this to Wolfram, but haven’t heard back yet. Has anyone else experienced this?


Loncon 3

Posted on 2014-08-15 10:16, modified at 15:42

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.


More events: me and my friends

Posted on 2014-06-04 10:00, modified on 2014-08-23 05:31

A quick heads-up on some recent and upcoming events:

A couple of weeks ago, I delivered my long-delayed (if not actually long-awaited) inaugural lecture, “The Random Universe”. A video is currently available through Imperial College’s media library so you can hear me opine on how we learn about the history and evolution of the Universe (and my career thinking about those things). The squeamish may want to shut their eyes at about three minutes in to avoid a picture of me in a wetsuit….

On Tuesday, June 10, my friend and colleague Pedro Ferreira will be speaking at the London Review Bookshop about his new book, The Perfect Theory, a history of general relativity — Einstein’s theory of gravity — and the controversies (and strong personalities stoking them) that have come along with our growing understanding of it. He’ll be talking with math-pundit Marcus du Sautoy and I know it will be a great discussion.

Finally, a reminder that a bit later on in the summer I’ll get to engage in some further punditry of my own: I’ll be speaking, again on “The Random Universe”, at the Gravity Fields Festival up in Grantham, Lincolnshire, where Isaac Newton was educated. There will be lots of other astronomy and other kinds of science, as well as art, theatre, dance and lots more.


Spring & Summer Science

Posted on 2014-04-09 12:07, modified on 2014-08-23 05:31

As the academic year winds to a close, scientists’ thoughts turn towards all of the warm-weather travel ahead (in order to avoid thinking about exam marking). Mostly, that means attending scientific conferences, like the upcoming IAU Symposium, Statistical Challenges in 21st Century Cosmology in Lisbon next month, and (for me and my collaborators) the usual series of meetings to prepare for the 2014 release of Planck data. But there are also opportunities for us to interact with people outside of our technical fields: public lectures and festivals.

Next month, parallel to the famous Hay Festival of Literature & the Arts, the town of Hay-on-Wye also hosts How The Light Gets In, concentrating on the also-important disciplines of philosophy and music, with a strong strand of science thrown in. This year, along with comic book writer Warren Ellis, cringe-inducing politicians like Michael Howard and George Galloway, and ubiquitous semi-intellectuals like Joan Bakewell, there will be quite a few scientists, with a skew towards the crowd-friendly and controversial. I’m not sure that I want to hear Rupert Sheldrake talk about the efficacy of science and the scientific method, although it might be interesting to hear Julian Barbour, Huw Price, and Lee Smolin talk about the arrow of time. Some of the descriptions are inscrutable enough to pique my interest: Nancy Cartwright and George Ellis will discuss “Ultimate Proof” — I can’t quite figure out if that means physics or epistemology. Perhaps similarly, chemist Peter Atkins will ask “Can science explain all of existence?” (and apparently answer in the affirmative). Closer to my own wheelhouse, Roger Penrose, Laura Mersini-Houghton, and John Ellis will discuss whether it is “just possible the Big Bang will turn out to be a mistake”. Penrose was and is one of the smartest people to work out the consequences of Einstein’s general theory of relativity, though in the last few years his cosmological musings have proven to be, well, just plain wrong — but, as I said, controversial and crowd-pleasing… (Disclosure: someone from the festival called me up and asked me to write about it here.)

Alas, I’ll likely be in Lisbon, instead of Hay. But if you want to hear me speak, you can make your way up North to Grantham, where Isaac Newton was educated, for this year’s Gravity Fields festival in late September. The line-up isn’t set yet, but I’ll be there, as will my fellow astronomers Chris Lintott and Catherine Heymans and particle physicist Val Gibson, alongside musicians, dancers, and lots of opportunities to explore the wilds of Lincolnshire. Or if you want to see me before then (and prefer to stay in London), you can come to Imperial for my much-delayed Inaugural Professorial Lecture on May 21, details TBC…


Gravitational Waves?

Posted on 2014-03-20 18:23, modified on 2014-08-23 05:31

[Uh oh, this is sort of disastrously long, practically unedited, and a mixture of tutorial- and expert-level text. Good luck. Send corrections.]

It’s been almost exactly a year since the release of the first Planck cosmology results (which I discussed in some depth at the time). On this auspicious anniversary, we in the cosmology community found ourselves with yet more tantalising results to ponder, this time from a ground-based telescope called BICEP2. While Planck’s results were measurements of the temperature of the cosmic microwave background (CMB), this year’s concerned its polarisation.

Background

Polarisation is essentially a headless arrow that can come attached to the photons coming from any direction on the sky — if you’ve worn polarised sunglasses, and noticed how what you see changes as you rotate them around, you’ve seen polarisation. The same physics responsible for the temperature also generates polarisation. But more importantly for these new results, polarisation is a sensitive probe of some of the processes that are normally mixed in, and so hard to distinguish, in the temperature.

Technical aside (you can ignore the details of this paragraph). Actually, it’s a bit more complicated than that: we can think of those headless arrows on the sky as the sum of two separate kinds of patterns. We call the first of these the “E-mode”, and it represents patterns consisting of either radial spikes or circles around a point. The other patterns are called the “B-mode” and look like patterns that swirl around, either to the left or the right. The important difference between them is that the E modes don’t change if you reflect them in a mirror, while the B modes do — we say that they have a handedness, or parity, in somewhat more mathematical terms. I’ve discussed the CMB a lot in the past, but I can’t do the theory of the CMB justice here; my colleague Wayne Hu has an excellent, if somewhat dated, set of web pages explaining the physics (probably at a physics-major level).

[Figure: example E-mode and B-mode polarisation patterns]

The excitement comes because these B-mode patterns can only arise in a few ways. The most exciting is that they can come from gravitational waves (GWs) in the early Universe. Gravitational waves (sometimes incorrectly called “gravity waves”, a term which historically refers to unrelated phenomena!) are propagating ripples in space-time, predicted in Einstein’s general relativistic theory of gravitation. Because the CMB is generated about 400,000 years after the big bang, it’s only sensitive to gravitational radiation from the early Universe, not to astrophysical sources like spiralling neutron stars, from which we have other, circumstantial, evidence for gravitational waves, and which are the sources for which experiments like LIGO and eLISA will be searching. These early-Universe gravitational waves move matter around in a specific way, which in turn induces that specific B-mode polarisation pattern.

In the early Universe, there aren’t a lot of ways to generate gravitational waves. The most important one is inflation, an early period of expansion which blows up a subatomically-sized region by something like a billion-billion-billion times in each direction — inflation seems to be the most well thought-out idea for getting a Universe that looks like the one in which we live, flat (in the sense of Einstein’s relativity and the curvature of space-time), more or less uniform, but with small perturbations to the density that have grown to become the galaxies and clusters of galaxies in the Universe today. Those fluctuations arise because the rapid expansion takes minuscule quantum fluctuations and blows them up to finite size. This is essentially the same physics as the famous Hawking radiation from black holes. The fluctuations that eventually create the galaxies are accompanied by a separate set of fluctuations in the gravitational field itself: these are the ones that become gravitational radiation observable in the CMB. We characterise the background of gravitational radiation through the number r, which stands for the ratio of these two kinds of fluctuations — gravitational radiation divided by the density fluctuations.
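
In the usual notation, r is the “tensor-to-scalar ratio”, defined from the amplitudes of the two kinds of primordial fluctuations (and quoted at some agreed pivot scale):

    r ≡ A_t / A_s

where A_t is the amplitude of the tensor (gravitational-wave) fluctuations and A_s that of the scalar (density) fluctuations.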

Important caveat: there are other ways of producing gravitational radiation in the early Universe, although they don’t necessarily make exactly the same predictions; some of these issues have been discussed by my colleagues in various technical papers (Brandenberger 2011; Hindmarsh et al 2008; Lizarraga et al 2014 — the latter paper from just today!).

However, there are other ways to generate B modes. First, lots of astrophysical objects emit polarised light, and they generally don’t preferentially create E or B patterns. In particular, clouds of gas and dust in our galaxy will generally give us polarised light, and as we’re sitting inside our galaxy, it’s hard to avoid these. Luckily, we’re towards the outskirts of the Milky Way, so there are some clean areas of sky, but it’s hard to be sure that we’re not seeing some such light — and there are very few previous experiments to compare with.

We also know that large masses along the line of sight — clusters of galaxies and even bigger — distort the path of the light and can move those polarisation arrows around. This, in turn, can convert what started out as E into B and vice versa. But we know a lot about that intervening matter, and about the E-mode pattern that we started with, so we have a pretty good handle on this. There are some angular scales over which this is larger than the gravitational wave signal, and some scales that the gravitational wave signal is dominant.

So, if we can observe B-modes, and we are convinced that they are primordial, and that they are not due to lensing or astrophysical sources, and they have the properties expected from inflation, then (and only then!) we have direct evidence for inflation!

Data

Here’s a plot, courtesy the BICEP2 team, with the current state of the data targeting these B modes:

[Figure: “Almost all BB limits”, a compilation of B-mode power-spectrum measurements and upper limits]

The figure shows the so-called power spectrum of the B-mode data — the horizontal “multipole” axis corresponds to angular sizes (θ) on the sky: very roughly, multipole ℓ ~ 180°/θ. The vertical axis gives the amount of “power” at those scales: it is larger if there are more structures of that particular size. The downward pointing arrows are all upper limits; the error bars labeled BICEP2 and Polarbear are actual detections. The solid red curve is the expected signal from the lensing effect discussed above; the long-dashed red curve is the effect of gravitational radiation (with a particular amplitude), and the short-dashed red curve is the total B-mode signal from the two effects.

The Polarbear results were announced on 11 March (disclosure: I am a member of the Polarbear team). These give a detection of the gravitational lensing signal. It was expected, and has been observed in other ways both in temperature and polarisation, but this was the first time it’s been seen directly in this sort of B-mode power spectrum, a crucial advance in the field, letting us really see lensing unblurred by the presence of other effects. We looked at very “clean” areas of the sky, in an effort to minimise the possible contamination from those astrophysical foregrounds.

The BICEP2 results were announced with a big press conference on 17 March. There are two papers so far, one giving the scientific results, another discussing the experimental techniques used — more papers discussing the data processing and other aspects of the analysis are forthcoming. But there is no doubt from the results that they have presented so far that this is an amazing, careful, and beautiful experiment.

Taken at face value, the BICEP2 results give a pretty strong detection of gravitational radiation from the early Universe, with the ratio parameter r=0.20, and error bars +0.07 and -0.05 (they are different in the two directions, so you can’t write the result with the usual “±”).

This is why there has been such an amazing amount of interest in both the press and the scientific community about these results — if true, they are a first semi-direct detection of gravitational radiation, strong evidence that inflation happened in the early Universe, and therefore a first look at waves which were created in the first tiny fraction of a second after the big bang, and have been propagating unimpeded in the Universe ever since. If we can measure more of the properties of these waves, we can learn more about the way inflation happened, which may in turn give us a handle on the particle physics of the early Universe and ultimately on a so-called “theory of everything” joining up quantum mechanics and gravity.

Taken at face value, the BICEP2 results imply that the very simplest theories of inflation may be right: the so-called “single-field slow-roll” theories that postulate a very simple addition to the particle physics of the Universe. In the other direction, scientists working on string theory have begun to make predictions about the character of inflation in their models, and many of these models are strongly constrained — perhaps even ruled out — by these data.

Skepticism

This is great. But scientists are skeptical by nature, and many of us have spent the last few days happily trying to poke holes in these results. My colleagues Peter Coles and Ted Bunn have blogged their own worries over the last couple of days, and Antony Lewis has already done some heroic work looking at the data.

The first worry is raised by their headline result: r=0.20. On its face, this conflicts with last year’s Planck result, which says that r<0.11 (of course, both of these numbers really represent probability distributions, so there is no absolute contradiction between them, but rather they should be seen as a very unlikely combination). How can we ameliorate the “tension” (a word that has come into vogue in cosmology lately: a wimpy way — that I’ve used, too — of talking about apparent contradictions!) between these numbers?

[Figure: the Planck CMB temperature power spectrum, highlighting the low-multipole points]

First, how does Planck measure r to begin with? Above, I wrote about how B modes show only gravitational radiation (and lensing, and astrophysical foregrounds). But the same gravitational radiation also contributes to the CMB temperature, albeit at a comparatively low level, and at large angular scales — the very left-most points of the temperature equivalent of a plot like the one above; I reproduce one from last year’s Planck release here. In fact, those left-most data points are a bit low compared to the most favoured theory (the smooth curve), which pushes the Planck limit down a bit.

But Planck and BICEP2 measure r at somewhat different angular scales, and so we can “ameliorate the tension” by making the theory a bit more complicated: the gravitational radiation isn’t described by just one number, but by a curve. If both data are to be believed, the curve slopes up from the Planck regime toward the BICEP2 regime. In fact, such a new parameter is already present in the theory, and goes by the name “tensor tilt”. The problem is that the required amount of tilt is somewhat larger than the simplest ideas — such as the single-field slow-roll theories — prefer.
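
In the simplest parametrisation, that curve is a power law in the scale k, and the tensor tilt n_t is the new parameter:

    P_t(k) = A_t (k/k_*)^(n_t)

Single-field slow-roll models predict the tiny, slightly red value n_t = −r/8, whereas reconciling the Planck limit with the BICEP2 detection wants a substantially positive (blue) tilt.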

If we want to keep the theories simple, we need to make the data more complicated: bluntly, we need to find mistakes in either Planck or BICEP2. The large-scale CMB temperature sky has been scrutinised for the last 20 years or so, from COBE through WMAP and now Planck. Throughout this time, the community has been building up a catalog of “anomalies” (another term of art we use to describe things we’re uncomfortable with), many of which do seem to affect those large scales. The problem is that no one has quite figured out whether these things are statistically significant: we look at so many possible ways that the sky could be weird, but we only publish the ones that look significant. As my Imperial colleague Professor David Hand would point out, “Coincidences, Miracles, and Rare Events Happen Every Day”. Nonetheless, there seems to be some evidence that something interesting/unusual/anomalous is happening at large scales, and perhaps if we understood this correctly, the Planck limits on r would go up.

But perhaps not: those results have been solid for a long while without an alternative explanation. So maybe the problem is with BICEP2? There are certainly lots of ways they could have made mistakes. Perhaps most importantly, it is very difficult for them to distinguish between primordial perturbations and astrophysical foregrounds, as their main results use only data from a single frequency (like a single colour in the spectrum, but down closer to radio wavelengths). They do compare with some older data at a different frequency, but the comparison does not strongly rule out contamination. They also rely on models for possible contamination, which give a very small contribution, but these models are very poorly constrained by current data.

Another way they could go wrong is that they may misattribute some of their temperature measurement, or their E mode polarisation, to their B mode detection. Because the temperature and E mode are so much larger than the B they are seeing, only a very small amount of such contamination could change their results by a large amount. They do their best to control this “leakage”, and argue that its residual effect is tiny, but it’s very hard to get absolutely right.

And there is some internal evidence within the BICEP2 results that things are not perfect. The most obvious one comes from the figure above: the points around ℓ=200 — where the lensing contribution begins to dominate — are a bit higher than the model. Is this just a statistical fluctuation, or is it evidence of a broader problem? Their paper shows some somewhat discrepant points in their E polarisation measurements, as well. None of these are very statistically significant, and some may be confirmed by other measurements, but there are enough of these that caution makes sense. From only a few days thinking about the results (and not yet really sitting down and going through the papers in great depth), it’s hard to make detailed judgements. It seems like the team have been careful enough that it’s hard to imagine the results going away completely, but easy to imagine lots of ways in which they could be wrong in detail.

But this skepticism from me and others is a good thing, even for the BICEP2 team: they will want their results scrutinised by the community. And the rest of us in the community will want the opportunity to reproduce the results. First, we’ll try to dig into the BICEP2 results themselves, making sure that they’ve done everything as well as possible. But over the next months and years, we’ll want to reproduce them with other experiments.

First, of course, will be Planck. Since I’m on Planck, there’s not much I can say here, except that we expect to release our own polarisation data and cosmological results later this year. This paper (Efstathiou and Gratton 2009) may be of interest….

Next, there are a bunch of ground- and balloon-based CMB experiments gathering data and/or looking for funding right now. The aforementioned Polarbear will continue, and I’m also involved with the EBEX team which hopes to fly a new balloon to probe the CMB polarisation again in a few years. In the meantime, there’s also ACT, SPIDER, SPT, and indeed the successor to BICEP itself, called the Keck array, and many others besides. Eventually, we may even get a new CMB satellite, but don’t hold your breath…

Rumour-mongering

I first heard about the coming BICEP2 results in the middle of last week, when I was up in Edinburgh and received an email from a colleague just saying “r=0.2?!!?” I quickly called to ask what he meant, and he transmitted the rumour of a coming BICEP detection, perhaps bolstered by some confirmation from their successor experiment, the Keck Array (which does in fact appear in their paper). Indeed, such a rumour had been floating around the community for a year or so, but most of us thought it would turn out to be spurious. But very quickly last week, we realised that this was for real. It became most solid when I had a call from a Guardian journalist, who managed to elicit some inane comments from me, before anything was known for sure.

By the weekend, it became clear that there would be an astronomy-related press conference at Harvard on Monday, and we were all pretty sure that it would be the BICEP2 news. The number r=0.20 was most commonly cited, and we all figured it would have an error bar around 0.06 or so — small enough to be a real detection, but large enough to leave room for error (but I also heard rumours of r=0.075).

By Monday morning, things had reached whatever passes for a fever pitch in the cosmology community: twitter and Facebook conversations, a mention on BBC Radio 4’s Today programme, all before the official title of the press conference was even announced: “First Direct Evidence for Cosmic Inflation”. Apparently, other BBC journalists had already had embargoed confirmation of some of the details from the BICEP2 team, but the embargo meant they couldn’t participate in the rumour-spreading.

I was travelling during most of this time, fielding occasional calls from journalists (there aren’t that many CMB specialists within easy reach of the London-based media), though, unfortunately for my ego, I wasn’t able to make it onto any of Monday night’s choice TV spots.

By the time of the press conference itself, the cosmology community had self-organised: there was a Facebook group organised by Fermilab’s Scott Dodelson, which pretty quickly started dissecting the papers and was able to follow along with the press conference as it happened (despite the fact that most of us couldn’t get onto the website — one of the first times that the popularity of cosmology has brought down a server).

At the time, I was on a series of trains from Loch Lomond to Glasgow, Edinburgh and finally on to London, but the Facebook group made it easy to follow along (from a tech standpoint, it’s surprising that we didn’t do this on the supposedly more capable Google Plus platform, but the sociological fact is that more of us are on, and use, Facebook). It was great to be able to watch, and participate in, the real-time discussion of the papers (which continues on Facebook as of now). Cosmologists have been teasing out possible inconsistencies (some of which I alluded to above), trying to understand the implications of the results if they’re right — and thinking about the next steps. IRL, now that I’m back at Imperial, we’ve been poring over the papers in yet more detail, trying to work out exactly how they’ve gathered and analysed their data, and seeing what parts we want to try to reproduce.

Aftermath

Physics moves fast nowadays: as of this writing, about 72 hours after the announcement, there are 16 papers mentioning the BICEP2 results on the physics arXiv (it’s a live search, so the number will undoubtedly grow). Most of them attempt to constrain various early-Universe models in the light of the r=0.20 result — some with some amount of statistical rigour, others just pointing out models in which such a value is more or less easy to get. (I’ve obviously spent too much time on this post and not enough writing papers.)

It’s also worth collecting, if only for my own future reference, some of the media coverage of the results:

For more background, you can check out


Around Asia in search of a meal

Permalink - Posted on 2014-03-10 14:53, modified on 2016-12-28 07:25

I’m recently back from my mammoth trip through Asia (though in fact I’m up in Edinburgh as I write this, visiting as a fellow of the Higgs Centre For Theoretical Physics).

I’ve already written a little about the middle week of my voyage, observing at the James Clerk Maxwell Telescope, and I hope to get back to that soon — at least to post some pictures of and from Mauna Kea. But even more than telescopes, or mountains, or spectacular vistas, I seem to have spent much of the trip thinking about and eating food. (Even at the telescope, food was important — and the chefs at Hale Pohaku do some amazing things for us sleep-deprived astronomers, though I was too tired to record it except as a vague memory.) But down at sea level, I ate some amazing meals.

When I first arrived in Taipei, my old colleague Proty Wu picked me up at the airport, and took me to meet my fellow speakers and other Taiwanese astronomers at the amazing Din Tai Fung, a world-famous chain of dumpling restaurants. (There are branches in North America but alas none in the UK.) As a scientist, I particularly appreciated the clean room they use to prepare the dumplings to their exacting standards:

[Image: Dumpling lab at Din Tai Fung, Taipei]

Later in the week, a few of us went to a branch of another famous Taipei-based chain, Shin Yeh, for a somewhat traditional Taiwanese restaurant meal. It was amazing, and I wish I could remember some of the specifics. Alas, I’ve only recorded the aftermath:

[Image: Shin Yeh (Nanxi) restaurant]

From Taipei, I was off to Hawaii. Before and after my observing trip, I spent a few days in Honolulu, where I managed to find a nice plate of sushi at Doraku — good, but not too much better than I’ve had in London or New York, despite the proximity to Japan.

[Image: Doraku Sushi, Waikiki, Honolulu, Hawaii]

From Hawaii, I had to fly back for a transfer in Taipei, where I was happy to find plenty more dumplings (as well as pleasantly sweet Taiwanese pineapple cake). Certainly some of the best airport food I’ve had (for the record, my other favourites are sausages in Munich, and sushi at the Ebisu counter at San Francisco):

[Image: Taipei airport dumplings]

From there, my last stop was 40 hours in Beijing. Much more to say about that visit, but the culinary part of the trip had a couple of highlights. After a morning spent wandering around the Forbidden City (aka the Palace Museum), I was getting tired and hungry. I tried to find Tian Di Yi Jia, supposedly “An Incredible Imperial-Style Restaurant”. Alas, some combination of its not having a website, not having Roman-lettered signs, and the likelihood that it had closed down meant an hour’s wandering of Beijing’s streets was in vain. Instead, I ended up at this hole in the wall:

[Image: Restaurant near the Forbidden City]

And I was very happy indeed, in particular with the amazing slithery, tangy eggplant:

[Image: Lunch near the Forbidden City]

That night, I ended up at The Grandma’s, an outpost of yet another chain — seemingly a different one from Grandma’s Kitchen, which apparently serves American food. This was definitely not American food. Note especially the “thousand-year egg” at left (I was happy to see from Wikipedia that the idea they’re cured in horse urine is only a myth!):

[Image: Grandma’s Restaurant, Beijing]

It was a very tasty trip. I think there was science, too.


Observing, days 3-4: galaxies and blank fields

Permalink - Posted on 2014-02-21 07:16, modified on 2014-05-16 09:43

After a couple of days of lousy weather, the sky cleared up and dried out Wednesday. Eventually, we got down to τ<0.08 — not quite the best possible conditions, but good enough for almost anything we might want to do. We started out slightly worse than that, but that meant we got to observe more interesting things: nearby bright, big galaxies. Unfortunately, a galaxy that is bright and big in visible light is still just a blob in the submillimetre (submm). Our first one was NGC 3034, aka M82, exciting for two reasons. First, it’s the prototypical starburst galaxy, a galaxy undergoing a rapid period of star formation, gobbling up gas and dust and turning them into stars, which in turn heat up the remaining dust, making the galaxy glow brightly in the infrared and submm. Second, M82 is the home of a recent supernova explosion, the nearest one since 2004, and the nearest one of the particularly important type Ia since 1972. And it was first discovered by students at University College London, right across town.

So, I am sure that you are very excited to see a beautiful picture of the galaxy, at right.

[Image: M82]

The elongated blob in the center isn’t even the whole galaxy: that’s the bright nucleus glowing from the concentration of star formation there. I think — and my proper observational-astronomer friends will correct me if I’m wrong — that some of the dark fuzz around the nucleus is really part of the galaxy, which would take up most of this picture, about 15 arc minutes from top to bottom.

After M82, we observed another nearby galaxy, the somewhat less famous NGC 4559, and then conditions improved enough that we could do observations as part of the SCUBA-2 Cosmology Legacy Survey (CLS), which is officially why I’m here. But that’s a lot less fun, as it’s just observing more or less blank patches, again and again, building up a deep submm survey of large areas of sky (where for these purposes, “large” just means about 35 square degrees, out of about 41,000 on the whole sky). We repeat each small patch dozens of times, adding them up and building up pictures so dense with galaxies that they are said to be “confusion limited” — the main source of noise is just the population of galaxies themselves, individually too faint to see, but contributing to the infrared background light everywhere we look (this depends on both the wavelength of the light and the resolution of the telescope — that is, the size of the smallest object that you can make out).
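
To illustrate what “confusion limited” means in practice, here is a toy model — all the numbers are made up for illustration, not the real SCUBA-2 sensitivities:

```python
import math

# A toy model (with made-up numbers) of why we revisit each CLS patch
# dozens of times: instrumental noise averages down as 1/sqrt(n), but
# the confusion noise from faint unresolved galaxies does not.
sigma_1 = 10.0     # instrumental noise after one visit, mJy/beam (assumed)
sigma_conf = 1.0   # confusion noise floor, mJy/beam (assumed)

for n in (1, 4, 25, 100, 400):
    sigma_inst = sigma_1 / math.sqrt(n)             # integrates down
    sigma_tot = math.hypot(sigma_inst, sigma_conf)  # add in quadrature
    print(f"visits = {n:3d}: instrument = {sigma_inst:5.2f}, "
          f"total = {sigma_tot:5.2f} mJy/beam")
```

Past a certain point, revisiting a patch stops helping: the instrumental noise has integrated down below the confusion floor, and the unresolved galaxies themselves set the depth of the map.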

For the rest of the night, through until dawn, we kept on observing the CLS fields, and have started back onto them today, in even better conditions than yesterday.

So far, I’ve been pleasantly surprised about life at 14,000 feet: there is definitely less oxygen than down at sea level (or even than Hale Pohaku at 9,000 feet, where I sleep and spend the days), but I’ve been spared the worst symptoms of altitude sickness. And jet-lag, combined with the strong and good coffee provided by the excellent Telescope System Specialist (TSS), has meant that staying up through until 7 or 8am hasn’t been too bad. (On the other hand, re-reading this post leaves the impression that my ability to string a sentence together has been somewhat impaired by the lack of sleep and oxygen…)

In fact, the TSS is really the one doing — quite literally — all of the work here. Because we are observing as part of the JCMT Legacy Survey, there’s nothing for the “observer” (i.e., me) to do. Later on, the survey team will collate the data that have been gathered and make the final images and catalogs, but that’s a slow and painstaking process, not one that happens on the night the data are taken. And taking the data is such a complicated task that only the Specialist really has the expertise to do it. He keeps me informed of what’s going on, but I don’t really get much of a say in what happens.

You may ask why someone spends the money to send us astronomer/observers across an ocean or two to stay up at night, drink coffee, and not really do any science. Gift-horses aside, so do I.

But it certainly is a gift and a privilege to be here:

[Image: JCMT]



Observing, days 1-2

Permalink - Posted on 2014-02-19 10:15, modified on 2014-05-07 03:26

I am sitting in the control room of the James Clerk Maxwell Telescope (JCMT), 14,000 feet up Mauna Kea, on Hawaii’s Big Island. I’m here to do observations for the SCUBA-2 Cosmology Legacy Survey (CLS).

I’m not really an observer — this is really my first time at a full-sized, modern telescope. But much of JCMT’s observing time is taken up with a series of so-called Legacy Surveys (JLS) — large projects, observing large amounts of sky or large numbers of stars or galaxies.

JCMT is a submillimetre telescope: it detects light with wavelength at or just below one millimetre. This is a difficult regime for astronomy: the atmosphere itself glows very strongly in the infrared, mostly because of water vapour. That’s why I’m sitting at the cold and dry top of an active volcano (albeit one that hasn’t erupted in thousands of years).

Unfortunately, “cold and dry” doesn’t mean there is no precipitation. Here is yesterday’s view, from JCMT over to the CSO telescope:

[Image: Snowy view of the CSO from JCMT]

This is Hawaii, not Hoth, or even Antarctica.

Tonight seems more promising: we measure the overall quality as an optical depth, denoted by the symbol τ, essentially the probability that a photon you care about will get scattered by the atmosphere before it reaches your telescope. The JLS survey overall requires τ<0.2, and the CLS that I’m actually here for needs even better conditions, τ<0.10. So far we’re just above 0.20 — good enough for some projects, but not the JLS. I’m up here with a JCMT Telescope System Specialist — who actually knows how to run the telescope — and he’s been calibrating the instrument, observing a few sources, and we’re waiting for the optical depth to dip into the JLS band. If that happens, we can fire up SCUBA-2, the instrument (camera) that records the light from the sky. SCUBA-2 uses bolometers (like HFI on Planck), very sensitive thermometers cooled down to superconducting temperatures.
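
For the record, here is the tiny bit of arithmetic behind that definition — a minimal sketch of the standard radiative-transfer bookkeeping, mine rather than anything from the JCMT software. A photon survives the atmosphere unscattered with probability e^(−τ), so for small τ the optical depth and the scattering probability nearly coincide:

```python
import math

# Optical-depth bookkeeping: a photon traverses the atmosphere
# unscattered with probability exp(-tau), so the fraction lost
# to atmospheric scattering is 1 - exp(-tau).
for tau in (0.08, 0.10, 0.20):
    lost = 1.0 - math.exp(-tau)
    print(f"tau = {tau:.2f}: {100 * lost:4.1f}% of photons scattered")
```

So the JLS threshold of τ<0.2 amounts to losing roughly one photon in five to the atmosphere.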

(You can keep track of the conditions here, and specifically monitor the optical depth here. News flash: as I type this, τ=0.199, less than 0.2!)

Later this week, I’ll try to talk about why these are called “Legacy” surveys — and why that’s bad news.



meTube

Permalink - Posted on 2014-01-22 16:47, modified on 2014-08-26 01:43

Some time last year, Physics World magazine asked some of us to record videos discussing scientific topics in 100 seconds. Among others, I made one on cosmic inflation and another on what scientists can gain from blogging, which for some reason has just been posted to YouTube, and then tweeted about by FQXi (without which I would have forgotten the whole thing). There are a few other videos of me, although it turns out that there are lots of people called “Andrew Jaffe” on YouTube.

I’m posting this not (only) for the usual purposes of self-aggrandizement, but to force — or at least encourage — myself to actually do some more of that blogging which I claim is a good thing for us scientists. With any luck, you’ll be able to read about my experiences teaching last term, and the trip I’m about to take to observe at a telescope (a proper one, at the top of a high mountain, with a really big mirror).

[On a much more entertaining note, here’s a song from a former Imperial undergraduate recounting “A Brief History of the Universe”. Give it a listen!]


Academic Blogging Still Dangerous?

Permalink - Posted on 2013-12-05 15:18, modified on 2014-05-16 08:49

Nearly a decade ago, blogging was young, and its place in the academic world wasn’t clear. Back in 2005, I wrote about an anonymous article in the Chronicle of Higher Education, a so-called “advice” column admonishing academic job seekers to avoid blogging, mostly because it let the hiring committee find out things that had nothing whatever to do with their academic job, and reject them on those (inappropriate) grounds.

I thought things had changed. Many academics have blogs, and indeed many institutions encourage it (here at Imperial, there’s a College-wide list of blogs written by people at all levels, and I’ve helped teach a course on blogging for young academics). More generally, outreach has become an important component of academic life (that is, it’s at least necessary to pay it lip service when applying for funding or promotions) and blogging is usually seen as a useful way to reach a wide audience outside of one’s field.

So I was distressed to see the lament — from an academic blogger — “Want an academic job? Hold your tongue”. Things haven’t changed as much as I thought:

… [A senior academic said that] the blog, while it was to be commended for its forthright tone, was so informal and laced with profanity that the professor could not help but hold the blog against the potential faculty member…. It was the consensus that aspiring young scientists should steer clear of such activities.

Depending on the content of the blog in question, this seems somewhere between a disregard for academic freedom and a judgment of the candidate on completely irrelevant grounds. Of course, it is natural to want the personalities of our colleagues to mesh well with our own, and almost impossible to completely ignore supposedly extraneous information. But we are hiring for academic jobs, and what should matter are research and teaching ability.

Of course, I’ve been lucky: I already had a permanent job when I started blogging, and I work in the UK system which doesn’t have a tenure review process. And I admit this blog has steered clear of truly controversial topics (depending on what you think of Bayesian probability, at least).


Teaching mistakes

Permalink - Posted on 2013-10-08 11:14, modified on 2014-08-30 13:24

The academic year has begun, and I’m teaching our second-year Quantum Mechanics course again. I was pretty happy with last year’s version, and the students didn’t completely disagree.

This year, there have been a few changes to the structure of the course — although not as much to the content as I might have liked (“if it ain’t broke, don’t fix it”, although I’d still love to use more of the elegant Dirac notation and perhaps discuss quantum information a bit more). We’ve moved some of the material to the first year, so the students should already come into the course with at least some exposure to the famous Schrödinger Equation which describes the evolution of the quantum wave function. But of course all lecturers treat this material slightly differently, so I’ve tried to revisit some of that material in my own language, although perhaps a bit too quickly.

Perhaps more importantly, we’ve also changed the tutorial system. We used to attempt an imperfect rendition of the Oxbridge small-group tutorial system, but we’ve moved to something with larger groups and (we hope) a more consistent presentation of the material. We’re only on the second term with this new system, so the jury is still out, both in terms of the students’ reactions, and our own. Perhaps surprisingly, they do like the fact that there is more assessed (i.e., explicitly graded, counting towards the final mark in the course) material — coming from the US system, I would like to see yet more of this, while those brought up on the UK system prefer the final exam to carry most (ideally all!) the weight.

So far I’ve given three lectures, including a last-minute swap yesterday. The first lecture — mostly content-free — went pretty well, but I’m not too happy with my performance in the two since: I’ve made a mistake in each. I’ve heard people say that the students don’t mind a few (corrected) mistakes; it humanises the teachers. But I suspect that the students would, on the whole, prefer less-human, more perfect, lecturing…

Yesterday, we were talking about a particle trapped in a finite potential well — that is, a particle confined to a box, but (because of the weirdness of quantum mechanics) with some probability of being found outside. That probability depends upon the energy of the particle, and because of the details of the way I defined that energy (starting at a negative number, instead of the more natural value of zero), I got confused about the signs of some of the quantities I was dealing with. I explained the concepts (I think) completely correctly, but with mistakes in the math behind them, the students (and I) got confused about the details. But many, many thanks to the students who kept pressing me on the issue and helped us puzzle out the problems.

Today’s mistake was less conceptual, but no less annoying — I wrote (and said) “cotangent” when I meant “tangent” (and vice versa). In my notes, this was all completely correct, but when you’re standing up in front of 200 or so students, sometimes you miss the detail on the page in front of you. Again, this was in some sense just a mathematical detail, but (as we always stress) without the right math, you can’t really understand the concepts. So, thanks to the students who saw that I was making a mistake, and my apologies to the whole class.
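
For the record — and hedging that sign and origin conventions vary between textbooks, so this is the standard result rather than a transcript of my notes — the bound-state matching conditions for a finite square well of depth V₀ and half-width a, with the energy E measured from the bottom of the well, are:

```latex
% Bound states of a finite square well of depth V_0 and half-width a,
% with the energy E measured from the bottom of the well (the "natural"
% zero mentioned above). Matching psi'/psi at the well edge x = a gives:
\[
  k \tan(ka) = \kappa \quad \text{(even states)}, \qquad
  k \cot(ka) = -\kappa \quad \text{(odd states)},
\]
% where k sets the oscillation inside the well and kappa the decay outside:
\[
  k = \frac{\sqrt{2mE}}{\hbar}, \qquad
  \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar}.
\]
```

With this choice of zero, both square roots stay real for bound states (0 < E < V₀) — exactly the bookkeeping that goes wrong if the energy starts at a negative number — and the tangent/cotangent swap between even and odd states is precisely the detail that is easy to fumble at the board.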


Songs about f*&%ing

Permalink - Posted on 2013-06-21 22:46, modified on 2014-05-16 09:59

First, my apologies that I couldn’t resist the almost not-safe-for-work title, especially to those expecting posts about astrophysics and cosmology rather than a reference to a 1987 record by Big Black (which it’s worth pointing out can be found in its entirety on YouTube). But this is not a post about Big Black.

Rather, it’s a brief reminiscence of another album with a similar subject matter and a very different style, Liz Phair’s Exile in Guyville, which I was shocked to discover is about to have its 20th anniversary, also commemorated with an article and interview in the Chicago Tribune.

I lived in Chicago in the early 90s when Exile In Guyville was released, although I don’t think I heard it until I left town and moved to Toronto a few months later. But she was already a presence on the scene when Chicago was taking its place in the world of post-Nirvana indie-rock (led by the Smashing Pumpkins, along with Urge Overkill, who never quite capitalised on the marquee placement of their “Girl, You’ll Be A Woman Soon” cover on the Pulp Fiction soundtrack, and my favourite, Eleventh Dream Day). It was a record full of great songs about fucking and love and being a lonely twenty-something hipster in a big city, and was a sort of homage to the Rolling Stones’ own Exile on Main Street, all of which was enough to make rock critics (and wannabes like me) wet their pants — although by now I’m sure the Stones reference is irrelevant to the record’s brilliance. “Guyville” was code (surfacing first in an Urge Overkill song) for the Wicker Park neighbourhood which was the center of the Chicago rock scene, and home to my second-favourite Chicago bar, the still-going-strong Rainbo Club (alas, my favourite, Ciral’s House of Tiki, closed in 2000).

And the title of this post also covers The Book of Mormon, which I went to see in London’s West End last week: the filthy and wonderful musical comedy from the creators of South Park. Despite songs about sex with amphibians (and worse), a character named “General Butt Fucking Naked” (sort of named after a real Liberian warlord), and a script self-consciously suffused with coarse stereotyping of Africans and the eponymous Mormons, it manages to be old-fashioned, warm-hearted and strangely, uncynically, affirming of the ability of individuals to actually make a difference in each other’s lives.