I spent Friday and Saturday listening to these papers on dealing with 'real-time' data – that is, the data that were actually available at a given point in time. The main question that people are grappling with is how to extract information from preliminary estimates that will subsequently be revised.
As far as I can tell, this is still a somewhat niche subject, but it shouldn't be: revision errors are often very large, and there is an alarming number of cases in which real-time signals – such as the sign of the output gap – are reversed as the data are revised.
One thing that struck me was Simon van Norden's anecdote about the plight of Irish researchers. When the Irish Central Statistics Office releases preliminary estimates for GDP, it also revises the numbers it had published previously. That's not unusual: every statistics agency does that. But in Ireland – and apparently several other European countries as well – they revise the entire history of their series, and not just the last few observations. At some point, you'd expect that we would stop learning about what happened in the Irish economy in 1978.
If a Canadian or US researcher working in 2008 finds an interesting pattern in the data between (say) 1975 and 2000, she can be reasonably confident that if someone decides to try to reproduce her results in 2013, the finding should still hold up. But her Irish colleague knows that between now and 2013, the data for 1975-2000 will be revised. It must be frustrating to know that any result you may find will have to be verified again and again and again and again…
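To make the contrast concrete, here's a minimal sketch – with made-up numbers and a hypothetical data layout, nothing from the conference papers – of how a researcher might compare two vintages of a series to see which historical observations moved:

```python
# A minimal sketch (purely illustrative figures) comparing two data "vintages":
# the series as published in 2008 versus the series as published in 2013.
# In the Canadian/US case, only the most recent observations typically change;
# in the Irish case described above, even 1978 can move.

import pandas as pd

# Hypothetical GDP growth (%) for a few historical years, by publication vintage.
vintage_2008 = pd.Series({1978: 7.1, 1990: 8.5, 2000: 9.2, 2006: 5.4, 2007: 6.0})
vintage_2013 = pd.Series({1978: 6.4, 1990: 8.9, 2000: 8.7, 2006: 5.1, 2007: 5.5})

# Which observations were revised, and by how much?
revisions = (vintage_2013 - vintage_2008).rename("revision")
changed = revisions[revisions != 0]

print(changed)
# A researcher whose 1975-2000 result rests on the 2008 vintage would have to
# re-check it against every later vintage in which those years move.
```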
Well, obsessively revising data has to be less socially damaging than furnishing closed input-output model multipliers to support yet another government expenditure program. And it's probably less tiring than skipping rope.