Big 2nd day over at JSM!

(by Julien Cornebise)

As expected, Monday was a crazy day at JSM, with massive program collisions between exciting sessions. Excruciating choices had to be made.

My core theme was present in the Advances in Monte Carlo session: I was especially eager to see Natesh Pillai's talk about his recent work on the optimal scaling of Monte Carlo algorithms based on Langevin diffusions or Hamiltonian/hybrid integration. This is especially exciting to me as some of the most recent advances in MCMC developed in my department, the Riemannian Manifold Monte Carlo methods of Mark Girolami and Ben Calderhead (a read paper of the Royal Statistical Society last year), are extremely elegant but similarly rely on choosing a time-discretization (and therefore a scaling) of such a Langevin diffusion to explore the manifold of the parameters of interest. Although Natesh's work does not yet cover the extension to the Riemannian case, it is definitely a massive step in this direction, with very elegant maths in it at that.
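For readers less familiar with these samplers, here is a minimal sketch of a Metropolis-adjusted Langevin step, just to make the time-discretization and its step size concrete. The standard Gaussian target and all tuning values below are my own illustrative choices, not anything from Natesh's talk.

```python
import numpy as np

def mala(log_target, grad_log_target, x0, step, n_iter, rng=None):
    """Metropolis-adjusted Langevin algorithm: an Euler discretisation of a
    Langevin diffusion, corrected by a Metropolis-Hastings accept/reject step."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for i in range(n_iter):
        # Langevin proposal: drift along the gradient of the log-target plus Gaussian noise.
        mean_fwd = x + 0.5 * step * grad_log_target(x)
        prop = mean_fwd + np.sqrt(step) * rng.standard_normal(x.size)
        mean_bwd = prop + 0.5 * step * grad_log_target(prop)
        # Log proposal densities (up to a common constant), needed for the MH correction.
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * step)
        log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2 * step)
        log_alpha = log_target(prop) - log_target(x) + log_q_bwd - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples[i] = x
    return samples

# Illustration on a standard Gaussian target; `step` is the discretization
# whose optimal scaling is the object of the talk mentioned above.
draws = mala(lambda x: -0.5 * np.sum(x ** 2), lambda x: -x,
             x0=np.zeros(5), step=0.5, n_iter=2000)
```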

However, precisely because this talk was so close to home (and because I had the pleasure of reading Natesh's paper earlier), I decided to attend instead Michael Jordan's Neyman Lecture on Non-Parametric Bayes. Jordan's salient trait is that he is one of those all-too-rare bridges between Machine Learning and Statistics, able to speak both languages and equally enthusiastic about the two fields. That gives his work a special insight. In particular, we learned that

  • He finds prior distributions painful (who doesn't?), and considers that the biggest strength of Bayesian statistics is instead the power of hierarchical modeling, which makes it extremely easy to "Divide, Shrink, And Conquer" our problems. I especially like this link to algorithmics; see below.
  • This latter point actually comes from computer science (CS) and algorithmics (a field dear to my heart; that's where I started). A very nice take is that CS and Statistics split in the 1940s: CS became better at dealing with data structures and the algorithms to handle them, while Statistics grew stronger at managing uncertainty. Since the 90s, the two fields have been getting closer again; hierarchical modeling is one way for statistics to move beyond the "vector of parameters" into more elaborate structures, to which we can apply the strength of probabilistic inference. Delightful, and even more so as it matches my own experience and path, evolving from an undergraduate degree in CS/Engineering to a PhD in Mathematical Statistics! I would risk a parallel between this evolution of statistics and the evolution of programming from procedural to Object Oriented: more structure for more powerful coding. My thoughts on the use of object-oriented programming in stats could fill a blog post in themselves!
  • The core of the talk consisted of a broad overview of Non-Parametric Bayes (NPB), which makes full use of hierarchical models and avoids having to choose a prior, replacing prior parameters by random probability measures. The very way those methods work (Pólya Urn, Stick Breaking, Hoppe Urn / Chinese Restaurant Process, …) is intrinsically algorithmic, reminding me of some of the pleasures of discrete mathematics in CS, while still relying on remarkable probability theory papers of the 60s (a minimal stick-breaking sketch follows this list).
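To make that algorithmic flavour concrete, here is a minimal sketch of the stick-breaking construction of a Dirichlet Process draw, truncated to a finite number of atoms. It is only an illustration of the construction, not anything from Jordan's lecture; the concentration parameter and the standard normal base measure are arbitrary choices of mine.

```python
import numpy as np

def stick_breaking_dp(alpha, base_measure, n_atoms, rng=None):
    """Truncated stick-breaking construction of a Dirichlet Process draw.

    Break a unit-length stick with Beta(1, alpha) proportions to obtain the
    weights, and attach to each weight an atom drawn from the base measure G0.
    """
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.beta(1.0, alpha, size=n_atoms)                    # stick-break proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining                                   # piece lengths = DP weights
    atoms = base_measure(n_atoms, rng)                            # atom locations from G0
    return weights, atoms

# Toy usage: standard normal base measure, concentration alpha = 2, 100 atoms.
weights, atoms = stick_breaking_dp(
    alpha=2.0,
    base_measure=lambda n, rng: rng.normal(size=n),
    n_atoms=100,
    rng=np.random.default_rng(0))
```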

An inspiring talk, which made me itch to resume the work we started with Lane Burgette last year on a "balanced" variation of the Dirichlet Process! (Note to my boss: nothing to worry about, I know very well what I have to work on before that!)

The choice of the second session of the day was kind of a no-brainer for me: I was talking. I won't comment on my own talk, obviously, but I was really impressed with the work done by my fellow finalists in the Applied Methodology category. It is a pity that the other finalist in the Theory and Methods category could not be present today, as I would have loved to meet him. Robin Ryder's application of MCMC to reconstructing nine thousand years of language evolution was a fascinating topic, with remarkable results linking apparently unrelated languages! Ricardo Lemos presented a very thorough investigation of ocean warming off the coasts of Portugal, going deep into the interactions between winds, currents, and temperatures: just when I thought the study was as detailed as could be, I discovered a whole new part of it, an extension from the coasts of Portugal to the whole Atlantic Ocean!

The only downside to the session was the relatively small attendance: we were up against very strong competition, a session on Bayesian Model Assessment organized by Christian Robert and featuring Andrew Gelman, Jean-Michel Marin, and Merlise Clyde. I would have attended it myself, and was told that people could not even find a seat, such was the crowd.

Afternoon sessions came very quickly after a very pleasant lunch in the great company of the chairman of my session and of a French friend (whom I won't name, to preserve his reputation as a gourmet) being initiated into Macaroni and Cheese (I do have pictures but have been forbidden to put them online).

The session on Approximate Bayesian Computation was pretty short, as, unfortunately, all but one speaker pulled out of the program. Nevertheless, quality being better than quantity, we were well served: Dennis Prangle's article with Paul Fearnhead on the choice of summary statistics will be a read paper next December at the Royal Statistical Society. Unfortunately, to fit the short allotted time, he refrained from presenting what I think is the most interesting part of it: the calibration of ABC obtained by, surprisingly, adding extra noise to the summary statistics. I look forward to December, when he will have the full extent of an hour, plus a preliminary session, to go into those details.
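For context, here is a toy sketch of rejection ABC with a single summary statistic, with an extra noise term inserted where, as I understand it, such a perturbation of the summary would enter. This is only my own illustrative reading, not Fearnhead and Prangle's actual calibration scheme; the Gaussian example and every tuning value are assumptions.

```python
import numpy as np

def noisy_abc_rejection(observed_summary, simulate, summarize, prior_sample,
                        n_sims, tolerance, noise_sd, rng=None):
    """Toy rejection ABC: accept parameter draws whose (noise-perturbed)
    simulated summary statistic lands within `tolerance` of the observed one."""
    rng = np.random.default_rng() if rng is None else rng
    accepted = []
    for _ in range(n_sims):
        theta = prior_sample(rng)                        # draw from the prior
        data = simulate(theta, rng)                      # simulate pseudo-data
        s = summarize(data) + rng.normal(0.0, noise_sd)  # summary, deliberately perturbed
        if abs(s - observed_summary) < tolerance:
            accepted.append(theta)
    return np.array(accepted)

# Toy illustration: infer the mean of a Gaussian, summarizing data by its sample mean.
post = noisy_abc_rejection(
    observed_summary=1.3,
    simulate=lambda th, rng: rng.normal(th, 1.0, size=50),
    summarize=np.mean,
    prior_sample=lambda rng: rng.normal(0.0, 5.0),
    n_sims=20000, tolerance=0.1, noise_sd=0.05)
```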

Finally, the crowd went wild for Sir David R. Cox's ASA President's Invited Address. As shown here, in spite of the huge ballroom, a massive number of people had to stand in the hallway throughout the whole hour. The topics touched upon by Sir Cox in his overview of the current statistical field are just too many to be covered in a blog post. I especially noted his very encompassing comparison of the Bayesian and Frequentist approaches, and his view that formal statistical theory is essentially conceptual, even if it is then mathematical and computational in its implementation; a slightly different take from the one Andrew Gelman presented yesterday. As an extra source of awe, I was taken aback when I realized that Sir Cox is 87 years old: his energy on stage, walking it the whole time, never sitting, never still, was surely nowhere near what you would expect at that age. I have seen more static talks from people my age!

On the social side, I finished the day at the NISS/SAMSI reception, which lived up to its long-earned reputation: best food and drinks in town! More seriously, it was a delight to see familiar faces from my first postdoc and first year in the US, with a regret for those who could not make it (Jamie, Rita, Pierre!). That is, I think, the real strength of JSM: getting to see all of your American friends and colleagues at the same time, in one place, in one week. The massive size of JSM, which causes its downsides, thus also provides its greatest perk.


1 Response to “Big 2nd day over at JSM!”


  1. Jamie Nunnelly, August 2, 2011 at 1:41 am

    Thanks for a great recap, Julien. At least Rita, Pierre and I can feel like we are there by reading your blog!

