
A misleading title… [book review]

(by Christian Robert)

When I received this book, Handbook of Fitting Statistical Distributions with R, by Z. Karian and E.J. Dudewicz, for the Short Book Reviews section of the International Statistical Review, I was obviously impressed by its size (around 1700 pages and 3 kilos…). From briefly glancing at the table of contents, and the list of standard distributions appearing as subsections of the first chapters, I thought that the authors were covering different estimation/fitting techniques for most of the standard distributions. After taking a closer look at the book, I think the cover is misleading in several respects: this is not a handbook (a.k.a. a reference book), it does not cover standard statistical distributions, the R input is marginal, and the authors only wrote part of the book, since about half of the chapters were written by other authors…


Error and Inference (part 1)

(by Christian Robert)

“The philosophy of science offers valuable tools for understanding and advancing solutions to the problems of evidence and inference in practice”—D. Mayo & A. Spanos, p.xiv, Error and Inference, 2010

Deborah Mayo kindly sent me her book, whose subtitle is “Recent exchanges on experimental reasoning, reliability, and the objectivity and rationality of Science” and whose contributors are P. Achinstein, A. Chalmers, D. Cox, C. Glymour, L. Laudan, A. Musgrave, and J. Worrall, plus both editors, Deborah Mayo and Aris Spanos. Deborah Mayo rightly inferred that this debate was bound to appeal to my worries about the nature of testing and model choice and to my layman interest in the philosophy of Science. Speaking of which [layman], the book reads really well, even though I am clearly missing references to Mayo’s and others’ earlier works, and even though it cannot be read under my cherry tree (esp. now that the weather has jumped, as a pun on the national public radio had it this morning, straight from summer into autumn). Deborah Mayo is clearly the driving force in putting this volume together, from setting up the ERROR 06 conference to commenting on the chapters of all contributors (except her own and Aris Spanos’). Her strongly frequentist perspective on the issues of testing and model choice is thus reflected in the overall tone of the volume, even though the contributors bring some contradictory views to the debate. A complete book review was published in the Notre Dame Philosophical Reviews.

“However, scientists wish to resist relativistic, fuzzy, or post-modern turns (…) Notably, the Popperian requirement that our theories are testable and falsifiable is widely regarded to contain important insights about responsible science and objectivity.”—D. Mayo & A. Spanos, p.2, Error and Inference, 2010

Given the philosophical, complex, and interesting nature of the work, I will split my comments into several linear posts (hence the part 1), as I did for Evidence and Evolution. The following comments are thus about a linear (even pedestrian) and incomplete read through the first three chapters. These comments do not pretend to any depth; they simply reflect the handwritten notes, thoughts, and counterarguments I scribbled as I was reading through… As illustrated by the above quote (whose first part I obviously endorse), the overall perspective in the book is Popperian, despite Popper’s criticism of statistical inference as a whole. Another fundamental concept throughout the book is the “Error-Statistical philosophy”, of which Deborah Mayo is the proponent. One of the tenets of this philosophy is a reliance on statistical significance tests in the Fisher-Neyman-Pearson (or frequentist) tradition, along with a severity principle (“We want hypotheses that will allow for stringent testing so that if they pass we have evidence of a genuine experimental effect“, p.19) stated as (p.22)

A hypothesis H passes a severe test T with data x if

  1. x agrees with H, and
  2. with very high probability, test T would have produced a result that accords less well with H than does x, if H were false or incorrect.

(The p-value is advanced as a direct accomplishment of this goal, but I fail to see why it does or why a Bayes factor would not. Indeed, the criterion depends on the definition of probability when H is false or incorrect. This relates to Mayo’s criticism of the Bayesian approach, as explained below.)
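In symbols, and only as a rough sketch in my own notation (with d(X) some measure of accordance between the data and H; this is not a formula quoted from the book), requirement 2 amounts to asking that

SEV(T, x, H) = P\big(d(X)\ \text{accords less well with}\ H\ \text{than}\ d(x)\ \text{does}\,;\ H\ \text{false}\big)

be very high; the difficulty raised above is precisely how this probability “under H false” is to be defined.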

“Formal error-statistical tests provide tools to ensure that errors will be correctly detected with high probabilities”—D. Mayo, p.33, Error and Inference, 2010

In Chapter 1, Deborah Mayo has a direct go at the Bayesian approach. The main criticism of the Bayesian approach to testing (defined through the posterior probability of the hypothesis, rather than through the predictive) concerns the catchall hypothesis, a somewhat desultory term replacing the alternative hypothesis. According to Deborah Mayo, this alternative should “include all possible rivals, including those not even thought of” (p.37). This sounds like a weak argument, although it was also used by Alan Templeton in his rebuttal of ABC, given that (a) it should also apply in the frequentist sense, in order to define the probability distribution “when H is false or incorrect” (see, e.g., “probability of so good an agreement (between H and x) calculated under the assumption that H is false”, p.40); (b) a well-defined alternative should be available, as testing a hypothesis is very rarely the end of the story: if H is rejected, there should/will be a contingency plan; (c) rejecting or accepting a hypothesis H in terms of the sole null hypothesis H does not make sense from an operational or a game-theoretic perspective. The further argument that the posterior probability of H is a direct function of the prior probability of H does not stand against the Bayes factor. (The same applies to the criticism that the Bayesian approach does not accommodate newcomers, i.e., new alternatives.) Stating that “one cannot vouch for the reliability of [this Bayesian] procedure—that it would rarely affirm theory T were T false” (p.37) completely ignores the wealth of results about the consistency of the Bayes factor (since the “asymptotic long run”, p.20, matters in the Error-Statistical philosophy). The final argument that Bayesians rank “theories that fit the data equally well (i.e., have identical likelihoods)” (p.38) does not account for (or dismisses, p.50, referring to Jeffreys and Berger instead of Jefferys and Berger) the fact that Bayes factors are automated Occam’s razors, in that the averaging of the likelihoods over spaces of different dimensions is a natural advocate of simpler models. Even though I plan to discuss this point in a second post, Deborah Mayo also seems to imply that Bayesians are using the data twice (this is how I interpret the insistence on p.50), which is a sin [genuine] Bayesian analysis can hardly be found guilty of!
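For reference, the Bayes factor alluded to above is the standard ratio of marginal likelihoods (a textbook formula, not one quoted from the book), comparing H_0:\ \theta\in\Theta_0 with the alternative H_1:\ \theta\in\Theta_1:

B_{01}(x) = \dfrac{\int_{\Theta_0} f(x|\theta)\,\pi_0(\theta)\,\text{d}\theta}{\int_{\Theta_1} f(x|\theta)\,\pi_1(\theta)\,\text{d}\theta}.

Because each likelihood is averaged over its own parameter space, a larger model pays for its extra dimensions unless the data genuinely support them, which is the automated Occam’s razor effect mentioned above.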

another lottery coincidence

(by Christian Robert)

Once again, meaningless figures are published about a man who won the French lottery (Le Loto) for the second time. The reported probability of the event is indeed one chance out of 363 (US) trillions (i.e., billions on the long scale, or 363×10^12)… This number is simply the square of

{49 \choose 5}\times{10 \choose 1} = 19,068,840

which is the number of possible Loto grids. Thus, the probability applies to the event “Mr so-&-so plays a winning grid of Le Loto on May 6, 1995 and a winning grid of Le Loto on July 27, 2011”. But this is not the event that occurred: one of the winners of the twice-weekly Le Loto draws won a second time, and this was spotted by Le Loto spokespersons. If we take the specific winner of today’s draw, Mrs such-&-such, who has played a single grid at each of the twice-weekly draws since the creation of Le Loto in 1976, i.e. about 3640 times, the probability that she won earlier is of the order of

1-\left(1-\frac{1}{{49\choose 5}\times{10\choose 1}}\right)^{3640}\approx 2\cdot 10^{-4}.

There is thus about one chance in five thousand that a given (single-grid) winner wins again: not much indeed, but no trillions involved either. Now, this is also the probability that, for a given draw (like today’s draw), one of the 3640 previous winners wins again (assuming they all play only one grid, play independently from each other, etc.). Over a given year, i.e. over 104 draws, the probability that there is no second-time winner is thus approximately

\left(1-2\cdot 10^{-4}\right)^{104} \approx 0.98,

showing that within a year there is about a 2% chance of finding an earlier winner who wins again. Not so extreme, is it?! And therefore much less bound to make the headlines…

Now, the above are rough and conservative calculations. The newspaper articles about the double winner report that the man is playing about 1000 euros a month (this is roughly the minimum wage!), representing the equivalent of 62 grids per draw (again I am simplifying to get the correct order of magnitude). If we repeat the above computations, assuming this man has played 62 grids per draw from the beginning of the game in 1976 till now, the probability that he wins again conditional on the fact that he won once is

1-\left(1-\frac{62}{{49 \choose 5}\times{10 \choose 1}}\right)^{3640} \approx 0.012,

a small but not impossible event. (And again, we compute the probability for Mr so-&-so alone, whereas the event of interest, some past winner winning again, is not restricted to him.) (I wrote this post before Aleks Jakulin pointed out the four-time lottery winner in Texas, whose “luck” seems more related to the imperfections of the lottery process…)
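These numbers are easy to check; here is a minimal R sketch reproducing the three computations above, under the same simplifying assumptions as in the text:

    # number of possible Loto grids: 5 numbers out of 49 plus 1 out of 10
    n_grids <- choose(49, 5) * choose(10, 1)   # 19,068,840
    # a single-grid player's chance of winning at least once over 3640 draws
    p_single <- 1 - (1 - 1 / n_grids)^3640     # about 2e-4
    # chance of no repeat winner among ~3640 past winners over 104 draws
    (1 - p_single)^104                         # about 0.98
    # a 62-grid-per-draw player's chance of winning at least once over 3640 draws
    1 - (1 - 62 / n_grids)^3640                # about 0.012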

I also stumbled on this bogus site providing the “probabilities” (based on the binomial distribution, nothing less!) for each digit in Le Loto; no need for further comments. (Even the company that runs Le Loto hints at such practices, by providing the number of consecutive draws in which a given number has not appeared, with the sole warning “N’oubliez jamais que le hasard ne se contrôle pas“, i.e. “Always keep in mind that chance cannot be controlled”…!)

Numerical analysis for statisticians [a review]

(Post contributed by Christian Robert.)

“In the end, it really is just a matter of choosing the relevant parts of mathematics and ignoring the rest. Of course, the hard part is deciding what is irrelevant.”

Somehow, I had missed the first edition of this book and thus I started reading it this afternoon with a newcomer’s eyes (obviously, I will not comment on the differences with the first edition, sketched by the author in the Preface). Past the initial surprise of discovering it was a mathematics book rather than an algorithmic book, I became engrossed in my reading and could not let it go! Numerical Analysis for Statisticians, by Kenneth Lange, is a wonderful book. It provides most of the necessary background in calculus and some algebra to conduct rigorous numerical analyses of statistical problems. This includes expansions, eigen-analysis, optimisation, integration, approximation theory, and simulation, in less than 600 pages. It may be due to the fact that I was reading the book in my garden, with the background noise of the wind in the tree leaves, but I cannot find anything solid to grumble about! Not even about the MCMC chapters! I simply enjoyed Numerical Analysis for Statisticians from beginning to end.

“Many fine textbooks (…) are hardly substitutes for a theoretical treatment emphasizing mathematical motivations and derivations. However, students do need exposure to real computing and thoughtful numerical exercises. Mastery of theory is enhanced by the nitty gritty of coding.”

From the above, it may sound as if Numerical Analysis for Statisticians does not fulfill its purpose and is too much of a mathematical book. Be assured this is not the case: the contents are firmly grounded in calculus (analysis), but the (numerical) algorithms are only one code away. An illustration (among many) is found in Section 8.4: Finding a Single Eigenvalue, where Kenneth Lange shows how the Rayleigh quotient algorithm of the previous section can be exploited to this aim, when supplemented with a good initial guess based on Gerschgorin’s circle theorem. This is brilliantly executed in two pages and the code is just one keyboard away. The EM algorithm is immersed in a larger M[&]M perspective. Problems are numerous and mostly of a high standard, meaning one (including me) has to sit and think about them. References are kept to a minimum; they are mostly (highly recommended) books, plus a few papers primarily exploited in the problem sections. (When reading the Preface, I found that “John Kimmel, [his] long suffering editor, exhibited extraordinary patience in encouraging [him] to get on with this project”. The quality of Numerical Analysis for Statisticians is also a testimony to John’s editorial acumen!)
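To give a concrete flavour of that section, here is a minimal R sketch of Rayleigh-quotient iteration seeded with a Gershgorin disc centre; this is only an illustration of the idea, not code taken from the book:

    # Rayleigh-quotient iteration for a symmetric matrix A, started from an
    # eigenvalue guess mu (here a Gershgorin disc centre, i.e. a diagonal entry).
    rayleigh_iterate <- function(A, mu, tol = 1e-8, maxit = 50) {
      n <- nrow(A)
      x <- rep(1, n) / sqrt(n)                            # arbitrary unit start vector
      for (i in seq_len(maxit)) {
        if (sqrt(sum((A %*% x - mu * x)^2)) < tol) break  # small residual: converged
        y <- solve(A - mu * diag(n), x)                   # one inverse-iteration step
        x <- y / sqrt(sum(y^2))
        mu <- drop(crossprod(x, A %*% x))                 # updated Rayleigh quotient
      }
      list(value = mu, vector = x)
    }
    A <- matrix(c(4, 1, 0,
                  1, 3, 1,
                  0, 1, 2), nrow = 3, byrow = TRUE)
    rayleigh_iterate(A, mu = A[1, 1])   # Gershgorin disc centred at A[1, 1]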

“Every advance in computer architecture and software tempts statisticians to tackle numerically harder problems. To do so intelligently requires a good working knowledge of numerical analysis. This book equips students to craft their own software and to understand the advantages and disadvantages of different numerical methods. Issues of numerical stability, accurate approximation, computational complexity, and mathematical modeling share the limelight in a broad yet rigorous overview of those parts of numerical analysis most relevant to statisticians.”

While I am reacting so enthusiastically to the book (imagine, there is even a full chapter on continued fractions!), it may be that my French math background is biasing my evaluation and that graduate students around the world would find the book too hard. However, I do not think so: the style of Numerical Analysis for Statisticians is very fluid, and the rigorous mathematics is mostly at the level of undergraduate calculus. The more advanced topics like wavelets, Fourier transforms, and Hilbert spaces are very well introduced and do not require prerequisites in complex calculus or functional analysis. (Although I take no joy in this, even measure theory does not appear to be a prerequisite!) On the other hand, a good background in statistics is a prerequisite. This book will clearly involve a lot of work from the reader, but the respect Kenneth Lange shows his readers should be enough to keep them going until they have assimilated those essential notions. Numerical Analysis for Statisticians is also recommended for more senior researchers, and not only for building one or two courses on the basics of statistical computing. It contains most of the mathematical foundations that we need, even if we do not know we need them! Truly an essential book.

Point Process Crime Prediction

The New York Times reported today on predictive policing, or deploying officers where crimes are predicted to occur in the future. According to the Times, “Based on models for predicting aftershocks from earthquakes, [the method used in Santa Cruz, CA] generates projections about which areas and windows of time are at highest risk for future crimes”. The statistical work was done by George Mohler of Santa Clara University and Martin Short of UCLA. Kudos to them for an interesting and useful application of point processes.
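For context, and only as standard background on the kind of model the Times describes rather than a detail taken from the article, aftershock-style (self-exciting) point process models let past events raise the short-term risk of new ones through a conditional intensity of the form

\lambda(t) = \mu + \sum_{t_i < t} g(t - t_i),

where \mu is a background rate, the t_i are past event times, and g is a triggering kernel; spatio-temporal versions add a kernel in space, and officers are then deployed to the areas where the current estimated intensity is highest.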

Using a “pure infographic” to explore differences between information visualization and statistical graphics

(by Andrew Gelman)

Our discussion on data visualization continues.

On one side are three statisticians–Antony Unwin, Kaiser Fung, and myself. We have been writing about the different goals served by information visualization and statistical graphics.

On the other side are graphics experts (sorry for the imprecision, I don’t know exactly what these people do in their day jobs or how they are trained, and I don’t want to mislabel them) such as Robert Kosara and Jen Lowe, who seem a bit annoyed at how my colleagues and I seem to follow the Tufte strategy of criticizing what we don’t understand.

And on the third side are many (most?) academic statisticians, econometricians, etc., who don’t understand or respect graphs and seem to think of visualization as a toy that is unrelated to serious science or statistics.

I’m not so interested in the third group right now–I tried to communicate with them in my big articles from 2003 and 2004–but I am concerned that our dialogue with the graphics experts is not moving forward quite as I’d wished.

I’m not trying to win any arguments here; rather I’m trying to move the discussion away from “good vs. bad” (I know I’ve contributed to that attitude in the past, and I’m sure I’ll do so again) toward a discussion of different goals.

I’ll try to write something more systematic on the topic, but for now I’d like to continue by discussing examples.

My article with Antony had many many examples but we got so involved in the statistical issues of data presentation that I think the main thread of the argument got lost.

For example, Hadley Wickham, creator of the great ggplot2, wrote:

Unfortunately both sides [statisticians and infographics people] seem to be comparing the best of one side with the worst of the other. There are some awful infovis papers that completely ignore utility in the pursuit of aesthetics. There are many awful stat graphics papers that ignore aesthetics in the pursuit of utility (and often fail to achieve that). Neither side is perfect, and it’s a shame that we can’t work more closely together to get the best of both worlds.

I agree about the best of both worlds (and return to this point at the end of the present post). But I don’t agree that we’re comparing to “the worst of the other.” Sure, sometimes this is true (as in the notorious “chartjunk” paper in which pretty graphs are compared to piss-poor plots that violate every principle of visualization and statistical graphics).

But recent web discussions have been about the best, not the worst. In my long article with Unwin, we discussed the “5 best data visualizations of the year”! In our short article, we discuss Florence Nightingale’s spiral graph, which is considered a data visualization classic. And, from the other side, my impression is that infographics gurus are happy to celebrate the best of statistical graphics.

But in this sort of discussion we have to discuss examples we don’t like. There are some infographics that I love love love–for example, Laura and Martin Wattenberg’s Name Voyager, which is on my blogroll and which I’ve often linked to. But I don’t have much to say about these–I consider them to have the best features of statistical graphics.

In much of my recent writing on graphics, I’ve focused on visualizations that have been popular and effective–Wordle is an excellent example here–while not following what I would consider to be good principles of statistical graphics.

When I discuss the failings of Wordle (or of Nightingale’s spiral, or Kosara’s swirl, or this graph), it is not to put them down, but rather to highlight the gap between (a) what these visualizations do (draw attention to a data pattern and engage the viewer both visually and intellectually) and (b) my goal in statistical graphics (to display data patterns, both expected and unexpected). The differences between (a) and (b) are my subject, and a great way to highlight them is to consider examples that are effective as infovis but not as statistical graphics. I would have no problem with Kosara etc. doing the opposite with my favorite statistical graphics: demonstrating that despite their savvy graphical arrangements of comparisons, my graphs don’t always communicate what I’d like them to.

I’m very open to the idea that graphics experts could help me communicate in ways that I didn’t think of, just as I’d hope that graphics experts would accept that even the coolest images and dynamic graphics could be reimagined if the goal is data exploration.

To get back to our exchange with Kosara, I stand firm in my belief that the swirly plot is not such a good way to display time series data–there are more effective ways of understanding periodicity, and no I don’t think this has anything to do with dynamic vs. static graphics or problems with R. As I noted elsewhere, I think the very feature that makes many infographics appear beautiful is that they reveal the expected in an unexpected way, whereas statistical graphics are more about revealing the unexpected (or, as I would put it, checking the fit to data of models which may be explicitly or implicitly formulated). But I don’t want to debate that here. I’ll quarantine a discussion of the display of periodic data to another blog post.

Instead I’d like to discuss a pure infographic that has no quantitative content at all. It’s a display of strategies of Rock Paper Scissors that Nathan Yau featured a couple weeks ago on his blog:

This is an attractive graphic that conveys some information–but the images have almost nothing to do with the info. It’s really a small bit of content with an attractive design that fills up space.

Difference in perspectives

The graphic in question is titled, “How do I win rock, paper, scissors every time?”, which is completely false. As my literal-minded colleague Kaiser Fung would patiently explain, no, the graph does not tell you how to win the game every time. This is no big deal–it’s nothing but a harmless exaggeration–but it illustrates a difference in perspective. A statistician wouldn’t be caught dead making a knowingly false statement. Conversely, a journalist wouldn’t be caught dead making a boring headline (for example, “Some strategies that might increase your odds in rock paper scissors”).

Who’s right here–the statistician or the journalist? It depends on your goals. I’ll stick with being who I am–but I also recognize that Nathan’s post got 116 comments and who knows how many thousand viewers. In contrast, my post from a few years ago (titled “How to win at rock-paper-scissors,” a bit misleading but much less so than “How to win every time”) had a lot more information and received exactly 6 comments. This is fair enough, I’m not complaining. Visuals are more popular than text, and “popular” isn’t a bad thing. The goal is to communicate, and sacrificing some information for an appealing look is a tradeoff that is often worth it.

Moving forward

Let me conclude with a suggestion that I’ve been making a lot lately. Lead with the pretty graph but then follow up with more information. In this case, Nathan could post the attractive image (and thus still interest his broad readership and inspire them to those 100+ comments) but set it up so that if you click through you get text (in this case, it’s words not statistical graphs) with more detailed information:

(Sorry about the tiny font; I was having difficulty with the screen shots.)

Again I purposely chose a non-quantitative example to move the discussion away from “What’s the best way to display these data?” and focus entirely on the different goals.

Data science vs. statistics: has “statistics” become a dirty word?

(by John Johnson)

Revolution Analytics recently published the results of a poll indicating that JSM 2011 attendees consider themselves “data scientists.” Nancy Geller, President of the ASA, asks statisticians not to “Shun the ‘S’ word.” Yet a third take on the matter is the top tweet from JSM 2011 with Dave Blei’s quote “‘machine learning’ is how you say ‘statistics’ to a computer scientist.”

Comments about selection bias from Revolution’s poll aside (it was conducted as part of the free wifi connection in the expo), the shift from “statistics” to “analytics,” “machine learning,” “data science,” and other terms seems to reflect that calling oneself a “statistician” is just not cool or scares our colleagues. So I open the floor up to the question: has “statistics” become a dirty word?

Why Go to JSM?

(by Julien Cornebise)

For my final post about JSM, based on three years’ attendance in a row (DC, Vancouver, Miami), here is a recap for next year’s potential attendees: Why go to JSM? When is it worth it, and when is it not?

First, the obviously wrong reasons for going: with such a massive monster, whose 15-20 minute talks barely allow for anything but an extended abstract, and with 50 sessions in parallel, you rarely go to JSM for its scientific presentations. JSM is not the place:

  • to learn about recent developments in your field: not enough detailed content in 20 minutes.
  • to get to know someone’s work better: same problem.
  • to get exposure and visibility for your work: same problem; plus, empty sessions happen way too often, since you can’t compete with a panel of world-famous speakers, especially when all you offer is a string of 20-minute talks.
  • to get a wide overview of your area: conflicting sessions on the same topic make it a frustrating experience.

For all of those, specific small conferences (such as MCMCSki in the MCMC field) are way better: more focused interaction, more time for work sessions, more time for exposing ideas and getting constructive feedback. So why the heck come? What makes 5,000 people fly here and spend a whole week? Why am I so glad I attended?

Of course, JSM offers some important community events, most notably its awards sessions and lectures (the COPSS Awards, and the Neyman, Wald, and Medallion Lectures, …) where great contributors to our fields are honored by their peers. Even though we’re all in there for the science, I won’t hide that I, for one, appreciate such public displays of recognition: being scientists does not mean we should never tell those who completely wow us that, indeed, we think they are doing amazing work and that we want to thank them for it! Still, this would not be a sufficient reason by itself to hold such a gigantic and costly meeting.

But JSM’s incredible strength is truly its social side:

  • Nowhere else can you meet all of your US-based colleagues face to face at the same time in the same place, exchanging scientific ideas or just spending some great time in an informal context, getting to know each other better in a relaxed setting.
  • Nowhere else can you see former and new people from all the institutions you’ve worked at, keeping up with what they’re up to, keeping them up with what you’re up to!
  • Nowhere else can you go to dinner with people from all of those institutions at once, getting them to meet, meeting their new colleagues, learning about their recent interests, what’s hot in the field, who’s moving where, why this or that department suddenly went bust, how this or that other one is about to double its size and go on a hiring spree, what interesting specialized workshop is in preparation, etc. JSM is the largest grapevine there is, concentrated over three days.

JSM is like iterating the adjacency matrix of your social graph by several steps: not only do you strengthen your links with colleagues/friends you already know and appreciate, but you also get to know those they know, and find great matches! With the obvious caveat: if you don’t know anyone, then it will be quite difficult to meet new people. I’d recommend going for the first time with a few colleagues from your institution. The least easy profile: the isolated statistician from a foreign country, whose geographical ties (alma mater, former employer) won’t compensate for the lack of people to hang out with, with the notable exception of seizing the occasion to meet someone you’ve only interacted with remotely. The best profile: pretty much any other!

Of course, all of the above is by no means as formal/opportunistic as it may sound. Most of this happens while going to the beach with friends (after the sessions…), going to dinner, sampling terriblific junk food (Five Guys Burgers, 15th and Espanola… I will miss you), living crazy nights on Ocean Drive — note to funding agencies: this never happens, I am just pretending, we are an extremely serious bunch, all of us, no exception. Simply put: most of this is essentially hanging out with friends. With the notable difference that those friends are also our colleagues, and lots of colleagues are also our friends.

And that’s why, in spite of all its flaws, this massive meeting is so enjoyable: work and fun do mix, friends and colleagues do mix, and real long-term highlights come out of it. After all, we’re all in here for the different faces of a common passion! See you next year.

JSM treat for the road: Significance Magazine

(by Julien Cornebise)

That’s it. It’s over. Done. Gone. RIP JSM 2011. ’til next year. A great week!
Yesterday’s convention center was a mix between an airport and Saturday’s ghost town: only a fraction of the people were still here, most of them carrying suitcases. There should not be any talks on the last day 😉 And, although there was no big 2-hour Lecture to attend, I still had a hard time choosing between

The 15-minute shortness of the former’s talks put me off, while curiosity about this magazine that Xian blogged about, the challenge of talking stats to non-statisticians, and my own wish for a version of “Popular Science” on steroids decided me in favour of the latter.

Boy was I glad: after a short introduction outlining the aim of Significance and calling for contributors (think of it, for you or your PhD students, it looks like a great experience!), we were treated to three very enjoyable talks by authors of recent cover papers:

Howard Wainer on how missing data can lead to dire policies, and how just a few extra data points can be of precious help in avoiding dramatic mistakes, with striking illustrations in Education that are also available in his book. This was thought-provoking: as a first move, I might tend to integrate out the missing data using the EM algorithm or Data Augmentation, hence assuming that the missing data are distributed similarly to the observed data. Wrong! Howard’s examples were some of those “a-ha!” moments, where you just realize that the original strategy amounted to standing on your head. Three examples (a toy simulation of the common pitfall follows them):

  • Allowing students to pick a subset of the possible questions in a test, so as to make it fairer. Wrong. A quick study on one class showed that it tends to worsen inequality: weak students choose poorly and pick the hardest questions, failing them. Consequence of assuming the missing data are random: widening the score gap with the better students, who picked the easiest questions.
  • Eliminating tenure for teachers to save money. Wrong. Looking back at the 1991 elimination of tenure for superintendents showed that salaries increased massively. Most likely explanation: tenure is a job benefit that costs the employer nothing; removing it requires raising salaries to compensate. Consequence of assuming the missing data are random: increasing expenses.
  • Making SAT score disclosure optional for college admission. Wrong. Studying the withheld SAT scores at the one college that has done this for 40 years shows that students choose rationally whether to disclose their score: very few of the type “I did very well on the SAT, but so what?”, many of the type “I scored below the average entry score, so disclosing it won’t help my chances of getting in”. Consequence of assuming the missing data are random: those students picked classes that they then failed, as they lacked too many prerequisites. A thought here: it would also have been interesting to compare them not only with students who disclosed their scores, as Howard did, but with students with similar scores who went to other universities: did access to harder classes than they would usually have been allowed into help them in the long term?
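As a toy illustration of the common thread in these examples (my own two-line simulation, not one of Wainer's), suppose weaker students withhold their score: treating the missing values as if they looked like the observed ones badly biases the naive estimate.

    set.seed(1)
    score <- rnorm(1e5, mean = 500, sd = 100)   # true scores of all students
    disclosed <- score > 450                    # weaker students tend to withhold
    mean(score)                                 # truth: about 500
    mean(score[disclosed])                      # observed-only estimate: about 550, biased upward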

Andrew Solow on the Census of Marine Life (2000-2010): how many species are there, and is a given species extinct? There were some striking statistical problems, again due to non-uniform missing data: the data are missing because some species are harder to observe in our usual surroundings! So there is more to it than the abstract problem of estimating the number of classes in multinomial sampling, or of estimating the end-point of a distribution (a tricky problem in itself already).
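To make the "number of classes" problem concrete, here is one classical estimator (Chao's lower bound) in a few lines of R; this is only an illustration of the generic problem, not necessarily a method used in the Census of Marine Life:

    # Chao1 lower bound on species richness from a vector of species counts
    chao1 <- function(counts) {
      s_obs <- sum(counts > 0)       # species actually observed
      f1 <- sum(counts == 1)         # singletons
      f2 <- sum(counts == 2)         # doubletons
      s_obs + if (f2 > 0) f1^2 / (2 * f2) else f1 * (f1 - 1) / 2
    }
    chao1(c(8, 5, 3, 2, 1, 1, 1))    # 7 species seen, estimated richness 11.5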

Finally, and most anchored in current events, Ian MacDonald’s brilliant talk on the BP Discharge in the Gulf of Mexico (I learned this is a more precise term than “Deepwater oil spill”: it was not Deepwater in charge but BP, and it was not an overboard spill but a discharge from a reservoir).
This one was one for the records: a precise and scientific study of the estimates of the size of the discharge, based on the speaker’s experience with the natural oil seeps occurring every day in the Gulf. Beyond the beautiful/appalling before/after pictures, and the pleasant feeling of the modest scientist being (sadly) proved right against the massive corporation, there was a fascinating scientific chase for the source of the discrepancies among the estimates. Ian brilliantly tracked it down to the table linking the thickness of the surface oil slick to its color (rainbow, metallic, light-brown, dark), a thickness which is multiplied by the surface area to estimate the volume: while all of the scholarly studies use one table, the oil companies (BP, Exxon) use one provided by the US Coast Guard with a 100-fold downward error for the thickest levels — precisely the ones that matter when disaster strikes!

The dramatic consequences of this error are well-known: we are not talking about indemnities, but about a dramatic error on the pressure escaping the well, leading to the failure of the blockage attempts — an error confirmed when the videos of the leak were finally released and particle-velocity experts were able to confirm overnight that the flow was much larger than officially stated.

Ian did not conclude with an obvious “who’s to blame”, which would have been too easy (and obvious…), but focused on the question: what will the long-lasting impact be? His study of the spatial distribution of the natural seeps, quite different from that of the BP discharge, puts to rest the idea that the ecosystem is somehow immunized. We are left with the challenge of designing a statistical test for that unwanted massive experiment. Ian called for two concrete measures:

  • Identify and monitor key habitats and populations to check ecosystem health.
  • Put the repair of the ecosystem at the front of the line, using BP’s fine to that effect.

In conclusion, a most pleasant session, a treat for those of us who could stay for this last day, and a most interesting magazine: I’ll definitely think about contributing!

Stay tuned for a final post later tonight, before I hand back the keys of the blog to its editor.

JSM impressions (day 4)

(by Christian Robert)

Another early day at JSM 2011, with a series of appointments at the Loews Hotel, whose only public outcome is that the vignettes on Bayesian statistics I called for in a previous post could end up being published in Statistical Science… I still managed to get back to the conference centre (almost) in time for Chris Holmes’ talk. Although I am sure Julien will be much more detailed about this Medallion Lecture, let me say that it was a very enjoyable and informative talk about the research Chris has brilliantly conducted so far! I very much liked the emphasis on decision theory, subjective Bayesianism, and hidden Markov models, while the application section was definitely impressive in the scope of the problems handled and the rich outcome of Chris’ statistical analyses, especially in connection with cancer issues…

In the afternoon I attended a Bayesian non-parametric session, before joining many others for the COPSS Awards session, where, for the first time, the same person, Nilanjan Chatterjee, received two of the awards.


About

The Statistics Forum, brought to you by the American Statistical Association and CHANCE magazine, provides everyone the opportunity to participate in discussions about probability and statistics and their role in important and interesting topics.

The views expressed here are those of the individual authors and not necessarily those of the ASA, its officers, or its staff. The Statistics Forum is edited by Andrew Gelman.
