Please Note: This seminar series has finished. For information on our current seminar series please see our Seminar Series webpage.
This seminar will be held in the Board Room, OUCE, at 1pm on Friday the 23rd of February. For further information contact Dr Mark New.
Week 6: Tuesday 23 January 2007
Evaluating Probabilistic Forecasts.
Dr Jochen Broecker, Centre for the Analysis of Time Series, LSE.
Abstract: Better forecasts naturally allow the forecast user to make more informed decisions. In this respect, probabilistic forecasts are, at least in principle, superior to deterministic forecasts, as they allow for a better assessment of potential risks. Probabilistic forecasting has therefore been a longstanding aim in weather and climate, where the use of ensembles to convey probabilistic information has become increasingly common.
Unlike deterministic forecasts, probabilistic forecasts (ensemble forecasts or otherwise) cannot be evaluated simply by how far they missed the truth. How, then, do we know whether our probabilistic forecast is "good"? How do we approach questions about allocating limited resources, for example between using a more costly atmospheric model and using a larger ensemble? Once we have an ensemble forecast, how is a user to interpret and exploit the additional information in all these model runs? And how can the user decide which of the many commercially available forecasts is most valuable to him? These questions are investigated in the context of medium-range weather forecasts. A variety of tools that aid in forecast verification will be explored, including the construction of reliability diagrams and a discussion of the strengths and weaknesses of various skill scores that can be applied to any probability forecast.
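To make the verification tools mentioned above concrete, the following is a minimal sketch (not the speaker's own method) of two standard ingredients: the Brier score, a widely used score for probability forecasts of a binary event, and the binning behind a reliability diagram. The forecast probabilities and outcomes below are invented purely for illustration.

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability and the
    binary outcome (1 = event occurred, 0 = it did not)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def reliability_bins(probs, outcomes, n_bins=10):
    """Group forecasts into probability bins; for each non-empty bin,
    return (mean forecast probability, observed frequency, count).
    Plotting mean probability against observed frequency gives a
    reliability diagram: a well-calibrated forecast lies near the diagonal."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(probs, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p = 1.0 into the top bin
        bins[i].append((p, o))
    result = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            obs_freq = sum(o for _, o in b) / len(b)
            result.append((mean_p, obs_freq, len(b)))
    return result

# Invented example: eight probability forecasts of a binary event
# (say, "rain tomorrow") and the observed outcomes.
probs = [0.1, 0.8, 0.3, 0.9, 0.2, 0.7, 0.5, 0.6]
outcomes = [0, 1, 0, 1, 1, 1, 0, 1]
print(brier_score(probs, outcomes))            # ≈ 0.161 (lower is better)
print(reliability_bins(probs, outcomes, n_bins=5))
```

A perfect deterministic-style forecast (probability 1 whenever the event occurs, 0 otherwise) scores 0; the score penalises both miscalibration and lack of sharpness, which is why it is a common building block for the skill scores the abstract refers to.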