
We are presenting at the ISF2020 Invited Session

Our lab members will present our work virtually at an invited session of the 40th International Symposium on Forecasting (ISF2020).

Session: Forecast Combination

Time: Monday, October 26, 17:00–18:00 (GMT+8)


  • Yanfei Kang (Speaker) Associate Professor, School of Economics and Management, Beihang University
  • Xiaoqian Wang (Speaker) PhD student, Beihang University
  • Xixi Li (Speaker) The University of Manchester
  • Feng Li (Speaker) Assistant Professor, School of Statistics and Mathematics, Central University of Finance and Economics

Chair: Yanfei Kang

  • Yanfei Kang (Speaker) Associate Professor, School of Economics and Management, Beihang University

Forecast combination has been widely applied over the last few decades to improve forecast accuracy. In recent years, the idea of using time series features to construct forecast combination models has flourished in the forecasting literature. Although this idea has proved beneficial in several forecasting competitions, such as the M3 and M4 competitions, it may not be practical in many situations. For example, selecting appropriate features to build forecasting models can be a major challenge for many researchers, and the resulting features can be hard to interpret, making it difficult to extract valuable information from them. Hence, improving the interpretability of forecast combination is crucial to making it feasible in practical applications. In this work, we treat the diversity of a pool of algorithms as an alternative to state-of-the-art time series features and use meta-learning to construct diversity-based forecast combination models. A rich set of time series is used to evaluate the performance of the proposed method. Experimental results show that our diversity-based combination forecasting framework not only simplifies the modeling process but also achieves superior forecasting performance.
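As a rough illustration of the diversity idea (not the meta-learning model itself), the sketch below computes a pairwise-diversity matrix for a small pool of forecasts and combines them with equal weights; the pool values are made up for illustration:

```python
import numpy as np

def pairwise_diversity(forecasts):
    """Mean squared difference between each pair of methods' forecasts.

    forecasts: array of shape (n_methods, horizon).
    Returns a symmetric (n_methods, n_methods) diversity matrix.
    """
    n = forecasts.shape[0]
    div = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            div[i, j] = np.mean((forecasts[i] - forecasts[j]) ** 2)
    return div

def combine(forecasts, weights=None):
    """Weighted combination of the pooled forecasts (equal weights by default)."""
    n = forecasts.shape[0]
    if weights is None:
        weights = np.full(n, 1.0 / n)
    return weights @ forecasts

# A made-up pool of three methods forecasting a 4-step horizon
pool = np.array([
    [10.0, 11.0, 12.0, 13.0],
    [10.5, 11.5, 12.5, 13.5],
    [ 9.0, 10.0, 11.0, 12.0],
])
print(pairwise_diversity(pool))  # off-diagonal entries summarise pool diversity
print(combine(pool))
```

In the full framework, diversity statistics like these feed a meta-learner that outputs the combination weights, rather than fixing them to be equal.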

  • Xiaoqian Wang (Speaker) PhD student, Beihang University

Providing forecasts for ultra-long time series plays a vital role in various activities, such as investment decisions, industrial production arrangements, and farm management. This paper develops a novel distributed forecasting framework to tackle the challenges of forecasting ultra-long time series using the industry-standard MapReduce framework. The proposed model combination approach facilitates distributed time series forecasting by combining the local estimators of ARIMA (AutoRegressive Integrated Moving Average) models delivered from worker nodes and minimizing a global loss function. In this way, instead of unrealistically assuming that the data generating process (DGP) of an ultra-long time series stays invariant, we make assumptions only on the DGP of subseries spanning shorter time periods. We investigate the performance of the proposed distributed ARIMA models on an electricity demand dataset. Compared to ARIMA models, our approach significantly improves forecasting accuracy and computational efficiency in both point forecasts and prediction intervals, especially for longer forecast horizons. Moreover, we explore some potential factors that may affect the forecasting performance of our approach.
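The map/reduce flavour of the framework can be sketched in a few lines. The toy below fits local AR(1) models by least squares (standing in for the worker-node ARIMA estimators) and simply averages the local estimates; the actual framework combines them by minimizing a global loss function rather than by a plain average:

```python
import numpy as np

def fit_ar1(y):
    """Least-squares AR(1) fit: y_t = c + phi * y_{t-1} + e_t.
    A stand-in for the worker-node ARIMA estimate."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c, phi

def distributed_ar1(y, n_workers):
    """Map step: fit an AR(1) on each subseries.
    Reduce step: average the local estimators (a simplified combination)."""
    subseries = np.array_split(y, n_workers)
    params = np.array([fit_ar1(s) for s in subseries])
    return params.mean(axis=0)

# Simulate a long AR(1) series with c = 1.0 and phi = 0.6
rng = np.random.default_rng(0)
y = np.empty(10000)
y[0] = 2.5
for t in range(1, len(y)):
    y[t] = 1.0 + 0.6 * y[t - 1] + rng.normal(scale=0.5)

c_hat, phi_hat = distributed_ar1(y, n_workers=8)
print(c_hat, phi_hat)  # should be close to (1.0, 0.6)
```

Because each worker only sees a short subseries, only the subseries-level DGP needs to be (approximately) stable, which is the point of the framework.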

  • Xixi Li (Speaker) The University of Manchester

Time series forecasting plays an increasingly important role in modern business decisions. In today’s data-rich environment, people often want to choose the optimal forecasting model for their data. However, identifying the optimal model often requires professional knowledge and experience, making accurate forecasting a challenging task. To reduce the reliance on model selection, we propose a simple and reliable algorithm that successfully improves forecasting performance. Specifically, we construct multiple time series with different sub-seasons from the original time series. These derived series highlight different sub-seasonal patterns of the original series, making it possible for forecasting methods to capture diverse patterns and components of the data. We then produce forecasts for each of these derived series separately with classical statistical models (ETS or ARIMA) and average the forecasts with equal weights. In both point and interval forecasts, our approach improves forecasting performance over the total forecast horizon on the widely used M1, M3, and M4 competition datasets compared with the benchmarks. We also study which time series patterns are most suitable for our method.
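A minimal sketch of the derive-then-average idea, with seasonal-naive forecasts standing in for the ETS/ARIMA extrapolations and the candidate sub-seasonal periods chosen arbitrarily for illustration:

```python
import numpy as np

def seasonal_naive(y, period, horizon):
    """Seasonal-naive forecast: repeat the last full seasonal cycle."""
    last_cycle = y[-period:]
    reps = -(-horizon // period)  # ceiling division
    return np.tile(last_cycle, reps)[:horizon]

def subseasonal_average(y, periods, horizon):
    """Forecast under several candidate sub-seasonal periods (a simplification
    of the derived-series idea) and average with equal weights."""
    forecasts = np.array([seasonal_naive(y, p, horizon) for p in periods])
    return forecasts.mean(axis=0)

# A toy quarterly pattern repeated three times
y = np.array([1.0, 2.0, 3.0, 4.0] * 3)
print(subseasonal_average(y, periods=[2, 4], horizon=4))  # -> [2. 3. 3. 4.]
```

Each candidate period emphasises a different sub-seasonal pattern; averaging the resulting forecasts hedges against committing to a single seasonal structure.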

  • Feng Li (Speaker) Assistant Professor at School of Statistics and Mathematics, Central University of Finance and Economics

In this work, we propose a feature-based Bayesian forecasting model averaging framework (febama). Our Bayesian framework estimates the weights of the feature-based forecast combination via a Bayesian log predictive score, in which the optimal combination is determined by time-series features computed from historical information. In particular, we utilize prior knowledge of the coefficients of the time-series features and use an efficient Bayesian variable selection method to weight the important features that may affect the forecast combination. As a result, our approach has better interpretability than black-box forecasting combination schemes. Our framework is also computationally efficient because the log predictive score and the time-series features are calculated in an offline phase. We apply our framework to stock market data and the M4 competition data. Within our structure, a simple maximum-a-posteriori scheme outperforms the optimal prediction pools of Geweke and Amisano (2011) and simple averaging, and Bayesian variable selection further enhances forecasting performance.
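The log-predictive-score weighting can be illustrated with a two-model pool in the spirit of Geweke and Amisano's optimal prediction pools. The sketch below finds the pool weight by a simple grid search; the toy predictive densities are made up, and the full febama framework instead estimates feature-dependent weights with Bayesian variable selection:

```python
import math

def log_score(weight, p1, p2):
    """Log predictive score of the two-model pool w*p1 + (1-w)*p2,
    where p1, p2 are each model's predictive densities at realised values."""
    return sum(math.log(weight * a + (1 - weight) * b) for a, b in zip(p1, p2))

def optimal_pool_weight(p1, p2, grid_size=1001):
    """Grid search for the weight maximising the log predictive score."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    return max(grid, key=lambda w: log_score(w, p1, p2))

# Toy predictive densities over 5 periods: model 1 is usually sharper,
# but has one very poor period, so the optimal pool hedges between the two.
p1 = [0.40, 0.35, 0.30, 0.45, 0.05]
p2 = [0.20, 0.25, 0.20, 0.15, 0.30]
w = optimal_pool_weight(p1, p2)
print(w)  # an interior weight strictly between 0 and 1
```

The interior optimum shows why pooling can beat selecting a single model: the combined density is rewarded for robustness across all periods, not just the good ones.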
