|Title:||Learning Bayesian Networks from Big Data with Greedy Search|
|Keywords:||Bayesian networks; structure learning; big data; computational complexity|
|Citation:||Statistics and Computing, 2018, pp. 1 - 15|
|Abstract:||Learning the structure of Bayesian networks from data is known to be a computationally challenging, NP-hard problem. The literature has long investigated how to perform structure learning from data containing large numbers of variables, following a general interest in high-dimensional applications (“small n, large p”) in systems biology and genetics. More recently, data sets with large numbers of observations (the so-called “big data”) have become increasingly common; and these data sets are not necessarily high-dimensional, sometimes having only a few tens of variables depending on the application. We revisit the computational complexity of Bayesian network structure learning in this setting, showing that the common choice of measuring it with the number of estimated local distributions leads to unrealistic time complexity estimates for the most common class of score-based algorithms, greedy search. We then derive more accurate expressions under common distributional assumptions. These expressions suggest that the speed of Bayesian network learning can be improved by taking advantage of the availability of closed-form estimators for local distributions with few parents. Furthermore, we find that using predictive instead of in-sample goodness-of-fit scores improves speed; and we confirm that it improves the accuracy of network reconstruction as well, as previously observed by Chickering and Heckerman (2000). We demonstrate these results on large real-world environmental and epidemiological data, and on reference data sets available from public repositories.|
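The greedy search the abstract refers to is score-based hill climbing: starting from a graph (typically empty), repeatedly apply the single arc addition or deletion that most improves a decomposable score such as BIC, subject to acyclicity, until no move helps. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it restricts to binary variables, uses closed-form maximum-likelihood counts for each local distribution, and omits arc reversals. All names (`local_bic`, `hill_climb`, the synthetic `DATA`) are hypothetical.

```python
import math
from itertools import product

def local_bic(data, child, parents):
    """BIC contribution of one binary node given a parent set.

    The local multinomial MLE is just a table of counts, so no iterative
    fitting is needed (the closed-form estimator the abstract mentions).
    """
    n = len(data)
    counts = {}  # parent configuration -> [count of child=0, count of child=1]
    for row in data:
        cfg = tuple(row[p] for p in parents)
        if cfg not in counts:
            counts[cfg] = [0, 0]
        counts[cfg][row[child]] += 1
    loglik = 0.0
    for c0, c1 in counts.values():
        tot = c0 + c1
        for c in (c0, c1):
            if c:
                loglik += c * math.log(c / tot)
    # one free parameter per observed parent configuration (binary child)
    return loglik - 0.5 * len(counts) * math.log(n)

def is_acyclic(edges, nnodes):
    """Check acyclicity with Kahn's topological sort."""
    indeg = [0] * nnodes
    adj = {i: [] for i in range(nnodes)}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    stack = [i for i in range(nnodes) if indeg[i] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == nnodes

def bic(data, edges, nnodes):
    """Total score decomposes into a sum of local scores."""
    return sum(local_bic(data, v, [u for u, w in edges if w == v])
               for v in range(nnodes))

def hill_climb(data, nnodes):
    """First-improvement greedy search over arc additions and deletions."""
    edges = set()
    best = bic(data, edges, nnodes)
    improved = True
    while improved:
        improved = False
        for u, v in product(range(nnodes), repeat=2):
            if u == v:
                continue
            cand = edges ^ {(u, v)}  # toggle the arc u -> v
            if (u, v) not in edges and not is_acyclic(cand, nnodes):
                continue  # additions must keep the graph a DAG
            s = bic(data, cand, nnodes)
            if s > best + 1e-9:
                edges, best, improved = cand, s, True
    return edges, best

# hypothetical synthetic data: two strongly associated binary variables
DATA = [(0, 0)] * 40 + [(1, 1)] * 40 + [(0, 1)] * 10 + [(1, 0)] * 10
```

On this toy sample, the search adds a single arc between the two variables (the direction is score-equivalent under BIC), since the gain in log-likelihood outweighs the extra-parameter penalty.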
|Appears in Collections:||Dept of Computer Science Research Papers|