Abstract

Learning large networks (with hundreds of variables) has gained the interest of many researchers with the emergence of high-throughput biological data sources such as microarray data. In this paper, we investigate two popular large-scale network structure learning algorithms, sparse candidate hill climbing (SCHC) and the Grow-Shrink (GS) algorithm. Our experiments show that both have serious effectiveness problems when the number of variables (genes) is large compared to the number of instances (experimental conditions), which is a common situation in microarray data. We further propose a new large-scale structure learning algorithm based on Lasso regression. The theoretical analysis in [10] suggests that the L1-norm in Lasso regression makes our algorithm especially suitable when the numbers of variables and instances are unbalanced. Our algorithm achieves much better results than SCHC and GS on synthetic data. We also demonstrate the effectiveness of our algorithm by learning genetic regulatory network modules from a real microarray data set (with more than 6000 genes), combined with genome-wide location analysis data. The learned results are well consistent with biological knowledge.
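To make the general idea concrete, the following is a minimal sketch of Lasso-based candidate-neighbor selection for network structure learning: each gene is regressed on all other genes with an L1 penalty, and the non-zero coefficients define its candidate parents. This is an illustrative sketch only (using scikit-learn's Lasso; the function name, regularization strength, and toy data are assumptions, not taken from the paper).

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_candidate_selection(X, alpha=0.1):
    """Illustrative sketch: for each variable (gene), regress it on all
    other variables with an L1 penalty; variables with non-zero
    coefficients become its candidate neighbors.

    X: (n_instances, n_variables) expression matrix.
    Returns a boolean candidate matrix (row j marks candidates for gene j).
    """
    n, p = X.shape
    candidates = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)          # all genes except j
        model = Lasso(alpha=alpha, max_iter=10000)   # L1-penalized regression
        model.fit(X[:, others], X[:, j])
        candidates[j, others] = model.coef_ != 0     # sparse support = candidates
    return candidates

# Toy usage in the p >> n regime (50 conditions, 200 genes); data is synthetic.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
cand = lasso_candidate_selection(X, alpha=0.2)
print("number of candidate edges:", cand.sum())
```

The L1 penalty drives most coefficients exactly to zero, which is what keeps the candidate sets small even when the number of genes far exceeds the number of experimental conditions.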
