This study describes tree-based intonation modeling for text-to-speech systems. Intonation is generated in two phases, tonal label assignment and fundamental frequency (F0) contour prediction, both carried out by tree-structured predictors. A decision tree assigns tonal labels to syllables, and a vector-regression tree predicts 10 sampled pitch values per syllable. In addition, we apply bootstrap aggregating (bagging) to improve the performance of the trees and construct a born-again tree as a single-tree representation of the resulting multiple-tree predictor. We collected a corpus of 500 Korean sentences and their corresponding speech, trained the trees on 300 sentences, and tested them on the remaining 200. The overall misclassification rates of the born-again trees were about 37% for boundary tones and about 19% for non-boundary tones. For F0 contour prediction, we compared tree-based modeling with linear regression using both objective and subjective measures. The correlation coefficient between observed pitch values and those predicted by the born-again tree was 0.812, versus 0.805 for linear regression. In the subjective test, native Korean listeners clearly preferred the intonation generated by the born-again vector-regression tree to that generated by linear regression.
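The bagging and born-again-tree steps above can be illustrated with a minimal sketch. This is not the paper's implementation: it uses depth-1 regression stumps instead of full vector-regression trees, scalar toy data in place of the Korean speech corpus, and hypothetical helper names (`fit_stump`, `bagged_predictor`). It shows the two ideas in sequence: average many trees fit on bootstrap resamples, then retrain one tree to mimic the ensemble's outputs.

```python
import random
from statistics import mean

def fit_stump(xs, ys):
    """Fit a depth-1 regression tree: one threshold, a mean on each side."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = mean(left), mean(right)
        err = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    if best is None:  # all inputs identical: fall back to a constant mean
        m = mean(ys)
        return lambda x: m
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def bagged_predictor(xs, ys, n_trees=25, seed=0):
    """Bootstrap aggregating: average stumps fit on resampled training data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: mean(s(x) for s in stumps)

# Toy step-shaped data standing in for per-syllable pitch targets.
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [1.0, 1.1, 0.9, 3.0, 3.1, 2.9]

ensemble = bagged_predictor(xs, ys)
# Born-again tree: a single tree retrained on the ensemble's own predictions,
# so one interpretable tree approximates the bagged predictor.
born_again = fit_stump(xs, [ensemble(x) for x in xs])
```

The design point is that the born-again tree is fit to the ensemble's outputs rather than the raw targets, trading a little of the bagged predictor's accuracy for a single tree that is cheap to evaluate and inspect.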