Well-established methods of locating the source of information in a complex network are usually derived under the assumption of complete and exact knowledge of the network topology. We study the performance of three such algorithms (Limited Pinto-Thiran-Vetterli Algorithm – LPTVA, Gradient Maximum Likelihood Algorithm – GMLA and Pearson Correlation Algorithm – PCA) in scenarios that violate this assumption by modifying the network before localization. This is done by adding superfluous new links, hiding existing ones, or reattaching links according to the network’s structural Hamiltonian. Our results show that GMLA is highly resilient to superfluous edges, as its precision falls by more than the statistical uncertainty only when the number of links is approximately doubled. On the other hand, if the edge set is underestimated or reattachment has taken place, the performance of GMLA drops significantly. In such scenarios, PCA is preferable, retaining most of its performance when the other simulation parameters favor successful localization (a high density of observers and highly deterministic propagation). It is also generally more accurate than LPTVA and orders of magnitude faster. The differences between the localization algorithms can be explained intuitively, although further theoretical research is needed.
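The first two topology perturbations described above (overestimating and underestimating the edge set) can be sketched as simple random modifications of a graph. The sketch below is illustrative only, assuming `networkx` and an Erdős–Rényi graph as a stand-in for the true network; the function names are hypothetical, and the paper's Hamiltonian-based reattachment is not reproduced here.

```python
import random
import networkx as nx

random.seed(42)

def add_spurious_edges(G, frac):
    """Overestimate the topology: add frac * |E| edges absent from the true graph."""
    H = G.copy()
    to_add = int(frac * G.number_of_edges())
    nodes = list(H.nodes)
    while to_add > 0:
        u, v = random.sample(nodes, 2)
        if not H.has_edge(u, v):
            H.add_edge(u, v)
            to_add -= 1
    return H

def hide_edges(G, frac):
    """Underestimate the topology: remove frac * |E| randomly chosen true edges."""
    H = G.copy()
    hidden = random.sample(list(H.edges), int(frac * H.number_of_edges()))
    H.remove_edges_from(hidden)
    return H

# Illustrative true network (stand-in for the networks studied in the paper)
G = nx.erdos_renyi_graph(100, 0.08, seed=1)

H_over = add_spurious_edges(G, 1.0)   # edge count approximately doubled
H_under = hide_edges(G, 0.3)          # 30% of true edges hidden
```

A localization algorithm would then be run on `H_over` or `H_under` while the spreading process itself still takes place on `G`, which is the mismatch the abstract describes.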