We revisit the Learning Sparse Parities with Noise (LSPN) problem on $k$ out of $n$ variables for $k \ll n$, and present the following findings.

1. For true parity size $k = n^u$ for any $0 < u < 1$ and noise rate $\eta < 1/2$, our first algorithm solves the $(n,k,\eta)$-LSPN problem with constant probability and time/sample complexity $n^{(1-u+o(1))k}/(1/2-\eta)^2$.

2. For any $1/2 < c_1 < 1$, $k = o(\eta n/\log n)$, and $\eta \le n^{-c_1}/4$, our second algorithm solves the $(n,k,\eta)$-LSPN problem with constant probability and time/sample complexity $n^{2(1-c_1+o(1))k}$.

3. We show a "win-win" result about reducing the number of samples. Suppose there is an algorithm that solves the $(n,k,\eta)$-LSPN problem with probability $\Omega(1)$ and time/sample complexity $n^{O(k)}$ for $k = o(n^{1-c})$, any noise rate $\eta = n^{1-2c}/3$, and any $1/2 \le c < 1$. Then either there exists an algorithm that solves the $(n,k,\mu)$-LSPN problem under the lower noise rate $\mu = n^{-c}/3$ using only $2n$ samples, or there exists an algorithm that solves the $(n,k',\mu)$-LSPN problem for a much larger $k' = n^{1-c}$ with probability $n^{-O(k)}/\mathrm{poly}(n)$ and time complexity $\mathrm{poly}(n) \cdot n^{O(k)}$, using only $n$ samples.

Our algorithms are conceptually simple, combining a few basic techniques: majority voting, a reduction from the LSPN problem to its decisional variant, Goldreich-Levin list decoding, and computational sample amplification.
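To make the problem setup concrete, the sketch below is a minimal brute-force baseline (an illustration only, not one of the paper's algorithms): it enumerates all $\binom{n}{k}$ candidate supports and keeps the one whose parity agrees with the noisy labels most often, which is the majority-voting idea in its simplest form. The parameters `n`, `k`, `eta`, `m`, and the secret `(3, 7)` are hypothetical choices for the demo.

```python
# Brute-force sketch of the (n, k, eta)-LSPN problem: enumerate all C(n, k)
# size-k supports and pick the one maximizing agreement with the labels.
# This is an illustrative baseline, not the paper's improved algorithms.
from itertools import combinations
import random

def sample_lspn(secret, eta, m, n, rng):
    """Draw m noisy samples (x, <x, s> xor noise) for a k-sparse secret s."""
    samples = []
    for _ in range(m):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = sum(x[i] for i in secret) % 2
        if rng.random() < eta:        # flip the label with probability eta
            y ^= 1
        samples.append((x, y))
    return samples

def brute_force_lspn(samples, n, k):
    """Return the size-k support agreeing with the most labels
    (majority voting over all samples); O(C(n, k) * m) time."""
    best, best_agree = None, -1
    for cand in combinations(range(n), k):
        agree = sum((sum(x[i] for i in cand) % 2) == y for x, y in samples)
        if agree > best_agree:
            best, best_agree = cand, agree
    return best

rng = random.Random(0)
n, k, eta, m = 12, 2, 0.1, 300       # hypothetical demo parameters
secret = (3, 7)                       # hypothetical 2-sparse secret
samples = sample_lspn(secret, eta, m, n, rng)
print(brute_force_lspn(samples, n, k))
```

With $m$ large enough, the true support agrees with roughly a $(1-\eta)$ fraction of labels while every wrong support agrees with about half, so the vote separates them; this is exactly the gap that the $(1/2-\eta)^{-2}$ factor in the first bound reflects.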