An algorithm for density estimation based on ordinary polynomial (Lagrange) interpolation is studied. Let $F_n(x)$ be $n/(n + 1)$ times the sample c.d.f. based on $n$ order statistics, $t_1, t_2, \cdots, t_n$, from a population with density $f(x)$. It is assumed that $f^{(v)}$ is continuous, $v = 0, 1, 2, \cdots, r$, where $r = m - 1$, and that $f^{(m)} \in L_2(-\infty, \infty)$. $F_n(x)$ is first locally interpolated by the $m$th degree polynomial passing through $F_n(t_{ik_n}), F_n(t_{(i+1)k_n}), \cdots, F_n(t_{(i+m)k_n})$, where $k_n$ is a suitably chosen spacing depending on $n$. The density estimate is then, locally, the derivative of this interpolating polynomial. If $k_n = O(n^{(2m-1)/(2m)})$, then it is shown that the mean square convergence rate of the estimate to the true density is $O(n^{-(2m-1)/(2m)})$. Thus these convergence rates are slightly better than those obtained by the Parzen kernel-type estimates for densities with $r$ continuous derivatives. If it is assumed that $f^{(m)}$ is bounded, and $k_n = O(n^{2m/(2m+1)})$, then it is shown that the mean square convergence rates are $O(n^{-2m/(2m+1)})$, which are the same as those of the Parzen estimates for $m$ continuous derivatives. An interesting theorem about Lagrange interpolation, concerning how well a function can be interpolated knowing only its integral at nearby points, is also demonstrated.
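The estimate described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name `density_estimate`, the knot-selection details, and the boundary handling are assumptions made for the sketch. The key steps from the abstract are retained: take every $k_n$th order statistic as a knot, assign it the value $j/(n+1)$ of the scaled sample c.d.f., pass the degree-$m$ Lagrange polynomial through $m+1$ consecutive knots, and return its derivative at the evaluation point.

```python
import numpy as np

def density_estimate(t, m, k, x):
    """Sketch of the Lagrange-interpolation density estimate (illustrative only).

    t : sorted sample (order statistics t_1 <= ... <= t_n)
    m : degree of the local interpolating polynomial
    k : knot spacing k_n (every k-th order statistic is a knot)
    x : point at which to estimate f(x)
    """
    n = len(t)
    # F_n is n/(n+1) times the sample c.d.f., so F_n(t_j) = j/(n+1).
    idx = np.arange(1, n + 1, k)              # 1-based indices i*k of the knots
    knots = t[idx - 1]
    Fvals = idx / (n + 1.0)
    # Pick the m+1 consecutive knots whose span contains x
    # (clipped at the boundaries -- an assumption of this sketch).
    i = int(np.clip(np.searchsorted(knots, x) - 1, 0, len(knots) - (m + 1)))
    xs, ys = knots[i:i + m + 1], Fvals[i:i + m + 1]
    # Degree-m fit through exactly m+1 points is the Lagrange
    # interpolating polynomial; its derivative is the local density estimate.
    p = np.polynomial.polynomial.Polynomial.fit(xs, ys, m)
    return p.deriv()(x)
```

For a sample from the uniform density on $[0, 1]$, the interpolated c.d.f. is nearly linear, so the derivative returned by the sketch should be close to 1 in the interior of the interval.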