When the continuous-time neural network defined by E. Oja's learning rule (Oja-N) is used to compute the eigenvalues and eigenvectors of a real symmetric matrix, the initial vector must lie on the unit hypersphere in R^n; otherwise the network may produce finite-time overflow. To overcome this defect, a new neural network algorithm (lyNN) is proposed. From the analytic solution of the lyNN differential equation, the following results are obtained: (i) if the initial vector lies in the eigenspace of some eigenvalue, the equilibrium vector of lyNN remains in that eigenspace; (ii) if the initial vector does not lie in any single eigenspace, the equilibrium vector belongs to the space spanned by the eigenvectors of the maximum eigenvalue; (iii) the maximal space of initial vectors for which the equilibrium vector falls into the eigenspace of any given eigenvalue is obtained; (iv) if the initial vector is orthogonal to a known eigenvector, so is the equilibrium vector; and (v) the equilibrium vector lies on the hypersphere determined by the initial vector. Based on these results, a method for computing the eigenvalues and eigenvectors of a real symmetric matrix with lyNN is proposed; its validity is demonstrated on two examples, which show that the algorithm does not produce finite-time overflow. By contrast, with Oja-N, if the initial vector lies outside the unit hypersphere and the matrix is negative definite, the network necessarily produces finite-time overflow. Compared with other algorithms based on optimization, lyNN can be implemented directly and has a lower computational cost.
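The abstract does not specify the lyNN dynamics, but the Oja-N flow it contrasts with is the standard continuous-time Oja rule dw/dt = Aw - (w^T A w)w. The sketch below is a minimal Python/NumPy illustration, not the paper's method; the function name oja_flow, the step size, and the test matrices are assumptions made for illustration. It reproduces the behaviour claimed above: the trajectory stays bounded when the initial vector is on the unit sphere, and escapes in finite time when the matrix is negative definite and the initial vector lies outside the sphere.

```python
import numpy as np

def oja_flow(A, w0, dt=1e-3, steps=20_000, escape=1e6):
    """Forward-Euler integration of the continuous-time Oja rule
         dw/dt = A w - (w^T A w) w.
    Returns (w, k), where k is the step at which ||w|| exceeded
    `escape` (finite-time overflow), or None if w stayed bounded."""
    w = np.asarray(w0, dtype=float).copy()
    for k in range(steps):
        Aw = A @ w
        w = w + dt * (Aw - (w @ Aw) * w)
        if np.linalg.norm(w) > escape:
            return w, k
    return w, None

# Case 1: initial vector on the unit sphere, dominant eigenvalue
# positive -- the flow stays bounded and converges to the dominant
# eigenvector (here e1, eigenvalue 3).
A1 = np.diag([3.0, 1.0, -1.0])
w, k = oja_flow(A1, np.ones(3) / np.sqrt(3))
print("bounded case :", np.round(w, 6), "escaped at step:", k)

# Case 2: negative definite matrix, initial vector outside the
# unit sphere (||w0|| = 2) -- the norm blows up in finite time,
# the overflow behaviour of Oja-N described in the abstract.
A2 = np.diag([-1.0, -2.0, -3.0])
w, k = oja_flow(A2, np.array([1.2, 0.0, 1.6]))
print("overflow case: escaped at step", k)
```

The escape mechanism is visible from d||w||^2/dt = 2(w^T A w)(1 - ||w||^2): with A negative definite and ||w|| > 1 both factors are negative, so the norm grows, and since the growth rate is quadratic in ||w||^2 the solution diverges in finite time rather than merely as t goes to infinity.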