A classical approach to obtaining basic facts in the theory of square matrices analyzes the relationship between polynomials p in one variable and square matrices T such that p(T) = 0. We consider matrices and operators which satisfy a different type of polynomial constraint. Let H be a complex Hilbert space, T a bounded linear transformation of H, T* the adjoint of T, and C[x, y] the algebra of polynomials in x and y with complex coefficients. For a polynomial p ∈ C[x, y] in two variables, define

p(T) := Σ_{m,n ⩾ 0} p̂(m, n) T*^n T^m,

where p̂(m, n) is the coefficient of y^n x^m in the expansion of p in a power series about the point (0, 0). T is called a root of p if and only if p(T) = 0. Note that if p ∈ C[x, y] is a polynomial in the single variable x, then the definition of p(T) given here agrees with the classical one. In this paper, we study the relationships that p(T) = 0 forces between p and T when T is an algebraic operator (i.e., there exist n ⩾ 1 and complex numbers a_0, …, a_{n−1} such that 0 = a_0 + a_1 T + … + a_{n−1} T^{n−1} + T^n). The classification starts with the following observation: if p ∈ C[x, y] and an algebraic operator T ∈ L(H) satisfy p(T) = 0, then certain subspaces of H which are invariant for T must be orthogonal, or certain coefficients of p must vanish. This leads to the notions of a graph attached to each p ∈ C[x, y] and a graph attached to each square matrix T. For diagonalizable T, we give a necessary and sufficient graph-theoretic condition for solving p(T) = 0. For nondiagonalizable T, this condition is necessary but not sufficient. The use of these graphs does, however, reduce the problem to that of solving p(T) = 0 for T with exactly one or two eigenvalues. For T with one eigenvalue, we give a necessary and sufficient condition for solving p(T) = 0.
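The evaluation rule p(T) = Σ_{m,n ⩾ 0} p̂(m, n) T*^n T^m can be made concrete for finite matrices. The following sketch (the function names `p_of_T` and `is_root` are our own, illustrative choices, not notation from the text) evaluates p(T) from a dictionary of coefficients p̂(m, n) and tests numerically whether a given matrix is a root:

```python
import numpy as np

def p_of_T(coeffs, T):
    """Evaluate p(T) = sum_{m,n >= 0} p_hat(m, n) T*^n T^m.

    `coeffs` maps (m, n) -> p_hat(m, n), the coefficient of y^n x^m in p.
    T is a square complex matrix standing in for a bounded operator.
    """
    Tstar = T.conj().T  # the adjoint T* is the conjugate transpose
    k = T.shape[0]
    result = np.zeros((k, k), dtype=complex)
    for (m, n), c in coeffs.items():
        # each monomial y^n x^m of p contributes c * T*^n T^m
        result += c * (np.linalg.matrix_power(Tstar, n)
                       @ np.linalg.matrix_power(T, m))
    return result

def is_root(coeffs, T, tol=1e-10):
    """Return True if p(T) = 0 up to numerical tolerance."""
    return np.linalg.norm(p_of_T(coeffs, T)) < tol
```

For instance, with p(x, y) = y − x (coefficients {(0, 1): 1, (1, 0): −1}), the roots of p are exactly the self-adjoint matrices, since p(T) = T* − T.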
This leaves the case of solving p(T) = 0 when T has exactly two eigenvalues. This problem mixes algebra involving polynomials with matrix theory. We show that it is equivalent to the purely algebraic problem of determining whether equations of the form

Σ_{(i,j) ∈ E} C_{i,j} X_{i+r, j+s} = 0

have solutions of finite support with certain nonvanishing properties. We call these equations bi-Hankel equations subordinate to a given subset E of the lattice of integer pairs {(i, j) : 0 ⩽ i ⩽ m − 1, 0 ⩽ j ⩽ n − 1}. It turns out that there is an algorithm (which uses Gröbner bases) for determining whether the type of solution we seek exists and for computing it.
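The bi-Hankel constraint is a family of linear conditions indexed by the shifts (r, s). A minimal sketch of checking it (the function `satisfies_bihankel` and its data layout are hypothetical conveniences, not part of the paper's algorithm, which solves rather than checks these equations):

```python
def satisfies_bihankel(C, X, shifts, tol=1e-12):
    """Check the bi-Hankel equations sum_{(i,j) in E} C[i,j] * X[i+r, j+s] = 0.

    C: dict mapping (i, j) in E to the coefficient C_{i,j}.
    X: dict giving a finitely supported candidate solution; absent
       entries are treated as 0.
    shifts: the shift pairs (r, s) at which the equation is imposed.
    """
    for r, s in shifts:
        total = sum(c * X.get((i + r, j + s), 0) for (i, j), c in C.items())
        if abs(total) > tol:
            return False
    return True
```

For example, with E = {(0, 0), (1, 1)} and C_{0,0} = 1, C_{1,1} = −1, the equation says X_{r,s} = X_{r+1,s+1} at every imposed shift, i.e., X is constant along the covered diagonals.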