Previous studies that used EEG and decoding methods to investigate the neural representation of the category information of visual objects focused mainly on consciously processed objects. It remains unclear whether the category information of unconsciously processed visual objects can be decoded, and whether decoding performance differs between consciously and unconsciously processed objects. The present study compared the neural decoding of the animacy category of visible and invisible visual objects using EEG and decoding methods. The results revealed that the animacy of visible objects could be decoded above chance level from the P200, N300, and N400 components, but not from the early N/P100. In contrast, the animacy of invisible objects could not be decoded above chance level from either early or late ERP components. Decoding accuracy was greater for visible than for invisible objects for the P200, N300, and N400. These results suggest that access to the animacy category information of visual objects requires conscious processing.
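The logic of such decoding analyses — training a classifier on single-trial ERP measurements and testing whether its cross-validated accuracy exceeds the 50% chance level for a two-class (animate vs. inanimate) problem — can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the simulated amplitude values, the class separation, and the leave-one-out nearest-centroid classifier are all illustrative assumptions.

```python
import random

random.seed(0)

# Simulate single-trial ERP amplitudes at one component/time window
# (hypothetical values in microvolts, not real data): animate trials
# drawn around -1.0, inanimate trials around +1.0, both with SD 1.0.
animate = [random.gauss(-1.0, 1.0) for _ in range(100)]
inanimate = [random.gauss(1.0, 1.0) for _ in range(100)]

def loo_accuracy(class_a, class_b):
    """Leave-one-out nearest-centroid decoding: classify each held-out
    trial by which class mean (computed without it) it is closer to."""
    correct = 0
    n = len(class_a) + len(class_b)
    for i, x in enumerate(class_a):
        mean_a = (sum(class_a) - x) / (len(class_a) - 1)
        mean_b = sum(class_b) / len(class_b)
        correct += abs(x - mean_a) < abs(x - mean_b)
    for i, x in enumerate(class_b):
        mean_b = (sum(class_b) - x) / (len(class_b) - 1)
        mean_a = sum(class_a) / len(class_a)
        correct += abs(x - mean_b) < abs(x - mean_a)
    return correct / n

acc = loo_accuracy(animate, inanimate)
print(f"decoding accuracy: {acc:.2f}  (chance = 0.50)")
```

When the two classes are separable at a given component (as for the P200, N300, and N400 with visible objects), accuracy rises above 0.50; when the single-trial distributions overlap completely (as for invisible objects), accuracy stays at chance.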