Abstract
Neural codes, which are collections of binary strings called codewords, are used to encode neural activity. A code is called convex if its codewords can be realized by an arrangement of convex open sets in Euclidean space. Previous work has focused on the question: how can we tell when a neural code is convex? Giusti and Itskov identified a local obstruction and proved that convex neural codes have no local obstructions. The converse is true for codes on up to four neurons, but false in general. Nevertheless, we prove that the converse holds for codes with up to three maximal codewords, and moreover that the minimal embedding dimension of such codes is at most two.
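As a purely illustrative sketch of these definitions (the specific code and intervals below are a hypothetical example, not taken from the paper), consider the following code on three neurons. It has two maximal codewords, 110 and 011, and is realized by three convex open sets, namely open intervals on the real line, so it is convex with embedding dimension one:

% Hypothetical example of a convex code realized by intervals in R.
% Each codeword records which sets U_i contain a given point; by the
% usual convention, 000 corresponds to points lying outside all three sets.
\[
  \mathcal{C} = \{000,\ 100,\ 110,\ 010,\ 011,\ 001\},
  \qquad
  U_1 = (0,2),\quad U_2 = (1,4),\quad U_3 = (3,5) \subset \mathbb{R}.
\]
% For instance, points in (1,2) lie in U_1 \cap U_2 but not in U_3,
% which yields the codeword 110; points in (3,4) yield 011.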