Abstract
The classical perceptron algorithm is an elementary algorithm for solving a homogeneous linear inequality system Ax > 0, with many important applications in learning theory (e.g., [11, 8]). A natural condition measure associated with this algorithm is the Euclidean width τ of the cone of feasible solutions, and the iteration complexity of the perceptron algorithm is bounded by 1/τ². Dunagan and Vempala [5] have developed a re-scaled version of the perceptron algorithm with an improved complexity of O(n ln(1/τ)) iterations (with high probability), which is theoretically efficient in τ, and in particular is polynomial-time in the bit-length model. We explore extensions of the concepts of these perceptron methods to the general homogeneous conic system Ax ∈ int K, where K is a regular convex cone. We provide a conic extension of the re-scaled perceptron algorithm based on the notion of a deep-separation oracle for a cone, which essentially computes a certificate of strong separation. We give a general condition under which the re-scaled perceptron algorithm is theoretically efficient, i.e., polynomial-time; this includes the cases when K is the cross-product of half-spaces, second-order cones, and the positive semi-definite cone.

Keywords: Convex Cone · Linear Inequality · Conic System · Closed Convex Cone · Ellipsoid Method
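To make the classical method concrete, here is a minimal sketch of the perceptron iteration for the homogeneous linear system Ax > 0 (the linear case referred to above, not the conic extension developed in the paper). It assumes the rows of A are given as a NumPy array; the function name, row normalization, and iteration cap are illustrative choices, not taken from the source.

```python
import numpy as np

def perceptron(A, max_iters=100_000):
    """Sketch: seek x with A @ x > 0 (a strictly feasible point).

    Classical perceptron update: whenever some row a_i satisfies
    a_i . x <= 0, move x toward that row. If the feasibility cone
    has Euclidean width tau, the number of updates is at most 1/tau^2.
    """
    n = A.shape[1]
    rows = A / np.linalg.norm(A, axis=1, keepdims=True)  # unit rows
    x = np.zeros(n)
    for _ in range(max_iters):
        violated = rows @ x <= 0
        if not violated.any():
            return x                    # A @ x > 0: feasible point found
        x += rows[np.argmax(violated)]  # add the first violated unit row
    return None                         # iteration budget exhausted
```

The 1/τ² bound makes this slow when the feasible cone is narrow; the re-scaled variant of Dunagan and Vempala periodically stretches the space along a nearly feasible direction so that the width grows, which is the source of the O(n ln(1/τ)) guarantee cited above.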