Abstract

We propose a variant of dictionary learning (DL) for sparse representations in which the atoms are cones instead of single vectors. The most convenient vector from a cone, called the actual atom, is used to build the linear sparse representation of a given signal. We present a DL algorithm suited to cone atoms that can update the dictionary without storing all the actual atoms used in the representations of the training signals. The algorithm also ensures that the cone atoms are disjoint, so the representation problem is well posed. We apply the proposed cone DL to anomaly detection. On a specific type of anomaly, called “dependency”, the DL methods involving cone atoms outperform both the methods from a well-known benchmark and standard DL.
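To make the representation step concrete, the following is a minimal sketch of sparse coding with cone atoms. The cone parameterization (a central unit direction plus an aperture angle `alpha`), the function names, and the greedy OMP-style selection loop are our assumptions for illustration, not the paper's algorithm: each cone contributes as its actual atom the unit vector within angle `alpha` of the center that is best aligned with the current residual.

```python
import numpy as np

def actual_atom(d, r, alpha):
    """Hypothetical construction: the unit vector inside the cone
    {v : angle(v, d) <= alpha} best aligned with the residual r."""
    d = d / np.linalg.norm(d)
    rn = r / np.linalg.norm(r)
    cos_t = np.clip(d @ rn, -1.0, 1.0)
    if np.arccos(cos_t) <= alpha:
        return rn  # residual direction already lies inside the cone
    # Otherwise rotate d toward r by exactly alpha, within span{d, r}
    u = rn - cos_t * d
    u /= np.linalg.norm(u)
    return np.cos(alpha) * d + np.sin(alpha) * u

def cone_omp(D, alpha, y, sparsity):
    """Greedy sparse coding with cone atoms: at each step pick the cone
    whose actual atom correlates most with the residual, then refit the
    coefficients by least squares (OMP-style)."""
    r = y.astype(float).copy()
    idx, atoms = [], []
    for _ in range(sparsity):
        cands = [actual_atom(D[:, j], r, alpha) for j in range(D.shape[1])]
        j = int(np.argmax([abs(a @ r) for a in cands]))
        idx.append(j)
        atoms.append(cands[j])
        A = np.column_stack(atoms)
        x, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ x
    return idx, A, x
```

Note that this sketch ignores the disjointness constraint on the cones mentioned in the abstract; it only illustrates how an actual atom can be chosen per cone and used in a linear sparse representation.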

