Abstract

The K-means algorithm is one of the most frequently used investigatory algorithms in data analysis [108]. The algorithm attempts to locate K prototypes or means throughout a data set in such a way that the K prototypes in some way best represent the data. In this book, we investigate alternative performance functions and show the effect the different functions have on the effectiveness of the resulting algorithms.

We are specifically interested in developing algorithms which are effective in a worst-case scenario: when the prototypes are all initialized at the same position, very far from the data points. This may initially sound an unlikely scenario, but in a typical high-dimensional space most of the probability mass lies in the outer shell of the data [227]: a prototype initialized to the mean of a set of data points may therefore lie closer to the centre of the space than any individual data point, and initialization to a single data point may cause distances to be measured across the empty centre of the space. These are well-known aspects of the "curse of dimensionality". If an algorithm can cope with these unfavourable scenarios, it should be able to cope with a more benevolent initialization. We wish to overcome the problem of dependency on initial conditions by creating algorithms for prototype placement based on different performance functions.

Keywords: Performance Function, Quantization Error, Minimum Performance, Spectral Cluster Algorithm, Close Data Point
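To make the worst-case scenario concrete, the following minimal Python sketch (not taken from the book) runs standard batch K-means (Lloyd's algorithm) with all K prototypes initialized at the same position far from the data; the toy blob data, the lloyd_kmeans helper and all parameter values are illustrative assumptions. Because every point is closest to the same (coincident) prototype, only one prototype ever receives data and moves, while the others remain stranded, which is exactly the dependency on initial conditions the book's alternative performance functions aim to remove.

import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): three well-separated 2-D Gaussian blobs.
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(100, 2))
               for c in ([0, 0], [5, 0], [0, 5])])

def lloyd_kmeans(X, prototypes, n_iter=50):
    """Standard batch K-means (Lloyd's algorithm)."""
    prototypes = prototypes.copy()
    for _ in range(n_iter):
        # Assign each point to its nearest prototype (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each prototype to the mean of its assigned points;
        # a prototype that owns no points does not move at all.
        for k in range(len(prototypes)):
            if np.any(labels == k):
                prototypes[k] = X[labels == k].mean(axis=0)
    return prototypes, labels

# Worst-case initialization described in the abstract: all K prototypes
# start at the same position, far away from every data point.
K = 3
init = np.tile(np.array([100.0, 100.0]), (K, 1))
protos, labels = lloyd_kmeans(X, init)
print(protos)                            # only one prototype has moved into the data
print(np.bincount(labels, minlength=K))  # the other two prototypes own no points

Running this prints one prototype near the global data mean and two untouched at (100, 100): the quantization error stays high no matter how many iterations are run, illustrating why a more forgiving performance function is desirable.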
