Abstract

This paper empirically compares two closely related nature-inspired algorithms, both presented as novel metaphor-based methods, and uses these comparisons, together with other ideas, to shed some light on randomized hypercomputation. The Bat Algorithm (BA) and the Novel Bat Algorithm (NBA) are two recently proposed nature-inspired algorithms, each claimed to be novel because each uses a different randomization scheme; yet how novel, or how similar, they actually are remains hazy, especially from a practical perspective. I therefore compare the two on a data-dependent, unexplainable real-world classification task. Specifically, I construct two classification machines, one derived by BA and the other by NBA, using the weighted linear loss twin support vector machine, and compare them with other state-of-the-art classifiers on real-world UCI machine learning data sets; the two machines are also compared directly with each other. The results show that the formulated machines outperform the other classifiers in more than 80% of comparisons, yet are extraordinarily similar to each other, agreeing to as many as four decimal places in almost 100% of cases. Moreover, these optimization algorithms are a perfect example of the partially random machines that have been claimed to be hypercomputational. Here, however, it is shown that unless one has access to an uncomputable input and uses it intelligently, one cannot hypercompute.
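To make the baseline optimizer concrete, the following is a minimal sketch of the classic Bat Algorithm as commonly described (frequency-tuned velocity updates, a pulse-rate-gated local walk around the best solution, and loudness-gated acceptance). It is an illustration only: the parameter values, the sphere test function, and the function name `bat_algorithm` are my own assumptions, not the settings used in the paper's experiments.

```python
import math
import random

def bat_algorithm(f, dim, bounds, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=0):
    """Minimal Bat Algorithm sketch. Parameter values are illustrative
    defaults, not the settings used in the paper."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vs = [[0.0] * dim for _ in range(n_bats)]
    louds = [1.0] * n_bats            # loudness A_i
    rates = [0.5] * n_bats            # pulse emission rate r_i
    r0 = list(rates)
    fits = [f(x) for x in xs]
    best_i = min(range(n_bats), key=lambda i: fits[i])
    best_x, best_fit = list(xs[best_i]), fits[best_i]

    for t in range(1, n_iter + 1):
        avg_loud = sum(louds) / n_bats
        for i in range(n_bats):
            # frequency-tuned velocity and position update
            freq = f_min + (f_max - f_min) * rng.random()
            vs[i] = [v + (x - bx) * freq
                     for v, x, bx in zip(vs[i], xs[i], best_x)]
            cand = [min(max(x + v, lo), hi) for x, v in zip(xs[i], vs[i])]
            # with probability 1 - r_i, do a local walk around the best bat
            if rng.random() > rates[i]:
                cand = [min(max(bx + avg_loud * rng.uniform(-1, 1), lo), hi)
                        for bx in best_x]
            cf = f(cand)
            # accept improvements with probability A_i; shrink loudness,
            # grow pulse rate toward r0 as iterations proceed
            if cf < fits[i] and rng.random() < louds[i]:
                xs[i], fits[i] = cand, cf
                louds[i] *= alpha
                rates[i] = r0[i] * (1 - math.exp(-gamma * t))
            if cf < best_fit:
                best_x, best_fit = list(cand), cf
    return best_x, best_fit

# demo: minimize the 2-D sphere function
sphere = lambda x: sum(v * v for v in x)
x_star, f_star = bat_algorithm(sphere, dim=2, bounds=(-5.0, 5.0))
```

In the paper's setting, `f` would instead score the hyper-parameters of the weighted linear loss twin support vector machine on a validation set; the NBA variant would replace the randomization scheme above with its own.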
