Abstract

Decades of research on the 0-1 knapsack problem have led to very efficient algorithms that quickly solve large problem instances to optimality. This prompted researchers to also investigate the structure of problem instances that are hard for existing solvers. In this paper we investigate which features make 0-1 knapsack problem instances hard to solve to optimality for the state-of-the-art 0-1 knapsack solver. We propose a set of 14 features based on previous work by the authors in which so-called inclusionwise maximal solutions (IMSs) play a central role. Calculating these features is computationally expensive and requires solving hard combinatorial problems. Based on new structural results about IMSs, we formulate polynomial and pseudopolynomial time algorithms for calculating these features. These algorithms were executed for two large datasets on a supercomputer in approximately 540 CPU-hours. By training machine learning models that can accurately predict the empirical hardness of a wide variety of 0-1 knapsack problem instances, we show that the proposed features capture important information about the empirical hardness of a problem instance that was missing from earlier features in the literature. Moreover, we show that these features can be cheaply approximated at the cost of less accurate hardness predictions. Using the instance space analysis methodology, we show that hard 0-1 knapsack problem instances are clustered together in a relatively dense region of the instance space and that several features behave differently in the easy and hard parts of the instance space.
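
For readers unfamiliar with the term, the sketch below illustrates the standard notion of an inclusionwise maximal solution for the 0-1 knapsack problem: a feasible item subset to which no further item can be added without exceeding the capacity. The function name and example data are illustrative only and are not taken from the paper.

```python
# Minimal illustrative sketch (not the authors' implementation): a 0-1 knapsack
# solution is inclusionwise maximal (IMS) if it is feasible and no unselected
# item still fits within the remaining capacity.

def is_inclusionwise_maximal(weights, capacity, selected):
    """Check whether `selected` (a set of item indices) is an IMS.

    Assumes the standard 0-1 knapsack setting: positive item weights and a
    solution is feasible when its total weight does not exceed the capacity.
    """
    total = sum(weights[i] for i in selected)
    if total > capacity:
        return False  # infeasible, hence not an IMS
    remaining = capacity - total
    # Maximal iff every item outside the solution would overflow the knapsack.
    return all(weights[i] > remaining
               for i in range(len(weights)) if i not in selected)

# Example: with weights [4, 3, 2] and capacity 6, {0} is not an IMS
# (item 2 still fits), while {0, 2} is.
print(is_inclusionwise_maximal([4, 3, 2], 6, {0}))     # False
print(is_inclusionwise_maximal([4, 3, 2], 6, {0, 2}))  # True
```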
