Abstract

This paper addresses the problem of dynamically assigning tasks to a crowd consisting of AI and human workers. Crowdsourcing the creation of AI programs is now common practice. When applying such AI programs to a set of tasks, we often take an ``all-or-nothing'' approach that waits for the AI to become good enough. However, this approach may prevent us from exploiting the answers provided by the AI before the process is completed, and it also prevents the exploration of different AI candidates. Integrating a created AI, both with other AIs and with human computation, to obtain a more efficient human-AI team is therefore not trivial. In this paper, we propose a method that addresses these issues by adopting a ``divide-and-conquer'' strategy for AI worker evaluation. Here, an assignment is optimal when the number of tasks assigned to humans is minimal, provided the final results satisfy a given quality requirement. We present theoretical analyses of the proposed method and an extensive set of experiments conducted on open benchmarks and real-world datasets. The results show that the algorithm assigns many more tasks to AI than the baselines do when it is difficult for the AIs to satisfy the quality requirement over the whole set of tasks. They also show that it can flexibly change the number of tasks assigned to multiple AI workers in accordance with the performance of the available AI workers.
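
To make the optimality criterion concrete, the following Python sketch illustrates one way a ``divide-and-conquer'' assignment loop could look. It is an assumption-laden illustration rather than the paper's algorithm: the names (`assign`, `ai_meets_quality`), the accuracy-probe quality check, and the binary task splitting are all placeholders for the statistical evaluation the paper actually describes.

```python
import random

random.seed(0)  # reproducible toy run

def ai_meets_quality(ai, tasks, quality_req, probe_size=5):
    """Toy check: probe the AI on a few tasks with known answers and
    require the observed accuracy to reach the threshold. The paper's
    method would use a proper statistical evaluation here."""
    probe = random.sample(tasks, min(probe_size, len(tasks)))
    correct = sum(ai(x) == y for x, y in probe)
    return correct / len(probe) >= quality_req

def assign(tasks, ai_workers, quality_req, min_size=8):
    """Recursively split the task set: hand a subset to the first AI
    worker that passes the quality check on it, and fall back to human
    workers on subsets where no AI qualifies. Minimizing the human
    share subject to the quality requirement is the abstract's notion
    of an optimal assignment."""
    if not tasks:
        return []
    for name, ai in ai_workers:
        if ai_meets_quality(ai, tasks, quality_req):
            return [(x, name) for x, _ in tasks]   # AI covers this subset
    if len(tasks) <= min_size:
        return [(x, "human") for x, _ in tasks]    # no AI qualifies here
    mid = len(tasks) // 2                          # divide and recurse
    return (assign(tasks[:mid], ai_workers, quality_req, min_size)
            + assign(tasks[mid:], ai_workers, quality_req, min_size))

# Example: tasks are (input, ground-truth) pairs; the single AI worker
# is reliable only for inputs below 32, so roughly half the tasks
# should end up assigned to it and the rest to human workers.
tasks = [(i, i * 2) for i in range(64)]
workers = [("ai_low", lambda x: x * 2 if x < 32 else -1)]
plan = assign(tasks, workers, quality_req=0.9)
print(sum(1 for _, w in plan if w != "human"), "of 64 tasks go to AI")
```

The key property the sketch mimics is that an AI worker unable to satisfy the quality requirement on the whole task set can still be assigned the subsets on which it does qualify, rather than being discarded outright as in the ``all-or-nothing'' approach.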
