Abstract
Most theoretical studies of inductive inference model a situation involving a machine M learning its environment E along the following lines. M, placed in E, receives data about E and simultaneously conjectures a sequence of hypotheses. M is said to learn E just in case the sequence of hypotheses conjectured by M stabilizes to a final hypothesis which correctly represents E.

The above model makes the idealized assumption that the data M receives about E comes from a single, accurate source. An argument is made in favor of a more realistic learning model that accounts for data emanating from multiple sources, some or all of which may be inaccurate. Motivated by this argument, the present paper introduces and theoretically analyzes a number of inference criteria in which a machine is fed data from multiple sources, some of which may be infected with inaccuracies. The main parameters of the investigation are the number of data sources, the number of faulty data sources, and the kind of inaccuracies.

Keywords: Learning Machine, Inductive Inference, Inaccurate Data, Idealized Assumption, Computational Learning Theory
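The learning model described above can be illustrated with a toy sketch. The following is not the paper's construction: the finite hypothesis family, the subset-consistency check, and the majority-vote strategy for tolerating a faulty source are all illustrative assumptions, chosen only to show a machine whose conjectures stabilize despite one inaccurate data stream.

```python
import itertools

# Illustrative hypothesis space: a small family of candidate "languages"
# (sets of numbers). Purely an assumption for this sketch.
CANDIDATES = {
    "evens": set(range(0, 20, 2)),
    "odds": set(range(1, 20, 2)),
}

def learn(streams, steps):
    """Feed `steps` items from each data source; after each round, conjecture
    the candidate consistent with the largest number of sources' data."""
    seen = [set() for _ in streams]
    conjecture = None
    for _ in range(steps):
        for i, stream in enumerate(streams):
            seen[i].add(next(stream))
        votes = {}
        for data in seen:
            for name, lang in CANDIDATES.items():
                if data <= lang:  # this source's data is consistent with lang
                    votes[name] = votes.get(name, 0) + 1
        if votes:
            conjecture = max(votes, key=votes.get)
    return conjecture

# Two accurate sources enumerating the even numbers, plus one faulty source
# that also emits spurious odd elements (noisy data).
def accurate():
    return itertools.cycle(range(0, 20, 2))

def noisy():
    return itertools.cycle([0, 1, 2, 3, 4])

result = learn([accurate(), accurate(), noisy()], steps=15)
# The conjecture stabilizes to "evens": the two accurate sources outvote
# the noisy one, whose accumulated data fits no candidate.
```

The majority vote is only sensible when accurate sources outnumber faulty ones; how many faulty sources can be tolerated, and under which kinds of inaccuracy, is precisely the question the paper's criteria parameterize.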