Abstract
Pattern recognition, machine learning and artificial intelligence offer tremendous opportunities for efficient operations, management and governance. They can optimise processes for object, text, graphics, speech and pattern recognition. In doing so, however, the algorithmic processing may be subject to unknown biases that do harm rather than good. We examine how this may happen, what damage may occur, and the resulting ethical/legal impact and newly manifest obligations to avoid harm to others from these systems. But what are the risks, given the Human Condition?
Highlights
Pattern recognition (PR) and artificial intelligence (AI) are machine systems for finding or inferring patterns and relationships in data
We examine how this may happen, what damage may occur and the resulting ethical/legal impact and newly manifest obligations to avoid harm to others from these systems
The Menlo Report on Information and Communications Technologies (ICT) research proposes core ethical principles, three of which derive from the Belmont Report: 1. respect for persons; 2. beneficence; and 3. justice
Summary
Pattern recognition (PR) and artificial intelligence (AI) are machine systems for finding or inferring patterns and relationships in data. The power of these systems and their deployment across multiple social, commercial and government domains affect everyone. Artificial intelligence and pattern recognition systems are technological tools for people, and their effects should comply with established systems of rights and responsibilities. Calls have been made to limit the use of AI as a matter of policy, especially in policing; in the United States, the Government Accountability Office's Science, Technology Assessment and Analytics team is evaluating relevant law. We discuss factors that must be addressed for the proper and reliable use of AI, PR and machine learning.