Abstract

According to “Huang's law”, artificial intelligence (AI)-related hardware increases in power 4–10 times per year. AI can benefit various stages of real estate development, from planning and construction to occupation and demolition. However, Hong Kong's legal system currently lags behind these technological capabilities, and the field of AI safety in built environments is still in its infancy. Accidents can arise from negligent design and production processes, irresponsible data management, questionable deployment, flawed algorithm training, defective sensor design and/or manufacture, unforeseen consequences of combining multiple data inputs, and erroneous AI operation based on sensor or remote data. Yet determining how legal rules should apply to liability for losses caused by AI systems takes time. Traditional product liability laws can apply to some systems, meaning that the manufacturer will bear responsibility for a malfunctioning part. More complex cases, however, will have to come before the courts to determine whether an unsafe outcome is the fault of the manufacturer or of the individual, and who should receive the subsequent financial and/or non-financial compensation. Since AI adoption is inevitably bound up with safety concerns, this project intends to shed light on responsible AI development and usage, with a specific focus on AI safety laws, policies, and public perceptions. We will conduct a systematic literature review using the PRISMA approach to study academic perspectives on AI safety policies and laws, and mine publicly available content on social media platforms such as Twitter, YouTube, and Reddit to study societal concerns about AI safety in built environments. We will then examine court cases and laws related to AI safety in 61 jurisdictions, in addition to policies that have been implemented globally.
Two case studies on AI suppliers that sell AI hardware and software to users in the built environment will also be included. Another two case studies will be conducted on built environment companies (a contractor and Hong Kong International Airport) that use AI safety tools. The results obtained from social media, court cases, legislation, and policies will be discussed with local and international experts at a workshop and then released to the public to provide the international community and Hong Kong with unique policy and legal orientations.

Keywords: Artificial intelligence; Robot; New institutional economics; PRISMA; Law; Case studies; Hong Kong
