Abstract

Researchers who study data collection, analysis, and use in the era of big data and algorithms are paying increasing attention to inferred information. Information inferred by an algorithm implicates distinct personality and property interests and challenges existing theories of personal information and privacy. However, China does not yet have a complete method of legal regulation for such information. This article examines how to recognize the nature of inferred information and how to carry out appropriate legal evaluation and regulation so as to better protect the legitimate rights and interests of the relevant subjects in China. Drawing on China's social needs and judicial practice, we argue that the "contextual integrity" theory of privacy developed by Professor Nissenbaum can be used to evaluate whether inferred information has been infringed upon, and that China is likely to adopt the US regulatory model.
