Abstract
Összefoglalás (Summary). This paper is based on my lecture on the topic. I first deal with general questions, then discuss the planned "EU AI Act", then briefly present a VW project, then talk about "explainable AI", and then mention a domestic initiative of my own, the Alfi project. Finally, an outlook closes the paper.

Summary. This paper is based on a lecture on the topic. In my other (German) affiliation I am manager of a large-scale EU project called "HumanE AI Net" (funded with 12 million euros), comprising 53 leading EU institutions, including large universities (UCL London, LMU Munich, Sorbonne, Sussex and ELTE), networks of research institutes (Fraunhofer, Max Planck Gesellschaft, INRIA, CNR Italy), and large international companies (ING Bank, SAP, Philips, Airbus). In this paper I discuss general issues related to humane AI, the planned EU AI Act, social credit systems, explainable AI, and the Alphie project.

In April 2021, the European Commission proposed a regulation on artificial intelligence, known as the AI Act. The regulation aims at AI with a human face, in a European dimension. Although it is still only a draft, the stakes are high. I maintain, however, that the planned law has faults that should be corrected before the text is passed into law.

Another subject is the study, and the prohibition (at least in Europe), of social credit systems. The original "Social Credit System" is a national credit rating and blacklist developed by the Government of the People's Republic of China. Proponents of the system claim that it helps regulate social behaviour, improves citizens' "trustworthiness" (which includes paying taxes and bills on time), and promotes the spread of traditional moral values. Critics of the system, however, argue that it goes far beyond the rule of law and violates people's legitimate rights, in particular the rights to reputation, privacy and personal dignity, and that it can be a tool for extensive government surveillance and for the suppression of dissent.

"Explainable AI" (XAI) has become a hot topic in recent years. AI applications are mostly "opaque": this is especially true for learning systems and, by definition, for neural networks (NN). The current fashion, "deep learning", usually means the application of particularly opaque neural networks anyway. It has come to be accepted that we do not know what the system is doing and why. So, let's change that! With this tenet, XAI was born. I review some solutions to the problem.

In this paper I also mention an application, Alphie, the first version of which was developed in the OTKA project "Good Mobile" and which is now supported by the MI National Laboratory. Alphie is a science-based, playful application for children that helps them use digital tools more consciously and within limits, while developing a variety of skills. It plays the role of a "grandmother" who shows emotions towards the child: she can be, for example, angry or loving. The application makes the corresponding sounds and facilitates real social interactions (for example, it sends the child out to play football).
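The abstract mentions reviewing solutions to the opacity problem without naming them. As a minimal sketch of one widely used family of XAI methods, local surrogate explanations in the style of LIME (not a method taken from the paper itself), the following Python snippet explains a single prediction of an opaque model by fitting an interpretable linear model to the black box's behaviour in a neighbourhood of that prediction. The dataset, model choices and all parameters here are illustrative assumptions.

```python
# A minimal, illustrative sketch of a LIME-style local surrogate
# explanation; all names and parameters are assumptions, not the
# paper's own method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque "black box" model on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Pick one instance whose prediction we want to explain.
x0 = X[0]

# Sample perturbations around x0 and query the black box on them.
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.3, size=(200, X.shape[1]))
p = black_box.predict_proba(Z)[:, 1]

# Weight samples by proximity to x0, then fit a linear surrogate;
# its coefficients act as local feature importances.
w = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
for i, c in enumerate(surrogate.coef_):
    print(f"feature {i}: local importance {c:+.3f}")
```

The surrogate's coefficients approximate how each feature influences this one prediction locally; the black box itself is only queried, never inspected, which is why such post-hoc methods apply to any opaque learner, including deep networks.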