ANACOM

TITLE/RESPONSIBILITY:

Explaining the black box [electronic document] : when law controls AI / Alexandre de Streel... [et al.]

AUTHOR(S):

STREEL, Alexandre de, and others

PUBLICATION:

Brussels: CERRE, 2020

NOTES:

"The explainability of Artificial Intelligence algorithms, in particular Machine-Learning algorithms, has become a major concern for society. Policy-makers across the globe are starting to reply to such concern. In Europe, a High-level Expert Group on AI has proposed seven requirements for a trustworthy AI, which are: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity/non-discrimination/fairness, societal and environmental wellbeing, and accountability. On that basis, the Commission proposed six types of requirement for high risk AI applications in its White Paper on AI: ensuring quality of training data; keeping data and records of the programming of AI systems; information to be proactively provided to various stakeholders (transparency and explainability); ensuring robustness and accuracy; having human oversight; and other specific requirements for certain particular AI applications, such as those used for purposes of remote biometric identification. Thus in both documents, transparency and explainability are considered key. This is why several new obligations, specific to automated systems (and thus, to AI), in particular in data protection rules and consumer protection rules, have been adopted in Europe to enhance the explainability of algorithmic decisions."

SUBJECT AREA:

Electronic Communications

SUBJECTS:

Electronic communications; Artificial intelligence; Consumer protection

PUB. DATE:

2020

RECORD TYPE:

Multimedia

LANGUAGE:

ENG

Monographs