Heterogeneity of AI-Induced Societal Harms and the Failure of Omnibus AI Laws

2023-03-20 15:23:40
Sangchul Park

Abstract

AI-induced societal harms mirror existing problems in the domains where AI replaces or complements traditional methodologies. However, trustworthy-AI discourses postulate the homogeneity of AI, aim to derive common causes for the harms it generates, and demand uniform human interventions. Such AI monism has spurred legislation for omnibus AI laws requiring any high-risk AI system to comply with a full, uniform package of rules on fairness, transparency, accountability, human oversight, accuracy, robustness, and security, as demonstrated by the EU AI Regulation and the U.S. draft Algorithmic Accountability Act. However, it is irrational to require high-risk or critical AIs to comply with all of the safety, fairness, accountability, and privacy regulations when it is possible to separate the AIs that entail safety risks from those that entail biases, infringements, or privacy problems. Legislators should instead gradually adapt existing regulations by categorizing AI systems according to the types of societal harms they induce. Accordingly, this paper proposes the following categorizations, subject to ongoing empirical reassessment. First, for intelligent agents, safety regulations must be adapted to address the incremental accident risks arising from autonomous behavior. Second, for discriminative models, law must focus on mitigating allocative harms and disclosing the marginal effects of immutable features. Third, for generative models, law should optimize developer liability for data mining and content generation, balancing the potential social harms of infringing content against the negative impact of excessive filtering, and should identify the cases in which a model's non-human identity must be disclosed. Lastly, for cognitive models, data protection law should be adapted to effectively address privacy, surveillance, and security problems and to facilitate governance built on public-private partnerships.

URL

https://arxiv.org/abs/2303.11196

PDF

https://arxiv.org/pdf/2303.11196.pdf

