SUMMARY
When this happens, the relevant human actors may not be sufficiently able, motivated, and willing to prevent undesired outcomes (Elish, 2019; Flemisch et al., 2017). Similar considerations also underlie the literature on so-called gaps in the "transparency" and "explainability" of AI systems (Doran et al., 2017) and their moral (Coeckelbergh, 2020) and legal implications (Edwards & Veale, 2017; Noto La Diega, 2018; Wachter et al., 2017). Some authors have argued against the existence, relevance, or novelty of AI-induced responsibility gaps (Simpson & Müller, 2016; Tigard, 2020), while others have proposed general principles to . . .