
    Artificial Intelligence Law

    CLD has published a number of articles on artificial intelligence and the law. 

    Abstract: This paper adds to the discussion on the legal personhood of artificial intelligence by focusing on one area not covered by previous works on the subject – ownership of property. The author discusses the nexus between property ownership and legal personhood. The paper explains the prevailing misconceptions about the requirements of rights or duties in legal personhood and discusses the potential for conferring rights or imposing obligations on weak and strong AI. While scholars have discussed AI owning real property and copyright, there has been limited discussion of the nexus between AI property ownership and legal personhood. The paper discusses the right to own property and the obligations of property ownership in nonhumans and applies them to AI. The paper concludes that the law may grant property ownership and legal personhood to weak AI, but not to strong AI.

    Abstract: This article explores the legal and ethical implications of big data’s pursuit of human ‘digital thought clones’. It identifies various types of digital clones that have been developed and demonstrates how the pursuit of more accurate personalized consumer data for micro-targeting leads to the evolution of digital thought clones. The article explains the business case for digital thought clones and how they are the commercial Holy Grail for profit-seeking big data firms and advertisers, who have commoditized predictions of digital behavior. Given big data’s industrial-scale data mining and relentless commercialization of all types of human data, this article identifies some existing protections but argues that more jurisdictions urgently need to enact legislation similar to the General Data Protection Regulation in Europe to protect people against unscrupulous and harmful uses of their data and the unauthorized development and use of digital thought clones.

    Abstract: Despite an emerging international consensus on principles of AI governance, lawmakers have so far failed to translate those principles into regulations in the financial sector. Perhaps in order to remain competitive in the global race for AI supremacy without being typecast as stifling innovation, typically cautious financial regulators are uncharacteristically allowing the introduction of experimental AI technology into the financial sector, with few controls on the unprecedented risks to consumers and financial stability. Once unregulated AI software causes serious economic harm, the ensuing public and regulatory backlash could lead to over-regulation that would harm innovation in this potentially beneficial technology. Artificial intelligence is rapidly influencing the financial sector, with innumerable potential benefits such as enhancing financial services and improving regulatory compliance. This article argues that the best way to encourage a sustainable future for AI innovation in the financial sector is to support a proactive regulatory approach adopted before any financial harm occurs. This proactive approach should implement rational regulations that embody jurisdiction-specific rules in line with carefully construed international principles.

    Abstract: Big Tech's unregulated roll-out of experimental AI poses risks to the achievement of the UN Sustainable Development Goals (SDGs), with particular vulnerability for developing countries. The goal of financial inclusion is threatened by the imperfect and ungoverned design and implementation of AI decision-making software making important financial decisions affecting customers. Automated decision-making algorithms have displayed evidence of bias, lack of ethical governance, and limited transparency in the basis for their decisions, causing unfair outcomes and amplifying unequal access to finance. Poverty reduction and sustainable development targets are put at risk by Big Tech's potential exploitation of developing countries by using AI to harvest data and profits. Stakeholder progress toward preventing financial crime and corruption is further threatened by the potential misuse of AI. In light of such risks, Big Tech's unscrupulous history means it cannot be trusted to operate without regulatory oversight. The article proposes effective pre-emptive regulatory options to minimize scenarios of AI damaging the SDGs. It explores internationally accepted principles of AI governance and argues for their implementation as regulatory requirements governing AI developers and coders, with compliance verified through algorithmic auditing. Furthermore, it argues that AI governance frameworks must require a benefit to the SDGs. The article argues that proactively predicting such problems can enable continued AI innovation through well-designed regulations adhering to international principles. It highlights the risks of unregulated AI causing harm to human interests, where public and regulatory backlash may result in over-regulation that could damage the otherwise beneficial development of AI.

    Abstract by the author: This paper argues for a sandbox approach to regulating artificial intelligence (AI) as a complement to a strict liability regime. The authors argue that sandbox regulation is an appropriate complement to strict liability, given the need to balance a regulatory approach that protects people and society against the need to foster innovation amid constant and rapid developments in the AI field. The authors analyze the benefits of sandbox regulation as a supplement to a strict liability regime, which by itself would create a chilling effect on AI innovation, especially for small and medium-sized enterprises. The authors propose a regulatory safe space in the AI sector through sandbox regulation, an idea already embraced by European Union regulators, in which AI products and services can be tested within safeguards.
