Babylonian Pearl

Corporations Law and Artificial Intelligence

Note: This is a draft, a messy one at that. Please send any thoughts through.

23 April 2023+

Alright, so the general sentiment is that AI is moving too fast for law, whether judge-made law or legislation. However, I believe there are existing regulatory frameworks that, if regulatory bodies were given the mandate, could be applied to and used to regulate AI practices.

Where is the risk? Who is, right now, using AI in ways that cause harm? Corporations. There is significant risk in government and individual practices too, as anyone with a powerful enough laptop and some knowledge can use AI to cause harm. But it is companies that are extracting data and turning it into profit.

So let's use corporations law to regulate AI practices. This is not without precedent. Directors' duties can be expanded to new fields and responsibilities. Companies are already self-regulating, showing a sense of responsibility: Amazon, for example, has introduced a 'fairness metric' called conditional demographic disparity (a rough sketch of what such a metric measures follows below).
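
For context, conditional demographic disparity compares a group's share of negative outcomes with its share of positive outcomes, averaged across strata of some conditioning attribute (region, product line, and so on). The following is a minimal Python sketch of that calculation under those assumptions; the data, column names, and function names are illustrative only and are not Amazon's implementation.

```python
import pandas as pd

def demographic_disparity(df, group_col, outcome_col, group_value):
    # Demographic disparity (DD) for one group: the group's share of
    # negative outcomes minus its share of positive outcomes.
    # Assumes outcome_col holds 1 for accepted and 0 for rejected.
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0
    share_of_rejections = (rejected[group_col] == group_value).mean()
    share_of_acceptances = (accepted[group_col] == group_value).mean()
    return share_of_rejections - share_of_acceptances

def conditional_demographic_disparity(df, group_col, outcome_col,
                                      group_value, strata_col):
    # CDD: the size-weighted average of DD within each stratum of the
    # conditioning attribute (e.g. region or product line).
    total = len(df)
    cdd = 0.0
    for _, stratum in df.groupby(strata_col):
        dd = demographic_disparity(stratum, group_col, outcome_col, group_value)
        cdd += (len(stratum) / total) * dd
    return cdd

# Hypothetical loan-approval data: outcome 1 = approved, 0 = declined.
data = pd.DataFrame({
    "group":   ["a", "a", "a", "b", "b", "b", "a", "b"],
    "outcome": [  1,   0,   0,   1,   1,   0,   1,   1],
    "region":  ["x", "x", "y", "x", "y", "y", "y", "x"],
})
print(conditional_demographic_disparity(data, "group", "outcome", "a", "region"))
```

A positive value indicates that, even after conditioning on the stratifying attribute, the group carries a larger share of rejections than of acceptances; that is the kind of internal signal a self-regulating company can monitor and a regulator could ask to see.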

"AI regulation must also address the accountability of those who develop, deploy, and use AI systems. This means holding companies responsible for the impact of their systems on society, and requiring transparency and explainability of AI systems. It also means establishing mechanisms for redress and remedies in cases where AI systems cause harm or discrimination. Corporate responsibility for AI must be a key part of any regulatory framework." Havard Business Review

A common position is that while there are existing frameworks that apply to AI, such as data protection and privacy laws, the complexities of AI require that a separate regulatory framework be applied (e.g. HBR). I argue this misses key regulatory frameworks that may be used in lieu of a comprehensive, AI-specific regulatory framework. The philosophical underpinning that responsibility lies with a person (be it the developer company, the owner of the data used to train the model, or the person using the AI) shows that harm-based regulatory frameworks can be applied. For example, a company may implement AI that generates advertising containing false or misleading claims. There are many potential outcomes here.

Self-Regulation

It seems to be accepted that companies that develop or use AI have a responsibility to ensure that their systems are safe and trustworthy, and that they do not harm individuals or society as a whole.

This is evidenced by the (usually voluntary) ethics frameworks that have been published (e.g. by the Department of Infrastructure) and by the self-regulatory steps companies have taken.

International Efforts at Regulation

The EU has proposed AI regulations (the AI Act), which are designed to ensure that AI systems are safe and trustworthy. These regulations cover a wide range of AI applications, including critical infrastructure, transportation, healthcare, and more. Key provisions of the proposal include a risk-based classification of AI systems, outright prohibitions on certain practices (such as social scoring), and conformity, transparency, and human-oversight requirements for high-risk systems.

In the United States, the proposed Algorithmic Accountability Act would establish a regulatory framework for automated decision systems, requiring companies to conduct impact assessments and tasking the Federal Trade Commission with oversight. Other US proposals and frameworks address specific applications such as autonomous vehicles and facial recognition, with provisions for safety, privacy, and transparency of AI systems, as well as requirements for human oversight and accountability.

Other countries and regions are also developing their own AI regulations. For example, China has issued guidelines on AI ethics, while Canada has created a national AI strategy that includes a focus on responsible AI development. In addition, several international organizations, such as the OECD and UNESCO, have developed guidelines and recommendations for AI regulation.
