Posted: 12/06/2023
Artificial intelligence (AI) is transforming the way that we live and work. As AI changes how businesses operate, supply services and work with customers and clients, the contracts that govern those relationships and frameworks will need to be adapted in light of the new challenges posed by the introduction of AI.
Traditional forms of contract often lack the detail needed when applied to scenarios involving AI. Crucially, the supply of AI systems and technology in business contracts presents new challenges that generic SaaS or software development and licence agreements are unlikely to consider. This article will explore some of the issues that such commercial contracts should address when dealing with the provision of AI solutions.
Delivery
The delivery of an AI solution to a customer is often a multi-faceted and iterative process, similar in some ways to a typical software development and supply contract, but with additional phases that are unique to AI software. A software development agreement is likely to deal with issues such as pre-delivery testing with the customer’s equipment, installation, and acceptance testing, all of which are equally applicable to the delivery of AI software. To deal with the more nuanced aspects of AI software delivery, however, a contract should also address concepts such as AI training and delivery against a specification or other statement of requirements.
For example, consideration should be given to the method of training the AI. Will the AI software be delivered pre-trained to the extent required to adequately deliver the solution purchased by the customer, or will the customer be required to train the AI itself once delivery has taken place? Whose data will be used to train the AI? Will the supplier provide data to train the AI before delivery takes place, or will the customer provide its own data? A contract should ideally allocate the parties’ respective obligations with regard to the training of the AI.
A further AI concept that impacts contracts is the ‘explainability’ of AI: whether, and how, the way in which the AI reaches particular decisions or outputs can be explained. This is not always achievable, in particular where off-the-shelf AI solutions are supplied, as a supplier may not in fact be willing to explain how its proprietary AI works.
This intersects with commercial contracts in the development of software according to a specification. It may be impossible to draft an accurate or effective specification for an AI solution where the customer does not understand how such a solution might work and the supplier is not prepared to assist in defining that scope. In such scenarios it may be useful to step away from a specification format and instead rely on the customer’s desired objectives or the outcomes to be produced by the AI (eg akin to an agile software development project).
The supplier in that case would be obliged to provide an AI solution that meets the objectives-based criteria of the customer. This would also make the performance of the solution itself more easily measurable as performance indicators would be matched to the objectives of the customer, rather than to a specification dependent on analysis of the unknown inner workings of the AI solution.
Intellectual property
Intellectual property (IP) rights are of course a hot topic when it comes to AI. As our previous article on AI and copyright indicates, it is important to set out the ownership of, and responsibility for, the underlying AI, as well as the training data and the outputs and decisions of the AI, at an early stage to avoid issues further down the line. In a commercial contract for the supply of AI, there are numerous items that could generate questions of IP ownership and for which a party should consider adequate protection. This may include IP rights in:
- the underlying AI software and models;
- the data used to train the AI;
- the outputs and decisions generated by the AI; and
- the parties’ background IP.
Other than perhaps the parties’ background IP, the IP rights subsisting in any of the items above could in theory be owned by either party. The parties to such an agreement should therefore carefully consider what items they want to retain ownership of, and what items are to be licensed or assigned to the other party under the agreement.
As our previous article also illustrated, there is considerable scope for copyright infringement claims to be brought against AI developers and users, so sufficient protection should be built into commercial contracts for the supply of AI. One way to provide this protection is through more AI-specific warranties and indemnities covering claims for infringement at the various stages of the AI lifecycle, whether in the training phase, where an AI has been trained using copyright-protected works without a licence or consent, or where the outputs of the AI infringe third-party rights. The drafting of such clauses should therefore take into account the more specific AI-based scenarios in which the development or use of the AI could give rise to liability for infringement of third-party rights.
Legislative landscape
There is not currently any legislation in the UK that specifically regulates AI, and the government’s recent white paper on AI indicates that there is nothing imminent on the legislative horizon. However, the government’s approach of regulating AI through guidance and codes issued by sectoral regulators means that the regulatory landscape will change, and that change is likely to be far more significant, rapid and frequent than typical regulatory change, at least to begin with.
This may result in certain elements of an AI solution becoming restricted or prohibited by regulation, impacting the respective value of the contract to the parties. The parties should consider this at the outset, for example by incorporating a right to terminate the agreement if changes in law or guidance materially inhibit a party’s ability to comply with contractual obligations, or impact the functionality of the AI solution.
The potential of innovative AI solutions has been well publicised, and we are likely to see more and more contractual arrangements governing the provision of AI solutions. It is therefore critical that businesses have robust contracts in place to account for the unique and novel risks associated with such innovation. The issues considered above form only part of a much wider discussion that businesses should be having with their legal advisors to proactively assess how their existing contractual frameworks and policies should be updated if they are to be applied in an AI context. Other factors, including data protection and cyber security, are equally important, as is controlling the customer’s use of the supplied solution. By considering these issues now, businesses will be better placed to leverage the opportunities afforded by AI whilst mitigating risks and potential liabilities.