OEN News

Grow Responsibly: Legal Priorities for AI Adoption

The use of third-party AI tools for business efficiency offers startups a double-edged sword. On the one hand, it presents a wealth of opportunities to innovate and maximize the utility of business resources; on the other, AI can create unique risks that, left unchecked, can substantially affect a company's value and risk profile. That said, keeping an eye toward protecting the company when working with AI and contracting with AI vendors can allow a business to maximize the value of automation while mitigating unnecessary risk.

Intellectual Property Protection

Using AI can vastly augment the creative process, leading to ideas for branding, inventions, software development, and more. However, the use of AI in the development of business assets can muddy the waters of IP ownership. Designs, logos, and other branding created with AI may lack copyright protection, which requires a human author. The USPTO has also established requirements for disclosure of any AI involvement in the creation of a patentable invention, which could shine a spotlight on problems of inventorship.

For instance, rights in inventions created with the assistance of AI may be compromised by the contract with the AI vendor, which may provide that the vendor, rather than the licensee company, owns all intellectual property rights in outputs from the licensed product. Businesses must ensure contracts with AI vendors explicitly state that ownership of outputs, whether designs, code, or data, resides with the licensee business, not the third-party vendor.

Relatedly, the rise of AI code assistants has provided major gains for small teams and entrepreneurs with limited resources or coding experience, but the efficiency gains of assisted technology development can come with major risks for the future value of the business and its key assets. Code assistant products are often trained on public repositories, relying upon open-source code that may be subject to a variety of problematic licenses; those licenses, alone or in combination with modules under different licenses, can create intellectual property issues.

Current code assistant products may advertise the capability to identify and call out open-source modules, but security research has demonstrated these capabilities do not always function as advertised and may struggle to identify borrowed code. As a result, the presence of legal obligations and risks may be entirely unknown to the licensee company, potentially compromising company trade secrets and the commercial value of the developed product. 

For example, open-source code may be subject to license terms that require attribution, but the user of a code assistant may not realize that the generated code was drawn from a repository or is subject to legal obligations or restrictions. Failing to provide the attribution required by the open-source license can amount to copyright infringement. Additionally, open-source communities rely upon collaboration to maintain security by sharing newly discovered vulnerabilities. If the licensee company cannot identify that its generated code features open-source modules, it may be the last to know of a widely publicized vulnerability, leaving it exposed to exploitation by bad actors.
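To make the attribution point concrete: many permissive licenses (MIT, for example) require that the license text and copyright notice be retained when the code is redistributed. A minimal sketch of an automated pre-release check for such notice files follows; the file names and the check itself are illustrative assumptions, not a substitute for an actual license review.

```python
from pathlib import Path

# Hypothetical notice files that a permissive open-source license
# (e.g., MIT) may require to be retained when redistributing code.
REQUIRED_NOTICES = ["LICENSE", "NOTICE"]

def missing_notices(project_dir: str) -> list[str]:
    """Return the names of required notice files absent from project_dir."""
    root = Path(project_dir)
    return [name for name in REQUIRED_NOTICES if not (root / name).exists()]
```

A release pipeline could fail the build whenever `missing_notices` returns a non-empty list, flagging the gap for legal review; the check confirms only that notice files exist, not that their contents satisfy any particular license.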

Even further, some “copyleft” licenses provide that any work incorporating the licensed code must be made available for free and must permit third parties to copy, distribute, and modify that work. Beyond the obvious impact of these terms on the ability to commercially exploit the developed technology, they undermine intellectual property protection. Specifically, trade secret protection is an essential tool for protecting proprietary interests in software, but trade secrets require confidentiality, and a requirement to make software openly available could destroy the confidentiality necessary to maintain trade secret protection.

Another critical aspect of IP protection is the management of data flow. When utilizing AI tools, businesses must conduct careful due diligence to fully understand how licensee data provided in the course of using the tool may be used by, or disclosed to, the licensor vendor (or the company that licenses the model underlying the vendor’s service). Without proper precautions, a licensee of an AI product or service may inadvertently share sensitive data with third parties, compromising confidentiality and company secrets and potentially violating legal obligations to protect certain information from disclosure. Such data mismanagement not only affects immediate business operations but can also carry long-term repercussions for the value of the company or its assets. Investors and acquiring companies are growing increasingly cautious about data-related issues, often examining practices and compliance as part of due diligence.

Businesses must be vigilant. Protecting IP means not only securing initial ownership rights but also foreseeing and forestalling future challenges that may arise from the interaction with AI technologies. Whether through meticulous contract drafting, strategic planning, or ongoing legal consultation, safeguarding intellectual property when working with AI is paramount for ensuring sustainable business success.

Contractual Protection

Engaging with AI vendors requires businesses to take care in contracting to protect their interests and foster long-term innovation (and IP ownership). Solid, forward-looking contracts can provide a framework that not only secures a business’ intellectual property but can be critical for compliance with applicable laws and mitigation of potential liabilities. 

As previously stated, businesses must ensure that they unequivocally own the outputs produced by AI tools. Often, the proprietary nature of a developed technology can be the largest asset of an emerging company, and contracts with AI vendors should safeguard that value. Accordingly, clarity around who owns the resultant IP, and under what conditions, is crucial to avoid future legal entanglements that could obstruct business growth or acquisition opportunities. 

It may also be appropriate to negotiate representations and warranties that the AI vendor will comply with all applicable laws. Regulators and legislatures are particularly focused on decisions made using automated technologies that affect individuals. These regulations regularly pertain to the use of tools for decisions that impact the legal rights of individuals, such as hiring or firing, insurance coverage, housing, and similar determinations. However, even the use of AI tools to optimize supply chain processes or to better manage employees in manufacturing could become subject to these regulations. Accordingly, compliance warranties from vendors should cover not only the immediate functionalities of the AI but also broader regulatory areas, such as privacy, data protection, and non-discrimination, each of which may be substantially implicated by the use of AI tools in the regular course of business.

Contracting for the appropriate allocation of risk and liability can be equally critical. Businesses should seek to carve out exceptions to limitations of liability. Certain harms, such as data breaches, discrimination arising from biased AI algorithms, or gross negligence, should not be subject to standard caps on liability. Similarly, businesses should push for indemnity from vendors for third-party claims resulting from violation of laws, intellectual property infringement, violations of privacy or publicity rights of individuals, cybersecurity incidents, discrimination, or other harms that could result from the intended use of the technology. 

Further, when leveraging a technological ecosystem that thrives on data, robust security assurances from vendors must be a non-negotiable element of any contract, covering both company information used in connection with the tool, and the security of vendor systems generally. The vendor should warrant that the system will be deployed with adequate safeguards to protect licensees’ confidential information and user data, and clear procedures should be established to outline obligations for notification and remediation of security incidents. 

It is critical to prioritize potential future growth scenarios for the business when contracting with AI vendors. There is inherent risk in becoming too reliant on a single AI vendor, which may change its services, go out of business, or otherwise become non-competitive relative to the broader market of available services. Accordingly, contracts should include provisions for the smooth transition of services if the business outgrows the vendor, if the vendor’s business fails, or if the services become non-competitive due to pricing or obsolescence. Data interoperability and accessibility clauses will help ensure that a company’s valuable data assets remain available and transferable, maintaining operational continuity and competitive advantage.

Finally, businesses should require transparency from their AI vendors, which can be essential for proving a breach or violation if an issue ever arises with the services. AI products can sometimes operate like a “black box,” where it is unclear what decisions were made to arrive at the ultimate output. Therefore, seeking comprehensive documentation covering system capabilities, the development process, and operational mechanics can be essential for demonstrating the cause of any issues. This level of clarity will facilitate more effective usage, efficient dispute resolution, and compliance with legal requirements.

Bottom Line

Given the complexities of navigating the modern business environment with AI integration, businesses must approach these relationships with strategic forethought. Executing contracts with AI vendors is not only about safeguarding present operations; it is also about insuring the company against future uncertainties.

To harness the power of AI responsibly and effectively, businesses should leverage contractual relationships to build robust legal and operational frameworks that protect their interests at every stage of business development. A proactive approach ensures that businesses remain agile and secure in their growth, maintain strategic control of their innovations, and are prepared to meet the challenges and exploit the opportunities that the future holds. With the right contracting practices, businesses can turn AI implementation from a potential risk into a calculated investment in their burgeoning enterprise’s long-term viability and success.


Stoel Rives is a leading US corporate and litigation law firm providing sophisticated business clients high quality legal services. With offices in seven states and Washington, D.C., Stoel Rives is a nationally recognized leader in project finance and the energy and natural resources industries. From deals and disputes to compliance and counseling, clients turn to Stoel Rives for their most complex business challenges.

About the Author

Joe Heinlein is an attorney at Stoel Rives LLP and serves as a trusted advisor to clients on a wide range of technology-related matters, including with respect to intellectual property and technology transactions, AI adoption/development, data privacy, and cybersecurity. Joe’s practice combines a strong corporate M&A background with technical experience, bringing a multidisciplinary perspective to transactions and counseling.
