The integration of generative AI into product development and engineering tasks has opened Pandora’s box of opportunities and risks. As legal teams strive to bridge innovation and compliance, collaboration with product and engineering departments becomes vital.

In this edition of Counsel Corner, we share insights on how legal teams can best approach this evolving landscape from experts in the field: Barath Chari, Partner @ WSGR; Francine Godrich, GC @ Focusrite PLC; and Philip Grimason, GC @ Synalogik Innovative Solutions Limited.

Strategies for Collaboration with Product and Engineering Teams

Generative AI holds immense potential for innovation but can also raise legal and ethical concerns, which is why many legal teams are taking a proactive approach to strike a balance between the two. Focusrite PLC has approached this with a simple policy: use with caution. Since most generative AI models store the information they receive, avoid feeding the one you use any commercially sensitive information, and always validate the responses it produces.

When it comes to collaboration and relationship building between these teams, it’s important to show that you are allies in innovation and not an impediment. Make it a point to remind your teams that you have an open door. As Francine Godrich says,

“It goes without saying that we’d be only too happy to remove the door to our office so that anyone who needs advice can get it. In fact, we have tried to remove the door but it turned out the door frame was part of the supporting structure of the building.”

It’s also worth noting that you should be ready to listen and learn about these technologies and ask for demos:

“Don’t be afraid to look foolish in asking to see what’s ‘under the hood’ and how everything works. Make clear that you are interested in helping them understand legal risks in concrete terms and to work with them on technical mitigation strategies.”

— Barath Chari

Navigating the Copyright Landscape

Since generative AI can potentially create content resembling existing copyrighted material, preventive measures are essential. And, as the legal landscape includes pitfalls like unwitting IP infringement and has yet to make final determinations on topics such as “Who owns the IP created by AI?”, it becomes essential to establish ground rules for engaging with these platforms.

Here are some ideas on how to deal with AI and copyright without ruling out the use of AI:

If building AI software/platforms:

  • Ensure the integrity of algorithms and materials used for AI content creation. Only use data/materials you own, have permission for, or that are in the public domain.
  • When seeking licenses, clarify your intent for data use and obtain necessary rights, such as the ability to modify and develop derivative content.
  • For the creation of new copyrightable pieces, adjust the algorithm to make significant alterations to the input data. Ensure you have the right to adapt the original material or that it’s public domain.

If using AI software/platforms created by 3rd parties:

  • Secure a Robust Contract: make sure it has clear warranties and indemnities against IP infringement
  • Review Licenses and Policies: understand the third party’s stance on data ownership and permissions
  • Examine License Terms and Restrictions: do this especially if generative AI is being used to craft new works. Potential restrictions to watch out for:
    • Advertising & Copywriting
    • Distribution
    • Commercialization
    • Monetization
    • Copyrights
  • IP Insurance: this may be necessary to cover the cost of potential IP infringement claims if AI-derived content plays a large part in your business

Addressing Bias and Reputational Risks

Unconscious bias and the generation of offensive content are serious concerns that legal teams must address to avoid reputational damage. Practical training sessions that use real-time examples from AI systems like ChatGPT have proven effective in educating staff.

Additionally, “human in the loop” policies, where AI-generated content is reviewed and validated before public release, serve as an additional safeguard. Just as a company wouldn’t let somebody writing marketing copy publish without approvals, and studios wouldn’t release a work without vetting rights, product teams should make sure that they have a process in place to evaluate and validate outputs prior to release.

It’s also worth exploring features that allow users to flag undesired output, which can then be reviewed and adjusted as necessary.
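For product teams wondering what such a review gate might look like in practice, here is a minimal sketch in Python. It is purely illustrative: the `Draft` and `ReviewQueue` names are our own inventions, not any vendor’s API, and a real system would add authentication, audit logging, and persistence.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated piece of content awaiting human review."""
    text: str
    approved: bool = False
    flags: list = field(default_factory=list)  # user-reported issues

class ReviewQueue:
    """Minimal human-in-the-loop gate: nothing ships unapproved."""

    def __init__(self):
        self.pending = []    # drafts awaiting human review
        self.published = []  # drafts a human has approved

    def submit(self, draft: Draft) -> None:
        """AI output enters the queue; it cannot go straight to publication."""
        self.pending.append(draft)

    def approve(self, draft: Draft) -> None:
        """A human reviewer validates the draft before release."""
        draft.approved = True
        self.pending.remove(draft)
        self.published.append(draft)

    def flag(self, draft: Draft, reason: str) -> None:
        """Users flag undesired output, sending it back for re-review."""
        draft.flags.append(reason)
        if draft in self.published:
            self.published.remove(draft)
            draft.approved = False
            self.pending.append(draft)
```

The key design choice is that `submit` and `approve` are separate steps performed by different actors, so no AI output reaches `published` without a human decision, and `flag` routes questionable content back into the pending queue.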

Consider the following questions to pose to your teams:

  • Is this information from an open AI data set? If so, be extremely specific with your prompt to pull the precise data you need rather than asking open-ended questions that could lead to troublesome or biased responses.
  • Is the response up to date and relevant? ChatGPT is typically about 2 years behind in its data/knowledge, so its response may be outdated or wrong in light of present knowledge and current events.
  • How did the AI arrive at its answer/conclusion? If sources cannot be obtained, use extreme caution or consider not using it at all for that particular application until sources can be verified.

Ensuring Transparency and Compliance

Legal teams need to work closely with product and engineering departments to develop clear disclosures about the AI’s involvement in content creation, including watermarks, disclaimers, and other in-product notifications. The goal is to maintain transparency while abiding by data protection laws like GDPR and evolving AI-related regulations. Training programs related to intellectual property rights help the team understand the complexities of content ownership and legality.

“Our research focus is on developing and leveraging explainable AI techniques using fully anonymized data. We aim to be able to explain to a level satisfactory for evidencing, exactly why our systems have made the recommendation it has. For generated content, a highly templated approach is used to make the content predictable and repeatable.”

— Philip Grimason

Allocating Responsibility and Accountability for AI-Generated Content

Addressing the intricate issue of accountability in AI systems is crucial as AI continues to evolve. Some organizations employ explicit policies that outline user responsibility for understanding the input, output, and risks involved in AI systems, further categorized based on the type of AI in use (focused vs. generative).

In contrast, others prioritize consumer expectations, emphasizing that ultimate accountability lies with the organization, not the developers or AI vendors. Internal frameworks should guide the ‘do’s and don’ts’ for developers, backed by contractual protections from vendors, making accountability a collective effort involving multiple departments.

At Focusrite PLC, Francine uses the following Do’s and Don’ts table, followed by a simple final step: anything that is publicly available must be validated.

Focused AI:
  • DO check for any bias in the data sets relevant to the use.
  • DON’T use a system which draws data relevant to people or people trends unless the information is fully anonymized.

Generative AI:
  • DO verify any code or system configuration.
  • DON’T input company-sensitive data into systems unless it has been agreed by the company.

“I love a policy as long as 1. it isn’t too long, 2. is easy to understand, and 3. is practical. Our policy is clear that anyone using an AI system is responsible for understanding what they are inputting to the system, what it is generating by way of response, and what is the risk involved.”

— Francine Godrich

Consider the following policy table Francine uses, which walks through a range of scenarios to help the user understand what is actually going on:

  • Action: Code is not working.
    What is actually going on: This can be lots of things. The code may be unable to run, may not conform to the rules of the programming language, or may simply be doing something you don’t want it to do.
  • Action: A code sample is entered into an AI system.
    What is actually going on: Confidential and commercially sensitive information is being disclosed publicly.
  • Action: The AI system matches the pattern to produce a matching output.
    What is actually going on: It takes the inputs and expected outputs and tries to find patterns so that it can produce functional code based on the input.
  • Action: The code is fixed by the AI system.
    What is actually going on: The code is now public.

Although generative AI is challenging the traditional boundaries of legal compliance, the balance between innovation and compliance is attainable through collaboration and proactive policies. Legal teams, naturally, are at the forefront of this critical process and can continue to pave the best possible path forward through transparency, accountability, and compliance.

Do you want access to the best resources and network for confronting today’s AI-legal challenges? Apply for membership at TechGC today.