As AI technology advances, it raises numerous legal questions that businesses and individuals must consider. Companies big and small are shaping policies that dictate what employees can and can’t do with AI tools like OpenAI’s ChatGPT. Some organizations remain bullish on adoption, while others restrict access to manage the associated risks.
IP & “Hallucinations”
We have to look at these questions through the lens of intellectual property (IP) rights. Generative AI systems like OpenAI’s ChatGPT learn and improve from the data they process, which can lead to potential breaches of IP rights. For instance, in May 2023, Samsung banned the use of generative AI tools after discovering an employee had uploaded sensitive code to ChatGPT. The incident highlighted the risk of unintentionally sharing proprietary information with AI systems: once uploaded, that information may be incorporated into the tool’s training data, potentially making it accessible to other users.
The ability to retrieve and delete data from AI systems is limited, complicating efforts to protect proprietary information. Businesses must consider how their data policies address the use of AI to prevent similar breaches. As AI tools evolve, the balance between leveraging AI for innovation and protecting intellectual property remains a critical concern.
Another significant issue is liability for the errors, or “hallucinations,” that AI generates. Hallucinations are factually incorrect or unrelated responses produced by the system. Companies using AI must determine who is accountable for these errors, especially if they lead to harmful consequences or legal disputes. That responsibility extends to ensuring AI-generated output complies with applicable laws and regulations, which can be complex given how quickly AI technology evolves. Accountability measures can include limits on who may access the information, rigorous testing, clear usage policies, and regular audits of AI outputs.
Can AI Protect Your Sensitive Information?
Data privacy is another critical legal concern related to AI. Generative AI tools often process vast amounts of personal and sensitive data, raising questions about compliance with data protection laws. Samsung’s May 2023 policy restricted the use of generative AI on company devices; the company cited the difficulty of retrieving and deleting data stored on external servers as a significant risk, underscoring how hard it is to maintain control over data once AI systems have processed it. The move reflects broader industry concerns about data leaks and unauthorized access.
Formulating company policies around AI use is both a necessity and an opportunity for businesses to take control of these risks. Some companies have pioneered specific AI-use policies to mitigate potential issues; others have built their own internal AI tools for company use. Protective policies often include data classification, prohibitions on entering confidential information into AI systems, and employee accountability for AI outputs. By building such policies, businesses can navigate the legal complexities associated with AI and ensure compliance with relevant regulations. As the AI landscape evolves, proactive policy-making will be crucial for companies to manage risks and leverage AI responsibly, putting them in the driver’s seat of their AI journey.
Protect Your Business with Rodriguez-McCloskey PLLC
Companies must proactively address these issues to safeguard their interests and ensure compliance. Schedule a consultation with us if you have questions or need guidance on how AI might impact your business. Understanding AI’s legal implications is the first step towards leveraging its potential for your business.