AI IN BUSINESS: CONFIDENTIALITY RISKS & INTELLECTUAL PROPERTY CONSIDERATIONS

Artificial intelligence (AI) systems are now embedded in many everyday business tools, from email and document editors to specialized scientific software. While they can significantly increase efficiency, they also introduce unique confidentiality risks and intellectual property considerations because they process user input in ways that can be stored, analyzed, or used to train underlying models. This overview summarizes key concepts, identifies risks for specific tools, provides instructions for disabling training where possible (with direct links to vendor documentation), and explains how current intellectual property rules apply to AI-assisted work. This overview is informational only; for specific guidance, seek the advice of counsel.

Key Points:
– Do not input confidential data into public AI tools unless training is disabled.
– Use enterprise accounts for AI tools wherever possible.
– AI alone cannot qualify as an inventor or author under current U.S. law.

The use of generative AI tools (such as ChatGPT or Gemini) introduces confidentiality risks that differ from those of traditional software. Awareness of these risks is especially important when handling sensitive information such as trade secrets, proprietary methods, or technical data.

1. Data Used for Model Training
When AI tools have model training settings enabled, user inputs may be retained and analyzed to improve the AI’s performance. This creates a risk of exposure of confidential or proprietary content beyond your organization. Even if such content is not intentionally shared with others, its presence in the AI’s training data can compromise trade secret status.

2. Data Not Used for Model Training
When training settings are disabled, many AI tools will not use your data to improve their models. However, this does not necessarily mean your inputs are not stored or logged. Inputs may still be retained for purposes such as abuse monitoring, safety, or debugging.

Tool-by-Tool Summary (intended use, key risks, and confidentiality controls / training settings):

ChatGPT (OpenAI) — General-purpose AI
  Key Risks: Inputs are stored and used for training unless chat history and model training settings are turned off.
  Controls: Settings → Data Controls → turn off “Improve the model for everyone.” See the OpenAI Data Controls FAQ.

Google Gemini — General-purpose AI
  Key Risks: Inputs may be stored and used for training unless disabled.
  Controls: Use a Google Workspace account or disable via Gemini Activity Settings. See the Gemini Apps Privacy Hub.

Perplexity — General-purpose AI
  Key Risks: Unless disabled, inputs and outputs may be used to improve models; personal or proprietary inputs could be retained.
  Controls: In user settings (Profile → Settings → AI Data Usage), toggle off “AI Data Usage” to opt out of training. See Perplexity’s Privacy Policy.

Claude (Anthropic) — General-purpose AI
  Key Risks: Feedback submissions (e.g., thumbs-up/down) may be used for model training; flagged content may be retained for safety analysis.
  Controls: Claude does not train on consumer input/output by default unless you explicitly opt in (e.g., via feedback or programs). Conversation data auto-deletes within about 30 days unless flagged. See “How do you use personal data in model training?” For commercial products (e.g., Claude for Work, the Anthropic API), see “Is my data used for model training?”

Microsoft Copilot — General-purpose AI
  Key Risks: Risk of exposing internal confidential data if permissions within organizational data are not properly set. See Microsoft’s Data, Privacy, and Security for Microsoft 365 Copilot article for more details.
  Controls: Prompts, responses, and Microsoft Graph data are not used to train foundation LLMs. Admins can manage retention via Microsoft Purview, and users can delete activity history. See Data, Privacy, and Security for Microsoft 365 Copilot.

Recommendations for Responsible AI Use

  • Standardize Tools in Use. Adopt a list of approved AI tools, and ensure teams are trained on their appropriate use. Regularly review and update approved tools as technology evolves.
  • Use Enterprise or Team-Controlled Accounts. Choose AI tools that offer administrative settings, control over training preferences, and clear data-handling assurances. Avoid free public-facing tools for use with confidential or proprietary information.
  • Always Disable Model Training. Assume that if model training settings are not explicitly disabled, any data inputs may be retained or used for training purposes. Verify these settings before use and audit them regularly.
  • Use AI for Exploration, Not Final Work Product. Treat generative AI outputs as a brainstorming and research tool. Always review, fact-check, and validate AI-generated content before incorporating it into final deliverables or patent applications.
  • Match Tool Sensitivity to Data Sensitivity. Use the most secure, enterprise-grade tools when working with sensitive or high-risk information. Reserve lower-security tools for public or low-risk data only.
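
Parts of the recommendations above can be operationalized in tooling. The sketch below is a minimal, hypothetical pre-submission screening gate that flags likely-confidential text before it reaches any AI tool; the pattern list and function names are illustrative assumptions, not any vendor’s API, and a real deployment would tailor the patterns to the organization’s own data-classification policy.

```python
import re

# Illustrative patterns an organization might treat as confidentiality markers.
# These are assumptions for the sketch, not an official or complete list.
CONFIDENTIAL_PATTERNS = [
    r"(?i)\bconfidential\b",
    r"(?i)\btrade\s+secret\b",
    r"(?i)\bproprietary\b",
    r"\b[A-Z]{2,}-\d{3,}\b",  # e.g., internal ticket/part codes like "PRJ-1042"
]

def screen_prompt(text: str) -> list[str]:
    """Return the patterns that match `text`; an empty list means no flags."""
    return [p for p in CONFIDENTIAL_PATTERNS if re.search(p, text)]

def is_safe_to_submit(text: str) -> bool:
    """Clear a prompt for submission only if no confidentiality pattern matches."""
    return not screen_prompt(text)
```

A gate like this catches only obvious markers; it supplements, rather than replaces, training on approved-tool policies and human judgment about data sensitivity.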

As artificial intelligence tools become more embedded in innovation and creativity, U.S. intellectual property agencies have made it clear: only human contributions are protectable under current law. The U.S. Patent and Trademark Office (USPTO) and the U.S. Copyright Office (USCO) have each issued guidance to help applicants understand the limits of IP protection when AI is involved. This summary breaks down the current stance on AI-assisted inventions and creative works, and how to properly disclose and integrate human input.

Patent Law: Human Inventorship Required
The USPTO requires that a human must have significantly contributed to the invention for it to be patentable. AI can assist in the process, but AI alone cannot qualify as an inventor, nor can an invention be patented if it is conceived solely by an AI system.

Official Guidance:

Disclosure Requirements:

  • When filing a patent application, applicants must identify the natural person(s) who contributed to the conception of the invention.
  • If AI played a role, the application must explain how the human inventor used AI as a tool.
  • The USPTO may request a detailed explanation of the human inventor’s contribution during examination or challenge proceedings.
  • NOTE: A purely AI-generated technical discovery that is not publicly known could be protectable as a trade secret. If you want to keep it that way, do not include it in a patent application, since applications become public (other than provisional applications that are never converted).

Copyright Law: Human Authorship Required
The USCO registers only material authored by a human. Purely AI-generated content is not copyrightable, and AI-generated material included in a work must be disclosed.

Official Guidance:

Disclosure Requirements:

  • Applicants must disclose the inclusion of AI-generated material when submitting a registration application.
  • Use the USCO’s Standard Application and describe only the human-authored elements.
  • The “Note to Copyright Office” field should be used to explain the role of AI.
  • Failure to disclose AI use may lead to revocation or limitation of registration, as seen in the Zarya of the Dawn case (USCO limited protection to the human-authored selection and arrangement of images, not the AI-generated portions).

Best Practices:

  • Document your contribution. Keep detailed notes or version histories showing human input.
  • Treat AI like a tool. Use AI to assist, not replace, the human creative or inventive spark.
  • Be transparent. Disclose AI involvement clearly in both patent applications and copyright registrations (and to your counsel prior to any filing!).
  • Separate protectable and non-protectable elements. Clarify what was authored or invented by a human.
  • Stay updated. Monitor ongoing developments from the USPTO and USCO.