
AI in the Workplace: Legal Considerations

The enthusiastic adoption of artificial intelligence (AI) in the workplace has given rise to a cascade of applications that leverage the technology. Companies considering adopting a new AI-based software solution face decisions about which solutions to adopt and what to consider when evaluating these options. In addition to technical and functional decisions, adopting an AI solution may have legal implications because governments have increased their focus on regulating AI.

AI can be leveraged for a variety of uses within transportation, including fleet monitoring and maintenance, shipping statistics, routing programs that monitor weather and road conditions, and deep analysis of various elements of a company’s business. Two rapidly expanding applications that rely on generative AI are contract drafting and review tools and virtual “AI assistants.” Virtual “AI assistants” can monitor meetings, record transcriptions, take meeting notes, and suggest follow-up tasks to participants.

Understanding how AI is integrated into a particular product or solution will help ensure that a company does not inadvertently increase legal exposure or risk exposing confidential information. A company implementing a new generative AI solution should consider the following:

  • What type of AI is being used? Is the AI predicting patterns, analyzing trends, or doing other complex data analysis, or is it generating a unique output based on the input provided?
  • What are the capabilities and limitations of the product? For example, contract review and drafting software may be able to “issue spot” based on the parameters you provide, or it may suggest redline language based on standard contract terms from a playbook. What can the software consistently produce, and how is it presented for the company’s use?
  • What does the software do with the data that is collected? Many AI solutions recognize that companies do not want their confidential or proprietary data used to train an AI model. The best practice is to confirm that the AI model does not store or retain your data, or use it for training, outside your enterprise.
  • Are there any limitations on the output that the software generates? What are the terms of use governing the work product created by the software? Does the software rely on, or was it trained with, data that may have intellectual property protections? If so, does the software provider offer protections against potential liability arising from this use?
  • During implementation, it is essential to provide guidelines for using AI products. We suggest a company consider the following:
    • When may generative AI be used? Are there tasks or matters for which the company does not want AI used?
    • What processes are in place to check the work of the AI product? While AI-based products are incredibly powerful, AI is still a developing technology. Companies should establish methods to review AI-generated materials as part of the implementation process.
    • Is AI being used for automated decision-making? AI-based products can analyze huge amounts of data to provide input on your workers and prospective job applicants. While the space remains mostly unregulated in the United States, regulators are beginning to focus on preventing bias and requiring human input in AI decision-making, especially where the decision-making impacts an individual’s job or finances. For example, the California Privacy Protection Agency has issued draft regulations that may require certain opt-out rights or other notices. Likewise, the Colorado AI Act (set to go into effect in 2026) has similar provisions regarding notice and opt-out requirements. Similar provisions are in place or being considered in other states, including Virginia.
    • How is the information shared? AI assistants and contract review platforms can quickly create a large volume of information, and this work product may contain confidential or privileged information. Care should be taken to protect confidential and privileged information and to place strict controls on the use of AI functionality that handles this sensitive data.
The Transportation Brief®

A quarterly newsletter of legal news for the clients and friends of Scopelitis, Garvin, Light, Hanson & Feary


News from Scopelitis is intended as a report to our clients and friends on developments affecting the transportation industry. The published material does not constitute an exhaustive legal study and should not be regarded or relied upon as individual legal advice or opinion.
