Does your firm have an AI usage policy?

AI tools are showing up everywhere now, and they are extremely tempting for a variety of reasons. They can reduce admin and research time, suggest answers, write better blog posts (this one was human-written), and even handle your marketing.

Tempting as they are, there is a risk that once inputs are provided to an AI model, they are lost in a sea of information and could later be used to piece together sensitive client information.

AI tools learn from the data they are provided. That data can be combined with other information found on the dark web, such as a user's login history, credentials exposed in other data breaches, and online digital activity or footprints, to build a very accurate profile of an individual (an internal employee or a client) and increase the chances of a successful hack or unauthorized access.

At SideDrawer, we have recently implemented an AI use policy that prohibits entering any confidential or sensitive data, such as company or client information, into any AI model. The policy was introduced to, and must be signed off by, all employees, subcontractors, and vendors where applicable.

Given that financial professionals have an obligation to secure and protect internal and client information, we recommend that practices adopt an AI usage policy to prevent their own employees from entering client information into these models. Even if an employee parts ways with the firm, the reputational risk and liability remain with the practice.

Considerations for your AI policy include:

  • Review the AI tool
    • Each AI tool must be reviewed independently; this also applies to AI features within existing applications.
    • Your company should designate someone, ideally with technical expertise, to approve each specific AI tool. This individual must understand how the tool works and what inputs it requires: company data (restrict), client data (restrict), or publicly available data (possibly acceptable). A simple illustration of this kind of input screening appears after this list.
    • There should be a clear understanding of whether the AI tool is a stand-alone offering or a feature embedded within an existing application. You should also limit employees' ability to turn on AI functionality within an existing platform, as it may automatically expose your client information.
    • The review may require a technical assessment to determine whether the tool poses an 'always-on' risk, where it analyzes your computer's keystrokes, workflows, and other activity, since this can also capture sensitive client information.
    • The review should also cover the tool's output and how it is intended to be used with clients. If the output has consequences (e.g. recommendations, suggestions, guidance), it must be reviewed by an individual with appropriate and relevant experience to ensure it is accurate and appropriate. In one widely reported case, an experienced lawyer used an AI tool to research case law; when the research was presented in court, the cited cases turned out to have been made up by the tool, unbeknownst to the lawyer, who is now subject to sanction by a New York judge.
  • Approval and sign-off
    • Confirmation that each individual has read, understood, and agreed to the company policy around the tool's usage
    • Confirmation that each individual has read, understood, and agreed to the terms of use of the AI tool
  • Scope of use
    • Confirm that the AI tool will not be used beyond the scope approved by management
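
To make the input-screening step above concrete, here is a minimal sketch in Python. It is illustrative only, not SideDrawer code: the patterns, the screen_ai_input function, and the example prompt are all hypothetical, and a real screen would encode your firm's own data-classification rules.

    import re

    # Hypothetical, illustrative patterns only. A real screen would encode your
    # firm's own data-classification rules (client IDs, account formats, etc.).
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "SSN/SIN-like number": re.compile(r"\b\d{3}[- ]?\d{2,3}[- ]?\d{3,4}\b"),
        "account-number-like digits": re.compile(r"\b\d{8,16}\b"),
    }

    def screen_ai_input(text):
        """Return the labels of any sensitive patterns found in the text."""
        return [label for label, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(text)]

    # Example: this prompt would be blocked before reaching any AI model.
    prompt = "Summarize the notes for jane.doe@example.com, account 12345678."
    findings = screen_ai_input(prompt)
    if findings:
        # Block the prompt and flag it for the designated reviewer.
        print("Blocked - prompt appears to contain:", ", ".join(findings))
    else:
        print("No sensitive patterns found; OK to send to the approved AI tool.")

The point is simply that prompts are checked against the firm's sensitive-data rules before anything leaves the approved boundary; anything flagged goes to the designated reviewer rather than to the AI tool.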

For existing SideDrawer clients, we can share our AI use policy, which is governed by our client non-disclosure agreement.

For non-SideDrawer clients, we would be happy to discuss how our platform protects your business and your clients' information. Book a demo with us.