AI Acceptable Use Policy
Purpose
The Firm uses artificial intelligence (AI) tools to improve productivity, analysis, and service quality while protecting client and Firm information, meeting regulatory obligations, and managing operational, model, and conduct risks. This policy establishes mandatory rules and controls for the selection and use of AI systems.
Scope
This policy applies to:
- Any AI-enabled tool, system, feature, plugin, add-on, or API—whether third-party, open source, or internally developed—used to generate, transform, analyze, summarize, classify, or recommend content.
- Use on Firm devices, personal devices for Firm business, and any AI use related to Firm data or Firm activities.
- Generative AI (text/image/audio/video/code), decision-support tools, machine learning analytics, and agentic workflows.
This policy does not replace existing policies (e.g., Information Security, Record Retention, Trading/Market Abuse, Communications, Vendor Management, Model Risk Management). Where conflicts exist, the stricter requirement applies.
Definitions
- AI system / AI tool: Any software feature that uses ML/LLMs or similar methods to generate outputs or inferences.
- Approved AI Service: An AI system that has completed the Firm’s vendor due diligence and is listed on the Approved AI Services List.
- Non-approved AI Service: Any AI system not on the Approved AI Services List.
- Firm Confidential Information (FCI): Any non-public information belonging to the Firm or clients, including trading strategies, positions, risk limits, research, pricing models, counterparties, employee data, and internal communications.
- MNPI: Material non-public information.
- Client data: Any non-public information received from or about a client.
- Sensitive data: FCI, MNPI, client data, PII, authentication secrets, credentials, keys, source code, infrastructure/config details, incident data, legal matters, HR matters, or regulated records.
- Prompt: Any input, instruction, data, file, image, or context provided to an AI system.
- Output: Any content produced by an AI system.
- No-Training / No-Retention Commitment: Contractual and/or policy assurance from a vendor that (a) Firm inputs/outputs are not used to train their models, and (b) retention is either disabled or limited to the Firm’s requirements.
Regulatory Requirements & Principles
The Firm’s AI use must comply with applicable laws, rules, and guidance (as relevant to our operations and jurisdictions), including but not limited to:
- Confidentiality & Privacy
  - Protect client confidentiality, Firm confidentiality, and privacy (including PII).
  - Use AI vendors only where privacy terms and contracts protect Firm data and prohibit training on it.
- Records & Supervision
  - Maintain required records of business communications and research supporting investment decisions.
  - Ensure supervisors can review AI-related work product when it forms part of a regulated process.
- Market Integrity / Market Abuse
  - Do not use AI in ways that facilitate market manipulation, misuse of MNPI, front-running, or improper communications.
- Truthfulness & Fair Dealing
  - AI outputs used externally must be accurate, not misleading, and appropriately attributed/qualified.
- Model & Operational Risk Management
  - Understand limitations, validate outputs, and manage risks of hallucinations, bias, and over-reliance.
  - Ensure critical decisions have human accountability.
- Security by Design
  - Treat AI tools as potential data egress channels.
  - Apply least-privilege, secure integration patterns, and approved configurations.
- Vendor Governance
  - Follow vendor onboarding, third-party risk management, and contract requirements, including data processing terms.
Approved AI Services & Onboarding Requirements
Approved AI Services List
Personnel may only use AI services listed on the Firm’s Approved AI Services List for Firm business.
Mandatory vendor requirements (minimum)
To be approved, an AI vendor/service must meet (or be contractually bound to meet) requirements including:
- No training on Firm data (prompts, files, outputs, telemetry) and explicit opt-out by default.
- Data retention controls appropriate to Firm recordkeeping rules and security posture.
- Confidentiality obligations and appropriate data processing terms.
- Security controls: encryption in transit/at rest, access controls, audit logging, incident notification, and (as applicable) penetration testing reports.
- Service location & subcontractors disclosed and acceptable.
- Availability and resilience for business-critical usage.
Prohibited by default
Until formally approved, the following are prohibited for Firm business:
- Consumer/free AI chat sites and “public” assistants.
- Browser extensions, plugins, or AI agents that can read webpages/email/files unless explicitly approved.
- AI tools that store prompts/outputs for training, product improvement, or advertising.
What You Can Do (Permitted Uses)
With Approved AI Services and following this policy:
- Productivity and drafting
  - Draft internal documents, emails, meeting agendas, project plans, and training materials.
  - Rewrite text for clarity, tone, or brevity.
- Research assistance (non-sensitive)
  - Summarize public information, regulations, or market news.
  - Generate checklists or interview questions.
- Code assistance
  - Generate non-sensitive code snippets, test cases, documentation, and refactoring ideas.
  - Use for internal tooling where code and data are not sensitive or are properly sanitized.
- Data analysis (sanitized)
  - Analyze synthetic, anonymized, aggregated, or otherwise non-sensitive datasets.
  - Create charts, pseudo-code, and analytical narratives using non-sensitive inputs.
- Risk management support
  - Generate templates for controls, risk registers, incident playbooks, and policy drafts.
- Translation
  - Translate non-sensitive text.
What You Must Not Do (Prohibited Uses)
Regardless of tool approval status, you must not:
- Disclose sensitive information
  - Never input MNPI, client data, Firm trading strategies, positions, risk limits, proprietary models, deal terms, internal incident data, credentials, keys, or any non-public information into any AI system unless explicitly authorized for that data class and use case.
- Use non-approved services for Firm business
  - Do not use consumer AI tools, personal AI accounts, or unapproved extensions/plugins for any Firm purpose.
- Automate trading or execution without approval
  - Do not connect AI tools to trading/execution systems, order management, portfolio management, or risk systems without formal approval, documented controls, and monitoring.
- Create or disseminate misleading content
  - Do not represent AI output as verified fact without appropriate checking.
  - Do not generate external communications (client-facing, investor letters, marketing) without required review and approvals.
- Circumvent controls
  - Do not disable logging, use shadow IT, route traffic through personal accounts, or bypass Firm security.
- Generate restricted content
  - Do not use AI to create instructions for wrongdoing, market manipulation, hacking, or policy violations.
Data & Prompt Handling
AI systems can unintentionally expose sensitive data. Treat prompts and uploaded files as if they could be disclosed.
Data classification requirements
Before using AI, classify the data you plan to include:
- Public: May be used with Approved AI Services.
- Internal (non-sensitive): May be used with Approved AI Services.
- Confidential / Restricted / MNPI / Client / PII: Do not input unless the service and use case are explicitly approved for that data category and you follow additional controls.
Prompt minimization and sanitization
- Provide the minimum data necessary.
- Remove identifiers (names, account numbers, deal IDs) where possible.
- Prefer summaries over raw documents.
- Use synthetic examples to illustrate structure.
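As an illustration only, the minimization and sanitization steps above can be sketched as a simple pre-processing pass. The patterns and function name below are assumptions for the sketch, not Firm tooling; real patterns would come from the Firm's data classification standards.

```python
import re

# Illustrative identifier patterns -- placeholders, not the Firm's
# actual data classification rules.
PATTERNS = {
    "ACCOUNT": re.compile(r"\b\d{8,12}\b"),               # bare account numbers
    "DEAL_ID": re.compile(r"\bDEAL-\d{4,}\b"),            # hypothetical deal-ID format
    "EMAIL":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def sanitize_prompt(text: str) -> str:
    """Replace known identifier patterns with neutral placeholders
    before text is provided to an Approved AI Service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A pass like this reduces, but does not eliminate, exposure risk: it catches only known patterns, so it supplements rather than replaces human review of what a prompt contains.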
Files and attachments
- Uploading files to AI tools is higher risk than pasting text.
- Upload only when necessary and only to an Approved AI Service authorized for file handling.
- Never upload:
  - Client agreements, statements, or reports containing client identifiers.
  - Trading blotters, position reports, risk limit reports.
  - Credentials, keys, certificates, or infrastructure diagrams.
Data retention and recordkeeping
- Assume prompts and outputs may be retained by the vendor unless explicitly disabled.
- Where AI output becomes part of a business record (e.g., investment research, client communication drafts), save it in approved Firm systems under normal recordkeeping rules.
Use of AI outputs in regulated workflows
- If AI output influences investment decisions, risk limits, or client-facing statements, document:
  - The prompt(s) and input sources (at least at a descriptive level).
  - The output used.
  - Your validation steps.
  - Final human decision and rationale.
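The documentation points above can be captured as a structured record. This is a sketch only: the class and field names are illustrative assumptions, and actual records must be saved in approved Firm systems under normal recordkeeping rules.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIWorkRecord:
    """Minimum documentation when AI output feeds a regulated workflow.
    Field names are illustrative, not a Firm schema."""
    tool: str                    # Approved AI Service used
    prompt_summary: str          # prompt(s) and input sources, at a descriptive level
    output_used: str             # the output actually relied upon
    validation_steps: list[str]  # how the output was checked
    decision: str                # final human decision
    rationale: str               # why the accountable person decided as they did
    decided_by: str              # accountable individual
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

Keeping the human decision and rationale as distinct fields reinforces that the AI output informs, but never replaces, the accountable decision.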
Security Requirements
Access and identity
- Use Firm-managed accounts and SSO where available.
- Never share AI service credentials.
- Enable MFA for any approved tool that supports it.
Device and network
- Use only Firm-managed devices unless explicitly permitted.
- Do not access AI tools from untrusted networks without using a Firm-approved VPN.
Integrations and plugins
- Plugins, agents, browser extensions, and connectors (email, drives, ticketing, CRM) are prohibited unless explicitly approved.
- API keys for AI services must be stored in approved secrets management tools.
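As a minimal sketch of the key-handling rule above, an integration can fetch its key at runtime rather than embedding it in code or config. The environment-variable name and injection mechanism below are assumptions; in practice the approved secrets management tool supplies the value.

```python
import os

def load_ai_api_key(env_var: str = "AI_SERVICE_API_KEY") -> str:
    """Fetch the AI service API key at runtime from the environment,
    which the approved secrets manager is assumed to populate.
    Never hardcode keys in source, config files, or prompts."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} not set -- provision it via the approved "
            "secrets management tool, not in code or config files."
        )
    return key
```

Failing loudly when the key is absent avoids the common fallback of pasting a key inline "just to test".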
Code and executable content
- Treat AI-generated code as untrusted.
- Perform security review before deploying.
- Do not paste secrets into AI for debugging.
Incident response
If you suspect you entered sensitive data into an AI tool, or if an AI tool behaves suspiciously:
- Immediately notify Information Security and Compliance.
- Provide details: tool name, account used, time, and what data may have been exposed.
Review Outputs & Understand Limitations
AI outputs can be wrong, biased, outdated, or fabricated.
Mandatory human review
- You are accountable for any AI-assisted work product.
- Validate factual claims with reliable sources.
- Check calculations and data transformations.
Common failure modes
Personnel must be alert to:
- Hallucinations (confident but false statements)
- Citation fabrication
- Hidden assumptions and omitted constraints
- Bias and stereotyping
- Tool or context limitations (token limits, missing data, stale sources)
High-stakes restrictions
AI must not be the sole basis for:
- Trading decisions or changes in risk limits
- Legal or regulatory interpretations without compliance review
- Client advice or suitability decisions
- Incident triage or security decisions
Logging & Monitoring
The Firm may monitor AI usage to manage risk and meet supervisory obligations.
Logging requirements
Where technically feasible and permitted by law:
- Log access to Approved AI Services (user, time, tool, feature used).
- Maintain audit trails for AI outputs used in regulated workflows.
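The access-log fields listed above (user, time, tool, feature used) can be emitted as one structured entry per access. The function and field names are illustrative, not a Firm logging schema.

```python
import json
from datetime import datetime, timezone

def ai_access_log_entry(user: str, tool: str, feature: str) -> str:
    """Build a structured JSON log line recording access to an
    Approved AI Service. Field names are illustrative only."""
    return json.dumps({
        "event": "ai_service_access",
        "user": user,
        "tool": tool,
        "feature": feature,
        "time": datetime.now(timezone.utc).isoformat(),
    })
```

Structured entries like this are straightforward to index for supervisory review and audit-trail reconstruction.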
Privacy and proportionality
Monitoring will be conducted in accordance with privacy laws and internal policies, and is aimed at protecting the Firm, clients, and markets.
Data loss prevention (DLP)
- The Firm may implement DLP controls to detect and block sensitive data from being sent to AI tools.
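A detect-and-block check of the kind described above can be sketched as follows. The patterns are illustrative assumptions; production DLP rules would be defined and maintained by Information Security.

```python
import re

# Illustrative DLP patterns for content bound for an AI tool --
# placeholders, not the Firm's actual rule set.
BLOCK_PATTERNS = [
    ("credential", re.compile(r"(?i)\b(api[_-]?key|password|secret)\s*[:=]\s*\S+")),
    ("private_key", re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----")),
    ("account_number", re.compile(r"\b\d{8,12}\b")),
]

def dlp_check(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in outbound
    text; a gateway would block the request when the list is non-empty."""
    return [label for label, pattern in BLOCK_PATTERNS if pattern.search(text)]
```

Pattern matching is a backstop, not a guarantee: it cannot recognize every form of sensitive content, so the data-handling rules in this policy still apply even where DLP is in place.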
Risk & Oversight
Governance
- Compliance owns this policy and interprets regulatory obligations.
- Information Security sets technical security requirements.
- Technology implements controls and supports approved tools.
- Risk/Model Risk Management (MRM) reviews AI use cases that function as models or decision-support.
Use case risk tiering
AI use cases will be categorized (e.g., Low/Moderate/High) based on:
- Data sensitivity
- Potential impact on trading, clients, markets
- Degree of automation
- Explainability and validation needs
Pre-approval required (examples)
The following require documented review and approval before use:
- AI that summarizes or analyzes internal research, trading data, or risk reports
- Any integration with internal systems (email, files, ticketing, code repos)
- AI agents that take actions (send emails, create tickets, run trades)
- AI used to generate or interpret investment research in a systematic way
Periodic review
- Approved AI vendors are reviewed at least annually.
- High-risk use cases are reviewed more frequently and upon material change.
Training & Support
Mandatory training
Personnel must complete:
- Initial AI Acceptable Use training upon hire or before first AI use
- Annual refreshers
- Role-specific modules for trading, research, compliance, legal, technology
Support channels
- Questions about permitted tools, data classes, or use cases: contact Compliance.
- Technical issues or suspected exposure: contact Information Security/IT.
Reference materials
The Firm will maintain:
- Approved AI Services List
- AI Use Case Intake Form
- Prompting and sanitization guidelines
- Examples of compliant/non-compliant use
Enforcement
Violations may result in disciplinary action up to and including termination, and may also require regulatory reporting or client notification where applicable.
Exceptions
Exceptions require prior written approval from Compliance and Information Security, including:
- Business justification
- Data classification and controls
- Duration and scope
- Monitoring and recordkeeping approach
Appendix A: Practical Do/Don’t Examples
Do
- “Summarize this public regulatory guidance and produce a checklist.”
- “Rewrite this internal (non-sensitive) project plan for clarity.”
- “Generate unit tests for this non-proprietary utility function.”
Don’t
- “Here’s our positions report and risk limits—what trades should we do?”
- “Here’s a client list with emails—draft outreach messages.”
- “Here are API keys and logs—debug this outage.”
Appendix B: AI Use Case Intake (Summary)
At minimum, provide:
- Business purpose and owner
- Data types involved (public/internal/confidential/MNPI/PII)
- Tool/service requested
- Output usage (internal/external/regulatory/trading)
- Controls (sanitization, access, logging, validation)
- Risk assessment and approvals