Safeguarding privacy has always been non-negotiable for family offices—so one senior executive was blindsided to learn just how much a trial AI model seemed to “know” about the family.
At first, no one could explain how the family's data had ended up online. There were no obvious leaks, no clear points of failure. The breakthrough came when the family learned that one of its members had been using a free AI app as a therapist. In that moment, it clicked: Intimate, highly sensitive details were being shared with a user-friendly tool capable of learning from interactions. Depending on how an AI tool is configured, it can retain, learn from and share data—potentially causing harm and making information about the family discoverable.
As family offices start weaving AI into everyday workflows, the ease of these tools can disguise the sophistication and complexity behind them. When AI models collect data, they can store it, connect it and infer what was never explicitly shared, creating material privacy and security risks if they aren't tightly configured and governed. And as "agentic" AI takes hold—systems that can act autonomously with users' data—the consequences of getting it wrong will become far more severe.
Keeping your family's information safe requires knowing how to configure these tools properly, manage your data securely, adopt the right instruments and guardrails, and educate everyone who has a stake in your family office ecosystem. Here, we look at the steps you can take to protect yourself and your family's data.
Most family office networks were not designed with AI's speed, system penetration and data-gathering powers in mind. In legacy networks, files of all types may be scattered across personal and shared drives, in note-taking apps, inboxes and inherited folders. Humans tend to "just know" where sensitive data lives (and know how to respect those limits); an AI agent given broad data permissions would have no such qualms: it would immediately trawl through whatever personal data you allow it to access.
Access constraints are imperative, which is why we advocate for taking a hard look at your data governance as a first step. Before introducing AI tools, or scaling them across your legacy office systems, spend time cataloging what data exists, where it sits in your office network and who can already access it (and under what conditions).
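To make that cataloging exercise concrete, here is a minimal sketch of how a technology team might represent such an inventory. The field names, sensitivity tiers and example entries are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of a data inventory for a family office network.
# Field names, sensitivity tiers and the sample entries are illustrative
# assumptions, not a prescribed standard.

from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str                    # what the data is
    location: str                # where it sits (drive, app, inbox, folder)
    sensitivity: str             # e.g. "internal", "confidential", "restricted"
    authorized_users: list[str]  # who can access it today

inventory = [
    DataAsset("Quarterly investment reports", "shared-drive/reports",
              "internal", ["cio", "analysts"]),
    DataAsset("Family passports and IDs", "personal-drive/identity",
              "restricted", ["principal", "office-manager"]),
    DataAsset("Estate and trust documents", "legal-archive",
              "restricted", ["general-counsel"]),
]

# A first review: list everything an AI tool should never see by default.
for asset in inventory:
    if asset.sensitivity == "restricted":
        print(f"Exclude from AI access: {asset.name} ({asset.location})")
```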
When you’re ready to proceed with enabling AI securely in your family office, here are four key steps to take:
1. Decide which tools you might want to use and why. Set a policy in your office that articulates which tools can be used, and which cannot be accessed until fully tested.
2. Leverage the support of an AI-savvy service provider (your tech department may require assistance). Begin by ringfencing a well-defined "staging," or sandboxed, data set. Move only the content you intend the AI model to use into this controlled environment and tag it by level of sensitivity (a configuration sketch follows this list). If something goes wrong as you test your new AI tool, this approach will limit any potential damage and facilitate reviews.
3. Configure these tools in "read-only" or "sandbox" mode initially, restrict connectors (so AI assistants can't access personal or family archives by default) and disable "model learning" on your data wherever possible.
4. In parallel, conduct a robust data governance exercise before implementing the new AI tool. Be sure to detail what databases and drives you want the tool to access (or not) and test it. Make sure to set clear retention/deletion policies for AI prompts and outputs.
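A configuration like the hypothetical one below can help turn those steps into something testable: the tool starts in read-only, sandboxed mode, connectors are off by default, model learning on your data is disabled, and prompts and outputs carry explicit retention periods. The setting names and values are assumptions for illustration; real products expose these controls under different names.

```python
# Hypothetical pilot policy for an AI tool in a family office.
# The keys and values are illustrative assumptions; map them to the
# controls your vendor actually exposes.

AI_PILOT_POLICY = {
    "approved_tools": ["enterprise_assistant"],  # step 1: which tools may be used
    "mode": "read_only_sandbox",                 # step 3: no writes, no actions
    "data_scope": ["staging/tagged"],            # step 2: only the ringfenced data set
    "connectors_enabled": [],                    # step 3: no inbox or archive access by default
    "model_learning_on_our_data": False,         # step 3: opt out of training
    "prompt_retention_days": 30,                 # step 4: retention/deletion policy
    "output_retention_days": 90,
}

def is_source_allowed(source_path: str, policy: dict = AI_PILOT_POLICY) -> bool:
    """Return True only if a data source sits inside the ringfenced scope."""
    return any(source_path.startswith(scope) for scope in policy["data_scope"])

print(is_source_allowed("staging/tagged/travel-plans.txt"))   # True
print(is_source_allowed("personal-drive/identity/passport"))  # False
```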
With every passing month, AI tools seemingly become more powerful. Generative AI assistants built on large language models (LLMs)—including ChatGPT, Claude, Copilot and Gemini—are rapidly adding new capabilities. In AI's next chapter, these models will be enhanced by agentic capabilities that can take the outputs of the prompts you've written and complete real-world tasks. That glittering itinerary for a trip to Rome that you just prompted AI to draft for you? Soon you'll be able to ask your AI "agent" to book your flights and make your hotel reservations.
But with these dynamic new tools will come heightened risks. Consider, for example, the data that an agentic AI model might need to book an Italian holiday: Would it require your passport information, date of birth, travel details…even unfettered access to your inbox?
As agentic AI gains traction in the marketplace, your family office will need explicit approvals for actions involving payments, bookings, the transfer of personally identifiable information or human resources files. Stronger audit trails will be absolutely crucial for monitoring the flow of information. Without these guardrails, AI agents could potentially overreach, moving across systems and ignoring permissions in ways your family office employees never would.
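One way to picture those guardrails: every sensitive action an agent proposes is held for a named person's approval and written to an audit log before anything executes. The sketch below is a simplified illustration under assumed action names and logging format, not the workings of any particular agent framework.

```python
# Simplified sketch of a human-in-the-loop gate and audit trail for agent
# actions. The action names and the log format are assumptions for
# illustration only.

import json
from datetime import datetime, timezone

ACTIONS_REQUIRING_APPROVAL = {"payment", "booking", "share_pii", "access_hr_files"}

def request_action(action_type: str, details: dict, approver: str) -> bool:
    """Hold sensitive agent actions until a named person approves them."""
    needs_approval = action_type in ACTIONS_REQUIRING_APPROVAL
    if needs_approval:
        answer = input(f"{approver}, approve {action_type} {details}? (y/n) ")
        approved = answer.strip().lower() == "y"
    else:
        approved = True

    # Append every request and decision to an audit trail.
    with open("agent_audit_log.jsonl", "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action_type,
            "details": details,
            "needs_approval": needs_approval,
            "approved": approved,
            "approver": approver if needs_approval else None,
        }) + "\n")
    return approved

# Example: an agent asks to book flights; nothing happens without sign-off.
if request_action("booking", {"trip": "Rome", "traveler": "principal"}, "office-manager"):
    print("Proceed with booking.")
else:
    print("Action blocked and logged.")
```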
In short, start with data governance, not gadgetry. Many family offices may benefit from enlisting the help of specialized professional services to structure their information architecture and policies before rolling out broader AI capabilities.
AI can compress administrative tasks that used to take hours into minutes, but only if everyone associated with your family office, and even your family business more broadly, understands what information must never be shared. Given the risks, it can be helpful to assume that every input, including your personal prompts, may be legally discoverable. Providers can review chats or share them with law enforcement when compelled to, and AI model inputs may be subpoenaed or cited in litigation.
These details alone make a case for educating everyone in the family ecosystem and for the disciplined use of licensed AI tools. When working with family office staff—and even members of the family—it's important to establish clear prohibitions on entering sensitive material into prompts, such as personally identifiable information, account details, confidential legal or financial documents, and health information.
A practical rule of thumb: if you wouldn’t want to see the information on the front page of a newspaper, don’t put it into a model prompt. Train staff on safe usage of AI tools, label LLM-generated drafts and require hands-on, human-in-the-loop reviews for any materials that involve legal, financial or reputational work. Documentation matters, too, so be sure to store any AI-generated outputs under normal retention and access controls.
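That rule of thumb can be partially automated. The sketch below screens a draft prompt for a few common sensitive patterns before it goes to any model; the patterns are deliberately simple examples, catch far from everything and would need extending for real use.

```python
# Illustrative screen for sensitive content in draft prompts. The patterns
# are simple examples only; they supplement, not replace, staff training
# and human review.

import re

SENSITIVE_PATTERNS = {
    "U.S. Social Security number": r"\b\d{3}-\d{2}-\d{4}\b",
    "Payment card number": r"\b(?:\d[ -]?){13,16}\b",
    "Account keyword": r"\b(account|routing)\s*(number|no\.?)\b",
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if re.search(pattern, prompt, flags=re.IGNORECASE)]

draft = "Please summarize the trust for account number 4532 1111 2222 3333."
flags = screen_prompt(draft)
if flags:
    print("Do not send. Flagged:", ", ".join(flags))
else:
    print("No obvious sensitive patterns found; still apply judgment.")
```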
Finally, close the gap between your own personal and professional behaviors, and encourage your family office staff to think twice before using publicly accessible AI tools. Public chatbots and consumer note-takers can leak context or metadata you didn’t intend to share. Even third-party firms that have access to your data present a risk: We’ve seen a real-world case where an external professional pasted a confidential contract into a public AI tool—a misstep that could have been avoided with stronger AI guardrails and education.
It can also be helpful to standardize approved enterprise tools and monitor for unapproved AI assistants, including auto-joining note-takers, on sensitive legal or financial calls: Your vendors and third-party business partners may not be as careful as you think they are, and it’s important to understand how your data may be inadvertently exposed.
Integrating AI tools into your family office workflows can deliver remarkable efficiencies—just don’t let it compound your risks. The family office teams that are successfully leveraging AI’s potential are moving deliberately, carefully and systematically, pulling in expert services and support as needed. They’re also taking the time to educate their colleagues, ensuring that everyone understands the rules around data protection and best practice.
For more information and resources to better secure yourself, your family and your business, please contact your J.P. Morgan team.