How to Build an Effective AI Usage Policy
If you’re like most tax and accounting professionals, you’ve probably already encountered AI in some form. Maybe a team member asked about using ChatGPT to draft emails. Perhaps you’ve wondered if Microsoft Copilot could help streamline your workflow. Or maybe you’ve simply noticed that AI tools seem to be everywhere, and you’re questioning if they’re worth the hype or worth the risk.
Firms subject to Written Information Security Plan (WISP) requirements are asking the right questions, but the answers aren’t cut and dried. And that’s OK.
In this post, you’ll learn:
- Why Your WISP Needs an AI Usage Policy
- The SECURE Framework for Safe AI Adoption
- What Your AI Usage Policy Should Include
- A Practical Approach to AI
An AI usage policy outlines the ethical and responsible use of artificial intelligence technologies within an organization, ensuring compliance and mitigating risks.
Why Your WISP Needs an AI Usage Policy
Think of your WISP as your firm’s blueprint for protecting client information. As AI tools become more prevalent in your firm (think document review, data crunching, or handling day-to-day tasks), your security approach needs to keep pace. That’s where an AI usage policy comes in. It provides everyone with clear guardrails for using these tools responsibly, ensuring that you still meet all the necessary privacy rules and industry standards.
Top Considerations When Writing an AI Usage Policy
The tension between AI’s promise and its risks is real. According to the Rightworks 2025 Accounting Firm Technology Survey, nearly all firms (96%) acknowledge AI’s potential benefits, yet an almost equal number (98%) harbor serious concerns about implementation. Much of this hesitation stems from uncertainty about how to properly evaluate AI vendors and establish safe usage parameters. The good news? A systematic approach to reviewing platforms can eliminate the guesswork. When considering AI platforms and drafting your policy, focus on these critical factors:
| Factor | What It Means | Why It Matters |
|---|---|---|
| Data Encryption | The platform encrypts your data in transit and at rest. | Prevents unauthorized parties from reading your information. |
| Data Storage Location | Where the platform stores your data (e.g., U.S., Europe). | Some laws require data to stay in certain jurisdictions. Be sure to understand which laws apply to you. |
| Data Sharing Policy | Whether the vendor shares your data with third parties (such as partners or advertisers). | You want to know who sees your information and why. Steer clear of any AI tools that share your entries. |
| Model Training Rules | Whether the vendor uses your inputs to train its models. | Most firms will opt for solutions that do not use their data for model training. |
Related → WISP 101: Goals, Components, & Best Practices
The SECURE Framework for Safe AI Adoption
The key to AI adoption is implementing a structured approach that allows you to harness its benefits while maintaining the security and compliance standards your firm requires.
Review the SECURE framework below for a systematic approach to evaluating, implementing, and managing AI tools in your firm.

S – Screen Tools Thoroughly
Evaluate every AI platform against your firm’s data privacy, security, and compliance standards. Review the vendor’s privacy policy and security documentation carefully. Pay attention to encryption practices, data storage policies, geographic data location, and whether the vendor uses your inputs for model training. Document your evaluation process—this creates a paper trail that demonstrates due diligence.
E – Establish Data Boundaries
Define which types of data can and cannot be entered into AI tools. For most firms, this means restricting the use of AI to non-client data, unless a platform meets rigorous security standards. Create written guidelines that define acceptable use cases and prohibited activities. Make these boundaries specific and easy to understand.
C – Control Unauthorized Access
Decide your firm’s stance on tools like free versions of ChatGPT or other publicly available AI services. If you allow them, implement strict safeguards: prohibit inputting client information, confidential firm data, or any personally identifiable information. Consider requiring employees to use only approved AI tools for work-related tasks.
U – Uphold Regulatory Standards
Verify that any AI platform you use complies with relevant regulations—whether that’s GDPR, industry-specific standards, or professional requirements. Don’t assume compliance; ask vendors directly and request documentation. When in doubt, consult with legal or compliance experts before rolling out new tools.
R – Roll Out Gradually
Start with low-risk use cases where AI can enhance productivity without accessing sensitive data. Consider internal research, drafting templates, or brainstorming sessions. As you gain confidence and refine your safeguards, gradually expand to more sophisticated applications. Monitor results and adjust your approach based on what you learn.
E – Evaluate and Evolve
Continuously monitor how AI tools are being used in your firm and assess their impact. Gather feedback from your team, track any security incidents or close calls, and stay informed about updates to the platforms you use. Update your policies and training as technology and regulations evolve.
By following these six principles, you can move forward with confidence, knowing you’re building AI capabilities on a foundation of responsible practices rather than rushing in blindly and hoping for the best.
What Your AI Usage Policy Should Include
An AI usage policy isn’t just a statement that says, “We use AI responsibly.” It’s a practical section of your WISP that addresses specific aspects of AI adoption. Here are some elements you might find in a well-crafted AI usage policy:
- Designated Oversight: Who in your firm is responsible for overseeing AI usage? This person (or team) serves as the point of contact for questions, evaluates new tools, and ensures the policy is being followed.
- Approved Platforms List: Which specific AI tools has your firm evaluated and approved for use? This list should be regularly maintained and updated as you assess new platforms or discontinue existing ones.
- Permissible Use Cases: What can staff use AI tools for? For example, you might allow AI for administrative tasks, such as scheduling or drafting internal communications, while requiring additional safeguards for client-facing work.
- Data Handling Restrictions: What types of information can—and cannot—be input into AI tools? Many firms prohibit entering personally identifiable information, tax identification numbers, or other sensitive client data into certain platforms.
- Evaluation Criteria: What standards must an AI platform meet before approval? This may include encryption requirements, compliance certifications, data storage locations, and policies related to model training.
- Employee Training Requirements: How will staff learn to use AI tools safely and effectively? Your policy should outline any required training that employees must complete before accessing approved platforms.
- Monitoring and Review Process: How often will you reassess approved tools and evaluate new ones? Given the rapid evolution of AI technology, regular reviews are crucial.
A Practical Approach to AI
Creating an AI usage policy doesn’t mean you need to become an expert in every platform overnight. The technology landscape will continue to evolve, and no one can predict every development that will occur. What matters is establishing a framework for evaluation, making decisions based on your firm’s specific needs and risk tolerance, and documenting your approach to ensure transparency and accountability.
- Start by identifying which, if any, AI tools your team is already using or wants to use.
- Establish clear guidelines regarding the types of data that can be shared with AI tools and under what circumstances.
- Designate someone in your firm to stay informed about AI developments and policy changes.
Remember that your WISP is a living document. As you learn more about AI tools and as the technology evolves, your policy should adapt accordingly. Regular reviews ensure your approach remains aligned with both technological capabilities and regulatory expectations.
When to Call in the Experts
AI platforms continually update their capabilities and policies, turning compliance into a moving target. While firms must ultimately make their own informed decisions about which tools to approve and how to use them, navigating these choices can feel overwhelming.
Rightworks offers expert-led reviews of existing WISPs to help firms incorporate AI usage policies that reflect current best practices. As firms evaluate AI platforms and develop their policies, Rightworks can guide them through the key considerations outlined above, including security, compliance, data handling, and vendor transparency.