AI Implications and Risks in the Tax World

04/30/25


Artificial intelligence (AI) offers several enticing benefits for the daily work of tax practitioners, including automating routine tasks, cutting the time spent on scheduling and basic email management, offering advanced data insights, and serving as a research tool that can quickly bring new tax professionals up to speed.

There is a significant difference between automation and AI. Automation requires very specific inputs and runs a very specific process. The benefits of automation include predictability and the ability to see exactly how it arrives at its answers. However, it has limitations such as a lack of adaptability and the manual effort required to get inputs in the correct format. It often requires add-ons and manual changes to address new factors.

On the other hand, AI can take generic inputs, determine its own logic based on its “experience” (i.e., the datasets it was trained on), and provide specific answers. AI is adaptable and can be taught and corrected in plain language, allowing someone to modify a request without any technical knowledge.
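As a rough illustration of the distinction, the sketch below contrasts a hard-coded automation rule with a model-driven check. Everything in it is hypothetical: the function names, the threshold, and the ask_llm placeholder stand in for whatever tool a firm actually uses, not a real filing rule or a specific product’s API.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whatever AI tool a firm has approved."""
    raise NotImplementedError("connect this to your firm's approved AI service")


# Automation: explicit inputs, explicit logic, fully traceable output.
def rule_based_filing_check(entity_type: str, gross_receipts: float) -> bool:
    # Hypothetical threshold used purely for illustration -- not a real rule.
    return entity_type == "C-corp" or gross_receipts > 50_000


# AI: free-form input, logic learned from training data, reasoning not visible.
def model_based_filing_check(description: str) -> str:
    prompt = f"Does the following entity have a filing obligation? {description}"
    return ask_llm(prompt)  # answer quality depends on what the model learned
```

The first function can be audited line by line but breaks the moment a new factor appears; the second accepts any description a practitioner types, but its reasoning cannot be inspected the same way.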

The downside is that without a defined process, we do not always know how AI arrives at its conclusions or if it is learning the right thing. For example, if we try to teach AI what factors and scenarios constitute a permanent establishment to predict the likelihood of winning an IRS challenge, but the dataset it learns from is full of cases where the IRS wins, the AI may learn that the IRS is always right rather than understanding the deciding factors.
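The skewed-dataset problem can be shown with a minimal sketch in Python, using scikit-learn and purely synthetic, made-up case data (the features and labels below are not real permanent establishment factors). A classifier trained on outcomes that are almost all IRS wins scores well simply by predicting an IRS win every time, without learning any deciding factor.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cases = 1_000
# Synthetic "facts and circumstances" features (e.g., days in-country,
# fixed office, dependent agents, local revenue) -- illustration only.
features = rng.random((n_cases, 4))
labels = np.ones(n_cases, dtype=int)   # 1 = IRS prevails on the PE question
labels[:30] = 0                        # only 3% of training cases are taxpayer wins

model = LogisticRegression().fit(features, labels)
print(f"training accuracy: {model.score(features, labels):.2f}")  # ~0.97, yet meaningless
print(model.predict(features[:10]))    # predicts an IRS win even for the taxpayer-win cases
```

The high accuracy score reflects the imbalance in the data, not any understanding of what actually decides a permanent establishment case.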

Are there risks in using AI?

AI’s predictive abilities are promising, but there are serious security, ethical, and accuracy concerns to address. Protecting sensitive data, rooting out biases in datasets, guarding against a lack of transparency and accountability, and ensuring due process are critical issues to consider as AI is incorporated going forward.

● Accuracy

There is a risk that an AI model may give incorrect results, misinterpret data, or “hallucinate” (invent its own sources or facts), often because low-quality or biased datasets lead to unreliable outputs. AI is not a self-reasoning machine; it recognizes correlation, not causation, which makes it hard to determine when errors are happening. AI can also become embedded in systems, making it far more difficult to remove from corporate processes than a person.

● Transparency

There is also a lack of transparency, commonly referred to as the “black box problem”: users cannot see why they are getting the results they are getting or, in many cases, even which source documents the AI is relying on.

● Data security

AI tools have access to vast amounts of data, often without adequate privacy protections. AI models “train” on data, which raises issues such as copyrighted material being used for training purposes. If an AI model trains on datasets that include confidential information, it may reproduce that sensitive information in response to other users’ questions.

Other AI programs can then pull from one another, potentially retrieving articles to which they did not originally have access. Tools are being developed to identify AI crawlers on websites and combat the use of private information (e.g., blocking the AI’s access or feeding it false data), but firms still need to evaluate how well their own firewalls hold up against an AI’s attempts to access information. There is also a significant risk that professionals with less AI experience will inadvertently enter sensitive information into an AI engine.

● Overreliance

One of the most significant AI risks today is overreliance. Because AI can be very human-like and convincing, people have already begun to rely on it without confirming the answers they receive. In Kohls v. Ellison, for example, a Stanford University professor claiming to be an AI expert filed two expert declarations generated with AI, and the AI had hallucinated the fake journal articles it cited.

After President Biden pardoned his son last year, TV personality Ana Navarro Cardenas cited other instances of presidents pardoning family members that had been hallucinated by ChatGPT. Overreliance not only produces the occasional wrong answer; there is real concern that it will stunt the development of students and younger professionals who never learn to verify the answers they receive from AI.


Will I be replaced by AI?

It seems unlikely that AI will replace tax practitioners in the foreseeable future. However, tax professionals who know how to use AI will likely have a significant advantage, because AI is a tool that must be used properly to keep up with the competition. It can handle basic, repetitive tasks consistently, taking over much of the “boot camp” entry-level work traditionally given to new hires, such as entering processed information into the right boxes.

A chatbot can ask the questions needed for an R&D credit determination and produce an R&D study in very little time, though it is not yet as reliable as a person. While it will not replace the tax expert, an expert using AI will be able to produce a quality study significantly faster than someone producing it by hand.

How will AI change the tax arena?

There will likely be a change in work structure and output expectations. There is currently resistance to moving from a billable-hour model to a project-based model, but clients are likely to push back on bills for work, such as briefs, that AI could have produced in a fraction of the time. The professional services business model is not prepared for this challenge. With the Big Four accounting firms having invested millions in AI, and now possibly able to enter the legal arena, the industry is likely to change drastically.

Barriers to entry include the need for AI specialists. Practitioners can begin incorporating AI into daily processes by using it for routine tasks, creating summaries of larger documents, learning about new industries, and starting research. AI is best used as a support tool, not as the answer, and practitioners must be mindful of prior-year issues, since AI tools typically rely on the current Internal Revenue Code (IRC). Document management tools can also benefit from AI integration.
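As one concrete starting point for the summarization use case mentioned above, the sketch below shows what that might look like with the OpenAI Python SDK; the model name and prompt are illustrative assumptions, and the same pattern applies to whichever firm-approved tool is actually used. Confidential client documents should only be sent to services that have been vetted for data security.

```python
# Minimal summarization sketch using the OpenAI Python SDK (an assumption --
# substitute whatever AI service your firm has approved). Assumes an API key
# is configured in the environment (OPENAI_API_KEY).
from openai import OpenAI

client = OpenAI()

def summarize(document_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize this document for a tax practitioner in five bullet points."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content
```

Even with a sketch like this, the output is a draft to be checked against the source document, not a finished work product.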

AI is not only affecting private accounting and law firms; it is already being incorporated by tax administrations. The reduction in the IRS’s labor force will likely result in more reliance on AI to absorb the increased workload falling on the remaining staff. Large language models (LLMs) offer advantages over automated answering systems, such as better language translation, making services more broadly available to taxpayers who speak other languages. AI is also being used for internal communication, allowing new employees to quickly retrieve historical information.

Taxpayers should be concerned about the use of AI in audit selection and throughout audit procedures, as AI can enhance enforcement by analyzing data and identifying patterns that predict higher-risk cases and quickly reveal the true owners of partnerships. Additionally, biased datasets may lead to certain groups of people being unfairly targeted for audits. Taxpayers should also be concerned about their data being incorporated into an AI tool at a taxing authority. Governing policies for how AI is used in tax administration are crucial, and tax administrations can mitigate taxpayer concerns by ensuring transparency and accountability, pushing for better frameworks, and addressing data security issues.

Learn More

To learn more about the potential impacts of AI in the tax world, contact Sam Heberton at [email protected].
