How Can CPAs Ethically Interact With ChatGPT?

Elizabeth Kittner is GigaOm’s VP of Finance.

As CPA firms experiment with ChatGPT and other AI solutions, it is important to consider the technology’s limitations—including the ethical ones.

A powerful new artificial intelligence (AI) bot, ChatGPT, is causing quite a stir in the business world, including in the accounting profession. Trained on a wide range of internet data, ChatGPT can help users answer questions, write articles, program code, and engage in in-depth conversations on a substantial range of topics.

What makes ChatGPT sensational to some, besides the headlines about it passing the bar exam, various medical exams, and MBA exams, is its ability to hold human-like conversations with users. Essentially, its developers designed the neural networks underlying the technology to function, loosely, the way the human brain does. Because language is full of nuance and contextual difference, earlier generations of AI programs had far less success in this area.

As powerful as this new bot is, ChatGPT (and other similar AI) does have its limitations—including ethical ones. Before we delve into the ethics of ChatGPT, let’s first look at what the bot does well, and what it does not.


Ultimately, AI programs are designed to make basic tasks, including research and writing, more efficient. Some users are even asking ChatGPT to take on more robust versions of these tasks, such as drafting emails to fire clients, crafting job descriptions, and writing company mission statements.

It is important to note that while ChatGPT can provide helpful suggestions, it is not as good at making decisions or at tailoring its output to a particular personality or organizational culture. An effective way to use ChatGPT and similar AI programs is to ensure that a human, or a group of humans, reviews the output, tests it, and implements the results in a way that makes sense for the organization. For example, if an AI program drafts a job description, at least one person should verify that the details align with what the organization actually does and does not do.

A significant reason that humans are still better at decision-making is that the neurons in our brains are autonomous, efficient, and constantly changing. ChatGPT is not an independently thinking program, despite responses that lead some people to believe it is. Programs like ChatGPT rely on comparatively rigid neural networks and require large amounts of computing power and electricity to learn and operate.

Another current weakness of ChatGPT is its susceptibility to outages and manipulation. ChatGPT’s servers have repeatedly been overwhelmed, making the service inaccessible to users. Additionally, some rogue users have been able to “jailbreak” ChatGPT’s safeguards, causing the technology to produce unethical and harmful information. While OpenAI, ChatGPT’s developer, has been able to patch and update the AI against these jailbreaks, it remains to be seen whether further attacks can be fully stopped.

One way OpenAI is working to prevent the release of inappropriate content is by asking humans to flag content for the model to ban. Of course, this method raises a number of ethical considerations. Utilitarians would argue that it is ethical because the end justifies the means: the masses are spared exposure to harmful content because only a few reviewers are subjected to it. Those with a more deontological view would say it is unethical because one cannot intentionally subject people to harm, regardless of the outcome.

In terms of preventing unethical behaviors, such as users passing off AI-written papers as their own, some technology developers are creating AI specifically to combat nefarious uses of AI. One such technology is ZeroGPT, which aims to help people determine whether content was generated by a human or by AI.


I decided to try ChatGPT and asked the bot if it could tell me more about the ethics of AI. It did not hesitate to point out that the field of AI ethics is concerned with the moral implications of developing and using the technology, pointing to a range of topics including bias and fairness, privacy, responsibility and accountability, job displacement, and algorithmic transparency.

I also asked how I can use ChatGPT ethically. Not surprisingly, it suggested being respectful, avoiding spreading misinformation, protecting personal information, and using ChatGPT responsibly.

The points ChatGPT made in its responses are widely recognized concerns that need to be addressed. On fairness and bias in particular, while developers work to mitigate bias in the models themselves, we can focus on interacting with ChatGPT and similar programs in ways that limit how much that bias affects us.

Importantly, we should treat the responses from ChatGPT and other AI programs as suggestions and starting points rather than as final products. Much as internet search results are not always accurate, we should recognize that AI is still a developing and biased technology. While we can marvel at its evolution, we should still apply our own judgment to the decision-making advice and data it presents.

There is no doubt that ChatGPT is a powerful technological advancement, and the hope is that it is used mostly for ethical good. As this technology progresses, it will be important for us as CPAs to consider how we can ethically use ChatGPT and similar evolving AI programs in our work while still maintaining integrity. Understanding the developing capabilities and limitations of AI will be key to guiding our interactions with it and helping us maximize its benefits while preserving our quality and ethics.


This article originally appeared in Insight, the magazine of the Illinois CPA Society.