AccelPro | Audit

On Fraud and Professional Skepticism

With Joe Brazel, Jenkins Distinguished Professor of Accounting at North Carolina State University | Interviewed by Jessica Stillman

Listen on Apple Podcasts, Spotify and YouTube

Welcome to AccelPro Audit, where we provide expert interviews and coaching to accelerate your professional development. Today we're featuring a conversation about professional skepticism and fraud. Our guest is Joe Brazel, Jenkins Distinguished Professor of Accounting at North Carolina State University.

Because auditors are paid by the very companies they're supposed to be checking up on, complicated incentives are baked into the structure of the profession. No one has yet discovered a better arrangement for the industry, so auditors must negotiate tricky tradeoffs from the start. How do you keep clients happy while maintaining an appropriate level of professional skepticism? When is an inconsistency or concern serious enough to warrant further investigation? And can new AI tools play a role in helping to unravel some of these knotty problems?

These are the questions at the heart of my conversation with Brazel, who runs through what his research reveals about how audit leaders and audit committees can best encourage healthy skepticism, the promise and perils of new digital tools and the advice he always gives his students about charting their careers.

Listen on Apple Podcasts, Spotify and YouTube


TRANSCRIPT

I. WHY SKEPTICISM CAN BE HARD FOR AUDITORS AND HOW TO PROMOTE MORE OF IT

Jessica Stillman, Host: You are a professor at North Carolina State University. Can you give us a quick overview of your research? 

Joe Brazel: I teach courses in audit and assurance services, so I'm making people feel better about the financial statements that they use for investment and credit decisions. Part and parcel with that, I do a lot of research in relation to the detection of fraudulent financial reporting, or cooking the books. My main focus is incentivizing the professional skepticism that is needed to identify fraud red flags and then investigate further.

JS: Looking at your work, I get the sense that it's actually somewhat difficult to encourage an appropriate level of skepticism among auditors, that it's a perpetual challenge to some degree. Is that a correct understanding, and if so, why is that?

JB: Well, it's a unique situation where—and we haven't figured out a better solution—that, by and large, the audit firm and the auditor are paid fees by the company that they are judging or evaluating. So you arrive on day one with a conflict of interest. What we're trying to do is to maintain independence and maintain skepticism in that very difficult environment.

For example, unlike an FBI agent or maybe an NSA worker, auditors have to maintain some positive relations with management because at the end of the day, while management cannot remove the auditor, we know from research that they can play a role with the board of directors. It's a tough situation.

I always say if skepticism were easy, everyone would do it. In a lot of audit failures, the root cause of the failure to detect a material misstatement due to error or fraud is a lack of professional skepticism. Again, skepticism is hard. It can be costly. You may spend extra time, delay additional procedures, management may be a little bit less happy with you. And so, in certain situations, due to a lot of complex factors, auditors choose not to apply the level of skepticism that the data or evidence warrant.

JS: I want to move on to what audit leaders can do about that. But before we do that, I just wanted to clarify something. You said there that a company can't remove its auditor, but they can intervene with the board of directors. Can you explain what you mean by that? 

JB: There are two main markets for audit services. One is the audit of publicly traded companies' financial statements—your Google, Apple, Microsoft and so on—and in those situations, the Sarbanes-Oxley Act passed by our Congress says only the audit committee of the board of directors is able to hire, fire and compensate the auditors. And so they've insulated the auditor from management.

Although, we know from our research that management can push the audit committee of the board of directors in a certain way if they're not happy with the audit firm. In the case of an audit of a privately held company, like a family-owned business, you have a situation where management does potentially directly affect the choice of auditor, the firing of the auditor, and the compensation of the auditor.

And I'll leave you with one other thing. The audit committee consists only of independent directors. The CEO, CFO, Vice President of Marketing, any members of the board who are employed by the company, cannot serve on the audit committee. That makes it a very unique committee of the board of directors: it consists only of independent directors who are not employed by Company X.

JS: Let's circle back to the conversation about skepticism among auditors. It's a perpetual challenge, and I know that you research what audit leaders can do to encourage that appropriate skepticism, even though it's a little bit difficult. Could you walk us through one or two or three practical takeaways from your research about what is a good idea for audit leaders to do to encourage that skepticism? 

JB: The first thing we found loud and clear in our research is that skepticism that yields the detection of an error or fraud, a material misstatement in the financial statements, any cooking the books or misappropriation of assets, obviously gets a nice pat on the back, a nice golf clap and so on.

The unfortunate part is material misstatements are a low likelihood event, so oftentimes when you apply skepticism and things look weird, nine times out of ten, there's a reasonable explanation for the inconsistency that drew you to investigate further. And what we find with our research time and time again is when the auditor appropriately applies skepticism but does not detect a misstatement, unfortunately, hindsight bias kicks in on their evaluators and they are either not rewarded for appropriately applying skepticism, or sometimes they get a slap on the wrist because the evaluator says, “Well, you should have known that this inconsistency was due to this reason,” although the auditor doesn't know that unless they investigate.

I always say, “If you don't ask the questions, you never find anything.” But a lot of times when you apply skepticism, it does not yield a misstatement. And we see that in those situations it is oftentimes either not rewarded or even penalized.

We show in subsequent research that consistently rewarding appropriate skepticism regardless of the outcome, in other words just having their back, will lead auditors to be more skeptical in the future. It's that change in culture: of course we want to reward people who apply skepticism and find something, but in the situations where they don't find something yet were right to look into it, their supervisors should have their back, if you will, and evaluate them positively.

Otherwise, you get situations where people have been slapped on the wrist a few times and they just quit looking for things. We hear that a lot when we present to practitioners or ex-practitioners: this was the reason they either left the firm, left the profession, or were just fairly frustrated. The incentives, the consistent rewarding of skepticism, play a big, big role.

JS: You say reward the skepticism. What does that look like? Is that a financial incentive? Is it just praise? Is it a promotions and review thing? All of the above? What's an appropriate reward or incentive?

JB: What we look at in our research, and what others have replicated in their studies, is the evaluation, which then leads to promotions, raises, and getting the good clients. So the key starting block for rewarding in the audit profession is those evaluations you receive every few months from your supervisors, like a performance evaluation, which then feeds all of those more tangible rewards, such as a nice raise, getting better clients, or being promoted from staff to senior to manager to partner. We're really talking about the base reward being the evaluation.

JS: That's super helpful. You were saying you had some other research that had practical takeaways for leaders around skepticism. 

JB: The second thing we've looked at is the support of that audit committee. We find that there's a lot of variation in the support audit committees are willing to supply their auditors. Some are very, very helpful. Some are, let's just say, more likely to side with management when there's a disagreement. 

We've investigated ways in which audit committees can be supportive to the engagement team, for example, having their back when they have a disagreement with management over how an accounting principle has been applied, or making management more timely in their provision of audit evidence—making that more cooperative. 

And so what we find is that when that audit committee support is there, if the audit partner or even the audit committee chair visits with the engagement team from time to time during the course of the audit and conveys that support, we see dramatic rises in skepticism, particularly in situations that are a little bit more stressful, such as when management has a bad attitude towards the audit team. 

Again, the role of conveying audit committee support to the little people on the audit—who rarely, if ever, meet an audit committee member and are certainly very rarely at the meeting—either through the audit partner indirectly or through the audit committee chair directly, really can improve the skepticism of the staff out in the field trying to find any misstatements in the financial statements.

The last area I've spent a lot of time in over the last three or four years, like a lot of people, is looking at the effect of advanced data analytics and how these analytics are providing more data. You're able to see more data. You can visualize data. But as you can imagine, until they're extremely well calibrated, they also yield a lot of false positives, and a false positive in the audit sense is that costly skepticism I talked about. In other words, you dig and there's no misstatement there.

We're looking at ways to alleviate the effects of false positives, whether it's through rewards or through having auditors actively involved in the development of the analytics. That research is in flux and we're working on it as we go. But we do know that effective rewards are important when we have all these outliers, and we also know that reducing false positives to fairly low levels can achieve better skepticism outcomes.

II. WILL AI MAKE IT EASIER TO DETECT FRAUD?

JS: It almost seems like, the way you describe it, these tools are kicking up so many red flags that it might actually end up making the auditors overwhelmed and therefore less vigilant. Is that a danger that leaders and audit teams need to be watching out for? 

JB: Yeah, it's a situation where a lot of times it's very difficult to document inconsistencies or red flags or anomalies and then not actually look into them. There are also situations where, for example, visualizations have the ability to show a lot of consistent relationships, positive data and consistent evidence, and then potentially show a couple of outliers off to the right.

And so a lot of my research looks at that very tense situation where the analytics provide you with a lot of supporting evidence that says, “Hey, this sales growth is in line with industry, ratios, budgeted data, and so on,” but there may be other forms of evidence that show an inconsistent relationship.

That's really where I think the tension lies. When analytics provide a lot of consistent information but a few outliers, which way does the auditor move? And that's where we see the power of low false positive rates and also rewards really kick in, such that auditors are way more likely to look into those red flags and ignore that consistent data, or at least balance it out, when they have that consistent reward structure and a false positive rate of, say, 20-50%.

The other thing that's interesting about false positive rates is that misstatements are a low likelihood event. The thing you have to keep in mind is that while a 75% false positive rate sounds bad, it's actually a 25% hit rate, and for a low likelihood event, hitting pay dirt one time out of four means that's actually a pretty well calibrated tool.

We have a study that we're in the midst of right now where we flip the script and we say, “Let's not talk about analytics and false positives. Let's talk about their hit rate levels.” The auditors may be afraid to use a tool that has a 75% false positive rate, but they may be inclined to use a tool that has a 25% hit rate—a change in terminology can play a role. 
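To make the arithmetic behind that reframing concrete, here is a minimal sketch with hypothetical numbers (illustrative only, not drawn from the studies); "false positive rate" is used the way Brazel uses it above, as the share of flagged items that turn out not to be misstatements:

```python
# Hypothetical example: an analytics tool flags 40 items for follow-up,
# and 10 of those investigations uncover an actual misstatement.
flagged = 40
confirmed = 10

# "False positive rate" here follows the usage above: the share of flags
# that do not pan out. The complement is the hit rate.
false_positive_rate = (flagged - confirmed) / flagged  # 30 / 40 = 0.75
hit_rate = confirmed / flagged                         # 10 / 40 = 0.25

print(f"False positive rate: {false_positive_rate:.0%}")  # 75%
print(f"Hit rate: {hit_rate:.0%}")                        # 25%
```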

And lastly, we're looking at how artificial intelligence and its ability to screen out a lot of outliers may, in the future, be able to help auditors say, “Okay, I'm seeing this weird thing at my client, but they're in the airline industry, or the pharma industry, or banking,” and maybe use AI to go out and search the web to find out whether this red flag or exception has been found in other settings for other clients in that industry. That study's probably about a year away because we want to see how AI develops in the short term.

But we're really using AI to screen out a lot of the red flags that the analytics produce. If they're going to produce a lot of anomalies that the auditor has to look into, can the auditor use another form of technology, artificial intelligence, to screen out, say, 30-40% of them, reduce their workload, and focus on the most crucial areas?

JS: I think the AI angle is really fascinating. Obviously it's super fast moving and none of us owns an accurate crystal ball, alas, but looking at where the trends in AI are heading, how do you think it's going to impact the profession going forward?

JB: It's hard to say. The way I feel about technology is that to the extent you can write down what you do every day on a piece of paper or in a Word document, that's not good. If it's more difficult to describe your tasks, or what you do, or what goes through your brain from 8:00 AM to 6:00 PM every day, I think you're in better shape.

I think if AI can somehow be incorporated in audits to remove even more of the grunt work, that's a good thing. And that's what I saw when I entered practice 20 years ago at, I don't want to say the advent of Word and Excel, but certainly the point when they became more advanced. A lot of things that the audit staff did in the years before I started, I would've been miserable doing—just flat out adding numbers on a calculator. I think AI can probably improve things in the same way. I was super glad someone came up with Excel way before I started in the profession.

The other thing that people may not be aware of is that a large number of accounting professors have run their exams through ChatGPT, and there's a study coming out in Issues in Accounting Education where ChatGPT does fairly well. It gets a C minus on exams in some classes and B pluses in others. In audit, it actually does a little bit better than in other areas because you don't have the five-step problems that you have with applying generally accepted accounting principles or with a tax issue, for example.

I've actually run my short essays that are based upon cutting edge research through ChatGPT, and it does a fairly decent job. I've been working with AI since Watson came out. It's the first time I've been thoroughly impressed with the human nature of the output versus a load of highlighted words. 

JS: Has it changed the way you teach, the kind of questions you ask or how you approach them at this stage? Or are you still experimenting?

JB: My take is in a world of Google, why do we have students memorize anything? What I want them to be able to do is to be able to think in a financial reporting, audit and assurance world. I don't need them to memorize formulas or definitions, but they need to be comfortable enough that they can apply them wherever they go. I know when they go work for the firms, they're going to get their specific training in X, Y and Z.

For example, when I go over sampling or materiality, I go over it at the 1,000 foot level because there's a lot of variation in how firms apply those things. Whereas things like the audit risk model, we get down and deep, because I know that's going to be relevant to every audit they work on.

I have chosen to not go the traditional memorization accounting route. I remember going through that as a student and it wasn't a whole lot of fun and it didn't do me a lot of good. 

III. BRAZEL’S CAREER ADVICE FOR STUDENTS AND WHEN TO ‘CHOOSE YOUR MAJOR’

JS: You spoke earlier about the good old days where it was just Excel when you were in practice yourself, and I want to use that as a segue to get into a section we have in all of these conversations, which is talking about your own career path and how you got from wherever you started to where you are now. Can you tell us about your path into academia and also how you became interested in fraud as a research topic?

JB: Very simply, I went to college and, like a lot of people, I took accounting and finance courses and found out I had an aptitude for them. So I said, well, I thought I was going to go into business, but maybe finance and accounting is more my field.

And the advantage accounting had was a more accelerated recruiting process in that by the fall of my senior year I was tied into one of the large accounting firms, Deloitte, and was signed with them before the finance recruiting process even began at my college. Of course I knew, and I tell my students today, that it's a wonderful place to start. It may not be where you want to spend ten years, fifteen years of your life, but your first two or three years are accelerated so much with these firms that it's almost a no-brainer to go with them. 

I went to work for Deloitte in what we call the tri-state region, the New York, New Jersey, Connecticut area, and found out I really enjoyed a bunch of different industries: manufacturing, retail, real estate, even book publishing. I then finished my work at Deloitte, after about five-plus years, in the Philadelphia office of the firm. I can also give my students feedback on choosing a different office, and on switching offices if they decide they want California or Europe or what have you. That happens a lot nowadays.

While I was at Deloitte, I was also teaching CPA review, and I found out that while I was decent working at Deloitte, I was really good at teaching, so I went back to some of my mentors in my undergraduate program. I said, “How can I become a professor?” And they're like, “You’ve got to get this PhD.” And I said, “What's a PhD in accounting all about?” I didn't know anything about it. 

There's a lot of information out there now if you Google it, but there wasn't a whole lot of information back then. Really, a PhD in accounting is mostly a research-based degree. So I went back to Drexel University and got my PhD, where I took courses in psychology, because I do a lot of experimental behavioral work, along with courses in econometrics, statistics, basic econ, a number of fields.

And then I did my dissertation. When I was in my last years with the firm, SAP and Oracle were playing a large role at many large corporations, handling all of their IT across accounting, distribution, HR, you name it. And I thought, this is neat, let's look at the effect of these implementations, the fact that everything's being automated, on the quality of the audit.

And so that's where my dissertation was. I went to NC State University in 2003, and I've been there ever since, which is amazing. About 20 plus years now I've been at one university. I always tell my students I'm either very boring or very risk averse or very loyal, because I've only really had two real jobs my whole life. 

I wasn't really interested in the fraud game, but as I started getting on in my career, Enron, WorldCom, Adelphia Cable, Tyco were all front page news, and the question people would ask me was, “If you're doing accounting research, how is it not related to this?” Because this is what we care about as a public. So I got involved with that. I had a colleague who was big into it before I was. We got some very large research grants that funded us developing websites and pools and running studies for investor protection and increasing audit quality. 

The chief result of all that was we developed a fairly intuitive fraud red flag, which was simply this: when a company's financial performance is improving, there are things that they can manipulate, like sales growth or assets. But what happens when their base operational metrics, what we call non-financial measures, things like number of employees, square footage of facilities, or number of patents, are declining? Are companies that are more likely to be cooking the books showing a vast inconsistency, a red flag, between what they say they're doing and what they're actually doing in the field?

And so we identified that red flag. The companies that were committing fraud were exhibiting an inconsistency in excess of 25%. It was a big red flag. Then we built a tool that went through the annual reports that investors use, automated the collection of that data, and then computed the difference for regulators. Everything has flowed right out of that, all of the experimental work I've done.
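As a rough illustration of how that kind of inconsistency measure might be computed, here is a minimal sketch in Python. The function, inputs, and example figures are hypothetical; only the idea of comparing revenue growth against non-financial-measure growth, with a gap in excess of roughly 25% as the red flag, comes from the interview.

```python
def screen_growth_inconsistency(revenue_prior, revenue_current,
                                nfm_prior, nfm_current, threshold=0.25):
    """Illustrative sketch of the red flag described above: compare
    reported revenue growth with growth in a non-financial measure
    (employees, square footage, patents) and flag large gaps.
    Hypothetical helper, not the actual research tool."""
    revenue_growth = (revenue_current - revenue_prior) / revenue_prior
    nfm_growth = (nfm_current - nfm_prior) / nfm_prior
    gap = revenue_growth - nfm_growth
    return gap, gap > threshold

# Hypothetical company: revenue up 40% while headcount grew only 5%.
gap, flagged = screen_growth_inconsistency(100.0, 140.0, 1000, 1050)
print(f"Inconsistency: {gap:.0%}, red flag: {flagged}")  # 35%, True
```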

JS: I want to go back to the earlier part of your career where you said that joining a Big Four like Deloitte is a great way to accelerate your growth. Would you say that still holds today? Do you often advise either recent graduates or people in the early stages of their career to go the Big Four route?

JB: I always say, “If you've got more than one option, then you have choices.” The main thing is to get some form of offer from a regional, national, international Big Four firm, and then in class I hint at the fact that not all firms are made equal and that there isn’t a right firm for everybody. Some people may be better with a regional firm, some people may work out better at a national firm, and then other people want to go to the largest firm there is.

We talk about the size of the pond at these firms and the size of the other fish and how everybody can't get an A. For some people going to the best of the best may not be the best opportunity. Also, it's not all about the firm. It may be about the city, because the city can often dictate the quality of the firm, how well they're doing there, and also the industries that they serve.

If you're thinking about Charlotte, North Carolina, obviously banking's going to play a big role, but not so much in another city. Here in Raleigh, we've got a lot of pharma and high tech and government with it being the capital. If you're interested in gambling, you're obviously going to Atlantic City or Las Vegas.

I go over that with my students. And I also say, “Remember, when you work for a Big Four firm, it may sound very glamorous.” I tested accounts receivable and sales for the better part of about seven months and didn't see any other audit area. So you do get a deeper level of education, but potentially in a very specific industry, in a very specific accounting area.

Again, one firm or one city is not made for every person. And so we just talk about the decision process. Obviously the most important part is that you're doing well as an undergraduate and doing well in the interview process, so you have choices.

JS: In terms of that question of early career training and breadth versus depth, do you need both? Is one more important than the other? If you're just starting out in your career, which would you recommend people focus on more, or both? 

JB: I typically tell my students to delay closing a window as long as you can. You want to stay general to some extent, but eventually, around year one and a half to two and a half, you're going to have to, as we say, ‘pick your major.’ And even before I started, the firms were moving towards having you pick your major or your industries of choice sooner rather than later.

And so I want to make my students aware that eventually they're going to have to make a choice, or someone's going to make it for them. And that choice may not be the one you like, so be proactive with HR people and let them know the industries you're interested in, even when it comes to your first job opportunity after an internship. A lot of those decisions about putting people on different jobs are made with a lot of what we call asymmetric information. The person making the call doesn't really know where your interests lie, so to the extent you can convey that, that would be great.

And I also stress to my students the importance of recurring on jobs when possible. We've done some research that shows that when staff people signal to their bosses that they want to recur, those bosses are more likely to supply them with a lot more professional development and on-the-job training than they do for people who are here this year and not going to be here next year.

JS: Are there any other aspects of your research you want to get the word out about, anything you think is under-recognized among practitioners in the field?

JB: The only thing I'll bring up is that I've been lucky to have my research supported by a lot of grant agencies and also very generous access to auditors to perform my studies. And without that, none of this stuff gets figured out. The profession's participation in our research, whether it's supporting it financially or even just access to people, is a precursor to doing the research.

We need to continue that support, or potentially improve it. We're having some issues now with access to people and with people not performing in our studies. Without that access, we can't develop a stream of research like my co-authors and I have done related to incentives, professional skepticism, and the detection of red flags.

We need to do lots of studies. No one study is going to answer everything, but if you do three, four, or five of them, you start getting at different angles and you have a good picture of what works, what doesn't work, and what's the cost-effective solution to improving things.

Listen on Apple Podcasts, Spotify and YouTube.

This AccelPro audio transcript has been edited and organized for clarity. This interview was recorded on June 22, 2023.

AccelPro’s expert interviews and coaching accelerate your professional development. Our mission is to improve your day-to-day job performance and make your career goals achievable.

JOIN NOW

Send your comments and career questions to questions@joinaccelpro.com. You can also call us at 614-642-2235.

If your colleagues in any sector of the audit field might be interested, please let them know about AccelPro. As our community grows, it grows more useful for its members.
