Wednesday, November 9, 2016

Estimating the Tax Loss Based on Statistical Sampling (11/9/16)

As I have frequently noted, the tax loss is the principal driver of the Sentencing Guidelines calculations for tax crimes.  Very generally, the sentencing factors, including the tax loss, must be proved by a preponderance of the evidence.  And, for the key tax loss finding, the sentencing court need only make a "reasonable estimate."  USSG § 2T1.1 n.1, here.

In cases where the defendant's crime was essentially a one-off or limited in scope -- the taxpayer and a related party, perhaps over a number of years -- estimating the tax loss is routine.  It is the same type of drill the Government would undertake in a civil tax audit to determine the tax liability and impose it on the parties involved.  High-volume return preparer criminal cases, however, present a problem.  The Government does not have the resources to audit all, or even most, or probably even very many, of the returns that the Government knows or believes were the object of the return preparer's crime(s).  What to do?

Statistical sampling.  Wikipedia, here, describes this methodology (footnotes omitted):
In statistics, quality assurance, and survey methodology, sampling is concerned with the selection of a subset of individuals from within a statistical population to estimate characteristics of the whole population. Each observation measures one or more properties (such as weight, location, color) of observable bodies distinguished as independent objects or individuals. In survey sampling, weights can be applied to the data to adjust for the sample design, particularly stratified sampling. Results from probability theory and statistical theory are employed to guide the practice. In business and medical research, sampling is widely used for gathering information about a population.
The sampling process comprises several stages:

  • Defining the population of concern
  • Specifying a sampling frame, a set of items or events possible to measure
  • Specifying a sampling method for selecting items or events from the frame
  • Determining the sample size
  • Implementing the sampling plan
  • Sampling and data collecting
  • Data which can be selected
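
To make the quoted stages concrete, here is a minimal sketch of my own.  None of the numbers come from the Johnson case or any real case; the sketch simply draws a simple random sample from a hypothetical population of returns, estimates the proportion that are "bad," and attaches a rough 95% margin of error.

import math
import random

# Hypothetical illustration only -- none of these numbers come from the
# Johnson case.  Population: 13,000 returns, 30% of which are "bad."
random.seed(1)
population = [1] * 3_900 + [0] * 9_100   # 1 = bad return, 0 = clean return

def estimate_bad_rate(sample_size):
    """Draw a simple random sample and return the estimated bad-return rate
    plus a rough 95% margin of error (normal approximation)."""
    sample = random.sample(population, sample_size)
    p_hat = sum(sample) / sample_size
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, margin

for n in (41, 400, 4_000):
    p_hat, margin = estimate_bad_rate(n)
    print(f"sample of {n:5d}: estimated bad-return rate = {p_hat:.1%} +/- {margin:.1%}")

With a sample of only 41 returns the margin of error is wide; it narrows as the sample grows, and the whole exercise is meaningful only if the sample is drawn at random from the population about which the inference is to be made.  Keep that in mind as you read the opinion below.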

OK, there is a lot of jargon in the Wikipedia passage, but I suspect that most readers of this blog will have some idea of the concept; the sketch above is meant only to make it concrete.  I think the key issues are the breadth of the sample relative to the inferences desired and the range of error attached to those inferences.  I have written on this subject before in discussing other cases; I collect those writings at the end of this blog entry.  But I offer here a recent opinion, United States v. Johnson, ___ F.3d ___, 2016 U.S. App. LEXIS 19399 (5th Cir. 2016), here.  Here are the key excerpts:
The Government calculated its estimate of the tax loss amount in the following manner. According to IRS case agent testimony, Johnson's company filed 13,429 returns during the years 2006 to 2013. Of those, 6,727 returns contained Schedule C losses that generated a total claimed refund amount of approximately $37.5 million. The IRS case agent then researched tax returns filed in the Dallas/Fort Worth ("DFW") area during the 2006-2013 time frame and determined that 7.1% contained Schedule C losses. This 7.1% was then applied to $37 million to yield $2.6 million; the case agent concluded that $2.6 million (i.e., 7.1%) of the total claimed refund amount was not fraudulent because this amount was on par with the other returns filed in the DFW area. 
Next, the case agent employed the extrapolation method n1 of estimating tax loss by selecting a sample of tax returns prepared during the 2006 to 2013 tax seasons to analyze the percentage of the claimed refund amount that was deemed to be fraudulent in this group. The sample was defined as follows. Throughout the investigation, 58 Revenue Agent Reports ("RARs") were prepared. Of those, 41 investigated returns reflected Schedule C losses; this amount included not only returns that were part of the indictment as counts of conviction, but also returns from the RARs that did not factor into the counts of conviction. The case agent calculated the total criminal tax loss of the 41 returns to be approximately $235,000. This amount constituted 72.2% of the total refund amount claimed, which was approximately $325,000.
   n1 Extrapolation is often used in tax audit situations when the number of tax returns under review is too large for a practicable and effective analysis. Black's Law Dictionary defines "extrapolation" as "[t]he process of estimating an unknown value or quantity on the basis of the known range of variables." Extrapolation, BLACK'S LAW DICTIONARY (9th ed. 2009). For a description of the use of extrapolation to calculate fraudulent tax loss, see United States v. Mehta, 594 F.3d 277, 283 (4th Cir. 2010). 
Recall the $2.6 million figure which equaled to 7.1% of the total refund amount of $37.5 million claimed from returns with Schedule C losses. The case agent subtracted $2.6 million from $37.5 million to find a difference of $34.9 million. This amount was determined to be the total amount of refunds claimed on Schedule C losses less the DFW area credit. The case agent then applied the 72.2% falsity rate to $34.9 million to reach an estimated total loss amount -- resulting from fraudulent Schedule Cs -- to be $25,201,861. 
       3. 
Johnson and Everson objected to the Presentence Investigation Report ("PSR") recommending the aforementioned calculation of losses and raise the same objection on appeal. They contend that the sample used to extrapolate the amount of tax loss was too small and not representative of the total corpus of tax returns prepared. The use of this sample, according to Johnson and Everson, resulted in an inflated percentage of fraud. 
The Government counters that the IRS case agent's method of calculation was similar to that utilized previously in United States v. Simmons, 420 F. App'x 414 (5th Cir. 2011). In Simmons, an unpublished decision, the IRS examined a sample of 41 tax returns to estimate the tax loss -- a number that is identical to the number of returns composing the extrapolated sample in the instant case. Although the sample in Simmons was "not . . . completely random," no evidence in the record "indicate[d] [whether] those returns would have a higher falsity rate than any other returns prepared by Simmons." Id. at 418. 
Johnson and Everson argue that the instant case is distinguishable from Simmons. Here, the 41 tax returns were chosen from the prepared RARs. The defendants claim that, as such, they had already been audited and were therefore less random than the selection at issue in Simmons. The defendants point out too that nearly half of the returns were bases for the counts of conviction. In addition, Johnson and Everson introduced evidence before the district court indicating that their investigators had located 82 customers whose returns were not included in the sample; none of these clients expressed any problems with their tax returns. These factors, according to Johnson and Everson, contributed to an inflated falsity rate. The defendants also point to other circuits' decisions to support their argument, namely the Fourth Circuit's opinion in United States v. Mehta, 594 F.3d 277, 283 (4th Cir. 2010). The court held in Mehta that the use of only audit-flagged returns to calculate total tax loss was an error on the part of the district court, but that the error was harmless under the harmless error standard of review. Johnson and Everson argue that Mehta's persuasive authority with respect to the improper use of audit-flagged returns supports a reversal of the instant tax loss calculation. 
      4. 
We view Johnson and Everson's arguments as reasonable and relevant, but we do not find them sufficient to warrant a reversal of the district court's adoption of the tax loss calculation. Although the defendants point to errors in the Government's calculation, they do not offer evidence or alternative calculations to contradict or rebut a finding that the alleged tax loss was anything but a "reasonable estimate based on the available facts." Further, we have affirmed tax loss calculations based on less reliable information than that present in the instant case. See, e.g., United States v. Montgomery, 747 F.3d 303, 311-12 (5th Cir. 2014) (upholding IRS case agent's calculation of tax loss notwithstanding agent's failure to account for defendants' unclaimed business expenses). Thus, even if the sample might have been more randomly selected to generate a more specific portrayal of the tax fraud at issue, the district court did not commit clear error in adopting the case agent's calculations. 
   B. 
Johnson and Everson's argument that they were not allowed sufficient time to inspect the non-sample tax returns was raised for the first time on appeal and thus warrants plain error review. 
The defendants were given the time between the PSR's articulation of the calculation method on March 16, 2015 and the date of sentencing on May 13, 2015 to prepare their defense to the tax loss calculation. This time frame resulted from the court's grant of a continuance to the defendants; after the May 13th sentencing date was set, Johnson and Everson did not move for a second continuance. The defendants nonetheless now attempt to compare their allotted time to the five months of preparation obtained by the defendant in Simmons. They claim further that with more time they would have been able to locate favorable witnesses in excess of the 82 customers they contacted as part of their defense. 
This argument fails under plain error review. Johnson and Everson have not shown that they were unable to prepare a proper defense as a result of insufficient time. Neither do they show that the two-month time frame affected their substantial rights or seriously hindered the fairness of the judicial proceedings. Indeed, it was Johnson and Everson's own failure to request a second continuance that contributed to the length of the time frame at issue. Thus, we find no plain error on the part of the district court. 
IV. 
In sum, the district court did not abuse its discretion in refusing to grant the first waiver of a jury trial. Further, the district court did not clearly err in adopting the PSR's calculation method in estimating the amount of tax loss generated by Johnson and Everson's fraudulent activity. Moreover, the time frame allotted to the defendants to prepare their defense did not constitute plain error. Thus, we AFFIRM the judgment of the district court.
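For readers who want to see the arithmetic behind the $25,201,861 figure, the following sketch (mine, not the court's or the IRS's) reproduces the extrapolation as the panel describes it.  Because the opinion recites rounded figures, the result only approximates the court's number; the structure of the calculation is the point.

# Reproducing the case agent's extrapolation with the rounded figures recited
# in the Johnson opinion, so the result only approximates the $25,201,861 the
# court reports.

total_schedule_c_refunds = 37_500_000    # refunds claimed on the 6,727 Schedule C returns
dfw_schedule_c_rate = 0.071              # share of DFW-area returns showing Schedule C losses

# Step 1: treat 7.1% of the claimed refunds as presumptively legitimate.
dfw_credit = dfw_schedule_c_rate * total_schedule_c_refunds   # about $2.66M; the opinion rounds to $2.6M

# Step 2: subtract the (rounded) credit to get the pool potentially attributable to fraud.
fraud_pool = total_schedule_c_refunds - 2_600_000             # $34.9M

# Step 3: apply the falsity rate drawn from the 41-return sample
#         ($235,000 criminal tax loss / $325,000 in refunds claimed, roughly 72.2%).
falsity_rate = 0.722
estimated_tax_loss = fraud_pool * falsity_rate                # about $25.2M

print(f"DFW credit:          ${dfw_credit:,.0f}")
print(f"Pool after credit:   ${fraud_pool:,.0f}")
print(f"Estimated tax loss:  ${estimated_tax_loss:,.0f}")

Note how sensitive the bottom line is to the 72.2% falsity rate taken from the 41-return sample; that is why the defendants' attack on how the sample was selected matters.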
JAT Comment: The Tax Table, USSG § 2T4.1, provides a Base Offense Level ("BOL") of 28 for a tax loss of more than $25 million but not more than $65 million.  Here, on this evidence, the court found a tax loss of $25,201,861 attributable to the fraudulent Schedule Cs.  Given the issues with respect to the estimate, not much would have been lost by choosing a tax loss just below $25 million.  I haven't re-worked the Guidelines calculations, but moving the BOL down two levels to 26 probably would not have affected the sentence.
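To see how close that figure sits to a Guidelines break point, here is a small lookup over the slice of the Tax Table that matters here.  I believe the post-2015 thresholds are more than $9.5 million for level 26, more than $25 million for level 28, and more than $65 million for level 30, but treat the entries below as my partial, illustrative reading and check the current Guidelines.

# Partial slice of the USSG sec. 2T4.1 Tax Table (post-2015 thresholds as I
# understand them; verify against the current Guidelines before relying on it).
# Each entry: (tax loss must exceed this amount, offense level).
TAX_TABLE_SLICE = [
    (65_000_000, 30),
    (25_000_000, 28),
    (9_500_000, 26),
]

def base_offense_level(tax_loss):
    """Return the offense level for a tax loss within the partial table above."""
    for floor, level in TAX_TABLE_SLICE:
        if tax_loss > floor:
            return level
    raise ValueError("tax loss below the range covered by this partial table")

print(base_offense_level(25_201_861))   # 28 -- the level produced by the court's estimate
print(base_offense_level(24_999_999))   # 26 -- what a slightly more conservative estimate would yield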

Prior blog entries (in reverse chronological order):

  • Fourth Circuit Approves Statistical Sampling Technique for Sentencing Tax Loss (Federal Tax Crimes Blog 11/23/13), here.
  • Court of Appeals Acts on Its Hunch re Flawed Sentencing Tax Loss Estimate Is Harmless (Federal Tax Crimes Blog 2/3/10), here (discussing the Mehta case).

