Renowned legal scholar William Blackstone famously introduced the now-commonplace notion that it is “better that ten guilty persons escape than that one innocent suffer.” Unfortunately, despite the safeguards that the United States criminal justice system provides, innocent people are imprisoned in our country. With the development of DNA testing, which can conclusively prove the innocence of individuals in certain criminal investigations, wrongful convictions have gained widespread attention.
Numerous groups, perhaps most notably the Innocence Project, have worked hard to exonerate wrongfully convicted individuals and to discover exactly what leads to wrongful convictions in the first place. These groups have been quite successful, as evidenced by the Innocence Project’s 300th exoneration on September 28, 2012. Their efforts have taught us a great deal about why individuals are wrongfully convicted. The single most common cause: faulty eyewitness testimony.
It is bad enough that eyewitness testimony may not be as reliable as we once thought. To make matters even worse, however, jurors tend to believe that eyewitness testimony is the most powerful form of evidence available. Because eyewitness evidence is so influential, and because it contributes to a hefty number of wrongful convictions, public policy demands that we take a closer look at how we can limit the effects of faulty eyewitness testimony.
In a series of related posts, I will provide some background on the justice system’s analysis of eyewitness testimony, discuss how a sea of empirical data has raised questions concerning the reliability of eyewitnesses, examine whether the current jury instructions regarding the credibility of eyewitnesses violate due process, and discuss how a recent New Jersey Supreme Court ruling may encourage nationwide reform.
In the remainder of Part I of this analysis, I hope to provide some of the necessary legal background surrounding the issue of eyewitness credibility.
In the landmark case of Neil v. Biggers, the Supreme Court of the United States first formally recognized the need to instruct jurors on how to adequately weigh eyewitness testimony. The Court articulated a five-factor guide stating that, when weighing the credibility of an eyewitness’s testimony, jurors should consider: (1) the opportunity of the witness to view the criminal and his actions at the time of the crime, (2) the witness’s degree of attention to the crime, (3) the accuracy of the witness’s prior description of the criminal, (4) the level of certainty demonstrated by the witness at the confrontation, and (5) the length of time between the crime and the confrontation. The Supreme Court later affirmed this approach in Manson v. Brathwaite, virtually cementing the five-factor test as the appropriate way to analyze eyewitness reports.
The Biggers and Brathwaite approach seems straightforward and easy to understand, which is exactly how ideal jury instructions should read. So what’s the problem? The problem is that Biggers was decided in 1972 and Brathwaite in 1977, and the Supreme Court has not reconsidered or added guidance on the issue since. More importantly, in the years since the Brathwaite decision, a vast amount of empirical research has emerged in the social sciences (primarily in the field of psychology and law) indicating that some of the five factors may not, in fact, be appropriate for jurors to consider when evaluating eyewitnesses.
In Part II of this analysis, I will discuss this vast body of empirical research, citing specific examples that tend to call for a reform of the five-factor approach. Spoiler alert: extremely confident, dare I say 100% positive, eyewitnesses have sent their fair share of innocent individuals to prison.