One of forensic science’s dirty little secrets is that it isn’t always accurate; results are subject to interpretation and error. In some cases, there are no set standards for technicians or expert witnesses to follow when evaluating evidence. In its 2016 report, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, the President’s Council of Advisors on Science and Technology (PCAST) called multiple forensic techniques into question, including DNA analysis, fingerprints, bitemarks, and even ballistics.
Multiple-Source DNA Testing
DNA testing is considered the “gold standard” in forensics today because of its accuracy. However, PCAST has identified several key issues that still need to be resolved, issues that scientists and even jurors need to be aware of. Sometimes police retrieve DNA samples that contain biological material from multiple people; perhaps the crime took place in a public area where many people touched the same surface, or even the weapon used in the crime. These are known as “multiple-source” samples, and it can be very difficult for forensic scientists to sort out which bits of DNA belong to which person, or whether the possible perpetrator’s DNA exists somewhere within the mixture. It often becomes a subjective call, depending on the experience and knowledge of the technician. Rather than declaring multiple-source samples inadmissible or impossible to use in legal cases, PCAST has called for the creation of scientifically valid, standardized procedures for testing multiple-source DNA.
Bitemark Analysis
Bitemark analysis has always been a somewhat subjective area: scientists first determine whether the marks on a victim were made by human teeth, then make a cast and compare the injury to the teeth or dental records of the suspect. The problem? A 2010 study found that bitemarks leave impressions so general that they can be matched to a wide range of dental profiles, and in earlier studies some scientists could not even agree on whether certain bitemarks were made by humans. To ensure that bitemark evidence presented in court is as accurate as possible, PCAST recommended that multiple independent studies be conducted on the marks before the evidence is admitted.
Fingerprint Analysis
Fingerprinting has been used to identify criminals since the 1800s, yet even with its long history the process has yet to be perfected. Fingerprint verification first relies on computer software to evaluate the features of a print and generate a list of possible matches. That it is a technological process ought to increase validity, since it removes human bias from the equation. However, the software used to evaluate prints is proprietary, not open-source, so the software itself may contain undetected errors. And once the software compares prints, it is still up to an expert to interpret the results, reopening the door to human error or bias. To ensure the validity of fingerprint evidence, PCAST again noted the need for multiple independent studies to verify accuracy, as well as the need to validate common testing methods. New technology on the horizon may prove a game-changer for the next PCAST report.
Firearms Examination
The science of firearms examination rests on the idea that when a bullet is fired, the barrel of the gun leaves marks on the bullet that are unique to the inside of that barrel, much like a fingerprint. PCAST called this theory into question, noting that it has never actually been proven, only assumed. Bullets may not always carry enough unique identifying marks to determine with 100% accuracy that a suspect’s gun fired a particular bullet. Firearm and toolmark examiners, however, maintain that unique marks are not essential to determining whether a bullet came from a specific gun, and that matches can still be made with near-perfect accuracy. The PCAST report nonetheless calls for a re-evaluation of testing procedures and for additional studies to confirm the theory’s validity.
Expert Bias
One of the most notable themes throughout PCAST’s report was expert bias: not only the biases of the scientists themselves, but the great weight jurors give to scientific evidence presented at trial. Jurors are heavily inclined to believe the testimony of experts; they trust the scientific process, and some lack the education or expertise to understand the jargon experts commonly use when testifying. Additionally, some experts present their findings too conclusively, without explaining to the jury the possible flaws in the testing process. It is not unusual for an expert to claim to be “certain” of their findings when the underlying scientific process was inherently flawed or biased. This is bad news for those facing wrongful conviction on dubious testing, and it is something PCAST and the scientific community as a whole are working to remedy.