If the DNA from a crime scene is damaged, why would it be helpful to replicate it by PCR (polymerase chain reaction)? Even if we get billions of those copies, it is still incomplete, isn't it? How can the PCR make the forensic process easier?
This process helps with the signal-to-noise ratio. In theory, you could use fingerprinting techniques to dice up the few copies of DNA recovered from the scene and carefully measure the fragments. However, it is far easier to replicate the DNA first, increasing the number of molecules available to the process.
Remember, fingerprinting for crime investigations doesn't involve reading the DNA, but merely uniquely identifying it.
POLYMERASE CHAIN REACTION
Polymerase chain reaction (PCR) is a technology for exponential amplification of a fragment of DNA. (The PCR is covered by patents owned by Hoffmann-La Roche. A license is required to use the PCR process.) The limit of its sensitivity is a single molecule, making PCR a superb qualitative tool for the specific detection of rare DNA sequences. Under proper conditions, the yield of amplified DNA is proportional to the initial number of target molecules, rendering it a quantitative analytical tool as well. Since its original description in 1985, PCR has evolved into an assemblage of varied methodologies almost universally used in basic biological research, biotechnology, clinical research, clinical diagnostics, forensics, food technology, environmental testing, archaeology and anthropology, and other fields. Even though other nucleic acid amplification technologies have been described, PCR remains by far the most widely used.
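The exponential arithmetic behind this sensitivity and quantitative behavior can be sketched in a few lines. This is an illustrative back-of-the-envelope model, not part of any PCR protocol; the function name and the `efficiency` parameter are our own simplification (real reactions duplicate somewhat less than 100% of templates per cycle).

```python
def pcr_copies(initial_molecules: int, cycles: int, efficiency: float = 1.0) -> float:
    """Expected copy number after `cycles` rounds of PCR.

    `efficiency` is the fraction of templates duplicated each cycle
    (1.0 = perfect doubling), so each cycle multiplies the pool by
    (1 + efficiency).
    """
    return initial_molecules * (1 + efficiency) ** cycles

# A single intact molecule after 30 perfect cycles: 2**30, about a billion copies.
print(pcr_copies(1, 30))

# The yield remains proportional to the starting amount, which is what
# makes PCR usable as a quantitative tool: ten times the input gives
# ten times the product.
print(pcr_copies(10, 30) / pcr_copies(1, 30))   # 10.0
```

This proportionality is the basis of the report's remark that PCR serves as a quantitative as well as a qualitative tool.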
The New Science of Metagenomics: Revealing the Secrets of Our Microbial Planet (2007)
Microbes run the world. It's that simple. Although we can't usually see them, microbes are essential for every part of human life; indeed, for all life on Earth. Every process in the biosphere is touched by the seemingly endless capacity of microbes to transform the world around them. The chemical cycles that convert the key elements of life (carbon, nitrogen, oxygen, and sulfur) into biologically accessible forms are largely directed by and dependent on microbes. All plants and animals have closely associated microbial communities that make necessary nutrients, metals, and vitamins available to their hosts. Through fermentation and other natural processes, microbes create or add value to many foods that are staples of the human diet. We depend on microbes to remediate toxins in the environment, both the ones that are produced naturally and the ones that are the byproducts of human activities, such as oil and chemical spills. The microbes associated with the human body in the intestine and mouth enable us to extract energy from food that we could not digest without them and protect us against disease-causing agents.
These functions are conducted within complex communities: intricate, balanced, and integrated entities that adapt swiftly and flexibly to environmental change. But historically, the study of microbes has focused on single species in pure culture, so understanding of these complex communities lags behind understanding of their individual members. We know enough, however, to confirm that microbes, as communities, are key players in maintaining environmental stability.
By making microbes visible, the invention of microscopes in the late 17th century made us aware of their existence. The development of laboratory cultivation methods in the middle 1800s taught us how a few microbes make their livings as individuals, and the molecular biology and genomics revolutions of the last half of the 20th century united this physiological knowledge with a thorough understanding of its underlying genetic basis. Thus, almost all knowledge about microbes is largely "laboratory knowledge," attained in the unusual and unnatural circumstances of growing them optimally in artificial media in pure culture without ecological context. The science of metagenomics, only a few years old, will make it possible to investigate microbes in their natural environments, the complex communities in which they normally live. It will bring about a transformation in biology, medicine, ecology, and biotechnology that may be as profound as that initiated by the invention of the microscope.
WHAT IS METAGENOMICS?
Like genomics itself, metagenomics is both a set of research techniques, comprising many related approaches and methods, and a research field. In Greek, meta means "transcendent." In its approaches and methods, metagenomics circumvents the unculturability and genomic diversity of most microbes, the biggest roadblocks to advances in clinical and environmental microbiology. Meta in the first context recognizes the need to develop computational methods that maximize understanding of the genetic composition and activities of communities so complex that they can only be sampled, never completely characterized. In the second sense, that of a research field, meta means that this new science seeks to understand biology at the aggregate level, transcending the individual organism to focus on the genes in the community and how genes might influence each other's activities in serving collective functions. Individual organisms remain the units of community activities, of course, and we anticipate that metagenomics will complement and stimulate research on individuals and their genomes. In the next decades, we expect that the top-down approach of metagenomics, the bottom-up approach of classical microbiology, and organism-level genomics will merge. We will understand communities, and the collection of communities that forms the biosphere, as a nested system of systems of which humans are a part and on which human survival depends. In some situations, it will be possible to apply the new understanding to problems of urgency and importance.
Metagenomics in either sense will probably never be circumscribed tightly by a definition, and it would be undesirable to attempt to so limit it now, but the term includes cultivation-independent genome-level characterization of communities or their members, high-throughput gene-level studies of communities with methods borrowed from genomics, and other "omics" studies (see Box 1-1), which are aimed at understanding transorganismal
The Other "Omics" Sciences
The term genome was first proposed by Hans Winkler, a professor of botany at the University of Hamburg, Germany, in 1920 (Winstead 2007). It was coined to describe the total hereditary material contained in an organism long before it was known that genetic information is encoded by DNA. Today genome is used to describe all the DNA present in a haploid set of chromosomes in eukaryotes, in a single chromosome in bacteria, or all the DNA or RNA in viruses. The suffix ome is derived from the Greek for "all" or "every." In the past several years, many related neologistic omes have come into use to describe related fields of study that encompass other aspects of large-scale biology. Some of them are:
The proteome, the total set of proteins in an organism, tissue, or cell type; proteomics is the associated field of study.
The transcriptome, the total set of RNAs found in an organism, tissue, or cell type.
The metabolome, the entire complement of metabolites that are generated in an organism, tissue, or cell type.
The interactome, the entire set of molecular interactions in an organism.
The list of "omes" and "omics" is growing longer as scientists develop new tools and approaches for carrying out large-scale studies of biological systems.
behaviors and the biosphere at the genomic level. Although in its current early implementation (and for the purposes of this report) metagenomics focuses on non-eukaryotic microbes (see Box 1-2), there is no doubt that its concepts and methods will ultimately transform all biology. In just this way has genomics, a science developed to aid the advancement of biomedicine and the understanding of our own species, transformed the science of all organisms and the application of that science in epidemiology, clinical microbiology, virology, agriculture, forestry, fisheries, biotechnology, microbial forensics, and many other fields.
In conceptualizing metagenomics, we might simply modify Leroy Hood's definition of systems biology as "the science of discovering, modeling, understanding and ultimately managing at the molecular level the dynamic relationships between the molecules that define living organisms" (Hood 2006). We need only replace the last word, organisms, with the phrase "communities and the biosphere."
A Note on Terminology
What is a microbe? In practice, the term microbe is used to describe living things invisible to the human eye, that is, generally less than about 0.2 mm. The terms microbe, microorganism, bacteria, germ, and even bug are often used interchangeably by nonscientists to describe these small organisms. Microbiologists have specific names for the various microbes, which include Bacteria, Archaea and some members of the Eukarya. The first two groups (domains), although unlike in many ways, share a type of cellular organization known as prokaryotic. They lack membrane-enclosed organelles, such as mitochondria, chloroplasts and, most notably, a nucleus. The genomes of Bacteria and Archaea typically contain little non-coding DNA and range in size from 0.5 to 10 million base pairs. By contrast, members of life's third domain, Eukarya, which comprises animals, plants, fungi, algae, and protozoa, have larger genomes with substantially more non-coding DNA. Some eukaryotes are also too small to be seen individually except under a microscope and thus have been traditionally studied by microbiologists. Included among these small eukaryotes are many fungi, such as baker's yeast and the human pathogen Candida, and many of the algae and protozoa (harmless paramecia, for instance, and the malaria parasite Plasmodium). Viruses, although arguably not alive, in that they can replicate only inside cells and have no metabolism or cell structure of their own, are also encompassed in the science of microbiology. In this report, we address primarily metagenomics projects that focus on Bacteria, Archaea and viruses. Because of their larger genomes, microbial eukaryotes have received less attention, a situation which should be remedied as sequencing becomes less expensive and bioinformatic methods become more powerful.
WHAT MICROBES CAN DO: FOUR EXAMPLES
We start with examples. There are countless ways in which microbes influence daily life. Earth is a biological entity as much as it is a physical one, and most of the vital biology, on which all life depends, is microbiology (see Box 1-2). But because microbes are individually invisible, we (even microbiologists) need to be reminded of our debt to them. Here are four of the thousands of reasons.
Microbes Modulate and Maintain the Atmosphere
Carbon is the most abundant chemical element in all living things, including humans (excluding the hydrogen and oxygen in the water, which makes up the bulk of our weight). Carbon dioxide (CO2) in the atmosphere
is the most abundant source of carbon on Earth, but in this form it is inaccessible to animals and most bacteria. Plants and some bacteria "fix" carbon through photosynthesis, a light-driven conversion of CO2 to sugars that generates the oxygen that fuels all aerobic forms of life. Although plants tend to get most of the credit, bacteria are responsible for about half of the photosynthesis on Earth (Pedros-Alio 2006).
Ocean microbes, collectively present at billions of cells per liter, grow at rates of about one doubling per day in surface waters and are consumed at about the same rate (Whitman et al. 1998). The organisms that carry out photosynthesis turn over rapidly in the ocean as well, on the average about once per week. Net primary productivity in the global ocean is estimated to fix 45-50 billion tons of CO2 per year (Falkowski et al. 1998). Chemical transformations mediated by marine microbes play a critical role in global biogeochemical cycles (see Figure 1-1). The collective metabolism of marine microbial communities has global effects on fluxes of energy and matter in the sea, on the composition of Earth&rsquos atmosphere, and on global
climate. In essence, the combined activities of microbial communities affect the chemistry of the entire ocean and maintain the habitability of the entire planet. Hidden within the population dynamics of these complex communities are fundamental lessons of environmental response and sensing, species and community interactions, gene regulation, and genomic plasticity and evolution. Microbes are the stewards of Earth's biosphere and are Nature's biosensors par excellence.
Perhaps most obviously today, the living oceans play a critical role in the global carbon cycle (Falkowski et al. 1998). The coupling of the upper ocean and the atmosphere results in higher concentrations of dissolved CO2 in surface seawater than in the rest of the ocean. Much of the elevated carbon input can move through the action of the ocean's "biological pump," which depends on microbial communities in the surface water that transform inorganic CO2 into organic carbon. The organic carbon can either be respired and recycled back to the upper ocean-atmosphere system or sink out of the surface water and be sequestered in the deep ocean. Complex microbial community interactions help to regulate the proportion of recycled versus sequestered carbon. The structure of the phytoplankton community, the rates at which phytoplankton are attacked and destroyed by viruses, and the capacity of other microbes to turn organic carbon back into CO2 all influence the fate of carbon, and the ability of the ocean to act as a source of, or a sink for, CO2. CO2 is a very important greenhouse gas, so photosynthetic bacteria serve the planet in two ways: they convert carbon into biologically accessible forms and they remove CO2 from the atmosphere, thereby mitigating some of the anthropogenic release of CO2 and other greenhouse gases.
Microbes Keep Us Healthy
It should come as no surprise that in the microbe-dominated biosphere, close relationships between microbes and animals are an ancient theme. Humans are no exception. The numbers are staggering. The microbes that reside on the surface of the human body alone outnumber human cells by about a factor of 10. The genomes of members of our indigenous microbial communities (the human metagenome) contain thousands of times more genes than the human genome (Gill et al. 2006). Microbial communities also inhabit the human mouth, skin, and respiratory and female reproductive tracts. The compositions of these communities change over time and, for some body sites, like the oral cavity, there is already evidence that certain community compositions are associated with periodontal disease. Understanding how microbial community structure affects health and disease may contribute to better diagnosis, prevention, and treatment of disease. The vast majority of these microbial partners live in the intestine,
where a diverse community of microbes, 10 to 100 trillion in number, perform functions that humans have not had to evolve, including the extraction of calories from otherwise indigestible components of our diet and the synthesis of essential vitamins and amino acids. The complex communities of microbes that dwell in the human gut shape key aspects of postnatal life, such as the development of the immune system, and influence important aspects of adult physiology, including energy balance. Gut microbes serve their host by functioning as a key interface with the environment: for example, they defend us from encroachment by pathogens that cause infectious diarrhea, and they detoxify potentially harmful chemicals that we ingest (intentionally or unintentionally). In light of the crisis in management of infectious pathogens due to emergence of antibiotic resistance, we would be well served to understand the role of microbial communities in protecting us from infectious agents. Our microbes are master physiological chemists: identifying the chemical entities that they have learned to manufacture and characterizing the functions of human genes and gene products that they manipulate should lead to valuable additions to our 21st-century medicine cabinet (pharmacopeia).
Microbes Support Plant Growth and Suppress Plant Disease
The microbial communities on and around plants play a central role in the health and productivity of crops. The most complex of these communities reside in the soil, which is a composite of mineral and organic materials teeming with bacteria and archaea. Some functions of these microbes are well known. Some bacteria fix atmospheric nitrogen, converting it from dinitrogen gas (a form unusable by plants and animals) to ammonia, which is readily used. Other soil microbes recycle nutrients from decaying plants and animals, and others convert elements, such as iron and manganese, to forms that can be used for plant nutrition. Soil microbial communities determine whether plants will become infected by pathogens. A lingering mystery is the "suppressive soil" phenomenon (Mazzola 2004). In some soils, plants stay healthy even when pathogens are present at high density; when the soil is sterilized, the disease suppression disappears, suggesting a biological basis of the phenomenon. However, in only very few cases has a single microbe isolated from a soil been able to duplicate the suppression. After decades of wrestling with the enigma of suppressive soils, plant pathologists have concluded that in many cases a complex community is responsible for the suppressive activity, which is hugely beneficial to agriculture. No organism has been found to provide the same effect in isolation, because the community members modify each other's behavior.
Microbes Clean Up Fuel Leaks
There are hundreds of thousands of underground storage tanks in this country, most of which are used for storing gasoline. In fact, almost every corner gasoline station in the United States uses three or more of these tanks to dispense regular, premium, and super-premium versions of gasoline. The sad truth about these underground tanks is that the vast majority of them are already leaking or will leak and send gasoline into the subsurface, where it has the potential to contaminate the groundwater. Given the ubiquity and magnitude of the gasoline leaks and the fact that 50% of the US population relies on groundwater as a drinking-water source, one must wonder how it is that we are not all drinking water contaminated with gasoline!
The answer is that we are being protected by the omnipresent and vastly adaptable subsurface microbial community (Mazzola 2004). As gasoline is released into the subsurface, relatively dormant members of the microbial community are triggered to become active and biodegrade the gasoline constituents. Gasoline is composed of thousands of organic chemicals, and a variety of microbes with complementary metabolic systems is required to degrade them all. Furthermore, because there is too little of any single electron acceptor in the subsurface to react with all the electron donors of gasoline, different bacteria with different respiratory capabilities are required to complete the gasoline remediation. For example, when oxygen is depleted in the groundwater in the vicinity of a gasoline spill, bacteria that can respire nitrate take over, followed by bacteria that respire iron, manganese, sulfate, and, eventually, CO2. This complicated community of microbes works together in a self-organized pattern triggered by the movement of the leaking gasoline until the contaminants have been transformed into harmless CO2 and water. The microbial community then becomes dormant again, awaiting the next influx of substrate (either natural or anthropogenic) to return to activity.
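The redox succession described above can be sketched as a simple ordered list: as each electron acceptor is exhausted near the spill, the community shifts to the next one. This is an illustrative toy model, not a geochemical simulation; the function and variable names are our own.

```python
# Electron acceptors in the order the text describes them being used
# up: oxygen first, then nitrate, iron, manganese, sulfate, and CO2.
ACCEPTOR_SUCCESSION = ["oxygen", "nitrate", "iron", "manganese", "sulfate", "CO2"]

def active_acceptor(depleted: set) -> str:
    """Return the most favorable electron acceptor not yet depleted,
    i.e. the one the currently active bacterial guild would respire."""
    for acceptor in ACCEPTOR_SUCCESSION:
        if acceptor not in depleted:
            return acceptor
    raise RuntimeError("all acceptors depleted")

print(active_acceptor(set()))                  # oxygen
print(active_acceptor({"oxygen"}))             # nitrate
print(active_acceptor({"oxygen", "nitrate"}))  # iron
```

The point of the sketch is the ordering itself: no single respiratory strategy suffices, so remediation proceeds as a relay through the community.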
INVISIBLE COMMUNITIES: GLOBAL IMPACT
Modulating the atmosphere, keeping humans and plants healthy, and cleaning up leaking gasoline are just a few examples of the many things that microbial communities can do. The combined activities of microbial communities shape the face of the biosphere on a global scale. The power of these communities lies hidden in the metabolic versatility of their component species that, acting together, regulate the vast majority of matter and energy transformations on Earth. In a loose analogy, the entire biosphere can be imagined as a sort of "superorganism." Its many systems for the recycling of carbon, oxygen, nitrogen, and phosphorus can be compared with the organs of the human body working in unison to facilitate circulation,
nutrient acquisition, respiration, waste processing, and so forth. Unquestionably, humans depend on these global geochemical cycles, and microbes are vital players in the cycles' operation and stability. Microbes can "eat" rocks, "breathe" metals, transform the inorganic to the organic, and crack the toughest of chemical compounds. They achieve these amazing feats in a sort of microbial "bucket brigade": each microbe performs its own task, and its end product becomes the starting fuel for its neighbor. For complex transformations, no microbe can do it alone; it takes a community. For example, no microbial species is capable of completely oxidizing ammonia to nitrate, but teams of microbes do it efficiently. One microbial group oxidizes ammonia to nitrite, and its waste becomes the fuel for another species that transforms nitrite to nitrate, completing the "bucket brigade." Virtually all elemental cycles, including the generation, consumption and flux of greenhouse gases (or, as noted above, the remediation of spilled gasoline), involve similar sorts of microbial collaborations that are tightly regulated and coupled through microbial community interactions. So the bucket brigades are themselves interconnected laterally, an interwoven web of chains. In this way, microbial communities play essential roles in the transformations of energy and matter, producing the air we breathe and shaping the biosphere and climate that we enjoy on Earth today.
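The "bucket brigade" pattern, where one group's waste is the next group's fuel, can be sketched as a chain of transformations. The two-step nitrification example in the text maps onto it directly; the data structure and function names here are our own illustration, not the report's.

```python
# Each entry: (guild, substrate it consumes, product it releases).
# Ammonia oxidizers (e.g. Nitrosomonas) hand nitrite to nitrite
# oxidizers (e.g. Nitrobacter), which finish the job.
NITRIFICATION_BRIGADE = [
    ("ammonia-oxidizers", "ammonia", "nitrite"),
    ("nitrite-oxidizers", "nitrite", "nitrate"),
]

def run_brigade(substrate: str, brigade) -> str:
    """Pass a substrate down the chain; each guild can only act if it
    receives exactly the product of the guild before it."""
    for guild, consumes, produces in brigade:
        if consumes != substrate:
            raise ValueError(f"{guild} cannot use {substrate}")
        substrate = produces
    return substrate

print(run_brigade("ammonia", NITRIFICATION_BRIGADE))  # nitrate
```

Removing either guild breaks the chain, which is the sense in which no single species "can do it alone."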
Larger organisms play key roles, too, of course: about half of all carbon is fixed and half of all oxygen produced by trees, grasses, and other macroscopic plant life. But these larger organisms also depend on microbes: for example, plants depend on the nitrogen fixation carried out by symbiotic microbes in the roots of legumes and other plants that form symbiotic associations. Humans might survive in a world lacking other macroscopic life forms, but without microbes all higher plants and animals, including humans, would die. Not only can many individual systems (for example, the human gut, or such processes as the bioremediation of toxic hydrocarbons) be seen to be the tasks of complex and dynamic microbial communities, but these communities are themselves constituents of even larger systems, predominantly microbial, that collectively make up the biggest and most complex functioning system we know: the biosphere. Whatever the causes, extent, and consequences of the global climate change now upon us, the biosphere's response to the changes (and human survival) will depend on its microbes and their activities.
We live in a time of unprecedented and dramatic global change, in which the effects of human activities challenge the ability of natural ecosystems to buffer them. The industrial revolution marked the beginning of rapid environmental transformation. For example, until the early 20th century, all nitrogen entering the biosphere was produced from atmospheric nitrogen by microbes, providing the organic nitrogen required for new plant growth. In the early 1900s, the Haber-Bosch process was invented to
perform the same job, producing vast amounts of nitrogenous plant fertilizer from atmospheric nitrogen; this industry now produces more organic nitrogen than all biological processes combined (Socolow 1999). Another obvious and dramatic change in the global environment is the enormous amount of CO2 released by the burning of fossil fuels, previously stored as relatively inert reservoirs deep in Earth. Present concentrations of atmospheric CO2 are higher than they have been in 420,000 years and, given current trajectories, will continue to rise dramatically (Petit 1999).
Understanding the dynamic role of microbial communities in this rapidly changing environment is a critical and currently unmet challenge. How resilient are microbial communities in the face of such rapid global change? Can microbial communities, versatile as they are, help to buffer and mediate key elemental cycles now undergoing rapid shifts? Can changes in microbial communities serve as sensors and early-alarm systems of environmental perturbation? To what extent can we "manage" microbial communities to modulate the effects of human activities on natural elemental cycles sensibly and deliberately? Never before have such questions had such urgency.
UNDERSTANDING MICROBIAL COMMUNITIES
Given that the microbial collective profoundly influences geochemical and greenhouse-gas cycles, as well as climate and environmental change, it is relevant to ask how well we understand microbial communities. In the past, it was difficult to study microbes in their own environments; microbiologists studied individual species one by one in the laboratory. It now appears that many microbes function in nature as multicellular, often multi-species, entities, sometimes even physically connected (as in biofilms) and often metabolically connected.
The Limits of Pure Culture
Even into the 19th century, some scientists believed that microbes were generated spontaneously from nonliving matter or from other organisms. Establishing that such tiny entities were organisms that belonged to definable, fixed species was difficult. Fixity of species was especially important in theories of disease causation: fixed species were essential if a single bacterial species was to be held responsible for a single infectious disease. Agriculturalists and botanists had long suspected that some sort of unseen organisms were associated with plant disease; in 1726, for example, the association farmers saw between barberry rust and wheat rust led the Connecticut colonial legislature to ban the bushes (Campbell et al. 1999). Over a century later, the German botanist Anton de Bary demonstrated the
correlation between the life cycle of Phytophthora infestans and the disease cycle of late blight of potato. In a series of experiments conducted in the late 1850s and early 1860s, he built on the previous work of J. Speerschneider and Marie-Anne Libert and established that P. infestans was indeed the cause of the disease (Matta 2007).
Demonstrating that microorganisms were not spontaneously generated and had distinct species was fundamental to bacteriology as well. Robert Koch published his description of the life cycle of Bacillus anthracis (the cause of anthrax) in 1876 and then published a series of papers in which he established an experimental method for confirming the specific causes of various infectious diseases. In an 1884 paper on tuberculosis, he outlined his four "postulates" for proof of microbial causation: an organism must be found in all cases of the disease but not in healthy hosts, the organism must be isolated from the host and grown in pure culture, reintroduction of the organism from such cultures must cause disease in healthy hosts, and the organism must again be isolatable from such infected hosts (Munch 2003). That rigorous approach, particularly the emphasis on pure cultures (a culture that contains organisms of only one type), set the standards for microbiology as a whole. By the middle of the 20th century, even with "environmental microbes" (the vast majority of harmless and beneficial bacteria, archaea and microbial eukaryotes), pure cultures became a gold standard for experimentation and the basis of almost all recent knowledge of medical bacteriology, biochemistry, and molecular biology.
In the pure-culture paradigm, the presence of multiple species in the same culture medium means "contamination," and species whose growth requires metabolic products of other species are impossible to detect, study, or even name. Not surprisingly, microbes that grow well as single cells suspended in a liquid medium and that can easily form discrete colonies on Petri plates became the model for much of modern biology. Indeed, many microbiologists came to view the "planktonic" state as the natural condition of microbes, with complex communities and slimy biofilms being somehow an aberration and unworthy of serious scientific attention. On the contrary, it is now becoming clear that many microbes live in communities whose members interact and communicate in complex ways. Microbial communities often interact through the medium (water or soil) in which they grow, exchanging nutrients, biochemical products, and chemical signals without direct cell-to-cell contact. Some grow on surfaces (on suspended particles, on the walls of pipes, on teeth) where they are in physical contact with others of their own kind and with other species. Biofilms, which are aggregates of microbial cells embedded in an extracellular polysaccharide matrix, exhibit a great diversity of complex structures. The composition of such communities is far from accidental. Many microbes have evolved to grow together in surface communities, and many of their collective activities, whether vital to the biosphere or detrimental to human health, reflect the physical structure and division of labor within the communities.
The study of microbes in culture will continue to be important, but it falls short of telling us about environmental processes, biofilms, microbial bucket brigades of energy and matter flux, and the future trajectory of biogeochemical cycles. Understanding microbial communities will require that the traditional techniques of pure culture be supplemented with new approaches.
The Genomics Promise
One approach that has contributed greatly to understanding all organisms is genomics: learning about the evolution and capabilities of organisms by deciphering the sequence of their DNA. Genomics has also greatly advanced microbiology, but, like pure culture, traditional genomics is limited in its ability to elucidate the dynamics of microbial communities.
The precipitous decline in the cost of gene sequencing, spurred in part by the Human Genome Project, has made it possible to generate genomic sequences for a great variety of organisms. The first microbial genome sequenced, that of the pathogen Haemophilus influenzae, was published in 1995 (Fleischmann et al. 1995). Microbial genome sequences have since appeared at an exponentially increasing rate: the genome sequences of 399 bacteria, 29 archaea, and almost 30 eukaryotic microbes are publicly available at the time of this writing. Pathogenic bacteria and eukaryotes, such as the causative agents of plague, anthrax, tuberculosis, Lyme disease, candidiasis, malaria, and sleeping sickness, have received much attention. But many nonpathogenic archaea and bacteria have also been sequenced, including such beneficial organisms as several species of Prochlorococcus and Synechococcus, major producers of oxygen in the ocean; Dehalococcoides ethenogenes, effective in the bioremediation of soils contaminated with chlorinated hydrocarbons; Lactobacillus acidophilus, used in making yogurt; Bradyrhizobium japonicum, a nitrogen-fixing symbiont of soybeans; and Saccharomyces cerevisiae (baker's yeast).
When attention turned to sequencing the genomes of microbes, the preference for working in pure culture was reinforced. No one knew how difficult it might be to sequence an entire genome, but it was obvious that assembly (using a computer to put the sequenced fragments together in complete genomes) would be vastly more complicated if the pieces belonged to several different organisms (see Box 1-3). Until recently, all microbial genome sequences were determined from pure cultures. But in the last few years, more than a dozen microbes that can be physically separated
Blueprints for the Living World: Genes, Genomes, and Genomic Sequences
Genes are made of DNA, and the exact sequence of the four canonical DNA bases (designated A, T, C, and G) in any gene specifies the product (usually a protein) that it encodes. In bacteria and archaea, genes are about 1,000 base pairs long. These microbes have 500-10,000 genes, usually arrayed on a single circular DNA molecule (a chromosome), some 600,000-12 million base pairs long (there is some space between genes for regulatory signals). Eukaryotic microbes typically have more and longer genes and multiple chromosomes. Together, all the genes in a microbe's chromosome or chromosomes and any in accessory genetic elements, such as plasmids, make up its genome.
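As a rough sanity check, the gene counts and typical gene length quoted above do reproduce the quoted genome sizes. The 15% intergenic overhead below is an assumed, illustrative figure, not a value from the text:

```python
GENE_LENGTH_BP = 1_000   # typical prokaryotic gene length, per the text
OVERHEAD_PCT = 15        # assumed ~15% extra for regulatory space between genes

def approx_genome_size(n_genes: int) -> int:
    """Estimate genome length in base pairs from a gene count."""
    return n_genes * GENE_LENGTH_BP * (100 + OVERHEAD_PCT) // 100

print(approx_genome_size(500))     # 575000 bp, near the 600,000 low end
print(approx_genome_size(10_000))  # 11500000 bp, near the 12 million high end
```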
For complete genome sequencing, the whole genome shotgun approach has proved effective. All the DNA from a pure culture is fragmented randomly into pieces of one to a few thousand base pairs. Fragments totaling some 6-10 times the genome's length are sequenced so that overlaps between them can be used to establish the order of the fragments in the intact genome and verify the accuracy of the sequencing. This step, called assembly, is computationally intensive. So is the next step, annotation, which is the prediction of gene boundaries, regulatory regions, and the properties and function of the proteins (or sometimes RNAs) that the genes encode. Annotation usually involves finding a similar gene sequence for which a function has already been determined in another organism, although at present typically one-third of the genes in any newly sequenced microbe will not have any obvious similarity to genes with known or proposed functions. Finally, the data are released to a public data repository, such as GenBank, maintained by the National Center for Biotechnology Information (National Library of Medicine) in Bethesda, Maryland.
from other major sources of DNA or that greatly predominate where they are found in nature have also been sequenced. Treponema pallidum and Mycobacterium leprae (which cause syphilis and leprosy, respectively) are among the former, and two species predominant among acid-mine drainage site biofilms (Ferroplasma acidarmanus and a species of Leptospirillum) are examples of the latter. Sequencing such physically purified or environmentally concentrated (and thus naturally "pure") microbes crosses the boundary between genomics and metagenomics as far as methods are concerned.
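The overlap-and-assemble step described in the box above can be illustrated with a toy greedy assembler. This is a deliberately naive sketch (real assemblers cope with sequencing errors, repeats, and millions of reads), and the reads and target sequence are invented:

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that is a prefix of b (>= min_len)."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(fragments: list[str]) -> str:
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:
            break  # no overlaps left; real assemblers emit separate contigs
        merged = frags[i] + frags[j][k:]
        frags = [f for idx, f in enumerate(frags) if idx not in (i, j)] + [merged]
    return frags[0]

# Overlapping reads from the (hypothetical) sequence ATGCGTACGTTAG:
reads = ["ATGCGTAC", "GTACGTT", "CGTTAG"]
print(greedy_assemble(reads))  # ATGCGTACGTTAG
```

The 6-10x oversampling mentioned in the box is what makes such overlaps abundant enough for the order of fragments to be recovered unambiguously.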
Soon, there will be thousands of sequenced microbial genomes. If all microbial species were culturable and if such species were easily defined and limited in number (even a number in the tens of thousands), the ultimate goal of microbial genomics might be to determine all these genome sequences once the per-genome cost fell far enough. Then the meta in
metagenomics might parallel its use in meta-analysis and mean bringing together individual databases in search of a common set of truths about nature. But not all species are culturable, few are easily defined at the genomic level, and indeed the number of different genomes in nature turns out to be uncountably large. We discuss these problems in turn.
WHY GENOMICS IS NOT ENOUGH
Most Microbes Cannot Be Cultured
In 1985, Staley and Konopka reviewed data on scientists' ability to bring microbes from the environment into laboratory cultivation. The "great plate-count anomaly" they identified was this: the vast majority of microbial cells that can be seen in a microscope and shown to be living with various staining procedures cannot be induced to produce colonies on Petri plates or cultures in test tubes. It is estimated that only 0.1-1.0% of the living bacteria present in soils can be cultured under standard conditions; the culturable fraction of bacteria from aquatic environments is ten to a thousand times lower still. The application of genomics-inspired moderate- to high-throughput nutrient screening methods and nontraditional approaches to monitoring growth responses will no doubt bring many recalcitrant organisms into culture. Indeed, two recent successes are the cultivation (and genome sequencing) of Pelagibacter ubique, a bacterium representative of one of the most common microbial phylogenetic groups found in the open ocean, and the isolation of several acidobacteria, the most abundant organisms in soil (Sait et al. 2002; Field et al. 1997; Martinez and Rodriguez-Valera 2000; Brown and Fuhrman 2005; Rappe et al. 2002). Both successes depended on the nontraditional molecular (rRNA-based) method discussed below for monitoring growth. But the fraction of organisms cultivatable in isolation will likely always be low, and for most the reason will be that, for growth, it takes a community. Culturing always favors the recovery of organisms that are best able to thrive under laboratory conditions (colloquially, "lab weeds"), not necessarily the dominant or most influential organisms in the environment.
Given the evidence that many microbes resist being cultured, culture-independent methods for identifying and enumerating microbes in the environment have come to play a larger and larger role over the last several decades. Predominant among them is ribosomal RNA (rRNA) phylotyping, a powerful technique (indeed, an independent research paradigm) developed by Pace and his colleagues (Pace 1997). This method is based on the enormous database of rRNA gene sequences (more than 200,000) that have been collected for the purpose of reconstructing the universal Tree of Life (see Box 1-4). By determining the sequence of an organism's rRNA genes,
Ribosomal RNA and the Tree of Life
Ribosomal RNAs (rRNAs) are essential structural and functional components of ribosomes, the cellular factories on which proteins are made according to the information encoded in DNA. Information from DNA is transmitted to the ribosomes through an intermediate, "messenger RNA." All organisms have rRNAs similar enough to each other that they can be recognized as the "same molecule" but different enough that the differences are a good measure of evolutionary distance. The same is true of the genes encoding the rRNAs, on which the phylotyping method is based. Thus, two closely related organisms (for example, the benign Escherichia coli laboratory strain K12 and its sometimes lethal diarrhea-producing relative, strain O157:H7) will have almost identical rRNA gene sequences, whereas two remotely related species (such as E. coli K12 and an archaean, such as Picrophilus torridus) will have very different sequences. With enough sequences and suitably sophisticated computational tools, relationships between organisms measured by the differences in the sequences in their rRNA genes can be converted to a tree-like picture of their evolutionary histories. The widely accepted rRNA-based three-domain Tree of Life, which we owe to the pioneering work of Carl Woese and the heroic efforts of his many colleagues and students, is shown below (adapted from Pace; source: Hazen 2005).
one can position it on the appropriate branch of the Tree of Life and infer that its biology and ecology are likely to be similar to those of its closest relatives, the nearest branches on the tree. An organism does not have to be culturable to determine its phylotype. The polymerase chain reaction (PCR) allows rRNA (or other) genes to be detected and copied directly from environmental samples, then cloned and sequenced. If the environmental sample contains many types of organisms, there will be many different rRNA sequences, the diversity of which will be a measure of the complexity of the community and which, in the context of the Tree, will tell us "who is there." Phylotyping has revolutionized the field of microbial ecology, and hundreds of environments, from dry Antarctic valleys to deep-sea hydrothermal vents ("black smokers") to sewage-treatment plants and methane-producing reactors, have been studied in this way. Very often, new lineages whose rRNA gene sequences are little like anything that has been cultured are discovered. Indeed, the majority of the 50-plus major divisions of Bacteria that have been delineated through their rRNA genes do not yet have any cultured representatives. Community rRNA sequencing and phylogenetic analysis, in itself, is not considered metagenomics (because it focuses on only one gene, not entire genomes), but it can be a useful preliminary step in a metagenomics project because it provides a phylogenetic assessment of the diversity of a community.
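Placing an environmental sequence next to its closest known relative is, at its core, a similarity search against a reference collection. A minimal sketch, using invented, pre-aligned toy fragments (real 16S rRNA genes run about 1,500 bases and require proper alignment before comparison):

```python
def identity(seq1: str, seq2: str) -> float:
    """Fraction of matching positions between two aligned, equal-length sequences."""
    assert len(seq1) == len(seq2)
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return matches / len(seq1)

def closest_relative(query: str, reference_db: dict[str, str]) -> tuple[str, float]:
    """Return the reference organism whose rRNA fragment best matches the query."""
    return max(
        ((name, identity(query, seq)) for name, seq in reference_db.items()),
        key=lambda pair: pair[1],
    )

# Hypothetical short aligned rRNA fragments:
reference_db = {
    "E. coli K12": "ACGTACGTGGCCTA",
    "P. torridus": "TTGTACAAGGACAA",
}
env_read = "ACGTACGTGGCCAA"  # cloned from an environmental sample
name, score = closest_relative(env_read, reference_db)
print(name, round(score, 2))  # E. coli K12 0.93
```

A high score places the unknown organism on a nearby branch of the tree, which is exactly the inference phylotyping licenses: similar rRNA, likely similar biology.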
Microbial Diversity and Variation Have No Limits
When genetic information from macroscopic organisms (animals or plants) is organized into phylogenetic trees to examine how they are related to one another, one can assume that all the individuals of a given species have virtually identical genomes. For example, the genomes of humans differ from one another by only 0.1%. In contrast, microbial phylotyping coupled with genome sequencing has shown that even if culturability ceased to be a problem, diversity will always be a challenge; indeed, it is a greater challenge than might have been imagined. Hundreds of thousands or even millions would be too low an estimate of the number of genomes that would have to be sequenced in any kind of whole-genome-based metagenomics program. This is due in part to the large numbers of species of microbes in most environments. But it also reflects genomic diversity within what scientists had been calling species. Almost all phylotyping surveys of almost all environments yield not a single phylotype for each likely microbial species contributor to community dynamics, but dozens or hundreds of very close but unquestionably nonidentical phylotypes that form microdiverse clusters (see Figure 1-2). In addition to differing slightly in the sequences of the marker genes used in phylotyping, these organisms, supposedly members
How DNA Evidence Works
The main objective of DNA analysis is to get a visual representation of DNA left at the scene of a crime. A DNA "picture" features columns of dark-colored parallel bands and is equivalent to a fingerprint lifted from a smooth surface. To identify the owner of a DNA sample, the DNA "fingerprint," or profile, must be matched, either to DNA from a suspect or to a DNA profile stored in a database.
Let's consider the former situation -- when a suspect is present. In this case, investigators take a DNA sample from the suspect, send it to a lab and receive a DNA profile. Then they compare that profile to a profile of DNA taken from the crime scene. There are three possible results:
- Inclusions -- If the suspect's DNA profile matches the profile of DNA taken from the crime scene, then the results are considered an inclusion or nonexclusion. In other words, the suspect is included (cannot be excluded) as a possible source of the DNA found in the sample.
- Exclusions -- If the suspect's DNA profile doesn't match the profile of DNA taken from the crime scene, then the results are considered an exclusion or noninclusion. Exclusions almost always eliminate the suspect as a source of the DNA found in the sample.
- Inconclusive results -- Results may be inconclusive for several reasons. For example, contaminated samples often yield inconclusive results. So do very small or degraded samples, which may not have enough DNA to produce a full profile.
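The three outcomes above can be captured in a small comparison routine. This is a schematic sketch, not actual forensic software; the locus names and the eight-locus threshold are illustrative assumptions:

```python
def compare_profiles(suspect: dict, scene: dict, min_loci: int = 8) -> str:
    """Classify a suspect-vs-scene DNA profile comparison.

    Profiles map an STR locus name to a pair of allele values. Loci absent
    from the (possibly degraded) scene sample are simply unreadable.
    """
    shared = [locus for locus in scene if locus in suspect]
    if len(shared) < min_loci:
        return "inconclusive"  # too little usable DNA for a full profile
    if all(sorted(suspect[locus]) == sorted(scene[locus]) for locus in shared):
        return "inclusion"     # suspect cannot be excluded as the source
    return "exclusion"         # any mismatching locus excludes the suspect

# Hypothetical 8-locus profiles:
suspect = {f"locus{i}": (10 + i, 12 + i) for i in range(8)}
print(compare_profiles(suspect, dict(suspect)))         # inclusion
print(compare_profiles(suspect, {"locus0": (10, 12)}))  # inconclusive
```

Note that a single mismatching locus produces an exclusion, while an inclusion only means the suspect cannot be ruled out, mirroring the careful wording above.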
Sometimes, investigators have DNA evidence but no suspects. In that case, law enforcement officials can compare crime scene DNA to profiles stored in a database. Databases can be maintained at the local level (the crime lab of a sheriff's office, for example) or at the state level. A state-level database is known as a State DNA index system (SDIS). It contains forensic profiles from local laboratories in that state, plus forensic profiles analyzed by the state laboratory itself. The state database also contains DNA profiles of convicted offenders. Finally, DNA profiles from the states feed into the National DNA Index System (NDIS).
To find matches quickly and easily in the various databases, the FBI developed a technology platform known as the Combined DNA Index System, or CODIS. The CODIS software permits laboratories throughout the country to share and compare DNA data. It also automatically searches for matches. The system conducts a weekly search of the NDIS database, and, if it finds a match, notifies the laboratory that originally submitted the DNA profile. These random matches of DNA from a crime scene and the national database are known as "cold hits," and they are becoming increasingly important. Some states have logged thousands of cold hits in the last 20 years, making it possible to link otherwise unknown suspects to crimes.
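A cold-hit search against a profile database can be sketched as a scan over stored profiles. This is purely illustrative; CODIS has its own data formats and matching rules, and the profiles below are invented:

```python
def cold_hit_search(scene_profile: dict, database: dict) -> list[str]:
    """Return IDs of stored profiles matching the crime-scene profile at every locus."""
    hits = []
    for profile_id, stored in database.items():
        if all(
            sorted(stored.get(locus, ())) == sorted(alleles)
            for locus, alleles in scene_profile.items()
        ):
            hits.append(profile_id)
    return hits

# Hypothetical index: two offender profiles, one matching the scene sample.
database = {
    "offender-001": {"locusA": (12, 14), "locusB": (9, 9)},
    "offender-002": {"locusA": (15, 16), "locusB": (8, 11)},
}
scene = {"locusA": (14, 12), "locusB": (9, 9)}  # allele order is irrelevant
print(cold_hit_search(scene, database))  # ['offender-001']
```

Scaled up to millions of stored profiles and run on a schedule, this is the essence of the weekly NDIS search that produces cold hits.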
How Crime Scene Investigation Works
On TV shows like "CSI," viewers get to watch as investigators find and collect evidence at the scene of a crime, making blood appear as if by magic and swabbing every mouth in the vicinity. Many of us believe we have a pretty good grip on the process, and rumor has it criminals are getting a jump on the good guys using tips they pick up from these shows about forensics.
But does Hollywood get it right? Do crime scene investigators follow their DNA samples into the lab? Do they interview suspects and catch the bad guys, or is their job all about collecting physical evidence? In this article, we'll examine what really goes on when a CSI "processes a crime scene" and get a real-world view of crime scene investigation from a primary scene responder with the Colorado Bureau of Investigation.
Crime scene investigation is the meeting point of science, logic and law. "Processing a crime scene" is a long, tedious process that involves purposeful documentation of the conditions at the scene and the collection of any physical evidence that could possibly illuminate what happened and point to who did it. There is no typical crime scene, there is no typical body of evidence and there is no typical investigative approach.
At any given crime scene, a CSI might collect dried blood from a windowpane — without letting his arm brush the glass in case there are any latent fingerprints there, lift hair off a victim's jacket using tweezers so he doesn't disturb the fabric enough to shake off any of the white powder (which may or may not be cocaine) in the folds of the sleeve, and use a sledge hammer to break through a wall that seems to be the point of origin for a terrible smell.
All the while, the physical evidence itself is only part of the equation. The ultimate goal is the conviction of the perpetrator of the crime. So while the CSI scrapes off the dried blood without smearing any prints, lifts several hairs without disturbing any trace evidence and smashes through a wall in the living room, he's considering all of the necessary steps to preserve the evidence in its current form, what the lab can do with this evidence in order to reconstruct the crime or identify the criminal, and the legal issues involved in making sure this evidence is admissible in court.
The investigation of a crime scene begins when the CSI unit receives a call from the police officers or detectives on the scene. The overall system works something like this:
- The CSI arrives on the scene and makes sure it is secure. She does an initial walk-through to get an overall feel for the crime scene, finds out if anyone moved anything before she arrived, and generates initial theories based on visual examination. She makes note of potential evidence. At this point, she touches nothing.
- The CSI thoroughly documents the scene by taking photographs and drawing sketches during a second walk-through. Sometimes, the documentation stage includes a video walk-through, as well. She documents the scene as a whole and documents anything she has identified as evidence. She still touches nothing.
- Now it's time to touch stuff — very, very carefully. The CSI systematically makes her way through the scene collecting all potential evidence, tagging it, logging it and packaging it so it remains intact on its way to the lab. Depending on the task breakdown of the CSI unit she works for and her areas of expertise, she may or may not analyze the evidence in the lab.
- The crime lab processes all of the evidence the CSI collected at the crime scene. When the lab results are in, they go to the lead detective on the case.
Every CSI unit handles the division between field work and lab work differently. What goes on at the crime scene is called crime scene investigation (or crime scene analysis), and what goes on in the laboratory is called forensic science. Not all CSIs are forensic scientists. Some CSIs only work in the field — they collect the evidence and then pass it to the forensics lab. In this case, the CSI must still possess a good understanding of forensic science in order to recognize the specific value of various types of evidence in the field. But in many cases, these jobs overlap.
Joe Clayton is a primary crime scene responder at the Colorado Bureau of Investigation (CBI). He has 14 years of field experience and also is an expert in certain areas of forensic science. As Clayton explains, his role in laboratory analysis varies according to the type of evidence he brings back from the crime scene.
Crime scene investigation is a massive undertaking. Let's start at the beginning: scene recognition.
At the Crime Scene: Scene Recognition
When a CSI arrives at a crime scene, he doesn't just jump in and start recovering evidence. The goal of the scene recognition stage is to gain an understanding of what this particular investigation will entail and develop a systematic approach to finding and collecting evidence. At this point, the CSI is only using his eyes, ears, nose, some paper and a pen.
The first step is to define the extent of the crime scene. If the crime is a homicide, and there is a single victim who was killed in his home, the crime scene might be the house and the immediate vicinity outside. Does it also include any cars in the driveway? Is there a blood trail down the street? If so, the crime scene might be the entire neighborhood. Securing the crime scene -- and any other areas that might later turn out to be part of the crime scene -- is crucial. A CSI really only gets one chance to perform a thorough, untainted search -- furniture will be moved, rain will wash away evidence, detectives will touch things in subsequent searches, and evidence will be corrupted.
Usually, the first police officers on the scene secure the core area -- the most obvious parts of the crime scene where most of the evidence is concentrated. When the CSI arrives, he will block off an area larger than the core crime scene because it's easier to decrease the size of a crime scene than to increase it -- press vans and onlookers may be crunching through the area the CSI later determines is part of the crime scene. Securing the scene involves creating a physical barrier using crime scene tape or other obstacles like police officers, police cars or sawhorses, and removing all unnecessary personnel from the scene. A CSI might establish a "safe area" just beyond the crime scene where investigators can rest and discuss issues without worrying about destroying evidence.
Once the CSI defines the crime scene and makes sure it is secure, the next step is to get the district attorney involved, because if anyone could possibly have an expectation of privacy in any portion of the crime scene, the CSI needs search warrants. The evidence a CSI recovers is of little value if it's not admissible in court. A good CSI errs on the side of caution and seldom searches a scene without a warrant.
With a search warrant on the books, the CSI begins a walk-through of the crime scene. He follows a pre-determined path that is likely to contain the least amount of evidence that would be destroyed by walking through it. During this initial walk-through, he takes immediate note of details that will change with time: What's the weather like? What time of day is it? He describes any notable smells (gas? decomposition?), sounds (water dripping? smoke alarm beeping?), and anything that seems to be out of place or missing. Is there a chair pushed up against a door? Is the bed missing pillows? This is also the time to identify any potential hazards, like a gas leak or an agitated dog guarding the body, and address those immediately.
The CSI calls in any specialists or additional tools he thinks he'll need based on particular types of evidence he sees during the recognition stage. A t-shirt stuck in a tree in the victim's front yard may require the delivery of a scissor lift to the scene. Evidence such as blood spatter on the ceiling or maggot activity on the corpse requires specialists to analyze it at the scene. It's hard to deliver a section of the ceiling to the lab for blood spatter analysis, and maggot activity changes with each passing minute. Mr. Clayton happens to be an expert in blood spatter analysis, so he would perform this task in addition to his role as crime scene investigator.
During this time, the CSI talks to the first responders to see if they touched anything and gather any additional information that might be helpful in determining a plan of attack. If detectives on the scene have begun witness interviews, they may offer details that point the CSI to a particular room of the house or type of evidence. Was the victim yelling at someone on the phone a half-hour before the police arrived? If so, the Caller ID unit is a good piece of evidence. If an upstairs neighbor heard a struggle and then the sound of water running, this could indicate a clean-up attempt, and the CSI knows to look for signs of blood in the bathroom or kitchen. Most CSIs, including Mr. Clayton, do not talk to witnesses. Mr. Clayton is a crime scene investigator and a forensic scientist -- he has no training in proper interview techniques. Mr. Clayton deals with the physical evidence alone and turns to the detectives on the scene for any useful witness accounts.
The CSI uses the information he gathers during scene recognition to develop a logical approach to this particular crime scene. There is no cookie-cutter approach to crime scene investigation. As Mr. Clayton explains, the approach to a crime scene involving 13 deaths in a high school (Mr. Clayton was one of the CSIs who processed Columbine High School after the shootings there) and the approach to a crime scene involving a person who was raped in a car are vastly different. Once the CSI has formed a plan of attack to gather all of the evidence that could be relevant to this particular crime, the next step is to fully document every aspect of the scene in a way that makes it possible for people who weren't there to reconstruct it. This is the scene-documentation stage.
Police officers are typically the first to arrive at a crime scene. They arrest the perpetrator if he's still there and call for an ambulance if necessary. They are responsible for securing the scene so no evidence is destroyed. The CSI unit documents the crime scene in detail and collects any physical evidence. The district attorney is often present to help determine if the investigators require any search warrants to proceed and obtain those warrants from a judge. The medical examiner (if a homicide) may or may not be present to determine a preliminary cause of death. Specialists (entomologists, forensic scientists, forensic psychologists) may be called in if the evidence requires expert analysis. Detectives interview witnesses and consult with the CSI unit. They investigate the crime by following leads provided by witnesses and physical evidence.
Minute quantities of DNA, including ancient DNA, from sources such as hair, bones and other tissues can be amplified using PCR. The DNA can then be identified and analysed, and genomes can be sequenced. These processes allow scientists to further their knowledge and understanding of evolution and paleontology. Genome sequencing can also aid in phylogenetic studies, leading to greater understanding of organisms’ evolutionary relationships to each other. This information can be useful to scientists in supporting conservation efforts, studying evolution and understanding unique adaptations.
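The exponential character of PCR is easy to quantify: each thermal cycle can at most double the number of template molecules. A small sketch (the sub-1.0 efficiency figure is an illustrative assumption for imperfect cycles):

```python
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> int:
    """Expected DNA copy number after a given number of PCR cycles.

    Each cycle multiplies the template by (1 + efficiency); efficiency = 1.0
    is perfect doubling, lower values model imperfect amplification.
    """
    return int(initial_copies * (1 + efficiency) ** cycles)

# Even a single ancient-DNA molecule yields over a billion copies in 30 cycles:
print(pcr_copies(1, 30))       # 1073741824 (2**30)
print(pcr_copies(1, 30, 0.9))  # substantially fewer at 90% per-cycle efficiency
```

This is why minute or degraded samples, from a hair shaft to ancient bone, can still yield enough material for sequencing and analysis.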
Human Error and Admissibility
DNA evidence is not unassailable, however. Errors in the collection and/or handling of the biological samples used for the DNA analysis can result in it being excluded at trial. Similarly, if a lab contaminates the biological sample or is found to use unreliable methods, a judge may reject it at trial.
When challenging this type of evidence, defense attorneys will usually focus on the behavior of the investigators and forensic analysts in an attempt to cast doubt on the results of DNA profiles, rather than attack the reliability of DNA profiling as a whole. A well-known example of this is the defense strategy used in the O.J. Simpson trial. Additionally, each state has different rules regarding evidence, and any failure to comply with the particulars of each state's requirements can result in a refusal of the court to examine DNA evidence.
How Forensic Lab Techniques Work
A forensic expert of the International Commission for Missing Persons works with DNA evidence.
When there is a murder, suspicious fire or hit-and-run accident, police and rescue workers aren't the only ones in on the investigation. Forensic scientists also play an important part. They will take samples collected at the scene and analyze them in a forensics laboratory. With a little ingenuity and some very high-tech equipment, forensic scientists can help law enforcement catch even the wiliest perpetrator.
Forensic science is a discipline that applies scientific analysis to the justice system, often to help prove the events of a crime. Forensic scientists analyze and interpret evidence found at the crime scene. That evidence can include blood, saliva, fibers, tire tracks, drugs, alcohol, paint chips and firearm residue.
Using scientific equipment, forensic scientists identify the components of the samples and match them up. For example, they may determine that a paint chip found on a hit-and-run accident victim came off a '96 Ford Mustang convertible, a fiber found at a murder scene belonged to an Armani jacket or a bullet was fired from a Glock G24 pistol.
How do forensic scientists turn even the tiniest clues into real evidence that can help track down criminals? What are the latest technologies being used today in forensics labs? Find out next.
A scientist at Preston Forensic Science Laboratory removes a hair from a hat left at the scene of a shooting in the 1940s.
The history of forensic science dates back thousands of years. Fingerprinting was one of its first applications. The ancient Chinese used fingerprints to identify business documents. In 1892, Sir Francis Galton (a eugenicist, an adherent of a now-discredited and often prejudiced pseudo-scientific movement) established the first system for classifying fingerprints. Sir Edward Henry, commissioner of the Metropolitan Police of London, developed his own system in 1896 based on the direction, flow, pattern and other characteristics of fingerprints. The Henry Classification System became the standard for criminal fingerprinting techniques worldwide.
In 1835, Scotland Yard's Henry Goddard became the first person to use physical analysis to connect a bullet to the murder weapon. Bullet examination became more precise in the 1920s, when American physician Calvin Goddard created the comparison microscope to help determine which bullets came from which shell casings. And in the 1970s, a team of scientists at the Aerospace Corporation in California developed a method for detecting gunshot residue using scanning electron microscopes.
- Labs should have procedures in place for the use and disposal of chemicals, as well as a safety plan in case of emergency (including a safety shower and eyewash station).
- Employees need to be well-trained in the use of all chemicals, understanding the properties of each chemical and its potential to cause injury.
- Lab technicians should wear the proper gear -- eyewear to protect against chemical splashes and gloves to protect their hands.
- Chemical containers should be properly labeled with the correct chemical name.
- Flammable liquids should always be kept in special storage containers or a storage room. Putting these types of chemicals in a regular refrigerator can lead to an explosion.
In 1836, a Scottish chemist named James Marsh developed a chemical test to detect arsenic, which was used during a murder trial. Nearly a century later, in 1930, scientist Karl Landsteiner won the Nobel Prize for classifying human blood into its various groups. His work paved the way for the future use of blood in criminal investigations. Other tests were developed in the mid-1900s to analyze saliva, semen and other body fluids as well as to make blood tests more precise.
With all of the new forensics techniques emerging in the early 20th century, law enforcement discovered that it needed a specialized team to analyze evidence found at crime scenes. To that end, Edmond Locard, a professor at the University of Lyons, set up the first police crime laboratory in France in 1910. For his pioneering work in forensic criminology, Locard became known as "the Sherlock Holmes of France."
August Vollmer, chief of the Los Angeles Police, established the first American police crime laboratory in 1924. When the Federal Bureau of Investigation (FBI) was first founded in 1908, it didn't have its own forensic crime laboratory -- that wasn't set up until 1932.
By the close of the 20th century, forensic scientists had a wealth of high-tech tools at their disposal for analyzing evidence from polymerase chain reaction (PCR) for DNA analysis, to digital fingerprinting techniques with computer search capabilities.
Next, we'll see some of the applications of these modern forensic technologies.
Forensic labs are often called in to identify unknown powders, liquids and pills that may be illicit drugs. There are basically two categories of forensic tests used to analyze drugs and other unknown substances: Presumptive tests (such as color tests) give only an indication of which type of substance is present -- but they can't specifically identify the substance. Confirmatory tests (such as gas chromatography/mass spectrometry) are more specific and can determine the precise identity of the substance.
Forensic technicians are often called to identify unknown drugs. A beauty student allegedly tried to smuggle more than 10,000 amphetamine tablets into Australia.
Color tests expose an unknown drug to a chemical or mixture of chemicals. What color the test substance turns can help determine the type of drug that's present. Here are a few examples of color tests:
| Type of Test | Chemicals | What the Results Mean |
| --- | --- | --- |
| Marquis color | Formaldehyde and concentrated sulfuric acid | Heroin, morphine and most opium-based drugs will turn the solution purple. Amphetamines will turn it orange-brown. |
| Cobalt thiocyanate | Cobalt thiocyanate, distilled water, glycerin, hydrochloric acid, chloroform | Cocaine will turn the liquid blue. |
| Dille-Koppanyi | Cobalt acetate and isopropylamine | Barbiturates will turn the solution violet-blue. |
| Van Urk | p-Dimethylaminobenzaldehyde, hydrochloric acid, ethyl alcohol | LSD will turn the solution blue-purple. |
| Duquenois-Levine | Vanillin, acetaldehyde, ethyl alcohol, chloroform | Marijuana will turn the solution purple. |
Other drug tests include ultraviolet spectrophotometry, which analyzes the way the substance reacts to ultraviolet (UV) and infrared (IR) light. A spectrophotometry machine emits UV and IR rays, and then measures how the sample reflects or absorbs these rays to give a general idea of what type of substance is present.
A more specific way to test drugs is the microcrystalline test, in which the scientist adds a drop of the suspected substance to a chemical on a slide. The mixture begins to form crystals, and each type of drug produces a distinctive crystal pattern when viewed under a polarized light microscope.
Gas chromatography/mass spectrometry isolates the drug from any mixing agents or other substances that might be combined with it. A small amount of the substance is injected into the gas chromatograph. Different molecules move through the chromatograph's column at different speeds based on their volatility and how strongly they interact with the column's coating -- heavier, less volatile compounds generally move more slowly, while lighter, more volatile compounds move more quickly. Then the sample is funneled into a mass spectrometer, where an electron beam hits it and causes it to break apart. How the substance breaks apart can help the technicians tell what type of substance it is.
What methods do technicians use to help track down hit-and-run vehicles or arsonists? Find out next.
Forensic Paint Analysis and Arson Investigation
Forensic scientists are sometimes called to help analyze evidence left from a hit-and-run or possible case of arson. They have special techniques to study what's often small or extremely damaged evidence.
Sometimes forensic scientists need to analyze a paint sample -- for example, if a paint chip is found on the body of a hit-and-run victim and investigators are trying to match it to a make and model of car.
First, the scientists look at the appearance of the sample -- its color, thickness and texture. They examine the sample under a polarized light microscope to view its different layers. Then they can use one of several tests to analyze the sample:
- Fourier transform infrared (FTIR) spectrometry determines the type of paint (chemicals, pigments, etc.) by analyzing the way in which its various components absorb infrared light.
- Solvent tests expose the paint sample to various chemicals to look for reactions such as swelling, softening, curling and color changes.
- Pyrolysis gas chromatography/mass spectrometry helps distinguish paints that have the same color, but a different chemical composition. The paint sample is heated until it breaks into fragments, and then is separated into its various components.
To light a fire, arsonists need a flammable material and an accelerant (such as kerosene or gasoline). Arson investigators look for these items when they're investigating the crime scene. Because all that's usually left of the evidence is charred remains, the investigators will collect fire debris and take it back to the forensics lab for analysis.
Gary Tramontina/Getty Images
Investigators look through the remains of the Morning Star Missionary Baptist Church on Feb. 8, 2006, near Boligee, Ala. Forensic technicians will examine the fire debris.
Samples are sealed in airtight containers and then tested for residues of accelerant liquid that might have been used to start the fire. These are the most common tests performed by forensics labs during an arson investigation:
- Static headspace heats the sample, causing the residue to separate out and vaporize into the top, or "headspace," of the container. That vapor is then injected into a gas chromatograph, where it's separated into its components so its chemical makeup can be identified.
- Passive headspace heats the sample so that the residue collects on a carbon strip in the container. The collected residue is then injected into a gas chromatograph/mass spectrometer for analysis.
- Dynamic headspace bubbles nitrogen gas through the sample and captures the residue in an absorbent trap. The trapped compounds are then analyzed using gas chromatography.
How do technicians analyze biological evidence like blood, semen or the oils left behind by fingerprints? In the next section, we'll find out.
Murder scenes can produce a wealth of evidence, from shell casings to human blood and hair. Investigators gather all of this evidence, and forensic technicians analyze it in various ways, based on the type of evidence:
Gunshot residue: When a gun is fired, residue exits the gun behind the bullet. Traces of this residue can land on the hands of the person firing the weapon or on the victim. Police use tape or a swab to lift residue off the hands of a suspected shooter. Then the forensics technician uses a scanning electron microscope to examine the sample. Because elements in gunpowder have a unique X-ray signature, examination under the electron microscope can help determine whether the substance is actually gunshot residue. Technicians will also use dithiooxamide (DTO), sodium rhodizonate or the Griess test to detect the presence of chemicals produced when a gun is fired.
Fibers: Infrared spectrometry/spectroscopy identifies substances by passing infrared radiation through them and then detecting how much of the radiation they absorb. It can identify the structure and chemical components of various substances like soil, paint or fibers. With this technique, forensic technicians can match fibers found on a victim's body to those in a piece of clothing or furniture.
Fingerprints: Fingerprinting relies on the unique pattern of loops, arches and whorls that covers each person's fingertips. There are two types of fingerprints. Visible prints are left when fingers coated with a substance such as ink, blood or dirt touch a surface, or when fingers press into a material that takes an impression. Latent prints are made when sweat, oil and other substances on the skin reproduce the fingerprint on a glass, murder weapon or any other surface the perpetrator has touched. These prints can't be seen with the naked eye, but they can be made visible using dark powder, lasers or other light sources.
One method forensics labs use to make latent prints visible uses cyanoacrylate -- the same ingredient in superglue. When it's heated inside a fuming chamber, cyanoacrylate releases a vapor that interacts with the amino acids in a latent fingerprint, creating a white print. Technicians may also use a wandlike tool that heats up a mixture of cyanoacrylate and fluorescent pigment. The tool then releases gases onto the latent prints to fix and stain them on the paper. Other chemicals that react with oils in fingerprints to reveal latent prints include silver nitrate (a chemical used in black-and-white film), iodine, ninhydrin and zinc chloride.
Body fluids: A number of tests are used to analyze blood, semen, saliva and other bodily fluids:
- Semen: To test whether a sample contains semen, technicians test for acid phosphatase, an enzyme found in high concentrations in semen. If the test turns purple within a minute, it's positive for semen. To confirm the results, technicians look at stained slides of the sample under a microscope. The stain colors the heads of the sperm red and the tails green (which is why the test is referred to as the "Christmas tree stain").
- Blood: The Kastle-Meyer test uses a substance called phenolphthalein, which is normally colorless but turns pink in the presence of blood. Another test for blood is luminol, which is sprayed across a crime scene to reveal even the tiniest droplets of blood.
- Saliva: The Phadebas amylase test is used to detect alpha-amylase, an enzyme in human saliva. If amylase is present, a blue dye will be released.
DNA analysis: DNA is the unique genetic fingerprint that distinguishes one person from another. No two people share the same DNA (with the exception of identical twins). Today, forensic scientists can identify a person from just a few tiny blood or tissue cells using a technique called polymerase chain reaction (PCR). This technique can make millions of copies of DNA from a tiny sample of genetic material.
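The reason a tiny sample suffices is PCR's exponential doubling: each thermal cycle can roughly double the number of copies of the target fragment. Below is a minimal sketch of the idealized arithmetic (assuming perfect per-cycle efficiency, which real reactions only approximate before plateauing):

```python
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> int:
    """Idealized PCR yield: each cycle multiplies the copy count by (1 + efficiency).

    efficiency=1.0 models perfect doubling; real reactions run below that
    and eventually plateau as primers and nucleotides are used up.
    """
    return int(initial_copies * (1 + efficiency) ** cycles)

# A single starting molecule after 30 ideal cycles:
print(pcr_copies(1, 30))  # 2**30 = 1,073,741,824 -- over a billion copies
```

Thirty cycles take only a few hours on a thermal cycler, which is why a few skin cells can yield enough DNA for profiling.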
To find out more about forensic labs and related topics, visit our links page.
Why do we extract DNA?
Extracting DNA from cells is one of the first steps of one of the most commonly used procedures in molecular biology: the polymerase chain reaction (PCR). Separating the DNA from the rest of the cell's contents makes for a cleaner result, but nowadays it's not strictly required.
Extraction of DNA is important for many reasons. With the ability to remove DNA from an organism, scientists can observe, manipulate, and classify the DNA.
Scientists can identify genetic disorders or diseases from studying DNA.
Scientists may find cures or treatments for these disorders by manipulating or experimenting with this DNA.
Scientists can accurately sort organisms into classes because of DNA uniqueness. If we didn't have DNA extraction, it would be a lot harder to decide which organisms are different from each other.
Scientists can genetically engineer some organisms to produce beneficial substances. A common example is insulin: scientists have engineered organisms to produce insulin so that people with diabetes can live longer.
PEOPLE v. LUNA
The PEOPLE of the State of Illinois, Plaintiff–Appellee, v. Juan LUNA, Defendant–Appellant.
Decided: April 25, 2013
¶ 1 Juan Luna and James Degorski were charged with first degree murder for the 1993 shooting deaths of seven people at a Brown's Chicken restaurant in Palatine, Illinois. Following a severed jury trial in 2007, defendant Luna was found guilty of first degree murder and sentenced to natural life imprisonment. Defendant presses several arguments on appeal: (1) the trial court should have excluded expert testimony that a latent print found on a napkin matched defendant's palm print, or the court should have granted defendant's request for a Frye hearing, because the “controversy surrounding latent print identification” shows that the relevant scientific community does not generally accept the method used to match latent prints to known prints; (2) defense counsel was ineffective for failing to move for a Frye hearing, because testing showed that the amount of DNA recovered from the crime scene was less than 0.5 nanograms, and obtaining a profile from such low amounts of DNA is not generally accepted within the relevant scientific community; (3) this court should discard Frye and adopt Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), for assessing the admissibility of scientific evidence; (4) defendant was denied a fair trial because the prosecutor made improper comments during rebuttal closing argument; and (5) the trial court abused its discretion in denying defendant's motion to admit two out-of-court statements from Casey Sander and Todd Wakefield. For the reasons that follow, we affirm.
¶ 3 In the early morning of January 9, 1993, Palatine police officers received separate reports that workers from a Palatine Brown's Chicken had not returned home after their shifts. Upon entering the restaurant, police found the bodies of seven persons, all of whom had been shot in the head. Richard Ehlenfeldt and Thomas Mennes were found in the cooler on the west side of the restaurant. The other five, Lynn Ehlenfeldt, Guadalupe Maldonado, Rico Solis, Michael Castro, and Marcus Nellsen, were found in the freezer on the east side of the restaurant.
¶ 4 At the trial, spanning several weeks, dozens of witnesses testified for the State and defendant. We provide a brief overview of the relevant testimony, which we explore in more detail, where necessary, as part of our analysis.
¶ 5 I. Defendant's Statements
¶ 7 Eileen Bakalla testified that on January 8, 1993, James Degorski called and asked her to meet him and defendant in the parking lot of a Jewel grocery store in Carpentersville, Illinois. Bakalla was Degorski's close friend, and she knew defendant; they would visit at Degorski's house and smoke marijuana. When she met them in the parking lot, Bakalla saw Degorski and defendant in defendant's car, along with latex gloves and a canvas money bag. The men got into Bakalla's car, and as she drove them to her home, they told her they had robbed the Brown's Chicken. At Bakalla's home, defendant and Degorski split the money in the bag, which Bakalla estimated at over $1,000. They gave $50 to Bakalla, which she said was repayment for a loan she had given to Degorski. The three smoked marijuana, relaxed for a few hours, and Bakalla drove defendant back to his car. Bakalla and Degorski then drove past the Brown's Chicken restaurant, where she saw numerous ambulances and police cars. The next day, Bakalla met Degorski at a car wash, where Degorski “extensively cleaned” defendant's car. A few weeks later, Bakalla and defendant were at Degorski's house. Defendant smiled as he talked about slitting a lady's throat.
¶ 8 Bakalla testified that on November 25, 1995, a police officer investigating the murders asked her to go to the Palatine police station. She went with Degorski and defendant, who had been an employee at the Brown's Chicken. Bakalla told a police officer that she was with Degorski and defendant at her house on the night of the murders, and they found out about the murders the next morning. Bakalla testified that only some of that was true. Bakalla admitted that, in exchange for her testimony, the State's Attorney's office promised Bakalla that she would not be prosecuted for “obstruction of justice” or “concealment or accessory after the fact.” The judge instructed the jury to view Bakalla's testimony “with caution.”
¶ 10 Anne Lockett testified that in 1991, when she was 17 and still in high school, she met defendant and Bakalla through Degorski. Defendant, Degorski, and Lockett used to drink and smoke marijuana together. Lockett, who used other drugs like PCP and LSD, started dating Degorski in 1992 when she was 17. On January 7, 1993, she was admitted to Forest Hospital after she attempted suicide. A few days after she was admitted, Degorski called her at the hospital. Lockett testified that after the phone call, she watched the evening news. The lead story was the Brown's Chicken murders.
¶ 11 After she was discharged from the hospital on January 25, 1993, Lockett went to Degorski's house. Defendant was there. Degorski asked Lockett if she wanted to know what happened at Brown's Chicken, and Lockett said yes. Lockett testified that “they” told her that they went to the Brown's Chicken with pockets full of bullets. Degorski had a .38–caliber revolver. They went in around closing and defendant ordered chicken, which angered Degorski because he was worried about leaving greasy fingerprints. Lockett testified that they told her they put on gloves in the bathroom. The men blocked the back exit door with a wedge so that no one could run out the back exit.
¶ 12 Lockett testified that both men admitted shooting the victims with Degorski's gun. At one point, one of the Brown's employees ran through the kitchen, jumped over the counter, and was shot. They told her one of the boys who was left in the cooler had vomited French fries before he died. Defendant demonstrated how he cut the throat of “a woman who made him mad, something about the safe.” Afterward, they cleaned up and disposed of their clothing, shoes, and the gun. Degorski threatened to kill Lockett if she said anything, and he told her that because she had been in the hospital at the time, he was going to use Bakalla as an alibi.
¶ 13 In March 2002, Lockett eventually recounted to a friend what Degorski and defendant had told her. That same month, after the friend had gone to police, Sergeant Bill King of the Palatine police department contacted Lockett. Lockett admitted that she did not come forward earlier, even when Martin Black, a friend of hers, was arrested for the murders.
¶ 14 Lockett testified that her drug and alcohol use continued until 2004, when she was sent to “detox” after arriving to work intoxicated. Lockett relapsed a year later, but, at the time of trial, she had been sober for one year. She denied significant memory or cognitive difficulties, but when she testified before the grand jury, Lockett read from a statement the prosecutor's office had typed out.
¶ 15 C. Defendant's Videotaped Statement
¶ 16 Following his arrest on May 16, 2002, defendant gave a videotaped statement that he and Degorski went to Brown's Chicken to commit a robbery. Assistant State's Attorney Darren O'Brien testified that on May 17, 2002, he spoke with defendant, who after waiving his Miranda rights, agreed to give the statement. The statement was played for the jury.
¶ 17 Defendant stated that on January 8, 1993, he and Degorski planned to rob the Brown's Chicken in Palatine; they chose the restaurant because defendant had worked there and knew there was no alarm. They decided to go there “at closing time like around 9:00 o'clock” because fewer people would be there and because there would be money at that time since “everybody was doing their counts to make deposits for the bank or just to count and leave some in there in the safe.” At around 9 p.m., defendant and Degorski drove to the restaurant, parked “on the north side of the building,” and then “walked by the west side of the building to walk inside, by the west side, double side doors to walk in there.” Defendant ordered “four or five pieces of chicken,” and the two walked “on the west side of the building” to sit down at “the first booth next to the garbage can.”
¶ 18 Defendant and Degorski then put on latex gloves and decided to go ahead with the robbery. Defendant approached Rico Solis, who was mopping the floor, and told him to go to the back of the restaurant. Degorski fired a shot and told everyone to get on the floor. One employee tried to jump over the counter, and Degorski shot him. Degorski took the man into “the west side cooler,” and at some point, took Richard Ehlenfeldt, one of the owners, into the cooler. Degorski then fired several shots.
¶ 19 Defendant kept an eye on the five other employees until Degorski ordered them into the freezer. Degorski gave defendant a knife and told him to have the “lady owner” (Lynn Ehlenfeldt) get the money out of the safe, but “with everything going all wild and crazy,” defendant “got caught up in the moment and * * * cut her on her throat.” Degorski then handed the gun to defendant, took some money out of the safe and dragged Lynn Ehlenfeldt into the freezer. When defendant looked into the freezer, the four men inside pleaded with him not to shoot. Defendant fired one shot into the freezer, and Degorski then took the gun back and fired repeatedly into the freezer. After Degorski checked to “make sure everyone was dead” by kicking them and poking them with a stick, defendant and Degorski turned off the lights and locked the doors to make it look like the place was closed. After leaving through the side employee door, defendant and Degorski drove to Carpentersville where they met Bakalla. She then drove them in her car to her house where defendant and Degorski split the money.
¶ 20 II. Palm Print Evidence
¶ 21 Dr. Jane Homeyer testified that at the time of trial she was the Director of Competencies and Standards in the Office of the Director of National Intelligence. She had also worked for the FBI, but before 1999, Dr. Homeyer was a forensic scientist at the Northern Illinois Police Crime Laboratory (NIPCL). Testifying as an expert in the “field of prints and * * * crime scenes,” Dr. Homeyer explained that she was part of the team that documented the Brown's Chicken crime scene. Her primary responsibilities were to process prints and help collect and preserve evidence.
¶ 22 On January 11, 1993, while examining the scene at the Brown's Chicken, Dr. Homeyer saw that the register receipt on one of the cash registers showed that a four-piece chicken meal with fries, coleslaw, and a small drink was purchased at 9:08 p.m. She also noticed that, although the garbage receptacle on the west side of the dining room had a relatively fresh bag, it contained a cardboard box with four pieces of chicken, scattered French fries, biscuits, coleslaw, paper products (including four used napkins), and other chicken pieces and bones.
¶ 23 Dr. Homeyer and her colleague Chris Hedges removed the plastic garbage bag and set it on the floor so that they could “compar[e] the cash register receipt with the general contents of the garbage bag to see if in general they looked and appeared to be consistent.” After concluding that there “was good alignment between the items, the food items, and the paper items that were in the bottom of the garbage bag” with what was on the register tape, Dr. Homeyer began separating out the paper items from the food items. She sealed them in paper bags and transported them to the laboratory for processing.
¶ 24 On January 14, 1993, Dr. Homeyer began processing the paper items for fingerprints at the NIPCL laboratory. Dr. Homeyer examined the four napkins from the bag under various angles of light to see if there were any patent print impressions. She then dipped each in a ninhydrin solution to help make any latent fingerprints visible. Dr. Homeyer found one suitable impression on one napkin and photographed it (ninhydrin prints fade over time). She repeated the ninhydrin treatment to enhance the print and took another photograph.
¶ 25 Later, sometime around January 22 or 23, 1993, at the Illinois State Police Crime Laboratory, Dr. Homeyer attempted to use zinc chloride to react with the lipids in the print residue to develop additional impressions on the napkins. No additional prints were developed, but the process left the earlier-developed print obscured by darkened or blackened areas.
¶ 26 Dr. Homeyer examined defendant's fingerprint and palm print cards on March 2, 1993, but the left palm print was missing from those cards. Arlington Heights Police Officer Ronald Sum testified that he interviewed defendant (along with other former Brown's employees) and took defendant's fingerprints and palm prints, along with his photograph, on February 17, 1993. Sum testified that he accidentally failed to include defendant's left palm print on the card.
¶ 27 After defendant's arrest, however, his full palm prints were submitted to John Onstwedder, a latent print examiner, who testified as an expert in latent print examination at trial. Onstwedder, a member of the International Association for Identification, had testified over 100 times as an expert in latent print examination. He concluded that the napkin print matched a small area of the palm under defendant's left “pinky.” Using a technique known as ACE–V (analysis, comparison, evaluation, and verification), Onstwedder testified in detail about the common characteristics between the latent print and defendant's known print.
¶ 28 On cross-examination, Onstwedder testified that he first ran the print through the Automated Fingerprint Identification Service (AFIS), which contains only fingerprints. When Onstwedder performed the comparison between the known print and the latent print, he knew that defendant had been arrested and that “there was other evidence pointing at him.” While Onstwedder testified that he had never made an error, he acknowledged that errors in fingerprint identification have occurred in the past, and he testified about his knowledge of specific cases of erroneous identifications. He also acknowledged that earlier in the investigation, two examiners had matched the napkin print to someone other than defendant. Onstwedder agreed that “every step of the way involves subject calls or judgment calls for you [to] get to the next step.”
¶ 30 Along with the paper napkins, the chicken bones in the garbage bag were taken back to the evidence room on January 11, 1993. At some point, Dr. Homeyer combined all the food pieces into a single plastic bag. The bag was stored at the NIPCL laboratory at room temperature until January 14, 1993, when she opened it to inventory and photograph the chicken pieces found in the box.
¶ 31 In 1994, Dr. Homeyer sent the chicken pieces to a private laboratory, Life Codes, for DNA testing. Rich Cunningham, a former employee of Life Codes, testified that he attempted to obtain a DNA sample from one of the chicken bones, but the amount of DNA potentially present, about 1 to 2 nanograms, was too small for the type of testing Life Codes was using at the time. Cunningham was looking for a sample size of around 10 nanograms, as Life Codes was not yet using “Short Tandem Repeat” (STR) testing, which works well with smaller amounts of DNA. After the testing, Cunningham disposed of the bone he tested. In Cunningham's opinion, there “would be no reason” to keep the bone, for he had exposed the bone to a chemical bath, such that he had removed “all the cellular material” from the bone. He returned the other untested chicken bones to the Palatine police.
¶ 32 Ornithologist Dr. David Willard testified that in June 1995, he received the chicken evidence at the Chicago Field Museum from Palatine police officers. He attempted to determine how many pieces of chicken were in the garbage bag. The partially thawed chicken was set out on an unsterilized table in a semi-public area of the museum. Dr. Willard and a colleague pulled back meat from some of the bones and removed meat from others. They did not wear gloves or masks. Dr. Willard testified that he looked at each bone and gave it a name (e.g., “wing bone”), but he could not determine what type of meal (e.g., four-piece or five-piece) was presented by the chicken pieces because he did not know what type of pieces were included in those different sized meals.
¶ 33 Debra Depczynski of the Illinois State Police Forensic Science Center in Chicago (ISP laboratory), testified that on September 18, 1998, she began tests to determine if there was any saliva on the partially eaten chicken bones. On two of the five bones, she was able to get a positive result. Depczynski gave a set of four swabs for each of the two bones to Cecilia Doyle for testing.
¶ 34 Cecilia Doyle, chief of the Biology–DNA section of the ISP laboratory, testified as an expert in the field of DNA analysis. As explained in greater detail in the analysis section below, Doyle used STR DNA analysis on the swabs from the two bones and was able to obtain a nine-loci DNA profile from the bones. She explained that in 1998, the laboratory was only looking at 9 allele locations for DNA profiles and did not advance to 13–loci DNA testing until 1999.
¶ 35 The testing revealed that each sample contained DNA from multiple contributors. Doyle testified that a DNA profile, viewed on a graph called an electropherogram, normally has just two peaks at any one area of the DNA (i.e., any locus), because half of the DNA is inherited from the mother and half from the father. When an allele (i.e., an area of genetic variation that analysts measure and use for comparison) is detected at a particular locus, it is represented by a peak plotted at a point on the electropherogram. Here, Doyle saw more than two peaks at two of the nine loci tested, indicative of a mixture. To bring up more alleles from the “minor profile” (the one with lower peak heights), Doyle repeated the analysis using more of the sample. That test revealed alleles at four more loci.
¶ 36 Kenneth Pfoser testified as an expert in the field of DNA. In May 2002, when he was assistant DNA technical leader at the ISP laboratory, Pfoser reviewed the DNA profile obtained from defendant (in 2002 and 2004) as well as the major profile obtained from the chicken bone swabs. In his opinion, the profiles matched. He testified that the profile would be expected to occur in 1 in 139 trillion Black, 1 in 8.9 trillion Caucasian, and 1 in 2.8 trillion Hispanic unrelated individuals.
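The trillions-to-one figures Pfoser cited come from the product rule: the estimated frequency of a full profile is the product of the population frequencies of the genotype observed at each locus, with the loci treated as statistically independent. The following is a toy sketch with invented locus frequencies (illustrative only, not the values used in this case):

```python
from math import prod

# Hypothetical genotype frequencies at nine loci (invented for illustration)
locus_freqs = [0.05, 0.11, 0.02, 0.08, 0.04, 0.09, 0.03, 0.06, 0.07]

# Product rule: multiply the per-locus frequencies together
profile_freq = prod(locus_freqs)
print(f"estimated random match probability: about 1 in {1 / profile_freq:,.0f}")
```

Because each additional locus multiplies in another small fraction, nine loci of even modestly common genotypes already push the estimate into the hundreds of billions, and thirteen loci push it far beyond.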
¶ 37 Defendant presented testimony of Dr. Karl Reich, founder and part owner of a private laboratory that performs and advises on DNA testing for law enforcement and private individuals. Dr. Reich, who holds an M.D. and a Ph.D. in molecular biology, testified that the ISP ran the 220,456 records in the Combined DNA Index System (CODIS) database against each other to determine the number of nine-loci matches. The study revealed 903 pairs of matching nine-loci profiles in the database, i.e., 1,806 persons with identical DNA at nine loci. In Dr. Reich's opinion, the study cast doubt on the State's claim that the odds of another unrelated Hispanic sharing defendant's DNA at nine loci are 1 in 2.8 trillion.
¶ 38 Dr. Reich also testified that there was “no way” to determine the source of the DNA or when the DNA was deposited on the bone. Dr. Reich stated that 13 loci, not 9, are necessary to have a “scientifically, fully justifiable identity match.” On cross-examination, Dr. Reich testified that his laboratory was paid over $100,000 for its work on this “very complicated” case.
¶ 39 In rebuttal, the State called Dr. Ranajit Chakraborty, who testified that the results of the Illinois DNA database study did not cast doubt on the random match probability that another person shares defendant's particular nine-loci profile.
¶ 40 IV. Statements of John Simonek
¶ 41 Before trial, the court granted defendant's motion to admit a statement by John Simonek, given to Palatine detectives August 9, 1999, that he and Todd Wakefield committed the murders. According to Simonek's videotaped statement, which was played for the jury, he and Wakefield went to Brown's Chicken on the evening of January 8, 1993. Wakefield drove a brown station wagon, which he parked on the east side of the restaurant, and Simonek and Wakefield entered through the east door. Wakefield ordered food. Five to ten minutes later, he pulled out a revolver and told the employees to move into the freezer. According to Simonek, Wakefield “moved everybody except for two people into the freezer,” and Simonek escorted the two people into the “other freezer.”
¶ 42 As he moved back toward Wakefield, Simonek heard gunshots and “started freaking out.” Wakefield shot the people in the freezer and then ordered Simonek to shoot the people in the cooler. Simonek argued with him and told Wakefield he did not “want any part of this,” but Wakefield “proceeded to make [Simonek] do it anyway.” Simonek “recall[ed] shooting somebody outside the freezer,” but did not “remember how they got out there though.” When Wakefield directed Simonek to shoot the person inside the cooler, Simonek resisted, but then shot into plastic strips over the entrance of the cooler. Wakefield and Simonek moved the body that was outside the cooler into the cooler and “laid it on its right side facing east.” Simonek then saw Wakefield stab two or three people in the freezer. Before leaving, Simonek saw that the freezer door was left partially open. He waited in the car for Wakefield, who drove him home. Simonek said he was “very sorry it happened,” he “just want[ed] this to be over,” and he “hope[d] [Wakefield] gets locked up.”
¶ 43 Before giving the videotaped statement, Simonek was arrested on August 5, 1999. Sergeant King testified that Jim Bell, coordinator of a task force to investigate the Brown's Chicken murders, had Simonek arrested, without King's knowledge, when King was on vacation. Police officer Steve Bratcher, who was one of the officers present for the statement, denied supplying Simonek with details about the crime scene. After giving his statement on August 9, 1999, Simonek was released and never prosecuted.
¶ 44 Assistant State's Attorney John Dillon testified that in July 1998, he had questioned Simonek at the request of Jim Bell. Simonek initially described what he was telling Dillon as a “vision statement,” meaning that the information Simonek gave was based on a vision he had. Dillon further testified that Simonek gave five statements, none of which was the same, which Simonek claimed were based on his firsthand knowledge.
¶ 45 Apart from Simonek's statement, the court denied defendant's motion to admit two other statements made to police, from Todd Wakefield and his former girlfriend, Casey Sander. In his statement, Wakefield told police that he was at Brown's Chicken on the night of the murders, that he ate the last meal sold that night, but that he was not involved in the murders. Sander told police that she saw Wakefield shoot people at the Brown's Chicken, but she denied any involvement in the murders. The trial court found that these hearsay statements, discussed in more detail in the analysis below, could not be admitted as statements against penal interest.
¶ 46 The jury found defendant guilty of seven counts of first degree murder. The trial court denied defendant's motion for a new trial. The jury found defendant eligible for the death penalty but did not reach a unanimous verdict on imposing a death sentence, and the court sentenced defendant to natural life imprisonment. Defendant now appeals.
¶ 48 I. Latent Print Evidence
¶ 49 Defendant first argues that the trial court erred when it denied his motion in limine to exclude John Onstwedder's expert testimony that defendant's palm print matched the partial latent print from the napkin found at Brown's Chicken. Defendant contends that the method used to match a known print to a latent print, known as friction ridge analysis or, more specifically, “ACE–V,” is not generally accepted in the relevant scientific community and thus the expert testimony should have been excluded under Frye v. United States, 293 F. 1013, 1014 (D.C.Cir.1923). Alternatively, defendant argues that the court should have conducted a Frye hearing to determine whether friction ridge analysis is a generally accepted technique within the relevant community. Our review is de novo. In re Commitment of Simons, 213 Ill.2d 523, 530–31, 290 Ill.Dec. 610, 821 N.E.2d 1184 (2004); People v. McKown (McKown I), 226 Ill.2d 245, 254, 314 Ill.Dec. 742, 875 N.E.2d 1029 (2007).
¶ 50 The admission of expert testimony in Illinois is governed by the Frye “general acceptance test.” Under Frye, “scientific evidence is admissible at trial only if the methodology or scientific principle upon which the opinion is based is ‘sufficiently established to have gained general acceptance in the particular field in which it belongs.’ ” Simons, 213 Ill.2d at 529–30, 290 Ill.Dec. 610, 821 N.E.2d 1184 (quoting Frye, 293 F. at 1014). “[T]he Frye test is necessary only if the scientific principle, technique or test offered by the expert to support his or her conclusion is ‘new’ or ‘novel.’ ” People v. McKown (McKown II), 236 Ill.2d 278, 282–83, 338 Ill.Dec. 415, 924 N.E.2d 941 (2010). “General acceptance” of a methodology does not mean “universal acceptance,” and “it does not require that the methodology * * * be accepted by unanimity, consensus, or even a majority of experts.” Simons, 213 Ill.2d at 530, 290 Ill.Dec. 610, 821 N.E.2d 1184; Donaldson v. Central Illinois Public Service Co., 199 Ill.2d 63, 76–77, 262 Ill.Dec. 854, 767 N.E.2d 314 (2002), abrogated on other grounds by Simons, 213 Ill.2d at 530, 290 Ill.Dec. 610, 821 N.E.2d 1184. Under Frye, we inquire as to the general acceptance of a methodology, not the particular conclusion reached by an examiner or the application of the methodology in a particular case. Donaldson, 199 Ill.2d at 77, 262 Ill.Dec. 854, 767 N.E.2d 314; McKown I, 226 Ill.2d at 255, 314 Ill.Dec. 742, 875 N.E.2d 1029.
¶ 51 A. Judicial Notice of General Acceptance
¶ 52 “A court may determine the general acceptance of a scientific principle or methodology in either of two ways: (1) based on the results of a Frye hearing or (2) by taking judicial notice of unequivocal and undisputed prior judicial decisions or technical writings on the subject.” McKown I, 226 Ill.2d at 254, 314 Ill.Dec. 742, 875 N.E.2d 1029. In this case, the trial court denied defendant's motion for a Frye hearing and instead concluded that “the methodology used in palm print identification is commonly accepted within the scientific community.” Because the court's decision to take judicial notice of general acceptance is central to this appeal, we review in some detail two recent opinions from our supreme court on the issue: In re Commitment of Simons, finding that judicial notice of general acceptance was proper, and McKown I, finding that a Frye hearing was required.
¶ 53 In Simons, our supreme court considered whether the trial court properly took judicial notice of the general acceptance of actuarial risk assessment. Simons, 213 Ill.2d at 533, 290 Ill.Dec. 610, 821 N.E.2d 1184. The supreme court, noting that the appellate court in Illinois was “sharply divided” on the question (id.), explained that “experts in at least 19 other states rely upon actuarial risk assessment in forming their opinions on sex offenders' risk of recidivism.” Id. at 535–36, 290 Ill.Dec. 610, 821 N.E.2d 1184. Among these court opinions, eight had “directly addressed the Frye question and concluded either that Frye is inapplicable to actuarial risk assessment or that actuarial risk assessment satisfies the general acceptance standard.” Id.
¶ 54 While acknowledging that “relying exclusively upon prior judicial decisions to establish general scientific acceptance can be a hollow ritual if the underlying issue of scientific acceptance has not been adequately litigated,” the court concluded that “general acceptance of actuarial risk assessment has been thoroughly litigated in several states.” (Internal quotation marks omitted.) Id. at 537, 290 Ill.Dec. 610, 821 N.E.2d 1184. As an example, the Simons court reviewed a Florida appellate court opinion where the court reviewed testimony of four experts at trial (which showed that actuarial risk assessment was frequently used), “exhaustively examined” conclusions reached by a number of other psychologists in the academic literature, and “surveyed the nationwide jurisprudence,” which uniformly found actuarial risk assessment evidence admissible. Id. at 537–39, 290 Ill.Dec. 610, 821 N.E.2d 1184 (citing Roeling v. State, 880 So.2d 1234, 1238–40 (Fla.Dist.Ct.App.2004)). The court specifically noted the absence of judicial opinions contradicting the conclusion reached by the Florida court: “As importantly, we were unable to identify any state outside of Illinois in which expert testimony based upon actuarial risk assessment was deemed inadmissible on the question of sex offender recidivism.” Id. at 539, 290 Ill.Dec. 610, 821 N.E.2d 1184.
¶ 55 The Simons court then observed that several jurisdictions mandated actuarial risk assessment by statute or regulation. The court further “note[d] that the academic literature contains many articles confirming the general acceptance of actuarial risk assessment by professionals who assess sexually violent offenders for risk of recidivism.” Id. at 541, 290 Ill.Dec. 610, 821 N.E.2d 1184. Ultimately, the court concluded that “[t]aking all of this together—the case law, the statutory law, and the academic literature,” it was “more than convinced that actuarial risk assessment has gained general acceptance in the psychological and psychiatric communities.” Id. at 543, 290 Ill.Dec. 610, 821 N.E.2d 1184.
¶ 56 The court encountered a very different landscape of judicial opinions and scientific literature in McKown I. In McKown I, the trial court admitted the results of Horizontal Gaze Nystagmus (HGN) testing, purportedly measuring involuntary rapid movement of the eyeballs. McKown I, 226 Ill.2d at 247–48, 314 Ill.Dec. 742, 875 N.E.2d 1029. On appeal, the defendant argued that the trial court erred in admitting the results of a test without first holding a Frye hearing to determine whether it was generally accepted that HGN testing is a reliable indicator of alcohol consumption. Id. at 247, 314 Ill.Dec. 742, 875 N.E.2d 1029.
¶ 57 Our supreme court first determined that the methodology of HGN testing is novel for purposes of Frye “[g]iven the history of legal challenges to the admissibility of HGN test evidence, and the fact that a Frye hearing has never been held in Illinois on this matter.” Id. at 258, 314 Ill.Dec. 742, 875 N.E.2d 1029. The court relied on a decision of the California Supreme Court, which held that “ ‘HGN testing has been repeatedly challenged in court, with varying degrees of success, in this and other states, and accordingly its courtroom use cannot fairly be characterized as “routine” or settled in law.’ ” Id. at 257, 314 Ill.Dec. 742, 875 N.E.2d 1029 (quoting People v. Leahy, 8 Cal.4th 587, 34 Cal.Rptr.2d 663, 882 P.2d 321, 332 (Cal.1994)). The McKown I court noted that since the California decision in 1994, “HGN testing has been repeatedly challenged in courts around the nation, and the issue remains unsettled.” Id.
¶ 58 Turning to the issue of general acceptance, the court first reviewed the divergent decisions of the Illinois Appellate Court. While two districts of the Illinois Appellate Court had taken judicial notice of general acceptance of HGN testing as an indicator of intoxication, in another decision, People v. Kirk, 289 Ill.App.3d 326, 224 Ill.Dec. 452, 681 N.E.2d 1073 (1997), the appellate court declined to take judicial notice of the general acceptance of HGN testing. In Kirk, the appellate court found that in the seminal case establishing general acceptance relied on by most courts, the trial court had not conducted a Frye hearing, and the reviewing court relied on the testimony of the State's expert to establish general acceptance, as no defense witness had been called. Kirk, 289 Ill.App.3d at 333–34, 224 Ill.Dec. 452, 681 N.E.2d 1073; McKown I, 226 Ill.2d at 263–65, 314 Ill.Dec. 742, 875 N.E.2d 1029. Beyond the Kirk decision, the McKown I court reviewed several appellate court decisions from foreign jurisdictions and found that they were “as varied as the states that have made them” regarding the general acceptance of HGN testing. McKown I, 226 Ill.2d at 272, 314 Ill.Dec. 742, 875 N.E.2d 1029. The court concluded that these decisions did “not present the kind of unequivocal or undisputed viewpoint on the issue upon which a court can take judicial notice.” Id.
¶ 59 Turning to the State's argument that the court could take judicial notice of the general acceptance of HGN testing “based on the technical writings on the subject,” the court found that “HGN testing appears to have as many critics as it does champions.” Id. at 275, 314 Ill.Dec. 742, 875 N.E.2d 1029. Without an “unequivocal or undisputed viewpoint” from the scientific literature, the court decided that it could not “take judicial notice of the general acceptance of the HGN test as a reliable indicator of alcohol impairment based on these technical writings.” Id. In contrast to its resolution in Simons, the McKown I court concluded that general acceptance could not be resolved on judicial notice alone:
“In light of the disparate resolutions of the issue in foreign jurisdictions, the varying opinions expressed in articles on the subject, the fact that a Frye hearing has never been held on the matter in Illinois, and the fact that, as far as we are aware, the last Frye hearing held on this controversial methodology was held in Washington in 2000, we hold that a Frye hearing must be held to determine if the HGN test has been generally accepted as a reliable indicator of alcohol impairment.” Id. at 275, 314 Ill.Dec. 742, 875 N.E.2d 1029.
With this framework in mind, we turn to the relevant methodology in this case.
¶ 60 B. Latent Print Identification: The ACE–V Method
¶ 61 Print examiners compare impressions left by “friction ridge skin” found on the inner surfaces of the hand, the fingertips, between and along the fingers, the palms, and on the soles of the feet. While defendant does not dispute that human friction ridge skin is unique and permanent, defendant challenges the method used to match a latent print (i.e., a fingerprint impression that is not visible to the naked eye without chemical enhancement) with the print of an identified source. That method is known as “ACE–V,” which signifies analysis, comparison, evaluation, and verification. While the steps performed under ACE–V are essentially the same steps performed by fingerprint experts over the last hundred years, ACE–V has been identified in forensic literature as a means of comparative analysis of evidence since 1959. National Research Council of the National Academy of Sciences, Strengthening Forensic Science in the United States: A Path Forward 137 (2009) (NRC Report). The ACE–V method has been generally described as follows:
“The process begins with the analysis of the unknown friction ridge print (now often a digital image of a latent print). Many factors affect the quality and quantity of detail in the latent print and also introduce variability in the resulting impression. * * *
* * * If the examiner deems that there is sufficient detail in the latent print (and the known prints), the comparison of the latent print to the known prints begins.
Visual comparison consists of discerning, visually ‘measuring,’ and comparing—within the comparable areas of the latent print and the known prints—the details that correspond. The amount of friction ridge detail available for this step depends on the clarity of the two impressions. The details observed might include the overall shape of the latent print, anatomical aspects, ridge flows, ridge counts, shape of the core, delta location and shape, lengths of the ridges, minutia location and type, thickness of the ridges and furrows, shapes of the ridges, pore position, crease patterns and shapes, scar shapes, and temporary feature shapes (e.g., a wart).
At the completion of the comparison, the examiner performs an evaluation of the agreement of the friction ridge formations in the two prints and evaluates the sufficiency of the detail present to establish an identification (source determination). Source determination is made when the examiner concludes, based on his or her experience, that sufficient quantity and quality of friction ridge detail is in agreement between the latent print and the known print. Source exclusion is made when the process indicates sufficient disagreement between the latent print and known print. If neither an identification nor an exclusion can be reached, the result of the comparison is inconclusive. Verification occurs when another qualified examiner repeats the observations and comes to the same conclusion, although the second examiner may be aware of the conclusion of the first.” NRC Report, supra, at 137–38.
¶ 62 C. Latent Print Analysis as “New” or “Novel”
¶ 63 As a threshold matter, we must consider whether a Frye inquiry is required at all, for application of the Frye standard is limited to scientific methodology that is considered “new” or “novel.” McKown I, 226 Ill.2d at 257, 314 Ill.Dec. 742, 875 N.E.2d 1029; Donaldson, 199 Ill.2d at 78, 262 Ill.Dec. 854, 767 N.E.2d 314. While recognizing that “a ‘new’ or ‘novel’ scientific technique is not always easy to identify, especially in light of constant scientific advances in our modern era,” our supreme court has instructed that generally a “scientific technique is ‘new’ or ‘novel’ if it is ‘original or striking’ or does ‘not resembl[e] something formerly known or used.’ ” Donaldson, 199 Ill.2d at 78, 262 Ill.Dec. 854, 767 N.E.2d 314 (quoting Webster's Third New International Dictionary 1546 (1993)).
¶ 64 The State initially contends that we can end our inquiry here because the long history of latent print analysis precludes a conclusion that the “methodology used in [the] comparison of the latent and known prints” is new or novel. People v. Mitchell, 2011 IL App (1st) 083143, ¶ 31, 353 Ill.Dec. 369, 955 N.E.2d 1180. In Mitchell, this court reasoned that “[u]ntil our supreme court decides otherwise, as it did with regard to the HGN evidence in People v. McKown, 226 Ill.2d 245, 257, 314 Ill.Dec. 742, 875 N.E.2d 1029 (2007), there is no authority in this state for the defendant's claim that the circuit court erred in rejecting the defendant's motion for a Frye hearing on the admissibility of fingerprint evidence. Nor are we persuaded to provide such authority in this case.” Id.
¶ 65 While it would be difficult to describe fingerprint evidence as “original or striking” or not “formerly known or used,” Donaldson acknowledges that “constant scientific advances in our modern era” may affect our inquiry as to the novelty of a particular methodology. Donaldson, 199 Ill.2d at 79, 262 Ill.Dec. 854, 767 N.E.2d 314. Drawing support from McKown I, defendant argues that latent print identification should be deemed “novel” because although latent print analysis has long been used by print examiners, the methodology, like HGN testing, has been subject to recent court challenges and has not been subject to a Frye hearing in Illinois. See McKown I, 226 Ill.2d at 258, 314 Ill.Dec. 742, 875 N.E.2d 1029 (“Given the history of legal challenges to the admissibility of HGN test evidence, and the fact that a Frye hearing has never been held in Illinois on this matter, we conclude that the methodology of HGN testing is novel for purposes of Frye.”).
¶ 66 According to defendant, “[n]ovelty for Frye purposes is not determined by the number of years a technique has been used, but by whether the methodology has been subject to recent challenge.” We find it unclear whether the court in McKown I found that HGN testing was novel (1) merely because there was a “history of legal challenges,” or (2) because the courts addressing those challenges had reached different conclusions as to the general acceptance of HGN testing. See id. at 257, 314 Ill.Dec. 742, 875 N.E.2d 1029. As explained below, courts have uniformly rejected challenges to the admissibility of print evidence under Frye or Daubert. In any event, even if we agree with defendant's view of the novelty threshold and find that the ACE–V methodology is “novel” merely because recent challenges have been brought, it does not follow that the trial court was required to hold a Frye hearing. We must consider whether it was proper for the trial court to take judicial notice of prior judicial decisions that have thoroughly explored the issue or technical writings on the subject. McKown I, 226 Ill.2d at 254, 314 Ill.Dec. 742, 875 N.E.2d 1029.
¶ 67 D. Judicial Notice of General Acceptance of the ACE–V Method
¶ 68 As noted above, and as defendant acknowledges, wholesale objections to the ACE–V methodology have been uniformly rejected by state appellate courts (under Frye, Daubert, or some hybrid standard of admissibility) and by federal appellate courts (under Daubert). 1 See, e.g., State v. Dixon, 822 N.W.2d 664, 674–75 (Minn.Ct.App.2012) (affirming lower court's conclusion that ACE–V methodology was generally accepted within the relevant scientific community, after conducting Frye hearing involving three expert witnesses from the State and two expert witnesses for the defendant); Commonwealth v. Patterson, 445 Mass. 626, 840 N.E.2d 12 (Mass.2005) (affirming lower court's conclusion, after conducting five-day evidentiary hearing to consider defendant's challenge to the admissibility of print evidence, that latent print identification theory and the ACE–V process of identification were generally accepted in the fingerprint examiner community); Commonwealth v. Gambora, 457 Mass. 715, 933 N.E.2d 50, 55–61 (Mass.2010) (reaffirming its earlier decision in Patterson); Markham v. State, 189 Md.App. 140, 984 A.2d 262 (Md.Ct.Spec.App.2009) (taking judicial notice of general acceptance of latent fingerprint analysis in case involving a palm print); Barber v. State, 952 So.2d 393, 422 (Ala.Crim.App.2005) (taking judicial notice of general acceptance of latent print analysis in case involving a palm print); State v. Escobido–Ortiz, 109 Hawai‘i 359, 126 P.3d 402, 413 (Haw.Ct.App.2005) (“We take judicial notice, based on the overwhelming case law from other jurisdictions, that the theory underlying latent fingerprint identification is valid and that the procedures used in identifying latent fingerprints, if performed properly, have been widely accepted as reliable.”); United States v. Herrera, 704 F.3d 480, 484 (7th Cir.2013) (rejecting the defendant's “frontal assault on the use of fingerprint evidence in litigation” under Daubert); United States v. Baines, 573 F.3d 979, 989–92 (10th Cir.2009) (finding, based on record developed at a Daubert hearing, “overwhelming acceptance” among “experts in the field,” but noting “little presence of disinterested experts such as academics”); United States v. Pena, 586 F.3d 105, 109–11 (1st Cir.2009) (taking judicial notice of Baines, Mitchell, and other federal decisions as to reliability of ACE–V method under Daubert); United States v. Vargas, 471 F.3d 255, 265–66 (1st Cir.2006); United States v. Abreu, 406 F.3d 1304, 1307 (11th Cir.2005); United States v. Mitchell, 365 F.3d 215, 244–46 (3d Cir.2004) (reviewing the “voluminous record” developed at a Daubert hearing where witnesses for the government and defendant testified, finding latent print evidence admissible under Daubert, and finding that “finger print identification is generally accepted within the fingerprint identification community”); United States v. Crisp, 324 F.3d 261, 269 (4th Cir.2003) (finding that defendant had “provided us no reason today to believe that this general acceptance of the principles underlying fingerprint identification has, for decades, been misplaced”); United States v. George, 363 F.3d 666, 672–73 (7th Cir.2004); United States v. Sherwood, 98 F.3d 402, 408 (9th Cir.1996); see also United States v. Llera Plaza, 188 F.Supp.2d 549, 563–64 (E.D.Pa.2002) (finding that latent print analysis satisfied Daubert test, after hearing from two government witnesses and three defense witnesses, and finding that “fingerprint community's ‘general acceptance’ of ACE–V should not be discounted because fingerprint specialists * * * have ‘technical, or other specialized knowledge’ [citation], rather than ‘scientific knowledge’ [citation]”).
¶ 69 Indeed, defendant has not cited a published opinion of any court 2 suggesting that ACE–V methodology is not generally accepted within the relevant scientific community or holding that finger or palm print evidence is inadmissible. Instead, while acknowledging the pervasive use of fingerprint identification—and the overwhelming conclusion by the courts that such evidence is admissible under Daubert or Frye—defendant argues that recent criticisms of the ACE–V method in the scientific literature preclude this court from taking judicial notice of its general acceptance. Focusing primarily on a 2009 report from the National Research Council of the National Academy of Sciences, defendant identifies several specific criticisms of the ACE–V method, which he contends establish a lack of general acceptance within the relevant scientific community. NRC Report, supra. Defendant further argues that because earlier court decisions merely reflect “[u]ncritical admission of latent print evidence,” or otherwise do not address the report's critiques, it was improper for the trial court to take judicial notice of court decisions finding a general acceptance of latent print identification. For several reasons, we conclude that these criticisms—which have already been considered in detail by courts since the report's release—do not undermine the uniform judicial conclusion that latent print identification is generally accepted in the scientific community.
¶ 70 First, under the Frye framework in Illinois, various critiques that defendant highlights from the NRC Report go to the weight of the evidence, not to its admissibility under Frye. Our supreme court has repeatedly reaffirmed that under Frye, our role is to evaluate the underlying methodology used to generate the conclusion, not the conclusion reached by the examiner: “If the underlying method used to generate an expert's opinion [is] reasonably relied upon by the experts in the field, the fact finder may consider the opinion-despite the novelty of the conclusion rendered by the expert.” Donaldson, 199 Ill.2d at 77, 262 Ill.Dec. 854, 767 N.E.2d 314; McKown I, 226 Ill.2d at 255, 314 Ill.Dec. 742, 875 N.E.2d 1029. Similarly, “[q]uestions concerning underlying data, and an expert's application of generally accepted techniques, go to the weight of the evidence, rather than its admissibility.” (Emphasis in original.) Donaldson, 199 Ill.2d at 81, 262 Ill.Dec. 854, 767 N.E.2d 314; In re Detention of Erbe, 344 Ill.App.3d 350, 372, 279 Ill.Dec. 295, 800 N.E.2d 137 (2003).
¶ 71 We acknowledge that the NRC Report and the scientific literature sound direct criticisms of specific claims from latent print examiners. For example, the report contends that examiners have no justification for “claims of absolute, certain confidence” when declaring a match, and that examiners sometimes testify to a “zero error rate,” even though there are known cases of false identification based on errors in human judgment or in execution of the ACE–V method. NRC Report, supra, at 143–44; see also Gambora, 933 N.E.2d at 60 (noting that these are “issues about which the NAS Report 3 is most critical”). Under Illinois law, the forum for these criticisms—directed at the examiner's stated conclusion regarding what his testing revealed—is trial, not a Frye admissibility challenge. Before the jury, the examining attorney “may expose shaky but admissible evidence by vigorous cross-examination or the presentation of contrary evidence.” Donaldson, 199 Ill.2d at 88, 262 Ill.Dec. 854, 767 N.E.2d 314; Donaldson, 199 Ill.2d at 81, 262 Ill.Dec. 854, 767 N.E.2d 314; see also Detention of Erbe, 344 Ill.App.3d at 372, 279 Ill.Dec. 295, 800 N.E.2d 137 (noting that “concern with ‘frequent scoring inconsistencies by different evaluators' [in actuarial risk assessment to determine probability of reoffending] goes to the expert's application of the actuarial instruments, not to their general acceptance”).
¶ 72 Our supreme court has also recently emphasized that expert testimony may be excluded or limited under traditional evidentiary rules, even where Frye is not a bar to admissibility. In McKown II, in response to the defendant's argument “that despite its relevance, a failed HGN test result ‘proves too much’ because of its ‘aura’ of scientific certainty,” the court noted that “finding that HGN evidence meets the Frye standard does not preclude the possibility that, in a given case, the trial court might rule such evidence inadmissible on grounds of undue prejudice.” McKown II, 236 Ill.2d at 305, 338 Ill.Dec. 415, 924 N.E.2d 941. Similarly, where an expert testifies as to a generally accepted method, the proponent of that testimony has the burden of laying a proper foundation. Id. at 311, 338 Ill.Dec. 415, 924 N.E.2d 941. While we express no opinion about the viability of specific efforts to exclude claims of zero error or testimony regarding the certainty of a match in future cases (which depend on the specific testimony and the support offered for those claims), we reiterate that a Frye admissibility challenge is not the proper vehicle to question the conclusions an examiner reaches in a particular case. Further, in this case, latent print examiner Onstwedder was thoroughly cross-examined by defense counsel about his ability to draw a conclusion as to the partial palm print in this case, as well as the reliability of latent print identification in light of past mistaken fingerprint identifications and the subjective nature of comparison.
¶ 73 Second, while there is no doubt that the report raises important criticisms of the ACE–V method and otherwise highlighted areas for further research, the report is not a proxy for the admissibility of latent print evidence under either Frye or Daubert:
“The committee decided early in its work that it would not be feasible to develop a detailed evaluation of each discipline in terms of its scientific underpinning, level of development, and ability to provide evidence to address the major types of questions raised in criminal prosecutions and civil litigation.” NRC Report, supra, at 7.
Nor do the specific critiques of ACE–V, in themselves, necessarily reflect a view that the methodology is not “sufficiently established to have gained general acceptance in the particular field in which it belongs.” (Internal quotation marks omitted.) Simons, 213 Ill.2d at 529–30, 290 Ill.Dec. 610, 821 N.E.2d 1184. To be sure, the report is critical of the ACE–V methodology and its reliance on subjective assessment of an examiner:
“ACE–V provides a broadly stated framework for conducting friction ridge analyses. However, this framework is not specific enough to qualify as a validated method for this type of analysis. ACE–V does not guard against bias; it is too broad to ensure repeatability and transparency; and does not guarantee that two analysts following it will obtain the same results. For these reasons, merely following the steps of ACE–V does not imply that one is proceeding in a scientific manner or producing reliable results.” NRC Report, supra, at 142.
Yet in its summary assessment of print identification, the report explains that “[b]ecause of the amount of detail available in friction ridges, it seems plausible that a careful comparison of two impressions can accurately discern whether or not they had a common source.” Id. And the report acknowledges that “[h]istorically, friction ridge analysis has served as a valuable tool, both to identify the guilty and to exclude the innocent.” Id. As the Supreme Judicial Court of Massachusetts explained, the report “does not conclude that fingerprint evidence is so unreliable that courts should no longer admit it.” Gambora, 933 N.E.2d at 58.
¶ 74 As in Gambora, several other courts have found that the nuanced report, while critical of various aspects of the ACE–V methodology, does not in itself establish a lack of general acceptance among the relevant scientific community or otherwise undermine the uniform body of precedent rejecting admissibility challenges to print evidence. See State v. Dixon, 822 N.W.2d 664, 674–75 (Minn.Ct.App.2012) (agreeing with the lower court's conclusion—after conducting a Frye hearing, hearing from experts for the State and the defendant, and assessing the NRC Report—that the relevant scientific field widely accepts the ACE–V methodology and that “[t]he fact that friction ridge analysis can and should be improved and strengthened does not mean that it is inadmissible under Frye”); Johnston v. State, 27 So.3d 11, 21 (Fla.2010) (agreeing with a lower court that the NRC report “is merely a new or updated discussion of issues regarding developments in forensic testing” and concluding that it “lacks the specificity that would justify a conclusion that it provides a basis to find the forensic evidence admitted at trial to be infirm or faulty”); United States v. Rose, 672 F.Supp.2d 723, 725 (D.Md.2009) (“[T]he Report itself did not conclude that fingerprint evidence was unreliable such as to render it inadmissible under Fed. R. Ev. 702.”); United States v. Stone, 848 F.Supp.2d 714, 717–18 (E.D.Mich.2012) (rejecting defendants' request to exclude finger and palm print evidence under Daubert, despite defendants' challenge to “the continuing viability of latent fingerprint evidence in the face of the NAS Committee's call for more research”); see also Pettus v. United States, 37 A.3d 213, 226–29 (D.C.2012); State v. McGuire, 419 N.J.Super. 88, 16 A.3d 411, 436–37 (N.J.Super.Ct.App.Div.2011) (both discussing relevance of NRC Report to admission of non-fingerprint scientific evidence).
We find the conclusion of the Massachusetts Supreme Court captures the prevailing view of the NRC report:
“As our discussion of the NAS Report reflects, there is tension in the report between its assessments that, on the one hand, ‘it seems plausible that a careful comparison of two impressions can accurately discern whether or not they had a common source,’ NAS Report at 142, but that, on the other, ‘merely following the steps of ACE–V does not imply that one is proceeding in a scientific manner or producing reliable results.’ Id. We are not able to resolve that tension at this time, as the dialogue in the relevant community appears to be a continuing one, see NAS Report at 144–145 & n.37, but nothing in this opinion should be read to suggest that the existence of the NAS Report alone will require the conduct of Daubert-Lanigan hearings as to the general reliability of expert opinions concerning fingerprint identifications.” 4 Gambora, 933 N.E.2d at 61 n. 22.
¶ 75 Third, while the report represents the views of a segment of the scientific community, it does not neatly reflect the views of the entirety of the relevant scientific community. This court has counseled against defining the relevant scientific community so narrowly that it includes only those who share the views of the testifying expert. See Bernardoni v. Industrial Com'n, 362 Ill.App.3d 582, 595, 298 Ill.Dec. 530, 840 N.E.2d 300 (2005) (“The community of experts must include a sufficiently broad sample of experts so that the possibility of disagreement exists.”). But we would make a similar mistake if we were to assume that the criticisms in the report are representative of the views of experts throughout the community.
¶ 76 Although the parties offer little discussion of the makeup of the relevant scientific community, defendant concedes that it includes “forensic practitioners,” as well as those with a scientific background and training “sufficient to allow them to comprehend and understand [the ACE–V method] and form a judgment about it.” That view is generally consistent with decisions in Illinois considering other methodologies and scientific principles. See People v. Eyler, 133 Ill.2d 173, 215, 139 Ill.Dec. 756, 549 N.E.2d 268 (1989) (agreeing with lower court that “electrophoresis is generally accepted by forensic scientists as a reliable method of detecting genetic markers in blood and is therefore admissible”); People v. Thomas, 137 Ill.2d 500, 518, 148 Ill.Dec. 751, 561 N.E.2d 57 (1990) (“The Partee decision allowed the trial court in this case to effectively take judicial notice of the fact that the relevant scientific community, that is, the community of forensic scientists, has determined that the process itself is reliable.”); People v. Watson, 257 Ill.App.3d 915, 926, 196 Ill.Dec. 89, 629 N.E.2d 634 (1994) (rejecting the State's argument that the relevant field includes only forensic scientists and concluding that the general acceptance of the proposed DNA profiling evidence should be evaluated by scientists in the fields of molecular biology, population genetics, and forensic science); cf. McKown II, 236 Ill.2d at 300, 338 Ill.Dec. 415, 924 N.E.2d 941 (where the question was whether HGN testing could reliably determine alcohol impairment, relevant scientific fields included medicine, ophthalmology, optometry, and neurophysiology, and thus police officer's testimony that “HGN testing is generally accepted within the law enforcement community as a field-sobriety test” was irrelevant, for “law enforcement” is not a scientific field); In re Commitment of Sandry, 367 Ill.App.3d 949, 967, 306 Ill.Dec.
202, 857 N.E.2d 295 (2006) (framing the question as whether penile plethysmograph testing is generally accepted “within the field consisting of those experts who specialize in treating sex offenders”). That view is also consistent with other decisions in fingerprint cases. See, e.g., Dixon, 822 N.W.2d at 674 (noting that the “relevant scientific community” includes experts who are “actually involved in latent-print analysis and those who actually research the reliability of latent-print analysis”); Baines, 573 F.3d at 991 (finding “overwhelming acceptance” of the methodology among the relevant community, which the court noted includes “experts in the field,” even though there is “little presence of disinterested experts such as academics”); Llera Plaza, 188 F.Supp.2d at 563–64 (describing relevant community as fingerprint specialists with technical or specialized knowledge); see also Mitchell, 365 F.3d at 241 (concluding friction ridge analysis is generally accepted in the “forensic identification community”).
¶ 77 While defendant accepts that forensic practitioners are part of the relevant scientific community, he argues that practitioners' “confidence in the practice of latent print identification, and its longstanding use, is insufficient to demonstrate scientific validity and general acceptance in the broader community.” Contrary to defendant's characterization, however, including forensic practitioners within the relevant community does not mean that general acceptance merely depends on testimony regarding “[r]epeated use by self-interested practitioners” who lack the scientific knowledge and expertise to meaningfully assess the ACE–V method. For example, the federal court in Rose was presented with an amicus brief from 16 academicians and practicing scientists representing a variety of scholarly as well as technical and scientific disciplines concerned with the reliability of fingerprint individualization evidence. 5 See Rose, 672 F.Supp.2d at 724. These scientists, arguing in support of the admissibility of fingerprint evidence, represented that they had “authored books and articles in leading scientific, peer-reviewed journals on forensics and fingerprint issues.”
¶ 78 More recently, as part of a Frye hearing, a Minnesota trial court heard testimony from two forensic scientists, Glenn Langenburg and Dr. Cedric Neumann, who opined that the ACE–V method is generally accepted within the relevant scientific community. 6 Dixon, 822 N.W.2d at 667–68. The court also reviewed the testimony of Dr. Sandy Zabell, an expert for the defense with a master's degree in biochemistry and a Ph.D. in mathematics. While of the opinion that “ACE–V is not accepted as an objective, scientifically validated protocol,” Dr. Zabell acknowledged that “it is viewed by many in the scientific community as a framework for subjective assessment with a limited amount of detail.” Id. at 670. He continued that “it is not his opinion that fingerprint evidence is unreliable or should not be allowed in court; rather, it is his opinion that it should be allowed with various safeguards about what an examiner can say.” Id. It was this testimony and the NRC Report—representing not just self-interested practitioners with limited knowledge, but experts “actually involved in latent-print analysis and those who actually research the reliability of latent-print analysis”—on which the court based its conclusion that experts in the relevant scientific community widely accept the ACE–V methodology. Id. at 674.
¶ 79 Neither party here has offered a holistic description of the current views of the scientific community, and we do not mean to suggest that we have provided an exhaustive review here. But we reject defendant's claim that the various courts considering latent print analysis have failed to “thoroughly explore” the views of the relevant community, including the criticisms listed in the 2009 NRC Report. Rather than simply concluding that fingerprint evidence is admissible based on repeated use by practitioners, the courts have heard forensic scientists and experts respond to critiques of the ACE–V method from the NRC Report and elsewhere—sometimes after extensive Frye or Daubert hearings. See, e.g., Dixon, 822 N.W.2d at 667–68; compare McKown I, 226 Ill.2d at 263–65, 314 Ill.Dec. 742, 875 N.E.2d 1029 (reviewing criticisms of seminal case establishing general acceptance of HGN testing where no Frye hearing was held and court relied upon testimony of prosecution witness alone to establish general acceptance). The overwhelming conclusion has been that these criticisms do not counsel against admissibility of latent print evidence.
¶ 80 Moreover, the “general acceptance” standard tolerates criticism of a methodology from experts within the scientific community: “[g]eneral acceptance of methodologies does not mean ‘universal’ acceptance of methodologies.” Donaldson, 199 Ill.2d at 77, 262 Ill.Dec. 854, 767 N.E.2d 314. Although a technique “is not ‘generally accepted’ if it is experimental or of dubious validity,” our supreme court has stated—and reaffirmed—that “general acceptance does not require that the methodology be accepted by unanimity, consensus, or even a majority of experts.” Id. at 78, 262 Ill.Dec. 854, 767 N.E.2d 314; Simons, 213 Ill.2d at 530, 290 Ill.Dec. 610, 821 N.E.2d 1184. Without addressing this binding precedent, defendant, relying on McKown I, repeatedly suggests that judicial notice is inappropriate in the absence of “unequivocal and undisputed” technical writings. In McKown I, however, the court found the views among court decisions to be as varied as their states of origin; scientific publications, revealing no consensus within the relevant community, were likewise a poor source from which to glean general acceptance. Here, the landscape is very different. Defendant has not cited a single published opinion from any state or federal court finding latent print evidence inadmissible under Frye or Daubert. This is the same situation our supreme court encountered in Simons, where judicial notice was appropriate, not McKown I. Compare Simons, 213 Ill.2d at 539, 290 Ill.Dec. 610, 821 N.E.2d 1184 (“[W]e were unable to identify any state outside of Illinois in which expert testimony based upon actuarial risk assessment was deemed inadmissible on the question of sex offender recidivism.”), with McKown I, 226 Ill.2d at 272, 314 Ill.Dec. 742, 875 N.E.2d 1029 (“These disparate [judicial] opinions provide insight as to how HGN testing has been addressed, but do not present the kind of unequivocal or undisputed viewpoint on the issue upon which a court can take judicial notice.”).
¶ 81 Where “general acceptance” does not require unanimity, consensus, or even a majority of experts, and those courts considering latent print analysis have considered the range of views within the relevant scientific community, we conclude that the trial court did not err in taking judicial notice of the general acceptance of the ACE–V methodology. Accord In re Commitment of Sandry, 367 Ill.App.3d 949, 975, 306 Ill.Dec. 202, 857 N.E.2d 295 (2006) (taking judicial notice of the general acceptance of penile plethysmograph testing under Frye, where the testing was widely used, several courts had found that the methodology met the general acceptance test, and scientific literature supported its general acceptance, notwithstanding the criticism of penile plethysmograph testing within the literature and the rejection of it by several courts applying Daubert).
¶ 82 We finally note that defendant, at times, attempts to narrow his outright attack on the acceptance of the ACE–V method, arguing that he is simply challenging the acceptance of “ACE–V as applied to palm prints.” The problem is that his criticisms cannot be divorced from those raised (and rejected uniformly by courts) as to the ACE–V method itself, whether applied to fingerprints or palm prints. Defendant seems to recognize this flaw, for he takes the unqualified position that “friction ridge analysis is not generally accepted in the relevant scientific community.” And while he principally relies on the criticisms of “friction ridge analysis” in the NRC Report, the report does not draw any distinction between friction ridge analysis as applied to fingerprints, palm prints, and sole prints. See NRC Report, supra, at 136 (“Fingerprints, palm prints, and sole prints have been used to identify people for more than a century in the United States. Collectively, the analysis of these prints is known as ‘friction ridge analysis,’ which consists of experience-based comparisons of the impressions left by the ridge structures of volar (hands and feet) surfaces.”).
¶ 83 Defendant cites to specific criticisms of palm prints included in an affidavit submitted to the trial court by two print experts, Lyn and Ralph Norman Haber. The Habers argue that there has not been a “detailed analysis of how to describe the features of latent palmprints” and that “[t]here are no manuals on palm print comparisons approved or issued by any of the latent print community's professional, regulatory or advisory organizations.” Yet in their affidavit, the Habers acknowledge that others take the view that the ACE–V method can be applied to palm prints, and the Habers more broadly attack the validity of the ACE–V method as applied to palm prints or finger prints. Also, we see little distinction between these specific criticisms and those that have been lodged at the ACE–V method more generally. See, e.g., NRC Report, supra, at 141 (noting that “the latent print examiner learns to judge whether there is sufficient detail,” but contending that “more nuanced criteria are needed, and, in fact, likely can be determined”) Dixon, 822 N.W.2d at 669–70 (noting that Dr. Zabell opined that “there is no ACE–V manual and there is no precise statement as to how certain determinations are made”).
¶ 84 The crux of defendant's argument “vis a vis palm prints” is that the application of the methodology and the conclusion reached regarding a match may be subject to greater criticism because “even less is known about ACE–V as applied to palm prints,” especially with the latent palm print here. For example, defendant notes “several anomalies” as to “the partial latent print in this case.” These criticisms are plainly not directed at the general acceptance of the methodology used to identify finger or palm prints, but are an attack on Onstwedder's ability to apply the method and his conclusion as to a match between defendant's known print and the partial latent palm print. As noted above, the forum to address those criticisms is cross-examination at trial, not a Frye admissibility challenge. Cf. People v. Harris, 314 Ill.App.3d 409, 418, 247 Ill.Dec. 155, 731 N.E.2d 928 (2000) (finding that where defendant did “not challenge the admissibility of [Restriction Fragment Length Polymorphism DNA] testing in general, nor the calculation of probabilities based on complete matches,” defendant's challenge to the calculation of probabilities from incomplete matches went to weight, not admissibility). We conclude that the trial court properly took judicial notice of the general acceptance of the ACE–V methodology within the relevant scientific community.
¶ 86 Defendant next argues that he received ineffective assistance of counsel because his attorneys failed to seek a Frye hearing on the admissibility of DNA evidence. The right to counsel guaranteed by both the United States and Illinois Constitutions includes the right to effective assistance of counsel. U.S. Const., amends. VI, XIV; Ill. Const. 1970, art. I, § 8. To prevail on a claim of ineffective assistance of counsel, defendant must satisfy the two-part test set forth in Strickland v. Washington: first, “defendant must show that counsel's performance fell below an objective standard of reasonableness,” and second, defendant must show “there is a reasonable probability that, but for counsel's unprofessional errors, the result of the proceeding would have been different.” People v. Manning, 241 Ill.2d 319, 326, 350 Ill.Dec. 262, 948 N.E.2d 542 (2011) (citing Strickland v. Washington, 466 U.S. 668, 688, 104 S.Ct. 2052, 80 L.Ed.2d 674 (1984)).
¶ 87 Counsel's performance is measured by an objective standard of competence under prevailing professional norms. Id. In considering whether counsel's performance was deficient, “a court must indulge a strong presumption that counsel's conduct falls within the wide range of reasonable professional assistance; that is, the defendant must overcome the presumption that, under the circumstances, the challenged action ‘might be considered sound trial strategy.’” Strickland, 466 U.S. at 689 (quoting Michel v. Louisiana, 350 U.S. 91, 101, 76 S.Ct. 158, 100 L.Ed. 83 (1955)); People v. Evans, 186 Ill.2d 83, 93, 237 Ill.Dec. 118, 708 N.E.2d 1158 (1999). The decision whether to pursue an admissibility challenge under Frye is a matter of trial strategy. People v. Gordon, 378 Ill.App.3d 626, 639, 317 Ill.Dec. 395, 881 N.E.2d 563 (2007); see also People v. Bew, 228 Ill.2d 122, 127–28, 319 Ill.Dec. 878, 886 N.E.2d 1002 (2008) (noting that the decision whether to file a motion to suppress is a matter of trial strategy); cf. People v. Perry, 224 Ill.2d 312, 344, 309 Ill.Dec. 330, 864 N.E.2d 196 (2007) (noting that decisions regarding “‘what matters to object to and when to object’” are matters of trial strategy (quoting People v. Pecoraro, 175 Ill.2d 294, 327, 222 Ill.Dec. 341, 677 N.E.2d 875 (1997))). Our supreme court has directed that counsel's strategic decisions made after an investigation of the law and the facts are “virtually unchallengeable” (People v. Palmer, 162 Ill.2d 465, 476, 205 Ill.Dec. 506, 643 N.E.2d 797 (1994)) or “virtually unassailable” (People v. Ramsey, 239 Ill.2d 342, 433, 347 Ill.Dec. 588, 942 N.E.2d 1168 (2010)). In other words, matters of trial strategy will not support a claim of ineffective assistance of counsel unless counsel failed to conduct any meaningful adversarial testing. People v. Guest, 166 Ill.2d 381, 394, 211 Ill.Dec. 490, 655 N.E.2d 873 (1995); People v. Patterson, 217 Ill.2d 407, 441, 299 Ill.Dec. 157, 841 N.E.2d 889 (2005).
“[T]he defendant must prove that counsel made errors so serious, and that counsel's performance was so deficient, that counsel was not functioning as the ‘counsel’ guaranteed by the sixth amendment.” People v. Richardson, 189 Ill.2d 401, 411, 245 Ill.Dec. 109, 727 N.E.2d 362 (2000).
¶ 88 Under the second prong of Strickland, defendant must prove that there is a reasonable probability that the result of the proceeding would have been different without counsel's unprofessional errors. Strickland, 466 U.S. at 694. In order to establish prejudice resulting from failure to move for a Frye hearing, “‘a defendant must show a reasonable probability that: (1) the motion would have been granted, and (2) the outcome of the trial would have been different had the evidence been [excluded].’” Bew, 228 Ill.2d at 128–29, 319 Ill.Dec. 878, 886 N.E.2d 1002 (quoting Patterson, 217 Ill.2d at 438, 299 Ill.Dec. 157, 841 N.E.2d 889); People v. Givens, 237 Ill.2d 311, 331, 343 Ill.Dec. 146, 934 N.E.2d 470 (2010). The failure to file a motion does not establish incompetent representation when the motion would have been futile. Patterson, 217 Ill.2d at 438, 299 Ill.Dec. 157, 841 N.E.2d 889.
¶ 89 In this case, defendant argues that the amount of DNA used by the ISP laboratory to generate the DNA profile was so small that it fell below the ISP laboratory's standards for reliable test results and possibly qualified as low copy number (LCN) DNA. According to defendant, because using these amounts of DNA to obtain a profile is not generally accepted within the scientific community, his attorneys were ineffective by failing to request a Frye hearing. The State responds that defendant cannot overcome the strong presumption that his attorneys made the legitimate strategic choice to refrain from seeking a Frye hearing on LCN DNA testing, where the pretrial discovery in this case demonstrated that the sample size could not be considered “low copy number” and was actually within the laboratory's standards for reliable test results. The State further argues that defendant cannot show that there is a reasonable probability that a Frye hearing would have been granted or would have resulted in the exclusion of the DNA evidence, where the underlying methodology has been generally accepted in the scientific community and is neither new nor novel.
¶ 90 A. Background of Testing Method
¶ 91 The testing that generated the DNA profile in this case used short tandem repeat (STR) DNA markers, which have become popular for forensic DNA typing:
“DNA resembles a twisted ladder with rungs of the ladder made of chemicals called nucleotides. DNA has four different types of nucleotides (A: adenine, T: thymine, G: guanine, and C: cytosine) that form interlocking pairs. D. Kaye & G. Sensabaugh, Reference Guide on DNA Evidence, Reference Manual on Scientific Evidence 485, 491 (2d ed.2000). It is the order (sequence) of these building blocks that determines each person's genetic characteristics. The great majority of DNA is identical from person to person but forensic scientists commonly examine 13 specific regions, or loci, where certain nucleotide patterns are repeated again and again. These patterns are called ‘Short Tandem Repeats' (STRs). The number of repeated sequences determines the length of an STR. This length of repeated sequences, often called an allele, may vary between people and is what analysts measure and use for comparison. D. Kaye & G. Sensabaugh, Reference Guide on DNA Evidence, Reference Manual on Scientific Evidence 485, 494 (2d ed.2000).” People v. Williams, 238 Ill.2d 125, 130 n. 1, 345 Ill.Dec. 425, 939 N.E.2d 268 (2010), aff'd Williams v. Illinois, ––– U.S. ––––, 132 S.Ct. 2221, 183 L.Ed.2d 89 (2012).
In the simplest terms, an allele is a genetic variation at a particular locus, or location in the DNA. An individual inherits one allele from his or her mother and another from his or her father, and thus, except in rare circumstances, all individuals have two alleles in any given locus. When both inherited alleles from each parent are identical, a person is considered to be homozygous at that locus. When the inherited alleles are different, a person is heterozygous at that locus. See generally John M. Butler, Forensic DNA Typing: Biology, Technology, and Genetics of STR Markers, 17–23 (2d ed.2005).
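The inheritance rule described above can be illustrated with a short sketch (ours, not the court's or Butler's; the function name and allele repeat counts are hypothetical):

```python
def zygosity(maternal_allele: int, paternal_allele: int) -> str:
    """An individual is homozygous at a locus when the two inherited
    alleles are identical, and heterozygous when they differ."""
    return "homozygous" if maternal_allele == paternal_allele else "heterozygous"

# Same repeat count inherited from each parent:
print(zygosity(12, 12))  # homozygous
# Different repeat counts from each parent:
print(zygosity(12, 14))  # heterozygous
```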
¶ 92 In 1998, the ISP laboratory was using the AmpFl STR Profiler Plus PCR Amplification Kit (Profiler Plus kit) to test DNA. The “PCR” in the kit name designates “polymerase chain reaction,” a procedure that allows a small amount of DNA to be amplified into an amount large enough for typing. See United States v. Davis, 602 F.Supp.2d 658, 664 (D.Md.2009) (“PCR [testing] is used to amplify targeted loci of the sample of DNA by replicating the process by which DNA duplicates itself naturally. Thus, the lab is able to produce a substantial number of specific, targeted segments of DNA which can then be typed and compared.”). The DNA segments amplified for purposes of PCR typing are ones which exhibit genetic variation, which make them useful for distinguishing individuals. Williams, 238 Ill.2d at 131, 345 Ill.Dec. 425, 939 N.E.2d 268 United States v. Beasley, 102 F.3d 1440, 1445–46 (8th Cir.1996). DNA kits like the Profiler Plus combine multiple PCRs into one reaction and target several distinct loci. The number of repeats in a specific STR sequence present at a given locus, combined over a designated number of loci, creates a unique DNA profile of an individual. The Profiler Plus kit tests nine loci, plus a marker indicating gender.
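The amplification the opinion describes is exponential: in an idealized reaction, each thermal cycle doubles the targeted DNA segments. A minimal sketch (the function name, cycle count, and efficiency parameter are our illustrative assumptions, not figures from the record):

```python
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Idealized PCR yield: each thermal cycle multiplies the targeted
    segment by (1 + efficiency); perfect doubling has efficiency 1.0."""
    return initial_copies * (1.0 + efficiency) ** cycles

# Ten starting template molecules after 28 perfect cycles:
print(f"{pcr_copies(10, 28):,.0f}")  # 2,684,354,560 copies
```

This is why a trace amount of saliva can yield enough product for typing: the yield grows geometrically with cycle count, not linearly.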
¶ 93 B. Testing in this Case
¶ 94 In August 1998, the ISP laboratory received a box containing food items that were recovered on January 11, 1993, from a trash container at the crime scene. Debra Depcyznski, a forensic scientist in the Biology–DNA section of the ISP laboratory, unsealed the box and chose five chicken bones. She designated this group “29A” and labeled the bones A through E. She swabbed each bone with a separate piece of filter paper and tested each swabbing for the presence of amylase, an enzyme indicative of saliva. Only bones C and D indicated amylase, and Depcyznski swabbed bones C and D with four more pieces of white filter paper each. She identified the swabs “29A1” (bone C) and “29A2” (bone D), and submitted the swabs to analyst Cecilia Doyle.
¶ 95 Doyle placed three of the 29A1 swabs and three of the 29A2 swabs in a chemical bath to extract any DNA. Doyle then resolubilized the extract from 29A1 in 50 microliters of buffer liquid and did the same for the 29A2 extract. At this stage, known as the quantitation stage, Doyle conducted two tests: a yield gel test and a slot blot test. As to the yield gel test, Doyle explained that it would detect if DNA is present and was also useful for estimating whether the DNA may be fragmented due to degradation. The yield gel test only detected degraded DNA.
¶ 96 Doyle next employed a slot blot assay called the QuantiBlot Human DNA Quantitation Kit, which quantifies DNA by visual comparison of sample band intensities with the band intensities of standards produced from known amounts of DNA. See Davis, 602 F.Supp.2d at 669 (noting that the QuantiBlot method “requires the analyst to estimate the amount of DNA in each sample by comparison with a known reference standard”). The darker the bands, the more DNA is present. The DNA at this stage is called the “template DNA” or “[s]tarting DNA template” (Butler, supra, at 64–68), measured in picograms (pg) or nanograms (ng); 100 pg is 0.1 ng.
¶ 97 Doyle conducted the slot blot test using various amounts of liquid DNA samples derived from 29A1 and 29A2. When she loaded 1 microliter and 5 microliters for both 29A1 and 29A2, the amount of DNA detected was less than 0.15 ng. When Doyle ran a third assay, loading 10 microliters, she detected a difference for the 29A2 sample. While the 29A1 sample still recorded less than 0.15 ng of DNA, the reading for the 29A2 sample was that it contained less, but not much less, than 0.15 ng of DNA. Doyle noted this by writing double left-pointing arrows (“≪”) in front of “0.15” for 29A1, but a single left-pointing arrow (“<”) in front of “0.15” for 29A2. Doyle explained the difference in these designations: “In comparing the two against each other, as well as against the .15 nanogram standard, one was of greater intensity than the other and both were less intense than the .15 nanogram. It was an indication to me that there was a difference in intensity between the two.”
¶ 98 Doyle again consulted with Dr. Fish. Doyle explained that laboratory policy required that she consult a supervisor when proceeding with amplification where the amount of starting template DNA was not in the laboratory's minimum quantity standards for optimal results, “where you would most likely always get a full profile.” Based on internal validation studies, the ISP laboratory recommended between 0.6 and 2 ng per microliter of input DNA for amplification, with an optimal range of between 1 and 2.5 ng per microliter for forensic work. The manufacturer of the Profiler Plus kit likewise specified input between 1 and 2.5 ng of DNA for optimal results.
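The consultation trigger described above amounts to a range check on the quantitation estimate. A minimal sketch (the numeric ranges are those recited in this paragraph; the constant and function names are our own):

```python
ISP_RECOMMENDED_NG = (0.6, 2.0)  # ISP internal validation recommendation
ISP_OPTIMAL_NG = (1.0, 2.5)      # optimal range for forensic work; also the
                                 # Profiler Plus manufacturer's specified input

def within(ng: float, lo: float, hi: float) -> bool:
    """True if the estimated starting template falls inside the range."""
    return lo <= ng <= hi

# A slot blot estimate below 0.15 ng misses both ranges, so laboratory
# policy would require consulting a supervisor before amplification:
needs_consult = not within(0.15, *ISP_RECOMMENDED_NG)
print(needs_consult)  # True
```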
¶ 99 The next step was to set up the PCR amplification. Doyle concentrated the remaining 29 microliters of the 29A1 solution down to 20 microliters, and she did the same for the remaining 29 microliters of the 29A2 solution. According to her laboratory worksheet, the total amount of DNA in each 20–microliter sample was “≪ 0.5 ng.” Doyle testified that “based on the quantification, it was the—the actual estimate of the DNA was a little bit less than the lowest standard run on the slot blot, so I was unable to determine a good estimate of the quantity of DNA that I had present, so I took the remaining sample and concentrated it down to 20 microliters, which was the maximum volume that could be taken into the amplification step, and I amplified 20 microliters for each of those samples.” She further testified that in the amplification step, where the Profiler Plus kit was used, “specific STR areas of the DNA are isolated—or are targeted, and many copies of them are made.”
¶ 100 Doyle then loaded the amplified products from 29A1 and 29A2 into an ABI Prism 310 Genetic Analyzer, which was known as “the Joliet Instrument.” Doyle testified that, in the most basic terms, “[t]he genetic analyzer is an instrument that allows us to look at the amplified DNA.” Doyle then generated an “electropherogram,” a computerized “graph that displays a series of different-colored peaks of different heights.” Roberts v. United States, 916 A.2d 922, 927 (D.C.2007). The electropherogram lists an “allele call,” the number of repeats an allele has, and also lists the amount of fluorescence detected for that particular allele measured in relative fluorescent units (RFUs). “A relative fluorescent unit (RFU) is a measure of the amount of amplified [DNA] present in a test sample.” Commonwealth v. Greineder, 458 Mass. 207, 936 N.E.2d 372, 382 n. 3 (Mass.2010). The amount of fluorescence is also presented in graphic form. See State v. Bander, 150 Wash.App. 690, 208 P.3d 1242, 1246 (Wash.Ct.App.2009) (“When an allele is detected at a particular locus, it is represented as a peak plotted at a point along the electropherogram's X-axis that corresponds to a particular locus. The intensity at which alleles fluoresce is reflected in the peak heights along the Y-axis.”).
¶ 101 From the electropherogram, Doyle could see that each sample contained DNA from multiple contributors because there were more than two peaks at certain loci. As Doyle explained, she would generally expect one or two alleles at each locus: “If you—at that area of the DNA, you inherited a different allele from your mom than you did from your father, then you'd have * * * two peaks, or two alleles. If you inherited the same DNA alleles from both parents, then you would only exhibit one peak at that area of DNA.” In this sample, however, Doyle explained that “there was a profile that had fairly high peaks, so it was a major contributor, one individual had contributed more DNA. Then, there were also very minor peaks, very small, close to the baseline, that was contributed by a second person.” Kenneth Pfoser also testified that he could easily distinguish the major contributor's profile based on RFUs and peak heights. Pfoser testified that the profile for the major contributor on bone C was identical to the major contributor on bone D, and he testified that these profiles were identical to defendant's profile.
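The interpretive logic Doyle and Pfoser described (more than two peaks at a locus suggesting a mixture, and relative peak heights separating the major contributor from the minor one) can be sketched roughly as follows. The RFU threshold and major-to-minor ratio are purely illustrative assumptions of ours, not the ISP laboratory's actual cutoffs:

```python
def interpret_locus(peaks, threshold_rfu=50, major_ratio=3.0):
    """peaks: list of (allele_call, rfu) pairs at one locus.
    Returns (is_mixture, major_alleles, minor_alleles)."""
    # Discard peaks too faint to distinguish from baseline noise.
    detected = sorted((p for p in peaks if p[1] >= threshold_rfu),
                      key=lambda p: p[1], reverse=True)
    # More than two alleles at a single locus implies multiple contributors.
    is_mixture = len(detected) > 2
    top = detected[0][1] if detected else 0
    major = [a for a, h in detected if h * major_ratio >= top]
    minor = [a for a, h in detected if h * major_ratio < top]
    return is_mixture, major, minor

# Two tall peaks plus two near-baseline peaks, as Doyle described:
print(interpret_locus([(12, 1800), (14, 1650), (9, 120), (11, 95)]))
# (True, [12, 14], [9, 11])
```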
¶ 102 C. The Amount of DNA in the Samples
¶ 103 Defendant now asserts that where the slot blot test showed that “≪ 0.5 nanograms” of DNA were present in the sample, counsel was deficient for failing to request a Frye hearing. According to defendant, testing this low amount of DNA may produce unreliable results, and such testing has not gained widespread scientific acceptance.
¶ 104 Defendant specifically argues that the amount of DNA tested here potentially qualifies as low copy number (LCN) DNA. The parties generally agree that testing less than 0.2 ng of DNA qualifies as LCN, though they acknowledge divergent opinions on the matter. See United States v. Williams, No. CR 05–920–RSWL, 2009 WL 1704986, at *3 (C.D.Cal.2009) (“[T]he Court finds that LCN testing can be defined as testing conducted on an amount of input DNA that is less than .1 ng or .2 ng. Both parties agree with this definition and the weight of scientific authority supports it.”); Davis, 602 F.Supp.2d at 669, 672 (finding that there would have been LCN testing if less than 1 nanogram of DNA was tested, but noting “dueling definitions” ranging from less than half a nanogram to less than 0.1 nanograms); Butler, supra, at 167 (noting that LCN DNA testing “typically refers to examination of less than 100 pg of input DNA”); see also Charlotte Word, What is LCN? Definitions and Challenges (2009), http://www.promega.com/resources/articles/profiles-indna/2010/what-is-lcn-definitions-and-challenges/ (last visited Apr. 22, 2013) (“Samples containing < 100–200 pg of total DNA available for amplification fall into the range generally considered to be LCN DNA by most practitioners. * * * More recently, LCN has been used to refer to any DNA sample or DNA profile where stochastic effects (e.g., allele and locus drop-out, heterozygous peak height imbalance and elevated stutter peaks) are likely present and/or where the alleles detected are below a laboratory-defined stochastic threshold.”). In defendant's view, 0.2 ng “would likely qualify as much less than .5 ng,” and thus the testing here could be classified as LCN testing.
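The divergent definitions surveyed in this paragraph can be laid side by side in a short sketch (the cutoffs reflect the sources cited above, converted to nanograms; the dictionary and function names are our own illustration):

```python
# Competing LCN cutoffs from the authorities discussed above, in ng.
LCN_CUTOFFS_NG = {
    "Williams (C.D. Cal.)": 0.2,  # input below .1 or .2 ng
    "Butler": 0.1,                # under 100 pg of input DNA
    "Davis (broadest)": 1.0,      # less than 1 nanogram
}

def lcn_verdicts(input_ng: float) -> dict:
    """Whether a given input quantity counts as LCN under each definition."""
    return {source: input_ng < cutoff for source, cutoff in LCN_CUTOFFS_NG.items()}

# A hypothetical input somewhere below 0.5 ng is LCN under the broadest
# definition but not under the narrower ones:
print(lcn_verdicts(0.4))
```

The divergence is the point: whether the testing here "was" LCN depends entirely on which cutoff one adopts.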
¶ 105 Conducting PCR amplification on LCN DNA may produce “stochastic effects,” impacting the reliability and reproducibility of the results and further complicating their interpretation. See Davis, 602 F.Supp.2d at 668 (reciting expert testimony that “[f]our problematic effects are often seen with testing performed below the stochastic threshold: exaggerated stutter, peak height imbalance, allelic drop-in and allelic drop-out”); Butler, supra, at 168 (“[A]pplication of LCN results should be approached with caution due to the possibilities of allele dropout, allele drop-in, and increased risks of collection-based and laboratory-based contamination.”). Allele “drop-out” occurs when there is a failure to amplify an allele which is actually present, but does not appear on the electropherogram; alternatively, contamination may cause preferential amplification of rogue alleles, which appear to become part of the DNA profile (allelic drop-in). Butler, supra, at 169.
¶ 106 Moreover, regardless of whether the amount of DNA here qualifies as LCN, defendant argues that the amount of DNA detected by the slot blot test was well below the laboratory's internally validated minimum standards for input DNA (i.e., 1 to 2.5 ng per microliter for forensic work or 0.6 to 2 ng per microliter generally). Defendant argues, and the State does not dispute, that there is an optimal range of DNA quantity for the Profiler Plus and similar kits to avoid stochastic effects such as allele drop-out and drop-in. The parties refer to this level as the “stochastic threshold.” Davis, 602 F.Supp.2d at 668; Butler, supra, at 65 (“The best results * * * are obtained if the DNA template is added in an amount that corresponds to the concentration range designed for the kit.”); id. at 50 (noting that “Profiler Plus * * * specif[ies] the addition of between 1–2.5 ng of template DNA for optimal results.”).
¶ 107 D. Counsel's Performance
¶ 108 There is no question that defense counsel was aware that Doyle went forward with the amplification even though the amount of DNA, as measured at the quantitation stage, did not meet the laboratory's optimal quantity standards for testing, i.e., between 1 and 2.5 ng per microliter. Defendant would have the analysis end there and conclude that, in light of the possibility of stochastic effects, the failure to mount a Frye challenge was a failure that renders counsel's performance deficient by an objective standard. As defendant argues, “testing of such a low amount of DNA, below the DNA kit manufacturer's and the ISP's internally validated standards for reliable results, was ripe for a Frye hearing.” According to defendant, there is no “basis in the record” to suggest “that counsel's decision was the product of sound trial strategy.” People v. Little, 322 Ill.App.3d 607, 613, 255 Ill.Dec. 828, 750 N.E.2d 745 (2001).
¶ 109 But our review so far has not given a complete account of the record, the scientific literature, or counsel's performance. As the State points out on appeal, and as Doyle explained during her deposition, there are reasons to go forward with the amplification stage even when the template DNA appears small by the QuantiBlot test. One issue is simply sensitivity of the QuantiBlot: “Even when no results are seen * * * some forensic scientists still go forward with DNA testing and obtain a successful STR profile. Thus, the slot blot assay is not as sensitive as would be preferred.” Butler, supra, at 52; Davis, 602 F.Supp.2d at 669 (“Both parties agree that [QuantiBlot] is a visual, inexact comparison that is ‘not as sensitive as would be preferred.’ ”). Another issue is the possibility of underestimation, which was a particular concern in this case, when the DNA was fragmented due to degradation. This was not lost on defense counsel, who explored the issue at Doyle's March 10, 2005 deposition:
“Q. The amount of DNA that you were seeing there, it could have been a small amount of DNA because there was only a small amount deposited, or it could have been a small amount of DNA because there was degradation, is that right?
A. The slot blot could have only been detecting a small amount of DNA because of the degradation. The site that the probe binds to from the quantiblot kit, if that area of DNA was degraded and there wasn't enough of those fragments available in the sample, then you get an underestimation of the quantity.
Q. But at this point you didn't know what the small amount of DNA was due to, whether it was—whether there wasn't much DNA there to begin or whether it had been degraded?
A. I knew it was degraded from the yield gel, but you're correct, I didn't know the exact reason why the slot blot results were as they were.”
Doyle had a similar exchange with codefendant's attorney:
“Q. Why do both of them? Why not just do the slot blot? What does the yield gel tell you that the slot blot doesn't?
A. Whether the sample is degraded.
Q. And does that change the way you handle it?
Q. And how would it change the way you handle it? When you say degraded DNA, what do you do differently?
A. You take that into account when you're looking at slot blot results. In this case there was [sic ] apparently samples there, but not—it was less than the 0.15 nanograms. That could be because it was a degraded sample, that the quantity was going to be underestimated because of degradation.
Q. Degradation causes DNA to be underestimated?
Q. Are there validation studies about that, how degradation—
A. It's done throughout the community. I am sure there are validation studies, but I couldn't quote you them.”
Doyle similarly testified at trial that “it could be that the areas of DNA being targeted for the slot blot itself was degraded, and so I was not getting—I was getting an underestimation of the amount of DNA present.”
¶ 110 Moreover, the State argues that by going forward with amplification despite the slot blot result, there would be another metric that could establish that the starting template DNA was optimal and above the stochastic threshold: fluorescence intensity. As noted above, Doyle generated an electropherogram, which revealed the amount of fluorescence detected for that particular allele and the amount of amplified DNA present in a test sample. The fluorescence intensity enabled both Doyle and Pfoser to distinguish the major donor's DNA profile from that of the minor contributor. See Butler, supra, at 159 (“[P]eak areas and heights * * * can in most cases be related back to the amount of DNA template components included in the mixed sample.”). That fluorescence also could be connected with the amount of starting template DNA, as testing can determine what degree of fluorescence can be obtained from varying known amounts of starting template DNA.
¶ 111 Here, the ISP laboratory conducted its own study that determined what degree of fluorescence is obtained from varying known amounts of starting template DNA when using Profiler Plus and the Joliet Instrument. On November 28, 2005, defense counsel subpoenaed and obtained the Joliet Instrument's validation studies, which included the Joliet Instrument's sensitivity study. The laboratory conducted three runs on the prepared samples, which showed that 2.5 ng of input DNA yields peak heights between 500 and 4,000 RFUs, and that input DNA of 1.25 ng produces peak heights between 270 and 1,800 RFUs. The State argues that where the tests on bone C generated peak heights between 266 and 1,477 RFUs and bone D yielded peaks between 572 and 3,133 RFUs for the so-called major donor, the RFU readings correspond to approximately 1.25 ng of DNA for bone C and approximately 2.5 ng of DNA for bone D.
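The State's inference above amounts to matching each sample's observed peak-height range against the calibration ranges from the sensitivity study. The record does not describe how the laboratory would actually perform that comparison; the following Python sketch is purely illustrative, using a simple range-overlap heuristic (an assumption, not the ISP's method) and the RFU figures the opinion attributes to the Joliet Instrument study:

```python
# Illustrative only: the calibration ranges below are the peak-height ranges the
# opinion attributes to the Joliet Instrument sensitivity study; the overlap
# heuristic and function name are assumptions, not the laboratory's procedure.
CALIBRATION_RFU = {
    1.25: (270, 1800),   # ng of input DNA -> (min RFU, max RFU) seen in the study
    2.50: (500, 4000),
}

def estimate_template_ng(peak_min, peak_max, calibration=CALIBRATION_RFU):
    """Return the calibrated input amount whose RFU range overlaps the
    observed peak-height range the most."""
    def overlap(rng):
        lo, hi = rng
        return max(0, min(peak_max, hi) - max(peak_min, lo))
    return max(calibration, key=lambda ng: overlap(calibration[ng]))

# Peak-height ranges reported in the opinion for the two bone samples:
print(estimate_template_ng(266, 1477))   # bone C -> 1.25
print(estimate_template_ng(572, 3133))   # bone D -> 2.5
```

On these numbers, bone C's range overlaps the 1.25 ng calibration most and bone D's the 2.5 ng calibration, tracking the State's argument that both samples sat near or within the kit's optimal template range.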
¶ 112 Thus, there are at least two related reasons Doyle may have gone forward with amplification after the slot blot gave a reading of less than 0.5 nanograms: (1) the possibility that the QuantiBlot assay was underestimating the amount of DNA in the degraded sample and (2) the possibility of obtaining a result above the stochastic threshold, as measured by RFUs. In other words, Doyle (and Dr. Fish) decided to proceed with amplification instead of adopting a “stop testing” approach:
“The ‘stop testing’ approach begs the question of how do you know that you are too low to obtain reliable results (i.e., information that accurately reflects the DNA sample). There are two primary points in the DNA-testing process where DNA reliability may be assessed: 1) at the DNA quantitation stage prior to performing PCR amplification of the short tandem repeat (STR) markers of interest, or 2) during examination of peak heights—and peak height ratios in heterozygous loci—in the STR profile obtained. An empirically determined threshold (usually termed a ‘stochastic threshold’) may be used at either the DNA quantitation or data interpretation stage to assess samples in the potential ‘danger zone’ of unreliable results. For example, if the total amount of measured DNA is below 150 pg, a laboratory may decide not to proceed with PCR amplification, assuming that allelic drop-out due to stochastic effects is a very real possibility. Alternatively, a laboratory may proceed with testing a low-level DNA sample, then evaluate the peak height signals and peak height ratios at heterozygous loci.” J.M. Butler & C.R. Hill, Scientific Issues with Analysis of Low Amounts of DNA (2010), http://www.promega.com/resources/articles/profiles-in-dna/2010/scientific-issues-with-analysis-of-low-amounts-of-dna/ (last visited Apr. 22, 2013).
See also Bruce Budowle et al., Validity of Low Copy Number Typing and Applications to Forensic Science, 50 Croat. Med. J. 207, 209 (2009) (“Typically, minimum amounts of DNA template are recommended * * * so that stochastic effects can be reduced to manageable levels. However, since variation in the quantitation of template DNA and pipetting volume inaccuracies can impact the amount of template DNA placed in a PCR, a stochastic interpretation threshold is used instead for STR typing. * * * A minimum peak height (or area), which is established by in-house laboratory validation studies, serves as a stochastic control. Those peaks below this threshold are not interpreted or are interpreted with extreme caution for limited purposes.”); David H. Kaye & George F. Sensabaugh, Jr., Reference Guide on DNA Evidence, at 505 (“Whether a particular sample contains enough human DNA to allow typing cannot always be predicted in advance. The best strategy is to try; if a result is obtained, and if the controls (samples of known DNA and blank samples) have behaved properly, then the sample had enough DNA.”).
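The literature quoted above describes two checkpoints: a quantitation-stage cutoff before amplification, and a post-amplification review of peak heights and heterozygote peak-height ratios. As a minimal sketch of that decision procedure, assuming illustrative threshold values (the 150 pg figure comes from the Butler & Hill quotation; the 50% peak-height-ratio floor is a hypothetical stand-in for a lab's validated value):

```python
# Sketch of the two reliability checkpoints Butler & Hill describe.
# QUANT_THRESHOLD_PG is taken from their 150 pg example; PHR_FLOOR is an
# assumed, illustrative figure, not any laboratory's validated standard.
QUANT_THRESHOLD_PG = 150   # below this, a "stop testing" lab would not amplify
PHR_FLOOR = 0.50           # minimum acceptable heterozygote peak-height ratio

def quantitation_gate(measured_pg):
    """Checkpoint 1: decide at the quantitation stage whether to amplify."""
    return measured_pg >= QUANT_THRESHOLD_PG

def interpretation_gate(heterozygous_peaks):
    """Checkpoint 2: after amplification, flag loci whose peak-height ratio
    (smaller peak / larger peak) falls below the floor, a stochastic warning."""
    flagged = []
    for locus, (a, b) in heterozygous_peaks.items():
        if min(a, b) / max(a, b) < PHR_FLOOR:
            flagged.append(locus)
    return flagged

# A lab taking the alternative route amplifies a low-level sample, then reviews:
peaks = {"D3S1358": (900, 820), "vWA": (1100, 400)}
print(quantitation_gate(120))      # a strict "stop testing" policy would halt here
print(interpretation_gate(peaks))  # the imbalanced locus is flagged for caution
```

Under this sketch, a laboratory like the ISP's that proceeds past a low quantitation reading still has the second gate available, which mirrors the State's position that the RFU data could confirm reliability after the fact.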
¶ 113 We emphasize that the record shows that defense counsel recognized a connection between DNA quantity, as measured by the slot blot, and the possibility of stochastic effects, when he cross-examined Doyle at trial:
“Q. Now, the other thing besides degradation that you do, the yield gel and the slot blot for is quanitation [sic ]. You're trying it [sic ] figure out how much you have there right?
A. Try to get an estimate of the amount of DNA present in the sample.
Q. And, in this case, both samples, the 29A1 and the 29A2, came out to be less than the lowest comparison thing that you had on the gel, right? Or on the slot blot, is that right?
A. Yes. On the slot blot, there are a series of known DNA quantities, and I compare the quantities—the samples—the evidence samples to these known quantities in an attempt to estimate the amount of DNA present.
Q. I mean, we're talking about essentially microscopic stuff here are we not?
Q. You couldn't see it with the naked eye could you?
Q. And that's—it's so small that at the time you were doing the testing in this case, the Illinois State Police had not even been validated for amounts that small right?
A. Well, I reviewed the validation procedures, and in the validation study for STRs, amounts smaller than .15 nanograms had been looked at to determine what the lowest cutoff level was where you could obtain a full profile.
Q. Okay. And—so, in that sense it was validated. Is that what you mean?
A. It was looked at as part of that validation study. A determination was made of .3—* * * well, 1 nanogram to 1.5 nanogram was the ideal target amount of DNA, and that's, I believe, through all of the samples down to .3 nanograms of DNA, a full profile was obtained.
Below that, loci started to drop out, or an allele started to not be detected.” Counsel also connected slot blot sample size with results and further sought to establish that such a small amount of DNA could have come from an outside source:
“Q. And the amounts of DNA make a difference, ultimately, or can make a difference in what sort of results you get in the testing, right?
A. They can. It depends on why the sample is degraded. In this case, it could be that the areas of DNA being targeted for the slot blot itself was degraded, and so I was not getting—I was getting an underestimation of the amount of DNA present.
Q. Now, because of the small amounts of DNA that are necessary in order to get some kind of result, you can get the DNA—you could get a DNA off of simply—well, for example if the chicken had been laid out an [sic ] a table somewhere, and there was DNA on that table, the chicken could pick up the DNA from the table could it not?”
Doyle eventually answered: “I suppose if the conditions were correct. If there was a lot of DNA if it was moist. There has to be some transfer. If you're putting something dry on something dry, the chance of transfer is very little.”
¶ 114 It may be that, faced with a possibly underestimated sample, the scientific literature, and the sensitivity study, defense counsel concluded that a request for a Frye hearing would not likely be granted or, if granted, the hearing would not result in the exclusion of evidence because there was a considerable chance the testing here was acceptable practice. Perhaps defendant's attorneys thought that by initiating a Frye challenge, they would have provoked an argument from the State that the slot blot reading was not a problem because the peak heights suggested that the sample was above the stochastic threshold. Cf. Commonwealth v. Greineder, 458 Mass. 207, 936 N.E.2d 372, 403 (Mass.2010) (holding that defense counsel may have wanted to forgo a Frye challenge so as not to tip off the prosecution about a particular line of attack). That approach is plausible, as counsel did not abandon an attack on Doyle's choice to go ahead with testing when the slot blot revealed a small amount of DNA, and counsel used that point at trial to question whether such a small sample could have been picked up from a contaminated source.
¶ 115 In sum, in the record before us, we have: (1) an explanation by Doyle regarding why the slot blot, which may not be a particularly sensitive method of measurement, might have underestimated the amount of DNA at the quantitation stage; (2) scientific literature suggesting that RFU peak heights can be used as a reference point to show a lack of stochastic effects; (3) a validation study for the Joliet Instrument listing RFU peak heights, which could serve as a stochastic interpretation threshold; and (4) cross-examination from defense counsel attacking the decision to go forward after the QuantiBlot testing showed a low sample size, exposing possible accuracy issues. Although we cannot say with any certainty what materials counsel reviewed or what strategic calls counsel made, certainty is not required: we must entertain a strong presumption that counsel's actions might have been the product of trial strategy. Strickland, 466 U.S. at 689; Evans, 186 Ill.2d at 93, 237 Ill.Dec. 118, 708 N.E.2d 1158.
¶ 116 Defendant responds that peak heights are “not necessarily” a reliable assessment of the amount of DNA present in this sample or otherwise do not show that the sample is above the stochastic threshold. Defendant cites a 2010 article that acknowledges “target DNA amount” and “peak height of alleles” may be used as criteria to determine LCN testing, but advocates “defining LCN typing based on the values where peak height imbalance becomes exaggerated” and argues that “[h]eterozygote peak height imbalance is a better criterion for defining the conditions where stochastic effects occur.” Bruce Budowle, Low Copy Number Typing Still Lacks Robustness and Reliability (2010), http://www.promega.com/resources/articles/profiles-in-dna/low-copy-number-typing-still-lacks-robustness-and-reliability/ (last visited Apr. 22, 2013). According to defendant, whether RFUs can be connected to the amount of DNA in a sample is an “issue appropriately explored in a Frye hearing.” But defendant cannot establish ineffective assistance of counsel—or overcome the presumption that counsel's actions were the product of trial strategy—by showing that a motion seeking a Frye hearing could have been made to explore this issue. Weighing differing views among the scientific literature would certainly factor into the assessment of whether to request a Frye hearing; if the best that defendant can conclude from the scientific literature is that “peak heights are not necessarily reliable assessment of the amount of DNA,” defendant surely cannot overcome the presumption that counsel made a sound strategic decision that a Frye hearing would have been unlikely to succeed. This is precisely the type of nuanced strategic decision that trial counsel must confront in cases of DNA and other scientific evidence. With the record before us, defendant has simply not overcome the presumption that counsel's decision to forgo a Frye hearing was the product of sound trial strategy.
¶ 117 We note that there is no reason to believe that defense counsel was ill-equipped to assess the testimony of forensic scientists or the scientific literature. The defense team was comprised of a private attorney, three attorneys from the State Appellate Defender Death Penalty Trial Assistance Unit, and a contract attorney, at least some of whom had considerable experience in DNA evidence. The defense team relied upon a quality assurance auditor of forensic laboratories and three DNA experts who held advanced degrees in the sciences. One of those experts, Dr. Reich, testified on behalf of the defense, challenging the reliability of a nine-loci sample and highlighting the possibility of contamination.
¶ 118 Even though the defense team had multiple experts at their disposal and otherwise aggressively challenged the DNA (and fingerprint) evidence, defendant now claims that counsel simply “missed” the issue completely. To endorse that view, however, we must conclude: (1) while defense counsel thoroughly questioned Doyle about her methods at the quantitation stage and obtained her admission that testing a quantity below standard levels may affect the profile obtained, the defense team simply failed to consider whether to further challenge the accuracy of the DNA results based on going forward with testing what appeared to be a small sample; (2) while counsel subpoenaed and obtained the Joliet Instrument's validation studies, which listed RFU peak height information, counsel did not review the data, failed to consider any relationship between peak height and the possibility of stochastic effects, or otherwise did not know of a possible connection between peak height and stochastic effects from their experts or from the literature; (3) counsel failed to consult with their team of DNA experts to explore possible problems related to sample quantity and thus were not acting on expert advice in deciding against moving for a Frye hearing; and (4) even with all this information and expert advice at hand, counsel completely failed to consider a possible challenge based on the quantitation results, never considered whether RFU peak heights showed that the sample was outside the stochastic threshold, and thus did not make a strategic call against requesting a Frye hearing. On this record, we find that view of events wholly speculative, and we conclude that defendant has failed to show that his counsel was so deficient that he was “not functioning as counsel” under the first prong of the Strickland test. Richardson, 189 Ill.2d at 411, 245 Ill.Dec. 109, 727 N.E.2d 362.
¶ 119 In light of defendant's inability to meet Strickland's first prong, we need not further address whether defendant could also show that a Frye hearing would have enjoyed a reasonable chance of success, nor is it necessary to consider whether defendant could show that without the DNA evidence presented here, the result of the trial would have been different. “Because a defendant's failure to satisfy either part of the Strickland test will defeat a claim of ineffective assistance, a court is not required to address both components of the inquiry if the defendant makes an insufficient showing on one. [Citation.]” (Internal quotation marks omitted.) People v. Edwards, 195 Ill.2d 142, 163, 253 Ill.Dec. 678, 745 N.E.2d 1212 (2001). Here, where defendant cannot show that his counsel's performance fell below professional standards, his ineffective assistance claim fails.
¶ 120 III. Adoption of Daubert
¶ 121 Defendant contends that we should abandon the Frye test and instead adopt the “more stringent” standard for the admission of scientific evidence established in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993). That is not an argument for this court. Our supreme court has repeatedly held that Illinois courts follow the Frye test rather than Daubert. See McKown I, 226 Ill.2d at 248, 314 Ill.Dec. 742, 875 N.E.2d 1029; Simons, 213 Ill.2d at 529, 290 Ill.Dec. 610, 821 N.E.2d 1184; Donaldson, 199 Ill.2d at 76–77, 262 Ill.Dec. 854, 767 N.E.2d 314 (“Illinois law is unequivocal: the exclusive test for the admission of expert testimony is governed by the standard first expressed in Frye v. United States, 293 F. 1013 (D.C.Cir.1923).”), abrogated on other grounds by Simons, 213 Ill.2d at 530, 290 Ill.Dec. 610, 821 N.E.2d 1184; see also Ill. R. Evid. 702, cmt. (eff.Jan.1, 2011) (“[Illinois Rule of Evidence] 702 confirms that Illinois is a Frye state.”). This court lacks the authority to overrule these decisions, which are binding on all lower courts. People v. Artis, 232 Ill.2d 156, 164, 327 Ill.Dec. 556, 902 N.E.2d 677 (2009). Unless and until our supreme court adopts a new test for the admission of scientific evidence, we must apply the Frye standard. People v. Safford, 392 Ill.App.3d 212, 230, 331 Ill.Dec. 70, 910 N.E.2d 143 (2009); Donnellan v. First Student, Inc., 383 Ill.App.3d 1040, 1057, 322 Ill.Dec. 448, 891 N.E.2d 463 (2008).
¶ 122 IV. Rebuttal Closing Argument
¶ 123 Defendant next argues that the prosecutor made inflammatory, improper remarks during the State's rebuttal closing argument and the cumulative effect of those remarks was to deny him a fair trial. Before turning to the comments at issue, we note that all but one of his claims of improper argument have been forfeited, as objections either were not made at trial or were not preserved in a posttrial motion. Defendant seeks review of these remarks under the plain-error doctrine, which requires defendant to show either: “ ‘(1) a clear or obvious error occurred and the evidence is so closely balanced that the error alone threatened to tip the scales of justice against the defendant, regardless of the seriousness of the error, or (2) a clear or obvious error occurred and that error is so serious that it affected the fairness of the defendant's trial and challenged the integrity of the judicial process, regardless of the closeness of the evidence.’ ” People v. Thompson, 238 Ill.2d 598, 613, 345 Ill.Dec. 560, 939 N.E.2d 403 (2010) (quoting People v. Piatkowski, 225 Ill.2d 551, 565, 312 Ill.Dec. 338, 870 N.E.2d 403 (2007)). As explained below, however, we conclude that nearly all of these comments were within the bounds of permissible argument, and even if we assume that defendant had preserved his objections to the few brief comments that were improper, these isolated remarks did not deny defendant a fair trial.
¶ 124 A. Whether the Remarks Were Improper
¶ 125 Regardless of whether defendant has preserved his objections to the prosecutor's comments, our initial inquiry is to determine whether there was error, i.e., whether any of the complained-of remarks were improper. Prosecutors are afforded wide latitude in closing argument. People v. Caffey, 205 Ill.2d 52, 131, 275 Ill.Dec. 390, 792 N.E.2d 1163 (2001). While a prosecutor may comment on the evidence and all inferences reasonably yielded by the evidence (People v. Pasch, 152 Ill.2d 133, 184, 178 Ill.Dec. 38, 604 N.E.2d 294 (1992)), a prosecutor may not use closing argument simply to “ ‘inflame the passions or develop the prejudices of the jury without throwing any light upon the issues.’ ” People v. Wheeler, 226 Ill.2d 92, 128–29, 313 Ill.Dec. 1, 871 N.E.2d 728 (2007) (quoting People v. Halteman, 10 Ill.2d 74, 84, 139 N.E.2d 286 (1956)). Although “[i]t is not clear whether the appropriate standard of review for this issue is de novo or abuse of discretion,” we need not resolve the issue, because our holding in this case would be the same under either standard. People v. Land, 2011 IL App (1st) 101048, ¶¶ 148–151, 353 Ill.Dec. 71, 955 N.E.2d 538 (noting conflict between People v. Wheeler, 226 Ill.2d 92, 121, 313 Ill.Dec. 1, 871 N.E.2d 728 (2007), and People v. Blue, 189 Ill.2d 99, 128, 244 Ill.Dec. 32, 724 N.E.2d 920 (2000)).
¶ 126 Defendant first argues that the prosecutor implied that there was other evidence, hidden from the jury, that confirmed defendant's guilt. At the start of his rebuttal argument, the prosecutor thanked the jurors for their service and then continued:
“I'd like to say that what I say is not important. I'm only going to speak for an hour and that's enough, you've heard enough from the lawyers. I'm sorry you cannot ask questions, I really am. I'd like to answer the questions you have. But our system doesn't allow that. It would be good to know what's on your minds, but I can't.”
By these comments, we understand that the prosecutor attempted to explain to the jurors that he could not take what he viewed as the more efficient course—to answer their direct questions about the case—and would instead have to guess at what the jurors consider significant about defendant's closing argument as he spoke to them for the next hour. In defendant's view, the prosecutor “implied that there was evidence not revealed to the jury,” but he could not “give the jurors the ‘inside scoop’ on the evidence” because he could not answer the jurors' direct questions. But the prosecutor made no reference to evidence outside the record or the fact that such evidence existed, as in the cases cited by defendant. See People v. Ray, 126 Ill.App.3d 656, 661, 82 Ill.Dec. 5, 467 N.E.2d 1078 (1984) (remanding for new trial where prosecutor told the jury he wished he could give the jurors his file and then referred to information not in evidence); People v. Barnes, 107 Ill.App.3d 262, 268, 63 Ill.Dec. 199, 437 N.E.2d 848 (1982) (remanding for new trial where prosecutor commented on lack of physical evidence, claimed that “ ‘[defense] [c]ounsel knows all the evidence in the apartment,’ and then stated that ‘[w]e would wish we could show it to you’ ”). Although defendant argues that the reference to outside evidence here is “more subtle” than in Ray or Barnes, we do not find a “more subtle” reference to outside evidence. We find no such reference whatsoever. To accept defendant's argument, we would have to credit speculation about hidden meanings and ignore the prosecutor's straightforward explanation that our system does not permit a question-and-answer process. The prosecutor's comments were not improper.
¶ 127 Defendant next challenges the prosecutor's remark about the testimony of the State's fingerprint expert, John Onstwedder:
“He made perfect sense. He's a great expert. They talked about—I guess to impeach him they talked about that there's been two or three misidentifications throughout the world. And then I guess from there you're supposed to conclude, oh, this is a misidentification. He had it verified by someone else.
But they want you to look at him, a layperson, and he told you that that requires expert testimony, years and years of experience. I respectfully submit, ladies and gentlemen, we need expert testimony, expert testimony, and he's the best, he's very, very good at it, and it was unrebutted. No other expert—
[Defense Counsel]: Objection to shifting the burden.
[Assistant State's Attorney]: I'm not—excuse me.
THE COURT: Ladies and gentlemen, I'm going to overrule the objection. But here's the thing is that the burden of proof always is with the State as we discussed in voir dire examination, as I told you in the beginning with my opening remarks. The burden of proof is always with the State and the defendant is presumed to be innocent and he does not have to prove his innocence.
[Assistant State's Attorney]: We welcome the burden of proof. They are right. For a minute don't stop saying that to yourself. From the time you're back there the State's burden of proof we welcome it.”
¶ 128 Defendant raises two concerns about these remarks. First, defendant claims that stating that Onstwedder was “a great expert” and was “the best” was an improper attempt by the prosecutor to give his personal opinion of the witness and vouch for his credibility, thus placing the integrity of the State's Attorney behind the witness. See People v. Valdery, 65 Ill.App.3d 375, 378, 21 Ill.Dec. 673, 381 N.E.2d 1217 (1978) (concluding that prosecutor's remarks—including a statement that “ ‘I have never had contact with people who have come forth and testified as witnesses who have had the level of integrity and character of those people and I'm particularly impressed’ ”—were “highly prejudicial because they place the integrity of the office of the State's Attorney behind the credibility of the witnesses”). As the State points out, however, “[t]he credibility of a witness is a proper subject for closing argument if it is based on the evidence or inferences drawn from it.” (Internal quotation marks omitted.) People v. Hickey, 178 Ill.2d 256, 291, 227 Ill.Dec. 428, 687 N.E.2d 910 (1997). And the remarks here—when viewed in the context of the prosecutor's attempt to defend Onstwedder from impeachment, emphasize his experience, and point out that his identification was verified—are based on Onstwedder's testimony and inferences drawn from it.
¶ 129 Defendant also argues that the prosecutor sought to shift the burden of proof when he described Onstwedder's testimony as “unrebutted.” While a prosecutor must not state that the defendant has an obligation to come forward with evidence that would create a reasonable doubt as to his guilt, a prosecutor may comment on the “defendant's failure to submit any evidence that would tend to refute the case against him.” People v. Albanese, 104 Ill.2d 504, 521–22, 85 Ill.Dec. 441, 473 N.E.2d 1246 (1984); People v. Kliner, 185 Ill.2d 81, 153–55, 235 Ill.Dec. 667, 705 N.E.2d 850 (1998) (holding that prosecutor did not shift the burden to the defendant when he argued in rebuttal that the defense could have called various witnesses). Thus, our supreme court has held that a prosecutor may comment that the testimony of the State's expert witness was not contradicted at trial. People v. Peter, 55 Ill.2d 443, 460–61, 303 N.E.2d 398 (1973) (finding comments “concerning the lack of evidence contradicting the evidence the State had offered on the subject of fingerprints” did not shift the burden of proof to the defendant).
¶ 130 We acknowledge that this court has drawn a fine line between comments that properly emphasize that the State's “evidence is uncontradicted” and comments that “improperly suggest[ ] to the jury that defendant had a burden to introduce evidence.” People v. Giangrande, 101 Ill.App.3d 397, 402, 56 Ill.Dec. 911, 428 N.E.2d 503 (1981); see People v. Gorosteata, 374 Ill.App.3d 203, 218, 312 Ill.Dec. 492, 870 N.E.2d 936 (2007) (collecting cases). Defendant argues that the remarks here fall into the latter category because a comment on the “unrebutted” testimony of a witness, without further elaboration, implies that the defendant had a burden to rebut that testimony. In this case, however, the prosecutor's comment does not stand alone: the trial court immediately instructed the jury that “[t]he burden of proof is always with the State and the defendant is presumed to be innocent and he does not have to prove his innocence,” and the prosecutor then acknowledged and “welcome[d]” the State's burden of proof. These clarifying comments dispelled any risk that the jury would interpret the word “unrebutted” as a general pronouncement that defendant must come forward with evidence to prove his innocence. Viewing the remarks in context, we perceive the prosecutor's comment as an argument that Onstwedder's credibility was not meaningfully challenged, despite the defense counsel's effort to impeach his credibility.
¶ 131 Defendant next points to the State's rebuttal argument concerning defendant's DNA expert, Dr. Karl Reich:
“Did [defense counsel] even talk about Wright [sic ], their expert Wright [sic ]? I wouldn't either, would you. No. How much money do you make a year? He tells you, first I'm on salary. Then we said, come on, Doctor, tell me a little bit more about yourself. Well, I'm vice-president of the company. Come on, Doctor, you owe it to the ladies and gentlemen of the Jury, don't you, because you have to weigh his bias, interest, motive, right? Then he finally says, well, I have a one-third interest. Why do you have to pull it out of this doctor. You know, professionals, they come in here, they're qualified as experts. They lie like everybody else. How much do you make? $100,000. $100,000. They guy comes in here, $100,000. He's motivated by money. Can you imagine if he persuaded you 12 that the DNA don't match, he would be hounded by defense attorneys throughout the country.
Just because they wear suits, they're doctors and they're experts doesn't mean anything. You judge their credibility like you judge the credibility of everybody else. That's the law. You're going to get an instruction on that. The law says you have to look at people's bias, motive, and interest clear across the board.”
Defendant argues that such comments, specifically that Reich was “motivated by money,” were an improper attack on Reich's character and integrity.
¶ 132 The credibility of an expert witness is a proper subject for closing argument if it is based on the evidence or inferences drawn from it. People v. Hickey, 178 Ill.2d 256, 290, 227 Ill.Dec. 428, 687 N.E.2d 910 (1997). In People v. Hickey, for example, the prosecutor stated in closing argument that the defense expert was “ ‘an assistant professor making 35 grand who jumped to $120,000 a year to testify for criminal defendants' “ and argued that because of his financial interest, the expert could never be expected to comment favorably on the state laboratory's work. Id. The court concluded that the prosecutor's statements “were acceptable comments on [the expert's] possible bias,” drawn from evidence that he testified primarily for defendants and that his income had increased considerably when he began testifying as an expert. Id. at 291, 227 Ill.Dec. 428, 687 N.E.2d 910.
¶ 133 We acknowledge that the State suggested that because Dr. Reich's company received over $100,000 for its work, the jury had a reason to doubt the truthfulness of Dr. Reich's testimony. This was part of a larger argument that Dr. Reich and those witnesses “qualified as experts” should be judged by the same standards of bias, motive, and interest as other witnesses when the jury considered his testimony. As in Hickey, we find no error in the prosecutor's attempt to establish the expert's possible bias.
¶ 134 We do find error in two comments by the prosecutor that were not drawn from the evidence, but were needlessly disparaging to Dr. Reich and defense counsel. First, while discussing Dr. Reich's testimony, the prosecutor stated: “Dr. Wright [sic], whatever you want to call yourself * * *.” We see no purpose for this fleeting comment other than to demean the expert's status as a doctor of molecular biology, without apparent basis in the evidence. The State on appeal offers no explanation for this remark, and we conclude that it was improper. Defendant also points to comments by the State directed at defense strategy:
“I commend Mr. Burch for his passion. His passion at the beginning was quite extraordinary when he talked about the blood, if you recall that. Remember he said, it's in the blood, it's in the blood. And he said, all these other issues, and he showed a lot of passion. I give Mr. Burch credit I guess because even though his theory—
[Defense Counsel]: I object at this time.
[Assistant State's Attorney]: Sure.
I have a hard time getting passionate about something that I don't believe in where I have to change course, but—
[Defense Counsel]: Your Honor—
THE COURT: He's talking about his personal experience. Overruled.”
¶ 135 The State may challenge a defendant's credibility and the credibility of his theory of defense in closing argument when there is evidence to support such a challenge. People v. Hudson, 157 Ill.2d 401, 193 Ill.Dec. 128, 626 N.E.2d 161 (1993). In this case, then, it would have been proper for the prosecutor to point out that defense counsel argued during opening statements that a footprint was left in blood by the killer, but did not again press that point during closing argument, after the prosecutor argued at closing that the evidence showed the footprint was the result of processing the bloody crime scene. But here, the prosecutor took that permissible argument a step further by focusing his remarks on defense counsel's style of advocacy, suggesting that counsel presented feigned enthusiasm for a defense he knew was false. Such comments “improperly shift the focus of attention from the evidence in the case to the objectives of trial counsel.” People v. Emerson, 97 Ill.2d 487, 498, 74 Ill.Dec. 11, 455 N.E.2d 41 (1983); cf. People v. Monroe, 66 Ill.2d 317, 5 Ill.Dec. 824, 362 N.E.2d 295 (1977) (finding prosecutor's comment that defense counsel did not believe in the theory of defense was improper); see also People v. Kirchner, 194 Ill.2d 502, 549, 252 Ill.Dec. 520, 743 N.E.2d 94 (2000) (distinguishing between comments on the credibility of the defendant and his theory of defense, which are proper, and impermissible attacks on defense counsel). The same can be said for the prosecutor's later remark that he “told his partners don't object during their closing because they have a large responsibility here, as do we.” That comment followed two of defense counsel's objections, and it needlessly focused the jury on the motives and tactics of defense counsel rather than the arguments and evidence presented at trial.
¶ 136 Additionally, we find error in one other comment by the prosecutor—that the verdict “transcends the courtroom walls” and “goes out to the community.” After the comment was made, the trial judge immediately interjected, telling the prosecutor to stop and instructing the jurors to disregard these comments: “[Y]ou're only supposed to concern yourselves with the facts and the law as it applies to Mr. Luna. You're not sending message to the world. That's not your job.” The prosecutor later continued on the same theme:
“Your verdict will also talk to the Annie Locketts of this world, will it not? And there's others out there like Annie Lockett, a lot of Annie Locketts out in this world who know something, who know something and yet say to themselves, do I have the inner strength, the moral fiber to come to court.
[Defense Counsel]: Objection.
THE COURT: I'm sorry, [Assistant State's Attorney], you're not here to reinforce any self-doubt that Anne Lockett has either. Forget about the other Anne Locketts in the world and stay away from that.
Ladies and gentlemen, again, you're supposed to concern yourselves with the facts in this case and the law as I give it to you, and that way reach a verdict.
Not about any messages and not about reinforcing any individuals.”
On appeal, the State concedes that the prosecutor's comments were improper. See, e.g., People v. Johnson, 208 Ill.2d 53, 79, 281 Ill.Dec. 1, 803 N.E.2d 405 (2003).
¶ 137 B. Whether the Improper Remarks Require a New Trial
¶ 138 Of the several comments defendant has claimed were error, we have found three isolated statements were improper. While defendant's claims of improper argument have been forfeited as to two of these comments, defendant did preserve his claim of error (by objecting at trial and renewing his objection in a posttrial motion) as to the prosecutor's comments about the verdict sending a message to the community. Where an objection to a prosecutor's comments has been preserved, we must determine whether any improper remarks, when viewed in the context of the entire argument, “constituted a material factor in a defendant's conviction.” Wheeler, 226 Ill.2d at 123, 313 Ill.Dec. 1, 871 N.E.2d 728. A new trial should be granted “[i]f the jury could have reached a contrary verdict had the improper remarks not been made, or the reviewing court cannot say that the prosecutor's improper remarks did not contribute to the defendant's conviction.” Id.
¶ 139 As to the prosecutor's improper remark that the verdict “transcends the courtroom walls,” the parties agree that “[t]he trial court's act of promptly sustaining a defense objection to a closing argument comment is generally sufficient to cure any error which may have occurred.” People v. Hope, 168 Ill.2d 1, 26, 212 Ill.Dec. 909, 658 N.E.2d 391 (1995). In this case, the trial court not only instructed the jury that it was to consider only the facts of the case before it, the court also emphasized that it was not the State's role “to reinforce any self-doubt that Anne Lockett has” and it was not the jury's role to send “any messages” to the community with the verdict. We conclude that the trial court's direct and forceful instructions to the jury were sufficient to cure any prejudicial impact of the prosecutor's improper comment. See People v. Harris, 225 Ill.2d 1, 33, 310 Ill.Dec. 351, 866 N.E.2d 162 (2007) (considering the possibility of prejudice and noting that “defense counsel's objection to the comments was sustained and the jury was properly instructed that the arguments of counsel were not evidence that it could consider”); People v. Chavez, 327 Ill.App.3d 18, 28–29, 260 Ill.Dec. 894, 762 N.E.2d 553 (2001) (finding that potential prejudicial effect of prosecutor's comments about “social evil of the drug trade in Chicago” was vitiated by two objections from defense counsel and court instructions to only consider the evidence presented at trial); People v. Sutton, 353 Ill.App.3d 487, 501, 288 Ill.Dec. 858, 818 N.E.2d 793 (2004).
¶ 140 As to the comments about Dr. Reich or defense counsel's tactics, even if we assume that defendant had not forfeited his objection to these comments, we conclude that they “did not engender substantial prejudice against defendant sufficient to warrant reversal of his convictions.” Hickey, 178 Ill.2d at 289–90, 227 Ill.Dec. 428, 687 N.E.2d 910. These comments were so brief, and of such small import in the State's lengthy closing argument, that they did not amount to reversible error. See People v. Runge, 234 Ill.2d 68, 142–43, 334 Ill.Dec. 865, 917 N.E.2d 940 (2009) (“All of the comments were brief and isolated in the context of lengthy closing arguments, a factor we have found significant in assessing the impact of such remarks on a jury verdict.”); see also People v. McCann, 348 Ill.App.3d 328, 338–39, 284 Ill.Dec. 89, 809 N.E.2d 211 (2004) (holding that a prosecutor's reference to defense counsel as “an octopus releasing inky fluid” and as “throwing dust on the road to justice” did not deny the defendant a fair trial where the “comments, when viewed in the context and totality of the closing arguments, were brief”). Moreover, the prosecutor's isolated statements, at the end of a month-long trial, cannot be said to be the cause of the convictions in this case. The State presented defendant's videotaped statement and his statements to Anne Lockett and Eileen Bakalla admitting to the murders. Defendant's statements were corroborated by physical evidence, DNA found on the partially eaten chicken bones and a partial palm print on a napkin, showing that defendant was at the Brown's Chicken restaurant on January 8, 1993. After hearing that evidence over several weeks of trial, and after hearing the State's lengthy closing argument focusing on the evidence presented, we cannot say that the few brief comments we have found improper constituted a material factor in defendant's conviction.
The improper remarks “did not so prejudice the jury as to deny defendant a fair trial or have a disproportionate impact on the jury's finding of guilt.” People v. Easley, 148 Ill.2d 281, 332–33, 170 Ill.Dec. 356, 592 N.E.2d 1036 (1992).
¶ 141 Defendant also argues that he was denied the effective assistance of counsel based on his counsel's failure to object to the remarks challenged on appeal. Because we have concluded that any improper remarks did not impact the jury's finding of guilt or deprive defendant of a fair trial, defendant cannot demonstrate that counsel's failure to object prejudiced the outcome of his trial. See, e.g., People v. Glasper, 234 Ill.2d 173, 215–16, 334 Ill.Dec. 575, 917 N.E.2d 401 (2009). We therefore reject his ineffective assistance claim.
¶ 142 V. Statements of Todd Wakefield and Casey Sander
¶ 143 Defendant finally claims that the trial court erred when it denied his motion in limine to admit out-of-court statements from Todd Wakefield and Casey Sander. While the court admitted a statement from John Simonek—that he committed the murder with Todd Wakefield—as a “statement against penal interest” under Chambers v. Mississippi, 410 U.S. 284, 93 S.Ct. 1038, 35 L.Ed.2d 297 (1973), the court ruled that neither Sander's nor Wakefield's statements were admissible. A ruling on the admission of evidence is within the sound discretion of the trial court and will only be reversed if the trial court abused its discretion. People v. Caffey, 205 Ill.2d 52, 89, 275 Ill.Dec. 390, 792 N.E.2d 1163 (2001); People v. Bowel, 111 Ill.2d 58, 68, 94 Ill.Dec. 748, 488 N.E.2d 995 (1986) (citing People v. Ward, 101 Ill.2d 443, 455–56, 79 Ill.Dec. 142, 463 N.E.2d 696 (1984)). The trial court abuses its discretion only where its ruling is arbitrary, fanciful, or where no reasonable person would take the view adopted by the trial court. Caffey, 205 Ill.2d at 89, 275 Ill.Dec. 390, 792 N.E.2d 1163.
¶ 144 Generally, declarations against interest are admissible as an exception to the hearsay rule, based on the assumption that a person is unlikely to fabricate a statement against his or her own interest. People v. Tenney, 205 Ill.2d 411, 433, 275 Ill.Dec. 800, 793 N.E.2d 571 (2002). While courts generally will not admit an unsworn, out-of-court declaration that the declarant (rather than the defendant on trial) committed the crime, even though the declaration is against the declarant's penal interest, such a statement may be admitted under the statement-against-penal-interest exception to the hearsay rule “where justice requires.” Id. at 433, 275 Ill.Dec. 800, 793 N.E.2d 571. “[W]here the hearsay statement bears persuasive assurances of trustworthiness and is critical to the accused's defense, its exclusion deprives the defendant of a fair trial in accord with due process.” Id. at 434, 275 Ill.Dec. 800, 793 N.E.2d 571 (citing Chambers, 410 U.S. at 302). To help assess the reliability of such a statement, we look to the four factors identified in Chambers: (1) the statement was spontaneously made to a close acquaintance shortly after the crime occurred; (2) the statement is corroborated by other evidence; (3) the statement is self-incriminating and against the declarant's interests; and (4) there was adequate opportunity for cross-examination of the declarant. Chambers, 410 U.S. at 300–01. These factors are merely guidelines, not requirements, for admissibility. People v. Keene, 169 Ill.2d 1, 29, 214 Ill.Dec. 194, 660 N.E.2d 901 (1995); People v. Pecoraro, 175 Ill.2d 294, 307, 222 Ill.Dec. 341, 677 N.E.2d 875 (1997). Ultimately, we must determine whether the statement at issue “was made under circumstances which provide ‘considerable assurance’ of its reliability by objective indicia of trustworthiness. [Citation.]” People v. Thomas, 171 Ill.2d 207, 216, 215 Ill.Dec. 679, 664 N.E.2d 76 (1996).
¶ 145 Assessing the reliability of the statements in this case starts and ends with the Chambers factor that is at the heart of the inquiry: whether Wakefield's and Sander's statements were self-incriminating and against their penal interest. Our supreme court has directed that “ ‘a statement of such a nature is the bedrock for the exception, that factor, obviously, must be present.’ ” (Emphasis added.) Tenney, 205 Ill.2d at 436, 275 Ill.Dec. 800, 793 N.E.2d 571 (quoting Keene, 169 Ill.2d at 29, 214 Ill.Dec. 194, 660 N.E.2d 901). A declaration against penal interest “need not be a confession, but must involve exposure to criminal liability.” Id. (citing 2 John W. Strong, McCormick on Evidence § 319(b), at 323–24 (5th ed.1999)). The trial court found that unlike Simonek's statement, neither the statement from Wakefield nor the statement from Sander met this requirement.
¶ 146 Todd Wakefield, a friend of John Simonek and Casey Sander, gave several statements to the police. On June 18, 1998, Todd Wakefield told police that he ate a meal at Brown's Chicken around 9 p.m. on January 8, 1993. When asked “So chances are you're the last person that ate there,” Wakefield said “I would definitely agree on that.” On September 15, 1999, Wakefield told police that he ordered food from Michael Castro on the night of the murders. He said “the only reason he went to Browns [sic ] Chicken was to get something to eat, that he never shot or killed anyone.”
¶ 147 Casey Sander, a former Brown's Chicken employee and Wakefield's girlfriend, was interviewed by police several times in the months following the shootings. Unlike earlier interviews where Sander claimed that her accounts were simply dreams or nightmares, on April 28, 1999—at her tenth interview—Sander claimed that what she told police was based on her actual observations. Sander said that on January 8, 1993, her boyfriend, Todd Wakefield, picked her up at her home at 6:30 p.m. in his brown station wagon, and they went to Brown's Chicken sometime after 9 p.m., because Sander wanted to talk to Lynn Ehlenfeldt about her work schedule. When Sander went into the restaurant, Wakefield insisted on following her. Sander then stated that Wakefield went to talk to Rico Solis and Mike Castro, became enraged, and yelled “Everybody get in the back or I'll kill you,” while waving a gun. Sander told the police that she saw Wakefield shoot people in the restaurant and there was smoke and haze in the room. Sander ran back to her house afterward. Sander did not mention Simonek in her statement, but during her interview, she stated “I walked in the back door and they came in behind me.” When asked who “they” were, Sander claimed to have said “Todd,” but officers and an assistant State's Attorney present confirmed that she said “they.” When told that there was a bloody footprint left at the scene, Sander stated that “it was probably a size 8 Reebok and it was probably mine.” After Sander was given her Miranda warnings, she said that “she was tired and wanted to go home” and “she made all this up just so she could go home.” Sander was allowed to leave the police station.
¶ 148 While defendant agrees that neither Wakefield nor Sander acknowledged any role in the murders, he nevertheless claims that “an admission to being present at [a] crime scene, while denying participation, is against one's penal interest.” We cannot agree with such a sweeping view of the “against penal interest” requirement. There is simply no crime in being present at the scene of a crime or witnessing a crime, without more.
¶ 149 Defendant's single piece of authority for this broad principle is People v. Murray, 254 Ill.App.3d 538, 193 Ill.Dec. 589, 626 N.E.2d 1140 (1993), where the question was whether statements to police were sufficiently reliable to establish probable cause to arrest in connection with the shooting of two men. Murray, 254 Ill.App.3d at 549, 193 Ill.Dec. 589, 626 N.E.2d 1140. The police spoke with a man named Washington, who was subject to a “stop order” issued for a double murder. Id. at 541, 193 Ill.Dec. 589, 626 N.E.2d 1140. When the police eventually spoke to the defendant (who was found based on information Washington provided), defendant told them he was at a dope house when a man named Washington arrived with two machine guns and claimed he had “done a ‘mission.’ ” Id. at 542, 193 Ill.Dec. 589, 626 N.E.2d 1140. The police again confronted Washington, who said that the defendant was the getaway driver for the shootings designated by the leader of a street gang; Washington “also implicated himself and said he had witnessed the shooting.” Id. Specifically, Washington said “ ‘he was a witness to the shooting, and that his involvement was limited as far as shooting at the victims.’ ” Id. at 549, 193 Ill.Dec. 589, 626 N.E.2d 1140. On appeal, defendant argued that Washington's statement was unreliable because he was “under suspicion for the murders and was denying culpability.” Id. at 550, 193 Ill.Dec. 589, 626 N.E.2d 1140. Noting that “[a] co-offender's statements to the police constitute probable cause where the statements are against the informant's penal interest,” the court concluded that while Washington, a codefendant who later pled guilty, “did deny shooting the victims as [the] defendant claim[ed], his admission that he was at the scene of the crime and witnessed its occurrence could certainly be considered as a statement against penal interest.” Id. at 550, 193 Ill.Dec. 589, 626 N.E.2d 1140.
¶ 150 With the facts of the case in view, it is clear that Murray does not stand for the broad proposition that defendant assigns it. In Murray, after police confronted the witness, he made a statement admitting involvement with the shooting that, while “limited,” exposed him to criminal liability. The exposure was real: Washington was charged along with the defendant, and he pled guilty. We disagree that admission to being present and witnessing a crime, without all the attendant facts in Murray, satisfies the against-penal-interest exception. This court has specifically held (in a probable cause case like Murray ) that admitting to being a witness to a crime does not qualify as a statement against penal interest, without admission of involvement in the crime. See People v. Lindner, 24 Ill.App.3d 995, 998, 322 N.E.2d 229 (1975) (“[T]he informant's statements of his presence during the [drug] transaction were not of such character as to be statements against penal interest. Although the affidavit did recite that the informant was present, it did not sufficiently demonstrate the informant's complicity in the criminal activities.”); see also 2 Wayne R. LaFave, Search and Seizure § 3.3(c), at 178 (5th ed.2012) (noting, in the fourth amendment context, that “acknowledgments that merely create a suspicion of the informant's involvement in the criminal activity, such as that he was present at the time of a transaction involving narcotics or stolen property, will not suffice [as a statement against penal interest]”).
¶ 151 Here, Wakefield did not state that he was involved with the shooting in any way. He did not even admit that he had witnessed the crime. We therefore conclude that Wakefield's statement was not the “type of confession [which] was in a very real sense self-incriminatory and unquestionably against interest.” See Chambers, 410 U.S. at 301.
¶ 152 Sander's statement suffers from the same defect. Like Wakefield, Sander did not admit involvement in the shooting. She claimed that she saw Wakefield shoot several people and she then ran from the scene. While defendant argues that her statement suggests a greater degree of involvement than that of a mere bystander, Sander's statement suggests no more involvement than being at the restaurant and then fleeing the scene after the shooting began. Moreover, our supreme court has cautioned that “[w]hether a statement is actually against the declarant's interest must be determined from the circumstances of each case.” People v. Caffey, 205 Ill.2d 52, 99, 275 Ill.Dec. 390, 792 N.E.2d 1163 (2001). As an example, the court explained that even “a statement admitting guilt and implicating another person, made while in custody, may well be motivated by a desire to curry favor with the authorities and, accordingly, fail to qualify as against interest.” Id. (citing Williamson v. United States, 512 U.S. 594, 601–02, 114 S.Ct. 2431, 129 L.Ed.2d 476 (1994)). In other words, the court has recognized that in some circumstances the rationale underlying the against-penal-interest exception (that a person only would say something so damning if it is true) does not hold because the declarant may be motivated to make such a statement, even if untrue. While here it does not appear that Sander was trying to “curry favor with the authorities,” the record does show that she had a motivation to say what she said. As she explained when she recanted her statement almost immediately after it was made, Sander simply hoped to end the interrogation (apparently her tenth interview) “just so she could go home.” Sander said the minimal amount she thought was necessary to end the questioning, yet she did not even implicate herself or expose herself to criminal liability.
¶ 153 Defendant now proposes an alternate theory for treating Sander's statement as a statement against penal interest: that her disclosure that she witnessed the murders could have subjected her to prosecution for obstruction of justice because Sander previously told police that she did not know anything about the crime. This argument fails. We first note that this obstruction of justice theory was never raised to the trial court and could not have formed the basis of the court's ruling on the motion in limine. In fact, after the trial court asked how Sander's statements could possibly be against her interest, defendant stated in a supplemental brief that “Sander's statements need not be against her penal interest in order to be admissible,” so long as other Chambers factors were satisfied.
¶ 154 In any event, we agree with the State that nothing in the record suggests that Sander knowingly furnished false information with an “intent to prevent the apprehension or obstruct the prosecution or defense of any person,” as required under Illinois's obstruction of justice statute. 720 ILCS 5/31–4 (West 2010). Contrary to defendant's claim, the record does not suggest that Sander's earlier statements denying knowledge (i.e., the statements that, under defendant's theory, are false) were made to protect Wakefield or anyone else. Moreover, in light of Sander's immediate recantation, defendant's theory that Sander's statement could be used as part of an obstruction of justice prosecution is wholly implausible. As with Wakefield's statement, we conclude that because Sander's statement was neither self-incriminatory nor against penal interest, it did not bear “persuasive assurances of trustworthiness.” Tenney, 205 Ill.2d at 434, 275 Ill.Dec. 800, 793 N.E.2d 571. The trial court did not abuse its discretion in excluding Sander's and Wakefield's statements.
¶ 156 In accordance with the foregoing, we affirm defendant's conviction for first degree murder.
Justice EPSTEIN delivered the judgment of the court, with opinion:
Presiding Justice LAVIN and Justice FITZGERALD SMITH concurred in the judgment and opinion.