Can all biological processes be understood as computations and studied as such?


Can we understand a cell as an organic computational device? If so, can the whole organism be considered an ensemble of a large number of interacting computational devices?

If this is the case, biology can be studied using methods of graph theory, network theory, computation theory and complex systems theory.

If you abstract enough, anything can be considered a computational device. The issue with doing this for cells is the sheer number of variables.

For any given cell, the following internal variables exist:

  • Internal ion concentrations for dozens of biologically important ions
  • Internal concentrations of hundreds or thousands of various simple organic molecules, including "raw" internalised molecules, various steps in dozens or even hundreds of metabolic processes, and metabolic end-products and by-products.
  • Internal concentrations of hundreds or thousands of different proteins and other complex biological "mechanical parts", as well as the states of these "parts".
  • The physical state of the cell - stretched, contracted, relaxed, hot, cold, etc.

This list is not exhaustive.

It's also worth noting that the above variables may also exist for multiple separate "compartments" within the cell; vesicles, the endoplasmic reticulum and the Golgi apparatus being three which come to mind immediately.

The other issue is that cells do not exist in a vacuum; the external environment plays an important role in their functioning. The human body at any one moment exists in direct contact and interaction with the following extracellular environments:

  • The atmosphere (Mainly, but not only, temperature exchange)
  • The air within the respiratory system, including nose and mouth
  • Stomach contents
  • Small intestine contents (which would need to be considered in several segments, since the nature of the interaction changes along the path of the small intestine)
  • Large intestine contents
  • Blood
  • Cerebrospinal fluid (Fluid found "inside" the brain)
  • Extracellular fluid, or "tissue fluid" (A separate compartment for each small swatch of tissue in the body)
  • Lymph
  • Pleural fluid (Fluid surrounding the lungs; a separate compartment for each lung)
  • Pericardial fluid (A small amount of fluid surrounding the heart)
  • Joint capsule contents (A separate compartment for each joint)

… and the list goes on. Each of these compartments requires tracking of the same variables as individual cells.

This is yet further complicated by the fact that some of these compartments can't easily be treated as one big compartment, because spatial relations matter. For example, the oxygen and carbon dioxide concentrations of blood (as well as the concentrations of other substances like alcohol) change from centimeter to centimeter. Yet another issue is that cells are not static in their relations to each other: red blood cells move with blood flow, experiencing turbulence and other effects, and other cells (like macrophages) are capable of "deliberate" movement through blood and tissue.

You would also have to account for physical disturbances - things like a stab wound or even a pinprick are ludicrously complex at a cellular level.

Of course, human beings are very complex organisms, and there exist much simpler organisms. You might be interested in OpenWorm, which is an attempt to computationally simulate Caenorhabditis elegans, a species of roundworm, at a cellular level. Doing so even for an organism as simple as C. elegans is a massive undertaking, as evidenced by the fact that even with the contributions of dozens of experts in their fields the project has been ongoing for some time and is yet to reach stage one.

The short version: Is it possible? Perhaps. Is it easy? Definitely not.

As a mathematician interested in biology, I am very curious about informed answers. Here I add mine, with the understanding that it is in no way complete and may be biased or ignorant as concerns the biology.

We may understand a cell as a chemistry- and physics-based program which runs itself. The cell is a computational device in the sense that the outcomes of its activity are computable functions of its inputs (a reasonable hypothesis), but more than that: at the cell level there is no distinction between the computer, the program which runs on it, the input and output data, AND the execution of the program. All are at the same level, i.e. every abstraction is embodied in a concrete chemical or physical thing.

This concreteness adds, I believe, to the difficulty, because the usual thinking in computer science is all about structuring abstractions, while in biology everything is ultimately at one level: real, embodied in physics and chemistry.

This is a claim which needs strong supporting evidence. Being a matter of principle, it cannot be proved rigorously, but it might be given rigorous support by constructing simple proof-of-principle models.

There are many computing models inspired by chemistry, so in return they can be seen as such proofs of principle.

There are Chemical Reaction Network and Petri Net models, which are more structuring tools than truly embodied models of computation, because they consider neither the structure of molecules (molecules are just nodes in a graph) nor the way chemical reactions happen (reactions are just edges in a graph). They are very useful tools, though, and the description given here is much simplified.

There is the CHAM (chemical abstract machine): G. Berry and G. Boudol, The chemical abstract machine, Theoretical Computer Science, 96(1):217-248, 1992. In this model, states of the machine (imagine: a cell) "are chemical solutions where floating molecules can interact according to reaction rules" (quoted from the abstract). Here a "solution" is a multiset of molecules; reaction rules act between molecules, not inside them. This is a limitation of the model, because only the number of molecules in each species matters, not their internal structure.

Another very interesting model is the Algorithmic Chemistry of Fontana and Buss. The main idea is that chemistry and computation are basically the same. The reasoning goes as follows. There are two pillars of the rigorous notion of computation: the Turing machine (well known) and Church's lambda calculus. Lambda calculus is less known outside computer science, but it is a formalism which may be more helpful to chemists, or even biologists, than the Turing machine. Fontana and Buss propose that lambda calculus is a kind of chemistry, in the sense that its basic operations, abstraction and application, can be given chemical analogies: molecules are like mathematical functions, abstractions are like reaction sites, and applications are like chemical reactions.
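
The abstraction/application analogy can be made concrete even in an ordinary programming language. The sketch below is purely illustrative (the function names are mine, not Fontana and Buss's): functions play the role of "molecules", and applying one to another is the "reaction" that produces a new molecule.

```python
# "Molecules" are functions; abstraction (lambda) builds a molecule with a
# reaction site, and application is the "reaction" between two molecules.
identity = lambda x: x                             # the simplest molecule
compose = lambda f: lambda g: lambda x: f(g(x))    # a molecule that binds two others

double = lambda n: 2 * n
add_one = lambda n: n + 1

# A "reaction": applying compose to two molecules yields a product molecule,
# which is itself a function and can react further.
product_molecule = compose(double)(add_one)

result = product_molecule(5)   # double(add_one(5)) = 12
```

The point of the analogy is that the product of a reaction is again a first-class "molecule", just as in chemistry, with no separation between program and data.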

Algorithmic Chemistry comes about as close as possible to being a (proof-of-principle) answer to the question.

Finally, I mention chemlambda, or the chemical concrete machine, which is like Algorithmic Chemistry but far more concrete: molecules are graphs, applications and abstractions are molecules, and chemical reactions are graph rewrites.

What is very interesting about all these models, in my opinion, is that they suggest that answering the question "Can we understand a cell as an organic computational device?" is somehow relevant to the computer science question "How do we design an asynchronous, decentralized Internet?".



History of understanding biological computation

In the last 50 years, biology has inspired computing in several ways (Navlakha and Bar-Joseph 2011 Cardelli et al. 2017). During this time, computational thinking has also improved our understanding of biological systems (Bray 1995 Goldbeter 2002 Nurse 2008). Using principles from chemistry, physics and mathematics, we have understood that the highly complex behaviour of biological systems is caused by a multitude of coupled feedback and feed-forward loops in the underlying molecular regulatory networks (Alon 2007 Tyson and Novák 2010). In particular, we have learned that positive and negative feedback loops are responsible for driving biological switches and clocks, respectively (Tyson et al. 2008). We have understood much about the behaviour of these basic units of biological computation (Ferrell 2002 Novák and Tyson 2008 Tyson et al. 2003) and simple switches and clocks have been synthesized in single cells more than 15 years ago (Becskei and Serrano 2000 Gardner et al. 2000 Elowitz and Leibler 2000). Nevertheless, we still lack a comprehensive understanding of how these computational modules have emerged, and which features and molecular interactions are responsible for their efficient and robust behaviour (Cardelli et al. 2017). Ideas from computing might help us to take this last step, which might enable biological switches and clocks to be influential in the development of future computing technologies. The similarity between the biological switch controlling mitotic entry and the approximate majority algorithm of distributed computing (Angluin et al. 2008 Cardelli and Csikász-Nagy 2012) suggests that computing and molecular biology could further influence each other in the future. With the emergence of the fields of systems and synthetic biology, there has been increased interaction between computer science and biology, but there are a few steps needed before we can realise a biology-inspired soft-matter computational revolution. 
In this paper we review some of the key advances we have seen as a result of the interplay between computing and biology and speculate on the directions that a possible joint field could take in the near future.


The basic components of computational devices, including modern electronic circuits and earlier mechanical equivalents, consist mainly of Boolean and arithmetic functional units (Boolean control logic, integer and floating point units, analog to digital converters, etc.), of registers to hold intermediate results of iterative algorithms, and of coordination components that orchestrate the flow of information across registers and functional units. Coordination is most often achieved by clocks: at each tick data is frozen into registers, and between ticks data flows between registers through the functional units. This is the so-called von Neumann architecture that, despite dramatic technology improvements and architectural refinements, has remained largely unchanged since the first electronic computers.
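
The register-and-clock discipline described above can be sketched in a few lines. This is a toy illustration, not a model of any particular processor: on each tick the register contents are latched, and combinational logic computes the next contents from the latched copy.

```python
def tick(registers, combinational):
    """One clock cycle: latch all register values, then compute new ones
    from the latched copy, so every update sees the same frozen state."""
    frozen = dict(registers)
    return combinational(frozen)

def accumulator_logic(state):
    # Combinational step: add the input register into the accumulator.
    return {"acc": state["acc"] + state["inp"], "inp": state["inp"]}

state = {"acc": 0, "inp": 3}
for _ in range(4):            # four clock ticks
    state = tick(state, accumulator_logic)

# After four ticks the accumulator holds 0 + 3*4 = 12.
```

Freezing the registers at the tick is what makes the data flow orderly; as discussed below, biological systems generally lack such a global latch.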

Functional units compute Boolean and mathematical functions by combinational logic (that is, without requiring memory or timing coordination). We can easily find analogues of these in biology, like the function computed by the regulatory region of a single gene (Arnone and Davidson 1997). Synthetic biology has demonstrated how many such functions, typically Boolean gates, can be engineered in vivo by a variety of genetic and protein-based mechanisms (Siuti et al. 2013). More theoretically, it has been shown how chemical reaction networks can compute complex functions (Buisman et al. 2009). Although much of this work has mimicked digital components, there is a sentiment that functional units in biology work mostly in the analog domain, and that synthetic biology could benefit from this approach (Sauro and Kim 2013).
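
As a minimal illustration of a chemical reaction network computing a function (in the spirit of Buisman et al. 2009, though not their construction), the two reactions A → C and B → C "compute" addition: the final amount of C equals the initial amount of A plus that of B. A naive Euler integration with illustrative unit rates:

```python
def addition_crn(a0, b0, dt=0.001, steps=20000):
    """Euler-integrate A -> C and B -> C at unit rate; C converges to a0 + b0."""
    a, b, c = a0, b0, 0.0
    for _ in range(steps):
        da = -a           # A is consumed by A -> C
        db = -b           # B is consumed by B -> C
        dc = a + b        # every consumed A or B becomes a C
        a, b, c = a + da * dt, b + db * dt, c + dc * dt
    return c

c_final = addition_crn(2.0, 3.0)   # approaches 2.0 + 3.0 = 5.0
```

The "answer" is read out as a steady-state concentration rather than a bit pattern, which is one sense in which such computation is analog.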

In this review we focus mainly on the other two classes of components: memory and coordination. A switch is a memory unit capable of storing a single bit: at the core there is a bistable dynamical system coupled with a mechanism to force the system from one stable state to the other. Switching behaviour is pervasive in biology: it is achieved by a range of mechanisms, from individual molecular components like phosphorylation sites and riboswitches, to whole complex biochemical networks that switch from one configuration to another, such as in the cell cycle switch. Synthetic genetic switches have also been demonstrated (Gardner et al. 2000).
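
A minimal model of such a bistable switch, in the style of the Gardner et al. (2000) toggle switch, consists of two mutually repressing species. The parameters below are illustrative, not fitted to any real circuit; the point is that the same equations settle into either of two stable states depending on history, which is exactly what it means to store a bit.

```python
def settle(u, v, alpha=10.0, n=2, dt=0.01, steps=5000):
    """Euler-integrate the mutual-repression ODEs to steady state."""
    for _ in range(steps):
        du = alpha / (1 + v**n) - u   # u is produced, repressed by v, and decays
        dv = alpha / (1 + u**n) - v   # v is produced, repressed by u, and decays
        u, v = u + du * dt, v + dv * dt
    return u, v

state_a = settle(5.0, 0.1)   # history favours u: settles with u high, v low
state_b = settle(0.1, 5.0)   # history favours v: settles with v high, u low
# Identical equations, two different stable endpoints: the stored bit.
```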

The intricate feedback loops of biochemical networks tend to produce oscillations in abundance, both stable and transient, many of which are poorly understood. The most prominent oscillators in biology, found also in the most primitive organisms, are those involved in the cell cycle and in circadian clocks, whose cyclic activities coordinate much of cellular function. Oscillations can also be observed in systems consisting of just 1 to 3 proteins, as in the case of the KaiC circadian oscillator (Nakajima et al. 2005), although those proteins have a very sophisticated structure. Theoretically, many chemical oscillators consisting of 2 to 3 simple species have been studied (Bayramov 2005).

Although similar basic components (switches, oscillators, and functional units) are found both in biology and in computer engineering, it does not necessarily mean that these systems compute “in the same way”. In particular, coordination is achieved in fundamentally different ways in biological systems than in the von Neumann architecture. In biology, oscillators coordinate events only at the coarsest level of granularity, while fine-grained coordination is achieved by direct interaction between molecular components. In the central processing unit of computers, oscillators instead coordinate events at the finest grain, and do so at great cost. As a result, low-power devices tend to employ clock-free coordination strategies to save power. At the level of computer networks, though, coordination is achieved by message passing, because individual clocks can get out of step and network latency may vary. Many non-von Neumann models of computation have been studied in the area of distributed computing: these models resemble, and sometimes even technically coincide with, biochemical models (Angluin et al. 2006 Chen et al. 2014).
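
The approximate majority protocol (Angluin et al. 2008), cited earlier as the distributed-computing twin of the mitotic entry switch, is a good example of this coincidence. A mean-field (ODE) sketch with illustrative unit rates: X and Y convert each other into a "blank" species B, and each side recruits B back, so the initial majority takes over the population.

```python
def approximate_majority(x, y, dt=0.01, steps=5000):
    """Mean-field dynamics: X + Y produce a blank B; X and Y each recruit B."""
    b = 1.0 - x - y                      # undecided fraction
    for _ in range(steps):
        dx = x * b - x * y               # X recruits B; X lost on meeting Y
        dy = y * b - x * y               # Y recruits B; Y lost on meeting X
        db = 2 * x * y - x * b - y * b   # blanks created by X+Y, consumed by recruitment
        x, y, b = x + dx * dt, y + dy * dt, b + db * dt
    return x, y

x_final, y_final = approximate_majority(0.6, 0.4)
# A 60/40 initial split is driven to (nearly) all-X.
```

Read as chemistry, this is a plausible reaction network; read as distributed computing, it is a consensus algorithm, which is precisely the convergence the text describes.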

The general architecture of computation in biochemical systems is still a matter of investigation, and so is the functioning of many subsystems that appear to process information. For the moment we can focus on how nature achieves the functionality of the basic components, switches, oscillators and functional units, while using material and constraints that are very different from those that come from engineering.

How do natural systems compute?

The complex dynamics of natural systems drew research interest a long time ago. The theory of dynamical systems and chaos was born at the turn of the twentieth century, with a focus on understanding the weather and the many-body problem (Strogatz 2000). Pioneers of mathematical modelling of biological systems came from the field of chemical physics and used their experience learned from non-equilibrium chemical systems to investigate biological switches and clocks (Goldbeter 2017). Ideas on the chemical basis of biological behaviour were also used by the computer scientist Alan Turing to explain developmental pattern formation (Turing 1952). Yet still computing had far less influence on our thinking about biological systems than chemistry, physics or mathematics. Indeed, biological behaviour is controlled by (bio)chemical reactions and the underlying reaction kinetics can be understood by looking at the microscopic physical behaviour of molecules, but to turn these into a comprehensive form, mathematical expertise is required. Since the 1990s, advances in computing have enabled us to solve highly complex equations describing the physical interactions of the chemical reactions driving biological behaviour, but it was the appearance of systems biology (Kitano 2002a) that led to the understanding that we need more computing to truly understand biological systems (Kitano 2002b). Data-rich biological experiments at the molecular level have identified the ubiquity of switches and clocks (Goldbeter 2002) as core components of complex biological regulatory networks.

Feedback loops

Already by the 1960s it was known that feedback loops are the key determinants of the dynamics of biological systems (Griffith 1968a, b). Positive feedback loops are key to the appearance of switching behaviour, while negative feedback loops are needed for oscillations (Ferrell 2002 Goldbeter 2002). The complex dynamics of biological systems is determined by the combination of multiple such feedback loops (Tyson et al. 2003). Here we present the main features of feedback loops that enable them to drive key biological processes.

Feedback loops (FBLs) arise when at least two molecular species regulate each other’s activity (Fig. 1). There are two types of FBL: negative and positive. Negative FBLs (NFBLs) appear when the production or activation of a species is directly or indirectly repressed by the activity of that same species (autoregulation) (Thomas and D’Ari 1990 Thomas et al. 1995). Negative feedback loops contain an odd number of inhibitions. In Fig. 1, a system of only two components is shown, in which one molecular species (X) inhibits the other (Y), while Y in turn activates X.

Examples of Feedback loops. Left, a negative feedback loop composed of two molecules. Right, a pure positive feedback loop is formed by only positive interactions, while a double-negative feedback loop contains an even number of negative interactions

Positive FBLs (PFBLs) auto-enhance the production of the species involved in the loop. There are two subtypes of PFBL, pure positive or double-negative. Pure PFBLs contain only interactions of activation, while double-negative PFBLs, or antagonistic interactions, contain an even number of inhibitions (plus any number of activations). (Fig. 1).

Feedback loops (FBLs) constitute a basic relationship between molecular species from which complex behaviours are constructed, and consequently they are abundant in protein regulatory networks. FBLs can produce various dynamical behaviours, such as efficient switching and oscillations (Thomas et al. 1995 Thomas 1981 Tyson et al. 2003 Tyson and Novák 2010 Hernansaiz-Ballesteros et al. 2016 Cardelli et al. 2017). Switch-like dynamics requires PFBLs, producing two (or more) stable states of the system (usually on/off states), in which a given species is either fully active or inactive. This feature of PFBLs is known to be key for developmental and decision-making processes (Ferrell 2002). In contrast, oscillations require the presence of NFBLs. While direct negative feedback can stabilize a system, the introduction of a delay arising from regulation via an intermediate, or simply from slow accumulation, can very easily lead to oscillations. If a system contains at least three different molecular species and a strong non-linearity, a damped or sustained oscillator may arise (Griffith 1968b). Systems with only two molecular species and without explicit time delays can also oscillate, but they require the presence of a PFBL, creating a switch that drives the oscillation. In such systems, the combination of positive feedback with the depletion of one of the species produces oscillations without an explicit negative feedback loop. These so-called relaxation oscillators switch quickly in one direction and slowly in the other, producing triangular-like waveforms (Sel'Kov 1968). Finally, several natural oscillators are known to integrate positive and negative feedback loops, which is thought to enhance the robustness of the oscillator network to intrinsic or extrinsic fluctuations (Thomas 1981 Thomas et al. 1995 Novák and Tyson 2008 Ferrell et al. 2011).
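
A minimal example of a delayed negative feedback oscillator is a ring of three repressors (the motif behind Griffith's three-species result, and later the repressilator). The parameters below are illustrative: each species decays and is repressed, via a Hill term, by the one before it in the ring, and the intermediate steps supply the delay that turns the NFBL into a clock.

```python
def repression(upstream, alpha=10.0, n=4):
    """Hill-type repression: high upstream activity shuts production down."""
    return alpha / (1 + upstream**n)

x, y, z = 1.0, 2.0, 3.0      # asymmetric start kicks the ring into motion
dt, trace = 0.01, []
for step in range(10000):    # simulate 100 time units by Euler steps
    dx = repression(z) - x   # x produced under repression by z, then decays
    dy = repression(x) - y   # y repressed by x
    dz = repression(y) - z   # z repressed by y
    x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    if step >= 5000:         # record x only after the transient dies out
        trace.append(x)

amplitude = max(trace) - min(trace)   # a sustained swing signals a limit cycle
```

With weaker repression (smaller alpha or n) the same ring damps out to a steady state, which is the switch-versus-clock sensitivity to loop strength discussed above.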

Systems biology of switches and clocks

The importance of switches and clocks as basic modules of biological networks was highlighted at the birth of systems biology (Hartwell et al. 1999). Two contrasting approaches of systems biology modelling are (1) a top-down approach, where large-scale datasets are used to infer an underlying molecular regulatory network and (2) a bottom-up approach, where an abstract model of a regulatory system is derived from existing experimental data, and the model is subsequently tested against additional experimental data (Bruggeman and Westerhoff 2007). The bottom-up approach often involves models that combine feedback loops to explain complex dynamical behaviour, which often include a combination of switches and clocks (Tyson et al. 2003). Indeed, some of the earliest examples of cycles of model refinement and testing (Chen et al. 2000, 2004 Cross et al. 2002) came from the analysis of the cell cycle regulatory network, which combines two switches to control the major cell cycle transitions and an oscillator that is responsible for the periodicity of the process (Novák and Tyson 2008). Oscillators and switches were also shown to be important in the context of spatio-temporal control of cell signalling (Kholodenko 2006). Furthermore, the effect of the coupling between positive and negative feedback loops was also shown to be important for the robust periodicity of oscillators (Tsai et al. 2008). These and several other landmark papers have led to legitimate claims of understanding the functioning of these network motifs (Shoval and Alon 2010) and initial thinking about what could be the algorithms underlying cellular computation (Lim et al. 2013). In recent years, major steps have been taken to understand biological algorithms by synthesizing biological regulatory networks de novo, which aim to compute specific functions.

Chemical reaction network design and synthetic biology

The advent of ever more precise genetic engineering requires an understanding of information processing in reaction-diffusion networks and harnessing the emergence of self-organising properties of such systems. Systems with switch-like and oscillatory behaviours have been a focus of synthetic biology for almost two decades. In a now classic Nature edition from 2000, the genetic toggle switch and the repressilator systems were described, which opened up a new field of biological engineering (Gardner et al. 2000 Elowitz and Leibler 2000). These systems not only serve as models for the engineering of complex emergent behaviours, but also allow us to test our hypotheses on how biological systems use feedback mechanisms within complex networks to function and perform computations. In the past few years, genetic switches and oscillators have also been used in a number of applications.

Synthetic switching systems

The classic genetic toggle switch used two mutually repressing transcription factors, which gives rise to bistability and hysteresis (Gardner et al. 2000 Litcofsky et al. 2012). Subsequently, genetic switches were also constructed using positive autoregulatory feedback loops (Isaacs et al. 2003 Atkinson et al. 2003). More recently, circuits combining mutual repression with positive autoregulatory feedback have been built, including the addition of a single positive feedback loop (Lou et al. 2010) and double positive autoregulatory loops, resulting in a quadrastable switch (Wu et al. 2017). The genetic toggle switch has also been coupled with quorum sensing systems to create a population-based switch, which switched states depending on the local cell density (Kobayashi et al. 2004). In bacterial cells, the cellular context is of increasing interest, and it can affect genetic switch performance in a number of ways, including changes in stability at low molecule numbers (Ma et al. 2012), plus dependence on host growth rate (Tan et al. 2009), sequence orientation (Yeung et al. 2017) and copy number (Lee et al. 2016). This suggests that natural systems have likely evolved mechanisms that are robust to some of these factors. However, gene regulatory networks are only one way to create switch-like behaviours. Alternatives include the use of recombinases, which allow the DNA itself to flip orientation (Friedland et al. 2009 Bonnet et al. 2012 Courbet et al. 2015 Fernandez-Rodriguez et al. 2015), and the use of transcriptional (RNA) systems (Kim et al. 2006). Accompanying theoretical and computational work has been equally diverse, with insights into possible network topologies (Angeli et al. 2004 Otero-Muras et al. 2012), stochasticity (Tian and Burrage 2006 Munsky and Khammash 2010 Jaruszewicz and Lipniacki 2013 Leon et al. 2016), robustness (Kim and Wang 2007 Barnes et al. 2011), time-dependent transient behaviour (Verd et al. 2014), and emergent properties of populations of switches linked by quorum sensing (Kuznetsov et al. 2004 Wang et al. 2007 Nikolaev and Sontag 2016). Following the pioneering work in bacteria, there has now been an explosion of engineered switches for mammalian systems (see Kis et al. 2015 for a comprehensive review), which use components from diverse backgrounds (prokaryotic, eukaryotic and synthetic) and target a variety of applications.

Engineered biological oscillators

Synthetic genetic oscillators have undergone a number of significant developments. The original repressilator was constructed from three transcriptional repressor proteins arranged in a negative feedback cycle (Elowitz and Leibler 2000). Another topology that combined positive and negative feedback was first studied theoretically (Barkai and Leibler 2000) and then constructed in E. coli (Atkinson et al. 2003). An extension of this negative feedback oscillator, combining a further negative autoregulatory feedback loop, showed increased tunability and robustness (Hasty et al. 2002 Stricker et al. 2008). In a series of landmark papers, this network topology was coupled with quorum sensing to create populations of synchronised oscillators at different scales (Danino et al. 2010 Mondragón-Palomino et al. 2011 Prindle et al. 2012). This population-based circuit was eventually used for the treatment of tumours in mice, the oscillatory dynamics causing bacterial cells to lyse and release a chemotherapeutic agent directly into metastatic sites (Din et al. 2016). More recently, in an interesting development, the original negative feedback topology of the repressilator was revisited and re-engineered using detailed stochastic modelling to vastly improve its robustness, so much so that the oscillations remained synchronised without any need for quorum system interactions (Potvin-Trottier et al. 2016). Oscillators have also been implemented at the RNA level (Kim and Winfree 2011), metabolic network level (Fung et al. 2005), and in mammalian cells (Tigges et al. 2009, 2010). The theoretical properties of genetic oscillators have been studied extensively, including design principles (Guantes and Poyatos 2006 Novák and Tyson 2008), robustness (Wagner 2005 Ghaemi et al. 2009 Tsai et al. 2008 Woods et al. 2016 Otero-Muras and Banga 2016) and stochasticity (Vilar et al. 2002 Turcotte et al. 2008).

The engineering of biological systems in all organisms faces similar implementation challenges. Perhaps the main challenge is context dependence, which can occur at multiple levels (sequence, parts, evolutionary and environmental) (Cardinale and Arkin 2012 Arkin 2013). Others include the predictability of transcription and translation (Mutalik et al. 2013a, b), the development of orthogonal part libraries (Wang et al. 2011 Nielsen et al. 2013 Chen et al. 2013b Stanton et al. 2014), resource demand (burden, see later discussion) and impedance matching or retroactivity (balancing input sensitivity and output strengths) (Vecchio et al. 2008 Jayanthi et al. 2013). Eukaryotic systems offer additional challenges over prokaryotes due to their multi-cellularity, more complex genomes and higher levels of regulation (Ceroni and Ellis 2018). These challenges are increasingly being met with an interdisciplinary approach incorporating mathematical modelling, biochemistry, ‘omics’ approaches and, ultimately, a deeper understanding of the biology.

Synthetic biology and computation

Within the field of synthetic biology, a large body of work on computation has focussed on genetic Boolean gates (Moon et al. 2012). In this arena the state-of-the-art in transcription circuitry is the CELLO algorithm, which uses a characterised library of repressor proteins to design functional genetic implementations for any three-input Boolean circuit (Nielsen et al. 2016). Recombinases (Siuti et al. 2013) and the CRISPR/Cas system (Nielsen and Voigt 2014) can also be used to construct Boolean gates, and genetic Boolean circuits have also been combined with the toggle switch to create sequential logic operations (Lou et al. 2010), including a Pavlovian-like conditioning genetic circuit (Zhang et al. 2014). Most recently, work has shown that ribocomputing devices based on RNA operations can be used to create complex logic functions in living cells (Green et al. 2017). Notable examples of the translation of these approaches include cancer cell type discrimination (Xie et al. 2011) and immunotherapy (Nissim et al. 2017), both of which use Boolean logic computations on intracellular mRNA signals within mammalian cells.
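
A common textbook way to model such a genetic Boolean gate (a generic sketch, not the CELLO characterisation itself) is to multiply Hill-type activation terms, so that the output promoter fires only when both inputs are present at high levels:

```python
def hill_activation(inducer, k=1.0, n=2):
    """Fraction of promoter activity contributed by one input (Hill kinetics)."""
    return inducer**n / (k**n + inducer**n)

def and_gate_output(a, b, max_rate=100.0):
    # AND logic: the two activation terms multiply, so output is strong
    # only when both inputs are well above the threshold k.
    return max_rate * hill_activation(a) * hill_activation(b)

HIGH, LOW = 10.0, 0.01   # illustrative inducer concentrations
truth_table = {
    (0, 0): and_gate_output(LOW, LOW),
    (0, 1): and_gate_output(LOW, HIGH),
    (1, 0): and_gate_output(HIGH, LOW),
    (1, 1): and_gate_output(HIGH, HIGH),
}
# Only the (1, 1) input combination gives strong output.
```

Note that the "digital" behaviour is an idealisation of an underlying analog dose-response curve, which is one reason composing many such gates is hard in practice.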

The synthetic switches and oscillators described above have been used in a small number of non-Boolean computing applications inside living cells. For example, genetic switches have been used in signal processing applications including detecting small molecule signals in the mammalian gut (Kotula et al. 2014 Riglar et al. 2017) and glucose sensing (Chen and Jiang 2017). In another landmark study, coordination of genetic oscillators was achieved through coupling of post-translational processing of proteins (Prindle et al. 2014). External input signals in the form of chemical inducers and flow rate were encoded into frequency modulated oscillations. By exploiting the inherent queuing structure of protein degradation, the oscillators became coupled and the corresponding input signals were combined into a single multispectral timeseries encoding both signals (Prindle et al. 2014). The theoretical study of multifunctionality in fixed network topologies has become of great interest recently (Jiménez et al. 2017) and work has shown that a genetic circuit comprising both a toggle switch and a repressilator, known as the AC–DC circuit, has emergent properties such as coherent oscillations, excitability and spatial signal processing (Perez-Carrasco et al. 2018). These examples show that biological systems can be engineered to exploit feedback structures for analog and digital signal processing and that complex computations are possible at different scales. A computer science viewpoint of how biological systems process information and perform computation could help synthetic biology construct more complex systems, further elucidating how natural biological systems function.

Perhaps the most developed area of non-Boolean computing within synthetic biology is molecular programming, which uses nucleic acids (DNA, RNA) as the computational substrate. The use of DNA for computation was first introduced by Adleman to solve an instance of the Hamiltonian path problem (Adleman 1994). It worked by mapping DNA oligomers to edges between nodes in a small network and exploiting the huge parallelism of ~10^19 molecules to compute all possible paths through repeated use of the polymerase chain reaction (PCR). Finally, oligomers of the correct length and containing the correct start and end sequences were extracted, in principle providing solutions to this NP-complete problem. Furthermore, the number of oligomers required is linear in the size of the network. Since then, molecular programming has progressed significantly, and two modern approaches will be discussed in detail in Sect. 3.
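
In silico, the search that Adleman's DNA soup performed in massive parallel can be written as brute-force enumeration over vertex orderings; the small directed graph below is made up for illustration.

```python
import itertools

def hamiltonian_path(vertices, edges, start, end):
    """Brute force: test every ordering of the vertices, much as the DNA
    mixture effectively generated every candidate path at once."""
    edge_set = set(edges)
    for order in itertools.permutations(vertices):
        if order[0] != start or order[-1] != end:
            continue
        # Keep the ordering only if every consecutive pair is an edge.
        if all((a, b) in edge_set for a, b in zip(order, order[1:])):
            return list(order)
    return None

vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (3, 1)]   # directed edges
path = hamiltonian_path(vertices, edges, start=0, end=3)
# path == [0, 1, 2, 3]
```

The electronic version pays for the search with exponential time; the DNA version pays with an exponential number of molecules, which is why the approach does not scale to large instances.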

Now comes the hard part

Applying data science to biology may sound simple, but it’s far from easy. Take the example of “biological optimization”, such as trying to get a yeast or bacterium to produce large amounts of a valuable biomolecule — something that Zymergen does every day.

“Working with microbes isn’t exactly an engineering problem,” as Kurt puts it. That’s because we lack a comprehensive, fundamental understanding of how biology really works. For example, we routinely find ways to improve the performance of a microbe by changing genes that have no known function. Fields like systems and synthetic biology have risen specifically to try to make the engineering of biology “routine”. But compared to building a house or designing a computer, where the inner workings are well understood and very predictable, biology’s complexity often defies engineering.

For this reason, Kurt’s data science team is a combination of specialized deep domain experts and highly interdisciplinary researchers, all of whom communicate across disciplines like machine learning, software engineering, statistics, and biology to achieve their ends. This requires cross-training individuals from each of these backgrounds to work successfully across disciplinary boundaries. Again, easier said than done.

Then, if you’re able to build such an interdisciplinary dream team, you have three big technical challenges on the way to biological optimization: the size of the search space, the cost of measuring biological data, and biology’s complex and poorly understood systems. This trifecta of troublesome problems makes it hard to apply data science in the same way you would in other realms.

Interdisciplinary approaches to dynamics in biology

Biology is dynamic in nature. From ecological systems to embryonic pattern formation, change is at the centre of any biological phenomenon. The last three decades of molecular genetics have been incredibly successful at identifying the components involved in many biological processes, and we now find ourselves at the advent of very exciting times in which new methodologies and technologies are, for the first time, allowing us to address the dynamics of these processes directly. Biologists can now quantify the dynamics of biological processes [1–4], analyse them [2,5,6] and image them [7–9] at unprecedented resolution. These and other related advances have been shifting the way we represent biological phenomena, away from static representations and towards increasingly dynamic, and therefore more realistic, accounts.

Biological dynamics are steadily moving to the forefront of many fields in biology. Increasingly, dynamic perspectives and explanations are challenging the validity of static analyses, which, although generally more tractable from both a theoretical and an experimental perspective, will have to be justified rather than assumed. Accounts of the mechanisms underlying biological phenomena will need to address and explain the timing of the processes being investigated, as well as their components and spatial distribution. Close interdisciplinary collaborations will be required to develop new techniques, methodologies, models, computational tools and conceptual frameworks to address and explain the dynamics that have always characterized biological systems and processes at every level of their organization.

2. Introduction to the theme issue

In the light of these advances in dynamics, and to ensure that this new and growing body of knowledge moves beyond a descriptive level towards mechanistic and causal accounts of biological processes, in February 2020 we hosted a Royal Society Hooke Theo Murphy meeting at Chicheley Hall in Buckinghamshire. The meeting, ‘Interdisciplinary approaches to dynamics in biology', brought together a highly interdisciplinary cohort of scientists, from quantitative biologists working in fields ranging from cell biology to ecology, to live-imaging experts, mathematical modellers and philosophers of biology. By shifting the focus away from any biological process in particular to the dynamics of biological phenomena more generally, the meeting helped find common ground between fields that would otherwise seldom overlap, and exploited the intersection between them to translate methodologies, tools and perspectives. In this theme issue, our authors present some of the core ideas and main topics that emerged from the many discussions held at the meeting.

2.1. Bridging spatio-temporal scales

A focus on timing draws our attention to the many previously unappreciated mechanisms by which biological systems regulate and tune their dynamics. In their review, Busby & Steventon [10] explore the role of tissue tectonics—the movement of tissues relative to one another—in controlling and regulating developmental timing and evolutionary change. They propose that the dynamics of cell signalling and commitment depend on various kinds of timers across different spatio-temporal scales within the developing embryo, and highlight the importance of considering downward causation from the tissue to the single-cell level to understand developmental dynamics. With a similar focus, in their review, Rayon & Briscoe [11] identify the mechanisms controlling developmental pace and tempo, while arguing for the value and explanatory potential of cross-species comparisons to understand developmental timing across evolutionary time-scales. Both papers also illustrate how understanding developmental timing is critical to advancing bio-engineering and translational medicine.

The challenges that arise from studying the interplay between different dynamical scales are not restricted to molecular and cellular processes. In their paper, Brejcha et al. [12] study the coevolutionary process of mimicry, defined by the interaction of two different dynamic processes—prey–prey interactions and predator perception. By formalizing this interaction using an attractor field model, the authors show how novel mathematical frameworks are key to capturing such coupled dynamics.

2.2. Dynamical modules

The inherent complexity of including time in our conceptualization of biological processes poses the question of how best to understand biological processes in general and their dynamics in particular. Jaeger & Monk [13] present a thorough review of the different accounts of biological modularity to date, focusing on how biological dynamics can be understood as modular too. Centring their argument on the dynamics of biological processes, the authors propose top-down approaches to decompose systems' dynamics, and use a wide range of examples from metabolism and from cell and developmental biology to explain the advantages of adopting such a framework, often in concomitance with more traditional approaches.

Clark's paper explores how the concept of dynamical modules can be applied to understand the evolution of segmentation [14]. By defining and combining different dynamical modules, the author is able to describe the relationship and possible evolutionary transitions between the different modes of segmentation observed in vertebrates and arthropods. This approach illustrates that understanding and insight can be obtained by focusing on dynamics without the need to consider any of the gene regulatory mechanisms that generate them.

Finally, diFrisco and Jaeger define homology of processes [15] as a conceptual framework from which to address the evolution of biological dynamics. The authors propose a marked departure from previous accounts of homology, which have systematically focused on establishing homology at the level of individual genes, networks and traits, but not at the level of the developmental process. They illustrate through examples how processes can be homologous without their components needing to be, and present a set of criteria to help determine homology at the level of the process.

Case studies of `pathways as programs'

Having described the general algorithmic approach to modelling pathways as programs, we need to know whether it actually works in practice as a useful tool for the biologist. We describe three examples that illustrate different ways in which `pathways as programs' have been used to identify new biological findings through the application of process-calculus and model-checking methods.

Exploring the RAS-RAF-MAPK pathway

An early study of the application of process-calculus formulations combined with model checking used a model of the well-known RAS-RAF-MAP kinase (MAPK) pathway (Calder et al., 2006). One of the first findings was that the original description of the pathway (which had been employed for mathematical modelling) `deadlocked' when articulated as a computer program, indicating that the biological formulation was incomplete or inconsistent. Once the model had been reformulated, it was shown that the dynamics of MAPK activation were inhibited by the RAF-binding protein RAF kinase inhibitor protein (RKIP), with differential effects on the singly and doubly phosphorylated forms of MAPK. This finding was in accord with experimental data. The lessons from this study are that a well-understood pathway can be successfully formulated and studied using process-calculus and model-checking techniques, and that biologists can make errors that formal logic (see below) can detect.
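
The `deadlock' the model checker exposed can be illustrated in miniature. The sketch below is a generic reachability check of our own devising, not the Calder et al. model: it exhaustively explores the state space of a toy phosphorylation system and reports any reachable state from which no reaction is enabled, which is exactly what a model checker flags as a deadlock.

```python
from collections import deque

def reachable_deadlocks(initial, transitions):
    """Breadth-first exploration of a finite state space.

    `transitions` maps a state to the set of successor states; a
    reachable state with no successors is a deadlock.
    """
    seen, deadlocks = {initial}, []
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        succs = transitions(state)
        if not succs:
            deadlocks.append(state)
        for nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return deadlocks

# Toy pathway: phosphorylation consumes one ATP token, but the model
# (deliberately) omits ATP regeneration -- the kind of incompleteness
# in a written pathway description that model checking can expose.
def transitions(state):
    atp, s, sp = state          # ATP tokens, substrate, phospho-substrate
    succs = set()
    if atp > 0 and s > 0:
        succs.add((atp - 1, s - 1, sp + 1))   # phosphorylation
    if sp > 0:
        succs.add((atp, s + 1, sp - 1))       # dephosphorylation
    return succs

print(reachable_deadlocks((2, 3, 0), transitions))
```

Starting from two ATP tokens and three substrate molecules, exploration finds the dead state in which ATP is exhausted and all substrate is dephosphorylated, so no rule can fire; repairing the description means adding the missing regeneration step.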

Analysing FGF signalling dynamics

Our own work has centred on a process-calculus model of the fibroblast growth factor (FGF) pathway using stochastic simulation (using BioSPI) and probabilistic model checking (using PRISM) (Kwiatkowska et al., 2006; Heath et al., 2007). Our motivation in this study was to analyse the dynamics (i.e. the duration, amplitude and time-dependent behaviour) of FGF signalling by evaluating a number of different positive and negative regulation mechanisms that had been reported in the literature. These include the action of a tyrosine phosphatase (SHP2), the role of FGF-receptor (FGFR) inactivation by Src-kinase-mediated internalisation, the role of ubiquitin-mediated proteolysis, and the action of the signal attenuator Sprouty, which has been argued to act by sequestration of the signalling adaptor growth factor receptor-bound protein 2 (GRB2) (Hanafusa et al., 2004).

We first constructed and verified a full version of the model (Fig. 2), showing that it yielded outcomes that accorded well with experimental data. We then analysed versions of the model in which individual components had been systematically eliminated (Fig. 3) to study the relative significance of different means of controlling signal propagation. This is in effect an `in silico genetics' approach. We also undertook a parameter exploration approach, in which we systematically varied the rate of certain reactions to emulate the effects of known (or hypothetical) modes of drug inhibition.

Our main conclusion from analysing the model was that the most prominent determinants of the dynamics of FGF signalling are the relative rates of receptor-dependent kinase activation pathways and Src-kinase-activated inhibition pathways. Thus, removal of Src from the model leads to extended duration of MAPK activation compared to the full model (Fig. 3), and this was subsequently experimentally verified (Sandilands et al., 2007). Another prediction from these experiments is that inhibiting the phosphatase activity of Shp2 paradoxically destabilises and then suppresses MAPK activation (Fig. 3). This arises because the phosphatase acts concurrently on both positive and negative pathways, and the negative pathway in the model `wins'. The conclusion here is that process calculi can be used to reason about a complex signalling pathway in an informative manner.

Further work in our group on formulating and interrogating models of the Wnt signalling pathway (Tymchyshyn and Kwiatkowska, 2008) and the JAK-STAT (Janus kinases signal transducers and activators of transcription) pathway (Guerriero et al., 2007; Guerriero et al., 2009) has indicated the broad applicability of the algorithmic approach to the analysis of signal transduction networks.

Vulval-cell fate determination in C. elegans

Fisher and coworkers (Fisher et al., 2005; Fisher et al., 2007) have created a formal computer model for determination of the fate of Caenorhabditis elegans vulval cells that is based on the biological model of Sternberg and Horvitz (Sternberg and Horvitz, 1989). In the Sternberg model, the developmental fate of six vulval precursor cells (VPCs) depends on the integration of two signals, an inductive EGF signal emanating from an anchor cell (AC) and a lateral signal (of the Notch/Delta class) that is induced in response to the primary inductive signal in a time-dependent manner (Fig. 4). The developmental fate of the VPCs is determined by their distance from the AC (and thus the primary induction signal) and their receipt of the lateral inhibition signal: those receiving the strongest primary induction signal adopt a primary fate (1° in Fig. 4), those receiving the strongest lateral inhibition signal adopt a secondary (2°) fate, and the default tertiary (3°) fate is adopted by cells that receive neither class of signal. The system therefore exhibits interdependency, feedback and time delays.

A computational approach to analyzing FGF signalling dynamics. (A) Example of a text-based narrative of the FGF signaling pathway, which provides the basis for translation into a process-calculus language for execution and model checking (from Kwiatkowska et al., 2006). The full execution can be studied at In the narrative, molecules (processes) are denoted in italics (e.g. FGF). Specified sites on molecules are denoted in brackets [e.g. (Y653,Y654)FGFR]. Interactions between molecules (communications) are denoted by colons (e.g. FGF:FGFR). Complexes (e.g. FGF:FGFR) are treated as new processes. Modifications (state changes) are denoted in bold (e.g. phosphorylates). Phosphorylation (state change) of specified sites is denoted by adding P after the site identification (e.g. Y653P). Modified molecules [e.g. (Y653P,Y654P)FGFR] are treated as new processes. Each step (line) of the narrative describes an interaction (communication) between molecules (processes), resulting in a modification (state change). Note that, as the narrative develops, the number of types of molecules (processes) changes as the result of previous events. Note also that some steps exhibit dependencies (i.e. a requirement for a particular molecular species to be created in the course of execution) whereas others are present from the start. Thus, multiple steps in the narrative occur concurrently. The narrative can be readily modified by the removal of specific steps or addition of new steps. (B) Diagrammatic version of the FGF signalling pathway articulated in A. Binding reactions are denoted by black arrows, phosphorylation reactions by blue arrows and inhibitory (dephosphorylation or degradation) reactions by red arrows. The RAS, RAF, MAPKK and MAPK components are not explicitly included in the model in A.

In the computer model, quantitative rate parameters were simplified as HIGH (over-expression of the primary induction signal), MEDIUM (wild-type signal) or OFF (absence of signal). This model has 48 possible initial states corresponding to 48 combinations of mutations or genotypes. Model checking was employed to interrogate the model by calculating the fate of the six VPCs and the reproducibility of all 48 conditions. These conditions correspond to mutations that had been described in the literature and mutant combinations that had not been generated. Of the initial 48 conditions, 44 yielded a stable fate state (i.e. repeated execution of the model yielded the same outcome), including those that conformed to the published mutant phenotypes. The remaining four mutants yielded an unstable fate (i.e. repeated execution of the model yielded different outcomes). Again, using model checking to query the program, it was found that unstable fates were dependent upon variations in the timing of the lateral inhibition signal, which would not have been obvious from analysis of the published mutant phenotypes. The computer-generated phenotypes were then verified by creating the appropriate C. elegans mutants and revealing the unstable fate, providing evidence for the significance of timing in the sequential induction of the inductive and lateral signals during VPC development.
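
The key phenomenon here, that repeated execution of the same model can yield different outcomes, can be caricatured in a few lines. The sketch below is our own toy illustration, not the Fisher et al. model: a cell commits to the first above-threshold signal it receives, and the arrival order of signals is nondeterministic, standing in for the timing variations the model checker explored by enumerating all interleavings.

```python
import itertools

def fate(order):
    """Commit to the first signal seen: inductive -> 1°, lateral -> 2°,
    neither -> default 3° fate."""
    for signal in order:
        if signal == "inductive":
            return "primary"
        if signal == "lateral":
            return "secondary"
    return "tertiary"

def outcomes(signals):
    """Exhaustively enumerate all arrival orders, as model checking
    enumerates all executions, and collect the resulting fates."""
    return {fate(order) for order in itertools.permutations(signals)}

# Both signals arriving at comparable times: the fate is unstable,
# i.e. repeated executions can yield different outcomes.
print(outcomes(["inductive", "lateral"]))
# Lateral signal absent: every execution yields the same stable fate.
print(outcomes(["inductive"]))
```

When the outcome set has more than one element, the genotype is timing-sensitive; this is the kind of instability that would not be obvious from a single simulation run but falls out immediately from exhaustive exploration.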

Example of the investigation of the dynamic behaviour of the FGF pathway by testing the removal of components. The output represents the activation of MAPK (denoted by GRB2 bound to FRS2). Traces show the behaviour of the full model (blue), no SPRY (green), no SHP2 (red) and no Src (turquoise). The model predicts that removal of Src produces extended signalling duration or failure to decay (Sandilands et al., 2007).

This example shows the ability of model checking to test all possible behaviours of a program (in this case, all possible mutant phenotypes) to guide selection of a biological experiment. This is a case in which modelling has accelerated the pace of biological discovery by enabling the prioritisation of experiments.

Graphical model of the C. elegans vulval-cell fate specification pathway [modelled by Fisher et al. (Fisher et al., 2007)]. AC is the anchor cell and P3-P8 are the vulval precursor cells. 1°, 2° and 3° denote the normal fates of particular vulval cells. The thick black arrow represents the primary inducing signal from the AC. The thickness of the arrows indicates the relative levels of signal received by the three VPCs shown. In the absence of an inducing signal (as in P3, P4 and P8), the pathway is below the threshold needed for induction and the VPCs adopt the 3° fate. A high level of inducing signal (P6) induces the 1° fate. A high inducing signal also results in the production of a strong lateral signal (blue arrows) by P6 and the suppression of 1° responses in P5 and P7. P5 and P7 thus adopt the 2° fate.

The examples discussed above are encouraging and indicate that the `pathway as computer program' concept can generate new biological insights, has the potential to dramatically reduce the time and cost of exploring experimental `space', and can be used to reason about complex biological processes in a rigorous and formal manner.

Goal differences

Evolution has managed to develop a neural architecture that can accomplish many tasks. Several studies have shown that our visual system can dynamically tune its sensitivities to the goals we want to accomplish. Creating computer vision systems that have this kind of flexibility remains a major challenge, however.

Current computer vision systems are designed to accomplish a single task. We have neural networks that can classify objects, localize objects, segment images into different objects, describe images, generate images, and more. But each neural network can accomplish a single task alone.

“A central issue is to understand ‘visual routines,’ a term coined by Shimon Ullman: how can we flexibly route visual information in a task-dependent manner?” Kreiman said. “You can essentially answer an infinite number of questions on an image. You don’t just label objects, you can count objects, you can describe their colors, their interactions, their sizes, etc. We can build networks to do each of these things, but we do not have networks that can do all of these things simultaneously. There are interesting approaches to this via question/answering systems, but these algorithms, exciting as they are, remain rather primitive, especially in comparison with human performance.”

The computational stance in biology

The goal of this article is to call attention to, and to express caution about, the extensive use of computation as an explanatory concept in contemporary biology. Inspired by Dennett's ‘intentional stance’ in the philosophy of mind, I suggest that a ‘computational stance’ can be a productive approach to evaluating the value of computational concepts in biology. Such an approach allows the value of computational ideas to be assessed without being diverted by arguments about whether a particular biological system is ‘actually computing’ or not. Because there is sufficient disagreement among computer scientists about the essential elements that constitute computation, any doctrinaire position about the application of computational ideas seems misguided. Closely related to the concept of computation is the concept of information processing. Indeed, some influential computer scientists contend that there is no fundamental difference between the two concepts. I will argue that, despite the lack of widely accepted, general definitions of information processing and computation: (1) information processing and computation are not fully equivalent, and there is value in maintaining a distinction between them; and (2) such value is particularly evident in applications of information processing and computation to biology.

This article is part of the theme issue ‘Liquid brains, solid brains: How distributed cognitive architectures process information’.

1. Introduction: computational biology and biological computation

In 1960, the physicist Eugene Wigner published the now-classic ‘The Unreasonable Effectiveness of Mathematics in the Natural Sciences' in which he explored the reasons for the seemingly ubiquitous value of mathematics in the physical sciences [1]. Anyone familiar with the current biological literature might well expect to find an analogous paper entitled ‘The Unreasonable Effectiveness of Computation in the Biological Sciences’. As evidenced by many of the articles in this issue and others throughout the biological literature, the ideas of computation and computing are becoming increasingly pervasive in describing and explaining biological phenomena.

Computers and computation have been fellow travellers with biology since modern computers were invented in the past century. As in physics and most other sciences, computers quickly became essential tools for experimental control and data acquisition, for data analysis, for modelling and theory development, and for communication and publishing (e.g. [2,3]). Depending on which of ‘computation’ and ‘biology’ is noun and which is adjective, two broad scientific sub-disciplines have developed. While they are different enough to be distinguished, the boundaries are fuzzy and do not encourage any attempt at rigid distinction.

Computational biology (including bioinformatics) generally refers to the use of computational techniques in service of various branches of biological science, to ‘the understanding and modeling of the structures and processes of life’ and to the ‘develop[ment of] algorithms or models to understand biological systems and relationships’. Bioinformatics focuses on the development and application of large-scale databases of biological information, in particular databases in molecular biology where the approach developed with protein and genetic data [4].

Biological computation, in contrast, focuses on the use of ideas from computation as theoretical and explanatory concepts in biology. In what is perhaps the clearest articulation of this perspective, Melanie Mitchell states:

the term biological computation refers to the proposal that living organisms themselves perform computations, and, more specifically, that the abstract ideas of information and computation may be key to understanding biology in a more unified manner… [I]t is only the study of biological computation that asks, specifically, if, how, and why living systems can be viewed as fundamentally computational in nature [5, p. 2].

It is this role of computation in biology that has seen such ubiquitous application in recent years and that will be the focus of this paper.

Both computation and information processing are abstract ideas that were originally developed in non-biological domains, for purposes other than a better understanding of biological phenomena. The fact that both sets of ideas have found valuable applications in biology speaks to their inherent generality. However, a fundamental issue is whether they are ‘too general’. That is, if every biological phenomenon is computational (or, similarly, if every biological phenomenon is information processing), then there seems to be little gained by the application of those concepts. I distinguish between information processing and computation in biological systems because I believe there are cases in which we are tempted to attribute computation to a biological phenomenon, when a lesser attribution of information processing (without computation) would be just as effective.

2. Alternative definitions of computation

In order to address the questions of ‘if, how, and why biological systems can be viewed as fundamentally computational in nature’, one needs a reasonably clear and generally accepted definition of ‘computation’, and how that concept is similar to and different from the related ideas of information and information processing. Perhaps not surprisingly, there remain significant differences among computer science experts about those definitions. An excellent discussion of various alternatives is the symposium ‘What is computation?’ sponsored and published by the leading computer science and engineering organization, the Association for Computing Machinery (ACM) [12].

Contributors to the ACM symposium, all experts in computer science, offered a number of different definitions of computation, not mutually exclusive but offering different focus and emphasis.

(a) Formal definitions of computation

The mid-1930s was an extraordinary time in the history of computers and computer science. In just a few years, Gödel, Church and Turing each made what would become key contributions to formal definitions of computing that were subsequently shown to be equivalent: recursive functions (Gödel), lambda expressions (Church) and, most famously, the state sequence of an abstract machine with tape and control unit (Turing's ‘Universal Machine’) [13–15]. This work provides important background for early textbook definitions of computing:

The standard formal definition of computation, repeated in all the major textbooks, derives from these early ideas. Computation is defined as the execution sequences of halting Turing machines (or their equivalents). An execution sequence is the sequence of total configurations of the machine, including states of memory and control unit. The restriction to halting machines is there because algorithms were intended to implement functions: a nonterminating execution sequence would correspond to an algorithm trying to compute an undefined value. [12]
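
This textbook definition can be made concrete in a few lines. The sketch below is our own illustration: it simulates a tiny Turing machine (a unary incrementer of our own devising) and records its execution sequence of total configurations, which under the definition quoted above is the computation itself.

```python
def run_turing(tape, rules, state="start", pos=0, max_steps=100):
    """Simulate a one-tape Turing machine; return the execution sequence.

    `rules` maps (state, symbol) -> (new_symbol, move, new_state).
    The returned list of (state, position, tape) configurations is the
    'execution sequence' of the formal definition; reaching the halt
    state makes this a computation in the strict sense.
    """
    tape = dict(enumerate(tape))      # sparse tape; '_' is the blank
    trace = []
    for _ in range(max_steps):
        trace.append((state, pos, dict(tape)))
        if state == "halt":
            return trace
        symbol = tape.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += {"R": 1, "L": -1, "S": 0}[move]
    raise RuntimeError("no halting configuration reached")

# Unary incrementer: scan right over 1s, write a 1 on the blank, halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "S", "halt"),
}
trace = run_turing(["1", "1", "1"], rules)
final_tape = "".join(v for _, v in sorted(trace[-1][2].items()))
print(final_tape)   # four 1s: the unary input 3 incremented to 4
```

The `max_steps` guard is where the halting problem shows up in practice: in general one cannot decide in advance whether an arbitrary rule table will ever reach the halt state.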

(b) Computation as algorithm

An algorithm is a step-by-step procedure for accomplishing a specified goal. Today the term is most commonly encountered in the computing context, but its more general definition applies widely in other contexts. A recipe is an algorithm, for example. Defining computation in terms of algorithms is closely related to the formal definitions just discussed, with particular emphasis on algorithms as sequences of steps needed to solve a specified mathematical problem or accomplish a given task [12].
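
As a one-line reminder of what is meant, here is a sketch of one of the oldest algorithms of all, Euclid's procedure for the greatest common divisor, chosen purely as a familiar example of a step-by-step procedure that provably terminates.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) by
    (b, a mod b) until the remainder is zero, then halt with a."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))   # prints 21
```

Each loop iteration is one 'step' of the procedure, and the strictly decreasing remainder guarantees the halting that the formal definition above requires.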

(c) Computation as symbol manipulation

This perspective emphasizes that both the problem and its solution must be encoded in the form of symbols, that each step (state transition) in the computation is a manipulation of symbols that transforms one set of symbols (the problem set) into another (the solution set), and that many intervening symbol transformations may be needed for intermediate steps [16].

(d) Computation as process

As noted in the quotation from Denning above, the issues of whether a particular machine halts and whether it is possible in advance to determine whether a particular machine will or will not halt (the ‘halting problem’) were important considerations in Turing's original formulations [15]. Subsequently, a number of computer scientists have argued that restricting computation to machines that halt is too limited a perspective, and that many computations, such as operating systems, are specifically designed NOT to halt but to run continuously. Indeed, for such systems halting is anathema, not the sign of a properly completed computation! This perspective has led to proposals that computation be viewed as process:

…the program is a description of the process, the computer is the enactor of the process, and the process is what happens when the computer (or, more correctly, the processor—since a computer may have multiple processors) carries out the program [17].

This view naturally incorporates both non-terminating and non-deterministic computations, which are significant advantages in the view of many.

(e) Digital versus analogue computation

In the variety of definitions considered thus far, computation has been mostly (and often implicitly) considered to be a discrete or digital (usually binary) process. The discrete states of a Turing machine are a clear example. But well before Gödel, Turing and Church [13–15], numerous applications of analogue computing, that is computing based on continuous processes, were used to solve important engineering problems. Lord Kelvin's tidal analyzer [18] and Vannevar Bush's differential analyzer [19] are early examples, and subsequent developments in electrical engineering resulted in a number of differential equation solvers. Should these important examples of problem-solving by machine be included in a formal definition of computing? As Denning asks: ‘Why is solving a differential equation on a supercomputer a computation but solving the same equation with an electrical network is not?’ [12, p. 6]. And as we will see below, analogue computing will become an important element in considering the application of computational concepts to biology.
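
Denning's rhetorical question can be grounded by writing out the digital half of the comparison. The sketch below, an illustration of ours with arbitrary parameter values, numerically solves the exponential-decay equation dx/dt = -kx, the same equation that a simple RC electrical network (an analogue computer) solves continuously with voltages.

```python
import math

def euler_decay(x0=1.0, k=1.0, dt=1e-4, t_end=1.0):
    """Digitally integrate dx/dt = -k*x by discrete Euler steps,
    approximating what an RC circuit computes as a continuous voltage."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-k * x)     # one discrete update step
        t += dt
    return x

approx = euler_decay()
exact = math.exp(-1.0)         # closed-form solution at t = 1
print(approx, exact, abs(approx - exact))
```

The digital solver approximates the continuous solution to within an error set by the step size, while the analogue network realizes it directly in its physics; insisting that only one of the two is ‘computing’ is hard to defend, which is Denning's point.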

(f) Computation as representations and transformations thereof

This perspective, offered by Denning [12] as an inclusive summary of many aspects of the preceding definitions, emphasizes the importance of the representational role of symbols and of the processing or transformational role of the operations on symbols in symbol manipulation.

(g) Computation as information processing

Continuing the effort to broaden definitions of computation, Rosenbloom [20] contends that computation should be defined very generally as information and transformations of information as opposed to other definitions that emphasize process, algorithm and representation. This close relationship between the concepts of information processing and computation is also evident in Mitchell's definition of biological computation above, in which a statement that ‘biological computation refers to the proposal that living organisms themselves perform computations' is followed immediately in the same sentence by ‘…more specifically, that the abstract ideas of information and computation may be key to understanding biology in a more unified manner’ [5, p. 2]. So are computation and information processing just different terms for the same underlying concepts? Or do they differ in identifiable and substantive ways?

3. Information processing and computation

Perhaps it should come as no surprise that the concepts of information and information processing, despite their ubiquitous use in contemporary science, engineering, commerce and public discourse, are also characterized by the lack of clear, generally agreed upon definitions (Rocchi [21] assembled over 25 definitions from the literature, and that was in 2010!).

Claude Shannon, in his landmark 1948 paper, ‘A Mathematical Theory of Communication’, is generally credited as the father of information theory (cf. [22,23]). But that paper, and the essence of Shannon's contribution [21], were as much about a theory of communication as a theory of information per se [24]. Shannon emphasized that his definition, framed in terms of the signal actually transmitted relative to the ensemble of signals that could have been transmitted, was only one of many possible useful definitions:

The word ‘information’ has been given different meanings by various writers in the general field of information theory. […] It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field ([25], p. 180).

Focused primarily on the engineering problem of designing effective communication devices, Shannon took the radical step of defining information in a manner that completely eliminated meaning from the definition. ‘Shannon information’ was defined without regard to the content or meaning of the signal or its alternatives. This separation of information from meaning, of signals from semantics, has allowed ‘Shannon Information’ to become the dominant mathematical and scientific characterization of information, extensively applied not only to engineering problems, but to a wide variety of phenomena in the physical, biological and social sciences.

Should we allow the similarities and sometimes overlapping usages of information processing and computation to drive us to treat them synonymously? Despite Rosenbloom's [20] suggestion that we define computation as information processing, I contend that the two concepts are not identical and that there is real value in attempting to distinguish between them, despite their close relationships.

The sub-set/super-set relationship that I believe best characterizes the relationship between information processing and computation is illustrated in figure 1.

Figure 1. Proposed taxonomy of information processing and computation.

Starting with the ‘physical stuff’ that constitutes the universe as a whole, each lower level in the diagram divides the preceding level into two distinct parts that completely characterize the preceding higher level.

The entire universe is divided into physical stuff that is Potential Information and Actual Information (and systems that process it). Potential information is just variance or entropy: variability in some properties of a set of objects. As Shannon showed, such variance across a set of physical objects or signals is essential for them to be used to transmit information. Actual information requires, in addition to the necessary variance among objects or signals, the demonstration that some system or other actually uses that variance to transmit information, that is, to reduce uncertainty about the problem under consideration.
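Shannon's measure makes the notion of potential information concrete: the entropy of an ensemble quantifies the variability that a system could, in principle, exploit. A minimal sketch (the distributions below are illustrative, not drawn from the text):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: the variability across a set of alternatives."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Eight equally likely alternatives carry 3 bits of potential information...
print(shannon_entropy([1/8] * 8))               # 3.0
# ...while a heavily skewed ensemble carries far less.
print(round(shannon_entropy([0.9, 0.1]), 3))    # 0.469
```

Whether this potential information is also actual information is, as the paragraph above stresses, not a property of the numbers: it depends on demonstrating that some system uses the variance to reduce uncertainty.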

To take what is perhaps a trivial example, the pattern of stones on the hillside outside my office may or may not be information in the Shannon sense. In my terminology, they clearly are potential information (as are the atoms and molecules that constitute the stones, as well as the location and height of the hills on which the stones rest). But whether or not they are actual information depends upon determining that some system or other is using those stones to convey information in the Shannon sense.

At the next level down, that sub-set of the universe that is Actual Information and Information Processing Systems is in turn divided into two exhaustive components: non-computational systems and computational systems. This division expresses the key idea that while all computational systems are information processing systems, not all information processing systems are computational.

The preceding perspectives suggest a set of criteria, phrased as questions, for judging whether a given system can usefully be treated as computational:

Formal definitions: Are there aspects of the system that meet the formal definitions of computing?

Algorithms: Are there elements or properties of the system that can be characterized as algorithmic?

Symbol manipulation: Is there evidence in the system and its operation for symbol manipulation?

Processes: Are there identifiable processes or subprocesses that are continuous rather than terminating (halting)?

Representations and transformations: Do internal aspects of the system represent aspects outside the system itself and do aspects of the operation of the system constitute transformations of those representations?

Most would be unwilling to attribute computation to planetary mechanics (e.g. ‘the earth computes its orbit’) even though it is perfectly possible to use computational techniques to model and predict planetary orbits with great precision.

Most would be willing to attribute computational capabilities to the laptop on which this manuscript is being written, because of our understanding that both the hardware that constitutes the system and the software used to provide instructions to the hardware meet many of the criteria listed above.

4. The computational stance in biology

But what about more interesting intermediate cases?

Does the human brain compute? If so, what and how?

Does an ant hill compute? What are the symbols? What is being represented?

Does a flock of birds compute? If so, what are the algorithms and processes?

It should come as no surprise that these intermediate cases all come from biology. All are cases in which one or more researchers have attributed computational properties to the systems in question, with varying degrees of explicitness in consideration of whether and to what degree computational properties apply. While I believe that arguments about whether a given biological system ‘actually computes’ can be counterproductive, I also believe that attributing such properties to a biological system simply because it is biological is unwarranted and unproductive.

Inspired by Dennett's analysis of physical, design and intentional ‘stances’ in the philosophy of mind [26,27] (see also [28,29]), I suggest that a ‘computational stance’ can be a productive approach to the application of computational ideas in biology.

My appropriation of the ‘stance stance’ is rather different in its particulars from the intentional stance in philosophy of mind. In particular, it does not raise the issues of realism that Dennett's critics are quick to offer (e.g. [28,30]). Nevertheless, by forgoing any deep arguments about ‘computational realism’ (i.e. that a biological system ‘really computes’), I am aware that some may consider the proposal to be so weak and timid as not to be worth their effort.

5. A real biological example

In order to illustrate the value of attempting to distinguish between a biological phenomenon that reflects computation versus another that reflects information processing but not computation, consider the famous ‘waggle dance’ of the honeybee [31].

von Frisch showed that honeybees communicate to their hive mates the location and distance of attractive food sources by means of a figure-eight-like dance with alternating left and right loops [31]. At the end of each loop, the honeybee ‘waggles’ its body in such a manner that the angle of the waggle represents the angle between the sun and the direction to the food source, and the duration of the waggle represents distance to the food source.
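The representational scheme von Frisch described can be caricatured in a few lines of code. This is only an illustrative sketch: the linear calibration (here, one second of waggle per kilometre) and the degree notation are assumptions of the example, not honeybee biology.

```python
def encode_dance(food_bearing_deg, sun_bearing_deg, distance_m, secs_per_km=1.0):
    """Map a food location to a (waggle angle, waggle duration) 'symbol'."""
    angle = (food_bearing_deg - sun_bearing_deg) % 360   # angle relative to sun
    duration = distance_m / 1000 * secs_per_km           # duration encodes distance
    return angle, duration

def decode_dance(angle_deg, duration_s, sun_bearing_deg, secs_per_km=1.0):
    """An observer bee's inverse transformation back to a location."""
    bearing = (sun_bearing_deg + angle_deg) % 360
    distance = duration_s / secs_per_km * 1000
    return bearing, distance

a, d = encode_dance(130, 40, 1500)
print(decode_dance(a, d, 40))  # (130, 1500.0)
```

The point of the toy is structural: there is a representation (angle, duration), and transformations into and out of it, which is exactly the sense in which the dance meets the criteria discussed below.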

Many would consider the acquisition and storage of information about food sources, and communication of that information by foraging honeybees, a quintessential example of biological computation. In terms of the criteria for computation identified above, the dance behaviour can be effectively described as algorithmic: there is clear evidence of symbolic representation (direction and distance from the hive) and there are identifiable processes associated with different components of the dance. Taken together, Denning's ‘representations and transformations of those representations' are an effective characterization of the waggle dance phenomenon.

Now consider a related phenomenon exhibited by the same honeybees. More recent studies [32] have demonstrated that other honeybees in the hive can intervene to stop the waggle dance if they have had negative experiences (predators, for example) at the food source for which the waggle dance is being performed. This information is communicated by a head-butt against the dancer, which stops the dance and the recruitment of other observers to the food source.

By contrast with the waggle dance, this ‘danger’ or ‘stop’ signal communication is much simpler. At the level of the individual honeybee issuing the ‘stop’ signal, there is no complex symbolic representation of direction or distance, no elaborate sequence of behaviours, just the head-butt signal conveying information about a potential food site. Even though one could certainly describe the ‘stop’ signal phenomenon in computational terms, it is adequately explained at the level of the individual honeybee within the framework of information and communication in the Shannon, and Shannon and Weaver senses. At the level of the entire hive, however, Seeley et al. [33] have convincingly argued for a computational account.

6. Conclusion

Given the focus of this Special Issue, I have tried unsuccessfully to identify ways in which the criteria for computation can contribute to the discussion of the similarities and differences between ‘solid’ and ‘liquid’ brains. I conclude that the distinction between ‘solid’ and ‘liquid’ reflects structural and architectural differences—different physical implementations [34]—more than it reflects any systematic association with computation versus information processing.

My goal has not been to reach a principled determination about whether a particular biological system computes or not, because I do not believe such a principled determination is achievable (hence my emphasis on ‘the computational stance’). Rather, it is the process of examining whether and to what extent the biological system in question exhibits each of the criteria for computation that I believe will be helpful to understanding and explaining that system's operation. Ultimately, it is that criterion of utility for the explanatory goals that I believe to be more important than seeking a ‘does it compute or not?’ determination.

We may often find ourselves in the position identified by Frailey [17]:

Perhaps biological systems carry out processes that do not quite fit our notions of algorithmic, but they exhibit properties that are more readily understood because of what we know about computation.

Hardware differences

In the introduction to Biological and Computer Vision, Kreiman writes, “I am particularly excited about connecting biological and computational circuits. Biological vision is the product of millions of years of evolution. There is no reason to reinvent the wheel when developing computational models. We can learn from how biology solves vision problems and use the solutions as inspiration to build better algorithms.”

And indeed, the study of the visual cortex has been a great source of inspiration for computer vision and AI. But before being able to digitize vision, scientists had to overcome the huge hardware gap between biological and computer vision. Biological vision runs on an interconnected network of cortical cells and organic neurons. Computer vision, on the other hand, runs on electronic chips composed of transistors.

Therefore, a theory of vision must be defined at a level that can be implemented in computers in a way that is comparable to living beings. Kreiman calls this the “Goldilocks resolution,” a level of abstraction that is neither too detailed nor too simplified.

For instance, early efforts in computer vision tackled the problem at a very abstract level, in a way that ignored how human and animal brains recognize visual patterns. Those approaches proved to be very brittle and inefficient. On the other hand, studying and simulating brains at the molecular level would be computationally intractable.

“I am not a big fan of what I call ‘copying biology,’” Kreiman told TechTalks. “There are many aspects of biology that can and should be abstracted away. We probably do not need units with 20,000 proteins and a cytoplasm and complex dendritic geometries. That would be too much biological detail. On the other hand, we cannot merely study behavior—that is not enough detail.”

In Biological and Computer Vision, Kreiman defines the Goldilocks scale of neocortical circuits as neuronal activities per millisecond. Advances in neuroscience and medical technology have made it possible to study the activities of individual neurons at millisecond time granularity.

And the results of those studies have helped develop different types of artificial neural networks, AI algorithms that loosely simulate the workings of cortical areas of the mammal brain. In recent years, neural networks have proven to be the most efficient algorithm for pattern recognition in visual data and have become the key component of many computer vision applications.
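The lineage from cortical receptive fields to artificial networks can be hinted at with a toy example. The sketch below hand-wires a single oriented filter, loosely analogous to a simple cell's preferred edge orientation; real networks learn thousands of such filters from data, and the image and kernel here are invented for illustration.

```python
# A 4x4 binary image containing a vertical dark-to-bright edge.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1]]   # responds where a dark pixel sits left of a bright one

def convolve(img, ker):
    """Valid (no-padding) 2D cross-correlation of an image with a kernel."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(ker[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

print(convolve(image, kernel))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

The filter fires only where its preferred feature occurs, which is the basic operation that convolutional networks stack and learn at scale.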

Invited Speakers and Talks

Name Affiliation
Alla Borisyuk University of Utah
Duan Chen University of North Carolina at Charlotte
Veronica Ciocanel Mathematical Biosciences Institute
Casey Diekman NJIT
German Enciso UC Irvine
Daniel Forger University of Michigan
Jeff Gaither Nationwide Children’s Hospital
Juan Gutierrez University of Texas San Antonio
Wenrui Hao Pennsylvania State University
Sam Isaacson Boston University
Hye-won Kang UMBC
Jae Kyoung Kim KAIST
Yangjin Kim Konkuk University
Adrian Lam The Ohio State University
Sean Lawley University of Utah
Bo Li University of California San Diego
Tie-Jun Li Peking University
Sookkyung Lim University of Cincinnati
Yoichiro Mori University of Pennsylvania
Jay Newby University of Alberta
Qing Nie University of California, Irvine
David Rand University of Warwick
Alexandria Volkening Northwestern University
Martin Wechselberger University of Sydney

Alla Borisyuk (Univ. of Utah): Effect of Astrocytes in Neuronal Networks

Astrocytes are glial cells making up 50% of brain volume and playing multiple important roles, e.g. control of synaptic transmission. We are developing tools to include “effective” astrocytes in neuronal network models in an easy-to-implement and relatively computationally efficient way. In our approach we first consider neuron-astrocyte interaction at a fine spatial scale, and then extract the essential ways in which the network is influenced by the presence of the astrocytes.

For example, the tightness of astrocyte wrapping (or “degree of ensheathment”) and the number of synapses ensheathed vary by brain region and in certain disease states, such as some forms of epilepsy. Do changes in ensheathment properties contribute to the diseased state of the network or, conversely, play a protective role?

To address this question, first, we consider an individual synapse as a DiRT (Diffusion with Recharging Traps) model: diffusing particles can escape through absorbing parts of the boundary, or can be captured by traps on the boundary. We show that a synapse tightly ensheathed by an astrocyte makes the neuronal connection faster, weaker, and less reliable. These influences can then be included in a neuronal network model by adding a simplified “effective” astrocyte on each synapse. We find that depending on the number of synapses ensheathed and the ensheathment strength, the astrocytes are able to push the network toward synchrony and strong spatial patterns, possibly contributing to epileptic disorder.
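The notion of a coupling parameter pushing a network into synchrony can be illustrated with a generic phase-oscillator caricature (a Kuramoto mean-field model, not the DiRT or network model of the talk; all parameter values are invented for the sketch):

```python
import cmath, math, random

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchrony."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

def simulate(K, n=50, steps=2000, dt=0.05, seed=1):
    """Euler-integrate n coupled phase oscillators with coupling strength K."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.2) for _ in range(n)]       # natural frequencies
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n     # mean field
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return order_parameter(theta)

print(simulate(K=0.0), simulate(K=2.0))  # weak vs strong coupling
```

With negligible coupling the order parameter stays small; with strong coupling it approaches 1, the qualitative transition the abstract attributes to strongly ensheathing astrocytes.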

Duan Chen (Univ. of North Carolina at Charlotte): Fast stochastic compression algorithms for Biological Data Analysis

Our recent work is motivated by two types of biological problems. One is inferring 3D structures of chromatins from chromosome conformation capture (3C) data, such as Hi-C, a high-throughput sequencing technique that produces millions of contact data between genomic loci pairs. The other is computational deconvolution of gene expression data from heterogeneous brain samples, for extracting cell type-specific information for patients with Alzheimer's Disease (AD). Both problems involve large volumes of data, so fast algorithms are indispensable in both direct optimization and machine learning methods. A central approach is the low-rank approximation of matrices. Conventional matrix decomposition methods such as SVD and QR are expensive and thus not suitable for repeated use in these biological problems. Instead, we develop fast stochastic matrix compressions based on randomized numerical linear algebra (RNLA) theories. In this talk, we will emphasize a recently developed stochastic kernel matrix compression algorithm, in which samples are taken at no (or low) cost and the original kernel matrix is reconstructed efficiently with the desired accuracy. Storage and compression require only O(N) or O(N log N) operations. These stochastic matrix compression methods can be used in the above-mentioned biological problems to greatly improve algorithm efficiency; they can also be applied to other kernel-based machine learning algorithms for scientific computing problems with non-local interactions (such as fractional differential equations), since no analytic formulation of the kernel function is required in our algorithms.
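The flavour of randomized low-rank compression can be conveyed with a generic randomized range-finder in the Halko-Martinsson-Tropp style (a standard RNLA building block, not the specific kernel-compression algorithm of the talk; NumPy and the synthetic matrix are assumptions of the sketch):

```python
import numpy as np

def randomized_lowrank(A, k, p=5, seed=0):
    """Rank-k approximation A ≈ Q @ B via a random sketch of k+p columns."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], k + p))  # sketch the range of A
    Q, _ = np.linalg.qr(Y)                            # thin orthonormal basis
    B = Q.T @ A                                       # small projected factor
    return Q, B

# Demo on a synthetic exactly-rank-10 matrix (a stand-in for structured data).
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 500))
Q, B = randomized_lowrank(A, k=10)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
print(err < 1e-8)  # True: the low-rank structure is captured by the sketch
```

Only the thin factors Q and B need to be stored, which is the source of the O(N)-type storage cost the abstract refers to.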

Veronica Ciocanel (Mathematical Biosciences Institute): Computational modeling and topological data analysis for biological ring channels

Contractile rings are cellular structures made of actin filaments that are important in development, wound healing, and cell division. In the reproductive system of the roundworm C. elegans, ring channels allow nutrient exchange between developing egg cells and the worm and are regulated by forces exerted by myosin motor proteins.

In this work, we use an agent-based modeling and data analysis framework for the interactions between actin filaments and myosin motor proteins inside cells. This approach may provide key insights for the mechanistic differences between two motors that are believed to maintain the rings at a constant diameter. In particular, we propose tools from topological data analysis to understand time-series data of filamentous network interactions. Our proposed methods clearly reveal the impact of certain parameters on significant topological circle formation, thus giving insight into ring channel formation and maintenance.

Casey Diekman (NJIT): Data Assimilation Methods for Conductance-Based Neuronal Modeling

Modern data assimilation (DA) techniques are widely used in climate science and weather prediction but have only recently begun to be applied in neuroscience. In this talk I will illustrate the use of DA algorithms to estimate unobserved variables and unknown parameters of conductance-based neuronal models and propose the bifurcation structure of inferred models as a qualitative measure of estimation success. I will then apply DA to electrophysiological recordings from suprachiasmatic nucleus neurons to develop models that provide insight into the functioning of the mammalian circadian clock. Finally, I will frame the selection of stimulus waveforms to inject into neurons during patch-clamp recordings as an optimal experimental design problem and present preliminary results on the optimal stimulus waveforms for improving the identifiability of parameters for a Hodgkin-Huxley-type model.
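The essence of sequential assimilation can be shown with a deliberately minimal scalar example: estimating a single unknown (hypothetically, a maximal conductance) that is observed directly through noise. Real conductance-based DA involves nonlinear models and many unobserved states, so this sketch, with invented numbers, conveys only the flavour of the update step.

```python
import random

# True (unobserved) parameter, e.g. a maximal conductance in a neuron model.
TRUE_G = 2.5
rng = random.Random(42)
obs = [TRUE_G + rng.gauss(0, 0.5) for _ in range(200)]  # noisy recordings

m, P, R = 0.0, 10.0, 0.25        # prior mean/variance, observation variance
for y in obs:                    # sequential (Kalman-style) assimilation
    K = P / (P + R)              # Kalman gain: trust data vs current estimate
    m += K * (y - m)             # correct the estimate with each observation
    P *= (1 - K)                 # posterior variance shrinks as data accrue
print(m, P)
```

After assimilating the record, the estimate sits close to the hidden truth with a small residual variance; nonlinear DA replaces this scalar update with ensemble or variational machinery but keeps the same predict-correct logic.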

German Enciso (UC Irvine): Absolute concentration robustness controllers for stochastic chemical reaction network systems

In this work, we provide a systematic control of a given biochemical reaction network through a control module reacting with the existing network system. This control module is designed to confer so-called absolute concentration robustness (ACR) to a target species in the controlled network system. We show that when the deterministic network system is controlled with the ACR controller, the concentration of a species of interest has a steady state at the desired value for any initial amounts, and it converges to the value under some mild conditions. For the stochastic counterparts of reaction network systems, we further show that when the abundance of the control species is high enough, the ACR controller can be utilized to make a target species approximately follow a Poisson distribution centered at the desired value. For this framework, we use the deficiency zero theorem (Anderson et al, 2010) in chemical reaction network theory and multiscaling model reduction methods. This control module also brings robust perfect adaptation, which is a highly desirable goal of the control theory, to the target species against transient perturbations and uncertainties in the model parameters.
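The ACR property itself can be seen in the textbook Shinar-Feinberg motif A + B -> 2B (rate k1), B -> A (rate k2), whose steady-state level of A is k2/k1 regardless of initial amounts. A minimal deterministic sketch (this standard motif and the rate constants are illustrative, not the controller construction of the talk):

```python
def simulate(a0, b0, k1=1.0, k2=2.0, dt=1e-3, steps=100000):
    """Euler-integrate A+B -> 2B, B -> A; returns the final level of A."""
    a, b = a0, b0
    for _ in range(steps):
        flux = k1 * a * b - k2 * b     # net conversion of A into B
        a, b = a - flux * dt, b + flux * dt
    return a

# Different total amounts, same steady state for A: k2/k1 = 2.
print(round(simulate(5.0, 5.0), 2), round(simulate(20.0, 5.0), 2))  # both ≈ 2.0
```

Embedding a module with this property into a larger network is what lets the controller pin a target species at a prescribed value independently of initial conditions.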

Daniel Forger (Univ. of Michigan): The mathematics of the wearable revolution

Millions of Americans track their steps, heart rate, and other physiological signals through wearables. The scale of this data is unprecedented; I will describe several of our ongoing studies, each of which collects wearable and mobile data from thousands of users, in some cases across more than 100 countries. This data is so noisy that it often seems unusable. It is in desperate need of new mathematical techniques to extract the key signals that can be used in the (ODE) mathematical modeling typically done in mathematical biology. I will describe several techniques we have developed to analyze this data and simulate models, including gap orthogonalized least squares; a new ansatz for coupled oscillators which is similar to the popular ansatz by Ott and Antonsen, but which gives better fits to biological data; and a new level-set Kalman filter that can be used to simulate population densities. I will also describe how these methods can be used to understand the impact of social distancing and COVID lockdowns on circadian timekeeping and sleep.

Jeffrey Gaither (Nationwide Children's Hospital): SNPDogg: Feature-importances in the identification of harmful missense SNPs

Recent years have seen an explosion in the use of machine-learning algorithms to classify human mutations. There are now at least 30 scores designed to identify mutations likely to be deleterious to humans, but almost all are "black boxes" that provide no explanation of how they arrived at their predictions. In this talk I'll introduce a new mutational pathogenicity score, SNPDogg, that is transparent, insofar as every prediction can be decomposed as a sum of contributions from the model's features. SNPDogg's feature-importance values are computed via a game-theoretic approach implemented in the "shap" python package.

Juan B. Gutierrez (Univ. of Texas San Antonio): Investigating the Impact of Asymptomatic Carriers on COVID-19 Transmission

Jacob B Aguilar, PhD, Saint Leo University.
Jeremy Samuel Faust, MD, Brigham and Women's Hospital
Lauren M. Westafer, MD, University of Massachusetts, Medical School-Baystate
Juan B. Gutierrez, PhD, University of Texas at San Antonio

It is during critical times that mathematics can shine and provide an unexpected answer. Coronavirus disease 2019 (COVID-19) is a novel human respiratory disease caused by the SARS-CoV-2 virus. Asymptomatic carriers of the virus display no clinical symptoms but are known to be contagious. Recent evidence reveals that this sub-population, as well as persons with mild symptoms, represents a major contributor to the propagation of COVID-19. The asymptomatic sub-population frequently escapes detection by public health surveillance systems. Because of this, the currently accepted estimates of the basic reproduction number (Ro) of the virus are too low. In this talk, we present a traditional compartmentalized mathematical model taking asymptomatic carriers into account, and compute Ro exactly. Our results indicate that an initial value of the effective reproduction number could range from 5.5 to 25.4, with a point estimate of 15.4, assuming mean parameters. It is unlikely that a pathogen can blanket the planet in three months with an Ro in the vicinity of 3, as reported in the literature; in fact, no other plausible explanation has been offered for the rapid progression of this disease. This model was used to estimate the number of cases in every county in the USA.
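The structural role of an asymptomatic class can be sketched with a deliberately simple two-class SIR-type model. All parameter values below are illustrative placeholders, not the estimates from the talk, and the reproduction number here is just the standard weighted average across the two infectious classes.

```python
# Illustrative parameters: asymptomatic fraction, transmission rates, recovery rate.
p, beta_s, beta_a, gamma = 0.4, 0.5, 0.4, 0.2
R0 = (p * beta_a + (1 - p) * beta_s) / gamma     # next-generation calculation
print(round(R0, 2))  # 2.3

def simulate(days=160, dt=0.1, N=1e6):
    """Euler-integrate S/Ia/Is/R dynamics; returns fraction ever infected."""
    S, Ia, Is, R = N - 1, 1.0, 0.0, 0.0
    for _ in range(int(days / dt)):
        lam = (beta_a * Ia + beta_s * Is) / N    # force of infection
        new = lam * S
        S  -= new * dt
        Ia += (p * new - gamma * Ia) * dt
        Is += ((1 - p) * new - gamma * Is) * dt
        R  += gamma * (Ia + Is) * dt
    return R / N

print(simulate())  # final epidemic size driven partly by undetected spread
```

Because surveillance sees only the Is stream, fitting to detected cases alone underestimates the force of infection, which is the mechanism behind the abstract's claim that accepted Ro estimates are too low.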

Wenrui Hao (Penn State University): Homotopy methods for solving nonlinear systems arising from biology

Many nonlinear systems arise in biology, such as pattern formation in nonlinear differential equations and data-driven modeling using neural networks. In this talk, I will present a systematic homotopy method to solve these nonlinear systems in biology. Specifically, I will introduce the homotopy continuation technique to compute the multiple steady states of nonlinear differential equations and to explore the relationship between the number of steady states and the parameters. Two benchmark problems will be used to illustrate the idea: the first is the Schnakenberg model, which has been used to describe biological pattern formation due to diffusion-driven instability; the second is the Gray-Scott model, which was proposed in the 1980s to describe autocatalytic glycolysis reactions. Then I will introduce a homotopy training algorithm to solve the nonlinear optimization problem of biological data-driven modeling by building the neural network adaptively. Examples of assessing cardiovascular risk from pulse wave data will be used to demonstrate the efficiency of the homotopy training algorithm.
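The basic continuation idea can be shown on a single scalar equation: deform a trivially solvable problem into the target one, tracking the root as the deformation parameter moves from 0 to 1. This toy predictor-corrector (with an invented test equation, not the talk's models) is only a sketch of the technique:

```python
def homotopy_solve(f, df, x0, steps=100):
    """Track the root of H(x,t) = (1-t)(x - x0) + t f(x) from t=0 to t=1."""
    x = x0
    for i in range(1, steps + 1):
        t = i / steps
        for _ in range(5):                       # Newton corrector at this t
            H  = (1 - t) * (x - x0) + t * f(x)
            dH = (1 - t) + t * df(x)
            x -= H / dH
    return x

f  = lambda x: x**3 - 2 * x - 5                  # classic test equation
df = lambda x: 3 * x**2 - 2
root = homotopy_solve(f, df, x0=1.0)
print(round(root, 4))  # 2.0946, the real root of x^3 - 2x - 5
```

For steady states of reaction-diffusion models the same idea is applied to polynomial systems in many variables, where continuation can find all solution branches rather than a single root.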

Samuel Isaacson (Boston University): Strong intracellular signal inactivation produces sharper and more robust signaling from cell membrane to nucleus

For a chemical signal to propagate across a cell, it must navigate a tortuous environment involving a variety of organelle barriers. In this work we study mathematical models for a basic chemical signal, the arrival times at the nuclear membrane of proteins that are activated at the cell membrane and diffuse throughout the cytosol. Organelle surfaces within human B cells are reconstructed from soft X-ray tomographic images, and modeled as reflecting barriers to the molecules’ diffusion. We show that signal inactivation sharpens signals, reducing variability in the arrival time at the nuclear membrane. Inactivation can also compensate for an observed slowdown in signal propagation induced by the presence of organelle barriers, leading to arrival times at the nuclear membrane that are comparable to models in which the cytosol is treated as an open, empty region. In the limit of strong signal inactivation this is achieved by filtering out molecules that traverse non-geodesic paths.

Hye Won Kang (UMBC): A stochastic model for enzyme clustering in glucose metabolism

A sequence of metabolic enzymes tightly regulates glycolysis and gluconeogenesis. It has been hypothesized that these enzymes form multienzyme complexes and regulate glucose flux. Previous work identified that several rate-limiting enzymes form multienzyme complexes and control the direction of glucose flux between energy metabolism and building block biosynthesis. A recent study introduced a mathematical model to support this finding, in which the association of the rate-limiting enzymes into multienzyme complexes is included. However, this model did not fully account for the dynamic and random movement of the enzyme clusters observed in experiments.

In this talk, I will introduce a stochastic model for enzyme clustering in glucose metabolism. The model will describe both the enzyme kinetics and the spatial organization of metabolic enzyme complexes. Then, I will discuss underlying model assumptions and approximation methods.
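The workhorse for such stochastic kinetics is Gillespie's stochastic simulation algorithm. The sketch below applies it to a generic toy clustering reaction (reversible dimerization with invented rate constants), not to the glucose-metabolism model of the talk:

```python
import random

def gillespie(n_free=100, kon=0.01, koff=1.0, t_end=50.0, seed=3):
    """SSA for a toy clustering system: 2E -> E2 (kon), E2 -> 2E (koff)."""
    rng = random.Random(seed)
    E, E2, t = n_free, 0, 0.0
    while t < t_end:
        a1 = kon * E * (E - 1) / 2        # association propensity
        a2 = koff * E2                    # dissociation propensity
        a0 = a1 + a2
        if a0 == 0:
            break
        t += rng.expovariate(a0)          # exponential waiting time
        if rng.random() < a1 / a0:        # pick which reaction fires
            E, E2 = E - 2, E2 + 1
        else:
            E, E2 = E + 2, E2 - 1
    return E, E2

print(gillespie())  # (free enzymes, clusters) at t_end; copy number is conserved
```

Extending this scheme with diffusion of the clusters in space is what turns it into the kind of spatial stochastic model the abstract describes.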

Jae Kyoung Kim (KAIST): Analysis of timeseries data with hidden components

Despite dramatic advances in experimental techniques, many facets of intracellular dynamics remain hidden, or can be measured only indirectly. In this talk, I will describe three strategies for analyzing timeseries data from biological systems with hidden parts: replacing the hidden components with a time delay, a quasi-steady-state, or a random regulatory process. I will illustrate how the time-delay simplification can be used to understand protein synthesis, which involves multiple steps such as transcription, translation, folding and maturation, whose intermediate products typically cannot be measured. Furthermore, I will illustrate how the quasi-steady-state simplification can be used to develop an accurate method to estimate drug clearance, which occurs over multiple steps of metabolism; this greatly improves on the canonical approach used in more than 65,000 published papers over the last 30 years. Finally, I will describe a systematic model selection approach to identify the hidden regulatory biochemical connections that lead to the observed timeseries data, and illustrate how we applied it to find the connection between the circadian clock and cell cycle checkpoints.
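The time-delay simplification rests on a standard observation: a chain of n unobserved first-order steps with overall mean tau has completion times that concentrate around tau as n grows, so a long hidden chain increasingly behaves like a single delayed step. A Monte Carlo sketch with invented numbers:

```python
import random

def chain_stats(n_steps, tau=10.0, trials=2000, seed=5):
    """Mean and std of the total time through n hidden first-order steps."""
    rng = random.Random(seed)
    times = [sum(rng.expovariate(n_steps / tau) for _ in range(n_steps))
             for _ in range(trials)]
    mean = sum(times) / trials
    std = (sum((x - mean) ** 2 for x in times) / trials) ** 0.5
    return mean, std

for n in (1, 4, 64):
    mean, std = chain_stats(n)
    print(n, round(mean, 1), round(std, 1))  # mean stays ~10, spread shrinks
```

This is the Erlang-to-delta limit: the mean is fixed at tau while the standard deviation falls like tau divided by the square root of n, justifying the replacement of many unmeasured intermediates by one delay.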

Yangjin Kim (Konkuk University): Cellular infiltration, intra- and inter-cellular signaling and cell mechanics in tumor biology: hybrid multi-scale approaches

Tumor cells interact with many players, such as stromal cells (fibroblasts, myofibroblasts), immune cells (N1/N2 neutrophils, M1/M2 macrophages, NK cells, T cells), and the extracellular matrix (ECM), in a tumor microenvironment in order to increase survival rates in response to multiple biomechanical and biochemical challenges. Quite often, these tumor cells exchange major regulatory molecules with other cells and use intracellular signaling pathways to regulate cellular decisions such as cell motility, proliferation, apoptosis, and necroptosis after receptor binding. For example, stem cell-like astrocytes and M1/M2 microglia communicate with glioma cells to regulate tumor growth and cellular dispersion after surgical resection of the primary tumor core, and CSPG, one of the major ECM components in the brain, was shown to play a key role in anchoring invasive glioma cells. We developed hybrid multi-scale models of cancer dynamics in which intracellular components (ODEs), diffusible molecules (PDEs), and individual cells are integrated in a hybrid domain. We show how up- or down-regulation of components in these pathways in cancer cells affects the key cellular decision to infiltrate or proliferate through interactions with many players in a complex microenvironment. We take examples from glioblastoma (brain cancer) before and after surgery, breast cancer, and metastatic circulating tumor cells (CTCs).

Sean Lawley (Univ. of Utah): Extreme First Passage Times of Diffusion

Why do 300 million sperm cells search for the oocyte in human fertilization when only a single sperm cell is necessary? Why do 1000 calcium ions enter a dendritic spine when only two ions are necessary to activate the relevant Ryanodine receptors? The seeming redundancy in these and many other biological systems can be understood in terms of extreme first passage time (FPT) theory.

While FPT theory is often used to estimate timescales in biology, the overwhelming majority of studies focus on the time it takes a given single searcher to find a target. However, in many scenarios the more relevant timescale is the FPT of the first searcher to find a target from a large group of searchers. This so-called extreme FPT depends on rare events and is often orders of magnitude faster than the FPT of a given single searcher. In this talk, we will explain recent results in extreme FPT theory and show how they modify traditional notions of diffusion timescales.
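
A quick simulation makes the gap between the two timescales concrete. The sketch below (my own toy setup, not from the talk) releases many independent one-dimensional Brownian searchers toward an absorbing target and compares the typical single-searcher passage time with the fastest arrival in the group, which extreme FPT theory predicts scales like L^2/(4*D*ln N):

```python
import numpy as np

rng = np.random.default_rng(0)

# N searchers diffusing from x=0 toward a target at x=L (1D, absorbing).
# Extreme FPT theory: the fastest of N arrivals scales like L^2/(4*D*ln N),
# far below a typical single searcher's passage time.
L, D, dt, t_max, n_search = 1.0, 1.0, 1e-3, 20.0, 2000
n_steps = int(t_max / dt)
x = np.zeros(n_search)
fpt = np.full(n_search, np.inf)
alive = np.ones(n_search, dtype=bool)
for step in range(1, n_steps + 1):
    x[alive] += np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
    hit = alive & (x >= L)
    fpt[hit] = step * dt
    alive[hit] = False
    if not alive.any():
        break

typical = float(np.median(fpt[np.isfinite(fpt)]))  # typical single-searcher FPT
fastest = float(fpt.min())                         # extreme (fastest) FPT
print(f"typical FPT ~ {typical:.2f}, fastest of {n_search} ~ {fastest:.3f}")
```

With 2000 searchers the fastest arrival is roughly an order of magnitude faster than the typical one, in line with the logarithmic scaling.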

King-Yeung Lam (The Ohio State University): PDEs in Evolution of Dispersal

Beginning with the work of Alan Hastings in 1983, PDE models have played a major role in the mathematical study of the evolution of dispersal. In this talk, I will discuss two classes of PDE models that come from the evolution of dispersal. In the first part, I will discuss the existence/non-existence of evolutionarily stable strategies (ESS) in two-species competition models, motivated by the adaptive dynamics approach. In the second part, I will introduce a new class of models describing a population structured by a quantitative trait, which in a certain sense describes the competition of an infinite number of species. We show convergence to the ESS in these quantitative-trait models and explain how this is connected to the aforementioned adaptive dynamics framework. This talk contains projects in collaboration with R.S. Cantrell, C. Cosner, M. Golubitsky, W. Hao, B. Perthame, Y. Lou, and F. Lutscher.

Bo Li (Univ. of California San Diego): Spatiotemporal Dynamics of Bacterial Colony Growth with Cell-Cell Mechanical Interactions

The growth of bacterial colonies exhibits striking complex patterns and robust scaling laws. Understanding the principles that underlie such growth has far-reaching consequences in the biological and health sciences. In this work, we develop a mechanical theory of cell-cell and cell-environment interactions and construct a hybrid three-dimensional computational model for the growth of an E. coli colony on a hard agar surface. Our model consists of microscopic descriptions of the growth, division, and movement of individual cells, and macroscopic diffusion equations for the nutrients. The cell movement is driven by cellular mechanical interactions. Our large-scale simulations and analysis predict linear growth of the colony in both the radial and vertical directions, in good agreement with experimental observations. We find that mechanical buckling and nutrient penetration are the key factors determining the underlying growth scalings. This work is a first step toward detailed computational modeling of bacterial growth with mechanical and biochemical interactions. This is joint work with Mya Warren, Hui Sun, Yue Yan, Jonas Cremer, and Terence Hwa.

Tiejun Li (Peking University): Differential network inference via the fused D-trace loss with cross variables

Detecting changes in biological interaction networks is of great importance in biological and medical research. We propose a simple loss function, named CrossFDTL, to identify the network change, or differential network, by estimating the difference between two precision matrices under a Gaussian assumption. CrossFDTL is a natural fusion of the D-trace losses for the two networks under consideration, with an l1 penalty imposed on the differential matrix to ensure sparsity. The key point of our method is to utilize cross variables, which correspond to the sum and difference of the two precision matrices, instead of their original forms. Moreover, we develop an efficient minimization algorithm for the proposed loss function and rigorously prove its convergence. Numerical results show that our method outperforms existing methods in both accuracy and convergence speed on simulated and real data.
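
For orientation, here is the naive plug-in baseline that differential-network methods improve upon, on synthetic data of my own construction (this is not CrossFDTL, which instead minimizes a fused D-trace loss over the sum/difference cross variables): invert two sample covariances, subtract, and soft-threshold the difference for sparsity.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(M, lam):
    """Entrywise l1 shrinkage, the operation behind the sparsity penalty."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

# Two synthetic conditions whose precision matrices differ in a single edge.
p, n = 5, 2000
Theta1 = np.eye(p)
Theta2 = np.eye(p)
Theta2[0, 1] = Theta2[1, 0] = 0.4
X1 = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta1), size=n)
X2 = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta2), size=n)

# Naive differential network: difference of inverted sample covariances,
# sparsified by entrywise soft-thresholding.
Delta = np.linalg.inv(np.cov(X2.T)) - np.linalg.inv(np.cov(X1.T))
Delta_sparse = soft_threshold(Delta, 0.15)
print(np.round(Delta_sparse, 2))
```

The plug-in estimator requires inverting each covariance separately, which fails in high dimensions; loss-based formulations such as the D-trace approach estimate the sparse difference directly.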

Sookkyung Lim (Univ. of Cincinnati): How do bacteria swim? Modeling, Simulations & Analysis

Swimming bacteria with helical flagella are self-propelled micro-swimmers in nature, and the swimming strategies of such bacteria vary depending on the number of flagella and where the flagella are positioned on the cell body. In this talk, I will introduce two microorganisms: the multi-flagellated E. coli and the single-flagellated Vibrio A. We describe a rod-shaped cell body as a rigid body that can translate and rotate, and each helical flagellum as an elastic rod using Kirchhoff rod theory. The hydrodynamic interaction of the bacterium is described by the regularized Stokeslet formulation. I will focus on how bacteria swim and reorient their swimming course for survival, and how mathematics can help us understand the swimming mechanisms of such bacteria.

Yoichiro Mori (Univ. of Pennsylvania): Planar front Instabilities of the Bidomain Allen-Cahn Equation

The bidomain model is the standard model describing the electrical activity of the heart. We discuss the stability of planar front solutions of the bidomain equation with a bistable nonlinearity (the bidomain Allen-Cahn equation) in two spatial dimensions. In the bidomain Allen-Cahn equation, a Fourier multiplier operator whose symbol is a positive homogeneous rational function of degree two (the bidomain operator) takes the place of the Laplacian in the classical Allen-Cahn equation. Stability of the planar front may depend on the direction of propagation, given the anisotropic nature of the bidomain operator. We establish various criteria for stability and instability of the planar front in each direction of propagation. Our analysis reveals that planar fronts can be unstable in the bidomain Allen-Cahn equation, in striking contrast to the classical or anisotropic Allen-Cahn equations. We identify two types of instabilities: one with respect to long-wavelength perturbations, the other with respect to medium-wavelength perturbations. Interestingly, whether the front is stable or unstable under long-wavelength perturbations does not depend on the bistable nonlinearity and is fully determined by the convexity properties of a suitably defined Frank diagram. On the other hand, stability under intermediate-wavelength perturbations does depend on the choice of bistable nonlinearity. Intermediate-wavelength instabilities can occur even when the Frank diagram is convex, so long as the bidomain operator does not reduce to the Laplacian. We shall also give a remarkable example in which the planar front is unstable in all directions. Time permitting, I will also discuss properties of the bidomain FitzHugh-Nagumo equations. This is joint work with Hiroshi Matano, Mitsunori Nara and Koya Sakakibara.
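
For readers unfamiliar with the bidomain operator, the following sketch records the form its Fourier symbol takes in the bidomain literature, written here for intracellular and extracellular conductivity matrices A_i and A_e (conventions and normalizations vary between papers):

```latex
% Bidomain operator: a Fourier multiplier with a rational symbol
\widehat{\mathcal{L}u}(\xi) = q(\xi)\,\widehat{u}(\xi),
\qquad
q(\xi) = \frac{(\xi \cdot A_i \xi)\,(\xi \cdot A_e \xi)}
              {\xi \cdot (A_i + A_e)\,\xi}.
```

The symbol q is positive and homogeneous of degree two; when A_i and A_e are proportional, q reduces to a multiple of |xi|^2 and the operator reduces to a multiple of the Laplacian, recovering the classical setting.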

Jay Newby (Univ. of Alberta): Resolving spatial heterogeneity of the cytoplasm in living cells

Although the cytosol is one of the fundamental cell structures, we know surprisingly little about it. Its physical properties are difficult to measure due to the technical challenges of spatially resolving viscosity, elasticity, flow, crowding, and confinement within cells that fluctuate and grow. Changes in macromolecular crowding can directly influence protein diffusion, reaction rates, and phase separation. I will discuss new particle tracking tools and how we use them to quantitatively measure the physical state of the cytosol by studying the three-dimensional stochastic motion of genetically expressed fluorescent nanoparticles (GEMs). Using these particle probes, we find that the physical properties of the cytosol vary significantly within and between cells, indicating that the fundamental state of the cytosol is a key source of heterogeneity within genetically identical cells.

Qing Nie (Univ. of California, Irvine): Multiscale inference and modeling of cell fate via single-cell data

Cells make fate decisions in response to dynamic environmental and pathological stimuli as well as cell-to-cell communication. Recent technological breakthroughs have made it possible to gather data in previously unthinkable quantities at the single-cell level, and these data are starting to suggest that cell fate decisions are much more complex, dynamic, and stochastic than previously recognized. Multiscale interactions, sometimes through cell-cell communication, play a critical role in cell decision-making. Dissecting the cellular dynamics that emerge from the molecular and genomic scales in single cells demands novel computational tools and multiscale models. In this talk, through multiple biological examples, we will present our center's recent efforts to use single-cell RNA-seq data and spatial imaging data to uncover new insights into development, regeneration, and cancer. We will also present several new computational tools and mathematical modeling methods that are required to study the complex and dynamic cell fate process through the lens of single cells.

David Rand (Univ. of Warwick): TimeTeller: a New Tool for Precision Circadian Medicine and Cancer Prognosis

Recent research has shown that the circadian clock has a much more profound effect on human health than previously thought. I will present a machine-learning approach to measuring circadian clock functionality from the expression levels of key genes in a single tissue sample, and then apply it to study survival in a breast cancer clinical trial.

A principal aim of circadian medicine is to develop techniques and methods to integrate the relevance of biological time into clinical practice. However, it is difficult to monitor the functional state of the circadian clock and its downstream targets in humans. Consequently, there is a critical need for tools that can do this and are practical in a clinical context, and our approach tackles this. We apply our algorithm to breast cancer and show that, in a large cohort of patients with non-metastatic breast cancer, the resulting dysfunction metric is a prognostic factor for survival, providing evidence that it is independent of other known factors. While previous work in this area has focused on individual genes, our approach directly assesses the systemic functionality of a key regulatory system, the circadian clock, from a single sample.

Alexandria Volkening (Northwestern University): Modeling and analysis of agent-based dynamics

Agent-based dynamics appear across the natural and social world; applications include swarming and flocking, pedestrian crowd movement, and the self-organization of cells during the early development of organisms. Though disparate in application, many of these emergent patterns and collective dynamics share similar features (e.g., long-range communication, noise, fluctuations in population size, and multiple types of agents) and face some of the same modeling and analysis challenges. In this talk, I will focus on the example of pigment-cell interactions during zebrafish pattern formation to illustrate various ways of modeling agent behavior. We will discuss how agent-based models are related to other approaches (e.g., cellular automata and continuum models) and highlight methods for analyzing cell-based dynamics using topological techniques.

Martin Wechselberger (Univ. of Sydney): Geometric singular perturbation theory beyond the standard form

In this talk I will review geometric singular perturbation theory, but with a twist: I focus on a coordinate-independent setup of the theory. The need for such a theory beyond the standard form is motivated by biochemical reaction, electronic, and mechanical oscillator models that show relaxation-type behaviour. While the corresponding models incorporate slow and fast processes leading to multiple time-scale dynamics, not all of these models globally take the form of a standard slow-fast system. From an application point of view, it is therefore desirable to provide tools for analysing singularly perturbed systems in a coordinate-independent manner.
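
To fix notation (mine, not the speaker's), the contrast between the two settings can be sketched as follows:

```latex
% Standard slow-fast form: the split into slow variables x and fast
% variables y is given a priori by the coordinates.
\dot{x} = f(x, y, \varepsilon), \qquad
\varepsilon\,\dot{y} = g(x, y, \varepsilon).

% Coordinate-independent form: no preferred splitting; the critical
% manifold is the zero set of the leading-order vector field N.
\dot{z} = N(z) + \varepsilon\, G(z, \varepsilon), \qquad
S = \{\, z : N(z) = 0 \,\}.
```

In the second form the slow manifold S need not be a coordinate subspace, which is exactly the situation in many oscillator models showing relaxation behaviour.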

Padi Fuster Aguilera (Tulane University): A PDE model for chemotaxis with logarithmic sensitivity and logistic growth

We study a particular model derived from a chemotaxis model with logarithmic sensitivity and logistic growth. We obtain existence and uniqueness of solutions, as well as results on the diffusion limit of the solutions with Neumann boundary conditions.

Yonatan Ashenafi (Rensselaer Polytechnic Institute): Statistical Mobility Properties of Choanoflagellate Colonies

We study the stochastic hydrodynamics of aggregate random walkers (ARWs), typified by organisms called choanoflagellates. The objective is to link cell-scale dynamics to colony-scale dynamics for choanoflagellate rosettes and chains. We use a synthesis of linear autoregressive stochastic processes to explain the effective statistical dynamics of choanoflagellate colonies in terms of colony parameters. We also model and characterize the nonlinear chemotactic response of the aggregates to a local chemical gradient in terms of colony parameters.
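
As a toy version of the autoregressive picture (parameters are hypothetical, not fitted to choanoflagellate data), an AR(1) velocity process already reproduces the ballistic-to-diffusive crossover characteristic of persistent random walkers:

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) velocity: v_{k+1} = a*v_k + sigma*xi_k. The persistence a and noise
# strength sigma stand in for colony-level parameters (values hypothetical).
a, sigma, n_steps, n_walkers = 0.95, 0.1, 4000, 300
v = np.zeros(n_walkers)
x = np.zeros((n_steps + 1, n_walkers))
for k in range(n_steps):
    v = a * v + sigma * rng.standard_normal(n_walkers)
    x[k + 1] = x[k] + v

msd = (x ** 2).mean(axis=1)   # mean-squared displacement over walkers
# Short times: ballistic (MSD ~ t^2); long times: diffusive (MSD ~ t).
print(f"MSD at k=10: {msd[10]:.4f}; at k={n_steps}: {msd[-1]:.1f}")
```

Linking the persistence and noise parameters of such processes to measurable colony parameters is the kind of reduction the poster describes.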

Judy Day (Unv. of Tennessee): Modeling inhalation anthrax infection: a research journey

From work initiated at the Mathematical Biosciences Institute, a mathematical model was published in 2011 that investigated the immune response to inhalation anthrax infection. That publication led to a collaboration with the U.S. Environmental Protection Agency, which blossomed into an Investigative Working Group effort supported by the National Institute for Mathematical and Biological Synthesis. This group included experts from the anthrax research community as well as mathematical modelers. Over a period of several years, members of this group explored the utility of mathematical modeling in understanding risk in low-dose inhalation anthrax infection. This poster describes the journey of the research inspired by these events and discusses the results and relationships it generated.

Dan Dougherty (Amyris, Inc.): Techniques for Driving Progress in Industrial Biotechnology

Amyris (NASDAQ: AMRS) is a science and technology leader in the research, development, and production of pure, sustainable ingredients for the Health & Wellness, Clean Beauty, and Flavors & Fragrances markets. Amyris applies its exclusive, advanced technology, including state-of-the-art machine learning, robotics, and artificial intelligence, to engineer yeast that, when combined with sugarcane syrup through fermentation, produce highly pure molecules for specialty ingredients. Amyris manufactures sustainably sourced ingredients at industrial scale for B2B partners and further distribution to over 3,000 of the world's top brands, reaching more than 200 million consumers. Amyris stands by its No Compromise® promise that everything it makes is better for people and the planet. In this presentation, we provide examples of computational techniques used throughout the design, build, test, and learn phases of research and development. We highlight prominent aspects of the natural biology of yeast and how they inform the computational approaches used. Measures of statistical and computational efficiency are provided, and we conclude with some recommendations for future developments.

Paul Hurtado (Univ. of Nevada, Reno): Extending ODE models using the Generalized Linear Chain Trick: An SEIR Model Example

The Linear Chain Trick (LCT) has long been used to build ODE models (specifically, mean-field state transition models) by replacing the implicit assumption of exponentially distributed passage times through each state with more "hump-shaped" gamma (more specifically, Erlang) distributions. Recently, we introduced a Generalized Linear Chain Trick (GLCT), in which we showed that there is a straightforward way of writing down mean-field ODEs for a much broader family of assumed "dwell-time" distributions known as the phase-type distributions. These are essentially the hitting-time or absorption-time distributions for continuous-time Markov chains (CTMCs), and include the Erlang, hypoexponential, Coxian, and related distributions. Methods for fitting these matrix exponential distributions to data have been developed for applications in queueing theory, allowing for more flexibility than simply incorporating best-fit gamma distributions into ODE model assumptions. In this presentation, I will illustrate how the SEIR model can be extended using the LCT and the GLCT, and how the structure of the resulting model, when viewed through the lens of the GLCT, can be leveraged in subsequent analytical and computational analyses.
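
A minimal sketch of the LCT applied to the SEIR model: the single exponential E compartment is replaced by k substages, each with transition rate k*sigma, so the latent period becomes Erlang-distributed with unchanged mean 1/sigma. Parameter values below are illustrative, not from the talk:

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir_lct_rhs(t, y, beta, sigma, gamma, k):
    """SEIR with the latent period split into k substages of rate k*sigma,
    i.e. an Erlang(k, k*sigma) dwell time with mean 1/sigma (the LCT)."""
    S, E, I, R = y[0], y[1:1 + k], y[1 + k], y[2 + k]
    N = y.sum()
    infection = beta * S * I / N
    dE = np.empty(k)
    dE[0] = infection - k * sigma * E[0]
    dE[1:] = k * sigma * (E[:-1] - E[1:])          # flow between substages
    return np.concatenate(([-infection], dE,
                           [k * sigma * E[-1] - gamma * I, gamma * I]))

k = 4                                              # illustrative substage count
y0 = np.concatenate(([990.0], np.zeros(k), [10.0, 0.0]))
sol = solve_ivp(seir_lct_rhs, (0.0, 160.0), y0, args=(0.4, 0.2, 0.1, k))
final = sol.y[:, -1]
print(f"recovered by t=160: {final[-1]:.1f} of {y0.sum():.0f}")
```

The GLCT generalizes exactly this construction: the chain of identical substages is replaced by the transient states of an arbitrary phase-type (CTMC absorption-time) distribution.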

Wasiur KhudaBukhsh (Mathematical Biosciences Institute): Survival Dynamical Systems: individual-level survival analysis from population-level epidemic models

Motivated by the classical Susceptible-Infected-Recovered (SIR) epidemic models proposed by Kermack and McKendrick, we consider a class of stochastic compartmental dynamical systems with a notion of partial ordering among the compartments. We call such systems uni-directional Mass Transfer Models (MTMs). We show that there is a natural way of interpreting a uni-directional MTM as a Survival Dynamical System (SDS), described in terms of survival functions instead of population counts. This SDS interpretation allows us to employ tools from survival analysis to address various issues with data collection and statistical inference for uni-directional MTMs. We use the SIR model as a running example to illustrate the ideas. We also discuss several possible generalizations of the method.
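
The SIR running example can be sketched as follows (parameter values are my own): solve the mean-field ODEs and read s(t)/s(0) as the survival function of a single susceptible's random infection time, which is the basic move behind the SDS interpretation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma):
    s, i, r = y                            # proportions of the population
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir_rhs, (0.0, 100.0), [0.99, 0.01, 0.0],
                args=(0.5, 0.2), dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 100.0, 201)
s = sol.sol(t)[0]

# SDS reading: s(t)/s(0) is the survival function of one susceptible's
# (random) infection time; 1 - s(t)/s(0) is its distribution function.
survival = s / s[0]
print(f"P(still uninfected at t=100) ~ {survival[-1]:.3f}")
```

Once the population-level curve is read as an individual-level survival function, standard survival-analysis machinery (censoring, likelihoods for event times) becomes available for inference.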

Jinsu Kim (Univ. of California, Irvine): Stochastic epigenome systems with different TF binding locations as a predictor of in vivo parameters for nucleosome accessibility

In cellular immune responses, inflammatory ligands activate signal-dependent transcription factors (SDTFs), which can display complex temporal profiles. SDTFs are central effectors of inflammatory gene expression. However, the information contained in SDTF signals must also be decoded by the epigenome in a stimulus-specific manner to allow controlled plasticity of cellular epigenetic states in response to environmental encounters. The mechanisms and biophysical principles that generate distinct epigenomes in response to different SDTF signals remain unclear. Here, we develop and analyze stochastic models of nucleosome accessibility to study how SDTF signals alter epigenome dynamics. Interestingly, the response of our epigenome model to SDTF signals helps us predict the cooperativity of nucleosomes genome-wide in vivo. Two alternative but reasonable hypotheses about the cooperativity of parameters in the nucleosome unwrapping steps were tested experimentally by ATAC sequencing. On the genome scale, the location of SDTF binding is a predictor of nucleosome accessibility, since the epigenome dynamics depend on SDTF binding sites differently under cooperative and non-cooperative parameters. We compare our numerical results to experimental measurements to test our predictions. Our work proposes a framework that allows a predictive understanding of how nucleosomes respond to SDTF signaling at specific genomic locations to produce chromatin alterations in health and disease.

Ruby Kim (Duke University): A mathematical model of circadian rhythms and dopamine

The suprachiasmatic nucleus (SCN) serves as the primary circadian (24 h) clock in mammals and is known to control important physiological functions such as the sleep-wake cycle, hormonal rhythms, and neurotransmitter regulation. Experimental results suggest that some of these functions reciprocally influence circadian rhythms, creating a complex and highly homeostatic network. Among the clock's downstream products, the orphan nuclear receptors REV-ERB and ROR are particularly interesting because they coordinately modulate the core clock circuitry. Recent experimental evidence shows that REV-ERB and ROR are not only crucial for lipid metabolism but are also involved in dopamine (DA) synthesis and degradation, which could have meaningful clinical implications for conditions such as Parkinson's disease and mood disorders.

We create a mathematical model that includes the circadian clock, REV-ERB and ROR and their feedback to the clock, and the influences of REV-ERB, ROR, and BMAL1-CLOCK on the dopaminergic system. We compare our model predictions to experimental data on clock components in different light-dark conditions and in the presence of genetic perturbations. Our model results are consistent with experimental results on REV-ERB and ROR and allow us to predict circadian oscillations in extracellular dopamine and homovanillic acid that correspond well with observations.

The predictions of the mathematical model are consistent with a wide variety of experimental observations. Our calculations show that the mechanisms proposed by experimentalists by which REV-ERB, ROR, and BMAL1-CLOCK influence the DA system are sufficient to explain the circadian oscillations observed in dopaminergic variables. Our mathematical model can be used for further investigations of the effects of the mammalian circadian clock on the dopaminergic system.

Bismark Oduro (California Univ. of Pennsylvania): Initial aggressive treatment strategies for controlling vector-borne disease like Chagas

Chagas disease is a major health problem in rural South and Central America, where an estimated 8 to 11 million people are infected. It is a vector-borne disease caused by the parasite Trypanosoma cruzi, which is transmitted to humans mainly through the bite of insect vectors from several species of so-called kissing bugs. One of the control measures to reduce the spread of the disease is insecticide spraying of housing units to prevent infestation by the vectors. However, re-infestation of units by vectors has been shown to occur as early as four to six months after insecticide-based control interventions. I will present ordinary differential equation models of SIRS type that shed light on the long-term cost effectiveness of certain strategies for controlling re-infestation by vectors. The results show that an initially very high spraying rate may push the system into a region of the state space with low endemic levels of infestation that can be maintained in the long run at relatively moderate cost.
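
The qualitative claim (a short aggressive spraying phase pushes the system below a re-infestation threshold that moderate constant effort never crosses) requires bistability. The sketch below uses a deliberately simple bistable caricature of housing-unit infestation, not the talk's SIRS model; all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

def infestation_rhs(t, y, beta, spray):
    # Bistable caricature: re-infestation needs a critical mass of infested
    # units (the I^2 term); removal occurs at time-dependent rate spray(t).
    I = y[0]
    return [beta * I**2 * (1 - I) - spray(t) * I]

aggressive = lambda t: 4.0 if t < 5 else 0.4   # initial blitz, then maintenance
constant = lambda t: 0.6                       # uniform moderate spraying

kw = dict(t_span=(0.0, 50.0), y0=[0.6], max_step=0.5)
sol_a = solve_ivp(infestation_rhs, args=(4.0, aggressive), **kw)
sol_c = solve_ivp(infestation_rhs, args=(4.0, constant), **kw)
print(f"aggressive: {sol_a.y[0, -1]:.3f}, constant: {sol_c.y[0, -1]:.3f}")
```

The blitz drives infestation below the unstable threshold, after which a cheap maintenance rate keeps it near zero, while the constant moderate rate leaves the system at a high endemic equilibrium.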

Jeungeun Park (Univ. of Cincinnati): A swimming strategy of flagellar bacteria via wrapping of the flagella around the cell body

Yongsam Kim (Chung-Ang University), Wanho Lee (National Institute for Mathematical Sciences), Sookkyung Lim (University of Cincinnati), and Jeungeun Park(University of Cincinnati)

Flagellated bacteria swim by rotating flagella that are connected to rotary motors in their cell wall. The rotational direction and rate of each motor and the elastic properties of the flagellum characterize their swimming patterns; these patterns help them move toward favorable environments efficiently. In this poster, we present one of the swimming patterns observed in polarly flagellated species living in obstructed natural environments, such as Pseudomonas putida and Shewanella putrefaciens. When these species try to change direction during straight runs, they can undergo a slow swimming phase in which the flagella wrap around the cell body. We numerically investigate the mechanism of this wrapping motion. In particular, we show which factors facilitate the formation of the wrapping mode, and we compare our numerical examples with experimental observations in the literature.

Marissa Renardy (Univ. of Michigan): Temporal and spatial analyses of TB granulomas to predict long-term outcomes

Mycobacterium tuberculosis (Mtb), the causative agent of tuberculosis (TB), kills more individuals worldwide per year than any other infectious agent. As the hallmark of TB, lung granulomas are complex structures composed of immune cells that interact with and surround bacteria, infected cells, and a necrotic core. This interaction leads to diverse granuloma outcomes over time, ranging from bacterial sterilization to uncontrolled bacterial growth, as well as diverse spatial structures. Currently, there are no systematic quantitative methods to classify the formation, function, and spatial characteristics of granulomas. Such analysis would enable better understanding and prediction of granuloma behaviors that have known associations with poor clinical outcomes for TB patients. Herein, we develop a temporal and spatial analysis framework for TB granulomas using a systems biology approach that combines in silico granuloma modeling, geographic information systems, topological data analysis, and machine learning. We apply this framework to simulated granulomas to understand temporal granuloma dynamics, quantify granuloma spatial structure, and predict the relationship between granuloma structure and bacterial growth. As a proof of concept, we apply our in silico predictions to in vivo derived data to test our framework for future applications in personalized medicine.

Adam Rhodes (Univ. of Alberta): Mathematical Modeling of the Immune-Mediated Theory of Metastasis

Adam Rhodes, Department of Mathematical and Statistical Science, University of Alberta
Prof. Thomas Hillen, Department of Mathematical and Statistical Science, University of Alberta

Accumulating experimental and clinical evidence suggests that the immune response to cancer is not exclusively anti-tumor. Indeed, the pro-tumor roles of the immune system (as suppliers of growth and pro-angiogenic factors, or as defenses against cytotoxic immune attack, for example) have long been appreciated, but relatively few theoretical works have considered their effects. Inspired by the recently proposed "immune-mediated" theory of metastasis, we develop a mathematical model for tumor-immune interactions in the metastatic setting, which includes both anti- and pro-tumor immune effects and the experimentally observed tumor-induced phenotypic plasticity of immune cells (tumor "education" of the immune cells). After confronting our model with experimental data, we use it to evaluate the implications of the immune-mediated theory of metastasis. We find that tumor education of immune cells may explain the relatively poor performance of immunotherapies, and that many metastatic phenomena, including metastatic blow-up, dormancy, and metastasis to sites of injury, can also be explained by the immune-mediated theory of metastasis. Our results suggest that further work is warranted to fully elucidate the pro-tumor effects of the immune system in metastatic cancer.

Suzanne L. Robertson (Virginia Commonwealth University): Neighborhood control of vector-borne disease

Outbreaks of vector-borne diseases such as Zika virus can occur after an infected individual introduces the virus to a residential neighborhood after traveling. Management strategies for controlling vector-borne disease typically involve large-scale application of larvicide or adulticide by truck or plane, as well as door-to-door control efforts that require obtaining permission to access private property. The efficacy of the latter depends heavily on the compliance of local residents. We present a model for vector-borne disease transmission in a neighborhood, considering a network of houses connected via mosquito dispersal. We use this model to compare the effectiveness of various control strategies and to determine how the optimal use of door-to-door control and aerial spraying depends on the level of resident compliance as well as on mosquito movement. This is joint work with Jeffery Demers, Sharon Bewick, Folashade Agusto, Kevin Caillouet, and Bill Fagan.

Deena R. Schmidt (Univ. of Nevada, Reno): Contagion dynamics on adaptive networks: Norovirus as a case study

Classical contagion models, such as SIR, and other infectious disease models typically assume a well-mixed contact process. This may be unrealistic for infectious disease spread where the contact structure changes due to individuals' responses to the infectious disease. For instance, individuals showing symptoms might isolate themselves, or individuals that are aware of an ongoing epidemic in the population might reduce or change their contacts. Here we investigate contagion dynamics in an adaptive network context, meaning that the contact network is changing over time due to individuals responding to an infectious disease in the population. We consider norovirus as a specific example and investigate questions related to disease dynamics and applications to public health.
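
A minimal adaptive-network caricature (my own construction, not the norovirus model from the poster): a discrete-time SIS process on a random graph in which susceptible individuals sever links to infected contacts with some probability per step, so the contact structure co-evolves with the disease.

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_sis(n=200, p_edge=0.05, beta=0.1, gamma=0.05, omega=0.3,
                 steps=200, seed_infected=5):
    """Discrete-time SIS where susceptibles cut links to infected neighbors
    with probability omega per step (the adaptive-network response)."""
    A = rng.random((n, n)) < p_edge
    A = np.triu(A, 1)
    A = A | A.T                                 # symmetric, no self-loops
    infected = np.zeros(n, dtype=bool)
    infected[:seed_infected] = True
    prevalence = []
    for _ in range(steps):
        # infection attempts along current S-I edges
        pressure = A[:, infected].sum(axis=1)
        new_inf = (~infected) & (rng.random(n) < 1 - (1 - beta) ** pressure)
        # susceptibles sever edges to infected contacts
        si = np.outer(~infected, infected) & A
        cut = si & (rng.random((n, n)) < omega)
        A &= ~(cut | cut.T)
        infected |= new_inf
        infected &= ~(rng.random(n) < gamma)    # recovery back to S
        prevalence.append(infected.mean())
    return np.array(prevalence)

prev = adaptive_sis()
print(f"final prevalence: {prev[-1]:.2f}")
```

Varying the rewiring probability omega shows how behavioral responses reshape the contact network and, with it, the epidemic course.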

Alessandro Maria Selvitella (Purdue Univ. Fort Wayne) & Kathleen Lois Foster (Ball State Univ.): Uncovering the Impact of the Environment in Lizard Biomechanics: from classical methods to modern statistical learning

Prof. Kathleen Lois Foster, Department of Biology, Ball State University
Prof. Alessandro Maria Selvitella, Department of Mathematical Sciences, Purdue University Fort Wayne

Extraordinary advancements in computing power have facilitated the development and application of sophisticated statistical analyses to biological fields such as genomics, ecology, and evolution. However, even now, when powerful hardware and software tools have never been more accessible and despite significant advancements in statistical theory, physiological branches of biology, like biomechanics, seem to be stuck in the past, with the ubiquitous and almost exclusive use of classical univariate statistics. In this poster, we will discuss how more modern machine learning methods impact and revolutionize the extraction and analysis of biomechanical data. This will be discussed in the context of lizard locomotion and contrasted with the results of classical univariate analyses.

Shuying Sun (Texas State University): Comparative analysis of a few haplotype assembly algorithms

Shuying Sun, Sherwin Massoudian, and Allison Bertie Johnson

Haplotype information is important for further understanding the genetic processes of diseases. Therefore, it is crucial to obtain haplotypes for disease studies. With the development of next-generation sequencing (NGS) technologies, it is now possible to obtain haplotypes using sequencing reads. The process of determining haplotypes from sequencing reads is called haplotype assembly. Haplotype assembly is challenging because NGS datasets are very large and have complex genetic and technological features. Even though a large number of approaches or software packages have been developed, it is unclear how well these programs perform: most are not well evaluated, as they may be compared with only a small number (e.g., 1 or 2) of other methods and are validated on different datasets. In this project, we conduct a comprehensive analysis comparing several currently available haplotype assembly software packages. We assess them based on their statistical or computational methods, algorithmic components, and evaluation features. We will show our comparison results based on a publicly available dataset. With these results, we aim to provide users with both detailed insight into the performance of current methods and new perspectives on haplotype assembly, which will be helpful for developing more accurate and efficient algorithms.

Nayana Wanasingha (Univ. of Cincinnati): Mathematical Model for Frequency Demultiplication in Neurospora crassa

Circadian rhythms are a feature found in many organisms, and they play a vital role in maintaining daily activities over the 24-hour cycle. Recent studies have discovered that disruption of circadian rhythms leads to various neurological and metabolic diseases. Entrainment to environmental cycles is a defining property of circadian rhythms, and entrainment of these rhythms by cycles that repeat twice or more often per day, known as subharmonic entrainment or frequency demultiplication, is a characteristic that has been used to understand the architecture of circadian systems. The mechanistic blueprint of the circadian system of Neurospora crassa, a filamentous fungus, is similar to that of the mammalian system; therefore, findings in Neurospora are transferable to the mammalian system. Experiments show that Neurospora exhibits frequency demultiplication under external temperature cycles with short periods. In this study, I plan to establish a mathematical model representing the core components of the circadian system of Neurospora and to theoretically predict molecular profiles of frq gene expression under different entrainment conditions, to demonstrate that the rhythmic conidiation of Neurospora is a direct reflection of molecular responses under various entrainment regimens.
