What is the smallest or simplest system this theory predicts could be conscious, and how would that be tested?
15 theories have answered this question
IIT makes the striking prediction that even extremely simple systems can be minimally conscious, provided they satisfy the postulates of physical existence. The canonical example developed in the IIT 3.0 paper is a "minimally conscious photodiode" consisting of just two elements: a detector (D) and a predictor (P). The detector receives external light inputs and, when activated, turns on the predictor, which then feeds back to the detector to modulate its sensitivity. Because both elements specify selective causes and effects within the system, their cause-effect repertoires are maximally irreducible, and the conceptual structure specified by the two-element system is also maximally irreducible. Consequently, the system DP forms a complex with a Phi-Max value of 1, generating a MICS (maximally irreducible conceptual structure) composed of just two concepts. The quality of this experience is both quantitatively and qualitatively minimal: the constellation in qualia space has just two stars, and the experience can be described roughly as "it is like this rather than not like this," with no further qualifications. Notably, different physical implementations with the same internal mechanisms --- a blue-light detector, a binary thermistor --- would generate the exact same MICS and thus have the same minimal experience.
At the other boundary, IIT is equally clear about what cannot be conscious: any purely feedforward system, no matter how large or complex, has Phi-Max of zero and therefore lacks consciousness entirely. This applies even to feedforward networks that are functionally equivalent to conscious recurrent systems --- IIT thus predicts the existence of true "zombies," systems that behave identically to conscious beings but lack any subjective experience. This prediction extends to simple von Neumann computers: because a standard computer's architecture disintegrates into many small modules (each with minimal Phi), IIT predicts that even a computer functionally equivalent to a human brain would not be phenomenally equivalent. These boundary predictions --- minimal consciousness for simple recurrent systems, zero consciousness for feedforward systems of any complexity --- are in principle testable.
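The contrast between these two boundary cases can be made concrete in code. The sketch below is my own illustration, not taken from the IIT papers: it encodes the two-element DP system with the open-source PyPhi toolbox, treats the external light input as a background condition, and assumes the simplest reading of the mechanisms, namely that D and P each copy the other's previous state. Exact function names and defaults may vary across PyPhi versions.

```python
# A minimal sketch, assuming PyPhi's standard state-by-node TPM format.
import numpy as np
import pyphi

# Rows are in little-endian state order (D, P) = (0,0), (1,0), (0,1), (1,1);
# each element copies the other's previous state.
tpm = np.array([
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1],
])
cm = np.array([
    [0, 1],   # D -> P (detector drives predictor)
    [1, 0],   # P -> D (predictor feeds back to detector)
])
network = pyphi.Network(tpm, cm=cm, node_labels=("D", "P"))

state = (1, 0)  # D has just fired; P has not yet responded
subsystem = pyphi.Subsystem(network, state, network.node_indices)
print(pyphi.compute.phi(subsystem))  # positive (the paper reports Phi-Max = 1 for this system)

# Deleting the P -> D entry of cm (and making P a pure copy of D) turns the
# system into a feedforward chain, for which the same computation returns 0.
```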
Testing these predictions for very simple systems is challenging because such systems cannot report their experiences. However, IIT suggests several indirect approaches. For biological systems, the most important inferential basis would come from detailed connectomes complemented by relevant neurophysiology. For example, if a bee brain were found to contain a main complex with an extrapolated Phi value only an order of magnitude less than that of a human brain during dreaming sleep, and much higher than that of a human brain in dreamless sleep, IIT would justify assuming that there is something it is like to be a bee. For artificial systems, the prediction is more directly testable in principle: one can analyze the causal architecture of a system to determine whether it forms a complex. A recurrent network should form a complex with positive Phi-Max, while a feedforward network implementing the same input-output function should have Phi-Max of zero. The perturbational complexity index (PCI), already validated in humans across multiple states of consciousness and unconsciousness, provides a practical empirical proxy for integrated information that could in principle be adapted for testing boundary predictions in simpler biological and artificial systems.
Global Workspace Theory's architectural requirements set some implicit lower bounds on what could be conscious, though neither Baars nor Dehaene and Naccache provide a sharp boundary condition. The theory requires, at minimum, a system comprising multiple specialized modular processors that operate in parallel, a distributed set of "workspace" units with long-distance bidirectional connectivity to those processors, a mechanism for top-down attentional amplification that can select and sustain activation in a self-reinforcing loop, and the capacity for global broadcast so that information mobilized in the workspace becomes available to many other subsystems simultaneously. Dehaene and Naccache (2001) identify neurons in layers 2 and 3 of cortex — particularly dense in dorsolateral prefrontal and inferior parietal cortical structures — as the likely biological substrate of workspace connectivity, and they argue that workspace neurons must be specifically targeted by diffuse neuromodulatory systems involved in arousal and the sleep/wake cycle. This suggests the theory expects consciousness to require, at minimum, a thalamocortical system with intact long-range connectivity and arousal modulation.
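As a deliberately schematic caricature (my own, not a model proposed by Baars or by Dehaene and Naccache), the minimal architecture listed above can be sketched as a competition-amplification-broadcast loop; all labels, parameters, and the ignition threshold are invented for the example.

```python
# Toy workspace dynamics: parallel processors compete, a self-reinforcing loop
# amplifies the leading coalition until it "ignites", and the winning content
# is then broadcast to every processor.
import numpy as np

rng = np.random.default_rng(0)
labels = ["vision", "audition", "memory", "language", "motor"]
activation = rng.random(len(labels))        # parallel bottom-up activity in each specialized processor

IGNITION_THRESHOLD = 3.0                    # arbitrary threshold for "global ignition"
for step in range(100):
    winner = int(np.argmax(activation))
    activation[winner] *= 1.3               # top-down amplification sustains the leading coalition
    activation *= 0.95                      # competition/decay suppresses the rest
    if activation[winner] > IGNITION_THRESHOLD:
        break

# Global broadcast: the ignited content becomes available to all processors at once.
workspace_content = labels[int(np.argmax(activation))]
inputs_to_processors = {name: workspace_content for name in labels}
print(step, workspace_content, inputs_to_processors["motor"])
```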
On the evolutionary question, Dehaene and Naccache argue that "having consciousness is not an all-or-none property" and that the biological substrates present in human adults are "probably also present, but probably in partial form in other species (or in young children or brain-lesioned patients)." They note that several mammals and possibly even young human children exhibit greater brain modularity than adults yet show intentional behavior and partially reportable mental states — some working memory ability but perhaps no theory of mind. This suggests the theory envisions a graded continuum rather than a sharp boundary, with consciousness becoming richer as workspace connectivity becomes more elaborate. Baars (2005) similarly notes that GW architectures have been implemented in computational agent simulations (Franklin's IDA system), and while these do not prove consciousness exists in the brain, they provide "an existence proof of their functionality." The theory thus does not clearly exclude artificial systems, but neither does it specify a minimal threshold.
Testing where consciousness starts and stops would, according to the theory, require demonstrating the presence or absence of global ignition. For animal models, this could involve showing whether a stimulus that triggers only local sensory activation in a given species ever produces the widespread, sustained, frontoparietal-involving activation pattern that characterizes consciousness in humans. For simpler systems or artificial implementations, the theory would predict consciousness only if the architecture supports genuine global broadcast — the dynamic mobilization of information from specialized processors into a workspace that makes it available to all other processors. However, the theory provides no formal metric for how much connectivity or how many processors constitute a sufficient workspace, leaving the boundary question substantially underspecified.
HOT theory sets a relatively high bar for the minimum system capable of consciousness, because consciousness requires not merely the ability to process information or represent the environment but the capacity to form higher-order thoughts -- representations of oneself as being in particular mental states. Rosenthal discusses this boundary question most explicitly in the context of infants and nonhuman animals. He argues that forming higher-order thoughts about one's own sensory states is cognitively simpler than forming higher-order thoughts about propositional mental states, because the concept of a sensory experience is less demanding than the concept of a belief or desire. A creature need only be able to think "I am in this sensory state" rather than deploying the full apparatus of folk psychology. This suggests that many nonhuman animals and young infants could be conscious of their sensory states even if they lack the cognitive sophistication for higher-order thoughts about their propositional attitudes. Nevertheless, even this minimal capacity requires some form of self-directed representation -- the ability to think about oneself as being in a state -- which presupposes a degree of conceptual sophistication that very simple organisms would lack.
The implication is that consciousness, on this theory, is not primitive or widespread but rather a cognitive achievement that tracks the capacity for self-referential thought. A creature that can discriminate among external regularities and form rudimentary thoughts about objects in its environment, but cannot represent itself as being in particular internal states, would not be conscious on the HOT account even if its first-order processing is sophisticated. This places the boundary for consciousness somewhere in the animal kingdom -- perhaps at organisms with sufficient cortical or analogous neural complexity to support self-directed representation -- but significantly above the level of simple organisms, thermostats, or basic information-processing systems. Rosenthal explicitly notes that being a conscious creature (being awake and responsive to stimuli) is different from having conscious mental states (being aware of one's own states), and only the latter requires higher-order thought. A creature can be conscious in the first sense while having no conscious mental states in the second.
Testing these boundary predictions presents significant challenges that the theory does not fully resolve. Since the diagnostic marker of consciousness on this account is the presence of higher-order thoughts, and since these thoughts are typically themselves nonconscious, one cannot simply ask a system whether it has them. For verbal creatures, we can use reports as evidence, but for nonverbal organisms or artificial systems this avenue is closed. Rosenthal suggests that the causal connections between first-order states, behavior, and stimuli can reveal the presence of nonconscious mental states, and a similar strategy could in principle be extended to detect higher-order states. One might look for behavioral signatures of self-monitoring, metacognition, or error-correction that suggest a system represents its own internal states. In artificial systems, one could test whether a system that demonstrably generates higher-order representations of its own processing states exhibits different behavioral profiles from one that processes identical first-order information without self-representation. However, the theory's functional rather than architectural specification of the mechanism makes it difficult to generate sharp boundary predictions that could be empirically adjudicated.
The source papers on Predictive Processing do not specify a precise minimal system for consciousness in the way that some other theories do. However, the functional requirements implied by the framework set implicit boundaries. Hohwy (2012) characterizes the brain as "essentially a sophisticated hypothesis tester" that continually minimizes prediction error across multiple spatiotemporal scales through a hierarchical generative model. For a system to be conscious under PP, it would need at minimum a hierarchical architecture capable of generating top-down predictions, computing bottom-up prediction errors, and optimizing precision expectations. This implies that a system with only a single processing level --- unable to distinguish predictions from errors or to weight errors by their estimated reliability --- would not support conscious experience. The minimum system would therefore be one with at least two hierarchical levels, recurrent connectivity allowing top-down and bottom-up message passing, and some mechanism for precision modulation.
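As an illustration of what this two-level minimum amounts to computationally, the toy sketch below (my own construction, not a model from Hohwy or from Seth and Hohwy) updates a single higher-level belief using precision-weighted prediction errors arriving from the sensory level below and from a prior above; all parameter values are invented.

```python
# Minimal two-level precision-weighted prediction-error minimization.
import numpy as np

rng = np.random.default_rng(0)
mu = 0.0            # higher-level belief about the hidden cause
prior_mu = 0.5      # prior expectation at the top level
pi_sensory = 4.0    # precision (inverse variance) assigned to sensory errors
pi_prior = 1.0      # precision assigned to the prior
lr = 0.05           # update rate

for t in range(200):
    x = 1.0 + 0.25 * rng.standard_normal()    # noisy sensory sample around a true cause of 1.0
    eps_sensory = x - mu                       # bottom-up prediction error
    eps_prior = prior_mu - mu                  # error against the top-level prior
    # Belief update: precision-weighted errors pull the belief toward the data and the prior.
    mu += lr * (pi_sensory * eps_sensory + pi_prior * eps_prior)

print(round(mu, 2))  # settles near the precision-weighted compromise between data (1.0) and prior (0.5)
```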
Seth and Hohwy (2021) note that PP is not committed to a single explanatory target for consciousness but rather aims to establish mappings between mechanisms and phenomenological properties that "need to be replicated in all sizes and types of systems." This suggests an openness to the possibility that relatively simple systems implementing genuine hierarchical predictive inference could have some form of experience. However, PP provides no formal metric analogous to integrated information (Phi) that could be computed for a candidate system to determine whether it crosses a threshold into consciousness. The theory's boundary predictions are therefore less precise than those of some competing frameworks. Testing would require demonstrating that a system implements the relevant computational architecture --- hierarchical generative modeling with precision-weighted prediction error minimization --- and then examining whether manipulations that disrupt these mechanisms (analogous to anesthesia or attention disruption) produce the predicted changes in behavior or reportability. For very simple systems, the lack of a clear quantitative threshold makes such testing inherently difficult.
PCT does not specify a precise minimal architecture for consciousness in the way that some theories (such as IIT) attempt, but the functional requirements of the theory imply certain lower bounds. According to Mansell (2024), the minimum requirements for primary consciousness include: a hierarchical control system with multiple levels, intrinsic (homeostatic) control systems capable of generating error signals, and the capacity for reorganization -- the trial-and-error modification of control system properties driven by intrinsic error. Crucially, Mansell proposes that primary consciousness emerges when multiple intrinsic systems conflict and novel perceptual integration occurs to resolve that conflict. This means a system with only a single control loop and no possibility of conflict between competing intrinsic needs would likely not be conscious on this account. The simplest conscious system would need at least enough hierarchical depth and multiplicity of intrinsic control systems that conflicts requiring reorganization can arise.
Young's CoPoQ framework further specifies that consciousness requires: perceptual input functions that generate qualia-like subjective representations, a valence dimension tied to control error, the capacity for reorganization in response to degraded quality, and the potential for consciousness to fade when control is optimized. Young explicitly argues that simple organisms which exhibit adaptive control in unpredictable environments through perceptual feedback may possess some form of consciousness, particularly if they demonstrate learning through reorganization rather than purely hardwired responses. The framework is substrate-neutral in principle -- it focuses on control architecture rather than biological implementation. Evidence of PCT principles in robotics (hierarchical control for locomotion, obstacle avoidance, object manipulation) suggests artificial systems can implement perceptual control, though whether such systems currently possess the reorganization-driven quality perception that constitutes consciousness remains an open question.
Testing these predictions would involve the Test for the Controlled Variable (TCV), which Young (2026) identifies as a rigorous scientific approach for verifying control within purposeful systems and identifying the specific variables being controlled. The TCV involves introducing a disturbance to a hypothesized controlled variable: if the variable remains unchanged despite the disturbance, it indicates the subject is actively controlling that variable by adjusting actions to counteract the disturbance. By determining when conscious and unconscious variables are being controlled through the TCV process, researchers could inch closer to identifying the neural processes underlying experience. Mansell (2024) proposes additional testable approaches: a tracking task in which computational models of two-level perceptual control are fitted to behavioral data while participants shift their focus of consciousness, with the prediction that the level receiving conscious attention should show parameter changes consistent with reorganization. The model also predicts that novel information integration rate is a controlled variable within conscious individuals, testable through paradigms that disturb the informational richness of the environment and measure compensatory behavioral responses (such as attention shifts, mind wandering, or imagination) that would maintain the rate at a reference value.
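The logic of the TCV can be illustrated with a toy control loop (my own sketch, not code from Young or Mansell): a proportional controller holds a hypothesized controlled variable near its reference, and when a disturbance is introduced the variable barely moves because the system's actions absorb it, which is exactly the signature the TCV looks for.

```python
# Toy Test for the Controlled Variable: does the variable resist a disturbance?
perception = 0.0     # current value of the hypothesized controlled variable
reference = 10.0     # internal reference value
action = 0.0         # the system's output
gain = 0.8

history = []
for t in range(100):
    disturbance = 5.0 if t >= 50 else 0.0   # step disturbance introduced mid-run
    perception = action + disturbance        # the environment combines action and disturbance
    error = reference - perception
    action += gain * error                   # the controller adjusts output to cancel the error
    history.append(perception)

print(round(history[49], 2), round(history[99], 2))
# Both values sit near the reference (10.0): the disturbance is counteracted,
# whereas an uncontrolled variable would simply track the disturbance.
```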
Illusionism reframes this question in an important way. Since phenomenal consciousness does not exist on this view, no system -- however complex -- is conscious in the traditional phenomenal sense. The relevant question becomes: what is the simplest system that could generate the illusion of consciousness, meaning the kind of introspective self-representation that creates the appearance of phenomenal properties? Frankish's account implies that this requires a system with substantial cognitive architecture -- at minimum, the capacity for introspective monitoring of its own sensory states and the generation of representations that mischaracterize those states as having simple, intrinsic, qualitative features. The illusion depends on a complex array of introspectable sensory states that trigger cognitive, motivational, and affective reactions, together with introspective mechanisms that bundle these complex features into what appears to be a unified phenomenal feel. A system lacking these introspective capacities would not generate the illusion and thus would not appear conscious even to itself.
This sets a relatively high threshold for consciousness-like illusions compared to theories that locate consciousness in simpler physical properties such as information integration or quantum coherence. Simple organisms or devices that lack introspective representational mechanisms would not generate any illusion of phenomenality, even if they process information in sophisticated ways. Humphrey's proposal, which Frankish discusses favorably, suggests that the relevant feedback loops -- internalized evaluative responses interacting with sensory signals to create internally monitored states that seem phenomenal -- are products of evolutionary elaboration. This implies that the simplest systems capable of generating the illusion would likely be biological organisms with sufficiently developed nervous systems to support both sensory processing and some form of internal monitoring of that processing.
Testing this prediction is challenging, in part because the illusion of consciousness is, by its nature, detectable primarily through the subject's own reports and judgments. A system that generates a robust phenomenal illusion would report having experiences and make phenomenal judgments, whereas a system without introspective mechanisms would not. One could in principle investigate whether a given system's architecture includes the kind of introspective monitoring that illusionists identify as the source of phenomenal judgments. If an organism or artificial system makes phenomenal claims but its architecture can be shown to lack genuine introspective self-representation, this might constitute evidence that such claims are driven by something other than the illusion-generating mechanism illusionists describe. However, Frankish acknowledges that illusionism is a broad research program rather than a single detailed theory, and the precise neural mechanisms underlying the phenomenal illusion remain to be specified, making sharp boundary predictions difficult at the present stage of the theory's development.
Orch OR's answer to this question is governed by the Diosi-Penrose criterion: any physical system capable of sustaining an orchestrated quantum superposition with sufficient gravitational self-energy E_G to reach the threshold for self-collapse (tau = h-bar / E_G) could, in principle, be conscious. The original 1996 paper estimates that for a 500-millisecond coherence time --- corresponding to the empirically observed timescale for pre-conscious to conscious transitions --- approximately 10^9 tubulins displaced in quantum coherent superposition would be needed to achieve self-collapse. Since each neuron contains roughly 10^7 tubulins, this implies that roughly 100 neurons with all of their tubulins in coherent superposition would mark the minimal scale for Orch OR at 500 ms. However, the authors note that only a fraction of tubulins in any given neuron need be coherent, so the actual range for minimal consciousness extends from hundreds to thousands of neurons. Nervous systems as simple as C. elegans (with several hundred neurons) could in principle approach this threshold. Even single-celled organisms like Paramecium, which exhibit sophisticated behaviors using microtubule-based cytoskeletons without synaptic connections, are mentioned as potential candidates for some rudimentary form of conscious experience. The 2014 review paper further elaborates that organisms present at the Cambrian evolutionary explosion 540 million years ago, with roughly 300 neurons, theoretically had sufficient microtubule capacity to reach OR threshold with tau under one minute, potentially supporting rudimentary Orch OR and primitive consciousness.
The theory also scales in both directions through the E_G relationship. More intense experiences involve larger E_G values and shorter tau values. Heightened conscious states such as those reported by Tibetan monk meditators, who generate 80 Hz gamma synchrony, would correspond to roughly twice the neural mass involvement of normal 40 Hz consciousness. Conversely, very simple systems with small E_G would have extremely long tau values and correspondingly rare, dim, and slow conscious moments that would be increasingly out of step with real-world biological timescales. The theory thus predicts a spectrum of consciousness intensity determined by the physics of quantum gravitational self-collapse rather than by computational complexity or information integration per se.
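The back-of-envelope arithmetic behind these estimates is straightforward; the sketch below simply restates the numbers quoted above (it does not compute gravitational self-energies, which is the hard physical part).

```python
# Arithmetic only; figures are those quoted from the 1996 paper and 2014 review.
hbar = 1.054571817e-34                 # reduced Planck constant, J*s

tubulins_needed = 1e9                  # coherent tubulins for a 500 ms pre-conscious interval
tubulins_per_neuron = 1e7              # rough tubulin count per neuron
print(tubulins_needed / tubulins_per_neuron)   # ~100 neurons if every tubulin is coherent

# tau = hbar / E_G: the self-energy reaching threshold at 500 ms, and the scaling
# whereby doubling the coherent mass (hence E_G) halves the collapse time tau.
E_G_500ms = hbar / 0.5                 # ~2.1e-34 J
print(E_G_500ms, hbar / (2 * E_G_500ms))       # doubling E_G gives tau = 0.25 s
```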
Testing these predictions faces substantial challenges, but the theory identifies several empirical avenues. The most direct test of the underlying OR mechanism would be to demonstrate that an unperturbed quantum superposition of sufficient mass self-collapses according to tau = h-bar / E_G. Hameroff and Penrose note that a proposed experiment involving a small object such as a 10-micrometer cube of crystalline material held in coherent superposition at two locations differing by about an atomic nucleus diameter could test this, though the technology remains at the frontier. For biological systems, Bandyopadhyay's group at the National Institute for Materials Science in Japan has discovered conductive resonances in single microtubules at specific alternating current frequencies in the gigahertz, megahertz, and kilohertz ranges, consistent with quantum coherent effects at biological temperatures. Their finding that electronic conductance along microtubules is temperature-independent and length-independent at these resonance frequencies supports the claim of quantum coherence in microtubules. However, demonstrating that such effects occur in living brains and are causally linked to consciousness remains a much harder problem. The 2014 review lists twenty specific testable predictions, including that coherent gigahertz excitations will be found in microtubules (partially confirmed), that quantum tunneling occurs across gap junctions (untested), and that the critical OR threshold follows tau = h-bar / E_G in technological quantum superposition experiments (under development but years from firm conclusions).
DLCT sets a relatively high threshold for the minimal conscious system compared to theories that attribute consciousness to simple recurrent circuits or individual information-integrating units. The theory requires that a conscious system possess multiple interacting neural network modules that can be treated as macro-level mathematical functions, a self-referential feedback control mechanism that computes intrinsic feedback errors defined by algebraic relationships among those modules, and micro-level elements (neurons or neuron-like units with plastic synapses) that can modify their weights in response to macro-level feedback errors. The authors state explicitly that their formulation "applies only to the brain or brain-like systems" because it assumes that the micro-element is a feedback control system (such as a neuron with modifiable synapses), not a molecule or a simple physical state. At minimum, the system would need at least two neural network modules whose functional composition can be compared (to define a non-trivial algebraic constraint like commutativity), plus the circuitry to compute and transmit feedback errors between levels.
The proof-of-concept experiment described in the first paper demonstrates structural downward causation in a system with two neural network modules (an encoder and decoder architecture with modules f1 and f2) trained to satisfy commutativity at the macro level. This constitutes a minimal working example of the theory's mechanism, though the authors do not claim this artificial system is conscious --- they present it as a demonstration that macro-level supervenient feedback errors can indeed change micro-level synaptic weights. The theory also claims universality: it does not restrict consciousness to biological systems, asserting that neurons can be modeled as "functions with plasticity, which can be artificially reproduced." This means that an artificial system with the right modular architecture and self-referential feedback control could in principle be conscious.
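A highly simplified sketch of this kind of setup is shown below (my own toy, not the authors' code or their exact constraint): two linear modules f1 and f2 are trained so that a macro-level algebraic relation, here that f2 composed with f1 approximates the identity, defines a feedback error whose gradient reaches down and modifies the micro-level weights of both modules.

```python
# Macro-level constraint (f2(f1(x)) = x) driving micro-level weight changes.
import numpy as np

rng = np.random.default_rng(0)
W1 = 0.5 * rng.standard_normal((4, 4))   # micro-level weights of module f1 (encoder)
W2 = 0.5 * rng.standard_normal((4, 4))   # micro-level weights of module f2 (decoder)
lr = 0.02

for step in range(5000):
    x = rng.standard_normal(4)
    h = W1 @ x                  # macro-level function f1
    x_hat = W2 @ h              # macro-level function f2
    macro_error = x_hat - x     # error defined only by the macro-level algebraic relation
    # The macro-level error changes micro-level weights (gradient of 0.5*||macro_error||^2).
    W2 -= lr * np.outer(macro_error, h)
    W1 -= lr * (W2.T @ np.outer(macro_error, x))

x_test = rng.standard_normal(4)
print(np.linalg.norm(W2 @ (W1 @ x_test) - x_test))   # small after training: the constraint is (approximately) satisfied
```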
Testing whether a minimal system is conscious under DLCT would require demonstrating that the system's behavior cannot be explained by micro-level laws alone and that macro-level algebraic structural constraints are actively shaping micro-level dynamics. The authors propose a constructive methodology: implementing both macro-level psychological laws and micro-level neural laws in simulations or robotic systems, then examining whether the system's behavior departs from what micro-level laws alone would predict. If autonomous macro-level psychological laws can alter the behavior of neural systems in ways that cannot be replicated by any purely bottom-up model, this would provide evidence that the self-referential feedback mechanism is operative. However, the authors acknowledge that "since both macro- and micro-level laws cannot be directly derived from observational data, exploratory approaches may also be necessary," and they suggest machine learning as a tool for discovering the relevant combinations of laws. Direct verification of consciousness in minimal systems remains an open challenge.
Irruption theory ties consciousness to the broader framework of living, autopoietic organization rather than to a specific neural threshold, which means it sets a biologically grounded lower bound rather than a purely computational one. The theory is developed within the enactive tradition, which holds that a living system maintains itself as a far-from-equilibrium, self-producing entity that must adaptively balance openness to the environment (to acquire free energy) with closure from the environment (to maintain structural integrity). Consciousness, in this framework, is linked to the bidirectional mind-matter interaction that manifests as irruption and absorption -- processes that Froese specifically associates with the thermodynamics of living systems. The thesis of irruption proposes that an organism's motivational involvement in the generation of its behavior is associated with its own specific cost of thermodynamic free energy, and that this motivated activity introduces a kind of disorder distinguishable from ordinary thermodynamic noise. This strongly suggests that consciousness, or at least some minimal form of mentality, requires the metabolic and thermodynamic organization characteristic of life.
The theory does not specify a precise minimal system, but several constraints can be inferred. The system must be capable of exhibiting both irruption (entropy-increasing, underdetermined bursts of activity correlated with agency) and absorption (entropy-decreasing compression correlated with experience). The Self-Optimization Model discussed by Froese and colleagues, which models the enactive conception of life through arbitrary state resets combined with slow plasticity, suggests that even simple autopoietic systems could in principle exhibit rudimentary forms of these dynamics. Froese raises the question of whether "wet" chemical systems that are sufficiently open and complex could support irruptions due to regulation of viability constraints. This suggests the minimal conscious system might be a simple living organism -- perhaps at the level of single-celled organisms or minimal metabolic networks -- rather than a complex nervous system, though this remains speculative within the theory.
Testing this prediction would be challenging but not impossible within the framework. Irruption theory's empirical strategy converts the conceptual problem of unintelligibility into the methodological problem of unpredictability. One would look for the dual signatures of irruption and absorption in candidate systems: measurable entropy increases that cannot be accounted for by the system's physical dynamics alone (irruption), and measurable information compression or loss of variability correlated with environmental sensing (absorption). For minimal living systems, one might measure whether motivated or adaptive behavior is associated with specific thermodynamic costs -- extra free energy dissipation beyond what metabolism alone requires -- and whether sensory responsiveness correlates with decreased dimensionality in the system's state space. If a simple living system showed both signatures in a coordinated, bidirectional manner, irruption theory would interpret this as evidence of minimal consciousness. However, Froese himself acknowledges that the physics of life and the thermodynamics of information are still in early stages of development, making such tests currently more aspirational than practical.
The FEP-based model sets a clear structural minimum for consciousness: according to the inner screen hypothesis, a system must be composite, possessing at least two mutually separable components that communicate classically, separated by an internal Markov blanket. A system that is not composite --- that has no internal Markov blanket partitioning it into distinguishable subsystems --- cannot exhibit even the most minimal form of consciousness involving short-term classical memory. This means that a single particle or a simple homogeneous system without internal structure would not qualify. However, the theory explicitly extends the potential for consciousness well beyond brains: "all organisms, including unicellular ones, have such compartmentalized internal structures" that could in principle support forms of awareness. The model suggests that even very simple biological entities with internal compartmentalization --- such as cells with distinct organelles separated by membranes --- could meet the structural minimum for some basal form of consciousness, though this would be far more rudimentary than human experience.
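In structural terms, the compositeness requirement can be illustrated with a small sketch. This is my own simplification: the FEP's actual condition concerns conditional independencies in the system's dynamics, whereas the check below looks only at the coupling pattern, namely that internal and external states have no direct coupling, so every interaction passes through the blanket states.

```python
# Structural check for an internal Markov blanket partition (simplified sketch).
import numpy as np

# Hypothetical coupling matrix among 6 state variables (nonzero = direct influence).
A = np.array([
    # e1 e2 b1 b2 i1 i2
    [0, 1, 1, 0, 0, 0],   # e1 (external)
    [1, 0, 0, 1, 0, 0],   # e2 (external)
    [1, 0, 0, 1, 1, 0],   # b1 (blanket)
    [0, 1, 1, 0, 0, 1],   # b2 (blanket)
    [0, 0, 1, 0, 0, 1],   # i1 (internal)
    [0, 0, 0, 1, 1, 0],   # i2 (internal)
])
external, blanket, internal = [0, 1], [2, 3], [4, 5]

def is_blanket_partition(A, internal, blanket, external):
    # Blanket condition in this structural form: no direct coupling between
    # internal and external states in either direction.
    return (not A[np.ix_(internal, external)].any()
            and not A[np.ix_(external, internal)].any())

print(is_blanket_partition(A, internal, blanket, external))  # True: the partition qualifies
```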
The theory further specifies that consciousness requires not just an internal Markov blanket but an irreducible one --- meaning the internal states behind the innermost screen cannot be further partitioned. Additionally, the active states of this irreducible blanket must exert causal intervention (not merely causal influence) on external dynamics, and the system must possess a generative model with sufficient temporal depth to entertain counterfactual consequences of its actions. This additional criterion of temporal thickness and counterfactual depth suggests that while very simple composite systems may possess some minimal form of basal awareness, full-blown consciousness of the kind humans experience requires substantially more elaborate hierarchical structure. Friston distinguishes between the minimal self-evidencing of a virus (which has biotic, self-organizing properties but lacks temporal depth) and the temporally thick generative models of organisms like "vegans" (complex organisms that model the future and plan actions), suggesting that consciousness proper requires the latter.
Testing these predictions presents significant challenges. For biological systems, the theory suggests that one could examine whether a system possesses the requisite nested Markov blanket structure with an irreducible innermost blanket whose active states causally intervene on subordinate dynamics. Wiese's "FEP Consciousness Criterion" (FEP2C) proposes that a system's physical dynamics must entail computational dynamics that include computational correlates of consciousness, satisfying conditions of implementation (tight coupling between computation and hardware), energy efficiency, appropriate causal flow between internal, blanket, and external states, and an existential condition whereby the system sustains its existence partly by virtue of its conscious computational dynamics. For artificial systems, the causal-flow condition provides a potentially testable criterion: systems with a von Neumann architecture, where all causal interactions between memory states are mediated by a central processing unit, would fail this condition because their internal states never directly causally interact with their blanket and external states. Testing whether the causal-flow distinction actually tracks consciousness would require comparing systems with matched computational dynamics but different causal architectures.
Recurrent Processing Theory does not provide a precise formal specification of the minimal system required for consciousness in the way that, for instance, theories with explicit mathematical formalisms do. However, the theory's core commitments generate several principled constraints on what the minimal conscious system would look like. The essential requirement is neural circuitry capable of sustaining recurrent processing -- that is, bidirectional interactions between processing levels where feedback from higher-level areas modulates activity in lower-level areas, and horizontal connections within areas enable lateral interactions. Additionally, Lamme emphasizes the importance of NMDA receptor-mediated synaptic mechanisms for sustaining recurrent interactions and their associated plasticity. This suggests that the minimal conscious system, on the RPT account, would need to be a biological neural circuit with at least two levels of processing connected by both feedforward and feedback pathways, with synaptic mechanisms capable of supporting the simultaneous co-activation of pre- and postsynaptic neurons that characterizes recurrent processing.
Lamme's discussion implies that localized recurrent processing within sensory cortical areas (his Stage 3) is sufficient for phenomenal consciousness. He argues that even a relatively small patch of visual cortex engaging in recurrent interactions -- for instance, the recurrent loops between V1, V2, and V4 that support figure-ground segregation of texture-defined figures -- could sustain a conscious percept. This is consistent with the theory's claim that consciousness is orthogonal to attention, access, and reportability, meaning that even localized, unreported recurrent processing constitutes genuine phenomenal experience. The theory does not, however, extend consciousness to the cerebellum or basal ganglia, because these structures lack the appropriate recurrent connectivity architecture despite having enormous numbers of neurons. This implies that the relevant criterion is not neuron count or overall complexity, but the specific pattern of reciprocal connectivity that enables recurrent interactions.
Testing these predictions presents significant methodological challenges that Lamme himself acknowledges. The fundamental difficulty is that the simplest systems predicted to be conscious would be precisely those incapable of reporting their experiences, creating a verification problem. Lamme proposes that the way forward is to adopt neural measures of consciousness rather than relying exclusively on behavioral report. One could measure the presence of recurrent processing signatures -- such as figure-ground modulation signals, the visual awareness negativity (VAN) ERP component, or measures of causal density and effective connectivity between areas -- in progressively simpler systems. If a minimal cortical circuit demonstrated these signatures, the theory would predict it is conscious. For animal systems, Lamme suggests that neural measures may be "our only resort" for settling questions about consciousness in neural disease, locked-in syndrome, coma, anesthesia, or animals. However, the circularity problem remains: using the theory's own neural criterion to verify its predictions about which systems are conscious means the test cannot be fully independent of the theory itself. The theory would benefit from specifying a more precise threshold -- how much recurrent processing is enough -- but as currently stated, any cortical circuit engaging in genuine recurrent interactions would qualify.
Graziano (2022) provides an explicit evolutionary account of consciousness that implies graded predictions about which systems could possess it. The simplest component of attention --- lateral inhibition, or signal competition --- is present in the earliest nervous systems, dating back roughly 600 million years. However, this basic form of attention would not, on its own, give rise to consciousness under AST, because consciousness requires not just attention but a model of attention, an attention schema. The theory predicts that a sufficiently sophisticated form of selective attention that can be endogenously controlled, combined with an internal model of that attentional process, constitutes the minimal architecture for consciousness. Graziano speculates that the overt movement-based control of attention (such as eye movements controlled by the optic tectum) probably evolved with early vertebrates, and that a more sophisticated covert, endogenously controlled selective attention evolved in the vertebrate forebrain. An attention schema to help control this more complex form of attention is probably present, in some degree, across a wide range of species including many mammals, birds, and nonavian reptiles. The theory does not identify a single minimal system with precision, but it does suggest that any system possessing both endogenous selective attention and an internal model of that process would have at least some rudimentary form of consciousness.
Testing this prediction requires assessing whether a system constructs and uses a model of its own attentional processes. Graziano (2022) describes experiments using artificial neural network models as a direct approach: networks trained to perform attentional tasks were given or denied an attention schema component, and only those with the schema successfully controlled attention, providing a proof of concept that the schema is functionally necessary for good attentional control. For biological systems, the theory predicts that damage to brain areas responsible for computing the attention schema (such as the TPJ in humans) should produce a specific pattern of deficits: loss of reported conscious experience together with loss of endogenous attentional control, while preserving some residual bottom-up attention. For artificial systems, AST implies that building an attention schema into a machine --- a model of its own attentional process --- would be both necessary and potentially sufficient for that machine to claim, believe, and behaviorally demonstrate properties associated with consciousness. This represents a technologically testable prediction, though the question of whether such a machine would "really" be conscious or merely claim to be is one that AST treats as scientifically addressable through the same framework: if the machine has the right information in the right form, then it has the same basis for its claims as a human does.
NPS does not make explicit predictions about the smallest or simplest system that could be conscious. This is a direct consequence of the theory's self-imposed scope: it addresses the specificity of phenomenal content, not the question of which systems are conscious in general. Lyre states plainly that NPS "does not tell us when a mental state is conscious in general, but rather what the specific content of phenomenally conscious states is." The theory presupposes that a system already has conscious experiences and then asks how those experiences are structurally individuated and how they relate to neural structures. It takes no stand on the minimal architectural, computational, or organizational requirements for consciousness to arise in the first place.
That said, the theory's framework does impose certain implicit constraints. For a system to have phenomenal experiences with determinate qualitative character, it must possess a sufficiently rich Q-structure --- a space of possible experiences with internal similarity and difference relations. This Q-structure must in turn be mirrored by an N-structure via a surjective homomorphism, meaning the system must have neural (or at least physical) activation spaces that preserve the relational organization of the phenomenal space. A system with only a single possible state, or a system whose internal states lack relational structure, would seemingly fall below the threshold, as there would be no Q-structure to speak of. But NPS does not formalize this into a boundary criterion. Testing would, in principle, involve determining whether a candidate system's internal activation space can be shown to mirror a phenomenal quality space, using methods like representational similarity analysis (RSA) or similarity ratings. However, this methodology requires that the system have reportable or at least behaviorally accessible phenomenal states, which is precisely the question at issue for borderline systems. NPS thus lacks the resources to make crisp boundary predictions about minimal consciousness.
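As a sketch of what such a test could look like in practice (my own illustration with fabricated toy data, not an analysis from Lyre), representational similarity analysis compares the pairwise dissimilarity structure of a system's activation patterns with the dissimilarity structure of a quality space derived from similarity ratings.

```python
# Toy RSA: does the N-structure (activation geometry) mirror the Q-structure (quality space)?
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_units = 8, 50
latent = rng.standard_normal((n_stimuli, 3))                     # shared toy geometry, for illustration only
activations = latent @ rng.standard_normal((3, n_units))         # hypothetical activation patterns (N-structure)
ratings = latent + 0.1 * rng.standard_normal((n_stimuli, 3))     # hypothetical quality-space coordinates (Q-structure)

neural_rdm = pdist(activations)      # pairwise dissimilarities in activation space
quality_rdm = pdist(ratings)         # pairwise dissimilarities in the quality space
rho, _ = spearmanr(neural_rdm, quality_rdm)
print(round(rho, 2))   # high here only because the toy data were built to share one geometry
```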
The sensorimotor contingency theory sets a relatively demanding threshold for consciousness but is notably open about the possibility of consciousness in non-biological systems. O'Regan and Noe note that for a creature or a machine to possess visual awareness, it must satisfy three conditions: it must be exploring the environment in a manner governed by the relevant sensorimotor contingencies, it must have mastery of these laws, and it must be actively exercising this mastery in the service of thought and planning. On this view, consciousness is not tied to any specific physical substrate but to the structure of the organism-environment interaction. A particularly instructive boundary case discussed in the paper is Lenay's simple echolocation device: a single photoelectric sensor attached to a person's forefinger, which beeps when pointed at a light source. Users of this device rapidly come to "sense" the presence of objects outside of them, rather than noticing vibrations on their skin, because they establish different types of sensorimotor contingencies through their exploratory movements. However, the device itself is not conscious --- it is the embodied agent wielding the device that has the relevant mastery and integrative capacities.
O'Regan and Noe explicitly acknowledge that awareness comes in degrees, and that machines capable of planning and rational behavior might accordingly be attributed some degree of awareness. They note that if a chess-playing machine could purposefully lose a game so as to avoid upsetting a child, or if a medical diagnosis system could sensitively convey bad news, one would be more willing to accord higher degrees of awareness to these systems. But mere input-output complexity is not sufficient. The decisive criterion is whether the system exercises practical mastery of sensorimotor contingencies and integrates that exercise into genuine thought, planning, and action in the world. A thermostat, for instance, responds to temperature contingencies but lacks any capacity for mastery, reflection, or integration with broader cognitive life. The theory thus predicts that the simplest genuinely conscious system would need to be an embodied agent with a sensory apparatus, the capacity for active environmental exploration, implicit knowledge of the sensorimotor laws governing that exploration, and the ability to integrate this knowledge into flexible thought and action --- a bar that, while not requiring any specific neural architecture, is functionally quite high.
Testing this prediction is difficult, as O'Regan and Noe themselves acknowledge that their framework is not a theory to be tested in the everyday scientific sense but rather a general framework that recasts old problems and generates new lines of research. Susan Blackmore, in the peer commentary, proposes three concrete experiments: scrambled-vision goggles (testing whether learning completely new sensorimotor contingencies restores normal-seeming vision), manual vision for the blind (testing whether moving the ears to control auditory feedback produces facial sensations), and blinded vision (testing whether yoking one observer's visual input to another's eye movements renders the passive observer blind even while receiving identical visual input). This last experiment is particularly diagnostic: if two observers receive identical visual input but only the actively exploring observer sees, while the passive observer --- whose eye movements are ineffective and uncorrelated with the input --- is effectively blind, this would powerfully confirm the sensorimotor contingency account of consciousness.
DIT is explicitly a neurobiological theory grounded in the specific anatomy of the mammalian thalamocortical system, and it ties consciousness to the interaction between cortical layer 5 pyramidal neurons and non-specific thalamic nuclei. This architectural commitment means the theory predicts that the minimal system capable of consciousness must possess, at minimum, cortical L5p neurons with their characteristic dual-compartment dendritic structure (apical and somatic integration zones) and a functioning non-specific thalamus capable of gating the coupling between these compartments. The authors assume that the basic evolutionary blueprint of consciousness neurobiology is similar between humans and other mammalian species, and they draw heavily on rodent experiments. This implies that rodents and likely all mammals possessing this thalamocortical architecture would be candidates for consciousness according to DIT. Any system lacking L5p neurons with their specific dendritic properties and the thalamic gating mechanism -- including invertebrates, artificial neural networks, or purely cortical organoids without thalamic input -- would be predicted to lack consciousness under this framework.
Testing the minimal-system prediction is something the authors discuss directly. They propose that the rodent model, where it is possible to manipulate the different components of the L5p-thalamic loop using optogenetics, pharmacology, and two-photon calcium imaging, is the ideal testbed. Specific experiments would involve selectively silencing or activating L5p neurons, disrupting the non-specific thalamic nuclei (e.g., the posteromedial nucleus, POm), or blocking metabotropic receptors along the apical dendrites, and then measuring effects on behavioral indicators of conscious perception. The authors note, however, significant limitations: not much is known about human L5p neurons, virtually nothing is established about their projection patterns, and it remains unclear whether the two classes of L5p neurons (L5A and L5B) identified in rodents exist in comparable form in humans. These gaps make it difficult to specify with confidence exactly how minimal the system could be or to test the boundary conditions rigorously. The theory does not address whether non-mammalian organisms with structurally analogous but not homologous thalamocortical systems could be conscious.