
What specific, falsifiable predictions does this theory make that could distinguish it from competing theories, and what evidence would falsify it?


IIT generates several specific, falsifiable predictions that arise directly from its postulates. First, the theory predicts that the posterior-central cortex, not the prefrontal cortex, constitutes the main complex underlying consciousness, because the dense, lattice-like connectivity of posterior cortical areas supports higher integrated information than the modular organization of prefrontal regions. This prediction is controversial and is the subject of ongoing empirical tests: if prefrontal areas were shown to be essential constituents of the substrate of consciousness (rather than serving as processing loops and background conditions), this would challenge a core prediction of IIT. Second, IIT predicts that the cerebellum does not contribute directly to consciousness despite its enormous neuronal count, because its modular, largely feedforward architecture cannot support a large complex. If cerebellar lesions were found to directly and consistently alter the content of conscious experience (as opposed to affecting motor coordination and cognitive processing), this would pose a problem for the theory.

Third, IIT makes the distinctive prediction that changes in connectivity within the main complex should result in changes in experience even without accompanying changes in neural activity. An initial test of this prediction used a training paradigm in which repeated co-flashing of two spots enhanced connectivity between their cortical targets, causing the perceived space between distant, untrained spots to contract --- even though the cortical activity triggered by those spots was presumably unchanged. Further experiments are testing the related prediction that patients with lesions of primary visual cortex should experience space as contracted. Fourth, IIT predicts that purely feedforward systems cannot be conscious regardless of their complexity or functional capabilities --- meaning that a feedforward network could be a true "zombie," functionally equivalent to a conscious system but lacking experience entirely. If it could be demonstrated that a purely feedforward system is conscious (by some independently validated criterion), this would falsify a central tenet of IIT.

Perhaps the most operationalized falsification tool developed from IIT is the perturbational complexity index (PCI), which measures the complexity of cortical responses to TMS perturbations as a proxy for integrated information. PCI has been extensively validated across many conditions of consciousness and unconsciousness in healthy adults and brain-damaged patients. IIT predicts that PCI should be invariably high whenever a subject is conscious and low whenever consciousness is absent. Evidence that PCI is consistently low during verified conscious states, or consistently high during verified unconscious states, would undermine the theory's empirical foundation. Additionally, IIT's explanatory identity --- the claim that the cause-effect structure accounts for all properties of experience with no additional ingredients --- is itself a falsifiable commitment: if systematic study revealed aspects of experience that could not be accounted for by properties of the corresponding cause-effect structure (for example, if the phenomenology of spatial extendedness could not be mapped onto grid-like substrate organization), this would constitute evidence against IIT's central identity claim. However, it should be noted that computing Phi-Max exactly is currently infeasible for systems of more than about a dozen elements, which limits the practical testability of several of IIT's more precise predictions for realistic biological substrates.
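
Since PCI is the theory's main operational tool, a toy version helps fix ideas. The sketch below computes a normalized Lempel-Ziv complexity of a binarized response matrix, which is the core of the PCI calculation; the real pipeline (TMS-evoked potentials, source localization, statistical binarization, normalization against a maximum-entropy estimate) is considerably more involved, so treat this as a schematic illustration only:

```python
import numpy as np

def lempel_ziv_complexity(s):
    """Number of distinct phrases in the LZ76 parsing of a binary string."""
    i, phrases, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the current phrase while it has already appeared earlier
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def pci_like(evoked):
    """Normalized LZ complexity of a binarized (channels x time) response."""
    binary = (evoked > np.median(evoked, axis=1, keepdims=True)).astype(int)
    s = "".join(map(str, binary.flatten()))
    n = len(s)
    return lempel_ziv_complexity(s) * np.log2(n) / n  # ~1 for random strings

rng = np.random.default_rng(0)
differentiated = rng.standard_normal((16, 100))                # channels differ
stereotyped = np.tile(rng.standard_normal((1, 100)), (16, 1))  # channels identical
print(f"differentiated response: {pci_like(differentiated):.2f}")  # high
print(f"stereotyped response:    {pci_like(stereotyped):.2f}")     # low
```

A spatially differentiated response compresses poorly and scores high, while a stereotyped response repeated across channels compresses well and scores low, which is the contrast PCI exploits between conscious and unconscious brains.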

Global Workspace Theory generates several specific, empirically testable predictions that are laid out in both source papers. Dehaene and Naccache (2001) articulate a key structural prediction: information that is actively represented in neural firing but lacks bidirectional connectivity with workspace neurons should be permanently inaccessible to consciousness, regardless of how much introspective effort the subject applies. This is testable — if a neural population with no anatomical pathway to prefrontal or parietal workspace areas could be shown to support conscious content, the theory would be falsified. Baars (2005) lists a set of functional predictions in his Table 1, including that conscious perception enables access to widespread brain sources while unconscious sensory processing is much more limited; that conscious events enable almost all kinds of learning while unconscious ones do not; and that conscious perceptual feedback enables voluntary control over any neuronal population, perhaps even single neurons. Each of these is testable by contrasting matched conscious and unconscious conditions.

A particularly distinctive prediction concerns the dynamics of workspace ignition. The theory predicts a sharp, nonlinear threshold between unconscious and conscious processing: as stimulus strength or duration increases, there should be a sudden transition from localized, feed-forward activation to a large-scale, self-sustained, reverberant pattern involving prefrontal and parietal areas. This "ignition" should be all-or-none at the neural level, not graded. Dehaene and Naccache further predict that it should be impossible for an unconscious stimulus to exert top-down control over circuit selection on a trial-by-trial basis — a stimulus that contacts the workspace for a duration sufficient to alter top-down control should always be globally reportable. If experiments demonstrated that a genuinely unreportable stimulus could nonetheless reconfigure task sets or exert strategic top-down influence that varies from trial to trial, this would constitute a serious challenge to the theory. Conversely, the workspace model makes the counter-intuitive prediction that even a novel, non-automatized processing pathway, once consciously prepared, can be applied unconsciously to a subsequent masked stimulus.
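
The all-or-none character of ignition can be illustrated with a toy bistable model. The sketch below (illustrative parameters, not drawn from the source papers) drives a single self-exciting "workspace" population with a brief pulse: below a critical pulse strength, activity decays back to baseline; above it, the recurrent loop latches into a self-sustained state that outlasts the stimulus, with no stable intermediate level:

```python
import numpy as np

def late_activity(pulse, w=8.0, theta=4.0, dt=0.01, steps=4000, pulse_steps=1000):
    """Activity of a self-exciting population long after a transient pulse.

    dr/dt = -r + f(w*r + input - theta), with f sigmoidal.  Strong recurrence
    (w) makes the system bistable: weak pulses decay, strong ones 'ignite' a
    reverberant state that persists after the input is gone."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))
    r = 0.0
    for t in range(steps):
        inp = pulse if t < pulse_steps else 0.0
        r += dt * (-r + f(w * r + inp - theta))
    return r

# subthreshold pulses leave no persistent trace; suprathreshold ones latch
for pulse in (0.5, 1.0, 1.6, 2.0):
    print(f"pulse {pulse:.1f} -> late activity {late_activity(pulse):.2f}")
```

The discontinuous jump in late activity as pulse strength crosses the critical value is the toy analogue of the predicted all-or-none ignition threshold.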

What would falsify the theory most decisively would be evidence that consciousness can occur without widespread cortical activation and long-distance synchrony — for example, if a localized, purely feed-forward neural process could be shown to support genuine phenomenal consciousness, or if consciousness persisted despite complete disruption of frontoparietal connectivity. The finding that rich conscious experience arises in brain regions entirely disconnected from prefrontal and anterior cingulate cortex would also challenge the framework. However, the theory faces a significant vulnerability in that its core prediction — that consciousness correlates with widespread activation — is shared by several other frameworks, making it difficult to distinguish empirically from alternative accounts that also predict distributed neural signatures. Dehaene and Naccache acknowledge this partially by noting that increased high-frequency coherence and synchrony, while predicted to accompany workspace mobilization, may also occur during non-conscious modular processing, making synchrony alone a "necessary but not sufficient" neural precondition.

HOT theory generates several distinctive predictions that follow directly from its core commitment. First, the theory predicts that conscious mental states should always be accompanied by distinct higher-order thoughts, and that there should exist a causal mechanism connecting first-order states to their corresponding higher-order representations. This implies the existence of separable neural populations or circuits: one supporting first-order processing and another generating higher-order representations. Lesions or disruptions that selectively target the higher-order system should eliminate consciousness of first-order states while leaving those states functionally intact -- they should still influence behavior, interact with other mental states, and retain their intentional and phenomenal properties. This prediction distinguishes HOT theory from theories that treat consciousness as an intrinsic property of certain types of neural processing, which would not predict such clean dissociations. If it were found that no neural system can be disrupted to selectively eliminate consciousness while preserving the full functional profile of first-order states, this would weaken the theory considerably.

Second, the theory makes the bold prediction that "empty" higher-order thoughts are possible -- that is, one can have a conscious experience of being in a mental state that one is not actually in, simply because an errant higher-order thought represents one as being in that state. Rosenthal argues these cases would be rare and pathological, but they are a direct consequence of treating consciousness as arising from a distinct higher-order state rather than being intrinsic to first-order processing. If empirical investigation showed that such misrepresentations are impossible -- that one can never have a conscious experience of a quality without the corresponding first-order sensory state actually occurring -- this would count as evidence against the theory. Conversely, clinical cases in which patients report vivid conscious experiences that cannot be traced to any corresponding first-order sensory activation would provide striking confirmation.

Third, the theory predicts that consciousness should not vary with the complexity, intensity, or integration of first-order processing per se, but only with the presence or absence of higher-order representation. A theory like IIT, by contrast, predicts that consciousness tracks integrated information, and Global Workspace Theory predicts it tracks global broadcast. HOT theory is thus falsifiable in the following way: if consciousness could be shown to track reliably with properties of first-order processing alone -- integration, broadcast, recurrence -- in cases where the higher-order thought mechanism is held constant, this would undermine the theory's explanatory core. Furthermore, if the theory's implied high bar for consciousness is wrong -- if organisms demonstrably incapable of self-referential representation (such as very simple invertebrates) are shown to be phenomenally conscious through some independently validated criterion -- this would challenge the theory's boundary claims. However, it must be acknowledged that many of these predictions are difficult to test in practice because the higher-order thoughts posited by the theory are typically nonconscious and therefore not directly observable, and because the theory does not specify the neural implementation in enough detail to guide targeted experimental interventions.

PP generates several testable predictions about the relationship between attention, perception, and consciousness. Hohwy (2012) derives specific empirical predictions about inattentional blindness, change blindness, Troxler fading, covert attention effects, and biased competition, all framed in terms of precision-weighted prediction error minimization. For instance, the framework predicts that inattentional blindness for unexpected stimuli (such as the famous gorilla experiment) should diminish if the unexpected stimulus occurs at the beginning of a counting task, before endogenous attention has strongly biased precision weighting toward the task-relevant model. It also predicts that subthreshold stimuli can be brought into consciousness through precision-weighting modulation by attention, and that sustained covert attention should diminish conscious perception of stable stimuli (Troxler fading) because prediction errors are progressively suppressed. These predictions are specific to the precision-optimization mechanism and are not straightforwardly derived from generic theories of attention or consciousness.
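
The mechanics of precision weighting are simple enough to state in a few lines. In the hedged sketch below (a generic Gaussian toy, not Hohwy's own model), attention is modeled as a gain on the precision of a sensory channel; the same prediction error moves the percept substantially when its precision is boosted and barely at all when it is not, which is the mechanism behind the attentional predictions listed above:

```python
def perceptual_update(mu_prior, pi_prior, observation, pi_sensory):
    """One step of precision-weighted prediction error minimization."""
    error = observation - mu_prior                  # prediction error
    weight = pi_sensory / (pi_prior + pi_sensory)   # precision weighting
    return mu_prior + weight * error                # posterior (the 'percept')

# identical stimulus and prior; only the precision assigned by attention differs
attended = perceptual_update(0.0, 1.0, 2.0, pi_sensory=4.0)     # error dominates
unattended = perceptual_update(0.0, 1.0, 2.0, pi_sensory=0.25)  # error suppressed
print(f"attended percept:   {attended:.2f}")    # 1.60 -- pulled toward the data
print(f"unattended percept: {unattended:.2f}")  # 0.40 -- stays near the prior
```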

However, Seth and Hohwy (2021) caution that the appropriate standard for evaluating PP is not strict Popperian falsifiability but rather Lakatosian experimental fecundity --- whether the theory generates "a steady stream of testable predictions that collectively build explanatory insight." They note that PP does not directly address consciousness per se, and that this should be seen as a potential strength, since it allows the theory to generate empirically tractable mappings between mechanisms and phenomenological properties without requiring a monolithic identification of consciousness with a single process. Evidence that would challenge PP would include findings that conscious perception does not correlate with optimal precision-weighted prediction error suppression across the cortical hierarchy --- for example, if stimuli with low precision and poor accuracy reliably dominated conscious perception over stimuli with high precision and accuracy. More fundamentally, if the core computational architecture of hierarchical predictive coding were shown to be an inadequate description of cortical processing --- if, for instance, the brain were found not to operate through hierarchical generative models with recurrent prediction error signaling --- the theoretical foundation for PP's account of consciousness would be undermined. The theory also makes the distinctive prediction that attention and consciousness can dissociate in specific ways predicted by the accuracy-precision framework, and systematic failure of these dissociations to appear in the predicted pattern would constitute evidence against the theory.

PCT generates several distinctive falsifiable predictions that follow directly from its account of consciousness as reorganization-driven quality control. First, the theory predicts that directing conscious awareness toward a specific perceptual variable should produce measurable changes in the parameters of the control systems governing that variable, consistent with reorganization (Mansell 2024). This can be tested using computational PCT models fitted to behavioral data in tracking tasks: when a participant's focus of consciousness shifts from one controlled variable to another, the model parameters at the attended level should show random, trial-and-error changes characteristic of reorganization, while parameters at the unattended level should remain stable. If conscious attention produced no detectable reorganization effects on control parameters, or if reorganization occurred equally at attended and unattended levels, this would undermine a core prediction.
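
The reorganization prediction can be made concrete with a toy control loop. The sketch below (a hypothetical single-level model, not Mansell's fitted models) controls a perception against a disturbance while an E. coli-style reorganization process makes random changes to the loop gain, keeping a change while error falls and "tumbling" to a new random direction when it rises; the parameter drifts toward values that control well, which is the signature the theory predicts at the attended level:

```python
import numpy as np

rng = np.random.default_rng(1)

def rms_error(gain, disturbance=1.0, steps=400):
    """One tracking trial: output o opposes a disturbance so that the
    perception p = o + d stays near the reference r = 0."""
    o, errs = 0.0, []
    for _ in range(steps):
        p = o + disturbance     # perceived value of the controlled variable
        e = 0.0 - p             # reference minus perception
        o += 0.05 * gain * e    # output integrates the error
        errs.append(e)
    return float(np.sqrt(np.mean(np.square(errs))))

# E. coli reorganization: random steps in the gain, retained while error falls
gain = 0.1
step = rng.normal(0, 0.3)
err = rms_error(gain)
for _ in range(60):
    trial = rms_error(gain + step)
    if trial < err:                   # improvement: keep moving this way
        gain, err = gain + step, trial
    else:                             # worse: tumble to a new random direction
        step = rng.normal(0, 0.3)
print(f"reorganized gain = {gain:.2f}, RMS error = {err:.3f}")
```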

Second, the CoPoQ framework uniquely predicts that consciousness should disappear during Flow states -- periods of optimal control performance where error approaches zero (Young 2026). This is distinctive because most competing theories predict consciousness should persist or even intensify during peak cognitive engagement. Young proposes testing this through cross-disciplinary studies that introduce controlled disturbances into tasks typically performed under Flow conditions: if increased error or degraded quality reintroduces consciousness as predicted, this would provide converging support. If subjects in verified Flow states reported rich conscious awareness of task details despite zero perceptual control error, this would challenge the CoPoQ account. Third, Mansell (2024) predicts that novel information integration rate is a controlled variable within conscious individuals. This can be tested using the Test for the Controlled Variable (TCV): if the environment is disturbed in ways that shift the provision of novel information (for example, by presenting highly familiar versus highly complex environments), conscious individuals should act against this disturbance to keep the integration rate at their reference value. Predicted compensatory behaviors include attention shifts, exploration, mind wandering, and imagination in low-information environments, and simplification, avoidance, or repetitive behavior in excessively complex environments. Failure to observe such compensatory behavior would weaken the claim.
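
The TCV itself is also easy to sketch. In the toy below (hypothetical code, not from the source papers), a slowly drifting disturbance is applied to a candidate variable; if the agent controls that variable, its actions absorb the disturbance, which therefore leaves almost no trace in the variable, whereas an uncontrolled variable simply tracks the disturbance:

```python
import numpy as np

def tcv_correlation(controlled, steps=400, seed=2):
    """Correlation between an applied disturbance and a candidate variable.

    Near-zero correlation despite the disturbance is the TCV's signature
    of control; correlation near one indicates no control."""
    rng = np.random.default_rng(seed)
    d = np.cumsum(rng.normal(0, 0.1, steps))   # slowly drifting disturbance
    o, values = 0.0, []
    for t in range(steps):
        cv = o + d[t]                          # candidate controlled variable
        if controlled:
            o += 0.5 * (0.0 - cv)              # act to cancel the disturbance
        values.append(cv)
    return float(np.corrcoef(d, values)[0, 1])

print(f"controlled variable:   r = {tcv_correlation(True):.2f}")   # near 0
print(f"uncontrolled variable: r = {tcv_correlation(False):.2f}")  # ~ 1
```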

Fourth, the theory predicts that consciousness is specifically tied to learning and that unconscious processing should not support genuine novel skill acquisition. Young (2026) argues that conscious awareness is essential for complex, explicit learning, and proposes that a blindsight patient taught to juggle might unconsciously perceive the balls but would be unable to learn the qualitative consequences of their actions, leaving reorganization ineffective and the skill unlearned. If blindsight subjects demonstrated robust novel skill learning in their blind field comparable to sighted learning, this would challenge the theory's link between consciousness and reorganization-based learning. Finally, PCT's general behavioral predictions -- that organisms control perceptual input rather than producing specific outputs, and that behavior varies dynamically to counteract disturbances while keeping controlled variables stable -- are themselves testable through the TCV methodology and distinguishable from stimulus-response or reinforcement-learning accounts. Systematic failure of the TCV to identify controlled perceptual variables in purposive behavior, or evidence that organisms fundamentally control outputs rather than inputs, would undermine the entire PCT foundation on which the consciousness account rests. The model currently lacks detailed neural substrate mapping (Mansell 2024), and if the specific neural mechanisms implementing reorganization, intrinsic error, and hierarchical control could not be identified despite sustained investigation, this would weaken the theory's empirical credentials.

Illusionism's most distinctive prediction is negative: it predicts that neuroscience will never discover phenomenal properties as real features of brain states, because there are no such properties to discover. If the properties represented by introspection do not show up from other perspectives -- and Frankish asserts that as far as we can check through external inspection of brain states, they do not -- then illusionism predicts this pattern will continue. The theory predicts that every proposed neural correlate of consciousness will turn out to correlate with the functional and representational processes underlying the illusion rather than with any genuine phenomenal property. Specifically, illusionism predicts that proposed reductive explanations of phenomenal consciousness will, upon careful analysis, turn out to be explanations of quasi-phenomenal properties -- functional properties that create the disposition to judge that one has phenomenal experiences -- rather than explanations of genuinely qualitative feels. Frankish argues that in practice, most physicalist theories already covertly take this illusionist form, identifying phenomenal character with representational content or functional role in ways that explain the representation of phenomenality rather than phenomenality itself.

A second class of predictions concerns the relationship between introspective mechanisms and phenomenal judgments. Illusionism predicts that manipulating the introspective system -- for instance, through meditation, hypnotic suggestion, or targeted neural interventions -- should alter phenomenal judgments and the apparent character of experience, even when the underlying sensory processing remains constant. Frankish notes the possibility that the phenomenal illusion might be partially dispelled through indirect means such as meditation and hypnotic suggestion. If such practices reliably altered people's phenomenal judgments while leaving their sensory discriminative capacities intact, this would support the illusionist prediction that phenomenal character is a product of introspective representation rather than an intrinsic feature of sensory states. Conversely, if it could be demonstrated that phenomenal character persists in full even when all introspective mechanisms are verifiably disrupted, this would challenge illusionism's core thesis.

What would falsify illusionism? The theory would be undermined if it were shown that phenomenal properties play an explanatory role that cannot be captured by quasi-phenomenal properties -- for example, if we discovered that our beliefs about consciousness are caused by direct acquaintance with genuine phenomenal properties in a way that bypasses representational mechanisms entirely. Frankish addresses this possibility by arguing that acquaintance theory, which holds that we are directly and non-representationally aware of phenomenal properties, comes at a high cost: it makes phenomenal consciousness psychologically inert, since cognitive access to any property requires representation. Additionally, if the illusion problem turned out to be genuinely intractable -- if no account of introspective mechanisms could explain the vivid appearance of phenomenality -- this would weaken illusionism's main selling point, namely that it replaces an impossible explanatory task with a merely difficult one. However, it must be acknowledged that illusionism, as Frankish presents it, is more a broad theoretical orientation and research program than a precisely specified theory, which limits the sharpness of its falsifiable predictions and makes it somewhat resistant to decisive empirical refutation.

Orch OR is distinguished from most competing theories of consciousness by the specificity and physical concreteness of its predictions. The 2014 review paper catalogs twenty testable predictions organized into nine categories. Among the most distinctive: the theory predicts that neuronal microtubules are directly necessary for cognition and consciousness, not merely structural scaffolding. If it could be conclusively demonstrated that microtubules play no role in the cognitive aspects of neural function --- that consciousness persists fully even when microtubule dynamics are completely disrupted in relevant brain regions --- this would be deeply damaging to the theory. Relatedly, the theory predicts that anesthetics act specifically through microtubule quantum channels in tubulin's hydrophobic regions, dispersing the dipole couplings necessary for quantum coherence. Evidence that anesthetics operate exclusively through membrane receptor mechanisms with no action on microtubules would undermine a central claim. The 2014 paper notes that evidence from Emerson et al. showing anthracene anesthetic binding specifically in tadpole microtubules supports this prediction, but the question remains open.

The theory's most distinctive falsifiable prediction concerns the objective reduction threshold itself: τ = ħ/E_G, where E_G is the gravitational self-energy of the difference between the superposed mass distributions. If technological experiments demonstrate that isolated quantum superpositions do not self-collapse according to this criterion --- for example, if a sufficiently massive superposition, shielded from environmental decoherence, persists well beyond the predicted reduction time --- this would falsify the Diósi-Penrose mechanism on which the entire theory rests. Conversely, confirmation of OR at the predicted threshold would powerfully support Orch OR over competing theories that invoke no such quantum gravitational mechanism. Further distinguishing predictions include that quantum coherence in microtubules should be detectable at warm biological temperatures (partially supported by Bandyopadhyay's resonance findings); that gap junctions should support quantum tunneling and entanglement between neurons (untested); that microtubule quantum vibrations should correlate with EEG signatures, particularly gamma synchrony; and that the amount of neural tissue involved in a conscious event should be inversely related to the event time by the same relation τ = ħ/E_G. This last prediction is particularly noteworthy because it implies a precise quantitative relationship between the spatial extent of neural involvement and the temporal frequency of conscious moments --- a prediction that no purely classical neural theory makes.
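
To make the inverse scaling concrete, here is a back-of-envelope calculation of the τ = ħ/E_G relation (illustrative arithmetic only; computing E_G for realistic superposed tubulin conformations is far more involved):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def reduction_time(e_g):
    """Diosi-Penrose objective reduction time tau = hbar / E_G, in seconds."""
    return HBAR / e_g

def required_self_energy(tau):
    """Gravitational self-energy E_G implied by a conscious moment of duration tau."""
    return HBAR / tau

# A gamma-band conscious moment (~40 Hz, tau = 25 ms) implies E_G ~ 4.2e-33 J;
# a tenfold faster moment requires tenfold more gravitational self-energy,
# i.e. more neural mass in superposition -- the inverse relation noted above.
for tau in (0.025, 0.0025):
    print(f"tau = {tau * 1e3:5.1f} ms  ->  E_G = {required_self_energy(tau):.2e} J")
```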

The evidence that would most decisively falsify Orch OR would come from two directions. First, a demonstration that the Diósi-Penrose objective reduction mechanism does not occur as described --- that quantum superpositions of sufficient mass do not self-collapse at the predicted timescale --- would eliminate the physical foundation of the theory. Second, a demonstration that consciousness can be produced by a system entirely lacking microtubules or any analogous quantum-coherent substrate --- for instance, a purely classical digital computer exhibiting all the hallmarks of consciousness --- would falsify Orch OR's claim that consciousness requires quantum gravitational processes. The theory explicitly predicts, following Penrose's non-computability argument, that no algorithmic system can be genuinely conscious, making the existence of conscious classical AI a decisive counterexample. It should be noted, however, that many of the theory's predictions remain at the edge of experimental feasibility, and Hameroff and Penrose themselves acknowledge that several key predictions remain untested, meaning that the theory occupies a space where it is in principle falsifiable but in practice difficult to conclusively test with current technology.

DLCT's most distinctive falsifiable prediction is that observed neural activity in conscious brains cannot be fully explained by micro-level neural laws alone. The theory claims that if a physical system contains a self-referential feedback control mechanism where the whole influences its parts, "it is impossible to construct a bottom-up theory that explains the system's behavior based solely on micro-level observations." This is a strong methodological prediction: it asserts that reductionist neuroscience will systematically fail to account for certain aspects of neural dynamics in conscious organisms, specifically those aspects driven by macro-level psychological laws operating through downward causation. If a purely bottom-up neural model were shown to fully predict all neural activity in a conscious brain --- including the synaptic weight changes that DLCT attributes to macro-level feedback --- this would falsify the theory's core claim that dual-level dynamics are necessary.

The theory also predicts that systems implementing structural downward causation (where macro-level algebraic constraints drive micro-level changes through intrinsic feedback errors) will exhibit measurably different neural dynamics from systems operating under functional downward causation (where an external reference signal drives learning) or no feedback control at all. The proof-of-concept experiment demonstrates this: when feedback gain for commutativity constraints is set to 1, the commutativity error converges to near zero, while with gain set to 0, it does not. This is a testable architectural prediction distinguishing conscious-type processing from conventional machine learning. Additionally, the theory predicts that consciousness requires structural hierarchy (where the whole and parts share the same physical entities) rather than functional hierarchy (where higher and lower levels involve different physical entities), which would distinguish conscious systems from control systems like a computer directing a robot arm.
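
The gain-dependent convergence result is easy to reproduce in miniature. The sketch below (a schematic reconstruction, not the authors' code) treats commutativity of two weight matrices as the macro-level algebraic constraint and feeds the constraint error back into micro-level weight updates, scaled by a feedback gain; with gain 1 the commutativity error shrinks toward zero, with gain 0 the micro dynamics never see the constraint:

```python
import numpy as np

def run(gain, steps=3000, lr=0.01, seed=0):
    """Macro-level constraint (AB = BA) steering micro-level weights.

    The macro error C = AB - BA is fed back as a gradient signal on the
    micro-level matrices, scaled by `gain`.  Returns the final Frobenius
    norm of the commutativity error."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(4, 4))
    B = rng.normal(size=(4, 4))
    for _ in range(steps):
        C = A @ B - B @ A              # macro-level constraint error
        grad_A = C @ B.T - B.T @ C     # gradients of 0.5 * ||C||_F^2
        grad_B = A.T @ C - C @ A.T
        A -= lr * gain * grad_A        # 'downward causation': the macro
        B -= lr * gain * grad_B        # error drives micro weight changes
    return float(np.linalg.norm(A @ B - B @ A))

print(f"gain = 1: final commutativity error = {run(1.0):.4f}")  # shrinks toward 0
print(f"gain = 0: final commutativity error = {run(0.0):.4f}")  # unchanged, large
```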

However, DLCT's falsifiability is significantly limited by the current state of the theory's development. The authors acknowledge that "our formulation of structural downward causation in the brain lacks supporting evidence" and that "there is also a lack of evidence that would refute the formulation." The macro-level psychological laws remain unspecified for biological systems --- "their principles, dynamics, and how they autonomously evolve remain unspecified." The theory does not yet identify which specific algebraic constraints the brain enforces, making it difficult to derive precise empirical predictions about what neural patterns should look like during conscious versus unconscious processing. Evidence that would strongly challenge the theory includes demonstrating that intra-level causal closure holds universally in neural systems (that all synaptic changes can be fully explained by local neural-level interactions), or showing that systems with verified self-referential feedback control mechanisms lack any form of consciousness. The theory would also be undermined if the constructive methodology it proposes --- implementing psychological laws in simulations and testing whether they produce consciousness-like behavior --- consistently fails to generate predictions that match observed data from neuroscience or psychology.

Irruption theory makes several distinctive predictions that follow from its core framework. First and most centrally, the theory predicts that motivated activity -- the exertion of agency -- is necessarily associated with a measurable increase in entropy or unintelligible noise in the organism's physical processes. This is not merely the claim that neural entropy correlates with cognitive effort (which is empirically supported but theoretically ambiguous); the theory specifically predicts that the source of this entropy increase is not explicable in terms of the system's physical dynamics alone. Because motivated activity is held to be correlated with irruptions, the theory predicts a physical trace that would otherwise not be present, measurable as an increase in entropy production. Froese proposes that the brain's unexpectedly high resting energy consumption, the metabolic burden of maintaining synaptic vesicle pools even in electrically quiet neurons, and the robust hemodynamic responses to stimuli that evoke little neural spiking may all be manifestations of irruption rather than mere inefficiency. A targeted prediction is that neurons' sensitivity to tiny noise fluctuations could be a feature of irruption rather than a bug of biological imprecision.

Second, the theory predicts that action and perception have contrary material effects on the body and therefore should not directly overlap in space or time. Irruption (associated with action) should coincide with decreases in shared variance among physiological processes, while absorption (associated with perception) should coincide with increases. This yields the testable prediction that the material processes involved in action and perception should, optimally, not overlap spatially (a claim that aligns with, but is not identical to, the known motor-sensory separation in the brain) and should not overlap temporally, especially where they are co-located in space. The theory further predicts that conscious experience occurs in aperiodic cycles coinciding with moments of large-scale neural integration (absorption), and that these alternate with moments of segregation (irruption). This temporal alternation prediction could distinguish irruption theory from theories that expect consciousness to track sustained integration or global broadcasting.

Third, and most distinctively, irruption theory predicts that the mind-matter relation is characterized by intrinsic, irreducible unpredictability. If a complete physical description of the organism were available, motivated behavior would still exhibit residual unpredictability attributable to irruption. The theory would be falsified if it could be demonstrated that all entropy and variability in neural systems during motivated activity can be fully accounted for by physical and stochastic processes without residual unexplained variance -- that is, if a complete thermodynamic accounting of the living brain leaves no room for an extra source of disorder beyond what physics predicts. The theory would also be challenged if consciousness were found in a system entirely lacking the thermodynamic signatures of life, such as a purely digital computer, since the theory specifically links irruption and absorption to the metabolic organization of living systems. Additionally, if absorption signatures (compression, dimensionality reduction) were found to be entirely dissociable from conscious experience -- for example, if unconscious processing showed equivalent compression -- the theory's proposed mechanism for how matter makes a difference to mind would be undermined.

The FEP-based model of consciousness generates several predictions, though their falsifiability varies considerably. The model predicts that consciousness requires a nested hierarchy of Markov blankets with an irreducible innermost blanket possessing active states that causally intervene on external neuronal dynamics via neuromodulation. This predicts that the neural substrate of consciousness includes not only the cortex but critically depends on subcortical brainstem structures --- the cells of origin of ascending modulatory neurotransmitter systems --- as the active states of the innermost screen. This contrasts with views that locate consciousness exclusively in cortical activity. The theory also predicts that purely feedforward systems, and systems whose internal states never directly interact causally with their blanket states (such as standard von Neumann computers), cannot be conscious regardless of their functional sophistication. A demonstration that a system with purely feedforward or CPU-mediated architecture is genuinely conscious would challenge this prediction. The model further predicts that psychedelics, which act on precisely the neuromodulatory systems identified as active states of the innermost blanket, should alter consciousness by changing the sparse coupling among neuronal populations, and the theory's proponents note this is indeed what is observed empirically.
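
The Markov blanket structure at the heart of the model can be illustrated with a static Gaussian toy (a deliberate simplification: the FEP literature works with dynamical blankets, but the conditional-independence pattern is the same). Internal and external states have no direct coupling in the precision matrix, only routes through the blanket (sensory and active) states, so they covary marginally yet become independent once the blanket is conditioned on:

```python
import numpy as np

# Order: external, sensory, active, internal.  Zero entries between the
# external and internal states encode the blanket: their only interaction
# is routed through sensory and active states.
precision = np.array([[ 2.0, -0.9, -0.9,  0.0],
                      [-0.9,  2.0,  0.0, -0.9],
                      [-0.9,  0.0,  2.0, -0.9],
                      [ 0.0, -0.9, -0.9,  2.0]])

rng = np.random.default_rng(3)
x = rng.multivariate_normal(np.zeros(4), np.linalg.inv(precision), size=200_000)
external, internal, blanket = x[:, 0], x[:, 3], x[:, 1:3]

# Marginally, internal and external states covary...
print(f"marginal r = {np.corrcoef(external, internal)[0, 1]:.3f}")

# ...but regressing out the blanket removes the dependence entirely.
res_e = external - blanket @ np.linalg.lstsq(blanket, external, rcond=None)[0]
res_i = internal - blanket @ np.linalg.lstsq(blanket, internal, rcond=None)[0]
print(f"partial r  = {np.corrcoef(res_e, res_i)[0, 1]:.3f}")  # ~ 0
```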

However, the FEP faces well-documented challenges regarding falsifiability, which the source papers themselves acknowledge to varying degrees. The FEP is described as a mathematical principle akin to the principle of least action --- it does not posit new entities or processes but provides a re-description of existing dynamics. As Wiese notes, "the FEP does not posit new entities or processes, but only provides a different view on processes that are already assumed to unfold," and should be regarded as "a metaphysically neutral re-description, not as a substantial hypothesis about a system's internal states." This raises the concern that the FEP may be unfalsifiable as a principle, since it applies to all self-organizing systems by definition. The consciousness-specific predictions layered on top of the FEP --- such as the requirement for irreducible Markov blankets, temporal depth, and causal intervention --- are more testable, but the theory's proponents acknowledge that the model is presented as a "minimal unifying model" (MUM) that specifies only necessary properties of conscious experience, not sufficient conditions. This means the theory can explain why systems that lack these features are not conscious, but it cannot definitively predict which systems that possess them are conscious.

Evidence that would seriously challenge the FEP-based model would include demonstrations that consciousness can persist when ascending neuromodulatory systems are completely abolished (not merely disrupted); that systems with no internal Markov blanket structure exhibit consciousness; or that purely feedforward architectures or standard digital computers can support genuine conscious experience. Additionally, if the causal-flow condition --- the requirement for direct causal interaction between internal and blanket states --- were shown to be irrelevant to consciousness (for instance, if a brain-in-a-vat simulation on a von Neumann computer were shown to be conscious by some independent criterion), this would undermine a key distinguishing prediction. More fundamentally, the theory would be challenged if the hierarchical predictive coding framework it relies upon were shown to be an inadequate description of neural processing, since the identification of consciousness with precision-weighting and covert action at deep levels of such hierarchies depends on this computational architecture being approximately correct.

Recurrent Processing Theory generates several distinctive and empirically testable predictions. The most fundamental is the feedforward-recurrent dissociation: any stimulus that is processed only via the feedforward sweep should be unconscious regardless of how extensively it activates the brain, while any stimulus that evokes recurrent processing of sufficient strength should be conscious regardless of whether the subject can report it. This prediction is tested through backward masking paradigms, where the temporal dynamics of masking allow researchers to selectively interrupt recurrent processing while leaving feedforward activation intact. Lamme's prediction is that interrupting feedback signals to V1 (for example with TMS at approximately 100 ms post-stimulus) should abolish visual awareness while preserving unconscious priming and other feedforward-driven effects. This prediction has received substantial empirical support, but it also generates a falsifiable converse: if it could be demonstrated that purely feedforward processing in a system with no recurrent connections is sufficient for conscious experience, or that full recurrent processing consistently fails to produce consciousness in an otherwise normal cortex, the theory would be undermined.
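
A two-area rate model makes the feedforward/recurrent contrast concrete. In the hedged sketch below (illustrative parameters, not Lamme's model), a brief pulse drives a "V1" unit coupled to a higher area; without feedback the activity is a transient sweep, with feedback it reverberates long after the pulse, and cutting the feedback partway through (loosely analogous to TMS at ~100 ms) truncates the sustained phase:

```python
import numpy as np

def v1_trace(feedback, cut_at=None, dt=0.1, steps=300):
    """Rate dynamics of V1 and a higher area, with optional feedback."""
    v1, hi, trace = 0.0, 0.0, []
    for t in range(steps):
        pulse = 1.0 if t < 20 else 0.0                 # brief stimulus
        fb = 0.0 if (cut_at is not None and t >= cut_at) else feedback
        v1, hi = (v1 + dt * (-v1 + pulse + fb * hi),   # feedforward drive + feedback
                  hi + dt * (-hi + 1.2 * v1))
        trace.append(v1)
    return np.array(trace)

late = slice(150, 300)  # long after the feedforward sweep has passed
print(f"feedforward only:    late V1 activity = {v1_trace(0.0)[late].mean():.3f}")
print(f"with recurrence:     late V1 activity = {v1_trace(0.8)[late].mean():.3f}")
print(f"feedback cut at 100: late V1 activity = {v1_trace(0.8, cut_at=100)[late].mean():.3f}")
```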

A second distinctive prediction concerns the dissociation between consciousness and cognitive access. RPT predicts that there should be cases of genuine phenomenal consciousness without reportability -- what Lamme calls Stage 3 processing without Stage 4. Specifically, the theory predicts that during inattentional blindness, change blindness, and attentional blink, subjects are phenomenally conscious of unreported stimuli at the moment of their presentation, even though they cannot later report them. The neural signature of this should be recurrent processing in visual areas in the absence of frontoparietal activation. Lamme and colleagues have demonstrated exactly this pattern: EEG, MEG, and fMRI signals consistent with recurrent processing between visual areas were present for texture-defined figures during inattentional blindness, even though subjects reported not seeing them. The theory further predicts that learning should follow phenomenal properties of stimuli (such as perceived color) rather than physical properties (such as wavelength), even for unreportable stimuli, because recurrent processing induces synaptic plasticity. This is a prediction that could distinguish RPT from global workspace theories, which would predict no consciousness and hence no phenomenally-driven learning for unreported stimuli.

Third, the theory predicts that consciousness should be dissociable from attention and from cognitive control, with these functions being orthogonal to phenomenal experience. This generates the specific prediction that manipulations of attention should not eliminate the neural signatures of recurrent processing, even when they eliminate reportability. Conversely, masked stimuli should be able to activate prefrontal networks and drive cognitive control processes without producing consciousness, because feedforward activation of these areas is insufficient. Both predictions have received empirical support, with evidence that the visual awareness negativity (VAN) -- an ERP correlate of recurrent processing -- is largely independent of selective attention, and that masked stimuli can trigger inhibitory control via feedforward activation of prefrontal networks without conscious awareness. However, an important challenge comes from commentators who note that recurrent processing is prevalent in states generally considered unconscious, including general anesthesia, dreamless sleep, and epileptic absence. If recurrent processing were conclusively demonstrated to persist robustly during verified unconscious states without any accompanying phenomenal experience, this would pose a serious problem for the sufficiency claim of the theory. Lamme partially addresses this by suggesting that anesthesia specifically disrupts recurrent processing through NMDA blockade, but the empirical picture remains contested, and the theory lacks a precise quantitative threshold for how much recurrent processing is required, which limits the sharpness of its falsification conditions.

AST generates several specific, empirically testable predictions. The most distinctive is the control-theory prediction: consciousness of a stimulus should be necessary for good top-down control of attention directed at that stimulus, but not necessary for attention itself. Webb and Graziano (2015) present this as the core empirical signature of the theory. If awareness is an internal model used to regulate attention, then without awareness, attention should still occur but should suffer measurable control deficits. Evidence from spatial attention paradigms supports this: subliminal stimuli can capture attention (showing attention without awareness) but the resulting attentional allocation cannot be strategically redirected (showing impaired control without awareness). A second prediction, tested in Graziano's lab, is that artificial neural networks trained on attentional tasks should require an attention schema component to achieve effective endogenous attentional control. This was confirmed, providing a direct computational test. A third prediction concerns neural substrates: AST predicts that the brain regions most associated with constructing the attention schema --- particularly the TPJ and surrounding cortical networks --- should be jointly implicated in modeling one's own attention, modeling others' attention (theory of mind), and reported conscious experience. Graziano (2022) cites converging neuroimaging evidence that the TPJ is activated during all three of these functions, and that damage to the right TPJ produces the most severe disruptions of conscious awareness in neglect patients.
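
The control-theory prediction can be given a minimal computational form. The sketch below (a hypothetical toy inspired by, but much simpler than, the network experiments reported from Graziano's lab) steers an "attention" state toward a target when the only available reading of that state is delayed and noisy; a controller equipped with a schema, here a forward model that advances the stale reading using its own recent commands, controls markedly better than one without, mirroring the claim that awareness-as-model is needed for good endogenous control:

```python
import numpy as np

def tracking_error(use_schema, steps=300, delay=8, gain=0.15, seed=4):
    """RMS error steering an attention state toward a target of 1.0.

    The controller sees only a delayed, noisy reading of the state.  With a
    schema it predicts the current state from its recent commands; without
    one it acts directly on the stale reading and control degrades."""
    rng = np.random.default_rng(seed)
    x, xs, us, errs = 0.0, [0.0] * steps, [0.0] * steps, []
    for t in range(steps):
        u = 0.0
        if t >= delay:
            reading = xs[t - delay] + rng.normal(0, 0.02)  # stale measurement
            estimate = (reading + sum(us[t - delay:t])) if use_schema else reading
            u = gain * (1.0 - estimate)                    # steer toward target
        x += u + rng.normal(0, 0.01)                       # attention dynamics
        xs[t], us[t] = x, u
        errs.append(1.0 - x)
    return float(np.sqrt(np.mean(np.square(errs[delay:]))))

print(f"with attention schema:    RMS error = {tracking_error(True):.3f}")
print(f"without attention schema: RMS error = {tracking_error(False):.3f}")
```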

The theory would be challenged or falsified by several possible findings. If consciousness could be shown to have no functional relationship to the control of attention --- that is, if removing awareness of a stimulus had no effect whatsoever on the ability to endogenously control attention toward that stimulus --- this would undermine AST's central mechanism. If the brain regions implicated in attention schema construction (TPJ and associated networks) could be damaged or inactivated without any disruption of conscious experience, this would also pose a serious problem. Conversely, if consciousness were found to persist fully intact in cases where no internal model of attention could plausibly be computed, this would challenge the theory's core claim. Graziano (2022) also explicitly identifies what AST cannot explain: it cannot account for decision-making, emotions, memory, or creativity as such, nor can it explain why a conscious experience sometimes accompanies these processes. If consciousness turns out to be fundamentally tied to one of these other cognitive functions rather than to attention and its modeling, AST would be falsified. However, AST's relatively modest ontological commitments --- it does not posit new physics, exotic substrates, or unmeasurable quantities --- mean that its predictions are, in principle, readily testable with existing neuroscience methods, though the theory's reliance on the concept of an "internal model" makes it somewhat difficult to pin down exactly what neural architecture would or would not count as implementing an attention schema.

NPS generates several distinctive falsifiable predictions, primarily through its structural similarity constraint (SSC). The most concrete prediction is that the neural correlate of any specific phenomenal domain must preserve the structure of the corresponding phenomenal space. Fink, Kob, and Lyre demonstrate this with color experience: the SSC predicts that among candidate visual areas, only those whose neural activation patterns mirror the circular, asymmetric topology of the phenomenal color space qualify as NCCs proper. This led to the specific prediction, confirmed by reanalysis of Brouwer and Heeger's (2009) data, that V4 rather than V1 is the NCC proper for color experience, because V4's activation patterns form a closed loop matching the perceptual hue circle while V1's do not. If it were shown that no neural area preserves the structure of a well-established phenomenal space --- that the structural similarity between phenomenal and neural domains is consistently absent across modalities --- this would falsify a core assumption of NPS.
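
The structural similarity constraint lends itself to a simple computational test. The sketch below (hypothetical codes, not Brouwer and Heeger's data) compares two candidate neural codes for hue against the phenomenal hue circle: a V4-like code whose two channels trace a closed loop preserves the circular distance structure almost perfectly, while a V1-like code that is monotonic in hue angle does not:

```python
import numpy as np

hues = np.linspace(0, 2 * np.pi, 24, endpoint=False)  # the phenomenal hue circle

# Phenomenal distances: chord lengths between hues on the circle.
points = np.exp(1j * hues)
phenomenal = np.abs(points[:, None] - points[None, :])

def structure_match(patterns):
    """Correlation between neural and phenomenal pairwise distances (SSC score)."""
    neural = np.linalg.norm(patterns[:, None, :] - patterns[None, :, :], axis=-1)
    iu = np.triu_indices(len(hues), k=1)
    return np.corrcoef(phenomenal[iu], neural[iu])[0, 1]

# V4-like code: cosine-tuned channels -> the population response is a closed loop.
v4_like = np.column_stack([np.cos(hues), np.sin(hues)])
# V1-like code: responses monotonic in hue angle -> an open line, no loop.
v1_like = np.column_stack([hues, 0.5 * hues])

print(f"V4-like code preserves hue structure: r = {structure_match(v4_like):.2f}")  # ~1.0
print(f"V1-like code does not:                r = {structure_match(v1_like):.2f}")  # ~0.6
```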

A second set of predictions flows from neurophenomenal holism. NPS predicts that changes in one part of a quality space should affect the phenomenal character of experiences throughout that space. Lyre illustrates this with color blindness: dichromats should not merely fail to distinguish red and green but should experience all colors differently, since their entire Q-structure differs from that of trichromats. This is in principle testable through detailed psychophysical mapping of color spaces in dichromats, tetrachromats, and individuals with varying cone distributions. Furthermore, the theory predicts that inter-individual differences in neural structure (such as the size of V1 or V4) should predict inter-individual differences in the structure of phenomenal spaces, not merely in threshold sensitivity. If phenomenal character turned out to be atomistic rather than holistic --- if removing or altering one region of a quality space left the character of distant experiences completely unchanged --- this would undermine the holism that is central to NPS. Finally, the theory's commitment to structural rather than intrinsic individuation of qualia predicts that systematically inverted qualia scenarios (the full spectrum inversion) are empirically impossible, because the asymmetric structure of actual Q-spaces (for instance, the highly asymmetric color space) rules out structure-preserving inversions. Evidence that a genuine, global spectrum inversion is empirically realized would count against NPS.

The sensorimotor contingency theory generates several distinctive predictions that follow from its core claim that experience is constituted by the exercise of sensorimotor mastery rather than by internal neural representations. First, the theory predicts that purely passive exposure to sensory stimulation, without the possibility of active exploration, should be insufficient for full visual consciousness. Experiments with stabilized retinal images, in which the image is artificially fixed on the retina and does not change with eye movements, demonstrate that visual experience degrades --- contrast is lost, fragmentation occurs, and the image fades --- consistent with the prediction that active sensorimotor engagement is required for seeing. Second, the theory predicts that sensory substitution should be capable of producing genuinely visual-like experience provided the substitution preserves the relevant sensorimotor contingencies. The tactile visual substitution system (TVSS), in which a camera translates visual information into patterns of vibration on the skin, provides partial support: users who actively manipulate the camera come to "see" objects as externally located in space, while passive users fail to identify objects at all. The theory predicts that as the resolution and contingency structure of such devices improve, the resulting experience should become increasingly visual in character.

Third, the theory makes the striking prediction that the perceived color of a surface is constituted not by the spectral properties of reflected light but by the structure of the changes in sensory input that occur as the observer explores the surface. O'Regan and Noë propose a specific armchair experiment: using a device that tracks eye movements, one could arrange for a color patch to appear red when fixated directly but to change to green when the eyes look away. The theory predicts that after adaptation, green patches in peripheral vision and red patches in central vision should come to be perceived as the same color --- a highly counterintuitive result that would follow naturally from the sensorimotor account but not from standard theories of color processing. Fourth, the theory predicts that adaptation to inverting lenses should be piecemeal and task-specific (which it is, according to the classic Stratton and Kohler experiments), because sensorimotor contingencies are exercised in the context of specific behavioral sub-systems rather than a global internal representation.

As for falsification, O'Regan and Noë candidly state that the theory is a general framework rather than a set of point predictions subject to direct verification. Nevertheless, several findings would seriously undermine the framework. If it were demonstrated that consciousness can be fully present in the complete absence of any capacity for sensorimotor interaction --- for instance, if a completely deafferented, paralyzed patient with no efferent motor capacity and no possibility of attentional shifts could be shown to have rich visual experience purely through passive stimulation of cortical areas --- this would challenge the theory's central claim. Similarly, if a purely feedforward computational system with no sensorimotor coupling to an environment were shown to be conscious by some independently validated criterion, the theory would face a fundamental problem. The discovery that sensory substitution devices preserving the full structure of visual sensorimotor contingencies consistently fail to produce any visual-like experience, even after extensive training and active manipulation, would also pose a serious challenge. Ned Block's peer commentary raises the additional worry that the theory faces a form of the behaviorism objection: if two systems share exactly the same sensorimotor contingencies but only one has experience, the theory cannot account for the difference. Evidence of such systematic dissociation between sensorimotor contingency structure and consciousness would constitute substantial disconfirmation.

DIT advances one headline falsifiable prediction with notable clarity: cortical processing that does not involve layer 5 pyramidal (L5p) neurons will be unconscious. This is stated explicitly in both source papers. If it could be demonstrated that conscious perception occurs in the absence of L5p neuron activation -- for instance, through cortical processing confined entirely to superficial layers and not engaging L5p cells or the non-specific (NSP) thalamus -- this would constitute a direct falsification of the theory. A related testable prediction is that feedforward cortical processing flowing through superficial layers and bypassing L5p neurons and their thalamocortical connections should always remain non-conscious. The theory further predicts that the breakdown of information integration in unconscious states (such as during NREM sleep or anesthesia, as measured by the perturbational complexity index) should specifically be traceable to the decoupling of apical and somatic compartments in L5p neurons and the disruption of their interactions with non-specific thalamic nuclei.

Additionally, DIT generates predictions about backward masking phenomena: the temporal resolution of conscious experience should correspond to the propagation time within the thalamo-cortical loop from L5p neurons through the NSP thalamus and back to the apical dendrites. If backward masking effects were found to operate on timescales inconsistent with this loop propagation time (estimated from the known 40-90 ms delay of NSP relative to specific (SP) thalamocortical pathways), this would challenge the theory. The theory also predicts that directly manipulating apical dendritic activity in L5p cells should alter conscious perception -- a prediction already partially confirmed by Takahashi et al. (2016), who showed that pharmacological and optogenetic modulation of apical dendrites shifted psychometric detection curves and could even produce illusory conscious percepts. What would most clearly distinguish DIT from purely cortico-cortical theories (such as recurrent processing theory or global workspace theory) is the prediction that consciousness requires thalamic involvement, not merely as a passive relay or "enabling condition," but as an active gating mechanism controlling dendritic integration. If consciousness could be shown to persist with a fully intact cortex but completely inactivated non-specific thalamus, DIT would be falsified. Conversely, if purely cortical recurrent processing were shown sufficient for consciousness without any thalamic contribution, this would also undermine the theory's core claim.