What specific mechanism distinguishes conscious processing from unconscious processing of equal complexity?
Fifteen theories have answered this question.
IIT identifies integrated information as the critical quantity that distinguishes conscious from unconscious processing. Specifically, a system is conscious if and only if it constitutes a "complex" --- a set of units that generates a local maximum of integrated conceptual information (big Phi, or Phi-Max). Integrated information measures the irreducibility of the system's cause-effect structure: it quantifies the extent to which the conceptual structure specified by the whole set of mechanisms goes above and beyond what could be specified by its parts when the system is partitioned at its minimum information partition (MIP). For a system to have high integrated information, its units must interact in a way that is simultaneously highly differentiated (each mechanism specifying selective causes and effects) and highly integrated (the information being irreducible to independent subsets). This requires a dense lattice of specialized yet overlapping connections among units capable of effective causal interactions, rather than a modular or purely feedforward architecture.
The theory makes the decisive mechanistic prediction that purely feedforward systems --- no matter how complex or functionally sophisticated --- cannot be conscious. In a feedforward architecture, the input layer has no causes within the system and the output layer has no effects within the system, so the elements do not form a complex and generate no integrated conceptual structure (Phi-Max equals zero). This means that a feedforward network, regardless of how many elements it contains or how complicated its connectivity, cannot constitute a complex and therefore generates no quale. By contrast, a recurrent system with the same input-output behavior --- a "functionally equivalent" system --- can form a complex with positive Phi-Max, and would therefore be conscious according to IIT. This is a sharp architectural criterion: consciousness requires that the system's elements have both selective causes and selective effects within the system, forming irreducible causal loops rather than one-directional information flow.
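The architectural point can be made concrete without IIT's full formalism (cause-effect repertoires, distances between conceptual structures). The Python sketch below uses a deliberately crude proxy, the number of state transitions altered by the least destructive directed cut, in place of Phi; the networks, threshold rule, and loss count are all invented for illustration. A feedforward chain has a cut that costs nothing (severing the output unit's nonexistent within-system effects), whereas closing the loop makes every cut destructive.

```python
import itertools

def step(state, w):
    """One synchronous update of a binary threshold network."""
    n = len(state)
    return tuple(int(sum(w[j][i] * state[j] for j in range(n)) >= 1)
                 for i in range(n))

def phi_proxy(w):
    """Crude stand-in for Phi: the minimum, over directed cuts, of how many
    whole-system transitions change when connections from one part to the
    rest are severed. Zero means some cut is causally inert (the system is
    reducible); positive means every cut destroys cause-effect structure."""
    n = len(w)
    states = list(itertools.product((0, 1), repeat=n))
    whole = [step(s, w) for s in states]
    best = None
    for r in range(1, n):
        for part in itertools.combinations(range(n), r):
            cut = [[0 if (j in part and i not in part) else w[j][i]
                    for i in range(n)] for j in range(n)]
            loss = sum(whole[k] != step(s, cut) for k, s in enumerate(states))
            best = loss if best is None else min(best, loss)
    return best

feedforward = [[0, 1, 0],   # A -> B
               [0, 0, 1],   # B -> C
               [0, 0, 0]]   # C: no effects within the system
recurrent   = [[0, 1, 0],   # A -> B
               [0, 0, 1],   # B -> C
               [1, 0, 0]]   # C -> A closes the causal loop

print(phi_proxy(feedforward))  # 0: cutting C's (absent) outputs loses nothing
print(phi_proxy(recurrent))    # positive: every directed cut disrupts the dynamics
```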
Furthermore, IIT's exclusion postulate specifies that among all overlapping candidate sets of elements, only the one whose mechanisms specify a maximally irreducible conceptual structure (MICS) actually constitutes a complex. This explains why, for example, the cerebellum --- despite having four times more neurons than the cerebral cortex --- does not appear to contribute to consciousness. Cerebellar micro-zones are largely independent of one another and organized in a feedforward manner, meaning they cannot constitute a large complex. By contrast, the posterior-central cerebral cortex, with its dense, divergent-convergent hierarchical lattice of connections among specialized units, is ideally suited to supporting high values of integrated information. The same mechanistic distinction explains why consciousness is lost during generalized seizures (when neural activity becomes stereotypically bistable, reducing differentiation and thus integrated information to near zero) even though the brain remains highly active.
The central mechanistic claim of Global Workspace Theory is that conscious processing is distinguished from unconscious processing not by the complexity of computation, but by the pattern of neural activation — specifically, whether information is mobilized into a brain-scale "global workspace" through top-down attentional amplification and long-distance reverberant connectivity. Dehaene and Naccache (2001) are explicit that "there is no systematic relation between the objective complexity of a computation and the possibility of its proceeding unconsciously." Face processing, word reading, and postural control are all computationally complex yet can proceed without attention on specialized neural subsystems. Conversely, computationally trivial operations such as solving 21 minus 8 require conscious effort. What matters is not computational complexity per se but whether the neural population representing the information is "mobilized by top-down attentional amplification into a brain-scale state of coherent activity that involves many neurons distributed throughout the brain."
The mechanism has two concrete requirements articulated in the Dehaene and Naccache framework. First, the information must be represented in an active state — encoded in the firing patterns of neuronal assemblies rather than latent in synaptic weights or anatomical connections. Second, bidirectional connections must exist between those assemblies and the set of "workspace neurons," which are particularly dense in prefrontal cortex (PFC) and anterior cingulate (AC), so that a self-sustained amplification loop can be established. When these two structural and dynamic criteria are met, the result is a sudden, coherent, "auto-catalytic" ignition: workspace neurons send top-down amplification signals that boost the active processor neurons, whose bottom-up signals in turn help maintain workspace activity. This reverberant loop must persist for a minimal duration to count as conscious. Without this dynamic mobilization — for instance, when a masked stimulus produces only a brief, transient burst of feed-forward activation that decays before the long-distance loop can stabilize — the same information propagates through multiple processing stages (perceptual, semantic, even motor) entirely unconsciously.
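The two requirements, active representation and a bidirectional amplification loop, can be caricatured with two coupled rate equations. The sketch below illustrates the bistability argument only; it is not a reproduction of Dehaene's simulations, and every gain and time constant is invented. A brief, mask-like input decays before the processor-workspace loop locks in, while a longer one ignites a self-sustained state.

```python
import math

def simulate(stim_duration_ms, dt=1.0, total_ms=500):
    """Two coupled rate units: p = sensory processor, g = workspace.
    Each excites the other through a steep sigmoid; the loop is the
    'auto-catalytic' ignition mechanism in miniature."""
    f = lambda x: 1.0 / (1.0 + math.exp(-10.0 * (x - 0.5)))
    p = g = 0.0
    for t in range(int(total_ms / dt)):
        stim = 0.8 if t * dt < stim_duration_ms else 0.0
        dp = (-p + f(stim + 0.9 * g)) / 20.0   # bottom-up input + top-down amplification
        dg = (-g + f(p)) / 50.0                # slower workspace integration
        p, g = p + dt * dp, g + dt * dg
    return g

print(f"30 ms (mask-like) stimulus: workspace activity = {simulate(30):.2f}")   # decays
print(f"150 ms stimulus:            workspace activity = {simulate(150):.2f}")  # ignites
```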
Baars (2005) describes the same distinction through the theater metaphor: unconscious processing corresponds to activity in specialized processors "sitting in the dark" audience of the theater, while conscious processing occurs when information enters the "bright spot" on stage and is broadcast globally. Empirically, this is reflected in the finding, replicated across many paradigms, that conscious stimuli evoke far more widely distributed cortical activation — particularly in frontoparietal regions — than identical unconscious stimuli, which tend to activate only local sensory projection areas. The theory thus identifies a specific neural signature: consciousness corresponds to the transient formation of a large-scale coherent network involving long-distance cortico-cortical and thalamocortical connections, rather than to any particular level of computational complexity within local circuits.
The core mechanism proposed by HOT theory is precise and elegant: a mental state is conscious if and only if it is accompanied by a suitable higher-order thought -- a thought that represents oneself as currently being in that very mental state. The higher-order thought must be roughly contemporaneous with the first-order state, must be about oneself (it must employ something like a first-person concept), and must not be derived through inference or sensory observation. When these conditions are met, the first-order state is conscious; when they are not met, the state exists but remains nonconscious. Importantly, the complexity of the first-order processing is entirely irrelevant. A highly complex cognitive state that happens to lack an accompanying higher-order thought will be nonconscious, while a relatively simple sensory state that is the target of an appropriate higher-order thought will be conscious. The distinguishing factor is not anything intrinsic to the first-order state's processing but rather the presence or absence of a distinct mental state directed at it.
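Because the biconditional is stated so crisply, it can be rendered as a predicate over toy data structures. Everything below (the classes, the 0.5-unit contemporaneity window) is a hypothetical formalization for illustration, not Rosenthal's own apparatus.

```python
from dataclasses import dataclass

@dataclass
class MentalState:
    name: str
    t: float                  # time of occurrence

@dataclass
class HigherOrderThought:
    target: str               # the state one represents oneself as being in
    t: float
    first_personal: bool      # employs something like a first-person concept
    inferential: bool         # derived via inference or observation (disqualifying)

def is_conscious(state, hots, window=0.5):
    """Rosenthal's biconditional as a predicate: conscious iff some suitable,
    roughly contemporaneous, non-inferential, first-personal HOT targets the
    state. The HOT itself is usually nonconscious: it would need a further
    HOT targeting it, but no regress is required."""
    return any(h.target == state.name
               and abs(h.t - state.t) <= window
               and h.first_personal
               and not h.inferential
               for h in hots)

pain = MentalState("pain", t=1.0)
hot = HigherOrderThought(target="pain", t=1.1, first_personal=True, inferential=False)
print(is_conscious(pain, [hot]))   # True
print(is_conscious(pain, []))      # False: same first-order state, now nonconscious
```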
Rosenthal is careful to distinguish this mechanism from several related phenomena that might be confused with it. First, higher-order thoughts are not themselves typically conscious. The theory does not require an infinite regress of ever-higher thoughts; instead, the higher-order thought that makes a first-order state conscious is itself usually nonconscious. It becomes conscious only if there is a yet further higher-order thought directed at it, which Rosenthal argues would be relatively rare. Second, the mechanism differs from introspection, which is a more demanding process involving deliberate, attentive examination of one's mental states. Ordinary consciousness requires only the nonconscious higher-order thought; introspection requires additionally that one consciously attend to the higher-order thought itself, adding a third level to the hierarchy. Third, the higher-order thought must be an actual occurrent thought, not merely a disposition to have one. Rosenthal argues that a dispositional account fails because consciousness is phenomenologically something occurrent -- it comes and goes in ways that actual higher-order thoughts can explain but dispositions cannot.
This mechanism makes specific claims about what should be observable in the brain. There should be a causal mechanism linking first-order mental states to the generation of higher-order thoughts about them. Disrupting this mechanism should eliminate consciousness of the first-order state while leaving the state itself intact and causally efficacious. The mechanism also predicts the possibility of "empty" higher-order thoughts -- cases where a higher-order thought occurs without a corresponding first-order state, producing a conscious experience of being in a state one is not actually in. Rosenthal argues these would be rare and pathological, but they are a direct and distinctive prediction of the theory. However, the account remains primarily functional rather than neuroanatomical: Rosenthal does not specify which neural circuits implement the higher-order thought mechanism, leaving a significant gap between the conceptual architecture and empirical neuroscience.
Hohwy (2012) proposes that what distinguishes conscious from unconscious processing within the predictive coding framework is the joint optimization of two statistical dimensions: accuracy and precision. Accuracy refers to the inverse amplitude of prediction errors per se --- how well the internal generative model predicts sensory input. Precision refers to the inverse amplitude of random fluctuations around predictions --- the confidence the system places in its prediction errors. Conscious perception, on this account, is determined by whichever prediction or hypothesis achieves the highest overall posterior probability, which depends on both dimensions. The hypothesis that best suppresses precise prediction error across multiple levels of the cortical hierarchy is the one selected for conscious experience. Processing that is equally complex but fails to achieve sufficient precision weighting, or whose prediction errors are fully explained away, remains unconscious.
The critical mechanism that mediates this selection is precision expectation, which Hohwy maps onto attention. Attention is conceived as the optimization of precision expectations --- the process by which the brain assigns reliability weights to its prediction errors in a context-dependent fashion. Precision weighting occurs in synaptic error processing, where units that expect high precision (reliable signals) are given greater synaptic gain than units expecting imprecision. When prediction errors are precise and cannot be explained away by the current best model, they propagate up the hierarchy and drive model revision, giving rise to conscious perception. When prediction errors are imprecise or are already well-predicted, they are attenuated and do not reach consciousness. Crucially, Hohwy notes that this means a case of attention without consciousness can occur when precision expectations are high but prediction error is well minimized (top-down driven inference), and a case of consciousness without attention can occur when precision is relatively low but the model still achieves the highest posterior probability. This two-dimensional framework --- accuracy and precision as separable contributors to conscious content --- provides a more nuanced mechanism than simple computational complexity for explaining why some processing enters awareness and other processing does not.
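Under a Gaussian reading of the framework, the two dimensions separate cleanly in the log-posterior: accuracy enters through the squared prediction error, precision through the weight (and normalization term) attached to it. The numbers below are invented, and treating the argmax hypothesis as the conscious content is this sketch's gloss on Hohwy's selection claim; note how a less accurate but more precisely weighted hypothesis can win.

```python
import math

def log_posterior(prior, prediction, observation, precision):
    """Gaussian log-posterior up to a constant: accuracy enters through the
    squared prediction error, precision through the weight it receives."""
    error = observation - prediction
    return math.log(prior) + 0.5 * math.log(precision) - 0.5 * precision * error ** 2

observation = 1.0
hypotheses = {
    "H1: accurate but imprecise":      dict(prior=0.5, prediction=0.95, precision=1.0),
    "H2: less accurate, more precise": dict(prior=0.5, prediction=0.70, precision=20.0),
}
scores = {name: log_posterior(observation=observation, **h)
          for name, h in hypotheses.items()}
for name, s in scores.items():
    print(f"{name}: log posterior = {s:+.2f}")
print("selected for conscious perception:", max(scores, key=scores.get))  # H2 wins
```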
PCT proposes a clear mechanistic distinction between conscious and unconscious processing that does not depend on the complexity of the processing involved. According to Mansell (2024), the PCT hierarchy works automatically to control perceptual variables at each level without requiring consciousness. Complex behavior -- including high-level goal pursuit, routine maintenance of personal principles, and even activities performed during sleepwalking such as dancing or searching for objects -- can be managed entirely outside conscious awareness. What distinguishes conscious from unconscious processing is not computational complexity but the presence of significant unresolved error within the control hierarchy, particularly error that requires reorganization of the control system's own structure.
The specific mechanism is reorganization -- the trial-and-error modification of the input functions, output functions, and parameters of control units in the hierarchy. Consciousness is drawn to control units where error has built up because the existing control architecture is insufficient to counteract disturbances (Mansell 2024, drawing on Powers 1960b, 1973). This most commonly occurs during conflict, when two or more control systems specify opposing reference values for the same perceptual variable, creating persistent error that cannot be resolved by ordinary output adjustments. Mansell proposes that primary consciousness emerges specifically from the novel integration of lower-level input signals during control, particularly when multiple intrinsic (homeostatic) control systems are in error and require prioritization. Qualia emerge as information from diverse lower-level systems is integrated by an input function to specify a novel, often more abstract, controlled variable that resolves ongoing conflict and re-establishes control.
Young's CoPoQ framework refines this by defining consciousness as the control of perceived quality -- a construct that integrates qualia, valence, and task performance. Conscious awareness arises during the transient reorganization phase in which attention is directed to degraded quality. Small, transient control errors indicate competent performance and remain unconscious, while large, persistent errors signal poor control and drive internal reorganization, which is what constitutes conscious engagement. Once control is optimized and error is eliminated, consciousness of that process fades -- the system has learned an adaptive structure and enters what Young identifies as a post-conscious or Flow state. This explains the well-documented phenomenon of skills becoming automatic: the novice juggler is consciously aware of individual ball movements, but as lower-level control becomes optimized, attention transitions to higher-level perceptions, and eventually, when error-free mastery is achieved, consciousness of the activity ceases entirely.
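A minimal sketch of the reorganization loop follows, with an invented plant, gains, and error criterion (Powers's actual proposal is an "E. coli"-style random walk through parameter space, which the random perturbation below loosely imitates): persistent error keeps reorganization, and on this reading conscious engagement, running; once error falls below criterion, it stops.

```python
import random
random.seed(1)

def mean_sq_error(gain, steps=200, disturbance=2.0, reference=5.0):
    """Mean squared control error of a one-level negative-feedback loop
    acting on a toy plant (all constants invented)."""
    perception, total = 0.0, 0.0
    for _ in range(steps):
        error = reference - perception
        output = gain * error                          # ordinary output adjustment
        perception += 0.1 * (output + disturbance - perception)
        total += error * error
    return total / steps

gain = 0.05                                            # badly tuned: persistent error
while True:
    err = mean_sq_error(gain)
    if err < 1.0:                                      # control restored
        break
    print(f"gain={gain:.2f}  error={err:.2f}  -> reorganizing (conscious engagement)")
    gain = max(0.0, gain + random.uniform(-1.0, 2.0))  # trial-and-error parameter change
print(f"gain={gain:.2f}  error={err:.2f}  -> control optimized, awareness of the task fades")
```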
Illusionism locates the distinction between conscious and unconscious processing in the operation of introspective representational mechanisms rather than in any difference in phenomenal status. On this view, processing becomes conscious -- or more precisely, creates the appearance of being conscious -- when it becomes the target of introspective monitoring that generates quasi-phenomenal representations. Frankish allows for multiple possible accounts of how this introspective access works. Introspection may issue directly in dispositions to make phenomenal judgments about the character of particular experiences and about phenomenal consciousness in general. Alternatively, introspection may generate intermediate representations of sensory states, perhaps of a quasi-perceptual kind, that ground our phenomenal judgments. In either case, what distinguishes "conscious" processing is not that it acquires some additional phenomenal property but that it is represented by the introspective system in a way that creates the illusion of phenomenality.
The theory identifies several dimensions along which the introspective mechanism may vary and that determine which processing appears conscious. The sensory states that serve as the basis for the illusion are, on most illusionist accounts, representational states encoding features of stimuli such as position in an abstract quality space, egocentric location, and intensity. These are probably modality-specific analogue representations. What creates the illusion of phenomenality -- the quasi-phenomenal properties -- may depend on the content of these sensory states, or on properties of their neural vehicles, or on the reactions and associations they evoke, or on some combination of these factors. The key point is that whether processing appears conscious depends on whether introspective mechanisms engage with it and how they represent it, not on whether some special phenomenal property is instantiated.
This account draws support from phenomena such as change blindness, which Dennett and others cite as evidence that our sense of having a rich, detailed, and continuous visual experience is itself a kind of cognitive illusion reflecting expectations and assumptions about the information vision provides. Processing that is not introspectively represented does not seem conscious, not because it lacks a phenomenal glow that attended processing possesses, but because no quasi-phenomenal representation of it is generated. Frankish also notes that the phenomenal illusion may be hardwired and cognitively impenetrable -- much like a persistent perceptual illusion, knowing it is an illusion does not make it disappear. This suggests the introspective mechanisms generating the illusion are deeply entrenched in our cognitive architecture, possibly shaped by natural selection for their adaptive value in creating a sense of self and engagement with the environment, as Humphrey argues.
Orch OR draws a sharp mechanistic line between conscious and unconscious processing based on whether quantum coherence in microtubules reaches the threshold for objective reduction. The theory posits three distinct modes of microtubule information processing. First, "classical" microtubule automata activity --- in which tubulin conformational states interact through ordinary electrostatic dipole coupling without entering quantum superposition --- corresponds to non-conscious, autonomic processing. Second, quantum coherent superposition of tubulin states, evolving deterministically according to the Schrödinger equation (the U process), corresponds to pre-conscious or sub-conscious processing. Third, the moment of self-collapse --- when the gravitational self-energy of the superposed tubulin mass distributions reaches the Diósi-Penrose threshold and objective reduction occurs --- is identified with consciousness itself. Crucially, this means that mere computational complexity, however great, is insufficient for consciousness if it operates through classical mechanisms alone. A system of enormous classical complexity would remain unconscious, while even a relatively modest quantum system that achieves orchestrated OR would be conscious.
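The quantitative core of the threshold claim is a single relation, tau ≈ ħ/E_G: the greater the gravitational self-energy E_G of the difference between the superposed mass distributions, the sooner objective reduction occurs. A short sketch of the arithmetic (the 25 ms figure is simply the gamma-cycle timescale often cited in this literature, used here for illustration):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def collapse_time(E_G):
    """Diósi-Penrose objective-reduction time: tau ~ hbar / E_G, where E_G is
    the gravitational self-energy of the difference between the two
    superposed mass distributions."""
    return hbar / E_G

def required_self_energy(tau):
    """Inverted: the self-energy a superposition must reach for OR to occur
    within time tau."""
    return hbar / tau

# For a conscious moment on the order of a gamma cycle (~25 ms), the
# superposition must accumulate roughly:
print(f"E_G for tau = 25 ms: {required_self_energy(25e-3):.1e} J")  # ~4.2e-33 J
```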
The decisive mechanistic factor is therefore not complexity but the nature of the physical process. A quantum superposition that collapses due to environmental entanglement --- subjective reduction (SR), or standard decoherence --- yields a random outcome with no non-computable element, and the theory holds that this process is unsuitable for consciousness. Only when the quantum system is sufficiently isolated from its environment so that it self-collapses by reaching the gravitational threshold does the non-computable, proto-conscious character of OR emerge. This is why the theory places such emphasis on isolation mechanisms: ordered water layers surrounding microtubules, gel states of cytoplasm, Fröhlich coherence in tubulin assemblies, and the hollow core dynamics of microtubules are all proposed as means by which quantum coherence is shielded from premature environmental decoherence. Without such shielding, the collapse would be environmentally driven and random --- physically indistinguishable from ordinary decoherence --- and would lack the orchestrated, non-computable quality that Orch OR identifies with genuine consciousness.
The theory also proposes a specific biological architecture for the conscious/unconscious distinction. Hameroff's "conscious pilot" model suggests that consciousness moves through the brain as a mobile zone of dendritic gamma synchrony, regulated by gap junction openings and closings that are in turn controlled by microtubules. Within this synchronized zone, microtubules carry out quantum computations and Orch OR produces conscious experience. Outside this zone, microtubules may still process information, but classically and without the orchestrated quantum coherence needed for OR. This provides a neuroanatomical correlate: consciousness is associated with dendritic-somatic integration phases in which microtubule quantum states are entangled across neurons via gap junctions, while unconscious processing corresponds to regions where microtubules operate in classical mode or where quantum coherence collapses prematurely through environmental interaction.
DLCT distinguishes conscious from unconscious processing not by complexity but by whether the processing engages the self-referential feedback control mechanism that implements dual-level causal dynamics. The theory proposes that the brain contains at least two types of neural circuits: micro-level neural circuits (circuit A) that operate under micro-level neural laws alone, handling sensory, motor, and life-support functions, and macro-level neural circuits (circuit B) that compute and transmit algebraic feedback errors at the psychological level to the micro level. Conscious processing occurs specifically within and through the self-referential feedback control mechanism, where macro-level psychological laws --- expressed as algebraic structural constraints among neural network modules --- exert downward causation on micro-level neural states. Processing that occurs entirely within circuit A, governed solely by micro-level neural laws without engagement of macro-level feedback, remains unconscious regardless of how computationally complex it may be.
The critical architectural feature is that conscious processing requires a whole-parts relationship in which the "whole" (macro-level cell groups treated as mathematical functions) and the "parts" (constituent neurons and synapses) share the same physical entities, and the whole exerts causal influence on the parts through intrinsic feedback errors defined by the algebraic relationships among macro-level functions. The theory explicitly states that "the activity of the micro-level neural circuit A that is not influenced by macro-level intrinsic causes is unlikely to be involved in the generation of conscious experience." This means that two processing streams of equal complexity could differ in conscious status based entirely on whether they participate in the self-referential feedback loop. A highly complex feedforward process that never generates or responds to macro-level algebraic structural errors would remain unconscious, while a simpler process integrated into the feedback control mechanism would be conscious.
Importantly, DLCT also notes that even within the self-referential feedback mechanism, the engagement of macro-level laws is conditional rather than automatic. The micro-level retains the ability to "turn off" the negative feedback control, meaning that the decision to reduce macro-level feedback errors lies with the micro level. Unless the micro level actively reduces macro feedback errors, the system will not be affected by the macro level. This introduces a further distinction: conscious processing requires not only the presence of the dual-level architecture but the active participation of micro-level elements in responding to macro-level constraints. This conditional causal relationship differs from conventional notions of causation and adds nuance to the boundary between conscious and unconscious processing.
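DLCT's published formalism is not reproduced here, but the conditional-causation structure can be sketched in a few lines. The algebraic constraint, learning rate, and update rule below are all invented stand-ins: circuit B computes a macro-level feedback error, and whether it shapes micro dynamics depends entirely on whether the micro level "listens".

```python
def macro_error(m):
    """An invented algebraic structural constraint among macro-level module
    activities: the modules should jointly satisfy m0 = m1 + m2."""
    return m[0] - (m[1] + m[2])

def update(m, micro_listens, lr=0.2):
    """Circuit B computes the macro feedback error; whether it shapes the
    micro dynamics depends on the micro level's participation (DLCT's
    conditional causation). With micro_listens=False the same activity runs
    on, unconscious by the theory's criterion."""
    err = macro_error(m)
    if micro_listens:   # micro level has not "turned off" the negative feedback
        m = [m[0] - lr * err, m[1] + lr * err / 2, m[2] + lr * err / 2]
    return m, err

for listens in (True, False):
    state = [3.0, 1.0, 0.5]
    for _ in range(20):
        state, err = update(state, micro_listens=listens)
    print(f"micro listens={listens}: residual macro error = {err:+.3f}")
```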
Irruption theory distinguishes conscious from unconscious processing not by computational complexity but by two complementary information-theoretic signatures: irruption and absorption. Absorption is the mechanism specifically associated with conscious experience. When a material process makes a difference to the mind -- when physical activity gives rise to or shapes subjective experience -- the theory predicts a corresponding unintelligible decrease in that process's observable capacity to make material differences. In operational terms, absorption manifests as a reduction in the variability or entropy of neural activity, corresponding to the compression of high-dimensional neural dynamics into low-dimensional collective order parameters. Conscious processing, on this account, is processing that has been "absorbed" into the black-box interface with the mind, and its material signature is precisely this loss of informational diversity -- a pattern of increased redundancy and decreased dimensionality in neural state space.
Unconscious processing, by contrast, would lack the absorption signature. Neural activity could be equally complex in computational terms, but if it does not undergo the specific transformation associated with absorption -- if it does not compress into the low-dimensional structures that correlate with experience -- it remains unconscious. The theory also predicts that conscious processing will co-occur with irruption on the action side: the mind's motivational involvement in bodily activity shows up as an increase in unintelligible entropy or noise that cannot be accounted for by physical laws alone. The two signatures have contrary effects on the body: irruption coincides with decreases in shared variance among physiological processes (diversification), while absorption coincides with increases in shared variance (convergence). Material processes involved in action and perception are thus expected to be spatially and temporally segregated, because a system cannot simultaneously undergo both increased and decreased shared variance at the same location and time.
This framework offers a measurable criterion. Irruption theory predicts that conscious perception should be associated with specific entropy-reduction and compression signatures in neural data, while unconscious processing of equal complexity would not show these signatures. The theory further proposes that conscious experience occurs in aperiodic cycles, coinciding with moments of large-scale neural integration -- absorption events -- while moments of segregation and diversification correspond to irruption and volitional agency. The distinguishing factor is not the amount of processing but whether that processing undergoes the specific thermodynamic-informational transformation that marks its passage through the black-box middle into the domain of mind.
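Both signatures are measurable with standard tools. The sketch below uses synthetic data and the participation ratio of the covariance spectrum as its dimensionality measure (a common choice, not one the theory mandates): activity "absorbed" onto a few shared order parameters shows far lower effective dimensionality than equally active but independent units.

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(X):
    """Effective dimensionality of activity X (time x units):
    (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    eig = np.linalg.eigvalsh(np.cov(X.T))
    return eig.sum() ** 2 / (eig ** 2).sum()

T, N = 2000, 40
# "Unabsorbed": 40 units fluctuating independently (high informational diversity).
independent = rng.standard_normal((T, N))
# "Absorbed": the same units riding on 2 shared order parameters plus a little
# private noise (high shared variance, compressed dynamics).
latents = rng.standard_normal((T, 2))
absorbed = latents @ rng.standard_normal((2, N)) + 0.2 * rng.standard_normal((T, N))

for name, X in [("independent", independent), ("absorbed", absorbed)]:
    print(f"{name:12s} effective dimensionality = {participation_ratio(X):5.1f}")
# ~40 vs ~2: the compression signature predicted to mark conscious (absorbed)
# processing, even though both datasets are equally "active".
```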
According to the FEP-based model of consciousness, the mechanism distinguishing conscious from unconscious processing is not complexity per se but rather a specific structural and dynamical feature: the presence of an irreducible Markov blanket whose active states exert neuromodulatory, causally interventional influence over external neuronal dynamics, embedded within a hierarchy of nested holographic screens. The critical distinction involves what the theory calls "covert action" --- mental action corresponding to the deployment of attention, which is the precision-weighting of prediction errors at subordinate levels of the hierarchy by expectations at superordinate levels. Conscious processing occurs when the active states of an internal Markov blanket causally intervene on external dynamics in a nonlinear fashion, changing the coupling between external states rather than merely influencing them linearly. This is identified neurobiologically with neuromodulation --- the modulation of synaptic efficacy or neuronal gain --- mediated by ascending modulatory neurotransmitter systems originating in brainstem nuclei.
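The linear-versus-nonlinear contrast is the difference between additive drive and gain control. In the invented example below, only the modulatory case lets the intervening variable change the coupling dy/dx between the other two, which is the signature the model assigns to covert action:

```python
import math

def linear_drive(x, a):
    """Additive influence: a shifts y, but the x->y coupling dy/dx is fixed."""
    return 2.0 * x + a

def gain_modulation(x, a):
    """Neuromodulatory influence: a sets the synaptic gain, so dy/dx itself
    changes; the coupling between the other variables is what moves."""
    return 2.0 * (1.0 / (1.0 + math.exp(-a))) * x

for a in (-2.0, 0.0, 2.0):
    dydx_lin = linear_drive(1.0, a) - linear_drive(0.0, a)
    dydx_mod = gain_modulation(1.0, a) - gain_modulation(0.0, a)
    print(f"a={a:+.0f}: linear coupling = {dydx_lin:.2f}, modulated coupling = {dydx_mod:.2f}")
# The linear coupling is 2.00 regardless of a; the modulated coupling tracks a.
```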
The model proposes that consciousness is associated with an "innermost screen" --- the deepest Markov blanket in the nested hierarchy --- that is irreducible in the sense that its internal states cannot be further partitioned by additional Markov blankets. This irreducible bulk, housing densely connected internal states, must be sufficiently complex to function as a flexible meta-controller orchestrating both overt and covert action. The contents of consciousness correspond to the information encoded on this innermost screen, which includes objects, space-time, the self, memories, plans, and imaginative experience. The claim is that all conscious systems must have such an innermost screen that mediates covert action (such as arousal and attention), and that it is only through this mental action --- the selective allocation of precision to sensory impressions --- that a system can be said to consciously experience anything. As the theory puts it, "we cannot see unless we look, we cannot hear unless we listen, and we cannot feel unless we touch."
This account draws a principled line between conscious and unconscious processing by identifying consciousness with the capacity for active, attentional selection of sensory evidence within a hierarchical generative model. Much information encoded across the hierarchy of Markov blankets never enters consciousness because it is not selected by the innermost screen's covert action. The difference between consciously perceiving one's breathing and not consciously perceiving one's heartbeat, for instance, is attributed to the ability to consciously attend to the sensory consequences of respiration but not (usually) to those of the cardiac cycle. The temporal thickness or depth of the generative model is also crucial: conscious processing requires models with sufficient temporal depth to infer the counterfactual consequences of action, enabling planning and the experience of agency, whereas unconscious processes operate with "thin" models that respond reflexively in the here and now without entertaining alternative futures.
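Temporal depth can be illustrated as the difference between a policy that scores only the present step and one that rolls a generative model forward over counterfactual futures. The reward landscape and horizon below are wholly invented: the thin policy is blind to the distant reward and drifts, while the deep policy plans a route to it that detours around a penalized state.

```python
def thin_policy(state, actions, reward):
    """Temporally thin: react to the immediate payoff of the present step."""
    return max(actions, key=lambda a: reward(state, a))

def deep_policy(state, actions, reward, model, horizon=4):
    """Temporally deep: roll the generative model forward to score the
    counterfactual consequences of each action before committing."""
    def value(s, h):
        if h == 0:
            return 0.0
        return max(reward(s, a) + value(model(s, a), h - 1) for a in actions)
    return max(actions, key=lambda a: reward(state, a) + value(model(state, a), horizon - 1))

# Invented landscape: a reward at position 10, a penalized state at 5.
actions = (-1, +1, +2)
model = lambda s, a: s + a
reward = lambda s, a: 1.0 if s + a == 10 else (-5.0 if s + a == 5 else -0.1)

print(thin_policy(3, actions, reward))          # -1: blind to the distant reward
print(deep_policy(3, actions, reward, model))   # +1: plans toward 10, skirting 5
```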
According to Recurrent Processing Theory, the critical mechanism that distinguishes conscious from unconscious processing is not the complexity, depth, or anatomical location of neural activity, but rather the direction of information flow: specifically, whether processing involves recurrent (re-entrant, feedback) interactions between cortical areas or remains confined to the feedforward sweep. When a visual stimulus hits the retina, it is processed through successive levels of visual cortex via feedforward connections at astonishing speed, with each level taking only about 10 milliseconds. Within 100-150 milliseconds, the entire brain has been activated by the new image, and potential motor responses are prepared. Neurons throughout this feedforward sweep exhibit complex tuning properties, including selectivity for motion, depth, color, shape, and even faces. Yet despite this sophisticated and extensive processing, the feedforward sweep alone does not produce conscious experience. Multiple masking studies in both humans and monkeys demonstrate that stimuli that activate neurons throughout the brain via feedforward pathways -- including in V1, inferotemporal cortex, frontal eye fields, and motor cortex -- remain completely invisible when recurrent processing is interrupted.
Lamme's four-stage model formalizes this distinction. Stage 1 (shallow feedforward processing) and Stage 2 (deep feedforward processing reaching prefrontal and motor areas) are both unconscious, regardless of which brain areas are reached or how complex the resulting neural activity becomes. It is only at Stage 3 -- when horizontal connections within areas and feedback connections from higher to lower areas become active, establishing recurrent processing loops -- that phenomenal consciousness emerges. Stage 4 extends these recurrent interactions to include frontoparietal networks, adding cognitive access and reportability. The crucial empirical evidence comes from backward masking paradigms: when a visual stimulus is followed rapidly by a mask, the mask's feedforward sweep interrupts the recurrent processing that would otherwise develop for the target stimulus. The target's feedforward activation can still reach the highest cortical levels, driving priming and other unconscious effects, but without recurrent processing the stimulus remains invisible. TMS studies confirm this: disruption of V1 activity at approximately 100 milliseconds after stimulus onset -- the time when feedback signals arrive back at V1 -- abolishes visual awareness, further indicating that it is the recurrent interaction between areas, not the initial activation of any particular area, that is critical.
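The timing logic lends itself to a toy event model. The 10 ms stage delay and the roughly 100 ms feedback latency come from the text; the rest of the sketch is invented. The mask blocks awareness exactly when its own sweep reaches V1 before the target's feedback does, the backward-masking window:

```python
def simulate(mask_onset=None, stage_delay=10, n_stages=10, feedback_latency=100):
    """Toy timeline of Lamme's stages. Target onset at t = 0; the feedforward
    sweep reaches stage k at k * stage_delay. Awareness requires the feedback
    signal re-entering V1 at ~feedback_latency ms to find V1 still carrying
    the target rather than an intervening mask."""
    sweep_done = n_stages * stage_delay   # whole brain reached: still Stage 2, unconscious
    if mask_onset is not None and mask_onset < feedback_latency:
        return (f"sweep done at {sweep_done} ms (priming, motor preparation possible), "
                f"but the mask at {mask_onset} ms overwrites V1 before feedback "
                f"returns: target invisible")
    return (f"sweep done at {sweep_done} ms; recurrent loop closes at "
            f"{feedback_latency} ms: target phenomenally visible")

print(simulate())               # no mask: recurrent processing develops
print(simulate(mask_onset=50))  # backward masking interrupts recurrence
```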
A key neural argument supporting this distinction is that activation of cortical neurons per se, even activation of frontoparietal neurons, is insufficient for conscious experience. Several studies have demonstrated that masked stimuli can activate prefrontal regions including the frontal eye fields, anterior cingulate, pre-supplementary motor area, inferior frontal gyrus, and anterior insula without producing any conscious sensation. This convergence of information toward prefrontal cortex via feedforward pathways does not yield consciousness, even though it can drive functional effects such as response inhibition and strategic switching. The theory thus identifies a specific architectural criterion -- recurrent interaction -- rather than a threshold of activation strength or a particular anatomical locus as the mechanism that separates conscious from unconscious processing.
In Attention Schema Theory, the mechanism that distinguishes conscious from unconscious processing is the construction and deployment of an attention schema --- an internal model of the process of attention. Webb and Graziano (2015) define attention as the process by which signals compete for the brain's limited processing resources, a competition that is partly bottom-up (driven by stimulus salience) and partly top-down (driven by task demands and goals). When a signal wins this competition, it gains enhanced signal strength and exerts a greater influence on downstream decision-making, memory, and behavior. This attentional processing can occur without any accompanying awareness. What transforms processing from merely attended to consciously experienced is an additional computational step: the brain constructs a simplified model of the attentional process itself --- one that represents the relationship between self, the attended item, and the act of attending (the "S+A+V" representation described in Webb and Graziano 2015, Figure 1B). The content of this attention schema, because it omits the mechanistic underpinnings of attention, depicts its object as having a mysterious, nonphysical, experiential quality, which is precisely what people report when they claim to be consciously aware of something.
The critical empirical signature of this mechanism is the relationship between awareness and the control of attention. Webb and Graziano (2015) argue that if awareness functions as an internal model used to regulate attention, then without awareness, attention should still be possible but should suffer from deficits in control. This prediction is supported by several lines of evidence. When participants are unaware of a stimulus that captures their attention, they are unable to use top-down mechanisms to redirect attention away from that stimulus, resulting in poorer task performance than when they are aware of the distractor. Conversely, when participants are aware of a distracting stimulus, they can strategically suppress or redirect their attention. This pattern --- attention without awareness is possible but poorly controlled --- distinguishes AST's account from theories that treat awareness and attention as identical or as fully independent processes. In the 2022 paper, Graziano further reports that artificial neural network models trained to perform spatial attention tasks could only successfully control attention when equipped with an attention schema, directly testing the control-theory prediction that good endogenous control of attention requires an internal model of that process.
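This is not Graziano's trained networks, but the control-theoretic prediction can be shown in a few lines with invented dynamics: a controller equipped with a forward model of how its attention will be captured can cancel the pull in advance, while a purely reactive controller, attention without a schema, tracks poorly.

```python
def track(with_schema, steps=50):
    """Hold attention at a goal location while periodic salience pulls it away.
    The 'schema' is a forward model of the pull, used to cancel it in advance;
    without it, the controller only reacts to error already incurred."""
    attention, goal, total_err = 0.0, 0.0, 0.0
    for t in range(steps):
        pull = 1.5 if t % 10 < 5 else -1.5   # bottom-up capture by a distractor
        command = 0.8 * (goal - attention)   # reactive top-down control
        if with_schema:
            command -= pull                  # model-predicted capture, cancelled
        attention += command + pull
        total_err += abs(goal - attention)
    return total_err / steps

print(f"with attention schema:    mean tracking error = {track(True):.2f}")   # ~0
print(f"without attention schema: mean tracking error = {track(False):.2f}")  # large
```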
NPS does not propose a single mechanism that toggles consciousness on or off. The theory is explicitly framed as compatible with various general mechanistic accounts of consciousness, including Global Neural Workspace, recurrent processing, higher-order theories, and neural synchrony, and it takes no stand on which of these correctly identifies the general neural correlate of consciousness (gNCC). Instead, NPS operates at a different level of analysis: it focuses on the neural correlate of specific phenomenal content (NCc) and proposes a "structural similarity constraint" (SSC) that distinguishes genuine neural substrates of conscious experience from merely statistical correlates. The SSC states that neural substrates of phenomenal types must share the structure governing the phenomenal types they are associated with --- that is, the structure of a phenomenal space.
The mechanism NPS emphasizes is therefore not about the presence or absence of consciousness per se, but about what makes a given neural activation pattern a proper correlate of a specific conscious content. A neural region qualifies as an NCC proper only if its activation space mirrors the structure of the relevant phenomenal space via a surjective homomorphism. Fink, Kob, and Lyre illustrate this with Brouwer and Heeger's (2009) study of color perception: area V4 in the visual cortex preserves the circular structure of the perceptual color space (the CIELAB hue circle), while V1, despite statistically correlating with color stimuli and even showing better decoding performance, fails to preserve this circular phenomenal structure. MT+ shows only random patterns. According to the SSC, V4 counts as the NCC proper for color experience because it mirrors the phenomenal structure, while V1 is relegated to a mere statistical correlate. NPS thus distinguishes conscious content-bearing processing from unconscious processing of equal complexity by whether the neural activation structure is homomorphic to the relevant quality space, not by any single architectural or dynamical property of the neural processing itself.
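The SSC can be operationalized in the style of representational similarity analysis. Correlating pairwise-distance matrices, as below, is a crude stand-in for the surjective homomorphism the authors require, and the "V4-like" and "V1-like" populations are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
hues = np.linspace(0, 2 * np.pi, 16, endpoint=False)   # 16 hue stimuli

def rdm(patterns):
    """Pairwise-distance matrix over activation patterns (rows = stimuli)."""
    return np.linalg.norm(patterns[:, None, :] - patterns[None, :, :], axis=-1)

# Phenomenal structure: distances around the hue circle.
phenomenal = rdm(np.c_[np.cos(hues), np.sin(hues)])

# "V4-like": units with smooth circular tuning, so activation geometry is a ring.
centers = rng.uniform(0, 2 * np.pi, 50)
v4_like = np.exp(np.cos(hues[:, None] - centers[None, :]))

# "V1-like": stimulus-dependent (hence decodable) but structurally arbitrary.
v1_like = rng.standard_normal((16, 50))

def structure_match(neural):
    """Correlate neural and phenomenal distance structures: a crude stand-in
    for the SSC's homomorphism requirement."""
    return np.corrcoef(rdm(neural).ravel(), phenomenal.ravel())[0, 1]

print(f"V4-like structural match: {structure_match(v4_like):.2f}")  # high: mirrors the circle
print(f"V1-like structural match: {structure_match(v1_like):.2f}")  # ~0: mere correlate
```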
The sensorimotor contingency theory identifies three conditions that jointly distinguish conscious perception from unconscious or merely automatic processing. First, the organism must be engaged in active exploration of the environment in a manner governed by the relevant laws of sensorimotor contingency --- the structured regularities that determine how sensory input changes as a function of motor actions. These contingencies come in two kinds: apparatus-related contingencies, determined by the structure of the sensory organ itself (such as the way retinal stimulation distorts during eye movements due to the spherical shape of the eye), and object-related contingencies, determined by the properties of the objects or attributes being explored (such as the way color appearance changes with shifts in illumination or viewing angle). Second, and crucially, the organism must possess practical, implicit mastery of these laws --- a non-propositional, skill-based knowledge of how sensory input will change given various possible actions. Third, the organism must be currently exercising this mastery, integrating the sensorimotor patterns into ongoing planning, reasoning, and action-guidance, including speech and rational reflection.
This third condition is what distinguishes mere sensitivity from genuine awareness. O'Regan and Noë use the analogy of an automatic pilot controlling the flight of an airplane: the autopilot is regulated by sensorimotor contingencies but lacks visual awareness because it does not integrate its tracking activity into broader capacities for thought, reflection, and action-guidance. Similarly, a driver who is talking to a friend while driving may be automatically responsive to the red traffic light ahead without being visually conscious of its redness, because the driver's mastery of the relevant sensorimotor contingencies is not being exercised in the service of thought, deliberation, or speech about that feature. Only when the driver turns attention to the color of the light --- when the sensorimotor mastery is drawn into play for current planning and reflection --- does visual consciousness of the redness arise. Visual awareness is thus a matter of degree, not an all-or-nothing property, and it requires the integration of sensorimotor skill exercise with the organism's higher-order cognitive life.
Varela's neurophenomenological account complements this by emphasizing the role of large-scale neural integration as the neural counterpart of conscious experience. He points to research on neural synchrony in the gamma band (40-70 Hz) as a manifestation of the long-range neuronal integration in the brain that correlates with the emergence of a conscious cognitive event. The critical point, however, is that for Varela, such neural mechanisms must be validated not only by third-person measurement but also by disciplined first-person phenomenological accounts. The neurophenomenological method demands that the experiential distinctions between forms of attention --- orienting to sensory stimulation, activating patterns from memory, maintaining an alert state --- be systematically mapped onto their neural counterparts, and that both sides serve as mutual constraints. Unconscious processing, in this framework, is processing that has not been taken up into the integrated, emergent pattern of neural and experiential activity that constitutes a present moment of consciousness.
DIT identifies a precise biophysical mechanism that separates conscious from unconscious processing: the coupling or decoupling of the apical and somatic compartments of L5p neurons, which is gated by non-specific thalamic nuclei. In the conscious state, signals arriving at the apical dendrites of L5p neurons -- carrying contextual, top-down, and modulatory information -- successfully propagate down to the somatic compartment, where they interact with bottom-up feedforward input. This coupling enables dendritic calcium spikes that produce burst firing and, critically, allows the integrated result to be transmitted back to the non-specific thalamus and broadcast across the cortex. Suzuki and Larkum (2020) demonstrated this directly: when they optogenetically stimulated the apical compartment of L5p cells in awake animals, the perturbation propagated to the soma and influenced firing. Under anesthesia, however, the very same optogenetic stimulation of the apical compartment failed to propagate to the soma -- the two compartments were decoupled.
The critical distinction, then, is not about the sheer complexity or quantity of neural activity, but about whether cortical processing engages the L5p-mediated thalamo-cortical loop. Feedforward cortical processing that flows mainly through superficial layers, bypassing L5p neurons and never reaching the non-specific thalamus, remains unconscious regardless of its computational sophistication. The theory makes this explicit: motor computations in the cerebellum and basal ganglia, for instance, are highly complex but remain unconscious because they are detached from the L5p-thalamic loop. Similarly, cortical processing that does not sufficiently activate L5p neurons or that fails to propagate through the thalamo-cortical broadcasting system will be subliminal. The gating of apical-to-somatic signal propagation by non-specific thalamic nuclei -- particularly via metabotropic receptor activation and acetylcholine-dependent mechanisms -- is the specific switch that DIT identifies as the difference-maker between conscious and unconscious processing of comparable complexity.
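The gating claim reduces to a small decision rule. The sketch below is a cartoon of the coupling logic, not a biophysical model: the thresholds are arbitrary, and the coincidence condition stands in for the dendritic calcium spike (Larkum's "BAC firing").

```python
def l5p_output(basal, apical, thalamic_gate):
    """Two-compartment L5p cartoon. Feedforward (basal) input drives the soma
    directly; apical (contextual, top-down) input reaches the soma only while
    non-specific thalamic input keeps the compartments coupled, in which case
    coincidence of the two triggers a calcium-spike-driven burst."""
    somatic_drive = basal > 1.0
    if not thalamic_gate:               # anesthesia-like decoupling: apical input is lost
        return "single spikes" if somatic_drive else "silent"
    if somatic_drive and apical > 1.0:  # coincidence detection ('BAC firing')
        return "burst -> relayed to non-specific thalamus and broadcast"
    return "single spikes" if somatic_drive else "silent"

for gate in (True, False):
    print(f"thalamic gate {'open' if gate else 'closed'}:",
          l5p_output(basal=1.5, apical=1.5, thalamic_gate=gate))
```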