The hard problem: why does physical processing give rise to subjective experience?
Fifteen theories have offered answers to this question.
According to Integrated Information Theory, physical processing does not "give rise to" subjective experience in the usual sense of that phrase. Instead, IIT takes a "consciousness-first" approach, beginning from the one thing that is immediately and irrefutably certain: that experience exists. IIT identifies the essential properties of every conceivable experience through introspection --- intrinsicality (it exists for itself), information (it is specific), integration (it is unitary and irreducible), exclusion (it is definite), and composition (it is structured) --- and codifies these as the axioms of phenomenal existence. The theory then translates these phenomenological axioms into corresponding postulates of physical existence, where "physical" is understood operationally as cause-effect power: the ability to take and make a difference. A substrate of consciousness must therefore have cause-effect power upon itself (intrinsicality), in a specific state (information), as an irreducible whole (integration), that is maximally irreducible (exclusion), and structured by causal distinctions and relations (composition).
The central claim of IIT is not that physical processes produce consciousness as a byproduct, but rather that there is an explanatory identity between an experience and the cause-effect structure unfolded from a complex of mechanisms in its current state. Phenomenal existence --- consciousness --- is intrinsic existence, the only existence that is absolute rather than relative. Physical existence, defined operationally as cause-effect power, is the way intrinsic existence is formulated in objective terms. The identity is explanatory because it accounts for why an experience feels the way it does: the cause-effect structure of the substrate's main complex, composed of causal distinctions and relations, corresponds to all the properties of the experience --- both essential properties (being intrinsic, specific, unitary, definite, structured) and accidental properties (the feeling of spatial extendedness, temporal flow, objects, colors, sounds). This framework yields an intrinsic powers ontology in which consciousness is not an emergent property layered atop a physical substrate, but rather what genuinely exists is the cause-effect structure itself, unfolded from the substrate here and now.
IIT thus reframes the hard problem by denying that consciousness must be "conjured out of" physics. By starting from experience as the primary datum and formulating its essential properties as physical postulates, IIT makes the relationship between the subjective and the objective a matter of identity rather than production. The quantity of consciousness corresponds to the structure integrated information (big Phi) of the cause-effect structure, while its quality corresponds to the specific shape of that structure in cause-effect space. In this way, rather than explaining why physical processing should feel like anything at all --- the classic formulation of the hard problem --- IIT proposes that what genuinely exists, intrinsically and absolutely, simply is the experience as specified by its cause-effect structure.
Global Workspace Theory does not offer a direct answer to why physical processing gives rise to subjective experience in the philosophical sense of the "hard problem." Dehaene and Naccache (2001) explicitly acknowledge this gap, stating that in the present state of methods, trying to address hard problems such as qualia head-on "can actually impede rather than facilitate progress," and that they believe many of these problems "will be found to dissolve once a satisfactory framework for consciousness is achieved." The theory's strategy is therefore deflationary: rather than explaining why there is something it is like to be a conscious system, it aims to identify the functional and neural conditions under which information becomes consciously accessible, with the hope that phenomenal consciousness will be illuminated as a consequence.
The closest the theory comes to addressing subjective experience is through the notion of enormous combinatorial diversity in workspace states. Dehaene and Naccache argue that the flux of neuronal workspace states associated with a perceptual experience is "highly differentiated" and of "high complexity," and that the contents of perceptual awareness are "complex, dynamic, multi-faceted neural states that cannot be memorized or transmitted to others in their entirety." They suggest these biological properties are "potentially capable of substantiating philosophers' intuitions about the 'qualia' of conscious experience," though they concede considerable neuroscientific research will be needed. The workspace model also proposes a distinction between three levels of information accessibility — permanently inaccessible, potentially accessible, and currently mobilized into consciousness — and suggests that the intuitive gap between phenomenal and access consciousness may simply reflect the discrepancy between the vast pool of potentially accessible information and the narrow stream that is actually broadcast at any moment.
Baars (2005) frames the issue somewhat differently through his theater metaphor, suggesting that the "bright spot" of consciousness on the stage of working memory is what creates subjective experience by making information globally available to a vast "audience" of unconscious specialized processors. But this remains a functional description of what consciousness does — enabling widespread access — rather than an explanation of why that access is accompanied by felt experience. The theory thus provides a rich functional account of the conditions under which subjective experience occurs while openly sidestepping the question of why those conditions produce phenomenality at all.
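The broadcast dynamic described above, a narrow stream of winning content made globally available to many unconscious specialists, can be sketched in a toy simulation. The class and processor names below are illustrative assumptions, not part of Dehaene's or Baars's actual models:

```python
class GlobalWorkspace:
    """Toy global workspace: many specialist processors, one broadcast channel."""

    def __init__(self):
        self.processors = []   # unconscious specialists (the "audience")
        self.log = []          # record of what each specialist received

    def register(self, name, handler):
        self.processors.append((name, handler))

    def cycle(self, candidates):
        """candidates maps content -> activation strength. The strongest
        coalition wins access and is broadcast to every processor."""
        winner = max(candidates, key=candidates.get)
        for name, handler in self.processors:
            self.log.append((name, handler(winner)))
        return winner

gw = GlobalWorkspace()
gw.register("language", lambda c: f"report: {c}")
gw.register("memory", lambda c: f"store: {c}")
gw.register("planning", lambda c: f"plan around: {c}")

# Three competing contents; only one is "mobilized into consciousness".
conscious_content = gw.cycle({"face": 0.9, "tone": 0.4, "itch": 0.2})
```

The winner-take-all step caricatures the gap the theory emphasizes: all three candidates were potentially accessible, but only the strongest was actually broadcast.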
Higher-Order Thought theory does not claim that physical processing straightforwardly "gives rise to" subjective experience in the way the hard problem presupposes. Instead, Rosenthal argues that the apparent intractability of this question stems from adopting a Cartesian conception of mind on which consciousness is definitional of mentality itself. If consciousness is what makes a state mental in the first place, then no non-mental phenomenon can be invoked to explain it, and the gap between physical processing and subjective experience appears unbridgeable. Rosenthal's strategy is to dissolve this impasse by adopting a non-Cartesian framework in which mentality is defined not by consciousness but by the possession of intentional and phenomenal properties. Once this move is made, mental states can exist without being conscious, and the question shifts from the metaphysically loaded "why does experience exist?" to the more tractable "what makes some mental states conscious rather than others?"
On the HOT account, a mental state becomes subjectively experienced -- becomes conscious -- when it is accompanied by a roughly contemporaneous higher-order thought to the effect that one is in that very state. This higher-order thought is not mediated by inference or sensory observation; it is a noninferential, nonsensory awareness of the first-order state. Rosenthal maintains that this causal-relational account is compatible with both materialist and dualist ontologies, though it fits most naturally with naturalism. The materialist can hold that what makes mental states conscious is their causing higher-order thoughts via suitable neural connections, and that intentional and sensory properties are themselves special sorts of physical properties. By explaining consciousness in terms of the relation between nonconscious mental states, the theory aims to dissolve the intuitive gulf between physical reality and consciousness, since the stark discontinuity between conscious states and physical reality does not arise when we consider only nonconscious mental states and explain consciousness by reference to them.
However, it is important to recognize what this strategy does and does not accomplish. It provides a functional account of the conditions under which a mental state becomes conscious, and it defuses one source of the hard problem by rejecting the assumption that consciousness is essential to mentality. But it does not offer a positive explanation of why being the target of a higher-order thought should generate subjective experience rather than merely being a cognitive event with no felt quality. Rosenthal acknowledges this concern indirectly by arguing at length that the seeming inseparability of sensory qualities from consciousness is an artifact of how we pick out mental states through introspection, not a deep metaphysical truth. The theory reframes the hard problem rather than solving it directly, trading a seemingly intractable metaphysical puzzle for a set of empirical and conceptual questions about higher-order representation.
Predictive Processing, as articulated by Seth and Hohwy, does not claim to solve the hard problem of consciousness --- the question of why physical processing should feel like anything at all. Hohwy (2012) explicitly sets aside metaphysics, noting that "we will not discuss any metaphysics in this paper, however." Seth and Hohwy (2021) reinforce this by arguing that PP is best understood as a theory for consciousness science rather than a theory of consciousness. The framework describes perception as a process of inference in which the brain constructs its "best guess" of the hidden causes of sensory input through hierarchical generative models, and it proposes that conscious perceptual content is constituted by whichever prediction or hypothesis currently achieves the highest overall posterior probability. In this view, what we experience is the brain's top-down model of the world, not raw sensory data. The phenomenology of experience --- why a red apple looks the way it does, why an emotion feels the way it does --- is to be explained by the specific structure of the generative model and the precision-weighted prediction errors it processes.
Seth and Hohwy (2021) propose that rather than monolithically identifying consciousness with a single process or mechanism, progress will come from establishing "systematic mappings between physical and biological mechanisms, and the functional and (crucially) phenomenological properties of consciousness." This strategy replaces the single grand mystery of solving consciousness with a series of smaller, tractable challenges: explaining why a visual experience of an object feels different from an emotional experience, for instance. PP addresses these by locating different aspects of phenomenology in different aspects of the predictive hierarchy --- interoceptive inference for emotion and selfhood, active inference for the sense of agency, and exteroceptive inference for perceptual content. However, the authors are candid that this does not resolve why prediction error minimization should be accompanied by subjective experience in the first place. The theory's strength lies in its capacity to generate testable predictions about the structure and dynamics of conscious contents, not in offering a metaphysical account of why consciousness exists.
Perceptual Control Theory does not offer a direct ontological answer to the hard problem of consciousness, but it provides a functional account that reframes the question in terms of control architecture. According to PCT, living organisms are hierarchical systems of negative feedback control in which perceptual input signals are continuously compared against reference values, generating error signals that drive corrective output. The core insight is that this entire control architecture can operate without consciousness -- robots implementing PCT hierarchies exhibit purposive, naturalistic behavior through real-time control of perceptual input without any need for prediction or awareness (Mansell 2024; Young 2026). The question then becomes not why physical processing produces experience in general, but why experience arises specifically at certain points within an already-functioning control hierarchy.
The Control of Perception of Quality (CoPoQ) framework developed by Young (2026) offers the most direct PCT-based answer. Young argues that phenomenal experience, understood as the perception of quality, is fundamentally tied to the process of learning and adaptive reorganization within control systems. Perceptions are subjective functions that transform raw environmental input into internal experiences tailored to support control, and these perceptions emerge as qualia -- qualitative experiential characters constructed via evolved or reorganized perceptual functions. The organization of these functions determines the shape of phenomenal experience. Critically, these qualia are inherently valenced, carrying affective significance along a positive-to-negative spectrum, because they reflect the quality of the system's interaction with its environment as indexed by perceptual control error.
Young's framework challenges the conceivability of philosophical zombies by arguing that consciousness is a direct and necessary consequence of controlling perceptions of quality. If a system demonstrates the same behavioral output, engages the same feedback regulation processes, operates within equivalent perceptual control hierarchies, and adapts based on valence-linked reorganization, then it is necessarily participating in the conscious regulation of perceived quality. To deny such a system experience is, on this view, to fundamentally misunderstand experience as the active control of valence-laden perceptual error. However, PCT proponents acknowledge that this account ultimately provides a functional explanation of what consciousness does within control systems -- linking it to quality perception, learning, and reorganization -- rather than resolving the metaphysical question of why these control dynamics are accompanied by phenomenal character at all. The hard problem remains open within the framework, even as the functional role of experience is given a precise specification.
Illusionism's answer to this question is radical and revisionary: physical processing does not give rise to subjective experience in the way the question presumes. According to Frankish, phenomenal consciousness -- the supposedly qualitative, ineffable, intrinsic "what-it's-like" character of experience -- is illusory. Experiences do not actually possess phenomenal properties; rather, our introspective mechanisms misrepresent complex physical properties of sensory states as simple, intrinsic, qualitative feels. What seems like an irreducible subjective experience is in fact a distorted introspective rendering of underlying neural processes that have complex chemical, biological, representational, and functional features. Introspection, on this view, delivers a partial and systematically misleading picture of our inner states, bundling their complexity into what appears to be a simple phenomenal feel -- much as a magician's sleight of hand causes an audience to perceive a simple magical effect where only complex trickery exists.
The theory thus replaces the "hard problem" -- why does physical processing produce subjective experience? -- with what Frankish calls the "illusion problem": why does physical processing seem to produce subjective experience? This reframing is central to the illusionist program. The hard problem gains its apparent intractability from the assumption that phenomenal properties are real, anomalous features of the world that resist functional analysis and float free of physical explanation. If those properties are instead illusory -- if experiences have only "quasi-phenomenal" properties, which are non-phenomenal physical properties that introspection typically misrepresents as phenomenal -- then the explanatory challenge becomes tractable. We need only explain how introspective mechanisms generate non-veridical representations that create the vivid appearance of phenomenality.
Dennett's user-interface analogy illuminates the illusionist position. Just as the icons, pointers, and files on a computer desktop correspond only metaphorically to the machine's actual internal structures, the items populating our introspective world are metaphorical representations of real neural events that facilitate mental self-manipulation without yielding deep insight into the processes involved. There is no internal display presented for the benefit of a conscious observer; the illusion arises from the limited access relations among multiple non-conscious subsystems. On Humphrey's related proposal, sensations occur when internalized evaluative responses interact with incoming sensory signals to create complex feedback loops that, when internally monitored, seem to possess otherworldly phenomenal properties -- a "magic show" that evolution has sculpted because it is fitness-enhancing. The illusionist's core claim is that explaining this magic show is a problem for cognitive science and neurophysiology, not a problem requiring new physics or non-physical ontology.
Orch OR proposes that subjective experience does not arise from physical processing in the conventional computational sense. Instead, consciousness is grounded in a specific physical event --- objective reduction (OR) --- that is itself taken to be a fundamental feature of the universe. According to the Diósi-Penrose (DP) scheme, when a quantum system is held in superposition, the mass displacement between the superposed states creates a separation in the underlying spacetime geometry. When this separation reaches a critical threshold, determined by the gravitational self-energy E_G of the superposition, the system spontaneously self-collapses on a timescale of roughly tau ≈ h-bar / E_G. Hameroff and Penrose propose that each such OR event is accompanied by a moment of "proto-conscious experience" --- a primitive element of phenomenal awareness tied to the resolution of Planck-scale spacetime geometry. Subjective experience, on this account, is not produced by computation over physical inputs but is an intrinsic accompaniment of a particular kind of physical process: the self-collapse of quantum superpositions under gravity.
The theory explicitly rejects the premise that consciousness can be explained by neuronal firing patterns, complexity thresholds, or any classical computational mechanism. Penrose argued, drawing on Gödel's incompleteness theorems, that human understanding --- particularly mathematical insight --- involves a quality of "non-computability" that cannot be captured by any algorithmic system. The only apparent source of non-computability in known physics, Penrose contended, is objective reduction, because the outcome of self-collapse is not determined by the prior computable evolution of the quantum state. Orch OR thus links subjective experience to a feature of physics that transcends standard computation. The experiential quality is proposed to arise because OR events involve a selection among superposed spacetime geometries at the Planck scale, connecting consciousness to the most fundamental geometric structure of the universe.
In the biological context, these OR events are "orchestrated" --- meaning they are not random or uncontrolled but are shaped by the information-processing architecture of microtubules within neurons. Tubulin proteins in microtubules enter quantum superposition states, with their dipole configurations encoding information. Microtubule-associated proteins (MAPs) act as nodes that tune the quantum oscillations and constrain the probabilities for post-reduction outcomes. When orchestrated OR occurs, the resulting selection among tubulin conformational states constitutes a conscious moment that carries cognitive content --- a "conscious now" event. Sequential cascades of Orch OR events then produce the familiar stream of consciousness. The theory thus claims that physical processing gives rise to subjective experience because the relevant physical events --- quantum gravitational self-collapses in microtubules --- are not merely correlated with consciousness but are, in their very nature, moments of proto-conscious experience woven into the fabric of spacetime.
According to Dual-Level Causal Theory, physical processing gives rise to subjective experience because certain physical systems --- specifically those with modular, hierarchical neural architectures --- develop a self-referential feedback control mechanism through which the whole system exerts genuine causal influence on its own parts. DLCT begins from the premise that conscious experience is a macro-level phenomenon that cannot be explained by micro-level neural laws alone. The theory rejects the reductionist assumption of "intra-level causal closure," which holds that micro-level phenomena are fully determined by micro-level laws. Once this assumption is rejected, room opens for macro-level psychological laws that are nomologically irreducible to neural-level laws yet physically implementable within the brain's architecture. Subjective experience arises, on this account, because the brain is a "System B" --- a physical system possessing a self-referential feedback control mechanism in which the whole influences the behavior of its own parts --- rather than a "System A" whose behavior can be exhaustively described from the bottom up.
The specific mechanism proposed involves structural downward causation: the brain's neural network modules, treated as macro-level mathematical functions, are subject to algebraic structural constraints (such as commutativity between module pairs) that define intrinsic feedback errors at the macro level. These macro-level feedback errors cannot be described at the micro level alone because they are defined by relationships among the supervenient functions of multiple neural network modules. The micro-level neurons and synapses then adjust --- modifying synaptic weights --- to reduce these macro-level errors through negative feedback control. This inter-level feedback loop, where macro-level constraints that are irreducible to neural-level descriptions actively reshape micro-level physical states, constitutes the intrinsic causal power that the theory identifies with consciousness. Generating conscious experience requires this macro-level intrinsic causal power operating on the neural activity that represents conscious content.
DLCT is forthright that this account reframes the hard problem rather than dissolving it entirely. The theory argues that the question should shift from "What physical states give rise to consciousness?" to "How do macro-level psychological laws contribute to generating physical states corresponding to consciousness, and how do those states subsequently alter physical states?" By reversing the explanatory direction --- using hypothetical psychological laws to predict observable physical phenomena, rather than trying to derive consciousness from neural observations --- DLCT proposes what it calls a "Copernican Turn" in consciousness science. The theory acknowledges that the subjective and unobservable nature of conscious experience makes direct testing difficult, but it maintains that the causal efficacy of consciousness on physical systems is what makes a scientific theory of consciousness possible at all.
Irruption Theory holds that physical processing does not give rise to subjective experience in any intelligible, derivable sense -- and that accepting this is the key to making progress. The theory begins from Schrödinger's observation that the scientific method is built on two foundational principles: the principle of objectification (excluding the observing subject from the domain of nature being studied) and the principle of understandability (expecting that natural phenomena can be rendered fully intelligible). Froese argues that these two principles, taken jointly, make the emergence of subjective experience from physical processes necessarily unintelligible. The principle of objectification excludes the conscious subject from the scientific world image at the outset, so any attempt to recover subjective experience from within that image faces a self-imposed barrier. Rather than treating this as a problem to be overcome, irruption theory proposes that the relation between matter and mind passes through a "black-box middle" -- an interface that is real and efficacious but not transparent to scientific explanation.
The theory proposes that when material processes make a difference to consciousness, this occurs through what Froese terms "absorption": a measurable reduction in the observable variability of a material process, corresponding to that process making a difference to the unobservable mind. In information-theoretic terms, absorption can be approximated by compression or redundancy measures in neural activity. There is a broad consensus in neuroscience that a key signature of consciousness in the brain involves the transformation of high-dimensional neural state spaces into low-dimensional structures, and irruption theory reinterprets this as the material signature of physical processes being absorbed into conscious experience. The feeling of qualia, on this view, corresponds to "an efficient compression of information about prior experiences," as Jost (2021) has characterized it.
Crucially, irruption theory does not claim to explain why this compression-absorption process yields subjective experience rather than merely constituting a computational operation. The theory explicitly accepts that an intelligible entailment from matter to consciousness should not be expected in the first place, because the hard problem of consciousness is recast as a problem of inherent unintelligibility rather than of missing explanation. From the perspective of the natural sciences, the consequences of mind-matter interaction are unobservable on the mind side and unintelligible on the matter side. This is not mysterianism or resignation; rather, by accepting fundamental limits on intelligibility, the theory claims we can derive testable hypotheses about the measurable material correlates of absorption -- about what the physical signatures of experience look like, even if we cannot explain why those signatures are accompanied by something it is like.
The Free Energy Principle does not claim to solve the "hard problem" of why physical processing gives rise to subjective experience in a direct, metaphysical sense. Instead, it offers a framework for naturalizing consciousness by grounding it in the same mathematical formalism that describes any self-organizing system. Under the FEP, any system that persists over time --- that maintains its existence as a distinguishable entity --- must possess a Markov blanket separating its internal states from the external world, and its internal dynamics can be described as performing approximate Bayesian inference, minimizing variational free energy with respect to a generative model of its environment. Since all biological processes can be construed as forms of inference, from evolution through to conscious processing, the question becomes not why physical processing gives rise to experience as such, but rather at what point in the continuum of self-evidencing processes experience emerges. The FEP thus reframes the hard problem as a question about the conditions under which inference acquires the hallmarks of consciousness.
The FEP's approach to this question draws on the holographic principle from quantum information theory and the concept of nested Markov blankets. According to the "inner screen hypothesis," consciousness is associated with the classical information encoded on internal Markov blankets --- the boundaries between subsystems within a composite system. A system must be composite, having at least two mutually separable components communicating classically, for any form of consciousness involving even short-term memory to arise. The sufficiently sparse coupling between internal subsystems, which admits an internal Markov blanket structure, allows classical information processing and integration to occur, and it is this internal information processing that may be identified with, or at least enable, conscious experience. The theory thus links the emergence of subjective experience to a particular kind of physical structure --- hierarchically nested, sparsely coupled subsystems whose interactions can be described as reading and writing information on holographic screens.
Importantly, the FEP acknowledges that it has "little to say about consciousness per se," but offers a way of naturalizing consciousness because its attendant Bayesian mechanics shares the same mathematical foundations as quantum, statistical, and classical mechanics. The FEP provides a metaphysically neutral re-description of physical dynamics rather than positing new entities or processes. It proposes that what it means for a physical system to exist as an identifiable entity --- to have a Markov blanket and minimize free energy --- inherently involves a form of self-modeling and environmental tracking that, under the right structural conditions, constitutes experience. Whether this ultimately resolves the hard problem or merely reformulates it remains an open question the FEP's proponents readily acknowledge.
Recurrent Processing Theory, as developed by Victor Lamme, does not provide a direct answer to why physical processing gives rise to subjective experience in the traditional philosophical sense of the hard problem. Lamme himself acknowledges this explicitly, noting that "this so-called 'hard problem' is difficult to answer at this point" (Lamme, 2006). However, the theory does offer a distinctive reframing of the question by arguing that we should let neuroscience go beyond merely finding "neural correlates of" consciousness and instead allow neural arguments to have genuine explanatory authority over what counts as conscious. On this view, the question is not why a particular neural process feels like something, but rather which neural process can be scientifically identified with phenomenal experience on the basis of converging behavioral, neural, and theoretical evidence.
The theory's positive contribution to this question centers on the claim that recurrent processing -- the bidirectional exchange of signals between higher and lower cortical areas via feedback and horizontal connections -- possesses properties that make it uniquely suited to ground phenomenality. Lamme argues that recurrent processing creates the conditions for perceptual organization: feature binding, figure-ground segregation, depth sorting, and the integration of disparate stimulus attributes into coherent perceptual wholes. These are precisely the properties that characterize conscious percepts as they appear to us phenomenologically. When face-selective neurons in inferotemporal cortex engage in recurrent interactions with neurons encoding color, shape, and spatial position in lower visual areas, the result is not merely a categorical detection of "face" but the rich, integrated experience of seeing a particular face with all its features unified. Feedforward processing, by contrast, yields only isolated categorical signals that lack this perceptual integration.
Lamme further argues that recurrent processing is fundamentally different from feedforward processing at the cellular and molecular level, creating conditions that satisfy the Hebb rule -- where pre- and postsynaptic neurons are simultaneously active -- thereby triggering NMDA receptor activation and synaptic plasticity. This provides a functional rationale for why consciousness (neurally defined as recurrent processing) exists: "we need consciousness to learn." Stimuli that evoke recurrent processing change the brain, while stimuli processed only in a feedforward manner have no lasting impact. While this does not solve the hard problem of why there is something it is like to undergo recurrent processing, it does connect consciousness to a fundamental biological function and opens the possibility of understanding consciousness at the cellular and molecular level, which Lamme considers a more promising avenue than traditional neural correlate approaches.
According to Attention Schema Theory, physical processing does not give rise to subjective experience in the way the question traditionally implies. Instead, AST proposes that subjective experience is best understood as the content of a specific internal model that the brain constructs. Graziano (2022) lays out two foundational principles: first, that information which comes out of a brain must have been represented in that brain, and second, that the brain's internal models are never accurate. Taken together, these principles imply that when people believe, claim, and insist that they possess an intangible, nonphysical feeling of consciousness, this belief is itself the product of information in the brain --- specifically, information contained in a simplified, schematic model of the process of attention. The brain constructs this model, the "attention schema," in the same way it constructs a body schema to represent the physical body. Because the model omits the mechanistic details of how attention actually works --- the neurons, synapses, lateral inhibition, and signal competition --- it depicts attention as something physically incoherent: a ghostly, nonphysical essence or mental possession that defies physical explanation.
AST therefore reframes the hard problem. Rather than asking what alchemical process converts neural activity into felt experience, the theory asks why the brain would construct an internal model that portrays its own processes as having an experiential, nonphysical character. The answer is functional: just as the body schema exists because model-based control of the body is more effective than control without an internal model, the attention schema exists because controlling attention --- a complex, variable, and constantly shifting process --- benefits from a descriptive and predictive model of that process. The content of this model is what people access when they introspect and report having conscious experience. The apparent mystery of consciousness, in this account, arises because people are accessing a caricature of a real physical process, mistaking the simplified model for a literal description of what is happening inside them. AST thus dissolves rather than solves the hard problem: the question of why physical processing produces a nonphysical feeling is treated as ill-posed, because the "nonphysical feeling" is itself informational content in a model that, by its nature as a useful simplification, lacks any reference to physical mechanism.
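AST's functional claim --- that controlling a complex, shifting process benefits from a simplified internal model of it --- can be caricatured in a few lines of code. The dynamics, gains, and decay constant below are arbitrary assumptions made for illustration; AST itself specifies no equations.

```python
# Toy model-based-control sketch (all dynamics and parameters are invented).
def attention_step(level, boost, decay=0.7):
    """The 'true' process: attention decays each step unless boosted."""
    return decay * level + boost

def control(target, steps, use_model):
    """Track a target attention level, with or without an internal model."""
    level, error = 0.0, 0.0
    for _ in range(steps):
        if use_model:
            # Schema-based controller: predicts the decay and compensates.
            boost = target - 0.7 * level
        else:
            # Model-free controller: applies a fixed boost regardless of state.
            boost = 0.3
        level = attention_step(level, boost)
        error += abs(target - level)
    return error

with_model = control(target=1.0, steps=20, use_model=True)
without_model = control(target=1.0, steps=20, use_model=False)
assert with_model < without_model  # the internal model improves control
```

Note that the controller's model here is a crude summary of the true dynamics, not a mechanistic copy of them --- a loose analogue of AST's point that the attention schema is a useful caricature rather than an accurate description.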
Neurophenomenal structuralism does not claim to explain why physical processing gives rise to subjective experience in the generic sense demanded by the hard problem. The theory explicitly acknowledges that it addresses specific rather than generic consciousness: it aims to explain "what the specific content of phenomenally conscious states is," not to tell us when a mental state is conscious in general. NPS holds that the specific qualitative character of any phenomenal experience --- what it is like to see red, hear a tone, or smell coffee --- is constituted by that experience's relational position within a broader structure of possible experiences, what the theory calls a "Q-structure" or quality space. This is a structuralist ontology in which phenomenal content and phenomenal character are rooted in relational, not intrinsic, properties of sentient subjects.
The theory's contribution to the question of why there is subjective experience at all comes through its structuralist reframing. NPS proposes that we should reject the idea that qualia are intrinsic, non-relational "quiddistic" properties and instead understand them as positions in a web of similarity and difference relations. Lyre argues that a rigorous structuralist understanding of what-it-is-like character offers a better handle on the hard problem than the traditional intrinsicalist picture, because the mysterious primitive "suchness" of qualia that makes them seem inexplicable is itself an illusion born of intrinsicalism. Under NPS, the what-it-is-like character of an experience reduces to holistic phenomenal content, which in turn reduces to neural vehicle representations. The theory combines this with reductive physicalism: proponents assume a metaphysically necessary Q-N supervenience, meaning Q-properties cannot vary independently of N-properties, and all causal work is done by the N-structure. This makes the relationship between physical processing and subjective experience one of structural mirroring and reduction rather than mysterious emergence, though Lyre concedes this may contribute to a solution of the hard problem rather than fully provide one.
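The structuralist idea that a quality is individuated by its position in a web of similarity relations, not by any intrinsic coordinate, can be sketched concretely. The hue names and coordinates below are hypothetical, chosen only to show that two intrinsically different embeddings can share one and the same relational structure.

```python
# Toy quality-space sketch (hypothetical coordinates; illustrative only).
# In a structuralist picture, what individuates a quality is its pattern
# of similarity distances to every other quality, not its coordinates.
import itertools

def distance_matrix(points):
    """Pairwise Euclidean distances: the relational 'position' of each point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    names = sorted(points)
    return {(a, b): round(dist(points[a], points[b]), 6)
            for a, b in itertools.product(names, names)}

# Two embeddings related by a rigid translation: different intrinsic
# coordinates, identical relational structure.
hues_a = {"red": (0, 0), "orange": (1, 0), "yellow": (2, 0)}
hues_b = {"red": (5, 5), "orange": (6, 5), "yellow": (7, 5)}

assert distance_matrix(hues_a) == distance_matrix(hues_b)
```

On this picture, nothing over and above the distance matrix individuates "red": its quale just is its position in the web of similarity and difference relations.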
According to the sensorimotor contingency theory developed by O'Regan and Noë, physical processing in the brain does not, by itself, give rise to subjective experience. The theory proposes that experience is not generated by the activation of internal representations in the brain, but is instead constituted by the organism's active exploration of the environment, mediated by practical knowledge of sensorimotor contingencies. Sensorimotor contingencies are the lawful regularities governing how sensory stimulation changes as a function of the organism's movements and actions. Vision, for instance, is not a matter of constructing an internal picture of the world; it is a mode of skillful bodily engagement with the environment, in which the perceiver exercises mastery over the relevant sensorimotor laws. The outside world serves as its own representation --- an "external memory" that the organism probes through its sensory apparatus --- and subjective experience arises from the dynamic, law-governed coupling between perceiver and world rather than from any neural event taken in isolation.
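The notion of a sensorimotor contingency as a lawful regularity, with perception as practical mastery of that law, can be caricatured in a short sketch. The linear `retinal_shift` law and the `Perceiver` class are hypothetical simplifications introduced here for illustration, not anything specified by O'Regan and Noë.

```python
# Toy sensorimotor-contingency sketch (illustrative assumption):
# the "law" relates an eye movement to the resulting shift of the retinal
# image, and "mastery" is modeled as predicting that shift for a novel
# movement after exploring a few.
def retinal_shift(eye_movement):
    """Lawful regularity: the image shifts opposite to the eye movement."""
    return -eye_movement

class Perceiver:
    """Learns the contingency from (movement, shift) samples by averaging
    the observed gain, then predicts shifts for unseen movements."""
    def __init__(self):
        self.gain = 0.0

    def learn(self, samples):
        gains = [shift / move for move, shift in samples if move != 0]
        self.gain = sum(gains) / len(gains)

    def predict(self, eye_movement):
        return self.gain * eye_movement

p = Perceiver()
p.learn([(m, retinal_shift(m)) for m in (1, 2, 3)])
print(p.predict(5))  # mastery: predicts the lawful shift for a new movement
```

The world itself supplies `retinal_shift`; the perceiver stores only the skill of anticipating it --- a loose analogue of the "external memory" idea, in which nothing like an internal picture is constructed.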
Varela's neurophenomenological framework reinforces this anti-representationalist stance by insisting that first-hand experience is an irreducible field of phenomena, not something that can be explained away by appeal to neural correlates or computational processes. Varela argues that the hard problem of consciousness cannot be solved by finding a "theoretical fix" or "extra ingredient" within the brain, but requires a disciplined phenomenological method that takes the structure of lived experience as a legitimate and irreducible domain of inquiry. His working hypothesis of neurophenomenology proposes that phenomenological accounts of experience and cognitive-scientific accounts of brain processes relate to each other through "reciprocal constraints" --- each domain providing structure and evidence that shapes the other. In this way, the embodied cognition approach reframes the question: rather than asking how physical processing manufactures experience from the inside, it holds that experience is the temporally extended activity of an embodied agent exploring its world, and that the brain's role is to enable and sustain the sensorimotor skills that constitute this exploration.
O'Regan and Noë make the further claim that there is, in the relevant sense, no explanatory gap. The apparent gap between brain processes and experience arises from a mistaken conception of experience as consisting of internally occurring sensation-like states or qualia. Once experience is reconceived as a mode of activity --- something the organism does rather than something produced inside it --- the puzzle of how neural activity could generate subjective states dissolves. There is no need to explain how a Porsche-driving "quale" emerges from the brain, because the felt character of driving a Porsche is constituted by the various things one does while driving and by one's confident mastery of the sensorimotor contingencies governing the car's behavior. The brain supports this activity by enabling mastery and exercise of sensorimotor contingencies, but the brain's activation does not, by itself, constitute the seeing, just as the body position of a dancer does not, by itself, constitute the dance.
Dendritic Integration Theory does not claim to solve the hard problem of consciousness outright, but it offers a specific neurobiological account of why certain physical processes are associated with subjective experience while others are not. According to DIT, conscious experience arises when two fundamentally distinct information streams --- sensory/feedforward data and contextual/feedback data --- are integrated within the dendritic architecture of cortical layer 5 pyramidal (L5p) neurons. These neurons possess two functionally distinct compartments: a somatic (basal) compartment that receives bottom-up, feedforward sensory information from lower cortical areas and first-order thalamic relay nuclei, and an apical compartment near the top of their dendritic tree that receives diverse top-down contextual input from higher cortical areas and non-specific (NSP) thalamic nuclei. When both compartments are activated simultaneously, the neuron fires a characteristic burst of action potentials via dendritic calcium spikes, and this coincidence detection is proposed as the cellular basis for conscious experience.
The theory argues that what makes this particular physical process give rise to experience --- as opposed to other equally active neural processes --- is its unique position at the intersection of two large-scale brain loops. The cortico-cortical loop processes and refines representational content across cortical areas, while the thalamo-cortical loop, running through the non-specific thalamus, modulates the global state of the brain. L5p neurons sit at the nexus of both loops, and the integration happening within their dendrites effectively couples the state of consciousness (wakefulness, alertness) with the contents of consciousness (what is perceived, felt, or thought). DIT proposes that it is precisely this coupling --- the merging of "what" information with "whether" modulation within a single cellular locus --- that constitutes the physical basis of subjective experience. Without this dendritic integration, cortical processing can still occur, but it remains unconscious. The theory thus reframes the question: subjective experience is what it is like when contextually modulated cortical representations are broadcast through the thalamo-cortical system via the dendritic integration mechanism of L5p neurons.
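The coincidence-detection logic at the heart of DIT can be caricatured in a few lines. The thresholds, drive values, and output labels below are arbitrary illustrative choices, not physiological quantities from the theory.

```python
# Toy two-compartment sketch of an L5p neuron (illustrative only; the
# threshold and labels are invented, not physiological values from DIT).
def l5p_output(somatic_drive, apical_drive, threshold=0.5):
    """Coincidence detection across compartments: feedforward (somatic)
    input alone yields isolated spiking; coincident feedback (apical)
    input triggers a calcium-spike-like burst."""
    soma_active = somatic_drive > threshold
    apical_active = apical_drive > threshold
    if soma_active and apical_active:
        return "burst"          # proposed cellular marker of conscious processing
    if soma_active:
        return "single_spike"   # feedforward-only, unconscious processing
    return "silent"

print(l5p_output(0.9, 0.8))  # coincident drive -> "burst"
print(l5p_output(0.9, 0.1))  # feedforward only -> "single_spike"
print(l5p_output(0.2, 0.8))  # feedback alone -> "silent"
```

The three cases track the text's contrast: cortical processing can proceed on somatic drive alone, but only the coincidence of both streams within one cellular locus yields the burst that DIT ties to experience.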