The illusion of machine understanding

Artificial intelligence (AI) has rapidly transitioned from a specialized research discipline to an infrastructural force embedded in everyday life, governance, industry, and culture. Public awareness of AI is now nearly universal. A 2025 Pew Research Center survey found that 95% of U.S. adults have heard at least a little about artificial intelligence, yet 53% are not confident they can distinguish AI-generated content from human-created content (Pew Research Center, 2025). This gap between exposure and discernment signals a critical epistemic tension: familiarity does not guarantee conceptual understanding.

Simultaneously, AI adoption is accelerating at an unprecedented scale. A 2025 global survey reported that approximately two-thirds of respondents regularly interact with AI systems, and 83% believe AI can deliver meaningful benefits, even as 58% characterize it as untrustworthy (Reuters Institute, AI Adoption Report, 2025). This duality, widespread usage entwined with skepticism, captures the complex socio-technological relationship contemporary societies have with AI.

Public concern also extends to the cognitive and social implications of pervasive AI use. According to Pew Research data, 53% of Americans believe AI will worsen people’s ability to think creatively, and 50% believe it may harm people’s capacity to form meaningful relationships as AI products increasingly mediate communication and cultural production (Pew Research Center, 2025).

This juxtaposition, rapid social integration alongside conceptual uncertainty, reveals a deeper paradox. Modern AI systems confidently generate essays, design artifacts, policy analyses, and urban simulations. Their outputs exhibit structural coherence and contextual responsiveness. Yet these systems operate through probabilistic modeling and high-dimensional pattern recognition; they do not possess awareness, intentionality, or experiential understanding.

It is within this divergence between output fluency and internal comprehension that the concept of Performative Intelligence emerges. Performative Intelligence describes computational systems that simulate reasoning, creativity, or understanding without possessing intrinsic cognition or semantic grounding. The “intelligence” here is enacted as performance: syntactic fluency without phenomenological awareness.

As AI systems increasingly mediate knowledge production, social coordination, and decision support, distinguishing between computational performance and genuine cognition becomes essential. Without such discernment, societies risk conflating surface-level fluency with true understanding, granting epistemic authority to systems that merely predict patterns rather than grasp meaning.

Defining performative intelligence

The concept of Performative Intelligence emerges from a critical distinction between intelligence as cognition and intelligence as performance. Cognition traditionally implies internal mental processes: perception, memory, reasoning, abstraction, and intentionality. In cognitive science, intelligence is often associated with adaptive problem-solving grounded in embodied interaction with the world (Neisser, 1967; Varela, Thompson & Rosch, 1991). Performance, by contrast, refers to the external manifestation of intelligent behavior, observable outputs that conform to expectations of reasoning or creativity, irrespective of the underlying mechanism.

Artificial systems excel at performance. They generate structured arguments, compose music, simulate urban growth patterns, and draft policy recommendations. However, these outputs arise from computational operations rather than conscious deliberation. Performative Intelligence therefore refers to systems that simulate the appearance of reasoning without possessing phenomenological awareness or intentional states.

An operational definition can be articulated as follows:

Performative Intelligence is the capacity of a computational system to produce outputs that resemble intelligent reasoning through statistical inference and pattern synthesis, absent conscious understanding or self-reflexive awareness.

This distinction requires clarifying three foundational terms:

  • Computation: formal symbol manipulation governed by algorithmic rules. Computation processes inputs to outputs through defined mathematical transformations (Turing, 1936).

  • Cognition: adaptive information processing associated with learning, abstraction, and contextual reasoning, typically studied in humans and biological systems.

  • Consciousness: subjective awareness, self-referential experience, and intentionality, the qualitative “what it is like” dimension (Nagel, 1974; Chalmers, 1995).

Current AI systems operate squarely within computation. They may approximate aspects of cognition functionally, but they do not demonstrate consciousness. Philosophical arguments such as John Searle’s Chinese Room (1980) illustrate this gap: syntactic symbol manipulation does not constitute semantic understanding.

Within AI discourse, this distinction aligns with debates between “strong AI” (machines truly possessing minds) and “weak AI” (machines simulating intelligence). Performative Intelligence situates contemporary large-scale models firmly within the latter category. Despite increasingly convincing outputs, there is no empirical evidence that these systems possess awareness, intention, or intrinsic meaning-making capacities.

Thus, Performative Intelligence is not a dismissal of AI capability; it is a conceptual clarification. The system performs intelligence; it does not inhabit it.

How AI produces the illusion of understanding

The persuasive power of Performative Intelligence lies in its technical architecture. Modern AI systems, particularly large language models, rely on high-dimensional pattern recognition and probabilistic modeling. Trained on vast corpora of text, images, or structured data, these systems learn statistical relationships between tokens, phrases, or visual elements. During generation, they predict the most probable continuation of a sequence given prior context.

This mechanism produces linguistic fluency, but fluency is not equivalent to semantic comprehension. A language model does not “know” what a city is, nor does it possess embodied experience of urban life. It predicts associations between words like “infrastructure,” “density,” and “mobility” based on frequency patterns in training data. The resulting text feels coherent because it mirrors the statistical regularities of human discourse.
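To make this concrete, the toy sketch below implements next-token prediction with a simple bigram frequency model. It is a deliberate simplification (real systems use neural networks trained on vast corpora, and the tiny corpus here is invented for illustration), but the principle is the same: the continuation is whichever token is conditionally most probable, with no model of what the words mean.

```python
# Toy illustration of next-token prediction via bigram frequencies.
# The corpus is invented; real LLMs learn far richer statistics, but
# generation is still a matter of conditional probability, not meaning.
from collections import Counter, defaultdict

corpus = ("urban density shapes mobility . infrastructure supports density . "
          "mobility depends on infrastructure .").split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_continuation(token):
    """Return the statistically most likely next token, and nothing more."""
    counts = follows[token]
    word, n = counts.most_common(1)[0]
    return word, n / sum(counts.values())

word, p = most_probable_continuation("infrastructure")
print(f"after 'infrastructure' -> '{word}' (p = {p:.2f})")
```

The model “knows” that “supports” often follows “infrastructure” in its data; it knows nothing about infrastructure.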

This distinction parallels the difference between statistical correlation and causal reasoning. AI models identify correlations across datasets but lack intrinsic causal models of the world. Judea Pearl (2018) argues that genuine understanding requires causal inference, reasoning about why events occur, not merely identifying patterns. Contemporary AI systems remain largely correlation-driven.
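The point can be illustrated with a standard confounding example (the variables and numbers below are invented for illustration): a hidden common cause drives two otherwise unrelated quantities, and a purely pattern-driven learner registers a strong association between them that correlation alone can never convert into a causal claim.

```python
# Correlation without causation: summer heat (the hidden confounder)
# drives both ice-cream sales and drowning incidents. A pattern-driven
# model sees a strong association, though neither causes the other.
import random

random.seed(0)
temps = [random.uniform(10, 35) for _ in range(1000)]       # confounder
ice_cream = [t * 2 + random.gauss(0, 3) for t in temps]     # caused by heat
drownings = [t * 0.5 + random.gauss(0, 2) for t in temps]   # caused by heat

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# High correlation, yet intervening on ice-cream sales would not change
# drowning rates; only a causal model can distinguish the two readings.
print(f"corr(ice_cream, drownings) = {pearson(ice_cream, drownings):.2f}")
```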

Why, then, do outputs feel intelligent?

Several factors contribute:

  1. Surface coherence: outputs follow grammatical and rhetorical conventions, satisfying human expectations of logical structure.

  2. Context sensitivity: models adapt responses to prompts, giving the impression of responsiveness and situational awareness.

  3. Scale of training data: exposure to immense datasets allows recombination of highly refined patterns, producing sophisticated synthesis.

  4. Anthropomorphic projection: humans naturally attribute agency and intention to systems exhibiting conversational fluency (Reeves & Nass, 1996).

The illusion arises because human observers evaluate intelligence behaviorally. If a system produces structured reasoning, we infer cognition. Yet the internal mechanism remains probabilistic token prediction rather than conceptual modeling grounded in experience.

Thus, the “illusion of understanding” is not deception in a malicious sense; it is a byproduct of advanced statistical modeling interacting with human interpretive tendencies. The system generates outputs that meet formal criteria of intelligence while lacking the ontological properties traditionally associated with mind.

The architecture of unawareness

If Performative Intelligence explains what AI does, the Architecture of Unawareness explains what it lacks. Contemporary AI systems, despite their sophistication, operate without core attributes traditionally associated with the mind.

Lack of embodiment

Human cognition is deeply embodied. Perception, movement, spatial orientation, and sensory feedback shape conceptual development (Varela, Thompson & Rosch, 1991). Meaning emerges from interaction with physical and social environments. AI systems, by contrast, process symbolic representations without sensory grounding. Even multimodal systems that analyze images or audio do so through encoded data streams rather than lived bodily experience.

Without embodiment, AI lacks the experiential substrate from which human concepts arise. It does not inhabit space; it processes representations of space.

Absence of intentionality

Intentionality, in philosophical terms, refers to the “aboutness” of mental states, the capacity for thoughts to be directed toward objects or goals (Brentano, 1874). AI systems do not possess intrinsic goals or desires. Their outputs are functions of optimization objectives defined during training. Apparent purposefulness is externally imposed through loss functions and user prompts.

Thus, any semblance of intention is derivative, not inherent.
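A minimal sketch makes this concrete. In the snippet below (the data and objective are invented for illustration), a one-parameter model is fitted by gradient descent; everything that looks like purpose in its behavior is supplied by the externally chosen loss function.

```python
# The model's "goal" is nothing but an imposed objective: here, mean
# squared error for fitting y = w * x by gradient descent.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, y ~ 2x

w = 0.0      # single parameter
lr = 0.01    # learning rate
for step in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # follow the externally defined objective

print(f"learned w = {w:.2f} (the 'intention' lives in the loss, not the model)")
```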

No self-reflexivity or moral agency

Self-reflexivity involves awareness of one’s own cognitive processes. Humans can reflect on their reasoning, revise beliefs, and assume responsibility for decisions. AI systems lack such meta-awareness. They do not “know” that they are generating responses; they cannot evaluate the moral consequences of their outputs beyond programmed constraints.

Moral agency requires accountability and the capacity to deliberate ethically. AI systems operate without subjective experience, responsibility, or ethical intentionality. Responsibility remains with designers, deployers, and users.

Dependency on training data and human input

AI models are fundamentally dependent on the data used to train them and the prompts that activate them. Their knowledge is derivative of historical corpora, texts, images, and structured datasets generated by humans. Biases, omissions, and distortions in these datasets propagate into outputs. The system does not independently verify truth; it extrapolates patterns.

This dependency underscores a crucial point: AI does not originate knowledge. It reorganizes existing information within probabilistic constraints. Apparent creativity is recombinatory, not experiential.
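A toy example, using deliberately skewed invented data, shows how directly this dependency operates: a frequency-based completion “model” reproduces whatever imbalance its corpus contains, with no mechanism for asking whether the pattern is true or fair.

```python
# Bias propagation: a skewed corpus (invented for illustration) yields
# skewed completions. The system extrapolates; it does not verify.
from collections import Counter

training_corpus = (["the engineer fixed his code"] * 90 +
                   ["the engineer fixed her code"] * 10)

# Count which pronoun follows "the engineer fixed" in the data.
pronouns = Counter(sentence.split()[3] for sentence in training_corpus)
total = sum(pronouns.values())

for pronoun, count in pronouns.items():
    print(f"P('{pronoun}' | 'the engineer fixed') = {count / total:.2f}")
# The completion "his" appears 90% of the time: a statistical echo of
# the data, not a verified or fair fact about engineers.
```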

The two preceding sections together clarify the central thesis of Performative Intelligence. AI systems generate the appearance of understanding through statistical modeling, linguistic fluency, and contextual adaptation. Yet beneath this performance lies a computational architecture devoid of embodiment, intentionality, consciousness, or moral agency.

Recognizing this distinction does not diminish AI’s utility. Rather, it situates AI within its proper ontological category: a powerful predictive system that performs intelligence without possessing awareness.

The social amplification of performative intelligence

If Performative Intelligence describes a technical condition, its societal impact is magnified through cultural, economic, and media ecosystems. The perception of AI as autonomous, intelligent, or even quasi-conscious is not generated solely by algorithms; it is socially constructed and amplified.

Media narratives and technological hype

Mainstream media frequently frame AI breakthroughs using language associated with human cognition: “AI learns,” “AI understands,” “AI decides.” Such narratives compress technical complexity into accessible metaphors but risk overstating capabilities. Research on technological hype cycles (Gartner, 2018) demonstrates how emerging technologies often pass through phases of inflated expectations before encountering public recalibration.

Historically, AI discourse has oscillated between utopian optimism and dystopian alarmism. These polarized framings obscure technical nuance. When probabilistic systems are described as “thinking,” the public imagination shifts from tool-based interpretation to agent-based attribution. The result is conceptual drift: computational processes are reinterpreted as cognitive phenomena.

Anthropomorphization of AI systems

Humans possess a strong cognitive bias toward anthropomorphism, the attribution of human traits to non-human entities. Reeves and Nass (1996) demonstrated that individuals respond socially to computers even when they consciously know the systems lack consciousness. Conversational fluency, voice interfaces, and human-like avatars intensify this effect.

Large language models, in particular, trigger anthropomorphic projection because they operate in natural language, the primary medium of human thought and interaction. When an AI system expresses uncertainty (“I think…”), provides structured reasoning, or offers empathy-like responses, users may infer internal mental states. Yet these expressions are stylistic artifacts of training data rather than indicators of subjective awareness.

Anthropomorphization thus acts as a psychological amplifier of performative intelligence.

Corporate incentives and the branding of “intelligence”

The term “artificial intelligence” itself carries rhetorical power. From a commercial standpoint, branding systems as “intelligent” enhances perceived value, investor interest, and market adoption. The global AI market is projected to surpass hundreds of billions of dollars within the decade (McKinsey Global Institute, 2023). Such economic stakes incentivize expansive claims about capability.

Corporate communications often emphasize autonomy, reasoning ability, and innovation potential while downplaying limitations such as hallucinations, bias propagation, or dependency on training data. This asymmetry between marketing language and technical constraints reinforces the perception of cognitive equivalence between humans and machines.

Performative Intelligence, therefore, is not only computational; it is also narratively constructed through strategic communication.

Public perception vs. technical reality

Empirical studies show a widening gap between public perception and technical reality. Surveys indicate high levels of awareness but limited understanding of AI mechanisms (Pew Research Center, 2025). Many users struggle to differentiate between predictive text generation and genuine reasoning.

Technically, most generative AI systems operate through large-scale pattern prediction and optimization functions. They do not maintain stable world models, possess self-awareness, or engage in intentional deliberation. However, public discourse often interprets outputs as evidence of understanding.

This misalignment between perception and mechanism is the social amplifier of Performative Intelligence. The system performs intelligence; society narrates it as cognition.

Risks of mistaking performance for cognition

The conflation of performance with cognition is not merely conceptual; it carries material consequences across epistemic, institutional, and ethical domains.

Epistemic overtrust

When AI-generated outputs are perceived as authoritative, users may suspend critical evaluation. This phenomenon, sometimes termed algorithmic authority, occurs when computational outputs are granted epistemic credibility beyond their evidentiary basis.

Research in human-computer interaction indicates that users frequently overestimate the accuracy of automated systems, particularly when outputs are presented with confidence (Dzindolet et al., 2003). Generative systems that produce fluent explanations can create an illusion of reliability even when the underlying reasoning is flawed.

Epistemic overtrust shifts knowledge validation from critical inquiry to surface plausibility.

Delegation of authority across sectors

AI systems are increasingly integrated into governance, healthcare diagnostics, financial risk assessment, architectural design optimization, and urban analytics. In each domain, outputs can influence high-stakes decisions.

If performative outputs are misinterpreted as genuine reasoning, decision-makers may delegate authority prematurely. In healthcare, diagnostic models may shape treatment pathways; in governance, predictive analytics may inform policing or resource allocation; in design, generative systems may guide spatial planning decisions.

Delegation without full comprehension of system limitations risks institutionalizing computational biases and probabilistic errors as policy decisions.

Automation bias and decision complacency

Automation bias refers to the human tendency to favor suggestions from automated systems and to ignore contradictory information (Parasuraman & Riley, 1997). As AI outputs grow more fluent and persuasive, the likelihood of automation bias increases.

Decision complacency emerges when human oversight diminishes due to perceived system competence. Instead of functioning as decision-support tools, AI systems risk becoming decision substitutes. This transition erodes human accountability and critical reasoning capacity.

In complex fields such as urban systems, environmental planning, and public health, where contextual nuance and ethical judgment are essential, such complacency can produce systemic consequences.

Ethical implications

Mistaking performance for cognition also generates ethical ambiguity. If AI systems are perceived as autonomous agents, responsibility may become diffused. Who is accountable for harmful outputs: the developer, deployer, or user?

Moreover, anthropomorphic framing may obscure structural inequalities embedded in training data. Bias in datasets becomes normalized when outputs are interpreted as neutral intelligence rather than probabilistic extrapolation.

At a broader level, redefining intelligence in purely performative terms risks diluting the conceptual boundary between human cognitive agency and computational simulation. This shift has implications for labor, creativity, authorship, and democratic deliberation.

Performative Intelligence is not only a technical artifact but a socio-cultural phenomenon. Media narratives, corporate branding, and anthropomorphic psychology amplify computational fluency into perceived cognition. When this perception solidifies into institutional trust, risks emerge: epistemic overreliance, automation bias, and ethical diffusion.

Recognizing these dynamics does not necessitate technological rejection. Rather, it demands calibrated integration: maintaining human interpretive authority while leveraging AI as a high-capacity analytical instrument.

Performative intelligence in creative and knowledge domains

Performative Intelligence exerts particularly visible influence in domains historically associated with originality, authorship, and interpretive judgment: writing, art, architecture, research, and policy formulation. These fields do not merely process information; they construct meaning, negotiate context, and encode cultural values. The entry of generative AI into these domains therefore raises foundational questions about creativity, authorship, and epistemic authority.

Writing, art, architecture, and policy drafting

Generative systems can now produce essays, poems, architectural renderings, urban masterplans, legislative summaries, and technical reports within seconds. In writing, AI synthesizes arguments by recombining stylistic and structural patterns learned from large corpora. In visual arts, diffusion models generate compositions by interpolating between aesthetic features present in training datasets. In architecture and urbanism, generative tools optimize form based on parametric constraints, environmental data, and precedents. In policy drafting, models can assemble regulatory language consistent with institutional tone and format.

Yet across these applications, the mechanism remains recombinatory. The system extrapolates from prior patterns rather than engaging in situated interpretation. Creativity becomes statistically emergent rather than experientially grounded.

Margaret Boden (1998) distinguishes between combinational creativity (new arrangements of existing elements) and transformational creativity (restructuring the conceptual space itself). Most contemporary generative AI operates within combinational bounds. It reconfigures known forms with extraordinary speed but does not independently redefine conceptual paradigms.
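The limitation can be stated computationally. In the hypothetical sketch below (the motifs and programs are invented), every “novel” design concept is a draw from a combination space fixed in advance; transformational creativity, in Boden’s sense, would require changing that space itself, which the generator cannot do.

```python
# Combinational creativity as sampling from a fixed conceptual space.
import itertools
import random

motifs = ["courtyard", "arcade", "green roof", "atrium"]  # invented
programs = ["housing", "library", "market hall"]          # invented

random.seed(1)
proposal = (random.choice(motifs), random.choice(programs))
print(f"generated concept: {proposal[0]} {proposal[1]}")

# The space of possible outputs is enumerable and never grows; the
# generator recombines, it does not redefine its own categories.
space = list(itertools.product(motifs, programs))
print(f"size of conceptual space: {len(space)}")
```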

Reproduction vs. originality

The central tension lies between reproduction and originality. Human creativity often emerges from lived experience, cultural memory, ethical struggle, and contextual negotiation. It is embedded in temporality and embodiment. AI-generated outputs, by contrast, are derivative of historical data distributions. Even when outputs appear novel, they remain probabilistic recompositions of existing artifacts.

This does not render AI outputs valueless; recombination has always been part of artistic and intellectual production. However, conflating recombination with originality risks flattening distinctions between iterative synthesis and conceptual innovation. Performative Intelligence can simulate novelty without undergoing the epistemic rupture that characterizes paradigm shifts (Kuhn, 1962).

Acceleration of production vs. depth of thought

One of AI’s most transformative impacts is acceleration. Drafts that once required days can be produced in minutes. Concept sketches, data analyses, and literature reviews can be rapidly generated. This acceleration enhances productivity and lowers entry barriers.

However, speed may compress deliberation. Cognitive science suggests that reflective reasoning often depends on slow, iterative processing (Kahneman, 2011). When AI systems generate immediate outputs, users may bypass the generative friction that fosters deep insight. The risk is epistemic shallowness: abundant content with diminished critical engagement.

In academic contexts, AI-assisted drafting can expedite synthesis but may erode the formative process of grappling with sources. In architecture, generative optimization can propose numerous formal solutions, yet without careful interpretation, design risks becoming data-responsive rather than culturally responsive. In policy drafting, linguistic fluency may obscure the absence of contextual nuance or stakeholder deliberation.

Implications for academic and professional practice

The integration of Performative Intelligence demands recalibration of authorship norms, evaluation criteria, and professional ethics.

  • Authorship: what constitutes intellectual contribution when AI assists in drafting or design?

  • Assessment: how should originality be evaluated in AI-augmented environments?

  • Professional standards: how do architects, researchers, and policymakers maintain accountability when generative tools shape outputs?

Rather than replacing expertise, AI shifts the locus of expertise. The critical skill becomes interpretive oversight: curating prompts, evaluating outputs, identifying hallucinations, and embedding contextual knowledge. Professional authority increasingly depends on the capacity to mediate between computational generation and human judgment.

Performative Intelligence thus transforms creative and knowledge domains not by eliminating human agency, but by reconfiguring it.

Toward critical AI literacy

If Performative Intelligence is to be integrated responsibly, societies require not only technical proficiency but Critical AI Literacy, a structured understanding of how AI systems function, where their limits lie, and how to evaluate their outputs.

The need for conceptual clarity

Conceptual ambiguity fuels misinterpretation. Terms such as “learning,” “understanding,” and “intelligence” are often applied metaphorically in AI discourse. Critical literacy demands precise differentiation between computational optimization and cognitive awareness.

Understanding that generative models operate via probabilistic inference rather than intentional reasoning recalibrates expectations. Conceptual clarity mitigates anthropomorphic projection and prevents epistemic overreach.

Distinguishing tool from thinker

AI systems are tools, highly advanced, adaptive, and context-responsive tools, but tools nonetheless. Distinguishing the tool from the thinker preserves human epistemic authority. Tools extend capability; thinkers assume responsibility.

This distinction is particularly important in governance, healthcare, and academic research. AI may support decision-making, but it cannot assume moral agency. Responsibility remains human.

Human oversight and interpretive responsibility

Effective integration requires structured oversight mechanisms, illustrated in the sketch that follows this list:

  • Verification of outputs through independent sources.

  • Bias auditing of datasets and model behavior.

  • Transparent disclosure of AI involvement in production.

  • Maintenance of human-in-the-loop decision frameworks.
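What such a framework might look like in code is sketched below; the class, confidence field, and threshold are illustrative assumptions rather than any standard API. The point is structural: outputs in high-stakes contexts or below a confidence bar are routed to a human rather than acted on automatically.

```python
# Minimal human-in-the-loop routing sketch; names and threshold are
# hypothetical, not a standard library interface.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    content: str
    confidence: float   # assumed to be reported alongside the output
    high_stakes: bool   # e.g., medical, legal, or policy context

def route(output: ModelOutput, threshold: float = 0.9) -> str:
    """Decide whether an output may be used directly or needs review."""
    if output.high_stakes or output.confidence < threshold:
        return "escalate to human reviewer"
    return "accept, with logged disclosure of AI involvement"

print(route(ModelOutput("suggested diagnosis ...", 0.97, high_stakes=True)))
print(route(ModelOutput("draft meeting summary", 0.95, high_stakes=False)))
```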

Oversight is not merely technical; it is ethical. It ensures that performative outputs do not bypass deliberative processes.

Building frameworks for responsible integration

Responsible integration entails institutional design. Educational curricula must incorporate AI literacy. Professional organizations should establish guidelines for disclosure and evaluation. Policy frameworks must address transparency, accountability, and data governance.

Interdisciplinary collaboration between technologists, social scientists, designers, ethicists, and policymakers is essential. AI is not solely a computational artifact; it is a socio-technical system embedded in cultural and institutional structures.

In creative and knowledge domains, Performative Intelligence accelerates production and expands possibilities while raising fundamental questions about originality, authorship, and depth. Without critical literacy, acceleration may eclipse reflection. With structured oversight and conceptual clarity, however, AI can function as an augmentation rather than a substitution.

The task is not to resist generative systems, but to situate them properly: as powerful instruments within a broader ecology of human cognition, responsibility, and ethical deliberation.

Reframing intelligence in the technological age

The emergence of Performative Intelligence compels a deeper theoretical question: Is intelligence necessarily conscious? The contemporary AI debate hinges on whether intelligence should be defined functionally, by observable behavior, or phenomenologically, by internal experience and awareness.

Is intelligence necessarily conscious?

Functionalist traditions in philosophy of mind argue that intelligence can be defined by the capacity to process information, solve problems, and adapt behavior (Putnam, 1967). From this perspective, if a system performs intelligent tasks, it may be described as intelligent regardless of its internal substrate.

However, phenomenological and consciousness-centered perspectives maintain that cognition without awareness is incomplete. Thomas Nagel (1974) famously asked, “What is it like to be a bat?”, emphasizing that subjective experience constitutes a defining feature of mind. David Chalmers (1995) later articulated the “hard problem” of consciousness: explaining why and how physical processes give rise to subjective experience.

Current AI systems, despite functional sophistication, exhibit no evidence of subjective awareness. They process inputs and generate outputs but do not possess experiential states. If consciousness remains absent, then what AI demonstrates is not cognition in the human sense, but advanced computation.

Thus, intelligence may be operationally simulated without being ontologically instantiated.

Expanding vs. diluting the definition of intelligence

The proliferation of AI systems risks either expanding or diluting the concept of intelligence. Expanding the definition acknowledges that intelligence may exist across multiple substrates, biological and artificial, manifesting in diverse forms. Dilution, however, occurs when surface-level behavioral similarity substitutes for deeper criteria such as intentionality, self-reflection, or moral reasoning.

Howard Gardner’s theory of multiple intelligences (1983) broadened the concept to include linguistic, spatial, interpersonal, and other capacities. Yet even these forms remain rooted in human cognitive embodiment. Extending the term “intelligence” to purely statistical pattern prediction may obscure meaningful distinctions between adaptive reasoning and probabilistic inference.

The risk of dilution is conceptual inflation: if every predictive system is intelligent, the term loses analytical precision. Performative Intelligence offers a corrective by distinguishing simulated intelligence from conscious cognition without dismissing technological capability.

Philosophical and cognitive science perspectives

Cognitive science increasingly views human intelligence as embodied, situated, and socially distributed (Varela, Thompson & Rosch, 1991; Hutchins, 1995). Intelligence emerges through interaction with environments, tools, and other agents. From this vantage point, AI may be understood as participating in a distributed cognitive system, augmenting human capacities without independently constituting a mind.

This interpretation reframes AI not as a rival to human intelligence but as an extension within a socio-technical network. Performative Intelligence then becomes one node within an expanded ecology of cognition, powerful yet dependent.

Reframing intelligence in the technological age, therefore, requires conceptual discipline. Intelligence may be operationally reproduced, but consciousness, embodiment, and moral agency remain distinct dimensions not presently demonstrated by artificial systems.

Conclusion: coexisting with artificial performance

This article has argued that contemporary AI systems exemplify Performative Intelligence: the capacity to convincingly simulate reasoning, creativity, and understanding without possessing awareness, intentionality, or embodied cognition. Through probabilistic modeling and large-scale pattern recognition, AI generates outputs that satisfy behavioral criteria for intelligence while lacking the phenomenological substrate traditionally associated with mind.

The social amplification of this performance, through media narratives, anthropomorphic projection, and corporate branding, has contributed to widespread misinterpretation. When computational fluency is mistaken for cognition, risks emerge: epistemic overtrust, automation bias, premature delegation of authority, and ethical diffusion.

Yet recognizing these limits does not necessitate technological rejection. On the contrary, Performative Intelligence offers substantial value when positioned correctly. As an instrument for pattern amplification, rapid synthesis, and exploratory generation, AI augments human capacity. It accelerates iteration, expands analytical bandwidth, and supports decision-making processes.

The critical task is discernment.

To coexist productively with artificial performance, societies must:

  • Maintain conceptual clarity regarding the distinction between computation and consciousness.

  • Preserve human interpretive authority in high-stakes domains.

  • Embed oversight, transparency, and accountability into AI deployment.

  • Cultivate critical AI literacy across academic, professional, and civic spheres.

Performative Intelligence should be understood as augmentation, not replacement. It is a technological amplifier operating within a human epistemic framework. Responsibility, judgment, and ethical deliberation remain irreducibly human obligations.

In the technological age, intelligence is no longer singular. It is distributed across biological and artificial systems. But awareness, accountability, and meaning-making continue to reside with us. The future of AI, therefore, depends not on whether machines can perform intelligence, but on whether humans can exercise it wisely.

References

Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence, 103(1–2), 347–356.
Brentano, F. (1874/1973). Psychology from an empirical standpoint (A. C. Rancurello, D. B. Terrell, & L. L. McAlister, Trans.). Routledge.
Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
Dzindolet, M. T., Pierce, L. G., Beck, H. P., & Dawe, L. A. (2003). The perceived utility of human and automated aids in a visual detection task. Human Factors, 45(1), 199–210.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. Basic Books.
Hutchins, E. (1995). Cognition in the wild. MIT Press.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
McKinsey Global Institute. (2023). The economic potential of generative AI: The next productivity frontier.
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.
Neisser, U. (1967). Cognitive psychology. Appleton-Century-Crofts.
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253.
Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect. Basic Books.
Pew Research Center. (2025). How Americans view AI and its impact on people and society.
Putnam, H. (1967). Psychological predicates. In W. H. Capitan & D. D. Merrill (Eds.), Art, mind, and religion (pp. 37–48). University of Pittsburgh Press.
Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424.
Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(42), 230–265.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.


