© 2025 austin and contributors

AI: Reality or Misinterpretation?

Gemini's report/summary of many sources debating the title

Machines That Think? Evaluating the "Artificial Intelligence" Label in Contemporary Technology

1. Introduction: The AI Naming Controversy: Defining the Scope and Stakes

The term "Artificial Intelligence" (AI) evokes powerful images, ranging from the ancient human dream of creating thinking machines to the futuristic visions, both utopian and dystopian, popularized by science fiction.1 Since its formal inception in the mid-20th century, the field has aimed to imbue machines with capabilities typically associated with human intellect. However, the recent proliferation of technologies labeled as AI—particularly large language models (LLMs), advanced machine learning (ML) algorithms, and sophisticated computer vision (CV) systems—has ignited a critical debate: Is "AI" an accurate descriptor for these contemporary computational systems, or does its use constitute a significant misrepresentation?

This report addresses this central question by undertaking a comprehensive analysis of the historical, technical, philosophical, and societal dimensions surrounding the term "AI." It examines the evolution of AI definitions, the distinct categories of AI proposed (Narrow, General, and Superintelligence), the actual capabilities and inherent limitations of current technologies, and the arguments presented by experts both supporting and refuting the applicability of the "AI" label. Furthermore, it delves into the underlying philosophical concepts of intelligence, understanding, and consciousness, exploring how these abstract ideas inform the debate. Finally, it contrasts the technical reality with public perception and media portrayals, considering the influence of hype and marketing.3

The objective is not merely semantic clarification but a critical evaluation of whether the common usage of "AI" accurately reflects the nature of today's advanced computational systems. This evaluation is crucial because the terminology employed significantly shapes public understanding, directs research funding, influences investment decisions, guides regulatory efforts, and frames ethical considerations.4 The label "AI" carries substantial historical and cultural weight, often implicitly invoking comparisons to human cognition.3 Misunderstanding or misrepresenting the capabilities and limitations of these technologies, fueled by hype or inaccurate terminology, can lead to detrimental consequences, including eroded public trust, misguided policies, and the premature deployment of potentially unreliable or biased systems.4

The current surge in interest surrounding technologies like ChatGPT and other generative models 1 echoes previous cycles of intense optimism ("AI summers") followed by periods of disillusionment and reduced funding ("AI winters") that have characterized the field's history.2 This historical pattern suggests that the current wave of enthusiasm, often amplified by media narratives and marketing 3, may also be susceptible to unrealistic expectations. Understanding the nuances of what constitutes "AI" is therefore essential for navigating the present landscape and anticipating future developments responsibly. This report aims to provide the necessary context and analysis for such an understanding.

2. The Genesis and Evolution of "Artificial Intelligence": From Turing's Question to McCarthy's Terminology and Beyond

The quest to create artificial entities possessing intelligence is not a recent phenomenon. Ancient myths feature automatons, and early modern literature, such as Jonathan Swift's Gulliver's Travels (1726), imagined mechanical engines capable of generating text and ideas.1 The term "robot" itself entered the English language via Karel Čapek's 1921 play R.U.R. ("Rossum's Universal Robots"), initially referring to artificial organic beings created for labor.1 These early imaginings laid cultural groundwork, reflecting a long-standing human fascination with replicating or simulating thought.

The formal discipline of AI, however, traces its more direct intellectual lineage to the mid-20th century, particularly to the work of Alan Turing. In his seminal 1950 paper, "Computing Machinery and Intelligence," Turing posed the provocative question, "Can machines think?".14 To circumvent the philosophical difficulty of defining "thinking," he proposed the "Imitation Game," now widely known as the Turing Test.16 In this test, a human interrogator communicates remotely with both a human and a machine; if the interrogator cannot reliably distinguish the machine from the human based on their conversational responses, the machine is said to have passed the test and could be considered capable of thinking.17 Turing's work, conceived even before the term "artificial intelligence" existed 17, established a pragmatic, behavioral benchmark for machine intelligence and conceptualized machines that could potentially expand beyond their initial programming.18

The term "Artificial Intelligence" itself was formally coined by John McCarthy in 1955, in preparation for a pivotal workshop held at Dartmouth College during the summer of 1956.11 McCarthy, along with other prominent researchers like Marvin Minsky, Nathaniel Rochester, and Claude Shannon, organized the workshop to explore the conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".18 McCarthy defined AI as "the science and engineering of making intelligent machines".24 This definition, along with the ambitious goals set at Dartmouth, established AI as a distinct field of research, aiming to create machines capable of human-like intelligence, including using language, forming abstractions, solving complex problems, and self-improvement.17

Early AI research (roughly 1950s-1970s) focused heavily on symbolic reasoning, logic, and problem-solving strategies that mimicked human deductive processes.25 Key developments included:

  • Game Playing: Programs were developed to play games like checkers, with Arthur Samuel's program demonstrating early machine learning by improving its play over time.16

  • Logic and Reasoning: Algorithms were created to solve mathematical problems and process symbolic information, leading to early "expert systems" like SAINT, which could solve symbolic integration problems.17

  • Natural Language Processing (NLP): Early attempts at machine translation and conversation emerged, exemplified by Joseph Weizenbaum's ELIZA (1966), a chatbot simulating a Rogerian psychotherapist. Though Weizenbaum intended ELIZA to expose the superficiality of machine understanding, many users perceived it as genuinely human.2

  • Robotics: Systems like Shakey the Robot (1966-1972) integrated perception (vision, sensors) with planning and navigation in simple environments.18

  • Programming Languages: McCarthy developed LISP in 1958, which became a standard language for AI research.16

However, the initial optimism and ambitious goals set at Dartmouth proved difficult to achieve. Progress slowed, particularly in areas requiring common sense reasoning or dealing with the complexities of the real world. Overly optimistic predictions went unfulfilled, leading to periods of reduced funding and interest known as "AI winters" (notably in the mid-1970s and late 1980s).2 The very breadth and ambition of the initial definition—to simulate all aspects of intelligence 18—created a high bar that contributed to these cycles. Successes in narrow domains were often achieved, but the grand vision of generally intelligent machines remained elusive, leading to disappointment when progress stalled.12

Throughout its history, the definition of AI has remained somewhat fluid and contested. Various perspectives have emerged:

  • Task-Oriented Definitions: Focusing on the ability to perform tasks normally requiring human intelligence (e.g., perception, decision-making, translation).13 This aligns with the practical goals of many AI applications.

  • Goal-Oriented Definitions: Defining intelligence as the computational ability to achieve goals in the world.27 This emphasizes rational action and optimization.

  • Cognitive Simulation: Aiming to model or replicate the processes of human thought.22

  • Learning-Based Definitions: Emphasizing the ability to learn from data or experience.12

  • Philosophical Definitions: Engaging with deeper questions about thought, consciousness, and personhood.19 The Stanford Encyclopedia of Philosophy, for instance, characterizes AI as devoted to building artificial animals or persons, or at least creatures that appear to be so.33

  • Organizational Definitions: Bodies like the Association for the Advancement of Artificial Intelligence (AAAI) define their mission around advancing the scientific understanding of thought and intelligent behavior and their embodiment in machines.35 Early AAAI perspectives also grappled with multiple conflicting definitions, including pragmatic (demonstrating intelligent behavior), simulation (duplicating brain states), modeling (mimicking outward behavior/Turing Test), and theoretical (understanding principles of intelligence) approaches.22

  • Regulatory Definitions: Recent legislative efforts like the EU AI Act have developed specific definitions for regulatory purposes, often focusing on machine-based systems generating outputs (predictions, recommendations, decisions) that influence environments, sometimes emphasizing autonomy and adaptiveness.38

A key tension persists throughout these definitions: Is AI defined by its process (how it achieves results, e.g., through human-like reasoning) or by its outcome (what tasks it can perform, regardless of the internal mechanism)? Early symbolic AI, focused on logic and rules 25, leaned towards process simulation. The Turing Test 17 and many modern goal-oriented definitions 27 emphasize outcomes and capabilities. This distinction is central to the current debate, as modern systems, particularly those based on connectionist approaches like deep learning 43, excel at complex pattern recognition and generating human-like outputs 1 but are often criticized for lacking the underlying reasoning or understanding processes associated with human intelligence.45 The historical evolution and definitional ambiguity of "AI" thus provide essential context for evaluating its applicability today.

Table 2.1: Overview of Selected AI Definitions

| Source/Originator | Definition/Core Concept | Key Focus | Implied Scope |
| --- | --- | --- | --- |
| Turing Test (Implied, 1950) | Ability to exhibit intelligent behavior indistinguishable from a human in conversation.17 | Behavioral Outcome (Indistinguishability) | Potentially General |
| McCarthy (1956) | "The science and engineering of making intelligent machines".24 | Creating Machines with Intelligence (Process or Outcome) | General |
| Dartmouth Proposal (1956) | Simulating "every aspect of learning or any other feature of intelligence".18 | Simulating Human Cognitive Processes | General |
| Stanford Encyclopedia (SEP) | Field devoted to building artificial animals/persons (or creatures that appear to be).33 | Creating Artificial Beings (Appearance vs. Reality) | General |
| Internet Encyclopedia (IEP) | Possession of intelligence, or the exercise of thought, by machines.19 | Machine Thought/Intelligence | General |
| Russell & Norvig (Modern AI) | Systems that act rationally; maximize expected value of a performance measure based on experience/knowledge.27 | Goal Achievement, Rational Action | General/Narrow |
| AAAI (Mission) | Advancing scientific understanding of mechanisms underlying thought and intelligent behavior and their embodiment in machines.35 | Understanding Intelligence Mechanisms | General |
| Common Definition (Capability) | Ability of computer systems to perform tasks normally requiring human intelligence (e.g., perception, reasoning, learning, problem-solving).13 | Task Performance (Mimicking Human Capabilities) | General/Narrow |
| EU AI Act (2024 Final) | "A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".38 | Autonomy, Adaptiveness, Generating Outputs influencing environments | General/Narrow |
| OECD Definition (Referenced) | "A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments".38 | Goal-Oriented Output Generation influencing environments | General/Narrow |

(Note: This table provides a representative sample; numerous other definitions exist. Scope interpretation can vary.)

3. The AI Spectrum: Understanding Narrow, General, and Super Intelligence (ANI, AGI, ASI)

To navigate the complexities of the AI debate, it is essential to understand the commonly accepted categorization of AI based on its capabilities. This spectrum typically includes three levels: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).49

Artificial Narrow Intelligence (ANI), also referred to as Weak AI, represents the current state of artificial intelligence.11 ANI systems are designed and trained to perform specific, narrowly defined tasks.11 Examples abound in modern technology, including:

  • Virtual assistants like Siri and Alexa 30

  • Recommendation algorithms used by Netflix or Amazon 10

  • Image and facial recognition systems 26

  • Language translation tools 49

  • Self-driving car technologies (which operate within the specific domain of driving) 30

  • Chatbots and generative models like ChatGPT 10

  • Game-playing AI like AlphaGo 50

ANI systems often leverage machine learning (ML) and deep learning (DL) techniques, trained on large datasets to recognize patterns and execute their designated functions.51 Within their specific domain, ANI systems can often match or even significantly exceed human performance in terms of speed, accuracy, and consistency.10 However, their intelligence is confined to their programming and training. They lack genuine understanding, common sense, consciousness, or the ability to transfer their skills to tasks outside their narrow specialization.49 An image recognition system can identify a cat but doesn't "know" what a cat is in the way a human does; a translation system may convert words accurately but miss cultural nuance or context.49 ANI is characterized by its task-specificity and limited adaptability.50

Artificial General Intelligence (AGI), often called Strong AI, represents the hypothetical next stage in AI development.49 AGI refers to machines possessing cognitive abilities comparable to humans across a wide spectrum of intellectual tasks.23 An AGI system would be able to understand, learn, reason, solve complex problems, comprehend context and nuance, and adapt to novel situations much like a human being.49 It would not be limited to pre-programmed tasks but could potentially learn and perform any intellectual task a human can.51 Achieving AGI is a long-term goal for some researchers 23 but remains firmly in the realm of hypothesis.50 The immense complexity of replicating human cognition, coupled with our incomplete understanding of the human brain itself, presents significant hurdles.52 The development of AGI also raises profound ethical concerns regarding control, safety, and societal impact.50

Artificial Superintelligence (ASI) is a further hypothetical level beyond AGI.49 ASI describes an intellect that dramatically surpasses the cognitive performance of the brightest human minds in virtually every field, including scientific creativity, general wisdom, and social skills.49 The transition from AGI to ASI is theorized by some to be potentially very rapid, driven by recursive self-improvement – an "intelligence explosion".54 The prospect of ASI raises significant existential questions and concerns about controllability and the future of humanity, as such an entity could potentially have goals misaligned with human interests and possess the capacity to pursue them with overwhelming effectiveness.50 Like AGI, ASI is currently purely theoretical.50

The common practice of using the single, overarching term "AI" often blurs the critical lines between these three distinct levels.52 This conflation can be problematic. On one hand, it can lead to inflated expectations and hype, where the impressive but narrow capabilities of current ANI systems are misinterpreted as steps imminently leading to human-like AGI.6 On the other hand, it can fuel anxieties and fears based on the potential risks of hypothetical AGI or ASI, projecting them onto the much more limited systems we have today.60 Public discourse frequently fails to make these distinctions, leading to confusion about what AI can currently do versus what it might someday do.52

Furthermore, the implied progression from ANI to AGI to ASI, often framed as a natural evolutionary path 49, is itself a subject of intense debate among experts. While the ANI/AGI/ASI classification provides a useful conceptual framework based on capability, it does not guarantee that current methods are sufficient to achieve the higher levels. Many leading researchers argue that the dominant paradigms driving ANI, particularly deep learning based on statistical pattern recognition, may be fundamentally insufficient for achieving the robust reasoning, understanding, and adaptability required for AGI.45 They suggest that breakthroughs in different approaches, perhaps involving symbolic reasoning, causal inference, embodiment, or principles derived from neuroscience and cognitive science, might be necessary to bridge the gap between narrow task performance and general intelligence. Thus, the linear ANI -> AGI -> ASI trajectory, while conceptually appealing, may oversimplify the complex and potentially non-linear path of AI development.

4. Contemporary "AI": A Technical Assessment of Capabilities and Constraints (Focus on ML, LLMs, CV)

The technologies most frequently labeled as "AI" today are predominantly applications of Machine Learning (ML), including its subfield Deep Learning (DL), Large Language Models (LLMs), and Computer Vision (CV). A technical assessment reveals impressive capabilities but also significant constraints that differentiate them from the concept of general intelligence.

Machine Learning (ML) and Deep Learning (DL):

ML is formally a subset of AI, focusing on algorithms that enable systems to learn from data and improve their performance on specific tasks without being explicitly programmed for every step.32 Instead of relying on hard-coded rules, ML models identify patterns and correlations within large datasets to make predictions or decisions.32 Common approaches include supervised learning (learning from labeled data), unsupervised learning (finding patterns in unlabeled data), and reinforcement learning (learning through trial and error with rewards/punishments).32
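The supervised paradigm described above can be made concrete with a deliberately tiny sketch: fitting a line to labeled examples by gradient descent. The data, learning rate, and hidden rule here are illustrative assumptions, not drawn from any real system; the point is only that the model infers its parameters from (input, label) pairs rather than being hand-coded with the rule.

```python
# Minimal supervised-learning sketch: fit y ~ w*x + b by gradient descent.
# The model is never told the rule; it infers w and b from examples.

def fit_linear(data, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Labeled examples generated by the hidden rule y = 3x + 1.
examples = [(x, 3 * x + 1) for x in range(-5, 6)]
w, b = fit_linear(examples)
print(round(w, 2), round(b, 2))  # values close to 3.0 and 1.0
```

The same loop, scaled up to millions of parameters and examples, is the core of how modern ML systems "learn" a task: statistical fitting, not explicit programming.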

Deep Learning (DL) is a type of ML that utilizes artificial neural networks with multiple layers (deep architectures) to learn hierarchical representations of data.26 Inspired loosely by the structure of the human brain, DL has driven many recent breakthroughs in AI, particularly in areas dealing with unstructured data like images and text.43
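A minimal sketch of what "layers" means in this context: two stacked weighted sums with a nonlinearity between them. The weights below are hypothetical placeholders; a real network would learn them from data (for example, with the gradient-descent loop above), and deep architectures stack many more such layers.

```python
# Minimal "deep" network sketch: two layers of weighted sums with a
# nonlinearity (ReLU) in between. Each layer re-represents the output
# of the previous one; stacking layers is what makes a network "deep".

def relu(v):
    return [max(0.0, x) for x in v]

def layer(weights, bias, inputs):
    # One dense layer: each output is a weighted sum of all inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, bias)]

# Hypothetical weights; a trained network learns these from data.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

def forward(x):
    hidden = relu(layer(W1, b1, x))   # first layer: intermediate features
    return layer(W2, b2, hidden)[0]   # second layer: final output

print(forward([2.0, 1.0]))  # 2.5
```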

  • Capabilities: ML/DL systems excel at pattern recognition, classification, prediction, and optimization tasks within specific domains.32 They power recommendation engines, spam filters, medical image analysis, fraud detection, and many components of LLMs and CV systems.30

  • Limitations: Despite their power, ML/DL systems face several constraints:

    • Data Dependency: They typically require vast amounts of (often labeled) training data, which can be expensive and time-consuming to acquire and curate.3 Performance is heavily dependent on data quality and representativeness.

    • Bias: Models can inherit and even amplify biases present in the training data, leading to unfair or discriminatory outcomes.5

    • Lack of Interpretability: The decision-making processes of deep neural networks are often opaque ("black boxes"), making it difficult to understand why a system reached a particular conclusion.75 This hinders debugging, trust, and accountability.

    • Brittleness and Generalization: Performance can degrade significantly when faced with data outside the distribution of the training set or with adversarial examples (inputs slightly modified to fool the model).64 They struggle to generalize knowledge to truly novel situations.

    • Computational Cost: Training large DL models requires substantial computational resources and energy.75

Large Language Models (LLMs):

LLMs are a specific application of advanced DL, typically using transformer architectures trained on massive amounts of text data.55

  • Capabilities: LLMs demonstrate remarkable abilities in processing and generating human-like text.1 They can perform tasks like translation, summarization, question answering, writing essays or code, and powering conversational chatbots.1 Their performance on some standardized tests has reached high levels.84

  • Limitations: Despite their fluency, LLMs exhibit critical limitations that challenge their classification as truly "intelligent":

    • Lack of Understanding and Reasoning: They primarily operate by predicting the next word based on statistical patterns learned from text data.75 They lack genuine understanding of the meaning behind the words, common sense knowledge about the world, and robust reasoning capabilities.45 They are often described as sophisticated pattern matchers or "stochastic parrots".75

    • Hallucinations: LLMs are prone to generating confident-sounding but factually incorrect or nonsensical information ("hallucinations").5

    • Bias: They reflect and can amplify biases present in their vast training data.5

    • Static Knowledge: Their knowledge is generally limited to the data they were trained on and doesn't update automatically with new information.76

    • Context and Memory: They can struggle with maintaining coherence over long conversations and lack true long-term memory.75

    • Reliability and Explainability: Their outputs can be inconsistent, and explaining why they generate a specific response remains a major challenge.75
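The "stochastic parrot" criticism can be illustrated with a deliberately reductive sketch: a bigram model that predicts the next word purely from co-occurrence counts in its training corpus. The corpus here is a made-up toy; real LLMs are incomparably larger and use learned transformer weights rather than raw counts, but the training objective is the same in kind: predict the next token from learned statistics, with nothing in the model representing meaning.

```python
# Toy "next-word predictor": a bigram model that outputs the most
# frequent follower of the current word in its training corpus.
# It reproduces surface statistics only; no meaning is represented.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat lay on the rug".split()

# Count which word follows which.
followers = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    followers[cur][nxt] += 1

def predict_next(word):
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": the most frequent word after "the" here
```

The model will happily continue any prompt its statistics cover, and fail silently on anything they do not, which is a miniature of both the fluency and the hallucination behavior described above.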

Computer Vision (CV):

CV is the field of AI focused on enabling machines to "see" and interpret visual information from images and videos.2

  • Capabilities: CV systems can perform tasks like image classification (identifying the main subject), object detection (locating multiple objects), segmentation (outlining objects precisely), facial recognition, and analyzing scenes.28 These capabilities are used in autonomous vehicles, medical imaging, security systems, and content moderation.

  • Limitations:

    • Recognition vs. Understanding: While CV systems can recognize objects with high accuracy, they often lack deeper understanding of the scene, the context, the relationships between objects, or the implications of what they "see".49 They identify patterns but don't grasp meaning.

    • Common Sense Reasoning: They lack common sense about the physical world (e.g., object permanence, causality, typical object interactions).81

    • Robustness and Context: Performance can be brittle, affected by variations in lighting, viewpoint, occlusion, or adversarial manipulations.64 Understanding context remains a significant challenge.103
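The brittleness point can be reduced to a toy example: a linear classifier (with hypothetical, made-up weights) whose decision flips under a small perturbation deliberately aimed against its weight vector. Real adversarial attacks on deep vision models work on the same principle, just in a space of millions of pixel dimensions where the perturbation can be imperceptibly small.

```python
# Toy illustration of classifier brittleness: a tiny, targeted nudge to
# the input flips a linear classifier's decision (the mechanism behind
# adversarial examples, reduced to two features).

w = [1.0, -1.0]   # hypothetical learned weights
b = 0.0

def classify(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "cat" if score > 0 else "dog"

x = [0.6, 0.5]    # classified "cat": score = 0.6 - 0.5 = 0.1
eps = 0.06
# Nudge each feature by eps against the sign of its weight.
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(classify(x), classify(x_adv))  # cat dog
```

A human looking at two nearly identical inputs would not change their answer; the classifier does, because its decision rests on a learned statistical boundary rather than on an understanding of what a cat is.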

AI Agents:

Recently, there has been significant discussion around "AI agents" or "agentic AI"—systems designed to autonomously plan and execute sequences of actions to achieve goals.26 While presented as a major step forward, current implementations often rely on LLMs with function-calling capabilities, essentially orchestrating existing tools rather than exhibiting true autonomous reasoning and planning in complex, open-ended environments.105 Experts note a gap between the hype surrounding autonomous agents and their current, more limited reality, though experimentation is rapidly increasing.105

Across these key areas of contemporary "AI," a fundamental limitation emerges: the disconnect between sophisticated pattern recognition or statistical correlation and genuine understanding, reasoning, or causal awareness.45 These systems are powerful tools for specific tasks, leveraging vast data and computation, but they do not "think" or "understand" in the way humans intuitively associate with the term "intelligence."

This leads to a notable paradox, often referred to as Moravec's Paradox 45: tasks that humans find difficult but involve complex computation or pattern matching within well-defined rules (like playing Go 13, performing complex calculations, or even passing standardized tests 84) are often easier for current AI than tasks that seem trivial for humans but require broad common sense, physical intuition, or flexible adaptation to the real world (like reliably clearing a dinner table 45, navigating a cluttered room, or understanding nuanced social cues).45 This suggests that simply scaling current approaches, which excel at the former type of task, may not be a direct path to the latter, which is more characteristic of general intelligence.

Furthermore, the impressive performance of these systems often obscures a significant dependence on human input. This includes the massive, human-generated datasets used for training, the human labor involved in labeling data, and the considerable human ingenuity required to design the model architectures, select training data, and fine-tune the learning processes.3 Claims of autonomous learning should be tempered by the recognition of this deep reliance on human scaffolding, which differentiates current AI learning from the more independent and embodied learning observed in humans.61

Table 4.1: Comparison of Human Intelligence Aspects vs. Current AI Capabilities

| Aspect of Intelligence | Human Capability (Brief Description) | Current AI (ML/LLM/CV) Capability (Brief Description & Key Limitations) |
| --- | --- | --- |
| Pattern Recognition | Highly effective, integrated with context and understanding. | Excellent within trained domains (e.g., image classification, text patterns). Limited by training data distribution; vulnerable to adversarial examples.64 |
| Learning from Data | Efficient, often requires few examples, integrates new knowledge with existing understanding. | Requires massive datasets; learning is primarily statistical correlation; struggles with transfer learning and catastrophic forgetting.61 |
| Logical Reasoning | Capable of deductive, inductive, abductive reasoning, though prone to biases and errors. | Limited/Brittle. Primarily pattern matching; struggles with formal, novel, or complex multi-step reasoning; symbolic AI has limitations.45 |
| Causal Reasoning | Understands cause-and-effect relationships, enabling prediction and intervention. | Very Limited. Primarily identifies correlations, not causation; struggles with counterfactuals and interventions.88 Research ongoing in Causal AI. |
| Common Sense Reasoning | Vast intuitive understanding of the physical and social world (folk physics, folk psychology). | Severely Lacking. Struggles with basic real-world knowledge, physical interactions, implicit assumptions, context.45 |
| Language Fluency | Natural generation and comprehension of complex, nuanced language. | High (LLMs). Can generate remarkably fluent and coherent text.1 |
| Language Understanding | Deep grasp of meaning, intent, context, ambiguity, pragmatics. | Superficial (LLMs). Lacks true semantic understanding, grounding in reality; prone to misinterpretation and hallucination.20 |
| Adaptability/Generalization | Can apply knowledge and skills flexibly to novel situations and domains. | Poor. Generally limited to tasks/data similar to training; struggles with out-of-distribution scenarios and true generalization.50 |
| Creativity | Ability to generate novel, original, and valuable ideas or artifacts. | Simulative. Can generate novel combinations based on training data (e.g., AI art 83), but lacks independent intent, understanding, or genuine originality.111 |
| Consciousness/Sentience | Subjective awareness, phenomenal experience (qualia). | Absent (Current Consensus). No evidence of subjective experience; philosophical debate ongoing (e.g., Hinton vs. critics).19 |
| Embodiment/World Interaction | Intelligence is grounded in physical interaction with the environment through senses and actions. | Largely Disembodied. Most current AI (esp. LLMs) lacks direct sensory input or physical interaction, limiting grounding and common sense.62 Embodied AI is an active research area. |

5. The Debate: Does Current Technology Qualify as "AI"?

Given the historical context, the spectrum of AI concepts, and the technical realities of contemporary systems, a vigorous debate exists among experts regarding whether the label "Artificial Intelligence" is appropriate for technologies like ML, LLMs, and CV.

Arguments Supporting the Use of "AI":

Proponents of using the term "AI" for current technologies often point to several justifications:

  1. Alignment with Historical Goals and Definitions: The original goal of AI, as articulated at Dartmouth and by pioneers like Turing, was to create machines that could perform tasks requiring intelligence or simulate aspects of human cognition.17 Current systems, particularly in areas like medical diagnosis 71, complex game playing (e.g., Go) 13, language translation 49, and sophisticated content generation 10, demonstrably achieve tasks that were once the exclusive domain of human intellect. This aligns with definitions focused on capability or outcome.13

  2. Useful Umbrella Term: "AI" serves as a widely recognized and convenient shorthand for a broad and diverse field encompassing various techniques (ML, DL, symbolic reasoning, robotics, etc.) and applications.11 It provides a common language for researchers, industry, policymakers, and the public.

  3. The "AI Effect": A historical phenomenon known as the "AI effect" describes the tendency for technologies, once successfully implemented and understood, to no longer be considered "AI" but rather just "computation" or routine technology.12 Examples include optical character recognition (OCR), chess-playing programs like Deep Blue 117, expert systems, and search algorithms. From this perspective, arguing that current systems aren't "real AI" is simply repeating a historical pattern of moving the goalposts. Current systems represent the cutting edge of the field historically designated as AI.

  4. Intelligence as a Spectrum: Some argue that intelligence is not an all-or-nothing property but exists on a continuum.17 While current systems lack general intelligence, they possess sophisticated capabilities within their narrow domains, exhibiting a form of specialized or narrow intelligence (ANI).
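The "AI effect" described above can be made concrete. Exhaustive game-tree search was once a flagship of AI research (Deep Blue's chess play being the canonical case), yet today it reads as routine computation. Below is a minimal sketch of the same minimax idea applied to a toy Nim variant rather than chess; the game rules and the `wins` function are invented here purely for illustration:

```python
from functools import lru_cache

# Toy Nim variant: players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins. Minimax reduces to asking whether
# the player to move can reach a position that is losing for the opponent.
@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        # No stones left: the previous player took the last one and won.
        return False
    # Winning iff some legal move leaves the opponent in a losing position.
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

print([n for n in range(1, 9) if not wins(n)])  # losing positions: [4, 8]
```

Half a century ago a program playing such games perfectly was presented as machine intelligence; today the same exhaustive search is a textbook exercise, illustrating how "AI" is continually redefined as its successes become routine.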

Arguments Against Using "AI" (Critiques of Intelligence and Understanding):

Critics argue that the term "AI" is fundamentally misleading when applied to current technologies because these systems lack the core attributes truly associated with intelligence, particularly understanding and consciousness.

  1. Lack of Genuine Understanding and Reasoning: This is the most central criticism. Current systems, especially those based on deep learning, are characterized as sophisticated pattern-matching engines that manipulate symbols or data based on statistical correlations learned from vast datasets.75 They do not possess genuine comprehension, common sense, causal reasoning, or the ability to understand context in a human-like way.45 Their ability to generate fluent language or recognize images is seen as a simulation of intelligence rather than evidence of it.

  2. Absence of Consciousness and Sentience: The term "intelligence" often carries connotations of consciousness or subjective experience, particularly in popular discourse influenced by science fiction. Critics emphasize that there is no evidence that current systems possess consciousness, sentience, or qualia.20 Philosophical arguments like Searle's Chinese Room further challenge the idea that computation alone can give rise to understanding or consciousness.20

  3. Misleading Nature and Hype: The term "AI" is seen as inherently anthropomorphic and prone to misinterpretation, fueling unrealistic hype cycles, obscuring the technology's limitations, and leading to poor decision-making in deployment and regulation.3
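The "statistical pattern matching" the critics describe can be illustrated with a deliberately tiny next-word predictor. This sketch (the corpus and the `predict` function are invented for illustration, and real LLMs are vastly more sophisticated) produces plausible continuations purely from co-occurrence counts, with no representation of meaning anywhere in the system:

```python
from collections import Counter, defaultdict

# Toy "language model": it learns only which word tends to follow which,
# i.e. surface statistics, with no model of what the words refer to.
corpus = "the cat sat on the mat the dog sat on the rug".split()

following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict(prev_word: str) -> str:
    # Return the statistically most frequent successor word.
    return following[prev_word].most_common(1)[0][0]

print(predict("sat"))  # 'on' — chosen by count, not by comprehension
```

The output is locally fluent for the same reason LLM output is: it reproduces distributional regularities of the training data. Whether scaling this mechanism up yields anything deserving the name "understanding" is precisely the point under dispute.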

Several prominent researchers have voiced strong critiques:

  • Yann LeCun: Argues that current LLMs lack essential components for true intelligence, such as world models, understanding of physical reality, and the capacity for planning and reasoning beyond reactive pattern completion (System 1 thinking).45 He believes training solely on language is insufficient.

  • Gary Marcus: Consistently highlights the unreliability, lack of robust reasoning, and inability of current systems (especially LLMs) to handle novelty or generalize effectively. He describes them as "stochastic parrots" and advocates for hybrid approaches combining neural networks with symbolic reasoning.46

  • Melanie Mitchell: Focuses on the critical lack of common sense and genuine understanding in current AI. She points to the "barrier of meaning" and the brittleness of deep learning systems, emphasizing their vulnerability to unexpected failures and adversarial attacks.64

  • Rodney Brooks: Warns against anthropomorphizing machines and succumbing to hype cycles. He critiques the disembodied nature of much current AI research, arguing for the importance of grounding intelligence in real-world interaction and questioning claims of exponential progress, especially in physical domains.61

A convergence exists among these critics regarding the fundamental limitations of current systems relative to the concept of general intelligence. While their proposed solutions may differ, their diagnoses of the problems—the gap between statistical pattern matching and genuine cognition, the lack of common sense and robust reasoning—are remarkably similar. This shared assessment from leading figures strengthens the case that current technology diverges significantly from the original AGI vision often associated with the term "AI".

The Search for Alternative Labels:

Reflecting dissatisfaction with the term "AI," various alternative labels have been suggested to more accurately describe current technologies:

  • Sophisticated Algorithms / Advanced Algorithms: These terms emphasize the computational nature of the systems without implying human-like intelligence.56

  • Advanced Machine Learning: This highlights the specific technique underlying many current systems.32

  • Pattern Recognition Systems: Focuses on a primary capability of many ML/DL models.

  • Computational Statistics / Applied Statistics: Frames the technology within a statistical paradigm, downplaying notions of intelligence.

  • Cognitive Automation: Suggests the automation of specific cognitive tasks rather than general intelligence.

  • Intelligence Augmentation (IA): Proposed by figures like Erik Brynjolfsson and others, this term shifts the focus from automating human intelligence to augmenting human capabilities.126

The reasoning behind these alternatives is often twofold: first, to provide a more technically accurate description of what the systems actually do (e.g., execute algorithms, learn from data, recognize patterns); and second, to manage expectations and avoid the anthropomorphic baggage and hype associated with "AI".3 The push for terms like "Intelligence Augmentation," in particular, reflects a normative dimension—an effort to steer the field's trajectory. By framing the technology as a tool to enhance human abilities rather than replace human intelligence, proponents aim to mitigate fears of job displacement and encourage development that empowers rather than automates workers, thereby avoiding the "Turing Trap" where automation concentrates wealth and power.126 The choice of terminology, therefore, is not just descriptive but also potentially prescriptive, influencing the goals and societal impact of the technology's development.

6. Philosophical Interrogations: What Does it Mean to Think, Understand, and Be Conscious?

The debate over whether current machines qualify as "AI" inevitably intersects with deep, long-standing philosophical questions about the nature of mind itself. Evaluating the "intelligence" of machines forces a confrontation with the ambiguity inherent in concepts like thinking, understanding, and consciousness.19

Defining Intelligence:

Philosophically, there is no single, universally accepted definition of intelligence. Different conceptions lead to different conclusions about machines:

  • Computational Theory of Mind (Computationalism): This view, influential in early AI and cognitive science, posits that thought is a form of computation.19 If intelligence is fundamentally about information processing according to rules (syntax), then an appropriately programmed machine could, in principle, be intelligent.19 This aligns with functionalism, which defines mental states by their causal roles rather than their physical substrate.20

  • Critiques of Computationalism: Opponents argue that intelligence requires more than computation. Some emphasize the biological substrate, suggesting that thinking is intrinsically tied to the specific processes of biological brains.19 Others highlight the importance of embodiment and interaction with the world, arguing that intelligence emerges from the interplay of brain, body, and environment, something most current AI systems lack.62 A central critique revolves around the distinction between syntax (formal symbol manipulation) and semantics (meaning).20

  • Goal-Oriented vs. Process-Oriented Views: As noted earlier, intelligence can be defined by the ability to achieve goals effectively 27 or by the underlying cognitive processes (reasoning, learning, understanding).14 Current machines often excel at goal achievement in narrow domains but arguably lack human-like cognitive processes.

The Challenge of Understanding:

The concept of "understanding" is particularly contentious. Can a machine truly understand language, concepts, or situations, or does it merely simulate understanding through sophisticated pattern matching? This is the crux of John Searle's famous Chinese Room Argument (CRA).20

Searle asks us to imagine a person (who doesn't understand Chinese) locked in a room, equipped with a large rulebook (in English) that instructs them how to manipulate Chinese symbols. Chinese questions are passed into the room, and by meticulously following the rulebook, the person manipulates the symbols and passes out appropriate Chinese answers. To an outside observer who understands Chinese, the room appears to understand Chinese. However, Searle argues, the person inside the room clearly does not understand Chinese; they are merely manipulating symbols based on syntactic rules without grasping their meaning (semantics). Since a digital computer running a program is formally equivalent to the person following the rulebook, Searle concludes that merely implementing a program, no matter how sophisticated, is insufficient for genuine understanding.121 Syntax, he argues, does not constitute semantics.121
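Searle's rulebook can be caricatured in a few lines of code: a purely syntactic string-to-string lookup (the mappings and the `room` function below are invented for illustration) that emits appropriate-looking Chinese replies while never consulting meaning at any step:

```python
# A drastically simplified "rulebook": each rule maps one uninterpreted
# symbol string to another. Nothing in the program represents what the
# symbols mean; it is syntax all the way down.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气如何？": "今天天气很好。",      # "How's the weather?" -> "It's nice today."
}

def room(question: str) -> str:
    # Follow the rules mechanically, exactly as Searle's occupant does.
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))
```

To an outside observer the program "answers in Chinese", yet by construction it manipulates uninterpreted strings. Searle's claim is that a digital computer running any program, however elaborate, is in the same position as this lookup table: more rules, but no more semantics.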

The CRA directly targets "Strong AI" (the view that an appropriately programmed computer is a mind) and functionalism.20 It suggests that the Turing Test is inadequate because passing it only demonstrates successful simulation of behavior, not genuine understanding.21 Common counterarguments include:

  • The Systems Reply: Argues that while the person in the room doesn't understand Chinese, the entire system (person + rulebook + workspace) does.112 Searle counters by imagining the person internalizing the whole system (memorizing the rules), arguing they still wouldn't understand.112

  • The Robot Reply: Suggests that if the system were embodied in a robot that could interact with the world, it could ground the symbols in experience and achieve understanding. Searle remains skeptical, arguing interaction adds inputs and outputs but doesn't bridge the syntax-semantics gap.

The CRA resonates strongly with critiques of current LLMs, which excel at manipulating linguistic symbols to produce fluent text but are often accused of lacking underlying meaning or world knowledge.75 They demonstrate syntactic competence without, arguably, semantic understanding.

The Consciousness Question:

Perhaps the deepest philosophical challenge concerns consciousness—subjective experience or "what it's like" to be something (qualia).114 Can machines be conscious?

  • The Hard Problem: Philosopher David Chalmers distinguishes the "easy problems" of consciousness (explaining functions like attention, memory access) from the "hard problem": explaining why and how physical processes give rise to subjective experience.114 Current AI primarily addresses the easy problems.

  • Substrate Dependence: Some argue consciousness is tied to specific biological properties of brains (Mind-Brain Identity Theory 19 or biological naturalism 121). Others, aligned with functionalism, believe consciousness could arise from any system with the right functional organization, regardless of substrate (silicon, etc.).20

  • Emergence: Could consciousness emerge as a property of sufficiently complex computational systems? This remains highly speculative.

  • Expert Opinions: Views diverge sharply. Geoffrey Hinton has suggested current AIs might possess a form of consciousness or sentience, perhaps based on a gradual replacement argument (if replacing one neuron with silicon doesn't extinguish consciousness, why would replacing all of them?).113 Critics counter this argument, pointing out that gradual replacement with non-functional items would eventually extinguish consciousness, and that Hinton conflates functional equivalence with phenomenal experience (access vs. phenomenal consciousness).113 They argue current AI shows no signs of subjective experience.114

The technical challenge of building AI systems is thus inextricably linked to these fundamental philosophical questions. Assessing whether a machine "thinks" or "understands" requires grappling with what these terms mean, concepts that remain philosophically contested. The difficulty in defining and verifying internal states like understanding and consciousness poses a significant challenge to evaluating progress towards AGI. Arguments like Searle's CRA suggest that purely behavioral benchmarks, like the Turing Test, may be insufficient. If "true AI" requires internal states like genuine understanding or phenomenal consciousness, the criteria for achieving it become far more demanding and potentially unverifiable from the outside, raising the bar far beyond simply mimicking human output.

7. AI in the Public Imagination: Hype, Hope, and the "AI Effect"

The technical and philosophical complexities surrounding AI are often overshadowed by its portrayal in popular culture and media, leading to a significant gap between the reality of current systems and public perception. This gap is fueled by historical narratives, marketing strategies, and the inherent difficulty of grasping the technology's nuances.

Media Narratives and Science Fiction Tropes:

Public understanding of AI is heavily influenced by decades of science fiction, which often depicts AI as embodied, humanoid robots or disembodied superintelligences with human-like motivations, consciousness, and emotions.2 These portrayals frequently swing between utopian visions of AI solving all problems and dystopian nightmares of machines taking over or causing existential harm.60 Common visual tropes include glowing blue circuitry, abstract digital patterns, and anthropomorphic robots.60 While these narratives can inspire research and public engagement, they also create powerful, often inaccurate, mental models.6 They tend to anthropomorphize AI, leading people to overestimate its current capabilities, ascribe agency or sentience where none exists, and focus on futuristic scenarios rather than present-day realities.60 This "deep blue sublime" aesthetic obscures the material realities of AI development, such as the human labor, data collection, energy consumption, and economic speculation involved.137

AI Hype:

The field of AI is notoriously prone to "hype"—exaggerated claims, inflated expectations, and overly optimistic timelines for future breakthroughs.3 This hype is driven by multiple factors:

  • Marketing and Commercial Interests: Companies often use "AI" as a buzzword to attract investment and customers, sometimes overstating the sophistication or impact of their products.3

  • Media Sensationalism: Media outlets often focus on dramatic or futuristic AI narratives, amplifying both hopes and fears.15

  • Researcher Incentives: Researchers may face pressures to generate excitement to secure funding or recognition, sometimes leading to overstated claims about their work's potential.4

  • Genuine Enthusiasm: Rapid progress in specific areas can lead to genuine, albeit sometimes premature, excitement about transformative potential.6

This hype often follows a cyclical pattern: initial breakthroughs lead to inflated expectations, followed by a "trough of disillusionment" when the technology fails to meet the hype, potentially leading to reduced investment (an "AI winter"), before eventually finding practical applications and reaching a plateau of productivity.6 There are signs that the recent generative AI boom may be entering a phase of correction as limitations become clearer and returns on investment prove elusive for some.6

The "AI Effect":

Compounding the issue of hype is the "AI effect," a phenomenon where the definition of "intelligence" or "AI" shifts over time.12 As soon as a capability once considered intelligent (like playing chess at a grandmaster level 117, recognizing printed characters 13, or providing driving directions) is successfully automated by a machine, it is often discounted and no longer considered "real" AI. It becomes simply "computation" or a "solved problem".117 This effect contributes to the persistent feeling that true AI is always just beyond our grasp, as past successes are continually redefined out of the category.13 It reflects a potential psychological need to preserve a unique status for human intelligence.117

Consequences of Hype and Misrepresentation:

The disconnect between AI hype/perception and reality has significant negative consequences:

  • Erosion of Public Trust: When AI systems fail to live up to exaggerated promises or cause harm due to unforeseen limitations (like bias or unreliability), public trust in the technology and its developers can be damaged.4

  • Misguided Investment and Research: Hype can channel funding and research efforts towards fashionable areas (like scaling current LLMs) while potentially neglecting other promising but less hyped approaches, potentially hindering long-term progress.5 Investment bubbles can form and burst.6

  • Premature or Unsafe Deployment: Overestimating AI capabilities can lead to deploying systems in critical domains (e.g., healthcare, finance, autonomous vehicles, criminal justice) before they are sufficiently robust, reliable, or fair, causing real-world harm.5 Examples include biased hiring algorithms 8, flawed medical diagnostic tools 147, or unreliable autonomous systems.5

  • Ineffective Policy and Regulation: Policymakers acting on hype or misunderstanding may create regulations that are either too restrictive (stifling innovation based on unrealistic fears) or too permissive (failing to address actual present-day risks like bias, opacity, and manipulation).5 The focus might be drawn to speculative long-term risks (AGI takeover) while neglecting immediate harms from existing ANI.6

  • Ethical Debt: A failure by researchers and developers to adequately consider and mitigate the societal and ethical implications of their work due to hype or narrow focus can create "ethical debt," undermining the field's legitimacy.9

  • Exacerbation of Inequalities: Biased systems deployed based on hype can reinforce and scale societal inequalities.5

  • Environmental Costs: The push to build ever-larger models, driven partly by hype, incurs significant environmental costs due to energy consumption and hardware manufacturing.143

Addressing these consequences requires greater responsibility from researchers, corporations, and media outlets to communicate AI capabilities and limitations accurately and transparently.4 It also necessitates improved AI literacy among the public and policymakers.6 Surveys reveal significant gaps between expert and public perceptions regarding AI's impact, particularly concerning job displacement and overall benefits, although both groups share concerns about misinformation and bias.149 In specific domains like healthcare, while AI shows promise in areas like diagnosis and drug discovery 72, hype often outpaces reality, with challenges in implementation, reliability, bias, and patient trust remaining significant barriers.144

The entire ecosystem—from technological development and media representation to public perception and governmental regulation—operates in a feedback loop.5 Hype generated by industry or researchers can capture media attention, shaping public opinion and influencing policy and funding, which in turn directs further research, potentially reinforcing the hype cycle. Breaking this requires critical engagement at all levels to ground discussions in the actual capabilities and limitations of the technology, moving beyond sensationalism and marketing narratives towards a more realistic and responsible approach to AI development and deployment.

8. Synthesis: Evaluating the "AI" Label in the Current Technological Landscape

Synthesizing the historical evolution, technical capabilities, philosophical underpinnings, and societal perceptions surrounding Artificial Intelligence allows for a nuanced evaluation of whether the term "AI" accurately represents the state of contemporary technology. The analysis reveals a complex picture where the label holds both historical legitimacy and significant potential for misrepresentation.

Historically, the term "AI," coined by John McCarthy and rooted in Alan Turing's foundational questions, was established with ambitious goals: to create machines capable of simulating human intelligence in its various facets, including learning, reasoning, and problem-solving.17 From this perspective, the term has a valid lineage connected to the field's origins and aspirations. Furthermore, many current systems do perform specific tasks that were previously thought to require human intelligence, aligning with outcome-oriented or capability-based definitions of AI.13 The "AI effect," where past successes are retrospectively discounted, also suggests that what constitutes "AI" is a moving target, and current systems represent the present frontier of that historical pursuit.12

However, a substantial body of evidence and expert critique indicates a significant disconnect between the capabilities of current systems (predominantly ANI) and the broader, often anthropocentric, connotations of "intelligence" invoked by the term "AI," especially the notion of AGI. The technical assessment reveals that today's ML, LLMs, and CV systems, while powerful in specific domains, fundamentally operate on principles of statistical pattern matching and correlation rather than genuine understanding, common sense reasoning, or consciousness.45 They lack robust adaptability to novel situations, struggle with causality, and can be brittle and unreliable outside their training distributions. Prominent researchers like LeCun, Marcus, Mitchell, and Brooks consistently highlight this gap, arguing that current approaches are not necessarily on a path to human-like general intelligence.45

Philosophical analysis further complicates the picture. The very concepts of "intelligence," "understanding," and "consciousness" are ill-defined and contested.19 Arguments like Searle's Chinese Room suggest that even perfect behavioral simulation (passing the Turing Test) may not equate to genuine internal understanding or mental states.20 This implies that judging machines based solely on their outputs, as the term "AI" often encourages in practice, might be insufficient if the goal is to capture something akin to human cognition.

The ambiguity inherent in the term "AI" allows for the conflation of existing ANI with hypothetical AGI and ASI.49 This conflation is amplified by media portrayals rooted in science fiction and marketing efforts that leverage the term's evocative power.3 The result is often a public discourse characterized by unrealistic hype about current capabilities and potentially misdirected fears about future scenarios, obscuring the real, present-day challenges and limitations of the technology.4

Considering alternative terms like "advanced machine learning," "sophisticated algorithms," or "intelligence augmentation" 3 highlights the potential benefits of greater terminological precision. Such labels might more accurately reflect the mechanisms at play, reduce anthropomorphic confusion, and potentially steer development towards more human-centric goals like augmentation rather than pure automation.126

Ultimately, the appropriateness of the term "AI" for current technology is context-dependent and hinges on the specific definition being employed. If "AI" refers broadly to the historical field of study aiming to create machines that perform tasks associated with intelligence, or to the current state-of-the-art in that field (ANI), then its use has historical and practical justification. However, if "AI" is used to imply human-like cognitive processes, genuine understanding, general intelligence, or consciousness, then its application to current systems is largely inaccurate and misleading. The term's value as a widely recognized umbrella category is often counterbalanced by the significant confusion and hype it generates.

Despite compelling arguments questioning its accuracy for describing the nature of current systems, the term "AI" shows remarkable persistence. This resilience stems from several factors constituting a form of path dependency. Its deep historical roots, its establishment in academic nomenclature (journals, conferences, textbooks 47), its adoption in industry and regulatory frameworks (like the EU AI Act 160), its potent marketing value 3, and its strong resonance with the public imagination fueled by cultural narratives 2 make it difficult to displace. Replacing "AI" with more technically precise but less evocative terms faces a significant challenge against this entrenched usage and cultural momentum.

9. Conclusion: Recapitulation and Perspective on the Terminology Debate

The question of whether contemporary technologies truly constitute "Artificial Intelligence" is more than a semantic quibble; it probes the very definition of intelligence, the trajectory of technological development, and the relationship between human cognition and machine capabilities. This report has traversed the historical origins of the term AI, from Turing's foundational inquiries and the Dartmouth workshop's ambitious goals 17, to its evolution through cycles of optimism and disillusionment.11

A critical distinction exists between Artificial Narrow Intelligence (ANI), which characterizes all current systems designed for specific tasks, and the hypothetical realms of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).49 While today's technologies, particularly those based on machine learning, deep learning, large language models, and computer vision, demonstrate impressive performance in narrow domains 28, they exhibit fundamental limitations. A recurring theme across expert critiques and technical assessments is the significant gap between pattern recognition and genuine understanding, reasoning, common sense, and adaptability.45 Philosophical inquiries, notably Searle's Chinese Room Argument 20, further challenge the notion that computational processes alone equate to understanding or consciousness, concepts that remain philosophically elusive.19

The term "AI" itself, while historically legitimate as the name of a field and its aspirations, proves problematic in practice. Its ambiguity allows for the conflation of ANI with AGI/ASI, fueling public and media hype that often misrepresents current capabilities and risks.4 This hype, intertwined with marketing imperatives and historical narratives, can distort research priorities, public trust, and policy decisions.5 The "AI effect," where past successes are discounted, further complicates the perception of progress.13

In sum, the verdict on the label "AI" is nuanced. It accurately reflects the historical lineage and the task-performing capabilities of many current systems relative to past human benchmarks. However, it often inaccurately implies human-like cognitive processes or general intelligence, which current systems demonstrably lack. Its appropriateness depends heavily on the definition invoked. Despite strong arguments for alternative, more precise terminology like "advanced algorithms" or "intelligence augmentation" 32, the term "AI" persists due to powerful historical, institutional, commercial, and cultural inertia.2

Regardless of the label used, the crucial imperative is to foster a clear understanding of the reality of these technologies—their strengths, weaknesses, societal implications, and ethical challenges. This understanding is vital for responsible innovation, effective governance, and navigating the future relationship between humans and increasingly capable machines.

The ongoing debate and the recognized limitations of current paradigms underscore the need for future research directions that move beyond simply scaling existing methods. Exploring avenues like neuro-symbolic AI (integrating learning with reasoning) 29, causal AI (modeling cause-and-effect relationships) 29, and embodied AI (grounding intelligence in physical interaction) 62 represents efforts to tackle the fundamental challenges of reasoning, understanding, and common sense. These research paths implicitly acknowledge the shortcomings highlighted by the terminology debate and aim to bridge the gap towards more robust, reliable, and potentially more "intelligent" systems in a deeper sense. The future development of AI, and our ability to manage it wisely, depends on confronting these challenges directly, moving beyond the allure of labels to engage with the substantive complexities of mind and machine.

Works cited


The History of Artificial Intelligence - IBM, accessed April 12, 2025,

The History of AI: From Futuristic Fiction to the Future of Enterprise - UiPath, accessed April 12, 2025,

Now the Humanities Can Disrupt "AI" - Public Books, accessed April 12, 2025,

Fear not the AI reality: accurate disclosures key to public trust - DEV Community, accessed April 12, 2025,

Misrepresented Technological Solutions in Imagined Futures: The Origins and Dangers of AI Hype in the Research Community - AAAI Publications, accessed April 12, 2025,

As the AI Bubble Deflates, the Ethics of Hype Are in the Spotlight | TechPolicy.Press, accessed April 12, 2025,

https://www.ibm.com/think/topics/history-of-artificial-intelligence
https://www.uipath.com/blog/ai/history-of-artificial-intelligence-evolution
https://www.publicbooks.org/now-the-humanities-can-disrupt-ai/
https://dev.to/aimodels-fyi/fear-not-the-ai-reality-accurate-disclosures-key-to-public-trust-2ld9
https://ojs.aaai.org/index.php/AIES/article/download/31737/33904/35801
https://www.techpolicy.press/as-the-ai-bubble-deflates-the-ethics-of-hype-are-in-the-spotlight/
AI Ethics: What it is and why it matters | SAS, accessed April 12, 2025, https://www.sas.com/nl_nl/insights/articles/analytics/artificial-intelligence-ethics.html
The ethical dilemmas of AI | USC Annenberg School for Communication and Journalism, accessed April 12, 2025, https://annenberg.usc.edu/research/center-public-relations/usc-annenberg-relevance-report/ethical-dilemmas-ai
Looking before we leap - Ada Lovelace Institute, accessed April 12, 2025, https://www.adalovelaceinstitute.org/report/looking-before-we-leap/
What Is Artificial Intelligence (AI)? Definition, Uses, and More | University of Cincinnati, accessed April 12, 2025, https://online.uc.edu/blog/what-is-artificial-intelligence/
AI & Related Terms | AI Toolkit, accessed April 12, 2025, https://www.ai-lawenforcement.org/guidance/techrefbook
A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence - ResearchGate, accessed April 12, 2025, https://www.researchgate.net/publication/334539401_A_Brief_History_of_Artificial_Intelligence_On_the_Past_Present_and_Future_of_Artificial_Intelligence
Artificial intelligence - IJNRD, accessed April 12, 2025, https://ijnrd.org/papers/IJNRD1809012.pdf
(PDF) A Brief History of AI: How to Prevent Another Winter (A Critical Review), accessed April 12, 2025, https://www.researchgate.net/publication/354387444_A_Brief_History_of_AI_How_to_Prevent_Another_Winter_A_Critical_Review
DeepSeek's AI: Navigating the media hype and reality - Monash Lens, accessed April 12, 2025, https://lens.monash.edu/@politics-society/2025/02/07/1387324/deepseeks-ai-navigating-the-media-hype-and-reality
What is the history of artificial intelligence (AI)? - Tableau, accessed April 12, 2025, https://www.tableau.com/data-insights/ai/history
The birth of Artificial Intelligence (AI) research | Science and Technology, accessed April 12, 2025, https://st.llnl.gov/news/look-back/birth-artificial-intelligence-ai-research
The History of AI: A Timeline of Artificial Intelligence | Coursera, accessed April 12, 2025, https://www.coursera.org/articles/history-of-ai
Artificial Intelligence | Internet Encyclopedia of Philosophy, accessed April 12, 2025, https://iep.utm.edu/artificial-intelligence/
Chinese room - Wikipedia, accessed April 12, 2025, https://en.wikipedia.org/wiki/Chinese_room
Need for Machine Consciousness & John Searle's Chinese Room Argument, accessed April 12, 2025, https://www.robometricsagi.com/blog/ai-policy/need-for-machine-consciousness-john-searles-chinese-room-argument
Artificial Intelligence: Some Legal Approaches and Implications - AAAI Publications, accessed April 12, 2025, https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/392/328
Artificial intelligence (AI) | Definition, Examples, Types, Applications, Companies, & Facts, accessed April 12, 2025, https://www.britannica.com/technology/artificial-intelligence
Homage to John McCarthy, the father of Artificial Intelligence (AI) - Teneo.Ai, accessed April 12, 2025, https://www.teneo.ai/blog/homage-to-john-mccarthy-the-father-of-artificial-intelligence-ai
A Brief History of Artificial Intelligence | National Institute of Justice, accessed April 12, 2025, https://nij.ojp.gov/topics/articles/brief-history-artificial-intelligence
Artificial Intelligence Definitions - AWS, accessed April 12, 2025, https://hai-production.s3.amazonaws.com/files/2020-09/AI-Definitions-HAI.pdf
Philosophy of artificial intelligence - Wikipedia, accessed April 12, 2025, https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence
Artificial intelligence - Wikipedia, accessed April 12, 2025, https://en.wikipedia.org/wiki/Artificial_intelligence
Neuro-Symbolic AI in 2024: A Systematic Review - arXiv, accessed April 12, 2025, https://arxiv.org/pdf/2501.05435
What is Artificial Intelligence (AI)? - netlogx, accessed April 12, 2025, https://netlogx.com/blog/what-artificial-intelligence-ai/
John McCarthy's Definition of Intelligence - Rich Sutton, accessed April 12, 2025, http://www.incompleteideas.net/papers/Sutton-JAGI-2020.pdf
What is the Difference Between AI and Machine Learning? - ServiceNow, accessed April 12, 2025, https://www.servicenow.com/ai/what-is-ai-vs-machine-learning.html
plato.stanford.edu, accessed April 12, 2025, https://plato.stanford.edu/entries/artificial-intelligence/#:~:text=Artificial%20intelligence%20(AI)%20is%20the,%E2%80%93%20appear%20to%20be%20persons).
Artificial Intelligence (Stanford Encyclopedia of Philosophy), accessed April 12, 2025, https://plato.stanford.edu/entries/artificial-intelligence/
The Association for the Advancement of Artificial Intelligence, accessed April 12, 2025, https://aaai.org/
About the Association for the Advancement of Artificial Intelligence (AAAI) Member Organization, accessed April 12, 2025, https://aaai.org/about-aaai/
Association for the Advancement of Artificial Intelligence (AAAI) | AI Glossary - OpenTrain AI, accessed April 12, 2025, https://www.opentrain.ai/glossary/association-for-the-advancement-of-artificial-intelligence-aaai
Lost in Transl(A)t(I)on: Differing Definitions of AI [Updated], accessed April 12, 2025, https://www.holisticai.com/blog/ai-definition-comparison
Comparing the EU AI Act to Proposed AI-Related Legislation in the US, accessed April 12, 2025, https://businesslawreview.uchicago.edu/print-archive/comparing-eu-ai-act-proposed-ai-related-legislation-us
A comparative view of AI definitions as we move toward standardization, accessed April 12, 2025, https://opensource.org/blog/a-comparative-view-of-ai-definitions-as-we-move-toward-standardization
EU AI Act: Institutions Debate Definition of AI – Publications - Morgan Lewis, accessed April 12, 2025, https://www.morganlewis.com/pubs/2023/09/eu-ai-act-institutions-debate-definition-of-ai
Artificial Intelligence Through Time: A Comprehensive Historical Review - ResearchGate, accessed April 12, 2025, https://www.researchgate.net/publication/385939923_Artificial_Intelligence_Through_Time_A_Comprehensive_Historical_Review
The Evolution of AI: From Foundations to Future Prospects - IEEE Computer Society, accessed April 12, 2025, https://www.computer.org/publications/tech-news/research/evolution-of-ai
Evaluation of the Hierarchical Correspondence between the Human Brain and Artificial Neural Networks: A Review - PMC, accessed April 12, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10604784/
Yann LeCun, Pioneer of AI, Thinks Today's LLM's Are Nearly ..., accessed April 12, 2025, https://www.newsweek.com/ai-impact-interview-yann-lecun-artificial-intelligence-2054237
Not on the Best Path - Communications of the ACM, accessed April 12, 2025, https://cacm.acm.org/opinion/not-on-the-best-path/
Human Compatible: Artificial Intelligence and the Problem of Control - Amazon.com, accessed April 12, 2025, https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616
Human Compatible: A timely warning on the future of AI - TechTalks, accessed April 12, 2025, https://bdtechtalks.com/2020/03/16/stuart-russell-human-compatible-ai/
The 3 Types of Artificial Intelligence: ANI, AGI, and ASI - viso.ai, accessed April 12, 2025, https://viso.ai/deep-learning/artificial-intelligence-types/
Understanding the Levels of AI: Comparing ANI, AGI, and ASI - Arbisoft, accessed April 12, 2025, https://arbisoft.com/blogs/understanding-the-levels-of-ai-comparing-ani-agi-and-asi
Exploring the Three Types of AI: ANI, AGI, and ASI - Toolify.ai, accessed April 12, 2025, https://www.toolify.ai/ai-news/exploring-the-three-types-of-ai-ani-agi-and-asi-1222777
The three different types of Artificial Intelligence – ANI, AGI and ASI - EDI Weekly, accessed April 12, 2025, https://www.ediweekly.com/the-three-different-types-of-artificial-intelligence-ani-agi-and-asi/
Discover and Explore the Seven Types of AI - AI-Pro.org, accessed April 12, 2025, https://ai-pro.org/learn-ai/articles/beyond-basics-the-7-types-of-ai/
ANI, AGI and ASI – what do they mean? - Learning & Development Advisory, accessed April 12, 2025, https://youevolve.net/ani-agi-and-asi-what-do-they-mean/
Difference between AI, ML, LLM, and Generative AI - Toloka, accessed April 12, 2025, https://toloka.ai/blog/difference-between-ai-ml-llm-and-generative-ai/
Navigating the AI Landscape: Traditional AI vs Generative AI - NEXTDC, accessed April 12, 2025, https://www.nextdc.com/blog/how-to-navigate-generative-ai
Approaches to AI | ANI | AGI | ASI - Modular Digital, accessed April 12, 2025, https://thisismodular.co.uk/approaches-to-ai/
What is artificial intelligence (AI)? - Klu.ai, accessed April 12, 2025, https://klu.ai/glossary/artificial-intelligence
AI Hype Vs AI Reality: Explained! - FiveRivers Technologies, accessed April 12, 2025, https://fiveriverstech.com/ai-hype-vs-ai-reality-explained
Portrayals and perceptions of AI and why they matter - Royal Society, accessed April 12, 2025, https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf
A Better Lesson - Rodney Brooks, accessed April 12, 2025, https://rodneybrooks.com/a-better-lesson/
Intelligence without Representation: A Historical Perspective - MDPI, accessed April 12, 2025, https://www.mdpi.com/2079-8954/8/3/31
Gary Marcus: a sceptical take on AI in 2025 - Apple Podcasts, accessed April 12, 2025, https://podcasts.apple.com/us/podcast/gary-marcus-a-sceptical-take-on-ai-in-2025/id508376907?i=1000684121035&l=ar
Artificial Intelligence | Summary, Quotes, FAQ, Audio - SoBrief, accessed April 12, 2025, https://sobrief.com/books/artificial-intelligence-5
Understanding AI: Definitions, history, and technological evolution - Article 1 - Elliott Davis, accessed April 12, 2025, https://www.elliottdavis.com/insights/article-1-understanding-ai-definitions-history-and-technological-evolution
Explainable AI and Reinforcement Learning—A Systematic Review of Current Approaches and Trends - PMC, accessed April 12, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC8172805/
Neurosymbolic Reinforcement Learning and Planning: A Survey - NSF Public Access Repository, accessed April 12, 2025, https://par.nsf.gov/servlets/purl/10481273
Human Brain Inspired Artificial Intelligence Neural Networks - IMR Press, accessed April 12, 2025, https://www.imrpress.com/journal/JIN/24/4/10.31083/JIN26684/htm
ML vs. LLM: Is one “better” than the other? - Superwise.ai, accessed April 12, 2025, https://superwise.ai/blog/ml-vs-llm-is-one-better-than-the-other/
What is AI-Driven Threat Detection and Response? - Radiant Security, accessed April 12, 2025, https://radiantsecurity.ai/learn/ai-driven-threat-detection-and-reponse/
Transformative Potential of AI in Healthcare: Definitions, Applications, and Navigating the Ethical Landscape and Public Perspectives - MDPI, accessed April 12, 2025, https://www.mdpi.com/2227-9032/12/2/125
A Review of the Role of Artificial Intelligence in Healthcare - PMC - PubMed Central, accessed April 12, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10301994/
Artificial Intelligence in Healthcare: Perception and Reality - PMC, accessed April 12, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10587915/
Understanding the Limitations of Symbolic AI: Challenges and Future Directions - SmythOS, accessed April 12, 2025, https://smythos.com/ai-agents/ai-agent-development/symbolic-ai-limitations/
Exploring the Future Beyond Large Language Models - The Choice by ESCP, accessed April 12, 2025, https://thechoice.escp.eu/tomorrow-choices/exploring-the-future-beyond-large-language-models/
10 Biggest Limitations of Large Language Models - ProjectPro, accessed April 12, 2025, https://www.projectpro.io/article/llm-limitations/1045
AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap, accessed April 12, 2025, https://hdsr.mitpress.mit.edu/pub/aelql9qy
Gary Marcus Discusses AI's Limitations and Ethics - Artificial Intelligence +, accessed April 12, 2025, https://www.aiplusinfo.com/blog/gary-marcus-discusses-ais-limitations-and-ethics/
Explainable AI and Reinforcement Learning—A Systematic Review of Current Approaches and Trends - Frontiers, accessed April 12, 2025, https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2021.550030/full
Surveying neuro-symbolic approaches for reliable artificial intelligence of things, accessed April 12, 2025, https://www.researchgate.net/publication/382593613_Surveying_neuro-symbolic_approaches_for_reliable_artificial_intelligence_of_things
On Crashing the Barrier of Meaning in Artificial Intelligence, accessed April 12, 2025, https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/download/5259/7227
On Crashing the Barrier of Meaning in AI - Melanie Mitchell, accessed April 12, 2025, https://www.melaniemitchell.me/PapersContent/AIMagazine2020.pdf
15 Things AI Can — and Can't Do (So Far) - Invoca, accessed April 12, 2025, https://www.invoca.com/blog/6-things-ai-cant-do-yet
AI in the workplace: A report for 2025 - McKinsey, accessed April 12, 2025, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
AI skeptic Gary Marcus on AI's moral and technical shortcomings - Freethink, accessed April 12, 2025, https://www.freethink.com/artificial-intelligence/gary-marcus-on-ai
A Sentence is Worth a Thousand Pictures: Can Large Language Models Understand Hum4n L4ngu4ge and the W0rld behind W0rds? Evelina - arXiv, accessed April 12, 2025, https://arxiv.org/pdf/2308.00109
Common sense is still out of reach for chatbots | Mind Matters, accessed April 12, 2025, https://mindmatters.ai/brief/common-sense-is-still-out-of-reach-for-chatbots/
Intelligence is whatever machines cannot (yet) do, accessed April 12, 2025, https://statmodeling.stat.columbia.edu/2024/04/13/intelligence-is-whatever-machines-cannot-yet-do/
Easy Problems That LLMs Get Wrong - arXiv, accessed April 12, 2025, https://arxiv.org/html/2405.19616v1
Easy Problems That LLMs Get Wrong arXiv:2405.19616v2 [cs.AI] 1 Jun 2024, accessed April 12, 2025, http://arxiv.org/pdf/2405.19616
Machines of mind: The case for an AI-powered productivity boom - Brookings Institution, accessed April 12, 2025, https://www.brookings.edu/articles/machines-of-mind-the-case-for-an-ai-powered-productivity-boom/
Is Generative AI Worth the Hype in Healthcare? - L.E.K. Consulting, accessed April 12, 2025, https://www.lek.com/sites/default/files/insights/pdf-attachments/gen-ai-transforming-healthcare.pdf
A Guide to Cutting Through AI Hype: Arvind Narayanan and Melanie Mitchell Discuss Artificial and Human Intelligence - CITP Blog - Freedom to Tinker, accessed April 12, 2025, https://blog.citp.princeton.edu/2025/04/02/a-guide-to-cutting-through-ai-hype-arvind-narayanan-and-melanie-mitchell-discuss-artificial-and-human-intelligence/
The Future of Computer Vision: 2024 and Beyond - Rapid Innovation, accessed April 12, 2025, https://www.rapidinnovation.io/post/future-of-computer-vision
The Quest for Visual Understanding: A Journey Through the Evolution of Visual Question Answering - arXiv, accessed April 12, 2025, https://arxiv.org/html/2501.07109v1
Future Directions of Visual Common Sense & Recognition - Basic Research, accessed April 12, 2025, https://basicresearch.defense.gov/Portals/61/Documents/future-directions/3_Computer_Vision.pdf?ver=2017-09-20-003027-450
68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense, accessed April 12, 2025, https://www.preposterousuniverse.com/podcast/2019/10/14/68-melanie-mitchell-on-artificial-intelligence-and-the-challenge-of-common-sense/
arXiv:2501.07109v1 [cs.CV] 13 Jan 2025, accessed April 12, 2025, https://arxiv.org/pdf/2501.07109
Knowledge and Reasoning for Image Understanding by Somak Aditya A Dissertation Presented in Partial Fulfillment of the Requireme, accessed April 12, 2025, https://cogintlab-asu.github.io/files/paper/2018/somak_thesis.pdf
Do Machines Understand? A Short Review of Understanding & Common Sense in Artificial Intelligence - MIT alumni, accessed April 12, 2025, http://alumni.media.mit.edu/~kris/ftp/AGI17-UUW-DoMachinesUnderstand.pdf
Understanding and Common Sense: Two Sides of the Same Coin? - ResearchGate, accessed April 12, 2025, https://www.researchgate.net/publication/318434865_Understanding_and_Common_Sense_Two_Sides_of_the_Same_Coin
The Pursuit of Machine Common Sense - Jerome Fisher Program in Management & Technology - University of Pennsylvania, accessed April 12, 2025, https://fisher.wharton.upenn.edu/wp-content/uploads/2020/09/Thesis_Joseph-Churilla.pdf
Bridging the gap: Neuro-Symbolic Computing for advanced AI applications in construction, accessed April 12, 2025, https://journal.hep.com.cn/fem/EN/10.1007/s42524-023-0266-0
(PDF) Common-Sense Reasoning for Human Action Recognition - ResearchGate, accessed April 12, 2025, https://www.researchgate.net/publication/257928620_Common-Sense_Reasoning_for_Human_Action_Recognition
AI Agents in 2025: Expectations vs. Reality - IBM, accessed April 12, 2025, https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality
Five Trends in AI and Data Science for 2025 - MIT Sloan Management Review, accessed April 12, 2025, https://sloanreview.mit.edu/article/five-trends-in-ai-and-data-science-for-2025/
Measuring AI Ability to Complete Long Tasks - METR, accessed April 12, 2025, https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
Causal Artificial Intelligence in Legal Language Processing: A Systematic Review - MDPI, accessed April 12, 2025, https://www.mdpi.com/1099-4300/27/4/351
Returning to symbolic AI : r/ArtificialInteligence - Reddit, accessed April 12, 2025, https://www.reddit.com/r/ArtificialInteligence/comments/zinuyb/returning_to_symbolic_ai/
Erik Brynjolfsson on the New Superpowers of AI | DLD 23 - YouTube, accessed April 12, 2025, https://www.youtube.com/watch?v=v-furcIsn-s
The Limitations of Generative AI, According to Generative AI - Lingaro Group, accessed April 12, 2025, https://lingarogroup.com/blog/the-limitations-of-generative-ai-according-to-generative-ai
What a Mysterious Chinese Room Can Tell Us About Consciousness | Psychology Today, accessed April 12, 2025, https://www.psychologytoday.com/us/blog/consciousness-and-beyond/202308/what-a-mysterious-chinese-room-can-tell-us-about-consciousness
Have AIs Already Reached Consciousness? - Psychology Today, accessed April 12, 2025, https://www.psychologytoday.com/us/blog/the-mind-body-problem/202502/have-ais-already-reached-consciousness
The Illusion of Conscious AI, accessed April 12, 2025, https://thomasramsoy.com/index.php/2025/01/31/title-the-illusion-of-conscious-ai/
A Call for Embodied AI - arXiv, accessed April 12, 2025, https://arxiv.org/html/2402.03824v3
Artificial intelligence in healthcare - Wikipedia, accessed April 12, 2025, https://en.wikipedia.org/wiki/Artificial_intelligence_in_healthcare
AI effect - Wikipedia, accessed April 12, 2025, https://en.wikipedia.org/wiki/AI_effect
The History of Artificial Intelligence - University of Washington, accessed April 12, 2025, https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf
The Myth Buster: Rodney Brooks Breaks Down the Hype Around AI - Newsweek, accessed April 12, 2025, https://www.newsweek.com/rodney-brooks-ai-impact-interview-futures-2034669
LLMs don't do formal reasoning - and that is a HUGE problem - Gary Marcus - Substack, accessed April 12, 2025, https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and/comments
John Searle's Chinese Room Argument, accessed April 12, 2025, http://jmc.stanford.edu/articles/chinese.html
How to Break the Spell of AI's Magical Thinking: Lessons From Rodney Brooks - Newsweek, accessed April 12, 2025, https://www.newsweek.com/rodney-brooks-roomba-irobot-founder-artificial-intelligence-ai-future-2034729
Intelligence without representation* - People, accessed April 12, 2025, https://people.csail.mit.edu/brooks/papers/representation.pdf
Rodney Brooks on limitations of generative AI | Hacker News, accessed April 12, 2025, https://news.ycombinator.com/item?id=40835588
The Seven Deadly Sins of Predicting the Future of AI (Rodney Brooks) - Reddit, accessed April 12, 2025, https://www.reddit.com/r/slatestarcodex/comments/6yrpia/the_seven_deadly_sins_of_predicting_the_future_of/
The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence - OCCAM, accessed April 12, 2025, https://www.occam.org/post/the-turing-trap-the-promise-peril-of-human-like-artificial-intelligence
Automation versus augmentation: What will AI's lasting impact on jobs be?, accessed April 12, 2025, https://www-2.rotman.utoronto.ca/insightshub/ai-analytics-big-data/ai-job-impact
The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence, accessed April 12, 2025, https://digitaleconomy.stanford.edu/news/the-turing-trap-the-promise-peril-of-human-like-artificial-intelligence/
(PDF) The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence, accessed April 12, 2025, https://www.researchgate.net/publication/360304612_The_Turing_Trap_The_Promise_Peril_of_Human-Like_Artificial_Intelligence
A Human-Centered Approach to the AI Revolution | Stanford HAI, accessed April 12, 2025, https://hai.stanford.edu/news/human-centered-approach-ai-revolution
The Chinese Room Argument - Stanford Encyclopedia of Philosophy, accessed April 12, 2025, https://plato.stanford.edu/entries/chinese-room/
Chinese room argument | Definition, Machine Intelligence, John Searle, Turing Test, Objections, & Facts | Britannica, accessed April 12, 2025, https://www.britannica.com/topic/Chinese-room-argument
The Chinese Room and Creating Consciousness: How Recent Strides in AI Technology Revitalize a Classic Debate - Eagle Scholar, accessed April 12, 2025, https://scholar.umw.edu/student_research/609/
Hinton (father of AI) explains why AI is sentient - The Philosophy Forum, accessed April 12, 2025, https://thephilosophyforum.com/discussion/15702/hinton-father-of-ai-explains-why-ai-is-sentient
Godfather vs Godfather: Geoffrey Hinton says AI is already conscious, Yoshua Bengio explains why he thinks it doesn't matter - Reddit, accessed April 12, 2025, https://www.reddit.com/r/singularity/comments/1ifajzm/godfather_vs_godfather_geoffrey_hinton_says_ai_is/
Why The Godfather of AI Now Fears His Creation - Curt Jaimungal, accessed April 12, 2025, https://curtjaimungal.substack.com/p/why-the-godfather-of-ai-now-fears
Images of AI – Between Fiction and Function, accessed April 12, 2025, https://blog.betterimagesofai.org/images-of-ai-between-fiction-and-function/
The History of Artificial Intelligence and Its Impact on the Human World | Futurism, accessed April 12, 2025, https://vocal.media/futurism/the-history-of-artificial-intelligence-and-its-impact-on-the-human-world
What is AI (artificial intelligence)? - McKinsey, accessed April 12, 2025, https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai
Anthropomorphism in AI: hype and fallacy - PhilArchive, accessed April 12, 2025, https://philarchive.org/archive/PLAAIA-4
Investment Firms Caught in the SEC's Crosshairs - Agio, accessed April 12, 2025, https://agio.com/artificial-intelligence-investment-firms-caught-in-secs-crosshairs/
Misrepresented Technological Solutions in Imagined Futures: The Origins and Dangers of AI Hype in the Research Community - arXiv, accessed April 12, 2025, https://arxiv.org/html/2408.15244v1
Watching the Generative AI Hype Bubble Deflate - Ash Center, accessed April 12, 2025, https://ash.harvard.edu/resources/watching-the-generative-ai-hype-bubble-deflate/
Artificial Intelligence in Health Care: Will the Value Match the Hype? - ResearchGate, accessed April 12, 2025, https://www.researchgate.net/publication/333225866_Artificial_Intelligence_in_Health_Care_Will_the_Value_Match_the_Hype
AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business - USC Research Bank, accessed April 12, 2025, https://research.usc.edu.au/esploro/fulltext/journalArticle/AI-hype-as-a-cyber-security/991008896102621?repId=12272900550002621&mId=13272899650002621&institution=61USC_INST
Critical Issues About A.I. Accountability Answered - California Management Review, accessed April 12, 2025, https://cmr.berkeley.edu/2023/11/critical-issues-about-a-i-accountability-answered/
Artificial Intelligence In Health And Health Care: Priorities For Action - Health Affairs, accessed April 12, 2025, https://www.healthaffairs.org/doi/10.1377/hlthaff.2024.01003
AI in research - UK Research Integrity Office, accessed April 12, 2025, https://ukrio.org/ukrio-resources/ai-in-research/
How the US Public and AI Experts View Artificial Intelligence | Pew Research Center, accessed April 12, 2025, https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/
60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care - Pew Research Center, accessed April 12, 2025, https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/
Can AI Outperform Doctors in Diagnosing Infectious Diseases? - News-Medical.net, accessed April 12, 2025, https://www.news-medical.net/health/Can-AI-Outperform-Doctors-in-Diagnosing-Infectious-Diseases.aspx
Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis | BMJ Open, accessed April 12, 2025, https://bmjopen.bmj.com/content/13/1/e066322
Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review - Journal of Medical Internet Research, accessed April 12, 2025, https://www.jmir.org/2022/1/e32939/
The Medical AI Revolution - OncLive, accessed April 12, 2025, https://www.onclive.com/view/the-medical-ai-revolution
Fairness of artificial intelligence in healthcare: review and recommendations - PMC, accessed April 12, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10764412/
94 | Stuart Russell on Making Artificial Intelligence Compatible with Humans - Sean Carroll, accessed April 12, 2025, https://www.preposterousuniverse.com/podcast/2020/04/27/94-stuart-russell-on-making-artificial-intelligence-compatible-with-humans/
Future of AI Research - AAAI, accessed April 12, 2025, https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-FINAL.pdf
AAAI-25 New Faculty Highlights Program, accessed April 12, 2025, https://aaai.org/conference/aaai/aaai-25/new-faculty-highlights-program/
NeurIPS Poster Do causal predictors generalize better to new domains?, accessed April 12, 2025, https://neurips.cc/virtual/2024/poster/94992
Key insights into AI regulations in the EU and the US: navigating the evolving landscape, accessed April 12, 2025, https://kennedyslaw.com/en/thought-leadership/article/2025/key-insights-into-ai-regulations-in-the-eu-and-the-us-navigating-the-evolving-landscape/
Comparing the US AI Executive Order and the EU AI Act - DLA Piper GENIE, accessed April 12, 2025, https://knowledge.dlapiper.com/dlapiperknowledge/globalemploymentlatestdevelopments/2023/comparing-the-US-AI-Executive-Order-and-the-EU-AI-Act.html
Unlocking the Potential of Generative AI through Neuro-Symbolic Architectures – Benefits and Limitations - arXiv, accessed April 12, 2025, https://arxiv.org/html/2502.11269v1
Research Publications – Center for Human-Compatible Artificial Intelligence, accessed April 12, 2025, https://humancompatible.ai/research