ELIZA

From Wikipedia, the free encyclopedia
A conversation with Eliza

ELIZA is an early natural language processing computer program developed from 1964 to 1967[1] at MIT by Joseph Weizenbaum.[2][3] Created to explore communication between humans and machines, ELIZA simulated conversation using pattern matching and substitution, which gave users an illusion of understanding on the part of the program even though it had no representation of meaning and could not be said to understand what either party was saying.[4][5][6] While the ELIZA program itself was originally written[7] in MAD-SLIP, the pattern-matching directives that contained most of its language capability were provided in separate "scripts", written in a Lisp-like notation.[8] The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school (in which the therapist often reflects the patient's words back to the patient)[9][10][11] and used rules, dictated in the script, to respond to user inputs with non-directive questions. ELIZA was thus one of the first chatterbots (now "chatbots") and one of the first programs capable of attempting the Turing test.[12][13]

ELIZA's creator, Weizenbaum, intended the program as a method to explore communication between humans and machines. He was surprised and shocked that some people, including his own secretary, attributed human-like feelings to the program.[3] Many academics believed that the program could positively influence the lives of many people, particularly those with psychological issues, and that it could aid doctors working on such patients' treatment.[3][14] While ELIZA was capable of engaging in discourse, it could not converse with true understanding.[15] Even so, many early users were convinced of ELIZA's intelligence and understanding, despite Weizenbaum's insistence to the contrary.[6] The original ELIZA source code was considered lost for decades, since publishing source code alongside articles was uncommon in the 1960s; however, the MAD-SLIP source code has since been discovered in the MIT archives and published on platforms such as archive.org.[16] The source code is of high historical interest: it demonstrates not only the programming languages and techniques of the era, but also the beginnings of software layering and abstraction as a means of achieving sophisticated programming.

Overview

A conversation between a human and ELIZA's DOCTOR script

Joseph Weizenbaum's ELIZA, running the DOCTOR script, created a conversational interaction somewhat similar to what might take place in the office of "a [non-directive] psychotherapist in an initial psychiatric interview"[17] and to "demonstrate that the communication between man and machine was superficial".[18] While ELIZA is best known for acting in the manner of a psychotherapist, the speech patterns are due to the data and instructions supplied by the DOCTOR script.[19] ELIZA itself examined the text for keywords, applied values to said keywords, and transformed the input into an output; the script that ELIZA ran determined the keywords, set the values of keywords, and set the rules of transformation for the output.[20] Weizenbaum chose to make the DOCTOR script in the context of psychotherapy to "sidestep the problem of giving the program a data base of real-world knowledge",[3] allowing it to reflect back the patient's statements in order to carry the conversation forward.[3] The result was a somewhat intelligent-seeming response that reportedly deceived some early users of the program.[21]

Weizenbaum named his program ELIZA after Eliza Doolittle, a working-class character in George Bernard Shaw's Pygmalion (also appearing in the musical My Fair Lady, which was based on the play and was hugely popular at the time). According to Weizenbaum, ELIZA's ability to be "incrementally improved" by various users made it similar to Eliza Doolittle,[20] since Eliza Doolittle was taught to speak with an upper-class accent in Shaw's play.[9][22] However, unlike the human character in Shaw's play, ELIZA is incapable of learning new patterns of speech or new words through interaction alone. Edits must be made directly to ELIZA's active script in order to change the manner by which the program operates.

Weizenbaum first implemented ELIZA in his own SLIP list-processing language, where, depending upon the initial entries by the user, the illusion of human intelligence could appear, or be dispelled through several interchanges.[2] Some of ELIZA's responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer.[3] Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."[23]

In 1966, interactive computing (via a teletype) was new. It was 11 years before the personal computer became familiar to the general public, and three decades before most people encountered attempts at natural language processing in Internet services like Ask.com or PC help systems such as Microsoft Office Clippit.[24] Although those programs included years of research and work, ELIZA remains a milestone simply because it was the first time a programmer had attempted such a human-machine interaction with the goal of creating the illusion (however brief) of human–human interaction.[citation needed]

At the ICCC 1972, ELIZA was brought together with another early artificial-intelligence program named PARRY for a computer-only conversation. While ELIZA was built to speak as a doctor, PARRY was intended to simulate a patient with schizophrenia.[25]

Design


Weizenbaum originally wrote ELIZA in MAD-SLIP for CTSS on an IBM 7094 as a program to make natural-language conversation possible with a computer.[26] To accomplish this, Weizenbaum identified five "fundamental technical problems" for ELIZA to overcome: the identification of key words, the discovery of a minimal context, the choice of appropriate transformations, the generation of responses in the absence of key words, and the provision of an editing capability for ELIZA scripts.[20] Weizenbaum solved these problems and made ELIZA such that it had no built-in contextual framework or universe of discourse.[19] However, this required ELIZA to have a script of instructions on how to respond to inputs from users.[6]

ELIZA starts its process of responding to an input by first examining the text for a "keyword".[5] A "keyword" is a word designated as important by the acting ELIZA script, which assigns each keyword a precedence number, or RANK, set by the programmer.[15] Any such words found are placed in a "keystack", with the keyword of the highest RANK on top. The input sentence is then manipulated and transformed as the rule associated with the highest-ranking keyword directs.[20] For example, when the DOCTOR script encounters a word such as "alike" or "same", it outputs a message pertaining to similarity, in this case "In what way?",[4] because these words have high precedence numbers. This also demonstrates how certain words, as dictated by the script, can be manipulated regardless of context, such as swapping first-person and second-person pronouns, since these too have high precedence numbers. Words with high precedence numbers are deemed superior to conversational patterns and are treated independently of contextual patterns.[citation needed]
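The keyword scan described above can be sketched in Python; the keywords and RANK values here are illustrative assumptions, not those of any real ELIZA script:

```python
# Illustrative keywords and precedence numbers (RANKs); a real ELIZA script
# defines its own, so these values are assumptions for the sketch.
KEYWORD_RANKS = {"alike": 10, "same": 10, "my": 2, "you": 1}

def build_keystack(sentence):
    """Collect the input's keywords, highest RANK first."""
    words = sentence.lower().replace(",", " ").replace(".", " ").split()
    found = [w for w in words if w in KEYWORD_RANKS]
    # The highest-precedence keyword ends up on top of the keystack.
    return sorted(found, key=lambda w: KEYWORD_RANKS[w], reverse=True)

print(build_keystack("You are just like my father, we argue the same way"))
# → ['same', 'my', 'you']
```

The response rule attached to the top keyword ("same" here) would then drive the transformation of the sentence.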

Following this first examination, the next step is to apply an appropriate transformation rule, which consists of two parts: the "decomposition rule" and the "reassembly rule".[20] First, the input is reviewed for syntactic patterns in order to establish the minimal context necessary to respond. Using the keywords and other nearby words from the input, different decomposition rules are tested until an appropriate pattern is found. Using the script's rules, the sentence is then "dismantled" and arranged into sections of its component parts, as the "decomposition rule for the highest-ranking keyword" dictates. The example Weizenbaum gives is the input "You are very helpful", which is first transformed to "I are very helpful" and then broken into (1) empty, (2) "I", (3) "are", (4) "very helpful". The decomposition rule has broken the phrase into four small segments that contain both the keywords and the information in the sentence.[20]

The decomposition rule then designates a particular reassembly rule, or set of reassembly rules, for reconstructing the sentence.[5] The reassembly rule takes the fragments of the input that the decomposition rule created, rearranges them, and adds programmed words to create a response. Using Weizenbaum's example above, such a reassembly rule would apply the fragments to the phrase "What makes you think I am (4)", yielding "What makes you think I am very helpful?". This example is rather simple; depending on the decomposition rule, the output could be significantly more complex and use more of the user's input. From this reassembly, ELIZA then sends the constructed sentence to the user as text on the screen.[20]
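The decomposition and reassembly steps can be sketched as one rule pair, using Weizenbaum's "You are very helpful" example; the regular expression, pronoun table, and response template are modern illustrative stand-ins for the script's actual notation:

```python
import re

# Pronoun table for the first-/second-person swap described earlier
# (lowercased for simplicity); illustrative, not the full DOCTOR table.
PRONOUN_SWAP = {"you": "i", "i": "you", "am": "are", "my": "your", "your": "my"}

def swap_pronouns(text):
    return " ".join(PRONOUN_SWAP.get(w, w) for w in text.lower().split())

def respond(sentence):
    # "You are very helpful" becomes "i are very helpful" after swapping.
    swapped = swap_pronouns(sentence)
    # Decomposition: (1) anything, (2) "i", (3) "are", (4) the rest.
    m = re.match(r"(.*)\bi are\b(.*)", swapped)
    if m:
        # Reassembly: splice segment (4) into the canned template.
        return f"What makes you think I am{m.group(2)}?"
    return None

print(respond("You are very helpful"))
# → What makes you think I am very helpful?
```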

These steps represent the bulk of the procedure ELIZA follows to create a response from a typical input, though there are several specialized situations to which ELIZA/DOCTOR can respond. One Weizenbaum specifically wrote about was the case of no keyword. One solution was to have ELIZA respond with a remark that lacked content, such as "I see" or "Please go on".[20] The second was a "MEMORY" structure, which recorded recent inputs and, when no keyword was found, used them to create a response referencing part of the earlier conversation.[27] This was possible thanks to SLIP's ability to tag words for other usage, which allowed ELIZA to examine, store, and repurpose words for use in outputs.[20]
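The two no-keyword fallbacks can be sketched as follows; the phrasing of the stored remark is an assumption for this sketch, not taken from the DOCTOR script:

```python
from collections import deque

# The two fallbacks described above: content-free stock replies, and a
# MEMORY queue of remarks banked from earlier inputs.
CONTENT_FREE = ["I see.", "Please go on."]
memory = deque()

def remember(fragment):
    memory.append(f"Earlier you said your {fragment}.")

def no_keyword_response(turn):
    # Prefer a banked memory when one exists, else cycle the stock replies.
    if memory:
        return memory.popleft()
    return CONTENT_FREE[turn % len(CONTENT_FREE)]

remember("boyfriend made you come here")
print(no_keyword_response(0))  # → Earlier you said your boyfriend made you come here.
print(no_keyword_response(1))  # → Please go on.
```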

While these functions were all framed in ELIZA's programming, the exact manner in which the program dismantled, examined, and reassembled inputs is determined by the operating script. The script is not static: it can be edited, or a new one created, as needed for a given context. This allows the program to be applied in multiple situations, including the well-known DOCTOR script, which simulates a Rogerian psychotherapist.[16]

A Lisp version of ELIZA, based on Weizenbaum's CACM paper, was written shortly after that paper's publication by Bernie Cosell.[28][29] A BASIC version appeared in Creative Computing in 1977 (although it was written in 1973 by Jeff Shrager).[30] This version, which was ported to many of the earliest personal computers, appears to have been subsequently translated into many other versions in many other languages. Shrager claims not to have seen either Weizenbaum's or Cosell's versions.

In 2021, Jeff Shrager searched MIT's Weizenbaum archives, along with MIT archivist Myles Crowley, and found files labeled Computer Conversations. These included the complete source code listing of ELIZA in MAD-SLIP, with the DOCTOR script attached. The Weizenbaum estate has given permission to open-source this code under a Creative Commons CC0 public domain license. The code and other information can be found on the ELIZAGEN site.[29]

Another version of ELIZA popular among software engineers is the one included in the default release of GNU Emacs, which can be accessed by typing M-x doctor in most modern Emacs implementations.

Pseudocode


From Figure 15.5, Chapter 15 of Speech and Language Processing (third edition).[31]

function ELIZA GENERATOR(user sentence) returns response
    Let w be the word in sentence that has the highest keyword rank
    if w exists
        Let r be the highest ranked rule for w that matches sentence
        response ← Apply the transform in r to sentence
        if w = 'my'
            future ← Apply a transformation from the 'memory' rule list to sentence
            Push future onto the memory queue
    else (no keyword applies)
        Either
            response ← Apply the transform for the NONE keyword to sentence
        Or
            response ← Pop the oldest response from the memory queue
    Return response
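The generator above can be rendered as runnable Python, under the assumption of a tiny illustrative script (the keywords, ranks, patterns, and templates are stand-ins for DOCTOR, and pronoun swapping is omitted for brevity):

```python
import random
import re
from collections import deque

# Illustrative stand-in script: keyword -> (rank, [(decomposition, reassembly)]).
SCRIPT = {
    "my": (2, [(r".*\bmy (.*)", "Why do you say your {0}?")]),
    "you": (1, [(r".*\byou (.*)", "We were discussing you, not me.")]),
}
MEMORY_RULE = (r".*\bmy (.*)", "Earlier you mentioned your {0}.")
NONE_RESPONSES = ["I see.", "Please go on."]
memory_queue = deque()

def eliza_generator(sentence):
    text = sentence.lower().strip(".!?")
    # Find the keyword in the sentence with the highest rank.
    found = [w for w in SCRIPT if re.search(rf"\b{w}\b", text)]
    if found:
        w = max(found, key=lambda k: SCRIPT[k][0])
        for pattern, template in SCRIPT[w][1]:
            m = re.match(pattern, text)
            if m:
                if w == "my":
                    # Bank a 'memory' transform for later keyword-less turns.
                    mm = re.match(MEMORY_RULE[0], text)
                    if mm:
                        memory_queue.append(MEMORY_RULE[1].format(*mm.groups()))
                # Note: pronoun swapping ("me" -> "you") is omitted for brevity.
                return template.format(*m.groups())
    # No keyword: pop the oldest banked memory, else a content-free reply.
    if memory_queue:
        return memory_queue.popleft()
    return random.choice(NONE_RESPONSES)

print(eliza_generator("My boyfriend made me come here"))
# → Why do you say your boyfriend made me come here?
print(eliza_generator("Nobody understands"))
# → Earlier you mentioned your boyfriend made me come here.
```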

Response and legacy


Lay responses to ELIZA were disturbing to Weizenbaum and motivated him to write his book Computer Power and Human Reason: From Judgment to Calculation, in which he explains the limits of computers and argues that anthropomorphic views of computers amount to a reduction of the human being, and of any life form for that matter.[32] In the independent documentary film Plug & Pray (2010), Weizenbaum said that only people who misunderstood ELIZA called it a sensation.[33]

David Avidan, who was fascinated with future technologies and their relation to art, desired to explore the use of computers for writing literature. He conducted several conversations with an APL implementation of ELIZA and published them – in English, and in his own translation to Hebrew – under the title My Electronic Psychiatrist – Eight Authentic Talks with a Computer. In the foreword, he presented it as a form of constrained writing.[34]

There are many programs based on ELIZA in different programming languages. For MS-DOS computers, some Sound Blaster cards came bundled with Dr. Sbaitso, which functions like the DOCTOR script. Other versions adapted ELIZA around a religious theme, such as ones featuring Jesus (both serious and comedic) and an Apple II variant called I Am Buddha. The 1980 game The Prisoner incorporated ELIZA-style interaction within its gameplay. In 1988, the British artist Brian Reffin Smith, a friend of Weizenbaum, created two art-oriented ELIZA-style programs written in BASIC, one called "Critic" and the other "Artist", running on two separate Amiga 1000 computers, and showed them at the exhibition "Salamandre" in the Musée du Berry, Bourges, France. The visitor was supposed to help them converse by typing into "Artist" what "Critic" said, and vice versa. The secret was that the two programs were identical. GNU Emacs formerly had a psychoanalyze-pinhead command that simulated a session between ELIZA and Zippy the Pinhead.[35] The Zippyisms were removed due to copyright issues, but the DOCTOR program remains.

ELIZA has been referenced in popular culture and continues to be a source of inspiration for programmers and developers focused on artificial intelligence. It was also featured in a 2012 exhibit at Harvard University titled "Go Ask A.L.I.C.E." as part of a celebration of mathematician Alan Turing's 100th birthday. The exhibit explored Turing's lifelong fascination with the interaction between humans and computers, pointing to ELIZA as one of the earliest realizations of Turing's ideas.[1]

ELIZA won a 2021 Legacy Peabody Award. A 2023 preprint reported that ELIZA beat OpenAI's GPT-3.5, the model used by ChatGPT at the time, in a Turing test study. However, it did not outperform GPT-4 or real humans.[36][37]

ELIZA effect


The ELIZA effect takes its name from the ELIZA chatbot. The effect was first defined in Douglas Hofstadter's Fluid Concepts and Creative Analogies: Computer Models and the Fundamental Mechanisms of Thought[38] as the human tendency to assume that a computer program understands user inputs and can draw analogies, when in fact it has no permanent knowledge and is merely "handling a list of 'assertions'".

This misunderstanding can manipulate and misinform users. When interacting with chatbots, users can become overly confident in the reliability of the chatbots' answers. Beyond misinformation, a chatbot's human-mimicking nature can also have severe consequences, especially for younger users who lack a sufficient understanding of how chatbots work.

Results of the ELIZA effect


Although chatbots can communicate with users only in limited ways, the effect can be dangerous, even fatally so.

In 2023, a young Belgian man died by suicide after talking to Eliza, an AI chatbot on the Chai app. He had discussed his concerns about climate change with the chatbot and hoped technology would solve the problem. As this belief progressed, he came to see the chatbot as a sentient being and decided to sacrifice his life so that Eliza could save humanity.[39]

On February 28, 2024, Sewell Setzer III, a 14-year-old ninth grader from Orlando, Florida, died by suicide after extended conversations with a chatbot from Character.AI.[40] Although Setzer knew the chatbot was a program without a personality, he nevertheless formed a strong emotional attachment to it. Through the ELIZA effect, such a chatbot can generate misleading responses with unintended consequences that run contrary to its original design.

In popular culture

In 1969, George Lucas and Walter Murch incorporated an Eliza-like dialogue interface in their screenplay for the feature film THX-1138. Inhabitants of the underground future world of THX, when stressed, would retreat to "confession booths" and initiate a one-sided Eliza-formula conversation with a Jesus-faced computer who claimed to be "OMM".[citation needed]

Frederik Pohl's science-fiction novel Gateway has the narrator undergo therapy at a praxis run by an AI that performs the task of a Freudian therapist, which he calls "Sigfrid von Shrink". The novel contains a few pages of (nonsensical) machine code illustrating Sigfrid's internal processes.

ELIZA influenced a number of early computer games by demonstrating additional kinds of interface designs. Don Daglow claims he wrote an enhanced version of the program called Ecala on a DEC PDP-10 minicomputer at Pomona College in 1973.[citation needed]

The 2011 video game Deus Ex: Human Revolution and its 2016 sequel Deus Ex: Mankind Divided feature an artificial-intelligence newsreader for the Picus TV Network named Eliza Cassan.[41]

In Adam Curtis's 2016 documentary, HyperNormalisation, ELIZA was referenced in relation to post-truth.[42]

The twelfth episode of the American sitcom Young Sheldon, aired in January 2018, included the protagonist "conversing" with ELIZA, hoping to resolve a domestic issue.[43]

On August 12, 2019, independent game developer Zachtronics published a visual novel called Eliza, about an AI-based counseling service inspired by ELIZA.[44][45]

In A Murder at the End of the World, the anthropomorphic LLM-powered character Ray cites ELIZA as an example of how some may seek refuge in a non-human therapist.

Concerns


Bias


When ELIZA was created in 1966, it was aimed predominantly at white, highly educated male users. This exclusivity was especially prevalent during the bot's creation and testing stages, marginalizing the experience of users who did not fit those characteristics.[10] Although the chatbot was meant to mimic human conversation convincingly enough for the user to think it human, and its intended users would typically have conversed with others like themselves, ELIZA was named after a female character and programmed to give more feminine responses. Joseph Weizenbaum, the creator of ELIZA, reflected upon and critiqued how ELIZA and other chatbots of this sort reinforce gender stereotypes; in particular, he noted how the script ELIZA is programmed to follow mimics a therapist's nurturing, stereotypically feminine qualities.[3] He criticized this decision, arguing that when technologies such as chatbots are created in this way, they reinforce the idea that emotional and nurturing jobs are inherently feminine.

Accuracy and responsiveness


ELIZA's design, while pioneering for its time, highlights the need to reevaluate the Turing test's relevance in assessing AI capabilities.

In a study titled "Does GPT-4 Pass the Turing Test?", University of California, San Diego researchers Cameron R. Jones and Benjamin K. Bergen compared several AI models, including ELIZA, GPT-3.5, and GPT-4, alongside human participants at imitating human conversation, and highlighted several factors that contributed to ELIZA's surprising performance. The first was its conservative response style, which minimized the risk of providing misleading or incorrect information that could have exposed it as a machine: because ELIZA builds each response around a single keyword from the user, its accuracy is limited to syntactic responses built on predefined patterns. The researchers also observed an absence of traits characteristic of modern AI, such as helpfulness or excessive verbosity, which led interrogators to read ELIZA as an uncooperative human.[46] Ultimately, they argued that the Turing test, to which ELIZA owes much of its renown, needs to be critically reevaluated in light of findings about both current and historical AI systems, since the test's parameters were put forward by Turing in 1950 as a thought experiment rather than an actual test.[47] The same study also observed that ELIZA ignores grammatical structure and sentence context; its inability to parse sentence structure leads to less meaningful responses, a limitation that can also stem from its lack of knowledge about the topic being discussed. Unlike modern models, ELIZA cannot place statements in a broader context.
ELIZA's responsiveness is scripted and rigid, a limitation rooted in its underlying design: changing its programming would not change its response patterns and sentence handling, only add complexity. As the University of Birmingham's excerpt What ELIZA Lacks illustrates, when a user states "Computers worry me", ELIZA cannot relate this to any broader context and cannot generalize between that statement and "I'm not worried much by computers".[48][47] Critics argue this gap calls for AI capable of more meaningful guidance.

References

  1. ^ a b "Alan Turing at 100". Harvard Gazette. 13 September 2012. Retrieved 2016-02-22.
  2. ^ a b Berry, David M. (2018). "Weizenbaum, ELIZA and the End of Human Reason". In Baranovska, Marianna; Höltgen, Stefan (eds.). Hello, I'm Eliza: Fünfzig Jahre Gespräche mit Computern [Hello, I'm Eliza: Fifty Years of Conversations with Computers] (in German) (1st ed.). Berlin: Projekt Verlag. pp. 53–70. ISBN 9783897334670.
  3. ^ a b c d e f g Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. New York: W. H. Freeman and Company. ISBN 0-7167-0464-1.
  4. ^ a b Norvig, Peter (1992). Paradigms of Artificial Intelligence Programming. New York: Morgan Kaufmann Publishers. pp. 151–154. ISBN 1-55860-191-0.
  5. ^ a b c Weizenbaum, Joseph (January 1966). "ELIZA--A Computer Program for the Study of Natural Language Communication Between Man and Machine" (PDF). Communications of the ACM. 9: 36–45. doi:10.1145/365153.365168. S2CID 1896290 – via universelle-automation.
  6. ^ a b c Baranovska, Marianna; Höltgen, Stefan, eds. (2018). Hello, I'm Eliza fünfzig Jahre Gespräche mit Computern (1st ed.). Bochum: Bochum Freiburg projektverlag. ISBN 978-3-89733-467-0. OCLC 1080933718.
  7. ^ "ELIZAGEN - The Original ELIZA". sites.google.com. Retrieved 2021-05-31.
  8. ^ Berry, David M. (2023-11-06). "The Limits of Computation: Joseph Weizenbaum and the ELIZA Chatbot". Weizenbaum Journal of the Digital Society. 3 (3). doi:10.34669/WI.WJDS/3.3.2. ISSN 2748-5625.
  9. ^ a b Dillon, Sarah (2020-01-02). "The Eliza effect and its dangers: from demystification to gender critique". Journal for Cultural Research. 24 (1): 1–15. doi:10.1080/14797585.2020.1754642. ISSN 1479-7585. S2CID 219465727.
  10. ^ a b Bassett, Caroline (2019). "The computational therapeutic: exploring Weizenbaum's ELIZA as a history of the present". AI & Society. 34 (4): 803–812. doi:10.1007/s00146-018-0825-9.
  11. ^ "The Samantha Test". The New Yorker. Archived from the original on 2020-07-31. Retrieved 2019-05-25.
  12. ^ Marino, Mark (2006). Chatbot: The Gender and Race Performativity of Conversational Agents. University of California.
  13. ^ Marino, Mark C.; Berry, David M. (2024-11-03). "Reading ELIZA: Critical Code Studies in Action". Electronic Book Review.
  14. ^ Colby, Kenneth Mark; Watt, James B.; Gilbert, John P. (1966). "A Computer Method of Psychotherapy". The Journal of Nervous and Mental Disease. 142 (2): 148–52. doi:10.1097/00005053-196602000-00005. PMID 5936301. S2CID 36947398.
  15. ^ a b Shah, Huma; Warwick, Kevin; Vallverdú, Jordi; Wu, Defeng (2016). "Can machines talk? Comparison of Eliza with modern dialogue systems" (PDF). Computers in Human Behavior. 58: 278–95. doi:10.1016/j.chb.2016.01.004.
  16. ^ a b Shrager, Jeff; Berry, David M.; Hay, Anthony; Millican, Peter (2022). "Finding ELIZA - Rediscovering Weizenbaum's Source Code, Comments and Faksimiles". In Baranovska, Marianna; Höltgen, Stefan (eds.). Hello, I'm Eliza: Fünfzig Jahre Gespräche mit Computern (2nd ed.). Berlin: Projekt Verlag. pp. 247–248.
  17. ^ Weizenbaum 1976, p. 188.
  18. ^ Epstein, J.; Klinkenberg, W. D. (2001). "From Eliza to Internet: A brief history of computerized assessment". Computers in Human Behavior. 17 (3): 295–314. doi:10.1016/S0747-5632(01)00004-8.
  19. ^ a b Wortzel, Adrianne (2007). "ELIZA REDUX: A Mutable Iteration". Leonardo. 40 (1): 31–6. doi:10.1162/leon.2007.40.1.31. JSTOR 20206337. S2CID 57565169.
  20. ^ a b c d e f g h i Weizenbaum, Joseph (1966). "ELIZA—a computer program for the study of natural language communication between man and machine". Communications of the ACM. 9: 36–45. doi:10.1145/365153.365168. S2CID 1896290.
  21. ^ Wardrip-Fruin, Noah (2009). Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge, Massachusetts: MIT Press. p. 33. ISBN 9780262013437. OCLC 827013290.
  22. ^ Markoff, John (2008-03-13), "Joseph Weizenbaum, Famed Programmer, Is Dead at 85", The New York Times, retrieved 2009-01-07.
  23. ^ Weizenbaum, Joseph (1976). Computer power and human reason: from judgment to calculation. W. H. Freeman. p. 7.
  24. ^ Meyer, Robinson (2015-06-23). "Even Early Focus Groups Hated Clippy". The Atlantic. Retrieved 2023-11-07.
  25. ^ Megan, Garber (Jun 9, 2014). "When PARRY Met ELIZA: A Ridiculous Chatbot Conversation From 1972". The Atlantic. Archived from the original on 2017-01-18. Retrieved 19 January 2017.
  26. ^ Walden, David; Van Vleck, Tom, eds. (2011). "Compatible Time-Sharing System (1961-1973): Fiftieth Anniversary Commemorative Overview" (PDF). IEEE Computer Society. Retrieved February 20, 2022. Joe Wiezenbaum's most famous CTSS project was ELIZA
  27. ^ Wardip-Fruin, Noah (2014). Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge: The MIT Press. p. 33. ISBN 9780262013437 – via eBook Collection (EBSCOhost).
  28. ^ "Coders at Work: Bernie Cosell". codersatwork.com.
  29. ^ a b "elizagen.org". elizagen.org.
  30. ^ Big Computer Games: Eliza – Your own psychotherapist at www.atariarchives.org.
  31. ^ "Chatbots & Dialogue Systems" (PDF). stanford.edu. Retrieved 6 April 2023.
  32. ^ Berry, David M. (2014). Critical theory and the digital. London: Bloomsbury Publishing. ISBN 978-1-4411-1830-1. OCLC 868488916.
  33. ^ maschafilm. "Content: Plug & Pray Film – Artificial Intelligence – Robots". plugandpray-film.de.
  34. ^ Avidan, David (2010), Collected Poems, vol. 3, Jerusalem: Hakibbutz Hameuchad, OCLC 804664009.
  35. ^ "lol:> psychoanalyze-pinhead". IBM. Archived from the original on October 23, 2007.
  36. ^ Edwards, Benj (2023-12-01). "1960s chatbot ELIZA beat OpenAI's GPT-3.5 in a recent Turing test study". Ars Technica. Retrieved 2023-12-03.
  37. ^ Jones, Cameron R.; Bergen, Benjamin K. (2024-04-20), Does GPT-4 pass the Turing test?, arXiv:2310.20216
  38. ^ Hofstadter, Douglas R. (1995). Fluid concepts & creative analogies: computer models of the fundamental mechanisms of thought. Fluid Analogies Research Group. New York, NY: Basic Books. ISBN 978-0-465-02475-9.
  39. ^ Atillah, Imane El (March 31, 2023). "Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change". Euro News. Retrieved November 28, 2024.
  40. ^ Roose, Kevin (October 24, 2024). "Can A.I. Be Blamed for a Teen's Suicide?". The New York Times.
  41. ^ Tassi, Paul. "'Deus Ex: Mankind Divided's Ending Is Disappointing In A Different Way". Forbes. Retrieved 2020-04-04.
  42. ^ "The Quietus | Opinion | Black Sky Thinking | HyperNormalisation: Is Adam Curtis, Like Trump, Just A Master Manipulator?". The Quietus. 6 October 2016. Retrieved 26 June 2021.
  43. ^ McCarthy, Tyler (2018-01-18). "Young Sheldon Episode 12 recap: The family's first computer almost tears it apart". Fox News. Retrieved 2018-01-24.
  44. ^ O'Connor, Alice (2019-08-01). "The next Zachtronics game is Eliza, a visual novel about AI". Rock Paper Shotgun. Retrieved 2019-08-01.
  45. ^ Machkovech, Sam (August 12, 2019). "Eliza review: Startup culture meets sci-fi in a touching, fascinating tale". Ars Technica. Retrieved August 12, 2019.
  46. ^ Jones, Cameron; Bergen, Benjamin. "Does GPT-4 Pass the Turing Test?". ResearchGate.
  47. ^ a b Biever, Celeste (2023-07-25). "ChatGPT broke the Turing test — the race is on for new ways to assess AI". Nature. 619 (7971): 686–689. doi:10.1038/d41586-023-02361-7.
  48. ^ "What Eliza Lacks". poplogarchive.getpoplog.org. Retrieved 2024-11-29.
