Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/27066
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Lapointe, Sandra | - |
dc.contributor.author | Raman, Siddharth | - |
dc.date.accessioned | 2021-10-14T20:10:30Z | - |
dc.date.available | 2021-10-14T20:10:30Z | - |
dc.date.issued | 2021 | - |
dc.identifier.uri | http://hdl.handle.net/11375/27066 | - |
dc.description.abstract | The purpose of this thesis is to answer the question of why chatbot computer programs are often not very good at communicating in human natural language. It is argued that one possible reason chatbots are poor conversationalists is that they model communication in terms of encoding and decoding processes alone, whereas human communication also involves making inferences about the mental states of others. Chapter one begins by exploring a popular theory of how communication works, the code model of communication. The code model describes human communication as a matter of speakers encoding thoughts into utterances and listeners decoding utterances to recover a representation of the thought the speaker wanted to communicate. A variation on the code model, referred to as the information-theoretic code model, is also explained. Two arguments against the code model are then presented. Finally, an alternative to the code model is considered: the ostensive-inferential model of communication. Chapter two begins with an explanation of how chatbots work. Chatbots are made up of several components, and the language model component provides them with the ability to produce and interpret utterances. An explanation of how language models work, and of how chatbots can represent the meanings of words, is then provided. The chapter concludes by documenting the fact that chatbots communicate using only encoding and decoding processes, that is, that chatbots communicate within the paradigm of the code model of communication. Chapter three explains how this fact helps account for why chatbots often cannot communicate effectively in human natural language. The poor conversational abilities of chatbots result from the fact that chatbots access only linguistic context, whereas listeners need access to non-linguistic context to grasp utterance meaning. The question of whether chatbots can make inferences about non-linguistic properties of context at all is also considered. It is argued that they cannot, precisely because the neural language models they rely upon for their linguistic competence are natural codes that merely associate percepts with output behaviours through encoding and decoding processes. | en_US |
dc.language.iso | en | en_US |
dc.subject | Artificial Intelligence | en_US |
dc.subject | Philosophy | en_US |
dc.subject | Philosophy of Language | en_US |
dc.subject | Linguistics | en_US |
dc.title | A Contribution to the Philosophy of Artificial Intelligence and Chatbot Communication | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Philosophy | en_US |
dc.description.degreetype | Thesis | en_US |
dc.description.degree | Master of Arts (MA) | en_US |
dc.description.layabstract | The purpose of this thesis is to explore the question of why chatbot computer programs are often not very good at communicating in human natural language. It is argued that one possible reason chatbots are poor conversationalists is that they model communication in terms of encoding and decoding processes alone. Human communication, however, involves making inferences about the mental states of others. | en_US |
Appears in Collections: | Open Access Dissertations and Theses |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
raman_siddharth_s_2021august_maphilosophy.pdf | - | 658.35 kB | Adobe PDF |
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.