In reading through this article, more than a few thoughts, questions, and issues presented themselves to me. The first point on which I found myself disagreeing with Searle was less a disagreement with something he said and more a disagreement with one of his unstated assumptions. In his response to "The Combination Reply" from Berkeley and Stanford, he discusses intentionality as it relates to the "mental" world. While Newell offers a definition, that "the essence of the mental is the operation of a physical symbol system," I did not find it helpful in understanding what the mental is in humans. This definition makes sense within a mechanistic, operationalized understanding of a robot, or of something "artificial" (nonhuman), but I find it insufficient for understanding human functioning. If the definition we start from is insufficient to describe the human nature of "mentality," then whatever mentality we attribute to a robot by that definition will, of necessity, be fundamentally different from what a human possesses. To put it more succinctly: the question of human mental processes needs to be answered more fully, as a prerequisite, before we can begin to understand and converse about what a program that possesses intentionality and mentality would look like.

One of the replies, the "Many Mansions Reply" from Berkeley, summed up a question that had occurred to me much earlier in the article. Here the point was made that the question of strong AI would eventually be moved past, given the astronomical leaps in technology that were sure to come. I think this reply is much stronger than Searle's response to it. The question of AI HAS changed drastically in the more than forty years since this article was published. Algorithms now pass for a "weak" AI that plays an intimate role in many societies, including American society. What I think this does is change how the article should be read. Through the lens of 2021, it should be read not as a discussion of what is possible and probable for AI, but as a discussion of the implications for how humans approach their own "mentality," "intentionality," and, on a more global level, their essential humanity. Should AI be something that approximates human cognition, or perhaps surpasses it, or should it be something completely different, used only analogously to human cognition? As I touched on earlier, I think we should not even begin to address this question yet: what human cognition and intentionality are, what they are used for, and why we should be approximating or creating a new form of them are all questions that need to be answered if we are to ethically decide what AI is and what it should be used for.
