The Frame Problem = The Underestimation of What It Means to Be Human

I am currently taking a very interesting course in the philosophy of artificial intelligence. This week I was introduced to the frame problem of AI, and I enjoyed the reading so much that I thought I would share my post on the assignment with you.

According to Dennett, the frame problem is an epistemological problem rather than a computational problem. Why is this? Epistemology is the theory of knowledge: what is knowledge, and how do we obtain and validate it? If we are to transfer knowledge to a robot, we have to be able to answer the philosophical questions related to knowledge. How do human beings come to know, and act in, a common-sensical way? I think Dennett shows clearly that this is a very hard (if not impossible) question to answer and, therefore, equally hard or impossible to implement in a robot.
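To make the problem a little more concrete, here is a minimal sketch in Python, loosely in the spirit of Dennett’s robot-and-bomb story from the paper. The fact names and the `pull_wagon_out` action are my own illustrative assumptions, not taken from Dennett’s text:

```python
# A minimal sketch of the frame problem, loosely in the spirit of Dennett's
# robot-and-bomb story. The fact names and the pull_wagon_out action are my
# own illustrative assumptions, not taken from the paper.

state = {
    "wagon_in_room": True,
    "bomb_on_wagon": True,
    "bomb_in_room": True,   # a derived fact, waiting to go stale
    "wall_is_white": True,  # true, but irrelevant to the task
}

def pull_wagon_out(state):
    """Effect axiom: the one thing this action is defined to change."""
    new_state = dict(state)  # naive assumption: everything else persists
    new_state["wagon_in_room"] = False
    return new_state

after = pull_wagon_out(state)
print(after["bomb_in_room"])  # True -- but the bomb rode out on the wagon!

# The alternative -- writing explicit "frame axioms" for what does NOT
# change -- needs roughly one axiom per fact/action pair, most as pointless
# as "pulling the wagon does not repaint the wall". Deducing all such
# implications is exactly what kept Dennett's robots busy while the bomb
# ticked.
```

Neither option works: silently copying the old state leaves stale beliefs behind, while spelling out everything that stays the same drowns the robot in trivialities.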

Firstly, we are biological beings: we are born knowing things about the world without ever having to be (explicitly) taught them (e.g. that a smile signals friendliness). We are not a tabula rasa like a robot, where everything has to be programmed in for it to act intelligently (even that it has to smile to look friendly). But we also learn from experience, and from connecting one experience with others; sometimes we can even connect two seemingly unrelated experiences and use that to solve problems. How can this complex learning process ever be “taught” to robots?

One way of approaching this is through introspection, or to use Dennett’s words, “an examination of what is presented or given to consciousness” [1, p. 186]. But, as Dennett writes, introspection has limitations: we cannot observe or explain everything we do. “For some time now we have known better: we have conscious access only to the upper surface, as it were, of the multilevel system of information-processing that occurs in us” [1, p. 187]. Even when we seem to be deliberately thinking through a difficult task, we cannot explain in detail how we solved it. And even if we plan the problem-solving process down to the most meticulous detail, we may still encounter unpredictable “surprise” problems. Human beings are flexible enough to deal with these, but how can we ever program a machine to deal with problems that we, the people who build it, were not even aware of or prepared for in the first place?

Secondly, the real world is full of noise, but thankfully our brains are expert at filtering this information so that we are not overloaded. Human beings are very good at noticing the things that matter and ignoring the many things that do not. What counts as relevant information depends, of course, on what we plan to do, the context, and so on. How do you prepare a machine for every single situation it might encounter? And beyond that, how do we prepare the machine for an ever-changing world? This relates to the qualification problem, which, according to Dennett, is a very important part of the frame problem.
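A toy sketch may help here. The qualification problem is that the preconditions of any real-world action can never be exhaustively listed; McCarthy’s classic example is a potato in a car’s tailpipe. The code below is my own hypothetical illustration (the predicate names are invented), not anything from Dennett’s paper:

```python
# A toy illustration of the qualification problem (the predicate names are
# my own hypothetical choices). Every precondition list for a real-world
# action is open-ended: each "unless" clause we add invites another.

def can_start_car(state):
    return (
        state["has_key"]
        and not state["battery_dead"]
        and not state["tank_empty"]
        and not state["potato_in_tailpipe"]  # McCarthy's famous qualification
        # ...and indefinitely many more "unless" clauses we never thought of
    )

print(can_start_car({
    "has_key": True,
    "battery_dead": False,
    "tank_empty": False,
    "potato_in_tailpipe": False,
}))  # True -- until the world produces a qualification we failed to list
```

However many clauses we write, the world can always produce a circumstance we failed to anticipate, which is precisely the flexibility gap between humans and machines described above.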

Thirdly, according to Dennett, another aspect of the frame problem is the problem of induction: “the problem of having good expectations about any future events, whether they are one’s own actions, the actions of another agent, or mere happenings of nature” [1, p. 194]. How do we answer the general question: “given that I believe all this (have all this evidence), what ought I to believe as well (about the future or about unexamined parts of the world)?” (ibid.) You need a vast amount of knowledge and experience to answer this question (a semantic problem), and if you are a robot, that information has to be stored and readily accessible (a syntactic problem). Can we ever give a robot enough experience to answer this question intelligently? And even if it could answer it, how could it represent that knowledge effectively?
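To give a feel for the syntactic side, here is a small sketch of the difference between knowledge that is merely stored and knowledge that is readily accessible. The belief base and the topic index are my own toy constructions, not from the paper:

```python
from collections import defaultdict

# A toy contrast between "stored" and "readily accessible" knowledge.
beliefs = [
    ("birds", "most birds can fly"),
    ("ice", "ice is slippery"),
    ("matches", "striking a dry match lights it"),
    # ...imagine millions more entries
]

# Naive access: scan every belief for each query -- hopeless at scale.
def lookup_scan(topic):
    return [fact for t, fact in beliefs if t == topic]

# Indexed access: organize beliefs so relevant ones are cheap to reach.
index = defaultdict(list)
for topic, fact in beliefs:
    index[topic].append(fact)

def lookup_indexed(topic):
    return index[topic]  # direct lookup by topic, instead of a full scan

print(lookup_indexed("ice"))  # ['ice is slippery']
# The catch: real relevance rarely aligns with a single topic key, and
# deciding which keys matter is the frame problem all over again.
```

The indexing trick only pushes the problem back a step: someone still has to decide, in advance, which keys will turn out to be relevant.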

Lastly, I think it is important to keep in mind that human beings make mistakes, so we should expect nothing less of a robot. But the question is what kinds of mistakes we can tolerate, because when it comes to responsibility, who should take the blame: the machine with a “mind” or the programmer? This is also my first question. Another question I have relates to the concept of a cognitive wheel. Even if we manage to mimic the cognitive subcomponents of the brain, we are still not one step closer to understanding how human common-sense reasoning is accomplished. My question is: why does this matter? Why should we aim to understand human sense-making with the help of robots? If we stop trying to create common-sense-making human beings out of robots, then we can also ignore this question and simply enjoy the benefits of robots being the square machines that they are.

Another thing I have been thinking about is the obsession with making robots “human-like” and “excessively smart”. This must surely be a gendered question because, evidently, the field of AI has been, and still is, dominated by men. I am convinced that this gendered aspect has affected everything related to computing and AI. I guess you could say that AI is men’s attempt to beat women at the one thing a woman can do that a man cannot, namely to create life.

Reference

[1] D. Dennett, “Cognitive Wheels: The Frame Problem of AI,” in Minds, Machines and Evolution, C. Hookway, Ed. Cambridge: Cambridge University Press, 1984.
