In this memorandum, I propose a new scientific endeavor that I call "phenomenology of artefacts." The gist of the endeavor is to emulate (human) epistemology by constructing artefacts with epistemic functions similar to those of human beings. While the endeavor belongs to epistemology, it is inspired by (Husserlian) phenomenology, as Husserl's enterprise was to construct a rigorous science (strenge Wissenschaft) in which objective knowledge is established. The basic idea of phenomenology of artefacts rests on a parallel: while Husserl started with introspection of his own mind to obtain knowledge, the 'mental states' of an artificial intelligence (AI) can be inspected (by external observers). (modified 2015-07-12, see the footnote* below) Thus, I believe, AI researchers can draw inspiration from the ideas of Husserlian phenomenology. I'll give a few points in this regard in the following.
Husserl regarded his phenomenology as transcendental. Since one starts from the phenomenal (internal) world, one can never reach, or be sure about, external objects. Yet Husserl's attempt was to transcend this barrier and obtain objective knowledge of the external world. The situation can be similar for an AI. Though its perceptual states could be causally explained as sensory information processing, it is not at all clear how an AI could construct knowledge of the world from perceptual imagery. This is especially true when it is not provided with prior knowledge of the world and must learn from scratch. For example, how can an AI tell that there are 3D physical objects out there from transient perceptual patterns given as part of its internal states? When it establishes a model of the 3D physical world and becomes 'sure' about external objects, it transcends its internal perceptual imagery.
Perception and kinesthesis
Husserl emphasized the role of perception (especially visual perception) in obtaining objective knowledge. He also emphasized the relation of perception to our motion. When we perceive an object while moving, the perceptual images change over time in a certain way. This way of change can be learned and becomes predictable. To put it very simply, we learn about the external 3D world as we move and perceive. Husserl used the term kinesthesis, which in a narrow sense means bodily sensation as we move, but which may refer to any motion-related sensation (including visual perception).
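The idea that the lawful change of percepts under self-motion can be learned lends itself to a small computational sketch. The toy world below (a single observer moving along one axis, whose percept is the apparent size of one object) and all the names in it are illustrative assumptions of mine, not part of Husserl's account or of any particular AI system: a linear model is fitted to predict the next percept from the current percept and the motion command.

```python
import numpy as np

rng = np.random.default_rng(0)

def percept(position):
    # Hypothetical toy percept: apparent size grows as the observer
    # approaches an object placed at position 10 on the axis.
    return 1.0 / (10.0 - position)

# Collect (current percept, motion command) -> next percept pairs
# from random positions and random small movements.
n = 500
positions = rng.uniform(0.0, 5.0, size=n)
actions = rng.uniform(-0.5, 0.5, size=n)
X = np.column_stack([percept(positions), actions, np.ones(n)])
y = percept(positions + actions)

# Fit a linear predictor of the next percept (ordinary least squares).
# The true mapping is nonlinear, so the learned "way of change" is
# only approximate, but the prediction error is small for small motions.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The point of the sketch is only that regularities relating motion to perceptual change can be extracted from experience; any serious model would of course need richer percepts and a nonlinear learner.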
Time-consciousness
"According to Husserl, the most fundamental consciousness, presupposed in all other forms and structures of consciousness, is the consciousness of time." [Bernet et al.] (p.101) If knowledge of the world is obtained through kinesthetic, and hence temporal, interaction with the world, this primacy of time-consciousness apparently holds for phenomenology of artefacts as well; yet this aspect has not been fully explored by AI researchers.
Husserl distinguished three moments of time-consciousness, namely: 1) primal impression, corresponding to the now-moment; 2) retention, or 'a comet's tail' of memory; and 3) protention, or expectation. Putting philosophical considerations aside, these moments may be understood from a neural perspective. The cortex (brain) can be regarded as a 'recurrent neural network' that learns temporal patterns. At any moment, it retains information about its state in the immediate past (retention) and expects its state in the immediate future (protention) as it learns temporal patterns.
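As a purely illustrative sketch of this reading (my own construction, not the author's or Husserl's model), the snippet below uses a fixed random recurrent network in the style of reservoir computing: the decaying hidden state carries a fading trace of recent inputs (retention), and a linear readout trained on that state predicts the next input (protention).

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden = 100

# Fixed random recurrent weights, scaled so that past inputs fade
# gradually rather than exploding -- a 'comet's tail' of memory.
W_in = rng.normal(0.0, 0.5, size=n_hidden)
W = rng.normal(0.0, 1.0, size=(n_hidden, n_hidden))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run(signal):
    # Retention: each hidden state blends the present input
    # with a decaying trace of all previous states.
    h = np.zeros(n_hidden)
    states = []
    for u in signal:
        h = np.tanh(W_in * u + W @ h)
        states.append(h.copy())
    return np.array(states)

t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 25)  # a simple periodic temporal pattern

states = run(signal)
X, y = states[200:-1], signal[201:]  # discard warm-up, align next-step targets

# Protention: a linear readout learns to expect the next input
# from the current (retention-laden) state.
W_out = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ W_out
```

The analogy is loose: the fading hidden state plays the role of retention and the learned next-step readout that of protention, while the primal impression corresponds to the current input itself.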
I often encounter useful insights when I read books on Husserl, and there may be more inspiration in Husserl for AI researchers. However, even if no further inspiration were found there, I believe phenomenology of artefacts could stand by itself; it could even replace Husserl's attempt. Human introspection is notoriously vague and opaque, and does not seem suited to a rigorous pursuit of science. Artefacts, by contrast, can be designed precisely and observed without opaqueness. Thus, I advocate phenomenology of artefacts as a way toward the rigorous science Husserl envisaged.
References
[Bernet et al.] An Introduction to Husserlian Phenomenology, Rudolf Bernet, Iso Kern & Eduard Marbach, Northwestern University Press (1993).
[Gallagher et al.] The Phenomenological Mind (2nd edition), Shaun Gallagher & Dan Zahavi, Routledge (2012).
[J.J.Gibson] The Ecological Approach to Visual Perception, James J. Gibson, Routledge (1986).
Footnote*
Modified after a comment by Shogo Tanaka that inspection by a machine of its own states would lead to the idea of 'consciously' reflecting on an internal representation of the world; phenomenology avoids such an idea and stresses pre-reflective consciousness. I fully agree with this comment. Machine-learning-based emergent AI does not normally form symbolic representations of what it is perceiving or doing.
The PDF version of this content (version 2014-02-02) is available here.