Saturday, February 15, 2014

Phase II Plan

I modified the plan from the previous post.
  • In the previous post, I thought I'd use the depth map for object recognition, but in this plan I'll use regular optical maps, as the depth map and the optical map are different modalities and it is harder to imagine what is going on in depth perception from the human (phenomenological) point of view.
  • In this plan, I dropped machine learning and will use more hard-wired approaches, as I'd like to avoid unpredictable aspects.


Again recapitulating the Phase II part of my research plan here:

Phase II: Recognizing Spelke's Objects

  • Basic Ideas
    • Spelke's Object: a coherent, solid, and inert bundle of features of a certain size that persists over time.
      Features: colors, shapes (jagginess), texture, visual depth, etc.
    • While recognition of Spelke's objects may be preprogrammed, recognized objects become objects of categorization by means of unsupervised learning.  In this process, hierarchical (deep) learning would proceed from the categorization of primitive features to the re-categorization of the categorized patterns.
    • Object recognition will be carried out during the robot's spontaneous actions.
    • The robot shall gather information preferentially on 'novel' objects (curiosity-driven behavior) ('novelty' to be defined).
The following is the new plan for Phase II.

Robot Basics
  • Fish-like robot swimming in a 3D space
    Experiments will be done with the SigVerse robot simulator.
Environment
  • Keep the robot from wandering away by making it attracted to objects on the floor (see below).
  • There are passively movable objects on the ground.
Robot Vision
  • Line of Sight and 2D depth sensors (SigVerse)
  • Static images
  • Optical flow (temporal Δ; sketched after this list)
  • Line of sight depth sensor (to avoid collision)
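
As an aside on the optical-flow input, the following is a minimal sketch (not SigVerse code; it assumes grayscale camera frames and the availability of OpenCV and NumPy, and the Farneback parameters are arbitrary choices) of how the temporal Δ could be computed.

    import cv2
    import numpy as np

    def optical_flow(prev_gray, curr_gray):
        """Dense optical flow (temporal delta) between two grayscale frames."""
        # Farneback dense flow: per-pixel (dx, dy) displacement, shape (H, W, 2).
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)  # apparent speed per pixel
        return flow, magnitude
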
Central visual field (CVF)
  • CVF (gaze) moves randomly (saccades) in the fixed visual field.
  • CVF is attracted to information-dense areas.
  • Information density is measured by the density of extracted features such as line segments and optical flow.
  • CVF is 'bored' with each attractor as time passes.
  • CVF allows high-resolution feature extraction (a gaze-control sketch follows this list).
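
A minimal sketch of the gaze control described above (illustrative only; density_map is assumed to come from the feature extraction below, boredom is an array of the same shape tracking habituation, and the constants are arbitrary):

    import numpy as np

    rng = np.random.default_rng()

    def select_gaze(density_map, boredom, boredom_rate=0.1, noise=0.05):
        """Pick the next CVF position: information density attracts the gaze,
        boredom suppresses recently attended locations, and a small random
        term produces saccade-like jumps."""
        attraction = density_map * (1.0 - boredom) + noise * rng.random(density_map.shape)
        y, x = np.unravel_index(np.argmax(attraction), attraction.shape)
        boredom[y, x] = min(1.0, boredom[y, x] + boredom_rate)  # get 'bored' with the attractor
        boredom *= 0.99                                         # boredom slowly decays elsewhere
        return (y, x), boredom
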
Peripheral visual field
  • Information density is measured with low-resolution feature extraction.
Feature extraction
  • Line segments (using, e.g., SIFT or Gabor filters; see the sketch below)
  • Border ownership/Figure-Ground separation (see below)
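
For the line-segment features, one possibility is a bank of Gabor filters; the sketch below uses OpenCV, and the orientations and kernel parameters are arbitrary assumptions rather than settled design choices.  Information density for the CVF and the peripheral field could then be a local sum of these responses (at full and reduced resolution, respectively).

    import cv2
    import numpy as np

    def oriented_edge_maps(gray, n_orientations=4):
        """Respond to line segments at several orientations with Gabor filters."""
        responses = []
        for i in range(n_orientations):
            theta = i * np.pi / n_orientations
            kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                        lambd=10.0, gamma=0.5, psi=0)
            responses.append(cv2.filter2D(gray.astype(np.float32), -1, kernel))
        return np.stack(responses)  # (n_orientations, H, W) oriented edge responses
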
Basic activities (hard-wired)
  • Randomly change direction
  • The new direction is attracted toward the line of sight (gaze direction)
  • If reward increases after a direction change, then accelerate until the next direction change.
  • If locomotion decreases reward, then decelerate (with water resistance) and change direction.
  • If depth sensor predicts collision, then decelerate (with water resistance) and change direction.
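
The hard-wired rules above could look roughly like this (a pseudocode-level sketch; robot and its methods are hypothetical placeholders, not the SigVerse API):

    def locomotion_step(robot, reward, prev_reward):
        """One step of the hard-wired behavior: turn, accelerate when the reward
        is rising, decelerate (with water resistance) otherwise or before a collision."""
        if robot.depth_sensor_predicts_collision():
            robot.decelerate()          # water resistance slows the robot down
            robot.change_direction()    # turn away before hitting the obstacle
        elif reward > prev_reward:
            robot.accelerate()          # keep going while the reward is increasing
        elif reward < prev_reward:
            robot.decelerate()
            robot.change_direction()    # locomotion made things worse; try another heading
        elif robot.time_to_turn():
            robot.change_direction(bias=robot.line_of_sight())  # random turn attracted by gaze
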
Rewards
  • Increase in information density in CVF gives a positive reward (curiosity; aesthetics)
  • The shock of a collision gives a negative reward.
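
The reward signal itself might be computed along these lines (illustrative; the density values and the collision flag are assumed to come from the vision and contact sensing described above, and the gains are arbitrary):

    def compute_reward(cvf_density, prev_cvf_density, collided,
                       curiosity_gain=1.0, collision_penalty=5.0):
        """Positive reward for rising information density in the CVF (curiosity),
        negative reward for the shock of a collision."""
        reward = curiosity_gain * max(0.0, cvf_density - prev_cvf_density)
        if collided:
            reward -= collision_penalty
        return reward
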
Spelke's object detection
  • Uses figure-ground separation algorithms inspired by visual information processing in the brain.  Ref. 
  • Spelke's objects are recognized as figure-like lumps detected by figure-ground separation algorithms.
  • Optical flow may also be used for figure-ground separation.
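
As a crude stand-in (not the brain-inspired border-ownership algorithm itself), coherently moving regions of the optical flow could be segmented as figure-like lumps; in the sketch below the motion threshold and minimum area are arbitrary assumptions.

    import cv2
    import numpy as np

    def figure_like_lumps(flow_magnitude, motion_threshold=1.0, min_area=50):
        """Label connected regions of coherent motion as candidate Spelke objects."""
        figure_mask = (flow_magnitude > motion_threshold).astype(np.uint8)
        n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(figure_mask)
        return [(centroids[i], stats[i, cv2.CC_STAT_AREA])  # (centroid, area) per lump
                for i in range(1, n_labels)                  # label 0 is the background
                if stats[i, cv2.CC_STAT_AREA] >= min_area]
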


Sunday, February 2, 2014

Phenomenology of Artefacts

In this memorandum, I propose a new scientific endeavor that I call “phenomenology of artefacts.”  The gist of the endeavor is that (human) epistemology would be emulated by constructing artefacts with epistemic functions similar to those of human beings.  While it is a kind of epistemology, it is inspired by (Husserlian) phenomenology, as the enterprise of Husserl was to construct a rigorous science (strenge Wissenschaft) in which objective knowledge is established.  The basic idea of the phenomenology of artefacts is the parallel that while Husserl started with introspection of his own mind to obtain knowledge, the ‘mental state’ of an artificial intelligence (AI) can be inspected by external observers (originally phrased as “an AI can inspect its own ‘mental states’”; modified 2015-07-12, see the footnote* below).  Thus, I believe, AI researchers can draw inspiration from the ideas of Husserlian phenomenology.  I’ll give a few points in this regard in the following.

Transcendence

Husserl regarded his phenomenology as transcendental.  Since one starts from the phenomenal (internal) world, one can never directly reach, or be sure about, external objects.  Yet Husserl’s attempt was to transcend this barrier and to obtain objective knowledge of the external world.  The situation can be similar for an AI.  Though its perceptual states could be causally explained as sensory information processing, it is not at all clear how an AI could construct knowledge of the world from perceptual imagery.  This is especially true when it is not provided with prior knowledge of the world and has to learn from scratch.  For example, how can an AI tell that there are 3D physical objects out there from the transient perceptual patterns given as part of its internal states?  When it establishes a model of the 3D physical world and becomes ‘sure’ about external objects, it has transcended its internal perceptual imagery.

Perception and kinesthesis

Husserl emphasized the role of perception (especially visual perception) in obtaining objective knowledge.  He also emphasized the relation of perception to our motion.  When we perceive an object while moving, the perceptual images change over time in a certain way.  The way they change can be learned and becomes predictable.  To put it very simply, we learn about the external 3D world as we move and perceive.  Husserl used the term kinesthesis, which in a narrow sense means bodily sensation as we move, but may refer to any motion-related sensation (including visual perception).

Time-consciousness

“According to Husserl, the most fundamental consciousness, presupposed in all other forms and structures of consciousness, is the consciousness of time.” [Bernet et al.] (p.101)   If knowledge of the world is obtained through kinesthetic interaction with the world, this primacy of time-consciousness apparently holds for the phenomenology of artefacts as well, but the aspect has not been fully explored by AI researchers.
  Husserl distinguished three moments of time-consciousness, namely, 1) primal sensation corresponding to the now-moment, 2) retention, or ‘a comet tail of memory’, and 3) protention, or expectation.  Putting philosophical considerations aside, these moments may be understood from a neural perspective.  The cortex (brain) can be regarded as a ‘recurrent neural network’ learning temporal patterns.  At any moment, as it learns temporal patterns, it retains information about its state in the immediate past (retention) and expects its state in the immediate future (protention).
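
To make the analogy concrete, the following toy recurrent network (NumPy only, with random untrained weights; purely illustrative) carries a trace of the immediate past in its hidden state (retention) while emitting a prediction of the next input (protention):

    import numpy as np

    rng = np.random.default_rng(0)

    class ToyRNN:
        """Minimal recurrent net: hidden state ~ retention, next-input prediction ~ protention."""
        def __init__(self, n_in, n_hidden):
            self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
            self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
            self.W_out = rng.normal(0.0, 0.1, (n_in, n_hidden))
            self.h = np.zeros(n_hidden)          # retention: trace of the immediate past

        def step(self, x):
            self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
            return self.W_out @ self.h           # protention: expectation of the next input
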

Concluding remarks

I often encounter useful insights when I read books on Husserl, and there may be more inspiration for AI researchers in his work.  However, even if there is no more inspiration there, I believe the phenomenology of artefacts could stand by itself, and it could even replace Husserl’s attempt.  Human introspection is notoriously vague and opaque and does not seem suited to the rigorous pursuit of science.  Artefacts, on the other hand, can be designed precisely and observed without opacity.  Thus, I advocate the phenomenology of artefacts as a way toward the rigorous science Husserl envisaged.

Reference

[Bernet et al.] An Introduction to Husserlian Phenomenology, Rudolph Bernet, Iso Kern & Eduard Marbach, Northwestern University Press (1993).
[Gallagher et al.] The Phenomenological Mind (2nd edition), Shaun Gallagher & Dan Zahavi, Routledge (2012).
[J.J.Gibson] The Ecological Approach to Visual Perception, James J. Gibson, Routledge (1986).

Footnote

* Modified after the comment by Shogo Tanaka that inspection by a machine would lead to the idea of 'consciously' reflecting on an internal representation of the world.  Phenomenology avoids such an idea and stresses pre-reflective consciousness.  I fully agree with this comment.  Machine learning-based emergent AI does not normally make symbolic representations of what it is perceiving or doing.

The PDF version of this content (version 2014-02-02) is available here.