Friday, December 18, 2015

Thought on Architecture

The recent posts showed the agents under development in action.
Perhaps it is time to stop and think before continuing the development...

For the moment, the agents have a very simple architecture, with a blackboard-style internal communication system.  Namely, an agent has an input buffer and a state buffer, which 'codelets' can read from and write to, respectively.  As both buffers are Python dictionaries (hash tables), they can contain any data, accessed by names/keys.

Codelets are registered as 'rules,' each of which has a condition part and an action part.  In each execution cycle, the codelets/rules whose conditions are met 'fire,' and their actions may try to modify the content of the state buffer.  Each codelet has a score, and if more than one codelet tries to modify the same part of the state buffer (the same name/key), the value written by the codelet with the highest score (the winner) is kept.
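The cycle described above can be sketched roughly as follows.  This is a minimal illustration, not the actual code from the repository; names such as Codelet and run_cycle are made up for the example.

```python
class Codelet:
    """An illustrative codelet/rule with a condition part and an action part."""
    def __init__(self, name, score, condition, action):
        self.name = name
        self.score = score          # priority used to resolve write conflicts
        self.condition = condition  # predicate over (input_buffer, state_buffer)
        self.action = action        # returns {key: value} proposals for the state buffer

def run_cycle(codelets, input_buffer, state_buffer):
    """One execution cycle: fire matching codelets, keep the highest-score writes."""
    proposals = {}  # key -> (score, value)
    for c in codelets:
        if c.condition(input_buffer, state_buffer):
            for key, value in c.action(input_buffer, state_buffer).items():
                # Winner-take-all: the codelet with the highest score wins each key.
                if key not in proposals or c.score > proposals[key][0]:
                    proposals[key] = (c.score, value)
    for key, (_, value) in proposals.items():
        state_buffer[key] = value
    return state_buffer
```

For example, if two codelets both fire and try to write the same key, the one with the higher score determines the final value in the state buffer.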



In the recent posts, I mentioned affect terms such as 'reward' and 'urge.'  In fact, they correspond to variables with the same names in the state buffer, and the agents 'smile' when they read a positive 'reward' from the buffer.
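As a hypothetical sketch of this coupling (the function and key names here are illustrative assumptions, not the actual code), a codelet reading the 'reward' variable might drive the facial expression like this:

```python
def express_affect(state_buffer):
    """Set the agent's facial expression from the 'reward' affect variable."""
    # A positive reward in the state buffer triggers a smile.
    if state_buffer.get('reward', 0) > 0:
        state_buffer['expression'] = 'smile'
    else:
        state_buffer['expression'] = 'neutral'
    return state_buffer
```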

While the mechanism is quite simple, generic, and usable, there are a few points to consider for future modifications:

  • Is it appropriate as a cognitive architecture?
    To make it look more like a respectable cognitive architecture, perhaps the affect mechanism should have its own modules.
  • Is it biologically plausible?
    (The answer is, of course, no.)  To make it biologically plausible, the architecture should mimic the architecture of the brain.  Put more simply, real brains are not supposed to use a blackboard architecture.
  • Symbolic representation?
    The current system uses symbolic representation (Python dictionaries) for internal communication.  Besides the fact that the brain does not use such symbolic representation, a vector representation would be preferable when the system is to be controlled by machine learning algorithms.

Tuesday, December 15, 2015

Spontaneous walk

After 'looking into each other' for a while, the agents get bored (develop the urge to move) and start moving.  (The previous post showed footage in which they follow each other, then stop and smile when they meet.)

 
The simulator was made with V-REP and Python.
The code is found on GitHub.

Friday, December 4, 2015

Adding Emotional Expressions to Agents

Facial expressions were added to the agents in my simulation environment.  They follow each other, then stop and smile when they meet (kind of cute :-D).  (The smile is driven by the 'reward' given when an agent meets another.)

 
The simulator was made with V-REP and Python.
The code is found on GitHub.