Our papers on multimodal sensory representation and emergent emotion have been accepted to the ICDL-EpiRob 2018 conference and its workshop on Continual Unsupervised Sensorimotor Learning, respectively. This post shares the abstracts of our studies and my (biased) highlights of the conference activities.

Quick notes on the conference

  • Deep learning is everywhere, bringing noteworthy applications to different domains: kinesthetic demonstration, affordance learning, and reinforcement learning.
  • Intrinsic motivation is on the rise, with interesting applications in swarm-based game playing. One notable approach transfers (or shares) the motivation value among the swarm members.
  • An excellent keynote was given by Kenji Doya on "What can we further learn from the brain for AI and robotics?". The talk suggested an exciting idea: considering the brain as a heterogeneous multi-agent system that effectively balances energy consumption and data processing.
  • Another keynote speaker, Oliver Brock, suggested that the beauty of a model, as in physics equations, should be considered when performing model selection.

The abstract of the conference paper: Multimodal sensory representation for object classification via a neocortically-inspired algorithm.

This study reports our initial work on multimodal sensory representation for object classification. To form a sensory representation, we used the spatial pooling phase of Hierarchical Temporal Memory, a neocortically-inspired algorithm. The classification task was carried out on the Washington RGB-D dataset, where the employed method extracts non-hand-engineered representations (or features) from two modalities: pixel values (RGB) and depth (D) information. These representations, fused both early and late, were used as inputs to a machine learning algorithm to perform object classification. The obtained results show that using multimodal representations significantly improves classification performance (by 5%) compared to when a single modality is used. The results also indicate that the method is effective for multimodal learning and that different sensory modalities are complementary for object classification. We therefore envision that this method can be employed for object concept formation, which requires multiple sources of sensory information to execute cognitive tasks.
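To make the early-versus-late fusion distinction concrete, here is a minimal NumPy sketch. The `spatial_pool` function is a toy stand-in for the HTM spatial pooler (random receptive fields plus winner-take-all sparsification), not the implementation used in the paper, and the input sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_pool(x, n_columns=256, sparsity=0.02, seed=0):
    """Toy stand-in for HTM spatial pooling: project the input onto random
    binary receptive fields and keep only the top-k most activated columns,
    yielding a sparse distributed representation (SDR)."""
    proj_rng = np.random.default_rng(seed)
    fields = proj_rng.random((n_columns, x.size)) < 0.1  # random connections
    overlaps = fields @ x                                # column activations
    k = max(1, int(sparsity * n_columns))
    sdr = np.zeros(n_columns, dtype=np.uint8)
    sdr[np.argsort(overlaps)[-k:]] = 1                   # winner-take-all
    return sdr

# Two modalities for one object view: flattened RGB and depth patches.
rgb = rng.random(32 * 32 * 3)
depth = rng.random(32 * 32)

# Early fusion: concatenate the raw modalities, then pool once.
early = spatial_pool(np.concatenate([rgb, depth]), seed=1)

# Late fusion: pool each modality separately, then concatenate the SDRs.
late = np.concatenate([spatial_pool(rgb, seed=2), spatial_pool(depth, seed=3)])

print(early.shape, late.shape)  # (256,) (512,)
```

Either fused vector can then be fed to an off-the-shelf classifier; the point of the sketch is only where the concatenation happens relative to the pooling step.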

The abstract of the workshop paper: Emergent emotion as a regulatory mechanism for a cognitive task implemented on the iCub robot.

In this study, we employed an emergent emotion model, based on neuro-computational energy regulation, to carry out a cognitive task. The experiment involved visual recall and was performed by a physically embodied agent (the iCub humanoid robot). In this task, the agent operates its associative memory (a Higher-Order Hopfield Network) to form a stimulus-energy association for each perceived input. The agent then uses these associations to derive an internal reward signal in a reinforcement learning framework and makes a sequence of actions (i.e., coordinated head-eye movements) to discover the states in which minimal computational energy is required to perform the task. The results indicate that the agent successfully utilizes this model to act in an unknown environment by following the energy minimization principle. On the basis of the obtained results, we suggest that exploiting this approach will give rise to rich applications in developmental robotics, where emergent (that is, not reflexive) behaviors are necessary for higher cognitive functions such as planning and decision making.
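For readers curious how an energy signal can drive action selection, here is a minimal sketch assuming a standard (second-order) Hopfield energy and plain tabular Q-learning; the paper's higher-order network and the iCub's head-eye control loop are abstracted away, and all sizes and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each gaze state presents a binary stimulus pattern,
# and a Hopfield-style associative memory assigns it a computational energy.
n_units, n_states, n_actions = 16, 5, 5       # action i = look at state i
patterns = rng.choice([-1, 1], size=(n_states, n_units))
W = (patterns.T @ patterns) / n_units         # Hebbian weight matrix
np.fill_diagonal(W, 0)

def energy(s):
    """Standard Hopfield energy; lower means a cheaper-to-recall stimulus."""
    return -0.5 * s @ W @ s

# Internal reward: the agent prefers stimuli that cost less energy to recall.
rewards = np.array([-energy(p) for p in patterns])

# Plain tabular Q-learning over gaze actions, driven by the internal reward.
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(2000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, r = a, rewards[a]             # looking at i moves the gaze to i
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

print("lowest-energy stimulus:", int(rewards.argmax()))
print("gaze target preferred by the agent:", int(Q[0].argmax()))
```

After learning, the greedy policy settles on the stimulus with minimal associative-memory energy, which is the energy-minimization behavior the abstract describes, here reduced to its simplest tabular form.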

To sum up, the atmosphere of the conference was productive for the attendees, especially for a first-timer. Note that there is a special issue call in the IEEE Transactions on Cognitive and Developmental Systems on Continual Unsupervised Sensorimotor Learning.