“Research is hard. Extending human knowledge is a difficult task. Discovering new and useful ideas is like attacking a granite cliff with your bare hands. Once in a while a small fragment breaks loose and progress is made.” (D. S. Bernstein)
Neurophysiology and computational modeling of action-observation, (ongoing)
In this collaborative project, we aim to propose a biologically realistic computational model of the monkey Mirror Neuron System by applying machine learning and statistical modeling techniques to data obtained by neuroscientists.
Multimodal sensory representation for object classification (July 2018)
This study reports our initial work on multimodal sensory representation for object classification. To form a sensory representation, we used the spatial pooling phase of Hierarchical Temporal Memory -- a neocortically inspired algorithm. The classification task was carried out on the Washington RGB-D dataset, in which the employed method extracts non-hand-engineered representations (or features) from two modalities: pixel values (RGB) and depth (D) information. These representations, fused both early and late, were used as inputs to a machine learning algorithm to perform object classification. The obtained results show that using multimodal representations significantly improves classification performance (by 5%) compared to using a single modality. The results also indicate that the method is effective for multimodal learning and that the different sensory modalities are complementary for object classification. We therefore envision that this method can be employed for object concept formation, which requires multiple sources of sensory information to execute cognitive tasks.
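As a toy illustration of the two fusion schemes (not the paper's actual pipeline; the representation widths and class scores below are made-up stand-ins), early fusion concatenates the modality representations into one feature vector before a single classifier, while late fusion combines the outputs of per-modality classifiers:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for the binary spatial-pooler outputs of each modality
# (hypothetical widths; the actual SDR sizes depend on the SP configuration).
sdr_rgb = (rng.random(256) < 0.02).astype(int)
sdr_depth = (rng.random(256) < 0.02).astype(int)

# Early fusion: concatenate the modality representations into one
# feature vector that is fed to a single classifier.
early = np.concatenate([sdr_rgb, sdr_depth])

# Late fusion: each modality gets its own classifier; here we average
# mock class-score vectors as a stand-in for score-level combination.
scores_rgb = rng.random(51)    # Washington RGB-D has 51 object categories
scores_depth = rng.random(51)
late = (scores_rgb + scores_depth) / 2
print(early.shape, late.shape)
```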
Spatial pooling as a feature selection method for object recognition, (Feb 2018)
This study reports our work on object recognition using the spatial pooler of Hierarchical Temporal Memory (HTM) as a feature selection method. To perform the recognition task, we employed this pooling method to select features from the COIL-100 dataset. We benchmarked the results against state-of-the-art feature extraction methods while using different amounts of training data (from 5% to 45%). The results indicate that the method is effective for object recognition with small amounts of training data, where state-of-the-art feature extraction methods show limitations.
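A minimal sketch of the spatial-pooling idea, assuming random binary receptive fields and k-winners-take-all inhibition (a deliberate simplification of the full HTM spatial pooler, which additionally learns synapse permanences and boosts under-used columns):

```python
import numpy as np

def spatial_pool(x, fields, k):
    """Simplified spatial-pooling step: each column's overlap is the count
    of active input bits in its random receptive field; the k columns with
    the largest overlap win (k-winners-take-all inhibition)."""
    overlaps = fields @ x                    # overlap score per column
    winners = np.argsort(overlaps)[-k:]      # indices of the k best columns
    sdr = np.zeros(fields.shape[0], dtype=int)
    sdr[winners] = 1                         # sparse distributed representation
    return sdr

rng = np.random.default_rng(0)
n_in, n_cols, k = 64, 128, 8
fields = (rng.random((n_cols, n_in)) < 0.2).astype(int)  # random receptive fields
x = (rng.random(n_in) < 0.3).astype(int)                 # binary input vector
sdr = spatial_pool(x, fields, k)
print(sdr.sum())  # → 8, the fixed sparsity enforced by inhibition
```

The fixed sparsity is what makes the output usable as a compact feature vector regardless of how dense the raw input is.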
Sequential Decision Making Based on Emergent Emotion for a Humanoid Robot, (July 2016)
Certain emotions and moods can be manifestations of complex and costly neural computations that our brain wants to avoid. Instead of reaching an optimal decision based on the facts, we often find it easier, and sometimes more useful, to rely on hunches. In this work, we extend a previously developed model for such a mechanism, in which a simple neural associative memory was used to implement a visual recall system for a humanoid robot. In the model, changes in the neural state consume (neural) energy, and to minimize the total cost and the time to recall a memory pattern, the robot should take the action that leads to minimal neural state change. To do so, the robot needs to learn to act rationally, and for this it has to explore and find out the cost of its actions in the long run. In this study, a humanoid robot (iCub) acts in this scenario. The robot is given the sole action of changing its gaze direction. Through reinforcement learning (RL), the robot learns which state-action pair sequences lead to minimal energy consumption. More importantly, the reward signal for RL is not given by the environment but obtained internally, as the actual neural cost of processing an incoming visual stimulus. The results indicate that reinforcement learning with the internally generated reward signal leads to non-trivial behaviours of the robot, which might be interpreted by external observers as the robot's `liking' of a specific visual pattern, even though the behaviour emerged solely from the neural cost minimization principle.
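The learning loop can be sketched as tabular Q-learning driven by an internally generated reward. Everything below is hypothetical (pattern sizes, hyperparameters), and the neural cost is simplified to a Hamming distance between stored memory patterns:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical binary memory patterns, one per gaze target.
patterns = (rng.random((4, 32)) < 0.5).astype(int)

def internal_reward(s, a):
    """The reward is not environmental: it is the negative neural cost,
    simplified here to the Hamming distance between the memory pattern
    currently active and the one recalled after the gaze shift."""
    return -int(np.sum(patterns[s] != patterns[a]))

n_states = len(patterns)
Q = np.zeros((n_states, n_states))   # action a = shift gaze to target a
alpha, gamma, eps = 0.1, 0.9, 0.2
s = 0
for _ in range(5000):                # epsilon-greedy tabular Q-learning
    a = int(rng.integers(n_states)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = internal_reward(s, a)
    Q[s, a] += alpha * (r + gamma * Q[a].max() - Q[s, a])
    s = a

# The greedy policy favours gaze shifts with minimal neural state change.
policy = Q.argmax(axis=1)
print(policy)
```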
Visual target sequence prediction via HTM on the iCub (May 2016)
In this study, we present our work on sequence prediction of a visual target by implementing a cortically inspired method, namely Hierarchical Temporal Memory (HTM). As a preliminary test, we employ HTM on periodic functions to quantify prediction performance with respect to the number of prediction steps. We then perform simulation experiments on the iCub humanoid robot simulated in the Neurorobotics Platform. We use the robot as an embodied agent, which enables HTM to receive sequences of visual target positions from its camera in order to predict target positions along different trajectories such as horizontal, vertical and sinusoidal. The obtained results indicate that the HTM-based method can be customized for robotics applications that require adapting to spatiotemporal changes in the environment and acting accordingly.
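To illustrate the prediction setting on a periodic signal, here is a first-order stand-in for HTM's high-order sequence memory: discretize a sinusoidal target trajectory into position bins, learn the observed bin-to-bin transition counts, and predict the most frequent successor. The discretization and signal are assumptions for illustration, not the study's configuration:

```python
import numpy as np

# Discretize a sinusoidal target trajectory (period 40 samples) into bins.
t = np.arange(400)
bins = np.digitize(np.sin(2 * np.pi * t / 40), np.linspace(-1, 1, 9))

counts = np.zeros((12, 12))
for cur, nxt in zip(bins[:-1], bins[1:]):
    counts[cur, nxt] += 1                  # accumulate transition statistics

def predict(cur):
    return int(np.argmax(counts[cur]))     # most frequent successor bin

# One-step prediction accuracy over the observed sequence; a first-order
# model is ambiguous where ascending and descending phases share a bin,
# which is exactly the limitation HTM's high-order memory addresses.
hits = sum(predict(c) == n for c, n in zip(bins[:-1], bins[1:]))
acc = hits / (len(bins) - 1)
print(round(acc, 2))
```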
Computational Approaches To Brain Mechanisms Of Action Recognition and Emotion (Aug 2015)
Our brains can solve computationally expensive problems that is common in robotics and artificial intelligence with apparent ease. Among such problems is decision making and action recognition with noisy/unreliable data/information, which we investigate in this study. To be concrete, (1) we investigate the plausibility of emotion as a mechanism to accelerate decision making with the penalty of possibility of making wrong decisions, and (2) we analyze the neural firing of mirror neurons (a set of neurons in the ventral premotor cortex, i.e. area F5 of macaque monkeys that respond to self executed as well as observed actions) to understand their action recognition capacity.
Neural representation in F5: cross-decoding from observation to execution, (Oct 2014- Feb 2015)
Mirror neurons are usually studied and identified with correlation-based analysis. A neuron is deemed a mirror neuron when its firing correlates highly with action execution as well as with the observation of a similar action performed by another monkey or the experimenter. This is fine for the initial filtering of neurons, but it does not answer the question of representational equivalence: does the representation of an executed action also hold during observation of that action? To answer this question, we recorded 192 neurons from the F5 area of a macaque monkey trained to follow a paradigm similar to that used by Rizzolatti and coworkers, with the target objects Cylinder, Sphere, Ring and Cube (Papadourakis, Raos 2014), and adopted a decoding framework.
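The cross-decoding logic can be sketched with a nearest-centroid decoder: fit per-object centroids on execution-condition trials and decode the held-out observation-condition trials; above-chance accuracy would indicate a shared representation. The firing rates below are synthetic stand-ins, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 192
objects = np.arange(4)            # Cylinder, Sphere, Ring, Cube

# Synthetic stand-in for trial firing rates: a shared object tuning
# plus independent noise in both conditions (illustrative only).
tuning = rng.random((4, n_neurons))

def trials(noise):
    y = np.repeat(objects, 10)    # 10 trials per object
    X = tuning[y] + noise * rng.standard_normal((len(y), n_neurons))
    return X, y

X_exec, y_exec = trials(0.3)      # action-execution trials
X_obs, y_obs = trials(0.3)        # action-observation trials

# Cross-decoding: fit object centroids on execution, decode observation.
centroids = np.stack([X_exec[y_exec == o].mean(axis=0) for o in objects])
pred = np.argmin(((X_obs[:, None, :] - centroids) ** 2).sum(-1), axis=1)
acc = (pred == y_obs).mean()
print(acc)                         # chance level would be 0.25
```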
Control of networked cooperative teams of mobile robots (May–August 2013)
In this project, experiments with laser-equipped Khepera III robots were carried out to create robust wireless connectivity by estimating channel parameters online while navigating dynamic environments (corridors and lab space) and executing a specific task. In this research period, we provide initial results of multi- and single-robot experimental studies. The obtained results are promising, and the collected data can serve as a benchmark after quantitative analysis of the parameters and distance/time relations. To extend this study, multiple-robot experiments will be carried out in the same environment and the connectivity parameters will be extracted in real time. The extracted parameters will enable the robots to update their own positions to maintain connectivity between the two end points of the network: the base station and the leading robot.
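One common way to estimate such channel parameters is a log-distance path-loss fit; the sketch below assumes that model (the project's actual channel model and estimator may differ) and recovers the reference power and path-loss exponent by least squares over logged RSSI samples:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic RSSI samples following a log-distance path-loss model:
# rssi = P0 - 10 * n * log10(d) + noise  (assumed model, not project data).
true_P0, true_n = -40.0, 2.5              # reference power (dBm), exponent
d = rng.uniform(1, 30, size=200)          # distances logged while navigating
rssi = true_P0 - 10 * true_n * np.log10(d) + rng.normal(0, 2, size=200)

# The model is linear in (P0, n), so a least-squares fit recovers both;
# online operation could use a recursive variant of the same fit.
A = np.column_stack([np.ones_like(d), -10 * np.log10(d)])
(P0_hat, n_hat), *_ = np.linalg.lstsq(A, rssi, rcond=None)
print(round(P0_hat, 1), round(n_hat, 2))
```

With the estimated exponent, a robot can predict signal strength at a candidate position and reposition itself before connectivity degrades.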
Bio-inspired Optimization Algorithms and chemical concentration mapping by mobile robots, (Feb–August 2010)
In these projects we describe implementations of various bio-inspired algorithms for obtaining the chemical gas concentration map of an environment filled with a contaminant. The experiments are performed using Khepera III miniature mobile robots equipped with a “kheNose” transducer in an environment with ethanol gas. We implement and investigate the performance of Decentralized and Asynchronous Particle Swarm Optimization (DAPSO), Bacterial Foraging Optimization (BFO), and Ant Colony Optimization (ACO) algorithms. Moreover, we implement sweeping (a sequential search algorithm) as a base case for comparison with the implemented algorithms. During the experiments, at each step the robots send their sensor readings and position data to a remote computer, where these data are combined, filtered, and interpolated to form the chemical concentration map of the environment. The robots also exchange this information among each other and cooperate in the DAPSO and ACO algorithms. The performance of the implemented algorithms is compared in terms of the quality of the obtained maps and their success in locating the target gas sources.
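A minimal synchronous PSO sketch of the source-seeking idea, with a mock Gaussian plume standing in for kheNose readings (DAPSO additionally runs the velocity updates asynchronously and decentralized on each robot, using only information exchanged with neighbors):

```python
import numpy as np

rng = np.random.default_rng(5)
source = np.array([7.0, 3.0])             # hypothetical gas source location

def concentration(p):
    """Mock Gaussian plume standing in for a kheNose sensor reading."""
    return np.exp(-np.sum((p - source) ** 2) / 20.0)

# Robots act as PSO particles; fitness is the concentration they sense.
n_robots, w, c1, c2 = 5, 0.6, 1.5, 1.5
pos = rng.uniform(0, 10, (n_robots, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([concentration(p) for p in pos])

for _ in range(200):
    gbest = pbest[np.argmax(pbest_val)]   # best reading shared over the network
    r1, r2 = rng.random((2, n_robots, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    for i, p in enumerate(pos):           # update personal bests
        v = concentration(p)
        if v > pbest_val[i]:
            pbest_val[i], pbest[i] = v, p.copy()

print(np.round(pbest[np.argmax(pbest_val)], 1))
```

The positions and readings accumulated along the way are exactly the samples that get interpolated into the concentration map on the remote computer.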