I have just finished my Ph.D. work and am waiting for the thesis defense. The thesis is titled “Brain-inspired algorithmic approaches to sensory representation and decision making.” After defending the thesis, I am planning to write plenty of longhand reflections (i.e., lessons learned) on my Ph.D. years.
Abstract: In this dissertation, we algorithmically investigated the parts of the mammalian brain that play critical roles in sensory representation and decision making. To this end, we first focused on the operating principles of the neocortex to form robust sensory representations of perceived visual stimuli. We then employed these representations in vision-based tasks: object recognition and multimodal object classification. In the decision-making part, we investigated the direct and indirect interactions among the sensory cortex, the amygdala (AMY), and the prefrontal cortex (PFC). By emulating these interactions and the associated information flow, we proposed a new method to generate an internal reward signal while making a series of decisions to execute a cognitive task (i.e., visual recall). This reward mechanism enables the agent to regulate its neuro-computational energy consumption and to exhibit emotion-guided behavior.
For the sensory representation tasks, we observed that employing a neocortically inspired method is useful for extracting stable representations (features) when processing low-resolution image datasets for object recognition. We also showed that the method is capable of forming multimodal representations by combining color and depth information for object classification. By leveraging this method in vision-based tasks, we report the following contributions: first, we show that the method enables the image processing pipeline to elicit features without hand-engineering; second, we report that employing the method for feature selection yields higher recognition accuracy in applications to which state-of-the-art methods cannot be applied (or in which they perform poorly); lastly, the method allows forming multimodal sensory representations that exploit the complementary characteristics of the sensory inputs for object classification.
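The multimodal fusion idea above can be illustrated with a toy sketch. Everything here is an illustrative assumption rather than the thesis pipeline: it assumes per-modality feature vectors have already been extracted, and uses a simple L2-normalize-then-concatenate scheme so that neither modality dominates by scale.

```python
# Toy sketch (an assumption, not the thesis method): fuse color and depth
# feature vectors into one multimodal representation.

def l2_normalize(v):
    """Scale a feature vector to unit L2 norm (leave all-zero vectors as-is)."""
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v] if norm > 0 else list(v)

def fuse(color_features, depth_features):
    """Concatenate per-modality normalized features into one vector."""
    return l2_normalize(color_features) + l2_normalize(depth_features)
```

A downstream classifier trained on the fused vector can then draw on whichever modality is more informative for a given object, which is the complementarity the paragraph above refers to.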
For the decision-making task, we examined the role of emotion as it emerges in the brain. We show that this emergent phenomenon can be an outcome of a mechanism that regulates neuro-computational demand during decision making. To do this, we embedded an internal reward signal (both in a simulated agent and in the iCub robot) in a reinforcement learning framework to perform a visual recall task. The conducted experiments led to the following contributions: first, we show that the proposed mechanism enables an agent to perform the cognitive task while following a neuro-computational energy minimization principle. Second, we note that the behavior demonstrated by the agent is non-trivial with respect to stimulus-energy-reward associations. Lastly, and most importantly, this non-deterministic behavior emerges from the agent’s internal mechanisms (that is, its associative memories and internal reward) while operating in an unknown environment.
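The intuition of an internal reward that regulates energy consumption can be sketched in a minimal tabular Q-learning loop. This is not the thesis implementation: the environment, the energy model, and all names below are illustrative assumptions; the only point carried over is that the agent learns from a reward that discounts task success by the computational cost of the chosen action.

```python
import random
from collections import defaultdict

def internal_reward(task_reward, energy_cost, weight=0.5):
    """Trade off task success against the energy spent to achieve it.
    The linear form and the weight are illustrative assumptions."""
    return task_reward - weight * energy_cost

def q_learning_step(q, state, actions, energy_of, task_reward_of,
                    next_state, alpha=0.1, gamma=0.9, rng=random):
    """One epsilon-greedy Q-learning update driven by the internal reward."""
    if rng.random() < 0.1:                       # occasional exploration
        action = rng.choice(actions)
    else:                                        # otherwise act greedily
        action = max(actions, key=lambda a: q[(state, a)])
    r = internal_reward(task_reward_of(state, action), energy_of(action))
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
    return action, r
```

In this toy setting, an agent maximizing the internal reward is pushed toward action sequences that solve the task at low energy cost, which is the regulatory intuition described above.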
On the basis of the presented results, we conclude that brain-inspired algorithmic approaches are suitable for applications that require robust sensory representation and sequential decision making. We also note that these methods are useful in experiments where hardware constraints (e.g., camera resolution) and environmental perturbations (e.g., noise) cannot be avoided.