Infomorphic Neurons Bring AI One Step Closer to Brain-Like Learning

Summary: Researchers have developed a new kind of artificial neuron—called infomorphic neurons—that can independently learn and self-organize with nearby neurons, mimicking the decentralized learning of biological brains. Inspired by pyramidal cells in the cerebral cortex, these neurons process local signals to adapt and specialize in tasks without external control.
Each infomorphic neuron determines whether to collaborate, specialize, or align with others based on a novel information-theoretic measure. This approach not only enhances machine learning efficiency and transparency but also offers valuable insights into how biological neurons learn.
Key Facts:
- Local Learning: Infomorphic neurons learn independently through neighbor interactions, eliminating the need for central coordination.
- Brain-Inspired Design: Modeled after pyramidal brain cells, these neurons mimic biological learning mechanisms.
- Flexible and Transparent: A new information-theoretic framework lets neurons specialize or collaborate, improving both performance and interpretability.
Source: Max Planck Institute
Both the human brain and modern artificial neural networks are extremely powerful. At the lowest level, their neurons work together as rather simple computing units.
An artificial neural network typically consists of several layers composed of individual neurons. An input signal passes through these layers and is processed by artificial neurons in order to extract relevant information. However, conventional artificial neurons differ significantly from their biological counterparts in the way they learn.
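To make this setup concrete, here is a minimal sketch of such a layered forward pass: a signal moves through successive layers of simple units, each computing a weighted sum followed by a nonlinearity. All names, sizes, and values below are illustrative, not taken from the study.

```python
import numpy as np

def forward(x, layers):
    """Pass an input signal through successive layers of simple units."""
    for W, b in layers:
        x = np.tanh(W @ x + b)  # each neuron: weighted sum, then nonlinearity
    return x

# Illustrative network: 4 inputs -> 8 hidden units -> 2 outputs
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(2, 8)), np.zeros(2))]
y = forward(rng.normal(size=4), layers)  # 2-dimensional output
```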

While most artificial neural networks depend on overarching coordination from outside the network in order to learn, biological neurons receive and process signals only from other neurons in their immediate vicinity.
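The distinction can be illustrated in code. A classic example of a purely local rule is Oja's rule, in which a neuron updates its weights using only its own inputs and its own output, with no error signal propagated from a global loss. It serves here as a generic stand-in for local learning, not as the method of the study.

```python
import numpy as np

def local_update(w, x, lr=0.01):
    """Oja's rule: a purely local update that uses only the neuron's own
    input x and output y -- no error signal from the rest of the network."""
    y = w @ x
    return w + lr * y * (x - y * w)  # strengthen co-active inputs, self-normalize

rng = np.random.default_rng(1)
w = rng.normal(size=4) * 0.1
for _ in range(2000):
    s = rng.normal()  # one shared latent source driving two input channels
    x = s * np.array([1.0, 1.0, 0.0, 0.0]) + 0.1 * rng.normal(size=4)
    w = local_update(w, x)
# w converges toward the input's leading principal component, ~[0.7, 0.7, 0, 0]
```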
Biological neural networks are still far superior to artificial ones in terms of both flexibility and energy efficiency.
The new artificial neurons, known as infomorphic neurons, are capable of learning independently and of self-organizing with their neighboring neurons. This means that the smallest unit in the network no longer has to be controlled from the outside; instead, it decides for itself which input is relevant and which is not.
In developing the infomorphic neurons, the team was inspired by the way the brain works, especially by the pyramidal cells in the cerebral cortex. These also process stimuli from different sources in their immediate environment and use them to adapt and learn.
The new artificial neurons pursue very general, easy-to-understand learning goals: “We now directly understand what is happening inside the network and how the individual artificial neurons learn independently”, emphasizes Marcel Graetz from CIDBN.
By defining the learning objectives, the researchers enabled the neurons to find their specific learning rules themselves.
The team focused on the learning process of each individual neuron. They applied a novel information-theoretic measure to precisely adjust whether a neuron should seek more redundancy with its neighbors, collaborate synergistically, or try to specialize in its own part of the network’s information.
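This measure builds on Partial Information Decomposition (PID), which splits the information that two input streams carry about a neuron's output into redundant, unique, and synergistic parts. As a rough illustration of the idea only, the sketch below estimates these parts for discrete variables using the simple minimum-mutual-information redundancy and combines them with tunable weights; the study uses a more refined PID measure, and the helper names (`mutual_info`, `pid_goal`) are hypothetical, so treat this as a schematic rather than the authors' implementation.

```python
import numpy as np
from collections import Counter

def mutual_info(xs, ys):
    """I(X;Y) in bits, estimated from paired discrete samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def pid_goal(r, c, y, g_red=0.0, g_unq=0.0, g_syn=0.0):
    """Weighted PID goal for a neuron with input streams r, c and output y.
    Atoms use the minimum-mutual-information redundancy, chosen here only
    for simplicity (the study uses a different PID measure)."""
    i_ry, i_cy = mutual_info(r, y), mutual_info(c, y)
    i_rcy = mutual_info(list(zip(r, c)), y)  # joint information I(R,C;Y)
    red = min(i_ry, i_cy)                    # redundant information
    unq = (i_ry - red) + (i_cy - red)        # unique information
    syn = i_rcy - i_ry - i_cy + red          # synergistic information
    return g_red * red + g_unq * unq + g_syn * syn

# XOR example: neither input alone tells us the output, but together they do,
# so all 1 bit of output information is synergistic.
r, c, y = [0, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]
print(pid_goal(r, c, y, g_syn=1.0))  # -> 1.0
```

In the XOR case, a neuron rewarded for synergy (a high synergy weight) would be driven toward exactly this kind of integration, while weighting redundancy or uniqueness instead would push it to copy its neighbors or to specialize.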
“By specializing in certain aspects of the input and coordinating with their neighbors, our infomorphic neurons learn how to contribute to the overall task of the network”, explains Valentin Neuhaus from MPI-DS.
With the infomorphic neurons, the team is not only developing a novel method for machine learning, but is also contributing to a better understanding of learning in the brain.
About this AI and learning research news
Author: Manuel Maidorn
Source: Max Planck Institute
Contact: Manuel Maidorn – Max Planck Institute
Image: The image is credited to Neuroscience News
Original Research: Open access.
“A general framework for interpretable neural learning based on local information-theoretic goal functions” by Marcel Graetz et al. PNAS
Abstract
A general framework for interpretable neural learning based on local information-theoretic goal functions
Despite the impressive performance of biological and artificial networks, an intuitive understanding of how their local learning dynamics contribute to network-level task solutions remains a challenge to this date.
Efforts to bring learning to a more local scale indeed lead to valuable insights; however, a general constructive approach to describe local learning goals that is both interpretable and adaptable across diverse tasks is still missing.
We have previously formulated a local information processing goal that is highly adaptable and interpretable for a model neuron with compartmental structure.
Building on recent advances in Partial Information Decomposition (PID), we here derive a corresponding parametric local learning rule, which allows us to introduce “infomorphic” neural networks.
We demonstrate the versatility of these networks to perform tasks from supervised, unsupervised, and memory learning.
By leveraging the interpretable nature of the PID framework, infomorphic networks represent a valuable tool to advance our understanding of the intricate structure of local learning.