The discovery was made in a general-purpose vision system called CLIP. CLIP is trained on large datasets of images paired with text, which lets it recognize objects and people even in abstract contexts such as cartoons and sculptures.
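At its core, CLIP maps images and their text descriptions into a shared embedding space and scores how well they match by the cosine similarity between the two vectors. A minimal sketch of that scoring step, using made-up three-dimensional toy vectors in place of real CLIP embeddings (which are much larger):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: values near 1.0
    # mean the image and text point the same way in the shared space.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for CLIP embeddings (purely illustrative values)
image_embedding = [0.9, 0.1, 0.2]   # e.g. a photo of a spider
text_spider     = [0.8, 0.2, 0.1]   # e.g. the caption "a spider"
text_car        = [0.1, 0.9, 0.3]   # e.g. the caption "a car"

print(cosine_similarity(image_embedding, text_spider))  # high score
print(cosine_similarity(image_embedding, text_car))     # low score
```

The caption whose embedding scores highest against the image embedding is taken as the best match; this is the mechanism behind CLIP's ability to pair pictures with descriptions it was never explicitly trained on.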
OpenAI made the following statements in its blog post:
Within the CLIP system, we discovered neurons that respond to the same concept whether it is presented literally, symbolically, or conceptually.
Our discovery of multimodal neurons in CLIP hints at what may be a common mechanism in both synthetic and natural vision systems: abstraction.
Multimodal neurons were first discovered in the human brain in 2005. In that study, scientists found that a single neuron could pick out a common theme across abstract clusters of concepts derived from sensory data.
For example, instead of millions of neurons working together to recognize a famous person's picture, a single neuron can be responsible for distinguishing them.
This means that in the human brain there is a single neuron dedicated to each family member, friend, and celebrity the person knows. This neuron responds to photographs, drawings, and even the visual form of the person's written name.
OpenAI researchers identified artificial neurons that "respond to emotions, animals, and famous people," much like their biological counterparts.
One of these neurons, which they call the "Spider-Man neuron," bears a "striking resemblance" to the multimodal neurons first described in the 2005 study.
The researchers wrote that the neuron "responds to the image of a spider, the image of the word 'spider', and the comic book character Spider-Man, both in costume and drawn."
Neural networks hold serious potential for advanced artificial intelligence systems, and have already driven leaps in facial recognition, digital assistants, and self-driving vehicles.
These systems process data using artificial neurons, or nodes, inspired by the architecture of biological nervous systems.
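As a rough sketch of what one such node does (assuming nothing about CLIP's actual architecture): each artificial neuron computes a weighted sum of its inputs and passes it through a nonlinear activation, so that it "fires" strongly only for certain input patterns:

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias term, squashed by a sigmoid
    # so the output behaves like a firing strength between 0 and 1.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A toy neuron tuned by hand (not by training) to fire when the
# first input feature is present and the second is absent.
print(artificial_neuron([1.0, 0.0], weights=[4.0, -2.0], bias=-1.0))  # ~0.95: fires
print(artificial_neuron([0.0, 1.0], weights=[4.0, -2.0], bias=-1.0))  # ~0.05: stays quiet
```

In a trained network the weights are learned from data rather than set by hand, and millions of such units are stacked in layers; the "multimodal neurons" OpenAI describes are individual units in such a network that end up responding to one concept across many forms of input.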
The drawback of such a powerful technology is that it is difficult to understand why the system makes certain decisions or how it achieves certain results.
This can lead to unintended consequences, such as models learning sexist or racist associations with certain categories from the huge data sets used in their training.
OpenAI’s model has been trained on a curated subset of the web, but it still carries biases and associations that could be harmful if used in commercial applications.
For example, we came across a ‘Middle East’ neuron with a terrorism connotation and a ‘migration’ neuron responsive to Latin America.
We even came across a neuron firing in response to both dark-skinned people and gorillas. This mirrors the photo-tagging incidents we have seen in earlier models and found unacceptable.
The tools researchers have developed to understand these types of neural networks can help others to anticipate future problems.