Using insights from neuroscience to build modern robots

Today, neuroscience and robotics are developing hand in hand. Mikhail Lebedev, Academic Supervisor at HSE University's Centre for Bioelectric Interfaces, spoke about how studying the brain inspires the development of robots.

Robots are of interest to neuroscience, and neuroscience is of interest to roboticists - this mutual benefit is the subject of the article 'Neuroengineering challenges of fusing robotics and neuroscience', published in the journal Science Robotics. Such collaborative development drives progress in both fields, bringing us closer to more advanced android robots, to a deeper understanding of the structure of the human brain and, to some extent, to combining biological organisms with machines to create cybernetic organisms (cyborgs).

Neuroscience for robots

Robots often resemble humans in their design. This is especially true of robots meant to mimic human actions and behavior; neuroscience is far less important to industrial machines.

The most obvious borrowing is appearance: robots often have two arms, two legs and a head, even when this is unnecessary from an engineering point of view. This matters especially when the robot will interact with people - a machine that looks like us is easier to trust.

It is possible to ensure that not only the appearance, but also the 'brain' of the robot resembles that of a human. In developing the mechanisms for perception, information processing, and control, engineers are inspired by the structure of the human nervous system.

For example, a robot's eyes - video cameras that can rotate about different axes - mimic the human visual system. Based on knowledge of how human vision is structured and how the visual signal is processed, engineers design the robot's sensors on the same principles. In this way, the robot can be endowed with the human ability to see the world in three dimensions, for example.
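At its core, binocular 3D vision is triangulation: the depth of a point follows from how far it shifts between the two camera images. Below is a minimal sketch of that principle; the focal length, baseline, and disparity values are illustrative assumptions, not parameters of any real system.

```python
# Minimal sketch: recovering depth from a stereo camera pair by triangulation,
# the same principle binocular human vision relies on. Focal length and
# baseline are placeholder values for illustration.

def depth_from_disparity(disparity_px: float,
                         focal_px: float = 700.0,
                         baseline_m: float = 0.06) -> float:
    """Depth of a point seen by both cameras, computed from its horizontal
    shift (disparity) between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("point must be visible in both cameras")
    return focal_px * baseline_m / disparity_px

# A feature shifted 35 px between the two images lies about 1.2 m away:
print(f"{depth_from_disparity(35):.2f} m")
```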

Humans have a vestibulo-ocular reflex: when we move, the eyes are stabilized using information from the vestibular system, allowing the picture we see to remain steady. A robot's body can likewise carry acceleration and orientation sensors. These help the robot take its own movements into account, stabilizing its visual perception of the outside world and improving agility.
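A simple software analogue of this reflex is to counter-rotate the camera against the body rotation reported by a gyroscope. The sketch below shows the idea; the gain, time step, and sensor readings are hypothetical.

```python
# Minimal sketch of a vestibulo-ocular-style reflex: the camera pan joint
# counter-rotates against the body yaw rate measured by a gyroscope, so the
# image stays stable while the robot moves.

def stabilize_gaze(camera_pan_rad: float,
                   body_yaw_rate_rad_s: float,
                   dt_s: float,
                   gain: float = 1.0) -> float:
    """Return a new camera pan angle that cancels the measured body rotation."""
    return camera_pan_rad - gain * body_yaw_rate_rad_s * dt_s

pan = 0.0
for yaw_rate in [0.2, 0.2, -0.1]:   # placeholder gyro readings, rad/s
    pan = stabilize_gaze(pan, yaw_rate, dt_s=0.01)
print(pan)
```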

In addition, a robot can be given a sense of touch much like a human's: it can have skin and feel contact. Then it no longer moves blindly through space - when it touches an obstacle, it senses and reacts to it just as a human does. It can also use this artificial tactile information to grip objects.
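For example, a gripping controller can close the hand until fingertip pressure sensors report firm contact, then hold rather than crush. The threshold and step size below are invented for illustration.

```python
# Minimal sketch of tactile grasping: squeeze until the fingertip pressure
# sensor reports firm contact, then hold - like a human closing the hand
# around an object without looking at it.

CONTACT_THRESHOLD = 0.5   # normalized pressure at which the object is held

def grip_step(pressure: float, grip_force: float) -> float:
    """One control tick: keep closing until tactile contact is firm."""
    if pressure < CONTACT_THRESHOLD:
        return grip_force + 0.05   # no firm contact yet: close further
    return grip_force              # contact made: hold, do not crush
```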

Robots can even simulate sensations of pain: some forms of physical contact feel normal, while others cause pain, which drastically changes the robot's behavior. It starts to avoid pain and develops new behavior patterns - that is, it learns, like a child who touches something hot for the first time.
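One way to sketch such pain-driven learning is a simple reinforcement update that penalizes actions producing 'painful' contact. The thresholds, actions, and update rule below are generic placeholders, not a published controller.

```python
# Minimal sketch of pain-driven learning: contacts above a "pain" threshold
# carry a negative reward, so the robot lowers its preference for the action
# that produced them and favors harmless alternatives.

PAIN_THRESHOLD = 0.9
preferences = {"reach_left": 0.0, "reach_right": 0.0}

def update_after_contact(action: str, pressure: float, lr: float = 0.3):
    """Reinforcement-style update: painful outcomes push the action's
    preference down, harmless outcomes push it slightly up."""
    reward = -1.0 if pressure > PAIN_THRESHOLD else 0.1
    preferences[action] += lr * (reward - preferences[action])

update_after_contact("reach_left", pressure=0.95)   # painful: avoided next time
update_after_contact("reach_right", pressure=0.4)   # harmless: reinforced
print(preferences)
```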

Not only sensory systems but also a robot's body control can be designed by analogy with humans. In humans, walking is controlled by so-called central pattern generators -- specialized networks of nerve cells that produce rhythmic motor output on their own. There are robots in which the same idea is used to control walking.
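A common way to implement this idea in software is a pair of coupled oscillators whose anti-phase rhythm drives the left and right legs. The sketch below is a generic illustration; the frequency and coupling strength are not tuned for any particular robot.

```python
# Minimal sketch of a central pattern generator: two phase oscillators coupled
# to stay in anti-phase, producing an alternating left/right stepping rhythm.

import math

freq_hz = 1.0            # intrinsic stepping rate
coupling = 2.0           # strength of the pull toward anti-phase coordination
phase = [0.0, math.pi]   # left leg, right leg (start half a cycle apart)

def cpg_step(dt: float):
    """Advance both oscillators one time step; each leg's joint command is
    sin(phase), and the coupling term keeps the legs half a cycle apart."""
    for i in range(2):
        j = 1 - i
        dphi = (2 * math.pi * freq_hz
                + coupling * math.sin(phase[j] - phase[i] - math.pi))
        phase[i] += dphi * dt
    return [math.sin(p) for p in phase]

for _ in range(100):
    left_cmd, right_cmd = cpg_step(dt=0.01)   # e.g. hip joint targets
```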

In addition, robots can learn from humans. A robot can perform an action in countless ways, but if it is to mimic a human, it must observe the human and try to repeat their movements, correcting its mistakes by comparing its own performance with how the human performs the same action.
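A minimal sketch of this observe-compare-correct loop might look as follows; the demonstrated trajectory and learning rate are made-up numbers for illustration.

```python
# Minimal sketch of learning by imitation: the robot compares its executed
# joint trajectory with a recorded human demonstration and nudges its own
# command toward the demonstration wherever it deviates.

human_demo = [0.0, 0.1, 0.3, 0.6, 0.8]   # demonstrated joint angles (rad)
robot_cmd  = [0.0, 0.0, 0.0, 0.0, 0.0]   # robot's current attempt

def imitate_once(executed: list[float], lr: float = 0.5):
    """One trial: shift each command toward the human's angle at that step."""
    for t, (got, want) in enumerate(zip(executed, human_demo)):
        robot_cmd[t] += lr * (want - got)

for _ in range(5):
    imitate_once(executed=robot_cmd)   # robot_cmd converges toward human_demo
```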

Robots for neuroscience

How can neuroscience use robots? When we build a model of a biological system, we begin to understand better the principles by which it works. Developing mechanical and computational models of how the human nervous system controls movement therefore brings us closer to understanding neural function and biomechanics.

The most promising application of robots in modern neuroscience is the design of neurointerfaces -- systems for controlling external devices using brain signals. Neurointerfaces are essential for developing neuroprostheses (for example, an artificial arm for people who have lost a limb) and exoskeletons - external frames for the human body that increase its strength or restore lost motor ability.

A robot can interact with the nervous system through a bi-directional interface: the nervous system sends command signals to the robot, and the robot returns sensory information from its sensors to the human, evoking real sensations by stimulating nerves, nerve endings in the skin, or the sensory cortex itself. Such feedback makes it possible to restore sensation in a lost limb. It is also necessary for more precise movements of the robotic limb, since it is on the basis of sensory information from our arms and legs that we correct our movements.
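A toy sketch of such a bi-directional loop might look like this. The decoder weights, neuron count, and the pressure-to-pulse-rate law are placeholder assumptions for illustration, not a clinical design.

```python
# Minimal sketch of a bi-directional neurointerface: a linear decoder turns
# recorded firing rates into a movement command (brain -> robot), and limb
# sensor readings are turned into stimulation pulse rates (robot -> brain).

import numpy as np

W_decode = np.random.randn(2, 96) * 0.01   # 96 recorded neurons -> 2D velocity
                                           # (random stand-in for fitted weights)

def decode(firing_rates: np.ndarray) -> np.ndarray:
    """Brain -> robot: map neural activity to a hand-velocity command."""
    return W_decode @ firing_rates

def encode(contact_pressure: float) -> float:
    """Robot -> brain: map fingertip pressure to a stimulation pulse rate (Hz),
    so that touching an object produces a real sensation."""
    return min(300.0, 50.0 + 500.0 * contact_pressure)

velocity_cmd = decode(np.random.poisson(5.0, size=96).astype(float))
stim_rate_hz = encode(contact_pressure=0.3)
```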

An interesting question arises here: should every degree of freedom of the robot be controlled through the neural interface, or should we send it higher-level commands? For example, we can 'order' the robotic arm to pick up a bottle of water, and it will carry out the specific operations - lowering the arm, turning it, unclenching and clenching the fingers - all on its own. This approach is called combined (shared) control: we give simple commands through the neural interface, and a special controller inside the robot selects the best strategy to implement them. Alternatively, we can build a mechanism that does not understand a 'take the bottle' command and must instead be sent information about each specific, detailed movement.
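As an illustration of the combined-control idea, a decoded high-level intent can be expanded into a stored sequence of motion primitives. The primitive names and command strings below are invented for this example.

```python
# Minimal sketch of combined (shared) control: the neural interface supplies
# only a high-level command, and an onboard controller expands it into the
# detailed movement sequence.

MOTION_PRIMITIVES = {
    "take_bottle": ["lower_arm", "rotate_wrist", "open_fingers",
                    "close_fingers", "lift_arm"],
    "wave": ["raise_arm", "rotate_wrist", "rotate_wrist"],
}

def shared_control(neural_command: str) -> list[str]:
    """Decoded intent in, detailed motor plan out. The alternative low-level
    mode would instead stream every joint movement through the interface."""
    return MOTION_PRIMITIVES.get(neural_command, [])

plan = shared_control("take_bottle")
```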

Current studies

Neuroscientists and roboticists study various aspects of brain function and robotic devices. For example, at Duke University I conducted neural-interface experiments on monkeys, since interfaces must be connected directly to brain areas to work accurately, and such invasive interventions cannot always be performed on humans.

In one of my studies, a monkey walked along a path while the activity of its motor cortex - the area responsible for leg movement - was recorded and used to make a robot walk. At the same time, the monkey watched this walking robot on a screen placed in front of it.

The monkey used this visual feedback, correcting its movements based on what it saw on the screen. This is how the most effective neural interfaces for restoring walking are developed.
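The decoders behind such experiments are often linear readouts of population firing rates. The sketch below illustrates the idea with random placeholder weights and dimensions, rather than weights trained on real recordings as in the actual study.

```python
# Minimal sketch of a walking decoder: a linear readout maps motor-cortex
# firing rates to leg joint commands that drive the robot's gait.

import numpy as np

# Placeholder readout weights for 64 recorded neurons; in a real experiment
# these would be fit to neural data recorded during walking.
W = np.random.randn(2, 64) * 0.05

def decode_leg_kinematics(firing_rates: np.ndarray) -> np.ndarray:
    """Map a vector of cortical firing rates to [hip_angle, knee_angle]."""
    return W @ firing_rates

rates = np.random.poisson(8.0, size=64).astype(float)  # stand-in spike counts
hip_cmd, knee_cmd = decode_leg_kinematics(rates)
```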

The cybernetic future

Such research is leading to innovative developments. For example, creating an exoskeleton that restores movement to completely paralysed people no longer seems an unattainable fantasy - it just takes time. Progress may be held back by limited computing power, but here too the development of the past ten years has been enormous. It is likely that we will soon see people around us getting around in light, comfortable exoskeletons rather than wheelchairs. Human cyborgs will become commonplace.

Commercial development of such systems is taking place all over the world, including in Russia. For example, the famous ExoAtlet project is developing exoskeletons for the rehabilitation of people with motor disabilities. The HSE Centre for Bioelectric Interfaces participated in the development of algorithms for these machines: Centre Head, Professor Alexey Ossadtchi, and his doctoral students developed a neurointerface that triggers the walking movements of the exoskeleton.

The rapid development of humanoid robots is also becoming a reality. It is likely that we will soon have robots walking around imitating us in many respects - moving like us and thinking like us. They will be able to do some of the work previously available only to humans.

Obviously, we will see the development of both robotics and neuroscience, and these fields will converge. This not only opens up new opportunities, but also creates new ethical questions, such as how we should treat android robots or human cyborgs.

And yet, so far, humans surpass robots in many ways. Our muscles are extremely economical: eat a sandwich and you have enough energy for the whole day, whereas a robot's battery may go flat in half an hour. And although a robot can be far more powerful than a human, it is often too heavy. When it comes to elegance and energy efficiency, humans are still superior to robots.

But the moment when this changes is not far off - tens of thousands of talented scientists and engineers are working towards it.

Journal reference:

Cheng, G., et al. (2020) Neuroengineering challenges of fusing robotics and neuroscience. Science Robotics. doi.org/10.1126/scirobotics.abd1911.
