Ten urgent challenges facing robots

Editor's Note: Alongside artificial intelligence and virtual reality, robotics received a great deal of attention in 2016. Commercial robots are gradually appearing in our lives, and home companion robots are becoming more and more common. The future robot market is clearly broad, but challenges are inevitable. Today the editor recommends an article from last year that details ten challenges facing robots.

Today's robots can do all kinds of work and possess a certain degree of artificial intelligence. Yet robots like Chappie still do not seem to be on the foreseeable horizon. So which problems are the most urgent challenges facing robots?

First, finding their own path

In our daily lives, moving from location A to location B is something we do every day, and it is extremely easy. We take different routes to different destinations depending on the condition of the roads, or even choose a path purely on personal whim. A robot, by contrast, relies mainly on navigation and pre-set point-to-point positioning. When the preset environment changes, things get tricky: the robot must understand and adapt to the new environment, analyze the data flowing into its brain, and make its own judgments and choices.

At present, roboticists' solution is to equip robots with a range of "skills" for evaluating their surroundings by fitting them with large numbers of sensors, scanners, cameras and other high-tech tools. Underwater robots are additionally cloaked in acoustic sensing to cope with the way water interferes with light. Together these sensors form a complete stereo vision system that gives the robot a broad field of view and helps it collect detailed environmental data.

Collecting data about the environment, however, is only half the battle. The bigger challenge is how robots use that data to make decisions. Researchers currently navigate their robots mainly with pre-set maps, or by building a map on the fly as the robot moves. The preset map must be combined with real-time data for the robot to chart its own path through the environment. Researchers are tackling this problem with more powerful computers and advanced probabilistic algorithms.
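The article does not describe these algorithms in detail, but one common probabilistic way of combining a prior map with live sensor data is a log-odds occupancy grid. The sketch below is a minimal, hypothetical illustration of that idea; the grid size, probabilities and update rule are assumptions made only for the example, not anything stated above.

```python
import numpy as np

# Minimal illustration of fusing a prior (preset) map with live sensor
# readings using a probabilistic log-odds occupancy grid.

def log_odds(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    def __init__(self, prior_map):
        # prior_map: 2D array of prior occupancy probabilities (the preset map)
        self.logits = log_odds(np.clip(prior_map, 0.01, 0.99))

    def update(self, cell, p_occupied_given_reading):
        # Bayesian update of a single cell from a new sensor reading
        r, c = cell
        self.logits[r, c] += log_odds(p_occupied_given_reading)

    def probabilities(self):
        return 1.0 / (1.0 + np.exp(-self.logits))

# Usage: start from a flat preset map, then let one scan revise one cell.
grid = OccupancyGrid(prior_map=np.full((5, 5), 0.5))
grid.update((2, 3), 0.9)   # sensor strongly suggests an obstacle at (2, 3)
print(grid.probabilities()[2, 3])
```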

Second, showing dexterity

In reality we can already see robots assisting in all walks of life. Service robots can deliver parcels and do the cleaning, and this year Japan even opened a hotel staffed entirely by robots; in industry, robots carry out specific tasks on assembly lines and projects. These robots all display certain skills, but each usually performs a single customized job; they cannot show real dexterity, nor can they walk about or jump freely.

It is very difficult for robots to imitate human dexterity: picking out what is needed from a jumble of objects, or responding deliberately to the surrounding environment. In the past few years, researchers have made significant progress in robot design and compliance. The more compliant a robot is, the more flexible it becomes, the better it can mimic human motion, and the more capable it is of making decisions from data; a rigid, stereotyped machine is the opposite, lacking flexibility.

In 2013, researchers at the Georgia Institute of Technology created a spring-jointed robotic arm that not only bends freely but also interacts well with its environment, much as a human arm does. The researchers also covered it with a layer of "skin" laced with infrared sensors and fitted it with ridged electronic "fingerprints", so the arm can not only detect surrounding objects but also grasp them.

Equipped with a more advanced vision system, this high-tech arm becomes an agile robot: it can gently stroke an animal and pick out what it needs from a pile of things. Compared with Chappie, however, it is still hardly worth mentioning.

Third, talking freely with humans

As one of the founders of computer science, Turing made a bold prediction in 1950: one day, machines would converse with people so fluently that we would not even realize we were chatting with a machine. Unfortunately, Turing did not live to see his expectation become reality. One reason is that speech recognition is not the same as natural language processing, the process by which our brains extract the meaning of a conversation from its sentences, and the latter is far more complicated.

Initially, scientists thought it would be as simple as embedding grammar rules into a machine's memory. But simply transcribing the full grammar of any particular language proved impossible, which is why it remains so difficult to make robots communicate as freely as humans. Even when a robot is given the meanings of words and the rules for combining them, learning a language is still a hard task, because human linguistic intuition is difficult to transplant into a robot brain. A concrete example: humans readily tell apart words such as "new" and "knew", or the "bank" of a river and the "bank" that holds money, yet scientists have not been able to break this ability down into discrete, identifiable rules, so it is hard to give robots anything close to human grammatical competence.

Today, much robot language processing is based on statistics. Scientists feed in large collections of text, called corpora, and let the robot's "brain" break long texts into small pieces and sort them, working out which words often appear together and in what order. In this way the robot learns a language through statistical analysis. For example, a robot can learn that when "bat" appears near words like "fly" or "wing" it refers to the flying mammal, while a "bat" followed by "ball" or "glove" refers to the one used in team sports. But can this really let a robot talk freely with people?
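As a rough, hypothetical illustration of this statistical idea (not any particular system mentioned in the article), the toy Python sketch below distinguishes the two senses of "bat" by counting which context words co-occur with each sense; the word lists and scoring are assumptions made only for the example.

```python
from collections import Counter

# Toy word-sense disambiguation: score each sense of "bat" by how many
# of its typical co-occurring words appear in the sentence.
sense_contexts = {
    "animal": Counter(["fly", "wing", "cave", "night", "mammal"]),
    "sports": Counter(["ball", "glove", "swing", "pitch", "hit"]),
}

def disambiguate(sentence):
    words = sentence.lower().split()
    scores = {sense: sum(ctx[w] for w in words)
              for sense, ctx in sense_contexts.items()}
    return max(scores, key=scores.get)

print(disambiguate("the bat spread its wing and began to fly"))    # animal
print(disambiguate("he dropped the bat and picked up his glove"))  # sports
```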

Fourth, acquiring new skills

If someone who has never played golf wants to learn how to swing, he might read a book about it and then try the swing, or he might watch an experienced golfer swing and imitate him. When it comes to learning a new behavior, watching is the faster and easier way.

In Chappie, we see the robot Chappie continuously learning new skills and applying them perfectly: he can sing and dance, wield knives and swords, and even show off superb driving skills. Robotics experts, however, face a dilemma. When they try to build a robot that can learn new skills autonomously, they have to break an activity down into precise steps and then program the relevant information into the robot's brain. This approach assumes that every aspect of the activity can be identified, described and coded, but that turns out not to be so easy. For example, some specific aspects of a golf swing, such as the interplay of force between the wrist and the elbow, are hard to put into words. These subtle details are much easier to convey through demonstration than through description.

In recent years, researchers have had some success teaching robots to imitate human actions, an approach they call imitation learning or learning from demonstration. The robot watches a human demonstrate a specific process or activity through a wide-angle zoom camera mounted on its body. An algorithm then processes the data to produce a mathematical function that maps what the robot sees to the actions it should take. Of course, in demonstration learning the robot must also be able to ignore aspects of human behavior that are irrelevant to the task, such as scratching or blinking, which is one of the differences between robots and humans.
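The article does not say what that mathematical function looks like. As a minimal, hypothetical sketch of the idea, the snippet below fits a simple linear mapping from made-up observation features to made-up demonstrated actions by least squares; the feature sizes and data are invented purely for illustration, not taken from the researchers' system.

```python
import numpy as np

# Simplest possible "learning from demonstration": fit a linear function
# from observed features of a demonstration to the demonstrated actions.
rng = np.random.default_rng(0)

# Pretend each frame is a 4-number feature vector extracted from the camera
# and each action is a 2-number motor command recorded from the demonstrator.
observations = rng.normal(size=(200, 4))
true_mapping = rng.normal(size=(4, 2))
actions = observations @ true_mapping + 0.05 * rng.normal(size=(200, 2))

# Least-squares fit: the "mathematical function" mapping vision to action.
learned_mapping, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# At run time the robot applies the learned mapping to a new observation.
new_frame = rng.normal(size=(1, 4))
print(new_frame @ learned_mapping)
```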

Fifth, learning to deceive

For robots, learning how to deceive a person, or another robot, is a huge challenge. Deception requires imagination, the ability to form an idea or image of something that does not exist in the outside world, and this is exactly what robots lack. Robots are good at processing input from sensors, cameras or scanners and converting that data into specific thoughts or images according to rules set by humans, but they are poor at understanding anything beyond their sensor data.

Over the course of his growth, Chappie experiences human lies and deception, moving from ignorance to becoming a "person" who understands the ways of the world. Although today's robots are still far from such astonishing results, researchers at the Georgia Institute of Technology have managed to pass some of the squirrel's deceptive skills on to robots. First, they studied how the fluffy rodents protect their buried food by leading competitors to other places; then they coded those behaviors into simple rules and loaded them into a robot's brain, so the robot can use the algorithm to judge when deception is likely to work and send false signals that mislead a robot rival away from its hiding place. Over time, as artificial intelligence continues to evolve, this skill may mature. Imagine how we will deal with a robot that can lie.
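Purely as a toy sketch of what such "simple rules" for deception might look like (this is my own illustration, not the Georgia Tech system), the snippet below chooses a decoy site instead of the real cache when a rival is watching; the situation fields and site names are assumptions.

```python
from dataclasses import dataclass
import random

@dataclass
class Situation:
    rival_nearby: bool      # is a competitor watching?
    cache_valuable: bool    # is the hidden resource worth protecting?

def choose_destination(situation, real_cache, decoys):
    # Deceive only when a rival is watching and the cache matters;
    # otherwise head straight to the real cache.
    if situation.rival_nearby and situation.cache_valuable and decoys:
        return random.choice(decoys)   # false signal: visit a decoy site
    return real_cache

print(choose_destination(Situation(True, True), "site_A", ["site_B", "site_C"]))
print(choose_destination(Situation(False, True), "site_A", ["site_B", "site_C"]))
```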
