BCI can decode human speech in real time

  Soon, we may no longer need to rely on typing to issue commands to a computer. Today, Facebook and the University of California, San Francisco (UCSF) released the latest progress in their brain-computer interface research; the paper appears in the latest issue of Nature Communications, a sister journal of Nature.

  "Today we are sharing work on building a new non-invasive wearable device that lets people 'type' simply by imagining what they want to say," said Andrew Bosworth, a vice president at Facebook. "This work demonstrates new potential for input and interaction in future AR headsets."

  The study demonstrated that brain activity recorded while people converse can be decoded in real time into text on a computer screen. Previously, such decoding had been done offline; real-time "translation" into text is a first in brain-computer interface research. The researchers say their method can so far recognize only a small set of words and phrases, but ongoing work aims to translate more words and to significantly reduce the recognition error rate.

  The study was led by UCSF associate professor Edward F. Chang and his postdoctoral researcher David A. Moses.

  The possibility the new study points to may still be a long way off. In a later official blog post, Facebook said: "It may take another decade... but we think we can close that gap."

  The Facebook-UCSF research aims to help patients with nerve injuries communicate as naturally as healthy people by detecting speech-related brain activity in real time. Interestingly, unlike many current brain-sensing approaches, the strategy Facebook and UCSF are exploring uses pulse oximetry to measure the oxygen consumption of neurons and thereby infer brain activity. This indirect, non-invasive method appears to be much safer.

  In 2017, Mark Chevillet, research director of the BCI project at Facebook Reality Lab, gave himself two years to prove the feasibility of reading 100 words per minute from the human brain using non-invasive techniques.

  Two years later, the results are in. "The promise is ringing in our ears," Chevillet said. "We really think this is feasible." He plans to keep pushing the program forward; the team's ultimate goal is an AR headset that can be controlled without speaking aloud.

  Edward Chang, a neurosurgeon at the University of California, San Francisco and one of the paper's authors, said the result is an important step toward a neural implant that could restore normal communication to people who have lost the ability to speak due to stroke or spinal cord injury. In April this year, Chang's team created a different brain-computer interface that decodes speech signals directly from the brain.

  The trick to improving decoding accuracy: adding context

  The goal of the work announced today is to improve the accuracy of decoding brain activity. The researchers said they decode two types of information from two different parts of the brain and use one as context for the other, which has a considerable impact on decoding accuracy.

  The boost in decoding accuracy rests on a simple idea: add context. Using electrodes implanted in the brains of three epilepsy patients, the researchers recorded the patients' brain activity while they listened to recordings of a set of questions and then spoke their answers aloud.
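To make the "context" idea concrete, here is a minimal sketch of how a decoder might combine the two streams of information. This is a hypothetical illustration, not the authors' code: the function names, shapes, and the question-to-answer prior table are all assumptions. The idea is that evidence about which question was heard re-weights the likelihoods of candidate answers decoded from speech-production activity.

```python
# Hypothetical sketch of context-weighted decoding (not the study's actual code).
# Evidence about the heard question supplies a prior over likely answers,
# which is combined with the answer evidence decoded from brain activity.

import numpy as np

def decode_answer(q_likelihoods, a_likelihoods, qa_prior):
    """
    q_likelihoods: evidence for each candidate question, shape (n_questions,)
    a_likelihoods: evidence for each candidate answer, shape (n_answers,)
    qa_prior:      P(answer j | question i), shape (n_questions, n_answers)
    Returns the index of the most probable answer after context weighting.
    """
    q_post = q_likelihoods / q_likelihoods.sum()  # normalize question evidence
    context = q_post @ qa_prior                   # prior over answers given the heard question
    scores = context * a_likelihoods              # combine context with answer evidence
    return int(np.argmax(scores))

# Toy example: two questions, three candidate answers.
q_lik = np.array([0.9, 0.1])           # decoder is fairly sure question 0 was heard
a_lik = np.array([0.4, 0.35, 0.25])    # answer evidence alone is ambiguous
prior = np.array([[0.05, 0.90, 0.05],  # question 0 is usually answered with answer 1
                  [0.80, 0.10, 0.10]]) # question 1 with answer 0
print(decode_answer(q_lik, a_lik, prior))  # → 1
```

In the toy example, the answer evidence alone weakly favors answer 0, but because the decoder is confident that question 0 was heard, the context prior flips the decision to answer 1. This mirrors the article's point: adding the question as context changes, and can substantially improve, the answer decoding.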


Origin www.cnblogs.com/sushine1/p/11276663.html