Alphabet – Caitlyn Seim, Thad Eugene Starner, Georgia Tech Research Corp

Abstract for “Methods and systems for conveying chorded input”

“Disclosed are systems, methods, computer-readable media, and apparatuses for conveying chorded information to a user. One or more stimulation sequences can be used to convey chorded input, with each sequence representing a specific chorded input.”

Background for “Methods and systems for conveying chorded input”

Many applications require users to know chorded input, i.e., input that requires two or more concurrent actions. Chorded input arises in teaching applications such as musical-instrument playing (e.g., piano), language-based systems (e.g., Korean), code-based systems (e.g., Braille, Morse code), text entry, and many other fields. It is therefore desirable to develop better systems for learning and teaching chorded input.

“For example, better systems to teach Braille are needed. There are currently 39 million blind people in the world. Braille is difficult to learn, yet it is a key component of rehabilitation and independence training for blind and visually impaired people. Braille is particularly difficult for people who have lost their sight later in life, a group that includes the elderly, wounded veterans, and the growing number of diabetics. Braille instruction in schools is often neglected, and only ten percent of blind people learn Braille using current methods.

The National Federation of the Blind considers illiteracy among the blind to be a “crisis.” Low-vision and blind students are often not taught Braille because of a lack of certified teachers and bureaucratic obstacles to instruction. Braille is the means of reading and writing for these individuals; without it, they are not literate. Braille literacy is associated with academic success and employment for the blind (even for those proficient with screen readers), yet 74% of those who are blind remain unemployed. The crisis is compounded by the mainstreaming of blind students into public schools, which has significantly reduced the time available for learning Braille, and by the influx of speech-based technology, which further contributes to the neglect of Braille instruction. Research shows that Braille is a crucial tool for students learning science, math, grammar, language, and spelling. Adults and students who are blind can attend rehabilitation centers to acquire the skills necessary for independent living, but access to these facilities can be difficult and requires a commitment of seven or more months of inpatient training. Only twelve such facilities exist in the United States, and for many people instruction remains out of reach due to geographic or financial limitations. The technology currently available for Braille instruction is largely limited to electronic Braillers and refreshable Braille displays. Today’s methods of teaching Braille include tactile flash cards, block models, and hand guidance: users first learn to read, and only then learn to type letters. These techniques can be time-consuming and cumbersome.

“In addition to the Braille example above, improved systems to teach other chorded input (e.g., musical instruments, code-based systems, text entry, etc.) are also needed. For example, improved systems to teach stenotype, a chorded text-entry technique similar to Braille that allows for real-time transcription, are needed. Passive Haptic Learning of stenotype could reduce the practice time required to reach expertise and lower barriers to entry into the industry, where vocational schools currently see 85%–95% dropout rates.

Developers are still searching for discreet text-entry methods for small devices, especially with the rise of mobile and wearable technology. Users’ need for discreet entry, weighed against the difficulty of learning new entry methods, is a constant trade-off that leads to most innovative techniques being abandoned. With the popularity of wearable devices like smartwatches and Google Glass, users want a nonverbal (silent), non-visual form of text entry that can be performed by touch. Because these devices are slim and lightweight, standard text-entry methods are difficult to use on them. Research has sought solutions, including new keyboard interfaces optimized for smartwatches and novel entry methods such as rotational text entry, but discreet text entry remains a problem on these devices: it is difficult to create an interface that a person can manipulate without bulky hardware or conspicuous gestures. As mobile devices become smaller and more discreet, many low-profile devices, such as headsets, hearing aids, and electronic textiles, cannot support such methods at all. A non-visual, single-channel system such as Morse code might be a solution; however, learning costs can prevent many text-entry systems from being adopted. It is therefore also desirable to have improved systems for conveying chorded input on small mobile devices for silent, eyes-free text entry.

“The disclosed methods, computer-readable media, systems, and apparatuses convey chorded input. Some embodiments convey the chorded input via passive and/or active haptic learning technology. As used herein, “haptic” refers or relates to the sense of touch. Tactile feedback technology, which can be used to recreate the sensation of touch, is a form of haptic learning technology; it may include wearable tactile interfaces that apply a sensation (e.g., via vibrations, forces, or motions) to the user. Passive Haptic Learning (PHL) is the acquisition of sensorimotor skills without active attention to learning: it allows a person to acquire “muscle memory” through a sensory stimulus without paying attention to it. Such sensorimotor skills apply to many areas, including Braille, musical instruments, code-based systems, text-entry systems, rehabilitation, and the like.

“Disclosed are methods for conveying a chorded input. They include generating, by a processor in electrical communication with a plurality of actuators and an output device, a first plurality of stimulation sequence signals that cause a first plurality of activations in which one or more of the plurality of actuators activate in a first sequential order. Responsive to the plurality of actuators activating in the first sequential order, the processor generates an output signal to cause a perceptible indication by the output device. Finally, the processor generates a second plurality of stimulation sequence signals that cause a second plurality of activations in which one or more of the plurality of actuators activate in a second sequential order.

“In some embodiments, a first activation and a second activation of the first plurality of activations are separated by a predetermined offset (e.g., 0 to 50 milliseconds). A second predetermined offset can separate a first activation from a second activation of the second plurality of activations. Some embodiments convey the chorded input via passive haptic learning. In some embodiments, each stimulation event may be a vibrational stimulation or an audio stimulation.

“In some embodiments, the execution of the second stimulation sequence starts between 100 milliseconds and 1 second after the last of the first plurality of activations has ended. Each stimulation sequence may represent a chorded input, and a chorded input can represent a word, letter, syllable, code, number, symbol, musical note, musical chord, or any combination thereof.
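The sequential delivery described above can be sketched in code. The following is a minimal simulation (in Python), not the claimed implementation: the actuator IDs are hypothetical, and a list-append stands in for driving real vibration hardware.

```python
import time

def play_sequence(actuator_ids, activate, offset_s=0.05, duration_s=0.1):
    """Activate each actuator in order, separated by a fixed offset.

    `activate` is a stand-in for driving real hardware (e.g., a coin
    vibration motor); here it may simply record the activation.
    """
    for i, actuator in enumerate(actuator_ids):
        if i > 0:
            time.sleep(offset_s)   # predetermined offset between activations
        activate(actuator)
        time.sleep(duration_s)     # length of one stimulation event

def convey_chords(sequences, activate, indicate, gap_s=0.5):
    """Play each chorded input as a sequence; signal the end of each."""
    for seq in sequences:
        play_sequence(seq, activate)
        indicate()                 # perceptible indication (e.g., a beep)
        time.sleep(gap_s)          # 100 ms to 1 s before the next sequence

# Example: two hypothetical chords, logged instead of vibrated.
log = []
convey_chords([[1, 4, 5], [2, 4]], activate=log.append,
              indicate=lambda: log.append("beep"),
              gap_s=0.1)
print(log)  # [1, 4, 5, 'beep', 2, 4, 'beep']
```

The callables are injected so that the same scheduling logic could drive motors, speakers, or a test harness.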

“In some embodiments, each actuator is placed on or within a wearable device so as to stimulate a specific portion of the body. In some embodiments, the wearable device is a glove, the portion of the body includes a left hand or a right hand, and each actuator stimulates by vibrating a portion of the left or right hand.

“Some embodiments also include generating, by the processor, a signal to cause a parsing device to generate a parsing indication to a user before executing a first stimulation sequence. The parsing indication can be a letter, or it can include a visual cue, an audible sound, or a pause. Some embodiments include a visual cue and, as the parsing device, a screen of a wearable headset that displays the visual cue. In some embodiments, the plurality of actuators includes a vibration motor, a speaker, a bone-conduction device, or a combination thereof.

The perceptible indication may include a visual cue, an audible sound, a pause, a vibration, or any combination thereof. Some embodiments include an audible sound and an output device configured to produce the audible sound.

“A computer program product can be provided according to another example embodiment. The computer program product can include a computer-readable medium. Instructions stored on the computer-readable medium, when executed by at least one processor, can cause a system to perform a particular function. In one embodiment, a non-transitory computer-readable medium contains instructions that, when executed by at least one processor, cause the system to perform the method described above: generating, by a processor in electrical communication with a plurality of actuators and an output device, a first plurality of stimulation sequence signals; generating, by the processor, an output signal to cause a perceptible indication by the output device; and generating, by the processor, a second plurality of stimulation sequence signals to cause a second plurality of activations in which one or more of the plurality of actuators activate in a second sequential order.

“A system is described in another example embodiment. The system may include at least one memory operatively coupled to at least one processor and configured to store data and instructions that, when executed by the at least one processor, cause the system to perform the method described above.

“Other embodiments, features, and aspects of the disclosed technology are described in detail herein and are considered part of the claimed disclosed technology. Still other embodiments, features, and aspects can be understood with reference to the claims, the accompanying drawings, and the detailed description.

“Disclosed are systems, methods, and apparatuses for conveying chorded input. Some embodiments transmit the chorded input haptically. Some embodiments transmit the chorded input passively.

“Some embodiments described herein include a wearable computing device that can teach chorded input (e.g., Braille typing skills or playing the piano) using sensory stimuli (e.g., vibrational stimuli), with or without the active attention of the user. Through user studies, these systems have been shown to be useful teaching tools, as illustrated in Examples 1–5 and the corresponding Figures. These systems, methods, and apparatuses can be used to teach users how to type chords; however, human perception of simultaneous stimuli is poor. This disclosure therefore describes teaching chorded inputs with sequential stimuli using passive haptic learning systems. During system use, users were able to recognize and understand the technique.

“Some embodiments will be described in detail hereinafter with reference to the accompanying drawings. The disclosed technology may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.

“In the following description, numerous specific details are set forth. It is to be understood, however, that embodiments of the disclosed technology may be practiced without these specific details. In other instances, well-known methods, structures, and techniques have not been described in detail so as not to obscure the description. References to “one embodiment,” “an embodiment,” “example embodiment,” “some embodiments,” “certain embodiments,” “various embodiments,” etc., indicate that the embodiment so described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that specific feature, structure, or characteristic. Further, repeated use of the phrase “in one embodiment” does not necessarily refer to the same embodiment.

“Unless the context clearly requires otherwise, throughout the specification and claims the following terms take the meanings given here. The term “or” is intended to mean an inclusive “or.” Further, the terms “a,” “an,” and “the,” unless specified otherwise or clear from context, are intended to mean one or more of the referenced form.

“Unless otherwise specified, the use of the ordinal adjectives “first,” “second,” “third,” etc., to describe a common object merely indicates that different instances of like objects are being referred to and is not intended to imply that the objects so described must be in a given sequence.

A computing device may be referred to as a mobile device, mobile computing device, mobile station (MS), terminal, cellular phone, cellular handset, personal digital assistant (PDA), smartphone, wireless phone, wearable device, display device, medical device, or by some other like terminology. A computing device can also be a processor, controller, or central processing unit (CPU). A computing device can also be a set of hardware components.

“Various aspects may be implemented using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof that controls a computing device to implement the disclosed subject matter. A computer-readable medium can include, for example: a magnetic storage device such as a hard disk, a floppy disk, or a magnetic strip; an optical storage device such as a compact disk (CD) or digital versatile disk (DVD); a smart card; and a flash memory device such as a card, stick, or key drive, or an embedded component. It should also be noted that computer-readable electronic data, including electronic mail (e-mail) and data used in accessing a computer network such as the Internet or a local area network (LAN), can be carried by a carrier wave. Those of ordinary skill in the art will recognize that many modifications may be made without departing from the scope or spirit of the disclosed subject matter.

“Various methods and systems for conveying chorded input are disclosed, and will now be described with reference to the accompanying figures.”

“FIG. 1 depicts a block diagram of an illustrative computing device architecture 100, according to an example embodiment. Certain aspects of FIG. 1 may be embodied in a computing device 100. Various implementations may include more or fewer components than those shown in FIG. 1. It will be understood that the computing device architecture 100 is provided as an example and does not limit the scope of the various disclosed systems, methods, and computer-readable media.

“The computing device architecture 100 of FIG. 1 includes a display interface 104 that renders content on a display. In certain embodiments, the display interface 104 can be directly connected to a local display, such as a touch-screen display associated with a mobile computing device. In another example embodiment, the display interface 104 can provide data, images, and other information to an external/remote display 150 that is not physically connected to the mobile computing device; for example, a desktop monitor can be used to mirror graphics and other information presented on a mobile computing device. In certain embodiments, the display interface 104 can communicate wirelessly with the remote display 150.

“In one example embodiment, the network connection interface 112 can be configured as a communication interface and can provide functions for rendering video, graphics, images, text, or other information on the display. A communication interface can include a serial port, a parallel port, a general purpose input/output (GPIO) port, a game port, a universal serial bus (USB) port, a micro-USB port, a high-definition multimedia interface (HDMI) port, a video port, an audio port, a Bluetooth port, a near-field communication (NFC) port, another like communication interface, or any combination thereof.

“The computing device architecture 100 can include a keyboard interface 106 that provides a communication interface to a keyboard. In one example embodiment, the computing device architecture 100 includes a presence-sensitive display interface 108 for connecting to a presence-sensitive display 107. According to some embodiments, the presence-sensitive display interface 108 can provide a communication interface to various devices, such as a touch screen, a pointing device, or a depth camera, which may or may not be associated with a display.

“The computing device architecture 100 can be configured to use one or more input/output interfaces (for example, the keyboard interface 106, the display interface 104, the presence-sensitive display interface 108, the network connection interface 112, the camera interface 114, the sound interface 116, etc.) to allow a user to capture information into the computing device architecture 100. An input device can include a mouse, a trackball, a directional pad, a trackpad, a touch-verified trackpad, a presence-sensitive trackpad, a presence-sensitive display, a scroll wheel, a digital camera, a digital video camera, a web camera, a microphone, a sensor, a smartcard, and the like. An input device can be integrated with the computing device architecture 100 or can be a separate device. For example, an input device can be an accelerometer, a magnetometer, a digital camera, a microphone, or an optical sensor.

“Example embodiments of the computing device architecture 100 can include an antenna interface 110 that provides a communication interface to an antenna, and a network connection interface 112 that provides a communication interface to a network. Certain embodiments include a camera interface 114 that acts as a communication interface and provides functions for capturing digital images from a camera. A sound interface 116 can be provided as a communication interface for converting sound into electrical signals using a microphone and for converting electrical signals into sound using a speaker. According to some embodiments, a random access memory (RAM) 118 is provided, in which computer instructions and data can be stored in a volatile memory device for processing by the CPU 102.

According to an example embodiment, the computing device architecture 100 includes a read-only memory (ROM) 120 that stores invariant low-level system code and data for basic system functions such as basic input and output (I/O), startup, and keystrokes. According to an example embodiment, the computing device architecture 100 includes a storage medium 122 or another suitable type of memory that stores an operating system 124, application programs 126, data files 128, and other relevant information. According to an example embodiment, the computing device architecture 100 includes a power source 130 that provides appropriate alternating or direct current to power components. According to an example embodiment, the computing device architecture 100 includes a telephony subsystem 132 that allows sound to be transmitted and received over a telephone network. The constituent components and the CPU 102 communicate with one another over a bus 134.

“The CPU 102 is a computer processor according to an example embodiment. In one arrangement, the CPU 102 can include more than one processing unit. The RAM 118 interfaces with the computer bus 134 so as to provide quick RAM storage to the CPU 102 during the execution of software programs such as the operating system, application programs, and device drivers. To execute software programs, the CPU 102 loads computer-executable process steps from the storage medium 122 or other media into a field of the RAM 118; the RAM 118 can then hold data that the CPU 102 accesses during execution. In one example configuration, the device architecture 100 includes at least 125 MB of RAM and 256 MB of flash memory.

“The storage medium 122 can include a number of physical drive units, such as a redundant array of independent disks (RAID), a flash memory, a USB flash drive, an external hard disk drive, a thumb drive, a mini-dual in-line memory module (DIMM) synchronous dynamic random access memory (SDRAM), or an external micro-DIMM SDRAM. Such computer-readable storage media allow a computing device to access computer-executable process steps, application programs, and the like, and to offload data from, or upload data to, the device. The storage medium 122 can include a machine-readable medium in which a computer program product is tangibly embodied.

According to one example embodiment, the term computing device may refer to a CPU, or may be conceptualized as a CPU (for example, the CPU 102 of FIG. 1). The computing device may be connected, coupled, or otherwise in communication with one or more peripheral devices, such as a display. In this example embodiment, the computing device can output content to its local display and/or speaker(s). In another example embodiment, the computing device can output content to an external display device (e.g., over Wi-Fi), such as a TV or an external computing system.

“In some embodiments, the computing device 100 can include any number of hardware and/or software applications that are executed to facilitate any of these operations. One or more I/O interfaces can facilitate communication between the computing device and one or more input/output devices; examples include a universal serial bus port, a serial port, and a disk drive. The one or more I/O interfaces can be used to receive data and user instructions from a wide variety of input devices. The received data can be processed by one or more computer processors, as desired in various embodiments of the disclosed technology, and/or stored in one or more memory devices.

One or more network interfaces can connect the inputs and outputs of the computing device to one or more suitable networks and/or connections; these connections can include those that facilitate communication with any number of sensors associated with the system. The one or more network interfaces can further facilitate connection to one or more suitable networks, such as a local area network or a wide area network.

“FIG. 2 depicts a flow diagram of a method 200 for conveying chorded input, according to an example embodiment. According to some embodiments, method 200 can include the generation and storage of stimulation sequences readable by a processor. The processor may be in electrical communication with a plurality of actuators and an output device. Each stimulation sequence can contain instructions to activate one or more of the plurality of actuators in a specific sequence. The plurality of actuators can include a variety of devices, including, but not limited to, a vibration motor, a speaker, a bone-conduction device, or a combination thereof. The plurality of actuators may be placed on or within a wearable device so as to stimulate particular parts of the body. In some embodiments, the wearable device can be a glove or a pair of gloves, and the plurality of actuators can be coin vibration motors that vibrate portions of the wearer’s fingers.

“In some embodiments, at 202, the processor of a computing device 100 can generate a plurality of stimulation sequences. In some embodiments, each stimulation sequence, and the sequential activations resulting from it, can represent a chorded input. For example, in some embodiments three consecutive activations of three coin vibration motors on different fingers of a glove can represent the three fingers required to play a specific piano chord. These chorded inputs can represent a variety of information, including a word, letter, syllable, symbol, code, number, musical note, musical chord, or a combination thereof.
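As a concrete illustration, a lookup table can map each chorded input to the ordered list of actuators to fire. The Python sketch below uses the standard Braille dot numbering for a few letters; the glove-actuator names and the dot-to-finger assignment are hypothetical choices for illustration, not values from the disclosure.

```python
# Braille dot numbers for a few letters (standard six-dot Braille cell).
BRAILLE_DOTS = {
    "a": [1],
    "b": [1, 2],
    "c": [1, 4],
    "d": [1, 4, 5],
    "l": [1, 2, 3],
}

# Hypothetical actuator layout on a pair of gloves:
# dots 1-3 -> left index/middle/ring, dots 4-6 -> right index/middle/ring.
DOT_TO_ACTUATOR = {1: "L-index", 2: "L-middle", 3: "L-ring",
                   4: "R-index", 5: "R-middle", 6: "R-ring"}

def stimulation_sequence(letter):
    """Ordered actuator activations representing one chorded input."""
    return [DOT_TO_ACTUATOR[d] for d in BRAILLE_DOTS[letter]]

print(stimulation_sequence("d"))  # ['L-index', 'R-index', 'R-middle']
```

The same table-driven approach would apply to piano chords, with actuator IDs mapped to the fingers that press the keys.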

“At 204, the processor can generate a first plurality of stimulation sequence signals that cause a first plurality of activations in which one or more of the plurality of actuators activate in a first sequential order. Each activation of the first plurality of activations can be a separate stimulation event. A stimulation event can be a vibrational stimulation, an audio stimulation, a tactile stimulation, or a combination thereof; in some embodiments, stimulation events can also be electrical or visual stimulations. In some embodiments, the processor generates the first plurality of stimulation sequence signals upon execution of a first stimulation sequence. For instance, execution of a first stimulation sequence can cause a plurality of stimulation events (such as vibrations) to occur in a sequence representing a chorded input, such as an input to a Braille typing machine.

“In some embodiments, at 206, the processor can generate an output signal to cause a perceptible indication by the output device. The output device can be an audio output device that generates an audible sound, a speaker, a display, a screen, or a wearable headset for displaying a visual cue. In some embodiments, the perceptible indication can include a visual cue, an audible sound, a pause, a vibration, or a combination thereof. The perceptible indication can mark the end of one stimulation sequence (representing a chorded input) and the beginning of the next. For example, in some embodiments a first stimulation sequence can represent a first piano chord and a second stimulation sequence a second piano chord; the perceptible indication can be a beep from a speaker indicating that the stimulation events for the first chord are over and those for the second chord are about to start.

“In some embodiments, at 208, the processor can generate a second plurality of stimulation sequence signals that cause a second plurality of activations in which one or more of the plurality of actuators activate in a second sequential order. Each activation of the second plurality of activations can be a separate stimulation event. In some embodiments, the processor generates the second plurality of stimulation sequence signals upon execution of a second stimulation sequence.

“In some embodiments, method 200 can also include generating a signal that causes a parsing device to provide a parsing indication to a user prior to the execution of the first stimulation sequence. The parsing indication gives the user context for the sequence that follows. It can be a visual cue, an audible sound, a pause, a vibration, or a combination thereof, and it can indicate the idea embodied in the chorded input. For example, in some embodiments the parsing indication can be a letter of the alphabet, such as a sound effect played by a speaker that names the letter. The parsing device can be a speaker, a display, a screen, a wearable headset that displays a visual cue, or any combination thereof.
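Putting the steps of method 200 together, one pass through a lesson might look like the following sketch (Python). The `speak`, `vibrate`, and `beep` callables are hypothetical stand-ins for the parsing device, the actuators, and the output device.

```python
import time

def teach(lessons, speak, vibrate, beep, offset_s=0.03, gap_s=0.3):
    """Convey each (label, actuator_sequence) pair.

    For every chorded input: announce the parsing indication (e.g., the
    letter), fire the actuators in order, then emit a perceptible
    indication marking the end of the sequence.
    """
    for label, sequence in lessons:
        speak(label)                 # parsing indication before the sequence
        for actuator in sequence:    # sequential activations (204/208)
            vibrate(actuator)
            time.sleep(offset_s)     # offset between activations
        beep()                       # perceptible indication (206)
        time.sleep(gap_s)            # pause before the next sequence

# Example run with logging stand-ins instead of real hardware.
events = []
teach([("a", [1]), ("b", [1, 2])],
      speak=lambda s: events.append(("say", s)),
      vibrate=lambda a: events.append(("vib", a)),
      beep=lambda: events.append("beep"),
      offset_s=0, gap_s=0)
print(events)
# [('say', 'a'), ('vib', 1), 'beep', ('say', 'b'), ('vib', 1), ('vib', 2), 'beep']
```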

“In some embodiments, the activations of the actuators caused by a stimulation sequence can be separated in time. A predetermined offset can separate a first activation from a second activation of a plurality of activations; this offset can be between 0 and 50 milliseconds. Those skilled in the art will understand that there can be more than one offset and that the value of the offset can differ across embodiments.

“Additionally, in some embodiments, the execution of a subsequent stimulation sequence can begin at a predetermined time after a plurality of activations has ended. In some embodiments, the predetermined time can be from 100 milliseconds up to 1 second (e.g., 100–200 ms, 200–300 ms, 300–400 ms, 400–500 ms, 500–600 ms, 600–700 ms, 700–800 ms, 800–900 ms, or 900 ms–1 s).
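The two timing parameters, the intra-sequence offset and the inter-sequence pause, can be captured as a schedule of absolute activation times. A small Python sketch follows; the 30 ms and 500 ms defaults are example choices inside the disclosed 0–50 ms and 100 ms–1 s ranges, not prescribed values.

```python
def schedule(sequences, offset_ms=30, gap_ms=500):
    """Return (time_ms, actuator) pairs for a series of stimulation
    sequences, applying the intra-sequence offset between activations
    and the pause that precedes each subsequent sequence."""
    events, t = [], 0
    for i, seq in enumerate(sequences):
        if i > 0:
            t += gap_ms            # 100 ms to 1 s after the previous sequence
        for j, actuator in enumerate(seq):
            if j > 0:
                t += offset_ms     # 0 to 50 ms between activations
            events.append((t, actuator))
    return events

print(schedule([[1, 4], [2]]))  # [(0, 1), (30, 4), (530, 2)]
```

Precomputing the schedule, rather than sleeping inline, makes it easy to drive a hardware timer or to verify the timing in tests.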

According to some embodiments, the method can execute any number of stimulation sequences, with each sequence representing a specific chorded input. In some embodiments, the method can automatically repeat several stimulation sequences, generating the same stimulation events multiple times. In this manner, method 200 can convey one or more chorded inputs through passive haptic learning.

“In some embodiments, instructions stored on a non-transitory computer-readable medium can, when executed by at least one processor, cause method 200 to be performed. Method 200 can also be performed by a system that includes at least one memory operatively coupled to at least one processor and configured to store data and instructions that, when executed, cause the system to perform each step of the method.

“A computer program product can be provided according to another example embodiment. The computer program product can include a computer-readable medium. Instructions stored on the computer-readable medium, when executed by at least one processor, can cause a system to perform a particular function. In one embodiment, a non-transitory computer-readable medium contains instructions that, when executed by at least one processor, cause the system to perform the method described above: generating, by a processor in electrical communication with a plurality of actuators and an output device, a first plurality of stimulation sequence signals; generating, by the processor, an output signal to cause a perceptible indication by the output device; and generating, by the processor, a second plurality of stimulation sequence signals to cause a second plurality of activations in which one or more of the plurality of actuators activate in a second sequential order.

“A system is described in another example embodiment. The system may include at least one memory operatively coupled to at least one processor and configured to store data and instructions that, when executed by the at least one processor, cause the system to perform the method described above.

It will be understood that FIG. 2 is illustrative only, and that steps can be added, removed, or modified.

“Certain implementations of the disclosed technology are described above with reference to block and flow diagrams of methods, systems, and/or computer program products according to example implementations of the disclosed technology. One or more blocks of the flow diagrams, and combinations of blocks in the flow diagrams, can be implemented by computer-executable program instructions. According to some implementations of the disclosed technology, not all blocks of the block diagrams and flow diagrams need to be performed in the order presented.

These computer-executable program instructions can be loaded onto a general-purpose computer, a special-purpose computer, a processor, or other programmable data processing apparatus to produce a particular machine, such that the instructions that execute on the computer, processor, or other programmable data processing apparatus create means for implementing the functions specified in the flow diagram block or blocks. These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement one or more of the functions specified in the flow diagram block or blocks. As an example, implementations of the disclosed technology can provide a computer program product comprising a computer-usable medium having computer-readable program code or instructions embodied therein, said computer-readable program code adapted to be executed to implement one or more of the functions specified in the flow diagram block or blocks. The computer program instructions can also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions that execute on the computer or other programmable apparatus provide elements or steps for implementing the functions specified in the flow diagram block or blocks.

Accordingly, blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks, can be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements, or steps, or by combinations of special-purpose hardware and computer instructions.

“The disclosed systems, methods, computer-readable media (CRM), and apparatuses can convey chorded input in many ways. The chorded input can be conveyed through any sensory perception, such as sight, sound, touch, or any combination thereof. In some embodiments, the conveyance may include tap input, audio input, bone conduction, visual input, gesture input, text input, or a combination thereof. In some embodiments, the chorded input is conveyed haptically, whether actively or passively. The conveyance can include teaching, learning, or any combination thereof.

Learning is not always active; it can also be passive. Passive learning occurs incidentally rather than through deliberate study: it has been described as being “caught, rather than taught.” Subjects who are exposed to media-rich information in their everyday environment are 40% more likely to learn it than those who are not exposed to it. A media-rich environment, however, need not be limited to visual and audio stimulation. Multi-modal audio and tactile cues can give users a deeper understanding of musical structure. In a series of experiments, we demonstrated that manual skills can also be learned or reinforced passively, through tactile stimulation, while the user is engaged in other tasks.

The chorded input can also be conveyed via simultaneous or grouped stimuli. Sequential stimulation can also be used to convey the chorded input. Some embodiments include accompanying audio to convey the chorded input.

“The systems, methods, CRM, and apparatuses described herein can be used for a wide variety of purposes, including conveying chorded input. Chorded input is any type of input in which multiple actions are performed simultaneously, for example, pressing multiple keys at the same time. The systems, methods, and apparatuses described herein can be used to help teach chorded input representing a word, letter, number, symbol, musical note, musical chord, or combination thereof.

“In certain embodiments, the systems, methods, and apparatuses described herein can be used to facilitate music-related learning, including sight recognition, sound recognition, and musical-notation reading. Music-related learning can include learning to play a musical instrument. Music-related learning can also include enhancing and/or advancing skills related to an instrument already played, such as learning a new song or a portion thereof, increasing the volume of musical notes or chords learned, or enhancing playing speed. The systems, methods, and apparatuses described herein can facilitate the playing of various musical instruments, including woodwind instruments (e.g., flute), string instruments (e.g., guitar), percussion instruments, brass instruments (e.g., trumpet), electronic instruments (e.g., synthesizer), and keyboard instruments (e.g., piano). These systems, methods, and apparatuses can also facilitate the playing of idiophones (e.g., directly struck or indirectly struck idiophones), plucked idiophones, friction idiophones, and membranophones. The systems, methods, and apparatuses described herein thus allow for the facilitation of playing musical instruments such as guitars, xylophones, marimbas, drums, and organs. Some embodiments teach portions of pieces of music; some embodiments teach entire pieces of music. Some embodiments teach the music sequentially. Some embodiments teach the music to multiple limbs simultaneously when the chorded input spans multiple limbs.

“In some embodiments, the systems, methods, and apparatuses described herein can be used to teach words, letters, phrases, or combinations thereof to facilitate language-related learning (including sight recognition, sound recognition, reading, writing, and verbal comprehension). Language-related learning can include learning a new language or a portion thereof. Language-related learning can also include enhancing or advancing skills related to a language already known. The systems, methods, and apparatuses described herein can facilitate language-related learning in languages such as English, Korean, and Mandarin. The systems, methods, and apparatuses described herein may also facilitate the use of a language for a particular function, such as stenography. In some embodiments, the language-based skills taught include typing. In some embodiments, the language-based skills taught include reading. In some embodiments, language-based skills can be taught incrementally, for example with a pangram.

“In some embodiments, the systems, methods, and apparatuses described herein can be used to teach words, letters, phrases, codes, or combinations thereof to facilitate code-related learning (including sight recognition, sound recognition, reading, and writing). In some embodiments, the code-related skills are rhythmic and/or temporally based. Code-related learning can include learning a new code or a portion thereof. Code-related learning can also include enhancing or advancing skills related to a code already known, such as increasing reading, writing, or comprehension speed, or increasing the number of words, letters, and/or phrases learned. The systems, methods, and apparatuses described herein can facilitate code-related learning of codes including Braille and Morse code. Some embodiments teach Morse code via tapping input for text entry, such as on a mobile device. Some embodiments teach Morse code using audio alone (which could result in passive learning of the text entry).

“In some embodiments, the systems, methods, or apparatuses described herein may be used to facilitate rehabilitation. Rehabilitation can include motor-skill rehabilitation necessitated by injury, disability, birth defect, or aging. Rehabilitation may also include sensory enhancement for paralyzed individuals.

“In some embodiments, the systems, methods, and apparatuses described herein can be used in any application that uses a haptic interface, including but not limited to teleoperation, flight simulators, dance simulation, gaming controllers, game add-ons for augmented reality, computer-based learning systems, and text-entry systems. Some embodiments teach muscle memory via the conveyance of chorded input. Some embodiments teach machine and/or system control via the conveyance of chorded input.

“Also described herein are methods for teaching manual skills and/or musicality. In these methods, the fingers are stimulated in a pattern that corresponds to a motor action or skill (using preset phrases, songs, patterns, etc.); some motor skills include multiple simultaneous finger actions (“chords”). These chorded actions can be represented as sequential stimuli, i.e., stimuli that have a temporal offset. Users may focus their attention on other tasks and need not pay attention to the stimuli. In some embodiments, sequential stimuli within a chord are separated by delays between 0 ms (immediately consecutive and/or overlapping) and 250 ms, while sequential actions are separated by delays between 0 and 1.5 seconds.
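The timing scheme above can be sketched in code. The following is a minimal, illustrative Python sketch (not code from the disclosure); the tuple representation of actions, the finger indices, and the particular default delays are assumptions chosen from within the ranges described above.

```python
# Hypothetical sketch: schedule sequential stimuli for chorded and
# non-chorded actions. Each action is a tuple of finger indices; a
# tuple with more than one finger is a chord.

def schedule_stimuli(actions, chord_gap_s=0.1, action_gap_s=1.0):
    """Return (start_time, finger) pairs for a list of actions.

    Within a chord, stimuli are offset by chord_gap_s (0-250 ms per
    the description above); between actions, the delay is action_gap_s
    (0-1.5 s per the description above).
    """
    schedule = []
    t = 0.0
    for action in actions:
        for finger in action:
            schedule.append((round(t, 3), finger))
            t += chord_gap_s          # temporal offset within a chord
        t += action_gap_s             # pause before the next action
    return schedule

# Example: a four-finger chord followed by a single-finger action
print(schedule_stimuli([(1, 2, 4, 5), (3,)]))
# -> [(0.0, 1), (0.1, 2), (0.2, 4), (0.3, 5), (1.4, 3)]
```

A playback loop would then sleep until each start time and drive the corresponding motor.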

“In some embodiments, sequential stimuli within a chorded action alternate between hands according to a preprogrammed pattern based on a clarity determination. Some embodiments present the sequential stimuli in a chord traversing both hands with a temporal offset. Some embodiments group the stimuli in a chord traversing both hands by hand. Some embodiments begin a chord traversing both hands with the hand that contains the most stimuli (i.e., actions). Some embodiments transmit the stimuli within a chord traversing both hands by alternating hands (e.g., when there are two stimuli per hand). Some embodiments transmit the stimuli within a chord by grouping nearby stimuli (stimuli on adjacent fingers of the same hand) when the chord does not stimulate fingers on the opposite hand. Some embodiments transmit the stimuli within a chord by switching between hands when consecutive stimuli would otherwise fall on the same hand. When the order is grouped by hand, repeated chords may be presented starting with alternating hands.
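One of the ordering heuristics above, starting with the hand containing the most stimuli and then alternating hands, can be sketched as follows. The ("L"/"R", finger) representation is an assumption used only for illustration.

```python
# Hypothetical sketch of one hand-ordering heuristic for a chord that
# traverses both hands: begin with the hand holding the most stimuli,
# then alternate hands while stimuli remain on either side.

def order_chord(stimuli):
    """Order (hand, finger) stimuli: majority hand first, then alternate."""
    left = [s for s in stimuli if s[0] == "L"]
    right = [s for s in stimuli if s[0] == "R"]
    first, second = (left, right) if len(left) >= len(right) else (right, left)
    ordered = []
    while first or second:
        if first:
            ordered.append(first.pop(0))   # majority hand takes each odd slot
        if second:
            ordered.append(second.pop(0))  # other hand alternates in
    return ordered

print(order_chord([("L", 1), ("L", 2), ("R", 1)]))
# -> [('L', 1), ('R', 1), ('L', 2)]
```

A single-handed chord passes through unchanged, since the second list is empty.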

“Also described herein are methods of teaching manual tasks, as in the preceding paragraph, in which audio accompanies the tactile stimuli. In some embodiments, the patterns and sequences of tactile stimuli are divided into chunks containing between 10 and 18 tactile stimuli. In some embodiments of the method for teaching manual tasks, synchronized audio that encodes the meaning of the tactile patterns is presented immediately before each chord-group of tactile stimuli for multiple simultaneous actions; for example, the spoken letter “g” is played immediately before the four stimuli used to encode the Braille letter “g.” Some embodiments of the method for teaching manual tasks use synchronized audio presented immediately before sections of sequential-action stimuli (e.g., words). Some embodiments of the method for teaching manual tasks use synchronized audio presented before larger groups of stimuli (e.g., words, phrases).

While certain embodiments have been described with reference to what are presently considered to be the most practical embodiments, it is to be understood that the disclosed technology is not limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limiting the scope of the disclosed technology.

This written description uses examples to disclose certain embodiments, including the best mode, and to enable any person skilled in the art to practice certain embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of certain embodiments of the disclosed technology is defined in the claims and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the claims’ literal language, or if they include equivalent structural elements with insubstantial differences from the claims’ literal language.

“EXAMPLES”

“Example 1”

“Passive Haptic Learning of Non-Chorded Input”

“Haptic guidance is one way for users to learn manual skills, and this learning can take place even when the user is distracted. This example shows that PHL aids in the learning of rote patterns for one-handed muscle memory. Mobile Music Touch (MMT) was a project that demonstrated passive learning of the pattern of keys that plays a piano melody. In that research, users wore a PHL glove (a wearable tactile interface) while performing other tasks, such as doing homework or taking a test. The glove system played the song and stimulated each finger with the appropriate note. The vibrations could be ignored and did not distract from the primary task. This method proved effective for learning simple melodies, such as Ode to Joy, in just 30 minutes.

In a feasibility study, three participants learned to type a sentence on a randomized 10-key keyboard using a one-finger-to-one-key mapping. The keyboard carried the letters “A” through “H,” space, and enter. The study participants wore gloves containing embedded vibration motors. For 30 minutes they focused their attention on playing a memory card game while repeatedly hearing the phrase and feeling stimulation of the finger patterns needed to type it. After the PHL session, users were able to type the phrase with less than 20% error. They were also able to correctly type the components of the phrase (words, letters), and they understood the keyboard mapping well enough to type a new phrase with less than 20% error.

“Example 2”

“Passive Haptic Learning of Braille Typing Skills for Two Phrases”

The experiment of Example 2 was designed to: (1) demonstrate that Braille typing skills can be taught without the active attention of the learner; (2) articulate a method to teach chorded input using sequential tapping patterns (where previous attempts at teaching chords have failed); (3) create a method to teach the entire Braille alphabet in just four hours; (4) show that Braille letters can be identified both visually and tactilely; and (5) introduce distraction tasks with more sensitive performance metrics.

This example demonstrates a system for passive haptic learning of typing skills. A study with 16 participants showed that users made significantly fewer errors when typing Braille phrases after receiving passive instruction, with a 32.85% reduction in error rate, compared with a 2.73% increase in errors for the control condition. PHL users were also better able to identify and read Braille letters from the phrase (72.5% vs. 22.4%). A second study with 8 participants taught the entire Braille alphabet passively over four sessions. Participants who received PHL reduced their typing errors faster and more consistently than those who did not, and 75% of PHL users reached zero typing errors, compared to no control users. Participants in the PHL condition were also able to recognize and read 93.3% of all Braille alphabet letters by the end of the study. These results indicate that Passive Haptic instruction using wearable computers is feasible for teaching Braille typing and reading.

“Example 2 examined whether Braille typing skills could be taught passively. The study evaluated users’ performance on typing tests set around learning periods. During each study session, the user completed a distraction task, with or without simultaneous Passive Haptic Learning, and the session ended with a typing test and Braille reading quizzes. The distraction task was scored to assess the impact of PHL on user performance. The Braille reading quizzes were used to determine whether knowledge transferred from Braille typing skills to Braille reading skills. Each user completed two sessions: one with PHL and one without (control). Users learned one of the two phrases in their first session and the second phrase in their next session. The experiment was counterbalanced for condition and phrase. All participants were native English speakers and did not know Braille.

“System: The system used in Example 2 consisted of a pair of gloves with one vibration motor per finger, a programmed microcontroller to drive the glove interface, and a Braille keyboard made from two BAT keyboards. The microcontroller coordinated the sequences of vibrations with audio prompts for the two phrases. FIG. 3 shows the system.

“Gloves: A pair of gloves served as the wearable tactile interface that delivered vibration stimuli in Example 2. The gloves were fingerless and stretchy to ensure a comfortable fit for different hand sizes, allowing each motor to rest flush at the base knuckle. Each motor was attached to the stretch fabric with adhesive and was located inside the glove on the back of the hand (dorsal, not palm-side). The gloves used Eccentric Rotating Mass (ERM) vibration motors (Precision Microdrives Model #308-100), which could be driven high or floated by an Arduino Nano with buffer circuitry using Darlington array chips.

“Audio and Vibration Sequences: Braille is a chorded language, meaning that typing a single character requires pressing multiple keys at once. Instead of delivering stimuli simultaneously to all fingers involved in typing a chord, the motors were activated sequentially. Audio and timing cues were used to indicate the completion of each chord, or letter, to users.

Vibration and audio were used under two conditions in this study: during each pre-test and during Passive Haptic Learning. Both presented users with the audio of the phrase and its spelling. After each letter was spoken, the motors on the fingers needed to type that chorded letter were vibrated in a specific sequence. After the chord was completed, the system paused for 100 milliseconds and then played the audio for the next letter. During PHL, the audio-haptic stimulus played for the duration of the distraction task. Each repetition of a phrase took approximately 10 seconds, with motors activated for 300 ms to 750 ms. The vibration sequence timings were selected to allow clear discrimination of the vibrations and recognition of separate chorded letters, and allowed approximately 60 repetitions of a phrase over the distraction task period.
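The per-letter timing described above can be summarized as a schedule generator. This is a hypothetical sketch, not the study's firmware; the audio-duration parameter and the string event encoding are assumptions, while the 300 ms vibration and 100 ms pause defaults follow the timings given above.

```python
# Hypothetical sketch: build the event schedule for one chorded letter,
# consisting of the spoken letter followed by sequential vibrations and
# a closing pause.

def letter_schedule(letter, fingers, audio_s=0.5, vibration_s=0.3, pause_s=0.1):
    """Return (event, start_time) tuples for one chorded letter, plus the
    time at which the next letter's audio may begin.

    audio_s is an assumed duration for the spoken letter; vibration_s
    falls in the 300-750 ms range and pause_s is the 100 ms chord gap
    described above.
    """
    events = [("audio:" + letter, 0.0)]
    t = audio_s                           # vibrations begin after the audio
    for finger in fingers:
        events.append((f"vibrate:{finger}", round(t, 3)))
        t += vibration_s                  # one motor at a time, sequentially
    return events, round(t + pause_s, 3)  # 100 ms pause closes the chord

# Braille "g" uses dots 1, 2, 4, 5 -> four sequential stimuli
events, next_start = letter_schedule("g", [1, 2, 4, 5])
print(events)
print(next_start)
```

Chaining the returned start time into the next call would produce the full phrase sequence repeated over the learning period.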

“Keyboard: This study used two Infogrip BAT keyboards to form a Braille keyboard. The inputs from the BAT keyboards were converted into Braille keyboard entries: the ASCII characters generated by key presses were translated via a hash map to the appropriate Braille value. Both staggered (pressing keys one at a time, then releasing them all) and simultaneous (pressing all of the keys down at once) entry were supported. This method produced a chorded input that follows the Perkins Brailler standard, as well as digital Braille keyboards such as Freedom Scientific’s Pac Mate.
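The chord-capture behavior described above can be illustrated with a small sketch. This is not the study's actual software; the partial dot-pattern map and the class interface are assumptions for illustration. It supports both staggered and simultaneous entry by emitting a letter only once all held keys have been released.

```python
# Hypothetical sketch: capture a Braille chord from key-down/key-up
# events. Keys accumulate while any key is held; the chord is looked up
# in a hash map and emitted when the last key is released.

BRAILLE = {frozenset({1}): "a", frozenset({1, 2}): "b",
           frozenset({1, 2, 4, 5}): "g"}  # partial map, for illustration

class ChordCapture:
    def __init__(self):
        self.held = set()      # keys currently pressed
        self.chord = set()     # keys pressed since the chord began

    def key_down(self, dot):
        self.held.add(dot)
        self.chord.add(dot)

    def key_up(self, dot):
        self.held.discard(dot)
        if not self.held:      # all keys released -> chord complete
            letter = BRAILLE.get(frozenset(self.chord), "?")
            self.chord = set()
            return letter
        return None

cap = ChordCapture()
cap.key_down(1); cap.key_down(2)       # staggered presses
cap.key_up(1)                          # chord not yet complete
print(cap.key_up(2))                   # -> b
```

Because the chord set persists until every key is released, staggered and simultaneous presses produce the same letter.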

“Typing Software: The typing tests in our studies were conducted using specialized typing software. The software provided audio prompts and a blank screen; an asterisk was displayed after each successful entry of a Braille letter or space, to provide feedback while preventing any learning during testing. The software logged user inputs and calculated statistics such as uncorrected error rate (UER) and words per minute (WPM), using formulae described by MacKenzie & Tanaka-Ishii, Text Entry Systems: Mobility, Accessibility, Universality (2007), San Francisco, Calif.: Morgan Kaufmann.
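The metrics named above can be illustrated with the standard formulations: WPM counts five characters per word, and an uncorrected error rate can be computed from the minimum string distance (MSD) between the presented and transcribed strings. This is a common textbook formulation offered as a sketch, not necessarily the study's exact code.

```python
# Illustrative sketch of common text-entry metrics.

def wpm(transcribed, seconds):
    """Words per minute, with one word defined as five characters."""
    return (len(transcribed) - 1) / seconds * 60 / 5

def msd(a, b):
    """Minimum string distance (Levenshtein) via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def uer(presented, transcribed):
    """Uncorrected error rate as a percentage."""
    return 100 * msd(presented, transcribed) / max(len(presented),
                                                   len(transcribed))

print(round(wpm("add a bag", 10.0), 2))   # -> 9.6
print(uer("add a bag", "add a bog"))      # one substitution error
```

A session's UER improvement is then simply the pre-test UER minus the post-test UER.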

“Phrases Tested: The phrases used in this study (FIG. 4) were “add a bag” (AAB) and “hike fee” (HF). These phrases were selected because they are easily identified from audio clips: they include no homophones or obscure spellings, and they have clear meanings that are easy to understand and remember. The phrases were also chosen for their similar length (15-18 vibrations) and comparable complexity (4 or 5 unique letters), with words of 3-4 characters composed of Braille letters that require no more than 3 keys to type.

“Pre-Test: The initial performance of users was determined by a typing pre-test. To introduce participants to the keyboard and to explain the chorded nature of typing, study administrators used a set of verbal instructions and gestures. Users, all of whom were novices, could then understand how to use the chorded keyboard to type and what the vibrations and audio meant. After hearing the audio of the current phrase, participants were prompted to type it; during the pre-test, users were allowed to type the phrase once. The pre-test also let users practice mapping the vibration-guided meanings onto typed chords, and its results establish baseline typing performance.

“Distraction Task: Subjects in both the PHL and control conditions took part in a distraction task after the pre-test. The distraction task kept subjects’ attention on another activity and measured their performance on that activity while they received the stimuli. Each group spent 30 minutes on the distraction task with their gloves on and their earbuds in. In this study, the distraction task was an online game, and the instruction was to keep one’s attention on the game only and ignore any vibrations or audio. Both groups were asked to score as high as possible during the task, and their scores were recorded at the end of each distraction task.

“For this study, the distraction game was chosen to (1) be difficult, cognitively demanding, and mentally taxing; (2) contain no reading or words; (3) emit no sound, or be mutable; and (4) log a score. An online game called Fritz! was chosen as the distraction task and administered to both groups. All subjects received instruction on how to play before the game. Fritz! requires aligning blocks of similar patterns by moving adjacent blocks. Subjects in a PHL study session received audio and haptic stimulation while they played the game; the control group received no PHL audio or vibration. The PHL group was instructed not to pay attention to the vibrations or audio and to concentrate on the game.

“Post-Test: Users were given a typing test (post-test) after the distraction task. Participants were first asked to type the entire phrase that they had learned or attempted during the pre-test. Participants typed the entire phrase three times, then were asked to enter each word and letter (in random order) for three more trials. During the test, participants felt no vibrations.

“Braille Reading Quizzes: This research examined the potential of Passive Haptic Learning for learning typing skills on a Braille keyboard, and Braille reading quizzes were created to verify whether Braille typing skills also transferred to Braille reading. The participant pool recruited was not familiar with Braille, and tactile perception of Braille is known to be challenging for such individuals. Reading quizzes were therefore created that included visual Braille representations in addition to tactile, embossed Braille representations.

“Two quizzes were given at the end of each session. Because the tactile quiz combined Braille typing-to-reading translation with tactile perception, the visual quiz was presented before the tactile quiz.

Instructions were given at the beginning of each quiz to explain the finger mapping on the keyboard. To ensure that participants understood the relationship, study administrators demonstrated this mapping with their hands and a set of verbal instructions. FIG. 5A shows the picture used to illustrate the mapping in the quizzes.

“Visual Quiz: The visual quiz consisted of images of Braille cells, with dots filled in or left blank as they would be printed on a Braille document, with one question per letter. Users in the “add a bag” session were asked to identify the letters of the phrase in a consistent random order. Users in the “hike fee” session were likewise asked to identify the letters of their phrase, in the order f, i, e, k, h (FIG. 5B), and to write the letter each cell represents.

“Tactile Quiz: This quiz tested subjects’ ability to perceive Braille cells with their fingers and identify the letter they felt. The subject placed their dominant hand in a box that was open on only one side. Inside the box was a card embossed with the current letter. The subject could slide their hand into the box and read the Braille with their fingers without being able to see the card. Participants were given the same letters as in the visual quiz, in a consistent random order (b, g, a, d, h, e, f, i, k), and filled in a blank to indicate the embossed letter they perceived (FIG. 5C).

“Results of Two-Phrase Study: With the help of PHL, participants significantly reduced their typing errors on Braille keyboards, sometimes reaching 100% accuracy. Users also learned to read 75% of the Braille letters. These results suggest that users learned Braille/chorded text entry via Passive Haptic Learning.

“Typing: The study used typing software to calculate uncorrected error rate (UER), words per minute (WPM), and other statistics used to analyze participants’ performance. Because this study was conducted within-subjects, paired tests were used to evaluate the effects of receiving PHL versus not receiving PHL. One-tailed tests were chosen because it was hypothesized that PHL would improve the accuracy of letter and phrase typing, as well as visual and tactile recognition of letters. No correction for families of multiple hypotheses was applied. The threshold of significance was set at α=0.05.
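The paired analysis described above can be sketched as follows, with entirely hypothetical example data (not the study's data): each participant has one improvement score per condition, and the t-statistic is computed on the per-participant differences.

```python
# Illustrative sketch of a paired t-test on per-participant differences.

import math

def paired_t(xs, ys):
    """Paired t-statistic for two equal-length samples."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)                     # t = mean / SE

# Hypothetical accuracy improvements (%) per participant, per condition
phl = [40.0, 35.0, 45.0, 30.0]
control = [2.0, -5.0, 3.0, 0.0]
print(round(paired_t(phl, control), 2))
```

The resulting statistic is compared against the t-distribution with n-1 degrees of freedom at the chosen significance threshold.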

“Comparing typing errors in the pre-test with those of the three post-test phrase-typing trials, the UER (uncorrected error rate) was calculated for each session. FIGS. 6 and 7 show the results for both phrases: users decreased their typing errors (increased accuracy) after passive learning sessions by 31.55% and 42.78% on average, respectively.

“This was not the case for control sessions, where minimal to zero improvement (2.68%) was typical for “add a bag” and increased errors (up 7.14%) were typical for “hike fee.” These data are reflected in the average accuracy improvements for each phrase (FIG. 8). A paired t-test suggests a statistically significant difference between the conditions: participants who received PHL showed a greater improvement in UER (M=39.14) than those who did not (M=1.97), SE=11.98, BCa 95% CI [22.0, 56.27], t(15)=4.87, p<0.0001. Participants were also asked to correctly type every letter in the phrase; the number of correct letters was significantly higher in PHL sessions than in control sessions (FIG. 9).

“T-tests likewise show a statistically significant difference (2.31) between the condition in which participants received PHL (M=3.25, SE=1.69) and the condition in which they did not.

“Distraction Task: The baseline performance of participants in the distraction task, the Fritz! game, was also characterized. A player who was not part of the PHL study played three trials, each consisting of a 10-minute focused session followed by a 10-minute distracted session. The focused session was game-only; in the distracted session, the player was asked to play the game while watching a television program. The player’s average score during distracted game sessions was 19.36% lower than during focused sessions.

“All 16 subjects played the game during each PHL and control session and cleared up to level 5 within the time limit. The performance-difference results were noisy because of the nature of the game; however, the average score difference between PHL and control sessions was within 10%, as shown in FIG. 10. These results demonstrate the sensitivity of our distraction task for monitoring user attention and shared mental resources.

Summary for “Methods and systems for conveying chorded input”

“In addition to the Braille example above, improved systems to teach other chorded input (e.g., musical instruments, code-based systems, text entry, etc.) are also needed. For example, improved systems to teach stenotype (a text-entry technique that allows real-time transcription) are needed. Stenotype, like Braille, is a chorded text-entry system. Passive Haptic Learning of stenotype would reduce the practice time required for expertise and lower barriers to entry to the industry, where vocational schools currently see 85%-95% dropout rates.

Developers are still searching for discreet text entry methods for small devices, especially with the rise of mobile and wearable technology. The trade-off between users’ need for discreet entry and the difficulty of learning new entry methods leads to most innovative techniques being abandoned. With the popularity of wearable devices like smartwatches and Google Glass, users want a nonverbal (silent) form of text entry that is not visual and can be performed by touch. Because these devices are slim and lightweight, standard text entry methods are difficult to use on them. Research has sought solutions, including new keyboard interfaces optimized for smartwatches and novel entry methods such as rotational text entry. Discreet text entry remains a problem on these devices: it is difficult to create an interface that a person can manipulate without bulky hardware or non-subtle gestures. As mobile devices become smaller and more discreet, many low-profile devices, such as headsets, hearing aids, and electronic textiles, cannot support such methods. A non-visual, single-channel system such as Morse code might be a solution; however, learning costs can prevent many text entry systems from being adopted. Improved systems for conveying chorded input on small mobile devices for silent, eyes-free text entry are therefore also desirable.

“Disclosed herein are methods, computer-readable media, systems, and apparatuses for conveying chorded input. Some embodiments convey the chorded input via passive and/or active haptic learning technology. As used herein, haptic refers to or relates to the sense of touch. Tactile feedback technology, which can recreate the sensation of touch, is a form of haptic learning technology; it may include wearable tactile interfaces that apply a sensation (e.g., via vibrations, forces, or motions) to the user. Passive Haptic Learning (PHL) is the acquisition of sensorimotor skills without active attention to learning: it allows a person to acquire “muscle memory” through a sensory stimulus without paying attention to that stimulus. These sensorimotor skills apply to many applications, including Braille, musical instruments, code-based systems, text-entry systems, rehabilitation, and the like.

“Disclosed are methods for conveying a chorded input. They include generating, by a processor in electrical communication with a plurality of actuators and an output device, a first plurality of stimulation sequence signals that cause a first plurality of activations in which one or more of the plurality of actuators activate in a first sequential order. Responsive to the plurality of actuators activating in the first sequential order, the processor generates an output signal to cause a perceptible indication by the output device. Finally, the processor generates a second plurality of stimulation sequence signals to cause a second plurality of activations in which one or more of the plurality of actuators activate in a second sequential order.

“In some embodiments, a first activation and a second activation of the first plurality of activations are separated by a predetermined offset (e.g., 0 to 50 milliseconds). A second predetermined offset can separate a first activation from a second activation of the second plurality of activations. Some embodiments convey the chorded input via passive haptic learning. In some embodiments, each stimulation event may be a vibrational stimulation or an audio stimulation.

“In some embodiments, execution of the second stimulation sequence starts between 100 milliseconds and 1 second after the first plurality of activations has ended. A stimulation sequence may represent a chorded input. A chorded input can represent a word, letter, syllable, code, number, symbol, musical note, musical chord, or any combination thereof.
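The relationship described above, one stimulation sequence per chorded input, with a small intra-sequence offset between activations, can be sketched as a simple data structure. This is an illustrative Python sketch only, not part of the disclosure; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StimulationSequence:
    """One chorded input rendered as an ordered series of actuator activations."""
    label: str              # the idea the chord represents (a letter, chord name, etc.)
    actuators: list         # actuator indices, in the order they should fire
    intra_offset_ms: int = 50   # offset between consecutive activations (0-50 ms per the text)

    def schedule(self, start_ms=0):
        """Return (time_ms, actuator) pairs, one per activation."""
        return [(start_ms + i * self.intra_offset_ms, a)
                for i, a in enumerate(self.actuators)]

# e.g., a hypothetical three-finger piano chord mapped to actuators 0, 2, and 4
seq = StimulationSequence(label="C-major", actuators=[0, 2, 4], intra_offset_ms=30)
print(seq.schedule())  # [(0, 0), (30, 2), (60, 4)]
```

A second sequence would then be scheduled with its `start_ms` set 100 ms to 1 s after the last activation of the first, matching the inter-sequence delay described above.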

“In some embodiments, each actuator is placed on or within a wearable device that stimulates a specific portion of the body. Some embodiments include a glove as the wearable device. The portion of the body may include a left or right hand of the device wearer, and each actuator may stimulate the left or right hand by vibrating.

“Some embodiments also include generating, by the processor, a signal to cause a parsing device to generate a parsing indication to a user before executing the first stimulation sequence. The parsing indication can be a letter. A visual cue, an audible sound, or a pause can all serve as the parsing indication. Some embodiments include a visual cue and a screen of a wearable headset that displays the visual cue. In some embodiments, the plurality of actuators includes a vibration motor, a speaker, a bone-conduction device, or a combination thereof.

The perceptible indication may include a visual cue, an audible sound, a pause, a vibration, or any combination thereof. Some embodiments include an audible sound and an output device configured to produce the audible sound.

“A computer program product can be provided according to another example embodiment. The computer program product can include a computer-readable medium. Instructions stored on the computer-readable medium, when executed by at least one processor, can cause a system to perform a particular method. The method includes generating, by a processor in electrical communication with a plurality of actuators and an output device, a first plurality of stimulation sequence signals that cause a first plurality of activations in which one or more of the plurality of actuators activate in a first sequential order. The processor then generates an output signal to cause a perceptible indication by the output device. Finally, the processor generates a second plurality of stimulation sequence signals to cause a second plurality of activations in which one or more of the plurality of actuators activate in a second sequential order.

“A system is described in another embodiment. The system may include at least one memory operatively coupled to at least one processor and configured to store data and instructions that, when executed by the at least one processor, cause the system to perform the methods described herein.

“Other embodiments, features, and aspects of the disclosed technology are described in detail herein and are considered part of the claimed disclosed technology. Other embodiments, features, and aspects can be understood with reference to the detailed description, accompanying drawings, and claims.

“Disclosed are systems, methods, and apparatuses for conveying chorded input. Some embodiments transmit the chorded input haptically. Some embodiments transmit the chorded input passively.

“Some embodiments described herein include a wearable computing device that can teach chorded input (e.g., Braille typing skills or playing the piano) using sensory input (e.g., vibrational stimuli), with or without the active attention of the user. Through user studies, these systems have been shown to be useful teaching tools, as illustrated in Examples 1-5 and the corresponding Figures. These systems, methods, and apparatuses can be used to teach users how to type these chords. However, human perception of simultaneous stimuli is poor. This disclosure therefore describes a method of teaching chorded inputs with sequential stimuli using passive haptic learning systems. During system use, users were able to recognize and understand the technique.

“Some embodiments will be described in detail hereinafter with reference to the accompanying figures. The disclosed technology may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.

“The following description contains many specific details. However, it is to be understood that embodiments of the disclosed technology can be practiced without these details. In other instances, well-known methods, structures, and techniques have not been described in detail so as not to obscure the description. References to ‘one embodiment,’ ‘an embodiment,’ ‘example embodiment,’ ‘some embodiments,’ ‘certain embodiments,’ ‘various embodiments,’ etc., indicate that an embodiment of the disclosed technology may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that feature, structure, or characteristic. Repeated use of the phrase ‘in one embodiment’ does not necessarily refer to the same embodiment.

“Unless the context clearly dictates otherwise, the following terms take the meanings given here throughout the specification and claims. The term ‘or’ is intended to mean an inclusive ‘or.’ Further, the terms ‘a,’ ‘an,’ and ‘the,’ unless specified otherwise or made clear from context, mean one or more of the forms.

“Unless otherwise stated, the use of the ordinal adjectives ‘first,’ ‘second,’ ‘third,’ etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence.

A computing device may be referred to as a mobile device, mobile computing device, terminal, mobile station (MS), cellular phone, cellular handset, personal digital assistant (PDA), smartphone, display device, medical device, or by other similar terminology. A computing device can also be a processor, controller, or central processing unit (CPU). A computing device can also be a set of hardware components.

“Various aspects may be implemented using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computing device to implement the disclosed subject matter. A computer-readable medium may include, for example: a magnetic storage device such as a hard disk, a floppy disk, or a magnetic strip; an optical storage device such as a compact disk (CD) or digital versatile disk (DVD); a smart card; and a flash memory device such as a card, stick, or key drive, or an embedded component. It should also be noted that a carrier wave can carry computer-readable electronic data, such as those used in transmitting and receiving electronic mail (email) or in accessing a computer network such as the Internet or a local area network (LAN). A person of ordinary skill in the art will recognize that many modifications may be made without departing from the scope or spirit of the claimed subject matter.

“Various methods and systems for conveying chorded input are disclosed and will now be described with reference to the accompanying figures.”

“FIG. 1 depicts a block diagram of an illustrative computing device architecture 100, according to an example embodiment. Certain aspects of FIG. 1 may be embodied in a computing device 100, and more or fewer components than shown in FIG. 1 may be included. It will be understood that the computing device architecture 100 is provided for example purposes only and does not limit the scope of the various disclosed systems, methods, and computer-readable media.

“The computing device architecture 100 of FIG. 1 includes a CPU 102 and a display interface 104. In certain embodiments, the display interface 104 can be connected directly to a local display, such as a touch-screen display associated with a mobile computing device. In another example embodiment, the display interface 104 can provide data, images, and other information to an external/remote display 150 that is not physically connected to the mobile computing device; for example, a desktop monitor can mirror graphics or other information presented on the mobile computing device. In certain embodiments, the display interface 104 can wirelessly communicate with the remote display 150.

“In one example embodiment, the network interface 112 can be configured as a communication interface and may provide functions for rendering video, graphics, images, or text on the display. A communication interface may include a serial port, a parallel port, a general purpose input/output (GPIO) port, a game port, a universal serial bus (USB) port, a micro-USB port, a high-definition multimedia interface (HDMI) port, a video port, an audio port, a Bluetooth port, a near-field communication (NFC) port, another similar communication interface, or any combination thereof.

“The computing device architecture 100 may include a keyboard interface 106 that provides a communication interface to a keyboard. In one example embodiment, the computing device architecture 100 includes a presence-sensitive display interface 108 for connecting to a presence-sensitive display 107. According to certain embodiments, the presence-sensitive display interface 108 may provide a communication interface to various devices, such as a touch screen, a pointing device, or a depth camera, which may or may not be associated with a display.

“The computing device architecture 100 can be configured to use one or more input/output interfaces (for example, the keyboard interface 106, the display interface 104, the presence-sensitive display interface 108, the network connection interface 112, the camera interface 114, the sound interface 116, etc.) to allow a user to capture information into the computing device architecture 100. An input device can include a mouse, a trackball, a directional pad, a trackpad, a touch-verified trackpad, a presence-sensitive trackpad, a presence-sensitive display, a scroll wheel, a digital camera, a digital video camera, a web camera, a microphone, a sensor, a smartcard, and the like. The input device can be integrated with the computing device architecture 100 or can be a separate device. For example, an input device may be an accelerometer, a magnetometer, a digital camera, a microphone, or an optical sensor.

“Example embodiments of the computing device architecture 100 may include an antenna interface 110 that provides a communication interface to an antenna, and a network connection interface 112 that provides a communication interface to a network. In certain embodiments, a camera interface 114 acts as a communication interface to provide functions for capturing digital images from a camera. A sound interface 116 can serve as a communication interface for converting sound into electrical signals using a microphone and for converting electrical signals into sound using a speaker. According to example embodiments, a random access memory (RAM) 118 is provided, where computer instructions and data may be stored in a volatile memory device for processing by the CPU 102.

According to an example embodiment, the computing device architecture 100 includes a read-only memory (ROM) 120 that stores invariant low-level system code or data for basic system functions such as basic input and output (I/O), startup, and reception of keystrokes. An example embodiment of the computing device architecture 100 includes a storage medium 122, or other suitable type of memory, that stores an operating system 124, application programs 126, data files 128, and other relevant information. An example embodiment of the computing device architecture 100 includes a power source 130 that provides appropriate alternating or direct current to power components. An example embodiment of the computing device architecture 100 includes a telephony subsystem 132 that allows sound to be transmitted and received over a telephone network. The components and the CPU 102 communicate with one another over a bus 134.

“The CPU 102 is a computer processor, according to an example embodiment. In one arrangement, the CPU 102 may include more than one processing unit. The RAM 118 interfaces with the computer bus 134 to provide quick RAM storage to the CPU 102 during the execution of software programs such as the operating system, application programs, and device drivers. To execute software programs, the CPU 102 loads computer-executable process steps from the storage medium 122 or other media into a field of the RAM 118, where the data can be accessed by the CPU 102 during execution. In one example configuration, the device architecture 100 includes at least 125 MB of RAM and 256 MB of flash memory.

“The storage medium 122 may include a number of physical drive units, such as a redundant array of independent disks (RAID), a flash memory, a USB flash drive, an external hard disk drive, a thumb drive, an external mini-dual in-line memory module (DIMM) synchronous dynamic random access memory (SDRAM), or an external micro-DIMM SDRAM. Such computer-readable media enable a computing device to access computer-executable process steps, application programs, and the like, and can also be used to offload data from, or upload data to, the device. The storage medium 122 may comprise a machine-readable medium in which a computer program product can be tangibly embodied.

According to one example embodiment, the term computing device may refer to a CPU, or be conceptualized as a CPU (for example, the CPU 102 of FIG. 1). In this example embodiment, the computing device may be coupled, connected, or in communication with one or more peripheral devices, such as a display, and may output content to its local display and/or speaker(s). In another example embodiment, the computing device may output content to an external display device (e.g., over Wi-Fi), such as a TV or an external computing system.

“In some embodiments, the computing device 100 can include any number of hardware and/or software applications that are executed to facilitate any of these operations. One or more I/O interfaces can facilitate communication between the computing device and one or more input/output devices. A universal serial bus port, a serial port, a disk drive, and other interface ports are examples of I/O interfaces that may facilitate interaction between the computing device and one or more input/output devices. One or more I/O interfaces can be used to receive data or user instructions from a variety of input devices. The received data can be processed by one or more computer processors, according to various embodiments of the disclosed technology, and/or stored in one or more memory devices.

One or more network interfaces can facilitate connection of the computing device inputs and outputs to one or more suitable networks or connections; these connections could include connections that facilitate communication with any number of sensors in the system. One or more network interfaces can further facilitate connection to one or more suitable networks, such as a local area network or a wide area network.

“FIG. 2 depicts a flow diagram of a method 200 for conveying chorded input. Method 200 may, according to some embodiments, include the generation and storage of stimulation sequences that can be read by a processor. The processor may be in electrical communication with a plurality of actuators and an output device. Each stimulation sequence can contain instructions to activate one or more of the plurality of actuators in a specific sequential order. The plurality of actuators can include a variety of devices including, but not limited to, a vibration motor, a speaker, a bone-conduction device, or a combination thereof. The plurality of actuators may be placed on or within a wearable device that stimulates a particular part of the body. In some embodiments, the wearable device could be a pair of gloves, and the plurality of actuators might be coin vibration motors that vibrate certain portions of the wearer’s fingers.

“In some embodiments, at 202, the processor of a computing device 100 can generate a plurality of stimulation sequences. In some embodiments, each stimulation sequence and its resulting sequential activations may represent a chorded input. For example, three consecutive activations of three coin vibration motors on different fingers of a glove could represent the three fingers required to play a specific piano chord. These chorded inputs can represent a variety of information including a word, a letter, a symbol, a syllable, a code, a number, a musical note, a musical chord, or a combination thereof.
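As a concrete illustration of mapping a chorded input to a sequential activation order, consider Braille typing, where dots 1-3 are conventionally chorded with the left index, middle, and ring fingers and dots 4-6 with the right (Perkins-style chording). The sketch below is illustrative only and not from the disclosure; the dictionaries and function names are hypothetical.

```python
# A few Braille letters expressed as their standard dot numbers.
BRAILLE_DOTS = {"a": [1], "b": [1, 2], "c": [1, 4], "d": [1, 4, 5], "f": [1, 2, 4]}

# Dots 1-3 map to left-hand fingers, dots 4-6 to right-hand fingers.
DOT_TO_FINGER = {1: "L-index", 2: "L-middle", 3: "L-ring",
                 4: "R-index", 5: "R-middle", 6: "R-ring"}

def activation_order(letter):
    """Render one chorded Braille letter as a sequential list of fingers to buzz."""
    return [DOT_TO_FINGER[d] for d in sorted(BRAILLE_DOTS[letter])]

print(activation_order("d"))  # ['L-index', 'R-index', 'R-middle']
```

Each finger in the returned list would receive one stimulation event, separated by the predetermined offset, so that the simultaneous chord is taught as a recognizable sequence.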

“At 204, the processor can generate a first plurality of stimulation sequence signals that cause a first plurality of activations in which one or more of the plurality of actuators activate in a first sequential order. Each activation can be a separate stimulation event. A stimulation event may be a vibrational stimulation, an audio stimulation, a tactile stimulation, or a combination thereof. In some embodiments, stimulation events may also be generated by electrical or visual stimulation. In some embodiments, the processor generates the first plurality of stimulation sequence signals upon execution of a first stimulation sequence. A first stimulation sequence, for instance, may cause a plurality of stimulation events (such as vibrations) to occur in a sequence, where the sequence of vibrations represents a chorded input, such as an input to a Braille typing machine.

“In some embodiments, at 206, the processor can generate an output signal to cause a perceptible indication by the output device. The output device could be an audio output device that generates an audible sound, a speaker, a display, a screen, or a wearable headset for displaying a visual cue. In some embodiments, the perceptible indication may include a visual cue, an audible sound, a pause, a vibration, or a combination thereof. The perceptible indication can mark the end of one stimulation sequence (representing a chorded input) and the beginning of the next. For example, a first stimulation sequence could represent one piano chord and a second stimulation sequence a different piano chord; the perceptible indication may be a beep from a speaker signaling that the stimulation events for the first chord are over and that the stimulation events for the second chord are about to start.

“In some embodiments, at 208, the processor can generate a second plurality of stimulation sequence signals that cause a second plurality of activations in which one or more of the plurality of actuators activate in a second sequential order. Each of the second plurality of activations can be a separate stimulation event. In some embodiments, the processor may generate the second plurality of stimulation sequence signals upon execution of a second stimulation sequence.

“In some embodiments, method 200 may also include generating a signal that causes a parsing device to output a parsing indication to a user prior to execution of the first stimulation sequence. This indication gives the user context about the sequence to come. The parsing indication may be a visual cue, an audible sound, a pause, a vibration, or a combination thereof, and may indicate the idea embodied in the chorded input. In some embodiments, the parsing indication may be a letter of the alphabet; for example, a sound played by a speaker to announce a letter of the alphabet. The parsing device can be a speaker, a display, a screen, a wearable headset that displays a visual cue, or any combination thereof.
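Putting steps 202-208 together with the parsing indication, one pass of the lesson loop could look like the following sketch. This is a hypothetical illustration, not the patented implementation; `buzz`, `announce`, and `beep` stand in for whatever actuator, parsing device, and output device drivers a real system would use.

```python
import time

def play_lesson(sequences, buzz, announce, beep,
                intra_offset_s=0.03, inter_sequence_s=0.5):
    """Play each chorded input: parsing indication, then the sequential
    stimuli, then a perceptible indication that the chord has ended."""
    for label, actuators in sequences:
        announce(label)                 # parsing indication (e.g., spoken letter)
        for actuator in actuators:      # one stimulation sequence
            buzz(actuator)
            time.sleep(intra_offset_s)  # 0-50 ms offset between activations
        beep()                          # perceptible indication (e.g., a beep)
        time.sleep(inter_sequence_s)    # 100 ms to 1 s before the next sequence

# Dry run with logging stand-ins instead of real hardware:
log = []
play_lesson([("d", [1, 4, 5]), ("f", [1, 2, 4])],
            buzz=lambda a: log.append(("buzz", a)),
            announce=lambda s: log.append(("say", s)),
            beep=lambda: log.append(("beep",)),
            intra_offset_s=0, inter_sequence_s=0)
```

The default delays above are merely plausible values within the ranges stated in the text (0-50 ms intra-sequence, 100 ms-1 s inter-sequence).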

“Some embodiments allow for the separation of activations of actuators within a stimulation sequence. A predetermined offset, for example between 0 and 50 milliseconds, could separate one activation from another activation within a plurality of activations. Those skilled in the art will understand that there may be more than one offset, and that the offset’s value may differ in different embodiments.

“Additionally, in some embodiments, execution of a subsequent stimulation sequence may begin at a predetermined time after a plurality of activations has ended. The predetermined time can be from 100 ms up to 1 s in some embodiments (e.g., 100-200 ms, 200-300 ms, 300-400 ms, 400-500 ms, 500-600 ms, 600-700 ms, 700-800 ms, 800-900 ms, or 900 ms to 1 s).

According to some embodiments, the method can execute any number of stimulation sequences, with each sequence representing a specific chorded input. Some embodiments allow the method to automatically repeat several stimulation sequences, generating the same stimulation events multiple times. The method 200 may thus be used to convey one or more chorded inputs through passive haptic learning.

“In some embodiments, instructions stored on a non-transitory computer-readable medium can, when executed by at least one processor, cause the method 200 to be performed. The method 200 can also be executed by a system that includes at least one memory operatively coupled to at least one processor and configured to store data and instructions that, when executed, cause the processor to execute each step of the method.

“A computer program product can be provided according to another example embodiment. The computer program product can include a computer-readable medium. Instructions stored on the computer-readable medium, when executed by at least one processor, can cause a system to perform a particular method. The method includes generating, by a processor in electrical communication with a plurality of actuators and an output device, a first plurality of stimulation sequence signals that cause a first plurality of activations in which one or more of the plurality of actuators activate in a first sequential order. The processor then generates an output signal to cause a perceptible indication by the output device. Finally, the processor generates a second plurality of stimulation sequence signals to cause a second plurality of activations in which one or more of the plurality of actuators activate in a second sequential order.

“A system is described in another embodiment. The system may include at least one memory operatively coupled to at least one processor and configured to store data and instructions that, when executed by the at least one processor, cause the system to perform the methods described herein.

It will be understood that the flow diagram of FIG. 2 is illustrative only, and that steps can be added, removed, or modified.

“Certain implementations of the disclosed technology are described above with reference to block and flow diagrams of systems, methods, and/or computer program products according to example implementations of the disclosed technology. One or more blocks of the flow diagrams, and combinations of blocks, can be implemented by computer-executable program instructions. According to certain implementations of the disclosed technology, not all blocks of the block and flow diagrams need to be performed in the order presented.

These computer-executable program instructions can be loaded onto a general-purpose computer, a special-purpose computer, or a processor to produce a particular machine, such that the instructions that execute on the processor or other data processing apparatus create means for implementing the functions specified in the flow diagram block or blocks. These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement one or more of the functions specified in the flow diagram block or blocks. One example of the disclosed technology may provide a computer program product comprising a computer-usable medium having computer-readable program code or instructions embodied therein, said computer-readable program code adapted to be executed to implement the functions specified in the flow diagram block or blocks. The computer program instructions can also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer, producing a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide elements or steps for implementing the functions specified in the flow diagram block or blocks.

Accordingly, blocks of the block and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block and flow diagrams, and combinations of blocks, can be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements, or steps, or by combinations of special-purpose hardware and computer instructions.

“The disclosed systems, methods, computer-readable media (CRM), and apparatuses can convey chorded input in many ways. Any sensory perception can convey the chorded input, such as sight, sound, touch, or any combination thereof. In some embodiments, the conveyance may include tap input, audio input, bone conduction, visual input, gesture input, text input, or a combination thereof. Some embodiments convey the chorded input haptically (actively or passively). The conveyance can include teaching, learning, or any combination thereof.

Learning is not always active; it can also be passive. Passive learning is incidental to what you are doing, rather than the focus of it, and has been described as being ‘caught, rather than taught.’ Subjects who are exposed to media-rich information and live in an environment conducive to learning are 40% more likely to learn than those who are not. However, a media-rich environment does not have to be limited to visual and audio stimulation. Multi-modal audio and tactile cues can give users a deeper understanding of musical structure. In a series of experiments, it was demonstrated that manual skills can also be learned or reinforced passively through tactile stimulation while the user engages in other tasks.

The chorded input can be conveyed via simultaneous/grouped stimuli. Sequential stimuli can also convey the chorded input. Some embodiments include accompanying audio to convey the chorded input.

“The systems, methods, and apparatuses described herein can be used for a wide variety of purposes; for example, to convey chorded input. Chorded input is any type of input in which multiple actions are performed simultaneously, such as pressing multiple keys at once. The systems, methods, and apparatuses described herein can help teach chorded input representing a word, letter, number, symbol, musical note, musical chord, or combination thereof.

“In certain embodiments, the systems, methods, and apparatuses described herein can be used for music-related learning, including sight recognition, sound recognition, and reading of music. Music-related learning can include learning to play a new instrument. Some embodiments of music-related learning involve enhancing and/or improving skills on an instrument already played, such as learning a new song or a portion thereof, increasing the number of musical notes or chords known, or increasing playing speed. The systems, methods, and apparatuses described herein can facilitate the playing of various musical instruments, including woodwind instruments (e.g., flute), string instruments (e.g., guitar), percussion instruments, brass instruments (e.g., trumpet), electronic instruments (e.g., synthesizer), and keyboard instruments (e.g., piano). These systems, methods, and apparatuses can also facilitate the playing of idiophones (e.g., directly struck or indirectly struck idiophones, plucked idiophones, and friction idiophones) and membranophones. The systems, methods, and apparatuses described herein further facilitate the playing of instruments such as guitars, xylophones, marimbas, drums, and organs. Some embodiments teach portions of music; some teach entire pieces. Some embodiments teach music in sequence. Some embodiments teach music for multiple limbs simultaneously if the chorded input involves multiple limbs.

“In some embodiments, the systems, methods, and apparatuses described herein can be used to teach words, letters, phrases, or combinations thereof to facilitate language-related learning (including sight recognition, sound recognition, reading, writing, and verbal comprehension). Language-related learning can include learning a new language or a portion thereof, or enhancing existing language skills or a portion thereof. The systems, methods, and apparatuses described herein can facilitate language-related learning in languages such as English, Korean, and Mandarin, and may enable the use of a language for a particular function such as stenography. In some embodiments, the language-based skills taught include typing; in some embodiments, reading. In some embodiments, language-based skills can be taught incrementally with a pangram.

“In some embodiments, the systems, methods, and apparatuses described herein can be used to teach words, letters, phrases, codes, or combinations thereof to facilitate code-related learning (including sight recognition, sound recognition, reading, and writing). Some embodiments include rhythmic and/or temporally-based code-related skills. Code-related learning can include learning a new code or a portion thereof, or enhancing skills in a code already learned, such as increasing reading, writing, or comprehension speed, or increasing the number of words, letters, and/or phrases known. The systems, methods, and apparatuses described herein can facilitate code-related learning for codes including Braille and Morse code. Some embodiments teach Morse code via tapping input for text entry, such as on a mobile device. Some embodiments teach Morse code using only audio (which can result in passive learning of the text entry method).
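Audio-only Morse teaching of the kind described above reduces to scheduling tone on/off durations. The following sketch is illustrative only; the timing constants are hypothetical (chosen in the spirit of the conventional 1:3 dot-to-dash ratio) and are not specified by the disclosure.

```python
# A few letters of International Morse code.
MORSE = {"s": "...", "o": "---", "e": "."}

DOT_MS, DASH_MS, GAP_MS = 100, 300, 100  # illustrative timings, not from the patent

def tone_schedule(word):
    """Convert a word to (tone_on_ms, silence_ms) pairs for audio-only playback."""
    events = []
    for letter in word:
        for symbol in MORSE[letter]:
            events.append((DOT_MS if symbol == "." else DASH_MS, GAP_MS))
        events.append((0, 3 * GAP_MS))  # extra silence between letters
    return events

print(tone_schedule("e"))  # [(100, 100), (0, 300)]
```

A playback layer could feed these pairs to a speaker, or equally to a single vibration motor, which is what makes a single-channel code attractive for the low-profile devices discussed in the background.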

“In some embodiments, the systems, methods, or apparatuses described herein may be used to facilitate rehabilitation. Rehabilitation can include motor-skill rehabilitation due to injury, disability, birth defect, or aging. Rehabilitation may also include sensory enhancement for paralyzed individuals.

“In some embodiments, systems, methods and apparatuses described herein can be used in any application that uses a haptic interface, including but not limited to teleoperation, flight simulators, dance simulation, gaming controllers, augmented-reality game add-ons, computer-based learning systems and text-entry systems. Some embodiments teach muscle memory via the conveyance of chorded input. Some embodiments teach machine and/or system control via the conveyance of chorded input.

“Also described herein are methods for teaching manual skills and/or musicality. In these methods, the fingers are stimulated using a pattern that corresponds to a motor action or skill (using preset phrases, songs, patterns, etc.); some motor skills include multiple simultaneous finger actions (‘chords’). These chorded actions can be represented as sequential stimuli, i.e., stimuli with a temporal offset. Users may focus their attention on other tasks and need not attend to the stimuli. In some embodiments, sequential stimuli within a chord are offset from one another by between 0 ms (immediately consecutive and/or overlapping) and 250 ms, while sequential actions, whether chords or single actions, are separated by between 0 and 1.5 seconds.
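The two timing windows above can be sketched as a simple stimulus scheduler. This is a minimal sketch, not the disclosed implementation: the function name, finger IDs, and the specific offset values (picked from within the stated 0–250 ms and 0–1.5 s ranges) are all assumptions for illustration.

```python
# Illustrative values chosen from within the ranges stated in the text.
INTRA_CHORD_OFFSET_MS = 150   # 0-250 ms between stimuli within one chord
INTER_ACTION_GAP_MS = 1000    # 0-1500 ms between sequential actions

def build_schedule(actions):
    """actions: list of chords, each a list of finger IDs.
    Returns (start_time_ms, finger) pairs for every stimulus."""
    schedule = []
    t = 0
    for chord in actions:
        # stagger the stimuli within the chord by the intra-chord offset
        for i, finger in enumerate(chord):
            schedule.append((t + i * INTRA_CHORD_OFFSET_MS, finger))
        # advance past the last stimulus in this chord, then add the gap
        t += (len(chord) - 1) * INTRA_CHORD_OFFSET_MS + INTER_ACTION_GAP_MS
    return schedule
```

For example, a two-finger chord followed by a single action yields three timestamped stimuli, with the chord's members 150 ms apart and a full one-second gap before the next action.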

“In some embodiments, sequential stimuli alternate between hands within a chorded action according to a preprogrammed pattern based on a clarity determination. Some embodiments present the stimuli of a chord traversing both hands with a temporal offset. Some embodiments group such stimuli by hand. Some embodiments begin with the hand containing the most stimuli (i.e., actions). Some embodiments alternate between hands (e.g., when there are two stimuli per hand). Some embodiments transmit stimuli within a chord by grouping nearby stimuli (stimuli on adjacent fingers of the same hand), while others switch between hands when successive stimuli would otherwise fall on the same hand. When chords are repeated, the starting hand may alternate.
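One of the orderings described above, grouping a two-handed chord's stimuli by hand and starting with the hand holding more stimuli, might be sketched as follows. The `(hand, finger)` tuple representation is an assumption for illustration, not a structure from the disclosure.

```python
def order_chord(stimuli):
    """stimuli: list of (hand, finger) pairs, hand in {'L', 'R'}.
    Returns the stimuli grouped by hand, larger group first."""
    left = [s for s in stimuli if s[0] == 'L']
    right = [s for s in stimuli if s[0] == 'R']
    # begin with the hand containing the most stimuli (ties go to the left)
    first, second = (left, right) if len(left) >= len(right) else (right, left)
    return first + second
```

So a chord with two right-hand stimuli and one left-hand stimulus would present both right-hand stimuli first, then the left-hand one.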

“Also described herein are methods of teaching manual tasks, as described in the preceding paragraph, in which audio accompanies the tactile stimuli. In some embodiments, the patterns and sequences of tactile stimuli are divided into chunks containing between 10 and 18 tactile stimuli. In some embodiments, synchronized audio that encodes the meaning of the tactile patterns is presented immediately before each chord-group of tactile stimuli for multiple simultaneous actions; for example, the audio for the letter ‘g’ is played immediately before the four stimuli that encode the Braille letter ‘g.’ Some embodiments present synchronized audio immediately before sections of sequential-action stimuli (e.g., words). Some embodiments present synchronized audio before larger groups of stimuli (e.g., words or phrases).

While certain embodiments have been described with reference to what are currently considered to be the most practical embodiments, it should be understood that the disclosed technology is not limited to these embodiments. On the contrary, it is intended to cover all modifications and equivalent arrangements within the scope of the appended claims. Specific terms may be used herein, but they are used in a generic and descriptive sense only and not to limit the scope of the disclosed technology.

This written description uses examples to disclose certain embodiments, including the best mode, and to enable any person skilled in the art to practice certain embodiments, including making and using any devices or systems. The patentable scope of certain embodiments is defined in the claims and may include other examples that occur to those skilled in the art. Such other examples are within the scope of the claims if they have structural elements that do not differ from the claims’ literal language, or include equivalent structural elements with insubstantial differences from it.

“EXAMPLES”

“Example 1”

“Passive Haptic Learning from Non-Chorded Input”

“Haptic guidance is one way for users to learn manual skills, and this learning can take place even when the user is distracted. This example shows that PHL aids the learning of rote patterns for one-handed muscle memory. It used Mobile Music Touch (MMT), a project that demonstrated passive learning of the pattern of keys that plays a piano melody. In that research, users wore a PHL glove (a wearable tactile interface) while performing other tasks such as homework or taking a test. The glove system played the song and stimulated each finger for the corresponding note. The vibrations could be ignored without preventing learning. This method proved effective for learning simple melodies such as Ode to Joy in just 30 minutes.

In a feasibility study, three participants learned to type a sentence on a random 10-key keyboard using a one-finger-to-one-key mapping. The keyboard contained the letters ‘A’ through ‘H’, space, and enter. Participants wore gloves containing embedded vibration motors and focused their attention on playing a memory card game for 30 minutes while repeatedly hearing the phrase and feeling the corresponding finger patterns (for typing the phrase). After the PHL session, users were able to type the phrase with less than 20% error, correctly type the components of the phrase (words, letters), and understand the keyboard mapping well enough to create a new phrase with less than 20% error.

“Example 2”

“Passive Haptic Learning of Braille Typing Skills for Two Phrases”

The experiment of Example 2 can: (1) demonstrate that Braille typing skills can be taught without the active attention of the learner; (2) articulate a method of teaching chorded input using sequential tapping patterns (where previous attempts at teaching chords have failed); (3) create a method of teaching the entire Braille alphabet in just four hours; (4) show that Braille letters can be identified both visually and tactilely; and (5) introduce distraction tasks with more sensitive performance metrics.

This example demonstrates a system for passive haptic learning of typing skills. A study with 16 participants showed that users made significantly fewer errors when typing Braille phrases after receiving passive instruction: a 32.85% average reduction in error, compared with a 2.73% increase in error in the control condition. PHL users were also better able to identify and read Braille letters from the phrase (72.5% vs. 22.4%). A second study with 8 participants taught the entire Braille alphabet passively over four sessions. Participants who received PHL reduced their typing errors faster and more consistently than those who did not, and 75% of PHL users reached zero typing errors, compared to no control users. By the end of the study, PHL participants were also able to recognize and read 93.3% of all letters of the Braille alphabet. These results indicate that passive haptic instruction using wearable computers may be feasible for teaching Braille typing and reading.

“Example 2 examined whether Braille typing skills could be taught passively. The study evaluated user performance on typing tests set around learning periods. During each study session, the user completed a distraction task with or without simultaneous Passive Haptic Learning; each session ended with a typing test and Braille reading quizzes. The distraction task was scored to assess the impact of PHL on user performance, and the Braille reading quizzes were used to determine whether knowledge transferred from Braille typing skills to Braille reading skills. Each user completed two sessions: one with PHL and one without (control). Users learned one of the two phrases in their first session and the second phrase in their next session, and the experiment was counterbalanced for condition and phrase. All participants were native English speakers who did not know Braille.

“System: The system described in Example 2 consisted of a pair of gloves with one vibration motor per finger and a programmed microcontroller to drive the glove interface. The microcontroller coordinated the sequences of vibrations with audio prompts for the two phrases. A Braille keyboard was made from two BAT keyboards. The system is shown in FIG. 3.”

“Gloves”: A pair of gloves served as the wearable tactile interface that delivered vibration stimuli in Example 2. The gloves were fingerless to ensure a comfortable fit for different hand sizes, allowing each motor to rest flush at the base knuckle. Each motor was attached to the stretchy glove with adhesive and was located inside the glove on the back (dorsal, non-palm) side. The gloves used Eccentric Rotating Mass (ERM) vibration motors (Precision Microdrives Model #308-100), which could be driven high or floated by an Arduino Nano with buffer circuitry using Darlington array chips.

“Audio &amp; Vibration Sequences”: Braille is a chorded language, meaning that multiple keys must be pressed simultaneously to type a single character. Instead of delivering stimuli simultaneously to all fingers involved in typing a chord, the motors were activated sequentially. Audio and timing cues were used to indicate to users the completion of a chord or letter.

Vibration and audio were used twice in this study: once during each pre-test and once during Passive Haptic Learning. Both presentations gave users the audio of the phrase and its audio spelling. After each letter was spoken, the motors on the fingers needed to type that chorded letter were vibrated in a specific sequence. After the chord was completed, the system paused for 100 milliseconds and then played the audio for the next letter. During PHL, the audio-haptic stimulus played for the duration of the distraction task; each repetition took 10 seconds, with each motor activated for between 300 ms and 750 ms. Phrase vibration sequence timings were selected to allow clear discrimination of the vibrations and recognition of separate chorded letters, allowing approximately 60 repetitions of a phrase over the distraction task period.
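The playback cycle described here (letter audio, then sequential finger vibrations, then a 100 ms pause marking the end of the chord) can be sketched as below. `play_audio` and `vibrate` are hypothetical stand-ins for the audio and motor-driver calls, and the 500 ms motor duration is one value chosen from the stated 300–750 ms range.

```python
import time

MOTOR_ON_MS = 500            # assumed value within the 300-750 ms range
PAUSE_AFTER_CHORD_MS = 100   # pause signalling the end of a chord

def play_phrase(phrase, play_audio, vibrate):
    """phrase: list of (letter, fingers) pairs, where fingers is the
    ordered list of finger IDs that together type that letter's chord."""
    for letter, fingers in phrase:
        play_audio(letter)                # speak the letter first
        for finger in fingers:            # then vibrate each finger in turn
            vibrate(finger, MOTOR_ON_MS)
        time.sleep(PAUSE_AFTER_CHORD_MS / 1000)  # mark the chord boundary
```

In the PHL condition this loop would simply repeat for the length of the distraction task.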

“Keyboard: This study used a Braille keyboard made from two Infogrip BAT keyboards. Inputs from the BAT keyboards were converted into Braille keyboard entries: the ASCII characters generated by key presses were translated through a hash map to the appropriate Braille values. Both staggered entry (pressing keys down one at a time, then releasing them all) and simultaneous entry (pressing all of the keys down at once) were supported. This method produced chorded input that follows the Perkins Brailler standard as well as digital Braille keyboards such as Freedom Scientific’s Pac Mate.
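The staggered/simultaneous chord capture described here can be sketched as a small state machine that emits a letter only once every key in the chord has been released. The key-to-dot mapping shown is a tiny assumed subset of a standard six-key Braille layout (f=dot 1, d=dot 2, j=dot 4), not the study's actual hash map.

```python
# Assumed subset of a six-key Braille chord map for illustration.
CHORD_MAP = {
    frozenset({'f'}): 'a',        # dot 1
    frozenset({'f', 'd'}): 'b',   # dots 1-2
    frozenset({'f', 'j'}): 'c',   # dots 1-4
}

class ChordKeyboard:
    def __init__(self):
        self.down = set()    # keys currently held
        self.chord = set()   # keys seen since the last full release

    def key_down(self, key):
        self.down.add(key)
        self.chord.add(key)

    def key_up(self, key):
        self.down.discard(key)
        if not self.down:    # full release ends the chord, staggered or not
            letter = CHORD_MAP.get(frozenset(self.chord), '?')
            self.chord = set()
            return letter
        return None
```

Because the chord is resolved only on full release, it does not matter whether the keys went down one at a time or simultaneously, which is what makes both entry styles compatible.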

“Typing Software”: The typing tests in our studies were conducted using specialized typing software. The software provided audio prompts and a blank screen; after each successful entry of a Braille letter or space, the screen displayed an asterisk. This provided feedback while preventing any learning during testing. The software logged user inputs and computed statistics such as uncorrected error rate (UER) and words per minute (WPM), using the formulae described by MacKenzie &amp; Tanaka-Ishii, Text Entry Systems: Mobility, Accessibility, Universality (2007), San Francisco, Calif.: Morgan Kaufmann.
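The two metrics named here have standard definitions in the text-entry literature; a minimal sketch, assuming the usual five-characters-per-word convention for WPM and minimum string distance (MSD, i.e., Levenshtein distance) for uncorrected errors:

```python
def msd(a, b):
    """Minimum string distance (Levenshtein) between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def wpm(transcribed, seconds):
    """Words per minute: (|T| - 1) / S * 60 / 5."""
    return (len(transcribed) - 1) / seconds * 60 / 5

def uer(presented, transcribed):
    """Uncorrected error rate: MSD over the longer string, as a percent."""
    return 100 * msd(presented, transcribed) / max(len(presented),
                                                   len(transcribed))
```

For example, transcribing an 11-character phrase in 12 seconds gives 10 WPM, and one wrong character in a 3-character string gives a UER of about 33%.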

“Phrases Tested”: The phrases used in this study (FIG. 4) were ‘add a bag’ (AAB) and ‘hike fee’ (HF). These phrases were selected because they are easily identified from audio clips: they include no homophones or obscure spellings, and they have clear meanings that are easy to understand and remember. The phrases also have similar length (15-18 vibrations), are composed of Braille letters that require no more than 3 keys to type, have comparable complexity (4 or 5 unique letters), and contain words of 3-4 characters.

“Pre-Test”: Users’ initial performance was determined by a typing pre-test. To introduce participants to the keyboard and explain the nature of typing chords, study administrators used a set of verbal instructions and gestures. Users, none of whom knew Braille, could then understand how to use the chorded keyboard to type and the meaning of the vibrations and audio. Participants heard the audio of the current phrase and were then prompted to type it; during the pre-test, users typed the phrase once. The pre-test let users ground the meaning of the vibration-guided instruction and practice typing chords, and its results established baseline typing performance.

“Distraction Task”: Subjects in both the PHL and control conditions took part in a distraction task after the pre-test. The distraction task focused subjects’ attention on another task and measured their performance on it while they were (or were not) receiving the stimuli. Each group spent 30 minutes on the distraction task with gloves on and earbuds in. In this study, the distraction task was an online game; the instruction was to keep attention on the game only and to ignore any vibrations or audio. Both groups were asked to score as high as possible during the task, and their scores were recorded at the end of each distraction task.

“For this study, the distraction game was chosen to (1) be difficult/cognitively demanding/mentally taxing; (2) contain no reading/words; (3) emit no sounds/be mutable; and (4) log a score. An online game called Fritz! was chosen as the distraction task and administered to both groups, and all subjects received instruction on how to play before the game. The object of Fritz! is to align blocks of similar patterns by moving adjacent blocks. Users undergoing a PHL study session received audio and haptic stimulation while they played the game; the control group received no PHL audio or vibration. PHL groups were instructed not to pay attention to the vibrations or audio and to concentrate on the game.

“Post-Test”: After the distraction task, users were given a typing test (post-test). Participants were first asked to type the entire phrase that they had learned or attempted during the pre-test. Participants typed the entire phrase three times before being asked to enter each word and each letter (in random order) for three more trials. Participants felt no vibrations during the test.

“Braille Reading Quizzes”: This research examined the potential of Passive Haptic Learning for learning typing skills on a Braille keyboard, and Braille reading quizzes were created to test whether participants’ Braille typing skills transferred to reading. The recruited participant pool was not familiar with Braille, and tactile perception of Braille is known to be challenging for such individuals, so reading quizzes using visual Braille representations were created in addition to tactile quizzes using embossed Braille representations.

“Two quizzes were given at the end of each session. Because the tactile quiz combined Braille typing-to-reading translation with tactile perception, the visual quiz was presented before the tactile quiz.

Instructions explaining how the fingers map to the keyboard were given at the beginning of each quiz. To ensure that participants understood the relationship, study administrators demonstrated this mapping with their hands and a set of verbal instructions. The picture used to illustrate the mapping in the quizzes is shown in FIG. 5A.”

“Visual Quiz”: The visual quiz consisted of images of Braille cells with dots filled in or left blank, as they would appear printed in a Braille document, with one question per letter. Users in the ‘add a bag’ session were asked to identify the letters of their phrase in a consistent random order; users in the ‘hike fee’ session were asked about the letters in the order f, i, e, k, h (FIG. 5B). For each cell, participants wrote the letter it represents.

“Tactile Quiz”: This quiz was created to test participants’ ability to perceive Braille cells with their fingers and identify the letters they represent. The subject placed their dominant hand into a box open on only one side; inside the box was a card embossed with the current letter. The subject could slide their hand into the box and feel the Braille with their fingers without being able to see the card. Participants were given the same letters as in the visual quiz, but in a different consistent random order (b, g, a, d, h, e, f, i, k). They filled in a blank Braille cell to indicate what they felt (FIG. 5C) and also completed a blank to identify the embossed letter.

“Results of Two-Phrase Study”: With the help of PHL, participants significantly reduced their typing errors on Braille keyboards, sometimes reaching 100% accuracy. Users also learned to read 75% of the Braille letters. These results suggest that users learned Braille/chorded text entry via Passive Haptic Learning.

“Typing”: The study’s typing software computed uncorrected error rate (UER), words per minute (WPM) and other statistics used to analyze participants’ performance. Because the study was conducted within-subjects, paired tests were used to evaluate the effect of receiving PHL versus not receiving PHL. One-tailed tests were chosen because it was hypothesized that PHL would improve accuracy in letter and phrase typing as well as visual and tactile letter recognition. No family-wise multiple-hypothesis correction was needed. The threshold of significance was set at α=0.05.”

“Comparing typing errors in the pre-test with those of the three post-test phrase-typing trials, the UER (uncorrected error rate) was calculated for each session. The results for both phrases are shown in FIGS. 6 and 7: users decreased their typing errors (increased accuracy) after passive learning sessions, by 31.55% and 42.78% on average for the two phrases, respectively.

“This was not the case for control sessions, where minimal to no improvement (2.68%) was typical for ‘add a bag’ and increased errors (up 7.14%) were typical for ‘hike fee.’ These data are reflected in the average accuracy improvement for each phrase (FIG. 8). A paired t-test indicates a statistically significant difference between conditions: participants who received PHL showed a greater accuracy improvement (M=39.14) than those who did not (M=1.97), SE=11.98, BCa 95% CI [22.0, 56.27], t(15)=4.87, p&lt;0.00001. Participants were also asked to type each letter of the phrase correctly; the number of correct letters was significantly higher in PHL sessions than in control sessions (FIG. 9).”

“T-tests show a statistically significant difference (mean difference 2.31) between the condition in which participants received PHL (M=3.25, SE=1.69) and the condition in which they did not.
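A paired t statistic of the kind reported above is computed from per-participant differences between the two conditions. A minimal sketch with hypothetical numbers (not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t(xs, ys):
    """Paired-samples t-test: returns (t, degrees of freedom)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    # t = mean of differences over its standard error
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical per-participant accuracy gains (%), for illustration only.
phl_gain = [40, 35, 55, 20, 45, 30]
control_gain = [5, -2, 3, 0, 7, 1]
t, df = paired_t(phl_gain, control_gain)
```

The resulting t value would then be compared against the t distribution with `df` degrees of freedom at the study's α=0.05 threshold.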

“Distraction Task”: Baseline performance on the distraction task, the Fritz! game, was also characterized. A player who was not part of the PHL study played three games; each trial consisted of a 10-minute focused session followed by a 10-minute distracted session. The focused session was game-only; in the distracted session, the player played the game while watching a television program. The player’s average score during distracted game sessions was 19.36% lower.

“All 16 subjects played the game during each PHL and control session and cleared up to level 5 within the time limit. Performance-difference results were noisy because of the nature of the game; however, the average score difference between PHL and control sessions was within 10%, as shown in FIG. 10. These results confirm the sensitivity of our distraction task for monitoring user attention and shared mental resources.
