Michael McCarty, Glen E. Roe, Adeia Guides Inc.

Abstract for “Systems and Methods for Generating a Volume-Based Response for Multiple Voice-Operated User Devices”

Systems, methods, and devices are described herein for responding to a voice command at a volume level that is based on the volume level of the voice command. A media guidance application may detect, via a first voice-operated user device of a plurality of voice-operated user devices, a voice command spoken by a user. The media guidance application may determine a first volume level at which the voice command was received. The media guidance application may determine, based on the volume levels at which the voice command was received, that a second voice-operated user device is closer to the user than any other voice-operated user device. The media guidance application may then generate an audible response via the second voice-operated user device at a second volume level that is based on the volume level of the voice command.

Background for “Systems and Methods for Generating a Volume-Based Response for Multiple Voice-Operated User Devices”

It is becoming more common for homes to contain voice-operated user devices. Voice-operated user devices can adjust their response volume to match the volume of a voice command. However, it is increasingly difficult to coordinate multiple voice-operated user devices within a home to determine which device should answer a query, and at what volume. In some cases, the user may be unable to hear a response whose volume does not suit the user's situation. Requiring users to manually choose a device and set a response volume every time they issue a voice command is burdensome, and makes the devices' responses less useful.

Accordingly, the systems and methods described herein allow for responding to a voice command at a volume level that is based on the volume level of the voice command. Because the response volume level is determined from the volume level of the voice command, users can change the response volume without manually adjusting the voice-operated user device. For example, a first voice-operated user device may be located at one end of a couch and a second voice-operated user device at the other end. A first user may be seated at the end of the couch closest to the first voice-operated user device, and a second user at the opposite end. When the first user speaks a voice command, it is received by both the first and second voice-operated user devices. Based on the volume level of the voice command as received by each voice-operated user device, the systems and methods described herein can determine which device is closest to the user. The volume level at which the first voice-operated user device responds may then be determined from the volume level of the voice command. For instance, the first and second users may be watching a movie, such as “Star Wars,” and the first user's voice command may be a whispered request to repeat the last line of the movie. The first voice-operated user device may respond by whispering the last line (e.g., “May the Force be with you”) back to the user.

These systems and methods may be implemented using a media guidance application. The media guidance application may be connected to multiple voice-operated user devices. For example, a plurality of voice-operated user devices may include DeviceA, DeviceB, and DeviceC. The media guidance application may detect, via a first voice-operated user device, a voice command spoken by a user. For example, the media guidance application may detect, via DeviceA, the user speaking the voice command “Repeat the last sentence,” in reference to the movie the user is currently watching.

The media guidance application may determine a first volume level at which the voice command was received by the first voice-operated user device. The first volume level may be, for example, the average input volume (e.g., 48 dB) detected at DeviceA.

In some embodiments, to determine the first volume level, the media guidance application may measure the unfiltered volume level of the voice command. For example, the unfiltered volume level of the voice command may be 60 dB; however, this unfiltered volume level may contain background noise, such as a TV playing a movie nearby. The media guidance application may detect the background noise level and filter the voice command to reduce it. For example, the movie playing on the TV may occupy higher frequencies than the user's voice. To remove the background noise, the media guidance application may filter out the high-frequency components of the voice command and then calculate a filtered volume level. For example, the filtered volume level of the voice command may be 48 dB, whereas the unfiltered volume level was 60 dB.
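
As a rough illustration of this filtering step (the patent does not specify an implementation; the cutoff frequency, function names, and the use of a low-pass filter are assumptions for the sketch), the following Python snippet compares the unfiltered level of an audio buffer with the level after suppressing out-of-band noise:

```python
import numpy as np
from scipy.signal import butter, lfilter

def level_db(samples: np.ndarray) -> float:
    """RMS level of an audio buffer in dB (relative to full scale 1.0)."""
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(max(rms, 1e-12))

def filtered_level_db(samples: np.ndarray, rate: int,
                      cutoff_hz: float = 3000.0) -> float:
    """Low-pass the buffer (assumption: the background noise sits above
    the voice band) and return the level of the remaining voice band."""
    b, a = butter(4, cutoff_hz / (rate / 2), btype="low")
    return level_db(lfilter(b, a, samples))

# With a noisy TV in the background, level_db(buffer) might read ~60 dB
# while filtered_level_db(buffer, 16000) reads ~48 dB.
```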

Several voice-operated user devices may detect the voice command from the user. Each device may receive the voice command at a different volume level depending on its proximity to the user. Thus, each voice-operated user device of the plurality of voice-operated user devices is associated with a volume level of a plurality of volume levels of the voice command. The media guidance application may receive, from each voice-operated user device, a data structure containing a volume level and a voice-operated user device identifier. For example, the media guidance application may receive, from a second voice-operated user device, a data structure with a volume level of 52 dB and the device identifier DeviceB. The media guidance application may also receive, from a third voice-operated user device, a data structure with a volume level of 50 dB and the device identifier DeviceC.

In some embodiments, the media guidance application may compare the first volume level with the plurality of volume levels. For example, the media guidance application may compare the 48 dB associated with DeviceA with the 52 dB associated with DeviceB and the 50 dB associated with DeviceC.

In some instances, the media guidance application may determine, based on comparing the first volume level with the plurality of volume levels, the highest volume level of the plurality of volume levels. For example, 52 dB may be the highest volume level at which the voice command was received by any of the plurality of voice-operated user devices.

In some embodiments, the media guidance application may search at least one data structure for a second voice-operated user device associated with the highest volume level. The device that received the voice command at the highest volume is likely the one closest to the user who issued it, because the closer a device is to the user, the louder the voice command sounds to it. For example, if the highest volume level is 52 dB, the media guidance application may search the data structures for the voice-operated user device corresponding to the 52 dB volume level. The 52 dB volume level is identified in the data structure containing the device identifier DeviceB, which therefore identifies the second voice-operated user device. In this example, DeviceB is the voice-operated user device closest to the user.
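
A minimal sketch of this lookup, assuming each device reports a (volume level, device identifier) pair; the class and function names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class VolumeReport:
    device_id: str
    volume_db: float  # volume level at which this device heard the command

def closest_device(reports: list[VolumeReport]) -> VolumeReport:
    """The device that heard the command loudest is assumed closest."""
    return max(reports, key=lambda r: r.volume_db)

reports = [
    VolumeReport("DeviceA", 48.0),
    VolumeReport("DeviceB", 52.0),
    VolumeReport("DeviceC", 50.0),
]
assert closest_device(reports).device_id == "DeviceB"
```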

In some embodiments, the media guidance application may transmit a command to the second voice-operated user device, instructing it to set its response volume to a second volume level determined based on the highest volume level. For example, the media guidance application may instruct DeviceB, the second voice-operated user device, to set the response volume level to 52 dB. Alternatively, the media guidance application may instruct DeviceB to set the response volume level to 53 dB, slightly above the highest volume level, to account for ambient noise.

In some embodiments, the media guidance application may generate an audible response to the voice command. The second voice-operated user device may generate the audible response at the second volume level. For example, DeviceB, the second voice-operated user device, may repeat, at a volume of 53 dB, the last line of the movie the user is watching (e.g., “May the Force be with you”). In some embodiments, the first and second voice-operated user devices may be the same device, and the first and second volume levels may likewise be the same.

In some embodiments, the media guidance application may detect that different voice-operated user devices (e.g., DeviceA, DeviceB, and DeviceC) use different equipment, methods, or sensitivities to detect voice commands. In that case, the highest volume level may not correspond to the voice-operated user device closest to the user issuing the voice command. The media guidance application may therefore adjust the plurality of volume levels to account for the differences between devices before determining the highest volume level. In some instances, the media guidance application may also use other factors, such as infrared (IR) detection, to determine the distance between each voice-operated user device and the user issuing the voice command, and thus which device is closest.
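
The patent does not prescribe how the adjustment is made; one plausible sketch is a per-device sensitivity offset, calibrated in advance, applied before the volume comparison:

```python
# Hypothetical per-device calibration offsets, in dB, measured beforehand
# (e.g., by playing a reference tone at a known distance from each device).
SENSITIVITY_OFFSET_DB = {"DeviceA": 0.0, "DeviceB": -1.5, "DeviceC": 2.0}

def normalized_volume(device_id: str, reported_db: float) -> float:
    """Remove each device's known sensitivity bias before comparison."""
    return reported_db - SENSITIVITY_OFFSET_DB.get(device_id, 0.0)
```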

In some embodiments, the media guidance application can identify users, other than the one who issued the voice command, who would be interested in hearing the audible response. In such embodiments, the audible response may be output by more than one voice-operated user device of the plurality, either simultaneously or at different times, and at volumes suited to each user, to ensure that all interested users can hear the reply. For example, UserA might ask which TV show is currently being shown on HBO, and the media guidance application may determine that the program is “Game of Thrones.” The media guidance application may further determine that a second user, UserB, is also interested in “Game of Thrones.” The media guidance application may then generate an audible response, such as “Game of Thrones is currently on HBO,” to UserA's voice command via the second voice-operated user device at, for example, 53 dB, and the same response via a third voice-operated user device at, for example, 55 dB. The second and third volume levels may be chosen based on how easily each user can hear the audible response.

The media guidance application may identify a user profile associated with the user in order to determine the second volume level. The user profile may contain a hearing data structure that stores a plurality of user volume levels that the user has previously acknowledged hearing. The media guidance application may determine the lowest user volume level of the plurality; for example, 40 dB may be the lowest volume level the user has previously acknowledged hearing. The second volume level (the volume of the audible response) may then be determined based on this lowest user volume level. For example, the highest volume level at which any of the plurality of devices (such as DeviceA, DeviceB, or DeviceC) received the voice command may be only 35 dB. To ensure the user hears the audible response, the second volume level may be set to 40 dB instead.
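
A sketch of how the two constraints discussed above might combine; the 1 dB headroom and the function name are assumptions:

```python
def response_volume_db(highest_received_db: float,
                       lowest_acknowledged_db: float,
                       headroom_db: float = 1.0) -> float:
    """Respond slightly above the loudest received level, but never
    below the quietest level the user has acknowledged hearing."""
    return max(highest_received_db + headroom_db, lowest_acknowledged_db)

assert response_volume_db(52.0, 40.0) == 53.0   # normal case
assert response_volume_db(35.0, 40.0) == 40.0   # whisper floor from profile
```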

In some embodiments, the response to the user's voice command may additionally be displayed visually on a display device. The media guidance application may identify a display device associated with the user; for example, the media guidance application may interface with a television associated with the user via a user profile. The media guidance application may generate a visual representation of the audible response, for instance by generating a window on the television and displaying the response in that window. For example, the media guidance application may display the title “Game of Thrones” when the user requests the name of the show. The display window may also include a reference to the device that generated the audible reply to the request, for example, “DeviceB said: ‘Game of Thrones.’” This tells the user which device they are communicating with and, consequently, which device is closest to them.

In some embodiments, the media guidance application may determine from the user profile that the user is hearing impaired; this information may be stored, for example, in the hearing data structure. Such a user might not be able to hear any audible response. As described above, the media guidance application may identify the display device associated with the user, generate a visual representation of the audible response, and transmit that visual representation to the display device. For example, the media guidance application may transmit the visual representation to a mobile phone associated with the user.

In some cases, the media guidance application may wait for an acknowledgement from the user that the user heard the audible response. If none is received, the media guidance application can generate a second audible response via the second voice-operated user device or generate a visual representation of the audible response for display on a user device. The media guidance application may determine a first time corresponding to when the audible response was generated, for example by saving a time stamp in a data structure when the audible response is generated. For instance, the audible response may have been generated at 3:02:03 PM. The media guidance application may then add a time period to the first time to calculate a second time. For example, the time period may be 20 seconds, making the second time 3:02:23 PM. This is how long the media guidance application waits for an acknowledgement after responding to the user's voice command.

The time period to wait for a user's acknowledgement may be determined in a number of ways. For example, the media guidance application may use a user profile to determine an average response time for the user. The media guidance application may identify the user profile via a key phrase spoken by the user, where the key phrase is associated with a specific user, or via the user's speech patterns. The user profile may contain a first data structure with information about how long the user has taken to respond to voice-operated user devices in the past. To determine an average response time for the user, the media guidance application may average these past response times. For example, the user may previously have taken 10 seconds, 5 seconds, and 15 seconds to respond to the second voice-operated user device. The time period to wait for this user's acknowledgement may then be set to 10 seconds, the average of the previous response times. A sketch of this calculation follows.
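
A minimal sketch of deriving the wait period and the acknowledgement deadline from the profile history described above (field and function names are illustrative):

```python
from datetime import datetime, timedelta
from statistics import mean

def wait_period_seconds(past_response_times_s: list[float]) -> float:
    """Average of the user's previous acknowledgement delays."""
    return mean(past_response_times_s)

def acknowledgement_deadline(response_generated_at: datetime,
                             past_response_times_s: list[float]) -> datetime:
    """Second time = first time (response generated) + wait period."""
    return response_generated_at + timedelta(
        seconds=wait_period_seconds(past_response_times_s))

first_time = datetime(2024, 1, 1, 15, 2, 3)            # 3:02:03 PM
deadline = acknowledgement_deadline(first_time, [10, 5, 15])
# deadline == 3:02:13 PM, using the 10-second average wait
```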

In some embodiments, if the user does not acknowledge the response within the specified time period, the media guidance application generates another response or repeats the initial audible response to the voice command. An acknowledgement is a confirmation that the user heard the audible response. For example, the audible response of DeviceB, the second voice-operated user device, may be “May the Force be with you,” and the user may acknowledge this response by replying, “Thank you, DeviceB.”

If no acknowledgement is received within the specified time period, the media guidance application may generate a second audible response or a visual representation of the audible response. For example, in some embodiments, the media guidance application may transmit, based on whether an acknowledgement was received by a third time (the third time being prior to the second time), a visual representation of the audible response to a display device associated with the user. For instance, the audible response may have been generated at 3:02:03 PM; if the time period is twenty seconds, the second time is 3:02:23 PM. If DeviceB has not received an acknowledgement by 3:02:23 PM, the media guidance application may display the visual representation of the audible response (e.g., “May the Force be with you”) on a television associated with the user.

In some embodiments, the media guidance application generates another audible response if the second voice-operated user device does not receive an acknowledgement within the specified time period. That is, the media guidance application may generate, based on whether an acknowledgement was received by the third time, a second audible response via the second voice-operated user device. The second audible response may be the same as the first audible response, with the second voice-operated user device repeating the response at the same volume or at a different volume. Alternatively, the second audible response may prompt the user for a response. For example, if DeviceB has not received an acknowledgement from the user by 3:02:23 PM, it may generate a second audible response such as “Did you hear, ‘May the Force be with you’?”

In some embodiments, the second audible response may be generated at the same volume as the first, or at a louder volume. For example, the media guidance application may determine that the third volume level, at which the second audible response is generated, should be greater than the second volume level. The third volume level may be the second volume level plus some predetermined amount; for instance, the second volume level may be 53 dB and the third volume level 56 dB. The second audible response is then generated at the third volume level. For example, the media guidance application may generate the second audible response, “May the Force be with you,” via DeviceB at 56 dB.
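
As a sketch, with the predetermined increment assumed to be 3 dB per the worked example:

```python
def repeat_volume_db(second_volume_db: float, increment_db: float = 3.0) -> float:
    """Third volume level = second volume level + predetermined amount."""
    return second_volume_db + increment_db

assert repeat_volume_db(53.0) == 56.0
```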

In some embodiments, the media guidance application may identify the user profile in order to generate the second audible response. The media guidance application may identify the user profile via a key word spoken by the user; for example, the user might say “UserA” before issuing a voice command. The user profile may also be identified by the user's speech or vocal patterns.

In some embodiments, the media guidance application may use the user's profile to determine the user's average speaking volume. The user profile may contain a first data structure with speaking volumes the user has used in the past. To determine the user's average speaking volume, the media guidance application may average these past speaking volumes. For example, the user's average speaking volume may be 60 dB.

The media guidance application may then calculate the difference between the average speaking volume and the highest volume level (i.e., the volume at which the voice command was received by the voice-operated user device closest to the user). For example, the highest volume level may be 52 dB while the user's average speaking volume is 60 dB; the difference in this example is 8 dB.

This difference may be combined with the second volume level to determine the third volume level (the volume of the second audible response). For example, the difference may be 8 dB and the second volume level 53 dB, making the third volume level 61 dB. In that case, the second voice-operated user device would emit the second audible response at 61 dB.
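
A sketch of this computation, using the figures from the example above (names are illustrative):

```python
def escalated_volume_db(second_volume_db: float,
                        avg_speaking_volume_db: float,
                        highest_received_db: float) -> float:
    """Third volume = second volume + (average speaking volume -
    highest received volume), per the worked example above."""
    difference = avg_speaking_volume_db - highest_received_db
    return second_volume_db + difference

assert escalated_volume_db(53.0, 60.0, 52.0) == 61.0
```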

In some cases, the media guidance application may receive an acknowledgement from the user. Upon receiving the acknowledgement, the media guidance application may store the second volume level (the volume at which the initial audible response was heard) in the user's profile. For example, the user profile may contain a hearing data structure storing a plurality of user volume levels that the user has heard in the past, and this second data structure may be updated to include the second volume level.

It is important to note that the systems and/or methods described above may be applied to, or used in accordance with, other systems, methods, and/or apparatuses described in this disclosure.

Systems and methods are described in this document for responding to a voice command at a volume level that is based on the volume level of the voice command. Because the response volume level is determined from the volume level of the voice command, users can change the response volume without manually adjusting the voice-operated user device.

FIG. 1 illustrates an example of multiple voice-operated user devices sensing a voice command, in accordance with some embodiments. A first voice-operated user device 102 may be located at one end of a couch, while a second voice-operated user device 104 is at the other end. A first user 108 may be seated at the end of the couch closest to the first voice-operated user device 102, and a second user 110 at the opposite end, near the second voice-operated user device 104. The second user 110 may utter a voice command 106 that is heard by both the first voice-operated user device 102 and the second voice-operated user device 104. Based on the volume level of the voice command as received by each voice-operated user device 102, 104, the systems and methods described herein can determine that the second voice-operated user device 104 is closer to the second user 110 who issued voice command 106. The volume level at which the second voice-operated user device 104 generates response 112 may be determined based on the volume level of voice command 106. For example, the second user 110 and the first user 108 may be watching a movie (e.g., “Star Wars”), and the second user's 110 voice command 106 may be a whispered request for the movie's last line. The second voice-operated user device 104 may reply 112 by whispering the last line (e.g., “May the Force be with you”) back to the second user 110. Note that although voice command 106 was received by both the first voice-operated user device 102 and the second voice-operated user device 104, it is the second voice-operated user device 104 that generates response 112.

These systems and methods may be implemented using a media guidance application. The media guidance application may be connected to a plurality of voice-operated user devices 102, 104, and 114, which may include, for instance, DeviceA, DeviceB, and DeviceC. The media guidance application may detect, via a first voice-operated user device 102, a voice command 106 spoken by a user. For example, the media guidance application may detect, via the first voice-operated user device 102 (e.g., DeviceA), the user speaking a voice command 106 such as “Repeat the last sentence,” in reference to the movie the user is currently watching.

The media guidance application may determine a first volume level at which voice command 106 was received by the first voice-operated user device 102. The first volume level may be, for example, the average input volume (e.g., 48 dB) detected at the first voice-operated user device 102 (e.g., DeviceA).

In some embodiments, to determine the first volume level, the media guidance application may measure the unfiltered volume level of voice command 106. For example, the unfiltered volume level of the voice command may be 60 dB; however, this unfiltered volume level may contain background noise, such as a TV playing a movie close to the user 110.

The media guidance application may detect the background noise level and filter voice command 106 to reduce the noise. For example, the movie playing on the TV may occupy higher frequencies than the voice of user 110. To remove the background noise, the media guidance application may filter out the high-frequency components of voice command 106 and then calculate a filtered volume level for voice command 106. For example, the filtered volume level of the voice command may be 48 dB, whereas the unfiltered volume level was 60 dB.

Voice command 106 may be detected by several voice-operated user devices (e.g., DeviceA 102, DeviceB 104, and DeviceC 114). The volume level at which each of these devices 102, 104, and 114 receives the voice command may vary depending on its proximity to the user. Each voice-operated user device 102, 104, and 114 is thus associated with a volume level of a plurality of volume levels of voice command 106. The media guidance application may receive, from each voice-operated user device, a data structure containing a volume level and a voice-operated user device identifier. For example, the media guidance application may receive, from the second voice-operated user device 104, a data structure with a volume level of 52 dB and the device identifier DeviceB, and from the third voice-operated user device 114, a data structure with a volume level of 50 dB and the device identifier DeviceC.

In some embodiments, the media guidance application may compare the first volume level with the plurality of volume levels. For example, the media guidance application may compare the 48 dB associated with DeviceA 102 with the 52 dB associated with DeviceB 104 and the 50 dB associated with DeviceC 114.

In some instances, the media guidance application may determine, based on comparing the first volume level with the plurality of volume levels, the highest volume level of the plurality of volume levels. For example, 52 dB may be the highest volume level at which voice command 106 was received by any of the plurality of voice-operated user devices (e.g., voice-operated user devices 102, 104, and 114).

In some embodiments, the media guidance application may search at least one data structure for the second voice-operated user device 104 associated with the highest volume level. The device that received the voice command at the highest volume is likely the one closest to the user who issued it, because the closer a device is to the user, the louder the voice command sounds to it. For example, if the highest volume level is 52 dB, the media guidance application may search the data structures for the voice-operated user device corresponding to the 52 dB volume level. The 52 dB volume level is identified in the data structure containing the device identifier DeviceB, which represents the second voice-operated user device 104. In this example, the second voice-operated user device 104 (e.g., DeviceB) is the voice-operated user device closest to user 110, who issued the voice command.

In some embodiments, the media guidance application may transmit a command to the second voice-operated user device 104, instructing it to set its response volume to a second volume level determined based on the highest volume level. For example, the media guidance application may instruct DeviceB, the second voice-operated user device 104, to set the response volume level to 52 dB. Alternatively, the media guidance application may instruct DeviceB to set the response volume level to 53 dB, slightly above the highest volume level, to account for ambient noise.

In some embodiments, the media guidance application may generate an audible response 112 to voice command 106. The second voice-operated user device 104 may generate the audible response 112 at the second volume level. For example, the second voice-operated user device 104 (e.g., DeviceB) may repeat, at a volume of 53 dB, the last line of the movie the user is watching (e.g., “May the Force be with you”). In some embodiments, the first and second voice-operated user devices may be the same device, and the first and second volume levels may likewise be identical.

In some embodiments, the media guidance application may detect that different voice-operated user devices (e.g., first voice-operated user device 102, second voice-operated user device 104, and third voice-operated user device 114) use different equipment, methods, or sensitivities to detect voice command 106. In that case, the highest volume level may not correspond to the voice-operated user device closest to the user 110 issuing voice command 106. The media guidance application may adjust the plurality of volume levels to account for the differences between devices before determining the highest volume level. In some instances, the media guidance application may also use other factors, such as infrared (IR) detection, to determine the distance between each voice-operated user device and the user issuing the voice command, and thus which device is closest.

In some instances, the media guidance application may identify users, other than the user 110 who issued voice command 106, who would be interested in hearing the audible response 112 to voice command 106. In such embodiments, the audible response may be output by more than one voice-operated user device of the plurality of voice-operated user devices 102, 104, and 114, either simultaneously or at different times and volumes, to ensure that all interested users are able to hear it. For example, the second user 110 might issue a voice command asking which television show is being shown on HBO, and the media guidance application may determine that the program is “Game of Thrones.” The media guidance application may further determine that the first user 108 is interested in the program “Game of Thrones.” The media guidance application may then generate an audible response, such as “Game of Thrones is currently on HBO,” to the second user's 110 voice command via the second voice-operated user device 104 at, for example, a volume level of 53 dB, and the same response via the third voice-operated user device 114 at, for example, a volume level of 55 dB. The second and third volume levels may be determined based on each user's ability to hear the audible response.

In some embodiments, the media guidance application may detect that a user is moving. The media guidance application may measure volume levels both when the user begins speaking and when the user stops speaking. For example, the volume level received at the third voice-operated user device 114 when the user begins issuing the voice command may be higher than the volume level received at device 114 when the user finishes issuing the voice command. Conversely, the volume level received at the second voice-operated user device 104 when the user begins speaking may be lower than the volume level received at device 104 when the user stops speaking. From these changes in volume level, the media guidance application can determine that the user is moving away from the third voice-operated user device 114 and closer to the second voice-operated user device 104. Accordingly, the media guidance application can determine multiple voice-operated user devices of the plurality that lie along the user's path.
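
A rough sketch of this inference, comparing the level at the start of the command with the level at the end; the 1 dB threshold is an assumption:

```python
def movement_direction(start_db: float, end_db: float,
                       threshold_db: float = 1.0) -> str:
    """Infer whether the speaker moved toward or away from a device
    by comparing levels at the start and end of the command."""
    if end_db - start_db > threshold_db:
        return "approaching"
    if start_db - end_db > threshold_db:
        return "receding"
    return "stationary"

assert movement_direction(start_db=48.0, end_db=52.0) == "approaching"  # device 104
assert movement_direction(start_db=50.0, end_db=45.0) == "receding"     # device 114
```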

In some embodiments, the media guidance application may generate portions of the audible response through multiple voice-operated user devices. The multiple voice-operated user devices can generate the audible response, or portions of it, simultaneously or at different times. For example, the voice command may request a translation of a song from Spanish to English. The third voice-operated user device 114 may play the first ten seconds of the translation (the audible response), while the first voice-operated user device 102 may play the next ten seconds, and so on. The media guidance application may match each device's volume (e.g., to the second volume level) so that all devices respond at the same volume level. For example, the second volume level may be 57 dB, and each device may respond at 57 dB. In this embodiment, the audible response may follow the user's movement, allowing the user to hear the audible response as they move near different devices.
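
One way such a hand-off might be scheduled, assuming the response audio can be segmented and that the devices along the user's path are known in order (all names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    device_id: str
    start_s: float   # offset into the response audio
    end_s: float

def schedule_response(path_devices: list[str], total_s: float) -> list[Segment]:
    """Split the audible response evenly across the devices that lie
    along the user's path, in the order the user passes them."""
    per_device = total_s / len(path_devices)
    return [Segment(dev, i * per_device, (i + 1) * per_device)
            for i, dev in enumerate(path_devices)]

# e.g., a 20-second translation handed from DeviceC (114) to DeviceA (102)
print(schedule_response(["DeviceC", "DeviceA"], total_s=20.0))
```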

In some embodiments, the media guidance application may identify a user profile associated with the user 110 who issued voice command 106. The user profile may contain a hearing data structure storing a plurality of user volume levels that user 110 has previously acknowledged hearing. The media guidance application may determine the lowest user volume level of the plurality; for example, 40 dB may be the lowest volume that user 110 has acknowledged hearing. The second volume level (the volume of the audible response) may then be determined based on this lowest user volume level. For example, the highest volume level at which any of the user devices (such as DeviceA 102, DeviceB 104, or DeviceC 114) received the voice command may be only 35 dB. To ensure that user 110 hears the audible response, the second volume level may be set to 40 dB instead.

FIG. 2 illustrates an example of a voice-operated user device generating a second audible response when no acknowledgement is received, in accordance with some embodiments. The media guidance application may wait for an acknowledgement 212 from the user 202 that the user heard the audible response 206 to voice command 204. If no acknowledgement is received within a certain time period, the media guidance application can generate a second audible response 210 via the second voice-operated user device 208, or display a visual representation of the audible response on a user device (as shown in FIG. 3, described below). The media guidance application may determine a first time corresponding to when audible response 206 was generated, for example by saving a time stamp in a data structure when audible response 206 is generated. For instance, the audible response may have been generated at 3:02:03 PM. The media guidance application may then add a time period to the first time to calculate a second time. For example, the time period may be 20 seconds, making the second time 3:02:23 PM. This is how long the media guidance application waits for an acknowledgement after responding to the voice command of user 202.

The time period to wait for the user's acknowledgement may be determined in a number of ways. For example, the media guidance application may use a user profile to determine the time period. The media guidance application may identify the user profile via a key phrase spoken by the user 202, where the key phrase is associated with a specific user, or via the user's speech patterns. For example, a first data structure in the user profile may contain information about how long the user has taken to respond to voice-operated user devices in the past. The media guidance application may average these past response times to calculate an average response time for user 202. For example, user 202 may previously have taken 10 seconds, 5 seconds, and 15 seconds to respond to the second voice-operated user device 208. The time period to wait for this user's acknowledgement may then be set to 10 seconds, the average of the previous responses.

In some embodiments, if the user does not acknowledge the device 208 within the specified time period, the media guidance application generates another response 210 or repeats the initial audible response 206 to voice command 204. The acknowledgement 212 confirms that the user heard the audible response. For example, the audible response 206 may come from DeviceB, and the user 202 may acknowledge it by replying, “Thank you, DeviceB.”

If no acknowledgement is received within the specified time period, a second audible response 210 or a visual representation of the audible response 206 may be generated. For example, in some embodiments, the media guidance application may transmit, based on whether acknowledgement 212 was received by a third time, a visual representation of the audible response (such as that shown in window 310 of FIG. 3) to a display device associated with the user (such as device 308 of FIG. 3). The third time is prior to the second time. For instance, the audible response 206 may have been generated at 3:02:03 PM; if the time period is twenty seconds, the second time is 3:02:23 PM. If DeviceB has not received an acknowledgement by 3:02:23 PM, the media guidance application may display the visual representation of audible response 206 (e.g., “May the Force be with you”) on the television associated with the user.

In some embodiments, the media guidance application generates a second audible response 210 if the user does not acknowledge the voice-operated user device 208 within the specified time period. That is, the media guidance application can generate the second audible response 210 based on whether an acknowledgement (such as acknowledgement 212) was received by the third time. The second audible response 210 may be the same as audible response 206, with the second voice-operated user device repeating the response at the same volume or at a different volume. Alternatively, the second audible response 210 may prompt the user for a response. For example, if the second voice-operated user device 208 (e.g., DeviceB) has not received an acknowledgement from the user by 3:02:23 PM, the media guidance application may generate the second audible response 210 asking, “Did you hear, ‘May the Force be with you’?”

In some embodiments, the second audible response 210 may be generated at the same volume as the first response, or at a louder volume. The media guidance application may determine that the third volume level, at which the second audible response 210 is generated, should be higher than the second volume level. For example, the third volume level may be the second volume level plus a predetermined amount; the second volume level may be 53 dB and the third volume level 56 dB. The second audible response 210 is then generated at the third volume level. For example, the media guidance application may generate the second audible response 210, “May the Force be with you,” via the second voice-operated user device (e.g., DeviceB) at 56 dB.

In some embodiments, the media guidance application may identify the user profile of the user 202 in order to generate the second audible response. The media guidance application may identify the user profile via a key word spoken by the user 202; for example, the user might say “UserA” before issuing a voice command. The user profile may also be identified by the speech or vocal patterns of the user 202.

In some embodiments, the media guidance application may use the user's profile to determine the average speaking volume of the user 202. The user profile may contain a first data structure with speaking volumes the user has used in the past. To determine the user's average speaking volume, the media guidance application may average these past speaking volumes. For example, the user's average speaking volume may be 60 dB.

The media guidance application can then determine the difference between the average speaking volume and the highest volume level (i.e., the volume received by the voice-operated user device 208 closest to the user 202). For example, the highest volume level may be 52 dB while the user's average speaking volume is 60 dB; the difference in this example is 8 dB.

This difference may be combined with the second volume level to determine the third volume level (the volume of the second audible response 210). For example, the difference may be 8 dB and the second volume level 53 dB, making the third volume level 61 dB. In that case, the second voice-operated user device 208 would emit the second audible response 210 at 61 dB.

In some cases, the media guidance application receives an acknowledgement 212 from the user. Upon receiving acknowledgement 212, the media guidance application may store the second volume level (the volume of the initial audible response 206) in the user's profile. For example, the user profile may contain a hearing data structure storing a plurality of user volume levels that the user 202 has heard previously, and this second data structure may be updated to include the second volume level.

FIG. 3 shows an example of a visual representation of an audible response displayed on a display device, in accordance with some embodiments. In some embodiments, the response to voice command 304 may be displayed visually on a display device 308. The media guidance application may identify the display device 308 associated with the user 302 through a user profile; for example, the media guidance application may interface with a television associated with the user 302. The media guidance application may generate a visual representation of the audible response, for instance by generating a window 310 on the television and displaying the response in window 310. For example, the media guidance application may display the last line of the movie that user 302 is currently watching when request 304 from user 302 is a query about that line of the movie. Display window 310 may also include a reference 306 to the device that generated the audible reply to request 304, for example, “DeviceB said: ‘May the Force be with you.’” This tells the user which device they are talking to and, consequently, which device is closest to them.

In some embodiments, the media guidance application may determine, using the user profile, that the user 302 is hearing impaired; this information may be stored, for example, in the hearing data structure. The user 302 might not be able to hear any audible response. As described above, the media guidance application may identify the display device 308 associated with the user, generate a visual representation of the audible response, and transmit that visual representation to display device 308. For example, the media guidance application may transmit the visual representation to a television 308 associated with user 302.

As referred to herein, a “continuous listening device” is a device that can, when powered on, continuously monitor audio without the user needing to prompt it (e.g., by pressing a button) to prepare for an input command. A continuous listening device may monitor audio for a prompt or keyword (e.g., “Hello, Assistant”) that activates an active listening state, or may monitor and process all audio in a passive listening state. As referred to herein, a “passive listening state” is a mode of operation of a continuous listening device in which audio is continuously or temporarily recorded without the user having prompted the device to do so. In the passive state, the continuous listening device processes all audio input, in contrast to an active listening state, in which the device only processes audio after a prompt or keyword is provided. The continuous listening device may store received audio in a buffer of a set length. For example, the continuous listening device might store five minutes' worth of audio, with the oldest audio information erased as new audio is recorded. Some embodiments store all audio persistently, to be erased by routine housekeeping or manually by the user.
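
The rolling buffer described above behaves like a ring buffer; a minimal sketch, with the sample rate and buffer length chosen only for illustration:

```python
from collections import deque

SAMPLE_RATE = 16_000          # samples per second (assumed)
BUFFER_SECONDS = 5 * 60       # keep five minutes of audio

# deque with maxlen drops the oldest samples as new ones arrive,
# matching the "oldest audio is erased" behavior described above.
audio_buffer: deque[int] = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

def on_samples(samples: list[int]) -> None:
    audio_buffer.extend(samples)
```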

As used herein, a “voice-operated user device” is a device that can continuously listen for keywords and audio input, and that processes audio input when a keyword addressing it is detected. A voice-operated user device may thus be a continuous listening device as described above. Voice-operated user devices may use either a passive or an active listening state; any of the devices mentioned above may use passive listening states, active listening states, or any combination of the two.

The amount of content available to users in any given content delivery system can be substantial. Consequently, many users desire a form of media guidance that makes it easy to navigate content selections and identify the content they are looking for. An application that provides such guidance is referred to herein as an interactive media guidance application or, sometimes, a media guidance application or a guidance application.

Interactive media guidance applications may take various forms depending on the content for which they provide guidance. One typical type of media guidance application is an interactive television program guide. Interactive television program guides (sometimes referred to as electronic program guides) are well-known guidance applications that allow users to locate and navigate among many types of content or media assets. Interactive media guidance applications may generate graphical user interface screens that enable a user to navigate among, locate, and select content. As referred to herein, the terms “media asset” and “content” are interchangeable and should be understood to mean an electronically consumable asset, such as television programming, on-demand programs (as in video-on-demand (VOD) systems), Internet content (e.g., streaming content, downloadable content, and Webcasts), video clips, audio clips, images, rotating images, documents, and playlists. Guidance applications also allow users to navigate among and locate such content. As referred to herein, the term “multimedia” should be understood to mean content that utilizes at least two of the content forms described above. Content may be recorded, played, displayed, or accessed by user equipment devices, but can also be part of a live performance.

The media guidance application, and/or any instructions for performing any of the embodiments discussed herein, may be encoded on computer-readable media. Computer-readable media include any media capable of storing data. Such media may be transitory (including, but not limited to, propagating electrical or electromagnetic signals) or non-transitory (including, but not limited to, volatile and non-volatile computer memory or storage devices).

With the advent of mobile computing and high-speed wireless networks, users can now access media on their user equipment devices in ways they never could before. As referred to herein, the phrases “user equipment device,” “user equipment,” “user device,” “electronic device,” “electronic equipment,” “media equipment device,” and “media device” should be understood to mean any device for accessing the content described above, such as a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD), a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a WebTV box, a personal computer (PC), a PC media center, or other computing or wireless equipment. In some embodiments, the user equipment device may have multiple screens or multiple angled screens, and may have a front-facing and/or rear-facing camera. On these user equipment devices, users may be able to navigate among and locate the same content available through a television; media guidance may be available on these devices as well. The guidance provided may be for content available only through a television, for content available only through one or more other types of user equipment devices, or for content available through both a television and one or more other types of user equipment devices. Media guidance applications may be provided as online applications (i.e., provided on a website), or as stand-alone applications or clients on user equipment devices. Various devices and platforms that may implement media guidance applications are described in more detail below.

One function of the media guidance application is to provide media guidance data to users. As referred to herein, the phrases “media guidance data” and “guidance data” should be understood to mean any data related to content, or data used in operating the guidance application. The guidance data may include program information, guidance application settings, user preferences, media listings, media-related information (e.g., broadcast times, broadcast channels, titles, descriptions, ratings information (e.g., parental control ratings, critics' ratings, etc.), genre or category information, actor information, logo data for broadcasters or providers, etc.), media format (e.g., standard definition, high definition, 3D, etc.), on-demand information, blogs, websites, and any other type of guidance data that helps a user navigate among and locate desired content selections.

FIGS. 4-5 show illustrative display screens that may be used to provide media guidance data. The display screens shown in FIGS. 4-5 may be implemented on any suitable user equipment device or platform. While the displays of FIGS. 4-5 are illustrated as full-screen displays, they may also be fully or partially overlaid over content being displayed. A user may indicate a desire to access content information by selecting a selectable option provided in a display screen (e.g., a menu option, a listings option, an icon, a hyperlink, etc.) or by pressing a dedicated button (e.g., a GUIDE button) on a remote control or other user input device or interface. In response to the user's input, the media guidance application may provide a display screen with media guidance data organized in one of several ways, such as by time and channel in a grid, by time, by channel, by source, by content type, by category (e.g., movies, sports, news, children's, or other categories of programming), or by other predefined, user-defined, or other organization criteria.

FIG. 4 shows an illustrative program listings display 400 arranged by time and channel. Display 400 may include grid 402 with: (1) a column of channel/content type identifiers 404, where each channel/content type identifier (which is a cell in the column) identifies a different channel or content type; and (2) a row of time identifiers 406, where each time identifier (which is a cell in the row) identifies an hour of programming. Grid 402 may also include cells of program listings, such as program listing 408, where each listing provides the title of the program and its associated time. With a user input device, a user can select program listings by moving highlight region 410. Information relating to the program listing selected by highlight region 410 may be provided in program information region 412. Program information region 412 may include, for example, the program title, the program description, the time the program is provided (if relevant), the channel on which it is shown (if applicable), and any other pertinent information.

In addition to providing access to linear programming (i.e., content that is scheduled to be transmitted to multiple user equipment devices at the same time according to a set schedule), the media guidance application also provides access to non-linear programming (i.e., content not provided according to a schedule). Non-linear programming may include content from multiple content sources, such as on-demand content (e.g., VOD), Internet content (e.g., streaming media and downloadable media), locally stored content (e.g., content stored on the user equipment described above or another storage device), and other time-independent content. On-demand content may include movies and any other content provided by a particular content provider (e.g., HBO On Demand offering “The Sopranos” and “Curb Your Enthusiasm”). HBO ON DEMAND is a service mark owned by Time Warner Company L.P. et al., and THE SOPRANOS and CURB YOUR ENTHUSIASM are trademarks owned by Home Box Office, Inc.

Grid 402 may provide media guidance data for non-linear programming, including on-demand listing 414 and recorded content listing 416. A display combining media guidance data for content from different types of sources is sometimes referred to as a “mixed-media” display. The types of media guidance data shown in display 400 may vary based on user selection or guidance application definition (e.g., a display of only broadcast and recorded listings, only on-demand and broadcast listings, etc.). As illustrated, listings 414, 416, and 418 are shown as spanning the entire time block displayed in grid 402, to indicate that selecting these listings may provide access to a display dedicated to on-demand listings, recorded listings, or Internet listings, respectively. In some embodiments, listings for these content types may be included directly in grid 402. Additional media guidance data may be displayed in response to the user selecting one of the navigational icons 420. (Pressing an arrow key on a user input device may affect the display in a similar manner as selecting navigational icons 420.)

Display 400 may also include video region 422 and options region 426. Video region 422 may allow the user to view and/or preview programs that are available or will be made available. The content of video region 422 may correspond to, or be independent of, one of the listings displayed in grid 402. Grid displays that include a video region are sometimes referred to as picture-in-guide (PIG) displays. PIG displays and their functionalities are described in greater detail in Satterfield et al., U.S. Pat. No. 6,564,378, issued May 13, 2003, and Yuen et al., U.S. Pat. No. 6,239,794, issued May 29, 2001, which are hereby incorporated by reference herein in their entireties. PIG displays may be included in other media guidance display screens as well.

Options region 426 may allow the user to access various types of content, media guidance application displays, and/or media guidance application features. Options region 426 may be part of display 400 (and the other display screens discussed herein), or may be invoked by the user by selecting an on-screen option or pressing a dedicated or assignable button on a user input device. The selectable options within options region 426 may concern features related to the program listings in grid 402, or may include options available from a main menu display. Features related to program listings may include searching for other air times or ways of receiving a program, recording a program, enabling series recording of a program, setting a program and/or channel as a favorite, purchasing a program, and other features. Options available from a main menu display may include search options, VOD options, cloud-based options, device synchronization options, second screen device options, options to access various types of media guidance data displays, options to edit a user's profile, options to access a browse overlay, and other options.

The media guidance application may be personalized based on a user's preferences. A personalized media guidance application allows a user to customize displays and features to create an individual experience with the media guidance application. These customizations may be input by the user and/or determined by the media guidance application monitoring user activity to learn user preferences. Users may access their personalized guidance application by logging in or otherwise identifying themselves to the guidance application. Customization of the media guidance application may be made in accordance with a user profile. The customizations may include varying presentation schemes (e.g., color scheme, font size, etc.), aspects of content listings displayed (e.g., only HDTV or only 3D programming, user-specified channels based on favorite channel selections, re-ordering of channels, recommended content, etc.), desired recording features (e.g., recording or series recordings for particular users, recording quality, etc.), parental control settings, personalized presentation of Internet content (e.g., presentation of social media content, e-mail, electronically delivered articles, etc.), and other desired customizations.

The media guidance application may allow users to provide user profile information, or may compile user profile information automatically. The media guidance application may, for example, monitor the content the user accesses and/or the interactions the user has with the application. Additionally, the media guidance application may obtain all or part of other user profiles related to the user (e.g., from other websites the user accesses, such as www.Tivo.com, from other media guidance applications the user accesses, and from other interactive applications the user accesses), and/or obtain information about the user from other sources that the media guidance application may access. As a result, a user can be provided with a unified guidance application experience across all of the user's devices. This type of user experience is described in greater detail below in connection with FIG. 7. Additional personalized media guidance application features are described in greater detail in Ellis et al., U.S. Patent Application Publication No. 2005/0251827, filed Jul. 11, 2005, Boyer et al., U.S. Pat. No. 7,165,098, issued Jan. 16, 2007, and Ellis et al., U.S. Patent Application Publication No. 2002/0174430, filed Feb. 21, 2002, which are hereby incorporated by reference herein in their entireties.

FIG. 5 shows another display arrangement for providing media guidance. Video mosaic display 500 includes selectable options 502 for organizing content information based on content type, genre, and/or other criteria. In display 500, television listings option 504 is selected, thus providing listings 506, 508, 510, and 512 as broadcast program listings. Display 500 may include graphical images, including still images, cover art, and video clip previews. The listings may also provide live video of the content, or other information that indicates to the user the content described by the media guidance data in the listing, and may include text providing further information about that content. For example, listing 508 may include multiple portions, such as media portion 514 and text portion 516. Media portion 514 and/or text portion 516 may be selectable to view the content in full screen, or to view information related to the content displayed in media portion 514 (e.g., to view listings for the particular channel on which the video is displayed).

The listings in display 500 are of different sizes (i.e., listing 506 is larger than listings 508, 510, and 512), but if desired, all the listings may be the same size. Listings may be of different sizes or graphically accentuated to indicate degrees of interest to the user or to emphasize certain content, as desired by the content provider or based on user preferences. Various systems and methods for graphically accentuating content listings are discussed in, for example, Yates, U.S. Patent Application Publication No. 2010/0153885, filed Nov. 12, 2009, which is hereby incorporated by reference herein in its entirety.

Users may access content and the media guidance application (and its display screens described above and below) from one or more of their user equipment devices. FIG. 6 shows a generalized embodiment of illustrative user equipment device 600. More specific implementations of user equipment devices are discussed below in connection with FIG. 7. User equipment device 600 may receive content and data via input/output (hereinafter, “I/O”) path 602. I/O path 602 may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local or wide area network, and/or other content) and data to control circuitry 604, which includes processing circuitry 606 and storage 608. Control circuitry 604 may be used to send and receive commands, requests, and other suitable data via I/O path 602. I/O path 602 may connect control circuitry 604 (and specifically processing circuitry 606) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but are shown as a single path in FIG. 6 to avoid overcomplicating the drawing.

Control circuitry 604 may be based on any suitable processing circuitry, such as processing circuitry 606. In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple processors of the same type (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry 604 executes instructions for a media guidance application stored in memory (i.e., storage 608). Specifically, the media guidance application may instruct control circuitry 604 to perform the functions discussed above and below. For example, the media guidance application may provide instructions to control circuitry 604 to generate the media guidance displays. In some implementations, any action performed by control circuitry 604 may be based on instructions received from the media guidance application.

In client-server based embodiments, control circuitry 604 may include communications circuitry suitable for communicating with a guidance application server or other networks or servers. The instructions for carrying out the functionality described above may be stored on the guidance application server. Communications circuitry may include a cable modem, an integrated services digital network (ISDN) modem, a digital subscriber line (DSL) modem, a telephone modem, an Ethernet card, or a wireless modem for communications with other equipment, or any other suitable communications circuitry. Such communications may involve the Internet or any other suitable communications networks or paths (which are described in more detail in connection with FIG. 7).

Memory may be an electronic storage device provided as storage 608 that is part of control circuitry 604. As referred to herein, the phrase "electronic storage device" or "storage device" should be understood to mean any device for storing electronic data, computer software, or firmware. Storage 608 may be used to store various types of content described herein as well as media guidance data described above. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Cloud-based storage, described in relation to FIG. 7, may be used to supplement storage 608 or instead of storage 608.

Control circuitry 604 may include video generating circuitry and tuning circuitry, such as one or more analog tuners, one or more MPEG-2 decoders or other digital decoding circuitry, high-definition tuners, or any other suitable tuning or video circuits or combinations of such circuits. Encoding circuitry (e.g., for converting over-the-air, analog, or digital signals to MPEG signals for storage) may also be provided. Control circuitry 604 may also include scaler circuitry for upconverting and downconverting content into the preferred output format of user equipment 600, as well as digital-to-analog converter circuitry and analog-to-digital converter circuitry for converting between digital and analog signals. The tuning and encoding circuitry may be used by the user equipment device to receive and to display, play, or record content, and may also be used to receive guidance data. The circuitry described herein, including, for example, the tuning, video generating, encoding, decoding, encrypting, decrypting, scaler, and analog/digital circuitry, may be implemented using software running on one or more general purpose or specialized processors. Multiple tuners may be provided to handle simultaneous tuning functions (e.g., watch-and-record functions, picture-in-picture (PIP) functions, multiple-tuner recording, etc.). If storage 608 is provided as a separate device from user equipment 600, the tuning and encoding circuitry (including multiple tuners) may be associated with storage 608.

A user may send instructions to control circuitry 604 using user input interface 610. User input interface 610 may be any suitable user interface, such as a remote control, mouse, trackball, keypad, keyboard, touch screen, touchpad, stylus input, joystick, voice recognition interface, or other user input interfaces. Display 612 may be provided as a stand-alone device or integrated with other elements of user equipment device 600. For example, display 612 may be a touchscreen or touch-sensitive display, in which case user input interface 610 may be integrated with or combined with display 612. Display 612 may be one or more of a monitor, a television, a liquid crystal display (LCD), an electrofluidic display, a cathode ray tube display, a light-emitting diode display, or any other suitable equipment for displaying visual images. In some embodiments, display 612 may be HDTV-capable. In some embodiments, display 612 may be a 3D display, and the interactive media guidance application and any suitable content may be displayed in 3D. A video card or graphics card may generate the output to display 612. The video card may offer various functions such as accelerated rendering of 3D scenes, MPEG-2/MPEG-4 decoding, TV output, or the ability to connect multiple monitors. The video card may be any processing circuitry described above in relation to control circuitry 604, or it may be integrated with control circuitry 604. Speakers 614 may be provided as integrated with other elements of user equipment device 600 or may be stand-alone units. The audio component of videos and other content displayed on display 612 may be played through speakers 614. In some embodiments, the audio may be distributed to a receiver (not shown), which processes and outputs the audio via speakers 614.

The guidance application may be implemented using any suitable architecture. For example, it may be a stand-alone application wholly implemented on user equipment device 600. In such an approach, instructions of the application are stored locally (e.g., in storage 608), and data for use by the application is downloaded on a periodic basis (e.g., from an out-of-band feed, from an Internet resource, or using another suitable approach). Control circuitry 604 may retrieve instructions of the application from storage 608 and process the instructions to generate any of the displays discussed herein. Based on the processed instructions, control circuitry 604 may determine what action to perform when input is received from input interface 610. For example, the processed instructions may indicate movement of a cursor on a display up or down when input interface 610 indicates that an up/down button was selected.

In some embodiments, the media guidance application is a client-server based application. Data for use by a thick or thin client implemented on user equipment device 600 is retrieved on demand by issuing requests to a server remote to user equipment device 600. In one example of a client-server based guidance application, control circuitry 604 runs a web browser that interprets web pages provided by a remote server. For example, the remote server may store the instructions for the application in a storage device. The remote server may process the stored instructions using circuitry (e.g., control circuitry 604) and generate the displays discussed above and below. The client device may receive the displays generated by the remote server and display their content locally on equipment device 600. This way, the processing of the instructions is performed remotely by the server while the resulting displays are provided locally on equipment device 600. Equipment device 600 may receive inputs from the user via input interface 610 and transmit those inputs to the remote server for processing and generating the corresponding displays. For example, equipment device 600 may transmit a communication to the remote server indicating that an up/down button was selected via input interface 610. The remote server may process the instructions in accordance with that input and generate a display of the application corresponding to the input (e.g., a display that moves a cursor up or down). The generated display is then transmitted to equipment device 600 for presentation to the user.

In some embodiments, the media guidance application is downloaded and interpreted or otherwise run by an interpreter or virtual machine (run by control circuitry 604). In some embodiments, the guidance application may be encoded in the ETV Binary Interchange Format (EBIF), received by control circuitry 604 as part of a suitable feed, and interpreted by a user agent running on control circuitry 604. For example, the guidance application may be an EBIF application. In some embodiments, the guidance application may be defined by a series of JAVA-based files that are received and run by a local virtual machine or other suitable middleware executed by control circuitry 604. In some of such embodiments (e.g., those employing MPEG-2 or other digital media encoding schemes), the guidance application may be encoded and transmitted in an MPEG-2 object carousel with the MPEG audio and video packets of a program.

User equipment device 600 of FIG. 6 can be implemented in system 700 of FIG. 7 as user television equipment 702, user computer equipment 704, wireless user communications device 706, or any other type of user equipment suitable for accessing content, such as a portable gaming machine. For simplicity, these devices may be referred to herein collectively as user equipment or user equipment devices, and they may be substantially similar to the user equipment devices described above. User equipment devices on which a media guidance application may be implemented may function as stand-alone devices or may be part of a network of devices. Various network configurations of devices may be implemented and are discussed in more detail below.

A user equipment device utilizing at least some of the system features described above in connection with FIG. 6 may not be classified solely as user television equipment 702, user computer equipment 704, or wireless user communications device 706. For example, user television equipment 702 may, like some user computer equipment 704, be Internet-enabled and allow access to Internet content, while user computer equipment 704 may, like some television equipment 702, include a tuner allowing access to television programming. The media guidance application may have the same layout on these different types of equipment, or it may be tailored to the display capabilities of the equipment. For example, on user computer equipment 704, the guidance application may be provided as a web site accessed by a web browser. In another example, the guidance application may be scaled down for wireless user communications devices 706.

In system 700, there is typically more than one of each type of user equipment device, but only one of each is shown in FIG. 7 to avoid overcomplicating the drawing. In addition, each user may utilize more than one type of user equipment device and more than one of each type of user equipment device.

In some embodiments, a user equipment device (e.g., user television equipment 702, user computer equipment 704, or wireless user communications device 706) may be referred to as a "second screen device." A second screen device may supplement content presented on a first user equipment device. The content presented on the second screen device may be any suitable content that supplements the content presented on the first device. In some embodiments, the second screen device provides an interface for adjusting settings and display preferences of the first device. In some embodiments, the second screen device is configured for interacting with other second screen devices or for interacting with a social network. The second screen device may be located in the same room as the first device or in a different room.

The user may also adjust various settings to maintain consistent media guidance application settings across in-home devices and remote devices. Settings include those described herein, as well as channel and program favorites, programming preferences that the guidance application uses to make programming recommendations, display preferences, and other desirable guidance settings. For example, if a user sets a channel as a favorite, such as on www.Tivo.com, the same channel will appear as a favorite on the user's in-home devices (e.g., user television equipment and user computer equipment) as well as on the user's mobile devices. Therefore, changes made on one user equipment device can change the guidance experience on another user equipment device. In addition, the changes made may be based on settings input by the user as well as user activity monitored by the guidance application.

The user equipment devices may be coupled to communications network 714. Namely, user television equipment 702, user computer equipment 704, and wireless user communications device 706 are coupled to communications network 714 via communications paths 708, 710, and 712, respectively. Communications network 714 may be one or more networks including the Internet, a mobile phone network, a mobile voice or data network (e.g., a 4G or LTE network), a cable network, a public switched telephone network, or other types of communications networks or combinations of communications networks. Paths 708, 710, and 712 may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. In FIG. 7, path 712 is drawn with dotted lines to indicate that it is a wireless path, while paths 708 and 710 are drawn as solid lines to indicate that they are wired paths.

Communications with the user equipment devices may be provided by one or more of these communications paths, but they are shown as a single path in FIG. 7 to avoid overcomplicating the drawing.

Although communications paths are not drawn between the user equipment devices, these devices may communicate directly with each other via communications paths such as those described above in connection with paths 708, 710, and 712, as well as other short-range, point-to-point communications paths, wired or wireless (e.g., Bluetooth). BLUETOOTH is a certification mark owned by Bluetooth SIG, INC. The user equipment devices may also communicate with each other through an indirect path via communications network 714.

System 700 includes content source 716 and media guidance data source 718 coupled to communications network 714 via communications paths 720 and 722, respectively. Paths 720 and 722 may include any of the communications paths described above in connection with paths 708, 710, and 712.

Communications with content source 716 and media guidance data source 718 may be exchanged over one or more communications paths but are shown as a single path in FIG. 7 to avoid overcomplicating the drawing. In addition, there may be more than one of each of content source 716 and media guidance data source 718, but only one of each is shown in FIG. 7 to avoid overcomplicating the drawing. The different types of each of these sources are discussed below. If desired, content source 716 and media guidance data source 718 may be integrated as one source device. Although communications between sources 716 and 718 and user equipment devices 702, 704, and 706 are shown as through communications network 714, in some embodiments sources 716 and 718 may communicate directly with user equipment devices 702, 704, and 706 via communications paths (not shown) such as those described above in connection with paths 708, 710, and 712.

System 700 may also include an advertisement source 724 coupled to communications network 714 via a communications path 726. Path 726 may include any of the communications paths described above in connection with paths 708, 710, and 712. Advertisement source 724 may determine which advertisements to transmit to specific users and under which circumstances. For example, a cable operator may have the right to insert advertisements during specific time slots on specific channels, and advertisement source 724 may transmit advertisements to users during those time slots. As another example, advertisement source 724 may target advertisements based on the demographics of users known to view a particular show (e.g., teenagers watching a reality series). As yet another example, advertisement source 724 may provide different advertisements depending on the location of the user equipment viewing a media asset (e.g., east coast or west coast).

In some embodiments, advertisement source 724 may be configured to maintain user information, including advertisement-suitability scores associated with users, in order to provide targeted advertising. Additionally or alternatively, a server associated with advertisement source 724 may be configured to store raw information that may be used to derive advertisement-suitability scores. In some embodiments, advertisement source 724 may transmit a request to another device for the raw information and calculate the advertisement-suitability scores. Advertisement source 724 may update advertisement-suitability scores for specific users (e.g., a first subset, second subset, or third subset of users) and transmit an advertisement of the target product to appropriate users.

Content source 716 may include one or more types of content distribution equipment, including a television distribution facility, cable system headend, satellite distribution facility, programming sources (e.g., television broadcasters such as NBC, ABC, and HBO), intermediate distribution facilities and/or servers, Internet providers, on-demand media servers, and other content providers. NBC is a trademark owned by the National Broadcasting Company, Inc., ABC is a trademark owned by the American Broadcasting Companies, Inc., and HBO is a trademark owned by the Home Box Office, Inc. Content source 716 may be the originator of content (e.g., a television broadcaster, a Webcast provider, etc.) or may not be the originator of content (e.g., an on-demand content provider, an Internet provider of content of broadcast programs for downloading, etc.). Content source 716 may include cable providers, satellite providers, on-demand providers, Internet providers, over-the-top content providers, and other providers of content. Content source 716 may also include a remote media server used to store different types of content (including video content selected by a user) in a location remote from any of the user equipment devices. Systems and methods for remote storage of content and providing remotely stored content to user equipment are discussed in greater detail in connection with Ellis et al., U.S. Pat. No. 7,761,892, issued Jul. 20, 2010, which is hereby incorporated by reference herein in its entirety.

Media guidance data source 718 may provide media guidance data, such as the media guidance data described above. Media guidance data may be provided to the user equipment devices using any suitable approach. In some embodiments, the guidance application may be an interactive television program guide that receives program guide data via a data feed (e.g., a continuous feed or trickle feed). Program schedule data and other guidance data may be provided to the user equipment on a television channel sideband, using an in-band digital signal, an out-of-band digital signal, or any other suitable data transmission technique. Program schedule data and other media guidance data may be provided to user equipment on multiple analog or digital television channels.

In some embodiments, guidance data may be provided to users' equipment using a client-server approach. For example, a user equipment device may pull media guidance data from a server, or a server may push media guidance data to a user equipment device. In some embodiments, a guidance application client residing on the user's equipment may initiate sessions with source 718 to obtain guidance data when needed, e.g., when the guidance data is out of date or when the user requests the data. Media guidance may be provided to the user equipment with any suitable frequency (e.g., continuously, daily, per a user-specified period of time, per a system-specified period of time, or in response to a request from user equipment). Media guidance data source 718 may also provide user equipment devices 702, 704, and 706 the media guidance application itself or software updates for the media guidance application.

In some embodiments, media guidance data may also include viewer data. The viewer data may include current and historical user activity information, such as what content the user typically watches, what times of day the user watches content, how often the user watches, whether the user interacts with a social network to post information, and what types of content the user typically watches (e.g., pay TV or free TV). The media guidance data may also include subscription data. The subscription data may include information on which services or sources a user subscribes to and/or which services or sources the user previously subscribed to but later terminated access to (e.g., whether the user subscribes to premium channels, whether the user has added a premium tier of services, or whether the user has increased Internet speed). In some embodiments, the viewer data and/or the subscription data may identify patterns for a given user over a longer period. The media guidance data may include a model (e.g., a survivor model) used to generate a score that indicates a likelihood that the user will terminate access to a particular service or source. For example, the media guidance application may process the viewer data with the subscription data using the model to determine whether the user will terminate access to a service or source; a higher score indicates a higher level of confidence that the user will do so. Based on the score, the media guidance application may generate promotional messages that may entice the user to keep the service or source indicated by the score.
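By way of illustration only, the following minimal Python sketch shows how a termination score of this kind might be computed. The feature names, weights, and threshold are assumptions made for the example; the disclosure does not specify a particular model.

    import math

    def termination_score(days_since_last_view: float,
                          weekly_view_hours: float,
                          months_subscribed: float) -> float:
        """Return a score in (0, 1); higher values indicate a higher
        likelihood that the user will terminate access to the service."""
        # Hypothetical logistic model over a few viewer/subscription features.
        z = (0.08 * days_since_last_view
             - 0.25 * weekly_view_hours
             - 0.02 * months_subscribed)
        return 1.0 / (1.0 + math.exp(-z))

    # A high score could trigger a promotional message for the service.
    if termination_score(30, 0.5, 24) > 0.5:
        print("Generate a promotion to retain this subscriber")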

Summary for “Systems and Methods for Generating a Volume-Based Response for Multiple Voice-Operated User Devices”

It is becoming more common for homes to have voice-operated devices. Voice-operated devices can adjust their response volume to match the volume of a voice command. It is becoming increasingly difficult to coordinate multiple voice-operated devices within a home with the goal of determining which device should answer a query at what volume. In some cases, the user may not be able to hear the response if the volume of the response is the same as the volume of their voice. It can be difficult for users to manually choose a device and set a response volume every time they want a response to voice commands. This could make the device’s response less useful.

Accordingly, the systems and methods described herein respond to a voice command at a volume level that is based on the volume level of the voice command. Because the response volume level is determined from the volume level of the voice command, users can change the response volume without manually adjusting the voice-operated device. For example, a first voice-operated user device may be located at one end of a couch and a second voice-operated user device at the other end. A first user may be seated at the end of the couch closest to the first voice-operated device, and a second user may be seated at the opposite end. The first user may speak a voice command that is received by both the first and second voice-operated devices. Based on the volume level of the voice command as received by each voice-operated user device, the systems and methods described herein can determine which voice-operated device is closest to the user. The volume level at which the first voice-operated device responds may then be determined by the volume level of the voice command. For instance, the first user and the second user may be watching a movie such as "Star Wars," and the first user may whisper a request to repeat the last line of the movie. The first voice-operated device may respond by whispering the last line (e.g., "May the Force be with you") back to the user.

These systems and methods can be implemented by a media guidance application. The media guidance application can be connected to a plurality of voice-operated user devices, which could include, for example, DeviceA, DeviceB, and DeviceC. The media guidance application may detect, via a first voice-operated user device, a voice command spoken by a user. For example, the media guidance application might detect, via DeviceA, the user saying "Repeat the last sentence" in reference to the movie the user is currently watching.

The media guidance application may determine a first volume level at which the voice command is received by the first voice-operated device. The first volume level could be, for example, the average input volume (e.g., 48 dB) detected at DeviceA.

In certain embodiments, to determine the first volume level, the media guidance application may measure the unfiltered volume level of the voice command. The unfiltered volume level of the voice command may be, for example, 60 dB; however, this unfiltered volume level may contain background noise, such as a TV playing a movie nearby. The media guidance application may detect the background noise level and filter the voice command to reduce the noise. For example, the movie playing on the TV may be at a higher frequency than the user's voice, so the media guidance application might filter out high-frequency components of the voice command to remove the background noise. The media guidance application may then calculate a filtered volume level of the voice command. For example, the filtered volume level of the voice command might be 48 dB, whereas the unfiltered volume level was 60 dB.
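As a rough illustration of this filtering step, the following Python sketch low-pass filters the command audio and computes a volume level in dB. It assumes the audio is available as a mono sample array; the 4 kHz cutoff and the reference amplitude are illustrative assumptions, not values taken from the disclosure.

    import numpy as np
    from scipy.signal import butter, lfilter

    def filtered_volume_db(samples: np.ndarray, sample_rate: int,
                           cutoff_hz: float = 4000.0,
                           ref: float = 1e-3) -> float:
        # Low-pass filter to suppress higher-frequency background audio
        # (e.g., a movie on a nearby TV) while keeping the voice band.
        b, a = butter(4, cutoff_hz / (sample_rate / 2), btype="low")
        voice = lfilter(b, a, samples)
        rms = float(np.sqrt(np.mean(voice ** 2)))
        # Convert the RMS amplitude to a dB value relative to `ref`.
        return 20.0 * np.log10(max(rms, 1e-12) / ref)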

The voice command may be detected by several of the voice-operated user devices, and each device may hear the voice command at a different volume depending on its proximity to the user. Each voice-operated device of the plurality of voice-operated devices is thus associated with one of a plurality of volume levels of the voice command. The media guidance application might receive, from each voice-operated device, a data structure that contains a volume level and a voice-operated device identifier. For example, the media guidance application might receive, from a second voice-operated user device, a data structure with a volume level of 52 dB and a device identifier DeviceB, and, from a third voice-operated user device, a data structure with a volume level of 50 dB and a device identifier DeviceC.

In certain embodiments, the media guidance application may compare the first volume level with the plurality of volume levels. For example, the media guidance application might compare the 48 dB associated with DeviceA with the 52 dB associated with DeviceB and the 50 dB associated with DeviceC.

In some instances, the media guidance application may determine, based upon comparing the first volume level with the plurality of volume levels, the highest volume level among the plurality of volume levels. For example, 52 dB may be the highest volume level at which any of the plurality of voice-operated devices received the voice command.

In certain embodiments, the media guidance application may search at least one data structure to find the second voice-operated device associated with the highest volume level. The device that received the command at the highest volume will be the one closest to the user who issued the voice command, because the closer a device is to the user, the louder the voice command will be at that device. If the highest volume level is 52 dB, the media guidance application can search the data structures for the voice-operated device that corresponds to the 52 dB volume level. The 52 dB volume level is identified in the data structure with device identifier DeviceB, the second voice-operated device. DeviceB is, in this example, the voice-operated device closest to the user.
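A minimal Python sketch of this step is shown below, assuming each device reports the data structure described above (the field names are illustrative). Selecting the device with the loudest reading implements the closest-device determination.

    from dataclasses import dataclass

    @dataclass
    class VolumeReport:
        device_id: str      # e.g., "DeviceB"
        volume_db: float    # volume at which this device heard the command

    reports = [VolumeReport("DeviceA", 48.0),
               VolumeReport("DeviceB", 52.0),
               VolumeReport("DeviceC", 50.0)]

    # The device that received the command loudest is taken to be closest.
    closest = max(reports, key=lambda r: r.volume_db)
    assert closest.device_id == "DeviceB"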

In certain embodiments, the media guidance application may transmit a command to the second voice-operated device. The command could instruct the second voice-operated device to set its response volume level to a second volume level, determined based on the highest volume level. For example, the media guidance application might instruct DeviceB, the second voice-operated user device, to set its response volume level to 52 dB. Alternatively, the media guidance application might instruct DeviceB to set its response volume level to 53 dB, slightly higher than the highest volume level, to account for ambient noise.
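The command itself could take many forms; the Python sketch below assumes a simple JSON message sent over a local TCP connection, with a small headroom added for ambient noise. The message schema and helper name are hypothetical.

    import json
    import socket

    HEADROOM_DB = 1.0  # respond slightly above the received level

    def send_set_volume(device_addr: tuple[str, int],
                        received_db: float) -> float:
        response_db = received_db + HEADROOM_DB  # e.g., 52 dB -> 53 dB
        msg = {"command": "set_response_volume", "volume_db": response_db}
        with socket.create_connection(device_addr, timeout=2.0) as conn:
            conn.sendall(json.dumps(msg).encode("utf-8"))
        return response_db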

In certain embodiments, the media guidance application may generate an audible response to the voice command through the second voice-operated device at the second volume level. For example, DeviceB, the second voice-operated device, may repeat at a volume of 53 dB the last line of the movie that the user is watching (e.g., "May the Force be with you"). In some embodiments, the first and second voice-operated devices may be the same device, and the first and second volume levels may likewise be the same.

In some embodiments, the media guidance application may detect that different voice-operated devices (e.g., DeviceA, DeviceB, and DeviceC) use different equipment, methods, or sensitivities to detect voice commands. In that case, the highest volume level may not correspond to the voice-operated device closest to the user issuing the voice command. The media guidance application may therefore adjust the plurality of volume levels to account for the differences among voice-operated devices before determining the highest volume level. The media guidance application may also use other factors in some instances to determine which voice-operated device is closest, such as infrared (IR) detection of the distance between each voice-operated device and the user issuing the voice command.
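One simple way to account for such hardware differences, sketched below under the assumption that each device has a known calibration offset, is to normalize the reported volumes before comparing them; the offsets shown are hypothetical.

    # Hypothetical per-device calibration offsets, in dB.
    SENSITIVITY_OFFSET_DB = {"DeviceA": 0.0, "DeviceB": -2.0, "DeviceC": 1.5}

    def normalized_volume(device_id: str, reported_db: float) -> float:
        # Subtract the offset so a device with a more sensitive microphone
        # does not always appear to be the closest device.
        return reported_db - SENSITIVITY_OFFSET_DB.get(device_id, 0.0)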

In some embodiments, the media guidance application can identify users other than the one who issued the voice command who would be interested in hearing the audible response. In such embodiments, the audible response may be output by more than one of the plurality of voice-operated devices, simultaneously or at different volumes, to ensure that all interested users can hear the reply. For example, a first user, UserA, might ask which TV show is being shown on HBO, and the media guidance application may determine that the program is "Game of Thrones." A second user, UserB, may also be interested in the program "Game of Thrones." The media guidance application may therefore generate an audible response, such as "Game of Thrones is currently on HBO," to UserA's voice command via the second voice-operated device and to UserB via a third voice-operated device. For example, the audible response generated by the second voice-operated device may be at a volume level of 53 dB, and the audible response generated by the third voice-operated device may be at a volume level of 55 dB. The second and third volume levels can each be chosen based on how easily the respective user can hear the audible response.

The media guidance application may identify the user's profile in order to determine the second volume level. A user profile could contain a hearing data structure that stores a plurality of user volume levels the user has previously acknowledged hearing. The media guidance application may determine the lowest user volume level among the plurality; for example, 40 dB may be the lowest volume level the user has previously acknowledged hearing. The second volume level (the volume of the audible response) may then be determined based on this lowest user volume level. For example, the highest volume level at which any of the plurality of devices (such as DeviceA, DeviceB, or DeviceC) received the voice command may be 35 dB; to ensure that the user hears the audible response, the second volume level could be raised to 40 dB.
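A minimal Python sketch of this adjustment follows, assuming the hearing data structure is available as a list of previously acknowledged volume levels; the function name is illustrative.

    def response_volume_db(highest_received_db: float,
                           acknowledged_db: list[float]) -> float:
        # Never respond below the quietest level the user is known to hear.
        lowest_heard = min(acknowledged_db)  # e.g., 40 dB
        return max(highest_received_db, lowest_heard)

    # 35 dB was the loudest any device heard the command, but the user has
    # never acknowledged anything quieter than 40 dB, so respond at 40 dB.
    assert response_volume_db(35.0, [40.0, 47.0, 55.0]) == 40.0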

In some embodiments, the response to the user's voice command may also be displayed visually on a display device. The media guidance application can identify a display device associated with the user; for example, the media guidance application could interface, via the user profile, with a television associated with the user. The media guidance application may generate a visual representation of the audible response, for example by generating a window on the television and displaying the response in that window. For instance, the media guidance application might display the title "Game of Thrones" when the user requests the name of the show. The display window can also include a reference to the device that generated the audible reply to the request. For example, the media guidance application might display "DeviceB said 'Game of Thrones.'" This tells the user which device they are communicating with and, consequently, which device is closest to them.

In certain embodiments, the media guidance application might determine, using the user profile, that the user is hearing impaired. This information could, for example, be stored in the hearing data structure. The user might not be able to hear any audible response. In that case, similarly to the description above, the media guidance application may identify the display device associated with the user, generate a visual representation of the audible response, and transmit that visual representation to the display device. The media guidance application may, for example, transmit the visual representation to a mobile phone associated with the user.

In some cases, the media guidance application may wait for acknowledgement from the user that they have heard the audible response. If none is received, the media guidance application can generate another audible response through the second voice-operated user device or generate a visual representation of the audible response for display on a user device. The media guidance application can determine a first time corresponding to when the audible response was generated, for example by saving a time stamp to a data structure when the audible response is generated. The audible response could have been generated at 3:02:03 PM. The media guidance application may then add a time period to the first time to calculate a second time. For example, the time period could be 20 seconds, making the second time 3:02:23 PM. This is how long the media guidance application waits after responding to the user's voice command.

There are many ways to determine how long to wait for the user to acknowledge the response. The media guidance application may use the user profile to determine the user's average response time. The media guidance application may identify the user profile via a keyword spoken by the user, where the keyword is associated with a specific user, or via the user's speech patterns. A user profile could contain, for instance, a first data structure with information about the amounts of time the user has taken to respond to voice-operated devices in the past. The media guidance application might average these past response times to determine an average response time for the user. For example, the user might have taken 10 seconds, 5 seconds, and 15 seconds to respond to the second voice-operated device in the past; the time period to wait for a reply could then be set to 10 seconds, the average of the previous response times.
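A minimal Python sketch of this timing logic, assuming the profile stores past response times in seconds:

    from datetime import datetime, timedelta

    past_response_times_s = [10.0, 5.0, 15.0]  # from the user profile
    wait_s = sum(past_response_times_s) / len(past_response_times_s)  # 10 s

    first_time = datetime.now()  # when the audible response was generated
    second_time = first_time + timedelta(seconds=wait_s)
    # The application stops waiting for an acknowledgement at second_time.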

In certain embodiments, if the user does not acknowledge the device within the specified time, the media guidance application will generate another response or repeat the initial audible response to the voice command. An acknowledgement is a confirmation that the user heard the audible response. For example, the audible response of DeviceB, the second voice-operated device, may be "May the Force be with you," and the user might acknowledge this response by saying "Thank you, DeviceB."

If no acknowledgement is received within the specified time, the media guidance application may generate a second audible response or a visual representation of the audible response. In some embodiments, for example, the media guidance application may transmit, based upon whether acknowledgement was received by a third time, a visual representation of the audible response to a display device associated with the user. The third time corresponds to a time before the second time. For example, the audible response could have been generated at 3:02:03 PM; if the time period is twenty seconds, the second time will be 3:02:23 PM. If the media guidance application determines that DeviceB has not received an acknowledgement by 3:02:23 PM, the television associated with the user will display the visual representation of the audible response (e.g., "May the Force be with you").

In certain embodiments, the media guidance application generates another audible reply if the voice-operated device does not receive acknowledgement within the specified time. That is, depending on whether acknowledgement was received by the third time, the media guidance application can generate a second audible response via the second voice-operated device. The second audible response can be the same as the first audible response, in which case the second voice-operated device repeats the audible response at the same volume or at a different volume. Alternatively, the second audible response may prompt the user for a response. For example, if DeviceB has not received acknowledgement from the user by 3:02:23 PM, it may generate a second audible response asking whether the user heard "May the Force be with you."
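The escalation logic might be sketched in Python as follows; the playback and display helpers are stand-ins for whatever output mechanism a given embodiment uses, not functions named in the disclosure.

    def show_on_display(text: str) -> None:
        # Stand-in for rendering a window on an associated display device.
        print(f"[display] {text}")

    def play_audible(text: str, volume_db: float) -> None:
        # Stand-in for speech output on the second voice-operated device.
        print(f"[speaker @ {volume_db:.0f} dB] {text}")

    def handle_no_acknowledgement(user_has_display: bool,
                                  response_text: str,
                                  second_volume_db: float) -> None:
        if user_has_display:
            # e.g., show "May the Force be with you" in a window on the TV.
            show_on_display(response_text)
        else:
            # Repeat (or prompt about) the response at a higher volume.
            play_audible(response_text, second_volume_db + 3.0)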

In certain embodiments, the second audible response can be generated at the same volume as the first or at a louder volume. The media guidance application might determine that the third volume level, at which the second audible response is generated, should be greater than the second volume level. For example, the third volume level could be the second volume level plus some pre-determined amount: the second volume level might be 53 dB and the third volume level 56 dB. The second audible response (e.g., "May the Force be with you") is then generated through DeviceB at 56 dB.

In certain aspects, the media guidance application might identify the user profile in order to generate the second audible reply. The media guidance application might identify the user profile through a keyword spoken by the user; for example, the user might say "UserA" before issuing a voice command. The user profile may also be identified by the user's speech or vocal patterns.

In certain embodiments, the media guidance application may use the user's profile to determine the average speaking volume of the user. The user profile might contain a first data structure that stores speaking volumes the user has used in the past, and the media guidance application can average these to determine the user's average speaking volume. For example, the user's average speaking volume may be 60 dB.

The media guidance application might then calculate the difference between the average speaking volume and the highest volume level (i.e., the volume received by the voice-operated device closest to the user). For example, the highest volume level could be 52 dB while the user's average speaking volume is 60 dB; the difference in this example is 8 dB.

This difference and the second volume level can then be used to determine the third volume level (the volume of the second audible reply). For example, with a difference of 8 dB and a second volume level of 53 dB, the third volume level would be 61 dB. In such a case, the second voice-operated user device would emit the second audible response at 61 dB.
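The arithmetic of this example reduces to a few lines of Python, using the values stated above:

    avg_speaking_db = 60.0      # average speaking volume from the profile
    highest_received_db = 52.0  # loudest level any device received
    second_volume_db = 53.0     # volume of the first audible response

    difference_db = avg_speaking_db - highest_received_db  # 8 dB
    third_volume_db = second_volume_db + difference_db     # 61 dB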

In some cases, the media guidance application may receive an acknowledgement from the user. Upon receiving the acknowledgement, it may store the second volume level (the volume at which the initial audible response was heard) in the user profile. For example, the user profile might contain a hearing data structure that stores a plurality of user volume levels the user has heard in the past, and the second volume level may be added to this data structure.

It is important to note that the systems and/or methods described above may be applied to, or used in accordance with, other systems, methods, and/or apparatuses described in this disclosure.

Systems and methods are described herein for responding to a voice command at a volume level that is based on the volume level of the voice command. Because the response volume level is determined from the volume level of the voice command, users can change the response volume without manually adjusting the voice-operated device.

FIG. 1 illustrates an example of multiple voice-operated user devices detecting a voice command, in accordance with some embodiments. A first voice-operated user device 102 may be located at one end of a couch, while a second voice-operated user device 104 is at the other end. A first user 108 may be seated at the end of the couch closest to the first voice-operated device 102, and a second user 110 may be seated at the opposite end, near the second voice-operated user device 104. The second user 110 might utter a voice command 106 that is heard by both the first voice-operated device 102 and the second voice-operated device 104. Based on the volume level of the voice command received by each voice-operated device 102, 104, the systems and methods described herein can determine that the second voice-operated user device 104 is closer to the second user 110 who issued the voice command 106. The volume level at which the second voice-operated device 104 makes its response 112 may be determined based on the volume level of the voice command 106. For example, the second user 110 and the first user 108 may be watching a movie (e.g., "Star Wars"), and the voice command 106 of the second user may be a whispered request for the movie's last line. The second voice-operated device 104 may reply 112 by whispering the last line (e.g., "May the Force be with you") back to the second user 110. Note that although the voice command 106 was also received by the first voice-operated device 102, only the second voice-operated device 104 generates the response 112.

These systems and methods can be implemented by a media guidance application. The media guidance application may be connected to a plurality of voice-operated user devices 102, 104, and 114, which could include, for instance, DeviceA, DeviceB, and DeviceC. The media guidance application may detect, via a first voice-operated user device 102, a voice command 106 spoken by a user. For example, the media guidance application might detect, via the first voice-operated device 102 (e.g., DeviceA), the user speaking a voice command 106 such as "Repeat the last sentence," in reference to the movie the user is currently watching.

The media guidance application may determine a first volume level at which the voice command 106 is received by the first voice-operated device 102. The first volume level could be, for example, the average input volume (e.g., 48 dB) detected at the first voice-operated user device 102 (e.g., DeviceA).

In certain embodiments, to determine the first volume level, the media guidance application may measure the unfiltered volume level of the voice command 106. The unfiltered volume level of the voice command may be, for example, 60 dB; however, this unfiltered volume level may contain background noise, such as a TV playing a movie close to the user 110.

The media guidance application may detect the background noise level and filter the voice command 106 to reduce the noise. For example, the movie playing on the TV may be at a higher frequency than the voice of the user 110, so the media guidance application might filter out high-frequency components of the voice command 106 to remove the background noise. The media guidance application may then calculate a filtered volume level for the voice command 106. For example, the filtered volume level of the voice command could be 48 dB, whereas the unfiltered volume level was 60 dB.

The voice command 106 may be detected by several voice-operated devices (e.g., DeviceA 102, DeviceB 104, and DeviceC 114). The volume level at which each of these devices 102, 104, and 114 receives the voice command may vary depending on its proximity to the user. Each voice-operated device 102, 104, and 114 is thus associated with one of a plurality of volume levels of the voice command 106. The media guidance application might receive, from each voice-operated device, a data structure that contains a volume level and a voice-operated device identifier. For example, the media guidance application might receive, from the second voice-operated user device 104, a data structure with a volume level of 52 dB and a device identifier DeviceB, and, from the third voice-operated device 114, a data structure with a volume level of 50 dB and a device identifier DeviceC.

In certain embodiments, the media guidance application may compare the first volume level with the plurality of volume levels. For example, the media guidance application might compare the 48 dB associated with DeviceA 102 with the 52 dB associated with DeviceB 104 and the 50 dB associated with DeviceC 114.

In some instances, the media guidance application may determine, based upon comparing the first volume level with the plurality of volume levels, the highest volume level among the plurality of volume levels. For example, 52 dB may be the highest volume level at which any of the plurality of voice-operated devices (e.g., voice-operated devices 102, 104, and 114) received the voice command.

In certain embodiments, the media guidance application may search at least one data structure to find the second voice-operated device 104 associated with the highest volume level. The device that received the command at the highest volume will be the one closest to the user who issued the voice command, because the closer a device is to the user, the louder the voice command will sound to it. If the highest volume level is 52 dB, the media guidance application can search the data structures for the voice-operated device that corresponds to the 52 dB volume level. The 52 dB volume level is identified in the data structure with device identifier DeviceB, which represents the second voice-operated device 104. In this example, the second voice-operated device 104 (e.g., DeviceB) is the voice-operated device closest to the user 110 who issued the voice command.

In certain embodiments, the media guidance application may transmit a command to the second voice-operated device 104. The command could instruct the second voice-operated device 104 to set its response volume level to a second volume level, determined based on the highest volume level. For example, the media guidance application might instruct DeviceB, the second voice-operated device 104, to set its response volume level to 52 dB. Alternatively, the media guidance application might instruct DeviceB to set its response volume level to 53 dB, slightly higher than the highest volume level, to account for ambient noise.

In certain embodiments, the media guidance application may generate an audible response 112 to the voice command 106 through the second voice-operated device 104 at the second volume level. For example, DeviceB, the second voice-operated device 104, may repeat at a volume of 53 dB the last line of the movie that the user is watching (e.g., "May the Force be with you"). In some embodiments, the first and second voice-operated devices may be the same device, and the first and second volume levels may likewise be identical.

In certain embodiments, the media guidance application may detect that different voice-operated devices (e.g., first voice-operated device 102, second voice-operated user device 104, and third voice-operated device 114) use different equipment, methods, or sensitivities to detect the voice command 106. In that case, the highest volume level may not correspond to the voice-operated device closest to the user 110 issuing the voice command 106. The media guidance application may therefore adjust the plurality of volume levels to account for the differences among voice-operated devices. The media guidance application may also use other factors in some instances to determine which voice-operated device is closest, such as infrared (IR) detection of the distance between each voice-operated device and the user issuing the voice command.

In some instances, the media guidance application may identify users other than the user 110 who issued the voice command 106 who would be interested in hearing the audible response 112 to the voice command 106. In such embodiments, the audible response may be output by more than one of the plurality of voice-operated devices 102, 104, and 114, simultaneously or at different volumes, to ensure that all interested users can hear it. For example, the second user 110 might issue a voice command asking which television show is being shown on HBO, and the media guidance application might determine that the program is "Game of Thrones." The media guidance application may also determine that the first user 108 is interested in the program "Game of Thrones." The media guidance application may therefore generate an audible response, such as "Game of Thrones is currently on HBO," to the voice command of the second user 110 via the second voice-operated device 104 and to the first user 108 via the third voice-operated device 114. For example, the second voice-operated device 104 may generate the audible response at a volume level of 53 dB, and the third voice-operated device 114 may generate the response at a volume level of 55 dB. The second and third volume levels can be determined based on each user's ability to hear the audible response.

In certain embodiments, the media guidance application may detect that a user is moving. The media guidance application might measure volume levels both when the user begins speaking and when the user stops speaking. For example, the volume level received at the third voice-operated user device 114 when the user begins issuing the voice command may be higher than the volume level received at the third voice-operated user device 114 after the user has finished. Conversely, the volume level received at the second voice-operated user device 104 when the user begins speaking may be lower than the volume level received at the second voice-operated user device 104 when the user stops speaking. Using these changes in volume levels, the media guidance application can determine that the user is moving away from the third voice-operated user device and toward the second voice-operated user device. Accordingly, the media guidance application can determine multiple voice-operated user devices of the plurality that lie along the user's path.
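A minimal Python sketch of this inference, assuming each device reports its received volume at the start and end of the command (the values shown are illustrative):

    start_db = {"DeviceB": 44.0, "DeviceC": 55.0}  # user begins speaking
    end_db = {"DeviceB": 52.0, "DeviceC": 47.0}    # user stops speaking

    deltas = {d: end_db[d] - start_db[d] for d in start_db}
    moving_toward = max(deltas, key=deltas.get)  # largest volume increase
    moving_away = min(deltas, key=deltas.get)    # largest volume decrease
    print(f"User is moving toward {moving_toward}, away from {moving_away}")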

In certain embodiments, the media guidance application may generate parts of the audible response through multiple voice-operated user devices. Multiple voice-operated user devices can generate the audible reply, or a part of the audible reply, simultaneously or at different times. For example, the voice command could request a translation of a song from Spanish to English. The third voice-operated user device 114 may play the first ten seconds of the translation (the first component of the audible reply), while the first voice-operated user device 102 may play the next ten seconds of the song (the last component). The media guidance application may adjust each device's volume to match (e.g., the second volume level), so that all devices respond at the same volume level. For example, the second volume level could be 57 dB, and each device may respond at 57 dB. In such embodiments, the audible response may follow the user's movement, allowing the user to hear the audible response as they move near different devices.
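This hand-off could be sketched as follows, assuming the response has already been split into segments and the devices are ordered along the user's path; the playback call is a stand-in for a real output mechanism.

    segments = ["first ten seconds of the translation",
                "next ten seconds of the translation"]
    path_devices = ["DeviceC", "DeviceA"]  # ordered along the user's movement
    common_volume_db = 57.0                # the matched second volume level

    for device, segment in zip(path_devices, segments):
        # Each device plays its portion in turn at the common volume.
        print(f"[{device} @ {common_volume_db:.0f} dB] {segment}")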

In some embodiments, the media guidance application may identify the user profile associated with the user 110 who issued the voice command 106. A user profile could contain a hearing data structure that stores a plurality of user volume levels that user 110 has previously acknowledged hearing. The media guidance application may determine the lowest user volume level among the plurality; for example, 40 dB may be the minimum volume that user 110 has acknowledged hearing. The second volume level (the volume of the audible response) may then be determined based on this lowest user volume level. For example, the highest volume level at which any of the user devices (such as DeviceA 102, DeviceB 104, or DeviceC 114) received the voice command may be 35 dB; to ensure that user 110 hears the audible response, the second volume level could be raised to 40 dB.

FIG. 2 shows an illustrative example of a voice-operated user device responding to a user's voice command, in accordance with some embodiments. The media guidance application may wait for acknowledgement 212 from the user 202 that the user has heard the audible reply 206. If no acknowledgement is received within a specified time, the media guidance application can generate a second audible response 210 to the voice command 204 through the second voice-operated user device 208, or display a visual representation of the audible response on a user device, as described below in connection with FIG. 3. The media guidance application can determine a first time corresponding to when the audible response 206 was generated, for example by saving a time stamp to a data structure when the audible reply 206 is generated. The audible response could have been generated at 3:02:03 PM. The media guidance application may then add a time period to calculate a second time. For example, the time period could be 20 seconds, making the second time 3:02:23 PM. This is how long the media guidance application waits after responding to the voice command 204 of the user 202.

There are many ways to determine how long to wait for the user to acknowledge the response. The media guidance application may use the user profile to determine the time period. The media guidance application may identify the user profile via a keyword spoken by the user 202, where the keyword is associated with a specific user, or via the user's speech patterns. For example, a first data structure in the user profile could contain information about the amounts of time the user has taken to respond to voice-operated devices in the past. The media guidance application may average these past response times to calculate the average response time for user 202. For example, the user 202 might have taken 10 seconds, 5 seconds, and 15 seconds to respond to the second voice-operated device 208 in the past; the time to wait for a response could then be set to 10 seconds, the average of the previous response times.

In certain embodiments, if the user does not acknowledge the device 208 within the specified time, the media guidance application will generate another response 210 or repeat the initial audible response 206 to the voice command 204. The acknowledgement 212 confirms that the user heard the audible response. For example, the audible response 206 from DeviceB may be "May the Force be with you," and the user 202 might acknowledge this response by saying "Thank you, DeviceB."

If acknowledgement is not received within the specified time, a second audible response 210 or a visual representation of the audible reply 206 may be generated. In some embodiments, for example, the media guidance application may transmit, depending on whether acknowledgement 212 has been received by a third time, a visual representation (such as that shown in window 310 of FIG. 3) of the audible response to a display device (such as device 308 of FIG. 3) associated with the user. The third time is prior to the second time. For example, the audible response 206 could have been generated at 3:02:03 PM; if the time period is twenty seconds, the second time will be 3:02:23 PM. If the media guidance application determines that DeviceB has not received an acknowledgement by 3:02:23 PM, the television associated with the user will display the visual representation of the audible response 206 (e.g., "May the Force be with you").

In certain embodiments, the media guidance application generates a second audible response 210 if the voice-operated device 208 does not receive an acknowledgement from the user within the specified time. That is, the media guidance application can generate the second audible reply 210 depending on whether acknowledgement (such as acknowledgement 212) has been received by the third time. The second audible reply 210 may be the same as the audible response 206, in which case the second voice-operated device repeats the audible response at the same volume or at a different volume. Alternatively, the second audible reply 210 might prompt the user for a response. For example, if the second voice-operated user device 208 (e.g., DeviceB) has not received acknowledgement from the user by 3:02:23 PM, the media guidance application may generate the second audible reply 210 asking whether the user heard "May the Force be with you."

In certain embodiments, the second audible response 210 may be generated at the same volume as the first response or at a louder volume. The media guidance application might determine that the third volume level, at which the second audible response 210 is generated, should be higher than the second volume level. For example, the third volume level could be the second volume level plus a pre-determined amount: the second volume level might be 53 dB and the third volume level 56 dB. The second audible response 210 (e.g., "May the Force be with you") is then generated via the second voice-operated device (e.g., DeviceB) at 56 dB.

In certain aspects, the media guidance application might identify the user profile of the user 202 in order to generate the second audible response. The media guidance application might identify the user profile via a keyword spoken by the user 202; for example, the user might say "UserA" before issuing a voice command. The user profile may also be identified by the speech or vocal patterns of the user 202.

In certain embodiments, the media guidance application may use the user's profile to determine the average speaking volume of the user 202. The user profile might contain a first data structure that stores speaking volumes the user has used in the past, and the media guidance application can average these to determine the user's average speaking volume. For example, the user's average speaking volume may be 60 dB.

The media guidance application may then determine the difference between the average speaking volume and the maximum volume level (i.e., the volume of the voice command as received by the voice-operated user device 208 closest to the user 202). For example, the maximum volume level may be 52 dB and the user's average speaking volume 60 dB, a difference of 8 dB.

This difference may be added to the second volume level to determine the third volume level (the volume of the second audible response 210). For example, with a difference of 8 dB and a second volume level of 53 dB, the third volume level would be 61 dB, and the second voice-operated user device 208 would emit the second audible response 210 at 61 dB.
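
The profile-based computation in the last three paragraphs can be restated the same way: average the stored speaking volumes, subtract the loudest received volume, and add the difference to the second volume level. This sketch merely reproduces the 60 dB / 52 dB / 53 dB example; the function names are invented.

    def average_speaking_volume(past_volumes_db):
        """Average of the speaking volumes stored in the user's profile."""
        return sum(past_volumes_db) / len(past_volumes_db)

    def third_volume_from_profile(past_volumes_db, max_received_db, second_volume_db):
        """Raise the repeated response by the gap between how loudly the user
        normally speaks and how loudly the closest device heard the command."""
        difference = average_speaking_volume(past_volumes_db) - max_received_db
        return second_volume_db + difference

    # Average speaking volume 60 dB, closest device heard 52 dB, first
    # response played at 53 dB -> repeat at 61 dB.
    assert third_volume_from_profile([58.0, 60.0, 62.0], 52.0, 53.0) == 61.0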

In certain embodiments, when the media guidance application receives an acknowledgement 212 from the user, it may store the second volume level (the volume of the initial audible response 206) in the user profile. The user profile may contain a hearing data structure listing volume levels at which the user 202 has been able to hear responses in the past; this second data structure may then include the second volume level.
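
A user profile as described here reduces to two per-user lists: past speaking volumes (the first data structure) and volumes the user has successfully heard (the hearing data structure). A minimal sketch with hypothetical field names:

    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        name: str
        past_speaking_volumes_db: list = field(default_factory=list)  # first data structure
        heard_volumes_db: list = field(default_factory=list)          # hearing data structure

        def record_acknowledged_response(self, volume_db):
            """On acknowledgement 212, remember the user heard this volume level."""
            self.heard_volumes_db.append(volume_db)

    profile = UserProfile("UserA")
    profile.record_acknowledged_response(53.0)  # the second volume level from the example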

FIG. 3 shows an illustrative example of a visual representation of a response to a voice command. In some embodiments, the response to the voice command 304 may be displayed visually on a display device 308. The media guidance application may identify the display device 308 associated with the user 302 through a user profile; for example, the media guidance application may interface with a television associated with the user 302. The media guidance application may generate a visual representation of the audible response, for example by generating a window 310 on the television and displaying the response in the window 310. For instance, when the request 304 from the user 302 is a query about which line of the movie was just shown, the media guidance application may display the last line of the movie the user 302 is currently watching. After the request 304 is received, the display window 310 may also include a reference 306 to the device that generated the audible response. For example, the media guidance application may display "DeviceB said: 'May the Force be with you.'" This tells the user which device they are talking to and, consequently, which device is closest to them.

In certain embodiments, the media guidance application may determine from the user profile that the user 302 has a hearing impairment. This information may, for example, be stored in the hearing data structure. The user 302 may not be able to hear any audible responses. Similar to the description above, the media guidance application may identify the display device 308 associated with the user, generate a visual representation of the audible response, and transmit that visual representation to the display device 308, for example, a television 308 associated with the user 302.

A "continuous listening device," as used herein, is a device that can, when powered on, continuously monitor audio without the user needing to prompt it (e.g., by pressing a button) to prepare it for an input command. A continuous listening device may monitor audio for a prompt or keyword (e.g., "Hello, Assistant") that activates an active listening state, or may monitor and process all audio in a passive listening state. A "passive listening state," as used herein, is a mode of operation of a continuous listening device in which audio is continuously or temporarily recorded and processed without the user having prompted the device to do so. In a passive listening state the device processes all audio input, in contrast to an active listening state, in which the device processes audio only after a prompt or keyword. The continuous listening device may store received audio in a buffer of set length; for example, it may store five minutes of audio, in which case the oldest audio information is erased as new audio is recorded. In some embodiments, all audio is stored persistently and may be erased through routine housekeeping or manually by the user.
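
The rolling buffer described for the passive listening state behaves like a fixed-capacity queue: once the buffer holds the configured amount of audio, each new chunk pushes out the oldest one. A minimal sketch, assuming one-second audio chunks and the five-minute figure from the example:

    from collections import deque

    CHUNK_SECONDS = 1   # assumed chunk size
    BUFFER_MINUTES = 5  # the five-minute example from the text

    class PassiveListeningBuffer:
        """Keeps only the most recent BUFFER_MINUTES of audio."""

        def __init__(self):
            capacity = BUFFER_MINUTES * 60 // CHUNK_SECONDS
            self._chunks = deque(maxlen=capacity)  # oldest chunk drops automatically

        def record(self, chunk):
            self._chunks.append(chunk)

        def contents(self):
            return b"".join(self._chunks)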

A "voice-operated user device," as used herein, is a device that continuously listens for audio input and keywords and processes the audio input once an address keyword is detected. As described above, a voice-operated user device may therefore be a continuous listening device. Voice-operated user devices may use a passive listening state, an active listening state, or any combination of the two.

The amount of content available in any content delivery system can be overwhelming. Many users desire a form of media guidance that makes it easy to navigate content selections and identify the content they are looking for. An application that provides such guidance is referred to herein as an interactive media guidance application or, sometimes, a media guidance application or a guidance application.

Interactive media guidance applications may take various forms depending on the content for which they provide guidance. One typical type is an interactive television program guide. Interactive television program guides (sometimes referred to as electronic program guides) are well-known guidance applications that allow users to navigate among and locate many types of content or media assets. Interactive media guidance applications may generate graphical user interface screens that enable a user to navigate among, locate, and select content. As referred to herein, the terms "media asset" and "content" are interchangeable and should be understood to mean an electronically consumable asset, such as television programming, on-demand programs (as in video-on-demand (VOD) systems), Internet content (e.g., streaming content, downloadable content, and Webcasts), video clips, audio content, images, rotating images, documents, and playlists. Guidance applications also allow users to navigate among and locate such content. As referred to herein, the term "multimedia" should be understood to mean content that utilizes at least two of the content forms described above. Content may be recorded, played, displayed, or accessed by user equipment devices, but can also be part of a live performance.

The media guidance application, and/or any instructions for performing any of the embodiments discussed herein, may be encoded on computer-readable media. Computer-readable media includes any media capable of storing data. The computer-readable media may be transitory (e.g., propagating electrical or electromagnetic signals) or non-transitory (e.g., volatile and non-volatile computer memory or storage devices).

With the advent of mobile computing and high-speed wireless networks, users can now access media on their user equipment devices in ways they previously could not. As referred to herein, the phrases "user equipment device," "user equipment," "user device," "electronic device," "electronic equipment," "media equipment device," and "media device" should be understood to mean any device for accessing the content described above, such as a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD), a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a personal computer, or any other television equipment, computing equipment, or wireless device, and/or combination of the same. The user equipment device may have multiple screens or multiple angles of view, and may in some instances have a front-facing camera and/or a rear-facing camera. On these user equipment devices, users may be able to navigate among and locate the same content available through a television; consequently, media guidance may be available on these devices as well. The guidance provided may be for content available only through a television, content available only through one or more other types of user equipment devices, or content available through both a television and one or more other types of user equipment devices. Media guidance applications may be provided as on-line applications (i.e., provided through a website) or as stand-alone applications or clients on user equipment devices. Various devices and platforms that may implement media guidance applications are described in more detail below.

One function of the media guidance application is to provide media guidance data to users. As referred to herein, the phrase "media guidance data" or "guidance data" should be understood to mean any data related to content or used in operating the guidance application. For example, the guidance data may include program information, guidance application settings, user preferences, media listings, media-related information (e.g., broadcast times, broadcast channels, titles, descriptions, ratings information (e.g., parental control ratings, critics' ratings, etc.), genre or category information, actor information, logo data for broadcasters or providers, etc.), media format (e.g., standard definition, high definition, 3D, etc.), on-demand information, blogs, websites, and any other type of guidance data that helps a user navigate among and locate desired content selections.

FIGS. 4-5 show illustrative display screens that may be used to provide media guidance data. The display screens shown in FIGS. 4-5 may be implemented on any suitable user equipment device or platform. While the displays of FIGS. 4-5 are illustrated as full-screen displays, they may also be fully or partially overlaid over displayed content. A user may indicate a desire to access content information by selecting a selectable option provided on a display screen (e.g., a menu option, a listings option, an icon, a hyperlink, etc.) or by pressing a dedicated button (e.g., a GUIDE button) on a remote control or other user input device. In response, the media guidance application may display media guidance data organized in one of several ways, such as by time and channel in a grid, by time, by channel, by source, by content type, by category (e.g., movies, sports, news, children's, or other categories of programming), or by other predefined, user-defined, or other organization criteria.

FIG. 4 shows an illustrative program listings display 400 arranged by time and channel. Display 400 may include grid 402 with: (1) a column of channel/content type identifiers 404, where each channel/content type identifier (a cell in the column) identifies a different channel or content type; and (2) a row of time identifiers 406, where each time identifier (a cell in the row) identifies a block of programming time. Grid 402 may also include cells of program listings, such as program listing 408, where each listing provides the title of the program and its associated time. With a user input device, a user can select program listings by moving highlight region 410. Information relating to the program listing selected by highlight region 410 may be provided in program information region 412, which may include the program title, the program description, the time the program is provided (if applicable), the channel on which the program is shown (if applicable), and other pertinent information.
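
The grid layout of display 400 maps naturally onto a two-level structure: channel identifiers down one axis, time identifiers across the other, and a listing in each cell. A minimal sketch; the channel names and program titles are invented sample data.

    # Grid 402 as a mapping: channel identifier 404 -> time identifier 406 -> listing 408.
    grid = {
        "Channel 2": {"7:00 PM": "Evening News", "7:30 PM": "Quiz Hour"},
        "Channel 4": {"7:00 PM": "Space Drama",  "7:30 PM": "Cooking Show"},
    }

    def listing_at(channel, time_slot):
        """What program information region 412 would describe for highlight region 410."""
        return grid[channel][time_slot]

    print(listing_at("Channel 4", "7:30 PM"))  # -> Cooking Show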

In addition to providing access to linear programming (i.e., content that is transmitted to a plurality of user equipment devices at the same time according to a set schedule), the media guidance application also provides access to non-linear programming. Non-linear programming may include content from different sources, including on-demand content (e.g., VOD), Internet content (e.g., streaming media, downloadable media), locally stored content (e.g., content stored on a user equipment device described above or another storage device), and other time-independent content. On-demand content may include movies or any other content provided by a particular content provider (e.g., HBO On Demand providing "The Sopranos" and "Curb Your Enthusiasm"). HBO ON DEMAND is a service mark owned by Time Warner Company L.P. et al., and THE SOPRANOS and CURB YOUR ENTHUSIASM are trademarks owned by Home Box Office, Inc.

Grid 402 may provide media guidance data for non-linear programming, including on-demand listing 414, recorded content listing 416, and Internet content listing 418. A display that combines media guidance data for content from different sources is sometimes referred to as a "mixed-media" display. Display 400 may show different permutations of media guidance data depending on user selection or guidance application definition (e.g., only recorded and broadcast listings, or only on-demand and broadcast listings). Listings 414, 416, and 418 are shown spanning the entire time block displayed in grid 402 to indicate that selecting them may provide access to a display dedicated to on-demand listings, recorded listings, or Internet listings, respectively. In some embodiments, listings for these content types may be included directly in grid 402. Additional media guidance data may be displayed in response to the user selecting one of the navigational icons 420; pressing an arrow key on a user input device may affect the display in a similar manner as selecting navigational icons 420.

Display 400 may also include video region 422 and options region 426. Video region 422 may allow the user to view and/or preview programs that are currently available, will be available, or were available. The content of video region 422 may correspond to, or be independent from, one of the listings displayed in grid 402. Grid displays that include a video region are sometimes referred to as picture-in-guide (PIG) displays. PIG displays and their functionalities are described in greater detail in Satterfield et al., U.S. Pat. No. 6,564,378, issued May 13, 2003, and Yuen et al., U.S. Pat. No. 6,239,794, issued May 29, 2001, which are hereby incorporated by reference herein in their entireties. PIG displays may be included in other media guidance application display screens of the embodiments described herein.

Options region 426 may allow the user to access different types of content, media guidance application displays, and/or media guidance application features. Options region 426 may be part of display 400 (and the other display screens described herein), or it may be invoked by the user selecting an on-screen option or pressing a dedicated or assignable button on a user input device. The selectable options within options region 426 may concern features related to the program listings in grid 402 or may include options available from a main menu display. Features related to program listings may include searching for other air times or ways of receiving a program, recording a program, enabling series recording of a program, setting a program and/or channel as a favorite, or purchasing a program. Options available from a main menu display may include search options, VOD options, cloud-based options, device synchronization options, second screen device options, options to access various types of media guidance data displays, options to edit a user's profile, options to access a browse overlay, and other options.

The media guidance application may be personalized based on a user's preferences. A personalized media guidance application allows a user to customize displays and features to create a personalized experience with the media guidance application. This personalized experience may be created by allowing the user to input customizations and/or by the media guidance application monitoring user activity to determine user preferences. Users may access their personalized guidance application by logging in or otherwise identifying themselves to the guidance application. Customization of the media guidance application may be made in accordance with a user profile. The customizations may include varying presentation schemes (e.g., color scheme of displays, font size of text, etc.), aspects of content listings displayed (e.g., only HDTV or only 3D programming, user-specified channels based on favorite channel selections, re-ordering the display of channels, recommended content, etc.), desired recording features (e.g., recording or series recordings for particular users, recording quality, etc.), parental control settings, personalized presentation of Internet content (e.g., presentation of social media content, e-mail, electronically delivered articles, etc.), and other desired customizations.

The media guidance application may allow a user to provide user profile information or may automatically compile user profile information. The media guidance application may, for example, monitor the content the user accesses and/or other interactions the user has with the guidance application. Additionally, the media guidance application may obtain all or part of other user profiles related to the user (e.g., from other websites the user accesses, such as www.Tivo.com, from other media guidance applications the user accesses, from other interactive applications the user accesses, etc.), and/or obtain information about the user from other sources that the media guidance application may access. As a result, a user can be provided with a unified guidance application experience across all of their devices. This type of user experience is described in greater detail below in connection with FIG. 7. Additional personalized media guidance application features are described in greater detail in Ellis et al., U.S. Patent Application Publication No. 2005/0251827, filed Jul. 11, 2005, Boyer et al., U.S. Pat. No. 7,165,098, issued Jan. 16, 2007, and Ellis et al., U.S. Patent Application Publication No. 2002/0174430, filed Feb. 21, 2002, which are hereby incorporated by reference herein in their entireties.

FIG. 5 shows a video mosaic display 500 with selectable options 502 for organizing content information based on content type, genre, and/or other criteria. In display 500, television listings option 504 is selected, providing listings 506, 508, 510, and 512 as broadcast program listings. The listings in display 500 may provide graphical images, including still images, cover art, and video clip previews; they may also provide live video of the content or other information indicating to the user the content described by the media guidance data in the listing. Each graphical listing may also be accompanied by text providing further information about the content associated with the listing. For example, listing 508 may include more than one portion, such as media portion 514 and text portion 516. Media portion 514 and/or text portion 516 may be selectable to view the content in full screen or to view information related to the content displayed in media portion 514 (e.g., to view listings for the channel on which the video is displayed).

The listings in display 500 are of different sizes (i.e., listing 506 is larger than listings 508, 510, and 512), but if desired, all the listings may be the same size. Listings may be of different sizes or graphically accentuated to indicate degrees of interest to the user or to emphasize certain content, as desired by the content provider or based on user preferences. Various systems and methods for graphically accentuating content listings are discussed in, for example, Yates, U.S. Patent Application Publication No. 2010/0153885, filed Nov. 12, 2009, which is hereby incorporated by reference herein in its entirety.

Users may access content and the media guidance application (and its display screens described above and below) from one or more of their user equipment devices. FIG. 6 shows an illustrative embodiment of user equipment device 600; more specific implementations of user equipment devices are discussed below in connection with FIG. 7. User equipment device 600 may receive content and data via input/output (hereinafter "I/O") path 602. I/O path 602 may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local or wide area network, and/or other content) and data to control circuitry 604, which includes processing circuitry 606 and storage 608. Control circuitry 604 may be used to send and receive commands, requests, and other suitable data using I/O path 602. I/O path 602 may connect control circuitry 604 (and specifically processing circuitry 606) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but they are shown as a single path in FIG. 6 to avoid overcomplicating the drawing.

Control circuitry 604 may be based on any suitable processing circuitry, such as processing circuitry 606. In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple processors of the same type (e.g., two Intel Core i7 processors) or multiple processors of different types (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry 604 executes instructions for a media guidance application stored in memory (i.e., storage 608). Specifically, control circuitry 604 may be instructed by the media guidance application to perform the functions discussed above and below. For example, the media guidance application may provide instructions to control circuitry 604 to generate the media guidance displays. In some implementations, any action performed by control circuitry 604 may be based on instructions received from the media guidance application.

In client-server based embodiments, control circuitry 604 may include communications circuitry suitable for communicating with a guidance application server or other networks or servers. The instructions for carrying out the functionality described above may be stored on the guidance application server. Communications circuitry may include a cable modem, an integrated services digital network (ISDN) modem, a digital subscriber line (DSL) modem, a telephone modem, an Ethernet card, or a wireless modem for communications with other equipment, or any other suitable communications circuitry. Such communications may involve the Internet or any other suitable communications networks or paths (described in more detail in connection with FIG. 7).

Memory may be an electronic storage device provided as storage 608, which is part of control circuitry 604. As referred to herein, the phrase "electronic storage device" or "storage device" should be understood to mean any device for storing electronic data, computer software, or firmware. Storage 608 may be used to store the various types of content described herein as well as the media guidance data described above. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Cloud-based storage, described in relation to FIG. 7, may be used to supplement storage 608 or instead of storage 608.

Control circuitry 604 may include video generating circuitry and tuning circuitry, such as one or more analog tuners, one or more MPEG-2 decoders or other digital decoding circuitry, high-definition tuners, or any other suitable tuning or video circuits, or combinations of such circuits. Encoding circuitry (e.g., for converting over-the-air, analog, or digital signals to MPEG signals for storage) may also be included. Control circuitry 604 may also include scaler circuitry for upconverting and downconverting content into the preferred output format of user equipment 600, and digital-to-analog and analog-to-digital converter circuitry for converting between digital and analog signals. The tuning and encoding circuitry may be used by the user equipment device to receive and to display, play, or record content, and may also be used to receive guidance data. The circuitry described herein, including, for example, the tuning, video generating, encoding, decoding, encrypting, decrypting, scaler, and analog/digital circuitry, may be implemented using software running on one or more general purpose or specialized processors. Multiple tuners may be provided to handle simultaneous tuning functions (e.g., watch-and-record functions, picture-in-picture (PIP) functions, multiple-tuner recording, etc.). If storage 608 is provided as a device separate from user equipment 600, the tuning and encoding circuitry (including multiple tuners) may be associated with storage 608.

A user may send instructions to control circuitry 604 using user input interface 610. User input interface 610 may be any suitable user interface, such as a remote control, mouse, trackball, keypad, keyboard, touch screen, touchpad, stylus input, joystick, or voice recognition interface. Display 612 may be provided as a stand-alone device or integrated with other elements of user equipment device 600. For example, display 612 may be a touchscreen or touch-sensitive display, in which case user input interface 610 may be integrated with or combined with display 612. Display 612 may be one or more of a monitor, a television, a liquid crystal display (LCD), an electrofluidic display, a cathode ray tube display, a light-emitting display, or any other suitable equipment for displaying visual images. In some embodiments, display 612 may be HDTV-capable. In some embodiments, display 612 may be a 3D display, and the interactive media guidance application and any suitable content may be displayed in 3D. A video card or graphics card may generate the output to display 612. The video card may offer various functions such as accelerated rendering of 3D scenes, MPEG-2/MPEG-4 decoding, TV output, and the ability to connect multiple monitors. The video card may be any processing circuitry described above in relation to control circuitry 604, or it may be integrated with control circuitry 604. Speakers 614 may be integrated with other elements of user equipment device 600 or may be stand-alone units. The audio component of videos and other content displayed on display 612 may be played through speakers 614. In some embodiments, the audio may be distributed to a receiver (not shown), which processes and outputs the audio via speakers 614.

The guidance application may be implemented using any suitable architecture. For example, it may be a stand-alone application wholly implemented on user equipment device 600. In such an approach, instructions of the application are stored locally (e.g., in storage 608), and data for use by the application is downloaded on a periodic basis (e.g., from an out-of-band feed, from an Internet resource, or using another suitable approach). Control circuitry 604 may retrieve the instructions from storage 608 and process them to generate any of the displays discussed herein. Based on the processed instructions, control circuitry 604 may determine what action to perform when input is received from input interface 610. For example, the processed instructions may indicate that a cursor on a display should move up or down when input interface 610 indicates that an up/down button was selected.

In some embodiments, the media guidance application is a client-server based application. Data for use by a thick or thin client implemented on user equipment device 600 is retrieved on demand by issuing requests to a server remote to user equipment device 600. In one example of a client-server based guidance application, control circuitry 604 runs a web browser that interprets web pages provided by a remote server. The remote server may store the instructions for the application in a storage device and process the stored instructions using circuitry (e.g., control circuitry 604) to generate the displays discussed above and below. The client device may receive the displays generated by the remote server and display their content locally on equipment device 600. This way, the processing of the instructions is performed remotely by the server while the resulting displays are provided locally on equipment device 600. Equipment device 600 may receive inputs from the user via input interface 610 and transmit those inputs to the remote server for processing and generation of the corresponding displays. For example, equipment device 600 may transmit a communication to the remote server indicating that an up/down button was selected via input interface 610. The remote server may process the instructions in accordance with that input and generate a display of the application corresponding to the input (e.g., a display that moves a cursor up/down). The generated display is then transmitted to equipment device 600 for presentation to the user.
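
The round trip in this client-server arrangement is simple: the client forwards the raw input event, the server computes the next display, and the client only renders what it receives. A minimal sketch; all class and method names are invented for illustration.

    class GuidanceServer:
        """Remote side: processes inputs and generates displays."""

        def __init__(self):
            self.cursor_row = 0

        def handle_input(self, event):
            if event == "up":
                self.cursor_row = max(self.cursor_row - 1, 0)
            elif event == "down":
                self.cursor_row += 1
            return f"<display with cursor on row {self.cursor_row}>"

    class ThinClient:
        """Local side: forwards inputs, renders whatever comes back."""

        def __init__(self, server):
            self.server = server

        def on_button(self, event):
            display = self.server.handle_input(event)  # processing happens remotely
            print(display)                             # rendering happens locally

    ThinClient(GuidanceServer()).on_button("down")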

In some embodiments, the media guidance application is downloaded and interpreted or otherwise run by an interpreter or virtual machine (run by control circuitry 604). In some embodiments, the guidance application may be encoded in the ETV Binary Interchange Format (EBIF), received by control circuitry 604 as part of a suitable feed, and interpreted by a user agent running on control circuitry 604. For example, the guidance application may be an EBIF application. In some embodiments, the guidance application may be defined by a series of JAVA-based files that are received and run by a local virtual machine or other suitable middleware executed by control circuitry 604. In some of such embodiments (e.g., those employing MPEG-2 or other digital media encoding schemes), the guidance application may be encoded and transmitted in an MPEG-2 object carousel with the MPEG audio and video packets of a program.

User equipment device 600 of FIG. 6 can be implemented in system 700 of FIG. 7 as user television equipment 702, user computer equipment 704, wireless user communications device 706, or any other type of user equipment suitable for accessing content, such as a portable gaming machine. For simplicity, these devices may be referred to herein collectively as user equipment or user equipment devices, and may be substantially similar to the user equipment devices described above. User equipment devices on which a media guidance application may be implemented may function as stand-alone devices or may be part of a network of devices. Various network configurations of such devices are described in more detail below.

A user equipment device utilizing at least some of the system features described above in connection with FIG. 6 may not be classified solely as user television equipment 702, user computer equipment 704, or wireless user communications device 706. For example, user television equipment 702 may, like some user computer equipment 704, be Internet-enabled, allowing access to Internet content, while user computer equipment 704 may, like some user television equipment 702, include a tuner allowing access to television programming. The media guidance application may have the same layout on the various different types of user equipment or may be tailored to the display capabilities of the equipment. For example, on user computer equipment 704, the guidance application may be provided as a website accessed by a web browser. In another example, the guidance application may be scaled down for wireless user communications devices 706.

In system 700, there is typically more than one of each type of user equipment device, but only one of each is shown in FIG. 7 to avoid overcomplicating the drawing. In addition, each user may utilize more than one type of user equipment device and more than one of each type.

In some embodiments, a user equipment device (e.g., user television equipment 702, user computer equipment 704, wireless user communications device 706) may be referred to as a "second screen device." A second screen device may supplement content presented on a first user equipment device. The content presented on the second screen device may be any suitable content that complements the content presented on the first device. In some embodiments, the second screen device provides an interface for adjusting settings and display preferences of the first device. In some embodiments, the second screen device is configured for interacting with other second screen devices or with a social network. The second screen device may be located in the same room as the first device, in a different room in the same house or building, or in a different building.

The user may also set various settings to maintain consistent media guidance application settings across in-home devices and remote devices. Settings include those described herein, as well as channel and program favorites, programming preferences that the guidance application uses to make programming recommendations, display preferences, and other desirable guidance settings. For example, if a user sets a channel as a favorite on, say, www.Tivo.com, the same channel would appear as a favorite on the user's in-home devices (e.g., user television equipment and user computer equipment) as well as the user's mobile devices. Therefore, changes made on one user equipment device can change the guidance experience on another user equipment device, regardless of whether they are the same or a different type of device. The changes made may be based on settings input by the user as well as user activity monitored by the guidance application.

The user equipment devices may be coupled to communications network 714. Namely, user television equipment 702, user computer equipment 704, and wireless user communications device 706 are coupled to communications network 714 via communications paths 708, 710, and 712, respectively. Communications network 714 may be one or more networks including the Internet, a mobile voice or data network (e.g., a 4G or LTE network), a cable network, a public switched telephone network, or other types of communications networks or combinations of networks. Paths 708, 710, and 712 may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signaling), or any other suitable wired or wireless communications path or combination of such paths. In FIG. 7, path 712 is drawn with dotted lines to indicate that it is a wireless path, and paths 708 and 710 are drawn as solid lines to indicate that they are wired paths.

Communications with the user equipment devices may be provided by one or more of these communications paths, but they are shown as a single path in FIG. 7 to avoid overcomplicating the drawing.

Although communications paths are not drawn between the user equipment devices, these devices may communicate directly with each other via communications paths such as those described above in connection with paths 708, 710, and 712, as well as other short-range wired or wireless communication paths (e.g., Bluetooth). BLUETOOTH is a certification mark owned by Bluetooth SIG, INC. The user equipment devices may also communicate with each other indirectly through communications network 714.

System 700 includes content source 716 and media guidance data source 718 coupled to communications network 714 via communication paths 720 and 722, respectively. Paths 720 and 722 may include any of the communication paths described above in connection with paths 708, 710, and 712.

Communications with content source 716 and media guidance data source 718 may be exchanged over one or more communications paths, but are shown as a single path in FIG. 7 to simplify the drawing. In addition, there may be more than one of each of content source 716 and media guidance data source 718, but only one of each is shown in FIG. 7 to avoid overcomplicating the drawing. (The different types of each of these sources are discussed below.) If desired, content source 716 and media guidance data source 718 may be integrated as one source device. Although communications between sources 716 and 718 and user equipment devices 702, 704, and 706 are shown as occurring through communications network 714, in certain embodiments sources 716 and 718 may communicate directly with devices 702, 704, and 706 via communication paths (not shown) such as those described above in connection with paths 708, 710, and 712.

System 700 may also include an advertisement source 724 coupled to communications network 714 via a communication path 726. Path 726 may include any of the communication paths described above in connection with paths 708, 710, and 712. Advertisement source 724 may determine which advertisements to transmit to which users and under which circumstances. For example, a cable operator may have the right to insert advertisements during specific time slots on specific channels, and advertisement source 724 may transmit advertisements to users during those time slots. As another example, advertisement source 724 may target advertisements based on the demographics of users known to view a particular show (e.g., teenagers watching a reality series). As yet another example, advertisement source 724 may provide different advertisements depending on the location of the user equipment viewing a media asset (e.g., east coast or west coast).

In some embodiments, advertisement source 724 may be configured to maintain user information, including advertisement-suitability scores associated with users, in order to provide targeted advertising. Additionally or alternatively, a server associated with advertisement source 724 may be configured to store raw information that may be used to derive advertisement-suitability scores. In some embodiments, advertisement source 724 may transmit a request to another device for the raw information and calculate the advertisement-suitability scores. Advertisement source 724 may update advertisement-suitability scores for specific users (e.g., a first, second, or third subset of users) and transmit an advertisement of the target product to appropriate users.
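
One plausible reading of this flow is: derive a per-user score from raw information, then transmit the targeted advertisement only to users whose score clears a threshold. Everything in the sketch below (the field names, the 0.5 threshold, the scoring rule) is assumed for illustration; the patent does not fix a formula.

    SUITABILITY_THRESHOLD = 0.5  # assumed cutoff

    def suitability_score(raw_info):
        """Toy score: fraction of viewing time spent on the advertised genre."""
        watched = raw_info.get("genre_watch_hours", 0.0)
        total = max(raw_info.get("total_watch_hours", 0.0), 1.0)
        return watched / total

    def users_to_target(users):
        """Users to whom advertisement source 724 would transmit the advertisement."""
        return [name for name, raw in users.items()
                if suitability_score(raw) >= SUITABILITY_THRESHOLD]

    print(users_to_target({
        "UserA": {"genre_watch_hours": 6, "total_watch_hours": 8},
        "UserB": {"genre_watch_hours": 1, "total_watch_hours": 10},
    }))  # -> ['UserA']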

Content source 716 may include one or more types of content distribution equipment, including a television distribution facility, a cable system headend, a satellite distribution facility, programming sources (e.g., television broadcasters such as NBC, ABC, and HBO), intermediate distribution facilities and/or servers, Internet providers, on-demand media servers, and other content providers. NBC is a trademark owned by the National Broadcasting Company, Inc., ABC is a trademark owned by the American Broadcasting Company, Inc., and HBO is a trademark owned by the Home Box Office, Inc. Content source 716 may be the originator of content (e.g., a television broadcaster, a Webcast provider, etc.) or may not be the originator of content (e.g., an on-demand content provider, an Internet provider of downloadable content for broadcast programs, etc.). Content source 716 may include cable providers, satellite providers, on-demand providers, Internet providers, over-the-top content providers, and other providers of content. Content source 716 may also include a remote media server used to store different types of content (including video content selected by a user) in a location remote from any of the user equipment devices. Systems and methods for remote storage of content and providing remotely stored content to user equipment are discussed in greater detail in Ellis et al., U.S. Pat. No. 7,761,892, issued Jul. 20, 2010, which is hereby incorporated by reference herein in its entirety.

Media guidance data source 718 may provide media guidance data, such as the media guidance data described above. Media guidance data may be provided to the user equipment devices using any suitable approach. In some embodiments, the guidance application may be a stand-alone interactive television program guide that receives program guide data via a data feed (e.g., a continuous feed or trickle feed). Program schedule data and other guidance data may be provided to the user equipment on a television channel sideband, using an in-band digital signal, an out-of-band digital signal, or any other suitable data transmission technique. Program schedule data and other media guidance data may also be provided to user equipment on multiple analog or digital television channels.

In some embodiments, guidance data from media guidance data source 718 may be provided to users' equipment using a client-server approach. For example, a user equipment device may pull media guidance data from a server, or a server may push media guidance data to a user equipment device. In some embodiments, a guidance application client residing on the user's equipment may initiate sessions with source 718 to obtain guidance data when needed, e.g., when the guidance data is out of date or when the user equipment device receives a request for the data from the user. Media guidance data may be provided to the user equipment with any suitable frequency (e.g., continuously, daily, for a user-specified or system-specified period of time, or in response to a request from user equipment). Media guidance data source 718 may also provide user equipment devices 702, 704, and 706 the media guidance application itself or software updates for the media guidance application.

In some embodiments, the media guidance data may include viewer data. The viewer data may include current and historical user activity information, such as what content the user typically watches, how often and at what times of day the user watches it, whether and when the user interacts with a social network to post information, and what type of content the user watches (e.g., pay TV or free TV). The media guidance data may also include subscription data. The subscription data may identify which sources or services a given user subscribes to and/or which sources or services the user previously subscribed to but later terminated (e.g., whether the user subscribes to premium channels, whether the user has added a premium level of service, whether the user has increased Internet speed). In some embodiments, the viewer data and/or the subscription data may identify patterns of a given user over a longer period. The media guidance data may include a model (e.g., a survivor model) used to generate a score indicating the likelihood that a given user will terminate access to a service or source. For example, the media guidance application may process the viewer data with the subscription data using the model to generate such a score; a higher score indicates a higher level of confidence that the user will terminate access to a particular service or source. Based on the score, the media guidance application may generate promotional messages that entice the user to keep the service or source indicated by the score.
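
The termination-likelihood scoring described here reduces to: compute a score from viewer and subscription data, and generate a retention promotion when the score is high. A minimal sketch under assumed feature names, weights, and threshold; a real implementation would fit a survivor model to data rather than use these hand-picked rules.

    TERMINATION_THRESHOLD = 0.7  # assumed; higher score = more likely to cancel

    def termination_score(days_since_last_view, downgraded_recently):
        """Toy stand-in for a survivor model over viewer and subscription data."""
        score = min(days_since_last_view / 60.0, 1.0)  # long inactivity raises the score
        if downgraded_recently:
            score = min(score + 0.3, 1.0)              # e.g., dropped premium channels
        return score

    def maybe_promote(service, days_idle, downgraded):
        if termination_score(days_idle, downgraded) >= TERMINATION_THRESHOLD:
            return f"Special offer to keep {service}!"
        return None

    print(maybe_promote("PremiumService", days_idle=45, downgraded=True))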
