Max Abecassis, Ryan M. Donahue, CustomPlay LLC

Abstract for “Second screen shopping”

Systems and methods for displaying video information include: a second screen device obtaining current play position data responsive to a playing of a video on a primary screen device (e.g., obtaining from the primary device an identification of the video or an acoustic fingerprint of the video); determining the current play position of the video being played on the primary screen device based on the current play position data; downloading information (e.g., a video map, subtitles, moral principles, objectionable material, memorable content, and performers); and displaying information on an item and/or a rating on the second screen device, synchronized with the playing of the video on the primary screen device.

Background for “Second screen shopping”

1. Field of the Invention

Systems and methods for displaying, on a second screen device, information related to a current play position in a video being played on a primary screen device. The display of the information is synchronized with the playing of the movie on the primary screen by means of, for example, a time code retrieval or an acoustic fingerprint match. The processing and/or retrieval of the information can be performed at the second screen device, a remote server, and/or a service provider.

2. Description of the Related Art

Existing systems and methods for displaying, on a second screen during the playing of a movie on a primary screen, information relating to the current position in the video are limited in the functions supported and in the information provided to the user.

The inventions relate to systems and methods that provide supplemental information on a second screen during the playing of a video on a primary screen.

It is an object of the present inventions to provide user capabilities on the second screen. These capabilities include, for example: content previewing that is responsive to user preferences with respect to explicitness in a plurality of content categories; displaying location information that is responsive to user preferences with respect to locales and linkages to geographic maps; providing plot information and enabling user preferences with respect to notification categories (e.g., disabling notifications for clues); identifying the best lines, memorable moments, and best performances in a movie; identifying the nearest brick and mortar stores for items depicted in a video; and asking trivia questions.

These and other objects are addressed, briefly described, by systems and methods for displaying video information in which a second screen device: obtains current play position data responsive to a playing of a video on a primary screen device; downloads information (e.g., a video map, subtitles, moral principles, objectionable and memorable content, performers, geographic maps, and shopping and rating information) over a computer communications network into the second screen device's memory; and displays the information on the second screen device synchronized with the playing of the video on the primary screen device. These and other embodiments, advantages, and objects are detailed herein with reference to the accompanying drawings and the appended claims.

For the purposes of this disclosure, the terms of art used herein are defined as follows:

The term "herein" shall refer to the entirety of this specification, including the drawings, abstract, and claims, and shall not be limited to the section or paragraph in which the term appears.

The terms "include", "comprise", and "contain" shall be open-ended; the elements are not limited to those listed. Only the term "consist" shall be close-ended; the elements are limited to those listed.

No conceptual distinction shall be made among the terms "on", "at", and "in". For example, the phrases "receiving on", "receiving at", and "receiving in" a second screen device shall be interchangeable.

The term "responsive" shall not require that all elements, conditions, preferences, and/or requirements be considered. An event that is responsive to a specified requirement is not necessarily responsive only to that requirement. An event that is responsive to a specified requirement may also be responsive to another requirement, particularly where the second requirement, although described as an alternative, may also be deemed complementary.

The terms "application software", "software application", "application", "app", "routine", and "computer software" are interchangeable, and shall refer to all executables, libraries, scripts, and/or instructions that cause or are required by a device to perform a task, function, or process. Application software is a computer program that assists a user to perform a task, function, process, or activity. Application software and operating system software may be synergistically integrated.

The term "associate" shall mean assign, give, allocate, associate, designate, attribute, link, and/or relate.

The term "clip" shall refer to a segment that is generally shorter than a chapter or a scene. A clip comprises a sequence of at least one contiguous shot, and usually depicts the same principal characters in the same location. A clip definition is responsive to a material change in the participation of the principal characters, a change in location, and/or a distinct change in the thematic content, topic, or tone of a conversation.

The term "descriptor" shall refer to a keyword, word, phrase, code, and/or designation, and to any data, information, or image that identifies, describes, links, and/or categorizes the content of a video, a portion of a video, or a frame of a video. A linkage refers to any information, data, and/or method that enables retrieving or downloading data from a local/internal or remote source.

The term "dialog" shall refer to any dialog, conversation, or monologue, and shall include the information contained in the subtitles and/or the closed captioning.

The term "geographic map" shall refer to any map, including maps comprising satellite, topographical, and/or street data (e.g., Google Maps, Google Earth views, and Google Street Views), whether 2D or 3D, interactive or static, and single- or multi-featured. Any depiction (e.g., a map) that provides context to a location shall be deemed a geographic map.

The term "item" shall refer to, for example: an object, article, or artifact; a specific action or act within an activity; a sound; a portion of a dialog; a cinematographic technique; and/or a locale.

The term "keywords" shall refer to words, phrases, definitions, codes, descriptors, data, metadata, and/or numbers.

The term "keywording" shall refer to the associating of keywords.

The term "locale" shall refer to a place, site, spot, area, landmark, point of interest, tourist attraction, and/or building outside the movie studio that is used to film a movie or a portion thereof. The actual locale may be depicted as itself, or may be used to represent another locale. The term "locale" is distinguishable from the term "location" when the term "location" refers to a point in the timeline of a video.

The term "navigator" shall refer to application software and/or operating system software that provides video playback capabilities (e.g., decoding, decrypting, and rendering) for playing movies on personal computers. For example, Microsoft's DVD Navigator and decoder filters comprise a navigator, with the renderer handling CSS and analog copy protection.

The term "network" shall refer to any private or public, wired and/or wireless communications system.

The term "notable" shall refer to content that: (i) may be of interest to a significant audience; (ii) is notable, remarkable, or compelling; (iii) is uncommon, atypical, or unusual; and/or (iv) is rare, unique, or extraordinary.

The term "performer" shall refer to any individual, participant, actor, or actress who appears in a video and/or is credited with the physical or verbal performance of a character. An actor in a motion picture, an athlete in a televised sporting competition, a newscaster in a news program, and a chef in a cooking show are all examples of performers.

The terms "play" and "playing" shall refer to playing all or a portion of a segment of a video. A playing of a segment, however complete, does not require that every frame, audio portion, sub-picture portion, and/or bit of data of the segment be played.

The term "plot info" shall refer to information, reasoning, and/or explanation related to, or relevant for, understanding or appreciating a plot or sub-plot.

The term "plot point" shall refer to a plot, subplot, storyline, or principle.

The term "preferences" shall refer to programming preferences, version preferences, presentation preferences, content preferences, function preferences, technical preferences, and/or playback preferences. The term "programming preference" shall refer to a preference for a specific video (e.g., Spider-Man), genres of videos (e.g., action), types of videos (e.g., interactive video detective games), series of videos (e.g., 007), broad subject matter of videos (e.g., mystery), and/or the time and date of a playback. The term "version preference" shall refer to a preference for a particular version of a video (e.g., a motion picture) that is released by the copyright owner (e.g., a motion picture studio) and that contains content not found in an alternate version; the "Theatrical", "Unrated", and "Director's Cut" versions of a video, and the version options on a DVD-Video, are examples of versions. A version of a video does not include sequels and/or remakes, such as Spider-Man 2 (2004) and The Amazing Spider-Man (2012). The term "presentation preference" shall refer to a preference or preferences that enable the inclusion of selected segments, within a video or within multiple videos, in a presentation; the term also refers to a preference for one or more of the many features provided by each of the following: Presentations, Compilations, Subjects, Best Of, Performers, Shopping, Music, Search, and Preview. The term "content preferences" shall refer to preferences regarding the form, explicitness, inclusion, and/or exclusion of potentially objectionable content, including the length, level, detail, type, and depiction of potentially objectionable items; content preferences can be established using, for example, the CustomPlay app's Control feature. The term "function preference" shall refer to a preference for one or more of the many elements that are provided by or associated with an in-video/playback function (e.g., Who, What, Locations, Plot Info, Filmmaking, Trivia, and Info). The term "technical preference" shall refer to a preference for technical or artistic elements (e.g., dissolves, fades, and wipes) that can be implemented during a playing of non-sequential segments. The term "playback preference" shall refer to a preference for the audio and visual options (e.g., camera angles, picture-in-picture, captioning, and commentaries) that are available for a particular video.

The terms "seamless" and "seamlessly" shall refer to a playing of segments without gaps perceptible to the human eye, which is achieved by, for example, maintaining a constant video transmission rate. A seamless playing of non-sequential segments, although technically seamless, may not appear artistically seamless to a user because of a change in the content being played.

The term "search terms" shall refer to terms, words, phrases, codes, descriptors, labels, data, metadata, numbers, and/or any other information that identifies or describes what is being searched.

The terms "second screen" and "secondary screen" are interchangeable, and shall refer to any computing device capable of playing/displaying video, audio, images, and/or subtitles. A primary screen device can also be referred to as a primary display device. A primary screen and a second screen include, for example, televisions, personal computers, laptop and portable computers, tablets, smartphones, mobile devices, remote controls, and computing devices comprising a display screen. A primary screen device and a second screen device also include audio reproducing and output components (e.g., amplifiers and external and internal speakers).

The term "seek/step data" shall refer to any index, data, and/or information that enables access to a specific video frame and/or facilitates the use of a video map with a particular video. Seek/step data does not require step data; without step data, seek/step data can directly address every video frame within a video. Further, seek/step data need not be based solely on navigation points or synchronizing data (i.e., seek/step data can be based on, for example, the shot changes and scene changes in the video).

The terms "segment" and "video section" are interchangeable. A segment shall comprise one or more video frames. A segment definition generally identifies a beginning and an ending point (e.g., frames) within a video; in the second screen function examples herein, however, a segment definition may identify a single point (e.g., a frame) within a video.

The term "subtitles" shall refer to any textual information that is representative of a portion of the audio dialog. Displaying subtitles does not require displaying all the subtitles in a video; a displaying of subtitles may comprise only a subtitle line, phrase, or unit. Herein, subtitles are distinguishable from closed captioning.

The term "subtitle information" shall refer to any information, text, images, and/or data that enables the displaying of subtitles on a screen. Details regarding the display of subtitles and the use of subtitle information are applicable alternatively, complementarily, and/or in combination with other information.

The term "supplemental information" shall refer to any information, text, data, image, video, and/or depiction that informs, entertains, clarifies, and/or illustrates.

The term "trailer" shall refer to a trailer, preview, video clip, still image, and/or any other content that precedes and/or is extraneous to a playing of a movie.

The term "user" is interchangeable with the terms "subscriber", "viewer", and "person", and shall refer to an end user who actively uses video content, passively views a movie, interacts with a game, retrieves video from a provider, and/or subscribes to and uses multimedia, internet, and/or communications services.

The term "variable content video" shall refer to a video having a nonlinear structure that enables a variety of logical sequences of segments. A variable content video comprises parallel, transitional, and/or overlapping segments that enable multiple versions of, and a varied playing of, the video. Depending on the specific embodiment, a variable content video may further comprise a user interface, application software routines, system control codes, and/or software programs to control the playback of the video/audio.

The terms "video" and "video program" are interchangeable, and shall refer to any video image, irrespective of the source, motion, technology, or implementation. A video may comprise images and audio from, for example, full motion picture programs, films and movies, interactive electronic gaming, multimedia content, and television programs. A video may further comprise subtitles, sub-picture information, user interfaces, application software routines, and/or system control codes that may be used to control the playback of the video/audio. The term "movie" shall refer to a full-length motion picture that is released in theaters and/or on optical discs (e.g., a DVD-Video or a Blu-ray Disc).

The terms "video map", "map", and "segment map" shall refer to any arrangement, table, or database that identifies a beginning and an ending of one or more segments, one or more individual video frames, and/or one or more play positions in a particular video/audio, together with data associated with at least one segment, sequence of segments, video frame, and/or play position in the video/audio. The data associated with a video map may include, for example: (i) a descriptor; (ii) an implicit or explicit editing or filtering action; (iii) a linkage between segments; (iv) data and/or textual information/content; and (v) any information, data, and/or linkages required to enable and support the features and functions described herein. A video map may also include seek/step data and bookmark-generating data.
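Purely as an illustration of the kind of arrangement this definition contemplates, a video map might be sketched as follows; the field names and structure are hypothetical and are not CustomPlay's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    start_frame: int                 # beginning of the segment definition
    end_frame: int                   # ending of the segment definition
    descriptors: list[str] = field(default_factory=list)  # keywords/categories
    action: str | None = None        # implicit/explicit editing or filtering action
    linkage: str | None = None       # e.g., a key or URL for retrieving related data

@dataclass
class VideoMap:
    video_id: str
    segments: list[Segment]
    seek_step: dict[int, int] = field(default_factory=dict)  # frame -> byte offset

    def segments_at(self, frame: int) -> list[Segment]:
        """Return every segment whose definition spans the given frame."""
        return [s for s in self.segments if s.start_frame <= frame <= s.end_frame]
```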

The terms above, and any other terms defined herein, shall be understood in the context of this document and not as they might be defined in a reference incorporated herein. An incorporation by reference does not modify, limit, or expand the definitions provided or formally defined herein. Any term not formally defined herein shall have its usual and customary meaning.

Networks and End User Systems

FIG. 1 illustrates a network 100 of video, data, and information services providers 101-103 and end user systems 140. A video provider 101-103 can provide any combination of video, information, and/or data services, and the services need not be exclusive of those of other providers. Each participant, whether primarily a provider 101-103 or an end user 140, is capable of retrieving and transmitting video, data, and/or information to and from any other participant.

Video and other services can be delivered over a variety of possible communications networks, infrastructures, and system configurations. FIG. 1 illustrates a variety of networks, infrastructures, and system configurations that can be implemented. The networks can be wired or wireless, using, for example, fiber optic 111, coaxial cable 112, twisted copper wire 110, cellular 114, and/or satellite communications.

A video provider 101 includes, for example: (i) communications technologies 111 to establish a plurality of video and communications streams with a plurality of end users 140 that enable the uploading, downloading, and/or transferring of information, data, and/or video; (ii) processing hardware and software 122 for retrieving an end user's video preferences, content preferences, second screen function preferences, requests, and search terms, for processing synchronization data, and for performing, for example, segment data searches to identify a segment or a list of segments responsive to a user's search terms and search requests; (iii) mass storage and random access memory devices 123 for storing videos and video maps (e.g., segment data) and for supporting the processing of the end user's video preferences, content preferences, second screen function preferences, and requests; and (iv) processing hardware and software 124 to maintain accounting and support services with respect to the video, data, and/or information services.

Video providers can be further differentiated by the functions provided and/or the extent of the videobase and data maintained. A video services provider 101 (e.g., a cable company) may provide a broader range of services than an information provider 103 (e.g., a website). The video and information services available over the internet demonstrate the wide variety of possible information and multimedia configurations.

An end user may not have direct access to the resources of a particular video services provider 101-103. In such a case, the requested video can be streamed, downloaded, or uploaded in real time to a services provider that is more economically accessible to the user. Some video services providers within the network 100 may not provide services directly to end users, but rather serve as depositories and/or originators for other services providers.

In one of many possible configurations, an end user video system 140 gains access to the network 100, and to the various services providers 101-103, through a communications device 131 (e.g., a satellite dish or a cable distribution box). An end user video system 140 comprises a variety of communications devices, computing devices, screens, and monitors. The principal communications devices include, for example, a modem (e.g., a cable modem), an internal communications device 142 (e.g., a wired and/or wireless router), and a network/wireless extender 143. Communications interfaces such as Wi-Fi, Ethernet, cellular, and 4G LTE facilitate communications among an end user's various computing devices and multi-screen combinations 144-149, for example: set top box 144, PC/monitor 145, tablets 146-147, smartphone 148, and television 149. A device need not be exclusively categorized as a communications device, a computing device, or a screen; devices such as a smartphone 148, tablets 146-147, and a laptop/notebook computer 145 can include all three capabilities. A television screen 149 can also include storage and communications capabilities that may not otherwise be available in a set top box or television media accessory 144.

Communications between devices can be established over any of a variety of wired and wireless communications networks, including Wi-Fi and cellular (e.g., 4G LTE). A computing device need not be directly or indirectly wired to a screen 149. A communications port 143 that connects a computing device 145 to a second screen 149 may have varying levels of intelligence and capabilities; it may boost or manage the signal, or serve no purpose other than as a convenient outlet for plugging and unplugging devices.

The specific location of an end user's devices, screens, and subsystems is not restricted to any particular arrangement, and many configurations can serve an end user's needs. Preferably, an end user configuration includes a primary screen device (e.g., television 149) and one or more second screen devices (e.g., PC 145, tablets 146-147, and/or smartphone 148).

Appropriate application software for the communications infrastructure may reside directly or indirectly within the primary display device, the secondary display device, and/or separate devices in communication with both the primary and secondary display devices.

Multi-screen combinations include, for example: television 149 and smartphone 148; PC/laptop 145 and smartphone 148; television 149, PC/laptop 145, and smartphone 148; and television 149 and multiple tablets 146-147. A multi-screen combination is not limited to a single second screen. A second screen device, such as a tablet 146, can provide a second screen experience relative to a primary screen, such as a television 149, and relative to another second screen, such as a second tablet 147.

Multi-screen usage can be categorized as disruptive (e.g., multi-tasking unrelated material) or complementary, and as sequential (e.g., usage of one screen followed by usage of another screen), simultaneous (e.g., expected use of a second screen as part of the viewing of content on a primary screen), or spontaneous. Multi-screen usage can, however, be both disruptive and complementary. For example, a usage can be disruptive in that it interrupts a linear video experience, yet complementary in that it provides information that the user would find beneficial in enhancing the video experience. A preferred embodiment provides interactive capabilities on a second screen that are utilized in conjunction with the specific content being displayed on a primary screen.

The novel features described herein can be implemented with end user multi-screen systems, communications infrastructures, and services providers other than those detailed with respect to FIG. 1. Many alternative or complementary devices, components, elements, and services can be integrated into a multi-screen configuration, as disclosed in, for example: U.S. patent publication 20151509, entitled "System, Method and Device for Providing a Mobile Application across Smartphone Platforms to Enable Consumer Connectivity and Control of Media"; U.S. patent publication 20120210349, entitled "Multiple-Screen Interactive Screen Architecture"; U.S. patent publication 20130061267, entitled "Method and System for Using a Second Screen Device to Interact with a Set Top Box to Enhance a User's Experience"; and U.S. patent publication 20130111514, entitled "Second Screen Interactive Platform". Each of the cited references provides disclosures, with respect to its respective FIG. 1, that directly relate to the disclosure above with respect to FIG. 1, and each is incorporated herein by reference.

Video Map

An implementation of the disclosed video map and playback capabilities is provided in a currently free CustomPlay PC application, which provides users with a comprehensive set of video playback features and in-video functions for movies on DVD. The 14 feature sets of CustomPlay comprise: Presentations, Compilations, Subjects, Dilemmas, Best Of, Performers, Filmmaking, Plot Info, Shopping, Music, Locations, Search, Preview, and Control. CustomPlay provides eight in-video playback functions, including Plot Info and Locations. In the movie Casino Royale, examples of the video map enabled CustomPlay capabilities include 91 Shopping items, 19 Locations with links to Apple Maps, 12 entertaining Subjects (1-2 minutes each), 5 story-driven Presentations (26-115 minutes each), over 51,000 keywords that drive the extensive Search feature, and 14 content customization categories.

A video map can be provided together with, or separately from, a video's audio and video data. A movie can be downloaded or streamed from one remote provider, and the corresponding video map then downloaded from a secondary remote source (e.g., from a distant server via the communications interface). A multi-screen configuration comprising processing, memory, and communications capabilities can provide in-video and playback functions for movies streamed from remote video providers. Such an embodiment provides for the downloading, from a remote server or provider, of a video map, user interface, and other control programs specific to a motion picture. The control program reads an identifier of the video source and searches the mass storage device for a corresponding video map; if the map is not found, the control program communicates with an external source to obtain the map. In this fashion, linear video programs in the conventional format form a videobase of motion pictures that can be used to illustrate the principles described herein.
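A minimal sketch of the look-up-then-fetch behavior described above, assuming a hypothetical local cache directory and map server URL:

```python
import json
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("video_maps")        # hypothetical local mass storage
MAP_SERVER = "https://example.com/maps"       # hypothetical external source

def get_video_map(video_id: str) -> dict:
    """Return the video map for a video, preferring the local store."""
    local = CACHE_DIR / f"{video_id}.json"
    if local.exists():                        # map found on the fixed storage device
        return json.loads(local.read_text())
    # Map not found locally: obtain it from the external source and cache it.
    with urllib.request.urlopen(f"{MAP_SERVER}/{video_id}.json") as resp:
        data = resp.read()
    CACHE_DIR.mkdir(exist_ok=True)
    local.write_bytes(data)
    return json.loads(data)
```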

A video map and/or its components (e.g., acoustic signature information) can be downloaded prior to, simultaneously with, or after the playing of the corresponding video. Some components can be downloaded beforehand, others as they are needed, and others at the conclusion of the video's playing. The downloading of information and content may also be responsive to the user's pre-established or contemporaneously established feature and function preferences, and to the specifics of the multi-screen environment.

Devices

Specifically, with regard to second screen devices, the teachings of smartphones (e.g., Samsung Galaxy, iPhone, and Nokia Lumia), tablets (e.g., iPad Air, Amazon Kindle Fire, and Microsoft Surface), and smart TVs (e.g., Samsung UHD Series and Sony KDL Smart TV Series) are incorporated herein by reference.

These devices can be further enhanced with processing, firmware, and memory to support the second screen capabilities described herein. A processor may comprise a central processing unit (CPU), any associated computing or calculating devices (e.g., graphics chips), and application software. Depending on the user's multi-screen configuration and preferences, a second screen device can provide tactile feedback (e.g., device vibrations or the mechanical movement of a key or button), and/or auditory and/or visual notifications, to alert the viewer to the availability of new content or functions. The number of vibrations, or the particular ring tone used in a notification, may indicate which function is being performed.

Remote Control Functions

A second screen device serves a significant purpose in providing video playback controls and in displaying content complementary to the playing of a video on a primary screen. A smartphone or tablet application can provide complete remote control functionality and on-screen interfaces. These devices can communicate with the player over Wi-Fi by means of a transmitter application in the controlling device and a listener application in the controlled device.
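A bare-bones sketch of one way such a transmitter/listener pairing could work over a TCP socket; the port number and command strings are hypothetical, and a real implementation would add device discovery, authentication, and error handling:

```python
import socket

PORT = 5050  # hypothetical pre-agreed port

def listen(handle_command) -> None:
    """Run on the controlled device: accept commands such as 'PLAY' or 'PAUSE'."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", PORT))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            with conn:
                handle_command(conn.recv(1024).decode().strip())

def send_command(player_ip: str, command: str) -> None:
    """Run on the controller (smartphone/tablet): transmit a single command."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((player_ip, PORT))
        cli.sendall(command.encode())
```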

FIG. 2 is an illustration of a remote control interface 200 for a smartphone. Touching a button/object activates the corresponding function. The interface 200 includes, for example: navigation controls 201-209 (e.g., Exit 201, Settings 202, Help 203, Lists 205, Search 207, Browser 208, and Set-up 209); selection functions 211-213; play and audio volume controls 221-228 (e.g., Play/Pause toggle 221, Play From 222, Skip Back Clip/Scene 223, Fast Forward 224, Fast Rewind 226, What 227, and Play Current Clip/Scene 228); in-video function controls 241-247 (e.g., Info 244, Filmmaking, Plot Info 246, and Dilemmas); and content controls 251 (e.g., the content category Violence and the levels of explicitness None, Mild, and Graphic).

The Exit control 201 confirms an exiting of the application. The Settings function 202 displays a screen with options to customize the display of in-video notifications on the primary screen and/or a second screen. The Help function 203 provides context-sensitive help information. The Lists function 205 displays, on the primary screen and/or the second screen, a menu of CustomPlay features. The In-Video function 206 displays the screen, illustrated in FIG. 2, that provides the various in-video functions and play control functions. The Search function 207 displays a screen providing the Search functionality. The Browser function 208 provides keyboard and mouse functionality. The Set-up function 209 provides access to various utilities for establishing communications with other devices (e.g., IP address and port number).

The Play From control 222 enables a user to play the video from a current position irrespective of the feature being used. For example, a user may use the Search feature to find segments responsive to a keyword search, and activate the Play From control to play the video from a selected segment. The Play From control can also be responsive to the user's pre-established presentation preferences (e.g., a Custom presentation or Play As Is). In the absence of a user presentation preference, the Play From control can instead default to, for example, the Custom presentation. The Play From control can also default to the presentation last used for the movie being played, or a presentation can be selected responsive to the feature last used. For example, the Play From control can enable the Play As Is presentation if the Search or Preview feature is being used, and the Custom presentation if the Best Of feature was previously used.
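The fallback chain described above might, under the stated assumptions, reduce to something like the following; the precedence order is one plausible reading of the text, and the feature and presentation names follow the examples given:

```python
def play_from_presentation(user_pref: str | None, last_feature: str | None,
                           last_presentation: str | None) -> str:
    """Choose the presentation the Play From control should use."""
    if user_pref:                              # pre-established preference wins
        return user_pref
    if last_feature in ("Search", "Preview"):  # play the found segments as-is
        return "Play As Is"
    if last_feature == "Best Of":
        return "Custom"
    if last_presentation:                      # presentation last used for this movie
        return last_presentation
    return "Custom"                            # system default
```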

When enabled by the user during a playing of a video, the What replay control 227 rewinds the video a specified amount of time and replays that portion of the video with the subtitles enabled. The disclosures of U.S. Pat. No. 7,430,360, entitled "Replaying a Video Segment with Changed Audio", are incorporated herein by reference.

The Play Current Clip/Scene control 228 uses the clip and scene database portions of a video map to identify a clip/scene definition that is responsive to the current play position, automatically rewinds the video to the beginning of the clip/scene, and plays the video from that point. Whether a Clip/Scene control (e.g., Skip Back Clip/Scene 223 or Play Current Clip/Scene 228) is responsive to a clip or to a scene is a pre-established option.
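Assuming the clip database portion of the video map is a list of clip start positions sorted in play order, identifying the clip responsive to the current play position and rewinding to its beginning could be sketched as:

```python
import bisect

def current_clip_start(clip_starts: list[float], position: float) -> float:
    """Return the start (in seconds) of the clip containing the play position."""
    i = bisect.bisect_right(clip_starts, position) - 1
    return clip_starts[max(i, 0)]

# Usage with a hypothetical player object:
# player.seek(current_clip_start(starts, player.position)); player.play()
```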

Second Screen Functions

A second screen function is, in general, an in-video function that is enabled on a second screen of a multi-screen system while a video is being played on a primary screen. The following disclosures relating to second screen apparatuses, architectures, and methods are incorporated herein by reference: U.S. Pat. No. 7,899,915, entitled "Method and Apparatus for Browsing Using Multiple Coordinated Devices"; U.S. Patent Application 20120210349, entitled "Multiple Screen Interactive Screen Architecture"; U.S. Patent Application 20130061267, entitled "Method and System for Using a Second Screen Device to Interact with a Set Top Box to Enhance a User's Experience"; U.S. Patent Application 20130014155, entitled "System and Method for Presenting Content with Time Based Metadata"; and U.S. Patent Application 20140165112, entitled "Launching a Second-Screen App Related to a Non-Triggered Initial-Screen App".

The teachings of current second screen capabilities and functions, such as Google's Play Info Cards and Amazon's X-Ray, are incorporated herein by reference, as are the various methods for synchronizing multiple devices over Wi-Fi networks and/or remote servers (e.g., JargonTalk).

It is intended that the features, playback capabilities, and in-video functions described herein may be implemented in different second screen embodiments that do not require altering a conventional playing of a movie on a primary screen (e.g., remote control functions on a second screen, superimposing notification indications, seamlessly skipping video segments, and selective playing).

The second screen functions can take advantage of any additional content (e.g., video/audio commentary or supplementary audio content) that is provided with a movie. In such a case, the video map maps the segments, and associates the descriptors, of the additional content, and the synchronization data enables the additional content (e.g., additional video content) to be provided on the second screen during the movie's playback.

What Second Screen Function

FIG. 3A is an illustration of a second screen displaying features of the What function. The screenshot of the user interface 301 is from Casino Royale, at a point at which 007 approaches an Aston Martin car. This clip is followed by a clip in which 007, sitting in the car, delivers a line while opening an envelope. A user who wants to understand the delivered line activates the What function control 302, which displays the subtitles 303 of the recently delivered line: "I love you too, M". Optionally, and/or responsive to the second screen embodiment, the What function control 302 does not rewind or pause the video. When the What function control 302 is enabled, and if the user so prefers, the display of the subtitles 303 on the second screen enables the user to understand a specific portion of the video without the video being replayed. The disclosure of the previously incorporated U.S. Pat. No. 7,430,360 is instructive in this regard. Multitasking and accessing external information are among the advantages of a second screen application, and they increase the chance that a user will miss dialog that the user would not otherwise have missed.

Activating the What function (e.g., touching the What icon 302 on the second screen interface 301) causes the second screen processing to identify the current play position, directly or indirectly, responsive to the playing of the video on the primary screen and/or to an internal clock of the second screen that is synchronized with the video playback. The current play position is used to determine the time period within the video for which subtitles are to be displayed. The determination of the subtitles to be displayed is responsive to system defaults and/or viewer preferences with respect to, for example: the amount of time to be "replayed" (e.g., 20 seconds); the subtitle language (e.g., English or another preferred language, subject to the availability of the subtitle information); the type of subtitles (e.g., subtitles, closed captioning, or commentary); and the ending time of the display of subtitles, which can be the point at which the What function was activated, a system default, and/or a viewer's pre-established preference relative to when the What function was activated. The video map's subtitle data is searched to identify the subtitle information appropriate for the requested time period. The appropriate subtitles are then displayed on the second screen without the need to rewind or replay the video.
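The search of the video map's subtitle data amounts to selecting the subtitle units that overlap the requested time period. A minimal sketch, assuming subtitle units stored as (start, end, text) triples in seconds:

```python
def subtitles_for_window(subtitle_data: list[tuple[float, float, str]],
                         position: float, replay_span: float = 20.0) -> list[str]:
    """Return the subtitle units that fall within the 'replayed' period
    ending at the current play position."""
    start = position - replay_span
    return [text for (t_in, t_out, text) in subtitle_data
            if t_out >= start and t_in <= position]
```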

In contrast to a method in which the video is rewound and replayed with the subtitles enabled, a number of novel methodologies are applicable in a second screen embodiment. Considerations include the nature of the communications between the primary/video playback device and the second screen, whether another device/software is playing the video, and whether that device is capable of multitasking and playing the video uninterrupted while retrieving the required subtitle information. The second screen may also retrieve the subtitle information independently of the video playback capabilities. A preferred second screen embodiment downloads the subtitle information from a local or remote storage device/server. One embodiment includes the downloading of subtitle information and subtitle synchronizing information as part of a downloading of a video map; the synchronizing data enables the second screen to synchronize the display of the subtitles with the playing of the video on the primary screen. In this case, a third party may provide the subtitle information and the synchronizing data. Since the video is not being replayed, the display of the subtitles need not be precisely synchronized with the audio track. A delay or offset prior to delivering the data to the second screen may in fact be beneficial, and the synchronizing information may already offset the display of the subtitles.

Alternatively to an intermittent activation of the What function, a particularly innovative continuous display of subtitles implements an advantageous offset synchronization. This is in contrast to a conventional implementation of closed captioning displayed on a primary screen, where a haphazard delay in the display of the closed captioning is not beneficial and is limited to the primary screen.

A display of subtitles can be responsive to user preferences with respect to all videos, a particular category of videos (e.g., foreign films), certain sequences within a movie (e.g., scenes with heavily accented dialog or non-U.S.-accented English dialog), and/or any portion that contains unclear dialog. For example, a user pressing and holding the What function control 302 activates the What function in a continuous mode, rather than in the intermittent mode enabled by a touching of the What function control 302. A subsequent touch of the What function control 302 deactivates the continuous mode.

When the What function is activated in the continuous mode, the subtitles, or preselected portions of the subtitles, are displayed continuously in an offset synchronized fashion. In this context, the concepts of offset and delay, and their related synonyms, are interchangeable. FIG. 3B illustrates offset synchronization methods. Conventionally, a subtitle is displayed as closely synchronized as possible with the corresponding dialog in the audio track; in FIG. 3B, a subtitle 322 is conventionally synchronized with an audio track comprising a portion of dialog 321. FIG. 3B also illustrates an offset synchronization of the display of subtitles that may be controlled by one or a combination of offset methodologies, including, for example: the subtitle portion/sentence/phrase (subtitle unit) is displayed the equivalent of a subtitle unit(s) 323 after the corresponding accurately synchronized subtitle would have been displayed 322; the subtitle is displayed at a pre-established play position 324 after the position at which the corresponding accurately synchronized subtitle would have begun to be displayed 322; and the subtitle is displayed a system default and/or a viewer's pre-established time offset period 325 (e.g., a delay of 2 seconds) after the time at which the corresponding accurately synchronized subtitle would have been displayed or begun to be displayed. A What embodiment uses a subtitle offset synchronization to control the delay between real-time created closed captioning and the corresponding dialog, rather than leaving a random unsynchronized delay.
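Two of the offset methodologies, the fixed time offset 325 and the subtitle unit offset 323, can be combined into a single display schedule. A sketch, again assuming (start, end, text) subtitle units:

```python
def offset_schedule(subtitle_data: list[tuple[float, float, str]],
                    time_offset: float = 2.0, unit_offset: int = 0):
    """Yield (display_time, text) pairs delayed by a fixed time offset (325)
    and/or by a whole number of subtitle units (323)."""
    for i, (_t_in, _t_out, text) in enumerate(subtitle_data):
        j = min(i + unit_offset, len(subtitle_data) - 1)
        anchor = subtitle_data[j][0]   # start of the later unit we lag behind
        yield anchor + time_offset, text
```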

A continuous mode does not necessarily mean that all the subtitle units are displayed. A continuous mode may, for example, display only a subset of the subtitle units (e.g., the 10-12 subtitle units corresponding to the instances where the audio is not clear and/or to the best lines). Such an intermittent continuous mode reduces the number of subtitles displayed and the data required to be provided to the second screen.

Examples of embodiments include the steps of: downloading information (e.g., subtitles) from a remote provider over a communications network; receiving synchronizing data responsive to a playing of a movie on a primary screen device (e.g., an identification of the current play position, or an audio fingerprint that is compared to an acoustic fingerprint library to determine the current play position of the video); and displaying the supplemental information on the second screen device responsive to an offset synchronization with the playing of the video on the primary screen device (i.e., the second screen device's display of the information intentionally lags the playing of the video on the primary screen device).

The individual steps described above can be performed by the primary screen device, the second screen device, and/or a remote services provider. For example, the primary screen device or the second screen device may identify the current play position by creating and comparing an audio fingerprint, or a remote services provider may analyze audio information received from the second screen device over a computer communications network. Remote processing may perform the offset synchronization and adjust the downloaded supplemental data accordingly. Functions other than the display of subtitles, and the display of supplemental information generally, may benefit from an offset synchronization. Multiple synchronization methodologies may be simultaneously active within a single display, each responsive to, for example, the relationship between the supplemental information and the depiction currently being displayed on the primary screen. A user may activate, deactivate, select, and/or adjust the offset delay parameters (e.g., a time delay or a subtitle unit delay). An offset synchronization can be deactivated to restore an accurate synchronization.
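For illustration only, the audio fingerprint comparison can be approximated by correlating a coarse energy signature of audio captured by the second screen device against a pre-computed signature of the movie's soundtrack; deployed systems use more robust spectral fingerprints:

```python
import numpy as np

def energy_fingerprint(samples: np.ndarray, rate: int, window_s: float = 0.1) -> np.ndarray:
    """Reduce mono audio to a normalized per-window energy signature."""
    win = max(1, int(rate * window_s))
    n = len(samples) // win
    frames = samples[: n * win].reshape(n, win)
    sig = np.sqrt((frames ** 2).mean(axis=1))
    return (sig - sig.mean()) / (sig.std() + 1e-9)

def locate(clip_fp: np.ndarray, movie_fp: np.ndarray, window_s: float = 0.1) -> float:
    """Estimate the play position (seconds) of a short captured clip in the movie."""
    scores = np.correlate(movie_fp, clip_fp, mode="valid")
    return float(np.argmax(scores)) * window_s
```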

Notwithstanding conceptual distinctions, the disclosures of the previously incorporated U.S. Pat. No. 7,430,360 are instructive here as well. Similar to its disclosure of a rewinding of the video that is cumulatively responsive to multiple consecutive activations of the What function, in a second screen embodiment the time period that defines the display of subtitles is likewise cumulatively responsive to multiple consecutive activations of the What function.

ID and Credits Second Screen Functions

FIG. 3A includes an ID function control 305 and a Credits function control 311. The ID function control 305 enables a user to request identification information with respect to an item. An identification of an item can include: an image and/or promotional image; a written identification 307 (e.g., a brand name, model, and description, "Aston Martin DBS Sports Car", or a type of action and name of the action, "martial arts butterfly kick"); a write-up; and access to additional information 308.

Further, an ID function may synergistically benefit from the data associated with a Subject presentation. In the movie The Big Lebowski, for example, a Subject relating to "The Dude" can identify the over 100 instances of dialog that include the term "Dude". An ID Game function displays a notification at each instance in the ID Game set, enabling the participants to take the action that they have associated with the game (e.g., reciting the best line in the movie). An ID Game notification may also include the presentation of Trivia function information/questions, and/or the presentation of a simplified Yes/No question, to determine whether the particular action that has been associated with the game, or with the particular notification instance, is to be performed.

The primary purpose of the Credits function is to notify the user whether any additional content is provided during or after the playing of the credits. The Credits function control 311 enables a user to query the function and display the appropriate message. In this example, the user is informed that there is no content during or after the credits 312.

By contrast, in the motion picture The Avengers, after one minute and fifty-six seconds of the film's credits, a brief scene shows Thanos, the evil supervillain, talking to his emissary The Other. The credit sequence ends with a final scene showing the Avengers dining at a Middle Eastern restaurant. When the Credits function control 311 is activated, the display informs the user that there are two distinct sequences within the credits.

The Credits function can also be activated automatically, responsive to a system default or user preference, as soon as the credits begin to play on the primary screen, to notify the user, on the primary screen and/or the second screen, whether any additional content is provided. The Credits function control can also be activated to enable the user to choose whether or not to continue the playing of the movie. If there is content during or after the credits, the non-content portions of the credits are automatically skipped, responsive to the video map. In the motion picture The Avengers, for example, the first one minute and fifty-six seconds of the credits are skipped, the brief scene showing Thanos talking to his emissary The Other is played, the next six minutes and seventeen seconds of credits are skipped, and the final scene featuring the Avengers is played.
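A sketch of the credits-skipping behavior, assuming the video map lists the content segments within the credits as (start, end) pairs in seconds, and assuming a hypothetical player object exposing seek() and play_until():

```python
def play_credits_content(player, content_segments: list[tuple[float, float]]) -> None:
    """Skip the non-content portions of the credits, responsive to the video map."""
    for start, end in sorted(content_segments):
        player.seek(start)       # skip the intervening credits
        player.play_until(end)   # play, e.g., the Thanos scene, then the finale
```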

As in the other examples, the label and icon of a function control can be responsive to its object. In the example of the Credits function control 311, the icon of a glass that is not empty depicts that there is additional content, and the label reads, for example: "There's more".

In contrast to the Credits function, which enables a user to view all the content in a movie, the Synopsys function enables a user to skip a portion of a video and still follow the storyline. The Synopsys function maps/defines fragments of a video, such as chapters, scenes, and clips, to generate multiple units from the video. Each unit is associated with a time code that corresponds to the unit's beginning position in the video, and each unit is associated with a brief synopsis (e.g., a paragraph) that provides an understanding of the unit's content.
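Under this description, a Synopsys unit pairs a time code with a brief write-up, and the storyline of a skipped span can be recovered by collecting the synopses of the units within it. A sketch, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class SynopsisUnit:
    start: float    # time code of the unit's beginning position (seconds)
    synopsis: str   # brief paragraph summarizing the unit

def synopses_for_skip(units: list[SynopsisUnit],
                      skip_from: float, skip_to: float) -> list[str]:
    """Return the synopses a user should read for a skipped span of the video."""
    return [u.synopsis for u in units if skip_from <= u.start < skip_to]
```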

Summary for “Second screen shopping”

“1. “1.

“Systems and methods for displaying information related to a current location in a video on a secondary screen during playback of the same video on the primary screen on a second monitor. The processing of the information is synchronized to the playing of a movie on the primary monitor. This could include a timecode retrieval or acoustic fingerprint match. Processing and/or retrieval information can be done at the second screen, remote server, or service provider.

“2. “2.

“Systems and methods for, or displaying on, a second screen, during the playing of a movie on a primary monitor, information relating the current position in the video are restricted in terms of the functions supported and information that is provided to the user.”

“The inventions concern systems and methods that provide supplementary information on a secondary screen during the playing of a video on the primary screen.”

It is an object of present inventions, to provide user capabilities on the second screen. These routines include: content previewing that responds to user preferences with respect to explicitness in a plurality content categories; content previewing that displays content that is responsive to user preferences with regard to location and linkages; plot information and for enabling user preferences with respect to notification categories (e.g. disabling notification for clues; identifying the best lines, memorable moments and best performances in a movie; notifying the nearest brick and asking trivia questions

These and other objects can be described briefly by systems and methods for displaying video information. The second screen device obtains current play location data from a video being displayed on a primary device. This data includes: downloading information (e.g. video map, subtitles and moral principles, objectionable and memorable content, performers and geographical maps, shopping and rating information, as well as information from the secondary screen device over a computer communications system into the second screen’s memory; and then displaying the information on the second display device that is synchronized with the primary device. These and other embodiments, benefits, and objects are described in detail, along with the accompanying drawings and the appended claims.

“For the purposes of this disclosure, different terms used in art are defined as:

“The term “herein” shall mean in the entirety of this specification including drawings and abstract. “Herein” shall refer to the entire specification, including all drawings and abstracts, as well as claims. This term does not include the section or paragraph in which it might appear.

“The terms ‘include?,?comprise? and?contains are not to be confused with the words?comprise?. The elements are not limited to the ones listed. Only the term “consist” is allowed. The elements are limited to the ones listed.

“There should not be any conceptual distinction made between the terms on, at or in. For example, the phrase receiving on or receiving at or receiving in a screen device should not be used.

“The term “responsive” is not defined. “Responsive” does not mean that all elements, conditions, preferences and/or requirements can be considered. An event that responds to a specific requirement does not necessarily have to be responsive to that requirement. A specified requirement may trigger an event that is responsive to another requirement. This is especially true when the second requirement, although described as an alternate requirement, can also be deemed complementary.

“The terms?”application software??,?software application??,?application??,?app??,?routine? and??computer software? are interchangeable. All executables, libraries, scripts and instructions that cause or are required by a device to execute a task, function or process shall be considered. A computer program that assists a user in performing a task, function, process or activity is called application software. Sometimes, application software and the operating system software can be synergistically integrated.

“The term “associate” shall mean assign, give, allocate and associate. “Associate” shall be understood to mean assign, give and allocate, as well as associate, designate or attribute, link, and/or relate.

“Clip” is an abbreviation for a short segment. “Clip” can be used to refer to a shorter segment than a chapter or a scene. A clip is a sequence of shots that includes at least one contiguous shot. It usually shows the same primary characters in the same location. The definition of a clip is affected by a material shift in the participation of the principal characters or in the location and/or a distinct alteration in thematic content, topic, or tone of conversation.

“The term ‘descriptor? “Descriptor” shall refer to a keyword, word or phrase, code, phrase and/or designations. Descriptor can also refer to any data, information or image that identifies and describes, links, and/or categorizes the content of a video, portion of a movie, or a frame. Linkage refers to any information, data and/or method that allows retrieving or downloading data from either a local/internal or remote source.

“The term “Dialog” can be defined as: “Dialog” can refer to any dialog, conversation or monologue. Information that is included in the subtitles and closed captioning may also be called dialog.

“Geographic map” is a term that refers to any map. Any map that includes satellite, topographical and street data, as well as maps such Google Maps and Google Earth Views and Google Street Views. It can be 2D or 3D, interactive or static, single- or multi-featured and representative. What is the term “geographic map?” Any depiction (e.g. map) that gives context to a location shall be considered “geographic map”.

“Item” shall be defined as: “Item” shall refer to: (i), an object, article or artifact; (iii), a specific action or act within an activity; (v) a sound; (vi) a part of a dialog; (viii), cinematography, cinematographic technique; (viii); and (ix). A locale.

“Keywords” shall be defined as: “Keywords” shall refer to words, phrases, definitions, codes and descriptors as well as data, metadata, numbers, and data.

“Keywording” shall be defined as: “Keywording” shall refer to associating keywords.

“The term “locale” means: “Location” can be used to refer to a place, site, spot, area, landmark, point of interest, tourist attraction or building. A location is a place or area outside the movie studio used to film a movie or a portion thereof. The actual location may be shown or it may be used in depictions to represent another locale. The term “locale” is different from the term “location”. The term “locale” is different from the term “location”. When the term “location” refers to a point on the timeline of the video.

“The term “navigator” is a generic term that refers to software and/or operating system software that allows you to navigate through the Internet. “Navigator” shall refer to software or operating system software that provides video playback capabilities, decoding and decrypting, as well as rendering for playing movies on personal computers. For example, Microsoft’s DVD Navigator and decoder filters are part of a navigator. The renderer can handle CSS and analog copy protection.

“Network” shall refer to any private or public, wired or wireless communication system. “Network” shall refer to any private or public, wired and wireless communication system.

“Notable” is a term that refers to content. Content shall refer to content that is: (i.e., may be of interest for a significant audience; or (ii., is notable, remarkable, compelling; or (iii. is uncommon, atypical or unusual; or (iv. is rare, unique, rare or extraordinary).

“Performer” is a term that refers to an individual, actor, participant, or actress who performs. “Performer” can refer to any individual, participant, actor or actress who appears in a video and/or is credited for the physical or verbal performance of a particular character. An actor in a motion pic, an athlete in a televised sporting competition, a newscaster on a news program and a chef in an cooking show are all examples of performers.

“The terms ‘play? and?playing? refer to the act of playing a segment of a video. “Play” and “playing?” refer to playing an entire segment or part of it. A method or system described herein may claim to play all or part of a segment. However, complete playing of a segment doesn’t necessarily mean that every frame, audio, sub-picture portion and/or bit data must be played.

“The term “plot info” shall refer to information, reasoning, and/or explanation related to, or relevant for, understanding or appreciating a plot or sub-plot.”

“The term “plot point” shall refer to a plot, subplot, storyline, or principle.”

“The term “preferences” shall refer to programming preferences, version preferences, presentation preferences, content preferences, function preferences, technical preferences, and playback preferences. A “programming preference” is a preference for a particular video (e.g., Spider-Man), a genre of videos (e.g., action), a type of video (e.g., interactive video detective games), a series of videos (e.g., 007), a broad subject matter (e.g., mystery), and/or the time and date of playback. A “version preference” is a preference for a particular version of a video (e.g., a motion picture) that is released by the copyright owner (e.g., a motion picture studio) and that contains content not found in an alternate version; the “Theatrical”, “Unrated”, and “Director’s Cut” versions of a video, and the version options on a DVD-Video, are each a version. A version does not include sequels and/or remakes, such as Spider-Man 2 (2004) and The Amazing Spider-Man (2012). A “presentation preference” is a preference or set of preferences governing the inclusion of selected segments within a video, or within multiple videos, in a presentation; the term also refers to a preference for one or more of the features offered by, for example, Presentations, Compilations, Subjects, Best Of, Performers, Shopping, Music, Search, and Preview. “Content preferences” are preferences regarding the form, explicitness, inclusion, and/or exclusion of objectionable content, including the length, level, detail, type, and depiction of potentially objectionable items; content preferences can be established using the CustomPlay app’s Control feature. A “function preference” is a preference for one or more of the elements provided by, or associated with, an in-video/playback function (e.g., Who, What, Locations, Plot Info, Filmmaking, Trivia, and Info). A “technical preference” is a preference for technical or artistic elements (e.g., dissolves, fades, and wipes) that can be used during transitions between non-sequential segments. A “playback preference” is a preference for the audio and visual options (e.g., camera angles, picture in picture, captioning, and commentaries) that are available for a particular video.”

“The terms “seamless” and “seamlessly” shall refer to a playing of segments without gaps perceptible to the human eye, achieved, for example, by maintaining a constant transmission rate. Although a seamless playing of non-sequential segments is technically seamless, it may not appear artistically seamless to a user because of the change in the content being played.”

“The term “search terms” shall refer to terms, words, phrases, codes, descriptors, labels, data, metadata, numbers, and/or any other information that identifies or describes what is being searched.”

“The terms “second screen” and “secondary screen” are interchangeable and shall refer to any computing device capable of playing/displaying video, audio, images, and/or subtitles. A primary screen device may also be referred to as a primary display screen. Primary screens and second screens include, for example, televisions, personal computers, laptop and portable computers, tablets, smartphones, mobile devices, remote controls, and other computing devices with a display screen. A primary screen device and a secondary screen device also include audio reproducing and output components (e.g., amplifiers and external and internal speakers).”

“The term “seek/step data” shall refer to any index, data, and/or information that enables access to a specific video frame or facilitates the use of a video map with a particular video. Seek/step data need not include step data; without step data, seek data may directly address every video frame within a video. Further, seek/step data need not be based solely on navigation points or synchronizing data (i.e., seek/step data can be based on shot changes and scene changes in the video).”

“The terms “segment” and “video section” are interchangeable and shall refer to one or more video frames. A segment definition generally identifies a beginning point and an end point (e.g., frames) within a video. In the second screen function examples, however, a segment definition may identify a single point (e.g., a frame) within a video.”

“The term “subtitles” shall refer to subtitles and/or any textual information representative of a portion of the audio dialogue. Displaying subtitles does not necessarily require displaying all of the subtitles in a video; a display of subtitles may comprise only a subtitle line, phrase, or unit. Herein, subtitles are distinguished from closed captioning.”

“The term “subtitle information” shall refer to any information, text, images, and/or data that enables the display of subtitles on a screen. Details described herein regarding the display of subtitles and the use of subtitle information apply alternatively, complementarily, and/or in combination with other information.”

“The term “supplemental information” shall refer to any information, text, image, data, video, or depiction that informs, entertains, clarifies, and/or illustrates.”

“The term “trailer” shall refer to a trailer, preview, video clip, still image, and/or any other content that precedes, is extraneous to, and/or adds to a movie.”

“The term “user” is interchangeable with the terms “subscriber”, “viewer”, and “person”, and shall refer to an end user who actively uses video content, passively views a movie, interacts with a game, retrieves video from a provider, and/or subscribes to and uses multimedia, internet, and/or communications services.”

“The term “variable content video” shall refer to a video having a nonlinear structure that allows for a plurality of logical sequences. A variable content video includes parallel, transitional, and/or overlapping segments that permit multiple versions of the video. Depending on the specific embodiment, a variable content video may also include an interface, software program routines, software program codes, and system control codes to control the playback of the video/audio. A variable content video requires its parallel, transitional, and/or overlapping segments to be variably played.”

“The terms “video” and “video program” are interchangeable and shall refer to any video image, regardless of source, motion, technology, or implementation. Video may include the images and audio of full motion picture programs, films and movies, interactive electronic gaming, multimedia content, and television programs. A video may also include subtitles, sub-picture information, user interfaces, software program routines, and system control codes that can be used to control the playback of the video/audio. “Movie” shall refer to a full-length motion picture that is released in theaters or on optical discs (e.g., a DVD-Video or Blu-ray Disc).”

“The terms “video map”, “map”, and “segment map” shall refer to any arrangement, table, or database that identifies a beginning and an end of one or more segments, one or more individual video frames, and/or one or more play positions in a particular video or audio. A video map also contains data associated with at least one segment, a sequence or plurality of segments, a particular video frame, and/or play positions in a specific video or audio. Data associated with a video map may include, for example: (i) a descriptor; (ii) an implicit or explicit editing or filtering action; (iii) a linkage between segments; (iv) data and/or textual information/content; and (v) any information, data, and linkages required to enable and support the features and functions described herein. Video maps may also include seek/step data and bookmark generating data.”
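By way of illustration only, the following Python sketch shows one hypothetical in-memory arrangement of such a video map; the class and field names (Segment, descriptors, linkage, seek_step) are assumptions made for illustration and not a disclosed format.

```python
# Illustrative sketch only: a minimal, hypothetical in-memory video map.
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: float                  # beginning play position, in seconds
    end: float                    # ending play position, in seconds
    descriptors: list[str] = field(default_factory=list)  # keywords/codes
    linkage: str | None = None    # e.g., a link for retrieving related data

@dataclass
class VideoMap:
    video_id: str                 # identifier read from the video source
    segments: list[Segment] = field(default_factory=list)
    seek_step: dict[int, float] = field(default_factory=dict)  # frame -> position

    def segments_at(self, position: float) -> list[Segment]:
        """Return every segment definition responsive to a play position."""
        return [s for s in self.segments if s.start <= position < s.end]

# Example: a two-segment map for a hypothetical video.
vm = VideoMap("example_movie")
vm.segments.append(Segment(3600.0, 3642.0, ["car", "Aston Martin DBS"]))
vm.segments.append(Segment(3642.0, 3700.0, ["dialog", "envelope"]))
print(vm.segments_at(3641.0)[0].descriptors)   # ['car', 'Aston Martin DBS']
```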

“The terms above and any other terms defined herein are to be understood in the context of this document and not as they might be defined by material incorporated by reference. An incorporation by reference does not modify, limit, or expand the definitions provided herein. Any term not formally defined herein shall have its usual and customary meaning.”

“Networks and End User Systems”

“FIG. 1 illustrates a network 100 of video providers 101-103 and end user systems 140. A video provider 101-103 can provide any combination of video, information, and data services, and the services provided need not be exclusive of those of other providers. Each participant, whether primarily a provider 101-103 or an end user 140, is capable of retrieving and transmitting video, data, and/or information to and from any other participant.”

“Video and other services can be delivered using a variety of possible communications networks, infrastructures, and system configurations; FIG. 1 shows a number of the network, infrastructure, and system configurations that can be implemented. These networks can be wired or wireless, using, for example, fiber optic 111, coaxial cable 112, twisted copper wire 110, cellular 114, and/or satellite communications.”

A video provider 101 includes: (i) communications technologies 111 for establishing a plurality of video and communications streams with a plurality of end users 140, to allow the uploading, downloading, and/or transferring of video, data, and/or information; (ii) processing hardware and software 122 for retrieving an end user’s video preferences, content preferences, second screen function preferences, search terms, and requests, and for processing them together with synchronization data, for example, to perform segment data searches that identify a segment or a list of segments responsive to a user’s search terms and search requests; (iii) mass storage random access memory devices 123 for storing video maps (e.g., segment data) and for supporting the processing of a user’s video preferences, content preferences, second screen function preferences, and requests; and (iv) processing hardware and software 124 for maintaining accounting and support services relating to the video, data, and/or information services.

Video providers can be further classified according to their functions and/or the extent of the videobase and data they maintain. Video service providers 101, such as cable companies, might provide a wider range of services than information providers 103 (e.g., websites). The wide variety of information and multimedia configurations available via the internet is evident in the video and information services offered.

A user may not have direct access to the resources of a video service provider 101-103. In that case, the requested video can be streamed, downloaded, or uploaded in real time to a service provider that is more economically accessible to the user. Some video service providers within the network 100 may not provide services directly to end users, but instead serve as depositories or originators for other service providers.

“In one of many possible configurations, an end user video system 140 gains access to the network 100 and the various service providers 101-103 through a communications device 131, e.g., a satellite dish or cable distribution box. An end user video system 140 comprises a variety of communications devices, computing devices, screens, and monitors. The principal communications devices include, for instance, a modem (e.g., cable modem), an internal communications device 142 (e.g., wired and/or wireless router), and a network/wireless extender 143. Communications interfaces such as Wi-Fi, Ethernet, cellular, and 4G LTE facilitate communications among the end user’s various computing devices and multi-screen combinations 144-149, which include, for example, a set top box 144, PC/monitor 145, tablets 146-147, smartphone 148, and television 149. A device need not be exclusively classified as a communications device, a computing device, or a screen; devices such as smartphones 148, tablets 146-147, and laptop/notebook computers 145 can include all three functions. A television screen 149 can also include storage and communications capabilities that may not otherwise be available in a set top box or television media accessory 144.

“Communications between devices can be established using any of a number of wired and wireless communications networks, including Wi-Fi and cellular (e.g., 4G LTE). A computing device need not be connected directly or indirectly by wire to a screen 149. A communications port 143 used to connect a computing device 145 to a second screen 149 may have varying levels of intelligence and capabilities; it may boost or manage the signal, or serve no purpose other than as a convenient outlet into which devices can be plugged and unplugged.

“The specific location of the end user’s devices, screens, and subsystems is not restricted to any particular arrangement, and many configurations can serve an end user’s needs. Preferably, an end user configuration includes a primary display device 149 and one or more secondary display devices (e.g., PC/monitor 145, tablets 146-147, and/or smartphone 148).

“Appropriate application software for the communications infrastructure might reside directly or indirectly within the primary display device, the secondary display devices, and/or separate devices in communication with both the primary and secondary display devices.”

Multi-screen combinations include, for example: television 149 and smartphone 148; PC/laptop 145 and smartphone 148; television 149, PC/laptop 145, and smartphone 148; and television 149 and multiple tablets 146-147. A multi-screen combination is not limited to a single second screen. A second screen, such as a tablet 146, may offer a second screen experience relative to both a primary screen, such as a television 149, and another second screen, such as a second tablet 147.

Multi-screen usage can be classified as disruptive (e.g., multi-tasking unrelated material) or complementary, and as sequential (e.g., usage of one screen followed by usage of another screen), simultaneous (e.g., expected use of a secondary screen as part of the viewing of content on a primary screen), or spontaneous. Multi-screen usage can, however, be both disruptive and complementary. For example, an interruption of a linear video experience is disruptive, yet it is also complementary when it provides information that the user finds beneficial in enhancing the video experience. A preferred embodiment provides interactive capabilities on a second screen that can be used in conjunction with specific content displayed on a primary screen.

“The novel features described herein can be implemented with devices other than the end user multi-screen systems, communications infrastructure, and service providers of FIG. 1. Many alternative or complementary devices, components, elements, and services can be integrated into a multi-screen configuration, as disclosed, for example, in: the U.S. patent publication entitled “System, Method and Device for Providing a Mobile Application across Smartphone Platforms to Enable Consumer Connectivity and Control of Media”; U.S. Patent Publication 20120210349, entitled “Multiple-Screen Interactive Screen Architecture”; U.S. Patent Publication 20130061267, entitled “Method and System for Using a Second Screen Device to Interact with a Set Top Box to Enhance a User’s Experience”; and U.S. Patent Publication 20130111514, entitled “Second Screen Interactive Platform”. Each of the cited references provides, with respect to its respective FIG. 1, disclosures that directly relate to the disclosure above in relation to FIG. 1, and each is incorporated herein by reference.”

“Video Map”

“An implementation of the disclosed video map and playback capabilities is provided in a currently free CustomPlay PC application, which provides users with a comprehensive set of video playback features and in-video functions for movies on DVD. The 14 feature sets of CustomPlay include Presentations, Compilations, Subjects, Dilemmas, Best Of, Performers, Filmmaking, Plot Info, Shopping, Music, Locations, Search, Preview, and Control. CustomPlay also provides eight in-video playback functions, including Plot Info and Locations. In the movie Casino Royale, examples of the video map enabled CustomPlay capabilities include 91 Shopping items, 19 Locations with links to Apple Maps, 12 entertaining Subjects (1-2 minutes each), 5 story-driven Presentations (26-115 minutes each), over 51,000 keywords that drive the extensive Search feature, and 14 content-customization categories.”

A video map can be provided with, or separately from, a video’s audio and video data. A movie can be downloaded or streamed from a remote provider, and the corresponding video map may then be downloaded from a secondary remote source (e.g., via the communications interface, from a distant server). A multi-screen configuration that includes processing, a memory device, and communications capabilities can thus provide in-video and playback functions for movies streamed from remote video providers. This embodiment allows for the downloading, from a remote server or player, of a video map, user interface, and other control programs specific to the motion picture. The control program reads the video source’s identifier and searches the mass storage fixed storage device for a corresponding video map; if the map is not found, it communicates with an outside source to obtain the map. In this fashion, linear video programs in the conventional format serve as a collection of motion pictures with which to illustrate the principles described herein.
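As an illustration of the lookup sequence just described (read the video’s identifier, search local mass storage for the map, and obtain it from an outside source if absent), a minimal Python sketch follows; the directory path, server URL, and JSON format are hypothetical assumptions, not a disclosed implementation.

```python
# Illustrative sketch only: local-then-remote video map retrieval.
import json
import pathlib
import urllib.request

MAP_DIR = pathlib.Path("maps")               # assumed local map store
MAP_SERVER = "https://example.com/maps/"     # assumed outside source

def obtain_video_map(video_id: str) -> dict:
    """Search mass storage for a video map; fetch it remotely if absent."""
    local = MAP_DIR / f"{video_id}.json"
    if local.exists():                       # map found on the fixed storage
        return json.loads(local.read_text())
    with urllib.request.urlopen(MAP_SERVER + video_id) as resp:
        data = resp.read()                   # obtain the map remotely
    MAP_DIR.mkdir(exist_ok=True)
    local.write_bytes(data)                  # cache for subsequent plays
    return json.loads(data)
```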

A video map and/or its components (e.g., acoustic signature information) can be downloaded prior to, simultaneously with, and/or after the playing of the relevant video. Some components can be downloaded beforehand, others as they are needed, and others at the end of the video’s playing. The downloading of information and content may also be responsive to the user’s pre-established or contemporaneously established feature and function preferences and to the specific multi-screen environment.

“Devices”

“Specifically, with regard to second screen devices, the teachings of smartphones (e.g., Samsung Galaxy, iPhone, and Nokia Lumia), tablets (e.g., iPad Air, Amazon Kindle Fire, and Microsoft Surface), and smart TVs (e.g., Samsung UHD Series and Sony KDL Smart TV Series) are incorporated herein by reference.”

These devices can be further enhanced with processing, firmware, and memory to support the second screen capabilities detailed herein. A processor may include a central processing unit (CPU), any associated computing or calculating devices (e.g., graphics chips), and application software. Depending on the user’s multi-screen configuration and preferences, a second screen device can provide tactile, auditory, and/or visual notifications that alert the viewer to the availability of new content or functions. Tactile feedback may include device vibrations or the mechanical movement of a key or button. The number of vibrations, or the particular ring tone used in a notification, may indicate which function is being performed.

“Remote Control Functions.”

A second screen serves a significant purpose: it can provide video playback controls and display content that is complementary to the content on the primary screen. A smartphone or tablet application can provide all the functionality of a remote control and its on-screen interfaces. These devices can communicate with the player over Wi-Fi via a transmitter application in the controlling device and a listener application in the controlled device.
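A minimal Python sketch of such a transmitter/listener pairing over a Wi-Fi network follows, for illustration only; the IP address, port number, and command strings are assumptions, not a disclosed protocol.

```python
# Illustrative sketch only: a "transmitter" app on the controlling device
# sending remote control commands to a "listener" app in the player.
import socket

PLAYER_ADDR = ("192.168.1.20", 5000)   # assumed IP/port of controlled device

def send_command(command: str) -> str:
    """Transmitter side: send one remote control command, return the reply."""
    with socket.create_connection(PLAYER_ADDR, timeout=2.0) as sock:
        sock.sendall(command.encode("utf-8") + b"\n")
        return sock.recv(1024).decode("utf-8")

def listen_forever() -> None:
    """Listener side: runs on the controlled playback device."""
    with socket.create_server(("", 5000)) as server:
        while True:
            conn, _ = server.accept()
            with conn:
                command = conn.recv(1024).decode("utf-8").strip()
                # Dispatch to the player here, e.g., play/pause, skip, What.
                conn.sendall(b"OK " + command.encode("utf-8"))
```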

“FIG. 2 is an illustration of a remote control interface 200 for a smartphone. A function is activated by touching its button/object. Interface 200 includes, for instance: navigation controls 201-209 (e.g., Exit 201, Settings 202, Help 203, Lists 205, Search 207, and Browser 208); selection functions 211-213; play and audio volume controls 221-227 (e.g., Play/Pause Toggle 221, Skip Forward Clip/Scene 222, Fast Forward 224, Fast Rewind 226, and What 227); in-video function controls 241-247 (e.g., Info 244, Filmmaking, Plot Info 246, and Dilemma); and content controls 251 (e.g., the content category Violence and the levels of explicitness None, Mild, and Graphic).

“The Exit control 201 confirms the exiting of the application. The Settings function 202 displays a screen with options to customize the display of in-video notifications on the primary screen and/or a secondary screen. The Help function 203 provides context-sensitive help information. The List function 205 displays, on the primary screen and/or the secondary screen, a menu of CustomPlay features. The In-Video function displays the screen that provides the various in-video functions and play control functions. The Search function 207 displays a screen with the Search functionality. The Browser function 208 provides keyboard and mouse functionality. The Set-up function 209 provides access to the various utilities that enable establishing communications with other devices (e.g., IP address and port number).

“The Play From control 222 allows a user to play the video from the current position regardless of which feature is being used. For example, a user might use the Search feature to find segments responsive to a keyword search, and then activate the Play From control to play the video from the selected segment. The Play From control can be responsive to the user’s pre-established presentation preferences (e.g., a Custom presentation or Play As Is). If the user has not established a presentation preference, the Play From control can default to the Custom presentation. The Play From control can also default to the presentation last used by the user, or a presentation can be selected in response to the feature last used (e.g., the Play As Is presentation when the Search or Preview feature was last used, and the Custom presentation when the Best Of feature was last used).
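For illustration, the Play From default logic described above might be expressed as follows; the function and its fallback rules are a hypothetical sketch of the described behavior, not a disclosed implementation.

```python
# Illustrative sketch only: selecting a presentation for the Play From control.
def select_presentation(user_preference: str | None,
                        last_feature: str | None) -> str:
    if user_preference:                      # pre-established preference wins
        return user_preference               # e.g., "Custom" or "Play As Is"
    if last_feature in ("Search", "Preview"):
        return "Play As Is"                  # play the found segment as is
    if last_feature == "Best Of":
        return "Custom"                      # customized presentation
    return "Custom"                          # assumed system default

print(select_presentation(None, "Search"))   # Play As Is
```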

When enabled by the user during playback, the What replay control 227 rewinds the video a specified amount of time and replays the section of the video with subtitles enabled. The disclosures of U.S. Pat. No. 7,430,360, entitled “Replaying a Video Segment with Changed Audio”, among other previously referenced patents, are incorporated herein by reference.

The Play Current Clip/Scene control 228 uses the clip and scene database portions of a video map to identify a clip/scene definition that is responsive to the current play position, automatically rewinds to the beginning of that clip/scene, and plays from that point. Whether a Clip/Scene control (e.g., Skip Back Clip/Scene 223, 226; Play Current Clip/Scene 228) responds to a clip or to a scene is a predefined option.
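A minimal sketch of the clip/scene lookup follows; the flat list of (start, end) clip definitions is a hypothetical stand-in for the clip and scene database portions of a video map.

```python
# Illustrative sketch only: rewind to the start of the clip that contains
# the current play position.
def play_current_clip(clips: list[tuple[float, float]],
                      position: float) -> float:
    """Return the play position to rewind to: the current clip's start."""
    for start, end in clips:
        if start <= position < end:
            return start
    return position          # no clip definition found; keep playing

clips = [(0.0, 95.5), (95.5, 151.0), (151.0, 240.0)]   # (start, end) seconds
print(play_current_clip(clips, 120.0))                 # 95.5
```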

“Second Screen Functions.”

A second screen function, in general, is an in-video function that is enabled on a secondary screen of a multi-screen system while a video is being played on a primary screen. The disclosures of the following references relating to second screen apparatuses, architectures, and methods are incorporated herein by reference: U.S. Pat. No. 7,899,915, entitled “Method and Apparatus for Browsing Using Multiple Coordinated Devices”; U.S. Patent Application 20120210349, entitled “Multiple-Screen Interactive Screen Architecture”; U.S. Patent Application 20130061267, entitled “Method and System for Using a Second Screen Device to Interact with a Set Top Box to Enhance a User’s Experience”; U.S. Patent Application 20130014155, entitled “System and Method for Presenting Content with Time Based Metadata”; and U.S. Patent Application 20140165112, entitled “Launching a Second-Screen App Related to a Non-Triggered Initial Screen”.

This document also incorporates the teachings of current second screen capabilities and functions, such as Google’s Play Info Cards and Amazon’s X-Ray. Many methods are available for synchronizing multiple devices over Wi-Fi networks or remote servers (e.g., JargonTalk).

“It is intended that the features, playback functions, and in-video functions described herein (e.g., remote control functions on a second screen, superimposed notification indications, seamless skipping of video segments, and selective playing) may be implemented in second screen embodiments that do not require altering a traditional playing of a movie on a primary screen.

The second screen functions can take advantage of any additional content (e.g., video/audio commentary or supplementary audio content) that is provided with a movie. In such a case, the video map maps the audio segments and associated descriptors of the additional content, and the synchronization data can be used to display the additional video content on the second screen during the movie’s playback.

“What Second Screen Function”

“FIG. 3A is an illustration of a second screen displaying features of the What function. The screenshot of the user interface 301 is from Casino Royale, at a clip in which 007 approaches an Aston Martin car; this clip is followed by another in which 007 sits in the car and delivers a line as he opens an envelope. A user who wants to understand the words activates the What function control 302, which displays the subtitles 303 of the recently delivered line: “I love you too, M”. Optionally, and/or responsive to the second screen embodiment, the What function control 302 need not rewind or pause the video. If the What function control 302 is enabled, and the user so prefers, the subtitles 303 are displayed on the second screen, allowing the user to review a specific portion of the dialogue while the video is not replayed. The disclosure of the previously incorporated U.S. patent is applicable here. Because multitasking and accessing external information increase the chance that a user will miss dialogue, the What function is particularly advantageous in a second screen application.

“Activating the What function (e.g., touching the icon 302 on the second screen interface 301) causes second screen processing to identify the current play position, directly or indirectly, from the playing of the video on the primary screen and/or from an internal clock of the second screen that is synchronized with the video playback. The current play position is used to determine the time period for which subtitles are to be displayed. The display of subtitles is determined by system defaults and/or viewer preferences with respect to, for example: the period to be replayed (e.g., 20 seconds); the subtitle language (e.g., English or another preferred language, depending on the availability of subtitle information); the type of subtitles (e.g., subtitles, closed captioning, or commentary); and the end time for the display of subtitles, which can be the point at which the What function was activated, a system-specific time, and/or a point responsive to the viewer’s pre-established preferences. To identify the subtitle information appropriate for the requested time period, the video map’s subtitle data is searched. The appropriate subtitles are then displayed on the second screen without the need to rewind or replay the video.
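For illustration, a minimal Python sketch of the subtitle retrieval just described follows; the subtitle data layout and the 20-second default replay period are assumptions drawn from the example above, not a disclosed implementation.

```python
# Illustrative sketch only: the What function's subtitle lookup. Given the
# current play position and a replay period preference, select the subtitle
# units from the video map's subtitle data for display on the second screen.
def what_subtitles(subtitles: list[dict], position: float,
                   replay_period: float = 20.0) -> list[str]:
    """Return subtitle text for the period ending at the current position."""
    window_start = position - replay_period
    return [s["text"] for s in subtitles
            if window_start <= s["start"] <= position]

subs = [{"start": 3640.0, "text": "I love you too, M."}]
print(what_subtitles(subs, 3650.0))   # ['I love you too, M.']
```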

“In contrast to the method used when the video is replayed with subtitles, a number of novel methodologies can be applied in a second screen embodiment. Considerations include the nature of the communications between the primary/video playback device and the second screen, whether another device or software is playing the video, and whether it is capable of multitasking and playing the video uninterrupted while retrieving the required subtitle information. The second screen may also retrieve the subtitle information independently of the video playback capabilities. A preferred second screen embodiment downloads the subtitle information from a local or remote storage device/server. One embodiment includes the downloading of subtitle information and subtitle synchronizing information as part of a download of a video map; the synchronizing data allows the second screen to synchronize its display of subtitles with the video playback on the primary screen. In this instance, a third party may provide the subtitle information and synchronizing details. Since the video is not being replayed, the subtitle display need not be perfectly synchronized with the audio track; a delay or offset in delivering the data to the second screen could even be beneficial, and the synchronizing information could already offset the subtitle display.

“As an alternative to an intermittent activation of the What function, a particularly innovative continuous display of subtitles implements an advantageous offset synchronization method. This contrasts with a conventional implementation of closed captioning displayed on a primary screen, in which any delay in the closed captioning display is haphazard and not beneficial.

A display of subtitles can be responsive to user preferences with respect to all videos, a particular category of videos (e.g., foreign films), certain sequences within a movie (e.g., scenes with heavily accented dialogue or non-U.S.-accented English dialogue), and/or any portion that contains unclear dialogue. The user can activate the What function in continuous mode, rather than the temporary mode enabled by touching the What function control 302, by pressing and holding the What control 302. A subsequent touch of the What function control 302 deactivates the continuous mode.

If the What function is activated in continuous mode, the subtitles, or preselected portions of the subtitles, are displayed continuously in a synchronized offset fashion. The concepts of offset and delay, and their related synonyms, are interchangeable in this context. FIG. 3B illustrates offset synchronization methods. Subtitles are conventionally displayed as closely as possible to the corresponding dialogue in the audio track; in FIG. 3B, the subtitle display 322 is conventionally synchronized with an audio track that includes a portion of dialogue 321. FIG. 3B also illustrates an offset synchronization of the display of subtitles that may be controlled by one or a combination of offset methodologies, including, for example: the subtitle portion/sentence/phrase (subtitle unit) is displayed the equivalent of a subtitle unit(s) 323 after the corresponding accurately synchronized subtitle would have been displayed 322; the subtitle is displayed at a pre-established play position 324 after the position at which the corresponding accurately synchronized subtitle would have begun to be displayed 322; and the subtitle is displayed a system default and/or viewer pre-established time offset period 325 (e.g., a delay of 2 seconds) after the time at which the corresponding accurately synchronized subtitle would have been displayed or begun to be displayed. A What embodiment uses subtitle offset synchronization to control the delay between real-time created closed captioning and the corresponding dialogue, rather than leaving a random, unsynchronized delay.
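For illustration, the subtitle-unit 323 and time-offset 325 methodologies described above might be computed as follows (the play-position offset 324 is analogous); the function and its defaults are hypothetical.

```python
# Illustrative sketch only: when to display a subtitle unit on the second
# screen relative to its accurately synchronized display time.
def offset_display_time(sync_times: list[float], index: int, method: str,
                        unit_delay: int = 1, time_delay: float = 2.0) -> float:
    """sync_times: accurately synchronized start time of each subtitle unit."""
    if method == "unit":        # delay by the equivalent of N subtitle units
        shifted = index + unit_delay
        return sync_times[min(shifted, len(sync_times) - 1)]
    if method == "time":        # delay by a fixed offset period (e.g., 2 s)
        return sync_times[index] + time_delay
    return sync_times[index]    # "none": normal, accurate synchronization

times = [10.0, 14.5, 21.0]
print(offset_display_time(times, 0, "unit"))   # 14.5 (one unit later)
print(offset_display_time(times, 0, "time"))   # 12.0 (2 seconds later)
```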

A continuous mode does not necessarily require that all subtitle units be displayed. For example, continuous mode may display only a subset of the subtitle units (e.g., the 10-12 subtitle units corresponding to the instances where the audio is least clear and/or to the best lines). Such an intermittent continuous mode reduces the number of subtitles displayed and the amount of data that must be presented to the second screen.

Examples of embodiments include: downloading information (e.g., subtitles) from a remote provider over a communications network; receiving synchronizing data that is responsive to a playing of a movie on a primary screen device (e.g., using an identification of the current play position, or comparing an audio fingerprint to an acoustic fingerprint library to determine the current position in the video); and displaying the supplementary information on the second screen device responsive to an offset synchronization with the video playing on the primary screen device, such that the second screen device’s display lags the playing of the video on the primary screen device.
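By way of illustration only, a deliberately simplified sketch of matching an audio fingerprint against an acoustic library follows; a real system would use robust acoustic fingerprints tolerant of noise rather than the exact hashes used here, and all names are hypothetical.

```python
# Illustrative sketch only: a grossly simplified stand-in for acoustic
# fingerprint matching, showing the lookup flow rather than real audio DSP.
import hashlib

def fingerprint(audio_window: bytes) -> str:
    return hashlib.sha1(audio_window).hexdigest()

def build_library(windows: list[bytes], window_len: float) -> dict[str, float]:
    """Map each window's fingerprint to its play position in the video."""
    return {fingerprint(w): i * window_len for i, w in enumerate(windows)}

def current_position(library: dict[str, float], captured: bytes) -> float | None:
    """Compare a fingerprint of captured audio against the acoustic library."""
    return library.get(fingerprint(captured))

windows = [b"scene-one-audio", b"scene-two-audio"]
lib = build_library(windows, window_len=5.0)
print(current_position(lib, b"scene-two-audio"))   # 5.0 seconds
```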

The individual steps described above can be performed by the primary screen device, the second screen device, and/or a remote service provider. For example, the current play position may be identified by the primary screen device or the second screen device creating and comparing an audio fingerprint, or a remote service provider can analyze the audio information received from the second screen device over a computer communications network. Remote processing may also perform the offset synchronization and adjust the downloaded supplementary data accordingly. Functions other than the display of subtitles, and the display of other supplementary information, may likewise benefit from offset synchronization. Multiple synchronization methods may be active simultaneously on one display, each responsive to, for instance, the relationship between the supplementary information and the depiction currently being displayed on the primary screen. A user may activate, deactivate, select, and/or adjust the offset delay parameters (e.g., time delay and subtitle unit delay), and the offset in the synchronization can be deactivated to restore a normal synchronization.

“Notwithstanding conceptual distinctions, the disclosures of the previously incorporated U.S. patent apply to a second screen embodiment. Similar to the disclosed rewinding of the video that is cumulatively responsive to multiple consecutive activations of the What function, the time period that defines the display of subtitles in a second screen embodiment is likewise cumulatively responsive to multiple successive activations of the What function.

“ID and Credits Second-screen Functions”

FIG. 3A also includes an ID function control 305 and a Credits function control 311. The ID function control 305 allows the user to request identification information for an item. An item may be identified by an image and/or promotional image, a written identification 307 (e.g., brand name, model, and description “Aston Martin DBS Sports Car”, or type of action and name of the action “martial arts butterfly kick”), a write-up, and access to additional information 308.

“Further, an ID function might synergistically benefit from the data associated with a Subject presentation. In the movie The Big Lebowski, for example, the Subject could be “The Dude”, and the Subject’s data can identify over 100 instances of dialogue that include the term “Dude”. An ID Game function displays a notification for each instance in the ID Game set, allowing the participants to take the action they have associated with that game (e.g., reciting the best line from the movie). An ID Game notification may also include the presentation of Trivia function information/questions, and/or the presentation of a simplified Yes/No question, to determine whether the particular action associated with the game or with the particular notification instance is to be performed.”

The Credits function’s primary purpose is to notify the user whether there is any additional content during or after the playing of the credits. The Credits function control 311 allows the user to query the function and display the appropriate message. In this example, the user is informed that there is no content during or after the credits 312.

“By contrast, in The Avengers, after one minute and fifty-six seconds of the film’s credits, a brief scene shows Thanos, the evil supervillain, talking to his emissary The Other. The credit sequence ends with a final scene showing the Avengers dining at a Middle Eastern restaurant. When the Credits function control 311 is activated, the display informs the user that there are two distinct sequences within the credits.

“The Credits function can also be activated automatically, depending on system defaults or user preferences, as soon as the credits begin to appear on the primary screen, to notify the user, on the primary screen and/or the secondary screen, whether there is any additional content. The Credits function control can also be activated to allow the user to choose whether the playing of the movie should be stopped. If there is content during or after the credits, the non-content portions of the credits are automatically skipped, responsive to the video map. In the motion picture The Avengers, the credits are skipped for the first minute and fifty-six seconds; the brief scene showing Thanos talking to his emissary The Other is played; the next six minutes and seventeen seconds of credits are skipped; and the final scene featuring the Avengers is played.

As in the other examples, the label and icon of a function control can be responsive to its object. In the example of the Credits function control 311, the icon of a glass depicts that it is not empty when there is additional content, and the label reads, for example: “There’s more”.

“In contrast to the Credits function, which helps a user see all of a movie’s content, the Synopsys function allows a user who has not viewed a video to still follow the storyline. The Synopsys function maps/defines fragments of a video into chapters, scenes, and clips, and generates multiple units from the video. Each unit is associated with a time code that corresponds to the unit’s start position in the video, and each unit is given a brief synopsis (e.g., a paragraph) to aid the understanding of the storyline.
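For illustration, a minimal sketch of such synopsis units, each associated with a start time code and a brief synopsis, follows; the layout and the unit contents shown are hypothetical.

```python
# Illustrative sketch only: Synopsys units and a lookup of the synopsis
# applicable at a given play position.
import bisect

units = [                       # (start time code in seconds, synopsis)
    (0.0,   "007 earns his double-0 status."),
    (600.0, "Bond pursues a bomb-maker in Madagascar."),
]

def synopsis_at(position: float) -> str:
    starts = [start for start, _ in units]
    i = bisect.bisect_right(starts, position) - 1
    return units[max(i, 0)][1]

print(synopsis_at(700.0))       # Bond pursues a bomb-maker in Madagascar.
```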
