Invented by Eric Clayton and Danny Valente, Sonos Inc

The market for social playback queues has grown steadily in recent years as more people look for ways to enhance their music listening and share it with others. Social playback queues, also known as collaborative playlists, let users build playlists together and listen to music in real time with friends or followers.

One of the main reasons for their popularity is the sense of community and connection they provide. Music has always been a powerful tool for bringing people together, and social playback queues take this a step further. Whether it’s a group of friends at a party, a couple on a road trip, or strangers from different parts of the world, these playlists let people share their favorite songs and discover new ones together.

Beyond the social aspect, social playback queues offer a more personalized and curated music experience. Instead of relying on algorithms or radio stations, users handpick the songs added to the playlist, tailoring it to a specific mood or occasion.

Collaborative playlists have also become a popular feature on music streaming platforms. Spotify, Apple Music, and YouTube Music have all integrated collaborative playlist features into their services, recognizing the demand for this type of social interaction. These platforms let users create and share their own playlists and also follow or contribute to playlists created by others. The market is not limited to streaming platforms, either: standalone apps and websites dedicated to creating and sharing collaborative playlists often add features such as song voting, commenting, and live chat, further enhancing the social side of the experience.

As the market for social playback queues continues to grow, there are also opportunities for businesses and brands to tap into the trend. Collaborative playlists can serve as a marketing tool, letting companies engage their audience through music. A clothing brand, for example, could create a playlist that reflects its brand identity and share it with its followers, creating a unique and immersive brand experience.

In conclusion, the market for social playback queues is thriving due to the desire for social connection, personalized music experiences, and the integration of collaborative playlist features by music streaming platforms. This trend enhances the way we listen to music and gives businesses a unique, interactive way to engage their audience. Whether for personal enjoyment or marketing purposes, social playback queues are here to stay.

The Sonos Inc invention works as follows

The method may include identifying at least one media item corresponding to an indication of media and causing a playback queue of a media playback system to include the identified at least one media item. The media playback system may then be configured to play the one or more identified media items from the playback queue.

Background for Social playback queues

Options to access and listen to digital audio out loud were limited until 2003, when SONOS, Inc. filed one of its first patent applications, entitled ‘Method for Synchronizing Audio Playback Between Multiple Networked Devices,’ and began selling a media playback system in 2005. The Sonos Wireless HiFi System allows users to listen to music from a variety of sources using one or more networked playback devices. A software controller installed on a computer, tablet, or smartphone allows users to play music in any room that has a networked playback device. The controller can also be used to stream a different song to each room that has a playback device, group rooms together for synchronous playback, or play the same song in all rooms simultaneously. With the growing popularity of digital media, it remains important to continue developing consumer-accessible technologies that enhance the listening experience.

I. Overview

As described above, examples of this invention involve updating a playback queue based on communications sent to an input feed. In one aspect, a method is disclosed. The method includes monitoring, by a computing device, a communication feed for an indication of media; detecting the indication in the communication feed; and identifying at least one media item that corresponds to the indication.
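The monitoring-and-detection flow above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the `#nowplaying` tag format, `detect_indications`, and `update_queue` are all hypothetical names invented here for clarity.

```python
import re

# Hypothetical sketch: scan messages in a communication feed for media
# indications (here, a "#nowplaying Artist - Track" pattern) and collect
# the matching media items for the playback queue.
INDICATION = re.compile(r"#nowplaying\s+(?P<artist>[^-]+)-\s*(?P<track>.+)")

def detect_indications(feed_messages):
    """Return (artist, track) pairs detected in the feed."""
    items = []
    for message in feed_messages:
        match = INDICATION.search(message)
        if match:
            items.append((match.group("artist").strip(),
                          match.group("track").strip()))
    return items

def update_queue(playback_queue, feed_messages):
    """Append each identified media item to the playback queue."""
    for item in detect_indications(feed_messages):
        playback_queue.append(item)
    return playback_queue
```

In practice, identification would resolve the detected text against a media service's catalog; here the regex match stands in for that step.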

In another aspect, a method is disclosed. The method includes: detecting, by a computing device, first indication data that includes a first media indication; sending the first media indication to a computing device corresponding to a communication stream; detecting, in the communication stream, a second media indication identifying at least one media item corresponding to the first media indication; detecting, by the computing device, command data indicating a command to cause a playback queue of a media playback system to include the identified media item or items; and sending an indication of the command to the computing device corresponding to the communication stream.
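The two-step flow in this aspect, posting an indication to the stream and then acting on a command, can be sketched as follows. The stream and queue are plain lists and every name here is illustrative, not a Sonos API.

```python
# Hypothetical sketch of the flow above: one device posts a media
# indication to the communication stream, and a later command causes the
# stream's latest media item to be included in the playback queue.
def post_indication(stream, indication):
    """First device: send a media indication to the communication stream."""
    stream.append(indication)
    return stream

def on_command(stream, queue):
    """On an 'add to queue' command, include the stream's latest media item."""
    if stream:
        queue.append(stream[-1])
    return queue
```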

In another aspect, a method is disclosed. The method includes generating, by a computing system, data that represents a playback queue of a media playback system. This data contains (i) a playback order for one or more media items in the queue, and (ii) an indication that a communication stream is included in the queue.
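The generated queue data, a playback order plus an indication of the included communication stream, might look like the following sketch. The field names (`media_items`, `feed_id`) are assumptions made here for illustration, not terms from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the generated queue data: a playback order over
# media items plus an indication of the communication stream included in
# the queue.
@dataclass
class PlaybackQueue:
    media_items: list = field(default_factory=list)  # items in playback order
    feed_id: Optional[str] = None                    # included communication stream

    def to_wire(self):
        """Serialize the queue state, e.g., for a controller display."""
        return {"order": list(self.media_items), "feed": self.feed_id}
```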

It will be apparent to one of ordinary skill in the art that this disclosure includes numerous other embodiments. Some examples may refer to functions performed by given actors, such as “users.” Such references are for purposes of explanation only; the claims should not be interpreted to require action by any such example actor unless the claim language explicitly requires it.

II. Example Operating Environment

FIG. 1 illustrates an example configuration of a media playback system 100 in which one or more of the embodiments described herein can be implemented or practiced. The media playback system 100 shown corresponds to an example home environment with multiple rooms and spaces, such as a master bedroom, office, dining room, and living room. In the example shown in FIG. 1, the media playback system 100 includes playback devices 102-124, control devices 126 and 128, and a wired or wireless network router 130.

The following sections further discuss the components of the example media playback system 100 and how they may interact to provide a user with a media experience. Although the discussion herein generally refers to the example media playback system 100, the technologies described are not limited to applications in the home environment shown in FIG. 1. For instance, they may be used to provide multi-zone audio in commercial settings such as restaurants, malls, or airports; in vehicles such as sports utility vehicles (SUVs), buses, or cars; and on ships, boats, airplanes, and so on.

a. Example Playback Devices

FIG. 2 shows a functional block diagram of an example playback device 200 that can be configured as one or more of the playback devices 102-124 of the media playback system 100 shown in FIG. 1. The playback device 200 may include a processor 202, software components 204, memory 206, audio processing components 208, audio amplifier(s) 210, speaker(s) 212, and a network interface 214 including wireless interface(s) 216 and wired interface(s) 218. In one case, the playback device 200 may not include the speaker(s) 212, but rather a speaker interface for connecting the playback device 200 to external speakers. In another case, the playback device 200 may include neither the speaker(s) 212 nor the audio amplifier(s) 210, but rather an audio interface for connecting the playback device 200 to an external audio amplifier or audio-visual receiver.

In one example, the processor 202 may be a clock-driven computing component configured to process input data according to instructions stored in the memory 206. The memory 206 may be a tangible, computer-readable medium configured to store instructions executable by the processor 202. For instance, the memory 206 may be data storage that can be loaded with one or more of the software components 204, executable by the processor 202 to achieve certain functions. In one example, the functions may involve the playback device 200 retrieving audio data from an audio source. In another example, the functions may involve the playback device 200 sending audio data over a network to another device or playback device. Yet another example involves pairing the playback device 200 with one or more other playback devices to create a multi-channel audio environment.

Certain functions may involve the playback device 200 synchronizing playback of audio content with one or more other playback devices. During synchronous playback, a listener will preferably not be able to perceive time-delay differences between playback by the playback device 200 and by the one or more other playback devices. U.S. Pat. No. 8,234,395, entitled “System and method for synchronizing operations among a plurality of independently clocked digital data processing devices,” which is hereby incorporated by reference, provides more detail on audio playback synchronization among playback devices.
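One common way to achieve this kind of lockstep start, sketched below, is for a group coordinator to pick a start time slightly in the future and have each device wait until its clock reaches that shared value. This is a generic illustration of the idea, not necessarily the scheme in the referenced patent; the function names and the 0.5-second budget are assumptions.

```python
# Hypothetical sketch of synchronized playback start: a coordinator picks
# a future start time, and each device delays until its own clock reaches
# that shared value before emitting its first sample.
def schedule_start(now, latency_budget=0.5):
    """Pick a start time far enough ahead for every device to receive it."""
    return now + latency_budget

def wait_remaining(device_clock, start_time):
    """Seconds a given device should still wait before starting playback."""
    return max(0.0, start_time - device_clock)
```

A full implementation would also need ongoing clock-drift correction between devices, which this sketch omits.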

The memory 206 may further be configured to store data associated with the playback device 200. For example, the memory 206 may store audio sources accessible by the playback device 200, or a playback queue with which the playback device 200 (or some other playback device) may be associated. The data may be stored as one or more state variables that are periodically updated and used to describe the state of the playback device 200. The memory 206 may also store data associated with the state of other devices of the media system, shared from time to time among the devices so that each device has the most recent data. Other embodiments are also possible.
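The periodic state sharing described above can be sketched as a simple versioned merge: each state variable carries a version counter, and a device adopts any peer entry newer than its own. The version-counter scheme is an illustrative assumption, not a mechanism stated in the patent.

```python
# Hypothetical sketch of periodic state sharing: each state variable is
# stamped with a version, and a device adopts any peer entry that is
# newer than its own copy.
def merge_state(local, remote):
    """Merge two {name: (version, value)} state maps; newest version wins."""
    merged = dict(local)
    for name, (version, value) in remote.items():
        if name not in merged or merged[name][0] < version:
            merged[name] = (version, value)
    return merged
```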

The audio processing components 208 may include one or more digital-to-analog converters (DACs), audio preprocessing components, audio enhancement components, or digital signal processors (DSPs), among others. In one embodiment, one or more of the audio processing components 208 may be a subcomponent of the processor 202. In one example, audio content may be processed by the audio processing components 208 to produce audio signals. The produced audio signals may then be provided to the audio amplifier(s) 210 for amplification and playback through the speaker(s) 212. In particular, the audio amplifier(s) 210 may include devices configured to amplify audio signals to a level sufficient to drive one or more of the speakers 212. The speaker(s) 212 may include an individual transducer (e.g., a “driver”) or a complete speaker system that includes an enclosure with one or more drivers. A particular driver of the speaker(s) 212 may be, for example, a subwoofer, a mid-range driver, or a tweeter, depending on the frequency range. In some cases, each transducer of the speakers 212 may be driven by an individual corresponding amplifier of the audio amplifiers 210. The audio processing components 208 may also be configured to produce analog signals that are sent to another playback device for playback.
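A single DSP stage in such a chain, for instance a gain applied before the amplifier stage, can be sketched as below. This is a generic illustration of sample processing, not a component of the patent; the function name and decibel convention are assumptions.

```python
# Hypothetical sketch of one DSP stage in the chain above: apply a gain
# in decibels to a block of samples before it reaches the amplifier stage
# (20 dB corresponds to a 10x increase in amplitude).
def apply_gain(samples, gain_db):
    """Scale a block of audio samples by a decibel gain."""
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]
```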

The network interface 214 or an audio line-in connection (e.g., an auto-detecting 3.5 mm audio connection) can be used to receive audio content intended to be played by the playback device 200.

The network interface 214 may be configured to facilitate data flows between the playback device 200 and other devices on a network. As such, the playback device 200 may be configured to receive audio content from other playback devices in communication with it, from network devices on a local area network, or from audio content sources over a wide area network such as the Internet. In one example, the audio content and other signals received and transmitted by the playback device 200 may be sent as digital packet data containing IP-based source addresses and IP-based destination addresses. In such a case, the network interface 214 may be configured to parse the digital packet data so that data intended for the playback device 200 is properly received and processed by it.
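The parsing step can be sketched as a filter over incoming datagrams: keep only payloads whose destination address matches the device's own. The tuple layout and function name below are illustrative assumptions.

```python
# Hypothetical sketch of the parsing step: out of all datagrams seen on
# the interface, keep only payloads whose destination matches this
# playback device's own IP address.
def packets_for_device(packets, own_ip):
    """packets: iterable of (source_ip, dest_ip, payload) tuples."""
    return [payload for _src, dest, payload in packets if dest == own_ip]
```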

The network interface 214, as shown in FIG. 2, may include wireless interface(s) 216 and wired interface(s) 218. The wireless interface(s) 216 may provide network interface functions for the playback device 200 to wirelessly communicate with other devices (e.g., other playback devices, speakers, receivers, network devices, control devices) in accordance with a communication protocol. The wired interface(s) 218 may provide network interface functions for the playback device 200 to communicate with other devices over a wired connection in accordance with a communication protocol such as IEEE 802.3. While the network interface 214 shown in FIG. 2 includes both wireless and wired interfaces, in some embodiments the network interface 214 may include only wireless interface(s) or only wired interface(s).

In one example, the playback device 200 and one other playback device may be paired to play two separate audio components of audio content. For instance, the playback device 200 may be configured to play a left-channel audio component while the other playback device plays the right-channel audio component, producing or enhancing a stereo effect of the audio content. The paired playback devices (also referred to as “bonded playback devices”) may further play audio content in synchrony with other playback devices.
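The channel split behind such a bonded pair can be sketched as follows: from one interleaved stereo stream, the left-paired device takes the even-indexed samples and the right-paired device the odd-indexed ones. This interleaved-list representation is an assumption made for illustration.

```python
# Hypothetical sketch of bonded stereo playback: from one interleaved
# stereo stream (L, R, L, R, ...), extract the component the device's
# pairing role assigns to it.
def channel_for_role(interleaved, role):
    """Return the samples a left- or right-paired device should play."""
    if role == "left":
        return interleaved[0::2]
    if role == "right":
        return interleaved[1::2]
    raise ValueError("role must be 'left' or 'right'")
```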
