Invented by Stephen E. Goetzinger, Steelcase Inc
The Steelcase Inc. invention works as follows: A conferencing system for conferencing between local conferees in a conference room and at least one remote conferee located away from the conference room includes an emissive panel located inside the conference area for generating first and second video representations of remote conferees, and at least one processor to drive the emissive panel and simultaneously display at least the first and second video representations. The first video representation presents the remote conferee from one perspective, and the second presents the remote conferee from another perspective. The first video representation is visible only from a first viewing zone in the conference room and not from a second zone, and the second video representation is visible only from the second zone.
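The zone-specific visibility described above can be illustrated with simple geometry: which representation a viewer sees depends on the angle of the viewer off the panel's normal. The following is a minimal sketch under assumed zone angles; the function name, zone labels, and all numeric values are illustrative and not taken from the patent.

```python
import math

# Hypothetical angular viewing zones, measured in degrees from the panel's
# normal. (Illustrative values only; the patent does not specify geometry.)
ZONES = {
    "first video representation": (-60.0, -10.0),
    "second video representation": (10.0, 60.0),
}

def visible_representation(viewer_x, viewer_y):
    """Return which representation a viewer at (x, y) sees.

    The panel sits at the origin facing the +y direction; the viewer's
    angle off the panel's normal decides which viewing zone they occupy.
    """
    angle = math.degrees(math.atan2(viewer_x, viewer_y))
    for name, (lo, hi) in ZONES.items():
        if lo <= angle <= hi:
            return name
    return None

# A viewer to the left of the panel sees only the first representation;
# a viewer to the right sees only the second.
```

The point of the sketch is only that the two representations occupy disjoint angular zones, so no position in the room sees both at once.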
Background for Systems and Methods for Implementing Augmented Reality and/or Virtual Reality
The present invention relates to conferencing systems, and more specifically to various methods and systems for using virtual and augmented reality to enhance conferencing activities, including communication and content sharing.
Unless otherwise indicated, the term “meeting” will be used in the following paragraphs to refer to any gathering of two or more people in which they communicate, including but not limited to conferences, gatherings, etc. Similarly, the term “attendee” will be used to refer to any person who communicates with other people in a meeting.
Face-to-face meetings are still considered the most effective way for people to communicate. This belief was strongly held years ago and is, in some ways, even more prevalent today, for several reasons. First, face-to-face meeting attendees can use both their hearing and vision senses to understand what other attendees are trying to communicate. As is well known, a person’s posture, facial expressions, and other actions that are visible to others can often belie the words they say or provide a deeper understanding of those words. True or more informed communication therefore requires visual as well as vocal communication. Face-to-face meetings also allow attendees to develop relationships with one another.
Second, if an attendee is paying attention, the natural feedback provided by the audio and visual senses allows him to determine the effect of his communications on the other attendees. After a speaker makes a statement, he can see and hear the reactions of other attendees. This allows him to know (1) whether other attendees were paying attention, (2) which attendees agreed or disagreed with the statement, and (3) which attendees understood or did not understand the statement. Feedback can be multi-faceted, including facial expressions, body language, and other non-verbal communications (e.g., a grunt or a sigh).
Third, the simple fact that a person is physically present at a meeting can command attention. Consider the difference between sitting with someone in person and talking with them on the phone. When a person is in the room, others are more considerate of their time; they pay greater attention and do not divide their attention.
Fourth, when a person communicates simultaneously with multiple attendees, such as in a team meeting with many attendees, there is a group dynamic that can be sensed by observing how attendees interact and behave, even those who are not actively participating (e.g., attendees who are listening to others speak). In a meeting, there is often a general sense that attendees are either in agreement or disagreement.
Fifth, when attendees share information in a tangible form, such as documents or content on electronic emissive surfaces, the content that attendees pay attention to is itself a form of communication. Assume, for example, that three large emissive surfaces presenting common content are placed in a conference room. During a first meeting, assume that all six attendees are looking at one of the three emissive surfaces. During a second meeting, assume that only one attendee is looking at the content on the first surface, two attendees are looking at the content on the third surface, and the fourth and fifth attendees are both looking at an attendee who is holding a document. Clearly, merely sensing who and what attendees are looking at can be extremely valuable in understanding what is going on within a conference room.
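Tallying gaze targets, as the scenario above suggests, is straightforward once each attendee's gaze target has been resolved by some sensing system. The following is a minimal sketch; the attendee names, target labels, and the idea of a pre-resolved gaze map are all illustrative assumptions, not details from the patent.

```python
from collections import Counter

# Hypothetical sensed gaze targets for six attendees, matching the
# second-meeting scenario in the text (illustrative labels only).
gaze = {
    "attendee_1": "surface_1",
    "attendee_2": "surface_3",
    "attendee_3": "surface_3",
    "attendee_4": "attendee_6",   # looking at the person holding a document
    "attendee_5": "attendee_6",
    "attendee_6": "document",
}

def attention_summary(gaze_targets):
    """Count how many attendees are looking at each target."""
    return Counter(gaze_targets.values())

summary = attention_summary(gaze)
```

Even this trivial aggregation shows how attention data could surface which surface, person, or document is the current focus of a meeting.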
While face-to-face collocated communication is still important in many cases, two recent developments have led to a substantial reduction in the total number of person-to-person communications that are held at the same location. First, many companies are very large and have employees located in geographically dispersed locations. Communication that was once between colleagues in the next building or down the hall now happens between people in other states, countries, and even continents. Face-to-face communication is often prohibitively expensive due to the dispersed locations of employees.
Second, technology has been developed that can operate as a “substitute” for in-person meetings. The term “substitute” is in quotation marks because, in many cases, technology is not a good substitute for face-to-face meetings.
The first breakthrough communication technology to make a significant impact on the proliferation of collocated meetings was audio telephone conferencing hardware and software. This enabled remote attendees to hear spoken words and to voice their own communications with one or more local phone-conference attendees. Phone conferencing hardware and software are now commonplace in conference rooms, other workspaces, and private offices.
While voice telephone systems are useful and have significantly reduced the cost of communication between people, they have several shortcomings. First, phone systems provide no visual feedback. Attendees of phone-linked meetings must rely on audio output alone to determine meaning, level of attention, understanding, and group thinking. The inability to discern meaning, level of understanding, and other indicators of successful communication is exacerbated when there are many (e.g., 8-12) attendees on a call, including sometimes several remote attendees.
Second, audio for remote attendees is often provided by a single speaker, or a few speakers (e.g., two on a laptop computer), with little if any ability to produce directional sound (e.g., sound arriving from any of several directions toward a remote attendee). A remote attendee thus hears the comments of any of 12 local attendees from the same non-directional speaker at her location, so she cannot rely on sound direction to tell who is speaking.
The addition of video to audio conferencing is a second technology development that addresses some of the problems associated with phone systems. The idea is for meeting attendees to use cameras to capture video, which is then transmitted and presented, along with the audio signals, to other attendees at different locations. Attendees can therefore see and hear one another during a conference. In some instances, a video conference is set up between only two attendees at different locations. In these cases, a camera is usually positioned at an edge of the video conferencing display or emissive surface at each station and is pointed directly at the person at that station. The video from each station is sent to the other attendee and displayed on the emissive surface next to the edge-located camera at the receiving station.
In other cases, several local attendees may be located in the same conference room while a remote attendee linked via video conferencing is at his or her own remote workstation. In most cases, an emissive screen or surface in the local conference room displays a video representation of the remote attendee, and a camera positioned adjacent an edge of that surface (e.g., the top edge) captures video of all of the local attendees. The remote attendee thus sees all of the local attendees from the perspective of the camera located along the edge of the surface where the representation of the remote attendee appears, and the local attendees view the remote attendee from the viewpoint of the camera situated along the edge of the remote attendee’s emissive surface.
Unless otherwise indicated, the large view that a remote attendee has of a local area or conference space will be referred to hereafter as a “local area view”, and the view of a remote attendee captured by the camera at her station, located along the edge of her emissive surface, will be referred to as a “station view”. A station view may be viewed by a second remote attendee or by local attendees in a local conference space.
One problem with known systems is that attendees in the local area and station views appear to look off into the distance instead of directly at the other attendees viewing those views. As an example, when first and second remote attendees are videoconferencing, the first attendee looks at the image of the second attendee on the emissive surface at her station, so her sight trajectory (ST) is not aligned with the camera at her station. Her image at the second attendee’s station therefore shows an ST that is misaligned with respect to the second attendee. Similarly, as the second attendee views the representation of the first attendee on his station’s surface, his ST is misaligned with his camera, and his image at the first attendee’s station is likewise misaligned. Unless otherwise indicated, this phenomenon, in which an attendee’s sight trajectory is misaligned with a camera positioned at the edge of a screen, will be referred to as “the misaligned effect”.
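The misaligned effect described above is simple parallax geometry: the attendee looks at the on-screen image of the other party while the camera sits some distance away at the screen edge, so the captured gaze is off-axis by an angle that grows with the camera offset and shrinks with the viewing distance. The following is a minimal sketch of that angle; the function name and the numeric values are illustrative, not from the patent.

```python
import math

def gaze_misalignment_deg(camera_offset_m, viewer_distance_m):
    """Angle between the attendee's sight trajectory (toward the on-screen
    image of the other party) and the camera axis, for a camera offset
    from the point on screen where the attendee is actually looking."""
    return math.degrees(math.atan2(camera_offset_m, viewer_distance_m))

# A camera 0.2 m above where the other party's eyes appear on screen,
# viewed from 0.6 m away, yields roughly an 18-degree downward gaze
# in the captured video.
angle = gaze_misalignment_deg(0.2, 0.6)
```

The same relationship explains why the effect is worse at close viewing distances (e.g., a laptop) than across a large conference room.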
Like voice conferencing systems, video conferencing systems have several shortcomings. These often depend on whether the attendee in question is a local attendee in a multi-attendee conference room or a remote attendee linked to the conference alone. From the viewpoint of a remote attendee linked to a conference room with multiple local attendees, there are four major shortcomings.
First, for a variety of reasons, a remote attendee has a difficult time determining what or whom other attendees, whether local or remote, are paying attention to. While a local area view can give a remote attendee the general direction of a local attendee’s gaze, it is not always easy for the remote attendee to tell who or what the local attendee in question is looking at. For example, if first and second local attendees are seated along the right edge of a large table and a third local attendee sits across the table on the left side, it may be difficult to determine from the remote attendee’s local area view which of the first and second local attendees the third attendee is looking at. The difficulty in determining local attendees’ sight trajectories is exacerbated as the number of attendees increases. As another example, if a first local attendee looks at a second local attendee who, in the remote attendee’s local area view, appears behind a third local attendee, it is difficult to discern the first attendee’s sight trajectory.
Similarly, it is not always easy for a remote attendee to tell, from her local area view, whether a local attendee is looking at her or at other nearby information (e.g., a representation of a second remote attendee). A remote attendee might mistakenly believe that a local attendee is looking directly at her when the local attendee is actually looking at other information displayed adjacent to the emissive surface that presents the remote attendee’s view. The misaligned viewing effect makes it even more difficult to tell whether local attendees are actually looking at remote attendees.
In another instance, when at least first and second remote attendees are linked into a local conference, there is no known system that allows the first remote attendee to determine what or whom the second remote attendee is viewing. In known configurations, the first remote attendee may see the second remote attendee with a misaligned effect, while the second has a similar but oppositely misaligned view of the first. Neither remote attendee can tell what the other is looking at. For instance, the second remote attendee may be looking at a local area view of the meeting space presented adjacent to the first remote attendee’s station view; in that case, the first remote attendee will have a difficult time determining whether the second remote attendee is viewing the local area view or the view of the first remote attendee.
Second, a remote attendee may not be able to see all of the local attendees, or at best will have a distorted and imperfect view of some of them, due to where local attendees choose to place their chairs and fix their sight trajectories. For example, if a first local attendee moves her chair two feet away from the edge of a table and a second attendee is located directly between the camera and the first local attendee, the camera’s view of the first local attendee may be blocked or partially blocked. Many other scenarios can result in one local attendee being hidden from a remote attendee’s local area view.