Alphabet – Erick Tseng, Amith Yamasani, Michael A. Cleron, Paul A. Dickinson, Google LLC

Abstract for “Accelerated pan user interface interaction”

A computer-implemented user interface method includes: displaying on a touch screen a portion of a large-scale graphical space that is multiples larger than the device’s display; receiving input from a user of the device to pan within that graphical space; automatically generating a pop-up graphical panning control in response to the input; and receiving user input to the panning control. The panning control can then be used to provide panning in the space.

Background for “Accelerated pan user interface interaction”

People spend hours with electronic devices such as phones, computers, music players, and similar gadgets, and they prefer devices that are easy to use and provide pleasant, intuitive interactions. They interact with such electronics through inputs and outputs: outputs are usually provided audibly or on a flat graphical display screen, while inputs may come from touch screens, joysticks, mice, four-directional keypads, and other input mechanisms.

Mobile devices are becoming more powerful, and users interact with them more and more through graphical objects such as lists of items, maps, and images. These objects can represent very large amounts of information, which can be difficult to show on a mobile device. For example, a map of the United States displayed in full detail could, in effect, be miles in size. It can be difficult to present graphical information in enough detail for a user (e.g., by zooming in on one area of an item) while still giving the user a feeling of the overall space and allowing the user to move intuitively through that space.

This document describes systems and techniques that can be used to interact with a user of a computing device such as a mobile phone with a touch screen interface. The techniques can react to inputs that move a display around a multi-dimensional space in more than one direction. When a user indicates an intent to pan within a space, by scrolling or panning in an image or a list, the techniques can determine whether the space is large and, if so, present a visible but unobtrusive control element that allows accelerated panning. A scroll bar, for instance, may be generated automatically at the edge of the screen when a user starts to pan in a large space using touch screen inputs.

In certain instances, such systems and techniques may offer one or more benefits. A user of a device might save time when navigating through large spaces, which would otherwise require them to drag a finger repeatedly across the touch screen surface; with the accelerated panning control they can move across the entire space with a single finger input. The user may also be given a contextual indication of where they are located in the larger space: the scrolling control might be placed along the edge of the display at a position that reflects the user’s current position within the space. This can make the user’s interaction with the device easier and more enjoyable, so that the user uses particular applications more frequently and is more likely to buy the device.

“In one embodiment, a computer-implemented visual navigation method is disclosed. A portion of a large-scale graphical space is displayed on a touch screen. The method includes receiving input from the user to pan within the graphical space and, in response to this input, automatically generating a pop-up graphical panning control. A user input to the panning control is then received and used to provide panning in the space, wherein a single movement of the panning control can cause the display to pan across a substantial portion of the large-scale graphical space. The pop-up control can comprise a slider button located at an edge of the touch screen. The method may also include increasing the size of the graphical panning control if multiple panning inputs are received without the control being selected.

“In some aspects, the graphical space comprises a list of items, and the graphical panning control causes accelerated scrolling through that list. The graphical space may instead comprise a map or an image, with the graphical panning control causing accelerated panning across the map or image. The method can also include automatically removing the graphical panning control after the user has made a selection with it. In addition, the method may include displaying on the touch screen, during selection of the panning control, a miniature representation of the graphical space and an indicator of the user’s current location within it.

“In some other aspects, the method also includes displaying on the touch screen, during user selection of the panning control, an indicator of the segment, from among a group of discrete segments of the graphical space, that is currently being displayed. The pop-up graphical panning control may be generated by a long press on the touch screen or by a quick flick across the touch screen. The control’s size can be determined by the relative sizes of the touch screen and the graphical space. The method may also include receiving a long-press input from the user on the touch screen and generating a zoom control on the touch screen in response.

“In another implementation, an article comprising a computer-readable storage medium that stores program code is disclosed. The code can cause one or more machines to perform operations that include displaying on a touch screen a portion of a large-scale graphical space, receiving input from a user to pan within the space, automatically generating a pop-up graphical panning control in response to receiving that input, and providing panning in the graphical space, wherein a single movement of the panning control can pan the display across a substantial portion of the space.

“A computer-implemented user interface system is disclosed in yet another implementation. The system includes a graphical display to present portions of a large-scale graphical area, a touch screen input mechanism to receive user selections, and means for generating an accelerated panning control when a user pans within the large-scale graphical area. The system can also include a mapping application, where the pop-up control comprises a panning control for the mapping application.

“The accompanying drawings and description below detail one or more embodiments. Additional features and benefits will be evident from the drawings and claims.

“DESCRIPTION OF DRAWINGS”

“FIGS. 1A and 1B show conceptual displays of user navigation through, respectively, a long list of items and a large map, on a mobile device.

“FIG. 2A shows sequential displays that may be generated for a user when they navigate a long list using a mobile device with a touch screen.

“FIG. 2B shows displays that may be generated for a user based on the position or motion of a mobile device.

“FIG. 2C is an example of a user interface that allows zooming and panning in large spaces.

“FIG. 3 is a schematic diagram of a system that provides user interaction in response to touch screen inputs.

“FIGS. 4A-4B are flow diagrams of examples of processes for receiving user selections via a graphical user interface.

“FIGS. 4C-4D are flow charts that show how to update a display based on the movement of a mobile device.

“FIG. 5 is a schematic representation of an exemplary mobile device that may implement the techniques described in this document.

“FIG. 6 is a block diagram illustrating the internal architecture of the device of FIG. 5.”

“FIG. 7 is a block diagram illustrating exemplary components of the operating system used by the device of FIG. 5.”

“FIG. 8 is a block diagram illustrating exemplary processes implemented by the operating system of FIG. 7.”

“FIG. 9 shows an example of a computer device and a mobile computer device that can be used to implement the techniques described here.

“Like reference symbols on the different drawings indicate like elements.”

This document describes systems and techniques by which mobile devices can interact with their users. A user might be shown icons or other graphical objects that indicate where they are within a large virtual space, along with controls that let them govern how they move within that space. For example, a proportional scroll bar may be displayed along the screen’s edge when a user scrolls through a large list of items, such as the titles of songs in a playlist on a digital media player. If the user scrolls at a sufficient speed or over a sufficient amount of the list, a large letter may appear on the screen to indicate the letter of the alphabet at which they are currently located; even if the list itself appears blurred while scrolling, the user can still see where they are at any given moment. The vertical position of the letter on the display may correspond to its position within the alphabet, so that the letter “A” appears at the top of the display and the letter “Z” appears at its bottom, with the scroll bar likewise moving from the top of the display toward its bottom. As the user scrolls further, the scroll bar may change in appearance, becoming larger or more prominent.
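
As an illustrative sketch only (not part of the disclosed embodiments), the following Kotlin fragment models this behavior: it derives the index letter to overlay and a proportional scroll-bar position from the current scroll offset. The names, the minimum thumb size, and the list contents are assumptions for illustration.

```kotlin
// Sketch: index letter overlay and proportional scroll-bar geometry for a long list.
data class ScrollIndicator(val indexLetter: Char, val thumbTopFraction: Float, val thumbHeightFraction: Float)

fun scrollIndicator(items: List<String>, firstVisible: Int, visibleCount: Int): ScrollIndicator {
    require(items.isNotEmpty() && visibleCount > 0)
    // The overlay letter is the first letter of the topmost visible item.
    val letter = items[firstVisible.coerceIn(0, items.lastIndex)].first().uppercaseChar()
    // Thumb height is proportional to the visible fraction of the list, with a minimum size.
    val heightFraction = (visibleCount.toFloat() / items.size).coerceIn(0.05f, 1f)
    // Thumb position mirrors how far the user has scrolled through the list.
    val maxFirst = (items.size - visibleCount).coerceAtLeast(1)
    val topFraction = (firstVisible.toFloat() / maxFirst) * (1f - heightFraction)
    return ScrollIndicator(letter, topFraction, heightFraction)
}

fun main() {
    val artists = ('A'..'Z').flatMap { c -> (1..3).map { "$c-artist-$it" } }  // 78 hypothetical items
    println(scrollIndicator(artists, firstVisible = 30, visibleCount = 7))
}
```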

A virtual magnifying glass can also be used to magnify part of a large area of visual space. This object may take the form of a region of the screen whose contents are substantially magnified. During web browsing, for example, it lets a user see the overall layout of a page and then quickly read or review a particular section of that page.

“Another example involves a 360-degree panorama of a real-world location, such as those provided by the GOOGLE STREETVIEW service. Such a panorama can be created by taking digital images simultaneously, or nearly simultaneously, with a number of cameras located near a common point and aimed radially outward. These images can be navigated normally on a personal computer through the GOOGLE MAPS service. On a mobile device, the images can instead be navigated using built-in position-detecting components, such as a compass in the device. A user may select a geographical location, such as one near their current location; the view of that location will then be displayed on the mobile device oriented in the same direction as the device’s current heading (e.g., as determined by the device’s compass). As the user turns while holding the device in front of them, the displayed images change to match the view in the corresponding direction from the selected location.
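
A minimal sketch of the idea, assuming the panorama is stored as a single wrapped image strip covering 360 degrees (the function and constants are illustrative, not from the patent or the STREETVIEW API):

```kotlin
// Sketch: map a compass heading to a horizontal pixel offset in a 360-degree panorama strip.
fun panoramaOffsetPx(headingDegrees: Float, panoramaWidthPx: Int, viewportWidthPx: Int): Int {
    val normalized = ((headingDegrees % 360f) + 360f) % 360f        // clamp heading to 0..360
    val center = (normalized / 360f) * panoramaWidthPx              // pixel column facing the heading
    val left = center - viewportWidthPx / 2f                        // center the viewport on that column
    return (((left % panoramaWidthPx) + panoramaWidthPx) % panoramaWidthPx).toInt()
}

fun main() {
    // A 4096-px-wide panorama viewed through a 480-px-wide screen, facing roughly SSE (~157 degrees).
    println(panoramaOffsetPx(157f, 4096, 480))
}
```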

“FIGS. 1A and 1B show conceptual displays of navigation in large spaces on a mobile device. FIG. 1A shows navigation through a long list, and FIG. 1B shows navigation across a large map. In each figure, the full space (shown in dashed lines) is substantially larger than the area that can be displayed at once (shown in solid lines). Thus, mechanisms are discussed here that assist a user in navigating across such spaces in ways that are more convenient than repeatedly panning across display-after-display-after-display until the user finally gets to their desired area.”

Referring to FIG. 1A, a graphical system 101 contains a list 108 of items stored on a smartphone. The items may include personal contact information, songs or recordings from a user’s music library, files stored on the device, video files, or other appropriate groups of items that are usefully displayed in a list format. A user may be shown an individual item 110 together with information about that item. If the item 110 is a contact, it may display information such as the name and telephone number of the contact. If the item 110 is a musical group, the system might display an image of the group or an album cover, the name of the group, and other pertinent information about the group. If the item 110 is a file in a list of files, the system might display the file name, its size, and the date it was last saved.

“A display 106 is shown superimposed over the middle of the list 108. The display 106 represents a typical portrait-format video display on a mobile device, measuring approximately 3-4 inches diagonally. It is drawn as a window over the list 108 to represent conceptually that the user can scroll through the list and see different parts of it through the display.

The list 108 is thus conceptually shown as sliding up and down beneath the display 106, with the display 106 acting as a window onto the list. Standard mechanisms may be used to determine how the list 108 is organized and how the display 106 formats its items for presentation. The top and bottom of the display 106 are shaded so that items in the list 108 appear to fade to black at the display’s edges, giving the user the illusion that the list is spinning on a three-dimensional reel as they move up and down through it.

The display 106 can be integrated with a touch screen structure, so that the user may drag the list 108 up and down by sliding a finger over the list on the display. Because the list 108 can be very long, however, repeatedly swiping a finger across display 106, or flicking the display 106 to give the panning motion momentum, may be an inconvenient way to move up and down it. To aid navigation across the long list 108, display 106 therefore shows a visual control 112. The control 112 may be a slider button of the kind familiar from scroll bars in various applications, such as spreadsheets and word processors. The control 112 can be displayed as a scroll bar along the side of the list 108, or as an element that visually floats above the elements of the list 108.

The control 112 may take a proportional form, as is well known in the art; for example, the control 112 may be shorter when the list 108 is longer. The control 112 can be used to navigate the user to the top or bottom of the list 108 by moving the control 112 across its predetermined range of positions within display 106. A short control 112 thus indicates that the display 106 is showing only a small part of a very long list 108. Movement of the control 112 through a span equal to its own height may approximate movement of the display 106 across one display’s worth of the list 108. This means that when the control 112 is small (and the list 108 is long), a given movement of the control moves items across display 106 more quickly than when the control 112 is larger and the list 108 is shorter.
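
The mapping from a thumb drag to a list position can be sketched as follows; this is an illustrative Kotlin fragment with assumed geometry, not an implementation from the patent:

```kotlin
// Sketch: dragging the slider thumb through its track maps linearly onto the whole list,
// so a small thumb movement produces a large, accelerated movement of the list.
fun listOffsetForThumb(thumbTopPx: Float, trackHeightPx: Float, thumbHeightPx: Float,
                       totalItems: Int, visibleItems: Int): Int {
    val travel = (trackHeightPx - thumbHeightPx).coerceAtLeast(1f)  // usable thumb travel
    val fraction = (thumbTopPx / travel).coerceIn(0f, 1f)           // 0 = top of list, 1 = bottom
    return (fraction * (totalItems - visibleItems)).toInt()         // index of first visible item
}

fun main() {
    // Dragging the thumb halfway down an 800-px track over a 60-item list showing 7 items
    // jumps the display to roughly item 26 -- far further than a direct drag of the same length.
    println(listOffsetForThumb(thumbTopPx = 360f, trackHeightPx = 800f, thumbHeightPx = 80f,
                               totalItems = 60, visibleItems = 7))
}
```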

“The control 112 can take many other forms. The control 112 could be placed at a location other than that shown, for example overlaid on top of the content in display 106. In certain cases, however, the control 112 may be kept as close as possible to the edge of display 106 to prevent the user’s finger, or another pointer, from obscuring the displayed content.

“Depending on the implementation, movement of the control 112 in one direction may cause the display 106 to move across the list 108 in either the same or the opposite direction. Dragging the control 112 downward may visually drag the list 108 downward, making it appear that the display is climbing the list and that the control 112 is attached to the list 108 through an accelerating linkage. Alternatively, moving the control 112 downward may cause the display 106 to move down through the list 108, giving the impression that the control 112 is connected to the display 106 via an accelerating linkage.

“FIG. 1B shows example displays 114, 116, and 118 that provide views onto a map 104 of a metropolitan area, here the Minneapolis-St. Paul metropolitan region. The map 104 is simplified in this example to allow a clearer view of the components on displays 114, 116, and 118. The displays 114, 116, and 118 can each show only a portion of the map at a time, so mechanisms are provided to allow intuitive, easy-to-use panning across the map 104 by a user of a mobile device with a touch screen.

“A first display 114 shows a user viewing a zone in the southwest of the metro area on their mobile device. A four-headed arrow 114a is generated over the map in display 114. To indicate a desire to pan around the map 104, the user can drag the arrow 114a up, down, or sideways; where the arrow 114a is not displayed, the user may instead pan the map 104 with ordinary dragging motions. For example, the user can drag the arrow 114a toward the upper right-hand corner to cause the display 114 to move toward the northeast of the map, and can manipulate the arrow 114a to make other exaggerated, accelerated movements.

“The display 114 also contains a small overview map 114b of the entire area. The overview map 114b is presented in a familiar way: a large box represents the entire area of the map 104, and a smaller box represents the user’s current display 114. This allows the user to quickly identify their location relative to other locations on the larger map. The overview map 114b may contain major features from the map 104, but generally not all of them.

“Display 116 shows slider controls 116a and 116b that work in a manner similar to the slider control 112 of FIG. 1A. A user may initially be presented with display 116 showing only the map information; the controls 116a and 116b may appear when the user starts to pan across the map 104 in display 116. By sliding the control 116a, the user can pan the display 116 to the left or right side of the map 104; similarly, sliding the control 116b to the top or bottom of display 116 moves the display 116 to the top or bottom of the map 104. In this manner, the user can move around the map 104 quickly, using the controls 116a, 116b to accelerate their movement: a single swipe on the controls can move display 116 much farther than a normal panning motion made directly on the map in display 116.

Display 118 allows navigation in a manner similar to display 114, but with an annular ring 118a displayed over the map 104. The location of the ring 118a on display 118 indicates the relative position of display 118 within the map 104: here, the display 118 is located near the top of the map 104 and slightly to the left, so the ring 118a is likewise near the top of display 118 and slightly to the left. To pan normally, the user can drag a finger on display 118 away from the ring 118a; to pan quickly across the map 104, the user can instead place a finger on the ring 118a and drag it.

“Thus, FIGS. 1A and 1B demonstrate various mechanisms by which a user can navigate in a large virtual space. These mechanisms can give the user a sense of their current position within the large space, and can provide a control or controls that let the user direct their computing device to move around the space. The mechanisms may be particularly useful for mobile devices with touch screens.

FIG. 2A illustrates sequential displays 200-206 that may be generated for a user navigating a long list on a mobile device with a touch screen. This example shows a list of singers or music groups, which could conceptually be arranged like the list in FIG. 1A.”

“Display 200 displays seven groups with the group name, number of albums on the user’s device, and total number of songs in those albums. Each group is also represented by a graphic icon. The icon indicates whether one album or multiple albums are available. Album cover art can be downloaded either manually or automatically. Other images may also be added to the icons if they are available.

In this example, a slider control 208 is displayed along the right-hand side of display 200. The slider control 208 can be shown whenever the list is larger than can fit on display 200, or it may be shown only in particular contextual circumstances (e.g., after a particular action by the user indicates an intent to pan a long distance across the data), as explained more thoroughly below.

“Display 202 shows a flick 210 by the user, drawn as a small dot growing into a larger dot with a ring. This graphic is not normally shown on display 202; it represents a typical user input on display 202. Where the slider control is not yet displayed, the flick 210 can result in the creation of slider control 212. Because the user is at the top of the alphabetical list, the slider control 212 is displayed at the top of display 202. The length of the list shown in display 202 may affect the size of the slider control 212; here, for example, the slider control 212 measures approximately one-tenth the height of display 202 and the list contains roughly 60 artists, so a longer list would produce a proportionally smaller control 212. A minimum size may be set for the slider control 212, however, so that even if the list contains thousands of entries, the control remains clearly visible and easy to select.

“Display 204 shows the result of the flick 210. The list of artists has scrolled upward and come to rest two letters lower in the alphabet. Although the flick 210 itself covered only about the distance of two artists, the display scrolled through many more. The scrolling distance may depend on the speed of the flick, much as the motion of a spinning wheel or other physical object depends on how hard it is flicked, in a manner that will be familiar to skilled artisans.
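
A minimal sketch of that momentum model in Kotlin, assuming a constant friction deceleration (the constant is an assumption, not a value from the patent):

```kotlin
// Sketch: a flick gives the list an initial velocity that decays under constant friction,
// so a fast flick scrolls through many more items than the distance the finger itself covered.
fun flickScrollDistancePx(initialVelocityPxPerSec: Float,
                          frictionPxPerSec2: Float = 1500f): Float {
    // With constant deceleration, distance = v^2 / (2 * a).
    val v = kotlin.math.abs(initialVelocityPxPerSec)
    return (v * v) / (2f * frictionPxPerSec2)
}

fun main() {
    println(flickScrollDistancePx(800f))    // gentle flick: ~213 px
    println(flickScrollDistancePx(3000f))   // fast flick: ~3000 px, many screens of items
}
```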

“Additionally, the control 212 has changed into control 214. First, the position of the scrolling control 214 has shifted slightly downward relative to control 212, to indicate that the user is now lower in the list in display 204 than in display 202. Second, the control 214 is more prominent than control 212, to help the user find it more easily; it has become thicker and bulges at its center, to indicate that it may be used for particular contextual functions.

The control 214 can be used to perform accelerated panning of the artists’ list. For example, the control 214 can be pulled all the way to the bottom of display 204: although that motion spans only about five artists’ worth of screen, it moves the display across the entire list, down to the letter Z.

“The control 214 could also be made more prominent by other means, such as making it brighter. The control 214 can be made to appear stretched, as if under pressure, when the user performs multiple flicks such as flick 210, making it seem more urgent that control 214 be used for accelerated panning: multiple flicks on the list suggest that the user would be better served by accelerated panning than by continuing to pan manually. The color of control 214 can also change with the number of flicks the user makes across the list, or control 214 may move up and down slightly along the display’s edge to draw the user’s attention, making it more obvious that the user may need the control.

Display 206 shows the result of the user selecting control 214, as indicated by the dot at the bottom of control 214 on the arrow leading to display 206. The dot on control 216 indicates that the user keeps pressure on control 216 while moving through display 206, and this selection can cause the control’s appearance to change from that of control 214 to that of control 216. To provide guidance to the user, an index letter 218 is also displayed in a familiar manner. The index letter 218 identifies a grouping of elements in the list; here, a letter of the alphabet indicating the first letter of the artists’ names at the current position, shown at the top of display 206. Other forms of the index letter 218 could be used, such as a number representing a file size, or any other indicator by which a list can be divided into discrete groups.

The index letter 218 can be displayed in various ways. Here, the index letter 218 is placed near the edge of the display to minimize its impact on the artist names being shown, and it can be partially or wholly transparent so that names remain visible even beneath it. The index letter 218 can also move up and down on display 206 along with the control 216; for example, it could be placed just to the left of control 216 so that the user can see the index letter 218 even when their finger is on control 216. As the control 216 moves, the index letter 218 moves with it, which lets the user focus on the letter while navigating close to the desired artist.

“The index letter may change at a rate that is independent of how the letters are distributed among the items in the list on display 206. For example, even if many artists’ names start with A but very few start with C, the index letters A to Z (and possibly 0 to 9) may be divided equally across the range of control 216. Moving the control down 1/26th of display 206 will then always change the index letter 218 by one letter, such as from A to B.
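
For illustration only (the function name and range are assumptions), the equal-division rule can be expressed as:

```kotlin
// Sketch: the control's position is divided evenly among the index letters, so each 1/26th
// of the track advances the overlay letter by one, regardless of how many items share a letter.
fun indexLetterForControl(controlFraction: Float): Char {
    val letters = 'A'..'Z'
    val slot = (controlFraction.coerceIn(0f, 0.9999f) * 26).toInt()  // 0..25
    return letters.elementAt(slot)
}

fun main() {
    println(indexLetterForControl(0.0f))   // A
    println(indexLetterForControl(0.5f))   // N (halfway down the track)
    println(indexLetterForControl(1.0f))   // Z
}
```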

“In some cases, the index letter can change while the user moves the control 214 up or down, even though the items in the list do not move during such manipulation. There may thus be two modes of control: normal panning, in which items scroll up and down as the user pans; and accelerated panning, in which items do not move and an index letter is cycled through in an accelerated fashion as the control is moved.

The techniques described here allow a user to navigate simply by dragging a finger across a list of items, or to navigate with larger movements by flicking a finger across the screen, which gives the list virtual momentum and moves it by more than one display at a time. A control that stays hidden until the user indicates an intent to scroll or pan a long way then provides a convenient means of moving quickly through the list.

“FIG. 2B illustrates displays that a mobile device can generate for a user based on the position or motion of the device. Mobile devices can be configured to detect various types of motion. In some cases, a mobile device has an electronic compass that can detect changes in heading relative to the Earth’s magnetic field (i.e., relative to the North and South Poles); the device thus detects a change in heading as the user holding it turns. In another example, a mobile device includes an accelerometer that can detect changes in motion; the device thus senses acceleration as the user moves it. These detected motions can be used to update the display of the mobile device automatically.

Referring to FIG. 2B, a mobile device 220 is shown being held by a user 224a-224c in various orientations. Particular headings are shown for the user 224a-224c (relative to the “North” arrow), and the same user is shown at different headings that can be detected by a compass included in the mobile device 220. The mobile device 220 can include a web browser or other software application that allows the user to view a map of a particular area; some implementations include images captured from street-side vantage points, such as those provided by the GOOGLE STREETVIEW service (Google, Mountain View, Calif.). The user 224a-224c can enter an address, which is provided to the web browser or other software application, and a view of the area around that address is then generated. In some cases, the mobile device 220 can provide the address automatically, using a global positioning system (GPS) or another system that locates the device.

“The user may provide initial address information, here the address of the Hubert H. Humphrey Metrodome. Map tiles and other data for the area surrounding “The Dome” may then be served in a familiar manner, and the user can choose to view a street-level view at that address. The user might be shown images taken from The Dome’s exterior (just as, had they entered their home address, they could be shown images of their home). The user can also reach the view through a search query such as “Metrodome”, which may return the address as a Onebox result containing a link to a map of the area surrounding the structure; the user may then choose street-level images from a specific point on that map.

“The virtual direction in which a person looks via STREETVIEW can be coordinated with the compass direction of their own frame of reference. As the user turns through positions 224a-224c, the mobile device 220 shows corresponding displays 222a-222c of the selected region, based on the map data, the location of the device, and/or other information sensed about the device (e.g., acceleration of the device 220). User 224b, for example, is facing generally south-southeast and is shown a view of The Dome looking in that direction.

If the user turns to the right (e.g., to the heading illustrated for user 224a), the mobile device 220 detects the change in heading and adjusts the view to match it, as shown in display 222a, which presents a different section of the Metrodome based on the newly detected heading. In other words, as the mobile device 220 detects the user’s movement (e.g., a change in heading), it automatically pans the view to match the current heading.

“The heading on the device can be easily matched to the relevant heading data, which identifies the specific street view or portions thereof. Multiple images can be blended together to make them appear seamless. This allows the user to view the area around where the images were taken on their mobile devices, even though they may be far from the location.

In some cases, an accelerometer may be used in place of, or in addition to, a compass. The accelerometer can detect motion (e.g., shaking, walking, changes in elevation, changes in orientation, or other movement), and the displays 222a-222c can be updated accordingly: the mobile device senses the acceleration and pans the displays 222a-222c to match. The user can also shake the device to indicate forward motion through the space, much like selecting a travel arrow in GOOGLE STREETVIEW to move virtually down a street. As another example, the mobile device 220 can detect changes in orientation (e.g., from acceleration measured by the accelerometer) and pan the displays 222a-222c up and down, as if the user 224a-224c were looking up or down, with a remote server providing the panoramic images to the device.

“The device’s sensed direction may be relative rather than absolute, particularly if an accelerometer is used and a compass is not. In such cases, a rule may be used to choose the initial orientation of the view provided to the user, rather than the actual direction the user is facing. The accelerometer may then detect relative motion as the user turns to one side or the other, and the displayed images may be panned relative to that motion, though not necessarily in exact proportion: because the device may be limited in its ability to detect absolute motion, the user might rotate the device 90 degrees while the displayed view rotates only 60 degrees.
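
A sketch of such a relative, scaled mapping follows; the 60/90 ratio simply mirrors the example above and is not a prescribed value:

```kotlin
// Sketch: when only relative motion is available, pan the view by a fixed fraction of the
// sensed device rotation (e.g., a 90-degree device turn producing a 60-degree view pan).
fun relativeViewPanDegrees(deviceRotationDegrees: Float, scale: Float = 60f / 90f): Float =
    deviceRotationDegrees * scale

fun main() {
    println(relativeViewPanDegrees(90f))   // 60.0 -- matches the example in the text
    println(relativeViewPanDegrees(-45f))  // -30.0
}
```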

“FIG. 2C shows an example of a user interface for zooming and panning in a large space. The figure shows four screen shots (a)-(d) taken at different points while zooming and panning on a web page. As indicated at the edges of screen shot (a), a web page can initially be presented in a zoomed-out state, letting the user review the overall layout of the page and find relevant content in context; zooming in on a chosen area then lets the user read it more easily. The user can double-tap on the page to indicate an intent to zoom in. The double tap may produce a magnifying zoom box, as shown in screen shot (a); the box first appears in a large format to draw the user’s attention and then shrinks to show the area that will be displayed if the user chooses to zoom in.

“At screen shot (b), the user can be seen moving a finger toward the zoom box; the user may then press on the box and drag it over the area they wish to examine more closely. The content inside the zoom box may be shown slightly larger than the content outside it, as in shots (b) through (d). When the motion begins, the zoom box may also lag slightly behind the user’s finger (see shot (d), where the finger is moving toward the lower-left corner) and then “catch up”, springing into place once the finger stops moving. This gives the user a better sense of how they are moving the box and keeps the finger from completely covering it while it is being moved.
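
One way to model that “catch up” motion is with per-frame easing toward the finger position; this is an assumed simplification, not the patent’s implementation, and the easing constant is illustrative:

```kotlin
// Sketch: the zoom box eases toward the finger each frame instead of tracking it rigidly,
// so it trails slightly while the finger moves and springs into place once the finger stops.
data class Point(val x: Float, val y: Float)

fun stepZoomBoxToward(box: Point, finger: Point, easing: Float = 0.25f): Point =
    Point(box.x + (finger.x - box.x) * easing,
          box.y + (finger.y - box.y) * easing)

fun main() {
    var box = Point(100f, 100f)
    val finger = Point(40f, 300f)                // finger has moved toward the lower left
    repeat(10) { box = stepZoomBoxToward(box, finger) }
    println(box)                                 // the box has nearly caught up with the finger
}
```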

“When the user has moved the zoom box over content they wish to see more closely, they can lift their finger, leaving the zoom box at the location where it was lifted. This action can cause the display manager to zoom in automatically on the boxed area until it fills the screen. The user can then pan around the zoomed-in page by dragging a finger across the touch screen or rolling a trackball, and may double-tap on the screen again to zoom back out.

“FIG. 3 is a schematic diagram of a system 300 that provides user interaction in response to touch screen inputs. The system 300 may be implemented using a mobile device such as device 302. The device 302 includes various input and output mechanisms, such as a touch screen display 304 and a roller ball 306, and can be equipped with a number of internal components that provide selection and navigation functionality, including movement within large spaces that exceed the size of the display 304.

Display manager 312 is one such component. It may be responsible for rendering content for presentation on display 304. The display manager 312 can receive graphics-related information from a number of sources and determine how that content is to be presented to the user; for example, it may determine which of the windows for various applications 310 on the device to display, and which to show or hide when different graphical objects overlap.

The display manager 312 may include various components that give the device specific functionality for interacting with displayed components; these may be shared across multiple applications and may be supplied, for instance, by an operating system of device 302. One such component is interface navigation module 311, which may receive input from users who want to move between and among elements on display 304. In this example, a control 305 is shown on display 304, and it may be similar to control 118a of FIG. 1B.

Interface navigation module 311 might display the control 305 when the user drags on the map, with the map panning in proportion to the dragging motion, while movement of the control 305 itself pans the map by much greater amounts. In certain cases, the location of the control 305 on display 304 may correspond to the location of the currently displayed sub-section within the overall map, so the control may move slightly as the map is panned. The interface navigation module 311 may likewise effect other changes to the display 304 in response to user input.

Individual applications 310 may register with the display manager 312 according to an API in order to indicate what display elements they require. An application might identify a set of data elements that correspond to a list. The interface navigation module 311 can then treat these elements visually as a list. For example, it may display an accelerated scrolling control if the list is sufficiently long or a user input indicates an intent to scroll up/down within the list.
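
As an illustration of this registration idea only, the following Kotlin sketch uses a hypothetical interface; none of these names come from the patent or from any real platform API:

```kotlin
// Sketch: an application registers a set of data elements as a list with the navigation
// module, which decides whether the list warrants an accelerated scrolling control.
interface NavigableList {
    val items: List<String>
}

class InterfaceNavigationModule(private val visibleItems: Int = 7) {
    private val registered = mutableListOf<NavigableList>()

    fun register(list: NavigableList) { registered += list }

    // Show the accelerated control only when the list is much longer than one screen.
    fun needsAcceleratedControl(list: NavigableList): Boolean =
        list.items.size > visibleItems * 3
}

fun main() {
    val module = InterfaceNavigationModule()
    val artists = object : NavigableList { override val items = List(60) { "artist-$it" } }
    module.register(artists)
    println(module.needsAcceleratedControl(artists))   // true
}
```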

An input manager 314 may be responsible for translating commands provided by a user of device 302. Such commands can come from a keyboard, from the touch screen display 304, from the trackball 306, or from other sources such as dedicated buttons and soft buttons (e.g., buttons whose functions change over time and are shown in areas of the display adjacent to those buttons). Inferential input may also occur, for example from signals provided by an on-board compass or accelerometer. The input manager 314 can determine, for instance, in what area of the display commands are being received and, thus, for which application they are intended. It may also interpret input motions on the touch screen 304 into a common format (e.g., flicks, short presses, and straight-line drags) and pass the interpreted motions to the appropriate application. Such inputs may also be reported to an event manager (not shown), which in turn reports them to the appropriate modules or applications.
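
How raw touch events might be reduced to those common gesture formats can be sketched as follows; the thresholds are assumptions for illustration, not values from the patent:

```kotlin
// Sketch: a short, nearly stationary press is a "short press", a slow movement is a "drag",
// and a fast movement ending with the finger lifted is a "flick".
enum class Gesture { SHORT_PRESS, DRAG, FLICK }

fun classifyGesture(distancePx: Float, durationMs: Long, liftedAtEnd: Boolean): Gesture {
    val velocityPxPerSec = if (durationMs > 0) distancePx / (durationMs / 1000f) else 0f
    return when {
        distancePx < 10f && durationMs < 300 -> Gesture.SHORT_PRESS
        liftedAtEnd && velocityPxPerSec > 1000f -> Gesture.FLICK
        else -> Gesture.DRAG
    }
}

fun main() {
    println(classifyGesture(distancePx = 5f, durationMs = 120, liftedAtEnd = true))     // SHORT_PRESS
    println(classifyGesture(distancePx = 400f, durationMs = 200, liftedAtEnd = true))   // FLICK
    println(classifyGesture(distancePx = 400f, durationMs = 1500, liftedAtEnd = false)) // DRAG
}
```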

A variety of applications 310 can run on the device 302, generally on a common microprocessor. The applications 310 can take many forms, such as web browser applications, mapping applications, music and video players, and applications that run within a browser or are extensions of a browser.

“GOOGLE MAPS and GOOGLE STREETVIEW are examples of applications that can run independently or in conjunction with a browser. Such an application can accept readings from a compass module 313 of the device 302, which may include an electronic compass, related circuitry, and software for interpreting compass readings, and from an accelerometer 315. The readings can be used, as described with respect to FIG. 2B, to detect user motion and orientation and to change the view of a geographic area that was previously photographed panoramically, with the digital images being downloaded from a server to the device 302.

“A wireless interface 308 allows communication with a wireless network, which may be a data network that also carries voice communications. The wireless interface 308 can operate in a familiar manner, such as in the examples discussed below, and may provide for communication between the device 302 and messaging services such as text messaging, e-mail, and voice mail messaging. The wireless interface 308 can also support downloads and uploads of content and computer code over the wireless network; for example, images used by an application such as GOOGLE STREETVIEW may be downloaded over the network. An application running on the device 302 (such as a JavaScript application running on a web page displayed on the device) may have access to the compass data and may request additional image data for the area around a particular geographic point in response to the user moving the device 302.

“Various forms of persistent storage may also be provided, such as fixed disk drives and/or solid state memory devices. Two examples are shown here. The first, maps/lists/etc. storage 316, contains all sorts of data that the applications 310 need, including lists of data elements, map tiles, and other well-known data structures that allow a user to interact with applications on the device 302.

“Other storage includes user defaults 318, which may be profile information for a user stored on the same media as the maps/lists/etc. storage 316. The user defaults 318 contain various parameters describing the user of the device 302.

Using the components shown here (and others that are omitted for clarity), the device 302 can respond to user inputs in particular ways. For example, it may respond to panning inputs within large areas by displaying a control that allows accelerated panning, that is, panning that is significantly faster than dragging directly across the displayed object and that typically permits navigation from one end of the area to the other with a single swipe of the control.

“FIGS. 4A-4B are flow diagrams of example processes for receiving user selections via a graphical user interface. FIG. 4A shows a process by which a device may respond to panning inputs in a large graphical space, and FIG. 4B shows a process for generating an accelerated panning control for a long list of items.

The process of FIG. 4A starts at box 400, where a request to display large area data is received. Large area data can include many forms of data that extend well past the edges of a single screen, such as large lists, large images, and maps. A request to display such information can take many forms; for example, search results returned to a device may include large area information in the form of a list or a map.

“At box 402, the process selects a subset of the large area data and displays it. If the large area data is a map, the displayed subset may be a portion of the map surrounding an address, such as an address produced by a search query the user entered. At box 404, a panning input is received from the user, generally by the user sliding a finger or stylus across the touch screen display.

“At box 406, the process responds to the user input by displaying a panning control on the display. Before doing so, the process may determine the relative sizes of the full data area and the display; where the data area is the same size as the display, or only slightly larger, no control may be shown, and accelerated panning is simply not provided in such cases.

The speed and manner of the input may also affect whether the control is displayed. If the user drags slowly across the display, the process may assume that they are not interested in navigating to distant parts of the data and may decline to display the control; the same may apply if the user leaves their finger resting on the screen at the end of the motion. If, instead, the user moves quickly and lifts their finger at the end to create a “fling”, this may indicate an intent to pan a long distance, and the control may be generated in that situation.
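
A sketch of the decision described in boxes 404-406, with thresholds that are assumptions rather than values from the patent:

```kotlin
// Sketch: show the accelerated panning control only when the data area is much larger than
// the display and the gesture suggests an intent to pan a long way (a fast "fling").
fun shouldShowPanningControl(contentHeightPx: Int, displayHeightPx: Int,
                             gestureVelocityPxPerSec: Float, fingerLifted: Boolean): Boolean {
    val muchLargerThanDisplay = contentHeightPx > displayHeightPx * 3
    val flungRatherThanDragged = fingerLifted && gestureVelocityPxPerSec > 1000f
    return muchLargerThanDisplay && flungRatherThanDragged
}

fun main() {
    println(shouldShowPanningControl(20000, 800, 2500f, fingerLifted = true))   // true
    println(shouldShowPanningControl(1200, 800, 2500f, fingerLifted = true))    // false: content barely larger
    println(shouldShowPanningControl(20000, 800, 300f, fingerLifted = false))   // false: slow drag
}
```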

“At box 408, the process responds to further panning inputs, such as additional “fling” inputs, by increasing the prominence of the control. This could involve increasing the control’s size or brightness, or pulsing it. The prominence may be increased a single time, for example when a second panning input is received, or in several phases until a maximum is reached. The point of increasing the control’s prominence is to make it increasingly obvious to the user that an accelerated panning option is available: the more the user tries to pan across the subject matter manually, the more likely it is that they want to travel a long distance, and the more help the control can provide.

“At box 411, the user notices the control and selects it to obtain accelerated panning, and the process responds to the user’s manipulation of the control by panning the data quickly in accordance with it; if the user slides the control downward, for example, the displayed data moves past at an exaggerated speed. The control may disappear after a period of inactivity during which the user neither selects it nor makes the normal panning motions that would bring it up; for example, the control may fade out so that the user can see any data that was beneath it.

The control can be brought back if the user repeats the flicking or panning action, and it will be placed at a position representative of the user’s current location within the view. If the user is looking at the middle of a list of video files on her device, for example, the control may appear along the side of her display, halfway between its top and bottom.

“FIG. 4B shows a process for generating an accelerated panning control for a list of graphically represented items, such as the names of musical artists or similar groups of data elements. The process is similar to that shown in FIG. 4A, but involves more specific responses to the user’s inputs.

The process begins at box 420, where the process receives input from the user in the form of a quick movement followed by lifting of the finger or stylus, made over a long list. The process then checks the length of the list (boxes 422 and 424); if the list is short, or not substantially longer than what the device’s display can show, the process may simply continue receiving input from the user in the normal manner (box 426).

If the list is long, a thin panning control may be displayed (box 428), usually along the right edge of the display so as not to cover the left-justified text of the list. The list may also scroll in response to the flick, with the speed and distance of the scrolling representing the momentum of a physical object, so that the list keeps scrolling after the flick and slows gradually as if being slowed by friction. The control likewise moves along the side of the display to reflect the position of the displayed portion between the top and bottom of the list.

“At box 433, the process receives another flick input from the user, and the list may scroll again as it did after the first flick. The fact that the user has flicked twice may indicate that they want to move a substantial distance up or down the list, so at box 432 the process thickens the displayed control to make it more prominent to the user. At box 434, the process checks whether the user has selected the control (in practice such a check may occur at any time after the control is displayed). If the user neither selects the control nor flicks the subject matter again within a specified period, the control disappears (box 436). If the user does select the control, it is thickened further so that it is easier to use, and the display is panned according to the user’s manipulation of the control, in manners such as those discussed above.
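
The control lifecycle just described can be summarized as a small state machine; this is a simplification under assumed rules, not the patent’s flow chart:

```kotlin
// Sketch: a first flick shows a thin control, a second flick thickens it, selecting it
// thickens it further, and a period with no interaction hides it again.
enum class ControlState { HIDDEN, THIN, PROMINENT, SELECTED }

fun nextState(state: ControlState, event: String): ControlState = when {
    event == "flick" && state == ControlState.HIDDEN -> ControlState.THIN
    event == "flick" && state == ControlState.THIN -> ControlState.PROMINENT
    event == "select" && state != ControlState.HIDDEN -> ControlState.SELECTED
    event == "timeout" -> ControlState.HIDDEN
    else -> state
}

fun main() {
    var s = ControlState.HIDDEN
    for (e in listOf("flick", "flick", "select", "timeout")) {
        s = nextState(s, e)
        println("$e -> $s")
    }
}
```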

“FIGS. 4C-4D are flow charts showing examples of updating a display based on the movement of a mobile device. FIG. 4C illustrates actions that the mobile device itself may perform in response to sensed motion, while FIG. 4D illustrates actions that may also involve systems outside the mobile device in detecting motion and updating the device’s display. For illustrative purposes, both processes are described with reference to a mobile device running a map application.

Referring to FIG. 4C, the process begins at box 440, when a map application is launched. In some cases the map application is launched automatically when the device boots; for example, it may be bootstrapped into the initialization routines of the mobile device. In other cases the user launches the map application, for example by choosing an icon on the mobile device’s display. The map application may also be launched by another application or process; a social networking application, for instance, may launch the map application in order to present the locations of friends to the user.

Once the application has been launched, the map application receives an address in box 442. The address can be provided in many ways: the user may enter it manually, or a GPS module on the mobile device may provide it automatically, for example.

“In box 444, the map application can fetch and show map tiles around the address. The map application can display tiles that depict buildings, parks, streets or other locations, for example.

“In box 446, the map application fetches and displays StreetView images. A StreetView image is a photograph taken at street level from the perspective of a person or vehicle on the street. In some cases, the user of the map application requests the StreetView images by identifying, on the map, the street for which images are to be retrieved and displayed. The first image shown may be the digital image captured at that point by the camera facing in the direction the device is currently facing.

“In box 448, the map application senses motion of the mobile device. The application can communicate with any of one or more sensors or modules that detect motion; for example, it can communicate with an accelerometer or a compass to sense motion of the mobile device.

“In box 450, the map application pans the StreetView images to match the user’s orientation (i.e., the orientation of their mobile device). In FIG. 2B, for example, StreetView images are presented on the displays 222a-222c of mobile device 220 based on the orientation of the user 224a-224c.

This allows a user to survey a location quickly, such as a shopping district they plan to visit or the neighborhood of a friend. The user can get a better view of the area by holding the mobile device in front of them and turning in a circle; the images pan to match the movement, letting the user see the location quickly and decide whether they want to go there.

“Referring to FIG. 4D, the process begins at box 452, when a map search box is launched on the mobile device. Search boxes, in general, are input controls that allow a user to enter a query; a search box may be an editable text box or another user interface component.

“In box 454, the device receives an address input from the search box. In some implementations, the user types the address into the search box (e.g., “123 Main Street, Anytown, CA”). In other implementations, an automated process provides the address; for example, the map application may be pre-configured with addresses such as the user’s home or workplace and can supply one of them on request, or a GPS module may determine the device’s current address from a GPS calculation.

“In box 456, the device submits a formatted map query. The map query can be formatted according to any number of application program interfaces (APIs). Example formats include, but are not limited to, database query formats, common gateway interface (CGI) request formats, hypertext markup language (HTML) formats, and other traditional formats for submitting queries.
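
Purely as an illustration of such a formatted query, the host name and parameter names below are hypothetical placeholders, not any real service’s API:

```kotlin
// Sketch: URL-encode the address from the search box into a CGI-style request string that a
// map server could answer with tiles and street-level images.
import java.net.URLEncoder

fun buildMapQueryUrl(address: String, zoom: Int = 15): String {
    val encoded = URLEncoder.encode(address, Charsets.UTF_8.name())
    return "https://maps.example.com/query?address=$encoded&zoom=$zoom&format=tiles"
}

fun main() {
    println(buildMapQueryUrl("123 Main Street, Anytown, CA"))
}
```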

Summary for “Accelerated pan user interface interaction”

People spend hours with electronic devices, such as phones, computers, music players, and other similar gadgets. They prefer devices that are easy to use and have the best interactions. They interact with electronics by using inputs and outputs. The outputs are usually provided audibly or on a flat graphical screen. Inputs can be via touch screens, joysticks and mice as well as 4-directional keypads and other input mechanisms.

Mobile devices are becoming more powerful and users interact more with them by using graphic objects such as lists of items or maps, images, and so on. These objects can contain large amounts of information, which may make it difficult to display on mobile devices. For example, a map of the United States could be miles in size. It can be difficult to present graphical information in enough detail to a user (e.g. by zooming in one area of an item) while still giving the user a feeling of space and allowing the user to move in intuitively through the space.

This document describes the systems and techniques that can be used to interact with a user using a computing device such as a mobile phone with a touch screen interface. The techniques can react to inputs that allow you to move around a multi-dimensional space in more than one direction. When a user intends to pan in a space by scrolling or panning in an image or list, the techniques can determine whether the space has a large size and present a visible but non-obtrusive control element that allows for accelerated panning. A scroll bar, for instance, may be automatically generated at the edge of the screen when a user starts to pan in large spaces using touch screen inputs.

In certain instances, such systems and techniques may offer one or more benefits. A user of a device might be able to save time when navigating through large spaces (which would otherwise require them to drag their finger across the touch screen surface) by using the accelerated panning control. They can move across the entire space with one finger input. The user may also be given a context indication to show them where they are located in the larger space. The scrolling control might be placed along the edge of the display to reflect the user’s current position within the space. This will make the user’s interaction with the device easier and more enjoyable. The user might also use the specific applications more frequently and be more likely to buy the device.

“In one embodiment, a computer-implemented visual navigator method is disclosed. A portion of large-scale graphical space is displayed on a touch screen. The method includes receiving input from the user to pan within the graphic space. In response to this input, the pop-up graphical panning control is generated. A user input to the panning control is received and used to provide panning. This allows the user to move the panning control in one selection to allow the display to span a significant portion of the large-scale graphical area. A slider button can be used to control the pop-up control. It is located at an edge of touch screen. The method may also include increasing the size of the graphical pan control if multiple panning inputs are provided without having to select the control.

“In some aspects, the graphic space can contain a list of items. The graphical panning control causes rapid scrolling through that list. The graphical space may also contain a map or an image. Accelerated panning can be caused by the graphical pan control. Automatically removing the graphic panning control can be included in the method after the user has selected the graphical pan control. The method may also include the display on the touch screen of a miniature representation and indicator of the current user’s location within the graphical area during the selection of the panning controls.

“In some other aspects, the method also includes displaying on touch screen, during user select of the panning control a segment indicator from within a group discrete segments in graphical space that is currently being displayed. The pop-up graphical panning control may also be generated by a long pressing of the touch screen or a quick flick of the touch screen. You can adjust the control’s size to match the size of your touch screen and the graphical space. The method may also include receiving long press inputs from the user on touch screen and then generating a zoom control for the touch screen in response.

“In another implementation, a article containing a computer-readable storage medium that stores program code is disclosed. The code can cause one or more machines perform certain operations. This includes displaying on a touchscreen a portion of large-scale graphical spaces, receiving input from a user to pan within the space, then automatically generating a popup graphical panning controller in response to receiving that input. Finally, providing panning in this graphical area, wherein panning is achieved by moving the panning controls in one selection.

“A computer-implemented user interface is disclosed in yet another implementation. The system includes a graphical display that presents portions of large-scale graphical areas. A touch screen input mechanism is used to receive user selections. There are also means to generate an accelerated panning control when a user selects portions of large-scale graphical areas. The system can also contain a mapping app, where the pop-up control includes a panning control to control the mapping program.

“The accompanying drawings and description below detail one or more embodiments. Additional features and benefits will be evident from the drawings and claims.

“DESCRIPTION of Drawings”

“FIGS. “FIGS.

“FIG. 2A shows sequential displays that may be generated for a user when they navigate a long list using a mobile device with a touch screen.

“FIG. “FIG.

“FIG. 2C is an example of a user interface that allows zooming and panning in large spaces.

“FIG. “FIG.

“FIGS. “FIGS.4.2A-4B are flow diagrams of examples of user selections via a graphical user interface.

“FIGS. 4C-4D are flow charts that show how to update a display based on the movement of a mobile device.

“FIG. “FIG.

“FIG. FIG. 6. A block diagram showing the internal architecture of FIG. 5.”

“FIG. FIG. 7 is a block diagram that illustrates the components of the operating systems used in FIG. 5.”

“FIG. “FIG. 7.”

“FIG. “FIG.

“Like reference symbols on the different drawings indicate like elements.”

This document describes the systems and techniques that mobile devices can interact with users of such devices. A user might be shown icons or graphical objects that show where they are in a large virtual space. The controls may also allow the user to control how to move within the space. A scroll bar that is proportional to the screen’s edge may be displayed when a user scrolls. This happens when there is a large list of items, such as the titles of songs in a playlist on a player digital media. A large letter might appear on the screen if the user scrolls at a sufficient speed or amount. This will indicate the letter of the alphabet at which they are located. Although the list might appear blurred, it may still be easy for the user to see where they are at any given moment. The vertical position of the letter on the display might be similar to its location within the alphabet. For example, the letter?A will appear at the top. The letter?A????? will be at the top, while the letter?Z?????? will appear at its bottom. The scroll bar will be located at the top of the display, and the letter?Z? will be located at its bottom. As a user scrolls further, the scroll bar may change in appearance. It will become larger or more prominent.

A virtual magnifying glass can be used to magnify objects in large areas of visual space. This object could be an area of the screen that is substantially magnified. This object can be used during web browsing to allow a user to see the overall layout of a page and then can quickly read or review a section of that page.

“Another example is a 360-degree panorama of a point in real life, such as the one provided by Google Streetview. This panorama can be created by simultaneously taking digital images or almost simultaneously using a number of cameras located near a common point, and directed radially inward. These images can be navigated normally on a personal computer using the GOOGLEMAPS service. The images can be navigated by inbuilt position-detecting components, such as the compass or a compass built into a mobile device. A user may select a geographical location that is closest to their current location. The view will then be displayed on their mobile device in the same direction as their current orientation (e.g., determined by their mobile device’s compass). If they hold their mobile device in front of them, their images will change to match the view from that location.

“FIGS. “FIGS. FIG. FIG. 1B shows navigation across a large map. The area that can be displayed in each figure (shown in dashed lines) will be substantially larger than what is possible at once (shown in solid lines). Thus, mechanisms are discussed here that assist a user in navigating across the spaces in ways that are more convenient than repeatedly panning across display-after-display-after-display until the user finally gets to their desired area.”

Referring to FIG. 1A, a graphical system 101 contains a list 108 of items stored on a mobile device. The items may include personal contacts, music or recordings from a user's library, files stored on the device, video files, or other appropriate groups of items that are typically displayed in a list format. A user may be shown an individual item 110 with information about that item. If item 110 is a contact, it may display the contact's name and telephone number. If item 110 is a musical group, the system might display an image of the group or an album cover, the name of the group, and other pertinent information about the group. If item 110 is a file in a list of files, the system might display the file name, the file size, and the date it was last saved.

A display 106 is shown superimposed over the middle of the list 108. The display 106 represents a typical portrait-format video display of a mobile device and may measure approximately 3 to 4 inches diagonally. The display 106 is shown as a window over the list 108 to reflect that the user can scroll through the list and see different portions of list 108 at different times.

The list 108 is conceptually shown as moving up and down beneath the display 106, with the display 106 acting as a window onto the list. Standard mechanisms may be used to determine how the list 108 is organized and how the display 106 formats its items for presentation. The top and bottom of display 106 are shaded so that items from list 108 fade toward black at the edges of display 106, giving the user the impression that the items are rotating on a three-dimensional reel as they move up and down the list.

The display 106 may be integrated with a touch screen structure so that the user can drag the list 108 up and down by sliding a finger along the list on the surface of the display. Because the list 108 may be very long, however, swiping a finger on display 106, or flicking on display 106 to give momentum to the panning, may not be a practical way to move up and down the whole list. To aid navigation across the long list 108, display 106 shows a visual control 112. The control 112 could be a slider button of the sort that will be familiar to users of various scrolling applications, such as spreadsheets and word processors. The control 112 can be displayed as a scrolling bar at the side of the list 108, or as an element that visually floats above the elements in the list 108.

The control 112 could take a proportional form, as is well known in the art: the control 112 may be shorter when the list 108 is longer. With such a control, the user can navigate to the top or bottom of list 108 by moving the control 112 to the corresponding extreme position within display 106. A short control 112 thus indicates that display 106 shows only a small fraction of a very long list 108. Each movement of control 112 through a span equal to its own height corresponds approximately to movement of one display 106 worth of items in list 108. As a result, a given movement of a small control 112 (long list) moves items across display 106 more quickly than the same movement of a larger control 112 over a shorter list 108.
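
A minimal sketch of this accelerated mapping, under the assumption that the control's full travel spans the full list; the function and parameter names are illustrative rather than drawn from the patent.

```kotlin
// Hypothetical mapping from the position of a proportional control to a list offset.
// Moving the control through its full travel pans the display across the entire list,
// so one span of the control's own height roughly equals one screenful of items.
fun listOffsetForThumb(
    thumbTopPx: Int,          // current top of the control within the display
    thumbHeightPx: Int,
    displayHeightPx: Int,
    totalItems: Int,
    visibleItems: Int
): Int {
    val travelPx = (displayHeightPx - thumbHeightPx).coerceAtLeast(1)
    val fraction = thumbTopPx.coerceIn(0, travelPx).toFloat() / travelPx
    val maxOffset = (totalItems - visibleItems).coerceAtLeast(0)
    return (fraction * maxOffset).toInt()
}
```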

The control 112 can take many other forms and be placed in locations other than the one shown, including overlaid on top of the content in display 106. In certain implementations, however, control 112 may be placed as far toward the edge of display 106 as possible so that the user's finger, or other pointer, does not obscure the displayed content while manipulating the control.

Depending on the implementation, movement of control 112 in one direction may cause display 106 to move across list 108 in the same or the opposite direction. For example, dragging control 112 downward may visually drag the list 108 downward, making it appear that the display is climbing the list and that control 112 is attached to list 108 by an accelerating linkage. Alternatively, moving control 112 downward may cause display 106 to move down through list 108, giving the impression that control 112 is connected to display 106 by an accelerating linkage.

FIG. 1B illustrates a number of example displays 114, 116, and 118 that provide views onto a map 104 of a particular metropolitan area, here the Minneapolis-St. Paul region. The map 104 is simplified in this example to allow a clearer view of the components shown in displays 114, 116, and 118. Each display shows only a portion of the map, and mechanisms are provided to allow intuitive, easy-to-use panning across the map 104 by a user of a mobile device with a touch screen.

A first display 114 shows a user viewing a zone of the southwest metro area on their mobile device. A four-headed arrow 114a is generated over the map in display 114. To indicate a desire to pan around the map 104, the user can drag the arrow 114a up, down, or sideways. For example, the user may drag the arrow 114a to the upper right-hand corner to cause display 114 to move toward the northeast of the map 104. The arrow 114a can also be manipulated to produce other exaggerated, accelerated movements across the map.

The display 114 also contains a small overview map 114b of the entire mapped area. The overview map 114b is presented in a familiar way: a large box representing the entire area of map 104, with a smaller box inside it representing the user's current display 114. This allows the user to quickly identify their location relative to other locations on the larger map. The overview map 114b may show major features from map 104, but typically not all of them.

Display 116 shows slider controls 116a and 116b that operate in a manner similar to slider control 112 of FIG. 1A. The user may initially be presented with display 116 showing only the map information, and controls 116a and 116b may appear when the user begins to pan display 116 across map 104. By sliding control 116a left or right, the user can pan display 116 toward the left or right side of map 104; by sliding control 116b to the top or bottom of display 116, the user can move display 116 toward the top or bottom of map 104 in a similar fashion. The controls 116a and 116b thus let the user quickly accelerate movement across map 104: a single swipe on a control moves display 116 much farther than a normal panning motion made directly on the map within display 116.

Display 118 allows navigation in a manner similar to display 114, but with an annular ring 118a displayed over the map 104. The position of ring 118a within display 118 indicates the relative position of display 118 on map 104. Here, display 118 is located near the top of map 104 and slightly to the left, and ring 118a is similarly located near the top of display 118 and slightly to the left. To pan across the map 104, the user can drag a finger on display 118 away from ring 118a, or, for quicker panning, may place a finger on the ring 118a itself and drag it.

Thus, FIGS. 1A and 1B demonstrate various ways that a user can navigate in a large virtual space. These mechanisms can give the user a sense of their current position within the larger space, and they provide a control or controls that let the user direct their computing device to move around the space. The mechanisms may be particularly useful for mobile devices with touch screens.

FIG. 2A illustrates sequential displays 200-206 that may be generated for a user navigating a long list with a mobile device having a touch screen. This example shows a list of singers or musical groups and could conceptually appear like the arrangement in FIG. 1A.

Display 200 shows seven groups, each with the group name, the number of that group's albums stored on the user's device, and the total number of songs in those albums. Each group is also represented by a graphical icon indicating whether one album or multiple albums are available for it. Album cover art, downloaded either manually or automatically, may be added to the icons, and other images may be used where available.

In this example, a slider control 208 is shown along the right-hand side of display 200. The slider control 208 may be displayed whenever the list is larger than display 200 can show at once, or it may be displayed only in particular contextual circumstances (e.g., after a user action that indicates an intent to pan a long distance across a representation of data), as explained more fully below.

Display 202 shows a user flick 210 across the screen, represented by the motion from a small dot to a larger dot and ring. This graphic is not normally shown on display 202; it simply illustrates a typical user input. In situations where the slider control is not yet shown, flick 210 may result in the generation of slider control 212. Because the user is at the top of the alphabetical list, the slider control 212 is displayed at the top of display 202. The length of the list behind display 202 may affect the size of slider control 212: here, the slider control 212 is approximately one-tenth the height of display 202, and the list contains approximately 60 artists. The size of control 212 need not be strictly proportional to the list length, however; a minimum size may be enforced for slider control 212 so that, even if the list contains thousands of entries, the control remains clearly visible and easy to select.

Display 204 shows the result of flick 210: the list of artists has scrolled upward and come to rest two letters lower in the alphabet. Although flick 210 itself only covered a distance of about two artists on the screen, the display scrolled past many artists, because the scrolling distance may depend on the speed of the flick, much as the motion of a spinning wheel or similar flicked object depends on how hard it is flicked, in a manner familiar to skilled artisans.
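
This momentum behavior can be approximated with a constant-friction model, as in the hypothetical sketch below, where a faster flick travels a quadratically longer distance; the friction value is an arbitrary illustration.

```kotlin
// Hypothetical fling model: the list keeps scrolling after the finger lifts and
// decelerates under constant "friction", so a faster flick travels farther.
fun flingDistancePx(initialVelocityPxPerSec: Float, frictionPxPerSec2: Float = 4000f): Float {
    // Distance travelled by an object decelerating uniformly to rest: v^2 / (2a).
    val v = kotlin.math.abs(initialVelocityPxPerSec)
    val distance = v * v / (2f * frictionPxPerSec2)
    return if (initialVelocityPxPerSec < 0) -distance else distance
}
```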

In addition, control 212 has been replaced by control 214. First, the position of scrolling control 214 has shifted slightly downward relative to control 212, to indicate that the user is lower in the list in display 204 than in display 202. Control 214 is also more prominent than control 212, to help the user find it more easily: it has become thicker, and it bulges at its center to indicate that it may be used for particular contextual functions.

The control 214 can be used to perform accelerated panning of the artist list. For example, the control could be pulled all the way to the bottom of display 204, and although that motion spans only about five artists on the screen, it moves the entire list down to the letter Z.

The control 214 could also be made more prominent in other ways. It could be made brighter, or it could be made to appear stretched and under pressure when the user performs multiple flicks such as flick 210, suggesting more urgently that control 214 be used for accelerated panning; multiple flicks on the list indicate that the user would be served better by accelerated panning than by continuing to pan manually. The color of control 214 may also change with the number of flicks the user makes across the list, or control 214 may move up and down along the display's edge to draw the user's attention to it and make it more obvious that the control may be helpful.

Display 206 shows the result of the user selecting control 214, as indicated by the dot at the bottom of control 214 on the arrow leading to display 206. The dot on control 216 indicates that the user has maintained pressure on control 216 while scrolling through display 206. The selection may cause the control's shape to change from that of control 214 to that of control 216. To provide guidance to the user, an index letter 218 is displayed for groupings of elements in the list in a familiar manner. Here, a letter of the alphabet indicates the beginning letter of the artists currently shown, and it appears near the top of display 206. Other forms of index could be used, such as a number representing a file size, or any other indicator by which a list can be divided into discrete groups.

The index letter 218 can be displayed in various ways. Here it is placed near the edge of the display to minimize its impact on the artist names, but it may also be rendered partially or fully transparent so that names remain visible even when they lie beneath the index letter 218. The index letter 218 may move up and down display 206 along with control 216; for example, it may be placed just to the left of control 216 so that the user can see it even when a finger is on control 216, and it may travel with the control as the control moves. This allows the user to focus on the letter and navigate more precisely to a particular artist.

The index letter may change as the letters of the items shown in display 206 change. Alternatively, even if many artists begin with A and very few begin with C, the index letters A through Z (and possibly 0 through 9) may be divided equally along the travel of control 216, so that moving control 216 down one twenty-sixth of display 206 always changes the index letter 218 by one letter.

In some cases, the index letter may change as the user moves the control 214 up or down, while the items in the list do not move during such control. There may thus be two modes of control: normal panning, in which items scroll up and down as the user pans, and accelerated panning, in which the items do not move and the index letter is cycled in an accelerated fashion as the control is moved.

The techniques described here thus allow a user to navigate easily by simply dragging a finger across a list of items, or to navigate in larger jumps by flicking a finger across the screen to give the list virtual momentum and move multiple displays at once. They also hide an accelerated scrolling control until the user indicates, through such input, an intent to scroll or pan a long way through the list, at which point the control provides a convenient way to move quickly through it.

FIG. 2B illustrates displays that a mobile device can generate for a user based on the position or motion of the device. Mobile devices can be configured to detect various types of motion. In some cases, a mobile device includes an electronic compass that detects changes in heading relative to the Earth's magnetic field (e.g., the North and South Poles), so the device detects a change in heading when the user holding it turns. In other cases, the device includes an accelerometer that detects changes in motion, so the device senses acceleration when the user holding it moves. These detected motions can be used to update the device's display automatically.

Referring to FIG. 2B, a mobile device 220 is shown being held by a user 224a-224c in various orientations. Particular headings are shown for the user 224a-224c (according to the "North" arrow), and the same user is shown with different headings as detected by a compass included in the mobile device 220. The mobile device 220 can include a web browser or other software application that allows the user to view a map of a particular area; some implementations include images captured from street-side vantage points, such as the STREETVIEW service from Google (Mountain View, Calif.). The user 224a-224c can enter an address that is provided to the web browser or other application, which then generates a view of the area around that address. The mobile device 220 can also provide an address automatically in some cases, using a global positioning system (GPS) or another system that locates the device.

The user may provide initial address information, here the address of the Hubert H. Humphrey Metrodome. Map tiles and other data for the area around "The Dome" may then be obtained in a familiar manner, and the user can choose to view a street-level view at that address. The user might be shown images taken from The Dome's exterior, though they could equally be shown images around their own home or another address. The user can also search for the view using a query such as "Metrodome"; this may return the address as a Onebox result that includes a link to a map of the area around the structure, from which the user may then choose street-level images at a particular point on the map.

The virtual direction in which a person looks via STREETVIEW can be coordinated with the compass direction in the person's own frame of reference. The mobile device 220 displays views 222a-222c of a particular region as the user moves through orientations 224a-224c, based on the map data, the location of the device, and other information the device can detect (e.g., the acceleration of device 220). User 224b, for example, is facing generally SSE and is shown the view of The Dome in that same direction.

If the user turns to their right (e.g., to the heading illustrated for user 224a), the mobile device 220 detects the change and adjusts the view to match the new heading, as illustrated by display 222a: the device shows (in display 222a) a different section of the Metrodome corresponding to the new heading it detects. In other words, the mobile device 220 detects the user's movement (e.g., the change in heading) and automatically pans the view to match the current heading.
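
One plausible way to tie a compass heading to a panoramic street-level image is sketched below in Kotlin, under the assumption that the panorama covers a full 360 degrees of azimuth across its pixel width; the function names and callback shape are illustrative, not part of the patent or of any particular API.

```kotlin
// Hypothetical glue between a compass reading and a panoramic street-level image.
// The panorama is assumed to span 360 degrees of azimuth across its pixel width,
// so the device heading selects which horizontal slice to center on screen.
fun panoramaCenterX(headingDegrees: Float, panoramaWidthPx: Int): Int {
    val normalized = ((headingDegrees % 360f) + 360f) % 360f   // 0..360, 0 = North
    return ((normalized / 360f) * panoramaWidthPx).toInt()
}

// Called whenever the compass module reports a new heading; the view is panned so
// that the slice of imagery facing the user's current heading fills the display.
fun onHeadingChanged(headingDegrees: Float, panoramaWidthPx: Int, onPan: (Int) -> Unit) {
    onPan(panoramaCenterX(headingDegrees, panoramaWidthPx))
}
```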

The heading sensed by the device can be matched against heading data that identifies particular street-level images or portions of them. Multiple images can be blended together so that they appear seamless, which allows the user to view, on their mobile device, the area around the point where the images were taken, even when the user is far from that location.

In some cases, an accelerometer may be used in place of, or in addition to, a compass. The accelerometer can detect motion (e.g., shaking, walking, a change in elevation, or a change in orientation), and the displays 222a-222c can be updated accordingly. For example, the mobile device can sense accelerations and pan the displays 222a-222c to match, and shaking the device might produce forward motion in the space, much like selecting a travel arrow in GOOGLE STREETVIEW to move virtually down a street. As another example, the mobile device 220 can detect changes in orientation (according to acceleration detected by the accelerometer) and pan the displays 222a-222c up and down, as if the user 224a-224c were looking up or down, with the panoramic images provided to the device by a remote server.

The motion of the device may be treated as relative rather than absolute, particularly when an accelerometer is used without a compass. In that case, the initial orientation of the view provided to the user may be selected by a rule rather than by the direction the user is actually facing. The accelerometer may then detect relative motion as the user turns to one side or the other, and the images may be panned relative to that motion, though not necessarily in exact proportion to it. Because the device may be limited in its ability to detect absolute motion, the user might rotate 90 degrees while the displayed view rotates only 60 degrees.

FIG. 2C shows an example user interface for zooming and panning in a large space. The figure shows four screen shots (a)-(d) taken at different times while zooming and panning on a web page. As shown by the page edges in screen shot (a), a web page may initially be rendered in a zoomed-out state; zooming in allows the user to browse the page more easily and find the relevant content in context. The user can double-tap on the page to indicate an intent to zoom in. The double tap may produce a magnifying zoom box, as shown in screen shot (a), which first appears in a large format to attract the user's attention and then shrinks to show the area that will be displayed if the user chooses to zoom.

In screen shot (b), the user moves a finger toward the zoom box and then presses on the box to drag it over the area to be examined more closely. The content inside the zoom box may be shown slightly larger than the content outside it, as seen in screen shots (b)-(d). When the motion begins, the zoom box may also lag slightly behind the user's finger (see screen shot (d), where the finger is moving toward the lower left corner) and then "catch up" and spring back into place under the finger once the finger stops moving. This behavior gives the user a better sense of how the box is moving and keeps the finger from completely covering the box while it is being moved.

When the user has moved the zoom box over content they want to see more closely, they may lift their finger, leaving the zoom box at the location where it was lifted. This action causes the display manager to automatically zoom in on the boxed area until it fills the display. The user can then pan around the zoomed page by dragging a finger across the touch screen or rolling a trackball, and may double-tap on the screen again to zoom back out.
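
The "catch-up" and lift-to-zoom behaviors might be sketched as follows; the stiffness value and scaling rule are illustrative assumptions rather than the patent's own implementation.

```kotlin
// Hypothetical spring-like "catch up" for the zoom box: each frame the box moves a
// fraction of the remaining distance toward the finger, so it trails slightly during
// fast motion and settles back under the finger once motion stops.
data class Point(val x: Float, val y: Float)

fun followFinger(boxCenter: Point, finger: Point, stiffness: Float = 0.25f): Point =
    Point(
        boxCenter.x + (finger.x - boxCenter.x) * stiffness,
        boxCenter.y + (finger.y - boxCenter.y) * stiffness
    )

// On finger lift, the page is scaled so the zoom-box region fills the display width.
fun zoomScaleForBox(boxWidthPx: Float, displayWidthPx: Float): Float =
    displayWidthPx / boxWidthPx
```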

FIG. 3 is a schematic diagram of a system 300 that provides user interaction in response to touch screen inputs. The system 300 may be implemented using a mobile device such as device 302. The device 302 includes various input and output mechanisms, such as a touch screen display 304 and a roller ball 306, and a number of components within device 302 may provide selection functionality on display 304, including movement within large spaces that exceed the size of display 304.

One such component is a display manager 312, which may be responsible for rendering content for presentation on display 304. The display manager 312 can receive graphics-related content from a number of sources and may determine how the content is to be presented to the user. For example, it may determine which of a number of windows for different applications 310 on the device are to be shown, which are to be hidden, and what to display or hide when various graphical objects overlap.

The display manager 312 may include various components that give the device 302 particular functionality for interacting with displayed items; these components may be shared across multiple applications and may be supplied, for example, by an operating system of device 302. One such component is an interface navigation module 311, which may be responsible for receiving input from users who want to move between and among elements on display 304. In this example, a control 305 is shown on display 304 and may be similar to control 118a in FIG. 1B.

For example, the interface navigation module 311 might display control 305 when the user drags a finger on the map, with the map panning in proportion to the dragging motion. Subsequent movement of control 305 may then pan the map in an accelerated manner, and in certain cases the control 305 may itself move slightly so that its location on display 304 corresponds to the location, within the overall map, of the sub-section of the map currently being shown. The interface navigation module 311 may similarly respond to other user inputs with other changes to display 304.

Individual applications 310 may register themselves with the display manager 312 according to an API, to indicate the display elements they may require. For example, an application might identify a group of data elements as corresponding to a list, and the interface navigation module 311 can then treat those elements visually as a list, for instance by showing an accelerated scrolling control when the list is sufficiently long and a user input indicates an intent to scroll up or down within the list.
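
Such a registration API might look roughly like the sketch below; the interface, method names, and the threshold for showing the accelerated control are all hypothetical.

```kotlin
// Hypothetical registration API between an application and the display manager.
// An application declares that a set of data elements forms a list; the interface
// navigation module then decides on its own whether to offer accelerated scrolling.
interface DisplayManager {
    fun registerList(appId: String, items: List<String>)
}

class InterfaceNavigationModule(private val visibleItems: Int) : DisplayManager {
    override fun registerList(appId: String, items: List<String>) {
        // Show the accelerated scrolling control only when the list is much longer
        // than one screenful (threshold chosen arbitrarily for illustration).
        val showAcceleratedControl = items.size > visibleItems * 3
        println("$appId: ${items.size} items, accelerated control = $showAcceleratedControl")
    }
}
```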

An input manager 314 may be responsible for translating commands provided by a user of device 302. Such commands may come from a keyboard, from the touch screen display 304, from the trackball 306, or from other sources such as dedicated buttons and soft buttons (e.g., buttons whose functions may change over time and whose functions may be displayed in areas adjacent to the buttons). Input may also be inferred, such as from signals provided by an on-board accelerometer or compass. The input manager 314 may determine, for example, in what area of the display commands are being received and thus for which application being shown on the display the commands are intended. It may also interpret input motions on the touch screen 304 into a common format (e.g., flicks, short presses, and straight-line drags) and pass those interpreted motions to the appropriate application. Such inputs may also be reported to an event manager (not shown), which in turn reports them to the appropriate modules or applications.
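
A simplified sketch of how raw touch input could be classified into the common formats mentioned (flick, short press, straight-line drag); the distance, time, and velocity thresholds are arbitrary illustrations.

```kotlin
// Hypothetical classification of raw touch input into common gesture formats
// before dispatching them to the application that owns the touched region.
sealed class Gesture {
    object ShortPress : Gesture()
    data class Drag(val dx: Float, val dy: Float) : Gesture()
    data class Flick(val velocityPxPerSec: Float) : Gesture()
}

fun classify(dx: Float, dy: Float, durationMs: Long, liftVelocityPxPerSec: Float): Gesture {
    val distance = kotlin.math.hypot(dx, dy)
    return when {
        distance < 10f && durationMs < 300 -> Gesture.ShortPress   // barely moved, quick tap
        liftVelocityPxPerSec > 1000f -> Gesture.Flick(liftVelocityPxPerSec)
        else -> Gesture.Drag(dx, dy)
    }
}
```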

A variety of applications 310 can run on the device 302, generally on a common microprocessor. The applications 310 may take many forms, such as web browser applications, mapping applications, music and video players, and applications that run within a browser or as extensions of a browser.

GOOGLE MAPS and GOOGLE STREETVIEW are two applications that can be used independently or in conjunction with a browser. Such an application may accept readings from a compass module 313 in the device 302, which could include an electronic compass, related circuitry, and software for interpreting compass readings, as well as from an accelerometer 315. The compass module 313 and accelerometer 315 may be used, as described with respect to FIG. 2B, to detect user motion and orientation in order to change the view of a geographic area that was previously photographed panoramically, with the digital images downloaded to the device 302 from a server.

A wireless interface 308 manages communication with a wireless network, which may include a data network that also carries voice communications. The wireless interface 308 may operate in a familiar manner, and may provide for communication by the device 302 with messaging services such as text messaging, e-mail, and telephone voice mail messaging. The wireless interface 308 may also support downloads and uploads of content and computer code over a wireless network. For example, images may be downloaded over the wireless network by an application such as GOOGLE STREETVIEW, where an application running on the device 302 (such as a JavaScript application running on a web page displayed on the device) may have access to the compass data and may request additional image data around a particular geographic point in response to movement of the device 302 by the user.

Various forms of persistent storage may be provided, such as fixed disk drives and/or solid-state memory devices. Two examples are shown here. First, a maps/lists/etc. storage 316 contains data to be used by applications 310, and can include lists of data elements, map tiles, and other well-known data structures that let a user interact with applications on device 302.

Other storage includes user defaults 318, which may be profile information for a user stored on the same media as the maps/lists/etc. storage 316. The user defaults 318 contain various parameters about the user of the device 302, such as user profile data.

Using the components shown here, and others that are omitted for clarity, the device 302 can respond to user inputs in particular ways. For example, the device 302 may respond to panning inputs within large areas by displaying a control that permits accelerated panning, i.e., panning that is significantly faster than dragging across the subject matter itself and that typically permits navigation from one end of the area to the other with a single swipe of the control.

FIGS. 4A-4B are flow diagrams of example processes for receiving user selections via a graphical user interface. FIG. 4A shows, for example, a process by which a device may respond to panning inputs within a large area of data.

The process begins at box 400, where a request is received to display large-area data. Large-area data may include many forms of data that extend beyond the boundaries of a single screen, such as long lists, large images, or maps. The request to display such information can take many forms, such as a search query submitted from a device, where the search results returned to the device include large-area information in the form of a list or a map.

At box 402, the process selects a subset of the large-area data and displays it. If the large-area data is a map, the displayed subset may be the portion of the map surrounding an address, for example an address that was the result of a search query entered by the user. At box 404, a panning input is received from the user, generally by the user sliding a finger or a stylus across the touch screen display.

At box 406, the process responds to the user input by displaying a panning control on the display, after first determining the relative sizes of the entire data area and the display. If the area of the data is no larger than the display, or only slightly larger, no control may be displayed, since accelerated panning provides little benefit in such cases.

The speed and manner of the input may also affect whether the control is displayed. If the user drags slowly across the display, the process may assume that the user is not interested in navigating to distant corners of the data and may decline to display the control. The same may apply if the user leaves their finger on the screen at the end of the motion, which can indicate that the user does not intend to pan very far and therefore does not need an accelerated control. In contrast, if the user moves quickly and lifts their finger at the end of the motion so as to produce a "fling", that may indicate an intent to pan a long distance, and the control may be generated in that situation.
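
The decision at box 406 might be sketched as follows, with illustrative thresholds for "much larger than the display" and for what counts as a fling; none of the names or numbers are taken from the patent.

```kotlin
// Hypothetical decision of whether to show the accelerated panning control: the
// data area must be much larger than the display, and the input should look like
// a fast "fling" (finger lifted while still moving) rather than a slow drag.
fun shouldShowPanControl(
    dataHeightPx: Int,
    displayHeightPx: Int,
    dragVelocityPxPerSec: Float,
    fingerLiftedDuringMotion: Boolean,
    sizeRatioThreshold: Float = 3f,
    flingVelocityThreshold: Float = 1500f
): Boolean {
    val muchLargerThanDisplay = dataHeightPx > displayHeightPx * sizeRatioThreshold
    val looksLikeFling = fingerLiftedDuringMotion && dragVelocityPxPerSec > flingVelocityThreshold
    return muchLargerThanDisplay && looksLikeFling
}
```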

At box 408, the process responds to further user input, such as a "fling" or subsequent panning inputs, by increasing the prominence of the panning control. This may involve increasing the control's size or brightness, or pulsing it. The prominence may be increased once, for example when a second panning input is received, or it may go through several increasing stages up to some maximum. The point of increasing the control's prominence is to make it increasingly obvious to the user that an accelerated panning option is available: the more the user pans across the subject matter manually, the more likely it is that they want to move a long distance, and the more help the control can provide.

At box 411, the user sees the control and selects it to enable accelerated panning, and the process responds to the user's manipulation of the control by panning the data quickly according to that manipulation. If the control is moved downward by the user, the displayed data moves past the display at an exaggerated speed as the user watches. The control may disappear after a period of inactivity, such as when the user neither selects it nor makes the normal panning motions that would bring it up; for example, the control may fade out so that the user can see any data that lies beneath it.

The control can be brought back if the user repeats the flicking action, and it will be placed in a position that best represents the user's current location in the data. For example, if the user is viewing the middle of a list of video files on her device, the control may appear along the side of the display, halfway between the top and the bottom.

FIG. 4B illustrates a process for generating an accelerated panning control for a list of graphically represented items, such as the names of musical artists or similar groups of data elements. The process is similar to that shown in FIG. 4A, but involves more specific responses to user input.

The process begins at box 420, where the process receives input from the user in the form of a flick, i.e., a quick movement followed by lifting of the finger or stylus, on a long list of items. The process then checks the length of the list (boxes 422 and 424). If the list is short, or not substantially longer than what the device's display can show, the process may simply continue to receive input from the user (box 426).

If the list is long, a thin panning control may be displayed (box 428), usually along the right edge of the display so that it does not cover the left-justified text. The list may also scroll in response to the flick, with the speed and distance of scrolling representing the momentum of a physical object: the list keeps scrolling after the flick and gradually slows, as if being retarded by friction. The control moves along the side of the display while this occurs, to reflect the position of the display between the top and bottom of the list.

At box 433, the process receives another flick input from the user, and the list may scroll again as it did after the initial flick. The fact that the user has flicked twice may indicate that they want to move much farther down or up the list, so at box 432 the process thickens the displayed control to make it more prominent to the user. Box 434 checks whether the user has selected the control, though in general the process may check for such input at any time after the control is displayed. If the user neither selects the control nor flicks the subject matter again within a certain period, the control disappears (box 436). If the user does select the control, the control is thickened to make it easier to manipulate, and the display is panned according to the user's manipulation of the control, in manners like those discussed above.
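
The prominence behavior described in these boxes could be modeled as a small state machine, sketched here with hypothetical states and an arbitrary timeout.

```kotlin
// Hypothetical prominence handling for the list control: a first flick shows a thin
// control, a second flick thickens it, and it fades away if it is neither selected
// nor flicked again within a timeout.
enum class ControlState { HIDDEN, THIN, THICK, SELECTED }

class PanControl(private val timeoutMs: Long = 3000) {
    var state = ControlState.HIDDEN
        private set
    private var lastEventMs = 0L

    fun onFlick(nowMs: Long) {
        state = if (state == ControlState.HIDDEN) ControlState.THIN else ControlState.THICK
        lastEventMs = nowMs
    }

    fun onSelect(nowMs: Long) { state = ControlState.SELECTED; lastEventMs = nowMs }

    // Called periodically; hides an unused control after the timeout elapses.
    fun onTick(nowMs: Long) {
        if (state != ControlState.HIDDEN && state != ControlState.SELECTED &&
            nowMs - lastEventMs > timeoutMs
        ) state = ControlState.HIDDEN
    }
}
```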

FIGS. 4C-4D are flow charts showing example processes for updating a display based on the motion of a mobile device. FIG. 4C shows, for example, actions that a mobile device may perform in response to sensed motion of the device, while FIG. 4D shows, for example, actions that may be performed by systems in addition to the mobile device, such as systems that help detect motion and update the device's display to reflect it. The processes are described here using a mobile device running a map application.

Referring to FIG. 4C, the process begins at box 440, where a map application is launched. In some cases, the map application launches automatically when the device boots; for example, the application could be bootstrapped into the initialization routines of the mobile device. In other cases, the map application is launched by the user, such as by selecting an icon on the mobile device's display. The map application might also be launched by another application or process; for example, a social networking application may launch the map application when it presents the locations of friends to the user.

Once the application has been launched, it receives an address (box 442). This can occur in a number of ways: the user can enter the address manually, or a GPS module on the mobile device may provide an address automatically, among other options.

In box 444, the map application fetches and displays map tiles for the area around the address. For example, the application can display tiles depicting the buildings, parks, streets, and other features of that area.

Box 446 shows the map application fetching and displaying StreetView images. A StreetView image is a photograph taken at street level from the perspective of a person or vehicle on the street. In some cases, the user of the map application identifies the street for which StreetView images are to be retrieved and displayed. The first image shown might be the digital image taken at that point by a camera facing in the same direction that the device is currently facing.

In box 448, the map application senses motion of the mobile device. To do so, the mobile device can communicate with one or more sensors or modules that detect motion; for example, the map application can obtain readings from an accelerometer or a compass to sense the motion of the mobile device.

In box 450, the map application pans the StreetView images to match the user's orientation, i.e., the orientation of their mobile device. As illustrated in FIG. 2B, StreetView images are displayed on the displays 222a-222c of mobile device 220 according to the orientation of the user 224a-224c.

This allows a user to quickly get a sense of a location, such as a shopping district or a friend's neighborhood they plan to visit. By holding the mobile device in front of them and turning in a circle, the user can make the images pan to match their movement, see the location all the way around, and decide whether they want to visit it.

Referring to FIG. 4D, the process begins at box 452, where a map search box is launched by the mobile device. Search boxes, in general, are input controls that allow a user to enter a query; a search box could be an editable text box or another user interface component.

In box 454, the device receives an address input from the search box. In some implementations, the user types an address into the search box (e.g., "123 Main Street, Anytown, CA"). In other implementations, the address is provided by an automated process; for example, the map application might be pre-configured with addresses such as the user's home or workplace and provide one of them automatically upon request, or a GPS module could determine the device's current address from a GPS calculation.

In box 456, the device submits a formatted map query. The map query can be formatted according to any number of application programming interfaces (APIs); examples of formats include, but are not limited to, database queries, common gateway interface (CGI) request formats, hypertext markup language (HTML) formats, and other traditional formats for submitting queries.
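
As an illustration of such a formatted query, the following sketch builds a simple HTTP-style request; the endpoint and parameter names are invented for the example and do not correspond to an actual Google API.

```kotlin
// Hypothetical construction of a formatted map query.
// The endpoint and parameter names are illustrative only.
import java.net.URLEncoder

fun buildMapQuery(address: String, zoom: Int = 15): String {
    val encoded = URLEncoder.encode(address, "UTF-8")
    return "https://maps.example.com/api/tiles?address=$encoded&zoom=$zoom"
}
```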
