Alphabet – Vassilis Papakonstantinou, Stavros Schizas, Mobiltron Inc

Abstract for “Systems and methods to detect and report personal emergencies in real-time”

According to at least one aspect, a system and method for detecting and reporting personal emergencies includes a mobile application running on a user's client device and a backend application running on a backend server. The mobile application and the backend application can communicate with each other on a regular basis. Either the mobile application or the backend application can monitor sensor data, user context data, or any other information indicative of the user's health, activities, or environment and, based on these data, detect or predict an emergency event. If an emergency occurs, the backend application or the mobile application can notify the user's emergency contacts or warn the user. The backend application can use the sensor data and other user-related data to update a user-specific emergency predictive model, which is used to predict or detect emergency events.

Background for “Systems and methods to detect and report personal emergencies in real-time”

People take on risks in their everyday activities that they often do not realize. A person can be involved in an accident while driving, crossing a road, walking, or climbing a mountain. Elderly or sick persons can become victims of an emergency whether or not they are at home. When a personal emergency occurs, it is important to notify the authorities immediately so that proper assistance or rescue can be provided to those at risk. If a person experiences a personal emergency alone or in a remote location, it may be harder for others to detect the event, and the person may not receive the help they need. Research has also shown that people who witness an emergency often assume that someone else will call the emergency services providers (e.g., police, fire department, or hospital). This bystander behavior makes timely reporting of an incident less likely as the number of witnesses increases.

Some emergency detection/response systems make it easier for a person experiencing a personal emergency at home to seek medical attention. A personal emergency response system (PERS), for example, provides a device that can connect to an emergency switchboard via a phone line. A person experiencing a personal emergency at home (such as an elderly or sick person who has fallen while alone) can press a button on the device to call for help.

The present disclosure relates to methods and systems for detecting and reporting personal emergency events using a mobile application running on a user's client device. These systems and methods allow the detection of emergency events associated with the user based on regularly collected sensor data, user context data, and/or any other data relevant to the user's health, activities, or environment. The systems and methods described herein include a mobile application and a backend application running on a backend system. The mobile application and the backend application can be configured to communicate with each other on a regular basis and to monitor sensor data, user context data, and other data indicative of the user's health, activities, or environment. Using these data, the mobile application or the backend application can determine whether the user is experiencing, or is about to experience, an emergency. If a personal emergency is detected, the mobile application or the backend application can notify emergency contacts or warn the user. The mobile application can be configured to transmit the sensor data and other data relevant to the user's health, activities, and environment to the backend application. The backend application can combine the data received from the mobile application with data from other sources to generate or update an emergency predictive model that is specific to the user. The backend application can transmit the emergency predictive model (or parameters thereof) to the mobile application for use in detecting personal emergency events. In certain embodiments, the backend application itself may use the generated emergency predictive model or its parameters to detect personal emergencies associated with the user.

In some implementations, the sensor data can be associated with sensors embedded in the user's client device (such as accelerometer(s), digitizer(s), a digital clock, microphone(s), or the like), sensors mounted on wearables of the user (such as thermometer(s), pulse measurement device(s), pulse oximetry device(s), or the like), ambient sensors (such as thermometer(s), humidity measurement device(s), or other sensors), and sensors associated with medical devices (such as heart rate detection device(s), battery charge level detection device(s), or the like). Data monitored by the mobile application can include measurements such as the user's temperature, blood sugar, and pulse readings, as well as data about user activities (such as activity type, location, or the like).
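As a concrete illustration of how such heterogeneous readings might be gathered before analysis, the following is a minimal Python sketch; it is not part of the disclosure, and the class name, field names, and example sources are assumptions.

```python
# Minimal sketch (not from the disclosure) of normalizing readings from the
# device, wearables, ambient sensors, and medical devices into one structure.
from dataclasses import dataclass, field
import time

@dataclass
class SensorReading:
    source: str      # e.g., "device.accelerometer", "wearable.pulse", "ambient.thermometer"
    kind: str        # e.g., "motion", "pulse", "temperature", "humidity"
    value: float
    unit: str
    timestamp: float = field(default_factory=time.time)

readings = [
    SensorReading("device.accelerometer", "motion", 1.02, "g"),
    SensorReading("wearable.pulse", "pulse", 58.0, "bpm"),
    SensorReading("ambient.thermometer", "temperature", 31.5, "degC"),
]

# Pick the most recent pulse reading for the analysis step.
latest_pulse = max((r for r in readings if r.kind == "pulse"), key=lambda r: r.timestamp)
print(latest_pulse.value)
```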

Some implementations can detect an impending emergency as well as a current or immediate emergency. The mobile application or the backend application can be configured to notify the user or an emergency contact when an impending emergency is detected. The notification to the user (or to another person) about an impending emergency can include information about the event and/or actions the user should take. If a current emergency is detected, the mobile application or the backend application can be configured to notify the user's emergency contact(s). Notifying the emergency contacts can include sending a message with information about the emergency, the user's current location, and any pertinent medical information. The mobile application can be configured to display information about the emergency, the user's medical information, or other information on the client device's screen, including on the lock screen of the client device. The mobile application or the backend application can also be configured to notify an emergency responsive service about the detected emergency and the user's location.
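For illustration only, the sketch below assembles the kind of distress message described above (emergency description, location, medical notes) and fans it out to emergency contacts; the function names, message fields, and the injected send_sms transport are assumptions, not the patent's interface.

```python
# Hypothetical sketch of building and sending the distress notification.
def build_distress_message(emergency: str, location: dict, medical_info: str) -> str:
    return (
        f"EMERGENCY: {emergency}\n"
        f"Last known location: {location['lat']:.5f}, {location['lon']:.5f}\n"
        f"Medical notes: {medical_info}"
    )

def notify_contacts(contacts: list, message: str, send_sms) -> None:
    # send_sms is an injected transport (SMS, e-mail, push notification, ...).
    for contact in contacts:
        send_sms(contact["phone"], message)

message = build_distress_message(
    "Possible fall detected",
    {"lat": 37.97150, "lon": 23.72670},
    "Heart condition; on anticoagulants",
)
notify_contacts(
    [{"name": "Alice", "phone": "+10000000000"}],
    message,
    send_sms=lambda phone, text: print(f"to {phone}:\n{text}"),
)
```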

In certain implementations, the backend application can be configured to receive data from the mobile application as well as from other online sources. Based on the user-related data, the backend application can be configured to customize the emergency predictive model or its parameters for the user. The backend application can send data indicative of the customized predictive model or its parameters to the mobile application, allowing the mobile application to use the personalized emergency predictive model to predict or detect emergency events. Regular adaptation/customization of the emergency predictive model by the backend application allows accurate detection of emergency events for that user. Further, the backend application can be configured to allow the user to share data with healthcare providers or social network services. The backend application can also obtain user-related information from web services such as online weather services, news services, government websites, and the like. The backend application may also be configured to transmit information indicative of emergency events to a call center of the backend system or to an emergency responsive service system.

Some implementations allow the backend application to encrypt user data with user-specific credentials. The backend application can perform the processes that customize the emergency predictive model for the user on a micro virtual machine (mVM). The backend application can detect that the mobile application is running on the client device and launch the corresponding mVM. To enhance privacy and security, the mVM can run on a micro-virtual host with a time-varying Internet Protocol (IP) address. When the backend application detects that the mobile application is no longer in use on the client device, it can stop the mVM, encrypt its image, and store the encrypted data. The mVM increases the security and privacy of user data. The user can export the corresponding data from the backend system with little to no risk of exposing user privacy. The user can also delete the corresponding data from the backend system by deleting the mVM containing the encrypted user data, leaving no copies on the backend system. This prevents (or mitigates) the risk of others accessing the user's data.
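The stop-and-encrypt lifecycle described above could look roughly like the following sketch. It is only illustrative: the MicroVM class is an assumption, and Fernet symmetric encryption from the `cryptography` package stands in for whatever user-specific encryption scheme an implementation would actually use.

```python
# Illustrative sketch of launching, stopping, and encrypting an mVM image.
from typing import Optional
from cryptography.fernet import Fernet

class MicroVM:
    def __init__(self, user_id: str, user_key: bytes):
        self.user_id = user_id
        self._fernet = Fernet(user_key)   # user-specific credential
        self.running = False
        self.encrypted_image: Optional[bytes] = None

    def launch(self, encrypted_image: Optional[bytes] = None) -> None:
        # Decrypt a previously stored image (if any) and start serving the user.
        if encrypted_image is not None:
            _image = self._fernet.decrypt(encrypted_image)  # restore state here
        self.running = True

    def stop(self, image: bytes) -> bytes:
        # Stop the mVM, encrypt its image, and hand it back for storage.
        self.running = False
        self.encrypted_image = self._fernet.encrypt(image)
        return self.encrypted_image

key = Fernet.generate_key()
vm = MicroVM("user-42", key)
vm.launch()
stored_image = vm.stop(b"serialized mVM state for user-42")
vm.launch(stored_image)   # later, when the mobile application reappears
```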

According to at least one aspect, a method of detecting an emergency can include a mobile device receiving data about a user's health, activity, or environment. The method can include the mobile device or a server analyzing the data about the user's health, activity, or environment and detecting an emergency event from the data. The method can also include the mobile device or the server executing an emergency protocol responsive to the detection.

In certain embodiments, the method can include receiving, from one or more sensors, data regarding one or more of motion, location, ambient temperature, ambient humidity, or medical measurements. The method can include receiving, from one or more content sources over one or more networks, contextual data about the user. The contextual data can include information about the user's location, air pollution levels, previous activity, posts, messages, or status changes in a user feed. The method can also include the mobile device receiving data about public alerts related to one or more of the user's location or activity. The data about public alerts can include data about disaster alerts, weather alerts, or crime alerts.

In some embodiments, the method can include the mobile device or the server analyzing the data using an emergency predictive model that takes into account the current condition of the user, environmental parameters, and user settings to determine the likelihood of an emergency. The mobile device or the server can compute a score value indicative of the likelihood that an emergency will occur and compare the score value to a threshold value. If the score value exceeds the threshold value, the mobile device can alert the server.
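A minimal sketch of the score-and-threshold step follows, assuming a logistic form; the feature names, weights, and threshold are invented for illustration, since the disclosure does not prescribe a particular model.

```python
# Illustrative score computation and threshold comparison.
import math

def emergency_score(features: dict, weights: dict, bias: float) -> float:
    # Logistic score in [0, 1] indicating the likelihood of an emergency.
    z = bias + sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

features = {"pulse_deviation": 2.4, "motionless_minutes": 12.0, "ambient_temp_c": 41.0}
weights = {"pulse_deviation": 0.8, "motionless_minutes": 0.15, "ambient_temp_c": 0.05}
THRESHOLD = 0.5

score = emergency_score(features, weights, bias=-5.0)
print(f"score={score:.2f}", "-> alert the server" if score > THRESHOLD else "-> no action")
```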

In certain embodiments, the method can include verifying the emergency via the mobile device. The method can also include the mobile device or the server initiating emergency reporting responsive to the verification. The mobile device or the server can send, responsive to the verification, a distress message to one or more emergency contacts.

According to at least one aspect, a system to detect an emergency event occurring to a user can include a mobile application executable on a mobile device and a server in communication with the mobile application to receive data from the mobile device. The mobile application can be configured to receive data about the user's health, activity, or environment. At least one of the mobile application or the server can be configured to analyze the data about the user's health, activity, or environment, detect an emergency event from the data, and execute an emergency protocol responsive to the detection.

In some embodiments, the data can be obtained from one or more sensors and can include one or more of motion, location, or ambient temperature measurements. The data can also be obtained from one or more content sources over one or more networks and can include contextual data about the user. The contextual data can include traffic patterns, air pollution levels, previous activity, posts, messages, or status changes in user feeds. At least one of the mobile application or the server can be configured to receive data about public alerts related to the user's location. This data can include information about disaster alerts or weather alerts.

In certain embodiments, at least one of the mobile application or the server can be configured to analyze the data using an emergency predictive model that takes into account the user's current condition, environmental parameters, and past knowledge to predict the likelihood of an emergency. The mobile device or the server can compute a score value indicative of the likelihood of an emergency occurring and compare the score value to a threshold value. If the score value exceeds the threshold value, the mobile device can alert the server.

In certain embodiments, at least one of the server or the mobile application can be configured to verify the emergency via the mobile device. The server or the mobile application can initiate emergency reporting upon verification. The server or the mobile application can send, upon verification, a distress message to one or more of the user's emergency contacts, an emergency response service, or an ad-hoc safety network created by the user.

In certain embodiments, the server can monitor the availability of the mobile application and launch a micro virtual machine (mVM) when the mobile application is started on the mobile device. If the server detects that the mobile application is no longer in use, it can stop the mVM, encrypt the image of the mVM as well as data associated with the user, and store the encrypted data for later use by the corresponding mobile application. The mVM can run on a micro-virtual host whose IP address can be set randomly at initiation and renewed at random intervals while the mVM is running on the host.

According to at least one aspect, a method of predicting an emergency can include a mobile device receiving data about a user's health, activity, or environment. The method can include the mobile device or a server analyzing the data about at least one of the user's health, activity, or environment. The mobile device or the server can compute a score value indicative of the likelihood of an emergency occurring and notify the user if the score value exceeds a threshold.

In some embodiments, the method can include the mobile device or the server detecting that the user's behavior has not changed and, responsive to the detection, the mobile device or the server initiating emergency reporting. The mobile device or the server can send a distress message, via the mobile device, to one or more of the user's emergency contacts, an emergency response service, or an ad-hoc safety network created by the user. In some embodiments, the data can include contextual data about the user as well as data from one or more sensors associated with the user.

According to at least one aspect, a system to predict an emergency can include a mobile application executable on a mobile device and a server that communicates with the mobile application to receive data from the mobile device. The mobile application can be configured to receive data about at least one of a user's health, activity, or environment. The mobile application or the server can be configured to analyze the data about the user's health, activity, or environment and determine, from the data, a score value indicative of the likelihood of an emergency. If the score value exceeds a threshold, the mobile application or the server can send a notification to the user, advising the user to modify their behavior to avoid an emergency.

In certain embodiments, the mobile application or the server can detect that the user's behavior has not changed and, responsive to the detection, initiate emergency reporting via the mobile device or the server. The mobile application or the server can be configured to respond to the detection by sending a distress message to one or more of the user's emergency contacts, an emergency response service, or an ad-hoc safety network created by the user. In some embodiments, the data can include contextual data about the user as well as data from one or more sensors associated with the user.
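A hypothetical sketch of the warn-then-escalate flow just described: notify the user, allow a grace period for behavior to change, and escalate to emergency reporting otherwise. The callback names and the grace period are assumptions.

```python
# Illustrative warn-then-escalate flow.
import time

def warn_then_escalate(read_score, notify_user, report_emergency,
                       threshold: float = 0.5, grace_seconds: float = 60.0) -> str:
    score = read_score()
    if score <= threshold:
        return "ok"
    notify_user("Risk detected: please change your activity or confirm you are safe.")
    time.sleep(grace_seconds)            # grace period for the user to react
    if read_score() > threshold:         # behavior has not changed
        report_emergency()
        return "reported"
    return "resolved"

# Example wiring with stubbed callbacks.
status = warn_then_escalate(
    read_score=lambda: 0.8,
    notify_user=print,
    report_emergency=lambda: print("distress message sent to the safety network"),
    grace_seconds=0,
)
print(status)   # reported
```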

For purposes of reading the description of the various embodiments below, the following descriptions of the sections of the specification and their respective contents can be helpful:

Section A describes a computing and network environment which can be useful for practicing embodiments of detecting and reporting personal emergencies described herein.

Section B describes embodiments of systems and methods for detecting and reporting personal emergencies in real-time.

“A. Computing and Network Environment.”

In addition to discussing specific embodiments of detecting and reporting personal emergencies, it may be helpful to describe aspects of the operating environment as well as associated system components (e.g., hardware elements) in connection with the methods and systems described herein.

Referring to FIG. 1A, an embodiment of a network environment 100 is depicted. The network environment 100 includes one or more client devices 102a-102n (also generally referred to as client device(s) 102), each executing a corresponding mobile application 103, a backend system 105, one or more emergency contact devices 107, and computer systems associated with emergency responsive service(s) 96, healthcare provider service(s) 97, web service(s) 98, and social network service(s) 99. Client devices 102 can receive sensor data from wearable articles 95, such as jewelry and other accessories. The backend system 105 can execute one or more backend applications 120a-120n (also referred to as backend application(s) 120), which may in turn execute micro virtual machines (mVMs) 129a-129n (also referred to as mVM(s) 129).

The client devices 102, the backend system 105, and the emergency contact devices 107 can be connected to one or more networks 104 and can communicate through the one or more networks 104. The client devices 102 or the backend system 105 can also be configured to communicate, through the one or more networks 104, with computer systems associated with the healthcare provider service(s) 97, the web service(s) 98, and the social network service(s) 99. The client device 102 or the backend system 105 can be configured to automatically communicate with the system(s) associated with the emergency responsive service(s) 96 via the one or more networks 104. The one or more networks 104 allow wired and/or mobile communication between devices such as the client devices 102, computer devices associated with the backend system 105, the emergency contact devices 107, and/or electronic devices associated with the emergency responsive service(s) 96, the healthcare provider service(s) 97, the web service(s) 98, and the social network service(s) 99. Examples of the networks 104 and the devices coupled to them are described below with regard to FIGS. 1B-1E, in terms of hardware and software. Some implementations also include one or more call centers 101 that are communicatively coupled to the backend system 105.

The mobile application 103 can detect an emergency associated with the user of the client device 102 based on analysis of the collected data. Upon detecting an emergency, the mobile application 103 can cause the client device 102 to take appropriate action, such as presenting warning signals to the user or communicating with the emergency contact device(s) 107 associated with the user's emergency contact person(s). The client device 102 can notify the emergency contact person(s) by sending notification(s) to the corresponding emergency contact device(s) 107 via the one or more networks 104. The client device 102 can also notify the emergency responsive service(s) 96 of an immediate emergency by sending electronic messages. The client device 102 can store sensor-measured data and user context data, and can transmit the collected sensor-measured data to the backend system 105 via the mobile application 103.

The backend system 105 can include a computer system capable of executing multiple backend applications 120. Each backend application 120 is associated with a corresponding mobile application 103 running on a user's client device 102. The backend application 120 and the corresponding mobile application 103 are configured to communicate with each other frequently to exchange data. The backend application 120 and the corresponding mobile application 103 may communicate using a secure communication protocol, such as Hypertext Transfer Protocol Secure (HTTPS), Secure Real-time Transport Protocol (SRTP), Secure Communications Interoperability Protocol (SCIP), or any other secure communication protocol known to a person of ordinary skill in the art.
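As a rough sketch of the regular, secure exchange between a mobile application 103 and its backend application 120, the snippet below posts telemetry over HTTPS using the Python requests library; the endpoint URL, payload shape, and bearer-token authentication are assumptions, since the disclosure only requires a secure protocol such as HTTPS.

```python
# Illustrative HTTPS exchange between the mobile application and the backend.
import requests

BACKEND_URL = "https://backend.example.com/api/v1/telemetry"   # hypothetical endpoint

def push_telemetry(user_id: str, readings: list, auth_token: str) -> dict:
    response = requests.post(
        BACKEND_URL,
        json={"user_id": user_id, "readings": readings},
        headers={"Authorization": f"Bearer {auth_token}"},
        timeout=10,
    )
    response.raise_for_status()
    # The backend may piggyback updated model parameters on the reply.
    return response.json()
```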

The backend application 120 can be configured to generate an emergency predictive model (or parameters thereof) that the corresponding mobile application 103 can use to detect or predict emergency events associated with the user. The backend application 120 provides the emergency predictive model (or its parameters) to the corresponding mobile application 103 through the backend system 105, the one or more networks 104, and the client device 102. The backend application 120 can also provide additional information to the corresponding mobile application 103, such as medical data obtained from the healthcare provider service(s) 97, weather information, public alerts, and user context data from the web service(s) 98, other information from the social network service(s) 99, or a combination thereof. The backend application 120 may be configured to launch the mVM 129 when the client device 102 indicates that the mobile application 103 has been launched. The mVM 129 can be configured to execute user-related algorithms or processes, such as generating the user-specific emergency predictive model or detecting emergency events. The mVM 129 can improve data security and privacy.
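To make the model-generation step concrete, the following sketch fits a toy user-specific model with scikit-learn logistic regression and extracts parameters that could be serialized for the mobile application; the model family, feature layout, and training data are assumptions for illustration only.

```python
# Illustrative user-specific model fit and parameter extraction on the backend.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy history: rows of [pulse_deviation, motionless_minutes, ambient_temp_c],
# labels 1 = emergency, 0 = normal.
X = np.array([[0.1, 1, 22], [0.3, 2, 24], [2.5, 15, 40],
              [3.0, 30, 41], [0.2, 3, 23], [2.8, 20, 39]])
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Parameters that could be serialized and pushed to the mobile application 103.
params = {"weights": model.coef_[0].tolist(), "bias": float(model.intercept_[0])}
print(params)
```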

The backend application 120 can also be configured to receive sensor data and user context data from the mobile application 103, and to store user data, sensor data, and user context data on the backend system 105. This data can be used to track the user's health as well as to generate or update the user-specific emergency predictive model. The backend application 120 may be configured to mirror the functionality of the mobile application 103; that is, the backend application 120 can analyze sensor data and user context data, determine whether there is an impending or current emergency, and report a detected emergency or take other responsive measures. In certain implementations, the backend system 105 can be communicatively coupled to the call center 101. When an emergency is detected, the call center 101 can receive information about the emergency and other user information. Operators at the call center 101 can verify an emergency with the user by calling the user, and can also contact the emergency responsive service(s) 96 or the user's emergency contacts.

Each user (associated with a client device 102 and the mobile application 103 running thereon) can have an emergency contacts network that includes one or more individuals or entities and one or more corresponding emergency contact devices 107. A client device 102 associated with one user can also serve as an emergency contact device 107 for another user. If an emergency is detected, the mobile application 103 or the backend application 120 can be configured to send notifications to the emergency contact devices 107. An emergency notification can include a description of the emergency, an indication of the location of the client device 102, and medical information about the user. Upon receiving an emergency notification, the emergency contact person(s) can contact the emergency responsive service(s) 96. The emergency responsive service(s) 96 can be associated with a government agency, hospital, fire department, or other entity that provides emergency services.

The healthcare provider service(s) 97 can be associated with a pharmacy, hospital(s), or other healthcare-related entities. The backend application 120 or the corresponding mobile application 103 can retrieve data from a pharmacy system about prescriptions and medications associated with the user. The backend application 120 or the corresponding mobile application 103 can also exchange information with a computer system associated with a doctor or other healthcare provider entity of the user.

The web service(s) 98 can include web pages that provide weather information, public alerts (such as hurricane, storm, or tornado alerts, virus and infectious disease alerts, crime alerts, or a combination thereof), or news. The information from the web service(s) 98 can be accessed by the mobile application 103 directly or via the backend application 120, and can then be used to assess any immediate or impending emergency. The backend application 120 can filter the information from the web service(s) 98 based on the user's location and other user information.

The social network service(s) 99 can include any social network service to which the user of the client device 102 has subscribed. One or more of the user's social network contacts can automatically receive some user information, such as login/logout times, location information, and other information. This information can, for example, be shared automatically (based on the user's preferences) with friends, family, emergency contacts, or other contacts who are subscribed to the same social network service(s) 99. This shared information allows remote monitoring of the user's activities by one or more people in the user's social network. The client device 102 also allows the user to view the activities of friends using information from the social network service(s) 99. The backend application 120 or the corresponding mobile application 103 can allow automatic retrieval of data from the social network service(s) 99 and presentation of that data to the user, as well as automatic publishing/sharing of user information to social network contacts. A social network service 99 can be configured to forward an emergency message indicative of a current emergency (received from the client device 102 or the backend application 120) directly to the emergency responsive service(s) 96.

Hardware and software implementations of the one or more networks 104 and the devices coupled thereto (such as the client devices 102, devices/servers associated with the backend system 105, the emergency contact devices 107, and servers/devices associated with the services 96-99) are described below with regard to FIGS. 1B-1E. Processes for detecting and reporting emergency events are discussed with regard to FIGS. 2-3.

Referring to FIG. 1B, a depiction of a network environment is shown. The network environment includes one or more clients 102a-102n (also generally referred to as client(s) 102, client node(s) 102, client machine(s) 102, client computer(s) 102, or client device(s) 102) in communication with one or more backend servers 106a-106n (also generally referred to as backend server(s) 106, backend node(s) 106, or backend remote machine(s) 106) via one or more networks 104. A client 102 can function both as a client node seeking access to resources provided by a backend server and as a backend server providing access to hosted resources for other clients 102a-102n.

Although FIG. 1B shows a network 104 between the clients 102 and the backend servers 106, the clients 102 and the backend servers 106 may be on the same network 104. In some embodiments, there are multiple networks 104 between the clients 102 and the backend servers 106. In one of these embodiments, a network 104' (not shown) may be a private network and a network 104 may be a public network. In another of these embodiments, a network 104 may be a private network and a network 104' a public network. In still another of these embodiments, both networks 104 and 104' may be private networks.

The network 104 can be connected via wired or wireless links. Wired links can include Digital Subscriber Line (DSL), coaxial cable lines, or optical fiber lines. Wireless links can include BLUETOOTH, Wi-Fi, Worldwide Interoperability for Microwave Access (WiMAX), an infrared channel, or a satellite band. The wireless links can also include any cellular network standard used to communicate among mobile devices, including standards that qualify as 1G, 2G, 3G, or 4G. Network standards may qualify as one or more generations of mobile telecommunication standards by fulfilling a specification or set of standards; the 3G standards, for example, may correspond to the International Mobile Telecommunications-2000 (IMT-2000) specification, and the 4G standards may correspond to the International Mobile Telecommunications Advanced (IMT-Advanced) specification. Examples of cellular network standards include AMPS, GSM, and UMTS. Cellular network standards may use various channel access methods, e.g., FDMA, TDMA, CDMA, or SDMA. In some embodiments, different types of data may be transmitted via different links and standards. In other embodiments, the same types of data may be transmitted via different links and standards.

The network 104 can be any type and/or form of network. The geographical scope of the network 104 can vary widely: it can be a body area network (BAN), a personal area network (PAN), a local-area network (LAN) such as an intranet, a metropolitan area network (MAN), a wide area network (WAN), or the Internet. The topology of the network 104 can be of any form and can include, e.g., any of the following: point-to-point, bus, star, ring, or tree. The network 104 can be an overlay network that is virtual and sits on top of one or more layers of other networks 104'. The network 104 can be of any such network topology as known to those ordinarily skilled in the art and capable of supporting the operations described herein. The network 104 can utilize different protocols and layers, including, e.g., the Ethernet protocol, the internet protocol suite (TCP/IP), the ATM (Asynchronous Transfer Mode) technique, the SONET (Synchronous Optical Networking) protocol, or the SDH (Synchronous Digital Hierarchy) protocol. The TCP/IP internet protocol suite can include the application layer, transport layer, and internet layer (including, e.g., IPv6). The network 104 can be a type of broadcast network, a telecommunications network, a data communication network, or a computer network.

In some embodiments, multiple backend servers 106 may be logically grouped together. In one of these embodiments, the logical group of backend servers may be referred to as a backend server farm or a backend machine farm 38. In another of these embodiments, the backend servers 106 may be geographically dispersed. In other embodiments, a backend machine farm 38 may be administered as a single entity. In still other embodiments, the backend machine farm 38 includes a plurality of backend machine farms 38. The backend servers 106 within each backend machine farm 38 can be heterogeneous: one or more of the backend servers 106 or backend machines 106 can operate according to one type of operating system platform (e.g., WINDOWS NT, manufactured by Microsoft Corp. of Redmond, Wash.), while one or more of the other backend servers 106 can operate according to another type of operating system platform (e.g., Unix, Linux, or Mac OS X).

In one embodiment, the backend servers 106 of a backend machine farm 38 can be stored in high-density rack systems, along with associated storage systems, and located in an enterprise data center. In this embodiment, consolidating the backend servers 106 in this way may improve system manageability, data security, and system performance by locating backend servers 106 and high-performance storage systems on localized high-performance networks. Centralizing the backend servers 106 and storage systems and coupling them with advanced system management tools allows more efficient use of server resources.

The backend servers 106 of each backend machine farm 38 do not need to be physically proximate to other backend servers 106 in the same backend machine farm 38. The group of backend servers 106 logically grouped as a backend machine farm 38 may be interconnected using a wide-area network (WAN) connection or a metropolitan-area network (MAN) connection. For example, a backend machine farm 38 may include backend servers 106 physically located on different continents or in different areas of a country, state, city, or campus. Data transmission speeds between backend servers 106 in the backend machine farm 38 can be increased if the backend servers 106 are connected using a local-area network (LAN) connection or some form of direct connection. A heterogeneous backend machine farm 38 may include one or more backend servers 106 operating according to a type of operating system, while one or more other backend servers 106 execute one or more types of hypervisors rather than operating systems. Hypervisors may be used in these embodiments to emulate virtual hardware, partition physical hardware, virtualize physical hardware, and execute virtual machines that provide access to computing environments. Native hypervisors may run directly on the host machine; examples include VMware ESX/ESXi, manufactured by VMWare, Inc. of Palo Alto, Calif.; the Xen hypervisor, an open-source product whose development is overseen by Citrix Systems, Inc.; and the HYPER-V hypervisors provided by Microsoft, among others. Hosted hypervisors run within an operating system on a second software level; examples of hosted hypervisors include VMware Workstation and VIRTUALBOX.

Management of the backend machine farm 38 may be de-centralized. For example, one or more backend servers 106 may comprise components, subsystems, and modules to support one or more management services for the backend machine farm 38. In one of these embodiments, one or more backend servers 106 provide functionality for management of dynamic data, including techniques for handling failover, data replication, and increasing the robustness of the backend machine farm 38. Each backend server 106 may communicate with a persistent store and, in some embodiments, with a dynamic store.

The backend server 106 may be a file server, application server, web server, proxy server, firewall, gateway, virtualization server, or deployment server. In one embodiment, the backend server 106 may be referred to as a backend remote machine or a backend node.

Referring to FIG. 1C, a cloud computing environment is depicted. The cloud computing environment can provide the client 102 with access to one or more resources. The cloud computing environment can include one or more clients 102a-102n in communication with the cloud 108 over one or more networks 104. Clients 102 can include, e.g., thick clients, thin clients, and zero clients. A thick client can provide at least some functionality even when disconnected from the cloud 108 or the backend servers 106. A thin client or a zero client may depend on the connection to the cloud 108 or the backend server 106 to provide functionality. A zero client may depend on the cloud 108, other networks 104, or backend servers 106 to retrieve operating system data for the client device. The cloud 108 can include backend platforms, e.g., backend servers 106 and storage.

The cloud 108 can be private, public, or hybrid. Public clouds can include public backend servers 106 that are maintained by third parties rather than the owners of the backend applications 120. The backend servers 106 may be located in remote locations, as described above. Public clouds may be connected to the backend servers 106 over a public network. Private clouds can include backend servers 106 that are owned by the owners of the backend applications 120, and private clouds may be connected to the backend servers 106 over a private network. Hybrid clouds 108 can include both private and public networks 104 and backend servers 106.

The cloud 108 may also include cloud-based delivery models, e.g., Software as a Service (SaaS) 110, Platform as a Service (PaaS) 112, and Infrastructure as a Service (IaaS) 114. IaaS can refer to a user renting infrastructure resources that are needed during a specified time period. IaaS providers can offer storage, networking, servers, or virtualization resources from large pools, allowing users to quickly scale up by accessing more resources as needed. Examples of IaaS include AMAZON WEB SERVICES provided by Amazon.com, Inc. of Seattle, Wash., RACKSPACE CLOUD provided by Rackspace US, Inc. of San Antonio, Tex., Google Compute Engine provided by Google Inc. of Mountain View, Calif., or RIGHTSCALE provided by RightScale, Inc. of Santa Barbara, Calif. PaaS providers can offer the functionality provided by IaaS, including, e.g., storage, networking, servers, or virtualization, as well as additional resources such as the operating system, middleware, or runtime resources. Examples of PaaS include WINDOWS AZURE provided by Microsoft Corporation of Redmond, Wash., Google App Engine provided by Google Inc., and HEROKU provided by Heroku, Inc. of San Francisco, Calif. SaaS providers can offer the resources that PaaS provides, including storage, networking, servers, virtualization, operating system, middleware, or runtime resources, and in some instances can offer additional resources including, e.g., data and application resources. Examples of SaaS include GOOGLE APPS provided by Google Inc., SALESFORCE provided by Salesforce.com Inc. of San Francisco, Calif., or OFFICE 365 provided by Microsoft Corporation. Examples of SaaS can also include data storage providers, e.g., DROPBOX provided by Dropbox, Inc. of San Francisco, Calif., Microsoft SKYDRIVE provided by Microsoft Corporation, Google Drive provided by Google Inc., and Apple ICLOUD provided by Apple Inc. of Cupertino, Calif.

Clients 102 can access IaaS resources with one or more IaaS standards, including, e.g., Amazon Elastic Compute Cloud, Open Cloud Computing Interface, Cloud Infrastructure Management Interface, or OpenStack standards. Some IaaS standards can allow clients access to resources over HTTP and can use Representational State Transfer (REST) protocol or Simple Object Access Protocol (SOAP). Clients 102 can access PaaS resources with different PaaS interfaces. Some PaaS interfaces use HTTP packages, standard Java APIs, JavaMail API, Java Data Objects (JDO), Java Persistence API (JPA), Python APIs, or web integration APIs for different programming languages including, e.g., Rack for Ruby, WSGI for Python, or PSGI for Perl, or other APIs built on REST, HTTP, XML, or other protocols. Clients 102 can access SaaS resources through web-based user interfaces provided by a web browser (e.g., GOOGLE CHROME or Microsoft INTERNET EXPLORER). Clients 102 can also access SaaS resources through smartphone or tablet applications, including, e.g., Salesforce Sales Cloud or the Google Drive app. Clients 102 can also access SaaS resources through the client operating system, including, e.g., the Windows file system for DROPBOX.

In certain embodiments, access to IaaS, PaaS, or SaaS resources may be authenticated. For example, a server or an authentication server can authenticate a user via security certificates, HTTPS, or API keys. API keys can include various encryption standards such as, e.g., Advanced Encryption Standard (AES). Data resources can be sent over Transport Layer Security (TLS) or Secure Sockets Layer (SSL).

The client 102 and the backend server 106 can be deployed as and/or executed on any type and form of computing device, e.g., a computer, network device, or appliance capable of communicating on any type and form of network and performing the operations described herein. FIGS. 1D and 1E depict block diagrams of a computing device 100 useful for practicing an embodiment of the client 102 or a backend server 106. As shown in FIGS. 1D and 1E, each computing device 100 includes a central processing unit 121 and a main memory unit 122. As shown in FIG. 1D, a computing device 100 can include a storage device 128, an installation device 116, a network interface 118, display devices 124a-124n, a keyboard 126, and a mouse. The storage device 128 can include, without limitation, an operating system, software, and the backend applications 120. As shown in FIG. 1E, each computing device 100 can also include additional optional elements, e.g., a memory port 113, a bridge 170, one or more input/output devices 130a-130n (generally referred to using reference numeral 130), and a cache memory 140 in communication with the central processing unit 121.

The central processing unit 121 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 122. In many embodiments, the central processing unit 121 is provided by a microprocessor unit, e.g., those manufactured by Intel Corporation of Mountain View, Calif.; those manufactured by Motorola Corporation of Schaumburg, Ill.; the ARM processor and TEGRA system on a chip (SoC) manufactured by Nvidia of Santa Clara, Calif.; the POWER7 processor manufactured by International Business Machines of White Plains, N.Y.; or those manufactured by Advanced Micro Devices of Sunnyvale, Calif. The computing device 100 may be based on any of these processors, or any other processor capable of operating as described herein. The central processing unit 121 may utilize instruction-level parallelism, thread-level parallelism, different levels of cache, and multi-core processors. A multi-core processor can include two or more processing units on a single computing component. Examples of multi-core processors include the AMD PHENOM IIX2, the INTEL CORE i5, and the INTEL CORE i7.

Main memory unit 122 may include one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 121. Main memory unit 122 may be volatile and faster than the storage 128 memory. Main memory units 122 may be Dynamic random access memory (DRAM) or any variants, including static random access memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), Single Data Rate Synchronous DRAM (SDR SDRAM), Double Data Rate SDRAM (DDR SDRAM), Direct Rambus DRAM (DRDRAM), or Extreme Data Rate DRAM (XDR DRAM). In some embodiments, the main memory 122 or the storage 128 may be non-volatile; e.g., non-volatile read access memory (NVRAM), flash memory non-volatile static RAM (nvSRAM), Ferroelectric RAM (FeRAM), Magnetoresistive RAM (MRAM), Phase-change memory (PRAM), conductive-bridging RAM (CBRAM), Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive RAM (RRAM), Racetrack, Nano-RAM (NRAM), or Millipede memory. The main memory 122 can be based on any of the above described memory chips, or any other available memory chips capable of operating as described herein. In the embodiment shown in FIG. 1D, the processor 121 communicates with main memory 122 via a system bus 150 (described in more detail below). FIG. 1E depicts an embodiment of a computing device 100 in which the processor communicates directly with main memory 122 via a memory port 113. For example, in FIG. 1E the main memory 122 may be DRDRAM.

FIG. 1E depicts an embodiment in which the main processor 121 communicates directly with cache memory 140. In other embodiments, the main processor 121 communicates with cache memory 140 using the system bus 150. Cache memory 140 typically has a faster response time than main memory 122 and is typically provided by SRAM, BSRAM, or EDRAM. In the embodiment shown in FIG. 1E, the processor 121 communicates with various I/O devices 130 via a local system bus 150. Various buses may be used to connect the central processing unit 121 to any of the I/O devices 130, including a PCI bus, a PCI-X bus, a PCI-Express bus, or a NuBus. For embodiments in which the I/O device is a video display 124, the processor 121 may use an Advanced Graphics Port (AGP) to communicate with the display 124 or the I/O controller 123 for the display 124. FIG. 1E depicts an embodiment of a computer 100 in which the main processor 121 communicates directly with I/O device 130b or other processors, e.g., via HYPERTRANSPORT or RAPIDIO communications technology. FIG. 1E also depicts an embodiment in which local buses and direct communication are mixed: the processor 121 communicates with I/O device 130a using a local interconnect bus while communicating with I/O device 130b directly.

A wide variety of I/O devices 130a-130n may be present in the computing device 100. Input devices can include, e.g., keyboards, mice, trackpads, and trackballs. Output devices can include video displays, graphical displays, and speakers.

Devices 130a-130n may include a combination of multiple input or output devices, such as the Microsoft KINECT, the Nintendo Wiimote, the Nintendo WII U GAMEPAD, or the Apple IPHONE. Some devices 130a-130n allow gesture recognition inputs by combining some of the inputs and outputs. Some devices 130a-130n provide facial recognition, which may be used as input for authentication or other commands. Some devices 130a-130n provide voice recognition and inputs, including, e.g., Microsoft KINECT, SIRI for IPHONE by Apple, or Google Now.

Additional devices 130a-130n have both input and output capabilities, including, e.g., haptic feedback devices and touchscreen displays. Touchscreens, multi-touch displays, touchpads, touch mice, or other touch sensing devices may use different technologies to sense touch, including, e.g., capacitive, surface capacitive, projected capacitive touch (PCT), resistive, infrared, waveguide, dispersive signal touch (DST), in-cell optical, surface acoustic wave (SAW), bending wave touch (BWT), or force-based sensing technologies. Some multi-touch devices can allow two or more contact points with the surface, allowing advanced functionality including, e.g., pinch, spread, rotate, and scroll gestures. Some touchscreen devices, including, e.g., Microsoft PIXELSENSE or the Multi-Touch Collaboration Wall, can have larger surfaces, such as on a table-top or on a wall, and can also interact with other electronic devices. Some I/O devices 130a-130n, display devices 124a-124n, or groups of devices can provide augmented reality. The I/O devices can be controlled by an I/O controller 123 as shown in FIG. 1C. An I/O device can also provide storage and/or an installation medium 116 for the computing device 100. In still other embodiments, the computing device 100 can provide USB connections (not shown) to receive handheld USB storage devices. In further embodiments, an I/O device 130 can be a bridge between the system bus 150 and an external communication bus, e.g., a USB bus, a SCSI bus, a FireWire bus, an Ethernet bus, or a Gigabit Ethernet bus.

In some embodiments, display devices 124a-124n can be connected to the I/O controller 123. Display devices can include, e.g., liquid crystal displays (LCD), thin-film transistor LCDs (TFT-LCD), blue-phase LCDs, electronic paper (e-ink) displays, or liquid crystal on silicon (LCOS) displays. Display devices 124a-124n can also be 3-D displays using, e.g., stereoscopy, polarization filters, or active shutters. Display devices 124a-124n can also be head-mounted displays (HMD). In some embodiments, display devices 124a-124n or the corresponding I/O controllers 123 can be controlled through or have hardware support for OPENGL, the DIRECTX API, or other graphics libraries.

In some embodiments, the computing device 100 can connect to multiple display devices 124a-124n, which can be of the same or different type and/or form. Any of the I/O devices 130a-130n and/or the I/O controller 123 can include any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable, or provide for the connection and use of multiple display devices 124a-124n by the computing device 100. For example, the computing device 100 can include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect, or otherwise use the display devices. In some embodiments, software may be designed and constructed to use the display device of another computing device as an additional display device 124a for the computing device 100. For example, an Apple iPad may connect to a computing device 100 and its display may be used as an additional display screen, e.g., as an extended desktop. One ordinarily skilled in the art will recognize and appreciate the various ways and embodiments in which a computing device 100 can be configured to have multiple display devices 124a-124n.

Referring again to FIG. 1D, the computing device 100 can include a storage device 128 (e.g., one or more hard disk drives or redundant arrays of independent disks) for storing an operating system or other related software, and for storing application software programs such as any program related to the backend application 120. Examples of storage device 128 include, e.g., a hard disk drive (HDD); an optical drive including a CD drive, DVD drive, or Blu-ray drive; a solid-state drive (SSD); a USB flash drive; or any other device suitable for storing data. Some storage devices can include multiple volatile and non-volatile memories, including, e.g., solid-state hybrid drives that combine hard disks with solid-state cache. A storage device 128 can be non-volatile, mutable, or read-only. A storage device 128 can be internal and connect to the computing device 100 via a bus 150, or can be external and connect to the computing device 100 via an I/O device 130 that provides an external bus. A storage device 128 can also connect to the computing device 100 via the network interface 118 over a network 104, such as the Remote Disk for MACBOOK AIR by Apple. Some client devices 100 may not require a non-volatile storage device 128 and may be thin clients or zero clients. Some storage devices 128 can also be used as an installation device 116 and can be suitable for installing software and programs. Additionally, the operating system and the software can be run from a bootable medium, for example, a bootable CD, e.g., KNOPPIX, a bootable CD for GNU/Linux that is available as a GNU/Linux distribution from knoppix.net.

Client device 100 can also obtain and install software applications from an application distribution platform. Examples of application distribution platforms include the App Store for iOS provided by Apple, Inc., the Mac App Store provided by Apple, Inc., GOOGLE PLAY for Android OS provided by Google Inc., the Chrome Webstore for CHROME OS provided by Google Inc., and the Amazon Appstore for Android OS and KINDLE FIRE provided by Amazon.com, Inc. An application distribution platform can include a repository of applications on a backend server 106 or a cloud 108, which the clients 102a-102n can access over a network 104. An application distribution platform can include applications developed and provided by various developers. A user of a client device 102 can select, purchase, and/or download an application via the application distribution platform.

Furthermore, the computing device 100 can include a network interface 118 to interface to the network 104 through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T3, Gigabit Ethernet, Infiniband), broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET, ADSL, VDSL, BPON, GPON, fiber optics including FiOS), or some combination of any or all of the above. Connections can be established using a variety of communication protocols (e.g., TCP/IP, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), IEEE 802.11a/b/g/n/ac, CDMA, GSM, WiMax, and direct asynchronous connections). In one embodiment, the computing device 100 communicates with other computing devices 100 via any type and/or form of gateway or tunneling protocol, e.g., Secure Socket Layer (SSL) or Transport Layer Security (TLS), or the Citrix Gateway Protocol manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla. The network interface 118 can include a built-in network adapter, a network interface card, a PCMCIA network card, an EXPRESSCARD network card, a card bus network adapter, a wireless network adapter, a modem, or any other device suitable for interfacing the computing device 100 to any type of network capable of communication and performing the operations described herein.

A computing device 100 of the sort depicted in FIGS. 1D and 1E can operate under the control of an operating system, which controls scheduling of tasks and access to system resources. The computing device 100 can be running any operating system, such as any of the versions of the MICROSOFT WINDOWS operating systems, the different releases of the Unix and Linux operating systems, any embedded operating system, any real-time operating system, any proprietary operating system, or any operating system for mobile computing devices. Typical operating systems include, but are not limited to: WINDOWS 2000, WINDOWS Server 2012, WINDOWS CE, WINDOWS Phone, WINDOWS XP, WINDOWS VISTA, and WINDOWS 7, all manufactured by Microsoft Corporation of Redmond, Wash.; MAC OS and iOS, manufactured by Apple, Inc. of Cupertino, Calif.; Linux, a freely-available operating system, e.g., the Linux Mint distribution ("distro") or Ubuntu, distributed by Canonical Ltd. of London, United Kingdom; Unix or other Unix-like derivative operating systems; and Android, designed by Google of Mountain View, Calif., among others. Some operating systems, including, e.g., the CHROME OS by Google, can be used on zero clients or thin clients, including, e.g., CHROMEBOOKS.

The computer system 100 can be any workstation, telephone, desktop computer, laptop or notebook computer, netbook, ULTRABOOK, tablet, server, handheld computer, mobile telephone, smartphone or other portable telecommunications device, media playing device, gaming system, mobile computing device, or any other type and/or form of computing, telecommunications, or media device capable of communicating on a network. The computer system 100 has sufficient processor power and memory capacity to perform the operations described herein. In some embodiments, the computing device 100 can have different processors and operating systems consistent with the device. The Samsung GALAXY smartphones, for example, operate under the control of the Android operating system developed by Google, Inc. and receive input via a touch interface.

In some embodiments, the computing device 100 is a tablet, e.g., the IPAD line of devices by Apple, the GALAXY TAB family of devices by Samsung, or the KINDLE FIRE by Amazon.com, Inc. of Seattle, Wash. In other embodiments, the computing device 100 is an eBook reader, e.g., the KINDLE family of devices by Amazon.com, or the NOOK family of devices by Barnes & Noble, Inc. of New York City, N.Y.

In some embodiments, the communications device 102 includes a combination of devices, e.g., a smartphone combined with a digital audio player or portable media player. One of these embodiments is a smartphone, e.g., the IPHONE family of smartphones manufactured by Apple, Inc., a Samsung GALAXY family of smartphones manufactured by Samsung, Inc., or a Motorola DROID family of smartphones. In yet another embodiment, the communications device 102 is a laptop or desktop computer equipped with a web browser and a microphone and speaker system, e.g., a telephony headset. In these embodiments, the communications devices 102 are web-enabled and can receive and initiate phone calls. In some embodiments, a laptop or desktop computer is also equipped with a webcam or other video capture device that enables video chat and video calling.

In some embodiments, the status of one or more machines 102, 106 in the network 104 is monitored, generally as part of network management. In one of these embodiments, the status of a machine can include an identification of load information (e.g., the number of processes on the machine, CPU and memory utilization), port information (e.g., the number of available communication ports and the port addresses), or session status (e.g., the duration and type of processes, and whether a process is active or idle). In another of these embodiments, this information can be identified by a plurality of metrics, and the plurality of metrics can be applied at least in part to decisions regarding load distribution, network traffic management, and network failure recovery, as well as other aspects of the operations described herein. Aspects of the operating environments and components described above will become apparent in the context of the systems and methods disclosed below.

B. Detecting and Reporting Personal Emergencies.

Various embodiments herein relate to systems and methods for monitoring and analyzing sensor data in order to detect and report emergencies for one or more users. A platform including the backend applications 120 running on the backend system 105 and the mobile applications 103 running on the client devices 102 can detect and report emergency events for the users. The mobile application 103 can be downloaded to the client device 102 of a user whose health, medical information, and activities are to be monitored for emergency detection and reporting. The backend application 120 or the mobile application 103 can monitor and analyze sensor data and user context data in order to detect and report emergency events. Detected emergency events can be reported to the user's emergency contacts via the corresponding emergency contact devices 107.

The backend application 120 can be configured to analyze sensor data and user context data and to report emergencies. The backend application 120 can analyze the data available to it to determine whether there is an impending or current emergency, for example when the communication link between the client device 102 and the backend system 105 is lost. The backend application 120 can notify the user if an emergency is detected. In some instances, the backend application 120 can be configured to mirror the functionality of the mobile application 103, so that it can perform emergency event prediction or detection using any data available at the backend system 105. In cases of communication failure between the client device 102 and the backend system 105, the backend application 120 can assume the role of the mobile application 103.
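The failover behavior could be approximated by a simple heartbeat check like the sketch below; the class name, the five-minute timeout, and the cached-data comment are assumptions rather than the disclosure's mechanism.

```python
# Illustrative heartbeat-based failover from the mobile application to the backend.
import time

HEARTBEAT_TIMEOUT_S = 300   # assumed window before the link is considered lost

class FailoverMonitor:
    def __init__(self):
        self.last_heartbeat = time.time()

    def record_heartbeat(self) -> None:
        # Called whenever the mobile application 103 checks in.
        self.last_heartbeat = time.time()

    def link_lost(self) -> bool:
        return time.time() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S

monitor = FailoverMonitor()
if monitor.link_lost():
    # Backend-side detection takes over using the most recent cached data.
    print("communication lost: backend assumes the mobile application's role")
```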

During setup of the mobile application 103, the user is asked to identify emergency contacts, either by entering them directly or by importing contacts from an existing address book or other database. The mobile application 103 allows the user to provide emergency contact information such as email addresses, phone numbers, social network information, messaging application identifiers, voice over IP (VoIP) details, and any other information that can be used to communicate with the emergency contacts. A social safety network drawn from existing social networks such as Facebook, Twitter, Foursquare, Google+, or the like can also be added to the user's safety net. After setup, the user can update the emergency contact information at any time. Once the emergency contacts have been identified, the mobile application 103 can notify the emergency contact persons or entities via instant messaging, social network messaging, email, or other messaging services that they have been designated as emergency contacts. A notification to a prospective emergency contact may include an invitation to join the user's safety network. After the notification has been sent, the prospective emergency contact may either accept or decline the invitation. If the prospective emergency contact is already registered with the system, the invitation can be accepted or rejected through the mobile application 103; otherwise, the reply can be sent as a text message, SMS, email, or any other form associated with the original invitation. Social safety network configuration can include authentication of the mobile application 103 or of user credentials on the respective social network(s) in order to permit automatic posting of messages via the mobile application 103, taking the user's privacy settings into account.

During setup of the mobile application 103, the client device 102 prompts the user to input relevant medical information. This can include current medical conditions, medications, and any other information indicative of the user's health, such as age, weight, or information about his or her physical condition. The mobile application 103 can use the user's medical information to adjust the level of emergency sensitivity. The client device 102 can transmit the user's medical information to the backend system 105, and in response the backend application 120 can create an emergency predictive model that is specific to the user. If a user has heart problems or heart disease, for example, the emergency predictive model tailored to that user is more likely to identify certain situations as emergencies than an emergency predictive model for a user who does not have heart problems. The mobile application 103 may prompt the user to update the medical information on a regular basis. The mobile application 103 and the backend application 120 can also receive updates to the user's medical information from other sources, such as systems associated with the healthcare provider service(s) 97. Some implementations allow the user to manually adjust the level of emergency sensitivity to match a current activity or state.
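
As an illustration of how medical information could feed into emergency sensitivity, the following sketch lowers a detection threshold for higher-risk users; the class, field names, and numeric adjustments are assumptions made for illustration only and are not taken from the disclosure.

    # Minimal sketch (not the disclosed implementation): deriving an
    # emergency-sensitivity threshold from a self-reported medical profile.
    from dataclasses import dataclass, field

    @dataclass
    class MedicalProfile:
        age: int
        weight_kg: float
        conditions: set = field(default_factory=set)   # e.g. {"heart_disease"}
        medications: set = field(default_factory=set)

    BASE_THRESHOLD = 0.80  # probability above which an emergency protocol would start

    def sensitivity_threshold(profile: MedicalProfile, manual_adjustment: float = 0.0) -> float:
        """Lower the threshold (raise sensitivity) for higher-risk users."""
        threshold = BASE_THRESHOLD
        if "heart_disease" in profile.conditions:
            threshold -= 0.15           # flag borderline cardiac events sooner
        if profile.age >= 75:
            threshold -= 0.10           # falls are more dangerous for elderly users
        threshold += manual_adjustment  # user may relax sensitivity during, e.g., a workout
        return min(max(threshold, 0.30), 0.95)

    profile = MedicalProfile(age=78, weight_kg=70, conditions={"heart_disease"})
    print(sensitivity_threshold(profile))  # 0.55: more situations treated as emergencies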

“The mobile application 103 can prompt the user to connect the client device 102 with available sensors. These sensors can include sensors embedded in wearable articles such as jewelry, clothes, and accessories, ambient sensors, and sensors associated with medical devices (such as pacemaker(s), defibrillator(s), wearable artificial kidney(s), or the like). Wearable sensors can include motion sensors, thermometers, pulse detection sensors, pulse oximetry sensors, or any combination thereof. Sensors associated with medical devices may include battery chargeability detection sensors, medical sensors, or other sensors. Ambient sensors may include humidity sensor(s), ambient thermometer(s), and other ambient sensors. The client device 102 can connect wirelessly to one or more sensors via BLUETOOTH LE, near field communication (NFC), or other wireless communication methods through the appropriate handshaking processes; other communication options may also be used. Connecting the client device with available sensors may include the user entering sensor identification information on the client device. The mobile application 103 can also access data from sensors embedded in the client device 102 (such as motion sensors or other sensors).

Once the mobile application 103 is installed on the client device 102, the mobile application 103 can begin collecting user-related information, checking for emergencies, and communicating on a regular basis with the backend application 120. This deployment phase of the mobile application 103 is described below. Based on the user's preferences, the mobile application 103 may require fingerprint identification on client devices 102 that have biometric authentication capabilities (such as fingerprint recognition).

“FIG. 2 depicts a flow diagram of a method 200 for detecting and reporting emergency events. The method 200 can include receiving sensor data, user context data, or other data (step 215), analyzing the received data (step 220), checking for an immediate emergency (decision block 223), initiating an emergency protocol if one is detected (step 240), and, if none is detected, checking whether there is an impending emergency (decision block 250) and notifying the user (step 266). The backend application 120 can perform the method 200 in some embodiments.
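
A minimal sketch of the control flow of method 200 may help fix the sequence of steps; the helper names and the Likelihood structure below are assumptions introduced for readability, not part of the disclosure.

    # Hypothetical sketch of one detection cycle of method 200: receive data,
    # analyze it, check for a current emergency, otherwise check for an
    # impending one and warn the user.
    from collections import namedtuple

    Likelihood = namedtuple("Likelihood", "current_emergency impending_emergency reason")

    def run_detection_cycle(receive_data, analyze, start_emergency_protocol, warn_user):
        data = receive_data()                    # step: receive sensor/context data
        result = analyze(data)                   # step: analyze the received data
        if result.current_emergency:             # decision: current emergency?
            start_emergency_protocol(data)       # step: initiate the emergency protocol
        elif result.impending_emergency:         # decision: impending emergency?
            warn_user(result.reason)             # step: notify/warn the user
        return result

    # Example wiring with stubbed components:
    run_detection_cycle(
        receive_data=lambda: {"heart_rate_bpm": 52, "speed_mps": 0.0},
        analyze=lambda d: Likelihood(False, True, "abnormally low heart rate"),
        start_emergency_protocol=lambda d: print("emergency protocol started"),
        warn_user=lambda reason: print(f"warning: {reason}"),
    )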

“The method 200 includes the mobile application 103 receiving sensor data, user context data, or any other data indicative of the health, activity, and environment of the user (step 215). The mobile application 103 can receive the sensor data from one or more sensors, such as sensors embedded in the client device 102, sensors on wearable articles, ambient sensors, sensors associated with medical devices, or any combination thereof. The mobile application 103 can collect sensor data such as motion data, location data, ambient temperature measurements, and ambient humidity measurements, as well as medical measurement data (such as user body temperature measurements, user heartbeat measurements, user blood sugar measurements, user blood pressure measurements, etc.) and other sensor data (such as a chargeability level of a medical device battery), or any combination thereof.

“The backend application 120 or the mobile application 103 can collect contextual data about the user, such as data regarding the user's current location (e.g., traffic patterns, air quality level, etc.), data indicative of previous user activity, data indicative of changes in the user's social or economic status (e.g., a divorce, the loss of a job, the death of a loved one, etc.), and public alerts (e.g., weather alerts, crime alerts, alerts for bacteria or viruses, alerts about infectious diseases, etc.). The data can be associated with the location or activity of the user or with any other contextual information. The backend application 120 and the mobile application 103 can collect user context data from the user (e.g., as input to the client device 102), the Internet, social media sites or applications, emergency contacts, or any other source. The backend application 120 or the mobile application 103 can retrieve the user's previous activities and social or economic changes from social media content, such as comments or feeds made by the user. The public alert data can be retrieved by the mobile application 103 or the backend application 120 from systems associated with the healthcare provider service(s) 97 or the web service(s) 98.

“Furthermore, the mobile application 103 and the backend application 120 can receive user data (such as prescription information, medical imaging results, or any other user-related medical information) from systems associated with the healthcare provider service(s) 97. The mobile application 103 may prompt the user to input information about a current activity, select a mode or settings for the mobile application 103, or enter any other data. The mobile application 103 and the backend application 120 may also be able to infer the user's current activity, or select a mode or settings for the mobile application 103, based on sensor data or other data. The data collected by the backend application 120 or the mobile application 103 relate to the user's health, current activity, and environment, and can also indicate the user's economic, social, or psychological status.

“The backend application 120 may also receive sensor data, user context data, or any other data indicative of the user's health, activity, or environment. The backend application 120 can receive sensor data, user context data, and other relevant data from the client device 102, systems associated with the web service(s) 98, systems associated with the healthcare provider service(s) 97, the Internet, and other sources. The mobile application 103, for example, can cause the client device 102 to transmit sensor data from the sensor(s) to the backend system 105. The backend application 120 can also have access to (or store) user data on the backend system 105.

“The method 200 can include the mobile application 103 (or the backend application 120) analyzing the received data (step 220). The received data can be stored temporarily or permanently by either the mobile application 103 or the backend application 120. The backend application 120 or the mobile application 103 can be configured to analyze the data received from the sensors and other sources. The backend application 120 can generate the emergency predictive model, which can include proprietary short-term protective analytics that analyze the input data while taking into account any combination of the user's current health conditions, current activity, current location, or current socioeconomic state. Environment parameters (available through ambient sensors, the backend application 120, the healthcare provider service(s) 97, or the web service(s) 98), user settings, past data, and other data can also be used to determine the likelihood of a current or imminent emergency. The mobile application 103 or the backend application 120 can compute a probability value or score that indicates the likelihood of an immediate or imminent emergency. The backend application 120 can, in some instances, be configured to analyze the user data using the emergency predictive model. The backend application 120, for example, can monitor the client device 102's connectivity to the backend system 105, and upon detecting a loss of connectivity, the backend application 120 can analyze sensor data, as well as other user-related data, using the emergency predictive model.
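
To make the idea of a probability or score value concrete, the sketch below combines a few hand-picked features with per-user weights through a logistic function; the disclosure does not specify the form of the emergency predictive model, so the features, weights, and model family here are assumptions.

    # Illustrative only: a simple per-user scoring function standing in for the
    # emergency predictive model. The backend could ship the weights/bias to
    # the mobile application as the model "parameters".
    import math

    def emergency_probability(features: dict, weights: dict, bias: float) -> float:
        z = bias + sum(weights.get(name, 0.0) * value for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))   # squash to a probability in [0, 1]

    user_weights = {"sudden_deceleration_g": 1.8, "heart_rate_delta_bpm": 0.06,
                    "has_heart_condition": 1.2, "device_battery_low": 0.4}
    user_bias = -4.0

    features = {"sudden_deceleration_g": 2.5, "heart_rate_delta_bpm": 35,
                "has_heart_condition": 1, "device_battery_low": 0}
    p = emergency_probability(features, user_weights, user_bias)
    print(round(p, 3))  # compared against a (possibly user-specific) threshold downstream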

“For instance, in motion sensor data, a sudden drop in speed can indicate that the user has been in a traffic accident, and a sudden drop in altitude could be an indication that the user has fallen or is otherwise in trouble. Audio captured by a microphone can indicate that the user may be the victim of a crime. Sensor measurements that indicate a sudden increase in the user's pulse or heartbeat rate could be an indication of a heart attack, panic attack, or other emergency situation. A low battery charge detected for a pacemaker, artificial kidney, or other medical device can also be a sign that the user is at risk of an emergency.

The mobile application 103 and the backend application 120 can use the received sensor data to calculate the probability of an emergency event. The user's medical information, such as current or past medical conditions, current or past medications, weight, and so forth, can be used to weight the likelihood of particular emergency events. For example, the weight assigned to the likelihood of a panic attack or heart attack may be affected by current activity or stress levels. Information about the current or past location or activities of the user could lead to an increase in the weight assigned to certain risks, such as falling, getting injured, losing consciousness, or being infected by a bacterium or virus. The mobile application 103 and the backend application 120 can also adjust the weight assigned to the risk of suicide, depression, or psychological breakdown based on certain information (such as the loss of a loved one or the loss of a job).
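
The weighting idea can be sketched as a small adjustment of per-risk weights from contextual signals; the risk names, signals, and multipliers below are purely illustrative assumptions.

    # Hypothetical sketch: bump the weights of specific risks based on the
    # user's medical, social, and activity context before scoring.
    def adjust_risk_weights(base_weights: dict, context: dict) -> dict:
        weights = dict(base_weights)
        if context.get("has_heart_condition"):
            weights["heart_attack"] = weights.get("heart_attack", 1.0) * 2.0
        if context.get("recent_job_loss") or context.get("recent_bereavement"):
            weights["psychological_crisis"] = weights.get("psychological_crisis", 1.0) * 1.5
        if context.get("activity") == "hiking_alone":
            weights["fall_injury"] = weights.get("fall_injury", 1.0) * 1.8
        return weights

    print(adjust_risk_weights({"heart_attack": 1.0, "fall_injury": 1.0},
                              {"has_heart_condition": True, "activity": "hiking_alone"}))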

“Storing the received data may include storing the data locally on the client device 102 or at the backend system 105. For data analysis purposes, the received data can be stored temporarily at the client device 102. The received data can also be transmitted to the backend application 120 via the one or more networks 104 for storage on the backend system 105. The mobile application 103 or the backend application 120 can be configured to encrypt the received data before it is transmitted to, or stored on, the backend system 105. Based on the data received from the mobile application 103, the backend application 120 can be configured to generate or update an emergency predictive model. The emergency predictive model can also be generated using data provided by the user through the mobile application 103, such as medical history, weight, and level of activity. The backend application 120 can be configured to make the emergency predictive model, or parameters thereof, available to the mobile application 103 for detecting emergency events.
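
The disclosure does not name a cipher or library for protecting data in transit or at rest; as one hedged example, the sketch below encrypts a payload with AES-GCM using the third-party Python "cryptography" package before it would be handed to the transport layer.

    # Example only: authenticated encryption of collected data prior to
    # transmission/storage. Key management is out of scope for this sketch.
    import json, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_payload(payload: dict, key: bytes) -> dict:
        nonce = os.urandom(12)                          # unique per message
        plaintext = json.dumps(payload).encode("utf-8")
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)  # no associated data
        return {"nonce": nonce.hex(), "ciphertext": ciphertext.hex()}

    key = AESGCM.generate_key(bit_length=256)           # would be user-specific in practice
    packet = encrypt_payload({"heart_rate_bpm": 61, "lat": 37.97, "lon": 23.72}, key)
    print(packet["nonce"], len(packet["ciphertext"]))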

“In certain embodiments, the backend application 120 can use the emergency predictive model to detect current or imminent emergency events and notify the user (or his or her safety network) about these events. The backend application 120 can include one or more functions that mirror those of the mobile application 103. These functionalities enable the backend application 120 to detect and predict emergency events using sensor data, which is especially useful in situations where communication with the client device 102 is disrupted due to power loss or lack of network coverage.

The mobile application 103 can communicate regularly with the backend application 120 to update the user's profile and to run user-specific processes on the backend application 120, including processes to create or adapt the emergency predictive model and processes to retrieve data from other sources. These regular communications also allow the backend application 120 to keep track of the user's current location, current activity, and the client device's battery level.

“The backend application 120 may also receive information from the mobile application 103 about any backup emergency timers running on the client device 102. Backup emergency timers are useful tools for detecting potential emergencies that cannot be detected using sensor data or other data received by the mobile application 103 or the backend application 120, for example when the user does not have the appropriate sensors connected to the client device 102. The mobile application 103 lets the user set a countdown timer based on the expected duration of a potentially dangerous activity. The backend application 120 can receive the timer information and run a similar countdown. To avoid false emergency detection, the user can stop the timer before it expires; if the timer expires without user intervention, an emergency notification protocol can be initiated. For each emergency timer set on the client device 102, a corresponding synchronized duplicate timer can run on the backend application 120, providing additional protection against temporary or permanent communication problems such as network failures or battery drain. In some cases, the duplicate timer can expire during a period when communication with the mobile application 103 has failed; in that case, a different version of the emergency notification protocol (different from the one used when communication with the mobile application 103 is good) may be executed.
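
The backup emergency timer can be pictured as a shared deadline armed on the client and mirrored on the backend; the class and method names below are assumptions used only to illustrate the mechanism.

    # Minimal sketch: the user arms a countdown before a risky activity, a
    # synchronized duplicate deadline lives on the backend, and either side
    # triggers the emergency protocol if the deadline passes with no check-in.
    import time

    class BackupEmergencyTimer:
        def __init__(self, duration_s: float, on_expire):
            self.deadline = time.time() + duration_s
            self.on_expire = on_expire
            self.cancelled = False

        def cancel(self):                # user checks in safely before the deadline
            self.cancelled = True

        def poll(self):                  # called periodically by the app / backend
            if not self.cancelled and time.time() >= self.deadline:
                self.on_expire()         # no user intervention: start the protocol
                self.cancelled = True    # fire only once

    # The client would also transmit {"deadline": timer.deadline} so the backend
    # can arm its duplicate timer, which still fires if the device goes silent.
    timer = BackupEmergencyTimer(duration_s=0.1, on_expire=lambda: print("emergency protocol"))
    time.sleep(0.2)
    timer.poll()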

“The method 200 can also include determining whether an actual or current emergency is occurring to the user (decision block 223), for example by comparing the computed probability value or score value with a threshold value. The threshold value may depend on the state of the user. If the computed emergency probability or score exceeds the threshold value, the mobile application 103 or the backend application 120 can determine that the user is experiencing an actual or current emergency (decision block 223) and initiate an emergency protocol (step 240).

The emergency protocol can include a series of actions that the backend application 120 or the mobile application 103 take when there is an actual (or current) emergency. The emergency protocol may include (i) an emergency verification stage and (ii) an emergency reporting stage. During the emergency verification stage, the client device 102 or the mobile application 103 can perform one or more actions that prompt the user to stop the emergency protocol in the event of a false alarm. The emergency verification stage can therefore be used to avoid or reduce false alarms. User feedback, such as stopping the emergency protocol, can also be used during the emergency verification stage (for example, as training data) to improve the system's emergency detection capabilities (e.g., those of the mobile application 103 or the backend application 120) over time. Data associated with false alarms and data associated with correctly detected emergencies can be used to improve the emergency predictive models or the user-related processes running on the backend application 120.

“Once an emergency is detected, the mobile application 103 can prompt the user to stop the emergency protocol during the emergency verification stage. The mobile application 103 may cause the client device 102 to play loud sounds and flash a light or flashlight to draw attention. This state, which can include playing high-pitched sounds and blinking a light or flashlight, can last from a few minutes up to ten minutes, depending on the emergency sensitivity settings used. If the user cancels the emergency protocol, the mobile application 103 registers the event as a false alarm. If the user does nothing to stop the emergency protocol, the mobile application 103 registers the emergency event as valid. The mobile application 103 might also allow the user to explicitly validate an emergency event. The backend application 120 can store information associated with detected emergency events, such as the input data to the emergency predictive model and an indication of whether the event was valid or a false alarm. This data can then be used as training data to improve the user-specific emergency predictive model.
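
One way to picture the verification stage is a cancellable grace period that either confirms the event or records a false alarm; the timing and callback names below are illustrative assumptions.

    # Hypothetical sketch of the verification stage: attract attention for a
    # grace period, let an authenticated user cancel, otherwise confirm.
    import threading

    class EmergencyVerification:
        def __init__(self, grace_period_s: float, on_confirmed, on_false_alarm):
            self.on_confirmed = on_confirmed
            self.on_false_alarm = on_false_alarm
            self._cancelled = threading.Event()
            self._timer = threading.Timer(grace_period_s, self._expire)

        def start(self):
            print("ALERT: loud sound + flashing light; cancel if you are OK")
            self._timer.start()

        def cancel(self, authenticated: bool):
            if authenticated and not self._cancelled.is_set():
                self._cancelled.set()
                self._timer.cancel()
                self.on_false_alarm()      # logged as training data for the model

        def _expire(self):
            if not self._cancelled.is_set():
                self.on_confirmed()        # proceed to the reporting stage

    v = EmergencyVerification(0.1, lambda: print("reporting stage"),
                              lambda: print("false alarm registered"))
    v.start()
    v.cancel(authenticated=True)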

“In certain implementations, the mobile application 103 may require user authentication before allowing the user to stop the emergency protocol. User authentication may include entering a security code, using fingerprint identification capabilities of the client device 102, or other authentication methods. User authentication is used to prevent unauthorized cancellation of an emergency protocol that has been initiated.

The emergency reporting stage can be started if the user does not stop the emergency protocol during the emergency verification stage. The mobile application 103 or the backend application 120 can transmit a distress signal to the emergency contacts (or safety network) of the user, for example by sending the distress signal to the emergency contact device(s) 107 or to the social network service(s) 99. Some implementations allow the distress signal to be sent to any of the following: the emergency contacts, the safety network, the emergency responsive service(s) 96, a subset of the user's contacts, or a specialized ad-hoc safety network. Some implementations allow the user to set up an ad-hoc safety network for specific situations, such as when traveling abroad, where it would be difficult for the user's regular emergency contacts to respond quickly. The user may choose from a list that includes friends, family members, friends of friends, or other individuals who are willing to look after the user for a specific period, free of charge or for a fee. The user may view the profiles of individuals who list certain skills, such as 'nurse', 'fire fighter', 'emergency responder', or 'knows CPR', view their availability, ratings from past engagements, fees, and so on, and then contract the one that best suits the user's needs. The ad-hoc safety network can also be created based on the proximity of other users, the user's relationship (e.g., 'family', 'friend', 'close friend', etc.) to the respective contacts, user preferences, or other criteria. The distress signal can include information such as: (i) a standard message or a personalized message to the recipients; (ii) an indication of the last known location and direction of the user; (iii) an indication of the last known activity of the user (such as running, walking, driving, etc.); (iv) information about the user's medical conditions and medications; (v) other physiological or social data; or a combination thereof.
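
The items (i)-(v) above suggest a simple message structure; the sketch below assembles such a distress payload, with field names chosen here as assumptions rather than taken from the disclosure.

    # Illustrative structure for a distress message sent to emergency contacts,
    # a safety network, or an emergency responsive service.
    import json, time

    def build_distress_message(user, location, activity, medical, recipients):
        return {
            "type": "distress",
            "sent_at": int(time.time()),
            "message": f"{user} may be experiencing an emergency.",
            "last_known_location": location,   # e.g. {"lat": ..., "lon": ..., "heading": ...}
            "last_known_activity": activity,   # e.g. "running", "walking", "driving"
            "medical_summary": medical,        # conditions/medications, privacy permitting
            "recipients": recipients,          # contacts, ad-hoc safety network, services
        }

    msg = build_distress_message("Alex", {"lat": 37.97, "lon": 23.72, "heading": "NE"},
                                 "cycling", {"conditions": ["asthma"]},
                                 ["contact:+301234567", "safety-network:ad-hoc"])
    print(json.dumps(msg, indent=2))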

“In certain implementations, after the distress signal has been sent, the mobile application 103 can place the client device 102 into a protected mode. The protected mode allows the screen to display basic information about the user while locking the device to prevent accidental cancellation of the emergency protocol. While in protected mode, the client device 102 can beep periodically to draw attention and periodically relay information to the backend application 120 in a way that optimizes battery consumption. Emergency responders can read the displayed information about the user and treat the user accordingly.

“Depending on the characteristics of the client device 102 and the subscription level of the user, the distress signal may be an automated short message service (SMS) message. On client devices 102 where security restrictions prevent automatic SMS transmission, such as those running the iOS operating system, third-party messaging applications can provide automatic messaging instead.

“In some cases, the distress signal may include plain text information indicating the detected emergency event. In these implementations, the distress signal can be sent as an iMessage, an SMS message, an email, a message on a social network, or a message to other users of the emergency detection and reporting system. A temporary uniform resource locator (URL) can be included in some implementations. These implementations allow information that is sensitive (in terms of privacy), such as personal or medical information, to be posted at the URL. Credential authentication, such as a PIN, password, or other authentication information, can be required to access the URL, so that access to the sensitive information is restricted to the appropriate contacts and unauthorized access is prevented. The credential(s) can, in some cases, be shared with a subset of the emergency contacts or members of the safety network before any emergency is detected; in other cases, the credential(s) can be sent as part of the emergency protocol to a subset of users through separate messages. The URL can be configured to exist only temporarily to protect the user's sensitive data from access by unauthorized intruders: a temporary URL prevents the sensitive data from being stored in Internet archives (it is not searchable or cached) and limits the spread of sensitive information in the event of a false alarm. The temporary URL can be removed if the emergency protocol is cancelled.
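
The temporary, credential-protected URL can be sketched as a short-lived random token mapped to the sensitive details and gated by a separately shared PIN; the endpoint, lifetime, and function names below are assumptions, and the domain is a placeholder.

    # Hedged sketch: publish sensitive details under an unguessable, expiring
    # token; revoke it if the emergency protocol is cancelled.
    import secrets, time

    TEMP_LINKS = {}                                    # token -> (payload, pin, expiry)

    def publish_sensitive_details(payload: dict, pin: str, ttl_s: int = 3600) -> str:
        token = secrets.token_urlsafe(16)              # not indexed, not cached
        TEMP_LINKS[token] = (payload, pin, time.time() + ttl_s)
        return f"https://example.invalid/e/{token}"    # placeholder domain

    def fetch_sensitive_details(token: str, pin: str):
        entry = TEMP_LINKS.get(token)
        if entry is None or time.time() > entry[2] or pin != entry[1]:
            return None                                # expired, revoked, or wrong credential
        return entry[0]

    def revoke(token: str):                            # e.g. on a cancelled false alarm
        TEMP_LINKS.pop(token, None)

    url = publish_sensitive_details({"medications": ["beta blocker"]}, pin="4821")
    token = url.rsplit("/", 1)[-1]
    print(url, fetch_sensitive_details(token, "4821"))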

“If there is no current emergency (decision block 223), the mobile application 103 or the backend application 120 can check for an impending emergency (decision block 250). The mobile application 103 can use the previously calculated probability or score value to detect an impending emergency event. Other implementations allow the mobile application 103 or the backend application 120 to compute a second probability or score value, which can be different from the one used for detecting current emergency events, and compare it to a second threshold. Impending emergencies include a low battery charge detected for a medical device, abnormal medical test results, and detected risks (e.g., based on received public alerts) associated with the user's location or activity.

The mobile application 103 or the backend application 120 can be configured to display a warning message to the user on the client device 102 when an impending emergency is detected (decision block 250). The client device 102 can also be configured to emit a sound, flash a light, or produce another output signal to draw the user's attention to the warning. The warning message may include information about the impending emergency, suggestions for the user, and a link to further information.

“In some cases, checking for an impending emergency (decision block 250) can be performed before checking for a current emergency (decision block 223). In other cases, a single check can be performed to detect whether there is an impending or current emergency; the mobile application 103 can, for example, determine which type of emergency is most likely based on the calculated probability or score value.

“Upon detecting an impending emergency (decision block 250), the mobile application 103 or the backend application 120 can send a notification (or warning message) to the user indicating that the impending emergency is about to occur (step 266). The notification can contain an instruction or request for the user to modify a behavior in order to avoid the anticipated emergency. If reckless driving is detected, for example, the notification may include a request for the user to slow down and drive safely. If sensor data shows that the user has high blood pressure or an irregular heartbeat, the notification may include a request for the user to seek immediate medical attention. The backend application 120 or the mobile application 103 can then monitor the user's behavior, for example by tracking the user's speed, location, or motion. If the mobile application 103 or the backend application 120 detects that the user's behavior has not changed (e.g., by monitoring sensor data or other data indicative of user behavior), the mobile application 103 or the backend application 120 can report the anticipated emergency to the user's emergency contacts or to members of the user's social safety network. When reporting the anticipated emergency event, the mobile application 103 or the backend application 120 can include information such as a description of the event, an indication of the user's location, or other pertinent information. The backend application 120 or the mobile application 103 can also transmit a distress signal to the user's emergency contacts, an emergency responsive service, or a specialized safety network. A sketch of this warn-then-escalate behavior follows.
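
The following sketch warns the user and notifies contacts only if the monitored behavior stays essentially unchanged; the thresholds and sample values are assumptions chosen for illustration.

    # Hypothetical escalation logic for an impending emergency.
    def handle_impending_emergency(samples_before, collect_after_warning, warn_user, notify_contacts):
        warn_user("Reckless driving detected: please slow down.")
        samples_after = collect_after_warning()    # keep monitoring after the warning
        avg_before = sum(samples_before) / len(samples_before)
        avg_after = sum(samples_after) / len(samples_after)
        if avg_after >= 0.9 * avg_before:          # behavior essentially unchanged
            notify_contacts({"event": "impending emergency", "detail": "speeding continued"})

    handle_impending_emergency(
        samples_before=[130, 135, 140],            # speed readings before the warning
        collect_after_warning=lambda: [138, 136, 139],
        warn_user=print,
        notify_contacts=lambda report: print("notifying contacts:", report),
    )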

“In some embodiments, the detection and reporting of an actual emergency (decision block 223 and step 240) can be optional. The mobile application 103 and the backend application 120 can be configured in certain embodiments to perform a first method of detecting and reporting actual emergencies and/or a second method of detecting and reporting impending emergencies.

Summary for “Systems and methods to detect and report personal emergencies in real-time”

The present disclosure is about methods and systems that detect and report personal emergency events using a mobile app running on a user’s client device. These systems and methods allow for the detection of emergency events associated with the user using regularly collected sensor data, user context data, and/or any other relevant data to the user’s state of health, activities, environment, and their current health. The systems and methods described in this article include a mobile application and a backend application that runs on the backend system. Both the mobile application and backend can be set up to communicate regularly. You can set up the mobile application and the backend to monitor sensor data, user context data, as well as other data that may indicate the user’s health, activities, or environment. The mobile application and the backend can use the data to determine if the user is in or near to an emergency. The mobile app or the backend can notify emergency contacts or warn the user if the application detects a personal emergency. The mobile app can be set up to transmit sensor data as well as other relevant data about the user’s health, activities, and environment to the backend application. The backend app can combine the data from the mobile app with data from other sources to create or update an emergency prediction model that is specific to the user. The backend app can transmit the emergency predictive model (or parameters thereof) to the mobile application in order to detect personal emergency events. The backend application may use the generated emergency prediction model or its parameters to detect personal emergencies associated with the user in certain embodiments.

In some cases, the sensor data can be associated with sensors embedded in the client device of the user (such as accelerometer(s), digitizer(s), digital clock, microphone(s), and the like), sensors mounted on wearable articles of the user (such as thermometer(s), pulse measurement device(s), pulse oximetry device(s), or the like), ambient sensors (such as thermometer(s), humidity measurement device(s), or other sensors), and sensors associated with medical devices (such as heart beat detection device(s), battery chargeability detection device(s), or the like). Data monitored by the client application can include medical measurements such as the user's body temperature, blood sugar, blood pressure, and pulse readings, as well as data about user activities (such as activity type, location, or the like).

“Some implementations can detect an impending emergency as well as a current or immediate emergency. The mobile application or the backend application can be configured to notify the user or an emergency contact if an impending emergency is detected. The notification to the user (or to someone else) about an impending emergency can include information about the event and/or actions that the user should take. If a current emergency is detected, the mobile application or the backend application can be configured to notify the emergency contact(s) of the user. Notifying the emergency contacts can include sending a message with information about the emergency, the user's current location, and any pertinent medical information. The mobile application can be configured to display information about the emergency, user medical information, or other information on the client device's screen, and the client device's screen can be locked with that information displayed. The mobile application or the backend application can also be configured to notify an emergency response service about the detected emergency and the location of the user.

“In certain implementations, the backend application can be configured to receive data from the mobile application as well as data from other online sources. Based on the user-related data, the backend application can be configured to personalize the emergency predictive model or its parameters. The backend application can send data indicative of the customized predictive model or its parameters to the mobile application, which allows the mobile application to use the personalized emergency predictive model to predict or detect emergency events. The regular adaptation/customization of the emergency predictive model by the backend application allows accurate detection of emergency events for the user. Further, the backend application can be configured to allow the user to share data with healthcare providers or social network services. The backend application can also obtain user-related information from web services such as online weather services, news services, government websites, and the like. The backend application may be configured to transmit information indicative of emergency events to a call center of the backend system or to an emergency responsive service system.

“Some implementations allow the backend application to encrypt user data with user-specific credentials. The backend application can perform the processes that customize the emergency predictive model for the user on a micro virtual machine (mVM). The backend application can detect the presence of the mobile application on the client device and launch the appropriate mVM. To improve privacy and security, the mVM can run on a micro virtual host with a time-varying Internet Protocol (IP) address. When the backend application detects that the mobile application is no longer running on the client device, it can stop the mVM, encrypt the mVM image, and store the encrypted data. The mVM increases the security and privacy of user data. The user can export the corresponding data from the backend system with little to no risk of exposing the user's privacy, and can delete the corresponding data from the backend system by deleting the mVM containing the encrypted user data, leaving no copies on the backend system. This prevents (or mitigates) the risk of others accessing the user's data.
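
The per-user mVM lifecycle can be pictured with a small simulation: launch when the mobile application appears, rotate the host IP while running, and stop and encrypt the image when the application goes away. Everything below is simulated; the disclosure names no virtualization or encryption API, and the XOR step merely stands in for real encryption.

    # Conceptual sketch only; not a real hypervisor or cipher.
    import os, random, secrets

    class MicroVM:
        def __init__(self, user_id: str):
            self.user_id = user_id
            self.running = False
            self.host_ip = None
            self.encrypted_image = None

        def launch(self):
            self.host_ip = "10.%d.%d.%d" % (random.randint(0, 255),
                                            random.randint(0, 255),
                                            random.randint(1, 254))
            self.running = True                    # user-specific processes run here

        def rotate_ip(self):                       # renewed at random intervals while running
            if self.running:
                self.launch()

        def stop_and_encrypt(self, image: bytes, key: bytes):
            self.running = False
            self.host_ip = None
            # stand-in for real encryption of the mVM image + user data
            self.encrypted_image = bytes(b ^ key[i % len(key)] for i, b in enumerate(image))

    vm = MicroVM("user-17")
    vm.launch(); print("running at", vm.host_ip)
    vm.stop_and_encrypt(os.urandom(64), secrets.token_bytes(32))
    print("stored", len(vm.encrypted_image), "encrypted bytes")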

According to at least one aspect, a method of detecting an emergency can include a mobile device receiving data about the user's health, activity, or environment. The method can include the mobile device or a server analyzing the data about the user's health, activity, or environment and detecting an emergency event from the data. The method can also include the mobile device or the server executing, in response to the detection, an emergency protocol.

“In certain embodiments, the method may include receiving, from one or more sensors, data regarding one or more of motion, location, ambient temperature, ambient humidity, or medical measurements. The method can include receiving, from one or more content sources through one or more networks, contextual data about the user. This contextual data could include information about traffic patterns at the user's location, air pollution levels, previous user activity, posts, messages, or changes in status in a user feed. The method can also include the mobile device receiving data about public alerts related to the user's location or activity, which could include data about disaster alerts, weather alerts, or crime alerts.

“In some embodiments, the method may include the mobile device or the server analyzing the data using an emergency predictive model that takes into account the current condition of the user, environmental parameters, and user settings to determine the likelihood of an emergency. The mobile device or the server can compute a score value indicating the likelihood that an emergency event is occurring and compare the score value to a threshold value. If the score value is greater than the threshold value, the mobile device or the server can determine that an emergency event is occurring.

“In certain embodiments, the method may include verification of the emergency event via the mobile device. The method can also include the mobile device or the server initiating, responsive to the verification, emergency reporting. The mobile device or the server can send, responsive to the verification, a distress message to one or more emergency contacts.

According to at least one aspect, a system to detect an emergency event occurring to a user can include a mobile application executable on a mobile device and a server. The mobile application can be configured to receive data about the user's health, activity, or environment. The server can communicate with the mobile application and receive data from it. At least one of the mobile application or the server can be configured to analyze the data about the user's health, activity, or environment, detect an emergency event from the data, and execute an emergency protocol in response to the detection.

“In some cases, the data can be obtained from one or more sensors and may include one or more of motion, location, ambient temperature, ambient humidity, or medical measurements. Data can also be obtained from one or more content sources over one or more networks and can include contextual data about the user. The contextual data may include traffic patterns, air pollution levels, previous user activity, posts, messages, or changes in status of user feeds. The server or the mobile application can be configured to receive data about public alerts related to the user's location or activity, and this data may include information about disaster alerts, weather alerts, or crime alerts.

“In certain embodiments, at least one of the mobile application or the server may be configured to analyze the data using an emergency predictive model that takes into account the user's current condition, environment parameters, and past knowledge in order to predict the likelihood of an emergency. The mobile application or the server can compute a score value indicating the likelihood that an emergency event is occurring and compare the score value to a threshold value. If the score value is greater than the threshold value, the mobile application or the server can determine that an emergency event is occurring.

“In certain embodiments, at least one of the mobile application or the server can be configured to verify an emergency event via the mobile device. The server or the mobile application can initiate, upon verification, emergency reporting. The server or the mobile application can send, upon verification, a distress message to one or more of: emergency contacts, an emergency response service, or an ad-hoc safety network created by the user.

“In certain embodiments, the server may monitor the availability of the mobile application and launch a micro virtual machine (mVM) when the mobile application is started on the mobile device. If the server detects that the mobile application is no longer in use, it can stop the mVM, encrypt the image of the mVM as well as data associated with the user, and store the encrypted data for later use by the corresponding mobile application. The mVM can run on a micro virtual host whose IP address is set randomly at initiation, and the IP address of the micro virtual host can be renewed at random time intervals while the mVM runs on it.

According to at least one aspect, a method to predict an emergency can include a mobile device receiving data about the user's health, activity, or environment. The method can include the mobile device or a server analyzing the data about at least one of the user's health, activity, or environment. The mobile device or the server can compute a score value indicating the likelihood of an emergency occurring and notify the user if the score value is greater than a threshold value.

“In some embodiments, the method may include the mobile device or the server detecting that the user's behavior has not changed and, responsive to the detection, the mobile device or the server initiating emergency reporting. The mobile device or the server can send, via the mobile device, a distress signal to one or more of: emergency contacts, an emergency response service, or an ad-hoc safety network that the user has created. In some embodiments, the data may include contextual data about the user as well as data from one or more sensors associated with the user.

According to at least one aspect, a system to predict an emergency can include a mobile application executable on a mobile device and a server that communicates with the mobile application to receive data from the mobile device. The mobile application and the server can be configured to receive data about at least one of the user's health, activity, or environment. The mobile application or the server can be configured to analyze the data about the user's health, activity, or environment and to determine, from the data, a score value indicating the likelihood of an emergency. If the score value is greater than a threshold, the mobile application or the server can send a notification to the user requesting that the user modify a behavior to avoid the emergency.

“In certain embodiments, the mobile application or the server can detect that the user's behavior has not changed and, responsive to the detection, initiate emergency reporting via the mobile device or the server. The mobile application or the server can be configured to respond to the detection by sending a distress signal to one or more of: emergency contacts, an emergency response service, or an ad-hoc safety network that the user has created. In some embodiments, the data may include contextual data about the user as well as data from one or more sensors associated with the user.

“For purposes of reading the description of the various embodiments below, the following sections of the specification and their respective contents may be helpful:”

“Section A describes a computing and network environment that may be useful for practicing embodiments of detecting and reporting personal emergencies.”

“Section B describes embodiments of systems and methods for detecting and reporting personal emergency events.”

“A. Computing and Network Environment.”

“Before discussing specific embodiments of systems and methods for detecting and reporting personal emergencies, it may be helpful to describe aspects of the operating environment as well as associated system components (e.g., hardware elements) in connection with the methods and systems described herein.”

“FIG. 1A depicts an embodiment of a network environment 100. In brief overview, the network environment 100 includes one or more client devices 102a-102n (also generally referred to as local machine(s) 102, client(s) 102, client node(s) 102, client machine(s) 102, client computer(s) 102, client device(s) 102, endpoint(s) 102, or endpoint node(s) 102), each running a mobile application 103 (also referred to as client application 103), a backend system 105 including one or more backend servers 106, one or more emergency contact devices 107, and systems associated with emergency responsive service(s) 96, healthcare provider service(s) 97, web service(s) 98, and social network service(s) 99. Client devices 102 can receive sensor data from sensors mounted on wearable articles 95 such as jewelry, clothing, and other accessories. One or more backend applications 120a-120n can execute on the backend system 105 and can include micro virtual machines (mVMs) 129a-129n (also referred to as mVM 129).

“The client devices 102, the backend system 105, and the emergency contact devices 107 can all be connected to one or more networks 104 and can communicate through the one or more networks 104. The backend system 105 or the client device 102 can also be configured to communicate, through the one or more networks 104, with systems associated with the healthcare provider service(s) 97, the web service(s) 98, and the social network service(s) 99. The client device 102 or the backend system 105 can further be configured to communicate automatically with system(s) associated with the emergency responsive service(s) 96 via the one or more networks 104. The one or more networks 104 allow wired and/or mobile communication between devices (such as client devices 102, computer devices associated with the backend system 105, emergency contact devices 107, and/or electronic devices associated with the emergency responsive service(s) 96, the healthcare provider service(s) 97, the web service(s) 98, and the social network service(s) 99). Examples of the networks 104 and the devices coupled to them are described, in terms of hardware and software, with respect to FIGS. 1B-1E. Some implementations include one or more call centers 101 that are communicatively linked with the backend system 105.

The mobile application 103 can detect an emergency at the client device 102 based on analysis of the collected data. Upon detecting an emergency, the mobile application 103 can cause the client device 102 to take appropriate action, including emitting signals to warn the user or communicating with the emergency contact device(s) 107 associated with the user's emergency contact person(s). The client device 102 can notify the emergency contact person(s) by sending notification(s) to the corresponding emergency contact device(s) 107 via the one or more networks 104. The client device 102 can also notify the emergency responsive service(s) 96 of an immediate emergency by sending electronic messages. The client device 102 can store sensor-measured data and user context data, and can transmit the collected sensor-measured information to the backend system 105 via the mobile application 103.

“The backend system 105 can include a computer system that executes multiple backend applications 120. Each mobile application 103 running on a user's client device 102 is associated with a corresponding backend application 120. The backend application 120 and the corresponding mobile application 103 are configured to communicate frequently to exchange data. The backend application 120 and the corresponding mobile application 103 may communicate using a secure communication protocol, such as Hypertext Transfer Protocol Secure (HTTPS), Secure Real-time Transport Protocol (SRTP), Secure Communications Interoperability Protocol (SCIP), or any other secure communication protocol known to a person of ordinary skill in the art.

The backend application 120 can generate emergency predictive models (or parameters thereof) that can be used by the corresponding mobile application 103 to detect or predict emergency events for the user. The backend application 120 can provide the emergency predictive model (or its parameters) to the corresponding mobile application 103 through the one or more networks 104 and the client device 102. The backend application 120 can also provide additional information to the corresponding mobile application 103, such as medical data obtained from the healthcare provider service(s) 97, weather information, public alerts, and user context data from the web service(s) 98, information from the social network service(s) 99, or a combination thereof. The backend application 120 may be configured to launch the mVM 129 when the client device 102 indicates that the mobile application 103 has been launched. The mVM 129 can be configured to execute user-related algorithms or processes, which can include the creation of a user-specific emergency predictive model or the detection of emergency events. The mVM 129 can improve data security and privacy.

The backend application 120 can also be configured to receive sensor data and user data. The backend application 120 can store user data, sensor data, and user context data on the backend system 105, where it can be used to track the user's health as well as to generate or update the user-specific emergency predictive model. The backend application 120 may be configured to mirror the functionality of the mobile application 103; that is, the backend application 120 can analyze sensor data and user context data and report emergencies. The backend application 120 can analyze the data available to it to determine whether there is an impending or current emergency, and upon detecting an emergency, can report it or take other responsive measures. In certain implementations, the backend system 105 and the backend application 120 can be communicatively linked to a call center 101. The call center 101 can receive emergency information and other user information when an emergency is detected. Operators at the call center 101 can verify an emergency with the user by calling the user, and can also contact the emergency responsive service(s) 96 or the emergency contacts of the user.

“Each user associated with a client device 102 and the mobile application 103 running thereon can have an emergency contact network that includes one or more individuals or entities and one or more corresponding emergency contact devices 107. A client device 102 associated with one user can also serve as an emergency contact device 107 for another user. If an emergency is detected, the mobile application 103 or the backend application 120 can be configured to send notifications to the emergency contact devices 107. An emergency notification can contain a description of the emergency, an indication of the location of the client device 102, and medical information about the user. Upon receiving an emergency notification, emergency contact persons can contact the emergency responsive service(s) 96. The emergency responsive service(s) 96 can be associated with a government agency, hospital, fire department, or other entity that provides emergency services.

“The healthcare provider service(s) 97 can be associated with a pharmacy, hospital(s), or other healthcare-related entities. The backend application 120, or the corresponding mobile application 103, can retrieve data from a pharmacy system about prescriptions and medications associated with the user. The backend application 120 or the corresponding mobile application 103 can also exchange information about the user with a computer system associated with a doctor or other healthcare provider entity.

The web service(s) 98 may include web pages that provide weather information, public alerts (such as hurricane, storm, or tornado alerts, virus and infectious disease alerts, crime alerts, or a combination thereof), or news. The information from the web service(s) 98 can be accessed by the mobile application 103 directly or via the backend application 120, and can then be used to assess any immediate or imminent emergency. The backend application 120 can filter the information from the web service(s) 98 based on the user's location and other information.

“The social network service(s) 99 can include any social network service to which the user of the client device 102 has subscribed. One or more of the user's social network contacts can automatically receive some user information, such as login/logout times, location information, and other information. This information can, for example, be shared automatically (based on the user's preferences) with friends, family, emergency contacts, or other contacts who are subscribed to the same social network service(s) 99. This shared information allows remote monitoring of the user's activities by one or more people in the social network. The client device 102 also allows the user to view the activities of his or her friends using information from the social network service(s) 99. The backend application 120 and the corresponding mobile application 103 allow automatic retrieval of data from the social network service(s) 99 and presentation of that data to the user, as well as automatic publishing/sharing of user information with social network contacts. A social network service 99 can be configured to forward an emergency message indicative of a current emergency (received from a client device 102 or a backend application 120) directly to the emergency responsive service(s) 96.

Hardware and software implementations of the one or more networks 104 and the devices coupled to them (such as client devices 102, devices/servers associated with the backend system 105, emergency contact devices 107, and servers/devices associated with the services 96-99) are described below with respect to FIGS. 1B-1E. Processes for detecting and reporting emergency events are then discussed with respect to FIGS. 2-3.”

“Referring to FIG. 1B, an embodiment of a network environment is depicted. In brief overview, the network environment includes one or more clients 102a-102n (also generally referred to as local machine(s) 102, client(s) 102, client node(s) 102, client machine(s) 102, client computer(s) 102, client device(s) 102, endpoint(s) 102, or endpoint node(s) 102) in communication with one or more backend servers 106a-106n (also generally referred to as backend server(s) 106, backend node(s) 106, or backend remote machine(s) 106) of the backend system 105 via one or more networks 104. In some embodiments, a client 102 can function both as a client node seeking access to resources provided by a backend server and as a backend server providing access to hosted resources for other clients 102a-102n.

“Although FIG. 1B shows a network 104 between the clients 102 and the backend servers 106, the clients 102 and the backend servers 106 may be on the same network 104. In some embodiments, there are multiple networks 104 between the clients 102 and the backend servers 106. In one of these embodiments, a network 104′ (not shown) may be a private network and a network 104 may be a public network. In another of these embodiments, a network 104 may be a private network and a network 104′ a public network. In still another of these embodiments, networks 104 and 104′ may both be private networks.”

“The network 104 can be connected via wired or wireless links. Wired links can include Digital Subscriber Line (DSL), coaxial cable lines, or optical fiber lines. Wireless links can include BLUETOOTH, Wi-Fi, Worldwide Interoperability for Microwave Access (WiMAX), an infrared channel, or a satellite band. The wireless links can also include any cellular network standard used to communicate among mobile devices, including standards that qualify as 1G, 2G, 3G, or 4G. Network standards may qualify as one or more generations of mobile telecommunication standards by fulfilling a specification or set of specifications; the 3G standards, for example, may correspond to the International Mobile Telecommunications-2000 (IMT-2000) specification, and the 4G standards may correspond to the International Mobile Telecommunications Advanced (IMT-Advanced) specification. Examples of cellular network standards include AMPS, GSM, and UMTS. Cellular network standards may use various channel access methods, e.g., FDMA, TDMA, CDMA, or SDMA. In some embodiments, different types of data may be transmitted via different links and standards; in other embodiments, the same types of data may be transmitted via different links and standards.

The network 104 can be any type or form of network. The geographical scope of the network 104 can vary widely: it could be a body area network (BAN), a personal area network (PAN), a local-area network (LAN) such as an Intranet, a metropolitan area network (MAN), a wide area network (WAN), or the Internet. The topology of the network 104 can be of any form and could include, e.g., any of the following: point-to-point, bus, star, ring, mesh, or tree. The network 104 could be an overlay network that is virtual and sits on top of one or more layers of other networks 104′. The network 104 can be of any such network topology known to those of ordinary skill in the art that is capable of supporting the operations described herein. The network 104 can use different protocols and layers, such as the Ethernet protocol, the internet protocol suite (TCP/IP), the ATM (Asynchronous Transfer Mode) technique, the SONET (Synchronous Optical Networking) protocol, or the SDH (Synchronous Digital Hierarchy) protocol. The TCP/IP internet protocol suite can include the application layer, the transport layer, and the internet layer (including, e.g., IPv6). The network 104 may be classified as a broadcast network, a telecommunications network, a data communication network, or a computer network.

“In some embodiments, multiple backend servers 106 may be logically grouped together. In one of these embodiments, the logical group of backend servers may be referred to as a backend server farm or a backend machine farm 38. In another of these embodiments, the backend servers 106 may be geographically dispersed. In other embodiments, a backend machine farm 38 may be administered as a single entity. In still other embodiments, the backend machine farm 38 includes a plurality of backend machine farms 38. The backend servers 106 within each backend machine farm 38 can be heterogeneous: one or more of the backend servers 106 or backend machines 106 can operate according to one type of operating system platform (e.g., WINDOWS NT, manufactured by Microsoft Corp. of Redmond, Wash.), while one or more of the other backend servers 106 can operate according to another type of operating system platform (e.g., Unix, Linux, or Mac OS X).”

In one embodiment, backend servers 106 of the backend machine farm 38 may be stored in high-density rack systems, along with associated storage systems, and located in an enterprise data center. Consolidating the backend servers 106 in this way may improve system manageability, data security, and system performance by locating the backend servers 106 and high-performance storage systems on localized high-performance networks. Centralizing the backend servers 106 and storage systems, and coupling them with advanced system management tools, allows more efficient use of server resources.

The backend servers 106 of each backend machine farm 38 do not need to be physically proximate to the other backend servers 106 in the same backend machine farm 38. Thus, the group of backend servers 106 logically grouped as a backend machine farm 38 may be interconnected using a wide-area network (WAN) connection or a metropolitan-area network (MAN) connection. For example, a backend machine farm 38 may include backend servers 106 physically located on different continents or in different regions of a continent, country, state, city, or campus. Data transmission speeds between backend servers 106 in the backend machine farm 38 can be increased if the backend servers 106 are connected using a local-area network (LAN) connection or some form of direct connection. A heterogeneous backend machine farm 38 may also include one or more backend servers 106 operating according to a type of operating system, while one or more other backend servers 106 execute one or more types of hypervisors rather than operating systems. In these embodiments, hypervisors may be used to emulate virtual hardware, partition and virtualize physical hardware, and execute virtual machines that provide access to computing environments. Native hypervisors may run directly on the host machine; examples include VMware ESX/ESXi, manufactured by VMWare, Inc. of Palo Alto, Calif.; the Xen hypervisor, an open source product whose development is overseen by Citrix Systems, Inc.; and the HYPER-V hypervisor provided by Microsoft, among others. Hosted hypervisors run within an operating system on a second software level; examples of hosted hypervisors include VMware Workstation and VIRTUALBOX.

“Management of the backend machine farm 38 may be de-centralized. For example, one or more backend servers 106 may comprise components, subsystems, and modules to support one or more management services for the backend machine farm 38. One or more backend servers 106 may provide functionality for management of dynamic data, including techniques for handling failover, data replication, and increasing the robustness of the backend machine farm 38. Each backend server 106 may communicate with a persistent store and, in some embodiments, with a dynamic store.”

“A backend server 106 may be a file server, application server, web server, proxy server, firewall, gateway, gateway server, virtualization server, or deployment server. In one embodiment, the backend server 106 may be referred to as a “backend remote machine” or a “backend node.”

Referring again to the figures, a client 102 may have access to one or more resources provided through a cloud computing environment. The cloud computing environment may include one or more clients 102a-102n in communication with the cloud 108 over one or more networks 104. Clients 102 may include thick clients, thin clients, or zero clients. A thick client may provide at least some functionality even when disconnected from the cloud 108 or the backend servers 106. A thin client or a zero client may depend on the connection to the cloud 108 or the backend servers 106 to provide functionality. A zero client may depend on the cloud 108, other networks 104, or the backend servers 106 to retrieve operating system data for the client device. The cloud 108 may include backend platforms, e.g., backend servers 106 and storage.

“The cloud 108 may be public, private, or hybrid. Public clouds may include public backend servers 106 that are maintained by third parties on behalf of the owners of the backend application 120. The backend servers 106 may be located off-site in remote geographical locations, as described above. Public clouds may be connected to the backend servers 106 over a public network. Private clouds may include private backend servers 106 that are owned by the owners of the backend application 120. Private clouds may be connected to the backend servers 106 over a private network. Hybrid clouds 108 may include both private and public networks 104 and backend servers 106.

“The cloud 108 may also include cloud-based delivery, e.g., Software as a Service (SaaS) 110, Platform as a Service (PaaS) 112, and Infrastructure as a Service (IaaS) 114. IaaS may refer to a user renting the use of infrastructure resources needed during a specified time period. IaaS providers may offer storage, networking, servers, or virtualization resources from large pools, allowing users to quickly scale up by accessing more resources as needed. Examples of IaaS include AMAZON WEB SERVICES provided by Amazon.com, Inc. of Seattle, Wash.; RACKSPACE CLOUD provided by Rackspace US, Inc. of San Antonio, Tex.; Google Compute Engine provided by Google Inc. of Mountain View, Calif.; or RIGHTSCALE provided by RightScale, Inc. of Santa Barbara, Calif. PaaS providers may offer the functionality provided by IaaS, including, e.g., storage, networking, servers, or virtualization, as well as additional resources such as the operating system, middleware, or runtime resources. Examples of PaaS include WINDOWS AZURE provided by Microsoft Corporation of Redmond, Wash.; Google App Engine provided by Google Inc.; and HEROKU provided by Heroku, Inc. of San Francisco, Calif. SaaS providers may offer the resources that PaaS provides, including storage, networking, servers, virtualization, operating system, middleware, or runtime resources. In some instances, SaaS providers may offer additional resources, such as data and application resources. Examples of SaaS include GOOGLE APPS provided by Google Inc.; SALESFORCE provided by Salesforce.com Inc. of San Francisco, Calif.; or OFFICE 365 provided by Microsoft Corporation. Examples of SaaS may also include data storage providers, e.g., DROPBOX provided by Dropbox, Inc. of San Francisco, Calif.; Microsoft SKYDRIVE provided by Microsoft Corporation; Google Drive provided by Google Inc.; and Apple ICLOUD provided by Apple Inc. of Cupertino, Calif.

Clients 102 may access IaaS resources with one or more IaaS standards, including, e.g., Amazon Elastic Compute Cloud (EC2), Open Cloud Computing Interface (OCCI), Cloud Infrastructure Management Interface (CIMI), or OpenStack standards. Some IaaS standards may allow clients to access resources over HTTP and may use the Representational State Transfer (REST) protocol or the Simple Object Access Protocol (SOAP). Clients 102 may access PaaS resources with different PaaS interfaces. Some PaaS interfaces use HTTP packages, standard Java APIs, the JavaMail API, Java Data Objects (JDO), the Java Persistence API (JPA), Python APIs, or web integration APIs for different programming languages including, e.g., Rack for Ruby, WSGI for Python, or PSGI for Perl, as well as other APIs that may be built on REST, HTTP, XML, or other protocols. Clients 102 may access SaaS resources through web-based user interfaces provided by a web browser (e.g., GOOGLE CHROME or Microsoft INTERNET EXPLORER). Clients 102 may also access SaaS resources through smartphone or tablet applications, such as the Salesforce Sales Cloud or the Google Drive app. Clients 102 may also access SaaS resources through the client operating system, e.g., the Windows file system for DROPBOX.
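To make the REST-over-HTTP access pattern above a little more concrete, the following is a minimal sketch of a client issuing an authenticated GET request to a cloud storage endpoint. The base URL, path, header, and token are hypothetical placeholders and do not correspond to any particular provider's API.

# Minimal sketch of REST-style access to a cloud resource over HTTPS.
# The endpoint URL and bearer token below are hypothetical placeholders.
import requests

API_BASE = "https://storage.example-cloud.com/v1"   # hypothetical endpoint
TOKEN = "replace-with-real-access-token"

def list_objects(bucket):
    """Fetch object metadata from a storage bucket via a REST GET call."""
    response = requests.get(
        f"{API_BASE}/buckets/{bucket}/objects",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("objects", [])

if __name__ == "__main__":
    for obj in list_objects("user-backups"):
        print(obj.get("name"), obj.get("size"))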

“In certain embodiments, access to IaaS, PaaS, or SaaS resources may be authenticated. For example, a server or authentication server may authenticate a user via security certificates, HTTPS, or API keys. API keys may include various encryption standards such as, e.g., the Advanced Encryption Standard (AES). Data resources may be sent over Transport Layer Security (TLS) or Secure Sockets Layer (SSL).

“The client 102 and the backend server 106 may be deployed as and/or executed on any type and form of computing device, e.g., a computer, network device, or appliance capable of communicating on any type and form of network and performing the operations described herein. FIGS. 1D and 1E depict block diagrams of a computing device 100 useful for practicing an embodiment of the client 102 or a backend server 106. As shown in FIGS. 1D and 1E, each computing device 100 includes a central processing unit 121 and a main memory unit 122. As shown in FIG. 1D, a computing device 100 may include a storage device 128, an installation device 116, a network interface 118, display devices 124a-124n, a keyboard 126, and a pointing device, e.g., a mouse. The storage device 128 may include, without limitation, an operating system, software, and the backend application 120, or any combination thereof. As shown in FIG. 1E, each computing device 100 may also include additional elements, e.g., a memory port 113, a bridge 170, one or more input/output devices 130a-130n (generally referred to using reference numeral 130), and a cache memory 140 in communication with the central processing unit 121.

“The central processing unit 121 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 122. In many embodiments, the central processing unit 121 is provided by a microprocessor unit, e.g., those manufactured by Intel Corporation of Mountain View, Calif.; those manufactured by Motorola Corporation of Schaumburg, Ill.; the ARM processor and TEGRA system on a chip (SoC) manufactured by Nvidia of Santa Clara, Calif.; the POWER7 processor manufactured by International Business Machines of White Plains, N.Y.; or those manufactured by Advanced Micro Devices of Sunnyvale, Calif. The computing device 100 may be based on any of these processors, or on any other processor capable of operating as described herein. The central processing unit 121 may utilize instruction-level parallelism, thread-level parallelism, different levels of cache, and multi-core processors. A multi-core processor may include multiple processing units on a single computing component. Examples of multi-core processors include the AMD PHENOM IIX2, the INTEL CORE i5, and the INTEL CORE i7.

“Main memory unit 122 may include one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 121. Main memory unit 122 may be volatile and faster than storage 128 memory. Main memory units 122 may be Dynamic Random-Access Memory (DRAM) or any variants, including Burst SRAM or SynchBurst SRAM (BSRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), Single Data Rate Synchronous DRAM (SDR SDRAM), Double Data Rate SDRAM (DDR SDRAM), Direct Rambus DRAM (DRDRAM), or Extreme Data Rate DRAM (XDR DRAM). In some embodiments, the main memory 122 or the storage 128 may be non-volatile, e.g., non-volatile read access memory (NVRAM), flash memory non-volatile static RAM (nvSRAM), Ferroelectric RAM (FeRAM), Magnetoresistive RAM (MRAM), Phase-change memory (PRAM), conductive-bridging RAM (CBRAM), Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive RAM (RRAM), Racetrack, Nano-RAM (NRAM), or Millipede memory. The main memory 122 may be based on any of the above described memory chips, or on any other available memory chips capable of operating as described herein. In the embodiment shown in FIG. 1D, the processor 121 communicates with main memory 122 via a system bus 150 (described in more detail below). FIG. 1E depicts an embodiment of a computing device 100 in which the processor communicates directly with main memory 122 via a memory port 113. For example, in FIG. 1E the main memory 122 may be DRDRAM.

“FIG. 1E depicts an embodiment in which the main processor 121 can communicate directly with cache memory 140 via a secondary bus, sometimes referred to as a backside bus; in other embodiments, the main processor 121 connects with cache memory 140 via the system bus 150. Cache memory 140 typically has a faster response time than main memory 122 and is typically provided by SRAM, BSRAM, or EDRAM. In the embodiment shown in FIG. 1E, the processor 121 communicates with various I/O devices 130 via a local system bus 150. Various buses may be used to connect the central processing unit 121 to any of the I/O devices 130, including a PCI bus, a PCI-X bus, a PCI Express bus, or a NuBus. In embodiments where the I/O device is a video display 124, the processor 121 may use an Advanced Graphics Port (AGP) to communicate with the display 124 or the I/O controller 123 for the display 124. FIG. 1E depicts an embodiment of a computer 100 in which the main processor 121 communicates directly with I/O device 130b or other processors via HYPERTRANSPORT or RAPIDIO communications technology. FIG. 1E also depicts an embodiment in which local buses and direct communication are mixed: the processor 121 communicates with I/O device 130a using a local interconnect bus while communicating with I/O device 130b directly.

The computing device 100 may contain a wide variety of I/O devices 130a-130n. Input devices may include, e.g., trackpads and trackballs. Output devices may include, e.g., video displays, graphical displays, and speakers.

“Devices 130a-130n may include a combination of multiple input or output devices, such as the Microsoft KINECT, the Nintendo Wiimote, the Nintendo WII U GAMEPAD, or the Apple IPHONE. Some devices 130a-130n allow gesture recognition inputs by combining some of the inputs and outputs. Some devices 130a-130n provide for facial recognition, which may be used as an input for different purposes, including authentication and other commands. Some devices 130a-130n provide for voice recognition and inputs, such as the Microsoft KINECT, SIRI for IPHONE by Apple, or Google Now.

“Additional devices 130a-130n have both input and output capabilities, including, e.g., haptic feedback devices and touchscreen displays. Multi-touch screens, touchpads, and touch mice may use different technologies to sense touch, including, e.g., capacitive, surface capacitive, projected capacitive touch (PCT), resistive, infrared, waveguide, dispersive signal touch (DST), in-cell optical, surface acoustic wave (SAW), bending wave touch (BWT), or force-based sensing technologies. Multi-touch devices may allow two or more contact points with the surface, enabling advanced functionality such as pinch, spread, rotate, and scroll gestures. Some touchscreen devices, such as Microsoft PIXELSENSE and the Multi-Touch Collaboration Wall, may have larger surfaces, for example on a table-top or on a wall, and may also interact with other electronic devices. Some I/O devices 130a-130n, display devices 124a-124n, or groups of devices may be augmented reality devices. The I/O devices may be controlled by an I/O controller 123 as shown in FIG. 1C. An I/O device may also provide storage and/or an installation medium for the computing device 100. In other embodiments, the computing device 100 may provide USB connections (not shown) to receive handheld USB storage devices. In further embodiments, an I/O device 130 may be a bridge between the system bus 150 and an external communication bus, e.g., a USB bus, a SCSI bus, a FireWire bus, an Ethernet bus, or a Gigabit Ethernet bus.

In some embodiments, display devices 124a-124n may be connected to the I/O controller 123. Display devices may include, e.g., liquid crystal displays (LCD), thin-film transistor LCDs (TFT-LCD), blue-phase LCDs, electronic paper (e-ink) displays, liquid crystal on silicon (LCOS) displays, and 3-D displays. Examples of 3-D displays may use, e.g., stereoscopy, polarization filters, or active shutters. Display devices 124a-124n may also be head-mounted displays (HMD). In some embodiments, display devices 124a-124n or the corresponding I/O controllers 123 may be controlled through, or have hardware support for, OPENGL, the DIRECTX API, or other graphics libraries.

“In some instances, the computing device 100 may include or connect to multiple display devices 124a-124n, which may each be of the same or a different type and/or form. As such, any of the I/O devices 130a-130n and/or the I/O controller 123 may include any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable, or provide for the connection and use of multiple display devices 124a-124n by the computing device 100. For example, the computing device 100 may include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect, or otherwise use the display devices 124a-124n. In some embodiments, software may be designed and constructed to use another computer's display device as a second display device 124a for the computing device 100. For example, an Apple iPad may connect to a computing device 100 and use the display of the device 100 as an additional display screen, which may be used as an extended desktop. One ordinarily skilled in the art will recognize and appreciate the various ways in which a computing device 100 may be configured to have multiple display devices 124a-124n.

Referring again to FIG. 1D, the computing device 100 may comprise a storage device 128 (e.g., one or more hard disk drives or redundant arrays of independent disks) for storing an operating system or other related software, and for storing application software programs such as any program related to the backend application 120. Examples of storage device 128 include, e.g., a hard disk drive (HDD); an optical drive, including a CD drive, DVD drive, or Blu-ray drive; a solid-state drive (SSD); a USB flash drive; or any other device suitable for storing data. Some storage devices may include multiple volatile and non-volatile memories, such as solid state hybrid drives that combine hard disks with solid state cache. Some storage devices 128 may be non-volatile, mutable, or read-only. Some storage devices 128 may be internal and connect to the computing device 100 via a bus 150. Some storage devices 128 may be external and connect to the computing device 100 via an I/O device 130 that provides an external bus. Some storage devices 128 may connect to the computing device 100 via the network interface 118 over a network 104, such as the Remote Disk for MACBOOK AIR by Apple. Some client devices 100 may not require a non-volatile storage device 128 and may be thin clients or zero clients. Some storage devices 128 may also be used as an installation device 116 and may be suitable for installing software and programs. Additionally, the operating system and the software can be run from a bootable medium, for example a bootable CD such as KNOPPIX, a bootable CD for GNU/Linux that is available as a GNU/Linux distribution from knoppix.net.

Client device 100 may also install software or applications from an application distribution platform. Examples of application distribution platforms include the App Store for iOS provided by Apple, Inc.; the Mac App Store provided by Apple, Inc.; GOOGLE PLAY for Android OS provided by Google Inc.; the Chrome Webstore for CHROME OS provided by Google Inc.; and the Amazon Appstore for Android OS and KINDLE FIRE provided by Amazon.com, Inc. An application distribution platform may include a repository of applications on a backend server 106 or a cloud 108, which the clients 102a-102n may access over a network 104. An application distribution platform may include applications developed and provided by various developers. A user of a client device 102 may select, purchase, and/or download an application via the application distribution platform.

“Moreover, the computing device 100 may include a network interface 118 to interface to the network 104 through a variety of connections, including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T3, Gigabit Ethernet, or Infiniband), broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET, ADSL, VDSL, BPON, GPON, or fiber optical including FiOS), or some combination of any or all of the above. Connections can be established using a variety of communication protocols (e.g., TCP/IP, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), IEEE 802.11a/b/g/n/ac, CDMA, GSM, WiMax, and direct asynchronous connections). In one embodiment, the computing device 100 communicates with other computing devices 100 via any type and/or form of gateway or tunneling protocol, e.g., Secure Socket Layer (SSL), Transport Layer Security (TLS), or the Citrix Gateway Protocol manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla. The network interface 118 may comprise a built-in network adapter, network interface card, PCMCIA network card, EXPRESSCARD network card, card bus network adapter, wireless network adapter, modem, or any other device suitable for interfacing the computing device 100 to any type of network capable of communication and performing the operations described herein.

“A computing device 100 of the sort depicted in FIGS. 1D and 1E may operate under the control of an operating system, which controls scheduling of tasks and access to system resources. The computing device 100 can be running any operating system, such as any of the versions of the MICROSOFT WINDOWS operating systems, the different releases of the Unix and Linux operating systems, any embedded operating system, any real-time operating system, any proprietary operating system, any operating system for mobile computing devices, or any other operating system capable of running on the computing device and performing the operations described herein. Typical operating systems include, but are not limited to: WINDOWS 2000, WINDOWS Server 2012, WINDOWS CE, WINDOWS Phone, WINDOWS XP, WINDOWS VISTA, and WINDOWS 7, all of which are manufactured by Microsoft Corporation of Redmond, Wash.; MAC OS and iOS, manufactured by Apple, Inc. of Cupertino, Calif.; Linux, a freely-available operating system, e.g., the Linux Mint distribution (“distro”) or Ubuntu, distributed by Canonical Ltd. of London, United Kingdom; Unix or other Unix-like derivative operating systems; and Android, designed by Google of Mountain View, Calif., among others. Some operating systems, such as the CHROME OS by Google, may be used on zero clients or thin clients, including, e.g., CHROMEBOOKS.

“The computer system 100 can be any desktop computer, telephone, notebook computer, netbook, ULTRABOOK, tablet, server, handheld computer, mobile telephone, smartphone or other portable telecommunications device, media playing device, gaming system, mobile computing device, or any other type and/or form of computing, telecommunications, or media device that is capable of communicating on a computer network and that has sufficient processor power and memory capacity to perform the operations described herein. The computing device 100 may have different processors, operating systems, and input devices consistent with the device. For example, Samsung GALAXY smartphones operate under the control of the Android operating system developed by Google, Inc., and GALAXY smartphones receive input via a touch interface.

“In certain embodiments, the computing device 100 is a tablet, e.g., the IPAD line of devices by Apple, the GALAXY TAB family of devices by Samsung, or the KINDLE FIRE by Amazon.com, Inc. of Seattle, Wash. In other embodiments, the computing device 100 is an eBook reader, e.g., the KINDLE family of devices by Amazon.com, or the NOOK family of devices by Barnes & Noble, Inc. of New York City, N.Y.”

“In some embodiments, the communications device 102 includes a combination of devices, e.g., a smartphone combined with a digital audio player or portable media player. One example of such an embodiment is a smartphone, e.g., the IPHONE family of smartphones manufactured by Apple, Inc.; a Samsung GALAXY family of smartphones manufactured by Samsung, Inc.; or a Motorola DROID family of smartphones. In yet another embodiment, the communications device 102 is a laptop or desktop computer equipped with a web browser and a microphone and speaker system, e.g., a telephony headset. In these embodiments, the communications devices 102 are web-enabled and can receive and initiate phone calls. In some embodiments, a laptop or desktop computer is also equipped with a webcam or other video capture device that enables video chat and video calling.

“In some embodiments, the status of one or more machines 102, 106 in the network 104 is monitored, generally as part of network management. In one of these embodiments, the status of a machine may include an identification of load information (e.g., the number of processes on the machine, CPU utilization, and memory utilization), port information (e.g., the number of available communication ports and the port addresses), or session status (e.g., the duration and type of processes, and whether a process is active or idle). In another of these embodiments, this information may be identified by a plurality of metrics, and the plurality of metrics can be applied at least in part towards decisions in load distribution, network traffic management, and network failure recovery, as well as any other aspects of the operations described herein. Aspects of the operating environments and components described above will become apparent in the context of the systems and methods disclosed herein.

“B. Systems and Methods for Detecting and Reporting Personal Emergency Events

“Various embodiments described herein relate to systems and methods for monitoring and analyzing sensor data in order to detect and report emergencies for one or more users. A platform including a backend application 120 running on a backend system 105 and mobile applications 103 running on client devices 102 can detect and report emergency events associated with the respective users. The mobile application 103 can be downloaded and installed on the client device 102 of a user whose health, medical information, and activities are to be monitored for emergency detection and reporting. The backend application 120, the mobile application 103, or both can monitor and analyze sensor data and user context data in order to detect, report, and warn of emergency events. Detected emergency events can be reported to the user's emergency contacts via the corresponding emergency contact devices 107.

The backend application 120 can be configured to analyze sensor data and user context data and to report emergencies. The backend application 120 can analyze the data available to it to determine whether there is a current or imminent emergency, or whether the communication link between the client device 102 and the backend system 105 has been lost. The backend application 120 can notify the user if an emergency is detected. In some instances, the backend application 120 can be configured to mirror the functionality of the mobile application 103, so that it can perform emergency event prediction or detection using any data available at the backend system 105. In the case of a communication failure between the client device 102 and the backend system 105, the backend application 120 can assume the role of the mobile application 103.
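As a rough sketch of how the backend could notice a lost communication link and take over, the snippet below assumes a simple heartbeat mechanism: the mobile application checks in periodically, and a watchdog on the backend falls back to its own analysis when no check-in arrives within a timeout. The timeout value and function names are illustrative assumptions, not the patent's implementation.

# Hypothetical heartbeat/watchdog sketch for the backend taking over on lost connectivity.
import time

HEARTBEAT_TIMEOUT_S = 300          # assumed check-in budget; illustrative only
last_heartbeat = time.time()
latest_data = {}                   # last data received from the mobile application


def on_heartbeat(data):
    """Called whenever the mobile application checks in with fresh data."""
    global last_heartbeat
    last_heartbeat = time.time()
    latest_data.update(data)


def analyze_with_predictive_model(data):
    # Placeholder for backend-side analysis using the emergency predictive model.
    print("Backend analyzing last known data:", data)


def backend_watchdog():
    """Run periodically; if the link appears lost, the backend acts on its own."""
    if time.time() - last_heartbeat > HEARTBEAT_TIMEOUT_S:
        analyze_with_predictive_model(latest_data)


on_heartbeat({"heart_rate_bpm": 72, "speed_kmh": 4.5})
backend_watchdog()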

During setup of the mobile application 103, the user can be prompted to identify emergency contacts, for example by importing contacts from an existing address book or other database. The mobile application 103 allows the user to provide emergency contact information such as email addresses, phone numbers, social network information, messaging application information, voice over IP (VoIP) information, or other information that can be used to communicate with the emergency contacts. A social safety network built on existing social networks such as Facebook, Twitter, Foursquare, Google+, or the like can also be added to the user's safety net. After setup, the user can update the emergency contact information at any time. Once the emergency contacts have been identified, the mobile application 103 can notify the emergency contact persons or entities, via instant messaging, social network messaging, email, or other messaging services, that they have been designated as emergency contacts. A notification to an emergency contact may include an invitation to join the user's safety network. After the notification has been sent, the prospective emergency contact may either accept or decline the invitation. If the prospective emergency contact has already registered with the system, the corresponding mobile application 103 can be used to accept or reject the invitation; otherwise, the reply can be sent as a text message, SMS, email, or any other form of response associated with the original invitation. Social safety network configuration can include authentication of the mobile application 103 or user credentials on the respective social network(s) in order to permit automatic posting of messages via the mobile application 103, taking into account privacy settings.
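As a loose illustration of the contact-and-invitation flow described above, the sketch below models contacts, invitations, and accept/decline replies with a simple in-memory structure. The class names, statuses, and communication channels are invented for illustration and are not the patent's data model.

# Illustrative data model for emergency-contact invitations (hypothetical, not the patent's schema).
from dataclasses import dataclass, field
from enum import Enum


class InviteStatus(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"
    DECLINED = "declined"


@dataclass
class EmergencyContact:
    name: str
    channels: dict                       # e.g. {"sms": "+15550100", "email": "bob@example.com"}
    status: InviteStatus = InviteStatus.PENDING


@dataclass
class SafetyNetwork:
    owner: str
    contacts: list = field(default_factory=list)

    def invite(self, contact):
        """Record the contact and (conceptually) send the invitation message."""
        self.contacts.append(contact)
        # A real system would deliver this over SMS, email, or a social network.
        print(f"Invitation sent to {contact.name} via {list(contact.channels)}")

    def handle_reply(self, name, accepted):
        """Update a contact's status based on the reply to the invitation."""
        for contact in self.contacts:
            if contact.name == name:
                contact.status = InviteStatus.ACCEPTED if accepted else InviteStatus.DECLINED


network = SafetyNetwork(owner="alice")
network.invite(EmergencyContact("Bob", {"sms": "+15550100", "email": "bob@example.com"}))
network.handle_reply("Bob", accepted=True)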

During setup of the mobile application 103, the client device 102 can also prompt the user to input relevant medical information, such as current medical conditions, medications, age, weight, or any other information indicative of the user's health or physical condition. The mobile application 103 can use the user's medical information to adjust the level of emergency sensitivity. The client device 102 can transmit the user's medical information to the backend system 105, and, in response, the backend application 120 can create an emergency predictive model that is specific to the user based on that information. For example, an emergency predictive model tailored to a user with a heart condition may be more likely to classify certain situations as emergencies than an emergency predictive model for a user without heart problems. In some cases, the mobile application 103 may prompt the user to update his or her medical information on a regular basis. The mobile application 103 and the backend application 120 can also receive updates of the user's medical information from other sources, such as systems associated with the healthcare provider service(s) 97. Some implementations also allow the user to manually adjust the level of emergency sensitivity to match his or her current activity or state.

“The mobile application 103 can also prompt the user to connect the client device 102 with available sensors, including sensors embedded in wearable articles such as jewelry, clothing, and accessories, sensors associated with medical devices (such as pacemaker(s), defibrillator(s), wearable artificial kidney(s), or the like), and ambient sensors. Wearable sensors can include motion sensors, thermometers, pulse detection sensors, pulse oximetry sensors, or any combination thereof. Sensors associated with medical devices may include sensors for detecting battery charge level, medical sensors, or other sensors. Ambient sensors may include humidity sensor(s), ambient thermometer(s), and other ambient sensors. The client device 102 can connect wirelessly to one or more sensors via BLUETOOTH LE, near field communication (NFC), or other wireless communication methods through the appropriate handshaking processes; other communication options may also be used. Connecting the client device 102 with available sensors may include the user entering sensor identification information on the client device 102. The mobile application 103 can also access data from sensors embedded in the client device 102 (such as motion sensors or other sensors).

Once the mobile application 103 is installed and set up on the client device 102, it can begin collecting user-related information, checking for emergencies, and communicating on a regular basis with the backend application 120. This deployment phase of the mobile application 103 is described below. For client devices 102 with biometric authentication capabilities (such as fingerprint recognition), the mobile application 103 may ask for fingerprint identification, based on the user's preferences.

“FIG. 2 shows a flow diagram of a method 200 for detecting and reporting personal emergency events. The method 200 can include receiving sensor data, user context data, or other data (step 210), analyzing the received data (step 220), checking for a current emergency (decision block 230) and, if one is detected, initiating an emergency protocol (step 240), and, if none is detected, checking whether an emergency is impending (decision block 250) and notifying the user (step 260). In some embodiments, the method 200 can be performed by the backend application 120.

“The method 200 includes the mobile application 103 receiving sensor data, user context data, or any other data indicative of the health, activity, or environment of the user (step 210). The mobile application 103 can receive the sensor data from one or more sensors, such as sensors embedded in the client device 102, sensors on wearable articles, ambient sensors, sensors associated with medical devices, or any combination thereof. The mobile application 103 can collect sensor data such as motion data, location data, ambient temperature measurements, ambient humidity measurements, medical measurement data (such as user body temperature measurements, user heart beat measurements, user blood sugar measurements, user blood pressure measurements, etc.), and other sensor data (such as a battery charge level for medical devices), or any combination thereof.

“The backend application 120 or the mobile application 103 can also collect contextual data about the user, such as data regarding the user's current location (e.g., traffic patterns, air quality level, etc.), data indicative of previous user activities, data indicative of changes in the user's social or economic status (e.g., divorce, job loss, death of a loved one, etc.), and public alerts (e.g., weather alerts, crime alerts, alerts for bacteria or viruses, alerts about infectious diseases, etc.) associated with the location or activity of the user, or any other contextual data. The backend application 120 and the mobile application 103 can collect user context data from the user (e.g., as input to the client device 102), the Internet, social media sites or applications, emergency contacts, or any other source. For example, the backend application 120 or the mobile application 103 can retrieve information about the previous activities and the social or economic changes of the user from social media content, such as comments or feeds posted by the user. The public alert data can be retrieved by the mobile application 103 or the backend application 120 from systems associated with the healthcare provider service(s) 97 or the web service(s) 98.

“Furthermore, the mobile application 103 and the backend application 120 can receive user data (such as prescription information, medical imaging results, or any other user-related medical information) from systems associated with the healthcare provider service(s) 97. The mobile application 103 may also prompt the user to input information about a current activity, select a mode or settings for the mobile application 103, or enter any other data. Alternatively, the mobile application 103 and the backend application 120 may infer the user's current activity, or select a mode or settings for the mobile application 103, based on sensor data or other data. In general, the data collected by the backend application 120 or the mobile application 103 relate to the user's health, current activity, and environment, and can also indicate the user's economic, social, or psychological state.

“The backend application 120 can also receive sensor data, user context data, or any other data indicative of the user's health, activity, or environment. The backend application 120 can receive such data from the client device 102, systems associated with the web service(s) 98, systems associated with the healthcare provider service(s) 97, the Internet, or other sources. For example, the mobile application 103 can cause the client device 102 to transmit sensor data received from the sensor(s) to the backend system 105. The backend application 120 can also have access to (or store) user data maintained at the backend system 105.

“The method 200 can include the mobile application 103 (or the backend application 120) analyzing the received data (step 220). The received data can be stored, temporarily or permanently, by the mobile application 103 or the backend application 120. The backend application 120 or the mobile application 103 can be configured to analyze the data received from the sensors and other sources. The backend application 120 can generate the emergency predictive model, which can include proprietary short-term predictive analytics that analyze the input data while taking into account any combination of the user's current health conditions, current activity, current location, or current socioeconomic state, as well as environment parameters available through the ambient sensors, the backend application 120, the healthcare provider service(s) 97, the web service(s) 98, user settings, and past data, to determine the likelihood of a current or imminent emergency. The mobile application 103 and the backend application 120 can compute a probability or score value indicative of the likelihood of a current or imminent emergency. In some instances, the backend application 120 can be configured to analyze the user data using the emergency predictive model. For example, the backend application 120 can monitor the client device 102's connectivity to the backend system 105 and, upon detecting a loss of connectivity, analyze the sensor data and other user-related data using the emergency predictive model.
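For illustration only, the following sketch shows one way a probability or score value could be computed from weighted inputs and compared with a sensitivity threshold. The features, weights, bias, and threshold are made-up examples and are not the patent's actual emergency predictive model.

# Simplified, hypothetical scoring sketch: weighted features through a logistic
# function, then compared against a user-specific sensitivity threshold.
import math

# Hypothetical per-user model parameters (would come from the backend in practice).
WEIGHTS = {
    "sudden_deceleration_g": 0.9,   # e.g. derived from motion sensors
    "heart_rate_delta_bpm": 0.05,   # change relative to the user's baseline
    "has_heart_condition": 1.2,     # from the user's medical profile
    "low_device_battery": 0.4,      # e.g. pacemaker battery-level flag
}
BIAS = -4.0
EMERGENCY_THRESHOLD = 0.8           # adjustable emergency sensitivity


def emergency_probability(features):
    """Map weighted features to a probability in (0, 1) with a logistic function."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


def is_current_emergency(features):
    return emergency_probability(features) >= EMERGENCY_THRESHOLD


sample = {
    "sudden_deceleration_g": 6.0,
    "heart_rate_delta_bpm": 45,
    "has_heart_condition": 1,
    "low_device_battery": 0,
}
print(emergency_probability(sample), is_current_emergency(sample))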

“For instance, with respect to motion sensor data, a sudden drop in speed can indicate that the user has been in a traffic accident, and a sudden drop in altitude can indicate that the user has fallen. A microphone recording can indicate that the user is involved in a crime incident. Sensor measurements indicating a sudden increase in the user's pulse or heartbeat rate can be indicative of a heart attack, panic attack, or other emergency situation. A low battery charge detected for a pacemaker, artificial kidney, or other medical device can also be a sign that the user is at risk of an emergency.

The mobile application 103 and the backend application 120 can use the received sensor data, together with other data, to compute the probability of an emergency event. The user's medical information, such as current or past medical conditions, current or past medications, weight, and so forth, can affect the weights assigned to the likelihoods of particular emergency events. Information indicative of the user's current activity or stress level may affect the weight assigned to the likelihood of a panic attack or heart attack. Information about the user's current or past location or activities can lead to an increase in the weights for certain risks, such as falling, getting injured, losing consciousness, or being infected by a bacteria or virus. The mobile application 103 and the backend application 120 can also modify the weights associated with the risk of suicide, depression, or psychological breakdown based on certain information (such as the loss of a loved one or the loss of a job).

“Storing the received data can include storing the data locally at the client device 102 and/or at the backend system 105. The received data can be stored temporarily at the client device 102 for data analysis purposes, and can be transmitted to the backend application 120 via one or more networks 104 for storage at the backend system 105. The mobile application 103 can be configured to encrypt the data before it is transmitted to the backend system 105. The backend application 120 can be configured to generate or update an emergency predictive model based on the data received from the mobile application 103 as well as other user data available to the backend application 120, such as the user's medical history, weight, and level of activity. The backend application 120 can be configured to make the emergency predictive model, or parameters thereof, available to the mobile application 103 for use in detecting emergency events.

“In certain embodiments, the backend application 120 can use the emergency predictive model to detect current or imminent emergency events and to notify the user (or his/her safety network) about such events. The backend application 120 can include one or more functionalities that mirror those of the mobile application 103, enabling the backend application 120 to detect or predict emergency events using sensor data and other available data. This is especially useful in situations where communication with the client device 102 is disrupted, for example due to power loss or lack of network coverage.

The mobile application 103 can communicate regularly with the backend application 120 to update the user's profile and to run user-specific processes on the backend application 120, including processes to create or adapt the emergency predictive model and processes to retrieve data from other sources. These regular communications also keep the backend application 120 updated with the user's current location, current activity, and device battery level.

“The backend application 120 can also receive, from the mobile application 103, information about any backup emergency timers running on the client device 102. Backup emergency timers are useful for detecting potential emergencies that cannot be detected using the sensor data or other data received by the mobile application 103 or the backend application 120, for example when the user does not carry the client device 102 during a potentially dangerous activity. The mobile application 103 lets the user set a countdown timer based on the expected duration of the potentially dangerous activity. The backend application 120 can receive the timer information and run a similar countdown timer. To avoid a false emergency detection, the user can stop the timer before it expires; if the timer expires without user intervention, an emergency notification protocol can be initiated. For each emergency timer set on the client device 102, a corresponding synchronized duplicate timer can be set to run on the backend application 120, providing additional protection against temporary or permanent communication problems such as network failures or battery drain. If the duplicate timer expires during a period when communication with the mobile application 103 has failed, a different version of the emergency notification protocol (different from the one used when communication with the mobile application 103 is good) may be executed.
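A minimal sketch of such a backup emergency timer is shown below, assuming a simple callback that runs if the countdown expires without cancellation. The duration and callback are placeholders, not the patent's implementation.

# Hypothetical backup emergency timer: if the countdown expires without the user
# cancelling it, an emergency notification routine runs.
import threading


class BackupEmergencyTimer:
    def __init__(self, duration_s, on_expire):
        self._timer = threading.Timer(duration_s, on_expire)

    def start(self):
        self._timer.start()

    def cancel(self):
        """Called by the user before the deadline to avoid a false alarm."""
        self._timer.cancel()


def initiate_emergency_protocol():
    print("Timer expired with no user intervention: notifying emergency contacts.")


# Example: a hike expected to take at most two hours (7200 seconds).
timer = BackupEmergencyTimer(7200, initiate_emergency_protocol)
timer.start()
# ... later, if the user returns safely:
timer.cancel()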

“The method 200 can also include determining whether a current or actual emergency is happening to the user (decision block 230), for example by comparing the computed probability or score value with a threshold value. The threshold value may depend on the state of the user. If the computed emergency probability or score value exceeds the threshold value, the mobile application 103 or the backend application 120 can determine that the user is experiencing a current or actual emergency (decision block 230) and initiate an emergency protocol (step 240).

The emergency protocol can include a series of actions that the backend application 120 or the mobile application 103 takes when a current (or actual) emergency is detected. The emergency protocol may include (i) an emergency verification stage and (ii) an emergency reporting stage. During the emergency verification stage, the client device 102 or the mobile application 103 can perform one or more actions prompting the user to stop the emergency protocol in the event of a false alarm. The emergency verification stage can therefore be used to avoid or reduce false alarms. User feedback, such as stopping the emergency protocol, can also be used during the emergency verification stage (for example, as training data) to improve the emergency detection capabilities of the system (e.g., the mobile application 103 or the backend application 120) over time. Data associated with false alarms and data associated with correctly detected emergencies can be used to improve the emergency predictive model or the user-related processes running on the backend application 120.

“Once an emergency is detected, the mobile application 103 can prompt the user to stop the emergency protocol during the emergency verification stage. For example, the mobile application 103 may cause the client device 102 to play loud sounds and flash a flashlight to draw attention. This state, which can include playing high-pitch sounds and blinking the flashlight, can last from a few minutes up to ten minutes, depending on the emergency sensitivity settings used. If the user cancels the emergency protocol, the mobile application 103 registers a false alarm. If the user does nothing to stop the emergency protocol, the mobile application 103 can register the emergency event as valid. The mobile application 103 may also allow the user to explicitly validate an emergency event. The backend application 120 can store information associated with detected emergency events, such as the input data to the emergency predictive model and an indication of whether the event was valid or a false alarm; this data can then be used to train and improve the user's emergency detection model.
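The verification stage could be sketched roughly as follows: sound an alarm for a grace period and treat the event as confirmed only if no cancellation arrives in time. The grace period, the alarm placeholder, and the cancellation callback are illustrative assumptions; a real app would drive the device's speaker and flashlight and require authenticated cancellation.

# Hypothetical verification-stage sketch: alarm loop with a cancellation window.
import time


def run_verification_stage(grace_period_s, user_cancelled):
    """Return True if the emergency is confirmed, False if the user cancelled in time."""
    deadline = time.monotonic() + grace_period_s
    while time.monotonic() < deadline:
        print("ALARM: loud tone + flashing light (placeholder)")
        if user_cancelled():            # e.g. an authenticated "stop" button press
            print("User cancelled: recording a false alarm for model training.")
            return False
        time.sleep(1)
    print("No cancellation: emergency confirmed, starting reporting stage.")
    return True


# Example with a stub that never cancels (confirms after the grace period).
confirmed = run_verification_stage(5, user_cancelled=lambda: False)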

“In certain implementations, the mobile application 103 may require user authentication in order to allow the user to stop the emergency protocol. User authentication may include entering a security code, using fingerprint identification capabilities of the client device 102, or other authentication methods. User authentication is used to prevent unauthorized cancellation of an emergency protocol that has been initiated.

The emergency reporting stage can be started if the user does not stop the emergency protocol during the emergency verification stage. In the emergency reporting stage, the mobile application 103 or the backend application 120 can transmit a distress signal to the emergency contact(s) (or safety network) of the user, for example to the emergency contact device(s) 107 or through the social network service 99. In some implementations, the distress signal can be sent to the emergency contacts, the safety network, an emergency response service 96, a subset of the user's contacts, or a specialized ad-hoc safety network. Some implementations allow the user to set up an ad-hoc safety network for particular situations, such as when traveling abroad and it is difficult for the regular emergency contacts to respond quickly. The user may choose from a list of individuals, such as friends, family members, or friends of friends, who are willing to look after the user for a specific period, either free of charge or for a fee. The user may view the profiles of individuals who list certain skills, such as “nurse,” “fire fighter,” “emergency responder,” or “knows CPR,” as well as their availability, ratings from past engagements, and fees per month, and then contract the one that best suits the user's needs. The ad-hoc safety network can also be created based on the proximity of other users, the user's relationship to the respective contacts (e.g., “family,” “friend,” “close friend,” etc.), user preferences, or other criteria. The distress signal can include information such as: (i) a standard or personalized message to the recipients; (ii) an indication of the last known location and direction of the user; (iii) an indication of the last known activity of the user (such as running, walking, driving, etc.); (iv) information about the user's medical conditions or medications; and (v) other physiological or social data, or any combination thereof.
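The sketch below assembles a distress-signal payload containing the kinds of fields enumerated above. The field names, coordinates, and JSON structure are illustrative only and not a defined message format.

# Hypothetical distress-signal payload; structure and field names are illustrative.
import json
from datetime import datetime, timezone

distress_signal = {
    "message": "Possible emergency detected for Alice. Please check on her.",
    "last_known_location": {"lat": 37.9838, "lon": 23.7275, "heading_deg": 90},
    "last_known_activity": "running",
    "medical_summary": {"conditions": ["heart condition"], "medications": ["beta blocker"]},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# The same payload could be delivered over SMS, email, a messaging app, or a social
# network, depending on the contact's channels and the device's capabilities.
print(json.dumps(distress_signal, indent=2))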

“In certain implementations, after the distress signal has been sent, the mobile application 103 can put the client device 102 into a protected mode. In the protected mode, the screen displays basic information about the user, while the device is locked to prevent accidental cancellation of the emergency protocol. While in protected mode, the client device 102 can beep periodically to attract attention and periodically relay information to the backend application 120, while otherwise limiting activity in order to optimize battery consumption. Emergency responders can view the displayed information about the user and treat him or her accordingly.

“Depending on the characteristics of the client device 102 and the subscription level of the user, the distress signal can be an automated short message service (SMS) message. On client devices 102 where security restrictions prevent automatic SMS transmission, such as those running the iOS operating system, third-party messaging applications can be used to provide automatic SMS messaging.

“In some implementations, the distress signal can include plain text information indicative of the detected emergency event. In such implementations, the distress signal can be sent as an iMessage, SMS message, email, message to other users of the emergency detection and reporting system, or message on a social network. In some implementations, the distress signal can include a temporary uniform resource locator (URL). In these implementations, information that is sensitive in terms of privacy, such as personal or medical information, can be posted at the URL, and credential(s), such as a PIN, password, or other authentication information, can be required to access it. Credential authentication allows access to the sensitive information to be restricted to the appropriate contacts and prevents unauthorized access to the sensitive data. In some cases, the credential(s) can be shared with a subset of the emergency contacts or members of the safety network before any emergency is detected; in other cases, the credential(s) can be sent, as part of the emergency protocol, to a subset of the user's contacts through separate messages. The URL can be configured to exist only temporarily, to protect the user's sensitive data from being accessed by unauthorized intruders. For example, a temporary URL prevents sensitive data from being stored in Internet archives (it is not searchable or cached), and it protects sensitive information from being spread in the event of a false alarm. The temporary URL can be removed if the emergency protocol is cancelled.
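A rough sketch of a credential-protected temporary URL follows, assuming a random token with an expiry time that is checked together with a shared PIN. The token length, lifetime, domain, and in-memory storage are illustrative choices, not the patent's mechanism.

# Hypothetical temporary, credential-protected link to sensitive details.
import secrets
import time
from typing import Optional

_links = {}    # token -> {"expires_at": ..., "pin": ..., "payload": ...}


def create_temporary_url(payload, pin, ttl_s=3600):
    """Create a short-lived, PIN-protected link to sensitive information."""
    token = secrets.token_urlsafe(16)
    _links[token] = {"expires_at": time.time() + ttl_s, "pin": pin, "payload": payload}
    return f"https://emergency.example.com/e/{token}"     # hypothetical domain


def resolve(token, pin) -> Optional[dict]:
    """Return the payload only if the link is unexpired and the PIN matches."""
    entry = _links.get(token)
    if entry is None or time.time() > entry["expires_at"] or pin != entry["pin"]:
        return None
    return entry["payload"]


def revoke_all():
    """Remove all links, e.g. when an emergency protocol turns out to be a false alarm."""
    _links.clear()


url = create_temporary_url({"conditions": ["heart condition"]}, pin="4821", ttl_s=900)
print(url)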

“If no current emergency is detected (decision block 230), the mobile application 103 or the backend application 120 can check for an impending emergency (decision block 250). The mobile application 103 can use the previously computed probability or score value to detect an impending emergency event. In other implementations, the mobile application 103 or the backend application 120 can compute a second probability or score value, different from the one used for detecting current emergency events, and compare it to a second threshold. Examples of impending emergencies include a low battery charge detected for a medical device, abnormal medical test results, and detected risks (e.g., based on received public alerts) associated with the user's location or activity.
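One simple way to read the two-threshold idea above is shown below: a higher threshold signals a current emergency and a lower one an impending emergency. Both threshold values are invented for illustration.

# Hypothetical two-threshold classification of the computed probability value.
CURRENT_THRESHOLD = 0.8
IMPENDING_THRESHOLD = 0.5


def classify(probability):
    if probability >= CURRENT_THRESHOLD:
        return "current_emergency"
    if probability >= IMPENDING_THRESHOLD:
        return "impending_emergency"
    return "no_emergency"


print(classify(0.65))   # -> "impending_emergency"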

The mobile application 103 or the backend application 120 can be configured to cause a warning message to be displayed to the user on the client device 102 when an impending emergency is detected (decision block 250). The client device 102 can also be configured to emit a sound, flash a light, or provide another output signal to draw the user's attention to the warning displayed by the mobile application 103. The warning message may include information about the impending emergency, suggestions for the user, and/or a link to further information.

“In some implementations, checking for an impending emergency (decision block 250) can be performed before checking for a current emergency (decision block 230). In other implementations, a single check can be performed to detect whether there is an impending or current emergency; for example, the mobile application 103 can determine which type of emergency is most likely based on the computed probability or score value.

“Upon detecting an impending emergency event (decision block 250), the mobile application 103 or the backend application 120 can send a notification (or warning message) to the user indicating that the impending emergency event is about to occur (step 260). The notification can include an instruction or request for the user to modify a behavior in order to avoid the anticipated emergency. For example, if reckless driving is detected, the notification may include a request that the user slow down and drive safely. If sensor data shows that the user has high blood pressure or an irregular heartbeat, the notification may include a request that the user seek immediate medical attention. The backend application 120 or the mobile application 103 can then monitor the user's behavior, for example by tracking the user's speed, location, or motion. If the mobile application 103 or the backend application 120 detects that the user's behavior has not changed (e.g., by monitoring sensor data or other data indicative of user behavior), the mobile application 103 or the backend application 120 can report the anticipated emergency event to the user's emergency contacts or to members of the user's social safety network, including information such as a description of the anticipated emergency event, an indication of the user's location, or other pertinent information. The backend application 120 or the mobile application 103 can also transmit a distress signal to the user's emergency contacts, an emergency response service, or a specialized safety network.
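As a hedged sketch of this warn-then-escalate behavior, the snippet below keeps sampling a monitored quantity (speed, as in the reckless-driving example above) and escalates only if it stays above a limit for the whole observation window. The limit, window, and sampling interval are assumptions for illustration.

# Hypothetical warn-then-escalate loop for the reckless-driving example.
import time


def warn_then_escalate(read_speed_kmh, speed_limit_kmh=120, window_s=60, interval_s=10):
    """Warn the user, then escalate to the safety network if behavior does not change."""
    print("Warning sent to user: please slow down and drive safely.")
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        if read_speed_kmh() <= speed_limit_kmh:
            print("Behavior changed; no escalation needed.")
            return
        time.sleep(interval_s)
    print("Behavior unchanged; notifying emergency contacts with location and description.")


# Example with a stub speed reading that never drops below the limit.
warn_then_escalate(read_speed_kmh=lambda: 135, window_s=3, interval_s=1)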

“In some embodiments, the detection and reporting of an actual emergency (decision block 230 and step 240), or the detection and reporting of an impending emergency (decision block 250 and step 260), can be optional. That is, the mobile application 103 and the backend application 120 can be configured to perform a first process for detecting and reporting actual emergencies, a second process for detecting and reporting impending emergencies, or both.
