Artificial Intelligence – Ashutosh Saxena, Hema Swetha Koppula, Chenxia Wu, Ozan Sener, Brainoft Inc

Abstract for “Automatically learning connected devices and controlling them”

“A first input is received by at least one sensor. A first probability associated with a first state is determined based on the first input. A second input is received by at least one sensor. A second probability associated with a second state is determined based on the second input. The second state is determined based on the second probability and a transition model. The transition model links the first and second states and indicates the likelihood of a transition from the first state to the second state. Based on the second state, a rule is activated to modify the state of at least one network-connected device.”

Background for “Automatically learning connected devices and controlling them”

“Network-connected devices (e.g., Internet of Things (IoT) devices) allow remote control and automation of devices in an environment (e.g., a home). However, these devices are often not fully autonomous: users typically must operate the devices remotely or create rules to simulate autonomous operation. Individual IoT devices often have limited functionality and cannot make many autonomous decisions on their own. Even where autonomous operation of a single device is possible, device functionality could be enhanced by sharing information between different devices. It is therefore desirable to automate the functionality of network-connected devices in an efficient and generalized manner.

The invention can be implemented in numerous ways: as an apparatus, a process, a system, a composition of matter, a computer program product, and/or a processor, such as one configured to execute instructions stored on and/or provided by a memory coupled to the processor. These implementations, and any other form the invention may take, are referred to in this specification as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. A component, such as a processor or a memory, described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or as a specific component that was manufactured to perform the task. As used herein, the term “processor” refers to one or more devices, circuits, and/or processing cores configured to process data such as computer program instructions.

A detailed description of one or more embodiments of the invention is provided below, along with accompanying figures that illustrate its principles. Although the invention is described in connection with these embodiments, it is not limited to them. The scope of the invention is limited only by the claims, and the invention encompasses numerous alternatives, modifications, and equivalents. Numerous specific details are set forth in the following description to provide a thorough understanding of the invention. These details are provided for the purpose of example, and the invention may be practiced according to the claims without some or all of them. For clarity, technical material that is well known in the art has not been described in detail, so as not to obscure the invention.

“Controlling network-connected devices is disclosed. A first input is received from a plurality of network-connected devices and/or sensors. For example, data from a motion sensor (e.g., an infrared motion sensor), a camera, a microphone, a temperature sensor, etc. of a home is received. The sensor may detect a physical characteristic (e.g., movement, switch position, temperature) and/or a chemical characteristic. The sensor may be integrated into a network-connected device (e.g., integrated into a network-connected thermostat). A first state of a subject is determined based on the first input, and a first probability is associated with the first state. For example, the first state is associated with a location and an action (e.g., sitting, sleeping, standing) of a subject within an environment. The subject may be a human, an animal (e.g., a pet), or a robot. It may not be possible to determine the exact location or action of the subject with absolute certainty; however, the first probability identifies the likelihood that the first state, associated with the location and the action, is correct given the first input. In some embodiments, the first state is identified as the valid state; for example, the first state is the state associated with the highest probability among all possible states given the input. A second input is received from the plurality of sensors. For example, the second input is received after the first input and captures any state change of the same subject. A second state is determined based on the second input, and a second probability is associated with the second state. The second state may be a new state of the subject (e.g., a person), and the second probability may indicate the likelihood that the updated location and/or action of the subject is correct. A transition model links the first and second states and indicates a likelihood (e.g., a probability) that the subject transitions from the first state to the second state. For example, if the subject is determined to be in the first state, the likely next states of the subject are determined based on the first state. Based on at least the minimum of the transition model probability and the second probability, a level of confidence that the second state corresponds to an actual state of the subject can be determined. The second state is used to trigger a rule that modifies the state of a network-connected device. For example, an IoT device is switched on when the second state is detected.
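The two-stage inference described above can be sketched in a few lines of Python. The following is a minimal illustration only, not an implementation from the specification: the candidate states, the probability values, the rule table, and the 0.6 confidence threshold are all invented for demonstration.

```python
# A minimal sketch of the two-input state inference described above.
# All states, probabilities, and the 0.6 confidence threshold are
# illustrative assumptions, not values from the specification.

# Probability of each candidate state given the first sensor input.
first_probs = {"sitting@desk": 0.7, "standing@desk": 0.2, "sleeping@bed": 0.1}
first_state = max(first_probs, key=first_probs.get)  # most likely valid state

# Transition model: likelihood of moving from the first state to each next state.
transition = {
    ("sitting@desk", "reading@desk"): 0.8,
    ("sitting@desk", "sleeping@bed"): 0.1,
}

# Probability of each candidate state given the second sensor input.
second_probs = {"reading@desk": 0.75, "sleeping@bed": 0.15}

def confidence(prev_state: str, cand: str) -> float:
    """Confidence = min(transition likelihood, observation probability)."""
    return min(transition.get((prev_state, cand), 0.0), second_probs.get(cand, 0.0))

second_state = max(second_probs, key=lambda s: confidence(first_state, s))

# Trigger a rule based on the determined second state.
RULES = {"reading@desk": ("desk_lamp", "on")}  # hypothetical rule table
if confidence(first_state, second_state) > 0.6 and second_state in RULES:
    device, action = RULES[second_state]
    print(f"rule fired: set {device} -> {action}")
```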

“FIG. 1A is a diagram illustrating an embodiment of a system for automatically controlling network-connected devices. Devices 102 include one or more network-connected devices, including sensor devices and controllable devices (e.g., IoT devices). For example, devices 102 may include a switch, a door lock, a thermostat, a light bulb, kitchen/home appliances, a camera, a speaker, a garage door opener, a window treatment, a fan, an electrical outlet, a light dimmer, an irrigation system, and any other device that can be connected to a computer network. Devices 102 may also include sensors such as a switch, a camera, a motion detector, a light detector, an accelerometer, an infrared detector, a smoke detector, a microphone, a humidity detector, a door sensor, a window sensor, a water detector, a glass breakage detector, and other sensors.

“Hub 104 communicates with devices 102 via wireless and/or wired connections. For example, a device of devices 102 can communicate with hub 104 using a WiFi or Bluetooth connection. In some embodiments, devices 102 and hub 104 are deployed together; for example, devices 102 include devices deployed in a home, and hub 104 wirelessly connects to the devices inside the home using a short-range wireless signal. In some embodiments, hub 104 facilitates communication with and/or control of devices 102 by user device 108 via network 110. For example, user device 108 connects to hub 104 wirelessly (e.g., Bluetooth, WiFi, etc.) and/or via network 110 to view information about and control devices 102. In another example, user device 108 connects to network 110 to access devices 102 via server 106 and hub 104. One or more devices/sensors of devices 102 may report information to hub 104, which in turn relays the information to server 106 via network 110. In some embodiments, one or more devices/sensors of devices 102 communicate with network 110 directly without being routed through hub 104; for example, a device may include a WiFi and/or cellular radio that allows direct access to network 110.

“User device 108 may be a laptop computer, a smartphone, a tablet, a desktop computer, a smartwatch, or another electronic computing device. In some embodiments, information from user device 108 is provided to server 106 and/or hub 104 to enable automation of devices 102. For example, user device 108 provides its GPS location to server 106 and/or hub 104 as an input for determining and triggering device automation rules. In some instances, user device 108 receives information about a device of devices 102; for example, hub 104 and/or server 106 provides user device 108 with an alert detected by a device of devices 102. Server 106 may be part of a cloud computing infrastructure that processes and/or stores information related to devices 102 to enable control and management of devices 102. In some embodiments, hub 104 serves as an interface and/or control hub for devices 102. Any of the components shown in FIG. 1A may include storage elements that are not explicitly shown.

“In some embodiments of the system shown in FIG. 1A, machine learning is used to learn how users interact with one or more controllable devices of devices 102, and this data is used to adjust controllable device settings automatically. In some embodiments, powerful statistical models that predict user preferences and behavior are built automatically. Devices 102 and user device 108 can be used as sensors to learn about a user’s activities and preferences, and based on this data, controllable devices of devices 102 are managed and controlled automatically.

“In some embodiments, machine learning (e.g., local and/or cloud-based) is used to combine input from multiple devices/sensors (e.g., devices 102) to create a unique model that describes a user’s activities and behaviors. Devices 102 can be used to control a home environment remotely or locally. Devices 102 can collect data such as the presence and movement of people in an environment, measurements of environmental properties such as light, temperature, humidity, and motion of subjects, and video of various locations within the environment. In some embodiments, user behavior, such as interaction with switches and thermostats under the environmental conditions detected by the sensors, is learned so that controllable devices can be adjusted automatically to meet the user’s preferences. Devices 102 are then commanded autonomously according to the user’s learned preferences.

“In some embodiments, machine learning is performed by a local hub for a specific environment; for example, hub 104 learns for users of the environment in which it and devices 102 are deployed. In some embodiments, machine learning is performed remotely (e.g., cloud-based) instead of at a hub; for example, server 106 performs machine learning for the environment of hub 104. A backend server may learn from many different environments in order to automatically configure each environment.

“In certain embodiments, hub 104 and/or server 106 contains one or more inference engines that convert sensor data from one or more devices 102 into state representations (e.g., the state of a person’s behavior, location, and so forth). The inference engine uses machine learning algorithms based on deep learning and statistical learning techniques. In some embodiments, hub 104 and/or server 106 includes a vision engine (e.g., ML inference) that receives images/videos from one or more cameras and analyzes them using vision algorithms to infer a subject’s (e.g., human, pet, etc.) location, behavior, and activities (e.g., spatial and motion features). Some embodiments analyze camera video data to detect hand gestures that allow a person to set connected devices to a desired state. In one embodiment, the gestures are learned statistically from demonstrations; in another embodiment, the gestures are learned using deep learning. Some embodiments combine vision data with data from other sensors to create a semantic representation that contains information about the person’s activities and preferences.

“In some embodiments, data is sent using an event-driven streaming database architecture. The sensor data is first converted into a feature vector before streaming. Some embodiments clean the sensor data before streaming using statistical learning models that model different types of noise.
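As a rough sketch of converting raw readings into a cleaned feature vector before streaming, the following Python example uses a rolling median filter as a stand-in for the statistical noise models mentioned above; the field names, the filter choice, and the normalization are assumptions for demonstration only.

```python
from statistics import median

def clean(readings, window=3):
    """Suppress impulsive sensor noise with a rolling median filter
    (one simple stand-in for the statistical noise models mentioned above)."""
    out = []
    for i in range(len(readings)):
        lo = max(0, i - window + 1)
        out.append(median(readings[lo:i + 1]))
    return out

def to_feature_vector(sample: dict) -> list:
    """Flatten a raw sensor sample into a fixed-order feature vector.
    The field names are hypothetical."""
    return [
        sample["temperature_c"],
        sample["humidity_pct"],
        1.0 if sample["motion"] else 0.0,
        sample["hour_of_day"] / 24.0,  # normalized time-of-day feature
    ]

raw_temps = [21.0, 21.1, 35.0, 21.2, 21.1]           # 35.0 is a noise spike
sample = {"temperature_c": clean(raw_temps)[-1],
          "humidity_pct": 40.0, "motion": True, "hour_of_day": 20}
print(to_feature_vector(sample))  # streamed to the learning backend
```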

“In some embodiments, hub 104 and/or server 106 includes one or more rule engines that use a detected state to trigger one or more automatically determined automation rules. A rule is composed of pre-conditions and post-conditions: when a pre-condition is detected, the rule is triggered to place one or more network-connected controllable devices in the post-condition state. For example, if the system detects that a person is reading a book at a desk, it turns on the reading lights.
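A minimal rule-engine sketch in the pre-condition/post-condition style just described; the rule representation and the reading-light example are illustrative assumptions rather than the specification's actual rule format.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    pre: dict    # state components that must match (pre-conditions)
    post: dict   # device -> desired state (post-conditions)

def matches(state: dict, pre: dict) -> bool:
    """A rule's pre-condition is satisfied when every required
    state component is present with the expected value."""
    return all(state.get(k) == v for k, v in pre.items())

rules = [Rule(pre={"activity": "reading", "location": "desk"},
              post={"reading_light": "on"})]

detected_state = {"activity": "reading", "location": "desk", "subject": "person_1"}
for rule in rules:
    if matches(detected_state, rule.pre):
        for device, target in rule.post.items():
            print(f"set {device} -> {target}")  # would call the device API here
```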

In some embodiments, server 106 stores various types of information that can be used to determine automation rules. For example, sensor data at different points in time and device control events of devices 102 (e.g., pressing a light switch or changing the temperature setting of a thermostat) are logged. By learning the preferences/configurations desired by users, the preferences/configurations may be recreated when conditions in the environment match the previously recorded conditions. In some embodiments, statistical models, such as the transition model, are used to determine the control actions of devices. For example, when a network-connected light bulb is turned on/off, the associated environmental conditions (e.g., user location, action, time of day, etc.) are determined and stored, along with the associated status of other controllable devices. Some embodiments identify which state vector corresponds to which control action of a controllable device; a learning method may determine the weights of a function that maps state vector values to state changes of a controllable device. In some embodiments, deep learning is used to learn a multi-staged, non-linear mapping between the state vector and controllable device state changes. Some embodiments collect feedback from the user about an action taken by an automatically triggered rule, and reinforcement learning is used to modify the control rule automatically.
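Under simple assumptions, the learned weighting of a function that maps state vector values to a device state change could look like the single-stage logistic model below (the specification also contemplates deeper, multi-staged non-linear mappings); the features and the training log are invented.

```python
import math

# Toy log of (state vector, light-switch action) pairs; values are invented.
# Features: [person_present, darkness_level, hour/24]
LOG = [([1.0, 0.9, 0.83], 1), ([1.0, 0.1, 0.50], 0),
       ([0.0, 0.9, 0.90], 0), ([1.0, 0.8, 0.87], 1)]

w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    """Probability the light should be turned on for state-vector features x."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain gradient descent on logistic loss over the event log learns the weights.
for _ in range(500):
    for x, y in LOG:
        g = predict(x) - y              # gradient of the loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

print(round(predict([1.0, 0.85, 0.85]), 2))  # dark evening, person present
```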

“In certain embodiments, alarming or critical events are detected and stored. For example, a break-in is detected as an unusual event and output alerts are generated, such as an alert sent to a smartphone and activation of a siren. Another example is a baby climbing out of a crib. Unlike manually configured alarms, the detection of unusual events can be automated, because the statistical modeling of events from sensor data allows unexpected events to be detected.

“In some embodiments, an output response (e.g., sound, user alert, light alert, wearable device alert, etc.) is generated in response to detecting an event. For example, if a stove is left on for more than a certain amount of time and/or no human subject is detected nearby, the stove is turned off automatically and/or an alert is generated. In another example, when a water leak is detected, an alert is generated and the water valve is turned off. In another example, when a person is detected to have fallen, an automatic alert is sent to an emergency contact and/or authority. In another example, when a person is detected in a living area in the morning, the curtains are opened automatically. A fan can be turned on if humidity levels exceed a certain threshold, and a humidifier can be turned on/off automatically to maintain a desired humidity. In another example, the coffee maker is switched on automatically at a predetermined time. In another example, the dishwasher is scheduled to run when energy rates are lower. In another example, light intensity is adjusted according to the time of day; for instance, lights are turned on at a lower intensity when someone wakes up in the middle of the night to go to the bathroom. In another example, music is turned on automatically when a subject is eating dinner. In another example, when ambient temperature and humidity exceed threshold levels and the subject is seated, a fan is turned on automatically.”

“Network 110 may include one or more of the following: a direct or indirect physical communication connection, a mobile communications network, the Internet, an intranet, a Wide Area Network, a Storage Area Network, a wireless network, a cellular network, and any other form of connecting multiple systems, components, or storage devices together. Additional instances of any of the components shown in FIG. 1A may exist; for example, multiple hubs may be deployed and utilized in the same environment, and multiple user devices may be used to receive user information and/or control the devices of an environment. Server 106 may be one of many distributed servers that process network-connected device data. Components not shown in FIG. 1A may also exist in some embodiments.”

“FIG. 1B is a diagram illustrating the interactions between components used to control network-connected devices. Cloud-based machine learning module 120 communicates with rules engine 122. Rules engine 122 receives inputs from sensors 124 and controls devices 126. One or more modules of cloud-based machine learning module 120 may be included in hub 104 and/or server 106 of FIG. 1A. Sensors 124 and/or control devices 126 may be included in devices 102 of FIG. 1A.”

“FIG. 1C is a diagram illustrating an embodiment of sub-components of a system for automatically controlling network-connected devices. Components 130 of FIG. 1C may be included in server 106 of FIG. 1A and/or in hub 104 of FIG. 1A.”

“FIG. 2 is a flowchart illustrating an embodiment of a process for automatically controlling network-connected devices. The process of FIG. 2 may be implemented at least in part on hub 104 and/or server 106 of FIG. 1A.”

“At 202, sensor data is received. In some embodiments, the received sensor data includes data from one or more sensors of devices 102 of FIG. 1A. For example, data from switches, cameras, motion detectors, light detectors, infrared detectors, thermometers, smoke detectors, air quality sensors, microphones, humidity detectors, door sensors, window sensors, water detectors, glass breakage detectors, and other sensors monitoring the environment is received. In some embodiments, the received data includes data from a user device (e.g., user device 108 of FIG. 1A). For example, data from a sensor of the user device (e.g., location sensor, GPS, accelerometer, heart rate sensor, orientation sensor, microphone, gyroscope, etc.) is received. In another example, the data received from the user device includes status data and/or user-specified data. In some cases, the sensor data includes data from one or more controllable devices; for example, a status, a configuration, a functional state, a parameter, and/or other data of a controllable device of devices 102 of FIG. 1A is received. The sensor data may be received periodically; for example, a sensor device periodically sends its currently detected sensor data. In some embodiments, the sensor data is received dynamically; for example, the sensor data is received when new sensor data is detected. In some embodiments, sensor data from multiple sensor devices is received. In some embodiments, the sensor data is received at a hub (e.g., hub 104 of FIG. 1A), and the sensor data is shared/sent to another hub or sent to cloud computing for processing (e.g., sent to server 106 of FIG. 1A).”

“At 204, machine learning is used to identify one or more states. In some embodiments, machine learning is performed on the received sensor data. In some embodiments, the machine learning includes the use of a (e.g., recursive) hidden Markov model and/or expectation maximization. Each state may be associated with discrete types of information that can be detected using machine learning. For example, an activity that a person is currently engaged in can be detected by analyzing camera video/image data to identify the person and the activity the person is engaging in. In some embodiments, a state is assigned to each subject. In some embodiments, the sensor data is reduced to one or more identified states; this allows the sensor data to be reduced to meaningful variable values that can be used to determine one or more automation rules. In some embodiments, the state is represented as a vector. For example, the state vector includes a set of values, which may include one or more of the following: a time value, a weather forecast, and/or other data related to time and/or environmental conditions.

“In many cases, it is difficult to identify a particular state with 100% accuracy. For example, it may be difficult to pinpoint the exact location and activity of a subject using the sensor data. In some embodiments, a likelihood/probability that the determined state is correct is determined. For example, the actual state could be any of many possible states, and for each candidate state, the probability that it is correct is determined. Machine learning, such as statistical and/or deep learning, may be used to determine the probability. For example, statistical and/or deep learning models of correlations between sensor data and a potential state, between a previous state and the potential state (e.g., a transition model), and of associations between different states/state components are built and utilized to determine an overall likelihood/probability for each candidate state.”

“In some embodiments, observations over time are used as statistical inputs to calculate a state vector that changes over time. An inference engine may output the state vector, which contains information about the detected subjects’ presence and activities. One embodiment models the dynamic processes and motions of subjects within the house, such as humans and pets. In one embodiment, deep learning is used to discover non-linear relationships. Some embodiments use sounds captured by microphone sensors to determine the location and activities of people; one embodiment uses statistical models to link sound to the state vector.

“In some embodiments, the state vector includes estimated information about a subject, such as an activity state (e.g., a general activity state such as reading, sleeping, or cooking, and a detailed activity state such as opening a book or placing a book down). The state vector may include other information, such as weather conditions, current activities, time of day, and location. The state vector may also include one or more controllable device states. In some embodiments, the state vector includes a list of subjects for each region (e.g., room) of an environment. A learning method may learn the weights of a function that maps sensor data values to the state vector; in some embodiments, deep learning is used to learn a multi-staged, non-linear mapping of the sensor data to the state vector.
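One plausible in-memory shape for such a state vector is sketched below; the exact field names and types are assumptions drawn from the components listed above (activity states, weather, time of day, location, device states, and subjects per room).

```python
from dataclasses import dataclass, field

@dataclass
class StateVector:
    """Illustrative container for the state vector components described above."""
    general_activity: str                   # e.g. "reading", "sleeping", "cooking"
    detailed_activity: str                  # e.g. "opening-book"
    location: str                           # e.g. "bedroom"
    time_of_day: float                      # hours since midnight
    weather: str                            # e.g. "rain"
    device_states: dict = field(default_factory=dict)    # device -> state
    subjects_by_room: dict = field(default_factory=dict)  # room -> [subject ids]

sv = StateVector("reading", "opening-book", "study", 20.5, "clear",
                 device_states={"desk_lamp": "off"},
                 subjects_by_room={"study": ["person_1"]})
print(sv.general_activity, sv.subjects_by_room["study"])
```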

“In some embodiments, data from the sensors and/or cameras is analyzed to predict a person’s future activities and locations, and predictive actions are taken based on these predictions. For example, if a user moves towards a reading desk, it can be predicted that the user will read at the desk, and a network-connected light is turned on before the user begins reading.

“At 206, one or more automation rules are discovered based on the identified state. For example, once it is determined that an identified state is correlated with a particular controllable device state/status/action, a rule is created that places the controllable device in the associated state/status/action when the identified state is detected. An automation rule can be dynamically created or updated when a correlation between a state and a controllable device state/status/action is identified, or by identifying a correlation between several states or a range of state values and a controllable device status/action. The probability measure of each state may be used in determining the correlations and/or automation rules. In some embodiments, a history of determined states with associated probability values and co-occurring controllable device states/statuses/actions is stored and analyzed over time using machine learning (e.g., statistical and/or deep learning) to discover correlations, as in the sketch below. A corresponding automation rule is created or updated if a measure of correlation exceeds a threshold value, and automation rules may be continuously updated as new correlations are discovered.
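A bare-bones version of this correlation mining could count how often each identified state co-occurs with each device action and promote pairs whose conditional probability clears a threshold into rules; the history, the 0.8 threshold, and the rule format below are all invented for illustration.

```python
from collections import Counter

# Hypothetical history of (identified state, observed device action) pairs.
history = [("reading@desk", ("desk_lamp", "on"))] * 8 + \
          [("reading@desk", ("desk_lamp", "off"))] * 2 + \
          [("sleeping@bed", ("desk_lamp", "off"))] * 5

THRESHOLD = 0.8  # minimum conditional probability to create a rule (assumed)

state_counts = Counter(s for s, _ in history)
pair_counts = Counter(history)

rules = {}
for (state, action), n in pair_counts.items():
    p = n / state_counts[state]          # P(action | state) from the log
    if p >= THRESHOLD:
        rules[state] = action            # discovered automation rule

print(rules)  # prints the discovered state -> device action rules
```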

“In some embodiments, the rules that control the devices are learned automatically. One embodiment uses reinforcement learning to learn the policy, and in one embodiment the policy is executed using proportional-derivative controllers. Based on sensor information and a triggering state vector, the rule engine takes actions (e.g., changing the states of devices). One embodiment modifies the parameters of a machine learning algorithm using online stochastic gradient algorithms. The rule engine may also take into consideration data from user devices, web services, weather services, and calendar events in order to learn and/or trigger rules.
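The specification mentions executing learned policies using proportional-derivative controllers. A generic PD control step, applied here to a hypothetical thermostat policy, is sketched below; the gains, the 21 C setpoint, and the toy thermal response are assumptions for demonstration.

```python
class PDController:
    """Generic proportional-derivative controller: the control output is
    kp * error + kd * d(error)/dt."""
    def __init__(self, kp: float, kd: float):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, setpoint: float, measured: float, dt: float = 1.0) -> float:
        error = setpoint - measured
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative

# Hypothetical use: drive a heater toward a learned 21 C preference.
pd = PDController(kp=0.8, kd=0.2)
temp = 18.0
for _ in range(5):
    heat = max(0.0, pd.step(21.0, temp))  # heater power (arbitrary units)
    temp += 0.3 * heat                    # toy room thermal response
    print(round(temp, 2))
```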

“At 208, an automatically discovered rule is invoked based on the determined state. For example, a triggering condition of the automatically discovered rule is the identified state, and the rule is invoked to modify a state/status/function/action of a controllable device as specified by the rule. In some embodiments, the rule’s triggering condition is a set of states or a range of states. In one embodiment, multiple rules may be applied/triggered based on the state. In some embodiments, a triggered rule may conflict with another triggered rule; for example, one rule specifies that a light switch is to be turned on while another specifies that it is to be turned off. Each rule may be associated with a preference value that indicates how much the rule is preferred over another, in order to resolve conflicts between rules. In some embodiments, a user can provide feedback regarding the action of an automatically triggered rule, and this feedback is used via reinforcement learning to modify the control rule.

“FIG. 3 is a flowchart illustrating an embodiment of a process for identifying a state. The process of FIG. 3 may be implemented at least in part on hub 104 and/or server 106 of FIG. 1A. In some embodiments, the process of FIG. 3 is included in 204 of FIG. 2. In certain embodiments, the process of FIG. 3 is repeated periodically. In some embodiments, the process of FIG. 3 is performed dynamically; for example, the process of FIG. 3 is performed when new sensor data is received.”

“At 302, candidate states of an actual state are identified. For example, candidate states that correspond to newly received sensor data are identified. In some embodiments, the candidate states are states that could potentially represent the current state of a subject (e.g., human, animal, etc.). Because it is difficult to determine the exact current state of a subject (e.g., location and activity) from sensor data, candidate states are identified.

“In some embodiments, determining the candidate states includes identifying all states that could be associated with the sensor data; for example, all predefined activities of a subject that could be consistent with the camera data are identified. In some embodiments, determining the candidate states includes identifying only the most likely candidate states rather than all possible states. In certain embodiments, the candidate states are identified by analyzing the sensor data received at 202 of FIG. 2. In some embodiments, a subject is identified using the newly received sensor data to determine the candidate states. Some embodiments identify the most likely candidate states based on a previously identified state. For example, given a previous state that identifies a previous location of a subject, only certain states can become the current state, and the candidate states are identified as those states closest to the previous location.

“In some embodiments, a single state includes a number of sub-states or components. For example, each state may include an identifier of a subject, whether the subject is present in an environment, the type of the subject (e.g., human vs. pet, or a specific individual), a coarse location of the subject (e.g., which room of a house/building), a specific location within the coarse location (e.g., on the bed of a bedroom), a coarse activity (e.g., reading), and a specific activity (e.g., opening a book). In some embodiments, each candidate state includes a state of a controllable device. An activity state of a subject may be one of a set of predefined activities that are able to be detected.”

“At 304, for each candidate state, a likelihood that the received sensor data corresponds to the candidate state is determined. For example, a probability that the received sensor data corresponds to the candidate state is determined, indicating how likely it is that the candidate state is the actual state given the sensor data. This likelihood may be determined using machine learning; for example, deep learning and/or statistical processing is used to determine the probabilities that different sensor data are associated with different states. In some embodiments, a predetermined algorithm is used to determine the likelihood that the candidate state matches the received sensor data; for example, a computer vision pattern recognition algorithm analyzes camera sensor data and outputs the likelihood.

“At 306, for each candidate state, a likelihood that the candidate state is the next state following a previously identified state is determined. For example, a probability that the candidate state is the actual state following a previously identified state of a subject is determined. This likelihood may be determined using machine learning; for example, statistical and/or deep learning processing is used to determine the probability for each candidate state. In one example, each room of a house includes a motion detector sensor, and machine learning is used to automatically determine the relative positions of the rooms of the house by observing the patterns of sensor triggers as subjects move between rooms. Knowing the connections between rooms, the likelihood that a subject will next be in each of the connected rooms can be determined. For example, given a previous state identifying a location of a subject, candidate states whose locations are reachable from the previous location within the time between sensor inputs are assigned higher likelihoods.
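A toy version of learning this transition model: counting consecutive motion-sensor activations yields empirical room-to-room transition probabilities, and room pairs never observed consecutively implicitly encode the house's connectivity. The event sequence below is invented.

```python
from collections import Counter, defaultdict

# Hypothetical ordered motion-sensor triggers (room in which motion was seen).
events = ["hall", "kitchen", "hall", "bedroom", "hall", "kitchen",
          "hall", "bedroom", "bathroom", "bedroom", "hall"]

pair_counts = Counter(zip(events, events[1:]))   # consecutive-room pairs
from_counts = Counter(events[:-1])

transition = defaultdict(dict)
for (a, b), n in pair_counts.items():
    transition[a][b] = n / from_counts[a]        # P(next=b | current=a)

# Rooms never observed consecutively get probability ~0, so the learned
# model implicitly encodes which rooms are connected.
print(dict(transition["hall"]))   # e.g. {'kitchen': 0.5, 'bedroom': 0.5}
```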

At 308, a concurrent state component correlation is determined for each candidate state. Certain candidate state components may be more likely than others to be included together in the correct state. In some embodiments, the concurrent state component correlation is determined by determining the probability that a candidate state component is included in the actual/correct state given another candidate state component. For example, a candidate state may include a location component and an activity component, and the probability that the activity component of the candidate state is included in the correct/actual state given the location component is determined. In some embodiments, the concurrent state component correlation is determined by determining multiple probabilities associated with different combinations of state components of the candidate state.

“At 310, for each candidate state, an overall likelihood that the candidate state is the actual state is determined. For example, the overall probability that a candidate state is the correct state of a subject is calculated. To calculate the overall likelihood, one or more of the probabilities determined at 304, 306, and 308 may be multiplied together. In some embodiments, the candidate states are sorted by their overall likelihoods, and the candidate state with the highest overall likelihood is selected as the correct/actual state.
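Combining the likelihoods from 304, 306, and 308 into the overall likelihood of 310 can be as simple as the product below; the candidate states and probability values are fabricated for illustration.

```python
# Each candidate carries the three probabilities described at 304, 306, 308:
# emission   = P(sensor data | state)            (step 304)
# transition = P(state | previous state)         (step 306)
# coherence  = P(state components co-occurring)  (step 308)
candidates = [
    {"state": "reading@desk",  "emission": 0.7, "transition": 0.8, "coherence": 0.9},
    {"state": "sleeping@desk", "emission": 0.2, "transition": 0.1, "coherence": 0.2},
    {"state": "reading@bed",   "emission": 0.4, "transition": 0.3, "coherence": 0.6},
]

for c in candidates:
    # Overall likelihood (step 310): product of the individual probabilities.
    c["overall"] = c["emission"] * c["transition"] * c["coherence"]

ranked = sorted(candidates, key=lambda c: c["overall"], reverse=True)
actual = ranked[0]["state"]   # candidate with the highest overall likelihood
print(actual, round(ranked[0]["overall"], 3))  # reading@desk 0.504
```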

“FIG. 4 is a flowchart illustrating an embodiment of a process for automatically discovering automation rules. The process of FIG. 4 may be implemented at least in part on hub 104 and/or server 106 of FIG. 1A. In some embodiments, the process of FIG. 4 is included in 206 of FIG. 2. In certain embodiments, the process of FIG. 4 is repeated periodically. In some embodiments, the process of FIG. 4 is performed dynamically; for example, the process of FIG. 4 is executed when a new state is identified (e.g., at 310 of FIG. 3).”

“At 402, identified states are correlated with corresponding controllable device states. For example, a state identified at 310 of FIG. 3 as the actual state of a subject is correlated with a corresponding state, configuration, functional state, parameter, and/or other data of a controllable device of devices 102 of FIG. 1A. An example of this is determining corresponding pairings between an identified state (e.g., a state vector) and a corresponding controllable device status (e.g., status, configuration, functional state, parameters, and/or other data). In some embodiments, machine learning is used to identify correlations between identified states and controllable device states; for example, deep learning and/or statistical techniques are used to find temporal correlations between the identified states and controllable device states. In some embodiments, the identified states include state vectors, which may include a time value, a weather forecast, a date value, and/or any other data related to time and/or environmental conditions. In some embodiments, a historical probability that an identified state corresponds to a particular controllable device state is determined.

“At 404, clusters of one or more identified states are correlated with controllable device states. Some embodiments combine similar states (e.g., states with values within a range) and correlate them together with a controllable device state. In one example, identified states associated with physical locations within a close range of each other are clustered together, and this cluster of states is correlated with a corresponding controllable device state. In some embodiments, a cluster probability that the cluster of identified states corresponds to the same controllable device state is calculated; the cluster probability may identify the historical probability that any state of the cluster corresponds to the controllable device state. The cluster probability may be determined by multiplying together the individual probabilities (e.g., determined in the process of FIG. 3) of each state of the cluster.
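A simple distance-threshold clustering of location-bearing states, with the cluster probability computed as the product of the per-state probabilities as described above; the coordinates, the clustering radius, and the probability values are invented.

```python
import math

# Identified states with 2-D positions (meters) and per-state probabilities
# (e.g., from the process of FIG. 3). All values are invented.
states = [("s1", (1.0, 1.0), 0.9), ("s2", (1.3, 0.8), 0.8), ("s3", (6.0, 5.0), 0.7)]
RADIUS = 1.0  # states closer than this are clustered together (assumed)

clusters = []
for name, pos, p in states:
    for cluster in clusters:
        if any(math.dist(pos, q) < RADIUS for _, q, _ in cluster):
            cluster.append((name, pos, p))
            break
    else:
        clusters.append([(name, pos, p)])

for cluster in clusters:
    prob = math.prod(p for _, _, p in cluster)   # cluster probability
    print([n for n, _, _ in cluster], round(prob, 2))
# ['s1', 's2'] form one cluster (prob 0.72); ['s3'] stands alone (prob 0.7)
```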

“At 406, in the event that a historical probability exceeds a threshold, an associated automation rule is created. For example, if the historical probability determined at 402 and/or the cluster probability determined at 404 exceeds a threshold, the associated automation rule is created and stored in a rule database. In some embodiments, the automation rule specifies that if an identified state (e.g., a state included in the cluster of identified states) is detected, the corresponding controllable device state is to be recreated/implemented (e.g., the state of the corresponding controllable device(s) is modified to match the rule-specified controllable device state). The automation rule may be updated regularly in some embodiments; for example, an automation rule has an expiration date and can be renewed or deleted at expiration.
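The rule lifecycle described here (create when a probability clears the threshold, expire, renew or delete) might be managed as in the sketch below; the threshold, the 30-day lifetime, and the database shape are assumptions rather than values from the specification.

```python
import time

THRESHOLD = 0.8        # assumed minimum historical/cluster probability
LIFETIME = 30 * 86400  # assumed rule lifetime: 30 days, in seconds

rule_db = {}  # (state, device) -> {"target": ..., "expires": ...}

def maybe_create_rule(state, device, target, probability, now=None):
    """Store an automation rule when its probability clears the threshold."""
    now = time.time() if now is None else now
    if probability >= THRESHOLD:
        rule_db[(state, device)] = {"target": target, "expires": now + LIFETIME}

def sweep(now=None):
    """Delete expired rules; renewal would simply re-create them."""
    now = time.time() if now is None else now
    for key in [k for k, r in rule_db.items() if r["expires"] <= now]:
        del rule_db[key]

maybe_create_rule("reading@desk", "desk_lamp", "on", probability=0.85)
sweep()
print(rule_db)
```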

“FIG. 5 is a flowchart illustrating an embodiment of a process for invoking automation rules. The process of FIG. 5 may be implemented at least in part on hub 104 and/or server 106 of FIG. 1A. In some embodiments, the process of FIG. 5 is included in 208 of FIG. 2. In certain embodiments, the process of FIG. 5 is performed periodically. In some embodiments, the process of FIG. 5 is performed dynamically; for example, the process of FIG. 5 is executed when a new state is identified (e.g., at 310 of FIG. 3).”

“At 502, a triggering state of an automation rule is detected. In some embodiments, the automation rule was created at 406 of FIG. 4. In some embodiments, the automation rule is preconfigured; for example, a programmer manually configures the automation rule. In some embodiments, the triggering state was identified at 310 of FIG. 3. In some embodiments, detecting the triggering state of the automation rule includes identifying whether an identified state is included in a group of states that trigger the rule. In some embodiments, once a state is identified as an actual state (e.g., at 310 of FIG. 3), a database of automation rules is searched/traversed to identify any automation rules triggered by the identified actual state. In certain embodiments, an identified state triggers an automation rule only if the probability (e.g., determined in the process of FIG. 3) that the identified state is the actual state meets a threshold. In some embodiments, the triggering state includes a state vector that may include one or more of the following: a date value, a weather forecast, a time value, and any other data related to time and/or environmental conditions.

“At 504, it is determined whether simultaneously active automation rules conflict. For example, two or more automation rules may be activated because the identified state triggers them, but the rules specify conflicting controllable device states that cannot all be implemented at once (e.g., one rule specifies an “on” state and another specifies an “off” state for the same device). In some embodiments, the conflict is resolved using priority values: each automation rule is associated with a priority value that specifies its order of precedence in the event of a conflict. Some embodiments increase the priority value of an automation rule the longer it remains stored in the rule database; because automation rules can be dynamically updated, rules that have been renewed in the database have priority over newer rules because they have been validated over time. This can be achieved by increasing the priority value of a rule each time it is revalidated and/or after a certain time period since it was added to the database. A user’s feedback may also be used to update the priority value of a rule. For example, the priority value is decreased if a user modifies the state of a controllable device to undo the modification caused by the rule being activated, and the priority value is increased if the controllable device state is not altered after the rule is activated. In another example, a user provides an indication (e.g., via a user device) confirming whether an activated automation rule was correct. Some embodiments prevent a rule from being activated if its priority value is below a threshold.
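The priority-based conflict resolution and feedback adjustment described here could be sketched as follows; the priority values, the 0.1 adjustment step, and the minimum activation threshold are illustrative assumptions.

```python
MIN_PRIORITY = 0.2  # rules below this priority are never activated (assumed)

# Two conflicting rules triggered by the same state; priorities are invented.
triggered = [
    {"id": "lamp_on",  "device": "desk_lamp", "target": "on",  "priority": 0.9},
    {"id": "lamp_off", "device": "desk_lamp", "target": "off", "priority": 0.4},
]

def resolve(rules):
    """Group rules by device; the highest-priority eligible rule wins."""
    winners = {}
    for r in rules:
        if r["priority"] < MIN_PRIORITY:
            continue
        best = winners.get(r["device"])
        if best is None or r["priority"] > best["priority"]:
            winners[r["device"]] = r
    return list(winners.values())

def feedback(rule, user_undid_action: bool, step=0.1):
    """Lower priority when the user undoes the action, raise it otherwise."""
    rule["priority"] += -step if user_undid_action else step

for r in resolve(triggered):
    print(f"activate {r['id']}: {r['device']} -> {r['target']}")
feedback(triggered[0], user_undid_action=False)  # action kept: priority rises
print(triggered[0]["priority"])  # 1.0
```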

“At 506, the triggered automation rule that prevails after conflict resolution, if any, is activated. In some embodiments, among a group of conflicting rules, the automation rule with the highest priority value is activated. In some embodiments, activating an automation rule includes modifying the state of one or more controllable devices to the state specified by the activated rule. Feedback from users regarding the activation may be used to modify future activations of the rule.

“The invention is not limited to the details given in the above embodiments; it can be implemented in many other ways. The disclosed embodiments are illustrative only and are not intended to be restrictive.

Summary for “Automatically learning connected devices and controlling them”

“Network-connected devices (e.g. Internet of Things (i.e. IoT devices)) allow remote control of and automation of devices in an environment (e.g. home). These devices are not always fully autonomous. Users often have to operate the devices remotely, or create rules to simulate autonomous operations. IoT devices often have limited functionality and are not able to make many autonomous decisions. Even if it were possible to perform autonomous operations on a single device’s behalf, device functionality could be enhanced by sharing information between different devices. It is therefore necessary to automate the functionality of all network connected devices in an efficient and generalized manner.

The invention may be implemented in many ways. It can be used as an apparatus, a process, a system, a composition, a product of computer programming, and/or a CPU, such as one that executes instructions stored on or provided by a memory connected to the processor. These implementations and any other form of the invention can be called techniques in this specification. The invention allows for the possibility of altering the order of steps in disclosed processes. A component, such as a processor and a memory, described as being capable of performing a task can be implemented either as a general component that is temporarily set up to perform the task at a particular time or as a specific component that was manufactured to do the task. The term “processor” is used herein. The term “processor” refers to any one or more devices, circuits and/or processing cores that are designed to process data such as computer program instruction.

Below is a detailed description of some embodiments of the invention, along with accompanying figures that illustrate its principles. Although the invention is described with these embodiments in mind, it is not limited to them. The claims limit the scope of the invention, and the invention includes many alternatives, modifications, and equivalents. The following description provides a detailed understanding of the invention. These details are given for example purposes only. The invention can be used according to the claims without any or all of these details. To be clear, the technical material related to the invention that is well-known has not been described in detail. This is done in order not to obscure the invention.

“Controlling network-connected devices is disclosed. The first input is received from a plurality network connected devices and/or sensor. Data from a motion sensor (e.g. infrared motion sensors), a camera or microphone could be received. Received from a home is data from a motion sensor (e.g., infrared motion sensor), a camera, a microphone, tamper-resistant temperature sensor, etc. The sensor can detect a chemical characteristic or a physical characteristic (e.g. movement, switch position and temperature). The sensor can be integrated into a network-connected device (e.g. integrated in a thermostat network connected). The first input is used to determine the subject’s first state. A first probability is associated with the first state. A location and an action (e.g. sitting, sleeping, standing) of a subject can be associated with a first state. It is desirable to know the environment in which it occurs. The subject could be a human, an animal (e.g. a pet), or a robot. It is not possible to pinpoint the exact location or action of the subject with absolute certainty. However, the first probability can identify the probability that the first state associated the location and the action is correct based upon the first input. In some embodiments, the first valid state is the one identified. The first state, for example, is associated with the highest probability of all possible states that are associated with the input. The plurality of sensors receives a second input. The second input, for example, is received after the first and detects any state changes in the same subject. The second input is used to determine the second state. A second probability is associated with the second state. The second state could be a new state for the subject (e.g. person). In some cases, the second status may indicate the probability that the updated location or action of the subject is correct. A transition model links the first and second states, which indicates a likelihood (e.g. a probability) that the subject will transition to the second state. If a subject is found to be in the first condition, then the next possible state for the subject is determined based on that first condition. Based on the minimum of the transition model and second probability, a level of confidence that a second state corresponds with an actual state can be achieved. The second state is used to trigger a rule to change the behavior of a network-connected device. An example is that an IoT device can be switched on when the second state is met.

“FIG. “FIG. Devices 102 include one or more network-connected devices, including sensor devices and controllable device (e.g. IoT devices). Devices 102 can include a switch or a door lock, thermostat, light bulb, kitchen/home appliances, a camera and speaker, as well as a garage door opener, window treatment, fan, electrical outlet, light dimmer, irrigation system, and any other device that can be connected to a computer network. Devices 102 can include a switch and a camera as well as a motion detector, light detector, an accelerometer or an infrared sensor, smoke detectors, microphones, humidity detectors, door sensors, window sensors, water detectors, glass breakage detectors, and other sensors.

“Hub104 communicates with devices 102 via a wireless and/or wired connection. A device with devices 102 can communicate with hub 104 using a WiFi or Bluetooth connection. In some embodiments, device 102 and hub 104 can be deployed together. Devices 102 include devices that are deployed in homes. Hub 104 wirelessly connects to the devices inside the home using a short-range wireless signal. Hub 104, in some embodiments, facilitates communication and/or control between devices 102 and user device 108 or via network 110. User device 108 can connect to hub 104 by wireless (e.g. Bluetooth, Wifi etc.). To view and control information about devices 102, you can also access them via network 110. Another example is user device 108 connecting to network 110 to gain access to devices 102 via server 106 and hub 104. One or more devices/sensors of device 102 may report information to hub104, which in turn relays the information to server 110 via network 110. One or more of the devices/sensors of devices 102 can communicate directly with network 110, without needing to be routed through hub 104. A device may include a Wifi or cellular radio that allows direct access to network 110.

“User device 108 can be a laptop, smartphone, tablet, desktop, or mobile computer. It also includes a smartwatch and other electronic computing devices. Some embodiments provide information from user device108 to server 106 or hub 104 in order to allow automation of devices 102. As an example, user device 108 can provide a GPS location to server 106 and/or hub 104 as inputs to trigger and determine device automation rules. User device 108 may receive information about a device of devices 101 in some instances. A hub 104 and/or server106 can provide a user device with a detected alert from a device of device 102, for example. Server 106 could be part of a cloud computing infrastructure that processes and/or stores information related to devices 102 in order to allow control and management for devices 102. Hub 104 may be used as an interface or control hub to control devices 102 in some embodiments. FIG. 1.A. Storage elements may be included in any of the components shown, but they are not explicitly shown.

“Some embodiments of the system shown at FIG. 1A, machine learning is used to learn how users interact with one or more controllable device of devices 102. This data can be used to adjust controllable device settings automatically. Some embodiments automatically build powerful statistical models that predict user preferences and behavior. To learn more about the user’s activities and preferences, devices 102 and 108 can be used as sensors. Based on this data, controllable devices can be managed and controlled automatically by devices 102.

“In some embodiments machine learning (e.g. local and/or cloud-based) is used to combine input from multiple devices/sensors (e.g. devices 102) to create a unique model that describes a user’s activities and behaviors. Devices 102 can be used to control a home environment remotely or locally. Devices 102 can collect data such as the presence and movement of people in an environment and measurements of environmental properties like light, temperature, humidity, motion of subjects and video of various locations within the environment. Some embodiments allow for learning to control controllable devices automatically and adjust controllable devices to meet the user’s preferences. Sensors detect environmental conditions that can trigger changes in controllable device states, such as behavior and interaction with switches and thermostats. Devices 102 are then commanded autonomously according to the user’s preferences.

“In some embodiments machine learning is performed by a local hub for a specific environment. Hub 104, for example, learns for users of its deployed environment which includes devices 102. Some embodiments allow for machine learning to be performed remotely, such as cloud-based, instead of being done at a hub. Server 106 performs machine learning in the environment of hub 1004. A backend server may learn from different environments in order to automatically configure each environment.

“In certain embodiments, the hub 104 or server 106 contains one or more inference engine(s) that convert sensor data from one or more devices 102 into state representations (e.g. state of a person?s behavior, location, and so forth). The inference engine uses machine learning algorithms that are based on deep and statistical learning techniques. Hub 104 and/or Server 106 may include a “vision engine” in some embodiments. (e.g. ML Inference) which receives images/videos from one or more cameras and analyzes them using vision algorithms to infer a subject’s (e.g. human, pet, etc.). Location, behavior, and activities (e.g. spatial and motion features) are all taken into account. Some embodiments analyze camera video data to determine hand gestures that allow a person to control connected devices to a desired status. One embodiment teaches gestures through statistical demonstrations. One embodiment uses deep learning to learn the gestures. Some embodiments combine vision data with data from other sensors to create a semantic representation that contains information about the person’s activities and preferences.

“In some embodiments, such as the one shown, data is sent using an event driven database architecture. The sensor data is converted first into a feature vector before streaming. Some embodiments clean the sensor data before streaming using statistical learning models that model different types of noise.

“In some embodiments hub 104 or server 106 include one or more?rule engine? These use a detected state in order to trigger one or more automatic determined automation rules. The rule engine is composed of pre-conditions as well as post-conditions. If a pre-condition is found, a rule triggers to place one or more network-connected controllable devices in the post condition state. If the system detects that someone is reading a book at a desk, it will turn on the reading lights.

Server 106 may store various types of information which can be used to determine automated rules in some embodiments. Sensor data at different points in time and device control events of devices102 (e.g. pressing a light switch or changing the temperature of a thermostat) can be stored. are logged. By learning the preferences/configurations desired by users, the preferences/configurations may be recreated when the conditions in the environment match the previously recorded conditions. In some embodiments, statistical models, such as the transition model, are used to determine the control actions of devices. When a network-connected light bulb is turned on/off, the associated environmental conditions (e.g. user location, action time, day etc.) are determined. The associated status of controllable devices are also stored. Some embodiments allow for the identification of which state vector corresponds to which control action of controllable device. A learning method may be used to determine the weight of a function that maps state vector value to state change of controllable device. Deep learning is used in some embodiments to learn a multi-staged, non-linear mapping between the state vector and the state changes of controllable device states. Some embodiments provide feedback to the user about an action taken by an automatically triggered rule. Reinforcement learning is used to modify the control rule automatically.

“In certain embodiments, alarming or critical events can be detected and stored. A break-in can be detected as an uncommon event and output alerts are generated, including an alert sent to a smartphone, activation of a siren, and an alert that is sent to the phone. Another example is a baby climbing out from under a crib. The detection of unusual events can be automated, unlike manually set alarms. This is possible because the statistical modeling of events with sensor data allows for the detection of unexpected events.

“In some embodiments, an output response (e.g. sound, user alerts, light alerts, etc.) is generated in response to an event being detected. “In some embodiments, an output response (e.g., sound or user alert, light alert, wearable alert) is generated in response to detecting an event. If a stove is left on for more than a certain amount of time, or if a human subject is detected, it will be turned off automatically and/or an alert generated. Another example is when there is a water leak, an alert is sent to the stove and the water valve is turned off. Another example is when a person falls and is not detected by the system, an automatic alert is sent to an emergency contact and/or authority. Another example is when a person is found in a living area during the morning, curtains will be opened automatically. A fan can also be turned on if humidity levels exceed a certain threshold. A humidifier can be automatically turned on/off to maintain the desired humidity. Another example is when the coffee maker is set to turn on at a predetermined time, it is automatically switched on. Another example is when the dishwasher is scheduled to run at lower energy rates. Another example is when light intensity is adjusted according to the time of day. For instance, lights are turned on at a lower intensity when someone wakes up in middle of the night to go to the bathroom. Another example is music being automatically turned on when a subject is eating dinner. Another example is when ambient temperature and humidity exceed threshold levels and the subject is seated, a fan will automatically turn on.”

“One or more of these may be included in network 110. They include a direct or indirect communication connection, mobile communications network, Internet, intranet and Wide Area Network, Storage Area Network, wireless network, cellular network, and any other method of connecting multiple systems, components or storage devices together. FIG. 1A may show additional instances of any component. 1A could also exist. Multiple hubs can be deployed and used in the same environment. Multiple user devices can be used to receive user information and/or control devices in the same environment. Server 106 could be one of many distributed servers that process data from network connected devices. Some components may not be shown in FIG. 1A could also exist in some embodiments.”

“FIG. “FIG. 1B” is a diagram that illustrates the interactions between components used to control network devices. Rules engine 122 is communicated with cloud-based machine learning module 120. Rules engine 122 receives inputs from sensors 124, and controls devices 122. One or more modules of cloud-based machine intelligence module 120 are discussed in conjunction with FIG. 1A. 1A. Hub 104 and/or Server 106 are shown in FIG. 1A. Devices 102 of FIG. may include sensors 124 and/or control devices 126. 1A.”

“FIG. 1C is a diagram that illustrates an embodiment of sub-components of a system for controlling network devices automatically. Components 130 of FIG. 1C could be included in FIG. 106. 1A. FIG. 1C could be included in hub number 104 of FIG. 1A.”

“FIG. “FIG. FIG. The process of FIG. 1A.”

“At 202, sensor data is received. In some embodiments, received sensor data may include data from one or more sensors of devices 102 in FIG. 1A. Data from switches, cameras, motion detectors, light detectors, infrared detectors, thermometers, smoke detectors, air quality sensors, temperature detectors, microphones, humidity detectors, door sensors, window sensors, water detectors, glass breakage detectors, and other sensors monitoring the environment are all received. Some embodiments include data from a user device (e.g. user device 108 in FIG. 1A). Data from the user device’s sensor (e.g. location sensor, GPS and accelerometer, heart rate sensor or orientation sensor, microphone, gyroscope etc.) can be received. Received. Another example is that the data received from the device may include status data and/or user-specified data. In some cases, sensor data may include data from one or more controllable device. A status, configuration, functional state, parameter and/or other data from a controllable device 102 of FIG. 1A is received. The sensor data may be received periodically in some embodiments. A sensor device may send periodically the current detected sensor data, for example. Some embodiments allow for dynamic reception of the sensor data. The sensor data may be received, for example, when the sensor data has been detected. Some embodiments allow for the reception of multiple sensor devices. Some embodiments receive the sensor data at a hub (e.g. hub 104 in FIG. 1A), and the sensor data are shared/sent to another hub or sent to the cloud computing for processing (e.g. sent to server 106 in FIG. 1A).”

“At 204, machine learning is utilized to identify one or more states. In some embodiments, machine learning is performed using the received sensor data. In some embodiments, the machine learning includes utilizing a (e.g., recursive) hidden Markov model and/or expectation maximization. Each state may be associated with discrete types of information that can be detected using machine learning. For example, an activity that a person is currently engaged in can be detected by analyzing camera video/image data to identify the person and the activity being performed. In some embodiments, a state is identified for each subject. In some embodiments, the sensor data is reduced to one or more specific states. This allows the sensor data to be reduced to meaningful variable values that can be utilized to determine one or more automation rules. In some embodiments, a state is represented as a state vector. For example, the state vector includes a set of values. The state vector may include one or more of the following: a time value, a weather forecast, a date value, and any other data related to time and/or environmental conditions.”

“In many cases, it is difficult to identify a particular state with absolute certainty. For example, it may be difficult to determine the exact location and activity of a subject from the sensor data. In some embodiments, a likelihood/probability that a determined state is correct is determined. For example, among many possible states, the probability that each candidate state is correct is determined. In some embodiments, machine learning, such as statistical and/or deep learning, is utilized to determine the probability. For example, statistical and/or deep learning models of correlations between sensor data and a potential state, between a previous state and the potential state (e.g., a transition model), and of associations between different states/state components are built and utilized to determine an overall likelihood for each candidate state.”

“In some embodiments, observations over time are used as statistical inputs to calculate a state vector that changes over time. An inference engine may output the state vector, which includes information about the presence and activities of detected subjects. One embodiment models the dynamic processes and motions of subjects within the house, such as humans and pets. In one embodiment, deep learning is utilized to discover non-linear relationships. Some embodiments use sounds captured by microphone sensors to determine the locations and activities of people. One embodiment uses statistical models to link sound to the state vector.”

“In some embodiments, the state vector includes estimated information about a subject, such as an activity state (e.g., a general activity state such as reading, sleeping, or cooking, and a detailed activity state such as opening a book or placing a book down). The state vector may include other information, such as weather conditions, time of day, and location. The state vector may also include the states of one or more controllable devices. In some embodiments, the state vector includes a list of subjects present in each region (e.g., room) of an environment. A learning method may learn the weights of a function that maps sensor data values to the state vector. In some embodiments, deep learning is utilized to learn a multi-staged, non-linear mapping from the sensor data to the state vector.”
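The sketch below gives one plausible concrete shape for such a state vector; the field names are assumptions for illustration, not the patent's definition.

```python
# Hypothetical sketch of a state vector: estimated subject information plus
# context (time, weather) and controllable device states.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class StateVector:
    subject_id: str                 # e.g., "person-1"
    coarse_location: str            # e.g., "living_room"
    fine_location: str              # e.g., "reading_desk"
    general_activity: str           # e.g., "reading"
    detailed_activity: str          # e.g., "opening-book"
    time_of_day: float              # hours since midnight
    weather: str                    # e.g., "rainy"
    device_states: Dict[str, str] = field(default_factory=dict)

sv = StateVector(
    subject_id="person-1",
    coarse_location="living_room",
    fine_location="reading_desk",
    general_activity="reading",
    detailed_activity="opening-book",
    time_of_day=21.5,
    weather="clear",
    device_states={"desk_lamp": "on", "tv": "off"},
)
print(sv)
```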

“In some embodiments, data from the sensors and/or cameras is analyzed to predict the future activities and whereabouts of a person. Predictive actions can be taken based on the activity and location predictions. For example, if a user is moving towards a reading desk, it can be predicted that the user will read at the desk, and a network-connected light can be turned on before the user begins reading.”
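One way to realize this kind of prediction is to extrapolate a subject's recent positions toward known points of interest and act pre-emptively. The sketch below does this with straight-line extrapolation, which is an assumed simplification rather than the patent's method; the point-of-interest map and track are invented.

```python
# Hypothetical sketch: predict that a subject heading toward the reading
# desk will read there, and switch a light on pre-emptively.

def predict_destination(track, points_of_interest, step=1.0):
    """Extrapolate the last movement vector and return the nearest POI."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    fx, fy = x1 + step * (x1 - x0), y1 + step * (y1 - y0)  # projected position
    return min(points_of_interest,
               key=lambda p: (points_of_interest[p][0] - fx) ** 2
                           + (points_of_interest[p][1] - fy) ** 2)

POIS = {"reading_desk": (5.0, 2.0), "sofa": (0.0, 4.0)}
track = [(1.0, 2.0), (3.0, 2.0)]  # recent positions, moving toward the desk

if predict_destination(track, POIS) == "reading_desk":
    print("command: desk_lamp -> on (before the user sits down)")
```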

“At 206, one or more automation rules are discovered based on the identified state. For example, once it is determined that an identified state is correlated with a particular controllable device state/status/action, a rule is created that places the controllable device in the associated state/status/action when the identified state is detected. An automation rule may be dynamically created and/or updated when a correlation between a state and a controllable device state/status/action is identified. In some embodiments, an automation rule is generated by identifying a correlation between a plurality of states or a range of state values and a controllable device state/status/action. In some embodiments, the probability measure determined for each state is utilized in determining the correlations and/or automation rules. In some embodiments, a history of determined states with associated probability values and co-occurring controllable device states/statuses/actions is stored and analyzed using machine learning (e.g., statistical and/or deep learning) to discover correlations. If a measure of correlation exceeds a threshold value, a corresponding automation rule is created and/or updated. Automation rules may be continually updated as new correlations are discovered.”

“In some embodiments, the rules for controlling the devices are learned automatically. One embodiment uses reinforcement learning to learn a control policy. In one embodiment, execution of the policy is performed using proportional-derivative controllers. Based on sensor information and a triggering state vector, the rules engine takes actions (e.g., changing the state of devices). One embodiment modifies the parameters of a machine learning algorithm using online stochastic gradient algorithms. The rules engine may also take into account data from user devices, web services, weather services, and calendar events in order to learn and/or trigger rules.”
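As one illustration of an online stochastic gradient update, the sketch below adjusts the weights of a logistic "should this rule fire?" model after each piece of user feedback. The feature encoding, learning rate, and model choice are assumptions for illustration; the patent does not prescribe this particular model.

```python
# Hypothetical sketch: online stochastic gradient update of a logistic
# model that scores whether a rule should fire for a given state vector.

import math

weights = [0.0, 0.0, 0.0]  # one weight per state feature

def score(features):
    """Probability that the rule should fire, via logistic regression."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def sgd_update(features, user_approved, lr=0.1):
    """One online gradient step from a single piece of user feedback."""
    target = 1.0 if user_approved else 0.0
    error = score(features) - target
    for i, x in enumerate(features):
        weights[i] -= lr * error * x

# Example: the user undid the action, so treat the firing as a negative example.
state_features = [1.0, 0.3, 0.8]   # e.g., [bias, humidity, seated]
sgd_update(state_features, user_approved=False)
print(weights)
```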

“At 208, an automatically discovered rule is invoked based on a determined state. For example, a triggering condition of the automatically discovered rule is the identified state, and the rule is invoked to modify a state/status/function/action of a controllable device as specified by the rule. In some embodiments, the triggering condition of the rule is a plurality of states or a range of state values. In some embodiments, multiple rules may be applied/triggered based on the same state. In some embodiments, a rule that conflicts with another rule may be triggered. For example, one rule may specify that a light switch is to be turned on while another rule specifies that the light switch is to be turned off. Each rule may be associated with a preference value that indicates the degree to which the rule is preferred over other rules, in order to resolve conflicts between rules. In some embodiments, a user is able to provide feedback regarding an action of an automatically triggered rule, and this feedback is utilized in reinforcement learning to modify the control rules.”

“FIG. 3 is a flowchart illustrating an embodiment of a process for identifying a state. The process of FIG. 3 may be implemented at least in part on hub 104 and/or server 106 of FIG. 1A. In some embodiments, at least a portion of the process of FIG. 3 is included in 204 of FIG. 2. In some embodiments, the process of FIG. 3 is repeated periodically. In some embodiments, the process of FIG. 3 is performed dynamically. For example, the process of FIG. 3 is performed when new sensor data is received.”

“At 302, candidate states of an actual state are identified. For example, candidate states corresponding to newly received sensor data are identified. In some embodiments, the candidate states are states that could potentially represent the current state of a subject (e.g., human, animal, etc.). Because it may be difficult to determine the exact current state (e.g., location and activity) of a subject from the sensor data, candidate states that could correspond to the sensor data are identified.”

“In some embodiments, determining the candidate states includes identifying all possible states that could be associated with the sensor data. For example, all predefined activities of a subject that could be detected from camera data are identified. In some embodiments, determining the candidate states includes identifying only the most likely candidate states. For example, rather than identifying all possible states, only the most likely candidate states are identified. In some embodiments, the candidate states are identified by analyzing the sensor data received at 202 of FIG. 2. In some embodiments, determining the candidate states includes identifying a subject associated with the newly received sensor data. In some embodiments, the most likely candidate states are identified based on a previous state. For example, if a previous state identifies a location of a subject, only certain states (e.g., states associated with locations near the location of the previous state) are eligible to be the current state, and these candidate states are identified based on proximity to the previous location.”
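The sketch below illustrates one way to restrict candidate states using a previous location: only rooms adjacent to (or equal to) the last known room are kept as candidates. The adjacency map is a hypothetical floor plan for illustration.

```python
# Hypothetical sketch: prune candidate states to those reachable from the
# previously identified location of the subject.

ADJACENT_ROOMS = {          # assumed floor plan for illustration
    "bedroom": {"hallway"},
    "hallway": {"bedroom", "kitchen", "living_room"},
    "kitchen": {"hallway"},
    "living_room": {"hallway"},
}

def candidate_locations(previous_room: str) -> set[str]:
    """A subject can stay put or move to a directly connected room."""
    return {previous_room} | ADJACENT_ROOMS.get(previous_room, set())

print(candidate_locations("bedroom"))   # {'bedroom', 'hallway'}
```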

“In some embodiments, each state includes a plurality of state components. For example, each state may include an identifier of a subject, whether the subject is present in an environment, a type of the subject (e.g., human vs. pet, or a specific individual), a coarse location of the subject (e.g., which room of a house/building), a specific location within the coarse location (e.g., on the bed of a bedroom), a coarse activity (e.g., reading), and a specific activity (e.g., opening a book). In some embodiments, each candidate state includes a state of a controllable device. In some embodiments, an activity state of a subject is one of a plurality of predefined activities that are able to be detected.”

“At 304, for each candidate state, a likelihood that the received sensor data corresponds to the candidate state is determined. For example, a probability that the received sensor data corresponds to the candidate state is determined. In some embodiments, the likelihood indicates the probability that the candidate state is the actual state given the received sensor data. In some embodiments, this likelihood is determined using machine learning. For example, statistical and/or deep learning processing may be utilized to determine the probabilities that different sensor data is associated with different states. In some embodiments, a predetermined algorithm is utilized to determine the likelihood that the candidate state corresponds to the received sensor data. For example, a computer vision pattern recognition algorithm is utilized to analyze camera sensor data, and the algorithm provides the likelihood.”

“At 306, for each candidate state, a likelihood that the candidate state is the next state following a previously identified state is determined. For example, a probability that the candidate state is the actual state given a previously identified state of a subject is determined. In some embodiments, this likelihood is determined using machine learning. For example, statistical and/or deep learning processing may be utilized to determine the transition probability of each candidate state. In one example, each room of a house includes a motion detector sensor, and machine learning is utilized to automatically determine the relative positions of the rooms of the house by observing the patterns of sensor triggers as subjects move between rooms. Once the connections between the rooms are known, the likelihood that a subject in one room will next be located in a connected room can be determined. For example, given a previous state identifying a location of a subject, candidate states whose locations are reachable from that location are assigned higher transition likelihoods.”
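A simple way to obtain such a transition model is to count consecutive room-level motion sensor triggers and normalize the counts into probabilities, as sketched below; the trigger log is invented for illustration.

```python
# Hypothetical sketch: estimate room-to-room transition probabilities from
# an observed sequence of motion sensor triggers.

from collections import Counter, defaultdict

triggers = ["bedroom", "hallway", "kitchen", "hallway",
            "living_room", "hallway", "bedroom"]  # assumed observation log

counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(triggers, triggers[1:]):
    counts[prev][nxt] += 1  # count each observed room-to-room move

# Normalize counts into a transition probability table.
transition = {room: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
              for room, nxts in counts.items()}

print(transition["hallway"])  # {'kitchen': 0.33..., 'living_room': 0.33..., 'bedroom': 0.33...}
```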

“At 308, for each candidate state, a concurrent state component correlation is determined. Certain candidate state components may be more likely than others to be included together in the correct state. In some embodiments, determining the concurrent state component correlation includes determining a probability that a candidate state component is included in the actual/correct state given another candidate state component. For example, a candidate state includes a location component and an activity component, and the probability that the activity component of the candidate state is included in the correct/actual state given the location component of the candidate state is determined. In some embodiments, determining the concurrent state component correlation includes determining a plurality of probabilities associated with different combinations of state components of the candidate state.”

“At 310, for each candidate state, an overall likelihood that the candidate state is the actual state is determined. For example, an overall probability that the candidate state is the correct state of a subject is calculated. To calculate the overall likelihood, one or more of the probabilities determined at 304, 306, and 308 may be multiplied together. In some embodiments, the candidate states are sorted by their overall likelihoods, and the candidate state with the highest overall likelihood is selected as the correct/actual state.”
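Putting steps 304 through 310 together, the sketch below scores each candidate state as the product of its sensor (emission) likelihood, its transition likelihood from the previous state, and its component co-occurrence likelihood, then picks the argmax. The candidate states and numbers are invented for illustration.

```python
# Hypothetical sketch of steps 304-310: combine per-candidate probabilities
# by multiplication and select the most likely state.

candidates = {
    # state: (P(sensor data | state), P(state | previous state),
    #         P(state components co-occur))
    ("kitchen", "cooking"): (0.6, 0.5, 0.9),
    ("kitchen", "reading"): (0.3, 0.5, 0.2),
    ("hallway", "walking"): (0.1, 0.4, 0.8),
}

def overall_likelihood(probs):
    emission, transition, co_occurrence = probs
    return emission * transition * co_occurrence

best = max(candidates, key=lambda s: overall_likelihood(candidates[s]))
print(best, overall_likelihood(candidates[best]))  # ('kitchen', 'cooking') 0.27
```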

“FIG. 4 is a flowchart illustrating an embodiment of a process for automatically discovering automation rules. The process of FIG. 4 may be implemented at least in part on hub 104 and/or server 106 of FIG. 1A. In some embodiments, at least a portion of the process of FIG. 4 is included in 206 of FIG. 2. In some embodiments, the process of FIG. 4 is repeated periodically. In some embodiments, the process of FIG. 4 is performed dynamically. For example, the process of FIG. 4 is performed when a new state is identified (e.g., at 310 of FIG. 3).”

“At 402, identified states are correlated with corresponding controllable device states. For example, a state identified at 310 of FIG. 3 as the actual state of a subject is correlated with a corresponding status, configuration, functional state, parameter, and/or other data of a controllable device of devices 102 of FIG. 1A. An example of this is determining pairings between an identified state (e.g., a state vector) and a corresponding controllable device state (e.g., status, configuration, functional state, parameter, and/or other data of the controllable device). In some embodiments, machine learning is utilized to identify correlations between identified states and controllable device states. For example, deep learning and/or statistical techniques are utilized to find temporal correlations between the identified states and the controllable device states. In some embodiments, the identified states include state vectors, and the state vectors may include a time value, a weather forecast, a date value, and any other data related to time and/or environmental conditions. In some embodiments, a historical probability that an identified state corresponds to a particular controllable device state is determined.”
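The sketch below shows one simple realization of this step: over a history of (identified state, observed device state) pairs, estimate the historical probability of each device state given each identified state. The history is fabricated for illustration and stands in for whatever statistical or deep learning technique an embodiment actually uses.

```python
# Hypothetical sketch: estimate P(device state | identified state) from a
# stored history of co-occurring observations.

from collections import Counter, defaultdict

history = [  # (identified state, observed controllable device state)
    (("living_room", "reading", "evening"), ("desk_lamp", "on")),
    (("living_room", "reading", "evening"), ("desk_lamp", "on")),
    (("living_room", "reading", "evening"), ("desk_lamp", "off")),
    (("bedroom", "sleeping", "night"), ("desk_lamp", "off")),
]

cooccur: dict[tuple, Counter] = defaultdict(Counter)
for state, device_state in history:
    cooccur[state][device_state] += 1

def historical_probability(state, device_state):
    """Fraction of observations of `state` that co-occurred with `device_state`."""
    total = sum(cooccur[state].values())
    return cooccur[state][device_state] / total if total else 0.0

p = historical_probability(("living_room", "reading", "evening"),
                           ("desk_lamp", "on"))
print(p)  # 0.66...
```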

“At 404, clusters of one or more identified states are correlated with corresponding controllable device states. In some embodiments, similar identified states (e.g., states with state values within a range) are clustered together and correlated with a corresponding controllable device state. In one example, identified states associated with physical locations within close range of each other are clustered together, and this cluster of states is correlated with a corresponding controllable device state. In some embodiments, a cluster probability that the states of a cluster correspond to the same controllable device state is determined. For example, a cluster probability identifies the historical probability that any state included in the cluster corresponds to the controllable device state. In some embodiments, the cluster probability is determined by multiplying together the individual probabilities (e.g., determined using the process of FIG. 3) of each state of the cluster.”
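One plausible clustering criterion, sketched below, groups identified states whose physical locations lie within a distance threshold and multiplies their individual probabilities into a cluster probability, per the description above. The distance threshold and coordinates are assumptions for illustration.

```python
# Hypothetical sketch: cluster identified states whose locations are close
# together, then combine their individual probabilities by multiplication.

import math

states = [  # (state name, (x, y) location, individual probability)
    ("desk-sitting", (5.0, 2.0), 0.9),
    ("desk-leaning", (5.2, 2.1), 0.8),
    ("sofa-sitting", (0.0, 4.0), 0.7),
]

def cluster_near(anchor, radius=0.5):
    """Keep the states whose locations are within `radius` of the anchor."""
    return [s for s in states if math.dist(anchor, s[1]) <= radius]

cluster = cluster_near((5.0, 2.0))
cluster_probability = math.prod(p for _, _, p in cluster)
print([name for name, _, _ in cluster], cluster_probability)  # 0.72
```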

“At 406, in the event a historical probability meets a threshold, an associated automation rule is created. For example, if the historical probability determined at 402 and/or the cluster probability determined at 404 exceeds a threshold value, an associated automation rule is created and stored in a rule database. In some embodiments, the automation rule specifies that if an identified state (e.g., a state included in the cluster of identified states) is detected, the corresponding controllable device state is to be recreated/implemented (e.g., the state of the corresponding controllable device(s) is modified to be the controllable device state specified by the rule). In some embodiments, the automation rule is updated periodically. For example, an automation rule is associated with an expiration time, and the rule is renewed or deleted upon expiration.”

“FIG. 5 is a flowchart illustrating an embodiment of a process for invoking an automation rule. The process of FIG. 5 may be implemented at least in part on hub 104 and/or server 106 of FIG. 1A. In some embodiments, at least a portion of the process of FIG. 5 is included in 208 of FIG. 2. In some embodiments, the process of FIG. 5 is performed periodically. In some embodiments, the process of FIG. 5 is performed dynamically. For example, the process of FIG. 5 is performed when a new state is identified (e.g., at 310 of FIG. 3).”

“At 502, a triggering state of an automation rule is detected. In some embodiments, the automation rule was created at 406 of FIG. 4. In some embodiments, the automation rule is preconfigured. For example, a programmer manually configures the automation rule. In some embodiments, the triggering state is a state identified at 310 of FIG. 3. In some embodiments, detecting the triggering state of the automation rule includes determining whether an identified state belongs to a group of states that triggers the rule. In some embodiments, once a state is identified as the actual state (e.g., at 310 of FIG. 3), a database of automation rules is searched/traversed to identify any automation rules that are triggered by the identified actual state. In some embodiments, an identified state only triggers an automation rule if the probability (e.g., determined at 310 of FIG. 3) that the identified state is the actual state meets a threshold. In some embodiments, the triggering state includes a state vector that may include a time value, a weather forecast, a date value, and any other data related to time and/or environmental conditions.”

“At 504, it is determined whether any conflicting automation rules are triggered at the same time. For example, two or more automation rules may be triggered because the identified state triggers each of them, yet the rules specify conflicting controllable device states that cannot all be implemented at the same time (e.g., one rule specifies an "on" state of a device while another rule specifies an "off" state of the same device). In some embodiments, the conflict is resolved. For example, each automation rule is associated with a priority value, and the priority values specify the priority ordering used in the event of a conflict. In some embodiments, the priority value of an automation rule is increased the longer the rule remains stored in a rule database. For example, because automation rules may be dynamically updated, rules that have been renewed in the database are given priority over newer rules because they have been validated over a longer period of time. This may be implemented by increasing the priority value of a rule each time it is revalidated and/or after a certain amount of time has passed since it was added to the database. In some embodiments, user feedback is utilized to update the priority value of a rule. For example, the priority value is decreased if a user modifies the state of a controllable device to undo a modification caused by an activation of the rule. In another example, the priority value is increased if the controllable device state is not altered by a user after an activation of the rule. In another example, a user provides an indication (e.g., using a user device) confirming whether an activated automation rule was correctly activated. In some embodiments, a rule is prevented from being activated if its priority value is below a threshold.”

“At 506, a triggered automation rule whose conflicts, if any, have been resolved is activated. In some embodiments, among a group of conflicting triggered rules, the rule with the highest priority value is activated. In some embodiments, activating an automation rule includes changing the states of one or more controllable devices to the states specified by the activated rule. User feedback regarding the activation may be utilized to modify future activations of the rule.”
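Tying steps 502 through 506 together, the sketch below looks up the rules triggered by an identified state, resolves conflicts by keeping only the highest-priority rule per device, and activates the survivors. The rule fields, the priority scheme, and the rule database are hypothetical simplifications of the description above.

```python
# Hypothetical sketch of steps 502-506: find triggered rules, resolve
# conflicts per device by priority, then activate the winners.

from dataclasses import dataclass

@dataclass
class AutomationRule:
    trigger_state: str     # identified state that triggers the rule
    device: str            # controllable device the rule modifies
    target_state: str      # desired device state, e.g., "on"/"off"
    priority: float        # higher wins when rules conflict

RULE_DB = [
    AutomationRule("reading-at-desk", "desk_lamp", "on", priority=3.0),
    AutomationRule("reading-at-desk", "desk_lamp", "off", priority=1.0),
    AutomationRule("reading-at-desk", "tv", "off", priority=2.0),
]

def invoke(identified_state: str, min_priority: float = 0.5) -> None:
    # 502: detect which stored rules are triggered by the identified state
    # (rules below the priority threshold are prevented from activating).
    triggered = [r for r in RULE_DB
                 if r.trigger_state == identified_state
                 and r.priority >= min_priority]
    # 504: resolve conflicts by keeping the highest-priority rule per device.
    winners: dict[str, AutomationRule] = {}
    for rule in triggered:
        if rule.device not in winners or rule.priority > winners[rule.device].priority:
            winners[rule.device] = rule
    # 506: activate the surviving rules.
    for rule in winners.values():
        print(f"command: {rule.device} -> {rule.target_state}")

invoke("reading-at-desk")  # desk_lamp -> on, tv -> off
```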

“Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.”
