Consumer Products – Piali De, Hugh A. Stoddart, SENSCIO SYSTEMS

Abstract for “Systems and Methods for Semantic Reasoning in Personal Health Management”

A personal illness management system includes an extended semantic model of a personal illness knowledge domain, a semantic knowledge base for personal illness management, and an inference engine. The extended semantic model of the health care knowledge domain includes existing concepts that relate to personal illness management, existing relationships between those concepts, and inference logic embedded within each concept. The semantic knowledge base for personal illness management is separate from the extended semantic model and contains existing nodes and existing links. The existing nodes are instances of the existing concepts; the existing links are instances of the existing relationships between the existing concepts. The inference engine, which is independent of the knowledge domain, populates the semantic knowledge base with instances of the existing concepts and the existing relationships, following the inference logic embedded in each existing concept.

Background for “Systems and Methods for Semantic Reasoning in Personal Health Management”

Currently, the World Wide Web is expanding rapidly, and integrated sensors mean that more information is available than the human brain can comprehend. Systems and methods that automatically process data are required to generate new insights and knowledge.

A variety of approaches have been used to develop systems and methods for representing knowledge. A semantic network is one such approach. A semantic network is a structure that encodes knowledge about any domain, such as financial transactions, military operations, organizational relationships, or medical conditions and their treatment.

This invention addresses the challenges of homebound patients with chronic, complex conditions managing their own health at home. Multiple factors contribute to this problem: a patient's limited cognitive abilities; a lack of understanding of the situation in the home; and a lack of information for the health care team about the effectiveness of treatment. The invention incorporates semantic reasoning to automatically derive insights from facts. The computer-based methods and the computer system are used to automate tasks, improve situational understanding, and activate people to take the right actions at the right moment.

In general, the invention provides a personal illness management system. It includes an extended semantic model of a health care knowledge domain, a semantic knowledge base for personal illness management, and an inference engine. The extended semantic model of the health care knowledge domain includes existing concepts that relate to personal illness management and existing relationships between them, with inference logic embedded within each concept. Each existing concept has associated properties. The embedded inference logic, which infers values for the associated properties, also specifies the conditions and processes for drawing inferences about existing downstream concepts that are connected via existing relationships. The semantic knowledge base for personal illness management is separate from the extended semantic model. It includes existing nodes and existing links: the existing nodes are instances of the existing concepts, and the existing links are instances of the existing relationships between the existing concepts. The inference engine, which is independent of the knowledge domain, populates the semantic knowledge base with instances of the existing concepts and the existing relationships, following the inference logic embedded in each concept. The inference engine adds new nodes to the semantic knowledge base by first receiving external data, then creating instances of fact nodes with associated links representing that external data, and then adding and updating downstream insight nodes and links. A computing system is also included, with at least one processor to run the inference engine using the extended semantic model's inference logic and to host the semantic knowledge base. A user interface is also included in the system. It allows patients and caregivers to send queries to the semantic knowledge base, receive the query results, and obtain decision support.
The system includes computer-implemented instructions for semantic reasoning to activate the patient's behavior in order to manage his or her illness.

These are implementations of this aspect of the invention. The system also includes interactive devices, which include at least one of a caregiver station, a personal communication device, a wearable sensor, or an external sensor. The caregiver and patient send data to the semantic knowledge base, and receive query results from it, through at least one interactive device. The care station and the care portal are the interfaces through which patients and caregivers send queries to the semantic knowledge base and receive decision support. The personal communication device is a wearable device that transmits movement data to the inference engine and notifies patients when they need to be reminded or alerted about a particular action. The computer-implemented instructions for semantic reasoning for behavioral activation include scheduling and structuring activities of the patient, monitoring and analyzing data from those activities, and notifying the patient when an alert or reminder is needed. Scheduling and structuring the patient's activities includes instructions for taking medication, meal preparation, exercise, monitoring vital signs, checking for symptoms, and attending clinical appointments. Monitoring and analyzing data from the patient's scheduled activities includes monitoring the patient's vital signs, symptoms, medication adherence, meals, exercise, attendance at clinical appointments, and responses to social interactions. Inferences can be drawn about the quality of the patient's adherence with respect to taking medications, eating, exercising, attending clinical appointments, vital sign responses, symptom checks, and responses to social interactions. The extended semantic model includes concepts for patients, caregivers, and physicians. Existing relationships between the concepts indicate which caregiver is assigned to which patient and which physician is assigned to which patient.
Each concept includes notes such as name, gender, and age. The existing concepts of the extended semantic model also include concepts for providing illness management, including hospital discharge, care guide, medication guide, medicine supply, medicine bottle, medication adherence, and clinical appointment. The inference engine receives external data including patient care plan data, patient vital signs data, patient symptom data, patient movement data, and self-reported care plan adherence. The inference engine processes the external data and adds insight nodes that include the patient's daily care plan, reminders for care tasks, health monitoring, and additional reminders when there are issues with health or adherence. Each existing node contains existing notes that represent instances of the associated properties of the corresponding concept. Existing links have directionality: each originates at an upstream node and terminates at a downstream node. A data normalizer is also included in the system; it receives data and normalizes them to the existing concepts and relationships of the extended semantic model. A repertoire library is also part of the system. The repertoire library contains a collection of repertoire functions, and the unique identification number assigned to each repertoire function identifies it within the inference logic embedded in each concept of the extended semantic model. In the process of updating insight nodes, the inference engine automatically invokes the repertoire functions. There are five types of repertoire functions: resolver, context-gate, collector, operator, and qualifier. Each type of repertoire function plays a specific role in the inference process. The extended semantic model also includes downstream concepts and upstream concepts, which are linked via relationships.
Whenever an upstream concept is instanced as an upstream node, or the node is modified, each downstream concept is automatically considered for inference using the inference logic embedded in the corresponding concept. Each property contains values that are associated with an existing concept, or a chain linking to data within an upstream concept from a downstream concept. The inference engine uses the properties to implement the logic for instancing a link to a downstream concept or for instancing properties of the downstream concept. An extensible meta-language is used to query the semantic knowledge base. The computing system includes a mechanism to store and retrieve knowledge from a semantic knowledge base that is distributed across multiple computing systems. The extended semantic model can be extended by adding new fact and insight concepts or another extended semantic model. Each new insight concept must be connected to upstream concepts, which allows the system to specify the logic for adding or updating the new insight.

The invention also provides, in one aspect, a method of providing personal illness management. First, an extended semantic model of a health care knowledge domain is provided that includes existing concepts, existing relationships between them, and inference logic embedded within each concept. Each concept has associated properties. The embedded inference logic infers values for the associated properties and includes the conditions and processes for drawing inferences about an existing downstream concept that is linked to another concept via an existing relationship. Next, a semantic knowledge base for personal illness management is provided. It is distinct from the extended semantic model and includes existing nodes and existing links: the existing nodes are instances of the existing concepts, and the existing links are instances of the existing relationships between them. Next, an inference engine is provided that is independent of the knowledge domain and populates the semantic knowledge base with instances of the existing concepts and the existing relationships, following the inference logic embedded in each concept. The inference engine adds new nodes to the semantic knowledge base by first receiving external data, then creating instances of fact nodes with associated links representing the external data, and then adding and updating downstream insight nodes and links. Next, a computing system is provided that includes at least one processor to run the inference engine using the extended semantic model's inference logic and to host the semantic knowledge base. Next, a user interface is provided that allows patients and caregivers to send queries to the semantic knowledge base, receive the query results, and obtain decision support. Finally, computer-implemented instructions for semantic reasoning are provided to activate the patient's behavior in order to manage his or her illness.

The present invention provides a semantic reasoning computer system and a computer-based method of semantic reasoning for automatically drawing insights and conclusions from facts using a reasoning framework. The semantic reasoning system contains a semantic model, a semantic knowledge base, and an inference engine. The semantic model defines what information is to be known; the semantic knowledge base is all that is known. The inference engine adds knowledge to the semantic knowledge base by following a process that draws conclusions. The computer-based method of semantic reasoning and the semantic reasoning computer system are used to automate tasks, improve situational understanding, and activate people to take the right actions at the right moment.

Referring to FIG. 4, the semantic model 1000 contains the concepts 1001, 1002 and relationships 1003 that define the content of a domain's knowledge. The semantic model 1000 also describes the conditions and processes that lead to a new concept being inferred from an existing concept. The semantic knowledge base 3000 is distinct from the semantic model 1000: it contains what we call instances of the concepts and relationships defined in the semantic model 1000. The semantic knowledge base 3000 represents knowledge via nodes 3001 and 3002, which are instances of concepts, and links 3003, which are instances of relationships between concepts. The semantic knowledge base 3000 uses nodes and links to represent both what is explicitly known and what can be inferred. The inference engine 2000 uses the reasoning process defined in the semantic model 1000 to continuously add inferred nodes and links to the knowledge base 3000. Inferred nodes may trigger recursive reasoning, which may in turn create new inferred nodes.
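The separation between the semantic model (what can be known) and the knowledge base (what is known) can be sketched as follows. This is a minimal illustration in Python; the class and field names are our own, not from the patent.

```python
from dataclasses import dataclass, field

# Semantic model: what *can* be known (concepts and relationships).
@dataclass(frozen=True)
class Concept:
    name: str                      # e.g. "Person"
    notes: tuple = ()              # property names, e.g. ("Name", "Age")

@dataclass(frozen=True)
class Relationship:
    name: str                      # e.g. "owns account"
    upstream: str                  # originating concept
    downstream: str                # terminating concept

# Semantic knowledge base: what *is* known (instances of the above).
@dataclass
class Node:
    concept: str                   # which concept this node instantiates
    attrs: dict = field(default_factory=dict)

@dataclass
class Link:
    relationship: str
    source: "Node" = None
    target: "Node" = None

person = Concept("Person", ("Name", "Age"))
bob = Node("Person", {"Name": "Bob", "Age": 42})
```

The key design point, per the separation described above, is that `Concept` objects never appear in the knowledge base; only `Node` instances that name their concept do.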

Referring to FIG. 5, the extended semantic model 1000 contains the concepts and relationships that can be known as well as the methods by which they will be known. The inference engine 2000 draws conclusions from facts using the specifications of the extended semantic model 1000, through a cascaded process that infers insights and triggers further inferencing. The semantic knowledge base 3000 contains knowledge stored in the form of nodes connected by links. The data normalizer 4000 is a process and meta-language that normalizes data about the real world to the concepts and relationships in the semantic model 1000; the data then go to the inference engine 2000 to be converted to nodes and links. The repertoire 5000 contains a collection of functions that are identified in the semantic model 1000 and are systematically invoked by the inference engine 2000 to create instances of the semantic model 1000. The knowledge query 6000 comprises a meta-language and process for querying the knowledge base 3000 for nodes, links, and their attributes. The user interface 7000 displays queried knowledge in a cognitively friendly fashion.

In one example, the semantic reasoning system 100A (FIG. 5) is used for decision support. Inferred nodes are the knowledge that the user is looking for. The user queries the semantic knowledge base 3000 for inferred nodes via the user interface, and the resulting inferences are presented to the user via the user interface 7000 for decision support. In another example, the semantic reasoning system of FIG. 5 is used to build a narrative from data. The semantic model defines the content and the structure of the narrative, and the inference engine creates the narrative, supported by the data, as data become available.

FIG. 6 shows an example of how data 110 can be converted into linked knowledge, and the associated decision support and narrative derived through a process of inference. The inference engine 2000 stitches together a collection of facts 110 from a person's history to create a composite picture 120. The inference engine 2000 then draws conclusions 130 based on the medical history 120 and clinical guidelines, including inferences (e.g., the person is retaining fluid) as well as recommendations (e.g., the person should contact a physician). Alerts may be sent to caregivers if health problems, such as fluid retention, are inferred. A narrative describes the inferred condition, the supporting data, and the reason for the inference.

Extended Semantic Model

An extended semantic model 1000 is the starting point for accumulating knowledge by inferencing. An extended semantic model 1000 defines concepts and relationships between concepts to represent all knowledge about a domain. The semantic model also defines how concepts and relationships are to be inferred.

In prior-art systems, the semantic network does not distinguish between concepts and instances. In FIG. 1, "Person" and "Bob" are both nodes in the network, and a link 83 of type "is an instance of" in FIG. 2 represents the knowledge that Bob is a person. This ambiguity about whether a node represents an instance or a concept makes inferencing in prior-art semantic networks difficult. The present invention keeps representations of concepts (i.e., things that can be known) apart from nodes (i.e., things that are known). The semantic model 1000 represents concepts, while the semantic knowledge base 3000 contains nodes. FIG. 9 and FIG. 11 illustrate this separation: FIG. 9 depicts "Person" as a concept in the semantic model, while FIG. 11 represents "Bob" and "Sue" as instances of "Person". This separation creates a clear role for the inference engine: to populate the semantic knowledge base with instances of the concepts in the semantic model.

FIG. 7 illustrates different ways the present invention extends a semantic model. These extensions aim to overcome limitations of inferencing in existing semantic networks such as FIG. 1, where knowledge is represented by nodes 81 and links 82 connecting the nodes. Links are usually not directed; it is the information about which nodes are connected that gives rise to the network's knowledge. The present invention gives directionality to the relationships that link concepts, as shown in FIG. 7. Directionality permits the representation of an "upstream" concept 1100, shown at the origination point of the arrow 150, and a "downstream" concept 1500, shown at the termination of the arrow 150. Distinguishing between upstream and downstream concepts is crucial to inferencing: as shown in FIG. 16, the instantiation of an upstream concept triggers the process of determining whether downstream concepts are ready for instantiation.

Concepts in the present invention can have data associated with them; this is another extension of the semantic model. FIG. 7 shows the notes 1200 associated with concepts. There are two kinds of notes. A note can refer to values associated with a concept, like "Age" for a "Person". A note can also refer to data in upstream concepts that are linked by a chain of links to the downstream concept containing the referring note. The notes of a concept identify the data used by the inference engine to draw conclusions about the concept.
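The two kinds of notes, direct values versus references chained through links to upstream nodes, can be illustrated with a small sketch. The dictionaries and helper function here are hypothetical, invented for illustration.

```python
# Upstream "Person" node and downstream "Bank Account" node; the "_links"
# entry models a link back to the upstream node (all names hypothetical).
bob = {"Name": "Bob", "Age": 42}
account = {"Account Number": "123456",
           "_links": {"owns account": bob}}

def read_note(node, note, chain=None):
    """Resolve a note: directly on the node, or via a chain of
    (link name, upstream note) reaching into an upstream node."""
    if chain is None:
        return node[note]
    link_name, upstream_note = chain
    return node["_links"][link_name][upstream_note]

direct = read_note(account, "Account Number")
holder = read_note(account, "Account Holder",
                   chain=("owns account", "Name"))
```

Here `Account Number` is a direct note of the account, while `Account Holder` is resolved by following the "owns account" link to the upstream person's `Name` note.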

In another extension of the semantic model, a concept defines multiple types of functions 1250. Each function has a clearly defined purpose in the inference process. FIG. 7 shows five types of functions 1250: Resolver, Context-Gate, Collector, Operator, and Qualifier. As will be explained later, each of these functions plays a unique role during the inference process. Each function is cataloged in a collection called the repertoire 5000 (FIG. 5) and is identified with a unique number. These functions are referred to in the semantic model by their identity numbers and are dynamically and systematically invoked by the inference engine 2000 during the creation of instances of the concept.
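Cataloging functions under unique identifiers, so that concepts reference them only by number and the engine invokes them dynamically, might be sketched as follows. The registry, decorator, and function names are ours, not from the patent.

```python
# Hypothetical repertoire: each function is registered under a unique id
# and a type; concepts reference functions by id only.
REPERTOIRE = {}

def repertoire(fn_id, fn_type):
    """Register a function in the repertoire under a unique id and type."""
    def register(fn):
        REPERTOIRE[fn_id] = {"type": fn_type, "fn": fn}
        return fn
    return register

@repertoire(101, "Qualifier")
def has_required_attrs(node, required=("Name",)):
    # Qualifier-style check: enabled only when required attributes have values.
    return all(node.get(a) is not None for a in required)

# A concept refers to its functions by identity number, not by name:
person_concept = {"name": "Person", "qualifier": 101}

qualifier = REPERTOIRE[person_concept["qualifier"]]["fn"]
enabled = qualifier({"Name": "Bob"})
```

Because the model stores only the number 101, the function body can be swapped or updated in the repertoire without touching the model itself, which mirrors the indirection described above.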

In a further extension of the semantic model, relationships 1300 are defined to possess certain properties 1400. The relationships 1300 are also inferred by the inference engine 2000 and are then instanced in the knowledge base 3000. Multiple types of properties 1400 can be used, e.g., cardinality and active/passive/custom, among others. The inference engine 2000 interprets the properties 1400 of a relationship 1300 to implement specific logic for the instancing of links.

FIG. 8 shows yet another extension of the semantic model 1000. The present invention defines two types of concepts in the semantic model: "Facts" 1100 and "Insights" 1500 and 1700. The facts 1100 are what can be directly observed about the world; the insights are concepts inferred from the facts 1100. The inference engine 2000 automatically infers the relationships 1300 between facts 1100 and insights 1500 and 1700. Intermediate knowledge, derived from analysis, learning, and applying logic, is represented by insights 1500; the ultimate answer to be found is at the highest level of insight 1700. The semantic model 1000 can define as many facts 1100 and as many insights 1500 and 1700 as necessary to describe the domain knowledge.

FIG. 9 extends the semantic model of FIG. 1 with insight concepts that can be inferred. "Person" 1101 and "Bank Account" 1102 are facts 1100 that are known explicitly. "Personal Transaction" 1501, "Spending Pattern" 1502, "Fund Need" 1701, and "Transaction History" 1702 are concepts that can be inferred. Some relationships, such as "owns account" 1306, are explicitly known and represent facts 1100; other relationships, such as "transacts cash" 1301 and "primarily receives money" 1304, are inferred by the inference engine 2000.

An extended semantic model is built from these components: upstream and downstream concepts 1100 and 1500, notes 1200, functions 1250, and relationships 1300. The semantic model 1000 for a particular domain (e.g., FIG. 9) specifies the details of these building blocks. "Person" is an upstream concept of "Bank Account", connected through the relationship "owns account". The inference engine 2000 processes "Person" as an upstream concept and "Bank Account" as a downstream concept. The details of these concepts, i.e., their notes and functions, are all extracted from the semantic model, and the inference engine 2000 dynamically processes these data through its internal logic to perform inferencing. Because any semantic model can be represented in terms of these building blocks, the inference engine can be used for any number of domain applications.

Semantic Knowledge Base

Referring to FIG. 4, the semantic model 1000 is the blueprint for what knowledge is stored within the semantic knowledge base 3000. The inference engine 2000 is responsible for instancing all knowledge in the knowledge base 3000.

The general form of a semantic knowledge base 3000 is shown in FIG. 10. This example shows instances of the concepts and relationships of the general semantic model shown in FIG. 8. The fact nodes 3100 of FIG. 10 correspond to the fact concepts 1100 of FIG. 8, and the insight nodes 3500 and 3700 of FIG. 10 correspond to the insight concepts 1500 and 1700 of FIG. 8. The links 3300 of FIG. 10 connect the nodes and correspond to the relationships 1300 of FIG. 8. The inference engine 2000 creates all nodes and links using direct knowledge of the world 3100 and conclusions 3500 and 3700 drawn from that direct knowledge.

FIG. 11 shows an example of a particular semantic knowledge base for the financial-transaction semantic model of FIG. 9. This example shows two instances of the "Person" concept, Bob 3101 and Sue 3102, each having a bank account 3103 and 3104. Bob's account 3103 is at City Bank 3105, while Sue's account 3104 is at State Bank 3106. Three financial transactions, 3107, 3108, and 3109, were made between the two accounts 3103 and 3104. Each transaction is described using two types of links: Bob sends the money, indicated by links 3312, 3314, and 3316, and Sue receives it, indicated by links 3311, 3313, and 3315. Nodes 3101-3109 represent instances of fact concepts that are known from external data sources.

Multiple types of inference can occur at the same time that fact nodes are instanced. One type of inference creates new links between nodes. In FIG. 11, link 3307 is created between Bob 3101 and Sue 3102 to indicate that Bob and Sue have transacted money. Another type of inference changes attribute values in existing nodes: the creation of, or a new link to, a bank account node triggers certain functions associated with the bank account concept, including the calculation of the account balance attribute. A third type of inferencing determines whether the creation of upstream fact nodes triggers the creation of downstream inferred nodes. FIG. 11 shows examples of inferred downstream nodes. The creation of the transactions 3107, 3108, and 3109 between Bob's and Sue's bank accounts 3103 and 3104 triggers the creation of nodes 3501, 3505, and 3506, instances of the concept "Personal Transaction" 1501 of FIG. 9. Every subsequent personal transaction 3505, 3506 generates additional links to node 3702, which tracks the history of personal transactions between Bob and Sue and allows inference of who primarily sends money 3320 and who primarily receives it 3321. FIG. 11 also contains inferred nodes 3502 and 3503, which track Sue's and Bob's spending patterns, from which the inference engine predicts when an account will require a deposit. This invention thus allows automatic enhancement of semantic knowledge by inferencing, such as the updates to 3503 (from 3104) and to 3701 (from 3503).
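The first two inference types, creating a holder-to-holder link when a transaction appears and recomputing an account-balance attribute, can be illustrated with a toy sketch. The account data, amounts, and function names here are invented for illustration only.

```python
# Minimal knowledge-base fragment: two account nodes and a set of
# inferred links (all names and figures hypothetical).
accounts = {"bob": {"holder": "Bob", "balance": 0.0},
            "sue": {"holder": "Sue", "balance": 0.0}}
links = set()

def add_transaction(sender, receiver, amount):
    """Fact arrives: recompute balances (operator-style attribute update)
    and infer a holder-to-holder 'transacts money' link."""
    accounts[sender]["balance"] -= amount
    accounts[receiver]["balance"] += amount
    links.add((accounts[sender]["holder"],
               "transacts money",
               accounts[receiver]["holder"]))

for amount in (100.0, 50.0, 25.0):     # three transactions, as in the example
    add_transaction("bob", "sue", amount)
```

Note that repeated transactions update the balances each time but yield only one inferred link, since the holder-to-holder relationship already exists.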

FIG. 12 shows the three ways knowledge is represented in the extended semantic knowledge base. Nodes represent instances of concepts 1100, attributes represent the notes 1200 within concepts and capture data about the node, and links represent relationships 1300 between nodes.

The example of FIG. 12 shows details for two concepts of FIG. 9: the "Person" and "Bank Account" concepts. The "Person" concept is defined as having three notes: Name, Gender, and Age. Node 3050 in FIG. 12 is Bob's corresponding node in the semantic knowledge base: Bob, age 42, male. He has a bank account 3060 with account number 123456 and account balance $1345.60. The "Bank Account" concept contains two notes that hold values directly: the account number and the account balance. The third note in the "Bank Account" concept, the account holder, links to the "Name" note in the upstream "Person" concept.

Inference Engine

The inference engine 2000 creates the nodes and links in the semantic knowledge base; these are instances of the concepts and relationships in the semantic model. FIG. 13 shows the flow of the inference process 100B. The extended semantic model 1000 defines the logic by which instances are created in the semantic knowledge base 3000. The semantic model defines the functions that the inference engine 2000 systematically invokes for a particular concept. These functions are collectively known as the repertoire 2700. They can be compiled, or they may be represented in a meta-language that is interpreted by the inference engine at run time. Coding domain-dependent functions in an interpreted meta-language allows them to be added or updated whenever needed, without an update to the core inference engine software.

The repertoire 2700 functions are of five fixed types: Resolver, Context-Gate, Collector, Operator, and Qualifier. Each type of repertoire function plays a different role in the inference process. A repertoire function can be domain-independent and therefore applicable to many inference problems, or it can be specific to a particular semantic model in a domain.

The inference process starts with facts about the world that have been normalized to the semantic model by the data normalizer 2600 and fed to the inference engine. The normalized facts are the seed knowledge in the semantic knowledge base 3000.

Whenever any knowledge (nodes, links, or attributes) is created or updated in the semantic knowledge base, it triggers a cascade to the concepts that are downstream 1002 of the upstream concept 1001 just instanced or updated, as shown in FIG. 4. The downstream concepts and their corresponding nodes are identified, and these nodes are then considered for instancing or updating 2200. The inference engine executes a series of functions associated with the concept in the semantic model 1000 to determine whether the conditions are met for creating a new node or link, or updating an existing node or link 2300.

FIG. 14 shows the data normalization procedure 60, the beginning of the inference process. The data normalizer 2600 (FIG. 13) periodically polls external data sources for new information; alternatively, new data may be pushed to the data normalizer from external sources as they become available. The data normalizer interprets the semantic model to determine which data are described in it. Data that are not described are ignored. Data that are described in the semantic model are sent to the inference engine as a packet of information that identifies which concept, note, or relationship the data correspond to. The inference engine checks whether the data are already in the semantic knowledge base and, if not, adds them to it.
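The normalization step, ignoring data whose concept the model does not describe and packaging the rest with the concept and notes they correspond to, could look roughly like this. The concept names and record format are invented for the sketch.

```python
# Hypothetical model fragment: concepts the semantic model describes,
# with the notes each concept carries.
MODEL_CONCEPTS = {"Blood Pressure": ("Systolic", "Diastolic"),
                  "Weight": ("Pounds",)}

def normalize(raw_records):
    """Keep only records whose type is a known concept; package each
    as a packet identifying its concept and the matching notes."""
    packets = []
    for record in raw_records:
        concept = record.get("type")
        if concept not in MODEL_CONCEPTS:        # not in the model: ignore
            continue
        notes = {k: v for k, v in record.items()
                 if k in MODEL_CONCEPTS[concept]}
        packets.append({"concept": concept, "notes": notes})
    return packets

packets = normalize([
    {"type": "Weight", "Pounds": 182},
    {"type": "Shoe Size", "US": 10},             # unknown concept, dropped
])
```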

FIGS. 15-17 show the inference process in three stages. The first stage 40, creating instances of fact nodes with associated links, is shown in FIG. 15; the second stage 50, updating downstream nodes, is shown in FIG. 16; and the third stage 70, recursively creating and updating downstream insight nodes, is shown in FIG. 17. The processes of FIGS. 15-17 invoke five types of functions: context-gate, collector, operator, qualifier, and resolver. These functions are identified in the definitions of concepts in the semantic model and are accessed via their index in the repertoire 2700, a collection of functions. The inference engine dynamically accesses these functions based on their index within the repertoire.

FIG. 15 shows the logical flow of stage 40 of inferencing, creating instances of fact nodes with associated links. First, the inference engine checks whether the fact is already a node in the knowledge base; if so, new attribute values are added to the existing node instance. Otherwise, the inference engine invokes the fact concept's "Resolver" function, which defines when a new instance of a concept is needed. The most common is the "Standard Resolver". This method checks whether a node already exists in the knowledge base with the same value for the attribute given as the argument of the resolver function, known as the "Resolving Attribute". A Social Security Number is an example of a resolving attribute for a person: it is a unique qualifier. If a node with the same resolving attribute is already present, there is no need to create a new instance. If the resolver function does not include an argument, a new instance is created each time. The inference engine then runs the concept's "Qualifier" function to determine whether the node should become enabled. Enabled nodes activate the cascade of knowledge to downstream nodes; nodes become enabled when certain conditions are met, such as having values for required attributes. An upstream node cannot trigger the creation or updating of downstream nodes until it is enabled. Once the node is enabled, the concept's "Operator" functions are run by the inference engine. Operator functions calculate the values of the notes specified for the concept, for example the "Account Balance" note of the "Bank Account" concept 1160 in FIG. 12.
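This stage-one flow, resolve (reuse a node matching the resolving attribute, else create one), then qualify (decide whether the node is enabled), can be compressed into a small sketch. The data structures are hypothetical and the resolving step is simplified to a plain attribute match.

```python
kb = []  # knowledge base: list of node dicts (hypothetical structure)

def resolve(concept, attrs, resolving_attr=None):
    """Standard-resolver style: reuse an existing node whose resolving
    attribute matches; otherwise create a new node."""
    if resolving_attr is not None:
        for node in kb:
            if (node["concept"] == concept
                    and node["attrs"].get(resolving_attr)
                        == attrs.get(resolving_attr)):
                node["attrs"].update(attrs)    # existing node: merge values
                return node
    node = {"concept": concept, "attrs": dict(attrs), "enabled": False}
    kb.append(node)
    return node

def qualify(node, required):
    """Qualifier: the node is enabled once required attributes have values."""
    node["enabled"] = all(a in node["attrs"] for a in required)
    return node["enabled"]

n1 = resolve("Person", {"SSN": "123-45-6789", "Name": "Bob"}, "SSN")
n2 = resolve("Person", {"SSN": "123-45-6789", "Age": 42}, "SSN")  # same person
qualify(n1, required=("Name", "Age"))
```

Because the SSN acts as the resolving attribute, the second arrival merges into the first node instead of creating a duplicate, and the merged node then qualifies as enabled.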

In the second stage of inferencing, a node's downstream links and nodes are updated when the node is enabled or updated. FIG. 16 shows the logical flow 50 of the algorithm to update downstream nodes. This process updates node attributes that depend on links to upstream nodes. If the attributes have been updated, the qualifier function is run again to determine whether the node should still be enabled. If the node is enabled, the process of positing downstream nodes is initiated. Finally, the operator functions execute to complete the node update.

“In the third stage of inferencing, instances of insight nodes and their associated links are created. FIG. 17 shows the logical flow 70 of the algorithm that creates insight nodes and their associated links. The process begins when a fact node contains an operator function called a ‘Posit-Handler’, which triggers the initiation of downstream nodes. To instantiate insight nodes, the first step is to ensure that the attributes of the upstream nodes contain values that allow the creation of a downstream insight node. This criterion can be coded in a ‘Context-Gate’ function, which the inference engine dynamically invokes for the downstream node. The context-gate function returns a Boolean value; if it returns true, the downstream node’s resolver function is invoked. If no existing node in the knowledge base fulfills the resolving criteria, a new node is created and linked to the upstream node that triggered its creation. The knowledge base stores the new node and its links. The second step of this stage is the collection of attributes for the newly created node. Recall that concepts can have two types of notes: one that stores values directly, and one that links to attributes of other nodes, as illustrated in FIG. 12. The ‘Collector’ function gathers attribute values from upstream nodes and associates them with the node’s own notes; these attribute values are saved in the knowledge base. The third step is executing the concept’s qualifier function to determine whether the node is enabled. Once a node is enabled, its operator functions are executed. If an operator function contains a ‘Posit-Handler’, downstream nodes are triggered and the flow of FIG. 17 is applied again by those downstream nodes to continue the cascading of knowledge.
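The insight-creation step can be sketched as follows. This is an assumed, minimal rendering (the `FundsNeeded` and `BankAccount` examples and the dictionary representation are invented for illustration): a Posit-Handler on an enabled fact node proposes a downstream insight node, a Context-Gate decides whether it should exist, and a Collector pulls attribute values from the upstream node.

```python
# Illustrative sketch of stage-three inferencing: Context-Gate, creation and
# linking of an insight node, Collector, and Qualifier. Names follow the text;
# details and example data are assumptions.

def posit_insight(kb, upstream, insight_concept, context_gate, collector, qualifier):
    if not context_gate(upstream["attrs"]):  # Boolean gate on upstream attribute values
        return None
    node = {"concept": insight_concept, "attrs": {}, "links": [upstream], "enabled": False}
    kb.append(node)                                      # store node and its link
    node["attrs"].update(collector(upstream["attrs"]))   # Collector pulls upstream values
    node["enabled"] = qualifier(node["attrs"])           # Qualifier decides enablement
    return node

kb = []
account = {"concept": "BankAccount", "attrs": {"balance": -50.0, "owner": "Bob"},
           "links": [], "enabled": True}
kb.append(account)

insight = posit_insight(
    kb, account, "FundsNeeded",
    context_gate=lambda a: a["balance"] < 0,                   # posit only when overdrawn
    collector=lambda a: {"owner": a["owner"], "amount": -a["balance"]},
    qualifier=lambda a: "amount" in a,
)
assert insight is not None and insight["attrs"]["amount"] == 50.0
```

If the context-gate returned false (a non-negative balance), no insight node would be created and the cascade would stop at this branch.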

“FIG. 18 illustrates the result of the Posit-Handler. Suppose concept A has relationships with concepts B, C, and D; for example, A could have a link to B. The Posit-Handler operator of A posits that new links could be created between A and B, A and C, and A and D. The inference process then activates nodes B, C, and D. When nodes B, C, and D are activated, the Posit-Handlers in nodes B and C trigger in turn, creating the next set of nodes and links. This cascading is the heart of the semantic reasoning method.
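The cascade in FIG. 18 can be shown with a few lines of code. This is a toy illustration under stated assumptions (the node names and the `handlers` table are invented): each node's posit-handler names the downstream nodes it proposes, and activation recurses until no handler fires.

```python
# Minimal cascade illustration (assumed names): enabling A's posit-handler
# creates B, C and D; the handlers of B and C then cascade further.
def cascade(node, handlers, created):
    for child in handlers.get(node, []):  # posit-handler: nodes this node proposes
        if child not in created:
            created.append(child)         # create the node and its link, then recurse
            cascade(child, handlers, created)

handlers = {"A": ["B", "C", "D"], "B": ["E"], "C": ["F"]}
created = []
cascade("A", handlers, created)
assert created == ["B", "E", "C", "F", "D"]  # depth-first cascade from A
```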

“Meta-Language Interface to Inference Engine”

“The present invention’s inference engine is intended for a specific class of inference problems represented by the semantic model. The inference engine reduces the semantic model to its fundamental building blocks, e.g. concepts, nodes, and functions, and then applies its logic to the details represented by each building block. An extensible meta-language describes the building blocks of the semantic model and the corresponding semantic knowledge base. The meta-language includes a vocabulary of ‘words’ and a ‘syntax’ for interpreting words, and the vocabulary builds new words upon other words. The meta-language extends the FORTH programming language, and the present invention provides a FORTH interpreter that acts as an interface between the inference engine, the semantic model, and the semantic knowledge base. The interpreter interprets standard FORTH words; words that describe elements of the semantic model, such as notes, relationships, functions, and concepts; and the attributes, nodes, and links of the semantic knowledge base. The interpreter can also interpret new meta-language words, built upon abstract words, that describe domain-specific semantic concepts.

“FIG. 19 illustrates the meta-language of the present invention. In this example, ‘DROP’ and ‘IF’ are fundamental words available in any FORTH vocabulary. Words such as ‘SEEK’ and ‘MATCH’ are essential words that describe the building blocks of the semantic model. ‘SCAN’ is an example of a domain-specific word that describes a repertoire function. The example also contains the definition of a domain-specific query word, ‘TypeNEFriendPhone’, which searches the knowledge base to return a list of friends in New England. The syntax ‘:’ begins a definition and ‘;’ ends it. The user interface sends a string to the interpreter of the inference engine. This string can contain the dynamic definition of a word, such as TypeNEFriendPhone, together with the request to execute that word. In this example, the string defines TypeNEFriendPhone in terms of ‘#isFriendOf’ and SCAN and requests its execution for the node ‘26’; the inference engine interprets the query to return the friends of the person identified in the knowledge base by the number ‘26’. This extensible meta-language allows the inference engine and the semantic model to communicate with each other, and makes it easy to add to or update the semantic model without updating the core inference engine software. Without the ability to dynamically create words that represent concepts and relationships, and to query for the nodes and links stored in the semantic knowledge base, the present invention would not be able to reason about real-world problems where concepts evolve and change over time.
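A toy FORTH-style interpreter makes the mechanism concrete. This is a sketch under heavy assumptions: the `FriendsOf` word, the triple-list knowledge base, and the `SCAN` behavior are invented to mirror the text's example, not taken from the patent's actual vocabulary.

```python
# Toy FORTH-style interpreter for the meta-language interface.
# ':' begins a word definition and ';' ends it; other tokens are either
# known words (built-in or user-defined) or literals pushed on the stack.
def interpret(tokens, words, kb, stack=None):
    stack = stack if stack is not None else []
    it = iter(tokens)
    for tok in it:
        if tok == ":":                      # start of a dynamic word definition
            name = next(it)
            body = []
            for t in it:
                if t == ";":
                    break
                body.append(t)
            words[name] = body
        elif tok in words:
            body = words[tok]
            if callable(body):
                body(stack, kb)             # built-in word implemented in the host
            else:
                interpret(body, words, kb, stack)  # user-defined word: expand in place
        else:
            stack.append(tok)               # literal: push on the stack
    return stack

def scan(stack, kb):
    """'SCAN' (assumed semantics): pop a link type and a node id,
    push the list of neighbouring node ids reached over that link."""
    link, node = stack.pop(), stack.pop()
    stack.append([b for a, l, b in kb if a == node and l == link])

kb = [("26", "#isFriendOf", "27"), ("26", "#isFriendOf", "31"), ("12", "#isFriendOf", "26")]
words = {"SCAN": scan}
# Dynamically define a query word, then execute it for node 26:
result = interpret(": FriendsOf #isFriendOf SCAN ; 26 FriendsOf".split(), words, kb)
assert result == [["27", "31"]]
```

The key property illustrated is the one the text emphasizes: `FriendsOf` did not exist until the interface string defined it, so the vocabulary can grow without changing the interpreter.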

The meta-language of the present invention allows querying the contents of the semantic knowledge base, which are represented as nodes connected by links and carrying attributes. The meta-language is flexible enough to allow queries on any knowledge base nodes and attributes. In particular, it can query all decision support nodes, such as reminders, alerts, and recommendations, which are special types of insight nodes. The interface for each decision support node does one of three things: it notifies users of a new actionable item, increases the urgency of a previously notified actionable item that has not been acted on, or removes a notification if the appropriate actions are taken. The system automatically tracks each user’s inputs and adds them to the knowledge base, which makes this intelligent display possible.
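The three notification behaviours just described can be sketched as follows. The representation is an assumption for illustration (a dictionary from item name to urgency level); in the system itself these would be decision support nodes queried through the meta-language.

```python
# Sketch of the three decision-support behaviours: notify a new actionable
# item, escalate one not yet acted on, remove one that was acted on.
# Data representation is an illustrative assumption.
def update_notifications(notifications, actionable, acted):
    for item in actionable:
        if item in acted:
            notifications.pop(item, None)   # remove: appropriate action was taken
        elif item in notifications:
            notifications[item] += 1        # escalate urgency of an un-acted item
        else:
            notifications[item] = 1         # notify a new actionable item
    return notifications

n = update_notifications({}, ["take_meds"], set())
assert n == {"take_meds": 1}                       # new item notified
n = update_notifications(n, ["take_meds"], set())
assert n == {"take_meds": 2}                       # still not acted on: escalated
n = update_notifications(n, ["take_meds"], {"take_meds"})
assert n == {}                                     # acted on: notification removed
```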

“An application-specific user interface queries the semantic knowledge base and provides decision support for the user. FIG. 20 shows the flowchart 72 for updating the user interface. At any refresh event, which can be either periodic or user-driven, the user interface queries the knowledge base for the current values of all knowledge currently displayed. The interface compares the query results to the currently displayed values and updates them as necessary.
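The refresh cycle of FIG. 20 reduces to a compare-and-update loop. The sketch below is an assumed rendering (the item names and the use of a plain dictionary as the query source are illustrative): only entries whose queried value differs from the displayed value are updated.

```python
# Sketch of the FIG. 20 refresh cycle: query current values for everything
# displayed and update only what changed. Names and data are assumptions.
def refresh(displayed, query):
    updates = {}
    for item in displayed:
        current = query(item)          # query the knowledge base for this item
        if current != displayed[item]:
            updates[item] = current    # record only stale entries
    displayed.update(updates)
    return updates

displayed = {"med_reminder": "8:00", "bp_alert": None}
kb = {"med_reminder": "8:00", "bp_alert": "high"}   # knowledge base stand-in
changed = refresh(displayed, kb.get)
assert changed == {"bp_alert": "high"} and displayed["bp_alert"] == "high"
```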

“The knowledge in the knowledge base may be stored on multiple computer servers. FIG. 21 shows one example of a distributed knowledge base. One server acts as the master, containing the address of each link, attribute, and node in the knowledge base. The knowledge base can be queried as shown in FIG. 20: the software service finds the queried node and all its attributes on the distributed servers, compiles them into the form that answers the question, and returns the answer to the user interface.
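A master/worker lookup of this kind can be sketched briefly. All names, node ids, and server contents below are invented for illustration; the point is only that the master maps each node to the server holding it, and the service compiles an answer by following links across servers.

```python
# Hypothetical distributed knowledge base lookup: the master holds the
# address (here, server name) of each node; fetch() resolves and retrieves.
master = {"node:26": "server-A", "node:27": "server-B"}
servers = {
    "server-A": {"node:26": {"name": "Bob", "links": ["node:27"]}},
    "server-B": {"node:27": {"name": "Sue", "links": []}},
}

def fetch(node_id):
    """Find the queried node on whichever server the master says holds it."""
    return servers[master[node_id]][node_id]

node = fetch("node:26")
assert node["name"] == "Bob"
# Follow links across servers and compile the answer for the user interface:
friends = [fetch(n)["name"] for n in node["links"]]
assert friends == ["Sue"]
```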

“Applications for Semantic Reasoning”

“FIG. 22 illustrates an embodiment of the present invention. The reasoning system is implemented as a set of operating system instructions processed by the native hardware that hosts the operating system.

The ability to inference and extend knowledge in a semantic network affects many different areas. The present teachings focus on three: 1) automating processes based on inferring which process steps have been taken; 2) situational understanding derived through iterative analysis of routine behavioral data and learning patterns from them; and 3) activating individuals to take correct actions based on what has been done, what is needed, and what they are capable of doing. These applications require a combination of logic and pattern learning to make inferences, and the system and method described in the present teachings support this combination.

In a process automation application, a semantic model defines the cause-effect relationships between process steps. Inference determines which process steps have been completed and which next steps can be taken, and automatically triggers those next steps. Upstream concepts describe the preceding process steps and downstream concepts describe the steps that follow; the data from the upstream (preceding) processes accumulate the data necessary for a downstream process. The Context-Gate and Resolver operators defined in a concept verify that the requirements for executing the process step have been met. The Qualifier and Collector operators verify and collect all data required to execute the process step. The process step is initiated once all requirements are met and all data are available. The data resulting from the initiated process are in turn input to the inference engine and processed according to the semantic model.

“In a situational-understanding application, data from a variety of sensors and user inputs are analyzed to identify situations of interest. The semantic model describes how the input data are interpreted and how they are used to determine the presence or absence of situations of interest. Inference determines whether the data required for interpretation are available, automates the process of acquiring those data, and implements the logic that determines the presence or absence of situations of interest. None of these steps can be done without inference.

“In a behavioral activation application, the semantic model extends process automation and situational understanding to include formulating recommendations for users. The semantic network defines the responses to specific situations of concern. When situations of interest are identified, as in a situational-understanding application, inferences are made based on the characteristics of the situation. Formulating response options is a human function that cannot be automated without inferencing.

“FIG. 23 shows the central approach common to all applications of the present teachings. Data are interpreted according to context, the context being the environment in which the data are interpreted. The situation, in turn, determines the response. The semantic model describes the relationships between data, context, situations, and responses, and identifies the inference logic by allowing methods to be associated with concepts. The inference engine maps data to the semantic model and identifies contexts, situations, and responses according to the logic of the semantic model. All inferences are stored in the semantic knowledge base as instances of concepts and of relationships between concepts.

“Semantic Reasoning to Automate Processes: A Financial Transaction Automation System (FTAS)”

“An application that automates certain financial transactions is presented as an example of process automation. FIG. 9 through FIG. 12 describe a system that analyzes bank transactions to identify patterns and predict when additional deposits are required. FIG. 9 shows a small portion of the semantic model. It contains four fact concepts 1101, 1102, 1103, and 1104 and four insight concepts 1501, 1502, 1701, and 1703. The fact concepts are people 1101, their banks 1103, their bank accounts 1102, and financial transactions 1104. The inference engine uses these concepts to continuously analyze the financial transactions 1104 and derive pairwise transaction patterns between account holders 1501 and spending patterns 1502. The definitions of concepts 1501 and 1502 specify the functions that the inference engine uses to update nodes of these types; those functions are part of the application’s repertoire.

FIG. 11 illustrates how financial transaction data are normalized into the knowledge base to create instances of fact nodes. These data enrich the knowledge base with people such as Bob 3101 and Sue 3102, their bank accounts, and the transactions between them 3107-3109. This seed knowledge is used to infer the insight nodes. Each time a transaction takes place, the inference engine updates the corresponding spending patterns 1502 and the transaction patterns between individuals 3501, and continues on to the funds required 3701.

“FIG. 12 shows details of the semantic knowledge base of FIG. 11. The knowledge base contains attribute values for each instance of each concept, such as the age and gender of a person 3050 and the account balance of a bank account 3060. The semantic knowledge base also contains links between nodes; for example, Bob owns Account-1 3350. Attribute values may be based on explicit facts (e.g., age and gender) or on inferencing.”

“As with any application of the invention, the semantic model for FTAS (shown in FIG. 9) can be expanded to include additional concepts, facts, and insights. The inference engine works the same regardless of the semantic model: it applies the functions in the semantic model to infer from whatever data are available. Spending predictions and automated fund transfers to cover anticipated expenses are two examples of extensions to the semantic model. These extensions involve adding concepts to the semantic model and incorporating the associated functions, invoked by the inference engine, into the application’s repertoire.

“FTAS illustrates one benefit of the present teachings: large amounts of data can be automatically processed into higher levels of knowledge, including predictions (e.g., spending patterns and funds needed). Another benefit is the possibility of automating certain processes, such as fund transfers in anticipation of need, which provides bank customers with a valuable service. The semantic model explicitly specifies the logic of anticipation and automation, and all inferences can be traced back to it.

“This embodiment of the invention includes the semantic model, the inference engine, the semantic knowledge base, the repertoire and its functions, and the external data sources that seed the knowledge base with fact nodes. Any application also has a user interface that queries the knowledge base for contents that interest the user. The FTAS application would have a user interface with tables and charts showing spending patterns, as well as alerts when funds are required.

“Semantic Reasoning for Situational Understanding: Monitoring Activities of Daily Living (MADL)”

“An example of semantic reasoning for situational understanding is a system that interprets movement data together with self-reported data to determine whether key activities of daily living (ADL), such as sleeping, bathing, and eating, are occurring in a normal manner. The semantic model describes how the data are translated into daily activities. The inference engine continuously applies the semantic model to the incoming data to determine whether the expected activities are occurring, to detect deviations from normal patterns, and to draw attention to abnormal patterns.

“The system for monitoring activities of daily living (MADL) includes a number of sensors (shown in FIG. 24) that provide movement data. The sensors include the wearable transmitter 12050 (also known as the pager), which emits a pulse every few seconds, and a set of transceivers 12000 distributed throughout the living area that detect the pulse. Each transceiver measures the signal strength of the pulse and sends it to the inference engine via a communication hub 12100.

“Each transceiver 12000 in FIG. 24 has a unique address. Data received from the wearable transmitter 12050 are rebroadcast to the communication hub 12100 together with the transceiver’s address. Each transceiver is also identified with its location, so the rebroadcast signal can be interpreted in terms of the daily activities that occur at that location. Signals from the kitchen, for example, are analyzed for patterns that involve small movements around the stove with trips to the cabinets and refrigerator. The semantic model contains the logic for interpreting signals from the transceivers, and the inference engine uses it to interpret the movement data.

“FIG. 25 shows the design of the wearable device 12050. A single-chip transceiver transmits signals periodically, and the device also includes an accelerometer, a magnetometer, and a gyroscope. A USB connector is available for charging, and an antenna transmits and receives signals. The transmitter operates in the 420 to 450 MHz frequency band; because the wavelength is long enough to bend around walls and furniture, the signal can be picked up by multiple transceivers. The accelerometer data allow movement to be determined, making it possible to distinguish between moving and resting. The gyroscope data allow horizontal and vertical orientation to be determined, and the magnetometer determines orientation relative to north; this information can be used to adjust for variations in the signal measured at the transceivers. The total data from the wearable device comprise the signal strength measured at each transceiver together with the accelerometer, gyroscope, and magnetometer data. The inference engine combines these data to determine repeated patterns of movement within and around the house. The relative signal strengths at the transceivers are adjusted for the rotational orientation derived from the magnetometer data. This allows the inference engine to determine when the user is in different rooms of the house, such as the kitchen, bathroom, bedroom, or living room, and whether they are engaged in rest activities (i.e., sitting, lying, or standing) as inferred from the accelerometer and gyroscope data. It is important to understand the context in which an activity takes place. For example, sitting at the kitchen table after cooking indicates eating, whereas working at the kitchen table without the surrounding cooking activity does not. This common-sense logic is part of the semantic model.

“FIG. 26 shows one way to wear the pager 12050. The circuit is contained in a casing that can serve as an ornament on a bolo neckpiece, and the antenna, which measures about 6 inches in length, can be woven into the bolo lace. Alternatively, the pager can be placed on an elastic wristband with the antenna woven into the wristband. FIG. 26 also shows the battery for the wearable pager, which provides power for at least one day. The pager can be charged through a USB connector (mini or regular) using the charging dock shown in FIG. 29. The dock has a guide that slides the USB connector into contact with the charging circuit within the communication hub.

“FIG. 27 shows the design of the transceiver. The transceiver circuit 12010 can be attached to a wall charger 12011, which provides power for the transceiver.

“FIG. 28 shows the design of the communication hub 12100. The hub contains the same RF receiver/transmitter chip 12130 as the pager, along with a CPU, an Ethernet port, and a USB port 12120 for charging. FIG. 29 shows the communication hub enclosed with the charging dock 12150 and the Ethernet and charging port 12140. The Ethernet port transmits partially processed data from the device to a remote server, which hosts the semantic model, the inference engine, and the semantic knowledge base of the present invention.

“The pager 12050, the transceivers 12000, and the communication hub 12100 are designed to collect, process, and relay movement data for analysis. Interpretation begins by learning patterns from data that repeat daily, often multiple times per day; these patterns correspond to common activities. FIG. 30 shows the signals received by the transceivers of FIG. 24. Every transceiver receives an individual signal from the wearable device, whose strength is inversely proportional to the square of the distance between the wearable device and the transceiver. The signal-vector is the collection of signals from all transceivers in the home at any one time; it has as many components as there are transceivers. Within the error limits of the signal strengths, a signal-vector is associated with the wearable device’s location. FIG. 31 shows clusters of signal-vectors corresponding to common locations in the home. By correlating self-reported data (e.g., that a meal was taken) with movement data (e.g., no movement), the inference engine can associate signal-vector clusters with key daily activities.
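The signal-vector idea can be sketched numerically. The transceiver positions, cluster labels, and coordinates below are invented for illustration: signal strength falls off as 1/d², and a new signal-vector is assigned to the nearest learned cluster in signal space.

```python
# Illustrative signal-vector model (all positions and values are assumptions):
# each component is the 1/d^2 signal strength at one transceiver, and a
# reading is classified by its nearest cluster centre.
def signal_vector(pos, transceivers):
    return [1.0 / ((pos[0] - tx) ** 2 + (pos[1] - ty) ** 2) for tx, ty in transceivers]

def nearest_cluster(vec, clusters):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(clusters, key=lambda name: dist2(vec, clusters[name]))

transceivers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # assumed transceiver positions
clusters = {
    "kitchen": signal_vector((2.0, 2.0), transceivers),  # learned cluster centres
    "bedroom": signal_vector((8.0, 8.0), transceivers),
}
reading = signal_vector((2.5, 1.5), transceivers)        # person near the kitchen
assert nearest_cluster(reading, clusters) == "kitchen"
```

In the system, the cluster centres would be learned from repeated daily readings rather than computed from known coordinates; the classification step is the same.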

“The partially processed movement data, i.e., the clustered signal-vectors, are transferred to a server for further processing. FIG. 32 shows a section of the semantic model that processes the sensor data into deep insights. In FIG. 32, 12300 denotes the clustered signal-vectors transmitted by the communication hub. These data are combined with other data, such as self-reported data on daily activities like getting up, eating, and going to bed. These data are input to an ‘Activity/Anomaly Identifier’ semantic concept 12320, which identifies repeated patterns and links them to common daily activities. A-priori, common-sense patterns related to common activities are also input to the Activity/Anomaly Identifier 12320. The Activity/Anomaly Identifier uses this a-priori knowledge 12310 to determine patterns of rest and relaxation, meal preparation, eating, sleeping, and balance. For example, a-priori knowledge identifies rest and relaxation as activities with short duration, no translational motion, and a lower-than-usual center of gravity, while balance knowledge describes patterns such as the presence of a stabilizing time between sitting and walking. The inference engine uses all the data from the movement sensors 12300, together with the a-priori knowledge, to determine the user’s individual learned patterns, such as patterns of balance 12330. Normal sleep times, normal wake-up times, and normal variations in sleeping patterns are further examples of learned patterns.

“In addition to learning the normal patterns of activity, the operator functions of the semantic concept 12320 also detect deviations from the normal patterns. A fall 12340 is detected when a user lies down in an unusual place after walking. A balance analysis 12350 is performed if the user takes longer to stabilize after sitting, which may lead to a high-chance-of-falling alert 12360. Recurring detections of balance issues may trigger recommendations to lower the risk of falling 12380, such as slowing down or using a cane to help rise. All such inferences can be generated automatically by the inference engine of the present teachings through the processes described in FIG. 15 through FIG. 17.”
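The anomaly rules just described can be sketched as simple event-sequence checks. The event representation, the list of "acceptable" lying locations, and the stabilisation threshold are all invented assumptions used only to illustrate the shape of the logic, not clinical criteria.

```python
# Assumed sketch of the anomaly logic: a possible fall is a lying posture in
# an unexpected location right after walking; slow stabilisation after
# standing raises a balance flag. Thresholds and locations are illustrative.
def detect_anomalies(events, lying_ok=("bedroom", "sofa"), stabilise_limit=5.0):
    alerts = []
    for prev, cur in zip(events, events[1:]):
        if (prev["activity"] == "walking" and cur["activity"] == "lying"
                and cur["location"] not in lying_ok):
            alerts.append(("possible_fall", cur["location"]))
        if cur["activity"] == "standing" and cur.get("stabilise_s", 0) > stabilise_limit:
            alerts.append(("balance_issue", cur["location"]))
    return alerts

events = [
    {"activity": "walking", "location": "kitchen"},
    {"activity": "lying", "location": "kitchen"},  # unusual place to lie down
    {"activity": "standing", "location": "kitchen", "stabilise_s": 8.0},
]
assert detect_anomalies(events) == [("possible_fall", "kitchen"),
                                    ("balance_issue", "kitchen")]
```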

“FIG. 33 shows a specific sequence of the inference process. The Activity/Anomaly Identifier concept activates when movement data become available; in this example, the data concern movement in the kitchen. The Context-Gate, Qualifier, and Operator functions of the Activity/Anomaly Identifier determine that the inference engine needs to fetch a-priori and learned patterns about meal preparation. After those patterns have been fetched, the Operator function associated with the Activity/Anomaly Identifier concept continues processing movement data until the data match the a-priori and learned patterns. The movement data representing meal preparation are then classified and added to the learned patterns. The Operator function triggers the Learned Patterns of Dining concept, which prepares for the processing of a particular instance of dining.

The processing of movement data described above converts large quantities of data from multiple sensors (transmitter, accelerometer, gyroscope, and magnetometer) into assertions about the normal and abnormal activities of daily living. These assertions are drawn automatically through a process of inference that is explicitly specified by the semantic model and is therefore explicable. This is a key area of application for the present teachings.

“Semantic Reasoning for Behavioral Activation: Personal Chronic Illness Management System (PCIMS)”

“A personal chronic illness management system (PCIMS) is an example of semantic reasoning used for behavioral activation. The PCIMS was created to help patients and their caregivers better manage chronic illnesses. PCIMS plans patient care, schedules and monitors care activities, assesses the quality of care, and tracks compliance with prescribed care plans.

“FIG. 34 shows the overall process of behavioral activation. It consists of three phases: 1) activities are planned and structured according to the situation; 2) activities are monitored and the data from them are analyzed; 3) issues are identified and corrective actions are taken. These three phases repeat continuously. FIG. 34 shows examples of the behavioral activation steps for chronic disease management. The three phases are too complex to be performed consistently by humans, so automation through inferencing may be necessary.

“Behavioral activation can be automated in the PCIMS application using the semantic reasoning approach of the present teachings. The first phase of behavioral activation is planning and scheduling activities; PCIMS organizes activities such as medication taking, meal taking, and interfacing with family members. In the second phase, PCIMS collects data about the patient’s physiology and medication adherence, and monitors the patient’s response to the structured activities through clinical follow-up. Monitoring includes inferences about the patient’s adherence to the prescribed care plan and physiologic responses, recognition of areas that need care, and seeking out help from those who can provide it. In the third phase, inferencing helps identify issues in self-care and develop intervention strategies. The intervention strategies are scheduled as planned activities, and the behavioral activation process continues in a loop.

“In one embodiment, the PCIMS system includes the components shown in FIG. 35. The Care-Station and the Pager are interactive devices that allow patients and caregivers to send data to the inference engine and receive query results from the semantic knowledge base. The Care-Station interface receives inputs from, and provides decision support for, the patient. The Pager, a wearable device, transmits movement data to the inference engine and pages the patient when needed. The Care-Portal receives inputs from, and provides decision support for, the patient’s caregivers. The three components serve different purposes but send data to the same inference engine and semantic knowledge base interfaces.

“In the PCIMS application, the semantic reasoning framework of the present invention converts low-level data, such as blood pressure measurements or medication adherence data, into recommended follow-up actions for better managing chronic diseases. FIG. 36 shows that the PCIMS system includes a semantic model that defines concepts related to chronic disease, the domain-independent inference engine of the present invention, and a personal knowledge base for managing chronic illness that is created by the application of inference.

“The PCIMS application uses cascading inferencing: each inference leads to the next until a resolution is achieved, or the process halts while waiting for additional data. The semantic model, which includes the definitions of insight concepts, the relationships between concepts, and the associated operators, describes the inference process. FIG. 37A through FIG. 37C show fragments of the PCIMS semantic model. Items such as a clinical appointment 11054, a family member 11052, a patient 11051, a physician 11061, and a hospital discharge 11053 are examples of fact concepts. Items such as medication adherence 11059 with its components and the daily plan 11055 with its components depict various types of insights. They represent knowledge about different patterns of patient behavior, including medication adherence and meal taking. Other insights can be derived from these patterns, such as how often the patient takes medication and what to do if the patient’s health and adherence are poor.

“The PCIMS semantic model in FIG. 37A-FIG. 37C includes concepts such as ‘Patient’ 11051 and ‘Family Member’ 11052, connected by a relationship that describes which family member cares for the patient. Each concept is defined by notes, such as name, gender, and age for the ‘Patient’ and ‘Family Member’ concepts. The semantic model defines not only concepts for chronic care patients but also a variety of event concepts such as ‘Clinical Appointment’ 11054 and ‘Hospital Discharge’ 11053, which result in the specification of a ‘Care Guide’ 11057.”

As the knowledge base is enriched with information about events such as hospital discharges and clinical appointments, the inference engine implements the creation of a ‘Daily Care Plan’ 11055, which prompts the inference engine to remind patients about daily care tasks such as taking their medications (‘Medication Reminder’). The Care-Station interface allows the patient to report the actions taken after the care tasks have been completed. Responses to reminders are tracked when the appropriate adherence nodes (e.g., ‘Medication Adherence’ 11059) are processed periodically, and adherence patterns are learned. Adherence alerts are issued if the operator function of an adherence node determines that adherence is poor.

“Facts such as a ‘Care Guide’ 11056 are created during a ‘Hospital Discharge’ 11055. Insight concepts include functions that process the notes of the fact nodes to create insight nodes and their notes. ‘Process Care Plan Data’ 11101 in FIG. 38 is an example of such a function. FIG. 38 also shows a particular function, ‘Generate Daily Care Plan’ 11105, which is part of the ‘Daily Care Plan’ concept 11055 of FIG. 37C. To create instances of 11055, the inference engine runs trigger functions at the given clock time, which triggers the downstream nodes. The same process applies to functions 11102-11108, all of which are associated with insight concepts in the PCIMS semantic model of FIG. 37A-FIG. 37C.”

“FIG. 38 shows the four types of data that PCIMS stores and presents as facts to its inference engine: data describing the patient’s treatment plan, physiologic data about the patient’s vital signs, self-reported data that capture the patient’s care-plan adherence behaviors, and movement data collected by the wearable pager. The self-reported data are generated by the patient responding to the automatic reminders. The inference engine automatically generates insights from these data; examples include generating a daily care plan, tracking performance against it, and creating reminders when necessary. The insight concepts and the repertoire functions to be invoked during the inference process are defined in the semantic model (shown in FIG. 37A-FIG. 37C) and implemented by the inference engine of the present invention.”

“The PCIMS data presented as facts to the inference engine come from two sources: user inputs and sensors. FIG. 39 through FIG. 42 describe the inference flows that process the four types of data in the PCIMS.

“FIG. 39 describes the logic for processing the care plan data of FIG. 38. This logic is embodied in one or more repertoire functions that combine multiple inputs into the patient’s daily care plan. A single daily plan contains a list of all tasks the patient must complete on a given day. During the creation of the daily care plan, multiple intermediate semantic concepts can be used. Intermediate concepts can represent intermediate knowledge, such as inconsistencies in care plans and requests to patients and caregivers to resolve them. Whether intermediate concepts are defined depends on whether the semantic model must use this knowledge to provide decision support to the users of the application.
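The merge-with-conflict-detection step above can be sketched as follows. The input sources, task names, and times are invented for illustration: several care plan inputs are folded into one daily plan, and any inconsistency is flagged for the patient or caregiver to resolve rather than silently overwritten.

```python
# Hypothetical merge of several care plan inputs into one daily plan,
# flagging conflicts as intermediate knowledge. All task data are invented.
def generate_daily_plan(care_plans):
    plan, conflicts = {}, []
    for source, tasks in care_plans.items():
        for time, task in tasks:
            if time in plan and plan[time] != task:
                conflicts.append((time, plan[time], task))  # inconsistency to resolve
            else:
                plan[time] = task
    return plan, conflicts

inputs = {
    "discharge_orders": [("08:00", "metformin 500mg"), ("12:00", "blood pressure check")],
    "clinic_visit": [("08:00", "metformin 850mg")],  # conflicting dose for the same slot
}
plan, conflicts = generate_daily_plan(inputs)
assert conflicts == [("08:00", "metformin 500mg", "metformin 850mg")]
assert plan["12:00"] == "blood pressure check"
```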

“FIG. 40 describes the logic for processing the physiologic data of FIG. 38. These logic functions may be associated with one or several concepts and are dynamically invoked by the inference engine when new insight nodes or links are created. Each data type in FIG. 40 is represented as an individual concept with its own notes and repertoire functions customized to process those notes. The semantic model also includes distinct decision-support concepts, such as notifications to caregivers and recommendations to call doctors, that are instantiated when the logic shown in FIG. 40 is satisfied.”

“FIG. 41 describes the logic for processing the self-reported adherence data of FIG. 38. The logic can be embedded in one or several repertoire functions associated with one or more concepts. These are dynamically invoked by the inference engine as part of the creation of new insight nodes or links.”

“FIG. 42 describes the logic of function 11104 of FIG. 38, which processes the movement data to infer daily activities and learn daily habits. Further processing of the movement data may identify patterns in walking, which can then be used to make inferences about falling. The logic for movement analysis can be contained in one or more repertoire functions associated with one or several concepts. These are dynamically invoked by the inference engine during the creation of new insight nodes or links.”

“The logic described in FIG. 39-FIG. 42 is implemented using functions associated with multiple concepts within the semantic model. The ‘Clinical Action Required’ logic is an example. Multiple concepts from the PCIMS semantic model implement this logic: Fluid Retention Analysis, Hypoglycemia Analysis, Hyperglycemia Analysis, and Episodic Hypertension Analysis. The inference engine creates a single instance of each concept when a patient has been diagnosed with the corresponding disease. Each node is activated when new data arrive over the input link connecting it to its corresponding data. The associated functions then analyze the situation for fluid retention, hypoglycemia, or hyperglycemia, and the analysis may indicate the need for clinical action.”

“FIG. 43 shows the logic of the Hypoglycemia Analysis, and FIG. 44 shows its implementation using the inference mechanism of the present teachings. New blood glucose data activate the Hypoglycemia Analysis node. The operator function associated with the node checks whether the glucose level is below 75; if so, a Hypoglycemia Protocol node is created. This node implements the protocol of FIG. 43, which involves multiple sequential glucose measurements, the recommendation to take a glucose tablet, and checking whether the patient feels dizzy or sweaty. Each action is represented in the semantic model by its own concept. The protocol node’s operator keeps track of how many steps of the protocol have been followed. Once a ‘Call Doctor’ or ‘Call for Help’ state is reached, a Hypoglycemia Alert node is created, indicating whether a doctor or a friend should be called for assistance.”
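The hypoglycemia protocol described above can be pictured as a small state machine. The following sketch is illustrative only: the class name, state names, recheck limit, and action strings are assumptions, not the patent's implementation; only the 75 mg/dL threshold, the glucose-tablet recommendation, the dizzy/sweaty check, and the escalation to an alert come from the text.

```python
# Hypothetical sketch of the Hypoglycemia Analysis / Protocol logic.
# A reading below 75 creates a "protocol" state that tracks repeated
# measurements and symptoms, escalating to an alert ("call doctor").

class HypoglycemiaProtocol:
    def __init__(self, threshold=75, max_rechecks=2):
        self.threshold = threshold
        self.max_rechecks = max_rechecks   # assumed limit, not from the patent
        self.rechecks = 0
        self.state = "monitor"

    def on_glucose(self, level, dizzy_or_sweaty=False):
        """Process one blood glucose reading and return the next action."""
        if self.state == "monitor" and level < self.threshold:
            self.state = "protocol"          # Hypoglycemia Protocol node created
            return "take_glucose_tablet"
        if self.state == "protocol":
            if level >= self.threshold and not dizzy_or_sweaty:
                self.state = "resolved"
                return "done"
            self.rechecks += 1
            if self.rechecks > self.max_rechecks or dizzy_or_sweaty:
                self.state = "alert"          # Hypoglycemia Alert node created
                return "call_doctor"
            return "recheck_glucose"
        return "no_action"
```

A normal reading leaves the state machine idle, while a low reading opens the protocol and repeated low readings or symptoms escalate it.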

“The nodes and links created by the inference engine are stored in the semantic knowledge base. They are queryable by the three interfaces to the PCIMS, the Care-Station, the Care-Pager, and the Care-Portal, shown in FIG. 35. Each interface queries the PCIMS semantic knowledge base using the query meta-language illustrated in FIG. 19. FIG. 45-FIG. 63 illustrate how the various user interface objects display the answers to queries to the user.”
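The query meta-language itself is only shown in FIG. 19, so the following is a minimal assumed sketch of what an interface-side query over the knowledge base might look like: match nodes by concept type plus note conditions. The function name, node layout, and condition syntax are all illustrative.

```python
# Hypothetical sketch: querying the semantic knowledge base for nodes
# of a given concept whose notes satisfy simple equality conditions.

def query(knowledge_base, concept, **conditions):
    """Return nodes that are instances of `concept` matching all conditions."""
    return [
        node for node in knowledge_base
        if node["concept"] == concept
        and all(node["notes"].get(k) == v for k, v in conditions.items())
    ]

# Toy knowledge base with two reminder nodes (illustrative data)
kb = [
    {"concept": "Reminder", "notes": {"task": "take medication", "status": "due"}},
    {"concept": "Reminder", "notes": {"task": "check vitals", "status": "done"}},
]
due = query(kb, "Reminder", status="due")
```

An interface such as the Care-Station could issue a query like this periodically and render the matching nodes.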

“FIG. 45 shows the main interface of the Care-Station. All interfaces can be controlled by touch. The center of the interface is a window that displays the user’s photos. The left panel displays the current date and a clock. The interface also includes buttons that allow users to perform specific actions. On the left panel is a button that provides access to different options for communicating with family members and caregivers. The button also communicates the rewards earned by users for completing tasks successfully.”

“The bottom panel of FIG. 45 displays a sequence of buttons that correspond to the tasks for each day. Icons signify the tasks: a medicine bottle for taking medication, food plates for eating meals, and devices for monitoring vitals. Below each task icon is an oval that indicates the time at which the task is due. It can be colored in a variety of colors, including red for due and green for completed. The inference engine dynamically calculates daily tasks and stores them in the semantic knowledge base as reminder concepts. The Care-Station interface periodically queries the semantic knowledge base to determine the status of upcoming tasks and updates the display accordingly. Tasks that have been completed are marked ‘Done’. The right panel shows unscheduled tasks that require attention. The interface thus depicts both scheduled and unscheduled tasks; each task is an instance of a reminder from the PCIMS semantic model shown in FIG. 37-FIG. 37C.”
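The status-coloring rule described above (red for due, green for completed) can be sketched as a tiny display function. The color names beyond red/green, the "upcoming" case, and the overdue rule are assumptions for illustration.

```python
# Hypothetical sketch of the task-oval coloring logic on the Care-Station.
from datetime import datetime

def task_color(task, now):
    """Map a daily task's state to the color of its status oval."""
    if task["done"]:
        return "green"            # completed (from the text)
    if now >= task["due"]:
        return "red"              # due (from the text)
    return "gray"                 # upcoming (assumed)

now = datetime(2024, 1, 1, 9, 0)
meds = {"due": datetime(2024, 1, 1, 8, 0), "done": False}
```

Evaluating `task_color(meds, now)` against the periodically queried reminder nodes keeps the bottom panel current.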

“FIG. 46 illustrates what happens when the user touches a medication icon: a panel showing all medications that must be taken is displayed. Touching a medication’s image displays information about it. The panel also allows users to indicate whether or not they have taken the medication. Each user input is sent directly to the inference engine and processed according to the logic in the semantic model.”

“FIG. 47 shows the panel that appears after the user touches the icon for a blood pressure device. This panel allows users to enter blood pressure readings. The reading can be taken automatically if the device is connected directly to the PCIMS software. The inference engine receives the blood pressure readings and analyzes them. Additional tasks appear on the right panel of the main interface shown in FIG. 45.”

“FIG. 48 shows the panel that appears after the blood glucose device icon has been touched. The inference engine receives the manually entered data and triggers a hypoglycemia or hyperglycemia analysis. This protocol is described in FIG. 43 and FIG. 44.”

Summary for “Systems and Methods for Semantic Reasoning in Personal Health Management”


In general, the invention provides a personal illness management system. It includes an extended semantic model of a health care knowledge domain, a semantic knowledge base for personal illness management, and an inference engine. The extended semantic model of the health care knowledge domain includes existing concepts that relate to personal illness management and existing relationships between them, with inference logic embedded within each concept. Each existing concept has associated properties. The embedded inference logic, which infers values of the associated properties, includes conditions and processes for drawing inferences about existing downstream concepts that are connected via existing relationships. The semantic knowledge base for personal illness management is separate from the extended semantic model. It includes existing nodes and existing links. The existing nodes are instances of existing concepts; the existing links are instances of existing relationships between the existing concepts. The inference engine, which is independent of the knowledge domain, populates the semantic knowledge base with instances of existing concepts and existing relationships, following the inference logic embedded in each concept. The inference engine adds new nodes to the semantic knowledge base by first receiving external data, then creating instances of fact nodes with associated links representing the external data, and then adding and updating downstream insight nodes and links. A computing system is also included that has at least one processor to run the inference engine using the extended semantic model’s inference logic and to host the semantic knowledge base. The system also includes a user interface that allows patients and caregivers to send queries to the semantic knowledge base, receive the query results, and obtain decision support. The system further includes computer-implemented instructions for semantic reasoning to activate the patient’s behavior in order to manage his or her illness.

Implementations of this aspect of the invention may include the following. The system also includes interactive devices, comprising at least one of a care station, a personal communication device, a wearable sensor, or an external sensor. The caregiver and patient send data to the semantic knowledge base through at least one interactive device and also receive query results from it. The care station and the care portal are the interfaces that allow patients and caregivers to send queries to the semantic knowledge base; they also provide decision support for the patient and/or caregiver. The personal communication device is a wearable device that transmits movement data to the inference engine and notifies patients when they need to be reminded or alerted about a particular action. The computer-implemented instructions for semantic reasoning for behavioral activation include scheduling and structuring the patient’s activities, monitoring and analyzing data from those activities, and notifying the patient when an alert or reminder is needed. Scheduling and structuring the patient’s activities includes instructions for taking medications, eating meals, exercising, monitoring vital signs, checking for symptoms, and attending clinical appointments. Monitoring and analyzing data from the patient’s scheduled activities includes monitoring the patient’s vital signs, symptoms, medication adherence, meal taking, exercise, attendance at clinical appointments, and responses to social interactions. Inferences can be made about the quality of the patient’s adherence to taking medications, eating, exercising, attending clinical appointments, vital sign monitoring, symptom checks, and responses to social interactions. The existing extended semantic model concepts include concepts for patients, caregivers, and physicians. Existing relationships between the concepts indicate which caregiver is assigned to which patient and which physician is assigned to which patient.
Each of these concepts includes notes such as name, gender, and age. The existing extended semantic model concepts also include concepts for providing illness management, including hospital discharge, care guide, medication guide, medicine supply, medicine bottle, master adherence, and clinical appointment. The inference engine receives external data including patient care plan data, patient vital sign data, patient symptom data, patient movement data, and self-reported care plan adherence data. The inference engine processes the external data and adds insight nodes that include the patient’s daily care plan, reminders for care tasks, health monitoring results, and additional reminders when there are issues with health or adherence. Each existing node contains existing notes that represent instances of the associated properties of the corresponding concept. Each existing link has directionality: it originates at an upstream node and terminates at a downstream node. The system also includes a data normalizer that receives external data and normalizes them to the existing concepts and relationships of the extended semantic model. The system also includes a repertoire library containing a collection of repertoire functions. Each repertoire function is assigned a unique identifier number by which it is referenced in the inference logic embedded in each concept of the extended semantic model. In the process of updating insight nodes, the inference engine automatically invokes the repertoire functions. There are five types of repertoire functions: collector, resolver, context gate, qualifier, and operator. Each type of repertoire function plays a specific role in the inference process. The extended semantic model also includes downstream concepts and upstream concepts, linked to one another via relationships.
Each downstream concept is automatically inferred using the inference logic embedded in the corresponding upstream concept whenever the upstream concept is instanced as an upstream node or the node is modified. Each property contains either values associated with an existing concept or a chain link to data within an upstream concept of the downstream concept. The properties are used by the inference engine to implement the logic for instancing a link to a downstream concept or for instancing properties of that downstream concept. An extensible meta-language is used to query the semantic knowledge base. The computing system includes a mechanism to store and retrieve knowledge from a semantic knowledge base that is distributed across multiple computing systems. The extended semantic model can be extended by adding new fact and insight concepts or another extended semantic model. Each new insight concept is added together with its upstream concepts, which allows the system to specify the logic for adding or updating the new insight.

“In another aspect, the invention provides a method of providing personal illness management. First, an extended semantic model of a health care knowledge domain is provided that includes existing concepts and relationships between them, as well as inference logic embedded within each concept. Each concept has associated properties. The embedded inference logic infers values of the associated properties and includes conditions and processes for drawing inferences about an existing downstream concept linked to another concept via an existing relationship. Next, a semantic knowledge base for personal illness management is provided. It is distinct from the extended semantic model and includes existing nodes and existing links. The existing nodes are instances of existing concepts; the existing links are instances of existing relationships between the existing concepts. Next, an inference engine is provided that is independent of the knowledge domain and populates the semantic knowledge base with instances of existing concepts and instances of existing relationships, following the inference logic embedded in each concept. The inference engine adds new nodes to the semantic knowledge base by first receiving external data, then creating instances of fact nodes with associated links representing the external data, and then adding and updating downstream insight nodes and links. Next, a computing system is provided that includes at least one processor to run the inference engine using the extended semantic model’s inference logic and to host the semantic knowledge base. Next, a user interface is provided that allows patients and caregivers to send queries to the semantic knowledge base and receive the query results, thereby providing decision support for the patient or caregiver. Next, computer-implemented instructions for semantic reasoning activate the patient’s behavior in order to manage his or her illness.”

“The present invention provides a semantic reasoning computer system and a computer-based method of semantic reasoning for automatically drawing insights and conclusions from facts using a reasoning framework. The semantic reasoning system contains a semantic model, a semantic knowledge base, and an inference engine. The semantic model defines what is to be known; the semantic knowledge base contains all that is known. The inference engine adds knowledge to the semantic knowledge base by following a process that draws conclusions. The computer-based semantic reasoning method and the semantic reasoning computer system are used to automate tasks, improve situational understanding, and activate people to take the right actions at the right moment.”

“Referring to FIG. 4, the semantic model 1000 contains the concepts 1001, 1002 and relationships 1003 that define the content of a domain’s knowledge. The semantic model 1000 also describes the conditions and processes under which a new concept is inferred from an existing concept. The semantic knowledge base 3000 is distinct from the semantic model 1000: it contains instances of the concepts and relationships defined in the semantic model 1000. The semantic knowledge base 3000 represents knowledge via nodes 3001, 3002, which are instances of concepts, and links 3003, which are instances of relationships between concepts, together with the attributes of nodes and links. The semantic knowledge base 3000 uses nodes and links to represent both what is explicitly known and what can be inferred. The inference engine 2000 uses the reasoning process defined in the semantic model 1000 to continuously add inferred nodes and links to the knowledge base 3000. Inferred nodes may trigger recursive reasoning, which may in turn create additional inferred nodes.”
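The three components just described, model, knowledge base, and engine, can be sketched in miniature. This is an assumed illustration, not the patent's implementation: class names, the dict-based node layout, and the toy glucose rule are all invented for clarity; only the division of roles (model defines, engine populates, knowledge base stores) comes from the text.

```python
# Minimal sketch of the semantic model 1000 / inference engine 2000 /
# semantic knowledge base 3000 triad. All names are illustrative.

class SemanticModel:
    def __init__(self):
        # concept name -> inference rule (None for plain fact concepts);
        # a rule takes an upstream node and returns notes for a new node,
        # or None when the inference conditions are not met
        self.concepts = {}
        self.relationships = []   # (upstream concept, downstream concept)

class KnowledgeBase:
    def __init__(self):
        self.nodes = []   # instances of concepts
        self.links = []   # instances of relationships

class InferenceEngine:
    """Domain-independent: it only follows logic embedded in the model."""
    def __init__(self, model, kb):
        self.model, self.kb = model, kb

    def add_fact(self, concept, notes):
        node = {"concept": concept, "notes": notes}
        self.kb.nodes.append(node)
        self._infer_downstream(node)
        return node

    def _infer_downstream(self, node):
        for upstream, downstream in self.model.relationships:
            if node["concept"] != upstream:
                continue
            rule = self.model.concepts.get(downstream)
            inferred = rule(node) if rule else None
            if inferred is not None:
                new_node = {"concept": downstream, "notes": inferred}
                self.kb.nodes.append(new_node)
                self.kb.links.append((node, new_node))
                self._infer_downstream(new_node)   # recursive reasoning

# Assumed toy model: a glucose fact may trigger a hypoglycemia insight.
model = SemanticModel()
model.concepts["Blood Glucose"] = None
model.concepts["Hypoglycemia Insight"] = (
    lambda n: {"alert": True} if n["notes"]["level"] < 75 else None)
model.relationships.append(("Blood Glucose", "Hypoglycemia Insight"))

kb = KnowledgeBase()
engine = InferenceEngine(model, kb)
engine.add_fact("Blood Glucose", {"level": 68})
```

Adding the low-glucose fact causes the engine to instantiate the downstream insight node and the link between them, exactly the cascade the figure describes.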

“Referring to FIG. 5, the extended semantic model 1000 contains the concepts and relationships to be known as well as the methods by which they are to be known. The inference engine 2000 draws conclusions from facts using the specifications of the extended semantic model 1000, through a cascaded process that infers insights and triggers further inferencing. The semantic knowledge base 3000 contains knowledge stored in the form of nodes connected via links. The data normalizer 4000 is a process and meta-language that normalizes data about the real world to the concepts and relationships in the semantic model 1000; the data then go to the inference engine 2000 to be converted to nodes and links. The repertoire 5000 contains the collection of functions identified in the semantic model 1000; they are systematically invoked by the inference engine 2000 to create instances of the semantic model 1000. The knowledge query 6000 is a meta-language and process for querying the knowledge base 3000 for nodes, links, and their attributes. The user interface 7000 displays the queried knowledge 6000 in a cognitively friendly fashion.”

“In one example, the semantic reasoning system 100A of FIG. 5 is used for decision support. Inferred nodes are the knowledge the user is looking for. The user queries the semantic knowledge base 3000 for inferred nodes via the user interface 7000, and the resulting inferences are presented to the user via the user interface 7000 for decision support. In another example, the semantic reasoning system of FIG. 5 is used to build a narrative from data. The semantic model defines the content and structure of the narrative, and the inference engine creates the narrative, supported by the data, as data become available.”

“FIG. 6 shows an example of how data 110 are converted into linked knowledge, and the associated decision support and narrative derived through the inference process. The inference engine 2000 stitches together a collection of facts 110 from a person’s history to create a composite picture 120. From the medical history 120 and clinical guidelines, the inference engine 2000 can draw conclusions 130 (e.g., the person is retaining fluid) as well as recommendations (e.g., the person should contact a physician). Alerts may be sent to caregivers if health problems, such as fluid retention, are inferred. A narrative describes the inferred condition, the supporting data, and the reason for the inference.”

“Extended Semantic Model.”

An extended semantic model 1000 is the first step in accumulating knowledge by inferencing. An extended semantic model 1000 defines concepts and relationships between concepts to represent all knowledge about a domain. The semantic model also defines how concepts and relationships are inferred.

“In prior art systems the semantic network does not distinguish between concepts and instances. In FIG. 1, ‘Person’ and ‘Bob’ are both nodes in the network. A link of type ‘is an instance’ 83 encodes the knowledge that Bob is a person, as shown in FIG. 2. Inferencing in prior art semantic networks is difficult because of this ambiguity about whether a node represents an instance or a concept. The present invention keeps representations of concepts (i.e., things that can be known) apart from nodes (i.e., things that are known). The semantic model 1000 represents concepts, while the semantic knowledge base 3000 contains nodes. FIG. 9 and FIG. 11 illustrate this separation: FIG. 9 depicts the ‘Person’ concept in the semantic model, while FIG. 11 represents ‘Bob’ and ‘Sue’ as instances of a ‘Person’. This separation gives the inference engine the clear role of populating the semantic knowledge base with instances of the semantic model.”

“FIG. 7 illustrates different ways the present invention extends a semantic model. These extensions aim to overcome limitations of inferencing in existing semantic networks such as FIG. 1. Knowledge in a semantic network is represented by nodes 81 and links 82 connecting the nodes. Links are usually not directed; it is the information about which nodes are connected that gives rise to the network’s knowledge. The present invention gives directionality to the relationships that link concepts, as shown in FIG. 7. Directionality permits the representation of an ‘upstream’ concept 1100, shown at the origination point of the arrow 150, and a ‘downstream’ concept 1500, shown at the termination of the arrow 150. The distinction between upstream and downstream concepts is crucial to inferencing: as shown in FIG. 16, the instantiation of an upstream concept triggers the process of determining whether downstream concepts are ready for instantiation.”

“Concepts in the present invention can have data associated with them; this is another extension of the semantic model. The notes 1200 associated with concepts are shown in FIG. 7. Notes can refer to values associated with a concept, such as ‘age’ for a ‘Person’. A note can also refer to data in upstream concepts that are linked by a chain link to the downstream concept containing the referring note. The notes of a concept identify the data used by the inference engine to draw conclusions about the concept.”

“In another extension of the semantic model, each concept defines multiple types of functions 1250. Each function has a clearly defined purpose in the inference process. FIG. 7 shows five types of functions 1250: Resolver, Context Gate, Collector, Qualifier, and Operator. As will be explained later, each of these functions plays a unique role during the inference process. Each function is cataloged in a collection called the repertoire 5000, shown in FIG. 5, and is identified with a unique number. The functions are referred to in the semantic model by their identity numbers and are dynamically and systematically invoked by the inference engine 2000 during the creation of instances of the concept.”
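The idea of a repertoire, functions cataloged under unique numbers that the model references and the engine invokes dynamically, can be sketched as a registry. The decorator pattern and the body of the example function are assumptions; the identifier 11101 is borrowed from the ‘Process Care Plan Data’ function named elsewhere in this description purely as an illustration.

```python
# Hypothetical sketch of the repertoire 5000: a registry of functions
# keyed by unique identity numbers that the semantic model refers to.

repertoire = {}

def register(func_id):
    """Catalog a repertoire function under its unique number."""
    def wrap(fn):
        repertoire[func_id] = fn
        return fn
    return wrap

@register(11101)  # e.g., 'Process Care Plan Data' (illustrative body)
def process_care_plan_data(notes):
    # toy behavior: produce a normalized, sorted task list
    return {"tasks": sorted(notes.get("tasks", []))}

def invoke(func_id, notes):
    """Dynamically invoke a repertoire function by its identity number."""
    return repertoire[func_id](notes)
```

The semantic model need only store the number 11101; the engine looks the function up and calls it when the associated concept is instanced.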

“In another extension of the semantic model, relationships 1300 are defined to possess certain properties. The relationships 1300 are also inferred by the inference engine 2000 and are then instanced in the knowledge base 3000. Multiple types of properties 1400 can be used, e.g., cardinality, active/passive, and custom properties, among others. The inference engine 2000 interprets the properties 1400 of a relationship 1300 to implement specific logic for instancing its links.”

“FIG. 8 shows yet another extension of the semantic model 1000. The present invention defines two types of concepts in the semantic model: ‘Facts’ 1100 and ‘Insights’ 1700. Facts 1100 are what can be directly observed about the world. Insights 1700 are concepts inferred from facts 1100. The inference engine 2000 automatically infers the relationships 1300 between facts 1100 and insights 1700. Insights 1700 represent intermediate knowledge derived from analysis, learning, and applying logic. The ultimate answer to be found is at the highest level of insight 1700. The semantic model 1000 can define as many fact concepts 1100 and insight concepts 1700 as necessary to describe the domain knowledge.”

“FIG. 9 shows an example of an extended semantic model for financial transactions. It extends the semantic network of FIG. 1 and includes insight concepts that can be inferred. The concepts ‘Person’ 1101 and ‘Bank Account’ 1102 correspond to the nodes of FIG. 1; these are facts 1100 that are known explicitly. ‘Personal Transaction’ 1501, ‘Spending Pattern’ 1502, ‘Fund Need’ 1701, and ‘Transaction History’ 1702 are concepts that can be inferred. Some relationships, such as ‘owns account’ 1306, are explicitly known and represent facts 1100. Other relationships, such as ‘transacts cash’ 1301 and ‘primarily receives money’ 1304, are inferred by the inference engine 2000.”

The building blocks of an extended semantic model are thus upstream concepts 1100, downstream concepts 1500, notes 1200, functions 1250, and relationships 1300. The semantic model 1000 for a particular domain (e.g., FIG. 9) specifies the details of these building blocks. ‘Person’ is an upstream concept of ‘Bank Account’, connected through the relationship ‘owns account’. The inference engine 2000 processes ‘Person’ as an upstream concept and ‘Bank Account’ as a downstream concept. The details of these concepts, i.e., their notes and functions, are extracted from the semantic model, and the inference engine 2000 dynamically processes them through its inner logic to perform inferencing. Because a semantic model can be represented in terms of these building blocks, it can be used for any number of domain applications.

“Semantic Knowledge Base”.

As shown in FIG. 4, the semantic model 1000 is the blueprint for what knowledge is stored within the semantic knowledge base 3000. The inference engine 2000 is responsible for instantiating all knowledge in the knowledge base 3000.

“The general form of a semantic knowledge base 3000 is shown in FIG. 10. This example shows instances of the concepts and relationships of the general semantic model shown in FIG. 8. The fact concepts 1100 of FIG. 8 correspond to the fact nodes 3100 of FIG. 10, and the insight concepts 1500 and 1700 of FIG. 8 correspond to the insight nodes 3500 and 3700 of FIG. 10. The links 3300 of FIG. 10 connect the nodes and correspond to the relationships 1300 of FIG. 8. The inference engine 2000 creates all nodes and links, using direct knowledge of the world 3100 and conclusions 3500, 3700 drawn from that direct knowledge.”

“FIG. 11 shows an example of a particular semantic knowledge base for the financial transactions model of FIG. 9. This example shows two instances of the Person concept, Bob 3101 and Sue 3102, each having a bank account. Bob’s account 3103 is at City Bank 3105, while Sue’s account 3104 is at State Bank 3106. Three financial transactions, 3107, 3108, and 3109, were made between the two accounts 3103 and 3104. Each transaction is described using two types of links: Bob sends the money, indicated by links 3312 and 3314, and Sue receives it, indicated by links 3311, 3313, and 3315. Nodes 3101-3109 represent instances of fact concepts that are known from external data sources.”

Multiple types of inference occur as fact nodes are instanced. One type of inference creates new links between nodes. In FIG. 11, link 3307 is created between Bob 3101 and Sue 3102 to indicate that Bob and Sue have transacted money. Another type of inference changes attribute values in existing nodes: a link to a bank account node, or its creation, triggers certain functions associated with the bank account concept, including the calculation of the account balance attribute. A third type of inference determines whether the creation of upstream fact nodes triggers the creation of downstream inferred nodes. FIG. 11 shows examples of such inferred downstream nodes. The creation of the transactions 3107, 3108, and 3109 between Bob’s and Sue’s bank accounts 3103 and 3104 triggers nodes 3501, 3505, and 3506. The first transaction creates an instance 3501 of the concept ‘Personal Transaction’ 1501 of FIG. 9 and the node 3702 of concept ‘Transaction History’ 1702. Every subsequent personal transaction 3505, 3506 generates additional links to node 3702, which tracks the history of personal transactions between Bob and Sue and allows the inference of who primarily sends money 3320 and who primarily receives it 3321. FIG. 11 also contains the inferred nodes 3502 (Sue’s) and 3503 (Bob’s), which track Sue’s and Bob’s spending habits. From these, the inference engine predicts when an account will require a deposit. This invention thus allows automatic enhancement of semantic knowledge by inferencing, such as the updates to 3503 (from 3104) and to 3701 (from 3503).
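The three inference types above can be sketched with the bank example: a new transaction fact creates a link, updates the balance attributes of the accounts, and appends to a downstream transaction-history insight. The data layout, function name, and starting balances are assumptions for illustration.

```python
# Hypothetical sketch of the three inference types triggered by a new
# transaction fact in the bank-account example.

def apply_transaction(kb, amount, sender, receiver):
    # 1) new link between existing nodes ('transacts money')
    kb["links"].append((sender, receiver, amount))
    # 2) attribute update on existing nodes (account balance recalculation)
    kb["accounts"][sender] -= amount
    kb["accounts"][receiver] += amount
    # 3) downstream insight node: running transaction history between the pair
    history = kb["insights"].setdefault((sender, receiver), [])
    history.append(amount)

# Toy knowledge base (illustrative values)
kb = {"accounts": {"Bob": 1000.0, "Sue": 200.0}, "links": [], "insights": {}}
apply_transaction(kb, 50.0, "Bob", "Sue")
```

Each subsequent call extends the same history entry, mirroring how additional personal transactions add links to the transaction-history node.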

“FIG. 12 shows the three ways knowledge is represented in the extended semantic knowledge base: nodes represent instances of concepts 1100, attributes represent the notes 1200 within concepts and capture data about the node, and links represent relationships 1300 between nodes.”

“The example of FIG. 12 contains details about two concepts of FIG. 9: the ‘Person’ and ‘Bank Account’ concepts. The Person concept is defined as having three notes: Name, Age, and Gender. Bob’s corresponding node 3050 in the semantic knowledge base of FIG. 12 records his name, age 42, and gender, male. He has a bank account 3060 with account number 123456 and account balance $1345.60. The bank account concept contains two notes, the account number and the account balance. The account holder is a third note in the bank account concept; this note links to the ‘Name’ note in the upstream ‘Person’ concept.”
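The chained note just described, an account-holder note that resolves to the upstream Person's Name, can be sketched as a link traversal. The node layout and field names are assumptions; the Bob/account values come from the example above.

```python
# Hypothetical sketch of a note that refers to data in an upstream node
# through a link, as with the 'account holder' note.

person = {"concept": "Person",
          "notes": {"Name": "Bob", "Age": 42, "Gender": "male"}}
account = {"concept": "Bank Account",
           "notes": {"number": "123456", "balance": 1345.60},
           "upstream": {"owns account": person}}

def resolve_note(node, link, upstream_note):
    """Follow a named link to an upstream node and read one of its notes."""
    return node["upstream"][link]["notes"][upstream_note]

holder = resolve_note(account, "owns account", "Name")
```

Rather than copying the name into the account node, the note stays a reference, so an update to the Person node is visible downstream.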

“Inference Engine”

“The inference engine 2000 creates the nodes and links in the semantic knowledge base; these are instances of the concepts and relationships of the semantic model. FIG. 13 shows the flow of the inference process 100B. The semantic knowledge base 3000 is populated using the logic for creating instances defined by the enhanced semantic model 1000. The semantic model identifies the functions that the inference engine 2000 systematically invokes for a particular concept. These functions are collectively known as the repertoire 2700. They can be compiled, or they may be represented in a meta-language that the inference engine interprets at run time. Coding domain-dependent functions in an interpreted meta-language allows them to be added or updated whenever needed, without an update to the core inference engine software.”

“The repertoire 2700 functions are of five fixed types: Resolver, Context-Gate, Collector, Qualifier, and Operator. Each type of repertoire function plays a different role in the inference process. A repertoire function can be domain-independent and therefore applicable to many inference problems, or it can be specific to a particular semantic model in a domain.
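The repertoire can be pictured as an indexed table of functions that the engine dispatches dynamically. The sketch below is a hedged illustration under that reading; the indices, function names, and knowledge-base shape are all invented for the example.

```python
# Sketch of a repertoire: functions stored by numeric index and invoked
# dynamically by the inference engine. Each entry is tagged with one of
# the five fixed function types so the engine knows its role.
REPERTOIRE = {}

def register(index, ftype):
    """Add a function to the repertoire under a numeric index."""
    def wrap(fn):
        REPERTOIRE[index] = (ftype, fn)
        return fn
    return wrap

@register(1, "Resolver")
def standard_resolver(kb, concept, attrs, key):
    """Return an existing node whose resolving attribute matches, else None."""
    for node in kb:
        if node["concept"] == concept and node["attrs"].get(key) == attrs.get(key):
            return node
    return None

@register(2, "Qualifier")
def has_required_attrs(node, required):
    """A node is enabled only when all required attributes have values."""
    return all(node["attrs"].get(r) is not None for r in required)

def invoke(index, *args):
    """Dynamic dispatch by repertoire index, as the engine would do."""
    _ftype, fn = REPERTOIRE[index]
    return fn(*args)
```

Because functions are reached only through their index, a domain-specific function can be added to the table without touching the dispatch logic, mirroring the separation the text describes.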

“The inference process starts with facts about reality that have been normalized to the semantic model 2660 and fed to the inference engine. Normalized facts are the seed knowledge in the semantic knowledge base 3000.

“Whenever any knowledge (nodes, links, or attributes) is created or updated in the semantic knowledge base, it triggers a cascade of knowledge downstream 1002 of the upstream concept 1001 just instanced or updated, as shown in FIG. 4. The downstream concepts of the just-instanced concept, and their corresponding nodes, are identified; these nodes can then be considered for instancing or updating 2200. The inference engine executes a series of functions associated with the concept in the semantic model 1000 to determine whether conditions are met for creating a new node/link or updating an existing node/link 2300.

“FIG. 14 shows the data normalization procedure 60 with which the inference process begins. A data normalizer 2600 (an example is shown in FIG. 13) periodically polls external data sources for new information. Alternatively, new data can be pushed to the data normalizer from external sources as the data become available. The data normalizer interprets the semantic model to determine which data are described in it. Data that are not described are ignored. Data that are described in the semantic model are sent to the inference engine as a packet of information that identifies which concept/attribute/relationship the data correspond to. The inference engine checks whether the data are already in the semantic knowledge base and, if not, adds them to it.
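The filtering step above can be sketched briefly. This is an illustrative sketch only; the mapping table, field names, and packet layout are assumptions, not the patent's format.

```python
# Sketch of a data normalizer: external records are kept only if the
# semantic model describes them; each kept field is turned into a packet
# naming the concept/attribute it corresponds to.
SEMANTIC_MODEL = {
    # external field name -> (concept, attribute)
    "acct_no":  ("Bank Account", "Account Number"),
    "balance":  ("Bank Account", "Account Balance"),
    "customer": ("Person", "Name"),
}

def normalize(record):
    """Convert one external record into packets for the inference engine,
    ignoring any fields the semantic model does not describe."""
    packets = []
    for field_name, value in record.items():
        if field_name in SEMANTIC_MODEL:
            concept, attribute = SEMANTIC_MODEL[field_name]
            packets.append({"concept": concept,
                            "attribute": attribute,
                            "value": value})
    return packets

# 'branch_color' is not in the semantic model, so it is ignored.
packets = normalize({"acct_no": "123456", "balance": 1345.60,
                     "branch_color": "blue"})
```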

FIGS. 15-17 show the inference process in three stages. The first stage 40, creating instances of fact nodes with associated links, is shown in FIG. 15; the second stage 50, recursively updating downstream nodes, is shown in FIG. 16; and the third stage 70, creating instances of insight nodes with associated links, is shown in FIG. 17. The processes of FIGS. 15-17 invoke the five types of functions: Context-Gate, Collector, Operator, Qualifier, and Resolver. These functions are identified in the definitions of concepts in the semantic model and are accessed via their index in the repertoire 2700, the collection of functions. The inference engine accesses these functions dynamically based on their index within the repertoire.

“FIG. 15 shows the logical flow of stage 40 of inferencing: creating instances of fact nodes with associated links. First, the engine checks whether the fact corresponds to a node already in the knowledge base. If so, new attribute values are added to the existing node instance. If not, the inference engine checks whether the fact's concept has a 'Resolver' function, which defines when a new instance of the concept is needed. The most common is the 'Standard Resolver'. This function checks whether the knowledge base contains a node with the same value of the attribute passed as the argument of the resolver function, known as the 'Resolving Attribute'. A Social Security Number is an example of a resolving attribute for a person: it is a unique qualifier. If a node with the same value of the resolving attribute is already present, there is no need to create a new instance. If the resolver function does not include an argument, a new instance is created each time. The inference engine then runs the concept's 'Qualifier' function to determine whether the new or updated node should become enabled. Enabled nodes activate the cascade of knowledge to downstream nodes. Nodes become enabled when certain conditions are met, such as having values for certain required attributes. An upstream node cannot trigger the creation or update of downstream nodes until it is enabled. Once the node is enabled, the concept's 'Operator' functions are run by the inference engine. Operator functions calculate the values of the attributes specified for the concept, for example the 'Account Balance' for the 'Bank Account' concept 1160 shown in FIG. 12.”
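The resolve-qualify-operate sequence above can be sketched as follows. This is a hedged sketch of stage 40 under the description given, with concept definitions, field names, and the enablement rule all assumed for illustration.

```python
# Sketch of stage 40: instancing a fact node. The engine first tries to
# resolve against an existing node (Standard Resolver on a resolving
# attribute), then runs the Qualifier to decide whether the node is
# enabled; Operators would then run on enabled nodes.
KB = []

PERSON = {
    "name": "Person",
    "resolving_attr": "SSN",        # Standard Resolver argument
    "required": ["SSN", "Name"],    # Qualifier condition
}

def instance_fact(concept, attrs):
    # Resolver: reuse a node with the same resolving-attribute value, if any.
    key = concept["resolving_attr"]
    node = next((n for n in KB
                 if n["concept"] == concept["name"]
                 and n["attrs"].get(key) == attrs.get(key)), None)
    if node is None:
        node = {"concept": concept["name"], "attrs": {}, "enabled": False}
        KB.append(node)
    node["attrs"].update(attrs)
    # Qualifier: the node is enabled once required attributes have values.
    node["enabled"] = all(node["attrs"].get(r) is not None
                          for r in concept["required"])
    return node

a = instance_fact(PERSON, {"SSN": "123"})                  # not yet enabled
b = instance_fact(PERSON, {"SSN": "123", "Name": "Bob"})   # same node, now enabled
```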

“In the second stage of inferencing, a node's downstream links and nodes become active when the node is enabled or updated. FIG. 16 shows the logical flow 50 of the algorithm to update downstream nodes. This process updates node attributes that depend on links to upstream nodes. If the attributes have been updated, the qualifier function is run again to determine whether the node should still be enabled. If it is enabled, the process of updating its own downstream nodes is initiated. Finally, the operator function executes to complete the node update.

“In the third stage of inferencing, instances of insight nodes and their associated links are created. FIG. 17 shows the logical flow 70 of the algorithm to create insight nodes and their associated links. The process begins when a fact node contains an operator function called a 'Posit Handler', which triggers the instancing process for downstream nodes. The first step in instancing insight nodes is to ensure that the attributes of the upstream nodes contain values that allow for the creation of a downstream insight node. This criterion can be coded in a 'Context-Gate' function, which the inference engine dynamically invokes for the downstream node. The context-gate function returns a Boolean value. If the value returned is true, the downstream node's resolver function is invoked. If no existing node in the knowledge base fulfills the resolving criteria, a new node is created and linked to the upstream node that triggered its creation. The new node and its links are stored in the knowledge base. The second step is the collection of attributes for the newly created node. Recall that concepts can have two types of attributes: those that store values directly and those that link to attributes in other nodes, as illustrated in FIG. 12. The 'Collector' function gathers attribute values from upstream nodes and associates them with the node's own attributes. These attribute values are saved in the knowledge base. The third step is executing the concept's qualifier function to determine whether the node is enabled. Once a node is enabled, its operator functions are executed. If an operator function contains a 'Posit Handler', downstream nodes are triggered and the process of FIG. 17 is applied again by the downstream nodes to continue the cascade of knowledge.
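The context-gate/resolve/collect sequence above can be sketched briefly. This is an illustrative sketch only: the 'Spending Pattern' concept shape, the simplified always-create resolver, and all names are assumptions for the example.

```python
# Sketch of stage 70: a Posit-Handler on an upstream node proposes a
# downstream insight node; the Context-Gate decides whether to create it;
# the Collector then copies attribute values from the upstream node.
KB = {"nodes": [], "links": []}

def posit_insight(upstream, insight_concept):
    # Context-Gate: Boolean check on the upstream node's attributes.
    if not insight_concept["context_gate"](upstream):
        return None
    # Resolver (simplified here): always create a new insight node.
    node = {"concept": insight_concept["name"], "attrs": {}}
    KB["nodes"].append(node)
    KB["links"].append((upstream["concept"], node["concept"]))
    # Collector: pull attribute values from the upstream node.
    for own_attr, upstream_attr in insight_concept["collect"].items():
        node["attrs"][own_attr] = upstream["attrs"][upstream_attr]
    return node

SPENDING_PATTERN = {
    "name": "Spending Pattern",
    # only transactions carrying an amount can seed a pattern
    "context_gate": lambda up: up["attrs"].get("Amount") is not None,
    "collect": {"Last Amount": "Amount"},
}

txn = {"concept": "Transaction", "attrs": {"Amount": 25.0}}
insight = posit_insight(txn, SPENDING_PATTERN)
```

A qualifier and operator pass would follow, and an operator containing its own posit handler would recurse downstream, giving the cascade the text describes.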

“FIG. 18 illustrates the effect of the Posit-Handler. Suppose concept A has relationships with concepts B, C, and D. When node A is created, its Posit-Handler operator posits that new links could be created between A and B, A and C, and A and D. The inference process of FIG. 17 then activates nodes B, C, and D. When nodes B, C, and D are activated, the Posit-Handlers in those nodes trigger in turn, initiating the creation of the next set of nodes and links. This cascading is the heart of the semantic reasoning method.

“Meta-Language Interface to Inference Engine”

“The present invention's inference engine is intended to be used for any class of inference problems that can be represented by a semantic model. The inference engine reduces the semantic model to its fundamental building blocks, e.g. concepts, attributes, relationships, and functions, and then applies its logic to the details represented by each building block. An extensible meta-language describes the building blocks of the semantic model and of the corresponding semantic knowledge base. The meta-language includes a vocabulary of 'words' and a 'syntax' for interpreting words, in which new words are built upon other words. The meta-language extends the FORTH programming language, and the present invention provides a FORTH interpreter that acts as the interface between the inference engine and the semantic model and semantic knowledge base. The interpreter interprets standard FORTH words; words that describe elements of the semantic model, such as concepts, attributes, relationships, and functions; words that describe elements of the semantic knowledge base, such as nodes, attributes, and links; and new meta-language words, built upon these abstract words, that describe domain-specific semantic concepts.

“FIG. 19 illustrates the meta-language of the present invention. 'DROP' and 'IF' are fundamental words that are part of any FORTH vocabulary. Words such as 'SEEK' and 'MATCH' are essential words that describe the building blocks of the semantic model. 'SCAN' is an example of a domain-specific word that describes a repertoire function. The example also contains the definition of a domain-specific query word, 'TypeNEFriendPhone', which searches the knowledge base to return a list of friends in New England. The syntax ':' begins the definition and the syntax ';' ends it. The user interface sends a string to the interpreter of the inference engine. This string can contain the dynamic definition of a word, such as TypeNEFriendPhone, as well as the request to execute that word. In this example, the string is '26 TypeNEFriendPhone #isFriendOf SCAN'. The inference engine interprets the query to return the friends of the person identified in the knowledge base by the number '26'. This extensible meta-language allows the inference engine and semantic model to communicate with each other, and makes it easy to add to or update the semantic model without updating the core inference engine software. Without the ability to dynamically create words that represent concepts and relationships, and to query for nodes and links stored in the semantic knowledge base, the present invention would not be able to reason about real-world problems in which concepts evolve and change over time.
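The flavor of such an interpreter can be sketched as a tiny stack-based word evaluator. This is not the patent's FORTH vocabulary: the knowledge-base layout, the semantics given to 'SCAN', and the treatment of unknown words as literals are all assumptions for illustration.

```python
# Minimal sketch of a FORTH-style word interpreter over a knowledge base.
# A standard word (DROP) and a domain query word (SCAN) share one data
# stack, as in FORTH.
KB_LINKS = {  # node id -> {relationship -> list of target node ids}
    26: {"#isFriendOf": [31, 42]},
}

def interpret(source, stack=None):
    stack = [] if stack is None else stack
    for word in source.split():
        if word == "DROP":
            stack.pop()
        elif word == "SCAN":            # ( node-id relationship -- targets )
            rel, node = stack.pop(), stack.pop()
            stack.append(KB_LINKS.get(node, {}).get(rel, []))
        elif word.lstrip("-").isdigit():
            stack.append(int(word))     # numeric literal
        else:
            stack.append(word)          # treat unknown words as literals
    return stack

# "26 #isFriendOf SCAN" -> friends of the node identified by 26
result = interpret("26 #isFriendOf SCAN")
```

Extending the vocabulary means adding a branch (or, in real FORTH, a colon definition) without changing the interpreter loop, which mirrors how the meta-language decouples domain words from the core engine.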

The meta-language of this invention allows the contents of the semantic knowledge base, represented as nodes connected by links and carrying attributes, to be queried. The meta-language is flexible enough to allow queries on any nodes and attributes in the knowledge base. In particular, it can be used to query all decision-support nodes, such as reminders, alerts, and recommendations, which are special types of insight nodes. The interface for each decision-support node does one of three things: it notifies users of new actionable items, increases the urgency of previously notified actionable items that have not been acted on, or removes the notification if appropriate actions are taken. This intelligent display is possible because the system automatically tracks each user's inputs and adds them to the knowledge base.

“An application-specific user interface queries the semantic knowledge base and provides decision support for the user. FIG. 20 shows the flowchart 72 for updating the user interface. At any refresh event, which can be either periodic or user-driven, the user interface queries the knowledge base for the current values of all knowledge currently displayed. The interface compares the query results with the currently displayed values and updates the display as necessary.
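The compare-and-update loop above is small enough to sketch directly. Function and variable names are illustrative assumptions.

```python
# Sketch of the refresh loop of flowchart 72: at each refresh event the
# interface re-queries the knowledge base and updates only the displayed
# values that changed.
def refresh(displayed, query_kb):
    """Return the updated display plus the list of keys that changed."""
    fresh = query_kb()                 # current values for displayed items
    changed = [k for k in displayed if fresh.get(k) != displayed[k]]
    displayed.update({k: fresh[k] for k in changed})
    return displayed, changed

displayed = {"Account Balance": 1345.60, "Alerts": 0}
displayed, changed = refresh(displayed,
                             lambda: {"Account Balance": 1200.10, "Alerts": 0})
```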

“The knowledge in the knowledge base may be stored on multiple computer servers. FIG. 21 shows one example of a distributed knowledge base. One server acts as the master, containing the address of each node, attribute, and link in the knowledge base. The knowledge base is queried as shown in FIG. 20. The software service finds the queried node and all its attributes on the distributed servers, compiles them into the form that answers the question, and returns the answer to the user interface.
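The master-index lookup can be sketched as follows. Server names and the two-level layout are assumptions for illustration only.

```python
# Sketch of the master-server lookup of FIG. 21: the master maps every
# node id to the server holding it; a query service fetches the node's
# attributes from that server and returns them.
MASTER = {"node-3050": "server-A", "node-3060": "server-B"}
SERVERS = {
    "server-A": {"node-3050": {"Name": "Bob", "Age": 42}},
    "server-B": {"node-3060": {"Account Balance": 1345.60}},
}

def query(node_id):
    """Locate a node via the master index and return its attributes."""
    server = MASTER[node_id]
    return SERVERS[server][node_id]

answer = query("node-3050")
```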

“Applications for Semantic Reasoning”

“FIG. 22 illustrates an embodiment of the present invention in which the reasoning system is implemented as a set of operating system instructions processed by the native hardware that hosts the operating system.

The ability to infer and extend knowledge in a semantic network applies to many different areas. The present teachings focus on three: 1) automating processes based on inferring which process steps have been taken; 2) situational understanding derived through iterative analyses of routine behavioral data and learning patterns from them; and 3) activating individuals to take correct actions based on what has been done, what is needed, and what they are capable of doing. These applications require a combination of logic and pattern learning to make inferences, and the system and method described in the present teachings support this combination.

In a process automation application, a semantic model defines the cause-effect relationships between process steps. Inference determines which process steps have been completed and which next steps can be taken, and automatically triggers those next steps. Upstream concepts describe preceding process steps, and downstream concepts describe the steps that follow. Data from the upstream (preceding) processes accumulate the data necessary for a downstream process. The Context-Gate and Resolver functions defined in a concept verify that the requirements for initiating the process step have been met. The Qualifier and Collector functions verify and collect all data required to execute the process step. If all requirements are met and all data are available, the process step is initiated. The data that result from the initiated process step are in turn input to the inference engine and processed according to the semantic model.

“In a situational-understanding application, data from a variety of sensors and user inputs are analyzed to identify situations of interest. The semantic model describes how the input data are interpreted and how they are used to determine the presence or absence of situations of interest. Inference determines whether the data required for interpretation are available, automates the process of acquiring the data, and implements the logic to determine the presence or absence of situations of interest. None of these steps can be done without inference.

“In a behavioral activation application, the semantic model extends process automation and situational understanding to include formulating recommendations for users. The semantic network defines the responses to specific situations of concern. When situations of interest are identified, as in a situational-understanding application, inferences are made based on the characteristics of the situation. Formulating response options is normally a human function, and it cannot be automated without inferencing.

“FIG. 23 shows the approach central to all applications of the present teachings. Data are interpreted according to context, the environment in which the data arise. The situation, in turn, determines the response. The semantic model describes the relationships among data, context, situations, and responses, and identifies the inference logic by allowing methods to be associated with concepts. The inference engine maps data to the semantic model and identifies contexts, situations, and responses according to the logic of the semantic model. All inferences are stored within the semantic knowledge base as instances of concepts and relationships between concepts.

“Semantic Reasoning to Automate Processes: a Financial Transaction Automation System (FTAS)”

“An application to automate certain financial transactions is presented as an example of process automation. The application, shown in FIG. 9, FIG. 11, and FIG. 12, is a system that analyzes bank transactions to identify patterns and predict when additional deposits are required. FIG. 9 shows a small portion of the semantic model. It contains four fact concepts 1101, 1102, 1103, and 1104 as well as four insight concepts 1501, 1502, 1701, and 1702. The fact concepts are people 1101, their bank accounts 1102, banks 1103, and financial transactions 1104. The inference engine uses these concepts to continuously analyze transactions, updating pairwise transaction patterns between account holders 1501 and spending patterns 1502. The definitions of concepts 1501 and 1502 identify the functions that the inference engine uses to update nodes of types 1501 and 1502; these functions are part of the application's repertoire.

FIG. 11 illustrates how financial transaction data are normalized into the knowledge base to create instances of fact nodes. These data seed the knowledge base with the people Bob 3101 and Sue 3102, their bank accounts, and the transactions 3107-3109 between them. This seed knowledge is used to infer the insight nodes. Each time a transaction takes place, the inference engine updates the corresponding spending patterns and the transaction pattern between the individuals 3501, and continues on to funds required 3701.

“FIG. 12 shows details of the semantic knowledge base of FIG. 11. The semantic knowledge base contains attribute values for each instance of each concept, such as the age and gender of a person 3050 and the account balance of a bank account 3060. The semantic knowledge base also contains links between nodes; for example, Bob owns Account-1 3350. Attribute values may be based on explicit facts (e.g. age, gender) or on inferencing (e.g. account balance).”

“As with any application of the invention, the semantic model for FTAS (shown in FIG. 9) can be expanded to include additional fact and insight concepts. The inference engine works the same regardless of the semantic model: it applies the functions in the semantic model to whatever data are available. Spending predictions and automated fund transfers to cover anticipated expenses are two examples of extensions to the semantic model. These extensions involve adding concepts to the semantic model and incorporating the associated functions invoked by the inference engine into the application's repertoire.

“FTAS illustrates one benefit of the present teaching: large amounts of data can automatically be processed into higher levels of knowledge, including predictions (e.g. spending patterns and funds needed). Another benefit is the possibility of automating certain processes, such as fund transfers in anticipation of need, which provides bank customers with a valuable service. The semantic model explicitly encodes the logic of anticipation and automation, and all inferences can be traced.

“This invention includes the semantic model, the inference engine, the semantic knowledge base, the repertoire of functions, and the external data sources that seed the knowledge base with fact nodes. Any application of it also has a user interface that queries the knowledge base for contents that interest the user. The FTAS application would have a user interface that includes tables and charts showing spending patterns, as well as alerts when funds are required.

“Semantic Reasoning for Situational Understanding: Monitoring Activities of Daily Living (MADL)”

“An example of semantic reasoning for situational understanding is a system that interprets movement data along with self-reported data to determine whether key activities of daily living (ADL), such as sleeping, bathing, and eating, are occurring in a normal manner. The semantic model describes how the data are translated into daily activities. The inference engine continuously applies the semantic model to the incoming data to determine whether expected activities are occurring, to detect deviations from normal patterns, and to draw attention to abnormal patterns.

“MADL, the system for monitoring activities of daily living, includes a number of sensors (shown in FIG. 24) that provide movement data. The sensors include a wearable transmitter 12050 (also referred to as the pager), which emits a pulse every few seconds, and a set of transceivers 12000, distributed throughout the living area, that detect the pulse. Each transceiver measures the signal strength of the pulse and sends it to the inference engine via a communication hub 12100.

“Each transceiver 12000 in FIG. 24 has a unique address. Data received from the wearable transmitter 12050 are rebroadcast to the communication hub 12100 together with the transceiver's address. Because each transceiver is also identified with its location, the rebroadcast signal can be interpreted in terms of daily activities at that location. Signals from the kitchen, for example, are analyzed for patterns that involve small movements about the stove with trips to cabinets and the refrigerator. The semantic model contains the logic to interpret signals from the transceivers, and the inference engine uses it to interpret the movement data.

“FIG. 25 shows the design of the wearable device 12050. A single-chip transceiver transmits signals periodically. The device also includes an accelerometer, a magnetometer, and a gyroscope, a USB connector for charging, and an antenna to transmit and receive signals. The transmitter works in the 420 to 450 MHz frequency band; because the wavelength is long enough to bend around walls and furniture, the signal can be picked up by multiple transceivers. The accelerometer data allow movement to be determined, making it possible to distinguish between moving and resting. The gyroscope data allow horizontal and vertical orientation to be determined. The magnetometer is used to determine the orientation relative to north; this information is used to adjust for variations in the signal measured at the transceivers. The total data from the wearable device comprise the signal strength measured at each transceiver together with the data from the accelerometer, gyroscope, and magnetometer. The inference engine combines these data to determine repeated patterns of movement within and around the house. The relative signal strengths at the transceivers, adjusted for the rotational orientation derived from the magnetometer data, allow the inference engine to determine when the user is located in different rooms within the house, such as the kitchen, bathroom, bedroom, and living room. The engine also determines whether the user is engaged in rest activities (i.e. sitting, lying, or standing still), as inferred from the accelerometer and gyroscope data. It is also important to understand the context in which an activity takes place. For example, time at the kitchen table after cooking activity indicates eating, whereas working at the kitchen table without the surrounding cooking activity does not. This common-sense logic is part of the semantic model.

“FIG. 26 shows one way to wear the pager 12050. The circuit is contained in a casing that can be worn as the ornament of a bolo neckpiece; the antenna, which measures about 6 inches in length, can be woven into the bolo lace. Alternatively, the pager can be placed on an elastic wristband, with the antenna woven into the wristband. FIG. 26 also shows the battery for the wearable pager, which provides power for at least one day. The pager is charged through a USB connector (mini or regular) using the charging dock shown in FIG. 29. The dock has a guide for the USB connector, which slides into contact with the charging circuit within the communication hub.

“FIG. 27 shows the design of the transceiver 12000. The circuit 12010 can be attached to a wall charger 12011, which provides power for the transceiver.

“FIG. 28 shows the design of the communication hub 12100. The hub contains the same RF receiver/transmitter chip 12130 as the pager, along with a CPU, an Ethernet port, and a USB port 12120 for charging. FIG. 29 shows the communication hub enclosure with the charging dock 12150 and the Ethernet and charging ports 12140. The Ethernet port transmits partially processed data from the device to a remote server. The server hosts the semantic model, the inference engine, and the semantic knowledge base of the present invention.

“The pager 12050, the transceivers 12000, and the communication hub 12100 are designed to collect, process, and relay movement data for analysis. Interpretation begins by learning patterns from data that are repeated daily, often multiple times per day; these patterns correspond to common activities. FIG. 30 shows the signals received by the transceivers of FIG. 24. Each transceiver receives an individual signal from the wearable device whose strength is inversely proportional to the square of the distance between the wearable device and the transceiver. The signal-vector is the collection of signals from all transceivers in the home at any one time; it has as many components as there are transceivers. A signal-vector is associated with the wearable device's location, within the error limits of the signal strengths. FIG. 31 shows clusters of signal-vectors that correspond to common locations in the home. The inference engine can associate signal-vector clusters with key daily activities by correlating self-reported data (e.g., a meal was taken) and movement data (e.g., no movement).
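The inverse-square relationship and cluster matching above can be sketched numerically. This is an illustrative sketch only: the coordinates, transmit power, and the nearest-centre matching rule are assumptions, not the system's calibration.

```python
# Sketch of locating the wearable from a signal-vector: signal strength
# falls off with the square of distance, and a measured vector is matched
# to the nearest learned cluster centre.
import math

def signal_vector(device_xy, transceivers, power=1.0):
    """One component per transceiver, inverse-square with distance."""
    return [power / ((device_xy[0] - tx)**2 + (device_xy[1] - ty)**2)
            for tx, ty in transceivers]

def nearest_cluster(vec, clusters):
    """Label of the cluster centre closest to the measured vector."""
    return min(clusters, key=lambda label: math.dist(vec, clusters[label]))

TRANSCEIVERS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
CLUSTERS = {
    "kitchen": signal_vector((2.0, 2.0), TRANSCEIVERS),
    "bedroom": signal_vector((8.0, 8.0), TRANSCEIVERS),
}

# A reading taken near the kitchen matches the kitchen cluster.
reading = signal_vector((2.5, 1.5), TRANSCEIVERS)
room = nearest_cluster(reading, CLUSTERS)
```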

“Partially processed movement data, i.e. clustered signal-vectors, are transferred to a server for further processing. FIG. 32 shows a section of the semantic model that processes the sensor data into deeper insights. In FIG. 32, 12300 denotes the clustered signal-vectors transmitted by the communication hub. These data are combined with other data, such as self-reported data on daily activities like getting up, eating, and going to bed. These data are input to an 'Activity/Anomaly Identifier' semantic concept 12320, which identifies repeated patterns and links them to common daily activities. Common-sense, a-priori patterns related to common activities are another input to the Activity/Anomaly Identifier 12320. The Activity/Anomaly Identifier uses the a-priori knowledge 12310 to recognize patterns of rest and relaxation, meal preparation, eating, sleeping, and balance. For example, a-priori knowledge identifies rest and relaxation as activities with limited duration, no translational motion, and a lower-than-usual center of gravity, and balance knowledge describes patterns such as the presence of a stabilizing time between sitting and walking. Applying the a-priori knowledge to the movement data from the sensors 12300, the inference engine learns the user's individual patterns 12330. Normal sleep times, normal wake-up times, and normal variations in sleeping patterns are examples of learned patterns.

“In addition to learning the normal patterns of activity, the operator functions of the semantic concept 12320 also detect deviations from the normal patterns. A fall 12340 is detected when the user lies down in an unusual place after walking. A balance analysis 12350 is performed if the user takes longer than usual to stabilize after sitting; this can lead to a high-risk-of-falling alert 12360. Recurring detections of balance issues may trigger recommendations to lower the risk of falling 12380, such as slowing down or using a cane to help rise. All such inferences are generated automatically by the inference engine of the present teaching through the processes described in FIG. 15-FIG. 17.”

“FIG. 33 shows a specific sequence of the inference process. The Activity/Anomaly Identifier concept activates when movement data become available; the data in this example concern movement in the kitchen. The Context-Gate, Qualifier, and Operator functions of the Activity/Anomaly Identifier determine that the inference engine needs to fetch a-priori and learned patterns about meal preparation. After those patterns have been fetched, the Operator function associated with the Activity/Anomaly Identifier concept continues processing movement data until it matches the a-priori and learned patterns. The movement data representing meal preparation are then classified and added to the learned patterns. The Operator function then triggers the Learned Patterns of Dining concept, which prepares for the processing of a particular instance of dining.

The processing of movement data described above converts large quantities of data from multiple sensors (transmitter, accelerometer, gyroscope, and magnetometer) into assertions about normal and abnormal activities of daily living. These assertions are drawn automatically by inference, which is explicitly specified by the semantic model and is therefore explicable. This is a key area for the application of the present teachings.

“Semantic Reasoning for Behavioral Activation: a Personal Chronic Illness Management System (PCIMS)”

“A personal chronic illness management system (PCIMS) is an example of semantic reasoning used for behavioral activation. The PCIMS was created to help patients and caregivers better manage chronic illnesses. PCIMS structures patient care, schedules activities and monitors them, assesses the quality of care, and tracks compliance with prescribed care plans.

“FIG. 34 shows the overall process of behavioral activation. It consists of three phases: 1) activities are planned and structured according to the situation; 2) activities are monitored and the data from them are analyzed; and 3) issues are identified and corrective actions are taken. These three phases are repeated continuously. FIG. 34 shows examples of the behavioral activation steps for chronic disease management. The three phases are complex and cannot be performed consistently by humans alone; automation through inferencing is necessary.

“Behavioral activation is automated in the PCIMS application using the semantic reasoning approach described in the present teachings. The first step in behavioral activation is to plan and schedule activities. PCIMS organizes activities including medication taking, meal taking, and interfacing with family members. PCIMS collects data about the patient's physiology and medication adherence, and monitors the patient's response to the structured activities through clinical follow-up. Monitoring includes inferences about the patient's adherence to the prescribed care plan and physiologic responses, recognition of areas that need care, and seeking help from those who can provide it. Inferencing helps to identify issues in self-care and to develop intervention strategies. The intervention strategies are scheduled as planned activities, and the behavioral activation process continues in a loop.

“In one embodiment, the PCIMS system includes the components shown in FIG. 35. The Care-Station and the Pager are interactive devices that allow patients and caregivers to send data to the inference engine and receive query results from the semantic knowledge base. The Care-Station interface receives inputs from and provides decision support for the patient. The Pager, a wearable device, transmits movement data to the inference engine and also pages the patient when needed. The Care-Portal receives inputs from and provides decision support for the patient's caregivers. The three components serve different purposes, but they send data to the same inference engine and query the same semantic knowledge base.

“In the PCIMS application, the semantic reasoning framework of the present invention converts low-level data, such as blood pressure measurements or medication adherence data, into recommended follow-up actions to better manage chronic disease. As shown in FIG. 36, the PCIMS system includes a semantic model that defines concepts related to chronic disease, the domain-independent inference engine of this invention, and a personal knowledge base for managing chronic illness that is created by the application of inference.

“The PCIMS application uses cascading inference: each inference leads to the next until a resolution is achieved; if it is not, the process halts while it waits for additional data. The semantic model, which includes the definitions of insight concepts, the relationships between concepts, and the associated operators, describes the inference process. FIG. 37-FIG. 37C show fragments of the PCIMS semantic model. Items such as a clinical appointment 11054, a family member 11052, a patient, a physician 11061, or a hospital discharge 11053 are examples of fact concepts. Items such as medication adherence 11059 with its components, the daily plan 11055 with its components, and the patient 11051 depict various types of insights. They represent knowledge about different patterns of patient behavior, including medication adherence and meal taking. Other insights can be derived from these patterns, such as how often the patient takes medication and what to do if the patient’s health and adherence are not good.
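Cascading inference, where each concept’s embedded operator either produces the next node, resolves, or halts to await data, can be illustrated with a small sketch. The `Node` class, operator table, and the blood-pressure example are assumptions for illustration only.

```python
# Minimal sketch of cascading inference: apply the operator embedded in
# each concept until resolution is achieved or an operator returns None
# (meaning the cascade must wait for additional data).

class Node:
    def __init__(self, concept, notes):
        self.concept = concept   # name of the semantic concept
        self.notes = notes       # attribute values ("notes") of the node

def run_cascade(node, operators):
    """Follow the chain of operators; return the trail of concepts
    visited and whether the cascade resolved or is waiting for data."""
    trail = [node.concept]
    while True:
        op = operators.get(node.concept)
        if op is None:                       # no operator: resolution reached
            return trail, "resolved"
        nxt = op(node)
        if nxt is None:                      # operator needs more data
            return trail, "waiting for data"
        node = nxt
        trail.append(node.concept)

# Illustrative operators: a blood-pressure fact cascades to an insight,
# which escalates to an alert only above an assumed threshold.
operators = {
    "BloodPressureData": lambda n: Node("HypertensionAnalysis", n.notes),
    "HypertensionAnalysis": lambda n: (
        Node("ClinicalAlert", n.notes) if n.notes["systolic"] > 160 else None),
}
```

A high reading cascades all the way to a `ClinicalAlert` node; a normal one stops partway and waits for further measurements.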

“The PCIMS semantic model in FIG. 37-FIG. 37C includes concepts such as ‘Patient’ 11051 and ‘Family Member’ 11052, connected by a relationship that describes which family member cares for the patient. Each concept can be defined by notes, such as name, gender, and age for the ‘Patient’ and ‘Family Member’ concepts. The semantic model defines not only concepts for chronic care patients but also a variety of concepts such as ‘Clinical Appointment’ 11054 and ‘Hospital Discharge’ 11053, which result in the specification of a ‘Care Guide’ 11057.”

The inference engine implements the creation of a ‘Daily Care Plan’ as the knowledge base is enriched with information about events such as hospital discharges and clinical appointments. The ‘Daily Care Plan’ 11055 prompts the inference engine to remind patients about daily care tasks, such as taking their medications (‘Medication Reminder’). The Care-Station interface allows the patient to report the actions taken after the care tasks have been completed. Reminders are tracked as the appropriate adherence nodes (e.g., ‘Medication Adherence’ 11059) are processed periodically and adherence patterns are learned. If the operator function for an adherence node determines that adherence is poor, adherence alerts are issued.
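The periodic processing of an adherence node can be sketched as a single operator function that reviews recent reminder outcomes and decides whether an alert is warranted. The 80% threshold and the return shape are assumptions, not values from the patent.

```python
# Hedged sketch of an adherence node's operator: learn the adherence
# rate from recent reminder outcomes (True = task reported done) and
# issue an alert when the rate falls below an assumed threshold.

def medication_adherence_operator(outcomes, threshold=0.8):
    """Compute the adherence pattern and decide whether to alert."""
    if not outcomes:
        return {"rate": None, "alert": False}   # no data yet: wait
    rate = sum(outcomes) / len(outcomes)
    return {"rate": rate, "alert": rate < threshold}
```

So three taken doses out of four recent reminders yields a 0.75 rate and, under the assumed 0.8 threshold, an adherence alert.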

“Facts, such as that a ‘Care Guide’ 11056 was created at a ‘Hospital Discharge’ 11053, are recorded as fact nodes. Insight concepts have associated functions that process the notes of the fact nodes to create insight nodes and their notes. ‘Process Care Plan Data’ 11101 is an example of such a function. FIG. 38 shows a particular function called ‘Generate Daily Care Plan’ 11105, which is part of the ‘Daily Care Plan’ concept of FIG. 37C. To create ‘Daily Care Plan’ 11055 instances, the inference engine runs trigger functions at the given clock time, which triggers downstream nodes. The same process applies to functions 11102-11108, all of which are associated with insight concepts within the PCIMS semantic model in FIG. 37-FIG. 37C.”

“FIG. 38 shows the four types of data that PCIMS stores and presents as facts to its inference engine: data that describes the patient’s treatment plan, physiologic data about the patient’s vital signs, self-reported data that captures the patient’s care-plan adherence behaviors, and movement data collected by the wearable Pager. The self-reported data are generated by the patient responding to the automatic reminders. The inference engine automatically generates insights from the data; examples include generating a daily care plan, tracking performance, and creating reminders when necessary. The insight concepts and the repertoire functions that must be invoked during the inference process are defined in the semantic model (shown in FIG. 37-FIG. 37C) and implemented by the inference engine of the present invention.”

“The PCIMS data that are presented as facts to the inference engine come from two sources: user inputs and sensors. FIG. 39-FIG. 42 describe the inference flow for processing the four types of data in the PCIMS.

“FIG. 39 elaborates logic from FIG. 38 for processing the care-plan data. This logic is represented in one or more of the repertoire functions, which combine multiple inputs into a patient’s daily care plan. A single daily plan contains a list of all tasks the patient must complete on a given day. During the creation of the daily care plan, multiple intermediate semantic concepts can be used. Intermediate concepts can represent intermediate knowledge, such as inconsistencies in care plans and requests to patients and caregivers to resolve them. The semantic model determines which intermediate concepts are defined when this knowledge must be used to provide decision support to the users of the application.
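The merging step above, combining several care-plan inputs into one daily plan while surfacing inconsistencies as intermediate knowledge, might look like the following sketch. The dictionary-based representation of sources and conflicts is an assumption for illustration.

```python
# Sketch of merging multiple care-plan inputs into a single daily plan.
# A task prescribed at conflicting times is not silently scheduled;
# it is flagged as an inconsistency for patients/caregivers to resolve.

def generate_daily_plan(sources):
    """Merge {task: time} dicts from several sources into one plan,
    collecting (task, first_time, conflicting_time) inconsistencies."""
    plan, conflicts = {}, []
    for source in sources:
        for task, time in source.items():
            if task in plan and plan[task] != time:
                conflicts.append((task, plan[task], time))
            else:
                plan[task] = time
    return plan, conflicts
```

Two sources that disagree on a medication time produce a conflict entry instead of an arbitrary schedule, mirroring the intermediate “inconsistency” concept described above.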

“FIG. 40 elaborates further logic from FIG. 38. These logic functions may be associated with one or several concepts and can be dynamically invoked and triggered by the inference engine when new insight nodes or links are created. Each data type in FIG. 40 is represented as an individual concept with its own notes and with repertoire functions customized to process those notes. The semantic model also includes distinct decision-support concepts for notifications to caregivers and for recommendations, such as calling the doctor, which are issued when the logic shown in FIG. 40 is satisfied.”

“FIG. 41 elaborates additional logic from FIG. 38. The logic can be embedded in one or several repertoire functions and associated with one or more of the concepts. These functions are dynamically invoked and updated by the inference engine as part of the creation of new insight nodes or links.

“FIG. 42 describes the logic of function 11104 of FIG. 38. This logic describes how the movement data are processed to infer daily activities and learn daily habits. The movement data may be processed further to identify patterns in walking, which can then be used to make inferences about falling. The logic for movement analysis can be contained in one or more repertoire functions associated with one or several concepts. These functions are dynamically invoked and linked by the inference engine during the creation of new insight nodes or links.

“The logic described in FIG. 39-FIG. 42 is implemented using functions associated with multiple concepts within the semantic model. The ‘Clinical Action Required’ logic is an example. Multiple concepts from the PCIMS semantic model are used to implement this logic: Fluid Retention Analysis, Hypoglycemia Analysis, Hyperglycemia Analysis, and Episodic Hypertension Analysis. The inference engine creates a single instance of each concept when a patient has been diagnosed with the corresponding disease. Each node is activated when the input link connecting it to its corresponding data is updated upon arrival of new data. The corresponding functions then analyze the situation for fluid retention, hypoglycemia, hyperglycemia, or episodic hypertension, and the analysis may indicate that clinical action is required.
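The dispatch described above, where new data activates only the analysis nodes instantiated for the patient’s diagnosed conditions, can be sketched simply. The mapping of data types to analyses is an assumption based on the conditions named in this section.

```python
# Illustrative sketch: one analysis node exists per diagnosed condition,
# and arriving data activates only the matching, instantiated nodes.

ANALYSES = {
    "blood_glucose": ["HypoglycemiaAnalysis", "HyperglycemiaAnalysis"],
    "weight": ["FluidRetentionAnalysis"],
    "blood_pressure": ["EpisodicHypertensionAnalysis"],
}

def activate_nodes(data_type, diagnosed):
    """Return the analysis nodes activated by new data of data_type,
    limited to conditions the patient was actually diagnosed with."""
    return [a for a in ANALYSES.get(data_type, []) if a in diagnosed]
```

A new glucose reading thus activates the hypoglycemia analysis only for a patient whose knowledge base contains that node.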

“FIG. 43 shows the logic of the Hypoglycemia Analysis. FIG. 44 shows its implementation using the inference mechanism described in the present teachings. New Blood Glucose Data activates the Hypoglycemia Analysis node. The operator function associated with the node checks whether the glucose level is below 75. If so, a Hypoglycemia Protocol node is created. This node implements the protocol of FIG. 43, which involves multiple sequential glucose measurements, a recommendation to take a glucose tablet, and checks on whether the patient feels dizzy or sweaty. Each action is represented in the semantic model by its own concept. The protocol node’s operator keeps track of how many of the protocol steps have been followed. If a ‘Call Doctor’ or ‘Call for Help’ state is reached, a Hypoglycemia Alert node is created, indicating whether a doctor or a friend should be called for assistance.
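The hypoglycemia flow above can be sketched directly: the 75 mg/dL threshold comes from this section, while the step names, the single recheck, and the escalation rule are simplifying assumptions rather than the full protocol of FIG. 43.

```python
# Sketch of the Hypoglycemia Analysis flow: glucose below 75 opens a
# protocol node whose operator tracks the steps taken and escalates to
# a 'call doctor' state (the Hypoglycemia Alert) if glucose stays low.

def hypoglycemia_analysis(glucose):
    """Operator for the analysis node: create a protocol node below 75."""
    if glucose < 75:
        return {"steps_done": [], "state": "open"}
    return None    # no protocol needed

def protocol_step(protocol, step, recheck_glucose=None):
    """Record one protocol step (e.g. 'glucose tablet', 'recheck',
    'symptom check') and escalate if a recheck is still below 75."""
    protocol["steps_done"].append(step)
    if step == "recheck":
        if recheck_glucose < 75:
            protocol["state"] = "call doctor"   # creates Hypoglycemia Alert
        else:
            protocol["state"] = "resolved"
    return protocol
```

A reading of 68 opens the protocol; if a recheck after the glucose tablet is still below 75, the operator reaches the ‘call doctor’ state that creates the alert node.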

“The nodes and links created by the inference engine are stored in the semantic knowledge base. They can be queried by the three interfaces to the PCIMS shown in FIG. 35: the Care-Station, the Pager, and the Care-Portal. Each interface queries the PCIMS semantic knowledge base using the query meta-language illustrated in FIG. 19. FIG. 45-FIG. 63 illustrate how the various user-interface objects display the answers to queries to the user.”

“FIG. 45 shows the main interface of the Care-Station. All interfaces can be controlled by touch. The center of the interface is a window that displays the user’s photos. The left panel displays the current date and a clock. The interface also includes buttons that allow users to perform specific actions. The left panel contains a button that gives access to different options for communicating with family members and caregivers; this button also communicates the rewards earned by users for completing tasks successfully.

“The bottom panel of FIG. 45 displays a sequence of buttons that correspond to the tasks for each day. Icons signify the tasks: a medicine bottle for taking medication, food plates for eating meals, and devices for monitoring vitals. Below each task icon is an oval that indicates the time at which the task is to be completed. It can be colored in a variety of colors, including red for due and green for completed. The inference engine dynamically calculates the daily tasks and stores them in the semantic knowledge base as reminder concepts. The Care-Station interface queries the semantic knowledge base periodically to determine the status of upcoming tasks and updates the display accordingly. Tasks that have been completed are marked ‘Done’. The right panel shows unscheduled tasks that require attention. The interface thus depicts both scheduled and unscheduled tasks; each task is a reminder from the PCIMS semantic model shown in FIG. 37-FIG. 37C.”
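The periodic polling and coloring described above can be reduced to a small status-mapping sketch. The red/green scheme comes from this section; the third color for not-yet-due tasks is an assumption.

```python
# Sketch of how the Care-Station might color a task oval each time it
# polls the knowledge base: green for completed, red for due, and an
# assumed neutral color for tasks not yet due.

def task_color(task, now):
    """Map a reminder node's status to the oval color on the interface."""
    if task["done"]:
        return "green"
    if task["due_at"] <= now:
        return "red"
    return "gray"
```

Each poll simply re-evaluates this mapping for every reminder node returned by the knowledge-base query, so the display stays consistent with the inferred task state.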

“FIG. 46 illustrates what happens when the user touches a medication icon: a panel showing all medications that must be taken is displayed. Touching a medication’s image displays information about that medication. The panel also allows users to indicate whether or not they have taken the medication. Each user input is sent directly to the inference engine and processed according to the logic in the semantic model.

“FIG. 47 shows the panel that appears after the user touches the icon for a blood pressure device. This panel allows users to enter blood pressure readings manually. The blood pressure reading can be taken automatically if the device is connected directly to the PCIMS software. The inference engine receives the blood pressure readings and analyzes them. Additional tasks are shown in the right panel of the main interface of FIG. 45.”

“FIG. 48 is the panel that appears after the blood glucose device icon has been touched. The inference engine receives the manually entered data and triggers the hypoglycemia or hyperglycemia analysis, following the protocol described in FIG. 43 and FIG. 44.”
