Invented by Tu Truong, Fuming Wu, Julio Navas, Ajain Kuzhimattathil, Hanxiang Chen, Nazanin Zaker Habibabadi, Omar Rahman, Han Li, SAP SE

The market for machine learning models for evaluating entities in a high-volume computer network is growing rapidly as organizations seek to strengthen their cybersecurity measures. With the increasing number of cyber threats and attacks, it has become crucial for businesses to have robust systems in place to identify and evaluate entities within their computer networks. Machine learning models offer an efficient and effective solution to this problem.

In a high-volume computer network, numerous entities such as users, devices, and applications constantly interact with each other. Traditional methods of evaluating these entities often fall short due to the sheer volume of data and the complexity of the network. This is where machine learning models come into play. Machine learning models are designed to analyze vast amounts of data and identify patterns or anomalies that may indicate potential threats or vulnerabilities. By training these models on historical data, they can learn to recognize normal behavior and flag any deviations from it. This enables organizations to detect and respond to potential threats in real time, significantly reducing the risk of a successful cyber attack.

The market for machine learning models for evaluating entities in high-volume computer networks is driven by several factors. First, the increasing number and sophistication of cyber threats have made it imperative for organizations to adopt advanced technologies to protect their networks. Machine learning models offer a proactive approach to cybersecurity by continuously monitoring network activity and identifying potential threats before they can cause damage. Second, advances in machine learning algorithms and computing power have made it easier to develop and deploy these models.
With the availability of cloud computing and big data technologies, organizations can now process and analyze large volumes of data in real time, enabling them to make faster and more accurate decisions. The market is also fueled by the growing adoption of Internet of Things (IoT) devices. As more devices connect to computer networks, the complexity and volume of network traffic increase exponentially. Machine learning models can effectively handle this high volume of data and provide real-time insights into the behavior of these devices, enabling organizations to identify potential security risks.

The market for machine learning models for evaluating entities in a high-volume computer network is highly competitive, with numerous vendors offering a wide range of solutions. These solutions vary in their algorithms, data sources, and integration capabilities, so organizations need to carefully evaluate their requirements and choose a model that best fits their specific needs.

In conclusion, this market is witnessing significant growth due to the increasing need for robust cybersecurity measures. Machine learning models give organizations the ability to analyze vast amounts of data in real time, identify potential threats, and respond proactively. As the complexity and volume of network traffic continue to increase, demand for machine learning models in this market is expected to rise further.

The SAP SE invention works as follows.

In an example, machine learning algorithms are used to train an entity risk evaluation model that generates an entity risk score based on data from a computer network. The risk scores of various entities can be stored in a database and then retrieved and displayed when a user interacts with reports involving the corresponding entities.
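The store-then-retrieve flow described above can be sketched with a small persistence layer. The patent does not specify a schema or storage engine, so the table name, columns, and SQLite choice below are all illustrative assumptions.

```python
import sqlite3

# Hypothetical schema: the patent does not name tables or columns,
# so "entity_risk_scores" and its fields are assumptions for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE entity_risk_scores (entity_id TEXT PRIMARY KEY, risk_score REAL)"
)

def store_score(entity_id: str, risk_score: float) -> None:
    """Persist an entity risk score so reports can retrieve it later."""
    conn.execute(
        "INSERT OR REPLACE INTO entity_risk_scores VALUES (?, ?)",
        (entity_id, risk_score),
    )

def fetch_score(entity_id: str):
    """Look up the stored risk score when a report references this entity."""
    row = conn.execute(
        "SELECT risk_score FROM entity_risk_scores WHERE entity_id = ?",
        (entity_id,),
    ).fetchone()
    return row[0] if row else None

store_score("supplier-42", 0.87)
print(fetch_score("supplier-42"))  # 0.87
```

Decoupling score computation from score display in this way is what lets the dashboard show a score instantly instead of re-running the model for every report view.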

Background for Machine Learning Models for Evaluating Entities in a High-Volume Computer Network

Certain types of computer networks handle high volumes of both transactions and the entities involved in those transactions. Evaluating entities within such a large business network is challenging because of the technical difficulty of analyzing so many entities and transactions. When the entities are buyers (recipients of goods and services) and suppliers, it is difficult to evaluate suppliers to determine whether they will be a good match for a particular buyer, and equally difficult to evaluate buyers to determine whether they will be a good match for a particular supplier. Traditional evaluation techniques, such as those used in the financial industry, fail to assess partner strengths, weaknesses, and risk when applied to computer networks with high transaction volumes; they simply do not scale to a large business network.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying figures, which illustrate the present disclosure by way of example and not limitation, include:

FIG. 1 is a block diagram illustrating a high-volume computer network, in accordance with an example embodiment.

The following description includes example systems, methods, instruction sequences, and computer program products that embody illustrative embodiments. In the following description, numerous specific details are set forth to provide an understanding of the various embodiments. It will be evident, however, to those skilled in the art that embodiments of the inventive subject matter can be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not shown in detail.

In an example embodiment, advanced machine learning and deep learning techniques are applied to high-volume transaction data to evaluate the entities involved in the transactions. The models trained using these techniques produce a score, known as an entity risk score, for each entity. The entity risk score is a number that represents the financial and business sustainability of an entity.

In an example embodiment, Key Performance Indicators (KPIs) are generated based on high-volume transaction data within a computer network. The generated KPIs are then used to create an entity risk score for one or more entities involved in the transaction data. Machine learning training functions may be called periodically, in batch mode, while inference functions (predictions) can be called in real time. In some embodiments, the high-volume computer network is a business network, such as an e-commerce network.

FIG. 1 is a block diagram illustrating a high-volume computer network 100, in accordance with an example embodiment. The high-volume computer network 100 comprises different heterogeneous hardware and/or software components. A supplier user 102 can access functionality in a business network 104 through a dashboard 106. The business network 104 may include an application layer 108, database persistence functionality 110, and extract, transform, and load (ETL) functionality 112, among other components. The application layer 108 delivers business network functionality to the supplier user 102 via the dashboard 106. The application layer 108 can also provide business network functionality to other users, such as buyer users and third-party users, through other dashboards (not pictured). This business network functionality can include, for instance, functionality related to the procurement of goods or services from one entity (a supplier) by another entity (a buyer). The database persistence functionality 110 stores transaction data, which can be retrieved using the ETL functionality 112.

A machine learning model architecture 114 may include a machine learning model metric service 116, which maintains a number of application program interfaces (APIs) to provide entity risk scores and related information. These APIs can be managed by an API management component 118. The machine learning model metric service 116 can be invoked by the application layer 108 over, for example, Hypertext Transfer Protocol Secure (HTTPS).

In certain example embodiments, the machine learning model metric service 116 can provide a stateless connectivity mechanism according to the Representational State Transfer (REST) architectural paradigm. The ETL functionality 112 transfers data from the business network 104 to the machine learning model architecture 114 periodically (e.g., monthly or quarterly) for training. This training results in a machine learning model 122, which is trained using a dataset 124 extracted from the database persistence functionality 110 (e.g., via the ETL functionality 112).
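The periodic batch training step can be sketched as fitting a model on the ETL-extracted dataset. The patent does not disclose the model family, so the tiny logistic regression below, the feature choices, and the labels are all assumptions used for illustration.

```python
import math

# Toy stand-in for the periodic batch training step: a minimal logistic
# regression fit by gradient descent on labeled (features, risky) pairs.
def train_risk_model(dataset, epochs=500, lr=0.1):
    """Fit weights on (feature_vector, risky_label) pairs via gradient descent."""
    n = len(dataset[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in dataset:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted risk probability
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk_score(model, x):
    """Apply the trained model to an entity's feature vector."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-entity features: [on_time_rate, normalized order value].
dataset = [
    ([0.95, 0.8], 0),  # reliable supplier -> low risk
    ([0.90, 0.7], 0),
    ([0.30, 0.2], 1),  # unreliable supplier -> high risk
    ([0.25, 0.3], 1),
]
model = train_risk_model(dataset)
```

Running this monthly or quarterly on a freshly extracted dataset, as the ETL schedule suggests, keeps the deployed model in step with recent network behavior without putting training load on the real-time path.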

A modeling runtime 120 can generate one or more features 126 based on the transaction data. The KPIs described above may be an example of the features 126. The one or more features 126 may be used to train the machine learning model, as described later.

When the application layer 108 calls an API, through the API management component 118, to the machine learning model metric service 116 to obtain a risk score for a particular entity, non-training transaction data is fed into the machine learning model 122 to obtain an entity risk score. This is a real-time process and can be called prediction or inference.
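The request/response exchange between the application layer and the metric service can be sketched as below. The patent describes an HTTPS-invoked service but does not document its API, so the JSON field names, the feature weights, and the scoring rule here are all illustrative assumptions standing in for the trained model 122.

```python
import json

def score_entity(features):
    """Stand-in for the trained model: weighted sum clamped to [0, 1].
    The weights are hypothetical, not the patent's actual model."""
    weights = {"on_time_rate": -0.6, "dispute_rate": 0.9}
    raw = sum(weights[k] * v for k, v in features.items())
    return max(0.0, min(1.0, 0.5 + raw))

def handle_risk_score_request(request_body: str) -> str:
    """What the metric service might do when the application layer calls it:
    parse the request, run inference, and return the score as JSON."""
    req = json.loads(request_body)
    score = score_entity(req["features"])
    return json.dumps({"entityId": req["entityId"], "riskScore": round(score, 3)})

resp = handle_risk_score_request(
    json.dumps({"entityId": "supplier-42",
                "features": {"on_time_rate": 0.9, "dispute_rate": 0.1}})
)
print(resp)
```

Because each call carries everything the service needs in the request body, the exchange is stateless, which is what makes the REST-style connectivity mechanism described above possible.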
