Invented by Amit Arora, Archana Gharpuray, John Kenyon, Hughes Network Systems LLC

The market for machine learning models for adjusting communication parameters is growing rapidly as businesses and organizations seek to optimize their communication systems. With the increasing reliance on technology and the need for efficient and effective communication, machine learning models are becoming essential tools for tuning communication parameters to meet specific requirements. Machine learning models are algorithms that learn from data and make predictions or decisions without being explicitly programmed. In the context of adjusting communication parameters, these models can analyze factors such as network conditions, user preferences, and real-time data to optimize communication settings.

One of the key areas where machine learning models are being used is telecommunications. Telecommunication companies are constantly striving to provide better network coverage and quality to their customers. By using machine learning models, they can analyze network data such as signal strength, bandwidth availability, and congestion levels to adjust communication parameters in real time. This enables them to optimize network performance and provide a seamless communication experience for users.

Another industry that benefits from machine learning models for adjusting communication parameters is the Internet of Things (IoT). IoT devices such as smart home appliances, wearables, and industrial sensors rely on efficient communication to transmit data and receive instructions. Machine learning models can analyze the data generated by these devices and adjust communication parameters to ensure reliable and timely communication. In a smart home, for example, machine learning models can optimize Wi-Fi signal strength and bandwidth allocation to ensure smooth communication between devices.

Machine learning models are also being used in voice and speech recognition. As voice assistants and speech recognition technologies become more prevalent, accurate and reliable speech communication is paramount. Machine learning models can analyze speech patterns, background noise, and other factors to adjust communication parameters such as microphone sensitivity, noise cancellation, and voice recognition algorithms. This ensures that voice commands are accurately captured and processed, leading to a better user experience.

The market for machine learning models for adjusting communication parameters is expected to grow significantly in the coming years. According to a report by MarketsandMarkets, the global machine learning market is projected to reach $8.81 billion by 2022, at a compound annual growth rate of 44.1%. This growth can be attributed to the increasing demand for efficient communication systems, advances in machine learning algorithms, and the proliferation of IoT devices.

There are, however, challenges to address in this market. Privacy and security are major considerations when implementing machine learning models for adjusting communication parameters. Because these models analyze large amounts of data, including personal and sensitive information, it is crucial to ensure that proper data protection measures are in place.

In conclusion, the market for machine learning models for adjusting communication parameters is expanding rapidly as businesses and organizations recognize the importance of optimizing communication systems. With the ability to analyze real-time data and make intelligent decisions, these models are changing the way communication parameters are adjusted. As the demand for efficient and effective communication grows, the market for machine learning models in this field is expected to flourish, offering new opportunities for businesses and improving the overall communication experience for users.

The Hughes Network Systems LLC invention works as follows

Methods and systems for learning communication parameters, including computer programs stored on computer storage media. In some implementations, data is collected for each satellite terminal in a group. The data is used to train a machine-learning model. The model can receive an indication of a geographic location and predict the satellite beams capable of providing a minimum level of communication efficiency at that location. After the machine-learning model is trained, a prediction of a satellite beam is generated for a particular geographic location. The predicted satellite beam is used to determine whether the terminal's current satellite beam should be changed.

Background for Machine Learning Models for Adjusting Communication Parameters

Communication systems can use various settings and configurations in order to exchange information appropriately. These settings include the selection of frequency channels, data-coding techniques, and other options. Communication satellites can be used, for example, to transmit data from terminals to the Internet or to communicate data in other ways. Satellite communication systems have a variety of parameters that must be set in order to create an effective communication link, and those parameters can vary from terminal to terminal or over time.

In some implementations, a communication system can use information about the devices it serves to train machine-learning models that predict appropriate communication parameters for those devices. A computer system, for example, can calculate the communication efficiency of each device served by the network and determine the current communication parameters of each device. The computer system can use this information to train a machine-learning model that predicts which communication parameters would be most effective for each device. The outputs of the machine-learning model can then be used to evaluate the current parameters and identify parameters that would produce more efficient communication.
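To make the idea concrete, the following is a minimal sketch, assuming scikit-learn and made-up feature names (signal strength, congestion level, a numeric parameter-setting identifier): a regression model learns how parameter choices relate to observed efficiency, and its predictions are then used to score alternative settings for a device. The patent does not specify this particular model or feature set.

```python
# Hypothetical sketch: learn how communication parameters relate to observed
# efficiency, then score candidate parameter settings for a device.
# Feature names and the scikit-learn model choice are illustrative assumptions.
from sklearn.ensemble import GradientBoostingRegressor

# Each row: (signal_strength_dBm, congestion_level, parameter_setting_id)
X = [[-60.0, 0.2, 0], [-72.5, 0.6, 1], [-65.0, 0.4, 2], [-80.0, 0.7, 1]]
y = [2.25, 1.0, 1.5, 0.75]  # observed efficiency, e.g. data bits per symbol

model = GradientBoostingRegressor().fit(X, y)

def best_setting(signal_strength, congestion, candidate_settings):
    """Return the candidate setting with the highest predicted efficiency."""
    return max(candidate_settings,
               key=lambda s: model.predict([[signal_strength, congestion, s]])[0])

print(best_setting(-70.0, 0.5, candidate_settings=[0, 1, 2]))
```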

In satellite communication systems, the assigned spot beam is one of the parameters that can influence a terminal's performance. Satellites can provide different spot beams, each covering a specific geographic area, and satellite systems can cover a wide geographic region with a series of partially overlapping beams. When a terminal's installation is complete, it is usually assigned a particular spot beam. The best beam choice may not be obvious for terminals near the edge of a beam's coverage, or where multiple beams overlap. As a result, the initial beam assignment for some terminals may not be ideal or correct for their geographic location, leaving them with a weaker signal or requiring them to use low-efficiency modulation or coding.

The techniques described in this document use machine learning models to identify which spot beam works best for each terminal. They allow a computer to identify terminals that can be reassigned to a new spot beam in order to improve communication efficiency. The same techniques can be used to identify other communication parameters that would improve communication, whether in a satellite communication system or in another type of communication system.

For instance, a computer system can obtain information about satellite terminals within a geographic area. This information can be used to determine the efficiency of each terminal, for example, the modulation or coding used for its satellite connection. In some cases, the measure of efficiency may be the number of data bits per symbol. The computer system can use this information to identify the terminals with the highest efficiency, for example, a group of terminals with efficiency at or above a threshold. Data describing the identified terminals can be used as training examples of beam assignments that are considered correct. The computer system uses these examples to train a machine-learning model that can estimate or predict appropriate beam assignments at different locations. The computer system then uses the trained machine-learning model to predict beams for terminals with low efficiency. Predictions are evaluated using other factors, such as the load on different beams and the distance between a terminal and the center of a predicted beam, to determine whether a terminal should have its beam assignment changed. The computer system can then suggest a change in beam assignment to improve communication efficiency and, in some cases, even initiate the change.
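This workflow can be sketched end to end. The following is an illustration only, assuming scikit-learn, a haversine distance helper, and made-up record fields and thresholds; it is not the patent's implementation.

```python
# Illustrative sketch of the workflow described above: train on high-efficiency
# terminals, predict beams for low-efficiency terminals, and screen the
# predictions by beam load and distance to the predicted beam's center.
# Record fields, thresholds, and the model choice are assumptions.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Terminal:
    terminal_id: str
    lat: float
    lon: float
    current_beam: str
    bits_per_symbol: float  # efficiency measure

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def suggest_reassignments(terminals, beam_centers, beam_load,
                          efficiency_threshold=2.0, max_center_km=300.0, max_load=0.8):
    # 1. Treat high-efficiency terminals as examples of correct beam assignments.
    trusted = [t for t in terminals if t.bits_per_symbol >= efficiency_threshold]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit([[t.lat, t.lon] for t in trusted], [t.current_beam for t in trusted])

    # 2. Predict a beam for each low-efficiency terminal from its location.
    suggestions = []
    for t in terminals:
        if t.bits_per_symbol >= efficiency_threshold:
            continue
        predicted = model.predict([[t.lat, t.lon]])[0]
        if predicted == t.current_beam:
            continue  # the model agrees with the current assignment

        # 3. Screen the prediction on beam load and distance to the beam center.
        center_lat, center_lon = beam_centers[predicted]
        close_enough = haversine_km(t.lat, t.lon, center_lat, center_lon) <= max_center_km
        if close_enough and beam_load.get(predicted, 0.0) <= max_load:
            suggestions.append((t.terminal_id, t.current_beam, predicted))
    return suggestions
```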

In one general aspect, the techniques described herein provide ways of using machine-learning models to adjust communication parameters. A method performed by a computer system can include, for example: obtaining data for each terminal in a group of multiple satellite terminals, the data indicating the satellite beam currently used by each terminal and a measure of the efficiency of communication achieved by the terminal using that beam; and identifying a predicted beam using the machine-learning model.

Implementations can include one or more of the following features. In some implementations, for example, the machine-learning model is trained to select, from among multiple satellite beams, the beam that provides the highest efficiency of communication for the geographic location indicated to the machine-learning model.

In some implementations, a terminal’s efficiency is measured by the modulation or coding it uses.

In some implementations, the efficiency measure of a terminal is the number of data bits transferred per symbol using the satellite beam currently assigned to the terminal.
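As a worked example of this measure, the spectral efficiency implied by a modulation and coding combination is the number of bits per modulation symbol multiplied by the code rate. The MODCOD values below are common DVB-S2-style examples used only for illustration, not values taken from the patent.

```python
# Illustrative only: data bits per symbol implied by a few modulation and
# coding (MODCOD) combinations. The listed values are common DVB-S2-style
# examples, not figures from the patent.
BITS_PER_MOD_SYMBOL = {"QPSK": 2, "8PSK": 3, "16APSK": 4, "32APSK": 5}

def bits_per_symbol(modulation: str, code_rate: float) -> float:
    """Data bits carried per transmitted symbol for a given MODCOD."""
    return BITS_PER_MOD_SYMBOL[modulation] * code_rate

print(bits_per_symbol("8PSK", 3 / 4))   # 2.25 data bits per symbol
print(bits_per_symbol("QPSK", 1 / 2))   # 1.0 data bits per symbol
```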

In some implementations, the machine-learning model is trained to predict, from among multiple beams provided by a single satellite, which satellite beam to use for a terminal.

In some implementations, the machine-learning model is trained to select, from among beams provided by multiple satellites, a beam that is a good match for a terminal.

In some implementations, deciding to change the satellite beam of a terminal located at a particular geographic location includes deciding to assign the predicted beam to that terminal. The method then involves initiating the change of the terminal's satellite beam from its current satellite beam to the predicted satellite beam.

In some implementations, training the machine-learning model involves identifying the subset of the multiple satellite terminals whose efficiency measures meet a threshold and using data from that subset as examples to train the model. After the machine-learning model is trained, the method involves using the trained model to predict a satellite beam for each satellite terminal whose efficiency measure is below the threshold.

The method may include selecting, from among the satellite terminals with efficiency measures below the threshold, a set of candidate terminals for which to change the satellite beam. These candidates are selected because their predicted beams differ from their current beams.

In some implementations, the method involves generating map data indicating the locations of the candidate terminals and providing the map data for display.
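One possible way to render such map data is sketched below, assuming matplotlib and a simple list of latitude/longitude pairs for the candidate terminals; the patent does not prescribe any particular map format or plotting library.

```python
# Hypothetical rendering of the candidate-terminal map data using matplotlib.
import matplotlib.pyplot as plt

def plot_candidates(candidate_locations):
    """Scatter-plot the terminals identified as beam-reassignment candidates."""
    lats = [lat for lat, lon in candidate_locations]
    lons = [lon for lat, lon in candidate_locations]
    plt.scatter(lons, lats, marker="^", label="Reassignment candidates")
    plt.xlabel("Longitude")
    plt.ylabel("Latitude")
    plt.title("Terminals whose predicted beam differs from their current beam")
    plt.legend()
    plt.show()

plot_candidates([(39.0, -77.4), (38.9, -77.6)])  # illustrative coordinates
```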

In some implementations, the machine-learning model is a first machine-learning model, and the method also includes training a second machine-learning model based on the data obtained for the multiple terminals. The outputs of the first and second machine-learning models are used together to generate the indication of the satellite beam predicted for a particular location.
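How the two models' outputs are combined is not spelled out here; one simple reading, shown below purely as an assumption, is to accept a predicted beam only when both models agree. The code assumes scikit-learn-style classifiers with a predict method.

```python
# Hypothetical combination rule for two trained models (an assumption, not the
# patent's method): accept a predicted beam only when both models agree.
def combined_prediction(model_a, model_b, lat, lon):
    """Return the agreed-upon beam, or None if the two models disagree."""
    beam_a = model_a.predict([[lat, lon]])[0]
    beam_b = model_b.predict([[lat, lon]])[0]
    return beam_a if beam_a == beam_b else None
```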

In some implementations, the machine-learning model includes a neural network, a maximum-entropy classifier, a decision tree (for example, an XGBoost tree), a random forest classifier, a support vector machine, or a logistic regression model.
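For reference, these model families map to widely used library classes as shown below. The patent does not tie the implementation to these libraries; the classes (from scikit-learn and the separate xgboost package) are conventional choices used for illustration.

```python
# The model families named above, mapped to common library classes.
# These are conventional choices, not implementations specified by the patent.
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier  # gradient-boosted decision trees (separate package)

CANDIDATE_MODELS = {
    "neural_network": MLPClassifier(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "xgboost_tree": XGBClassifier(),
    "random_forest": RandomForestClassifier(),
    "support_vector_machine": SVC(),
    # Multinomial logistic regression is equivalent to a maximum-entropy classifier.
    "logistic_regression": LogisticRegression(max_iter=1000),
}
```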

In some implementations, efficiency measures for terminals are based on the compression ratio or the end-to-end response time of the terminal.

In some implementations, training the machine-learning model involves training it to predict the satellite beam for a terminal using only the terminal's geographic location as input. The prediction of the satellite beam for a particular location is then generated by providing that location to the trained machine-learning model.

In some implementations, determining whether to change a terminal's satellite beam at a particular location involves: comparing an identifier of the predicted beam with an identifier of the terminal's current beam; determining, based on the comparison, that the predicted beam is different from the current beam; determining that the location is within a threshold distance of the center of the predicted beam; and then deciding, based on these determinations, to change the terminal's satellite beam.
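That decision rule can be written directly as a small function. The beam identifiers, the precomputed distance to the predicted beam's center, and the threshold value below are assumptions for illustration.

```python
# Direct rendering of the decision steps described above (values are assumptions).
def should_change_beam(current_beam_id, predicted_beam_id,
                       distance_to_predicted_center_km, max_center_km=300.0):
    """Decide whether to change a terminal's satellite beam."""
    if predicted_beam_id == current_beam_id:
        return False  # the prediction matches the current assignment
    # Only change when the terminal lies close enough to the predicted beam's center.
    return distance_to_predicted_center_km <= max_center_km

print(should_change_beam("beam_7", "beam_9", distance_to_predicted_center_km=120.0))  # True
```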

In some implementations, the method may include training different machine-learning models for different sets of satellite beams, where the machine-learning model for a given beam set is trained using data about terminals that are currently assigned to one of the beams in that set.
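A sketch of that per-beam-set training loop follows, assuming scikit-learn, simple dictionary records for terminals, and a mapping from beam-set names (for example, satellite identifiers) to the beams they contain; all of these structures are illustrative.

```python
# Hypothetical per-beam-set training: one model per set of satellite beams,
# each trained only on terminals currently assigned to a beam in that set.
from sklearn.ensemble import RandomForestClassifier

def train_per_beam_set(terminals, beam_sets):
    """terminals: dicts with 'lat', 'lon', 'beam'; beam_sets: set name -> beam ids."""
    models = {}
    for set_name, beams in beam_sets.items():
        members = [t for t in terminals if t["beam"] in beams]
        X = [[t["lat"], t["lon"]] for t in members]
        y = [t["beam"] for t in members]
        models[set_name] = RandomForestClassifier(random_state=0).fit(X, y)
    return models
```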
