Artificial Intelligence – Adam Estrada, Kevin Green, Andrew Jenkins, Maxar Intelligence Inc

Abstract for “System for simplifying the generation of systems for wide area geospatial object identification”

“A system to simplify the generation of systems for satellite image analysis in order to locate one or more objects of particular interest. Automated generation of models is possible when multiple training images, labeled with the name of the object under study or with objects bearing non-relevant features, are available. The model is used to parameterize pre-programmed machine-learning elements, which are trained on images of the object to produce an object recognition filter. This filter is used to identify the object under study in unanalyzed images, and the system reports the results in the format requested by the requestor.”

Background for “System for simplifying the generation of systems for wide area geospatial object identification”

“Field of the Art.”

“The invention is in the field of image analysis and, more specifically, in the use of deep learning computer vision systems for automated object identification from geospatial imagery.”

“Discussion of the State of the Art”

“Image analysis has been a key technology field since World War II, when image analysis, photogrammetry, and related technologies were used extensively with aerial photography for intelligence gathering and bombing damage assessment. The need for highly trained, specialized image analysts or interpreters has limited the scope of image analysis, especially for the purpose of locating or identifying targets of interest. Because such skills are scarce and expensive, image analysis has been restricted to a limited number of applications (e.g., military, homeland defense, and law enforcement).

The high cost of the images required to perform image analysis has also historically limited the market. The military has found image analysis valuable enough that reconnaissance flights have been conducted over areas of interest since World War II, but the high cost of these flights made commercial image analysis virtually impossible. Conditions changed in the 1970s as low-resolution satellite images became more widely available, and as satellite resolution, spectral coverage, and geographic coverage have improved, so has the market for commercial remote sensing imagery. Unfortunately, this market is still limited by the scarcity of skilled image analysis talent.

“Search and locate is a common geospatial image analysis task, requiring that one or more targets be identified and precisely located. The discovery and location of warships, tanks, or other military targets is a well-known example of search and locate. Focused geospatial analysis of geographically specific data has also been used in search and rescue operations for downed aircraft or missing shipping. These efforts have been constrained by the need for image analysts, which greatly limits the scope of any search. A faster method of locating targets of interest would enable the pursuit of urgent but currently impractical applications, including assessing the extent of a refugee crisis by counting tents in an area, analyzing changes in infrastructure in developing countries, assessing the numbers of endangered species, and finding military hardware in areas that could not previously be monitored. It could also allow identification of camps or airstrips from which criminal or terrorist activity might be operating. The ability to apply search-and-locate tasks to large geographical areas, and to execute them efficiently over time, would further allow geospatial imagery to be used to map remote areas, track deforestation or reforestation, and detect natural disasters in remote parts of the globe.

Computer vision, the ability of computers to identify objects using reliable methods, has been a topic of active research in computer science since the 1960s. Until recent years, this pursuit succeeded only where both the object of particular interest and the background against which it was viewed were tightly controlled. The barriers to progress in computer object identification have been both technological and logical. Computer visual processing, like its biological counterpart, requires considerable computing power and large amounts of memory. Over roughly the past 15 years these limitations have eased: the ability to pack more transistors into a smaller volume at lower cost has made possible specialized components such as graphics processing units, which are optimized for the calculations performed during manipulation of visual data. Rapid, even real-time, object identification is now possible with current hardware.

Computer scientists working in this field have also made marked progress in programming computers to analyze objects of particular interest. One of the earliest methods was to break each object of interest down into a unique group of simple geometric shapes, or to use unique shading patterns to identify new instances. These early attempts were highly sensitive to variables such as lighting, placement of the object within the sample field, and orientation; sometimes the object of particular interest could not even be re-identified in the original image. Advances in computer science, theories of biological vision, and computer vision theory have since made it possible to train computers to reliably identify specific objects of particular interest.

A popular combination is a convolutional neural network with deep learning, which trains the system to recognize objects of interest when presented against different backgrounds and in different orientations. Convolutional neural networks consist of multiple layers of filters interconnected by partial, local fields between layers, which allows object recognition to be learned with minimal pre-suppositions: the convolutional neural network itself determines which filters to use to identify the target object. Deep learning begins with a period of supervised learning, using a small set of training images in each of which an example of the object to be identified, such as a human face, is clearly delineated, or labeled. A period of unsupervised learning follows, using a large number of unlabeled images, a small percentage of which do not contain the object to be identified. The accuracy of the overall system, including the recall and precision of the classification results, is directly proportional to the number of training images used, as is the time the convolutional neural network deep learning model spends searching for and finding objects of interest. The convolutional neural network deep learning method has produced computer systems that can be relied upon for human facial recognition, optical character recognition, and the identification of complex parts during manufacturing. The method is so useful for object identification that multiple programming libraries are now available for download, including the Caffe library from the Berkeley Vision and Learning Center, the Torch7 libraries (nagadomi), and cuda-convnet2 (Alex Krizhevsky).
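To make the convolutional approach concrete, the following minimal sketch defines a small convolutional neural network for binary object recognition in PyTorch. The layer sizes, the two-class output, and the 64×64 grayscale input are illustrative assumptions, not details taken from the patent or the cited libraries.

```python
import torch
import torch.nn as nn

class ObjectClassifier(nn.Module):
    """Minimal CNN sketch: stacked learned filters, then a small classifier head."""
    def __init__(self):
        super().__init__()
        # Convolutional layers learn which filters identify the target object,
        # rather than having the filters hand-designed (hypothetical sizes).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # object present / absent
        )

    def forward(self, x):
        return self.head(self.features(x))

model = ObjectClassifier()
dummy = torch.zeros(1, 1, 64, 64)   # one grayscale training chip
print(model(dummy).shape)           # torch.Size([1, 2])
```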
The convolutional neural network deep learning model has recently been applied to object identification and classification in orthorectified geospatial images (U.S. Pat. No. 9,589,210 to Estrada, A. et al.). Although that automated geospatial object classification system is a major advancement, it still requires each system to be created from scratch, which reduces efficiency and increases cost.

“What is needed in the art is an automated system that generates synthetic images to supplement the real images required to train an automated system to identify and determine the exact location of various objects of interest in geospatial imagery.”

“The inventor has developed an engine to simplify the generation of systems that analyze satellite images in order to geolocate targets of interest or to identify objects or their types.”

According to one aspect, a preexisting framework of modular, reusable components may be used to select unique features of an object of interest from a plurality of orthorectified training images in which the object is clearly delineated, and from a second plurality of orthorectified training images in which objects bearing features that are not to be included in the generated feature model are clearly labeled. The generated feature model may then serve as a seed for a second framework containing at least one pre-programmed machine learning object classifier element, each pre-programmed to execute one machine learning protocol. The second framework uses the feature model as one parameter and accepts large numbers of geospatial images containing the object of interest, either delineated or unlabeled, to train the system to identify the object reliably. Geospatial images that do not contain the object of interest may also be submitted to test classification specificity. The resulting object-of-interest-trained machine learning classifier module may then be accepted by a third framework that identifies instances of the object in production geospatial imagery. Campaign authors receive the classification results of the search-and-locate campaign in a predetermined format of their specification. This arrangement greatly reduces the time and programming knowledge required to run a search-and-locate campaign for one or several specific search objects.

In one aspect, a system for simplifying the generation of broad-area geospatial object detection systems is described. It comprises an object model creation module including a processor and a memory. The memory stores a plurality of programming instructions that retrieve orthorectified training images in which an object of interest has been identified, together with labeled and unlabeled geospatial imagery containing the object, and further programming instructions that use these images to train a pre-programmed machine-learning element.

According to another aspect, a method for simplified generation of systems to detect broad-area geospatial objects comprises: (a) retrieving multiple color- and spectrum-optimized geospatial images that contain an object of interest; (b) using the set of visual characteristics unique to the object of interest to parameterize at least one pre-engineered, pre-programmed machine-learning protocol, using a pre-engineered object model creation module comprising a processor and a memory; and (c) training the pre-programmed element using scale-corrected geospatial images.

“The inventor has created and put into practice a simplified engine for generating large-area geospatial object detection systems using auto-generated deep learning models trained with real and/or synthetic images.”

“The present application may describe one or more inventions, and many alternative embodiments may be described for any one or more of them; these are presented for illustration only and are not meant to be restrictive in any way. As is evident from the disclosure, one or more inventions may be broadly applicable to many embodiments. The embodiments are described in enough detail that those skilled in the art can practice any one of them, and it is to be understood that other embodiments may be used and that structural, logical, and software changes may be made without departing from the scope of the particular inventions. The skilled person will recognize that some of the inventions may be practiced with various modifications and alterations. Particular features of one or several inventions may be described with reference to one or more particular embodiments or figures that form part of the present disclosure and illustrate specific embodiments of one of the inventions; such features are not, however, limited to use in the embodiments or figures with reference to which they are described. This disclosure is neither a literal description of all embodiments of the inventions nor a list of features that must be present in all embodiments.

“Headings used in this patent application and the title of the patent application are for convenience only and should not be taken to limit the disclosure in any way.”

“Devices that are in communication with one another need not be in continuous communication with each other unless expressly specified otherwise. Devices that are in communication with one another may communicate directly or indirectly through one or more intermediaries, logical or physical.

“A description of an embodiment with several components in communication with one another does not imply that all such components are required. A variety of optional components may be described to illustrate the wide range of possible embodiments of one or more of the inventions and to more fully illustrate one or more aspects of them. Similarly, although process steps, method steps, algorithm steps, and the like may be described in a sequential order, such processes, methods, and algorithms may generally work in alternative orders unless otherwise stated. In other words, any sequence of steps described in this patent application does not, in and of itself, indicate a requirement that the steps be performed in that order. The steps of described processes may be performed in any practical order, and some steps may be performed simultaneously despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other). Moreover, the illustration of a process by its depiction in a drawing does not imply that the illustrated process excludes other variations and modifications, that the illustrated process or any of its steps is necessary to the invention(s), or that the illustrated process is preferred. Although steps are generally described once per embodiment, this does not mean they must occur once, or that they may occur only once, each time a process, method, or algorithm is carried out or executed; some steps may be omitted in some embodiments or occurrences, and some steps may be executed more than once in a given embodiment or occurrence.

“When a single device or article is described, it will be readily apparent that more than one device or article may be used in place of the single device or article. Similarly, where more than one device or article is described, it will be readily apparent that a single device or article may be used in their place.

“The functionality or features of a device may alternatively be provided by one or more other devices that are not explicitly described as having such functionality or features. Thus, other embodiments of one or more of the inventions need not include the device itself.

For clarity, some techniques and mechanisms are described in singular form; it should be noted, however, that particular embodiments may include multiple iterations of a technique or multiple instantiations of a mechanism unless otherwise noted. Process descriptions or blocks in the figures should be understood to represent modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of a process. Alternative implementations are included within the scope of the invention; for example, functions may be executed out of the order shown or discussed, depending on the functionality involved.

“Definitions”

“As used herein, “orthorectified geospatial image” refers to satellite imagery that has been digitally corrected to remove terrain distortions arising either from the angle of incidence of a particular point relative to the satellite’s imaging sensor or from significant topological variation within the area the image depicts. This correction is accomplished using a digital elevation model. One digital elevation model in use today is the Shuttle Radar Topography Mission (SRTM) 90 m DEM dataset, although others of equal or greater precision may be created using high-definition stereoscopic satellite imagery of the same region or topographical maps of sufficient detail for that area. The invention may use any digital elevation model dataset for the orthorectification of geospatial images.

“As used herein, the terms “database” and “data storage subsystem” may be used interchangeably to refer to a system designed for the long-term storage, indexing, and retrieval of data, where retrieval typically occurs via a querying interface or language. “Database” is often used to refer to the relational database management systems known in the art, but should not be taken to mean only those systems. Many other database and data storage technologies have been, and continue to be, developed in the art, including distributed non-relational databases such as Hadoop, column-oriented databases, in-memory databases, and the like. While different embodiments may preferentially employ one or another of the many data storage subsystems available in the art or developed in the future, the invention should not be construed as limited to any particular one; any data storage architecture may be used according to the embodiments. Where one or more data storage needs are described as being satisfied by distinct components (for example, an expanded private capital markets database and a configuration database), these descriptions refer to functional uses of data storage systems and do not describe their physical architecture. Any group of databases may be combined in a single database management system operating on a single machine or on a cluster of machines, and a single database (such as an expanded private capital markets database) may be implemented on a single machine or on a set of machines using clustering technology. These examples illustrate that no preferred architectural approach to database management is required by the invention; each implementer may choose data storage technology at their discretion without limiting the scope of the invention.

“As used herein, “search and locate” refers to a broad class of tasks in which a set of images is searched for specific targets, which may be stationary (such as buildings, tanks, railroad terminals, or downed aircraft) or relocatable (such as missile launchers, aircraft carriers, oil rigs, earthmovers, or tower cranes). Commonly the images are searched for multiple target classes (e.g., “find all military targets”), but single target-class searches may also be performed (“find all cars”). The second part of the search-and-locate task is to determine the location of every resulting target of interest (e.g., “where is the refugee camp?”).

“As used herein, “image manipulation” refers to the process of creating artificial, manipulated images that are programmatically labeled, so that a variety of synthetic images can be generated without manual effort. Image manipulation reduces the manual labor otherwise required to extract and label data, and it can also be used to create synthetic data for rare or even theoretical objects. A preferred embodiment of the invention allows searching for any object that can be simulated, modeled, or created using computer-aided design (CAD).

“As used herein, “manipulated image” refers to a synthetic image that has been transformed, flattened, modeled, or otherwise altered to reproduce the appearance of real orthorectified geospatial imagery. Such images may be used to create a set of training images for a searchable object class.

“As used herein, “cache of pre-labeled geospatial images” refers to any source of multiple orthorectified geospatial image segments that have been pre-analyzed, in which each instance of an object of interest has been tagged or labeled so that the recipient computer system can associate the label with the image for subsequent identification of similar objects. These images may be stored in an image database, either relational or flat, or in a directory of image files, any of which may reside on the same computer as the software that uses them or on another storage device or storage system directly connected to that computer.

“As used herein, “cache of multi-scale geospatial images” refers to any source of multiple orthorectified geospatial image segments, which may overlap and may differ in processing or optical characteristics. The coordinate system used to catalog the segments within the cache, whether proprietary or open, must be known so that the exact location of each image segment is always known. These images may be stored in an image database, either relational or flat, or in a directory of image files, and may reside on the computer where they are used or on another computer connected to it by any of the network methods known in the art.

“As used herein, “image analysis” refers to the analysis of images obtained from one or more image sensors. Generally, a single image analysis task focuses on one area of interest on the Earth, although images of multiple contiguous areas captured by several image sensors may also be analyzed. Large-scale image analysis is common with both aerial and satellite imagery.

“As used herein, “image correction and optimization module” refers to a set of programming functions that receives multiple orthorectified geospatial images from a cache of pre-labeled geospatial images, normalizes them to account for image quality variations including but not limited to color balance, brightness, and contrast, and examines them for aberrations such as cloud cover, lens artifacts, or mechanical obstructions of the depicted area. If pre-set thresholds are exceeded, the module may reject the image.
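By way of illustration, the kind of threshold-based screening such a module might perform could be sketched as follows. The near-white cloud-cover heuristic and the 0.4 rejection threshold are invented for the example, assuming 8-bit grayscale input; they are not taken from the text.

```python
import numpy as np
import cv2

def screen_image(path: str, cloud_threshold: float = 0.4) -> bool:
    """Return True if the image passes screening, False if it is rejected."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False                       # unreadable file counts as rejected
    # Crude proxy for cloud cover: the fraction of near-white pixels.
    bright_fraction = float(np.mean(img > 240))
    return bright_fraction < cloud_threshold
```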

“As used herein, “category” refers to a collection of objects that share the same type and function but may differ in appearance. The United States Capitol Building, the White House, and the Pentagon differ in appearance, but all belong to the category “buildings.” Similarly, the Airbus A310, Lockheed L-1011, Boeing 727, and Boeing 777 differ in size and configuration, but all belong to the category “airliners.”

“Conceptual Architecture”

“FIG. 1 is a diagram of an exemplary architecture of an engine that generates accurate systems for classifying objects of interest in broad-area geospatial imagery. The engine can be broken into three parts: the object model creation sub-assembly 111; the machine learning element training and verification sub-assembly 112; and the use of the trained machine learning classifier elements to identify trained objects in unanalyzed geospatial images 113. The training image synthesis module 145 generates a number of training images labeled with the object to be searched, as determined by a campaign analyst (FIG. 9A, 900; FIG. 9B, 950). A 3D model of the object to be searched is taken from a cache (not shown), flattened into a 2D representation from an overhead perspective, and overlaid onto a geospatial image. The synthetic image can then be compared with real imagery 150. If the object is judged a visual match, multiple synthetic geospatial images may be generated with the object overlaid in different orientations; these images may include shadowing effects that account for localized light sources, sun angle, and time of year. The object of interest may then be digitally delineated for object feature selection by the object model creation sub-assembly 111 and for machine learning classifier element training 112. The training images can be either synthetic or unmodified geospatial images in which the object has been manually annotated and delineated.

“During object model creation 111, the analyst may introduce multiple images in which the object to be searched is clearly delineated, so that the object feature extraction module encounters the object in a variety of orientations and under different environmental conditions. A plurality of training geospatial images containing objects that closely resemble the campaign object (confounding objects), clearly labeled as negative samples, may be released into the object model creation sub-assembly at the same time, to assist in the selection of object features that are highly specific to the search object and to exclude irrelevant, visually distinct objects from object model generation. Before use in the object feature extraction module 115, digital image correction and optimization may be performed on the image segments serving as both positive and negative feature selection material from the geospatial image segment store 110.

One digital correction that may be made to image segments destined for the deep learning model training module is conversion from color to grayscale, which aids feature selection by reducing image segment complexity. A preferred embodiment of the system converts the image from the RGB colorspace to YCrCb and then discards all but the Y channel data, producing a grayscale image with a tonal quality well suited to deep learning model training. The color-to-grayscale conversion described here is only an example and is not intended to limit the conversion methods that may be used. Histogram normalization is another image correction that may be used to prepare pre-labeled geospatial images for feature selection during object model generation; it increases image contrast and reduces the effects of exposure differences by creating sets of image segments with similar dynamic range profiles before they are used in feature extraction 120. Histogram normalization filters include histogram equalization, adaptive histogram equalization, and contrast-limited adaptive histogram equalization. These image manipulation methods can produce segments better suited to object feature extraction 120 during object model creation, although the system described herein does not depend on histogram normalization, in generalized form or in any of the exemplary histogram manipulation methods. Image filters such as Gaussian filters, median filters, and bilateral filters to increase edge contrast may also be applied to pre-labeled geospatial image segments, as is common in the art; this list is not exhaustive and should not be interpreted to preclude the use of other filters. The object model creation sub-assembly 111 creates at least one model of the unique features of the search object, using the object model creation module 125, for use during the subsequent machine learning stage.
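The two corrections just described can be sketched with OpenCV as follows: conversion to YCrCb, discarding all but the Y (luma) channel, followed by contrast-limited adaptive histogram equalization. The sketch assumes an 8-bit BGR array as produced by cv2.imread, and the CLAHE clip limit and tile size are illustrative assumptions.

```python
import cv2

def normalize_for_training(bgr_image):
    # Convert to YCrCb and discard all but the Y channel, yielding a
    # grayscale image with tonal qualities suited to model training.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[:, :, 0]
    # Equalize contrast so image segments share similar dynamic range profiles.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(y)
```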

“At least one of the models produced by the object model creation sub-assembly 111 may be used by the machine learning classifier element training and verification sub-assembly 112 to train a model-mediated object classification module 165. Although the embodiment may select a model 130 based on pre-programmed parameters derived from past campaign successes, many other methods of model selection may also be used. This stage may employ both supervised and unsupervised learning. Images are retrieved from a training data store 110, and digital correction and optimization may be performed 115b on the image segments to be used by the model training module 135; this correction reduces image segment complexity, which aids training. A preferred embodiment of the system converts the image from RGB to YCrCb and discards all but the Y channel data, producing a grayscale image with a tonal quality well suited to deep learning model training; this is not intended to limit the conversion methods that may be used. Histogram normalization may likewise be used to prepare pre-labeled and unlabeled geospatial imagery for training: it increases image contrast and reduces the effects of exposure differences by creating sets of image segments with similar dynamic range profiles before they are used by the model training module 135. Histogram normalization filters include histogram equalization and its adaptive variants. These histogram manipulation techniques may produce segments better suited to the supervised stage of deep learning by the machine learning classifier elements 135 (each acting separately or in ensemble, as illustrated in FIG. 5), although the aspect described herein does not depend on histogram normalization in generalized form or on any of the exemplary histogram manipulation methods. Image filters such as Gaussian filters, median filters, and bilateral filters to increase edge contrast may be applied to pre-labeled geospatial image segments 115b, as is common in the art; the listing of these filters does not preclude the use of others.

In the model training module 135, supervised learning may involve the introduction of images in which the object of interest has been clearly delineated, although unlabeled images containing the object may also be used. Unsupervised learning, which may also occur in the module 135, may involve the introduction of a large number of images containing the object, some labeled and some not, together with images containing irrelevant or confounding objects: objects visually similar to the object of interest in some way but not relevant to the campaign. For example, if the campaign is to identify the presence of tanker trucks, confounding objects could include flatbed trucks, box trucks, and rows of parked passenger cars, among other objects visually similar to tanker trucks. Unsupervised learning may occur without analyst intervention or monitoring, with review occurring only after the session completes, to evaluate the effectiveness of object classification tuning and the endpoint reliability of the trained system. Depending on the complexity of the search object and the success of training, additional rounds of supervised and unsupervised training may be required. In some cases a different model created by the object model creation sub-assembly 111 may be selected 130, and training repeated until the campaign specifications are met. A separate round of object identification accuracy testing 140 may also be conducted, depending on such factors as the design of the campaign, the analyst’s judgment, the complexity of the search object, and the expected background conditions.”
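As a rough sketch of a single supervised round of the kind just described, the following trains a support vector machine to separate positive chips from confounding negatives. The random arrays stand in for real training chips, and the chip size and classifier settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Stand-ins for real 64x64 grayscale training chips.
rng = np.random.default_rng(0)
positives = rng.random((100, 64 * 64))   # chips labeled with the search object
negatives = rng.random((100, 64 * 64))   # confounding-object chips
X = np.vstack([positives, negatives])
y = np.array([1] * 100 + [0] * 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```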

“The last sub-assembly is the use of the newly trained model-mediated object classification module 165 in monitored production use 113. The trained machine learning classifier elements 165 identify the trained search object in unanalyzed geospatial images 155, which may have been optimized with the previously mentioned desaturation or histogram normalization filters, among others. The geospatial images may be drawn from multiple caches derived from different collection systems; they may therefore differ in resolution, and thus in scale, in addition to the previously mentioned chromatic and spectral differences 160. These production images may be normalized using a multi-scale sliding window filter (see FIG. 8) to equalize the resolution of the geospatial image region under analysis. The campaign analyst may also choose a sub-region of a geospatial image that most likely contains the search object and convert it to a resolution equal to that of the training images 160, since resolution is an important controlled factor in object classification. When a geospatial image segment contains the search object, the system records a unique identifier, the latitude and longitude of the object, and the number of objects present, although other identifiers are possible. Campaign analysts receive these object search data in a pre-designated format 170.
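The multi-scale sliding window normalization mentioned above might be sketched as follows; the window size, stride, and scale factors are illustrative assumptions, and each yielded chip would be handed to the trained classifier element.

```python
import cv2

def sliding_windows(image, window=64, stride=32, scales=(0.5, 1.0, 2.0)):
    """Yield fixed-size chips across several rescalings of a production image."""
    for scale in scales:
        resized = cv2.resize(image, None, fx=scale, fy=scale)
        h, w = resized.shape[:2]
        for top in range(0, h - window + 1, stride):
            for left in range(0, w - window + 1, stride):
                chip = resized[top:top + window, left:left + window]
                # Map the window back to source-image pixel coordinates.
                yield chip, (int(left / scale), int(top / scale), scale)
```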

“FIG. 2 is a process flow diagram of object model creation 200. Training images are used to identify the features useful in classifying an object of particular interest for a search-and-locate campaign 201. The aspect selects features using unique combinations of characteristics, including shape, texture, and color. Positive samples, in which the object is shown in multiple orientations and under multiple lighting effects, are used to select features. System training may also require negative samples: geospatial images containing labeled objects that are not the search object. These objects assist in feature construction and selection, which is especially important when the feature differences between search objects and excluded objects are minimal, as they provide both features that identify the search object and features that exclude irrelevant objects 202. The images are then normalized, as described for FIG. 1, for feature selection purposes 203. The object feature extraction module 120 identifies visual features that predictably isolate the campaign search object from other possible objects in geospatial imagery, using at least one round each of supervised learning with a variety of clearly delineated positive and negative samples, and unsupervised learning with large numbers of unmodified and synthetic images 204. After training, one or more object classification models are created, comprising groupings and rules of visual relationships (features) found only on the search object in geospatial images of controlled resolution, together with groupings and rules of features not found on the search object.
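As one illustration of selecting shape, texture, and color features, the sketch below pairs a histogram-of-oriented-gradients descriptor with a coarse hue histogram. These particular descriptors are assumptions for the example, not the actual feature set of the object feature extraction module 120.

```python
import numpy as np
from skimage.feature import hog
from skimage.color import rgb2hsv, rgb2gray

def extract_features(rgb_chip):
    """Concatenate shape/texture and color descriptors for one training chip."""
    gray = rgb2gray(rgb_chip)
    # Shape and texture: histogram of oriented gradients (hypothetical settings).
    shape_texture = hog(gray, orientations=8, pixels_per_cell=(16, 16),
                        cells_per_block=(1, 1))
    # Color: a coarse histogram over the hue channel.
    hue = rgb2hsv(rgb_chip)[:, :, 0]
    color, _ = np.histogram(hue, bins=16, range=(0.0, 1.0), density=True)
    return np.concatenate([shape_texture, color])

chip = np.random.rand(64, 64, 3)          # stand-in for a real 64x64 RGB chip
print(extract_features(chip).shape)
```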

“FIG. 3A is a process flow diagram of machine learning classifier training. The retrieval 301 of an object model generated by the object model creation module 125, and the selection 302 thereof, are the first steps; the selected model is then active in the training aspect. A plurality of optimized training images containing the object of interest in multiple orientations, with the object clearly labeled, is then introduced 303. These training images are used to iteratively train one or more deep learning object classifier elements, which may use machine learning paradigms such as support vector machines, convolutional neural networks, random forest filters, and other machine learning methods known to those skilled in the art. A large number of images in which the search object is present but not labeled may also be used to train the machine learning elements.

“FIG. 3B is a process flow diagram of machine learning classifier verification. A trained model is selected 311 and tested using multiple training geospatial images that either contain 312 or do not contain the search object; confounding images may also be included to verify fine specificity 313. Classification specificity is then tested 314 to determine whether the campaign specifications 315 are met. If the trained model-mediated object classification module meets or exceeds the requirements, it may be used in the production campaign 316; if it does not, additional training can be done with the same model or with another model.
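The specificity test against campaign specifications might be scored along the following lines; the 0.9 precision and recall targets are illustrative assumptions.

```python
from sklearn.metrics import precision_score, recall_score

def meets_campaign_spec(y_true, y_pred, min_precision=0.9, min_recall=0.9):
    """Compare held-out classification results against campaign targets."""
    precision = precision_score(y_true, y_pred)
    recall = recall_score(y_true, y_pred)
    return precision >= min_precision and recall >= min_recall

# Deploy to production only if the spec is met; otherwise retrain.
print(meets_campaign_spec([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))  # False: recall too low
```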

“FIG. 4 is a process flow diagram of the use of a trained classification module in a production search campaign. After being trained to the required level of specificity, the trained model-mediated object classification module is placed into operation 401 and used to search for the object of interest in a number of unanalyzed geospatial image segments within the geographic region of the search campaign 402. These unanalyzed geospatial image segments may be programmatically optimized for object identification, which may include the use of a sliding window function that places the image at the appropriate resolution for the object search 403. After the object search is complete 404, the data collected about the search objects, which may include latitude and longitude coordinates, are presented to the analyst in the format pre-designated by the campaign author 405.
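Reporting detections as geographic coordinates, as in steps 404-405, can be sketched with the standard affine geotransform that typically accompanies orthorectified imagery; the transform coefficients and the pixel location below are hypothetical.

```python
def pixel_to_lonlat(col, row, geotransform):
    """Apply a GDAL-style affine geotransform (x0, dx, rx, y0, ry, dy)."""
    x0, dx, rx, y0, ry, dy = geotransform
    lon = x0 + col * dx + row * rx
    lat = y0 + col * ry + row * dy
    return lon, lat

# A hypothetical detection at pixel (1200, 340) in a north-up image.
gt = (36.80, 0.000005, 0.0, -1.28, 0.0, -0.000005)
print(pixel_to_lonlat(1200, 340, gt))
```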

“FIG. 5 shows the training of machine learning classifier elements according to an aspect 500. Training begins with the retrieval of multiple training images 510, which may be real or synthesized (FIG. 9A and FIG. 9B) and labeled or unlabeled; one subset of these images may not contain the search object. Training takes place within the model training module 135 (FIG. 1). Training images may be routed to one or several machine learning elements 550a-550n, depending on the design and the combination of machine learning protocols found to work best for the search object. Individual machine learning elements may run protocols for any of the machine learning algorithms known to those skilled in the art, such as support vector machines, random forests, Bayesian networks or naive Bayes classification, and convolutional neural networks. A single machine learning element, such as machine learning element C 550c, may be used to classify search objects; alternatively, multiple machine learning elements may be used in ensemble, such as machine learning elements B 550b and C 550c, to provide the most precise and reproducible identification. For example, machine learning element A 550a may first run the support vector machine protocol, after which machine learning element B 550b runs the random forest protocol. Any sequence of machine learning protocols may be used, depending on the requirements for reliable and reproducible classification.

“It is important to note that, for clarity of presentation, not all connections between machine learning elements are drawn. Search object classification data may flow directly between any of the machine learning elements 550a-550n in whatever order is necessary for optimal search object classification according to the aspect. Although only ensembles of up to two machine learning elements are briefly described, there is no limit on the number of elements that may participate in an ensemble, and no design restriction prevents one machine learning element from participating in more than one ensemble if optimal search object classification so requires.
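An ensemble of two machine learning elements of the kind described, pairing a support vector machine with a random forest, might be sketched as follows. The particular composition and the soft-voting scheme are assumptions, since the text permits any sequence or combination of elements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC

ensemble = VotingClassifier(estimators=[
    ("svm", SVC(probability=True)),         # machine learning element A
    ("forest", RandomForestClassifier()),   # machine learning element B
], voting="soft")

# Random stand-ins for extracted feature vectors and labels.
rng = np.random.default_rng(0)
X, y = rng.random((200, 32)), rng.integers(0, 2, 200)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```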

“FIG. 6 is a flow diagram showing how a machine learning classifier may be trained and verified for use in production geospatial image analysis according to an aspect 600. Training begins with a series of optimized orthorectified geospatial training images, some of which may be synthetically generated (see FIG. 9A and FIG. 9B), processed in the model training module (FIG. 1, 135), followed by a mixture of labeled and unlabeled optimized orthorectified geospatial images 601. The model training module may employ programming functions that include multiple machine learning protocols, such as support vector machines, random forests, Bayesian networks, naive Bayes classifiers, and convolutional neural networks, allowing highly reliable and reproducible campaign search object classification. Ensembles may use multiple machine learning functions for search object training, and there is no aspect design limitation against reusing the same machine learning protocol should object classification reliability or reproducibility require it. A plurality of unlabeled, optimized orthorectified geospatial images may be used to test candidate model-mediated search object classification modules for reliability and reproducibility.

“FIG. 7 is a diagram showing the normalization and optimization of geospatial images prior to object identification according to an aspect 700. Geospatial images should be optimized and normalized before being analyzed for the presence or absence of a search object. Images from the image store 150 that originate in the specified region of a search-and-locate campaign may be modified during the selection process 710. Geospatial images may be divided into segments according to the campaign requirements, the level of certainty about the location of the search object, and pre-planning by campaign analysts. Color correction may be applied to the selected geospatial image or image segment to ensure optimal object classification within the study images 740. Image segments that are already properly scaled 720 can be passed directly to the model-mediated object classification module (FIG. 1, 165), while geospatial images whose resolution differs from that of the training images are first processed by a scale-factoring, multi-scale sliding window resolution-normalizing filter 750 (see FIG. 8) before object classification analysis.
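The scale-factoring step can be sketched in terms of ground sample distance (GSD), resampling a production segment to the resolution of the training imagery; the 0.5 m training GSD is an illustrative assumption.

```python
import cv2

def normalize_resolution(segment, segment_gsd_m, training_gsd_m=0.5):
    """Resample a production segment so its GSD matches the training imagery."""
    if abs(segment_gsd_m - training_gsd_m) < 1e-9:
        return segment                       # already properly scaled (720)
    factor = segment_gsd_m / training_gsd_m  # >1 upsamples coarser imagery
    return cv2.resize(segment, None, fx=factor, fy=factor,
                      interpolation=cv2.INTER_LINEAR)
```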

“FIG. 9A, 900, and FIG. 9B, 950, are process flow diagrams summarizing the generation of synthetic training geospatial images comprising the campaign object for geospatial analysis. This method follows a process similar to that of 200, but combines the deep learning models 135 and 550a-550n and the image analysis software 113 with the training image manipulation software module 145 according to an aspect. According to the embodiment, a search-and-locate function (FIG. 9A, 901) can be used to locate and identify an object or item of particular interest in orthorectified geospatial imagery; it does not matter whether the object is real or theoretical, as long as it is tangible and occupies 3-dimensional space. The method 900, 950 allows a search-and-locate action 901 to be performed on any pre-labeled, searchable object or group of objects, and a synthetic image may be generated 951 for any object or item of particular interest according to the search function 901. In response to the query 901, searchable objects are found in a database 903 of 3-dimensional model objects. The database 903 is searched by the training image manipulation software module 145, which creates a 2-dimensional, flattened synthetic image 904 from the selected 3-dimensional model. This 2-dimensional synthetic image may then be compared 905 to real geospatial imagery. The training image manipulation software module 145 may adjust, orient, or align the flattened 2-dimensional image to replicate 906 the real orthorectified geospatial image. After this initial replication 906, the module 145 separates the synthetic layer from the real image layer 907, so that the synthetic image can be initially tuned against different backgrounds (FIG. 9B, 951, 953). Once the synthetic image layer has been separated 907, the module 145 overlays the synthetic image onto the real geospatial image again 954 and adjusts the initial image to account for different geographic locations, environmental factors, and shadowing effects from sun angle, time of year, and localized light sources 955. The synthetic image is finalized by applying post-overlay image filters 956, including color correction 957 and resolution correction by blurring or pixelating 958. It is important to demarcate the synthetic footprint 959, because it may contain not only the synthetic image but also any shadowing associated with the synthetic object. The synthetic image 959 is overlaid onto an existing background image using a masking function 960 that makes the remainder of the overlay transparent, so that no existing imagery is obscured. The software module 145 then runs a check 961 to determine whether the synthetic image adequately matches real imagery; if so, a labeled synthetic image 962 is created and placed into a labeled training corpus 110; if not, further adjustments to the synthetic object model may be made (FIG. 9A, 905).”
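The masked-overlay portion of this flow (steps 954-960) might be sketched with PIL as follows; the stand-in background and object tiles, the blur radius, the paste location, and the label format are all hypothetical.

```python
from PIL import Image, ImageFilter

# Stand-ins for a real background tile 150 and a flattened 2-D object 904.
background = Image.new("RGBA", (256, 256), (90, 110, 70, 255))
obj = Image.new("RGBA", (48, 20), (180, 40, 40, 255))

# Post-overlay corrections 956-958: blur toward the background's resolution.
obj = obj.filter(ImageFilter.GaussianBlur(radius=1.5))

# Masked overlay 960: paste through the object's own alpha channel so that
# only the synthetic footprint obscures the existing imagery.
composite = background.copy()
composite.paste(obj, (120, 80), mask=obj)
composite.save("synthetic_labeled_sample.png")

# Label destined for the training corpus 110: class plus chip-space box.
label = {"class": "tanker_truck",
         "bbox": (120, 80, 120 + obj.width, 80 + obj.height)}
print(label)
```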

“Hardware Architecture”

The techniques described herein can be implemented in hardware or in a combination of hardware and software. They can be implemented, for example, in an operating system kernel or in a separate user process.

Software/hardware hybrid implementations may be made on a programmable machine connected to a network, selectively activated or reconfigured by a computer program stored in memory. Such network devices may have multiple network interfaces, which may be configured or designed for different types of network communication protocols. A general description of some of these machines may be provided herein to illustrate one or more examples of how a given unit of functionality may be implemented. According to specific aspects, at least some of the features or functionalities may be implemented on one or more general-purpose computers associated with one or more networks, such as a client computer or end-user computer system, a server or other server system, a mobile computing device (e.g., a tablet computing device, mobile phone, or laptop), a consumer electronic device, a music player, or any other suitable electronic device, router, switch, or combination thereof. At least some of these features or functionalities may also be implemented in one or more virtualized computing environments.

Referring now to FIG. 10, there is shown a block diagram depicting an exemplary computing device 10 suitable for implementing at least a portion of the features or functionalities described herein. Computing device 10 may be any of the computing machines listed in the previous paragraph, or any other electronic device capable of executing software- or hardware-based instructions according to one or more programs stored in memory. Computing device 10 may be configured to communicate with other computing devices over communications networks such as a wide area network, a metropolitan area network, a local area network, a wireless network, or the Internet, using known protocols for such communication.

Computing device 10 may include one or more central processing units (CPU) 12, one or more interfaces 15, and one or more buses 14 (such as a peripheral component interconnect (PCI) bus). When acting under the control of appropriate software or firmware, CPU 12 may be responsible for implementing specific functions associated with a specifically configured computing device or machine. For example, computing device 10 may be designed or configured to function as a server system utilizing CPU 12, local memory 11 and/or remote memory 16, and interface(s) 15. In at least one aspect of the invention, CPU 12 may be caused to perform one or more of the different types of functions or operations under the control of software modules or components, which may include, for example, an operating system, drivers, and other appropriate software.

“CPU 12 may include one or more processors 13, such as processors from the Intel, ARM, or Qualcomm families of microprocessors. Some processors 13 may include specially designed hardware, such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and electrically erasable programmable read-only memories (EEPROMs), for controlling the operations of the computing device. A local memory 11, such as non-volatile random access memory (RAM) and/or read-only memory (ROM), may also form part of CPU 12; there are many different ways in which memory may be coupled to system 10. Memory 11 may be used for a variety of purposes, including caching and/or storing data and programming instructions. It should also be appreciated that CPU 12 may be part of a system-on-a-chip (SOC), which may include additional hardware such as memory or graphics processing chips (for example, a QUALCOMM SNAPDRAGON or SAMSUNG EXYNOS CPU), as is becoming increasingly common in the art, whether in mobile devices or in integrated devices.

“As used herein, the term “processor” is not limited to those integrated circuits referred to in the art as processors, mobile processors, or microprocessors, but broadly refers to microcontrollers, microcomputers, programmable logic controllers, application-specific integrated circuits, and any other programmable circuits.

In one aspect, interfaces 15 are provided as network interface cards (NICs). Generally, NICs control the sending and receiving of data packets over a computer network; other types of interfaces 15 may support other peripherals used with computing device 10. Among the interfaces that may be provided are frame relay interfaces, cable interfaces, token ring interfaces, and graphics interfaces. Various interface types are available, including universal serial bus (USB), serial, Ethernet, FIREWIRE, THUNDERBOLT, and PCI. Interfaces 15 may include physical ports appropriate for communication with the appropriate media, and in some cases may also include an independent processor (such as a dedicated audio or video processor, as is common in the art for high-fidelity A/V hardware interfaces) and/or volatile and/or non-volatile memory (e.g., RAM).

“Although the system shown in FIG. 10 illustrates one architecture for a computing device 10, it is by no means the only architecture on which at least some of the techniques and features described herein may be implemented. Architectures having one, two, or any number of processors 13 may be used, and such processors may be present in a single device or distributed among any number of devices. In one aspect, a single processor 13 handles both communications and routing computations; in other aspects, a separate dedicated communications processor may be provided. Different types of features or functionalities may be implemented in aspects that include a client device (such as a smartphone or tablet running client software) and a server system (such as the server system described in more detail below).

“Regardless of network device configuration, the system of an aspect may employ one or more memories or memory modules (such as, for example, remote memory block 16 and local memory 11) configured to store data, program instructions for general-purpose network operations, or other information relating to the functionality described herein (or any combination thereof). Program instructions may, for example, control the execution of or comprise an operating system and/or one or more applications. Memory 16 or memories 11, 16 may also be configured to store data structures, configuration data, encryption data, historical system operations information, or other specific or generic non-program information described herein.

“Because such information and program instructions may be employed to implement one or more of the systems or methods described herein, at least some network device aspects may include non-transitory machine-readable storage media, which may, for example, be configured or designed to store program instructions, state information, and the like for performing various operations described herein. Examples of such non-transitory machine-readable storage media include, but are not limited to, magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media; and hardware devices specially configured to store and execute program instructions, such as read-only memory devices, flash memory, solid state drives (SSD), and “hybrid SSD” storage drives that combine physical components of solid state and hard disk drives in a single device, which are becoming increasingly common in the art. Such storage devices may be integral and non-removable (such as RAM hardware modules that may be soldered onto a motherboard or otherwise integrated into an electronic device), or they may be removable, such as swappable flash memory modules (e.g., “thumb drives”), “hot-swappable” hard disk or solid state drives, or other removable media designed for rapidly exchanging physical storage devices. Examples of program instructions include both object code, such as is produced by a compiler; machine code, such as is produced by an assembler or a linker; byte code, such as may be generated by a JAVA compiler and executed using a Java virtual machine or equivalent; and files containing higher-level code that may be executed by the computer using an interpreter (for example, scripts written in Python, Ruby, Groovy, or other scripting languages).

“In some aspects, systems may be implemented on a standalone computing system. Referring now to FIG. 11, there is shown a block diagram depicting an exemplary architecture of one or more aspects, or components thereof, on a standalone computing system. Computing device 20 includes processors 21 that may run software carrying out one or more functions or applications of aspects, such as, for example, a client application 24. Processors 21 may execute computing instructions under the control of an operating system 22, such as a version of the MICROSOFT WINDOWS operating system, the APPLE OSX or iOS operating systems, some variety of the Linux operating system, or the like. In many cases, one or more shared services 23 may be operable in system 20 and may be useful for providing common services to client applications 24; for example, services 23 may be WINDOWS services, user-space common services in a Linux environment, or any other type of common service architecture used with operating system 22. Input devices 28 may be of any type suitable for receiving user input, including, for example, a keyboard, touchscreen, microphone (for voice input), mouse, or touchpad. Output devices 27 may be of any type suitable for providing output to one or more users, and may include, for example, one or more screens, speakers, printers, or any combination thereof. Memory 25 may be random-access memory having any structure and architecture known in the art, for use by processors 21, for example, to run software. Storage devices 26 may be any magnetic, optical, or mechanical storage device for the storage of data in digital form (such as those described above with reference to FIG. 10); examples include flash memory, magnetic hard drives, and CD-ROMs.

“In some aspects, systems may be implemented on a distributed computing network, such as one having any number of clients and/or servers. Referring now to FIG. 12, there is shown a block diagram depicting an exemplary architecture 30 for implementing at least a portion of a system according to one aspect on a distributed computing network. Any number of clients 33 may be provided; each client 33 may run software for implementing client-side portions of a system, and each client may comprise a system 20 such as that illustrated in FIG. 11. In addition, any number of servers 32 may be provided for handling requests received from one or more clients 33. Clients 33 and servers 32 may communicate with one another via one or more electronic networks 31, which may in various aspects be the Internet, a wide area network, a mobile telephony network (such as CDMA or GSM cellular networks), a wireless network (such as WiFi, WiMAX, or LTE), or a local area network (or indeed any network topology; no one network topology is preferred over any other). Networks 31 may be implemented using any known network protocols.

“In some cases, servers 32 may call external services 37 when needed to obtain additional information or to refer to additional data concerning a particular call. Communications with external services 37 may take place, for example, via one or more networks 31. In various aspects, external services 37 may comprise web-enabled services or functionality related to or installed on a hardware device itself. For example, in an aspect where client applications 24 are implemented on a smartphone, tablet, or other electronic device, information may be stored on a server system 32 in the cloud or on an external service 37 deployed on the premises of one or more enterprises or users.

“In some aspects, clients 33 or servers 32 (or both) may make use of one or more specialized services or appliances that may be deployed locally or remotely across one or more networks 31. For example, one or more aspects may make use of one or more databases 34. It should be understood by one having ordinary skill in the art that databases 34 may be arranged in a wide variety of architectures and may use a wide variety of data access and manipulation means. For example, in various aspects one or more databases 34 may comprise a relational database system using a structured query language (SQL), while others may comprise an alternative data storage technology, such as those referred to in the art as “NoSQL” (for example, HADOOP CASSANDRA or GOOGLE BIGTABLE). In some aspects, variant database architectures such as column-oriented databases, in-memory databases, clustered databases, distributed databases, or flat-file data repositories may be used. It will be appreciated by one having ordinary skill in the art that any combination of known or future database technologies may be used as appropriate, unless a specific database technology or a specific arrangement of components is specified for a particular aspect described herein. Moreover, the term “database” as used herein may refer to a physical database machine, a cluster of machines acting as a single database system, or a logical database within an overall database management system. Unless a specific meaning is specified for a given use of the term, it should be construed to mean any of these senses of the word, all of which are understood as a plain meaning of the term “database” by those having ordinary skill in the art.

“Similarly, some aspects may make use of one or more security systems 36 and configuration systems 35. Security and configuration management are common information technology (IT) and web functions, and some amount of each is generally associated with any IT or web system. It should be understood by one having ordinary skill in the art that any configuration or security subsystem known in the art now or in the future may be used in conjunction with aspects without limitation, unless a specific security system 36, configuration system 35, or approach is specifically required by the description of any particular aspect.

“FIG. “FIG. 13 is an illustration of a computer system 40 that can be used at any one of the locations in the system. This is an example of any computer capable of running code to process data. Computer system 40 may be modified and altered without affecting the wider scope of the system or method described herein. Bus 42 is connected to the central processor unit (CPU), 41. There are also connections to memory 43, nonvolatile memories 44, display 47 and input/output unit (I/O) 48. Network interface card (NIC 53) 53 is also connected to bus 42. I/O unit 48 can be connected to keyboard 49 and pointing device 50. Hard disk 52 is also available. Real-time clock 51 may also be connected. NIC 53 connects with network 54. This network could be the Internet or a nearby network. A local network may have connections to the Internet. Power supply unit 45 is also shown in system 40, which is connected to an AC main alternating current supply 46. Batteries that may be present are not shown. There are many modifications and devices that are well-known but do not apply to the unique functions of this system and the method described herein. You should know that any or all of the components shown may be combined. This is possible in integrated applications such as Qualcomm or Samsung system on a chip (SOC) devices. Also, it is possible to combine multiple capabilities and functions into one hardware device.

“In different aspects, functionality to implement systems or methods may be distributed among multiple client and/orserver components. Software modules can be used to perform various functions within a system. These modules may be differently implemented on client and server components.

“The skilled person will know a variety of modifications that could be made to the different aspects mentioned above. The claims and equivalents define the invention.

Summary for “System for simplifying the generation of systems for wide area geospatial object identification”

“Discussion of the State of the Art”


Computer vision, the ability of a computer to reliably identify objects in images, has been a topic of active research in computer science since the 1960s. Until recent years, however, this pursuit succeeded only where both the object of interest and the background against which it appeared were tightly controlled. Both technological and theoretical barriers have stood in the way of progress in computer object identification. Computer visual processing, like its biological counterpart, requires significant computing power and large amounts of memory storage, and only in roughly the past 15 years has hardware caught up: the ability to pack more transistors into a smaller volume at falling cost has made possible specialized components, such as graphics processing units, that are optimized for the calculations performed when manipulating visual data. Rapid, even real-time, object identification is now feasible on current hardware. Computer scientists working in the field have likewise become markedly better at programming computers to analyze objects of interest. One of the earliest methods was to break each object of interest into a unique group of simple geometric shapes, or to rely on unique shading patterns, in order to identify new instances of the object. These early attempts produced results that were highly sensitive to such variables as lighting, placement of the object within the sample field, and orientation; at times the object of interest could not even be re-identified in the original image. Today, advances in computer science, theories of biological vision, and computer vision theory have made it possible to train computers to reliably identify specific objects of interest. A popular combination is the convolutional neural network with deep learning, which trains the system to recognize an object of interest even when it is presented against different backgrounds or in different orientations. Convolutional neural networks consist of multiple layers of filters interconnected by partial, local receptive fields between layers, which permits computer-learned object recognition with minimal pre-suppositions: the network itself determines the filters used to identify the target object rather than having them pre-programmed. Deep learning begins with a period of ‘supervised learning’, which uses a moderate-sized set of training images in which each example of the object to be identified, such as a human face, is clearly delineated, or ‘labeled’. This is followed by a period of unsupervised learning on a much larger number of unlabeled images, a small percentage of which do not contain the object to be identified. The accuracy of the overall system, including the recall and precision of its classification results, depends directly on the number of training images used, and the number of training images in turn governs how much time the convolutional neural network deep learning model must spend learning to find the object of interest. The convolutional neural network deep learning method has yielded computer systems that can reliably perform human facial recognition, optical character recognition, and identification of complex parts during manufacturing. Indeed, the method has proven so useful for object identification that multiple programming libraries are now available for download, including the Caffe library from the Berkeley Vision and Learning Center, Torch7 (Nagadomi), and cuda-convnet2 (Alex Krizhevsky).
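As an illustration of the kind of network such libraries implement, the following is a minimal sketch in Python using PyTorch. The layer sizes, the 64x64 grayscale chip dimensions, and the two-class output are illustrative assumptions, not anything specified by the patent.

import torch
import torch.nn as nn

class SmallObjectClassifier(nn.Module):
    """Toy convolutional network: stacked filter layers with local
    receptive fields, whose filters are learned rather than hand-coded."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Supervised phase: labeled 64x64 grayscale chips, label 1 = object present.
model = SmallObjectClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
chips = torch.randn(8, 1, 64, 64)     # stand-in for labeled training chips
labels = torch.randint(0, 2, (8,))    # stand-in labels
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(chips), labels)
loss.backward()
optimizer.step()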
The convolutional neural network deep learning model was recently applied to the field of object identification and classification in orthorectified geospatial images in U.S. Pat. No. 9,589,210 to Estrada, A., et al. Although that automated geospatial object classification system is a major advancement, it still requires each new system to be created from scratch, which reduces efficiency and increases cost.

“What is needed in the art is an automated system that generates synthetic images to supplement the real images required for an automated system to identify, and determine the exact location of, various objects of interest in geospatial imagery.

“The inventor has developed an engine to simplify the generation of systems that analyze satellite images in order to geolocate targets of interest or to identify objects or their types.”

According to one aspect, a pre-existing framework of modular, reusable elements may be used to select unique features of an object of interest from a plurality of orthorectified training images in which the object is clearly delineated, and from a second plurality of orthorectified training images in which objects bearing features that are to be excluded from the generation of one or more feature models for the object are clearly labeled. A generated feature model may then serve as the seed for a second framework containing at least one pre-programmed machine learning object classifier element, each pre-programmed to execute one of the machine learning protocols and accepting the feature model as one parameter. The second framework accepts large numbers of geospatial images containing the object of interest, either delineated or unlabeled, to train the system to reliably identify the object of interest; geospatial images that do not contain the object of interest may also be submitted to further test the specificity of classification. The engineered, object-of-interest-trained machine learning classifier module may then be accepted by a third framework for identification of instances of the object in production geospatial imagery. Campaign authors receive the classification results of the search and locate campaign for an object of interest in a pre-designated format, according to their specifications. This greatly reduces the time and programming knowledge required to run a search and locate campaign for one or several specific search objects.

In one aspect, a system for simplifying the generation of broad-area geospatial object detection systems comprises an object model creation module that includes a processor and a memory, the memory holding a plurality of programming instructions that retrieve orthorectified training images in which an object of interest has been delineated and retrieve a plurality of labeled and unlabeled geospatial images containing the object. The memory further holds programming instructions for using those images to train a pre-programmed machine learning element.

According to another aspect, a method for the simplified generation of systems to detect broad-area geospatial objects comprises: (a) retrieving a plurality of color- and spectrum-optimized geospatial images containing an object of interest; (b) using a set of visual characteristics unique to the object of interest to parameterize at least one pre-engineered, pre-programmed machine learning protocol, by means of a pre-engineered object model creation module comprising a processor and a memory; and (c) training the pre-programmed element using scale-corrected geospatial images.

“The inventor has created and put into practice a simplified engine for generating broad-area geospatial object detection systems that use auto-generated deep learning models trained with real and/or synthetic images.”

“The present application may describe one or more inventions, and many alternative embodiments may be described for any one of them; these are presented for illustration only. The embodiments are not intended to be limiting in any sense. As is evident from the disclosure, one or more of the inventions may be broadly applicable to numerous embodiments. The embodiments are described in sufficient detail that those skilled in the art can practice any of them, and it is to be understood that other embodiments may be used and that structural, logical, and software changes may be made without departing from the scope of the particular inventions. Those skilled in the art will recognize that one or more of the inventions may be practiced with various modifications and alterations. Particular features of one or more of the inventions may be described with reference to one or more particular embodiments or figures, which form part of the present disclosure and which illustrate, by way of example, specific embodiments of one of the inventions; such features are not, however, limited to use in the particular embodiments or figures with reference to which they are described. This disclosure is neither a literal description of all embodiments of the inventions nor a listing of features that must be present in all embodiments.

“Headings in this patent application, and the title of this patent application, are provided for convenience only and are not to be taken as limiting the disclosure in any way.”

“Devices that are in communication with one another need not be in continuous communication with each other unless expressly specified otherwise. In addition, devices that communicate with one another may communicate directly or indirectly through one or more intermediaries, whether logical or physical.

“A description of an embodiment having several components in communication with one another does not imply that all such components are required. To the contrary, a variety of optional components may be described to illustrate the wide range of possible embodiments of one or more of the inventions, and to more fully illustrate some or all of them. Similarly, although process steps, method steps, algorithm steps, and the like may be described in a sequential order, such processes, methods, and algorithms may generally be configured to work in alternate orders, unless specifically stated otherwise. Any sequence or order of steps described in this patent application does not, in and of itself, indicate a requirement that the steps be performed in that order; the steps of described processes may be performed in any order practical, and some steps may be performed simultaneously despite being described or implied as occurring non-simultaneously (for example, because one step is described after the other). The illustration of a process in a drawing does not imply that the illustrated process is exclusive of other variations and modifications, nor that the illustrated process or any of its steps is necessary to the invention(s). Although steps are generally described once per embodiment, this does not mean they must occur once, or only once, each time the process, method, or algorithm is carried out; some steps may be omitted in some embodiments or occurrences, while other steps may be executed more than once in a given embodiment or occurrence.

“When a single device or article is described, it will be readily apparent that more than one device or article may be used in place of the single device or article. Similarly, where more than one device or article is described, it will be readily apparent that a single device or article may be used in place of the more than one device or article.

“The functionality or features of a device may instead be provided by one or more other devices that are not explicitly described as having such functionality or features. Thus, other embodiments of any one or more of the inventions need not include the device itself.

For clarity of description, some techniques and mechanisms are described in singular form; it should be noted, however, that particular embodiments may include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. Process descriptions and blocks in the figures should be understood to represent modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps in a process. Alternative implementations are included within the scope of the invention; in these, functions may be executed out of the order shown or discussed, depending on the functionality involved.

“Definitions”

“As used herein, an ‘orthorectified geospatial image’ refers to satellite imagery of the Earth that has been digitally adjusted to remove terrain distortions, whether introduced by the angle of incidence of a particular point from the satellite’s imaging sensor or by significant topological changes within the area the image depicts. This correction is accomplished using a digital elevation model; one such model in use today is the Shuttle Radar Topography Mission (SRTM) 90 m DEM dataset, although other models of equal or greater precision may be created using high-definition stereoscopic satellite imagery of the regions of interest or topographical maps of sufficient detail for the area. Geospatial images may be orthorectified using any available digital elevation model dataset in accordance with the invention.

“As used herein, ‘database’ or ‘data storage subsystem’ (these terms may be used interchangeably) refers to a system designed for the long-term storage, indexing, and retrieval of data, where retrieval typically occurs via a querying interface or language. ‘Database’ is often used to refer to the relational database management systems known in the art, but it should not be taken to mean those systems alone. Many other database and data storage system technologies have been, and continue to be, developed in the art, including distributed non-relational databases such as Hadoop, column-oriented databases, in-memory databases, and the like. While different embodiments may choose among the many data storage subsystems available in the art now or in the future, the invention should not be construed as limited to any of them; any data storage architecture may be used in accordance with the embodiments. Where one or more data storage needs are described as being satisfied by distinct components (for example, an expanded private capital markets database and a configuration database), these descriptions need not reflect the physical architecture of the data storage systems: any group of databases may be combined in a single database management system running on a single machine or on multiple machines, and a single database (such as an expanded private capital markets database) may be implemented on a single machine or on a group of machines using clustering technology. These examples illustrate that no particular architectural approach to database management is preferred according to the invention; each implementer may choose data storage technology at their discretion without limiting the scope of the invention.

“As used herein, ‘search and locate’ refers to a general category of tasks in which a set of images is searched for particular stationary targets (such as buildings, tanks, railroad terminals, or downed aircraft) or relocatable targets (such as missile launchers, aircraft carriers, oil rigs, earthmovers, or tower cranes). Commonly the images are searched for multiple types of target (e.g., ‘find all military targets’), although single-target-class searches may also be performed (‘find all cars’). The second step of the search and locate task is to determine the location of all resulting targets of interest (e.g., ‘where is the refugee camp?’).

“As used herein, ‘image manipulation’ refers to the process of creating artificial, manipulated images that are then labeled, so that a wide variety of synthetic images can be generated without manual effort. Image manipulation reduces the manual labor otherwise needed to extract and label data, and it can also be used to create synthetic data for rare, or even theoretical, objects. A preferred embodiment of the invention allows searching for any object that can be simulated, modeled, or created using computer-aided design (CAD).

“As used herein, ‘manipulated images’ and ‘synthetic images’ are images that have been manipulated, flattened, modeled, or otherwise altered to reproduce the appearance of real orthorectified geospatial imagery. Such images can be used to create a set of training images for a searchable object class.

“As used herein, a ‘cache of pre-labeled geospatial images’ is any source of multiple orthorectified geospatial image segments that have been pre-analyzed, with each instance of an object of interest tagged or labeled so that the recipient computer system can associate the image with the object for subsequent identification of similar objects. These images may be stored in an image database, either relational or flat, or as files in a directory; any of these files may reside on the same computer where the images are used, or on another storage device or storage system directly connected to that computer.

“As used herein, a ‘cache of multi-scale geospatial images’ is any source of multiple orthorectified geospatial image segments, which may overlap or differ in processing or optical characteristics. The coordinate system used to catalog these segments within the cache, whether proprietary or open, must be known, so that the exact location of each image segment is always known. These images may be stored in an image database, flat or relational, or as files in a directory, on the computer where they are used or on a computer connected to it by any of the networking methods known in the art.

“As used herein, ‘image analysis’ refers to the analysis of images obtained from one or more image sensors. Generally, a single analysis task focuses on a single area of interest on the Earth, although images of multiple contiguous areas, as captured by multiple image sensors, may also be analyzed. Image analysis of large-scale imagery is common with aerial and satellite imagery.

“As used herein, an ‘image correction and optimization module’ is a collection of programming functions that receives multiple orthorectified geospatial images from a cache of pre-labeled geospatial images and normalizes these images to account for image quality variations, including but not limited to color balance, brightness, and contrast. The module may also examine images for aberrations such as cloud cover, lens artifacts, and mechanical obstructions, and the software within it may reject an image if pre-programmed thresholds are exceeded.
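As a hedged sketch of what such a module might do, the following Python function, using OpenCV and NumPy and assuming 8-bit BGR input as OpenCV loads it, normalizes a segment to a contrast-equalized grayscale and rejects it when a crude cloud-cover proxy exceeds a threshold. The 0.4 limit and the near-white heuristic are illustrative assumptions, not values from the patent.

import cv2
import numpy as np

CLOUD_FRACTION_LIMIT = 0.4  # assumed rejection threshold

def normalize_or_reject(image_bgr: np.ndarray):
    """Return a contrast-normalized grayscale segment, or None if rejected."""
    # Convert to YCrCb and keep only the luma (Y) channel.
    y = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    # Crude aberration check: near-white saturation as a cloud-cover proxy.
    cloud_fraction = np.mean(y > 240)
    if cloud_fraction > CLOUD_FRACTION_LIMIT:
        return None  # exceeds the pre-programmed threshold; reject
    # Histogram equalization to give segments similar dynamic range.
    return cv2.equalizeHist(y)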

“As used herein, a ‘category’ is a collection of objects that share type and function but may differ in appearance. The United States Capitol Building, the White House, and the Pentagon differ in appearance, yet all belong to the category ‘buildings’. Similarly, the Airbus 310, Lockheed L-1011, Boeing 727, and Boeing 777 differ in size and configuration, yet all belong to the category ‘airliners’.

“Conceptual Architecture”

“FIG. 1 is a diagram of an engine for generating accurate systems that classify objects of interest in broad-area geospatial imagery, according to an aspect. The engine can be broken into three parts: the object model creation sub-assembly 111; the machine learning element training and verification sub-assembly 112; and the use of the trained machine learning classifier elements to identify trained objects in unanalyzed geospatial images 113. The training image synthesis module 145 generates a plurality of training images labeled with the object to be searched, as determined by a campaign analyst (FIG. 9A, 900; FIG. 9B, 950). A 3-D model of the object to be searched is drawn from a cache (not shown); the 3-D model may be flattened into a 2-D representation from an overhead perspective and overlaid onto a geospatial image. The synthetic image may then be compared with a real image 150, 190. If the synthetic object is recognized as a faithful reproduction, multiple synthetic geospatial images may be generated with the object overlaid in different orientations; these images may include shadowing effects that account for localized light sources, sun angle, and time of year. The object of interest may be digitally delineated for object feature selection by the object model creation sub-assembly 111 and for machine learning classifier element training 112. The training images may be either synthetic, or unmodified geospatial images that have been manually annotated with the object delineated.

“During object model creation 111, the analyst may introduce multiple images in which the object to be searched is clearly delineated, so that the object feature extraction module encounters the object in a variety of orientations and under different environmental conditions. A plurality of training geospatial images containing a variety of objects that closely resemble the campaign object (confounding objects), clearly labeled as negative samples, may be released into the object model creation sub-assembly at the same time, to assist in selecting object features that are highly specific to the search object and in excluding irrelevant, visually distinct objects from optical model generation. Before use in the object feature extraction module 120, digital image correction and optimization 115 may be performed on the image segments, drawn from geospatial image segments 110, for both positive and negative feature selection. One digital correction that may be applied to image segments destined for the deep learning model training module is conversion from color to grayscale, which aids feature selection by reducing image segment complexity. A preferred embodiment of this system first converts the image from the RGB colorspace to YCrCb and then discards all but the Y channel data, producing a grayscale image with a tonal quality well suited to deep learning model training; the color-to-grayscale conversion described here is exemplary only and is not intended to limit the conversion methods that may be used. Histogram normalization, which often increases image contrast and reduces the effects of exposure differences by creating sets of image segments with similar dynamic range profiles, is another image correction that may be applied to pre-labeled geospatial images before their use in feature extraction 120. Histogram normalization filters include histogram equalization, adaptive histogram equalization, and contrast-limited adaptive histogram equalization. These image manipulation methods may produce segments better suited to object feature extraction 120 during object model creation 125, although the system described herein does not rely on histogram normalization, whether in generalized form or in any of the exemplary histogram manipulation methods. Image filters such as Gaussian filters, median filters, and bilateral filters that increase edge contrast may also be applied to pre-labeled geospatial image segments; this list of filters is exemplary and should not be interpreted to exclude the use of other filters. The object model creation sub-assembly 111 creates at least one model from the unique features of the search object, and that model may be used during the subsequent machine deep learning stage by way of the object model creation module 125.
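A brief Python sketch of these corrections with OpenCV, assuming 8-bit BGR input; the CLAHE clip limit, tile grid size, and bilateral filter parameters are illustrative assumptions:

import cv2

def prepare_segment(segment_bgr):
    """Grayscale conversion, CLAHE, and edge-preserving smoothing."""
    # RGB/BGR -> YCrCb, keeping only the Y (luma) channel, as described.
    y = cv2.cvtColor(segment_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    # Contrast-limited adaptive histogram equalization, one of the
    # histogram normalization filters named above.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    y = clahe.apply(y)
    # Bilateral filter: smooths regions while preserving edge contrast.
    return cv2.bilateralFilter(y, d=9, sigmaColor=75, sigmaSpace=75)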

“At least one of the models from the object model creation sub-assembly 111 may be used to train a model-mediated object classification module 165 within the machine learning classifier element training and verification sub-assembly 112. While the embodiment may select a model 130 based on pre-programmed parameters derived from past campaign successes, many other methods of model selection may also be used. This stage may employ both supervised and unsupervised learning. Images are retrieved from a training data store 110, and digital correction and optimization 115b may be performed on the image segments to be used by the model training module 135; as before, such correction reduces image segment complexity, which aids training. A preferred embodiment of the system converts the image from RGB to YCrCb and discards all but the Y channel data, producing a grayscale image with a tonal quality well suited to deep learning model training; this conversion is exemplary and is not intended to limit the conversion methods that may be used. Histogram normalization, which increases image contrast and reduces exposure differences by creating sets of image segments with similar dynamic range profiles, is another correction that may be applied to pre-labeled and unlabeled geospatial imagery before its use in the model training module 135; such filters include histogram equalization, adaptive histogram equalization, and contrast-limited adaptive histogram equalization. These histogram manipulation techniques may produce segments better suited to the supervised stage of deep learning by the machine learning classifiers 135 (each acting separately or together, as illustrated in FIG. 5), although the aspect described here does not rely on histogram normalization, either in generalized form or in the exemplary manipulation methods. Image filters such as Gaussian, median, and edge-enhancing bilateral filters may also be applied to pre-labeled geospatial image segments 115b, as is common in the art; again, listing these filters does not exclude the use of others in other embodiments. In the model training module 135, supervised learning may involve introducing images in which the object of interest is clearly delineated, although unlabeled images containing the object may also be used. Unsupervised learning, which may also occur in module 135, may involve introducing a large number of images containing the object, some labeled and some not, together with images containing irrelevant or confounding objects: objects visually similar to the object of interest in some way but not relevant to the campaign. If the campaign is to identify the presence of tanker trucks, for example, confounding objects could include flatbed trucks, box trucks, and rows of parked passenger cars, among other objects visually similar to tanker trucks. Unsupervised learning may occur without analyst intervention or monitoring, with review deferred until the session completes, in order to evaluate the effectiveness of object classification tuning and the endpoint reliability of the trained system. Depending on the complexity of the search object and the success of training, additional rounds of supervised and unsupervised training may be required. In some cases a different model created by the object model creation sub-assembly 111 may be selected 130 and training repeated until the campaign specifications are met. A separate round of object identification accuracy testing 140 may be conducted, depending on such factors as the design of the campaign, the analyst’s wishes, the complexity of the search object, and the expected background conditions.”
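The train-test-retrain loop just described might look roughly like the following scikit-learn sketch. The random feature vectors, the two candidate classifiers, and the 0.95 precision target are all illustrative assumptions standing in for the patent's machine learning elements and campaign specifications.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.random.rand(200, 64)           # stand-in feature vectors
y = np.random.randint(0, 2, 200)      # 1 = object of interest present
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

PRECISION_TARGET = 0.95               # assumed campaign specification
for candidate in (SVC(), RandomForestClassifier(random_state=0)):
    candidate.fit(X_tr, y_tr)         # train the current model
    score = precision_score(y_te, candidate.predict(X_te), zero_division=0)
    if score >= PRECISION_TARGET:
        break                         # classifier meets the campaign spec
    # otherwise loop: select a different model 130 and train again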

“The last sub-assembly is the use of the newly trained model-mediated classification module 165 in monitored production use 113. The trained machine learning classifier elements 165 identify the trained search object in unanalyzed geospatial images 155, which may have been optimized with the previously mentioned desaturation and histogram normalization filters, among others. The geospatial images may be drawn from multiple caches derived from different collection systems; they may therefore differ in resolution, and thus in scale, in addition to the previously mentioned chromatic and spectral differences 160. These production images may be normalized using a multi-scale sliding window filter (see FIG. 8) to equalize the resolution of the geospatial image region under analysis. Because resolution is an important controlled factor in object classification, the campaign analyst may also choose a sub-region of a geospatial image most likely to contain the search object and convert it to a resolution equal to that of the training images 160. When a geospatial image segment is found to contain the search object, the analyst may record a unique identifier, such as the longitude and latitude of the object, together with the number of objects present, although other identifiers are possible. Campaign analysts receive these object search data in a pre-designated format 170.
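A minimal sketch of this multi-scale normalization in Python with OpenCV: rescale imagery so its ground sample distance matches the training imagery, then step a fixed window across it. The GSD values, window size, and stride are illustrative assumptions.

import cv2

TRAINING_GSD_M = 0.5   # metres per pixel the classifier was trained at

def normalized_windows(image, image_gsd_m, window_px=256, stride_px=128):
    """Yield fixed-resolution windows from imagery of a different GSD."""
    # Scale so one pixel covers the same ground distance as in training.
    scale = image_gsd_m / TRAINING_GSD_M
    resized = cv2.resize(image, None, fx=scale, fy=scale)
    h, w = resized.shape[:2]
    for top in range(0, h - window_px + 1, stride_px):
        for left in range(0, w - window_px + 1, stride_px):
            yield (top, left), resized[top:top + window_px,
                                       left:left + window_px]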

“FIG. 2 is a diagram illustrating object model creation according to an aspect 200. Training images are used to identify the features useful in classifying an object of interest for a search and locate campaign 201. The aspect selects features using unique combinations of characteristics, including shape, texture, and color. Positive samples, in which the object appears in multiple orientations and under multiple lighting effects, are used to select features. System training may also require negative samples: geospatial images that include labeled objects which are not the search object. Such objects aid the construction and selection of features, which is especially important when the feature differences between the search object and excluded objects are minimal, since they provide both features that identify the search object and features that exclude irrelevant objects 202. The images are then normalized, as described previously for FIG. 1, for feature selection purposes 203. The object feature extraction module 120 of the embodiment uses visual features to predictably isolate the campaign search object from other possible objects in geospatial imagery, using at least one round each of supervised learning, with a variety of clearly delineated positive and negative samples, and unsupervised learning, with large numbers of unmodified and synthetic images 204. After training, one or more object classification models are created, comprising groupings and rules of visual relationships (features) found only on the search object in geospatial images of controlled resolution, together with groupings and rules of visual relationships (features) not found on the search object.
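One plausible, hedged reading of shape, texture, and color features in code is sketched below in Python with OpenCV: a histogram-of-oriented-gradients descriptor for shape and texture, concatenated with a coarse color histogram. The window size, bin counts, and the function name describe are assumptions for illustration, not the patent's feature set.

import cv2
import numpy as np

hog = cv2.HOGDescriptor()  # default 64x128 window; shape/texture features

def describe(chip_bgr):
    """Return a combined shape/texture/color feature vector for a chip."""
    chip = cv2.resize(chip_bgr, (64, 128))            # match HOG window
    gray = cv2.cvtColor(chip, cv2.COLOR_BGR2GRAY)
    shape_texture = hog.compute(gray).ravel()         # gradient features
    colour = cv2.calcHist([chip], [0, 1, 2], None, [8, 8, 8],
                          [0, 256, 0, 256, 0, 256]).ravel()  # color bins
    return np.concatenate([shape_texture, colour])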

“FIG. 3 is a process flow diagram of machine learning classifier training according to an aspect 300. The first steps are the retrieval 301 of an object model generated by the object model creation module 125 and the selection 302 of that model, which becomes the active model in the training aspect. A plurality of optimized training images containing the object of interest in multiple orientations, clearly labeled, are then introduced 303. These training images may be used to iteratively train one or more deep learning object classifier elements, which may employ machine learning paradigms such as support vector machines, convolutional neural networks, and random forest filters, among other machine learning methods known to those skilled in the art. A large number of images in which the search object is present but not labeled may then be used to further train the machine learning elements.

“Continuing in FIG. 3, the trained aspect is then selected 311 and tested using multiple training geospatial images that either contain the search object 312 or do not, possibly including confounding images to verify fine specificity 313. Classification specificity is then tested 314 to determine whether the campaign specifications 315 are met. If the trained model-mediated object classification module meets or exceeds the requirements, it can be used in the production campaign 316; if not, additional training can be performed with the same object model or with another.

“FIG. 4 is a process flow diagram of production use of the trained classifier according to an aspect 400. After being trained to the required level of specificity, the trained model-mediated classification module is pulled into operation 401 and used to search for the object in a number of unanalyzed geospatial image segments within the geographic region of the search campaign 402. These unanalyzed geospatial image segments may be programmatically optimized for object identification, which may include the use of a sliding window function that brings the image to the appropriate resolution for the object search 403. After the object search is complete 404, the data collected about the search objects, which may include latitude and longitude coordinates, are presented to the analyst in the format pre-designated by the campaign author 405.
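As a hedged sketch of the reporting step 405, the following Python snippet writes detections as GeoJSON; GeoJSON is one illustrative choice of pre-designated format, and the field names and file name are assumptions.

import json

def report_detections(detections, out_path="campaign_results.geojson"):
    """detections: iterable of (longitude, latitude, label) tuples."""
    features = [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"object": label},
    } for lon, lat, label in detections]
    with open(out_path, "w") as f:
        json.dump({"type": "FeatureCollection", "features": features}, f)

# Example: one detection, with hypothetical coordinates and label.
report_detections([(36.82, 1.29, "tanker truck")])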

“FIG. 5 shows the training of machine learning classifiers according to an aspect 500. Training begins with the retrieval of multiple training images 510, which may be real or synthesized (see FIG. 9A and FIG. 9B). These images may be labeled or unlabeled, and one subset of them may not contain the search object. Training takes place within the model training module (FIG. 1, 135). Training images may be routed to one or more of several machine learning elements 550a-550n, depending on the design and on the combination of machine learning protocols that works best for the search object. Individual machine learning elements may run protocols for any of the machine learning algorithms known to those skilled in the art, such as support vector machines, random forests, Bayesian networks, naive Bayes classifiers, and convolutional neural networks. A single machine learning element, such as machine learning element C 550c, may suffice to classify the search object; alternatively, multiple machine learning elements may be used together, such as machine learning elements B 550b and C 550c, to provide the most precise and reproducible identification. For example, machine learning element A 550a may run a support vector machine protocol whose results then pass to machine learning element B 550b running a random forest protocol. Any sequence of machine learning protocols may be used, depending on the requirements for reliable and reproducible classification.

“It is important to note that, for clarity of presentation, not all connections between machine learning elements are drawn. Search object classification data can flow directly between any of the machine learning elements 550a-550n in whatever order optimal search object classification requires, according to the aspect. And although only ensembles of two machine learning elements are briefly described, there is no limit on the number of elements that can participate in an ensemble, nor any design restriction preventing one machine learning element from participating in more than one ensemble if optimal search object classification so requires.
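One way such a two-element chain could be realized is sketched below with scikit-learn: a support vector machine's decision values are appended to the features seen by a random forest. The data, the reshape, and the particular stacking scheme are illustrative assumptions, not the patent's design.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X = np.random.rand(100, 32)            # stand-in feature vectors
y = np.random.randint(0, 2, 100)       # stand-in labels

svm = SVC().fit(X, y)                                 # element A
margins = svm.decision_function(X).reshape(-1, 1)     # element A's output
stacked = np.hstack([X, margins])                     # flows on to element B
forest = RandomForestClassifier().fit(stacked, y)     # element B refines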

“FIG. 6 is a process flow diagram showing how a machine learning classifier may be trained and verified for use in production geospatial image analysis according to an aspect 600. The training process begins with a series of optimized orthorectified geospatial images in which the search object is labeled, some of which may be synthetically generated (see FIG. 9A, FIG. 9B, and FIG. 1, 135), followed by a mixture of labeled and unlabeled optimized orthorectified geospatial images 601. Programming functions comprising multiple machine learning protocols, such as support vector machines, Bayesian networks, naive Bayes classifiers, and convolutional neural networks, may be used in the model training module to provide highly reliable and reproducible campaign search object classification. Ensembles may use multiple machine learning functions for search object training, and there are no aspect design limitations against reusing the same machine learning protocol should classification reliability or reproducibility require it. A plurality of unlabeled, optimized orthorectified geospatial images may then be used to test candidate model-mediated search object classification modules for reliability and reproducibility.

“FIG. 7 is a diagram showing the normalization and optimization of geospatial images prior to object identification according to an aspect 700. Before geospatial images are analyzed for the presence of a search object, they should be optimized and normalized. Images from image store 150 originating in the region of a search and locate campaign may be modified during the selection process 710. Geospatial images may be divided into segments according to the campaign requirements, the level of certainty about the location of the search object, and pre-planning by the campaign analysts. Color correction may be applied to the selected geospatial image, or to selected segments of it, to ensure optimal object classification within the study images 740. Image segments that are already properly scaled 720 may be passed directly to the model-mediated object classification module (FIG. 1, 165), while geospatial images whose resolution differs from that of the training images may first be processed by a scale-factoring, multi-scale sliding window resolution normalization filter 750 (see FIG. 8) before object classification analysis.

“FIG. 9A, 900, and FIG. 9B, 950, are process flow diagrams summarizing the generation of synthetic training geospatial images comprising the campaign object for geospatial analysis. This method follows a process similar to that of 200, but combines the deep learning models 135 and 550a-550n and the image analysis software 113 with the training image manipulation software module 145, according to an aspect. According to the embodiment, a search and locate function (FIG. 9A, 901) may be used to locate and identify an object or item of interest in orthorectified geospatial imagery; it does not matter whether the object is real or imagined, so long as it is tangible and occupies three-dimensional space. The method 900, 950 allows the search and locate action 901 to be performed on any pre-labeled, searchable object or group of objects, and the method 900, 951 may be used to generate a synthetic image for any object or item of interest according to the search function 901. In response to the query 901, searchable objects are found in a database 903 of 3-dimensional model objects. The training image manipulation software module 145 searches the 3-dimensional model database 903 and creates a 2-dimensional flattened synthetic image 904 from the selected 3-dimensional model. This 2-dimensional synthetic image 904 may then be compared 905 against real geospatial imagery, and the training image manipulation software module 145 may adjust, orient, or align the flattened 2-dimensional image to reproduce 906 the real orthorectified geospatial image. After this initial replication 906, the training image manipulation software module 145 separates the synthetic layer from the real image layer 907; FIG. 9B, 951 through 953, may be used for initial tuning of the synthetic image against different backgrounds. Once the synthetic image layer 907 has been separated, module 145 overlays the synthetic image once more on the real geospatial image 954 and adjusts the initial image to account for different geographic locations, environmental factors, sun angle, time of year, and shadowing by localized light sources 955. The synthetic image is finalized by applying post-overlay image filters 956, including color correction 957 and resolution correction, blurring, or pixelating 958. It is important to demarcate the synthetic footprint 959, because it may contain not only the synthetic image but also any shadowing associated with the synthetic object. The synthetic image 959 is overlaid onto an existing background image, and a masking function 960 renders the remainder of the overlay transparent so that no existing imagery is obscured. The software module 145 then runs a check 961 to determine whether the synthetic image matches the real image; if so, a labeled synthetic image 962 is created and placed into the labeled training corpus 110. Otherwise, further adjustments are made to the synthetic object model (FIG. 9A, 905).”
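A minimal Python sketch of the overlay-and-mask steps, using the Pillow imaging library; the file names, paste position, rotation angle, and blur radius are placeholders, and the bounding-box labeling is only noted in a comment, so this is an assumption-laden illustration rather than the patent's pipeline.

from PIL import Image, ImageFilter

background = Image.open("real_geospatial_tile.png").convert("RGB")
object_2d = Image.open("flattened_object.png").convert("RGBA")  # pre-rendered

rotated = object_2d.rotate(37, expand=True)   # one of many orientations
mask = rotated.getchannel("A")                # alpha mask keeps background
background.paste(rotated, (120, 80), mask)    # overlay without obscuring
synthetic = background.filter(ImageFilter.GaussianBlur(radius=1))  # sensor look
synthetic.save("labeled_synthetic_tile.png")  # goes into the training corpus
# The bounding box of the pasted footprint, including any rendered shadow,
# would be recorded as the label for this synthetic image.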

“Hardware Architecture”

The techniques described herein may be implemented in hardware or in a combination of software and hardware. They may be implemented, for example, in an operating system kernel or in a separate user process.

Software/hardware hybrid implementations may be made on a programmable network-resident machine, which may be selectively activated or reconfigured by a computer program stored in memory. Such network devices may have multiple network interfaces that can be configured or designed to use different types of network communication protocols. A general description of some of these machines may appear in this document in order to illustrate one or more exemplary ways in which a given unit of functionality may be implemented. In specific aspects, at least some of the features or functionalities may be implemented on one or more general-purpose computers connected to one or more networks, such as an end-user computer system, a client computer, a server or other server system, a mobile computing device (e.g., a tablet computing device, mobile phone, or laptop), a consumer electronic device, a music player, or any other suitable electronic device, router, switch, or combination thereof. At least some features or functionalities may also be implemented in one or more virtualized computing environments.

“Referring now to FIG. 10, there is shown a block diagram depicting an exemplary computing device 10 suitable for implementing at least a portion of the features or functionalities described herein. Computing device 10 may be any of the computers listed in the preceding paragraph, or indeed any other electronic device capable of executing software- or hardware-based instructions according to one or more programs stored in memory. Computing device 10 may be configured to communicate with other computing devices over communications networks such as a wide area network, a metropolitan area network, a local area network, a wireless network, or the Internet, using known protocols for such communication, whether wireless or wired.

Computing device 10 may include one or more central processing units (CPU) 12, one or more interfaces 15, and one or more buses 14 (such as a peripheral component interconnect (PCI) bus). Under the control of appropriate software or firmware, CPU 12 may be responsible for implementing specific functions associated with a specifically configured computing device or machine. For example, a computing device 10 may be designed or configured to function as a server system, utilizing CPU 12, local memory 11 and/or remote memory 16, and interface(s) 15. In at least one aspect, CPU 12 may be caused to perform one or more of the different types of functions or operations under the control of software modules or components, which may include, for example, an operating system and any appropriate application software, drivers, and the like.

“CPU 12 may include one or more processors 13, such as a processor from the Intel, ARM, or Qualcomm families of microprocessors. In some aspects, processors 13 may include specially designed hardware such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and electrically erasable programmable read-only memories (EEPROMs) for controlling the operations of the computing device. A local memory 11, such as non-volatile random access memory (RAM) and/or read-only memory (ROM), may also form part of CPU 12; there are many ways in which memory may be coupled to system 10. Memory 11 may be used for a variety of purposes, such as caching and/or storing data, programming instructions, and the like. It should also be appreciated that CPU 12 may be one of a variety of system-on-a-chip (SOC) devices, which may include additional hardware such as memory or graphics processing chips, such as a QUALCOMM SNAPDRAGON or SAMSUNG EXYNOS CPU; such CPUs are increasingly common in the art of computing, whether in mobile devices or in integrated devices.

“As used herein, the term ‘processor’ is not limited to those integrated circuits referred to in the art as processors, mobile processors, or microprocessors, but broadly refers to microcontrollers, microcomputers, programmable logic controllers, application-specific integrated circuits, and any other programmable circuits.

In one aspect, interfaces 15 are provided as network interface cards (NICs). Generally, NICs control the sending and receiving of data packets over a computer network; other types of interfaces 15 may, however, support other peripherals used with computing device 10. Among the interfaces that may be provided are frame relay interfaces, cable interfaces, token ring interfaces, and graphics interfaces. Many interface types are available, including universal serial bus (USB), serial, Ethernet, FIREWIRE, THUNDERBOLT, and PCI. Such interfaces 15 may include physical ports appropriate for communication with the appropriate media; in some cases they may also include an independent processor (such as a dedicated audio or video processor, as is common in the art for high-fidelity A/V hardware interfaces) and volatile and/or non-volatile memory (e.g., RAM).

“Although the system shown in FIG. 10 illustrates one specific architecture for a computing device 10, it is by no means the only architecture on which at least a portion of the features and techniques described herein may be implemented. For example, architectures having one or any number of processors 13 may be used, and such processors 13 may be present in a single device or distributed among any number of devices. In one aspect, a single processor 13 handles communications as well as routing computations; in other aspects, a separate dedicated communications processor may be provided. In various aspects, different types of features or functionalities may be implemented in a system that includes a client device (such as a smartphone or tablet running client software) and a server system (such as the server system described in more detail below).

“Regardless of network device configuration, the system of an aspect may employ one or more memories or memory modules (such as, for example, remote memory block 16 and local memory 11) configured to store data, program instructions for general-purpose network operations, or other information relating to the functionality described herein (or any combination thereof). Program instructions may, for example, control the execution of, or comprise, an operating system and/or one or more applications. Memory 16 or memories 11, 16 may also be configured to store data structures, configuration data, encryption data, historical system operations information, or any other specific or generic non-program information described herein.

“Because such information and program instructions may be employed to implement one or more of the systems or methods described herein, at least some network device aspects may include non-transitory machine-readable storage media, which may, for example, be configured or designed to store program instructions, state information, and the like for performing the various operations described herein. Examples of such non-transitory machine-readable storage media include, but are not limited to, magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media; and hardware devices specially configured to store and execute program instructions, such as read-only memory devices, flash memory, solid state drives (SSD), and ‘hybrid SSD’ storage drives that combine the physical components of solid state and hard disk drives in a single device, which are becoming increasingly common in the art. Such storage devices may be integral and non-removable (for example, RAM hardware modules soldered onto a motherboard or otherwise integrated into an electronic device), or they may be removable, such as swappable flash memory modules (e.g., ‘thumb drives’), ‘hot-swappable’ hard disk or solid state drives, or other removable media designed to be rapidly exchanged between physical storage devices. Examples of program instructions include object code, such as is produced by a compiler; machine code, such as is produced by an assembler or a linker; byte code, such as is generated by a JAVA compiler and executed using a Java virtual machine or equivalent; and files containing higher-level code that can be executed by a computer using an interpreter (for example, scripts written in Python, Ruby, Groovy, or other scripting languages).

“In some aspects, systems may be implemented on a standalone computing system. Referring now to FIG. 11, there is shown a block diagram depicting an exemplary architecture for one or more components of a standalone computing device. Computing device 20 includes processors 21 that may run software carrying out one or more functions or applications of aspects, such as, for example, a client application 24. Processors 21 may execute computing instructions under the control of an operating system 22, such as, for example, a version of the MICROSOFT WINDOWS operating system, the APPLE OSX or iOS operating systems, some variety of the Linux operating system, or the like. In many cases, one or more shared services 23 may be operable in system 20 and may be useful for providing common services to client applications 24; services 23 may, for example, be WINDOWS services, user-space common services in a Linux environment, or any other type of common service architecture used with operating system 22. Input devices 28 may be of any type suitable for receiving user input, including, for example, a keyboard, touchscreen, microphone (for voice input), mouse, or touchpad. Output devices 27 may be of any type suitable for providing output to one or more users, and may include, for example, one or more screens, speakers, printers, or any combination thereof. Random-access memory 25 may be of any structure and architecture known in the art, for use by processors 21, for example to run software. Storage devices 26 may be any magnetic, optical, or mechanical device for digital data storage (such as those shown in FIG. 10); examples of storage devices 26 include flash memory, magnetic hard drives, CD-ROM, and the like.

“Some aspects of systems may be implemented on a distributed computing network, such as one having any number of clients and/or servers. Referring now to FIG. 12, there is shown a block diagram depicting an exemplary architecture 30 for implementing at least one aspect of a system on a distributed computing network. According to the aspect, any number of clients 33 may be provided; each client 33 may run software for implementing the client-side portions of a system, and clients may comprise a system 20 such as that illustrated in FIG. 11. In addition, any number of servers 32 may be provided for handling requests received from one or more clients 33. Clients 33 and servers 32 may communicate with one another via one or more electronic networks 31, which may be, in various aspects, the Internet, a wide area network, a mobile telephony network (such as CDMA or GSM cellular networks), a wireless network (such as WiFi, WiMAX, or LTE), or a local area network (or indeed any other network topology; the aspect does not favor any particular network topology over another). Networks 31 may be implemented using any known network protocols.

“In some cases, servers 32 may call external services 37 to obtain additional information or to refer to additional data concerning a particular call. Communications with external services 37 may take place, for example, via one or more networks 31. In various aspects, external services 37 may comprise web-enabled services or functionality related to or installed on a hardware device itself. For example, in an aspect where client applications 24 are implemented on a smartphone, tablet, or other electronic device, information may be stored on a server system 32 in the cloud or on an external service 37 deployed on the premises of one or more enterprises or users.

“In certain aspects, clients 33 or servers 32 (or both) may make use of one or more specialized services or appliances that may be deployed locally or remotely across one or more networks 31. For example, one or more aspects may make use of one or more databases 34. It should be understood by one having ordinary skill in the art that databases 34 may be arranged in a wide variety of architectures and may use a wide variety of data access and manipulation means. For example, in various aspects one or more databases 34 may comprise a relational database system using a structured query language (SQL), while others may comprise an alternative data storage technology, such as those referred to in the art as ‘NoSQL’ (for example, HADOOP CASSANDRA, GOOGLE BIGTABLE, and so forth). Depending on the aspect, variant database architectures such as in-memory, column-oriented, clustered, distributed, or flat-file data repositories may also be used. One having ordinary skill in the art will appreciate that many database technologies are available, and it is not necessary to specify a particular database technology or arrangement of components for any aspect; the term ‘database’ may refer to any combination of known or future database technologies. The term ‘database’ may also refer to a physical machine, to a group of machines acting as a single database system, or simply to a logical database within an overall database management system. Unless a specific meaning is given for a particular use of the term ‘database’, it should be construed to mean any of these senses of the word, all of which are understood as the plain meaning of the term ‘database’ by those having ordinary skill in the art.

“Likewise, certain aspects may make use of one or more security systems 36 and configuration systems 35. Security and configuration management are common information technology (IT) and web functions, and some amount of each is generally associated with any IT or web system. It should be understood by one having ordinary skill in the art that any configuration or security subsystem known now or in the future may be used in conjunction with aspects, except where a specific security system 36 or configuration system 35 or approach is required by the description of a particular aspect.

“FIG. 13 shows an exemplary overview of a computer system 40 as may be used at any of the various locations throughout the system. It is exemplary of any computer capable of executing code to process data; various modifications and changes may be made to computer system 40 without departing from the broader scope of the system and method disclosed herein. The central processor unit (CPU) 41 is connected to bus 42, to which are also connected memory 43, nonvolatile memory 44, display 47, input/output (I/O) unit 48, and network interface card (NIC) 53. I/O unit 48 may typically be connected to keyboard 49, pointing device 50, hard disk 52, and real-time clock 51. NIC 53 connects to network 54, which may be the Internet or a local network; a local network may in turn have connections to the Internet. Also shown as part of system 40 is power supply unit 45, connected in this example to a main alternating current (AC) supply 46. Batteries that may be present are not shown, nor are the many other well-known devices and modifications that do not apply to the unique functions of the system and method disclosed herein. It should be appreciated that some or all of the components illustrated may be combined, such as in various integrated applications like Qualcomm or Samsung system-on-a-chip (SOC) devices, or whenever it is appropriate to combine multiple capabilities or functions into a single hardware device.

“In various aspects, the functionality implementing the systems or methods may be distributed among any number of client and/or server components. For example, various software modules may be implemented to perform various functions within a system, and such modules may be variously implemented to run on server and/or client components.

“The skilled person will recognize a wide variety of modifications that may be made to the various aspects described above. Accordingly, the invention is defined by the claims and their equivalents.
