Predictive analytics

Predictive analytics encompasses a range of statistical techniques from predictive modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future or otherwise unknown events. [1] [2]

In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision-making for candidate transactions. [3]

The defining functional effect of these technical approaches is that predictive analytics provides a predictive score (probability) for each individual (customer, employee, healthcare patient, product SKU, vehicle, component, machine, or other organizational unit) in order to determine, inform, or influence organizational processes that pertain across large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, healthcare, and government operations including law enforcement.

Predictive analytics is used in actuarial science, [4] marketing, [5] financial services, [6] insurance, telecommunications, [7] retail, [8] travel, [9] mobility, [10] healthcare, [11] child protection, [12] [13] pharmaceuticals, [14] capacity planning, [citation needed] and other fields.

One of the best-known applications is credit scoring, [1] which is used throughout financial services. Scoring models process a customer’s credit history, loan application, customer data, etc., in order to rank-order individuals by their likelihood of making future credit payments on time.

Definition

Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown, whether it be in the past, present or future. For example, identifying suspects after a crime has been committed, or credit card fraud as it occurs. [15] The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions.

Predictive analytics is often defined as predicting at a more detailed level of granularity, i.e., generating a predictive score for each individual organizational element. This distinguishes it from forecasting. For example, “Predictive analytics: technology that learns from experience (data) to predict the future behavior of individuals in order to drive better decisions.” [16] In future industrial systems, the value of predictive analytics is expected to lie in predicting and preventing potential issues rather than merely scoring them, thereby supporting decision-making. [citation needed] Furthermore, the data can be used for closed-loop product life cycle improvement, [17] which is the vision of the Industrial Internet Consortium.

Predictive Analytics Process

  1. Define Project: Define the project outcomes, deliverables, scope of the effort and business objectives, and identify the data sets that are going to be used.
  2. Data Collection: Data mining for predictive analytics prepares data from multiple sources for analysis. This provides a complete view of customer interactions.
  3. Data Analysis: Data analysis is the process of inspecting, cleaning, transforming and modeling data with the objective of discovering useful information and arriving at conclusions.
  4. Statistical Analysis: Statistical analysis enables validating assumptions and hypotheses and testing them using standard statistical models.
  5. Modeling: Predictive modeling provides the ability to automatically create accurate predictive models about the future. There are also options to choose the best solution with multi-model evaluation.
  6. Deployment: Predictive model deployment provides the option to deploy the analytical results into everyday decision-making processes.
  7. Model Monitoring: Models are managed and monitored to review their performance and ensure they provide the expected results. A minimal code sketch of these steps follows the list.
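Illustrating steps 2 through 7, the following minimal sketch assumes Python with scikit-learn and a synthetic data set standing in for data collected from multiple sources; the model choice and the AUC monitoring metric are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of steps 2-7 using scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Data collection: stand-in for data gathered from multiple sources.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Data / statistical analysis: hold out data to validate the model later.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Modeling: fit a candidate predictive model.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deployment: score new cases to support decision making.
scores = model.predict_proba(X_test)[:, 1]

# Model monitoring: track a performance metric over time.
print("AUC:", roc_auc_score(y_test, scores))
```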

Types

Generally, the term predictive analytics is used to mean predictive modeling, “scoring” data with predictive models, and forecasting. However, people are increasingly using the term to refer to related analytical disciplines, such as descriptive modeling and decision modeling or optimization. These disciplines also involve rigorous data analysis, and are widely used in business for segmentation and decision making, but have different purposes and the statistical techniques underlying them vary.

Predictive models

Predictive models are models of the relationship between the specific performance of a unit in a sample and one or more known attributes or features of the unit. The objective of the model is to assess the likelihood that a similar unit in a different sample will exhibit the specific performance. This category encompasses models in many areas, such as marketing, where they seek out subtle data patterns to answer questions about customer performance, or fraud detection models. Predictive models often perform calculations during live transactions, for example, to evaluate the risk of a given customer or transaction, in order to guide a decision. With advancements in computing speed, individual agent modeling systems have become capable of simulating human behavior or reactions to given stimuli or scenarios.

The available sample units with known attributes and known performances are referred to as the “training sample”. The units in other samples, with known attributes but unknown performances, are referred to as “out of [training] sample” units. The out of sample units do not necessarily bear a chronological relationship to the training sample units. For example, the training sample may consist of literary attributes of writings by Victorian authors, with known attribution, and the out-of-sample unit may be newly found writing with unknown authorship; a predictive model may aid in attributing the work to a known author. Another example is given by the analysis of blood splatter in simulated crime scenes, in which the out-of-sample unit is the actual blood splatter pattern from a crime scene.

Descriptive models

Descriptive models quantify relationships in data in a way that is often used to classify customers or prospects into groups. Unlike predictive models that focus on predicting a single customer behavior (such as credit risk), descriptive models identify many different relationships between customers or products. Descriptive models do not rank-order customers by their likelihood of taking a particular action the way predictive models do. Instead, descriptive models can be used, for example, to categorize customers by their product preferences and life stage. Descriptive modeling tools can be utilized to develop further models that can simulate large numbers of individualized agents and make predictions.

Decision models

Main article: Decision model

Decision models describe the relationship between all the elements of a decision: the known data (including results of predictive models), the decision, and the forecast results of the decision, in order to predict the results of decisions involving many variables. These models can be used in optimization, maximizing certain outcomes while minimizing others. Decision models are used to develop decision logic or a set of business rules that will produce the desired action for each customer or circumstance.

Applications

Although predictive analytics can be put to use in many applications, the following examples outline areas where it has shown positive impact in recent years.

Analytical Customer Relationship Management (CRM)

Analytical customer relationship management (CRM) is a frequent commercial application of predictive analysis. Methods of predictive analysis are applied to CRM objectives, which involve constructing a holistic view of the customer no matter where their information resides in the company or the department involved. CRM uses predictive analysis in applications for marketing campaigns, sales, and customer services, to name a few. These tools are required in order for a company to posture and focus their efforts effectively across the breadth of their customer base. They must analyze and understand the products in demand or that have the potential for high demand, predict customers’ buying habits in order to promote relevant products at multiple touch points, and proactively identify and mitigate issues that have the potential to lose customers or reduce their ability to gain new ones. Analytical customer relationship management can be applied throughout the customer lifecycle (acquisition, relationship growth, retention, and win-back). Several of the application areas below (direct marketing, cross-sell, customer retention) are part of customer relationship management.

Child protection

Over the last 5 years, some child welfare agencies have started using predictive analytics to flag high risk cases. [18] The approach has been called “innovative” by the Commission to Eliminate Child Abuse and Neglect Fatalities (CECANF), [19] and in Hillsborough County, Florida, where the lead child welfare agency uses a predictive modeling tool, there have been no abuse-related child deaths in the target population as of this writing. [20]

Clinical decision support systems

Experts use predictive analysis in health care primarily to determine which patients are at risk of developing certain conditions, such as diabetes, asthma, heart disease, and other chronic illnesses. In addition, clinical decision support systems incorporate predictive analytics to support medical decision making at the point of care. A working definition has been proposed by Jerome A. Osheroff and colleagues: [21] Clinical decision support (CDS) provides clinicians, staff, patients, or other individuals with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care. It encompasses a variety of tools and interventions such as computerized alerts and reminders, clinical guidelines, order sets, patient data reports and dashboards, documentation templates, diagnostic support, and clinical workflow tools.

A 2016 study of neurodegenerative disorders provides a powerful example of a CDS platform to diagnose, track, predict and monitor the progression of Parkinson’s disease. [22] Using large and multi-source imaging, genetics, clinical and demographic data, these investigators developed a decision support system that can predict the state of the disease with high accuracy, consistency and precision. They employed classical model-based and machine learning model-free methods to discriminate between different patient and control groups. Similar approaches may be used for other neurodegenerative disorders such as Alzheimer’s, Huntington’s, and amyotrophic lateral sclerosis, as well as for other clinical and biomedical applications where Big Data is available.

Collection analytics

Many portfolios have a set of delinquent customers who do not make their payments on time. The financial institution has to undertake collection activities on these customers to recover the amounts due. A lot of collection resources are wasted on customers who are difficult or impossible to recover. Predictive analytics can help optimize the allocation of collection resources by identifying the most effective collection agencies, contact strategies, legal actions and other strategies for each customer, thus significantly increasing recovery while reducing collection costs.

Cross-sell

Often corporate organizations collect and maintain abundant data (e.g., customer records, sale transactions), and predictive analytics can analyze customers’ spending, usage and other behavior, leading to efficient cross-selling, or selling additional products to current customers. [2] This directly leads to higher profitability per customer and stronger customer relationships.

Customer retention

With the number of competing services available, businesses need to focus on customer satisfaction, rewarding consumer loyalty and minimizing customer attrition. In addition, small increases in customer retention have been shown to increase profits disproportionately. One study concluded that a 5% increase in customer retention rates will increase profits by 25% to 95%. [23] Businesses tend to respond to customer attrition on a reactive basis, acting only after the customer has initiated the process to terminate service. At this stage, the chance of changing the customer’s decision is almost zero. Proper application of predictive analytics can lead to a more proactive retention strategy. By frequently examining a customer’s past service usage, service performance, spending and other behavior patterns, predictive models can determine the likelihood of a customer terminating service sometime soon. [7] An intervention with lucrative offers can increase the chance of retaining the customer. Silent attrition, the behavior of a customer to slowly but steadily reduce usage, is another problem that many companies face. Predictive analytics can also predict this behavior, so that the company can take proper actions to increase customer activity.

Direct marketing

When marketing consumer products and services, there is the challenge of keeping up with competing products and consumer behavior. Apart from identifying leads, the most effective combination of products, marketing materials and communication channels should be used. The goal of predictive analytics is typically to lower the cost per order or cost per action.

Fraud detection

Fraud is a big problem for many businesses and can be of various types: inaccurate credit applications, fraudulent transactions (both offline and online), identity thefts and false insurance claims. These problems plague firms of all sizes in many industries. Some examples of likely victims are credit card issuers, insurance companies, [24] retail merchants, manufacturers, business-to-business suppliers and even service providers. A predictive model can help weed out the “bads” and reduce a business’s exposure to fraud.

Predictive modeling can also be used to identify high-risk fraud candidates in business or the public sector. Mark Nigrini developed a risk-scoring method to identify audit targets. He describes the use of this approach to detect fraud in the franchise sales reports of an international fast-food chain. Each location is scored using 10 predictors. The 10 scores are then weighted to give a final overall risk score for each location. The same scoring approach was also used to identify high-risk check kiting accounts, potentially fraudulent agents, and questionable vendors. A reasonably complex model was used to identify fraudulent monthly reports submitted by divisional controllers. [25]

The Internal Revenue Service (IRS) of the United States also uses predictive analytics to mine tax returns and identify tax fraud. [24]

Recent [when?] advancements in technology have also introduced predictive behavior analysis for web fraud detection. This type of solution uses heuristics in order to study normal web user behavior and detect anomalies.

Portfolio, product or economy-level prediction

Often the focus of the analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in inventory-level forecasting for inventory management purposes. The Federal Reserve Board may be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see below). They can also be addressed via machine learning approaches, where the learning algorithm finds patterns that have predictive power. [26] [27]

Project risk management

Main article: Project risk management

When employing risk management techniques, the results are always to predict and benefit from a future scenario. The capital asset pricing model (CAP-M) “predicts” the best portfolio to maximize return. Probabilistic risk assessment (PRA), when combined with mini-Delphi techniques and statistical approaches, yields accurate forecasts. These are examples of approaches that can extend from project to market, and from near to long term. Underwriting (see below) and other business approaches identify risk management as a predictive method.

Underwriting

Many businesses have to account for risk exposure due to their different services and determine the costs needed to cover the risk; for example, auto insurance providers must set premiums to cover each automobile and driver. A financial company needs to assess a borrower’s potential and ability to pay before granting a loan. For a health insurance provider, predictive analytics can analyze a few years of past medical claims data, as well as lab, pharmacy and other records where available, to predict future costs. Predictive analytics can help underwrite these quantities by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer. [4] Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market, where lending decisions are now made in a matter of hours rather than days or even weeks. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default.

Technology and big data influences

Big data is a collection of data sets that are so large and complex that they become awkward to work with using traditional database management tools. The volume, variety and velocity of big data have introduced challenges across the board for capture, storage, search, sharing, analysis, and visualization. Examples of big data sources include web logs, RFID, sensor data, social networks, Internet search indexing, call detail records, military surveillance, and complex data in astronomical, biogeochemical, genomics, and atmospheric sciences. Big Data is the core of most predictive analytics services offered by IT organizations. [28] Thanks to progress in hardware (faster CPUs, cheaper memory, and MPP architectures) and new technologies such as Hadoop, MapReduce, and in-database and text analytics for data processing, it is now possible to collect, analyze, and mine massive amounts of structured and unstructured data for new insights. [24] It is also possible to run predictive algorithms on streaming data. [29] Today, exploring big data is within reach of more organizations than ever before, and new methods capable of handling such data sets are being proposed. [30] [31]

Analytical techniques

The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.

Regression techniques

Regression models are the mainstay of predictive analytics. The focus lies on establishing a mathematical equation as a model to represent the interactions between the different variables in consideration. Depending on the situation, there is a wide variety of models that can be applied while performing predictive analytics. Some of them are briefly discussed below.

Linear regression model

The linear regression model analyzes the relationship between the response or dependent variable and a set of independent or predictor variables. This relationship is expressed as an equation that predicts the response variable as a linear function of the parameters. These parameters are adjusted so that a measure of fit is optimized. Much of the effort in model fitting is focused on minimizing the size of the residual, as well as ensuring that it is randomly distributed with respect to the model predictions.

The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals. This is referred to as ordinary least squares (OLS) estimation and results in best linear unbiased estimates (BLUE) of the parameters if and only if the Gauss-Markov assumptions are satisfied.

Once the model has been estimated, we would be interested to know whether the predictor variables belong in the model, i.e. whether the estimate of each variable’s contribution is reliable. To do this we can check the statistical significance of the model’s coefficients, which can be measured using the t-statistic. This amounts to testing whether the coefficient is significantly different from zero. How well the model predicts the dependent variable based on the values of the independent variables can be assessed by using the R² statistic. It measures the proportion of the total variation in the dependent variable that is “explained” (accounted for) by variation in the independent variables.
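As a concrete illustration, the following sketch assumes Python with the statsmodels library and a made-up synthetic data set; it fits an OLS model and reports the coefficient estimates, their t-statistics, and the R² statistic described above.

```python
# Illustrative OLS fit on synthetic data; statsmodels reports the
# t-statistic for each coefficient and the R-squared discussed above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                              # two independent variables
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(size=200)   # known relationship plus noise

X = sm.add_constant(X)                 # include an intercept term
results = sm.OLS(y, X).fit()           # ordinary least squares estimation

print(results.params)                  # estimated coefficients
print(results.tvalues)                 # t-statistics: is each coefficient != 0?
print(results.rsquared)                # proportion of variation "explained"
```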

Discrete choice models

Multiple regression (above) is used when the response variable is continuous and has an unbounded range. Often the response variable may not be continuous but rather discrete. While it is mathematically feasible to apply multiple regression to discrete dependent variables, some of the assumptions behind the theory of multiple linear regression no longer hold, and other techniques are more appropriate for this type of analysis. If the dependent variable is discrete, some of those methods are logistic regression, multinomial logit and probit models. Logistic regression and probit models are used when the dependent variable is binary.

Logistic regression

Main article: Logistic regression

In a classification setting, assigning outcome probabilities to observations can be achieved through the use of a logistic model, which is basically a method that transforms information about the binary dependent variable into an unbounded continuous variable and estimates a regular multivariate model (see Allison’s Logistic Regression for more information on the theory of logistic regression).

The Wald and likelihood-ratio tests are used to test the statistical significance of each coefficient b in the model (analogous to the t-tests used in OLS regression; see above). A test assessing the goodness-of-fit of a classification model is the “percentage correctly predicted”.
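A minimal sketch, assuming Python with statsmodels and simulated binary data, showing the Wald statistics, a likelihood-ratio test of the model, and the percentage correctly predicted:

```python
# Logistic regression on simulated data; the coefficients used to
# generate the data are arbitrary illustrations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(500, 2)))
p = 1 / (1 + np.exp(-(X @ np.array([0.3, 1.2, -0.8]))))   # true probabilities
y = rng.binomial(1, p)                                     # binary outcomes

res = sm.Logit(y, X).fit(disp=0)
print(res.tvalues)        # Wald z-statistics for each coefficient
print(res.llr_pvalue)     # likelihood-ratio test of the full model
print("Percent correctly predicted:",
      np.mean((res.predict(X) > 0.5) == y))
```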

Multinomial logistic regression

An extension of the binary logit model to cases where the dependent variable has more than 2 categories is the multinomial logit model. In such cases, collapsing the data into two categories might not make good sense or may lead to loss in the richness of the data. The multinomial logit model is the appropriate technique in these cases, especially when the dependent variable categories are not ordered (for example colors like red, blue, green). Some authors have extended multinomial regression to include feature selection/importance methods such as random multinomial logit.

Probit regression

Probit models offer an alternative to logistic regression for modeling categorical dependent variables. Even though the outcomes tend to be similar, the underlying distributions are different. Probit models are popular in social sciences like economics.

A good way to understand the key difference between probit and logit models is to assume that the dependent variable is driven by a latent variable z, which is a sum of a linear combination of explanatory variables and a random noise term.

We do not observe z but instead observe y, which takes the value 0 (when z < 0) or 1 (otherwise). In the logit model we assume that the random noise term follows a logistic distribution with mean zero. In the probit model we assume that it follows a normal distribution with mean zero. Note that in social sciences (e.g. economics), probit is often used to model situations where the observed variable y is continuous and takes values between 0 and 1.

Logit versus probit

The probit model has been around longer than the logit model. They behave similarly, except that the logistic distribution tends to be slightly flatter tailed. One of the reasons the logit model was formulated was that the probit model was computationally difficult due to the requirement of numerically calculating integrals. Modern computing however has made this computation fairly simple. The coefficients obtained from the logit and probit models are fairly close. However, the odds ratio is easier to interpret in the logit model.

Practical reasons for choosing the probit model over the logistic model would be:

  • There is a strong belief that the underlying distribution is normal
  • The actual event is not a binary outcome (e.g., bankruptcy status) but a proportion (e.g., proportion of population at different debt levels).
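The similarity of the two models can be checked empirically. The following sketch (assuming Python with statsmodels and simulated data) fits logit and probit models to the same data, showing that the coefficients differ in scale while the fitted probabilities are nearly identical:

```python
# Logit vs. probit on the same simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(1000, 1)))
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([0.2, 1.0])))))

logit = sm.Logit(y, X).fit(disp=0)
probit = sm.Probit(y, X).fit(disp=0)

print(logit.params, probit.params)   # logit coefficients are larger in scale
print(np.max(np.abs(logit.predict(X) - probit.predict(X))))  # near zero
```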

Time series models

Time series models are used for predicting or forecasting the future behavior of variables. These models account for the fact that data points taken over time may have an internal structure (such as autocorrelation, trend or seasonal variation) that should be accounted for. As a result, standard regression techniques cannot be applied directly, and dedicated methods have been developed to decompose the trend, seasonal and cyclical components of the series. Modeling the dynamic path of a variable can improve forecasts, since the predictable component of the series can be projected into the future.

Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive (AR) models and moving-average (MA) models. The Box-Jenkins methodology (1976) developed by George Box and G. M. Jenkins combines the AR and MA models to produce the ARMA (autoregressive moving average) model, which is the cornerstone of stationary time series analysis. ARIMA (autoregressive integrated moving average) models, on the other hand, are used to describe non-stationary time series. Box and Jenkins suggest differencing a non-stationary time series to obtain a stationary series to which an ARMA model can be applied. Non-stationary time series have a pronounced trend and do not have a constant long-run mean or variance.

Box and Jenkins proposed a three-stage methodology of model identification, estimation and validation. The identification stage involves examining a number of topics, including the autocorrelation and partial autocorrelation functions. In the estimation stage, models are estimated using non-linear time series or maximum likelihood estimation procedures. Finally, the validation stage involves diagnostic checking, such as plotting the residuals, to detect outliers and evidence of model fit.
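A minimal Box-Jenkins-style sketch, assuming Python with statsmodels and a simulated AR(1) series, walking through the identification, estimation and validation stages:

```python
# Box-Jenkins cycle on a synthetic AR(1) series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(3)
y = np.zeros(300)
for t in range(1, 300):                    # AR(1): y_t = 0.7 * y_{t-1} + e_t
    y[t] = 0.7 * y[t - 1] + rng.normal()

print(acf(y, nlags=5), pacf(y, nlags=5))   # identification: ACF / PACF

model = ARIMA(y, order=(1, 0, 0)).fit()    # estimation: maximum likelihood
print(model.params)

resid = model.resid                        # validation: residual diagnostics
print(np.mean(resid), np.std(resid))       # should resemble white noise

print(model.forecast(steps=5))             # project the series into the future
```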

In recent years, time series models have become more sophisticated and attempt to model conditional heteroskedasticity, with models such as ARCH (autoregressive conditional heteroskedasticity) and GARCH (generalized autoregressive conditional heteroskedasticity) frequently used for financial time series. In addition, time series models are also used to understand inter-relationships among economic variables represented by systems of equations using VAR (vector autoregression) and structural VAR models.

Survival or duration analysis

Survival analysis is another name for time-to-event analysis. These techniques were primarily developed in the medical and biological sciences, but they are also widely used in social sciences and economics, as well as in engineering (reliability and failure time analysis).

Censoring and non-normality, which are characteristic of survival data, generate difficulty when trying to analyze the data using conventional statistical models such as multiple linear regression. The normal distribution, being a symmetric distribution, takes positive as well as negative values, but duration by its very nature cannot be negative, and therefore normality cannot be assumed when dealing with duration/survival data. Hence the normality assumption of regression models is violated.

Censoring arises when the data are incomplete with respect to the event of interest. In survival analysis, censored observations occur whenever the dependent variable of interest represents the time to a terminal event and the duration of the study is limited in time.

An important concept in survival analysis is the hazard rate, defined as the probability that the event will occur at time t conditional on surviving until time t. Another concept related to the hazard rate is the survival function, which can be defined as the probability of surviving to time t.

Most models try to model the hazard rate by choosing an underlying distribution depending on the shape of the hazard function. A distribution whose hazard function slopes upward is said to have positive duration dependence. Some of the distributional choices in survival models are: F, gamma, Weibull, log normal, inverse normal, exponential, etc. All these distributions are for a non-negative random variable.

Duration models can be parametric, non-parametric or semi-parametric. Some of the commonly used models are the Kaplan-Meier estimator (non-parametric) and the Cox proportional hazards model (semi-parametric).
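As an illustration, the sketch below assumes Python with the third-party lifelines package and simulated right-censored durations to estimate a Kaplan-Meier survival function:

```python
# Kaplan-Meier estimate from durations with right-censoring, using the
# third-party lifelines package (an assumption; any survival library would do).
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(4)
durations = rng.exponential(scale=10, size=200)   # time to event
observed = rng.random(200) < 0.8                  # False = censored observation

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)
print(kmf.survival_function_.head())   # estimated P(T > t), the survival function
print(kmf.median_survival_time_)
```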

Classification and regression trees (CART)

Main article: Decision tree learning

Globally-optimal classification tree analysis (GO-CTA) (also called hierarchical optimal discriminant analysis) is a generalization of optimal discriminant analysis that may be used to identify the statistical model that has maximum accuracy for predicting the value of a categorical dependent variable for a dataset consisting of categorical and continuous variables. The output of HODA is a non-orthogonal tree that combines categorical variables and cut points for continuous variables that yields maximum predictive accuracy, an assessment of the exact type I error rate, and an evaluation of potential cross-generalizability of the statistical model. Hierarchical optimal discriminant analysis can be thought of as a generalization of Fisher’s linear discriminant analysis. Optimal discriminant analysis is an alternative to ANOVA (analysis of variance) and regression analysis. However, whereas ANOVA and regression analysis give a dependent variable that is a numerical variable, hierarchical optimal discriminant analysis gives a dependent variable that is a class variable.

Classification and regression trees (CART) are a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively.

Decision trees are formed by a collection of rules based on variables in the modeling data set:

  • Rules based on variables’ values are selected to get the best split to differentiate observations based on the dependent variable
  • Once a rule is selected and splits a node into two, the same process is applied to each “child” node (i.e. it is a recursive procedure)
  • Splitting stops when CART detects no further gain can be made, or some pre-set stopping rules are met. (Alternatively, the data are split as much as possible and then the tree is later pruned.)

Each branch of the tree ends in a terminal node. Each observation falls into one and exactly one terminal node, and each terminal node is uniquely defined by a set of rules.

A very popular method for predictive analytics is Leo Breiman’s random forests .
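A brief sketch, assuming Python with scikit-learn and synthetic data, that grows a CART-style tree (printing its rule set, i.e. the rules defining each terminal node) and compares it with a random forest:

```python
# A CART-style decision tree and a random forest on the same synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)

tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)  # recursive splitting
print(export_text(tree))                 # the rule set defining each terminal node

forest = RandomForestClassifier(n_estimators=100, random_state=5).fit(X_tr, y_tr)
print(tree.score(X_te, y_te), forest.score(X_te, y_te))     # held-out accuracy
```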

Multivariate adaptive regression splines

Multivariate adaptive regression splines (MARS) is a non-parametric technique that builds flexible models by fitting piecewise linear regressions .

An important concept associated with regression splines is that of a knot. A knot is where one local regression model gives way to another and is thus the point of intersection between two splines.

In multivariate and adaptive regression splines, basis functions are the tool used for generalizing the search for knots. Basis functions are a set of functions used to represent the information contained in one or more variables. The MARS model almost always creates the basis functions in pairs.

The multivariate and adaptive regression splines approach deliberately overfits the model and then prunes to get to the optimal model. The algorithm is computationally very intensive, and in practice an upper bound is specified on the number of basis functions.
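The hinge basis functions underlying MARS are easy to sketch directly. The following snippet (Python with NumPy; the knot value is an arbitrary illustration) shows the mirrored pair of piecewise-linear basis functions meeting at a knot:

```python
# The paired hinge ("basis") functions at the heart of MARS, in NumPy.
import numpy as np

def hinge_pair(x, knot):
    """The two mirrored piecewise-linear basis functions meeting at a knot."""
    return np.maximum(0, x - knot), np.maximum(0, knot - x)

x = np.linspace(-1, 2, 7)
above, below = hinge_pair(x, knot=0.5)
print(above)   # zero below the knot, linear above it
print(below)   # linear below the knot, zero above it
```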

Machine learning techniques

Machine learning, a branch of artificial intelligence, was originally employed to develop techniques to enable computers to learn. Today, since it includes a number of advanced statistical methods for regression and classification, it finds application in a wide variety of fields including medical diagnostics, credit card fraud detection, face and speech recognition and analysis of the stock market. In certain applications it is sufficient to directly predict the dependent variable without focusing on the underlying relationships between variables. In other cases, the underlying relationships can be very complex and the mathematical form of the dependencies unknown. For such cases, machine learning techniques emulate human cognition and learn from training examples to predict future events.

A brief discussion of some of these methods commonly used for predictive analytics is provided below. A detailed study of machine learning can be found in Mitchell (1997).

Neural networks

Neural networks are nonlinear sophisticated modeling techniques that are able to model complex functions. They can be applied to prediction , classification or control in a wide spectrum of fields such as finance , cognitive psychology / neuroscience , medicine , engineering , and physics .

Neural networks are used when the exact nature of the relationship between inputs and output is not known. A key feature of neural networks is that they learn the relationship between inputs and output through training. There are three types of training used by different neural networks: supervised and unsupervised training and reinforcement learning, with supervised being the most common one.

Some examples of neural network training techniques are backpropagation, quick propagation, conjugate gradient descent, projection operator, Delta-Bar-Delta, etc. Some common network architectures are multilayer perceptrons, Kohonen networks, Hopfield networks, and so on.

Multilayer perceptron (MLP)

The multilayer perceptron (MLP) consists of an input and an output layer with one or more hidden layers of nonlinearly-activating (sigmoid) nodes. The network output is determined by the weight vector, and it is therefore necessary to adjust the weights of the network. Backpropagation employs gradient descent to minimize the squared error between the network output values and the desired values for those outputs. The weights are adjusted by an iterative process of repeatedly presenting the training attributes. Making small changes in the weights to reach the desired values is called training the net and is governed by the training set (learning rule).
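A minimal sketch, assuming Python with scikit-learn and synthetic data, of an MLP with one hidden layer of sigmoid units trained by backpropagation-based gradient descent:

```python
# A multilayer perceptron trained via backpropagation, using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=6)

mlp = MLPClassifier(hidden_layer_sizes=(16,),  # one hidden layer of 16 units
                    activation="logistic",     # sigmoid activation, as described above
                    max_iter=2000,
                    random_state=6)
mlp.fit(X, y)                                  # iterative weight adjustment ("training the net")
print(mlp.score(X, y))
```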

Radial basis functions

A radial basis function (RBF) is a function which has a distance criterion with respect to a center. Such functions can be used very efficiently for interpolation and for smoothing of data. Radial basis functions have been applied in the area of neural networks, where they are used as a replacement for the sigmoidal transfer function. Such networks have 3 layers: the input layer, the hidden layer with the RBF non-linearity and a linear output layer. The most popular choice for the non-linearity is the Gaussian. RBF networks have the advantage of not being locked into local minima as feed-forward networks such as the multilayer perceptron are.
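The following sketch (assuming Python with SciPy and simulated noisy samples) uses Gaussian radial basis functions for smoothing and interpolation, the same building block an RBF network combines with a linear output layer:

```python
# Gaussian RBF interpolation/smoothing of noisy 1-D samples with SciPy.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(7)
centers = rng.uniform(-3, 3, size=(30, 1))                 # sample locations
values = np.sin(centers[:, 0]) + 0.1 * rng.normal(size=30) # noisy observations

rbf = RBFInterpolator(centers, values, kernel="gaussian", epsilon=1.0)
grid = np.linspace(-3, 3, 5).reshape(-1, 1)
print(rbf(grid))                                           # smoothed estimates
```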

Support vector machines

Support vector machines (SVM) are used to detect and exploit complex patterns in data by clustering, classifying and ranking the data. They are learning machines that are used to perform binary classifications and regression estimates. They are commonly used to apply linear classification techniques to non-linear classification problems. There are a number of types of SVM, such as linear, polynomial, sigmoid, etc.
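A short sketch, assuming Python with scikit-learn and a synthetic non-linear (“circles”) dataset, contrasting a linear SVM with an RBF-kernel SVM:

```python
# Linear vs. non-linear (RBF-kernel) support vector classifiers.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, noise=0.1, factor=0.4, random_state=8)

linear = SVC(kernel="linear").fit(X, y)   # linear decision boundary
rbf = SVC(kernel="rbf").fit(X, y)         # kernel trick handles the non-linear pattern

print(linear.score(X, y), rbf.score(X, y))  # the RBF kernel should score higher here
```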

Naïve Bayes

Naïve Bayes, based on Bayes’ conditional probability rule, is used for performing classification tasks. Naïve Bayes assumes the predictors are statistically independent, which makes it an effective classification tool that is easy to interpret. It is one of the methods best suited when faced with the “curse of dimensionality” problem, i.e. when the number of predictors is very high.
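A minimal Gaussian naïve Bayes sketch, assuming Python with scikit-learn and synthetic data:

```python
# Gaussian naïve Bayes: predictors treated as conditionally independent given the class.
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_features=20, random_state=9)
nb = GaussianNB().fit(X, y)
print(nb.score(X, y))            # training accuracy
print(nb.predict_proba(X[:3]))   # per-class posterior probabilities
```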

k -nearest neighbors

The nearest neighbor algorithm (k-NN) belongs to the class of pattern recognition statistical methods. The method does not impose a priori any assumptions about the distribution from which the modeling sample is drawn. It involves a training set with both positive and negative values. A new sample is classified by calculating the distance to the nearest neighboring training case. The sign of that point will determine the classification of the sample. In the k-nearest neighbor classifier, the k nearest points are considered and the sign of the majority is used to classify the sample. The performance of the algorithm is influenced by three factors: (1) the distance measure used to locate the nearest neighbors; (2) the decision rule used to derive a classification from the k-nearest neighbors; and (3) the number of neighbors used to classify the new sample. Provided the observations are independent and identically distributed (i.i.d.), regardless of the distribution from which the sample is drawn, the predicted class will converge to the class assignment that minimizes misclassification error. See Devroye et al.
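A short k-NN sketch, assuming Python with scikit-learn and synthetic data; the neighbor count k, the distance metric, and the majority-vote rule correspond to the three factors above:

```python
# k-nearest-neighbor classification with an explicit k and distance metric.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=10)
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")  # k = 5, Euclidean distance
knn.fit(X, y)
print(knn.predict(X[:5]))   # majority vote among the 5 nearest training points
```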

Geospatial predictive modeling

Conceptually, geospatial predictive modeling is rooted in the principle that the occurrences of events being modeled are limited in distribution. Occurrences of events are neither uniform nor random in distribution; there are spatial environment factors (infrastructure, sociocultural, topographic, etc.) that constrain and influence where the locations of events occur. Geospatial predictive modeling attempts to describe those constraints and influences by spatially correlating occurrences of historical geospatial locations with the environmental factors that represent them. Geospatial predictive modeling is a process for analyzing events through a geographic filter in order to make statements of likelihood for event occurrence or emergence.

Tools

Historically, using predictive analytics tools, as well as understanding the results they delivered, required advanced skills. However, modern predictive analytics tools are no longer restricted to IT specialists. [citation needed] As more organizations adopt predictive analytics in decision-making processes and integrate it into their operations, they are creating a shift in the market toward business users as the primary consumers of the information. Business users want tools they can use on their own. Vendors are responding by creating new software that removes the mathematical complexity, provides user-friendly graphic interfaces and/or builds in short cuts that can, for example, recognize the type of data available and suggest an appropriate predictive model. [32] Predictive analytics tools have become sophisticated enough to adequately present and dissect data problems, [citation needed] so that any data-savvy information worker can use them to analyze data and retrieve meaningful, useful results. [2] For example, modern tools present findings using simple charts, graphs, and scores that indicate the likelihood of possible outcomes. [33]

There are numerous tools available in the marketplace that help with the execution of predictive analytics. These range from tools that require very little user sophistication to those designed for the expert practitioner. The difference between these tools is often in the level of customization and heavy data lifting allowed.

Some open source software predictive analytic tools include:

  • Apache Mahout
  • GNU Octave
  • KNIME
  • OpenNN
  • Orange
  • R
  • scikit-learn (Python)
  • Weka

Commercial predictive analytic tools include:

  • Alpine Data Labs
  • Alteryx
  • Angoss KnowledgeSTUDIO
  • Actuate Corporation BIRT Analytics
  • IBM SPSS Statistics and IBM SPSS Modeler
  • KXEN Inc. Modeler
  • Mathematica
  • MATLAB
  • Minitab
  • LabVIEW [34]
  • Neural Designer
  • Oracle Advanced Analytics
  • Pervasive
  • Predixion Software
  • RapidMiner
  • rcase
  • Revolution Analytics
  • SAP HANA [35] and SAP BusinessObjects Predictive Analytics [36]
  • SAS and its Enterprise Miner
  • Sidetrade
  • Stata
  • Statgraphics
  • Statistica
  • Tibco Software

Besides these software packages, specific tools have also been developed for industrial applications. For example, the Watchdog Agent Toolbox has been developed for predictive analytics in prognostics and health management applications and is available for MATLAB and LabVIEW. [37] [38]

The most popular commercial predictive analytics software packages according to the Rexer Analytics Survey for 2013 are IBM SPSS Modeler, SAS Enterprise Miner, and Dell Statistica.

PMML

The Predictive Model Markup Language (PMML) is proposed as a standard language for expressing predictive models. Such an XML-based language provides a way for the different tools to define predictive models and to share them. PMML 4.0 was released in June 2009.

Criticism

There are plenty of skeptics when it comes to computers’ and algorithms’ abilities to predict the future, including Gary King, a professor at Harvard University and the director of the Institute for Quantitative Social Science. [39] People are influenced by their environment in innumerable ways. Trying to understand what people will do next assumes that all the influential variables can be known and measured accurately. “People’s environments change even more rapidly than they themselves do. Everything from the weather to their relationship with their mother can change the way people think and act.” All of those variables are unpredictable, and if put in the exact same situation tomorrow, people may make a completely different decision. [40]

In a study of 1072 papers published in Information Systems Research and MIS Quarterly between 1990 and 2006, only 52 empirical papers attempted predictive claims, of which only 7 carried out proper predictive analytics. [41]

See also

  • Algorithmic trading
  • Computational sociology
  • Criminal Reduction Utilization Statistical History
  • Disease surveillance
  • Learning analytics
  • Odds algorithm
  • Pattern recognition
  • Predictive policing
  • Social media analytics

References

  1. ^ Nyce, Charles (2007), Predictive Analytics White Paper (PDF), American Institute for Chartered Property Casualty Underwriters / Insurance Institute of America, p. 1
  2. ^ Eckerson, Wayne (May 10, 2007), Extending the Value of Your Data Warehousing Investment, The Data Warehousing Institute
  3. ^ Coker, Frank (2014). Pulse: Understanding the Vital Signs of Your Business (1st ed.). Bellevue, WA: Ambient Light Publishing. pp. 30, 39, 42, more. ISBN 978-0-9893086-0-1.
  4. ^ Conz, Nathan (September 2, 2008), “Insurers Shift to Customer-focused Predictive Analytics Technologies”, Insurance & Technology
  5. ^ Fletcher, Heather (March 2, 2011), “The 7 Best Uses for Predictive Analytics in Multichannel Marketing”, Target Marketing
  6. ^ Korn, Sue (April 21, 2011), “The Opportunity for Predictive Analytics in Finance”, HPC Wire
  7. ^ Barkin, Eric (May 2011), “CRM + Predictive Analytics: Why It All Adds Up”, Destination CRM
  8. ^ Das, Krantik; Vidyashankar, G.S. (July 1, 2006), “Competitive Advantage in Retail Through Analytics: Developing Insights, Creating Value”, Information Management
  9. ^ McDonald, Michele (September 2, 2010), “New Technology Taps ‘Predictive Analytics’ to Target Travel Recommendations”, Travel Market Report
  10. ^ Moreira-Matias, Luís; Gama, João; Ferreira, Michel; Mendes-Moreira, João; Damas, Luis (2016-02-01). “Time-evolving OD matrix estimation using high-speed GPS data streams”. Expert Systems with Applications. 44: 275-288. doi:10.1016/j.eswa.2015.08.048.
  11. ^ Stevenson, Erin (December 16, 2011), “Tech Beat: Can you tell me about predictive analytics?”, Times-Standard
  12. ^ Lindert, Bryan (October 2014). “Eckerd Rapid Safety Feedback: Bringing Business Intelligence to Child Welfare” (PDF). Policy & Practice. Retrieved March 3, 2016.
  13. ^ “Florida Leverages Predictive Analytics to Prevent Child Fatalities – Other States Follow”. The Huffington Post. Retrieved 2016-03-25.
  14. ^ McKay, Lauren (August 2009), “The New Prescription for Pharma”, Destination CRM
  15. ^ Finlay, Steven (2014). Predictive Analytics, Data Mining and Big Data. Myths, Misconceptions and Methods (1st ed.). Basingstoke: Palgrave Macmillan. p. 237. ISBN 1137379278.
  16. ^ Siegel, Eric (2013). Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die (1st ed.). Wiley. ISBN 978-1-1183-5685-2.
  17. ^ Djurdjanovic, Dragan; Lee, Jay; Ni, Jun (July 2003). “Watchdog Agent - an infotronics-based prognostics approach for product performance degradation assessment and prediction”. Advanced Engineering Informatics. 17 (3-4): 109-125. doi:10.1016/j.aei.2004.07.005.
  18. ^ “New Strategies Long Overdue on Measuring Child Welfare Risk”. The Chronicle of Social Change. Retrieved 2016-04-04.
  19. ^ “Eckerd Rapid Safety Feedback® Highlighted in National Report to Eliminate Child Abuse and Neglect Fatalities”. Eckerd Kids. Retrieved 2016-04-04.
  20. ^ “A National Strategy to Eliminate Child Abuse and Neglect Fatalities” (PDF). Commission to Eliminate Child Abuse and Neglect Fatalities (2016). Retrieved April 4, 2016.
  21. ^ “A Roadmap for National Action on Clinical Decision Support”. JAMIA. Retrieved 2016-08-10.
  22. ^ “Predictive Big Data Analytics: A Study of Parkinson’s Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-source and Incomplete Observations”. PLoS ONE. 11: e0157077. doi:10.1371/journal.pone.0157077.
  23. ^ Reichheld, Frederick; Schefter, Phil. “The Economics of E-Loyalty”. Harvard Business School Working Knowledge. Retrieved 10 November 2014.
  24. ^ Schiff, Mike (March 6, 2012), BI Experts: Why Predictive Analytics Will Continue to Grow, The Data Warehouse Institute
  25. ^ Nigrini, Mark (June 2011). Forensic Analytics: Methods and Techniques for Forensic Accounting Investigations. Hoboken, NJ: John Wiley & Sons Inc. ISBN 978-0-470-89046-2.
  26. ^ Dhar, Vasant (April 2011). “Prediction in Financial Markets: The Case for Small Disjuncts”. ACM Transactions on Intelligent Systems and Technology. 2 (3).
  27. ^ Dhar, Vasant; Chou, Dashin; Provost, Foster (October 2000). “Discovering Interesting Patterns in Investment Decision Making with GLOWER - A Genetic Learning Algorithm Overlaid With Entropy Reduction”. Data Mining and Knowledge Discovery. 4 (4).
  28. ^ http://www.hcltech.com/sites/default/files/key_to_monetizing_big_data_via_predictive_analytics.pdf
  29. ^ “Predictive Analytics on Evolving Data Streams” (PDF).
  30. ^ Ben-Gal, I.; Dana, A.; Shkolnik, N.; Singer, G. (2014). “Efficient Construction of Decision Trees by the Dual Information Distance Method” (PDF). Quality Technology & Quantitative Management (QTQM), 11 (1), 133-147.
  31. ^ Ben-Gal, I.; Shavitt, Y.; Weinsberg, E.; Weinsberg, U. (2014). “Peer-to-peer information retrieval using shared-content clustering” (PDF). Knowledge and Information Systems. 39: 383-408. doi:10.1007/s10115-013-0619-9.
  32. ^ Halper, Fern (November 1, 2011), “The Top 5 Trends in Predictive Analytics”, Information Management
  33. ^ MacLennan, Jamie (May 1, 2012), 5 Myths about Predictive Analytics, The Data Warehousing Institute
  34. ^ http://sine.ni.com/nips/cds/view/p/lang/en/nid/210191
  35. ^ http://help.sap.com/saphelp_hanaplatform/helpdata/en/32/731a7719f14e488b1f4ab0afae995b/frameset.htm
  36. ^ http://go.sap.com/product/analytics/predictive-analytics.html
  37. ^ “Watchdog Agent Toolbox for LabVIEW”.
  38. ^ “Watchdog Agent Toolbox” (PDF). IMS Center.
  39. ^ Temple-Raston, Dina (Oct 8, 2012), Predicting The Future: Fantasy Or A Good Algorithm?, NPR
  40. ^ Alverson, Cameron (Sep 2012), Polling and Statistical Models Can’t Predict the Future, Cameron Alverson
  41. ^ Shmueli, Galit (2010-08-01). “To Explain or to Predict?”. Statistical Science. 25 (3): 289-310. ISSN 0883-4237. doi:10.1214/10-STS330.