Predikto has automated over 80% of the data analytics process. We can use more data, evaluate a larger number of potential feature sets and models, and evolve those feature sets and models during live production. We avoid the “set it and forget it” trap.

What is Predictive Analytics?

To understand the Predikto Advantage, you must first understand predictive analytics.

Predictive Analytics is the process of using historical data to develop models that can be applied to live data to predict future events. The models discern complex patterns in the data that form the signature of an event of interest. When that signature appears in live data, the models predict that the same event will occur at some time in the future.
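The loop described above can be sketched in code. The sketch below is a toy illustration, not Predikto's method: it "learns" a signature as the average of sensor windows seen before past events, then flags live windows that fall close to that signature.

```python
def learn_signature(pre_event_windows):
    """Average the sensor readings observed just before past events."""
    n = len(pre_event_windows)
    width = len(pre_event_windows[0])
    return [sum(w[i] for w in pre_event_windows) / n for i in range(width)]

def distance(window, signature):
    """Euclidean distance between a live window and the learned signature."""
    return sum((a - b) ** 2 for a, b in zip(window, signature)) ** 0.5

def predict_event(window, signature, threshold):
    """True when the live data is close enough to the historical signature."""
    return distance(window, signature) < threshold

# Historical sensor windows seen shortly before past failures (made up):
history = [[9.8, 10.1, 10.0], [10.2, 9.9, 10.0]]
sig = learn_signature(history)

print(predict_event([10.1, 9.9, 10.0], sig, threshold=1.0))  # True: matches signature
print(predict_event([2.0, 3.0, 4.0], sig, threshold=1.0))    # False: nothing like it
```

A production model is of course far richer than a mean-vector comparison, but the shape of the process is the same: fit on history, score live data, alert on a threshold.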


A typical analytic process involves:

  • Data of interest is gathered into a data lake.
  • In an iterative process, features from the data are selected and then used to train analytic models. Statistical methods are used to determine the best features and the best models.
  • Once an appropriate feature set and model are generated, thresholds are set to determine when to notify if an event is going to occur.
  • The models are moved into production, where they are hooked to a live data stream and begin producing predictions about future events.
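The iterative feature-and-model search in the steps above can be sketched as a selection loop. Everything here is illustrative (the toy "mean-cutoff" model, the function names, the data); a real pipeline would also score candidates on held-out validation data rather than the training set:

```python
def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def select(feature_sets, models, train_rows, labels):
    """Try every (feature set, model) pair and keep the best scorer."""
    best = None
    for fs in feature_sets:
        projected = [[row[i] for i in fs] for row in train_rows]
        for name, fit in models.items():
            model = fit(projected, labels)
            score = accuracy(model, projected, labels)
            if best is None or score > best[0]:
                best = (score, fs, name, model)
    return best

def fit_mean_cutoff(rows, labels):
    """Toy model: alert when the mean of the chosen features crosses the
    midpoint between the positive-class and negative-class means."""
    pos = [sum(r) / len(r) for r, y in zip(rows, labels) if y]
    neg = [sum(r) / len(r) for r, y in zip(rows, labels) if not y]
    cutoff = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda r: sum(r) / len(r) > cutoff

rows = [[1.0, 9.0], [2.0, 8.0], [8.0, 1.0], [9.0, 2.0]]
labels = [True, True, False, False]
score, fs, name, _ = select([(0,), (1,), (0, 1)],
                            {"mean-cutoff": fit_mean_cutoff}, rows, labels)
print(name, fs, score)  # mean-cutoff (1,) 1.0 -- feature 1 alone separates the classes
```

The point of automating this loop is scale: a machine can sweep far more feature sets and model families than a human analyst can evaluate by hand.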


Predikto.

  • Ingest: automated by Predikto.
  • Processing: automated by Predikto.
  • Output: Business Intelligence driven by joined data.
  • Output: Contextualized Notifications of impending events.

The other guy.

  • Ingest: manually set-up by internal team.
  • Processing: manually set by internal team.
  • Output: Business Intelligence on disjoint data sets.
  • Output: Raw prediction scores.

The Whole Data and Model Iceberg

We can evaluate more possibilities, faster, providing you with the best models that continually update to meet your changing needs.

Bigger Data

Predikto's automation allows for more information to be ingested and used as potential indicators. We combine equipment sensor data with a wealth of other parameters.
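As a minimal sketch of what combining those sources looks like, here is a join of sensor readings with hypothetical weather records on a shared timestamp key; all field names and values are invented:

```python
# Invented sample data: equipment telemetry and ambient conditions.
sensor = {
    "2024-05-01T10:00": {"engine_temp_c": 92.5, "rpm": 1800},
    "2024-05-01T11:00": {"engine_temp_c": 97.1, "rpm": 2100},
}
weather = {
    "2024-05-01T10:00": {"ambient_temp_c": 18.0, "humidity_pct": 55},
    "2024-05-01T11:00": {"ambient_temp_c": 21.0, "humidity_pct": 48},
}

def join_on_timestamp(*sources):
    """Merge records from several sources that share a timestamp key."""
    joined = {}
    for source in sources:
        for ts, fields in source.items():
            joined.setdefault(ts, {}).update(fields)
    return joined

combined = join_on_timestamp(sensor, weather)
print(combined["2024-05-01T11:00"]["ambient_temp_c"])  # 21.0
```

Once joined, context fields like ambient temperature become candidate model inputs alongside the sensor readings themselves.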

Autodynamic Learning

Predikto's automation of feature creation, model selection, and threshold optimization allows feature sets and models to evolve during live production.


Architecture Built for Scaling and Automation

Predikto's architecture is built to scale, supporting complex compute tasks and large volumes of data.

Automated processes underpin all services from ETL to feature creation to feature selection to model creation to model selection to threshold optimization.

Features from the mundane to the unexplored are automatically created and scored. Examples: number of seconds since this device has had its oil changed, maximum temperature of the left fuel rail, mean time between service visits, z-score of count of fault code ABC vs any other device operating in this region.
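Two of those example features can be sketched directly. The field names and data below are invented for illustration:

```python
from datetime import datetime

def seconds_since_oil_change(now, last_oil_change):
    """Elapsed seconds since this device's last oil change."""
    return (now - last_oil_change).total_seconds()

def zscore(value, population):
    """How many standard deviations `value` sits from the population mean."""
    n = len(population)
    mean = sum(population) / n
    var = sum((x - mean) ** 2 for x in population) / n
    return 0.0 if var == 0 else (value - mean) / var ** 0.5

now = datetime(2024, 5, 1, 12, 0)
print(seconds_since_oil_change(now, datetime(2024, 5, 1, 9, 0)))  # 10800.0

# Fault code "ABC" count on this device vs peers in the same region:
regional_counts = [2, 3, 2, 3, 2]
print(round(zscore(12, regional_counts), 2))  # far above the regional norm
```

Each such feature is just another column the scoring machinery can keep or discard; the value of automation is generating and testing thousands of them.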

Automation enables continual exploration of the feature and model space. The system evolves with your data.

Integrity is maintained through strict versioning and retention.

Interpretable Models

Predikto's proprietary algorithms couple each prediction with the top 10 features that influenced that prediction.

These features are what drove the prediction score to rise above the alert threshold. They signify the inputs that are deviating from the normal state.
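Predikto's algorithms are proprietary, but the general idea of pairing a prediction with its most influential inputs can be illustrated with a simple linear model, where each feature's contribution is its weight times its value. All names and numbers below are invented:

```python
def top_contributors(weights, features, k=3):
    """Rank features by the absolute size of their contribution."""
    contributions = {
        name: weights[name] * value for name, value in features.items()
    }
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:k]

# Hypothetical learned weights and one live reading:
weights = {"fuel_rail_temp": 0.8, "oil_age_s": 0.002, "rpm_variance": 0.1}
live = {"fuel_rail_temp": 40.0, "oil_age_s": 5000.0, "rpm_variance": 12.0}

for name, contrib in top_contributors(weights, live, k=2):
    print(name, contrib)
```

Here `fuel_rail_temp` dominates the score, so it would head the list shown to the operator alongside the alert.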

Providing this information builds trust in the black box.

ETL: Extract. Transform. Load.

We're data wranglers. We've been there and done that with the messiest, most distributed, siloed, ill-formatted data there is.

Predikto realizes your data is siloed across many locations and different formats. We work with you to identify the data you need. We write automated processes to extract the data, transform it into a uniform format, and then load it into our database for use in our models. The best part? All this uniform, consolidated data is available to you through our APIs.
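The extract-transform-load flow can be pictured as a toy pipeline: pull records from two differently formatted "silos", normalize them to one schema, and load them into a single store. The formats and field names here are invented:

```python
def extract():
    """Two 'silos' with different field names and temperature units."""
    silo_a = [{"device": "A1", "temp_f": 203.0}]
    silo_b = [{"id": "B7", "temperature_c": 91.0}]
    return silo_a, silo_b

def transform(silo_a, silo_b):
    """Map both silos onto one schema with Celsius temperatures."""
    uniform = []
    for rec in silo_a:
        uniform.append({"device_id": rec["device"],
                        "temp_c": round((rec["temp_f"] - 32) * 5 / 9, 1)})
    for rec in silo_b:
        uniform.append({"device_id": rec["id"], "temp_c": rec["temperature_c"]})
    return uniform

def load(records, database):
    """Store the uniform records, keyed by device id."""
    for rec in records:
        database[rec["device_id"]] = rec
    return database

db = load(transform(*extract()), {})
print(db["A1"]["temp_c"])  # 95.0 -- Fahrenheit reading converted to Celsius
```

The same consolidated, uniform store is what the APIs mentioned above would expose back to you.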

Start the journey to predictable operations today. Contact Us