aivis Engine v2 - Signal Prediction - User Guide


aivis Signal Prediction is one of the engines of the aivis Technology Platform by Vernaio and a component of multiple products such as aivis Insights, Process Booster, and more. aivis Signal Prediction allows you to predict new values of a specific signal, and to analyse its dependence on other signals.

Using revolutionary new mathematical concepts of multivariate & non-linear statistics (i.e., Geometrical Kernel Machines), aivis Signal Prediction minimizes the user's input requirements and reaches a prediction quality never seen before.

During Training, the engine generates a model trained on historical data that includes known values of the target signal (the signal that is to be predicted). Before the actual training, data preparation steps are taken, such as time synchronization, automatic feature engineering, and automatic feature selection. The resulting model is then used during Inference, where, on demand, it predicts the current value of the target signal, based on recent values of the relevant signals.

Introduction

API References

This documentation explains the usage and principles behind aivis Signal Prediction to data and software engineers. For detailed API descriptions of docker images, web endpoints and SDK functions, please consult the reference manual of the respective component.

For additional support, go to Vernaio Support.

Workflow Overview

Using aivis Signal Prediction consists of two steps, each fulfilling a specific task:

  1. Training, which creates a compact prediction model based on the Training Data. The training data has to include values of the target (the signal to be predicted). Results of the Training can also be used to inspect the dependencies between other signals and the target.
  2. Inference, which applies the model from the previous step to some Inference Data (without any target values) to create a prediction either for historical evaluation or live prediction.

Workflow Overview

Artifact Distribution

Currently, aivis Signal Prediction is distributed to a closed user base only. To gain access to the artifacts, as well as for any other questions, you can open a support ticket via aivis Support.

Example Use Case

As an illustrative use case example, we will use aivis Signal Prediction for the continuous prediction of paper quality.

In paper manufacturing, the operator of the paper machine needs to ensure a certain minimal quality level of the paper, depending on the quality expectations of the customer. In principle, the current quality is physically measurable, but there are two substantial problems:

  • One aspect of paper quality is its resistance to tear. This measurement is destructive. The production process must be stopped before the measurement can take place. Thus, the quality cannot be measured continuously but only in gaps.
  • Beyond that, measuring the paper quality is no simple task. It has to be performed in a laboratory, which takes a considerable amount of time. This means that the result will only be available two hours after the actual measurement.

Taking all this into account, immediate and nearly continuous feedback on the current paper quality would enable the operator to reliably run the machine closer to the minimal quality requirement and therefore save money.

In the following two chapters we will train a model that achieves precisely that. We will evaluate it on a historical time window outside of the training period ("out-of-sample").

Getting Started (Docker)

The docker images of aivis Signal Prediction are prepared for easy usage. They use the SDK internally, but have a simpler file-based interface. If you have a working docker workflow system like Argo, you can build your own automated workflow based on these images.

In this chapter, we will show you how to get started using docker images. Usage of the SDK will be covered by the next chapter.

Run Example Code

A working example that builds on the code explained below can be downloaded directly here: signal-prediction-examples.zip.

This zip file contains example code for docker, python and java in respective subfolders. All of them use the same dataset which is in the data subfolder.

Prerequisites: In addition to the signal-prediction-examples.zip you just downloaded, you need the following artifacts. To gain access, you can open a support ticket via aivis Support.

  • The docker images aivis-engine-v2-sp-training-worker, aivis-engine-v2-sp-inference-worker and (optionally for HTML report generation) aivis-engine-v2-toolbox
  • An aivis licensing key, see licensing

As a Kubernetes user, even without deeper Argo knowledge, the aivis-engine-v2-example-sp-argo.yaml best shows how the containers are executed one after another, how the training and inference workers are provided with folders containing the data CSVs, and how the toolbox assembles an HTML report at the end.

Artifacts

There are four different docker images:

  • The Training Worker creates the model:
    {REGISTRY}/{NAMESPACE}/aivis-engine-v2-sp-training-worker:{VERSION}
  • The Inference Worker creates predictions for a predefined time window in a bulk manner. This is convenient for evaluating a model:
    {REGISTRY}/{NAMESPACE}/aivis-engine-v2-sp-inference-worker:{VERSION}
  • The Inference Service offers a RESTful web API that allows the triggering of individual predictions for a specified time via an HTTP call:
    {REGISTRY}/{NAMESPACE}/aivis-engine-v2-sp-inference-service:{VERSION}
  • The Incremental Learning Worker carries out the incremental training process on an already trained model (since release 2.4):
    {REGISTRY}/{NAMESPACE}/aivis-engine-v2-sp-incremental-learning-worker:{VERSION}

All docker images are Linux-based.

Requirements

You need an installation of Docker on your machine as well as access to the engine artifacts:

docker -v
docker pull {REGISTRY}/{NAMESPACE}/aivis-engine-v2-sp-training-worker:{VERSION}

Licensing

A valid licensing key is necessary for every aivis calculation in every engine and every component. It has to be set (exported) as environment variable AIVIS_ENGINE_V2_API_KEY.

If aivis returns a licensing error despite the environment variable being set, please check the following items:

  • Terminals usually need to be restarted to learn newly set environment variables.
  • Licensing keys have the typical form <FirstPartOfKey>.<SecondPartOfKey> with first and second part being UUIDs. In particular, there is no whitespace.
  • A common error source is that the user's firewall does not let HTTPS requests to v3.aivis-engine-v2.vernaio-licensing.com (before release 2.7: v2.aivis-engine-v2.vernaio-licensing.com, before release 2.3: aivis-engine-v2.perfectpattern-licensing.de) pass and the licensing request never reaches the licensing server. In that case outgoing connections to that hostname and TCP port 443 need to be whitelisted.

Training

First, we need to train the model (workflow step 1: Training) using the Training Worker.

At the beginning, we create a folder docker, a subfolder training-config and add the configuration file config.yaml:

data:
  folder: /srv/data
  dataTypes:
    defaultType: FLOAT
training:
  target:
    signal: TARGET
output:
  folder: /srv/output

For the moment, you may take this file as it is. The different keys will become clearer from the later sections and the docker reference manual. As a next step, we create a second folder data and add the Training Data CSV file train_sp.csv to the folder. Afterwards, we create a blank folder output.

Our folder structure should now look like this:

+- docker
|  +- training-config
|      +- config.yaml
|
+- data
|  +- train_sp.csv
|
+- output

Finally, we can start our training via:

docker run --rm -it \
  -v $(pwd)/docker/training-config:/srv/conf \
  -v $(pwd)/data/train_sp.csv:/srv/data/train_sp.csv \
  -v $(pwd)/output:/srv/output \
  -e AIVIS_ENGINE_V2_API_KEY={LICENSE_KEY} \
  {REGISTRY}/{NAMESPACE}/aivis-engine-v2-sp-training-worker:{VERSION}
docker run --rm -it `
  -v ${PWD}/docker/training-config:/srv/conf `
  -v ${PWD}/data/train_sp.csv:/srv/data/train_sp.csv `
  -v ${PWD}/output:/srv/output `
  -e AIVIS_ENGINE_V2_API_KEY={LICENSE_KEY} `
  {REGISTRY}/{NAMESPACE}/aivis-engine-v2-sp-training-worker:{VERSION}

After a short time, this should lead to two output files in the output folder:

  • training-report.json can be inspected to get information about dependencies between the various signals.
  • model.json holds all model information for the following Inference (prediction).
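Both files are plain JSON, so you can take a first look at them before feeding them into the next step. A minimal Python sketch (assuming the report is a JSON object; its exact structure is defined in the docker reference manual):

import json

# load the training report and list its top-level keys
with open("output/training-report.json", encoding="utf-8") as f:
    report = json.load(f)
print(sorted(report.keys()))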

Evaluation / Inference

After the training has finished, we can evaluate it by running a historical evaluation (bulk inference) on the second data file. This is the out-of-sample evaluation. To assess the quality of the model, we want to obtain predictions exactly at those timestamps for which we know the true target values. In addition to this, the config file below instructs the Inference Worker to create a prediction every 2 minutes. This way, we obtain a continuous stream of values, exactly as it would be desired by the machine operator.

For this, we create a second subfolder inference-config of the docker folder and add the configuration file config.yaml:

data:
  folder: /srv/data
  dataTypes: 
    defaultType: FLOAT
inference: 
  config: 
    skipOnInsufficientData: true 
  modelFile: /srv/output/model.json
  timestamps: 
  - _type: AtNextSignalValue
    startTime: 1598918880000
    endTime: 1600851780000
    signal: TARGET
output: 
  folder: /srv/output

Note that there are further prediction methods, infer ... with category probabilities and infer float with next normal; see the sections Infer With Category Probabilities and Infer Float With Next Normal. After that, we add the Inference Data CSV file eval_sp.csv to the data folder.

Our folder structure should now look like this:

+- docker
|  +- training-config
|      +- config.yaml
|  +- inference-config
|      +- config.yaml
|
+- data
|  +- train_sp.csv
|  +- eval_sp.csv
|
+- output

Finally, we can run the Inference via:

docker run --rm -it \
  -v $(pwd)/docker/inference-config:/srv/conf \
  -v $(pwd)/data/eval_sp.csv:/srv/data/eval_sp.csv \
  -v $(pwd)/output:/srv/output \
  -e AIVIS_ENGINE_V2_API_KEY={LICENSE_KEY} \
  {REGISTRY}/{NAMESPACE}/aivis-engine-v2-sp-inference-worker:{VERSION}
docker run --rm -it `
  -v ${PWD}/docker/inference-config:/srv/conf `
  -v ${PWD}/data/eval_sp.csv:/srv/data/eval_sp.csv `
  -v ${PWD}/output:/srv/output `
  -e AIVIS_ENGINE_V2_API_KEY={LICENSE_KEY} `
  {REGISTRY}/{NAMESPACE}/aivis-engine-v2-sp-inference-worker:{VERSION}

Successful execution should lead to the file predictions.json in the output folder, which holds the predicted values. As mentioned previously, a prediction is made every time there is a target value and additionally every 2 minutes.

When plotted, the output looks like this (the target is displayed in purple, the predictions in blue):

Evaluation
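If you want to reproduce such a plot yourself, a minimal Python sketch could look as follows. Note that the field names are assumptions here; check predictions.json or the docker reference manual for the exact structure:

import json
import matplotlib.pyplot as plt

with open("output/predictions.json", encoding="utf-8") as f:
    predictions = json.load(f)

# assumption: each entry carries a UNIX timestamp (ms) and a predicted value
times = [p["timestamp"] for p in predictions]
values = [p["value"] for p in predictions]

plt.plot(times, values, color="blue", label="prediction")
plt.legend()
plt.show()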

It is of course possible to make a live prediction (just-in-time) for the current timestamp. You can then feed the predicted values back as a new signal into your hot storage / time series database / historian and reach our initial goal of providing the machine operator with a continuous live quality KPI. For this purpose, the Inference Service may be preferable, as it offers a RESTful API to trigger predictions via HTTP, in contrast to the Inference Worker, which uses a file-based API.

Next, we will do the same calculations with direct function calls via an SDK.

Getting Started (SDK)

The SDK of aivis Signal Prediction allows for direct calls from your C, Java or Python program code. All language SDKs internally use our native shared library (FFI). As C APIs can be called from various other languages as well, the C-SDK can also be used with languages such as R, Go, Julia, Rust, and more. Compared to the docker images, the SDK enables a more fine-grained usage and tighter integration.

In this chapter we will show you how to get started using the SDK.

Run Example Code

A working SDK example that builds on the code explained below can be downloaded directly here: signal-prediction-examples.zip.

This zip file contains example code for docker, python and java in respective subfolders. All of them use the same dataset which is in the data subfolder.

In addition to the `signal-prediction-examples.zip` you just downloaded, you need the following artifacts. To gain access, you can open a support ticket via aivis Support.

Required artifacts:

  • These aivis engine v2 .whl-files which you will receive in a libs.zip directly from aivis Support:
    • aivis_engine_v2_sp_runtime_python_full-{VERSION}-py3-none-win_amd64.whl: A signal prediction full python runtime
      (here for Windows; choose the file fitting your operating system, see Artifacts for other options on Linux and macOS)
    • aivis_engine_v2_base_sdk_python-{VERSION}-py3-none-any.whl: The base python sdk
    • aivis_engine_v2_sp_sdk_python-{VERSION}-py3-none-any.whl: The signal prediction python sdk
    • aivis_engine_v2_toolbox-{TOOLBOX-VERSION}-py3-none-any.whl: The toolbox python sdk - optional for HTML report generation
  • An aivis licensing key, see licensing, which you will receive directly from aivis Support

Preparations:

  • Make sure you have a valid Python (>= 3.9) installation.
  • To apply the aivis licensing key, create an environment variable AIVIS_ENGINE_V2_API_KEY and assign the licensing key to it.
  • Make sure you have an active internet connection so that the licensing server can be contacted.
  • Download and unzip the signal-prediction-examples.zip. The data CSVs train_sp.csv and eval_sp.csv need to stay in **/data.
  • Download and unzip the libs.zip. These .whl-files need to be in **/libs.

The folder now has the following structure:

+- data
|  +- train_sp.csv
|  +- eval_sp.csv
|
+- docker
|  +- # files to run the example via docker images, which we will not need now
|
+- java
|  +- # files to run the example via java sdk, which we will not need now 
|
+- libs
|  +- # the .whl files to run aivis
|
+- python
|  +- # files to run the example via python sdk 

Running the example code:

  • Navigate to the **/python subfolder. Here, you find the classic python script example_sp.py and the jupyter notebook example_sp.ipynb. Both run the exact same example and output the same result. Choose which one you want to run.
  • There are various ways to install dependencies from .whl files. We will explain two options: installing them via pip install, or installing them via poetry. Many other options are also possible, of course.

Option A: pip install (only for the classic python script example_sp.py, not for the jupyter notebook example_sp.ipynb)

  • Open a console in the **/python subfolder and run the following commands:
      # installs the `.whl` files
      pip install -r requirements-<platform>.txt
    
      # runs the classic python script `example_sp.py`
      python example_sp.py --input=../data --output=output
    

Option B: poetry install

  • If you have not done so already, install poetry, a python package manager:
      # installs poetry (a package manager)
      python -m pip install poetry
    
  • Run either the classic python script example_sp.py
      # installs the `.whl` files
      poetry install --no-root
    
      # runs the classic python script `example_sp.py`
      poetry run python example_sp.py --input=../data --output=output
    
  • Or run jupyter notebook example_sp.ipynb by executing the following commands in the console opened in the **/python subfolder. The first one might take a while, the third one opens a tab in your browser.
      # installs the `.whl` files
      poetry install --no-root
    
      # installs jupyter kernel
      poetry run ipython kernel install --user --name=test_sp
    
      # runs the jupyter python script `example_sp.ipynb`
      poetry run jupyter notebook example_sp.ipynb
    

After running the scripts, you will find your computation results in **/python/output.

In addition to the signal-prediction-examples.zip you just downloaded, you need the following artifacts. To gain access, you can open a support ticket via aivis Support.

Required artifacts:

  • These aivis engine v2 .jar files which you will receive in a libs.zip directly from aivis Support:
    • aivis-engine-v2-sp-runtime-java-full-win-x8664-{VERSION}.jar: A signal prediction full java runtime (here for Windows; choose the file fitting your operating system, see Artifacts for other options on Linux and macOS)
    • aivis-engine-v2-base-sdk-java-{VERSION}.jar: The base java sdk
    • aivis-engine-v2-sp-sdk-java-{VERSION}.jar: The signal prediction java sdk
    • There is NO toolbox jar for HTML report generation.
  • An aivis licensing key, see licensing, which you will receive directly from aivis Support

Preparations:

  • Make sure you have a valid Java (>= 11) installation.
  • To apply the aivis licensing key, create an environment variable AIVIS_ENGINE_V2_API_KEY and assign the licensing key to it.
  • Make sure you have an active internet connection so that the licensing server can be contacted.
  • Download and unzip the signal-prediction-examples.zip. The data CSVs train_sp.csv and eval_sp.csv need to stay in **/data.
  • Download and unzip the libs.zip. These .jar-files need to be in **/libs.

The folder now has the following structure:

+- data
|  +- train_sp.csv
|  +- eval_sp.csv
|
+- docker
|  +- # files to run the example via docker images, which we will not need now
|
+- java
|  +- # files to run the example via java sdk 
|
+- libs
|  +- # the .jar files to run aivis
|
+- python
|  +- # files to run the example via python sdk, which we will not need now 

Running the example code:

  • We use Gradle as our Java package manager. It is easiest to directly use the gradle wrapper.
  • Navigate to the **/java subfolder. Here, you find the build.gradle. Check that the paths point correctly to your aivis engine v2 .jar files in the **/libs subfolder.
  • Open a console in the **/java subfolder and run the following commands:
      # builds this Java project with gradle wrapper
      ./gradlew clean build
    
      # runs Java with parameters referring to input and output folder
      java -jar build/libs/example_sp.jar --input=../data --output=output
    

After running the scripts, you will find your computation results in **/java/output.

Artifacts

Our SDK artifacts come in two flavours:

  • full packages provide the full functionality and are available for mainstream targets only:
    • win-x8664
    • macos-armv8* (SDK >= 11.0; since release 2.3)
    • macos-x8664* (SDK >= 11.0; since release 2.3, until aivis engine version 2.9.0)
    • linux-x8664 (glibc >= 2.14)
  • inf packages contain only API functions regarding the inference of a model. As lightweight artifacts they are available for a broader target audience:
    • win-x8664
    • macos-armv8* (SDK >= 11.0; since release 2.3)
    • macos-x8664* (SDK >= 11.0; since release 2.3, until aivis engine version 2.9.0)
    • linux-x8664 (glibc >= 2.14)
    • linux-armv7 (glibc >= 2.18; until aivis engine version 2.9.0)
    • linux-armv8 (glibc >= 2.18; until aivis engine version 2.9.0)
    • linux-ppc64 (glibc >= 2.18; until aivis engine version 2.2.0)

* Only Python and C SDKs are supported. Java SDK is not available for this target.

In this chapter we want to demonstrate the full API functionality and thus always use the full package.

To use the Python-SDK, you must download the SDK artifact (flavour- and target-generic) for your pythonpath at build time. Additionally, at installation time, the runtime artifact must be downloaded with the right flavour and target.

The artifacts are distributed through a PyPI registry.

Using Poetry you can simply set a dependency on the artifacts specifying flavour and version. The target is chosen depending on your installation system:

aivis_engine_v2_sp_sdk_python = "{VERSION}"
aivis_engine_v2_sp_runtime_python_{FLAVOUR} = "{VERSION}"

The SDK supports the full API and will throw a runtime exception if a non-inference function is invoked with an inference-flavoured runtime.

To use the Java-SDK, you must download at build time:

  • SDK artifact (flavour and target generic) for your compile and runtime classpath
  • Runtime artifact with the right flavour and target for your runtime classpath

It is possible to include multiple runtime artifacts for different targets in your application to allow cross-platform usage. The SDK chooses the right runtime artifact at runtime.

The artifacts are distributed through a Maven registry.

Using Maven, you can simply set a dependency on the artifacts specifying flavour, version and target:

<dependency>
  <groupId>com.vernaio</groupId>
  <artifactId>aivis-engine-v2-sp-sdk-java</artifactId>
  <version>{VERSION}</version>
</dependency>
<dependency>
  <groupId>com.vernaio</groupId>
  <artifactId>aivis-engine-v2-sp-runtime-java-{FLAVOUR}-{TARGET}</artifactId>
  <version>{VERSION}</version>
  <scope>runtime</scope>
</dependency>

Alternatively, with Gradle:

implementation 'com.vernaio:aivis-engine-v2-sp-sdk-java:{VERSION}'
runtimeOnly    'com.vernaio:aivis-engine-v2-sp-runtime-java-{FLAVOUR}-{TARGET}:{VERSION}'

The SDK supports the full API and will throw a runtime exception if a non-inference function is invoked with an inference-flavoured runtime.

To use the C-SDK, you must download the SDK artifact at build time (flavour and target generic). For final linkage/execution you need the runtime artifact with the right flavour and target.

The artifacts are distributed through a Conan registry.

Using Conan, you can simply set a dependency on the artifact specifying flavour and version. The target is chosen depending on your build settings:

aivis-engine-v2-sp-sdk-c/{VERSION}
aivis-engine-v2-sp-runtime-c-{FLAVOUR}/{VERSION}

The SDK artifact contains:

  • Headers: include/aivis-engine-v2-sp-core-full.h

The runtime artifact contains:

  • Import library (LIB file), if Windows target: lib/aivis-engine-v2-sp-{FLAVOUR}-{TARGET}.lib
  • Runtime library (DLL file), if Windows target: bin/aivis-engine-v2-sp-{FLAVOUR}-{TARGET}.dll (also containing the import library)
  • Runtime library (SO file), if Linux target: lib/aivis-engine-v2-sp-{FLAVOUR}-{TARGET}.so (also containing the import library)

The runtime library must be shipped to the final execution system.

Licensing

A valid licensing key is necessary for every aivis calculation in every engine and every component. It has to be set (exported) as environment variable AIVIS_ENGINE_V2_API_KEY.

If aivis returns a licensing error despite the environment variable being set, please check the following items:

  • Terminals usually need to be restarted to learn newly set environment variables.
  • Licensing keys have the typical form <FirstPartOfKey>.<SecondPartOfKey> with first and second part being UUIDs. In particular, there is no whitespace.
  • A common error source is that the user's firewall does not let HTTPS requests to v3.aivis-engine-v2.vernaio-licensing.com (before release 2.7: v2.aivis-engine-v2.vernaio-licensing.com, before release 2.3: aivis-engine-v2.perfectpattern-licensing.de) pass and the licensing request never reaches the licensing server. In that case outgoing connections to that hostname and TCP port 443 need to be whitelisted.

Setup

Before we can invoke API functions of our SDK, we need to set it up for proper usage and consider the following things.

Releasing Unused Objects

It is important to ensure the release of allocated memory for unused objects.

In Python, freeing objects and destroying engine resources like Data-, Training- and Inference-objects is done automatically. You can force resource destruction with the appropriate destroy function.

In Java, freeing objects is done automatically, but you need to destroy all engine resources like Data-, Training- and Inference-objects with the appropriate destroy function. As they all implement Java's AutoCloseable interface, we can also write a try-with-resources statement to auto-destroy them:

try(final SignalPredictionData trainingData = SignalPredictionData.create()) {

  // ... do stuff ...

} // auto-destroy when leaving block

In C, you must always

  • free every non-null pointer allocated by the engine with aivis_free (all pointers returned by functions and all double pointers used as output function parameter e.g. Error*)
    Note: aivis_free will only free own objects. Also, it will free objects only once and it disregards null pointers.
  • free your own objects with free as usual.
  • destroy all handles after usage with the appropriate destroy function.

Error Handling

Errors and exceptions report what went wrong on a function call. They can be caught and processed by the outside.

In Python, an Exception is thrown and can be caught conveniently.

In Java, an AbstractAivisException is thrown and can be caught conveniently.

In C, every API function can write an error to the given output function parameter &err (to disable this, just set it to NULL). This parameter can then be checked by a helper function similar to the following:

const Error *err = NULL;

void check_err(const Error **err, const char *action) {

  // everything is fine, no error
  if (*err == NULL)
    return;

  // print information
  printf("\taivis Error: %s - %s\n", action, (*err)->json);

  // release error pointer
  aivis_free(*err);
  *err = NULL;

  // exit program
  exit(EXIT_FAILURE);
}

Failures within function calls will never affect the state of the engine.

Logging

The engine emits log messages to report on the progress of each task and to give valuable insights. These log messages can be caught via registered loggers.

# create logger
class Logger(EngineLogger):
    def log(self, level, thread, module, message):
        if (level <= 3):
            print("\t... %s" % message)

# register logger
SignalPredictionSetup.register_logger(Logger())
// create and register logger
SignalPredictionSetup.registerLogger(new EngineLogger() {
            
    public void log(int level, String thread, String module, String message) {
        if (level <= 3) {
            System.out.println(String.format("\t... %s", message));
        }
    }
});
// create logger
void logger(const uint8_t level, const char *thread, const char *module, const char *message) {
  if (level <= 3)
    printf("\t... %s\n", message);
}

// register logger
aivis_setup_register_logger(&logger, &err);
check_err(&err, "Register logger");

Thread Management

During the usage of the engine, a lot of calculations are done. Parallelism can drastically speed things up. Therefore, set the maximal thread count to a limited number of CPU cores, or set it to 0 to use all available cores (the default is 0).

# init thread count
SignalPredictionSetup.init_thread_count(4)
// init thread count
SignalPredictionSetup.initThreadCount(4);
// init thread count
aivis_setup_init_thread_count(4, &err);
check_err(&err, "Init thread count");

Data Input

Now that we are done setting up the SDK, we need to create a data store that holds our historical Training Data. In general, all data must always be provided through data stores. You can create as many as you want.

After the creation of the data store, you can fill it with signal data.

# create empty data context for training data
training_data = SignalPredictionData.create()

# add sample data
training_data.add_float_signal("signal-id", [
  DtoFloatDataPoint(100, 1.0),
  DtoFloatDataPoint(200, 2.0),
  DtoFloatDataPoint(300, 4.0),
])

# ... use training data ...
// create empty data context for training data
try(final SignalPredictionData trainingData = SignalPredictionData.create()) {

  // add sample data
  trainingData.addFloatSignal("signal-id", Arrays.asList(
    new DtoFloatDataPoint(100L, 1.0),
    new DtoFloatDataPoint(200L, 2.0),
    new DtoFloatDataPoint(300L, 4.0)
  ));

  // ... use training data ...

} // auto-destroy training data
// create empty data context for training data
TimeseriesDataHandle training_data = aivis_timeseries_data_create(&err);
check_err(&err, "Create training data context");

const DtoFloatDataPoint points[] = {
  {100, 1.0},
  {200, 2.0},
  {300, 4.0},
};

// add sample data
aivis_timeseries_data_add_float_signal(training_data, "signal-id", &points[0], sizeof points / sizeof *points, &err);
check_err(&err, "Adding signal");

// ... use training data ...

// destroy data context
aivis_timeseries_data_destroy(training_data, &err);
check_err(&err, "Destroy data context");
training_data = 0;

Above, we have filled the data store with three hard-coded data points to illustrate the approach. Usually, you will read in the data from some other source. In the following, we will assume you have read in the file train_sp.csv shipped with the Example Project. In particular, we assume some signal has been added with signal ID TARGET.
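For the example dataset (all signals are floats, matching defaultType FLOAT in the docker configuration), such a CSV import could be sketched in Python as follows. The SDK classes are the ones already used above; the CSV handling itself is plain Python:

import csv
from collections import defaultdict

# SignalPredictionData / DtoFloatDataPoint are imported from the aivis SDK
# as in the example project (exact module path: see reference manual)

# collect data points per signal from the CSV
# (columns: timestamp plus one column per signal, see section CSV Format)
points = defaultdict(list)
with open("../data/train_sp.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        timestamp = int(row["timestamp"])
        for signal_id, value in row.items():
            if signal_id in ("timestamp", "availability") or value == "":
                continue  # empty values are regarded as unknown and skipped
            points[signal_id].append(DtoFloatDataPoint(timestamp, float(value)))

# fill a data store with all signals, including TARGET
training_data = SignalPredictionData.create()
for signal_id, signal_points in points.items():
    training_data.add_float_signal(signal_id, signal_points)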

Training

With the data store filled with historical Training Data, we can now create our training:

# build training config
training_config = json.dumps({
  "target": {
    "signal": "TARGET"
  },
})

# create training and train the model
training = SignalPredictionTraining.create(training_data, training_config)

# ... use training ...
// build training config
final DtoTrainingConfig trainingConfig = new DtoTrainingConfig(
  new DtoTargetConfig("TARGET"));

// create training and train the model
try(final SignalPredictionTraining training = SignalPredictionTraining.create(trainingData, trainingConfig)) {

  // ... use training ...

} // auto-destroy training
// build training config
const char *training_config = "{"
  "\"target\": {"
    "\"signal\": \"TARGET\""
  "}"
"}";

// For the moment, you may take the training configuration as it is. The different configuration options will be explained later on.

// create training and train the model
SignalPredictionTrainingHandle training_handle = aivis_signal_prediction_training_create(
  training_data,
  (uint8_t *) training_config,
  strlen(training_config),
  &err
);
check_err(&err, "Create training");

// ... use training ...

// destroy training
aivis_signal_prediction_training_destroy(training_handle, &err);
check_err(&err, "Destroy training");
training_handle = 0;

Evaluation / Inference

After the training has finished, we can evaluate it by running a historical evaluation (bulk inference) on the inference data (out-of-sample). This way, we obtain a continuous stream of values, exactly as it would be desired by the machine operator.

As we do the inference in the same process as the training, we can create the inference directly from the training. If these two processes were separated, we could get the model explicitly from the training and write it to a file. The inference could then be created based on the content of the model file.
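In Python, such a separation could look roughly like the following sketch. The function names get_model and create_by_model are placeholders; the actual names are defined in the SDK reference manual:

# training process: extract the model and persist it (placeholder function names)
model = training.get_model()
with open("model.json", "w", encoding="utf-8") as f:
    f.write(model)

# inference process: recreate the inference from the persisted model
with open("model.json", encoding="utf-8") as f:
    inference = SignalPredictionInference.create_by_model(f.read(), inference_config)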

# build inference config
inference_config = json.dumps({"skipOnInsufficientData": True})

# create inference
inference = SignalPredictionInference.create_by_training(training, inference_config)

# ... use inference ...
// build inference config
final DtoInferenceConfig inferenceConfig = new DtoInferenceConfig(true);

// create inference
try(final SignalPredictionInference inference = SignalPredictionInference.createByTraining(training, inferenceConfig)) {

  // ... use inference ...

} // auto-destroy inference
// build inference config
const char *inference_config = "{\"skipOnInsufficientData\": true}";

// create inference
SignalPredictionInferenceHandle inference_handle = aivis_signal_prediction_inference_create_by_training_handle(
  training_handle,
  (uint8_t *) inference_config,
  strlen(inference_config),
  &err
);
check_err(&err, "Create inference");

// ... use inference ...

// destroy inference
aivis_signal_prediction_inference_destroy(inference_handle, &err);
check_err(&err, "Destroy inference");
inference_handle = 0;

Finally, we want to infer some predictions for a list of Inference Timestamps. Therefore, we again need to provide a filled data store, which this time holds our Inference Data, created in just the same way as our Training Data store. We then invoke the appropriate infer function with it. Note that there are further prediction methods, infer ... with category probabilities and infer float with next normal; see the sections Infer With Category Probabilities and Infer Float With Next Normal.


# choose inference timestamps
timestamps = ...

# infer predictions
predictions = inference.infer_float(inference_data, timestamps)

# ... use predictions e.g. for plotting ...
// choose inference timestamps
final List<Long> timestamps = ...

// infer predictions
final List<DtoFloatDataPoint> predictions = inference.inferFloat(inferenceData, timestamps);

// ... use predictions e.g. for plotting ...
// choose inference timestamps
Time *timestamps = ...

// infer predictions
const List_DtoFloatDataPoint *predictions = aivis_signal_prediction_inference_infer_float(
  inference_handle,
  inference_data,
  timestamps,
  timestamps_len,
  &err
);
check_err(&err, "Infer predictions");

// ... use predictions e.g. for plotting ...

// free predictions
aivis_free(predictions);
predictions = NULL;

// free timestamps
free(timestamps);
timestamps = NULL;
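For an evaluation, the inference timestamps can, for instance, be generated on an equidistant mesh. A minimal Python sketch, reusing the 2-minute mesh and the evaluation window from the Docker example:

# equidistant inference timestamps, one every 2 minutes (120000 ms)
start_time = 1598918880000  # evaluation window from the Docker example
end_time = 1600851780000
timestamps = list(range(start_time, end_time + 1, 120000))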

Using the file eval_sp.csv shipped with the Example Project, the output can be plotted like this (the labels are displayed in purple, the predictions in blue):

Evaluation

Besides plotting, you can feed the predicted values back as a new signal into your hot storage / time series database / historian and reach our initial goal of providing the machine operator with a continuous live quality KPI.

In the next chapter we will focus on the nature of input data.

Data Specification

In the course of using aivis, large amounts of data are ingested. This chapter explains the terminology as well as the required format, quality and quantity.

Timeseries Data / Signals

Most aivis engines work on time series data that is made up of signals. Every signal consists of two things, these being

  • an ID, which is any arbitrary String except timestamp and availability. The ID needs to be unique within the data.
  • a list of data points. Each data point consists of a signal value and a specific point in time, the Detection Timestamp (optionally there can also be an Availability Timestamp, but more on that later). Usually the values are the result of a measurement happening in a physical sensor like a thermometer, photocell or electroscope, but you can also use market KPIs like stock indices or resource prices as a signal.

The data points for one or more signals for a certain detection time range are called a history.

Timeseries

The values of a signal can be boolean values, 64-bit Floating Point numbers or Strings. Non-finite numbers (NAN and infinity) and empty strings are regarded as being unknown and are therefore skipped.

Points in time are represented by UNIX Timestamps in milliseconds (64-bit Integer). This means the number of milliseconds that have passed since 01.01.1970 00:00:00 UTC.
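For example, in Python such a timestamp can be derived from a wall-clock time like this:

from datetime import datetime, timezone

# 2020-02-01 00:01:00 UTC as UNIX timestamp in milliseconds
ts = int(datetime(2020, 2, 1, 0, 1, tzinfo=timezone.utc).timestamp() * 1000)
# -> 1580515260000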

Detection Timestamp

The point in time that a signal value belongs to is called the Detection Timestamp. This is usually the timestamp at which the measurement originally took place. If the measurement is a longer offline process, it should refer to the point in time at which the measured property was established, e.g. the time of sample drawing, or the production time for delayed sampling. In case of the target signal, the Detection Timestamp should be set to the time at which you would have liked to measure the signal online. In the aivis Signal Prediction example use case, the paper quality is such a signal. It is measured in a laboratory around 2 hours after the production of the paper and must be backdated to a fictitious, but instantaneous, quality measurement in the process.

Different signals may have different Detection Timestamps. Some might have a new value every second, some every minute, some just when a certain event happens. aivis automates the process of synchronizing them internally. This includes dealing with holes in the data.

Availability Timestamp

When doing a historical evaluation, we want to know what the engine would have inferred/predicted for a list of Inference Timestamps that lie in the past (Inference Timestamps are the moments for which you want to get an inference). For a realistic inference, the engine must ignore all signal values that were not yet available to the database at the Inference Timestamp. A good example for such a case is a measurement that is recorded by a human. The value of this measurement will be backdated to the Detection Timestamp, but it took, e.g., 5 minutes to extract the value and report it to the system. So, it would be wrong to assume that one minute after this fictitious Detection Timestamp, the value would already have been available to the Inference. Another example case is the fully automated, lagged data ingestion of distributed systems (especially cloud systems).

There are multiple ways to handle availability. Which strategy you use depends on the concrete use case.

Availability

To allow for these different strategies, every data point can have an additional Availability Timestamp that tells the system when this value became available or would have been available. Signal values for which the Availability Timestamp lies after the Inference Timestamp are not taken into account for an inference at this Inference Timestamp.

If there is no knowledge about when data became available, the Availability Timestamp can be set to the Detection Timestamp. But then you must keep in mind that your historical evaluation might look better than it would have been in reality.
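The rule itself is simple: a data point may be used at a given Inference Timestamp only if its Availability Timestamp is not later than that timestamp. As a Python sketch (assuming each data point carries timestamp, availability and value fields):

# keep only the data points that were already available at inference time
visible = [p for p in data_points if p.availability <= inference_timestamp]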

Data Recommendations

aivis works best on raw, unprocessed data. It is important to keep the following rules in mind:

  • Remove signals beforehand only if you are absolutely sure that they are unrelated to your objective! The engine will select all relevant signals anyway, and removing signals may reduce quality.
  • Avoid linear interpolation (or similar data processing steps), as this would include information from the future and therefore invalidate or worsen the results.
  • It is okay (except for aivis Signal Monitor) to drop consecutive duplicate values of one signal (e.g., if the value stays the same for a long period of time). This is because the engine assumes the value of a signal to be constant until a new data point is given, though there are subtleties for the target signal. It is, however, not advisable to drop duplicate values when using aivis Signal Monitor's SignalInactive trigger, since the engine learns how often the signal gets a new data point.
  • Do not train the engine on signals that won't be there in live operation (i.e., during the inference phase). Doing so could harm the prediction quality because the engine might choose to use these soon-to-be-missing signals for prediction. For aivis Signal Monitor, this may produce unnecessary (or false) warnings (e.g., SignalInactive).

Training Data Filtering

There is the possibility of filtering the Training Data in multiple ways:

  • The overall time window can be restricted.
  • Signals can be excluded and included as a whole.
  • Specific time windows of specific signals can be excluded or included.

The filtering is configurable:

  • The docker image Training Worker can be configured in the main config file.
  • The SDK Training API has filter nodes in its config structure.

This means that two models could be trained on the same data set, but on different time windows or signal sets. Alternatively, the user can of course also restrict the data that enters the engine beforehand.

Training vs. Inference Data

aivis uses data at two distinct points in the workflow:

  1. Training Data is used to train a model from knowledge that was derived from historical data. To ensure high quality of the model, you should use as many signals as possible, over a period of time and in a fine resolution that fits your objective. The engine can ingest several thousands of signals and time ranges over multiple years. The idea is to simply put in all the data you have. The engine will filter out irrelevant signals by itself.
  2. Inference Data is the small amount of live data that is used as the direct input to make an inference/prediction. For each Inference Timestamp, the engine needs a small and recent history of the relevant signals to understand the current situation of the system. You can find more information on this in the next section Inference Data Specification.

Inference Data Specification

When making an inference, aivis must know the current state of the real system by including a small portion of history.

In Training, the engine calculates which signals among the many signals in the Training Data will be relevant for the Inference. Furthermore, for each relevant signal a time window is specified relative to the Inference Timestamp. This time window determines which values of the signal must be included in order to make a prediction for said timestamp. This doesn't only include the values within the time window, but also either the value right at the start, or the last value before the time window (see "Nearest Predecessor"). This information is called the Inference Data Specification and must be obeyed strictly when triggering Inference, as the engine relies on this data.

You can inspect a model for its Inference Data Specification.

It is possible to set the maximum amount of time to be included in the local history. This is done in the configuration of the Training via the parameter Maximal Lag.

The following diagram gives you a visual representation of what an Inference Data Specification could look like:

Inference Data Specification

In the diagram you see that a start lag and an end lag are specified for every signal. For the Inference, this means that for each signal we need all data points whose detection timestamps lie in the window [ inference timestamp - start lag; inference timestamp - end lag ], as well as the nearest predecessor (see below).

Nearest Predecessor

As previously mentioned, it is essential that you provide data for the whole time window. In particular, it must be clear what the value at the beginning is, i.e. at inference timestamp - start lag.

Typically, there is no measurement for exactly this point in time. In that case, you must provide the nearest predecessor: the last value before the beginning of the time window. The engine can then at least take this value as an estimate. Of course, this first data point must also be available at the Inference Timestamp (regarding the Availability Timestamp).

Nearest Predecessor

Depending on the configuration, the engine will either throw an error or ignore timestamps for which you provide neither a value at the beginning of the time window nor a nearest predecessor. This implies that you always need at least one available value per relevant signal. Sending more data outside the demanded time window will have no effect on the inference, though.
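As an illustration, the selection logic for a single signal could be sketched in Python as follows (a sketch only; the engine performs this selection internally, based on the Inference Data Specification):

def select_for_inference(points, inference_ts, start_lag, end_lag):
    """Pick the window [t - start_lag, t - end_lag] plus the nearest predecessor."""
    window_start = inference_ts - start_lag
    window_end = inference_ts - end_lag
    # points are assumed to be sorted by detection timestamp
    in_window = [p for p in points if window_start <= p.timestamp <= window_end]
    predecessors = [p for p in points if p.timestamp < window_start]
    if predecessors and (not in_window or in_window[0].timestamp > window_start):
        # the nearest predecessor serves as the value estimate at the window start
        in_window.insert(0, predecessors[-1])
    return in_window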

CSV Format

All artifacts use CSV as the input data format. As the CSV format is highly non-standardized, we will discuss it briefly in this section.

CSV files must be stored in a single folder specified in the config under data.folder. Within this folder the CSV files can reside in an arbitrary subfolder hierarchy. In some cases (e.g. for HTTP requests), the folder must be passed as a ZIP file.

General CSV rules:

  • The file’s charset must be UTF-8.
  • Records must be separated by Windows or Unix line ending (CR LF/LF). In other words, each record must be on its own line.
  • Fields must be separated by comma.
  • The first line of each CSV file represents the header, which must contain column headers that are file-unique.
  • Every record including the header must have the same number of fields.
  • Text values must be enclosed in quotation marks if they contain literal line endings, commas or quotation marks.
  • Quotation marks inside such a text value have to be prefixed (escaped) with another quotation mark.

Special rules:

  • One column must be called timestamp and contain the Detection Timestamp as UNIX Timestamps in milliseconds (64-bit Integer)
  • Another column can be present that is called availability. This contains the Availability Timestamp in the same format as the Detection Timestamp.
  • All other columns, i.e. the ones that are not called timestamp or availability, are interpreted as signals.
  • Signal IDs are defined by their column headers
  • If there are multiple files containing the same column header, this data is regarded as belonging to the same signal
  • Signal values can be boolean values, numbers and strings
  • Empty values are regarded as being unknown and are therefore skipped
  • Files directly in the data folder or in one of its subfolders are ordered by their full path (incl. filename) and read in this order
  • If there are multiple rows with the same Detection Timestamp, the data reader passes them all to the engine, which uses the last value that has been read

Boolean Format

Boolean values must be written in one of the following ways:

  • true/false (case insensitive)
  • 1/0
  • 1.0/0.0 with an arbitrary number of additional zeros at the end

Regular expression: (?i:true)|(?i:false)|1(\.0+)?|0(\.0+)?

Number Format

Numbers are stored as 64-bit Floating Point numbers. They are written in scientific notation like -341.4333e-44, so they consist of the compulsory part Significand and an optional part Exponent that is separated by an e or E.

The Significand contains one or multiple figures and, optionally, a decimal separator (.). In such a case, figures before or after the separator can be omitted and are assumed to be 0. It can be prefixed with a sign (+ or -).

The Exponent contains one or multiple figures and can be prefixed with a sign, too.

The 64-bit Floating Point specification also allows for 3 non-finite values (not a number, positive infinity and negative infinity) that can be written as nan, inf/+inf and -inf (case insensitive). These values are valid, but the engine regards them as being unknown and they are therefore skipped.

Regular expression: (?i:nan)|[+-]?(?i:inf)|[+-]?(?:\d+\.?|\d*\.\d+)(?:[Ee][+-]?\d+)?
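A quick Python check against this pattern (written with a global IGNORECASE flag instead of the inline (?i:...) groups, which Python's re module only supports from version 3.11 on):

import re

# same semantics as the documented pattern
NUMBER = re.compile(r"nan|[+-]?inf|[+-]?(?:\d+\.?|\d*\.\d+)(?:e[+-]?\d+)?", re.IGNORECASE)

for raw in ["-341.4333e-44", ".5", "1.", "+inf", "NaN", "abc"]:
    print(raw, bool(NUMBER.fullmatch(raw)))
# the first five are valid; "abc" is not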

String Format

String values must be encoded as UTF-8. Empty strings are regarded as being unknown values and are therefore skipped.

Example

timestamp,availability,SIGNAL_1,SIGNAL_2,SIGNAL_3,SIGNAL_4,SIGNAL_5
1580511660000,1580511661000,99.98,74.33,1.94,true,
1580511720000,1580511721000,95.48,71.87,-1.23,false,MODE A
1580511780000,1580511781000,100.54,81.19,,1e-5,MODE A
1580511840000,1580511841000,76.48,90.01,2.46,0.0,MODE C
...

Preparation

Previous sections gave an introduction on how to use aivis Signal Prediction and also shed some light on how it works. The following sections will provide a more profound background. It is not necessary to know this background to use aivis Signal Prediction! However, you may find convenient solutions for specific problems, such as special data kinds or restrictions, or ways to reduce training computation time. The following sections are organized in the natural order of the workflow. By workflow, we mean the cycle of data preparation, model training, and finally using the model to make inferences. It will become clear that only minimal user input is required for this workflow. Nevertheless, the user has the option to control the process with several input parameters, which will be presented below.

Before approaching the workflow steps and their configuration, two things are required. First, there must be a data set that meets the previously described data specifications. Second, and this is equally important, it must be clear what exactly is to be predicted from the data. Applying this to our introductory use case example, the data set would consist of sensor readings from the production line and a few additional values originating from lab measurements. The goal of ensuring production quality then directly leads to the need of determining the current paper quality, i.e., predicting the outcome of a lab measurement. However, these tasks aren't always as plain as in the introductory use case example. Often, it makes sense to take a step back and contemplate the goal of the task at hand. From there, it should be possible to formulate a question towards achieving this goal. In the case of aivis Signal Prediction, the question will always be of the form “What will the value of X be?”. This determines the signal (or combination of signals) which is to be predicted, or, in other words, the target. The technical formulation of the question is done with the expression language, but more on that later.

Training

The training comprises all the steps needed to reach a prediction model. aivis relieves the user of many steps that are typically associated with the machine learning process: some steps are performed automatically, others are rendered superfluous. While domain knowledge can be employed, it is not a necessity. The following sections illuminate these topics in more detail, explaining how the raw data is transformed, how additional features are extracted automatically or manually, and how relevant features are identified. The model configuration parameters are then explained in some detail; here, we focus on the underlying concepts. When it comes to actually making the inputs, syntactical questions will be answered by the aivis reference manuals, which define the exact inputs that can be made, depending on whether you are using one of the SDKs or a Docker container.

Workflow

Feature Engineering

The requirements for the input data that were formulated in section Data Specification serve the purpose of making it possible for aivis to read the data. Typically, several additional steps are required to make the data appropriate to be fed into a machine learning algorithm. Among others, these include:

  • synchronizing timestamps
  • dealing with missing values
  • standardization
  • handling outliers
  • filtering white noise

All of the above is handled by aivis automatically. Here, data preparation steps that go beyond anything described in the “Data Specification” section are not necessary and even discouraged as they may alter the underlying information. Synchronization is not necessary as aivis treats each signal separately as a time series, and this also eliminates the need for imputation of missing values. Standardization is a routine task that is executed internally (if necessary at all). Outliers don’t need to be removed beforehand or cut back thanks to the way the aivis engine builds its models. Some minimal outlier handling might still be beneficial as will be explained below.

When the data is brought into a form that can directly be used for model building, it is referred to as “features”. Sometimes, signals are noisy and fluctuate a lot, while the change in what the signal actually is measuring may be only very small. To reduce such noise, a common approach would be to calculate a moving mean and use it as a feature. This is just one example for data aggregation over some time interval but there may be other cases, involving e.g. the maximum of some value in a certain time range, or similar. In aivis, such feature engineering using moving time windows is not necessary. Here, it pays off that aivis understands the data as time series and automatically extracts powerful features. Again, this is explained in more detail below.

As mentioned above, aivis takes over many time-consuming data preparation and feature engineering tasks. As the aivis training algorithm is very powerful, in most cases the automatically generated features already suffice to obtain an optimal model. For some special kinds of signals, such as angles, audio data, or categorical data, there are built-in interpreters that take care of the relevant peculiarities. Nevertheless, there are still some situations in which the model performance may improve by adding manually engineered features that you already know or expect to be related to the target. For this purpose, the expression language is provided, which facilitates manual feature engineering. This way, domain knowledge can easily be taken into account. However, you will be surprised how well aivis predicts signals even without any manual efforts.

Signal Cleaning

The philosophy of aivis is that the user does not need to care about which signals may be relevant. Therefore, you may insert a large number of signals. From each signal, typically several more features are auto-engineered, e.g. to reflect the behavior of the signal within some relevant time window. This means that we have many features that can all potentially be used for model training. In most cases however, only a small selection of the available features will be of use for predicting the target. Although inclusion of irrelevant features may not worsen prediction accuracy, it will have a negative impact on computational resources for model training. While identifying the relevant features has traditionally required domain knowledge, aivis is capable of making the selection automatically.

Finding the relevant signals is done by calculating the distance correlations of the features to the target. The distance correlation is a measure of how much two vectors are related to each other. For example, if the distance correlation between some feature and the target signal equals 0, then the feature and the target signal are statistically independent; nothing can be concluded from this feature. On the other hand, if the distance correlation between some feature and the target signal equals 1, the target signal could be perfectly predicted by this signal alone. In this way, signals which are (practically) independent of the target are cleaned out automatically in the analysis.
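For reference, the distance correlation follows the standard definition via distance covariance and distance variances:

\mathrm{dCor}(X, Y) = \frac{\mathrm{dCov}(X, Y)}{\sqrt{\mathrm{dVar}(X)\,\mathrm{dVar}(Y)}} \in [0, 1]

It vanishes exactly when X and Y are statistically independent, which is the property exploited here.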

Even if some feature (distance) correlates with the target signal, it does not necessarily mean that it adds relevant information. Often, two or several features behave so similarly that they effectively become redundant. In this case, the more important ones are kept, thus enabling a more effective training stage.

Hereinafter, we will go back to using the word “signal” instead of “feature”. While there are subtle differences, in most cases the terms can be used interchangeably.

Segmentation

In addition to feature engineering and cleaning, another preparative step is segmentation. Segmentation is a distinctive virtue of aivis, and – depending on the data – can prove to be very advantageous. During segmentation, the data is checked for homogeneity of the relationship between signals and the target. If it is observed that the relation between data and target differs notably on subsets of the data, different segments are formed. Maybe you produce three different products using the same machines. The machines may then run under different operating conditions. This is an obvious example where segmentation may prove useful. Note, however, that in general things are more intricate. On the one hand, a single segment might suffice even for this case: Even though the signals may take different values for the different operating modes, their interrelations may still be the same. On the other hand, having several segments quite frequently turns out to be helpful. This fully applies also to cases for which there is no a priori reason to expect several different behaviors.

Model building

After having generated and selected the best features and clustered the data into different segments, the last step of the training is model building. This step finally produces a model: something that is capable of making predictions. During this step, the historic signals and auto-engineered features are related to the target signal. These relations can be highly non-linear and may also involve interactions of several signals. They are baked into a model that can be used to determine the target values based on live signal data.

Generally, a model is always a function that can take values as an input and produce one or several values as an output. In the case of aivis Signal Prediction, the output is a single value, the predicted target value for a given timestamp. The model consists of a collection of coefficients and rules to predict the target. In practice, the “content” of the model is not relevant for the user, as all that stays behind the scenes.

A Fully Loaded Training Configuration

While the previous section was concerned with what aivis does automatically, the following section explains the parameters the user may configure to control the training. We start with an overview of all kinds of possible configuration keys. We stress that the vast majority of the keys is optional. A minimal training configuration was already used above in the SDK Training and Docker Training sections. This example may mainly serve as a quick reference. The meaning of the different keys is explained in the following sections, and a definition of the syntax is given in the reference manuals.

training:
  dataFilter:
    startTime: 1580527920000
    endTime: 1594770360000
    excludeSignals:
    - signal: SIGNAL_7
      startTime: 1589060760000
      endTime: 1589692980000
    # includeSignals: ... similar
    includeRanges:
    - startTime: 1580527920000
      endTime: 1589060760000
    - startTime: 1589692980000
      endTime: 1594770360000
    # excludeRanges: ... similar
  target:
    signal: TARGET
    interpreter:
      _type: Categorical
    removeOutliers: false
    lagging:
      maximalLag: 300000
      minimalLag: 180000
      mesh: 60000
  sampling:
    additionalSampleMesh: 30000
    maximalSampleCount: 100000
  operativePeriods:
    signal: MY_BOOLEAN_OPERATIVE_SIGNAL
  signals:
  - signal: SIGNAL_11
    forceRetain: true
  - signal: SIGNAL_13
    lagging:
      maximalLag: 300000
      minimalLag: 0
      mesh: 30000
  - signal: SIGNAL_22
    interpreter:
      _type: Cyclic
      cycleLength: 1.3
  - signal: SIGNAL_23
    interpreter:
      _type: Oscillatory
      windowLength: 1200000
      mesh: 60000
  - signal: MY_CATEGORICAL_SIGNAL
    interpreter:
      _type: Categorical
  lagging:
    maximalLag: 300000
    minimalLag: 60000
    mesh: 60000
  modeling:
    controlPointCount: 2500
    enableIncremental: true
training_config = json.dumps({
  "dataFilter": {
    "startTime": 1580527920000,
    "endTime": 1594770360000,
    "excludeSignals": [{
      "signal": "SIGNAL_7",
      "startTime": 1589060760000,
      "endTime": 1589692980000
    }],
    # "includeSignals": ... similar
    "includeRanges" : [{
      "startTime": 1580527920000,
      "endTime": 1589060760000
    },{
      "startTime": 1589692980000,
      "endTime": 1594770360000
    }],
    # "excludeRanges": ... similar
  },
  "target": {
    "signal": "TARGET",
    "interpreter": {
      "_type": "Categorical"
    },
    "removeOutliers": False,
    "lagging": {
      "maximalLag": 300000,
      "minimalLag": 180000,
      "mesh": 60000
    }
  },
  "sampling": {
    "additionalSampleMesh": 30000,
    "maximalSampleCount": 100000
  },
  "operativePeriods": {
    "signal": "MY_BOOLEAN_OPERATIVE_SIGNAL"
  },
  "signals": [{
    "signal": "SIGNAL_11",
    "forceRetain": True
  }, {
    "signal": "SIGNAL_13",
    "lagging": {
      "maximalLag": 300000,
      "minimalLag": 0,
      "mesh": 30000
  }}, {
    "signal": "SIGNAL_22",
    "interpreter": {
      "_type": "Cyclic",
      "cycleLength": 1.3
  }}, {
    "signal": "SIGNAL_23",
    "interpreter": {
      "_type": "Oscillatory",
      "windowLength": 1200000,
      "mesh": 60000
  }}, {
    "signal": "MY_CATEGORICAL_SIGNAL",
    "interpreter": {
      "_type": "Categorical"
  }}],
  "lagging": {
    "maximalLag": 300000,
    "minimalLag": 60000,
    "mesh": 60000
  },
  "modeling": {
    "controlPointCount": 2500, 
    "enableIncremental": True
  } 
})
final DtoTrainingConfig trainingConfig = new DtoTrainingConfig(
  new DtoTargetConfig("TARGET")
    .withInterpreter(new DtoCategoricalTargetInterpreter())
    .withRemoveOutliers(false)
    .withLagging(new DtoTargetLaggingConfig(300000L, 180000L, 60000L))
)
  .withDataFilter(new DtoDataFilter()
    .withStartTime(1580527920000L)
    .withEndTime(1594770360000L)
    .withExcludeSignals(new DtoDataFilterRange[] { 
      new DtoDataFilterRange("SIGNAL_7") 
        .withStartTime(1589060760000L)
        .withEndTime(1589692980000L)
    })
    // .withIncludeSignals ... similar
    .withIncludeRanges(new DtoInterval[] { 
      new DtoInterval()
        .withStartTime(1580527920000L)
        .withEndTime(1589060760000L),
      new DtoInterval()
        .withStartTime(1589692980000L)
        .withEndTime(1594770360000L)
    })
    // .withExcludeRanges ... similar
  )
  .withSampling(new DtoSamplingConfig()
    .withAdditionalSampleMesh(30000L)
    .withMaximalSampleCount(100000)
  )
  .withOperativePeriods(new DtoOperativePeriodsConfig("MY_BOOLEAN_OPERATIVE_SIGNAL"))
  .withSignals(new DtoSignalConfig[] {
    new DtoSignalConfig("SIGNAL_11")
      .withForceRetain(true),
    new DtoSignalConfig("SIGNAL_13")
      .withLagging(new DtoSignalLaggingConfig(300000L, 30000L)
        .withMinimalLag(0L)),
    new DtoSignalConfig("SIGNAL_22")
      .withInterpreter(new DtoCyclicSignalInterpreter(1.3)),
    new DtoSignalConfig("SIGNAL_23")
      .withInterpreter(new DtoOscillatorySignalInterpreter(1200000L, 60000L)),
    new DtoSignalConfig("MY_CATEGORICAL_SIGNAL")
      .withInterpreter(new DtoCategoricalSignalInterpreter()),
  })
  .withLagging(new DtoLaggingConfig(300000L, 60000L)
    .withMinimalLag(60000L))
  .withModeling(new DtoModelingConfig()
    .withControlPointCount(2500)
    .withEnableIncremental(true)); 
const char *training_config = "{"
  "\"dataFilter\": {"
    "\"startTime\": 1580527920000,"
    "\"endTime\": 1594770360000,"
    "\"excludeSignals\": [{"
      "\"signal\": \"SIGNAL_7\","
      "\"startTime\": 1589060760000,"
      "\"endTime\": 1589692980000"
    "}]," 
    // "\"includeSignals\": ... similar
    "\"includeRanges\": [{"
      "\"startTime\": 1580527920000,"
      "\"endTime\": 1589060760000"
      "}, {"
      "\"startTime\": 1589692980000,"
      "\"endTime\": 1594770360000"
      "}]"    
    // "\"excludeRanges\": ... similar             
  "},"
  "\"target\": {"
    "\"signal\": \"TARGET\","
    "\"interpreter\": {"
      "\"_type\": \"Categorical\""
    "},"
    "\"removeOutliers\": false,"
    "\"lagging\": {"
      "\"maximalLag\": 300000,"
      "\"minimalLag\": 180000,"
      "\"mesh\": 60000"
    "}"
  "},"
  "\"sampling\": {"
    "\"additionalSampleMesh\": 30000,"
    "\"maximalSampleCount\": 100000"
  "},"
  "\"operativePeriods\": {"
    "\"signal\": \"MY_BOOLEAN_OPERATIVE_SIGNAL\""
  "},"
  "\"signals\": [{"
    "\"signal\": \"SIGNAL_11\","
    "\"forceRetain\": true"
  "}, {"
    "\"signal\": \"SIGNAL_13\","
    "\"lagging\": {"
      "\"maximalLag\": 300000,"
      "\"minimalLag\": 0,"
      "\"mesh\": 30000"
  "}}, {"
    "\"signal\": \"SIGNAL_22\","
    "\"interpreter\": {"
      "\"_type\": \"Cyclic\","
      "\"cycleLength\": 1.3"
  "}}, {"
    "\"signal\": \"SIGNAL_23\","
    "\"interpreter\": {"
      "\"_type\": \"Oscillatory\","
      "\"windowLength\": 1200000,"
      "\"mesh\": 60000"
  "}}, {"
    "\"signal\": \"MY_CATEGORICAL_SIGNAL\","
    "\"interpreter\": {"
      "\"_type\": \"Categorical\""
  "}}],"
  "\"lagging\": {"
    "\"maximalLag\": 300000,"
    "\"minimalLag\": 60000,"
    "\"mesh\": 60000"
  "},"
  "\"modeling\": {"
    "\"controlPointCount\": 2500,"
    "\"enableIncremental\": true"
  "}"
"}";

Data Filter: Exclude Parts of the Data

The following sections list and explain the parameters the user may configure to control the training. The sections are organized along the structure of the configuration classes.

The data filter allows you to define the signals and time ranges that are used for training. Signals can be chosen in either of two ways: exclude specific signals (exclude signals), or, alternatively, provide a list of signal names to include (include signals). Beyond that, the data filter allows you to determine the time range of data that is used for training, and even to include or exclude separate time ranges for specific signals.

There are several situations in which data filters come in handy. The global end time may be used to split the data set into a training set and an inference data set. As the name suggests, the training set is used to train a model. The inference data set may afterwards be used to assess the model’s quality; typically, 10–15 % of the data are reserved for this evaluation. It is important to make sure that the model does not use any data from the inference data set during training. In fact, it should at no point have access to any information that would not be present under productive circumstances. This is necessary to optimize the model for the real situation and to ensure that model performance tests are meaningful.

As for selecting the global start time, having more data from a longer time range is almost always advantageous, as this allows aivis to get a clear picture of the various ways in which signal behaviors influence the target. That being said, the chosen time range should be representative of the time period for which predictions are to be made. If some process was completely revised at your plant, this may have affected signal values or the relationships between signals. In such a case, it is advisable to include sufficient data from the time after the revision. For major revisions, it might even be preferable to restrict the training to data after the revision. Such a restriction can easily be imposed using the start time.
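As a minimal sketch of the train/test split described above (all keys are taken from the examples in this guide; the concrete timestamp is a placeholder):

import json

SPLIT_TIME = 1594770360000  # illustrative epoch-millisecond split point

# the model is trained only on data before the split point ...
training_config = json.dumps({
    "dataFilter": {"endTime": SPLIT_TIME},
    "target": {"signal": "TARGET"}
})

# ... and evaluated only on data after it ("out-of-sample")
inference_config = json.dumps({
    "dataFilter": {"startTime": SPLIT_TIME},
    "skipOnInsufficientData": True
})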

It is also possible to globally include or exclude several time intervals, instead of just selecting one global time interval. This is carried out using the fields include ranges and exclude ranges 2.5. It is important to understand how the global and signal-based includes/excludes interact. If both include signals and include ranges are set, first the signals listed in include signals are taken with their respective time ranges, and these time ranges are then intersected with the global time ranges defined in include ranges. Exclude ranges and exclude signals work the other way around: the globally excluded time ranges and the time ranges of the signals listed in exclude signals are united.

Analogous to the global start time and end time, such intervals can also be specified for individual signals. Note, however, that it is usually advisable to apply time constraints globally.

Data excluded by data filters is still available to the expression language but does not directly enter the model building. Control over signal selection for model building is provided by the signal configuration. Finally, note that, complementary to the data filter, training periods can conveniently be defined via operative periods.

Target Configuration: Define your Prediction Goal

Perhaps the most important input parameter is the signal you want to predict, as it is directly related to the goal you want to achieve. We call this signal the target signal. In most cases, it will simply be one of the signals in your data. However, it is also possible to define a target based on multiple signals using various mathematical functions and operators. Very importantly, you can also define the relative time for which you want to make predictions. For example, you may be interested in predicting some signal one hour in advance, to be able to react to unintended behavior in due time. Such data transformations can be achieved with the powerful expression language, explained in detail in “Appendix 1: Expression Language”.

When you have chosen or constructed your target signal, you need to inform the engine by passing its signal ID. Note that if you have synthesized the target signal with the expression language, all signals that are part of the expression are automatically excluded from the training. This way, signals that are used to build the target are not used later on for prediction of the target. This corresponds to the common case in which these signals are only available during training but will not be available during live prediction. However, there are also other cases. Maybe you want to predict some signal in advance, say by one hour. Then you probably don't want to exclude this signal from the training. The present signal value may be very relevant to allow for a prediction in the future. Therefore, this automatic exclusion can be overruled. To use past values of the target signal for prediction, a target lagging configuration can be set. If you have synthesized the target signal with the expression language, and want to protect the signals that enter this expression from automatic exclusion, just set signal configurations.
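As a hedged sketch, a target configuration that lets the model use past target values could look as follows; the time spans of one to two hours are purely illustrative:

# past target values from 1 h to 2 h before the prediction time may be used
# as additional predictors; all time values are illustrative and in milliseconds
target_config = {
    "signal": "TARGET",
    "lagging": {
        "maximalLag": 7200000,  # 2 h
        "minimalLag": 3600000,  # 1 h
        "mesh": 600000          # one lagged value every 10 min
    }
}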

As mentioned before, there is no need to remove outliers from the features, since the way aivis Signal Prediction operates on the data gives outliers little leverage. However, this does not apply to the target signal. By default, outliers in the target signal are ignored. This makes sense if outliers relate to measurement problems or special situations that are not of interest. If, on the other hand, outliers are to be considered part of normal behavior, they need to be included in the analysis, and the default behavior can be switched off.

Furthermore, you may want to configure the interpretation of the target signal. By default, float signals are interpreted as numerical, and string and boolean signals as categorical. In fact, string and boolean signals can only be interpreted as categorical. Categories, however, may also be coded as numbers, even though the order of these numbers then has no meaning. Therefore, a float target may also be interpreted as categorical. More details on the concept of an interpreter can be found in the section on signal configurations. If a numerical target interpreter is chosen, regression is performed; a categorical target interpreter means classification.

Sampling Configuration: Adjust the number of training points

Usually, all data points for which there is a target value in the data are used for training. This may be more than necessary. Setting the maximal sample count helps you prevent excessive training durations by limiting the number of target points that are used. For example, say you have a data set with a target signal that contains 2 million values. Using all of these values for training could lead to excessive computational effort. By default, this number is reduced to 500,000. To speed up the computations, it can be reduced further; good results are often achieved with just 100,000 sample points or even fewer. This works because aivis Signal Prediction automatically chooses the most informative sample points, making sure that the whole spectrum of behavioral patterns is covered. If the maximal sample count is set to a number larger than the actual number of target values in the data, all sample points are considered.

On the other hand, the number of training points may also be too low. In general, this means that more data needs to be acquired. However, there is a notable exception for which additional sample points can easily be added: if you know that the target hardly changes between the timestamps for which target values are provided, additional samples can be generated. The distance in time between these additional samples is controlled by the additional sample mesh. Note, however, that the new samples only add information if some relevant signal changes between samples. If the data contains information on the target for timestamps t1, t2, t3, ... and information on other signals exists only for those timestamps, it makes little sense to calculate additional samples.
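A minimal sketch of a sampling configuration combining both options (the values are placeholders):

sampling_config = {
    "maximalSampleCount": 100000,   # cap the number of training samples
    "additionalSampleMesh": 30000   # one extra sample every 30 s where the target is flat
}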

Operative Periods: Exclude Downtimes

Sometimes, the target includes data points for which a prediction is not desired. A typical situation is the time during which a machine is off, possibly including warm-up and cool-down phases. Similarly, no prediction is typically desired during maintenance. To restrict the model training to times of interest, a signal may be assigned as the operative signal. An operative signal must be boolean; training is then restricted to target timestamps for which the operative signal is “true”. Often there is no such signal in the raw data, but it may be easy to derive the operative times from other signals. For example, some motor may be stopped whenever production is off, so a motor speed above a certain threshold may be used to define the operative periods. For such situations, an operative signal can easily be created with the help of the Expression Language.
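As a hedged sketch, assume a signal "MOTOR_SPEED" (a hypothetical ID) that exceeds 10 during production, but also for a few hours during maintenance. An operative signal could then be synthesized with the expression language (see Appendix 1) and assigned in the training configuration:

# expression for a synthetic boolean signal: true while the motor runs;
# set_sframe flips short above-threshold periods (here: anything below one
# day = 86400000 ms, e.g. maintenance) back to false
operative_expression = 'set_sframe(s("MOTOR_SPEED") > 10, false, 86400000)'

# the synthesized signal is then assigned as the operative signal
operative_periods_config = {"signal": "MY_BOOLEAN_OPERATIVE_SIGNAL"}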

Signal Configuration: If Signals Require Special Treatment

The signal configuration is the place to pass additional information about feature signals in order to enforce a special treatment. Each signal configuration refers to one specific signal.

Interpreter

At the core of the signal configuration is the interpreter. The interpreter defines which features are built from a signal and how these enter the engine. We call the features produced by an interpreter the aspects of the signal. Very often the default configuration is the best choice and you don't need to set any interpreter. However, it is important to configure any non-standard behavior here, as it may strongly affect the outcome. Below you find an overview of the different interpreters, followed by some more in-depth explanations.

  • Default – Corresponds to a numerical interpreter for float signals, and to a categorical one for string and boolean signals.
  • Numerical – No special aspect generation; the signal is taken as it is. Examples: speed, temperature, weight, ...
  • Categorical – Each signal value corresponds to some category; categories have no order. Examples: color, operation mode, on/off, ...
  • Cyclic – Signal values can be mapped to a finite interval whose lower and upper bounds are identified with each other. Examples: angles (0° to 360°), time of the day (0:00 to 24:00), ...
  • Oscillatory – The signal contains periodically recurrent parts; the interest is in the frequency of recurrences rather than in the actual signal values. Examples: audio data, vibrations, ...

By default, all float signals are interpreted as numerical. This interpreter should be used for all signals for which the order of numbers is meaningful and which don't require special treatment. A thermometer, for example, generates numerical data: the smaller the number, the colder the temperature. It is irrelevant whether the scale is continuous or whether the thermometer’s reading precision is limited to integer degrees. The numerical signal kind is quite common for float signals, but there are also situations in which it does not fit. Therefore, float signals may also be declared as any of the other signal kinds.

String and boolean signals are always interpreted as categorical. Categorical data has a nominal scale, i.e., it takes only specific levels and does not necessarily follow any order. In practice, this expresses information about certain states, such as “green”, “red”, or “blue”. This information may be present in the form of strings or booleans, or it may be encoded in numbers. An example could be a signal for which "1.0" stands for "pipe open", "2.0" for "pipe blocked", and "3.0" for "pipe sending maintenance alarm".

For a cyclic signal, only the residue from division by the cycle length is accounted for. The order of numbers is thus meaningful, but it wraps at the cycle length. A common example is angles, which are usually defined in the interval \(0\) to \(2 \pi\), i.e., with a cycle length of \(2 \pi\). If the signal takes a value outside this range, it is automatically mapped into it: for example, \(2.1 \pi\) is identified with \(0.1 \pi\). And, of course, \(0\) and \(1.99 \pi\) are considered to be close to each other. Another example can be derived from a continuous time signal. Say time is measured in hours; then applying an interpreter with cycle length 24 yields an aspect that describes the time of the day.
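For illustration, a hypothetical signal carrying the time of day in hours could be configured like this:

signal_config = {
    "signal": "TIME_OF_DAY_HOURS",  # illustrative signal ID, measured in hours
    "interpreter": {
        "_type": "Cyclic",
        "cycleLength": 24           # 23:59 and 0:00 are treated as close
    }
}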

Finally, audio, vibration, or any other data that oscillates with some periodicity may best be interpreted as oscillatory. Oscillatory signals are interpreted in the frequency domain. In order to calculate a frequency spectrum and automatically derive the most relevant aspects, two configuration parameters are necessary.

The mesh describes the shortest timespan to consider, i.e., the inverse sampling frequency. For example, a mesh of 2 milliseconds means a sample rate of 0.5 kHz. Within this documentation, the unit of the timestamps is usually assumed to be milliseconds to keep explanations concise. However, the unit of timestamps is irrelevant internally. Oscillatory signals may well have sample rates above 1 kHz, for which a more fine-grained time unit is necessary. For example, a 32 kHz audio signal has a signal value every 0.03125 milliseconds. In this case, the usual notion of timestamps as milliseconds no longer works. Instead, timestamps may be provided in units of nanoseconds, and full information is retained for a mesh of 31250. (Alternatively, timestamps may be provided in units of one thirty-second of a millisecond, and full information is retained for a mesh of 1.) If the highest frequencies of the signal are expected not to be relevant, i.e., if the microphone or detector records at a higher rate than actually needed, the mesh may be chosen larger than the difference between timestamps in the data. In the above example, a mesh of 62500 nanoseconds would retain only every second value.

The other parameter is the window length. It describes the longest time span to consider for a frequency spectrum. Therefore, it should reflect some reasonable time to "listen" to the signal before trying to get information out of it. The window length defines the period of the lowest frequency that can be analysed; at the very least, it should be as long as the period of the lowest relevant frequency. If no data is provided during some interval larger than twice the mesh, no frequency spectrum is calculated for this gap. Instead, the frequency spectrum is calculated for the last period without a gap over the window length. This behavior allows for discontinuous signal transmission. To reduce the amount of data, for example, the signal may be provided in bunches which are sent every 2 minutes, each covering a time period of 10 seconds. The drawbacks are some delay of the results (up to 2 minutes in this example) and the loss of information about anything happening within the gaps.
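To make the arithmetic concrete, the following sketch derives mesh and window length for the 32 kHz audio example above, with timestamps given in nanoseconds (the signal ID and the chosen window are illustrative):

NS_PER_SECOND = 1_000_000_000
sample_rate_hz = 32_000
mesh_ns = NS_PER_SECOND // sample_rate_hz  # 31250 ns between two samples
window_ns = NS_PER_SECOND                  # "listen" for 1 s; lowest analysable frequency: 1 Hz

signal_config = {
    "signal": "MICROPHONE",                # illustrative signal ID
    "interpreter": {
        "_type": "Oscillatory",
        "windowLength": window_ns,
        "mesh": mesh_ns
    }
}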

Sometimes, it is not clear which interpreter to choose. As an example, take a signal for which "0.0" stands for "no defects", "1.0" for "isolated microscopic defects" and "3.0" for "microscopic connected defects". A priori, one may assume that the effect of isolated defects lies somewhere between no defects and connected defects, and thus assign the numerical scale. On the other hand, isolated microscopic defects may have no relevant effect: it may be irrelevant whether there are no or only isolated defects. In this case, a categorical scale would be preferable. Such a situation can easily be dealt with: create a duplicate of the signal with the help of the expression language, configure one of the signals as numerical and the other as categorical, and let aivis make the best out of both.

Enforce Retaining Signals

This configuration option was already mentioned in the section on the data filter. If force retain is set to true, the corresponding signal will not be cleaned away and thus definitely enters the model building. As aivis performs very well in feature selection, you should not expect to improve the model performance by forcing signals to be included. Nevertheless, you may be interested in how the model or its performance changes when some signal is included. In a first run, you may have noticed that some signal was cleaned away although you expect, or even know from previous data analysis, that it is related to the target. In this case, you may force the engine to retain it in order to calculate and retrieve its correlation to the target and its relevance compared to other signals. Note that the same information is often contained in different signals, so even signals with a high correlation to the target may be excluded because they are redundant.

Signal Specific Lagging Configuration

2.3

The use of a global lagging configuration is explained below. Sometimes it makes sense to adjust the lagging for specific signals. Among other things, this helps to keep the computational effort low, and it is useful if you have prior information about which (range of) lags is most relevant for some signals: for a few signals, a very high maximal lag may be useful, or a very small mesh. For example, in a production line, it may take a few seconds or minutes from one process step to the next. Let's assume you are interested in the third process step. Then signals that refer to this step may be configured with zero lag, while signals that describe the first two process steps may be configured with some lag > 0.

A typical case for the application of signal configurations arises when the target is synthesized by the expression language. In this case, all signals that are part of the expression are automatically excluded; see the section on the target configuration. However, this exclusion is not performed for signals with a specific signal configuration. In this scenario, the minimal lag may be of particular relevance to model delayed information availability in the database; see the sections on lagging and on the target configuration. A sketch of per-signal lagging follows below.
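A hedged sketch for the production line example, assuming the material needs roughly five minutes per process step (all signal IDs and time values are illustrative):

signal_configs = [
    # signal at the process step of interest: only recent values matter
    {"signal": "STEP3_PRESSURE",
     "lagging": {"maximalLag": 300000, "minimalLag": 0, "mesh": 60000}},
    # signal two steps upstream: its influence arrives about 10 min later,
    # so only lags between 10 and 15 minutes are scanned
    {"signal": "STEP1_TEMPERATURE",
     "lagging": {"maximalLag": 900000, "minimalLag": 600000, "mesh": 60000}},
]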

Lagging Configuration: Including the local history

In many situations, predictions can be improved by taking into account part of the history of the signal values, not only the most recent values. However, signal values from the very distant past are unlikely to be relevant, and their inclusion would unnecessarily increase computation time. Therefore, the maximal time window to be included for each signal can be controlled by the parameter maximal lag [milliseconds]. The maximal lag tells aivis how far it should look into the past to find dependencies. For example, if data from 10 minutes ago affects the current target value, then the maximal lag should be at least that high. Consequently, the best value for the maximal lag depends on the process at hand. Start out with lower values if possible, as this value has a large impact on the computational effort. If this configuration is not set, the target signal is predicted only from the current signal values (or, as always, their nearest predecessors).

The number of sample points within this past time window is controlled by the mesh [milliseconds]. aivis analyzes the time window by slicing it up and taking a sample data point from each slice. The width, and therefore the number, of these slices is determined by the mesh. This means that for a target at time t, signal values are retrieved at times t, t – mesh, t – 2 * mesh, and so on, until the maximal lag is reached. A smaller mesh is associated with higher computing time and memory demand but may lead to better predictions. Keep in mind that if the mesh is larger than the sample interval of the data, some information is lost. Therefore, it is important to choose the mesh according to the time scale on which relevant signals may change.
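Expressed as a small sketch, the sample times retrieved for a prediction timestamp t follow directly from maximal lag and mesh:

maximal_lag, mesh = 300000, 60000  # values from the example configuration above
t = 1594770360000                  # some prediction timestamp
lag_times = [t - k * mesh for k in range(maximal_lag // mesh + 1)]
# -> signal values at t, t - 60000, ..., t - 300000 enter as "lagged signals"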

Simplified, you can imagine that maximal lag and mesh take data points from the past and add them as additional “lagged signals” to the original problem. For example, a maximal lag of 10 hours with a mesh of 1 hour requires as much additional computational effort as a maximal lag of 5 hours with a mesh of 30 minutes, as both add 10 “lagged” signals per original signal. For each lagged signal, the distance correlation is calculated. The results constitute a correlation trend; an example is depicted in the figure below. In the example, the correlation is highest shortly before the evaluation time, which may indicate that a shorter maximal lag would suffice for this signal. The correlations are part of the report, so the lagging configuration can be checked after the training.

Correlation Trend

These different lagged signals are almost always strongly correlated with each other and thus partially redundant. However, you don't need to care about this problem. The different lagged signals are automatically combined and selected to distill full information with a low number of final features.

In analogy to the maximal lag, there is also the option to define a minimal lag 2.3. If this parameter is set to some positive value, no information from the most recent past is used. This parameter is mainly useful in special cases: first, if you know that some range of lags is particularly important, it allows you to point aivis to exactly this range. The second case concerns delayed information during live prediction. If you expect signals to be available only with some delay in your database, this situation can be emulated by setting a minimal lag. aivis will then not use any information more recent than the minimal lag, and performance will therefore be independent of any delay shorter than this minimal lag. Regarding information delays, also note the availability timestamps.

Modeling Configuration: How the model is built

The control point count can be set to tweak the model building. It controls the granularity of the model and has a strong influence on the required computing time: the larger this number, the more details may potentially be captured. However, the training time scales approximately with the third power of the control point count. By default, 5,000 control points are used. Just like the maximal sample count, a higher control point count should technically yield better results. However, there is always a ceiling above which increasing the value no longer leads to significant improvements, but only to longer computation times.
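To get a feeling for the cubic scaling, consider the following back-of-the-envelope sketch; the baseline duration is a made-up illustration, not a benchmark:

base_points, base_minutes = 2500, 10  # hypothetical baseline: 2500 points take 10 min
for points in (2500, 5000, 10000):
    estimate = base_minutes * (points / base_points) ** 3
    print(f"{points} control points -> ~{estimate:.0f} min")
# doubling the control point count multiplies the training time by roughly 8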

Output: Report and Model

There are two outputs that can be retrieved from the training. First, the report lists the most relevant information. This includes a list of all signals considered for model building, together with their distance correlations to the target; distance correlations are also presented for the different aspects and lags. The report also includes a list of all other signals, together with a justification for their exclusion from model building. Second, a model is retrieved, which is later used for predicting the target values, i.e., for inference. The model also specifies the Inference Data Specification and the inference output type.

Incremental Learning

2.4 (numerical target interpreter)
2.6 (categorical target interpreter == classification)

Upon completion of the initial training phase, it is typical for new data to be gathered and stored. This data can be incorporated in two ways: one can either retrain the model from scratch on the full dataset, or undertake an Incremental Learning process focusing solely on the data amassed after the initial training. Frequently, the latter approach proves to be the only practical option, because older data is inaccessible or difficult to retrieve, or because the old model should remain mostly unchanged and only be tweaked in a configurable way.

In the Incremental Learning phase, the pre-existing model is retrained on this new data. The process keeps a record of the optimal parameters identified in the previous training stage, which are then optimized once more to reflect the new data samples.

Subsequent sections will explain the steps integral to the incremental process, as well as the necessary components to execute this process. These components include the incremental configuration, the new incremental data, and the already trained model, which was discussed in prior sections.

Incremental Learning Requirements

To execute an incremental learning step, the model at the user's disposal should have been initially trained with the enable incremental option in the training configuration. This setting ensures that along with the standard model information, the requisite quantities for retraining the model are also stored.

In terms of data, the same general specifications apply as for the initial training, although it suffices to provide data only for the target signal and the signals contained in the model, i.e., those listed in the inference data specification. This data can then be further restricted to parts using the data filter.

Incremental Workflow

The workflow for incremental learning is much like the original training process, but with an important distinction. Rather than recalculating all the preceding steps – feature engineering, signal cleaning, segmentation, and model building from scratch – incremental learning applies the data preparation steps that were already computed during training to the new data. This way, it recreates the same features used in the training phase. Consequently, incremental learning only utilizes data points for which these features can be reproduced.

Workflow Overview

A general workflow representation is shown in the previous figure. The procedure requires three main components: input data (see Data Specification), an already trained model (see Incremental Learning Requirements) and the incremental configurations.

Configurations: Learning and Update

Below is an example illustrating the use of the two distinct configurations for Incremental Learning (the Learning and the Update Configuration). The meaning of the different keys is explained after the respective code examples.

Learning Configuration

  label: TARGET
incremental_learning_config = json.dumps({
  "label": "TARGET" 
})
final DtoIncrementalLearningConfig incrementalLearningConfig = new DtoIncrementalLearningConfig("TARGET");
const char *incremental_learning_config = "{" 
  "\"label\": \"TARGET\""
"}";

The key label indicates which signal is to be predicted and used as the target in the retraining. It must be the same target as selected in the original training process.

Update Configuration

  dataFilter:
incremental_update_config = json.dumps({
  "dataFilter": {}
})
final DtoIncrementalUpdateConfig incrementalUpdateConfig = new DtoIncrementalUpdateConfig()
  .withDataFilter(new DtoDataFilter());
const char *incremental_update_config = "{"
  "\"dataFilter\": {}"
"}";

The optional key data filter allows you to restrict the time ranges that are used for the incremental retraining (for a detailed explanation, see Data Filter). Excluding signals in the data filter is not advised, as it has no impact and might cause an error; the signals needed for incremental retraining are determined solely by the inference data specification.

Minimal Example

Below is a simple code snippet for one model update using the configurations defined above.

data:
  folder: /srv/incremental-data
  dataTypes: 
    defaultType: FLOAT
incrementalLearning: 
  modelFile: /srv/model.json
  learningConfig: 
    label: TARGET
  updateConfigs: 
  - dataFilter:
output: 
  folder: /srv/output
learning = SignalPredictionIncrementalLearning.create(model, incremental_learning_config)
learning.update(incremental_data, incremental_update_config)
updated_model = learning.get_model()
final SignalPredictionIncrementalLearning learning = SignalPredictionIncrementalLearning.create(model, incrementalLearningConfig);
learning.update(incrementalData, incrementalUpdateConfig);
final IDtoModel updatedModel = learning.getModel();
SignalPredictionIncrementalLearningHandle learning_handle = aivis_signal_prediction_incremental_learning_create(
  (uint8_t *) incremental_learning_config,
  strlen(incremental_learning_config),
  &err
);
check_err(&err, "Create learning handle");

aivis_signal_prediction_incremental_learning_update(
  learning_handle,
  incremental_data_handle,
  (uint8_t *) incremental_update_config,
  strlen(incremental_update_config),
  &err);
check_err(&err, "Update");

const List_u8 *updated_model = aivis_signal_prediction_incremental_learning_get_model(
  learning_handle, 
  &err);
check_err(&err, "Save model");

Incremental update for classification

2.6

If the engine is trained with a categorical target interpreter, the resulting model is a classification model that detects categories. Such models can also be incrementally updated if the enable incremental flag was set to true during training. In this case, the incremental update method will do two things:

  • Update the model's prediction parameters for the already trained categories with the provided incremental data.
  • If new categories are present in the incremental data, the engine finds these new classes and, by default, updates the model such that it is capable of predicting these as well.

A caveat is that only the incremental data is used for training the model on new categories. To properly update the model with new categories, ensure that the incremental data contains enough data labelled with the new categories, but also examples without them. For example, incrementally adding data consisting solely of one new category will not work.

Inference

When the model has been trained, it is ready for the ultimate goal: inference. Inference means that the model is provided with new (usually unseen) data around a certain timestamp and is asked for some value/estimation at said timestamp. In aivis Signal Prediction, that value is simply the prediction of the target. In aivis Anomaly Detection, the inferences are scores, and in aivis State Detection, the inferences provide scores per segment.

In general, there are two main scenarios in which you would want to make inferences. The first one is performance evaluation of the model. Here, inferences are made on historical data, i.e., some test data set. Again, it is important that this data was not part of the training set, as this would lead to unrealistically good and therefore non-representative predictions. These historical inferences are used for determining model performance; see the section “Measuring Model Performance”.

The second typical scenario for making inferences is using them in a productive setting. Here, the true target value or process state is technically not known at the time of the inference. This is called live inference. For live inference, inferences are usually made on an ongoing basis, as this is typically what you would want for most productive use cases. This is contrary to performance evaluation, where all inferences are made in one go.

For each of the above scenarios, there is a dedicated docker image. The Inference Worker creates predictions for a predefined time window in a bulk manner, for model evaluation. In contrast, the Inference Service is optimized for live inference: it offers a RESTful web API that allows triggering individual predictions for a specified time via an HTTP call. Due to the different application modes, the APIs differ between the docker images and the SDK. These differences are noted in the following sections.

Inference Timestamps

In the SDK, in order to make inferences, it is necessary to pass a list of timestamps for which the inferences are to be made. This allows requesting a single live inference result as well as bulk inference for model evaluation on historic data. Typically, it is easy to generate such lists of timestamps in the programming language that calls the SDK. Docker images, on the other hand, are not necessarily called from within a fully-fledged programming language. For the Inference Service this is not an issue, as live inference is typically requested for a single timestamp only, the most recent one. However, it could be cumbersome to derive a list of timestamps for the Inference Worker. Therefore, for the Inference Worker, timestamps are selected via a list of timestamps configs. There are two different methods:

  • Equidistant – Provides equidistant inference timestamps with a fixed interval (for example, one inference each minute). Typical use case: obtaining continuous inferences in some time interval.
  • AtNextSignalValue – Selects those timestamps for inference for which there are data points for some specified signal. Typical use case: model validation, where inferences must be made for timestamps for which target values are known.

For both timestamps configs, a start time and an end time can be set. An operative signal can be used to further restrict the timestamps, and a minimum interval may be set to avoid calculating too many inferences within a short time; this can speed up the computation and may be useful to balance the distribution of inferences. Finally, note that further flexibility can be gained by providing several timestamps configs, in which case all timestamps are combined. An example was already provided in the Getting Started section.
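For the SDK, by contrast, such a list is easily generated in the calling language. A minimal Python sketch for equidistant timestamps, one inference per minute (start and end times are placeholders):

start_time = 1594770420000  # inclusive, epoch milliseconds
end_time = 1600851780000
interval = 60000            # one inference per minute

timestamps = list(range(start_time, end_time + 1, interval))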

Inference Data

Regarding the data, the same general specifications hold as for the training. In addition, the data is expected to obey the Inference Data Specification that can be inspected from the training output. The Inference Data Specification ensures that all data needed for inference has been provided. Note that this may also include signals synthesized by the expression language, which then need to be synthesized by the same expression. If the Inference Data Specification is not satisfied for some timestamp, this timestamp is either skipped (skip on insufficient data) or an error is thrown. This behavior needs to be configured for the SDK and the Inference Worker; the Inference Service always returns an error.

Data filtering can be performed as for the training. As described in the subsection on training data filtering, the performance of the model is best evaluated by splitting the available data with the data filter. That way, it is not necessary to provide different data files for training and inference. Instead, some training end timestamp is specified, and the same timestamp is used as the inference start. With this approach, the Inference Data Specification is automatically satisfied. For the Inference Service, data filtering is not possible.

Signal availabilities are an important concept for checking model performance on historical data: only data that would have been available if the inference had been performed online should be used. However, the availability timestamp does not always correspond to the time at which the data actually became available; for example, availability timestamps may have been overridden during some migration of the database. For such cases, the Inference Worker provides the option to ignore signal availabilities. This option may also come in handy to check the impact of information delays on the model performance.

Infer With Category Probabilities (classification)

2.6

If the engine is trained with a categorical target interpreter, the resulting model is a classification model that is able to detect the trained categories. Our algorithm follows a one-vs-rest approach.

For inference, this implies that, for a given prediction timestamp, we internally obtain a vector of probabilities whose length equals the number of trained categories. Each element of the vector represents the probability of being in the corresponding category. The standard SDK inference method infer (see the SDK Getting Started section) only returns the winning category, i.e., the one with the highest probability.

To have more control over the predictions of a classification model, there is also the infer float with category probabilities SDK inference method for a target signal of data type float. The method returns a list of DtoFloatDataPointWithCategoryProbabilities, which holds the timestamp in question and a list of the float-valued categories and their probabilities. Here, each probability encodes how sure the model is that the given timestamp belongs to the respective category. Note that the probabilities are not normalized, i.e., their sum will generally not be 1. A similar method, of course, also exists for target signals of data type string and boolean, with the obvious name changes.

We provide here a minimal example of the syntax of the methods in case of a model trained with a target of data type float:

# choose inference timestamps
timestamps = ...

# infer predictions
predictions = inference.infer_float_with_category_probabilites(inference_data, timestamps)
// choose inference timestamps
final List<Long> timestamps = ...

// infer predictions
final List<DtoFloatDataPointWithCategoryProbabilities> predictions = inference.inferFloatWithCategoryProbabilities(inferenceData, timestamps);
// choose inference timestamps
Time *timestamps = ...

// infer predictions
const List_DtoFloatDataPointWithCategoryProbabilities *predictions = aivis_signal_prediction_inference_infer_float_with_category_probabilites(
  inference_handle,
  inference_data,
  timestamps,
  timestamps_len,
  &err
);
check_err(&err, "Infer predictions");

// free predictions
aivis_free(predictions);
predictions = NULL;

// free timestamps
free(timestamps);
timestamps = NULL;

When using the docker containers, the inference syntax indicated in docker inference will automatically call both the infer and the infer_..._with_category_probabilites method and output the results as two separate JSON files.

We emphasize that the method infer ... with category probabilities can only be called on models which were trained with a categorical target interpreter on version 2.6 or later.

Infer Float With Next Normal

2.7

For a model trained with a numerical target interpreter, the method infer float with next normal is additionally provided in both docker inference and SDK inference.

The function infer float with next normal requires an additional next normal config as input. In this configuration, a lower threshold, an upper threshold, and a feature filter can be set; all of them are optional. Not setting the lower threshold or the upper threshold results in the respective bound being set to -infinity or +infinity.

Predictions within the window [lower threshold, upper threshold] are considered normal, and no additional computation is performed. For any timestamp at which the feature values yield a prediction outside this window, the next normal point is sought. The next normal point is the collection of feature values closest to the observed feature values whose prediction lies within the normal window. If the algorithm cannot find a collection of feature values whose prediction is within the provided window, it returns the feature values that predict closest to the window. Moreover, a rating is calculated from the difference between the observed and next normal feature values. It reflects the influence of each feature on the prediction, i.e., the feature with the highest rating is the most influential factor causing the prediction to lie outside the normal window. The feature filter in the next normal config allows the user to exclude features from the next normal seeking process, or to fix which features are allowed to vary via the include features entry.

In summary, the function infer float with next normal returns, in addition to the prediction at the timestamp of interest, the observed feature values, the next normal feature values, the prediction of the next normal feature values, and a rating of the difference between observed and next normal feature values.

In the output and in the feature filter, the features are indexed by an integer id. This id can be looked up in the report to relate it to the underlying signals, aspects, and lags.

Warning: The infer float with next normal function is experimental. It performs an expensive optimization and should be used with care for models with many features, in particular when a lagging config is present. In such cases, the result might also not yet be optimal.

We provide here a minimal example of the syntax of the infer float with next normal method:

data:
    ...
inference: 
  config: 
    ...
  timestamps: 
    ...
  nextNormal:
    lowerThreshold: 140
    upperThreshold: 200
output: 
    ...
# choose inference timestamps
timestamps = ...

# build next normal config
next_normal_config = json.dumps({"lowerThreshold": 140, "upperThreshold": 200})

# infer predictions and next normal point 
predictions_with_next_normal = inference.infer_float_with_next_normal(inference_data, timestamps, next_normal_config)
// choose inference timestamps
final List<Long> timestamps = ...

// build next normal config
final DtoFloatNextNormalConfig floatNextNormalConfig = new DtoFloatNextNormalConfig().withLowerThreshold(140).withUpperThreshold(200);

// infer predictions and next normal point 
final List<DtoFloatDataPointWithNextNormal> predictionsWithNextNormal = inference.inferFloatWithNextNormal(inferenceData, timestamps, floatNextNormalConfig);
// choose inference timestamps
Time *timestamps = ...

// build next normal config
const char *next_normal_config = "{\"lowerThreshold\": 140, \"upperThreshold\": 200}";

// infer predictions and next normal point 
const List_DtoFloatDataPointWithNextNormal *predictions_with_next_normal = aivis_signal_prediction_inference_infer_float_with_next_normal(
  inference_handle,
  inference_data,
  timestamps,
  timestamps_len,
  (uint8_t *) next_normal_config,
  strlen(next_normal_config),
  &err
); 
check_err(&err, "Infer Scores with next normal");

// free predictions_with_next_normal
aivis_free(predictions_with_next_normal);
predictions_with_next_normal = NULL;    

// free timestamps
free(timestamps);
timestamps = NULL;

A Fully Loaded Inference Configuration

Regarding the inference configuration, we provide an overview of all possible configuration keys below. This overview is meant for quick reference. A minimal inference configuration is provided in SDK inference and in Docker inference. The meaning of the different keys is explained in other sections, and a definition of the syntax is given in the reference manuals. For the docker images, we focus here on the Inference Worker, as it features more configuration keys. Note that, on the other hand, the Inference Service can be asked to evaluate several models in one go.

config:
  dataFilter:
    startTime: 1594770420000
    endTime: 1600851780000
    excludeSignals:
    - signal: SIGNAL_7
      startTime: 1600841780000
      endTime: 1600851780000
    # includeSignals: ... similar
    includeRanges:
    - startTime: 1594770420000
      endTime: 1600841780000
    # excludeRanges: ... similar
  skipOnInsufficientData: true
inference_config = json.dumps({
  "dataFilter": {
    "startTime": 1594770420000,
    "endTime": 1600851780000,
    "excludeSignals": [{
      "signal": "SIGNAL_7",
      "startTime": 1600841780000,
      "endTime": 1600851780000
    }],
    # "includeSignals": ... similar
    "includeRanges" : [{
      "startTime": 1594770420000,
      "endTime": 1600841780000
    }]
    # "excludeRanges": ... similar
  },
  "skipOnInsufficientData": True
})
final DtoInferenceConfig inferenceConfig = new DtoInferenceConfig(true)
  .withDataFilter(new DtoDataFilter()
    .withStartTime(1594770420000L)
    .withEndTime(1600851780000L)
    .withExcludeSignals(new DtoDataFilterRange[] { 
      new DtoDataFilterRange("SIGNAL_7")
        .withStartTime(1600841780000L)
        .withEndTime(1600851780000L)
    })
    // .withIncludeSignals ... similar
    .withIncludeRanges(new DtoInterval[] {
      new DtoInterval()
        .withStartTime(1594770420000L)
        .withEndTime(1600841780000L)      
    })
    // .withExcludeRanges ... similar
  );
const char *inference_config = "{"
  "\"dataFilter\": {"
    "\"startTime\": 1594770420000,"
    "\"endTime\": 1600851780000,"
    "\"excludeSignals\": [{"
      "\"signal\": \"SIGNAL_7\","
      "\"startTime\": 1600841780000,"
      "\"endTime\": 1600851780000"
    "}]," 
    // "\"includeSignals\": ... similar
    "\"includeRanges\": [{"
      "\"startTime\": 1594770420000,"
      "\"endTime\": 1600841780000"
      "}]" 
    // "\"excludeRanges\": ... similar          
  "},"
  "\"skipOnInsufficientData\": true"
"}";

Measuring Model Performance

Once the first predictions have been made, the next thing you might want to do is measure how precise they are, i.e., determine the performance of the model. While this is not part of the software packages described here, such metrics can easily be calculated by yourself or with another of aivis’ software packages.

To illustrate the approach, we will briefly describe one of the most useful key performance indicators (KPIs), the Coefficient of Determination (\(r^2\)). Apart from this, other useful KPIs include the Mean Absolute Error (MAE), the Root Mean Square Error (RMSE), and more.

The Coefficient of Determination is a generally applicable and therefore very popular KPI. It yields the proportion of the variation of the target that is explained by the model. In other words, the Coefficient of Determination compares the model to a hypothetical constant model, i.e., a model that predicts for each time point the same value \(mean(t)\), the mean of the target values on the inference data. For a perfect prediction, \(r^2 = 1\). A value of \(r^2 = 0\) is obtained if the model performs just as well as the hypothetical constant model. However, the model may also perform worse, in which case the Coefficient of Determination becomes negative.

The Coefficient of Determination can easily be calculated from pairs of true target values and predictions (\(t_i\), \(p_i\)). Here, it is important that the predictions \(p_i\) were estimated exactly for the timestamps for which the target values \(t_i\) are known. For a total number of \(n\) available target-prediction-pairs (\(t_i\), \(p_i\)), it holds:

\[r^2 = 1 - \frac{\sum_{i=1}^{n} (t_i - p_i)^2}{\sum_{i=1}^{n} (t_i - mean(t))^2}\]
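A minimal sketch of this calculation, assuming two matched arrays of target values and predictions:

import numpy as np

def r_squared(targets, predictions):
    """Coefficient of Determination from matched target-prediction pairs."""
    t = np.asarray(targets, dtype=float)
    p = np.asarray(predictions, dtype=float)
    ss_res = np.sum((t - p) ** 2)         # residual sum of squares
    ss_tot = np.sum((t - t.mean()) ** 2)  # variation around the target mean
    return 1.0 - ss_res / ss_tot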

Appendix 1: Expression Language

Before starting the workflow, there is sometimes the need to add a new signal to the dataset (a synthetic signal) that is derived from other signals already present. There are various reasons for this, especially if

  • you want to predict a quantity that is not in your Training Data, but it could be calculated by a formula. For that task, you need to add the new signal via an expression and then use this new synthetic signal as target.
  • you want to restrict the training to operative periods but there is no signal that labels when your machines were off. However, you may be able to reconstruct these periods based on some other signals.
  • you possess domain knowledge and want to pinpoint the engine to some important derived quantity. Often, certain derived quantities play a specific role in the application's domain and might be easier to understand/verify than the raw quantities.

Technically, you can add synthetic signals using the docker images or any SDK Data API.

To create new synthetic signals in a flexible way, aivis Signal Prediction features a rich Expression Language to articulate the formula.

The Expression Language is an extension of the scripting language Rhai. We have mainly added support for handling signals natively. Information on the basic usage of the language can be found in the very helpful Language Reference of the Rhai Book. This documentation will mainly focus on the added features.

Signal Type

A signal consists of a list of data points that represents a time series (timestamps and values of the same type).

The following value types are supported:

  • bool : Boolean
  • i64 : 64-bit Integer
  • f64 : 64-bit Floating Point
  • string : UTF-8 String

A signal type and its value type are written generically as signal<T> and specifically like e.g. signal<i64> for an integer signal.

It is not possible to write down a signal literally, but you can refer to an already existing signal in your dataset.

Signal References

Referring to an already existing signal is done via one of these two functions:

  • s(signal_id: string literal): signal<T>
  • s(signal_id: string literal, time_shift: integer literal): signal<T>

The optional time shift parameter shifts the data points into the future. For example, if the signal "a" takes the value 5.7 at timestamp 946684800000, then the following expression takes the same value 5.7 at timestamp 946684808000. The synthesized signal is therefore a lagged version of the original signal "a".

s("a", 8000)

These functions must be used exactly with the syntax above. It is not allowed to invoke them as methods on the signal id. Both parameters must be simple literals without any inner function invocation!

Examples:

s("my signal id")              // OK
s("my signal id", 8000)        // OK
s("my s" + "ignal id", 8000)   // FAIL
"my signal id".s(8000)         // FAIL
s("my signal id", 7000 + 1000) // FAIL

Examples

Let's start with a very simple example. Let "a" and "b" be the IDs of two float signals. Then

s("a") + s("b")

yields the sum of the two signals. The Rhai + operator has been overloaded to work directly on signals (such as many other operators, see below). Therefore, the above expression yields a new signal. It contains data points for all timestamps of "a" and "b".

A more common application of the expression language is to interpolate over several timestamps. For example, "a" might fluctuate, and we may therefore be interested in a local linear approximation of "a" rather than in "a" itself:

trend_intercept(s("a"), t, -1000, 0)

Here, the literal t refers to the current timestamp. Therefore, the expression yields the present value as obtained from a linear approximation over the last second. As another example, the maximum within the last second:

max(slice(s("a"), t, -1000, 0))

A typical use of the expression language is synthesizing an operative signal. Assume you want to make inferences only when your production is running, and you are sure your production is off when some specific signal "speed" falls below a certain threshold, say 10. However, "speed" may also exceed the threshold during maintenance, although only for a few hours at a time, in contrast to production, which usually runs stably for months. In this situation, an operative signal may thus be synthesized by keeping only intervals larger than one day, i.e. 86400000 ms:

set_sframe(s("speed") > 10, false, 86400000)

Additional Signal Functions

In the following, all functions are defined that operate directly on signals and do not have a Rhai counterpart (such as the + operator). Some functions directly return a signal; the others can be used to create signals via the t literal, as will be explained below. Note that a time series is always defined on a finite number of timestamps: all timestamps of all signals involved in the expression are used for the synthesized signal. Time shifts specified in the signal function s(signal_id: string literal, time_shift: integer literal) are taken into account. On the other hand, arguments of the functions below (in particular time, from, and to) do not alter the evaluation timestamps. If you need more evaluation timestamps, apply add_timestamps to some signal in the expression (see below).

  • add_timestamps(signal_1: signal<T>, signal_2: signal<S>): signal<T> – returns a new signal which extends signal_1 by the timestamps of signal_2. The signal values for the new timestamps are computed from signal_1 using the latest predecessor, similar to the at() function below. The syntax for this expression is s("x1").add_timestamps(s("x2")). (since 2.4)
  • at(signal: signal<T>, time: i64): T – returns the signal value at a given time
    If there is no value at that time, it goes back in history to find the nearest predecessor; if there is no predecessor, it returns NAN, 0, false or ""
  • set_lframe(signal: signal<bool>, new_value: bool, minimal_duration: i64) : signal<bool> – returns a new boolean signal, where large same-value periods of at least duration minimal_duration are set to new_value. Note that the duration of a period is only known after the end of the period; this especially affects the result of this function for live prediction.
  • set_sframe(signal: signal<bool>, new_value: bool, maximal_duration: i64) : signal<bool> – returns a new boolean signal, where small same-value periods of at most duration maximal_duration are set to new_value. Note that the duration of a period is only known after the end of the period; this especially affects the result of this function for live prediction.
  • slice(signal: signal<T>, time: i64, from: i64, to: i64): array<T> – returns an array with all values within a time window of the given signal.
    The time window is defined by [time + from; time + to]
  • steps(signal: signal<T>, time: i64, from: i64, to: i64, step: i64): array<T> – returns an array with values extracted from the given signal using the at function step by step.
    The following timestamps are used: (time + from) + (0 * step), (time + from) + (1 * step), ... (until time + to is reached inclusively)
  • time_since_transition(signal: signal<bool>, time: i64, max_time: i64) : f64 – returns the time since the last switch of the signal from false to true. If this time exceeds max_time, max_time is returned. Times before the first switch, and times t where the signal is false in [t - max_time, t], are mapped to max_time. (since 2.4)
  • times(signal: signal<T>): signal<i64> – returns a new signal constructed from the given one, where the value of each data point is set to the timestamp
  • trend_slope/trend_intercept(signal: signal<i64/f64>, time: i64, from: i64, to: i64): f64 – returns the slope/y-intercept of a simple linear regression model
    Any NAN value is ignored; returns NAN if there are no data points available; the following timestamps are used: [time + from; time + to]. The intercept at t = time is returned.
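
As a small illustration of these functions (the signal ID "a" stands for any float signal), the following expression samples "a" once per second over the last minute using steps and averages the samples with the array function avg (defined in the Additional Array Functions section below):

avg(steps(s("a"), t, -60000, 0, 1000))

Since avg ignores NAN values, missing values in the sampled window do not distort the average.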

Best practice combining expressions

When combining several expressions that operate on time windows, it may, from a performance point of view, be better to build the expression step by step than to write the combination as a single expression.

For example, if we want to exclude periods smaller than 30 minutes and periods bigger than 12 hours from an existing boolean signal with signal id "control" we may use the expression:

(s("control")).set_lframe(false, 12*60*60*1000).set_sframe(false, 30*60*1000)

When evaluating this expression at a timestamp t, the synthesizer scans through the 30-minute time window before t, and for each timestamp in there it scans through another 12-hour window before that. Constructing the desired synthesized signal is therefore of complexity 12 × 60 × 30 × # timestamps. However, splitting the above into two expressions, we first generate a signal "helper" via

(s("control")).set_lframe(false, 12*60*60*1000)

and then we apply on the result the expression

(s("helper")).set_sframe(false, 30*60*1000)

In this case, we end up with complexity 12 × 60 × # timestamps + 30 × # timestamps, which is considerably smaller than before.

Basics of Rhai

Working with signals

In this section, we briefly show the potential of Rhai and what you can create with it. Rhai supports many types, including collections, but it does not natively have a signal type. When working with signals, one approach is therefore to extract the primitive values from signals and let the results be converted back into a signal. This process uses the literal

t: i64 – the current timestamp

together with the function s to refer to a signal, and some other function defined above to extract values from the signal. For example, the sum of two signals "a" and "b" can be written without use of the overloaded + operator:

s("a").at(t) + s("b").at(t)

The results of such an expression are automatically translated into a new signal. In order for a signal to be constructed from the results, the expression must not terminate with a ;. Of course, the additional signal functions can be used like any other function in Rhai, and may thus be combined with the rest of Rhai's tools where applicable.

Rhai is a scripting language

As such, you can script. A typical snippet looks like the following:

// three pairs of current signal values
let array = [[s("one").at(t), s("two").at(t)], [s("three").at(t), s("four").at(t)], [s("five").at(t), s("six").at(t)]];
// average each pair
let pair_avg = array.map(|sub| sub.avg());
// drop NAN averages, then compute ln(sum(exp(|x|))) over the remaining values
pair_avg.filter(|x| !x.is_nan()).map(|cleaned| cleaned.abs().exp()).sum().ln()

Here, we used array functions (avg(), sum()) that are defined in the Additional Array Functions section below. The last line defines the result of the expression.

Rhai has the usual statements

In the same spirit as many other languages, you can control the flow using statements like if, for, do, and while (see the Language Reference of the Rhai Book). Here is an example demonstrating their usage:

let val = s("one").at(t);
if (val >= 10.0) && (val <= 42.0) {
  // ramps up linearly from 0.0 at val = 10.0 to 1.0 at val = 42.0
  1.0 - (val - 42.0)/(10.0 - 42.0)
} else if (val <= 60.0) && (val > 42.0) {
  // ramps down linearly from 1.0 at val = 42.0 to 0.0 at val = 60.0
  1.0 - (val - 42.0)/(60.0 - 42.0)
} else {
  // outside [10, 60]: no meaningful value, yield NAN
  0.0/0.0
}

In this code snippet, we determine the value to return based on the current state of the "one" signal; depending on the signal's current value, a different expression is evaluated. Note that 0.0/0.0 evaluates to NAN.

Rhai allows you to create your own functions

Like most other languages, you can create your own functions and use them whenever needed.

fn add(x, y) {
    x + y
}

fn sub(x, y,) {     // trailing comma in parameters list is OK
    x - y
}
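
Such user-defined functions can then be called like any built-in function, for example on current signal values (the signal ID "a" is purely illustrative):

add(s("a").at(t), 1.0)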

Rhai allows you to do many more things than the ones described here. A careful reading of the Language Reference of the Rhai Book brings numerous benefits when using this language.

Additional Array Functions

The following functions for arrays were additionally defined:

  • some(items: array<bool>): bool – returns true if at least one item is true
  • all(items: array<bool>): bool – returns true if all items are true
  • sum(items: array<i64/f64>): f64 – returns the sum of all items and 0.0 on an empty array
  • product(items: array<i64/f64>): f64 – returns the product of all items and 1.0 on an empty array
  • max(items: array<i64/f64>): f64 – returns the largest array item; any NAN value is ignored; returns NAN on an empty array
  • min(items: array<i64/f64>): f64 – returns the smallest array item; any NAN value is ignored; returns NAN on an empty array
  • avg(items: array<i64/f64>): f64 – returns the arithmetic average of all array items; any NAN value is ignored; returns NAN on an empty array
  • median(items: array<i64/f64>): f64 – returns the median of all array items; any NAN value is ignored; returns NAN on an empty array
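
For example, the following expression (the signal ID "temp" is purely illustrative) combines slice with a Rhai closure via map and the some function to check whether any value within the last ten seconds exceeded a threshold of 100:

some(slice(s("temp"), t, -10000, 0).map(|v| v > 100.0))

Evaluated at each timestamp t, this yields a boolean signal that is true whenever at least one such value exists in the window.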

Constants

The following constants are defined in Rhai:

  • PI(): f64 – Archimedes' constant: 3.1415...
  • E(): f64 – Euler's number: 2.718...

Operators / Functions

Signals can be used in all normal operators and functions that are designed for primitive values. You can even mix signals and primitive values in the same invocation. If at least one parameter is a signal, the result will also be a signal.

Operators

The following operators were defined:

  • Arithmetic:
    • +(i64/f64): i64/f64
    • -(i64/f64): i64/f64
    • +(i64/f64, i64/f64): i64/f64
    • -(i64/f64, i64/f64): i64/f64
    • *(i64/f64, i64/f64): i64/f64
    • /(i64/f64, i64/f64): i64/f64
    • %(i64/f64, i64/f64): i64/f64
    • **(i64/f64, i64/f64): i64/f64
  • Bitwise:
    • &(i64, i64): i64
    • |(i64, i64): i64
    • ^(i64, i64): i64
    • <<(i64, i64): i64
    • >>(i64, i64): i64
  • Logical:
    • !(bool): bool
    • &(bool, bool): bool
    • |(bool, bool): bool
    • ^(bool, bool): bool
  • String:
    • +(string, string): string
  • Comparison (returns false on different argument types):
    • ==(bool/i64/f64/string, bool/i64/f64/string): bool
    • !=(bool/i64/f64/string, bool/i64/f64/string): bool
    • <(i64/f64, i64/f64): bool
    • <=(i64/f64, i64/f64): bool
    • >(i64/f64, i64/f64): bool
    • >=(i64/f64, i64/f64): bool

Binary arithmetic and comparison operators can handle mixed i64 and f64 arguments; the i64 argument is implicitly converted beforehand via to_float. Binary arithmetic operators return f64 if at least one f64 argument is involved.
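
For example, the following expression (the signal IDs "a" and "b" are purely illustrative) mixes float signals with primitive values; the arithmetic operators act point-wise, and the comparison yields a boolean signal:

(s("a") + 2.5) * 2.0 > s("b")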

Functions

The following functions were defined:

  • Arithmetic:
    • abs(i64/f64): i64/f64
    • sign(i64/f64): i64
    • sqrt(f64): f64
    • exp(f64): f64
    • ln(f64): f64
    • log(f64): f64
    • log(f64, f64): f64
  • Trigonometry:
    • sin(f64): f64
    • cos(f64): f64
    • tan(f64): f64
    • sinh(f64): f64
    • cosh(f64): f64
    • tanh(f64): f64
    • asin(f64): f64
    • acos(f64): f64
    • atan(f64): f64
    • asinh(f64): f64
    • acosh(f64): f64
    • atanh(f64): f64
    • hypot(f64, f64): f64
    • atan(f64, f64): f64
  • Rounding:
    • floor(f64): f64
    • ceiling(f64): f64
    • round(f64): f64
    • int(f64): f64
    • fraction(f64): f64
  • String:
    • len(string): i64
    • trim(string): string – with whitespace characters as defined in UTF-8
    • to_upper(string): string
    • to_lower(string): string
    • sub_string(value: string, start: i64, end: i64): string
  • Conversion:
    • to_int(bool): i64 – returns 1/0
    • to_float(bool): f64 – returns 1.0/0.0
    • to_string(bool): string – returns "true"/"false"
    • to_float(i64): f64
    • to_string(i64): string
    • to_int(f64): i64 – returns 0 on NAN; values beyond INTEGER_MAX/INTEGER_MIN are capped
    • to_string(f64): string
    • to_degrees(f64): f64
    • to_radians(f64): f64
    • parse_int(string): i64 – throws error if not parsable
    • parse_float(string): f64 – throws error if not parsable
  • Testing:
    • is_zero(i64/f64): bool
    • is_odd(i64): bool
    • is_even(i64): bool
    • is_nan(f64): bool
    • is_finite(f64): bool
    • is_infinite(f64): bool
    • is_empty(string): bool
  • Comparison (returns other parameter on NAN):
    • max(i64/f64, i64/f64): i64/f64
    • min(i64/f64, i64/f64): i64/f64

The comparison functions can handle mixed i64 and f64 arguments; the i64 argument is implicitly converted beforehand via to_float. The result is f64 if at least one f64 argument is involved.

The Boolean conversion and comparison functions are additions and not part of official Rhai.
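
As a small illustration, min and max can be combined with the signal overloading described above to clamp a float signal (the signal ID "a" is purely illustrative) to the range [0, 100]:

max(min(s("a"), 100.0), 0.0)

Note that, because of the NAN rule above, a NAN value ends up at one of the clamp bounds rather than being propagated.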

Appendix 2: Integration Scenarios

Usually, the steps of the workflow run as part of two different service applications: the Training App and the Inference App.

The diagrams below display typical blueprints of these service applications using different available components of the engine, as well as where they might be located in the end-customer infrastructure landscape (execution environments).

The following color code is used:

  • Blue boxes denote aivis Signal Prediction components
  • Purple boxes stand for components provided by the service application provider, which can be Vernaio, an industry partner, a reseller, or the customer
  • Grey boxes symbolize third-party components (typical vendor systems/services that can be used are suggested in the speech balloons)

Training App

The service application Training App covers the workflow step Training, as well as any bulk inference, e.g. for historical evaluation.

It is executed in the so-called Cold World: it consists of long-running tasks that are executed infrequently and have a high resource consumption. The Training App works on historical data that was previously archived and thus needs to be retrieved in an extra step from the Data Lake / Cold Storage.

Because of its high resource consumption, it is usually not located in the OT network; it is a good fit for the cloud or an on-premise datacenter.

Via Docker

Training App via Docker

Via SDK

Training App via SDK

Inference App

The service application Inference App provides the means for live prediction.

In contrast to the Training App, it runs within the Hot World. Usually it is an ongoing process which serves to predict the current value and needs only minimal resources. The Inference App works on live data that is easily available from the Historian / Hot Storage.

As the outcome often influences the production systems (e.g. Advanced Process Control), it usually runs in the OT network. Thanks to its low resource consumption, it can run in practically any environment and on practically any device, be it in the cloud, on-premise, on-edge or even embedded.

Via Docker

Inference App via Docker

Via SDK

Inference App via SDK

Infrastructure Landscape

Infrastructure Landscape

Appendix 3: Toolbox

aivis engine v2 toolbox is a side project of aivis engine v2. It mainly provides tools to turn the output artifacts of aivis engine v2 into technical, single-file HTML reports.

Disclaimer

It is explicitly not an official part of aivis engine v2. Therefore, its API and behaviour are subject to change and not necessarily thoroughly tested. It is important to note that these HTML reports are not a designed UI but rather a visualization testing playground:
The aivis engine v2 toolbox targets researchers and data scientists who already know the concepts of aivis engine v2 and wish to quickly visualize and adapt its outputs.

Furthermore:

  • With exceptionally large input files (e.g. too many inferences) or the wrong configuration, the generated HTML pages may become too slow to handle.
  • The HTMLs are optimized for a wide screen.

Setup

The aivis engine v2 toolbox does not need a licensing key. The Python code is free to inspect or even adapt. The toolbox release matching an aivis engine v2 release {VERSION} is available as:

  • Python Whl aivis_engine_v2_toolbox-{VERSION}-py3-none-any.whl
  • Docker Image aivis-engine-v2-toolbox:{VERSION}

Create Engine Report

Each call to construct a toolbox HTML report for an engine xy has the following structure:

from aivis_engine_v2_toolbox.api import build_xy_report

config = {
    "title": "My Use Case Title",
    ...
    "outputFile": "/path/to/my-use-case-report.html",
}
build_xy_report(config)

Additionally, the config needs to contain references to the respective engine's output files, e.g. "analysisReportFile": "/path/to/analysis-report.json". The full call to create a report for any engine can be found in the Python or Argo examples of the respective engine.

Expert Configuration

There are many optional expert configurations to customize your HTML report. Some examples:

  • The aivis engine v2 toolbox always assumes timestamps to be Unix timestamps and translates them into readable dates. This behaviour can be switched off via "advancedConfig": {"unixTime": False}, so that timestamps always remain long values.

  • By referring to a metadata file via "metadataFile": "/path/to/metadata.json", signals are not only described by their signal ID but enriched with more information. The metadata JSON contains an array of signals with the key id (required) as well as name, description, unitSymbol, unitType (all optional):

    {"signals": [{
        "id": "fa6c65bb-5cee-45fa-ab19-355ba94889e9",
        "name": "et 1",
        "description": "extruder temperature nr. 1",
        "unitName": "Kelvin",
        "unitSymbol": "K"
      }, {
        "id": "dc3477e5-a83c-4485-b7f4-7528d336d9c4", 
        "name": "abc 2"
        }, 
       ...
    ]}
    
  • To every HTML report that contains a time series plot, additional signals can be added so that they are displayed as well.

All custom configuration options can be seen in the api.py file in src/aivis_engine_v2_toolbox.