aivis Engine v2 - Constraint Navigator - User Guide


aivis Constraint Navigator is one of the engines of the aivis Technology Platform by Vernaio and a component of multiple products such as aivis Insights or Process Booster.

aivis Constraint Navigator allows you to optimize certain KPIs under constraints within the aivis ecosystem.

Introduction

In this section we give a short overview of what the Constraint Navigator is and where it can be used. We start with a use case.

What is constraint navigator?

On an abstract level we aim at finding the closest configuration in feature space such that certain constraints are fulfilled. We start with a basic example.

Example: Closest signal configuration satisfying constraints

We present here an artificial example which is easy to understand but still helps to grasp the core concepts of Constraint Navigator. We start with the well-known dataset of handwritten digits, see e.g. handwritten digit dataset.

It consists of roughly 1800 handwritten samples of the digits 0,1,2,3,4,5,6,7,8,9. Each digit is made up of 64 pixels, and each pixel is an integer between 0 and 16 specifying its grey tone. Plotted as an 8 by 8 picture, an example of each digit looks as follows:

Handwritten digits

We want to transform any digit into the digit 0 with the least possible change in grey tones. To do so, we first train a basic aivis Signal Prediction model where the target signal is 1 if the digit is 0, and 0 otherwise. This gives us a basic model to detect whether a presented digit is a 0.
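
To make this setup concrete, the following sketch derives such a training table from the scikit-learn copy of the dataset. It is a minimal sketch under our own assumptions (pixel index as signal id, a binary target column); the shipped example code may prepare the data differently.

from sklearn.datasets import load_digits
import pandas as pd

digits = load_digits()  # ~1800 samples of 8x8 digits, pixel values 0..16

# one column per pixel signal ("0" .. "63") plus the binary target signal:
# 1.0 if the digit is a 0, else 0.0
frame = pd.DataFrame(digits.data, columns=[str(i) for i in range(64)])
frame["target"] = (digits.target == 0).astype(float)

# synthetic millisecond timestamps (here simply the row index) turn the
# table into time series data
frame.insert(0, "timestamp", range(len(frame)))
frame.to_csv("train_cn.csv", index=False)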

With such a model at hand, the problem of transforming a given digit, let's say a 9, into a 0 boils down to finding the configuration of the 64 signal values (pixels) which is closest to the given 9, under the constraint that the found signal values score close to 1 in the trained signal prediction model. Solving this with the constraint navigator results in the following plots for two examples, a handwritten 9 and a handwritten 5.

The data, the signal prediction model, and the constraint navigator code are part of the constraint navigator example code. With this at hand, one can reproduce the above transformed digits and play around more.

Constraint Navigator in more detail

Having gained a basic idea of the constraint navigator from the above example, we now go a bit deeper into a theoretical understanding of it.

Constraint Navigator is an orchestrator for constraint optimization within the aivis ecosystem. Our main goal is, given observed signal values at an inference timestamp, to find the closest signal values such that all our constraints are satisfied. To solve this problem with the constraint navigator we first need to build a hub model. The hub model holds a number of aivis models, which we occasionally call sub models, each being one of the following:

  • aivis signal prediction model
  • aivis anomaly detection model
  • aivis state detection model

Together with threshold intervals [a,b], these sub models serve as the constraints, e.g. we require sub model X to stay within the threshold interval [a,b]. To satisfy the constraints, we are allowed to move in the space spanned by all signals which are part of the hub model. For example, if the hub model depends on 100 signals, our optimization space is 100-dimensional. Strictly speaking, we optimize over the feature values and not the signal values, but in most cases they are identical, see the section about features.

To complete our optimization problem, we need to specify a cost function, i.e. the function to be optimized. Usually, this is the Euclidean distance to the observed signal values. We have the following options:

  • distance (l1 or l2) to the observed signal values at the inference timestamp
  • aivis model being part of the hub model

We call the signal values found through optimization the next normal values, as they are the closest values which satisfy the constraints. The output object of the constraint optimization then contains all relevant information (a sketch of consuming such an output follows the list below), i.e.

  • the inference timestamp
  • cost function evaluated at the observed signal values (if the cost function is a distance, this is always 0)
  • cost function evaluated at the found next normal values
  • all constraints evaluated at the observed signal values
  • all constraints evaluated at the found next normal values
  • observed signal values
  • next normal values of the signals
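
As an illustration of how such an output might be consumed, consider the sketch below. The file name matches the Inference Worker output described later; the field names, however, are assumptions derived from the list above, not the verbatim output schema.

import json

# load the results written by the inference (one entry per inference
# timestamp; layout assumed for illustration)
with open("output/float-constraint-values-with-next-normal.json") as f:
    results = json.load(f)

for result in results:
    print("timestamp:", result["timestamp"])
    print("cost observed vs. next normal:",
          result["cost"]["observed"], result["cost"]["nextNormal"])
    # feature values are indexed by an internal integer id; the hub report
    # translates them back to signals (see the Features section)
    for feature_id, value in result["nextNormalFeatures"].items():
        print("feature", feature_id, "->", value)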

Workflow Overview

To run constraint navigator two steps are necessary:

  1. Hub Creation. We call a constraint navigator hub an instance holding a hub model. A hub model is a model which bundles several aivis models that can then be used in the constraint optimization problem to be solved.
  2. Inference. Running a constraint optimization on Inference Data. Both historical evaluation and live service are possible.

Workflow Overview

API References

For detailed API descriptions of the docker images, web endpoints and SDK functions, please consult the reference manual of the respective component:

For additional support, go to Vernaio Support.

Artifact Distribution

Currently, aivis Constraint Navigator is distributed to a closed user base only. To gain access to the artifacts, as well as for any other questions, you can open a support ticket via aivis Support.

Getting Started (SDK)

The SDK of aivis Constraint Navigator allows direct calls from your Python, Java or C program code. All language SDKs internally use our native shared library (FFI). As C APIs can be called from various other languages as well, the C SDK can also be used with languages such as R, Go, Julia, Rust, and more. Compared to the docker images, the SDK enables more fine-grained usage and tighter integration. However, we do not provide code snippets for the C SDK; please consult the C SDK documentation.

In this chapter we will show you how to get started using the SDK.

Run Example Code

A working SDK example that builds on the code explained below can be downloaded directly here:

This zip file contains example code for docker, python and java in respective subfolders. All of them use the same dataset which is in the data subfolder.

In addition to the `constraint-navigator-examples.zip` you just downloaded, you need the following artifacts. To gain access, you can open a support ticket via aivis Support.

Required artifacts:

  • These aivis engine v2 .whl-files which you will receive in a libs.zip directly from aivis Support:
    • aivis_engine_v2_cn_runtime_python_full-{VERSION}-py3-none-win_amd64.whl: A constraint navigator full python runtime
      (here for windows, fitting your operating system - see artifacts for other options on linux and macos.)
    • aivis_engine_v2_base_sdk_python-{VERSION}-py3-none-any.whl: The base python sdk
    • aivis_engine_v2_cn_sdk_python-{VERSION}-py3-none-any.whl: The constraint navigator python sdk
    • aivis_engine_v2_toolbox-{TOOLBOX-VERSION}-py3-none-any.whl: The toolbox python sdk - optional for HTML report generation
  • An aivis licensing key, see licensing, which you will receive directly from aivis Support

Preparations:

  • Make sure you have a valid Python (>= 3.9) installation.
  • To apply the aivis licensing key, create an environment variable AIVIS_ENGINE_V2_API_KEY and assign the licensing key to it.
  • Make sure you have an active internet connection so that the licensing server can be contacted.
  • Download and unzip the constraint-navigator-examples.zip. The data CSVs train_cn.csv and eval_cn.csv and the model sp_model_0.json need to stay in **/data.
  • Download and unzip the libs.zip. These .whl-files need to be in **/libs.

The folder now has the following structure:

+- data
|  +- train_cn.csv
|  +- eval_cn.csv
|  +- sp_model_0.json
|
+- docker
|  +- # files to run the example via docker images, which we will not need now
|
+- java
|  +- # files to run the example via java sdk, which we will not need now 
|
+- libs
|  +- # the .whl files to run aivis
|
+- python
|  +- # files to run the example via python sdk 

Running the example code:

  • Navigate to the **/python subfolder. Here, you find the classic python script example_cn.py and the jupyter notebook example_cn.ipynb. Both run the exact same example and output the same result. Choose which one you want to run.
  • There are various ways to install dependencies from .whl files. We will now explain two options, which are installing them via pip install or installing them via poetry. Many other options are also possible, of course.

Option A: pip install (only for the classic python script example_cn.py, not for the jupyter notebook example_cn.ipynb)

  • Open a console in the **/python subfolder and run the following commands:
      # installs the `.whl` files
      pip install -r requirements-<platform>.txt
    
      # runs the classic python script `example_cn.py`
      python example_cn.py --input=../data --output=output
    

Option B: poetry install

  • If you have not already done so, install poetry, a python package manager:
      # installs poetry (a package manager)
      python -m pip install poetry
    
  • Run either the classic python script example_cn.py
      # installs the `.whl` files
      poetry install --no-root
    
      # runs the classic python script `example_cn.py`
      poetry run python example_cn.py --input=../data --output=output
    
  • Or run the jupyter notebook example_cn.ipynb by executing the following commands in the console opened in the **/python subfolder. The first one might take a while; the third one opens a tab in your browser.
      # installs the `.whl` files
      poetry install --no-root
    
      # installs jupyter kernel
      poetry run ipython kernel install --user --name=test_cn
    
      # opens the jupyter notebook `example_cn.ipynb`
      poetry run jupyter notebook example_cn.ipynb
    

After running the scripts, you will find your computation results in **/python/output.

In addition to the constraint-navigator-examples.zip you just downloaded, you need the following artifacts. To gain access, you can open a support ticket via aivis Support.

Required artifacts:

  • These aivis engine v2 .jar files which you will receive in a libs.zip directly from aivis Support:
    • aivis-engine-v2-cn-runtime-java-full-win-x8664-{VERSION}.jar: A constraint navigator full java runtime, here for windows, fitting your operating system - see artifacts for other options on linux and macos.
    • aivis-engine-v2-base-sdk-java-{VERSION}.jar: The base java sdk
    • aivis-engine-v2-cn-sdk-java-{VERSION}.jar: The constraint navigator java sdk
    • There is NO toolbox jar for HTML report generation.
  • An aivis licensing key, see licensing, which you will receive directly from aivis Support

Preparations:

  • Make sure you have a valid Java (>= 11) installation.
  • To apply the aivis licensing key, create an environment variable AIVIS_ENGINE_V2_API_KEY and assign the licensing key to it.
  • Make sure you have an active internet connection so that the licensing server can be contacted.
  • Download and unzip the constraint-navigator-examples.zip. The data CSVs train_cn.csv and eval_cn.csv and the model sp_model_0.json need to stay in **/data.
  • Download and unzip the libs.zip. These .jar-files need to be in **/libs.

The folder now has the following structure:

+- data
|  +- train_cn.csv
|  +- eval_cn.csv
|  +- sp_model_0.json
|
+- docker
|  +- # files to run the example via docker images, which we will not need now
|
+- java
|  +- # files to run the example via java sdk 
|
+- libs
|  +- # the .jar files to run aivis
|
+- python
|  +- # files to run the example via python sdk, which we will not need now 

Running the example code:

  • We use Gradle as our Java-Package-Manager. It's easiest to directly use the gradle wrapper.
  • Navigate to the **/java subfolder. Here, you find the build.gradle. Check that the paths point correctly to your aivis engine v2 .jar files in the **/libs subfolder.
  • Open a console in the **/java subfolder and run the following commands:
      # builds this Java project with gradle wrapper
      ./gradlew clean build
    
      # runs Java with parameters referring to input and output folder
      java -jar build/libs/example_cn.jar --input=../data --output=output
    

After running the scripts, you will find your computation results in **/java/output.

Artifacts

Our SDK artifacts come in two flavours:

  • full packages provide the full functionality of hub creation and inference and are available for mainstream targets only:
    • win-x8664
    • macos-armv8* (macOS 11 "Big Sur" or later)
    • macos-x8664* (macOS 11 "Big Sur" or later; until aivis engine version 2.9.0)
    • linux-x8664 (glibc >= 2.14)
  • inf packages contain only API functions regarding the inference of a model. As lightweight artifacts, they are available for a broader range of targets:
    • win-x8664
    • macos-armv8* (macOS 11 "Big Sur" or later)
    • macos-x8664* (macOS 11 "Big Sur" or later; until aivis engine version 2.9.0)
    • linux-x8664 (glibc >= 2.14)
    • linux-armv7 (glibc >= 2.18; until aivis engine version 2.9.0)
    • linux-armv8 (glibc >= 2.18; until aivis engine version 2.9.0)

* Only Python and C SDKs are supported. Java SDK is not available for this target.

In this chapter we want to demonstrate the full API functionality and thus always use the full package.

To use the Python-SDK, you must download the SDK artifact (flavour- and target-generic) for your pythonpath at build time. Additionally, at installation time, the runtime artifact must be downloaded with the right flavour and target.

The artifacts are distributed through a PyPI registry.

Using Poetry you can simply set a dependency on the artifacts specifying flavour and version. The target is chosen depending on your installation system:

aivis_engine_v2_cn_sdk_python = "{VERSION}"
aivis_engine_v2_cn_runtime_python_{FLAVOUR} = "{VERSION}"

The SDK supports the full API and will throw a runtime exception if a non-inference function is invoked with an inference-flavoured runtime.

To use the Java-SDK, you must download at build time:

  • SDK artifact (flavour and target generic) for your compile and runtime classpath
  • Runtime artifact with the right flavour and target for your runtime classpath

It is possible to include multiple runtime artifacts for different targets in your application to allow cross-platform usage. The SDK chooses the right runtime artifact at runtime.

The artifacts are distributed through a Maven registry.

Using Maven, you can simply set a dependency on the artifacts specifying flavour, version and target:

<dependency>
  <groupId>com.vernaio</groupId>
  <artifactId>aivis-engine-v2-cn-sdk-java</artifactId>
  <version>{VERSION}</version>
</dependency>
<dependency>
  <groupId>com.vernaio</groupId>
  <artifactId>aivis-engine-v2-cn-runtime-java-{FLAVOUR}-{TARGET}</artifactId>
  <version>{VERSION}</version>
  <scope>runtime</scope>
</dependency>

Alternatively, with Gradle:

implementation 'com.vernaio:aivis-engine-v2-cn-sdk-java:{VERSION}'
runtimeOnly    'com.vernaio:aivis-engine-v2-cn-runtime-java-{FLAVOUR}-{TARGET}:{VERSION}'

The SDK supports the full API and will throw a runtime exception if a non-inference function is invoked with an inference-flavoured runtime.

Licensing

A valid licensing key is necessary for every aivis calculation in every engine and every component. It has to be set (exported) as environment variable AIVIS_ENGINE_V2_API_KEY.
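
If you start the engine from Python, the variable can also be set programmatically before the first aivis call; a minimal sketch with a placeholder key (exporting the variable in your shell or CI environment works just as well):

import os

# must be set before the first aivis computation is triggered
os.environ.setdefault("AIVIS_ENGINE_V2_API_KEY", "<FirstPartOfKey>.<SecondPartOfKey>")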

aivis will send HTTPS requests to https://v3.aivis-engine-v2.vernaio-licensing.com (before release 2.7: https://v2.aivis-engine-v2.vernaio-licensing.com, before release 2.3: https://aivis-engine-v2.perfectpattern-licensing.de) to check if your licensing key is valid. Therefore, an active internet connection is required, and no firewall may block applications other than the browser from calling this URL.

If aivis returns a licensing error, please check the following items before contacting aivis Support:

  • Has the environment variable been correctly set?
  • Licensing keys have the typical form <FirstPartOfKey>.<SecondPartOfKey> with first and second part being UUIDs. In particular, there must be no whitespace.
  • Applications and in particular terminals often need to be restarted to learn newly set environment variables.
  • Open https://v3.aivis-engine-v2.vernaio-licensing.com in your browser. The expected outcome is "Method Not Allowed"; in that case, at least the URL is not generally blocked.
  • Sometimes, firewalls block applications other than the browser from accessing certain or all websites. Try to find out whether you have such a strict firewall.

Setup

Before we can invoke API functions of our SDK, we need to set it up for proper usage and consider the following things.

Releasing Unused Objects

It is important to ensure the release of allocated memory for unused objects.

In Python, freeing objects and destroying engine resources like Data-, Training- and Inference-objects is done automatically. You can force resource destruction with the appropriate destroy function.

In Java, freeing objects is done automatically, but you need to destroy all engine resources like Data-, Training- and Inference-objects with the appropriate destroy function. As they all implement Java's AutoCloseable interface, you can also use a try-with-resources statement to auto-destroy them:

try(final ConstraintNavigatorData evaluationData = ConstraintNavigatorData.create()) {

  // ... do stuff ...
  
} // auto-destroy when leaving block

Error Handling

Errors and exceptions report what went wrong on a function call. They can be caught and processed by the outside.

In Python, an Exception is thrown and can be caught conveniently.

In Java, an AbstractAivisException is thrown and can be caught conveniently.

Failures within function calls will never affect the state of the engine.

Logging

The engine emits log messages to report on the progress of each task and to give valuable insights. These log messages can be caught via registered loggers.

# create logger
class Logger(EngineLogger):
    def log(self, level, thread, module, message):
        if (level <= 3):
            print("\t... %s" % message)

# register logger
ConstraintNavigatorSetup.register_logger(Logger())
// create and register logger
ConstraintNavigatorSetup.registerLogger(new EngineLogger() {
            
    public void log(int level, String thread, String module, String message) {
        if (level <= 3) {
            System.out.println(String.format("\t... %s", message));
        }
    }
});

Thread Management

During the usage of the engine, a lot of calculations are done. Parallelism can drastically speed things up. Therefore, set the maximal thread count to a limited number of CPU cores, or set it to 0 to use all available cores (the default is 0).

# init thread count
ConstraintNavigatorSetup.init_thread_count(4)
// init thread count
ConstraintNavigatorSetup.initThreadCount(4);

Hub Creation

First we create a model context.

# create model context for model data
model_context = ConstraintNavigatorModelContext.create()

# add a signal prediction model, where sp_model_0 is a signal prediction model as a parsed JSON object
model_context.add_signal_prediction(
  "sp_model_0", json.dumps(sp_model_0)
)

# build hub config
hub_config = json.dumps({})

# create model hub
hub = ConstraintNavigatorHub.create(model_context, hub_config)

# get hub model 
hub_model = hub.get_model()

# get hub report
hub_report = hub.get_report()
// create model context for model data
final ConstraintNavigatorModelContext modelContext = ConstraintNavigatorModelContext.create();

// add a signal prediction model, where spModel is a signal prediction model as a JSON string
modelContext.addSignalPrediction("sp_model_0", spModel);

// build hub config 
final IDtoHubConfig hubConfig = new DtoHubConfig(); 

// create model hub  
final ConstraintNavigatorHub hub = ConstraintNavigatorHub.create(modelContext, hubConfig);  

// get hub model 
final ConstraintNavigatorHubModel hubModel = hub.getModel();

// get hub report    
final ConstraintNavigatorHubReport hubReport = hub.getReport();

Evaluation / Inference

After we have created a model hub, we can evaluate it and perform a constraint optimization on the inference data (out-of-sample). This way, we obtain a continuous stream of values — exactly as it would be desired by the machine operator.

We can create the inference directly from the hub. If we use the inf flavour only, the inference can solely be initialized from a stored hub model.

# build inference config
inference_config = json.dumps({
    "dataFilter": {},
    "skipOnInsufficientData": True,
    "cost": {"_type": "L2Distance"},
    "modelConstraints": [{
        "model": "sp_model_0",
        "condition": {
            "_type": "StaticNumerical",
            "lowerThreshold": 0.9,
        },
    }],
    "signalConstraints": [{
        "signal": "0",
        "condition": {
            "_type": "StaticNumerical",
            "lowerThreshold": 0.0,
            "upperThreshold": 16.0,
        },
    }],
})

# build inference
inference = ConstraintNavigatorInference.create_by_hub(hub, inference_config)

# ... use inference ...

// build model constraints 
final DtoStaticNumericalCondition modelCondition = new DtoStaticNumericalCondition().withLowerThreshold(0.9);
final IDtoModelConstraint[] modelConstraints = { new DtoModelConstraint("sp_model_0", modelCondition) };

// build signal constraint
final DtoStaticNumericalCondition signalCondition = 
    new DtoStaticNumericalCondition().withUpperThreshold(16.0).withLowerThreshold(0.0);
final IDtoSignalConstraint[] signalConstraints = { new DtoSignalConstraint("0", signalCondition) };

// build inference config
final DtoInferenceConfig inferenceConfig = 
    new DtoInferenceConfig(true, new DtoL2DistanceCost())
          .withModelConstraints(modelConstraints)
          .withSignalConstraints(signalConstraints);

// create inference
try(final ConstraintNavigatorInference inference = ConstraintNavigatorInference.createByHub(hub, inferenceConfig)) {

  // ... use inference ...

} // auto-destroy inference

Finally, we want to infer constraints and next normal recommendations for a list of Inference Timestamps. For this, we again need to provide a filled data store, which this time holds our Inference Data. Use the following routine to do so.

After the creation of the data store, you can fill it with signal data. We assume that the folder path/to/input/folder/ contains eval_cn.csv, which is a file following the CSV Format Specification.

# create empty data context for inference data
inference_data = ConstraintNavigatorData.create()

# create config for files reader
files_reader_config = json.dumps(
    {
        "folder": "path/to/input/folder/"
    }
)

# read data 
inference_data.read_files(files_reader_config)

# ... use inference data ...
// create empty data context for inference data
try(final ConstraintNavigatorData inferenceData = ConstraintNavigatorData.create()) {
  
  // create config for files reader
  final DtoTimeseriesFilesReaderConfig inferenceFilesReaderConfig = new DtoTimeseriesFilesReaderConfig("path/to/input/folder/");
  
  // read data 
  inferenceData.readFiles(inferenceFilesReaderConfig);
  
  // ... use inference data ...
  
} // auto-destroy inference data

Having ingested the inference data, we can finally evaluate constraints and compute next normal values.


# choose inference timestamps
timestamps = ...

# create next normal config
next_normal_config = json.dumps({})

# infer float values with next normal
inferences = inference.infer_float_with_next_normal(inference_data, timestamps, next_normal_config)

# ... use scores e.g. for plotting ...
// choose inference timestamps
final List<Long> timestamps = ...

// create next normal config
final DtoFloatNextNormalConfig nextNormalConfig = new DtoFloatNextNormalConfig();    

// infer float values with next normal
final List<DtoFloatConstraintValuesWithNextNormal> scores = inference.inferFloatWithNextNormal(inferenceData, timestamps, nextNormalConfig);

// ... use scores e.g. for plotting ...

Getting Started (Docker)

The docker images of aivis Constraint Navigator are prepared for easy usage. They use the SDK internally, but have a simpler file-based interface. If you have a working docker workflow system like Argo, you can build your own automated workflow based on these images.

In this chapter, we will show you how to get started using docker images. Usage of the SDK will be covered in the next chapter.

Run Example Code

A working example that builds on the code explained below can be downloaded directly here: constraint-navigator-examples.zip.

This zip file contains example code for docker, python and java in respective subfolders. All of them use the same dataset which is in the data subfolder.

Prerequisites: In addition to the constraint-navigator-examples.zip you just downloaded, you need the following artifacts. To gain access, you can open a support ticket via aivis Support.

  • The docker images aivis-engine-v2-cn-hub-worker, aivis-engine-v2-cn-inference-worker and (optionally for HTML report generation) aivis-engine-v2-toolbox
  • An aivis licensing key, see licensing

As a Kubernetes user, even without deeper Argo knowledge, the aivis-engine-v2-example-cn-argo.yaml best shows how the containers are executed one after another, how the hub and inference workers are provided with folders containing the data CSVs, and how the toolbox assembles an HTML report at the end.

Artifacts

There are 3 different docker images:

  • The Hub Worker creates the model:
    {REGISTRY}/{NAMESPACE}/aivis-engine-v2-cn-hub-worker:{VERSION}
  • The Inference Worker creates next normal predictions for a predefined time window. This is convenient for a historical evaluation of a hub model:
    {REGISTRY}/{NAMESPACE}/aivis-engine-v2-cn-inference-worker:{VERSION}
  • The Inference Service offers a RESTful web API that allows a live service using HTTP calls:
    {REGISTRY}/{NAMESPACE}/aivis-engine-v2-cn-inference-service:{VERSION}

All docker images are Linux-based.

Requirements

You need an installation of Docker on your machine as well as access to the engine artifacts:

docker -v
docker pull {REGISTRY}/{NAMESPACE}/aivis-engine-v2-cn-hub-worker:{VERSION}

Licensing

Licensing works exactly as described in the Licensing section of the SDK chapter above: a valid licensing key must be set (exported) as environment variable AIVIS_ENGINE_V2_API_KEY, an active internet connection to the licensing server is required, and the same troubleshooting checklist applies in case of a licensing error.

Create hub model

First, we need to create the hub model (workflow step 1: Hub Creation) using the Hub Worker.

At the beginning, we create a folder docker, a subfolder hub-config and add the configuration file config.yaml:

hub:
  models:
    - _type: SignalPrediction
      id: sp_model_0
      file: /srv/data/sp_model_0.json
output:
  folder: /srv/output

The keys hub and output control input and output; more information can be found in the docker reference manual. The possible subkeys of the hub creation are explained in the hub section and in the docker reference manual.

As a next step, we create a second folder data and add the model sp_model_0.json to the folder. Afterwards, we create a blank folder output. Our folder structure should now look like this:

+- docker
|  +- hub-config
|      +- config.yaml
|
+- data
|  +- sp_model_0.json
|
+- output

Finally, we can start our hub creation via:

docker run --rm -it \
  -v $(pwd)/docker/hub-config:/srv/conf \
  -v $(pwd)/data/sp_model_0.json:/srv/data/sp_model_0.json \
  -v $(pwd)/output:/srv/output \
  -e AIVIS_ENGINE_V2_API_KEY={LICENSE_KEY} \
  {REGISTRY}/{NAMESPACE}/aivis-engine-v2-cn-hub-worker:{VERSION}
docker run --rm -it `
  -v ${PWD}/docker/hub-config:/srv/conf `
  -v ${PWD}/data/sp_model_0.json:/srv/data/sp_model_0.json `
  -v ${PWD}/output:/srv/output `
  -e AIVIS_ENGINE_V2_API_KEY={LICENSE_KEY} `
  {REGISTRY}/{NAMESPACE}/aivis-engine-v2-cn-hub-worker:{VERSION}

After a short time, this should lead to two output files in the output folder:

  • hub-report.json, which contains a translation from feature ids to aspects of signals.
  • hub-model.json, which holds all model information for the following Inference.

Evaluation / Inference

With a hub model at hand, we can run a constraint optimization on a historical evaluation data set (bulk inference). For evaluation, we use a new data set in order to check how the engine performs on unseen ("out-of-sample") data. The config file below instructs the Inference Worker to create a constraint optimization prediction every millisecond between start time and end time. All times are expressed as UNIX Timestamps in milliseconds. With this config file we obtain a continuous stream of values — exactly as it would be desired by the machine operator.

For this, we create a second subfolder inference-config of the docker folder and add the configuration file config.yaml:

data:
  folder: /srv/data
  dataTypes:
    defaultType: FLOAT
inference:
  config:
    skipOnInsufficientData: true
    cost:
      _type: L2Distance
    modelConstraints:
      - model: "sp_model_0"
        condition:
          _type: StaticNumerical
          lowerThreshold: 0.9
    signalConstraints:
      - signal: "0"
        condition:
          _type: StaticNumerical
          lowerThreshold: 0.0
          upperThreshold: 16.0
  modelFile: /srv/output/hub-model.json
  timestamps:
    - _type: Equidistant
      startTime: 1749
      endTime: 1797
      interval: 1
output:
  folder: /srv/output          

After that, we add the Inference Data CSV file eval_cn.csv to the data folder. Our folder structure should now look like this:

+- docker
|  +- hub-config
|      +- config.yaml
|  +- inference-config
|      +- config.yaml
|
+- data
|  +- eval_cn.csv
|
+- output

Finally, we can run the Inference via:

docker run --rm -it \
  -v $(pwd)/docker/inference-config:/srv/conf \
  -v $(pwd)/data/eval_cn.csv:/srv/data/eval_cn.csv \
  -v $(pwd)/output:/srv/output \
  -e AIVIS_ENGINE_V2_API_KEY={LICENSE_KEY} \
  {REGISTRY}/{NAMESPACE}/aivis-engine-v2-cn-inference-worker:{VERSION}
docker run --rm -it `
  -v ${PWD}/docker/inference-config:/srv/conf `
  -v ${PWD}/data/eval_cn.csv:/srv/data/eval_cn.csv `
  -v ${PWD}/output:/srv/output `
  -e AIVIS_ENGINE_V2_API_KEY={LICENSE_KEY} `
  {REGISTRY}/{NAMESPACE}/aivis-engine-v2-cn-inference-worker:{VERSION}

Successful execution should lead to the file float-constraint-values-with-next-normal.json in the output folder. For example, the evaluation at timestamp 1753 results in the above plot 9 -> 0. Such a historical evaluation is best achieved with the Inference Worker. For continuous live monitoring, the Inference Service may be preferable, as it offers a RESTful API to trigger predictions via HTTP, in contrast to the file based API of the Inference Worker.

Preparation

Previous sections gave a basic introduction to using aivis Constraint Navigator. The following sections provide a more profound background. This background is not necessary for using aivis Constraint Navigator, but you may find helpful details for specific problems, such as allowed data types or restrictions. The following sections are organized along the workflow of first creating a hub model and then performing a constraint inference on it. Minimal user input is required for this workflow; nevertheless, the user can control the process with several input parameters, which are presented below.

Hub

To run a constraint optimization we need to build a model hub, which in turn is built from a model context by adding aivis models to it. The model context, and therefore the resulting model hub, accepts the following aivis models:

  • aivis signal prediction model trained with a numerical interpreter
  • aivis anomaly detection model
  • aivis state detection model (by segment)

See the documentation of the respective engine for further information on any of these models. Currently, one cannot use an aivis signal prediction model trained with a categorical interpreter, i.e. a classification model.

Output: Report and Hub Model

There are two outputs that can be retrieved from the model hub.

First, the report contains a translation table from features to understandable signal features. The output of the endpoint infer float with next normal contains the next normal feature values indexed by an internal integer id. With this table at hand one can map, e.g. feature 123 to lag 60000 of Signal "Temperature Sensor Z". For more on features we refer to the features section.

Second, a hub model as json can be retrieved and saved, which will later be used for inferences, i.e. constraint optimization.

Inference

When the hub model is created, it is ready for inference. Inference means that we provide new (usually unseen) data around a certain timestamp and ask for a prediction/constraint optimization at said timestamp.

In general, there are two main scenarios in which you want to make inferences. The first one is performance evaluation of the hub model on historical data, i.e., some test data set.

The second typical scenario for making inferences is using them in a productive setting. This is called live inference. For live inference, inferences are usually made on an ongoing basis, as this is typically what you would want for most productive use cases. This is contrary to performance evaluation, where all inferences are made in one go.

For each of the above scenarios, there is a dedicated docker image. The Inference Worker creates predictions for a predefined time window in a bulk manner for hub model evaluation. In contrast, the Inference Service is optimized for live inference. It offers a RESTful web API that allows the triggering of individual predictions for a specified time via an HTTP call. Due to the different application modes, APIs differ between the different docker images and the SDK. These differences will be noted in the following sections.

Inference Data

In the course of using an aivis inference, inference data needs to be ingested. This chapter explains the terminology as well as the required format, quality and quantity.

Timeseries Data / Signals

Most aivis engines work on time series data that is made up of signals. Every signal consists of two things, these being

  • an ID, which is an arbitrary string, except the reserved names timestamp and availability. The ID needs to be unique within the data.
  • a list of data points. Each data point consists of a signal value and a specific point in time, the Detection Timestamp (optionally there can also be an Availability Timestamp, but more on that later). Usually the values are the result of a measurement happening in a physical sensor like a thermometer, photocell or electroscope, but you can also use market KPIs like stock indices or resource prices as a signal.

The data points for one or more signals for a certain detection time range are called a history.

Timeseries

The values of a signal can be boolean values, 64-bit Floating Point numbers or Strings. Non-finite numbers (NAN and infinity) and empty strings are regarded as being unknown and are therefore skipped.

Points in time are represented by UNIX Timestamps in milliseconds (64-bit Integer). This means the number of milliseconds that have passed since 01.01.1970 00:00:00 UTC.
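
For convenience, a short Python sketch converting a human-readable UTC time into this representation:

import datetime as dt

# UNIX timestamp in milliseconds for 2020-02-01 00:21:00 UTC
ts = int(dt.datetime(2020, 2, 1, 0, 21, tzinfo=dt.timezone.utc).timestamp() * 1000)
print(ts)  # 1580516460000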

Detection Timestamp

The point in time that a signal value belongs to is called the Detection Timestamp. This is usually the timestamp when the measurement originally took place. If the measurement is a longer offline process, it should refer to the point in time at which the measured property was established, e.g. the time of sample drawing, or the production time for delayed sampling. For the target signal, the Detection Timestamp should be set to the time at which you would have liked to measure the signal online. In the aivis Signal Prediction example use case, the paper quality is such a signal: it is measured around 2 hours after the production of the paper in a laboratory and must be backdated to a fictitious, but instantaneous, quality measurement in the process.

Different signals may have different Detection Timestamps. Some might have a new value every second, some every minute, some just when a certain event happens. aivis automates the process of synchronizing them internally. This includes dealing with holes in the data.

Availability Timestamp

When doing a historical evaluation, we want to know what the engine would have inferred/predicted for a list of Inference Timestamps that lie in the past (Inference Timestamps are the moments for which you want to get an inference). For a realistic inference, the engine must ignore all signal values that were not yet available in the database at the Inference Timestamp. A good example of such a case is a measurement recorded by a human: the value will be backdated to the Detection Timestamp, but it took, say, 5 minutes to extract the value and report it to the system. It would therefore be wrong to assume that one minute after this fictitious Detection Timestamp the value was already available to the Inference. Another example is the fully automated, lagged data ingestion of distributed systems (especially cloud systems).

There are multiple ways to handle availability. Which strategy you use depends on the concrete use case.

Availability

To allow for these different strategies, every data point can have an additional Availability Timestamp that tells the system when this value became available or would have been available. Signal values whose Availability Timestamp lies after the Inference Timestamp are not taken into account for an inference at this Inference Timestamp.

If there is no knowledge about when data became available, the Availability Timestamp can be set to the Detection Timestamp — but then you must keep in mind that your historical evaluation might look better than it would have been in reality.

Inference Data Specification

When making an inference with a hub model, or any other model, the user needs an easy way to determine the signals required for the inference to work. This is especially necessary to minimize the overhead of ingesting data.

When a hub model is created, the engine calculates which signals are relevant for the Inference. Furthermore, for each relevant signal a start lag and an end lag are provided, indicating at which timestamps before the given Inference Timestamp data is needed to make an inference. To make a prediction at a given Inference Timestamp, data at or before Inference Timestamp - start lag is sufficient, see "Nearest Predecessor" below. This information is called the Inference Data Specification.

You can inspect a model for its Inference Data Specification calling get data specification in the SDKs.
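
In the Python SDK, such an inspection could look roughly as follows. This is a sketch: the method name get_data_specification follows the endpoint name mentioned above, and the result layout (signal ids with start and end lags) is our assumption for illustration.

import json

# ask the hub model which signals the inference needs and with which lags;
# hub_model is the model obtained in the Hub Creation section
data_spec = json.loads(hub_model.get_data_specification())

for signal in data_spec["signals"]:  # assumed layout
    print(signal["id"], "start lag:", signal["startLag"], "end lag:", signal["endLag"])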

The following diagram gives you a visual representation of what an Inference Data Specification could look like:

Inference Data Specification

In the diagram you see that a start lag and an end lag are specified for every signal. For the Inference, this means that for each signal we need all data points whose detection timestamps lie in the window [ inference timestamp - start lag; inference timestamp - end lag ], as well as the nearest predecessor (see below).

Nearest Predecessor

As previously mentioned, for each signal data needs to be present at inference timestamp - start lag. Typically, there is no measurement for exactly this point in time. Then, you must include the nearest predecessor, i.e. the last value before the beginning of the time window, to enable an inference at the inference timestamp. The engine takes this value as an estimate for the signal value at inference timestamp - start lag. Of course, this first data point must also be available at the Inference Timestamp (regarding the Availability Timestamp).

Nearest Predecessor

Depending on the configuration (see skipOnInsufficientData), the engine will either throw an error or ignore timestamps for which there is no data at or before inference timestamp - start lag.

CSV Format

All artifacts use CSV as the input data format. As the CSV format is highly non-standardized, we will discuss it briefly in this section.

CSV files must be stored in a single folder specified in the config under data.folder. Within this folder the CSV files can reside in an arbitrary subfolder hierarchy. In some cases (e.g. for HTTP requests), the folder must be passed as a ZIP file.

General CSV rules:

  • The file’s charset must be UTF-8.
  • Records must be separated by Windows or Unix line ending (CR LF/LF). In other words, each record must be on its own line.
  • Fields must be separated by comma.
  • The first line of each CSV file represents the header, which must contain column headers that are file-unique.
  • Every record including the header must have the same number of fields.
  • Text values must be enclosed in quotation marks if they contain literal line endings, commas or quotation marks.
  • Quotation marks inside such a text value have to be prefixed (escaped) with another quotation mark.

Special rules:

  • One column must be called timestamp and contain the Detection Timestamp as UNIX Timestamps in milliseconds (64-bit Integer)
  • Another column can be present that is called availability. This contains the Availability Timestamp in the same format as the Detection Timestamp.
  • All other columns, i.e. the ones that are not called timestamp or availability, are interpreted as signals.
  • Signal IDs are defined by their column headers
  • If there are multiple files containing the same column header, this data is regarded as belonging to the same signal
  • Signal values can be boolean values, numbers and strings
  • Empty values are regarded as being unknown and are therefore skipped
  • Files directly in the data folder or in one of its subfolders are ordered by their full path (incl. filename) and read in this order
  • If there are multiple rows with the same Detection Timestamp, the data reader passes all of them to the engine, which uses the last value read

Boolean Format

Boolean values must be written in one of the following ways:

  • true/false (case insensitive)
  • 1/0
  • 1.0/0.0 with an arbitrary number of additional zeros at the end

Regular expression: (?i:true)|(?i:false)|1(\.0+)?|0(\.0+)?

Number Format

Numbers are stored as 64-bit Floating Point numbers. They are written in scientific notation like -341.4333e-44, so they consist of the compulsory part Significand and an optional part Exponent that is separated by an e or E.

The Significand contains one or multiple figures and optionally a decimal separator ".". In such a case, figures before or after the separator can be omitted and are assumed to be 0. It can be prefixed with a sign (+ or -).

The Exponent contains one or multiple figures and can be prefixed with a sign, too.

The 64-bit Floating Point specification also allows for 3 non-finite values (not a number, positive infinity and negative infinity) that can be written as nan, inf/+inf and -inf (case insensitive). These values are valid, but the engine regards them as being unknown and they are therefore skipped.

Regular expression: (?i:nan)|[+-]?(?i:inf)|[+-]?(?:\d+\.?|\d*\.\d+)(?:[Ee][+-]?\d+)?
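
The grammar can be verified directly, for example with Python's re module. Note that the inline (?i:...) group flags above require Python 3.11 or later; the sketch below therefore uses the equivalent formulation with a global IGNORECASE flag.

import re

# the engine's number grammar from above, with case-insensitivity applied
# globally instead of via inline group flags
NUMBER = re.compile(r"nan|[+-]?inf|[+-]?(?:\d+\.?|\d*\.\d+)(?:[Ee][+-]?\d+)?",
                    re.IGNORECASE)

for token in ["-341.4333e-44", "+inf", "NaN", "1.", ".5", "abc"]:
    print(token, bool(NUMBER.fullmatch(token)))  # only "abc" does not match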

String Format

String values must be encoded as UTF-8. Empty strings are regarded as being unknown values and are therefore skipped.

Example

timestamp,availability,SIGNAL_1,SIGNAL_2,SIGNAL_3,SIGNAL_4,SIGNAL_5
1580511660000,1580511661000,99.98,74.33,1.94,true,
1580511720000,1580511721000,95.48,71.87,-1.23,false,MODE A
1580511780000,1580511781000,100.54,81.19,,1e-5,MODE A
1580511840000,1580511841000,76.48,90.01,2.46,0.0,MODE C
...

Inference Timestamps

In the SDK, in order to make inferences, you pass a list of timestamps for which the inferences are to be made. This allows requesting a single live inference result as well as bulk inference for model evaluation on historic data. Typically, it is easy to generate such a list of timestamps in the programming language that calls the SDK. Docker images, on the other hand, are not necessarily called from within a powerful programming language. This is not an issue for the Inference Service: for live inference, typically only a single timestamp, the most recent one, is requested. However, it could be cumbersome to derive a list of timestamps for the Inference Worker. Therefore, for the Inference Worker, timestamps are selected via a list of timestamps configs. There are two different methods:

  • Equidistant: provides equidistant inference timestamps with a fixed interval (for example one inference each minute). Typical use case: obtaining continuous inferences in some time interval.
  • AtNextSignalValue: selects those timestamps for inference for which there are data points for some specified signal. Typical use case: model validation, where it is necessary to make inferences for timestamps at which target values are known.

For both timestamps configs, there are a start time and an end time. An operative signal can be used to further restrict the timestamps. A minimum interval may also be set to avoid calculating too many inferences in a short time interval; this can speed up the computation and may help balance the distribution of inferences. Finally, note that further flexibility can be gained by providing several timestamps configs, in which case all resulting timestamps are combined. An example was already provided in the Getting Started section; a sketch for the SDK case follows below.
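
As noted above, generating such a list in the SDK's calling language is straightforward; for example, equidistant timestamps analogous to the Equidistant config:

# equidistant inference timestamps (UNIX ms): one inference per minute
# over an example one-hour window
start_time = 1580511600000
end_time = 1580515200000
interval = 60000
timestamps = list(range(start_time, end_time + 1, interval))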

Inference Configuration

We explain here the constraint navigator specific parameters in the inference config, which needs to be provided when an inference instance is initialized.

Cost

The entry cost in the inference config refers to the cost function used in the constraint optimization. There are three cost function options:

  • l2 distance to observed features (Euclidean distance)
  • l1 distance to observed features
  • model id referring to a model present in the hub model

If one chooses the third option, i.e. a model as the cost function, one can choose whether to minimize or maximize it. For the l1 and l2 distance cost functions we always minimize.
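
The distance costs are configured as shown in the earlier examples. For the model-based cost this guide does not spell out the configuration keys, so the last entry in the sketch below is an illustrative assumption only, not the verbatim schema:

# known from the examples in this guide
l2_cost = {"_type": "L2Distance"}

# assumption: an l1 cost named analogously to the l2 one
l1_cost = {"_type": "L1Distance"}

# illustrative assumption only: a model of the hub serving as cost function,
# together with the choice of minimizing or maximizing it
model_cost = {"_type": "Model", "model": "sp_model_0", "direction": "Minimize"}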

Model Constraints

Under model constraints we specify the thresholds imposed on the models, which then serve as constraints in the optimization problem. For each model we can set a condition which is either static or dynamic. A static condition means providing a float-valued lower threshold and upper threshold which the constraint must fulfill at all timestamps.

A dynamic condition, on the other hand, means providing signal ids lower threshold signal and upper threshold signal, from which the thresholds are read dynamically at the current inference timestamp. This way, thresholds that change over time can be taken into account. If configured in the inference config, lower threshold signal and upper threshold signal must be present in the inference data for the constraint optimization to work. They are also part of the inference data specification retrieved by calling get data specification in the SDKs.
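
By analogy with the StaticNumerical condition shown earlier, a dynamic condition could plausibly be configured as follows. The _type and key names are assumptions derived from the prose above, not the verbatim schema:

# assumed syntax: thresholds are read at each inference timestamp from the
# given signals instead of being fixed floats
model_constraint = {
    "model": "sp_model_0",
    "condition": {
        "_type": "DynamicNumerical",            # assumption, by analogy
        "lowerThresholdSignal": "lower_bound",  # signal id holding the lower threshold
        "upperThresholdSignal": "upper_bound",  # signal id holding the upper threshold
    },
}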

Signal Constraints

There is also the possibility to specify signal constraints. Similar to model constraints, this consists of setting static or dynamic conditions. Setting a condition means that we only look for optima within the provided range of the signal.

In our example the signals denote the grey tone of a pixel and lie between 0 and 16. To ensure this for the next normal values as well, we impose signal constraints with thresholds 0 and 16 in the example.

If a condition is set on a signal which also enters a model via a derived feature, such as a lagged feature, the same condition is imposed on the derived feature as on the original signal.

A Fully Loaded Inference Configuration

Regarding the inference configuration, we provide below an overview of all possible configuration keys. This overview is meant for quick reference. A minimal inference configuration is provided in the SDK inference section and in the Docker inference section, respectively.

config:
  dataFilter:
    startTime: 1465632000000
    endTime: 1466786640000
    excludeSignals: 
    - signal: L-8
      startTime: 1465698800000
      endTime: 1465720600000
    # includeSignals: ... similar
    includeRanges:  
    - startTime: 1465632000000
      endTime: 1465698800000
    - startTime: 1465720600000
      endTime: 1466786640000
    # excludeRanges: ... similar
  skipOnInsufficientData: true
  cost:
    _type: L2Distance
  modelConstraints:
    - model: "sp_model_0"
      condition:
        _type: StaticNumerical
        lowerThreshold: 0.9
  signalConstraints:
    - signal: "0"
      condition:
        _type: StaticNumerical
        lowerThreshold: 0.0
        upperThreshold: 16.0      
inference_config = json.dumps({
  "dataFilter": {
    "startTime": 1465632000000,
    "endTime": 1466786640000,
    "excludeSignals": [{
      "signal": "L-8",
      "startTime": 1465698800000,
      "endTime": 1465720600000  
    }],
    # "includeSignals": ... similar
    "includeRanges": [{
      "startTime": 1465632000000,
      "endTime": 1465698800000
      }, {
      "startTime": 1465720600000,
      "endTime": 1466786640000
    }],
    # "excludeRanges": ... similar
  },
  "skipOnInsufficientData": True,
  "cost": {"_type": "L2Distance"},
  "modelConstraints": { 
      "model": "sp_model_0", 
      "condition": {
          "_type": "StaticNumerical",
          "lowerThreshold": 0.9,
          "upperThreshold": 2.0,
      }
  },
  "signalConstraints": {
      "signal": "0", 
      "condition": {
          "_type": "StaticNumerical",
          "lowerThreshold": 0.0,
          "upperThreshold": 16.0,
      }
  },        
})

// build model constraints
final DtoStaticNumericalCondition modelCondition = 
    new DtoStaticNumericalCondition().withUpperThreshold(2.0).withLowerThreshold(0.9);
final IDtoModelConstraint[] modelConstraints = { new DtoModelConstraint("sp_model_0", modelCondition) };

// build signal constraint
final DtoStaticNumericalCondition signalCondition = 
    new DtoStaticNumericalCondition().withUpperThreshold(16.0).withLowerThreshold(0.0);
final IDtoSignalConstraint[] signalConstraints = { new DtoSignalConstraint("0", signalCondition) };

final DtoInferenceConfig inferenceConfig = new DtoInferenceConfig(true, new DtoL2DistanceCost())
  .withDataFilter(new DtoDataFilter()
    .withStartTime(1465632000000L)
    .withEndTime(1466786640000L)
    .withExcludeSignals(new DtoDataFilterRange[] { 
      new DtoDataFilterRange("L-8")
        .withStartTime(1465698800000L)
        .withEndTime(1465720600000L)
    })
    // .withIncludeSignals ... similar
    .withIncludeRanges( new DtoInterval [] { 
      new DtoInterval()
        .withStartTime(1465632000000L)
        .withEndTime(1465698800000L), 
      new DtoInterval()
        .withStartTime(1465720600000L)
        .withEndTime(1466786640000L)
    })
    // .withExcludeRanges ... similar
  )
  .withModelConstraints(modelConstraints)
  .withSignalConstraints(signalConstraints);

Features

In constraint navigator, features is short for features engineered from signals. Such engineered features are:

  • lags of signals if one of the sub models depends on lags
  • 0-1-valued features originating from using a categorical signal interpreter
  • features originating from using oscillatory signal interpreters

We call such derived features aspects. In constraint navigator we treat each of these features as an independent variable. If there are too many features derived from one signal, there might be conflicts in the next normal suggestions; we therefore advise keeping the number of model constraints which incorporate lags or more exotic signal interpreters minimal.

The features are indexed by an integer id which is used in the output of the infer float with next normal endpoint. Its translation into the signal world can be done with the hub report.
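
A lookup could then work as in the sketch below. The structure of the hub report (a list of feature entries with signal id and lag) is our assumption for illustration; consult your hub-report.json for the actual layout.

import json

# translate internal feature ids back to signals using the hub report
with open("output/hub-report.json") as f:
    report = json.load(f)

feature_table = {entry["feature"]: entry for entry in report["features"]}  # assumed layout
entry = feature_table[123]
print(entry["signal"], "lag:", entry.get("lag"))  # e.g. feature 123 -> lag 60000 of a signal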

Next Normal Configuration and Feature Filter

In order to call the endpoint infer float with next normal, which runs the constraint optimization, the user needs to provide a next normal config.

This config consists of a feature filter. With this filter one can exclude features from the constraint optimization, which means these features will not change throughout the optimization process. Setting include features, on the other hand, means that only the provided features may change.

The include signals and exclude signals entries behave in the same way, referring to all derived features of the provided signals.

Infer Float With Next Normal

To finally trigger the constraint optimization defined by the inference config and the next normal config, we need to call infer float with next normal.

The output of this endpoint at an inference timestamp is rather complex. It contains the following information

  • inference timestamp
  • cost function
    • evaluation before optimization
    • evaluation after optimization
  • constraints
    • evaluation of all models used as constraints before optimization
    • evaluation of all models used as constraints after optimization
  • features
    • observed feature values at inference timestamps
    • next normal feature values

With this output at hand, one can easily double-check whether all model constraints and signal constraints are satisfied, and one obtains the next normal feature values. In the output, the features are indexed by an integer id which can be looked up in the report to relate the id to the underlying signals.

We provide here a minimal example of the syntax of the infer float with next normal method:

data:
    ...
inference:
  config:
    ...
  modelFile: ...
  timestamps:
    ...
  nextNormal:
    featureFilter: 
      excludeFeatures: [206, 207, 212]     
      includeSignals: [S1, S2, S3]     
output:
    ...
# choose inference timestamps
timestamps = ...

# build next normal config
next_normal_config = json.dumps({
    "featureFilter": {
          "excludeFeatures": [206, 207, 212],
          "excludeSignals": ["S1", "S2", "S3"],
    },
})

# infer constraints and next normal point 
inferences_with_next_normal = inference.infer_float_with_next_normal(inference_data, timestamps, next_normal_config)
// choose inference timestamps
final List<Long> timestamps = ...

// build next normal config
final DtoFloatNextNormalConfig nextNormalConfig = new DtoFloatNextNormalConfig();

// infer constraints and next normal point 
final List<DtoFloatConstraintValuesWithNextNormal> inferencesWithNextNormal = inference.inferFloatWithNextNormal(inferenceData, timestamps, nextNormalConfig);

Endpoint Infer Float

We also provide the endpoint infer float, which provides only evaluations of the cost function and of all sub models present in the hub model.

The output of this endpoint at an inference timestamp contains the following information

  • inference timestamp
  • cost function
    • evaluation of cost function
  • constraints
    • evaluation of all models within the hub model

Appendix 1: Expression Language

Before starting the workflow, there is sometimes the need to add a new signal to the dataset (a synthetic signal) that is derived from other signals already present. There are various reasons for this, especially if:

  • you want to predict a quantity that is not in your Training Data, but it could be calculated by a formula. For that task, you need to add the new signal via an expression and then use this new synthetic signal as target.
  • you want to restrict the training to operative periods but there is no signal that labels when your machines were off. However, you may be able to reconstruct these periods based on some other signals.
  • you possess domain knowledge and want to point the engine to some important derived quantity. Often certain derived quantities play a specific role in the application's domain, and might be easier to understand/verify as opposed to the raw quantities.

Technically, you can add synthetic signals using the docker images or any SDK Data API.

To create new synthetic signals in a flexible way, aivis engine v2 features a rich Expression Language to articulate the formula.

The Expression Language is an extension of the scripting language Rhai. We have mainly added support for handling signals natively. Information on the basic usage of the language can be found in the very helpful Language Reference of the Rhai Book. This documentation will mainly focus on the added features.

Signal Type

A signal consists of a list of data points that represents a time series (timestamps and values of the same type).

The following value types are supported:

  • bool : Boolean
  • i64 : 64-bit Integer
  • f64 : 64-bit Floating Point
  • string : UTF-8 String

A signal type and its value type are written generically as signal<T> and specifically like e.g. signal<i64> for an integer signal.

It is not possible to write down a signal literally, but you can refer to an already existing signal in your dataset.

Signal References

Referring to an already existing signal is done via one of these two functions:

  • s(signal_id: string literal): signal<T>
  • s(signal_id: string literal, time_shift: integer literal): signal<T>

The optional time shift parameter shifts the data points into the future. For example, if the signal "a" takes the value 5.7 at timestamp 946684800000, then the following expression takes the same value 5.7 at timestamp 946684808000. The synthesized signal is therefore a lagged version of the original signal "a".

s("a", 8000)

These functions must be used exactly with the syntax above. It is not allowed to invoke them as methods on the signal id. Both parameters must be simple literals without any inner function invocation!

Examples:

s("my signal id")              // OK
s("my signal id", 8000)        // OK
s("my s" + "ignal id", 8000)   // FAIL
"my signal id".s(8000)         // FAIL
s("my signal id", 7000 + 1000) // FAIL

Examples

To begin with, let's start with a very simple example. Let "a" and "b" be the IDs of two float signals. Then

s("a") + s("b")

yields the sum of the two signals. The Rhai + operator has been overloaded to work directly on signals (as have many other operators; see below). Therefore, the above expression yields a new signal. It contains data points for all timestamps of "a" and "b".

A more common application of the expression language is interpolation over several timestamps. For example, "a" might fluctuate, and we may therefore be interested in a local linear approximation of "a" rather than in "a" itself:

trend_intercept(s("a"), t, -1000, 0)

Here, the literal t refers to the current timestamp. Therefore, the expression yields the present value as obtained from a linear approximation over the last second. As another example, the maximum within the last second:

max(slice(s("a"), t, -1000, 0))

A typical use of the expression language is synthesizing an operative signal. Assume you want to make inferences only when your production is running, and you are sure your production is off when some specific signal "speed" falls below a certain threshold, say 10. During maintenance, however, "speed" may exceed the threshold even though production is off. Such maintenance periods last only a few hours, in contrast to production, which usually runs stably for months. In this situation, an operative signal may thus be synthesized by adopting only intervals larger than one day, i.e. 86400000 ms:

set_sframe(s("speed") > 10, false, 86400000)

Additional Signal Functions

In the following, all functions are defined that operate directly on signals and do not have a Rhai counterpart (such as the + operator). Some functions directly return a signal. The others can be used to create signals via the t literal, as explained below. Note that a timeseries is always defined on a finite number of timestamps: all timestamps of all signals involved in the expression are used for the synthesized signal. Time shifts specified in the signal function s(signal_id: string literal, time_shift: integer literal) are taken into account. On the other hand, arguments of the functions below (in particular time, from, and to) do not alter the evaluation timestamps. If you need more evaluation timestamps, please apply add_timestamps to some signal in the expression (see below). An example combining several of these functions is given after the list.

  • add_timestamps(signal_1: signal<T>, signal_2: signal<S>): signal<T> – returns a new signal which extends signal_1 by the timestamps of signal_2. The signal values for the new timestamps are computed with respect to signal_1 using the latest predecessor, similar to the at() function below. The syntax for this expression is s("x1").add_timestamps(s("x2")). (since 2.4)
  • at(signal: signal<T>, time: i64): T – returns the signal value at a given time
    If there is no value at that time, it will go back in history to find the nearest predecessor; if there is no predecessor, it returns NAN, 0, false or ""
  • set_lframe(signal: signal<bool>, new_value: bool, minimal_duration: i64) : signal<bool> – returns a new boolean signal, where large same-value periods of at least duration minimal_duration are set to new_value. Note that the duration of a period is only known after end of the period. This affects the result of this function especially for live prediction.
  • set_sframe(signal: signal<bool>, new_value: bool, maximal_duration: i64) : signal<bool> – returns a new boolean signal, where small same-value periods of at most duration maximal_duration are set to new_value. Note that the duration of a period is only known after end of the period. This affects the result of this function especially for live prediction.
  • slice(signal: signal<T>, time: i64, from: i64, to: i64): array<T> – returns an array with all values within a time window of the given signal.
    The time window is defined by [time + from; time + to]
  • steps(signal: signal<T>, time: i64, from: i64, to: i64, step: i64): array<T> – returns an array with values extracted from the given signal using the at function step by step.
    The following timestamps are used: (time + from) + (0 * step), (time + from) + (1 * step), ... (until time + to is reached inclusively)
  • time_since_transition(signal: signal<bool>, time: i64, max_time: i64) : f64 – returns the time since the last switch of the signal from false to true. If this time exceeds max_time, max_time is returned. Times before the first switch, and times t for which the signal is false throughout [t - max_time, t], are mapped to max_time. (since 2.4)
  • times(signal: signal<T>): signal<i64> – returns a new signal constructed from the given one, where the value of each data point is set to the timestamp
  • trend_slope/trend_intercept(signal: signal<i64/f64>, time: i64, from: i64, to: i64): f64 – returns the slope/y-intercept of a simple linear regression model
    Any NAN value is ignored; returns NAN if there are no data points available; the following timestamps are used: [time + from; time + to]. The intercept at t = time is returned.
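
To illustrate how these functions combine, the following expression averages the values of a signal "temp" (a placeholder id) sampled every 10 seconds over the last minute; steps extracts the values and avg (see Additional Array Functions below) aggregates them:

avg(steps(s("temp"), t, -60000, 0, 10000))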

Best practice combining expressions

When combining several expressions which operate on time windows, it might be better from a performance point of view to build the expression step by step than to write the combination into one expression.

For example, if we want to exclude periods smaller than 30 minutes and periods bigger than 12 hours from an existing boolean signal with signal id "control" we may use the expression:

(s("control")).set_lframe(false, 12*60*60*1000).set_sframe(false, 30*60*1000)

When evaluating this expression at a timestamp t, the synthesizer scans through the 30 minute time window before t, and for each timestamp in there it scans through another 12 hour window before. Assuming roughly one data point per minute, constructing the desired synthesized signal is of complexity 12 × 60 × 30 × # timestamps. However, splitting the above into two expressions, we first generate a signal "helper" via

(s("control")).set_lframe(false, 12*60*60*1000)

and then we apply on the result the expression

(s("helper")).set_sframe(false, 30*60*1000)

In this case we end up with complexity 12 × 60 × # timestamps + 30 × # timestamps which is considerably smaller than before.

Basics of Rhai

Working with signals

In this section, we briefly show the potential of Rhai and what you can create with it. Rhai supports many types, including collections, but it does not natively have a signal type. When working with signals, one approach therefore involves extracting the primitive values from signals and converting the results back into a signal format. This process uses the literal

t: i64 – the current timestamp

together with the function s to refer to some signal and some other function defined above to extract values from the signal. For example, the sum of two signals "a" and "b" could be written without use of the overloaded + operator:

s("a").at(t) + s("b").at(t)

The results of such an expression are automatically translated into a new signal. In order to construct a signal from the results, the expression must not terminate with a ;. Of course, the additional signal functions can be used like any other function in Rhai, and may thus be combined with the rest of Rhai's tools, when applicable.
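
For example, the following script ends with an expression rather than a statement; at every timestamp t, the value of this final expression becomes the value of the synthesized signal (the signal id "a" is a placeholder):

let v = s("a").at(t);   // a statement, terminated with ';'
v * v                   // the final expression without ';' defines the new signal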

Rhai is a scripting language

As such, you can script. A typical snippet would look like the following

let array = [[s("one").at(t), s("two").at(t)], [s("three").at(t), s("four").at(t)], [s("five").at(t), s("six").at(t)]];
let pair_avg = array.map(|sub| sub.avg());
pair_avg.filter(|x| !x.is_nan()).map(|cleaned| cleaned.abs().exp()).sum().ln()

Here, we used array functions (avg(), sum()) that are defined in the sections below. The last line defines the result of the expression.

Rhai has the usual statements

In the same spirit as many other languages, you can control flow using the statements if, for, do, while, and more (see the Language Reference of the Rhai Book). Here's an example demonstrating their usage

let val = s("one").at(t);
if (val >= 10.0) && (val <= 42.0) {
  1.0 - (val - 42.0)/(10.0-60.0)
} else if (val <= 60.0) && (val > 42.0) {
  1.0 - (val - 42.0)/(60.0-42.0)
} else {
  0.0/0.0
}

In this code snippet, we determine the value to return based on the current value of the "one" signal. Note that 0.0/0.0 will evaluate to NAN.

Rhai allows you to create your own functions

Like most other languages, you can create your own functions and use them whenever needed.

fn add(x, y) {
    x + y
}

fn sub(x, y,) {     // trailing comma in parameters list is OK
    x - y
}
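
Such a function can then be used like any built-in function, e.g. with placeholder signal ids "a" and "b":

add(s("a").at(t), s("b").at(t))   // same result as s("a").at(t) + s("b").at(t)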

Rhai allows you to do many more things than the ones described here. A careful reading of the Language Reference of the Rhai Book brings numerous benefits when using this language.

Additional Array Functions

The following functions for arrays were additionally defined:

  • some(items: array<bool>): bool – returns true if at least one item is true
  • all(items: array<bool>): bool – returns true if all items are true
  • sum(items: array<i64/f64>): f64 – returns the sum of all items and 0.0 on an empty array
  • product(items: array<i64/f64>): f64 – returns the product of all items and 1.0 on an empty array
  • max(items: array<i64/f64>): f64 – returns the largest array item; any NAN value is ignored; returns NAN on an empty array
  • min(items: array<i64/f64>): f64 – returns the smallest array item; any NAN value is ignored; returns NAN on an empty array
  • avg(items: array<i64/f64>): f64 – returns the arithmetic average of all array items; any NAN value is ignored; returns NAN on an empty array
  • median(items: array<i64/f64>): f64 – returns the median of all array items; any NAN value is ignored; returns NAN on an empty array
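
For example, a robust local estimate of a noisy float signal "a" (a placeholder id) can be obtained by combining slice with median:

median(slice(s("a"), t, -5000, 0))   // median over the last 5 seconds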

Constants

The following constants are defined in Rhai:

  • PI(): f64 – Archimedes' constant: 3.1415...
  • E(): f64 – Euler's number: 2.718...

Operators / Functions

Signals can be used in all normal operators and functions that are designed for primitive values. You can even mix signals and primitive values in the same invocation. If at least one parameter is a signal, the result will also be a signal.
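
For example, the following expression mixes signals and primitive values; since signals are involved, the result is again a signal, here of type signal<bool> (the signal ids are placeholders):

(s("a") + 1.5) * 2.0 > s("b")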

Operators

The following operators were defined:

  • Arithmetic:
    • +(i64/f64): i64/f64
    • -(i64/f64): i64/f64
    • +(i64/f64, i64/f64): i64/f64
    • -(i64/f64, i64/f64): i64/f64
    • *(i64/f64, i64/f64): i64/f64
    • /(i64/f64, i64/f64): i64/f64
    • %(i64/f64, i64/f64): i64/f64
    • **(i64/f64, i64/f64): i64/f64
  • Bitwise:
    • &(i64, i64): i64
    • |(i64, i64): i64
    • ^(i64, i64): i64
    • <<(i64, i64): i64
    • >>(i64, i64): i64
  • Logical:
    • !(bool): bool
    • &(bool, bool): bool
    • |(bool, bool): bool
    • ^(bool, bool): bool
  • String:
    • +(string, string): string
  • Comparison (returns false on different argument types):
    • ==(bool/i64/f64/string, bool/i64/f64/string): bool
    • !=(bool/i64/f64/string, bool/i64/f64/string): bool
    • <(i64/f64, i64/f64): bool
    • <=(i64/f64, i64/f64): bool
    • >(i64/f64, i64/f64): bool
    • >=(i64/f64, i64/f64): bool

Binary arithmetic and comparison operators can handle mixed i64 and f64 arguments properly; the other parameter is then implicitly converted beforehand via to_float. Binary arithmetic operators return f64 if at least one f64 argument is involved.
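
For instance:

1 + 2.5   // the i64 argument is converted to f64, yields 3.5 (f64)
7 / 2     // both arguments are i64, yields 3 (i64)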

Functions

The following functions were defined:

  • Arithmetic:
    • abs(i64/f64): i64/f64
    • sign(i64/f64): i64
    • sqrt(f64): f64
    • exp(f64): f64
    • ln(f64): f64
    • log(f64): f64
    • log(f64, f64): f64
  • Trigonometry:
    • sin(f64): f64
    • cos(f64): f64
    • tan(f64): f64
    • sinh(f64): f64
    • cosh(f64): f64
    • tanh(f64): f64
    • asin(f64): f64
    • acos(f64): f64
    • atan(f64): f64
    • asinh(f64): f64
    • acosh(f64): f64
    • atanh(f64): f64
    • hypot(f64, f64): f64
    • atan(f64, f64): f64
  • Rounding:
    • floor(f64): f64
    • ceiling(f64): f64
    • round(f64): f64
    • int(f64): f64
    • fraction(f64): f64
  • String:
    • len(string): i64
    • trim(string): string – with whitespace characters as defined in UTF-8
    • to_upper(string): string
    • to_lower(string): string
    • sub_string(value: string, start: i64, end: i64): string
  • Conversion:
    • to_int(bool): i64 – returns 1/0
    • to_float(bool): f64 – returns 1.0/0.0
    • to_string(bool): string – returns "true"/"false"
    • to_float(i64): f64
    • to_string(i64): string
    • to_int(f64): i64 – returns 0 on NAN; values beyond INTEGER_MAX/INTEGER_MIN are capped
    • to_string(f64): string
    • to_degrees(f64): f64
    • to_radians(f64): f64
    • parse_int(string): i64 – throws error if not parsable
    • parse_float(string): f64 – throws error if not parsable
  • Testing:
    • is_zero(i64/f64): bool
    • is_odd(i64): bool
    • is_even(i64): bool
    • is_nan(f64): bool
    • is_finite(f64): bool
    • is_infinite(f64): bool
    • is_empty(string): bool
  • Comparison (returns other parameter on NAN):
    • max(i64/f64, i64/f64): i64/f64
    • min(i64/f64, i64/f64): i64/f64

Comparison functions can handle mixed i64 and f64 arguments properly; the other parameter is then implicitly converted beforehand via to_float. The result is f64 if at least one f64 argument is involved.

The Boolean conversion and comparison functions were added and are not part of the official Rhai.

Appendix 2: Toolbox

aivis engine v2 toolbox is not an official part of aivis engine v2 but an associated side project. It mainly provides tools to turn output artifacts of aivis engine v2 into technical, single-file HTML reports for data scientists. Its API and behaviour are experimental and subject to change. Users should already be familiar with the concepts of aivis engine v2.

Caveats:

  • Large input files or extensive settings might lead to poor UI responsiveness.
  • UI layouts are optimized for wide screens.

Setup

The aivis engine v2 toolbox does not need a licensing key. Its Python code is free to inspect or even adapt. The respective toolbox release belonging to an aivis engine v2 release {VERSION} is available as:

  • Python whl file: aivis_engine_v2_toolbox-{VERSION}-py3-none-any.whl
  • Docker Image: aivis-engine-v2-toolbox:{VERSION}

Create Engine Report

Each call to construct a toolbox HTML report for engine xy has the following structure:

from aivis_engine_v2_toolbox.api import build_xy_report

config = {
    "title": "My Use Case Title", 
    ...
    "outputFile": "/path/to/my-use-case-report.html"}
build_xy_report(config)

Additionally, the config needs to contain references to the respective engine's output files, e.g. "analysisReportFile": "/path/to/analysis-report.json". The full call to create a report for an engine can be found, for example, in the Python or Argo examples of the respective engine.

Expert Configuration

There are many optional expert configurations to customize your HTML report. Some examples:

  • The aivis engine v2 toolbox always assumes timestamps to be Unix timestamps and translates them into human-readable dates. This behaviour can be switched off via "advancedConfig": {"unixTime": False}, so that timestamps always remain long values.

  • By referring to a metadata file via "metadataFile": "/path/to/metadata.json", signals are not only described via their signal id but enriched with more information. The metadata json contains an array of signals with the keys id (must) as well as name, description, unitName, unitSymbol (all optional):

    {
      "signals": [{
        "id": "fa6c65bb-5cee-45fa-ab19-355ba94889e9",
        "name": "et 1",
        "description": "extruder temperature nr. 1",
        "unitName": "Kelvin",
        "unitSymbol": "K"
      }, {
        "id": "dc3477e5-a83c-4485-b7f4-7528d336d9c4",
        "name": "abc 2"
      },
      ...
    ]}
    
  • To every HTML report which contains a timeseries plot, additional signals can be added for display. However, the report does not automatically include all signals of the dataset, since a full dataset typically contains more data than should be put into a single-file HTML.

All custom configuration options can be seen in the api.py file in src/aivis_engine_v2_toolbox.