3.1 Inference

Online-Inference via API

Inference on the ML models is available via the container's /predict endpoint. Before it can be used, the ML models must be trained (see 2.1 Model Training for details).

The /predict endpoint uses the same API definition as the training data (see restdef.py).

Given a single transaction as JSON input, the endpoint runs two ML models to detect anomalies in the given data. Their predictions are then transformed into the result schema of the /predict endpoint.
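A client call might look like the following sketch. The host, port, and the shape of the transaction dictionary are assumptions here, since the actual input schema is defined in restdef.py; adjust both to your deployment.

```python
import json
from urllib import request


def predict(transaction: dict, base_url: str = "http://localhost:8000") -> dict:
    """POST a single transaction (as JSON) to the /predict endpoint.

    Returns the decoded Decision response as a dict. The base_url default
    is an assumption; use the address your container is reachable at.
    """
    req = request.Request(
        f"{base_url}/predict",
        data=json.dumps(transaction).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```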

from pydantic import BaseModel

# Response model of the /predict endpoint
class Decision(BaseModel):
    transactionid: str
    prediction:   float
    probability:  float
    confidence:   float
    prediction2:  float
    probability2: float
    confidence2:  float
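To illustrate how the two models map onto this schema, the following sketch parses a hypothetical response body with the standard library: the plain fields carry the first model's result and the "2"-suffixed fields carry the second model's. All values are illustrative, not real output.

```python
import json

# Hypothetical example response from /predict; field names follow the
# Decision model, values are made up for illustration.
raw = """
{
  "transactionid": "tx-0001",
  "prediction": 1.0,
  "probability": 0.97,
  "confidence": 0.92,
  "prediction2": 0.0,
  "probability2": 0.12,
  "confidence2": 0.88
}
"""

decision = json.loads(raw)

# First model's verdict vs. the second model's verdict for the same transaction.
print(decision["transactionid"], decision["prediction"], decision["prediction2"])
```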
