tempo.mlserver module

class tempo.mlserver.InferenceRuntime(settings: mlserver.settings.ModelSettings)

Bases: mlserver.model.MLModel

async load() → bool
async predict(request: mlserver.types.dataplane.InferenceRequest) → mlserver.types.dataplane.InferenceResponse
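As context for the class above: since InferenceRuntime subclasses mlserver.model.MLModel, a Tempo model can be served by pointing MLServer at this runtime through a model-settings.json file. A minimal sketch is shown below; the model name and artifact URI are placeholder values, not part of this API:

```json
{
    "name": "my-tempo-model",
    "implementation": "tempo.mlserver.InferenceRuntime",
    "parameters": {
        "uri": "./model"
    }
}
```

On startup (e.g. running mlserver start in the directory containing this file), MLServer reads these settings, instantiates InferenceRuntime, awaits load() until it returns True, and then routes incoming V2 dataplane InferenceRequest payloads to predict().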