Mobius Vision SDK (2.2.0)



Mobius Labs provides next-generation, industry-leading Computer Vision technology for edge devices and on-premise solutions. We build technology that makes devices truly visually intelligent. With our on-premise SDK, anyone can process vast amounts of images directly on machines on their own premises. This enables independence, data privacy, security, smooth integration, and full control over the data flow.

Our current suite of SDK features offers large-scale image and video tagging and keywording, facial recognition, and search. For images, our SDK also provides aesthetic and quality scores and similarity search; for videos, it also provides automatic highlights and scene-change detection. With many new features currently in development, the Mobius Vision SDK is on its way to becoming a one-stop shop for the industry's state-of-the-art Visual AI solutions.

What makes the Mobius Vision SDK truly powerful is that it allows users to personalize it for their needs. Not only does the Mobius Vision SDK provide pre-trained AI models out of the box for a myriad of existing use cases, it also enables users to build their own custom AI models with ease, using their own data to fit any niche use case.

In the following sections, you will find how each of the different modules within the Mobius Vision SDK works. Please note that this documentation only discusses the modules and SDK features that are included in your license. Please contact us if you are interested in additional features offered by the Mobius SDK.

First, let us run through the software and hardware requirements and setup of the Mobius Vision SDK.


To install the Mobius On-Premise SDK, follow the steps explained below.

We provide our solution as a combination of a Python package (wheel) and a Dockerfile. Using the Dockerfile allows you to build a Docker image with everything you need to run the Mobius Vision SDK. To simplify things, we also provide a docker-compose file that takes care of building the image and running it with the correct environment variables.

Access to a zipped folder with all necessary files will be delivered to you in a shipping email.


The hardware and software requirements for the SDK differ depending on the type of server to be used (CPU or GPU).


For the CPU version of the SDK you need:

  • Intel Haswell CPU or newer (AVX2 support necessary)

For the GPU version of the SDK you need:

  • Nvidia GPU of one of the following generations: Kepler, Maxwell, Pascal, Volta, Turing

AMD GPUs are not supported.


In order to successfully install the Mobius Vision On-Premise SDK, the following software requirements have to be met:

  • GNU/Linux x86_64 with kernel version > 3.10
  • docker >= 1.12 for the CPU version, >= 19.03 for the GPU version
  • docker-compose >= 1.28.0 (optional, but recommended)

MacOS or Windows as a host system is not supported.

Additional Software for the GPU Version

To use a GPU, the following additional software requirements have to be met:

  • docker >= 19.03
  • Nvidia Drivers >= 418.81.07
  • nvidia-docker2 >= 2.6.0 (for nvidia-container-toolkit)

Docker Installation

These are the steps to install Docker on an Ubuntu-based system. Steps 2 and 3 are not strictly required, but we recommend them in order to avoid running the Docker container with sudo.

If you already have Docker and docker-compose installed, you can skip these steps.

  1. Install the Docker Community Edition (CE).

  2. Add your user to the docker group:

 sudo usermod -aG docker $USER

  3. Log out and log back in so that your group membership is re-evaluated.

  4. Install docker-compose:

sudo curl -L "https://github.com/docker/compose/releases/download/1.28.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Verify that the installed versions are equal or newer than the requirements listed above:

docker --version
docker-compose --version

Additional Steps for GPU Version

To use the GPU version of the Mobius Vision SDK you need to have nvidia-docker2 installed. You can install it by following Nvidia's official instructions or our instructions below.

If you already have nvidia-docker2 installed, you can skip this step.

  1. Add the nvidia-docker repository to apt, as explained in Nvidia's installation documentation.

  2. Install the nvidia-docker2 package and reload the Docker daemon configuration.

 sudo apt-get install nvidia-docker2
 sudo service docker restart

Verify that the installed versions are equal or newer than the requirements listed above:

nvidia-docker version

Customer Training Interface (CTI)

In case your license includes the Customer Training Interface (CTI), please unpack the corresponding zip file you received, change into that directory, and import the required Docker images using the following commands:

docker image load --input cti_frontend.tar
docker image load --input cti_backend.tar

Verify that the images were imported:

docker image ls

Include the external IP of your server in the ALLOWED_ORIGINS environment variable to allow connections to the user interface from outside of the server ('CORS policy'):

export ALLOWED_ORIGINS="http://<external_ip_of_server>:8080"

You can also permanently change this variable in the docker-compose file.
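As a sketch, the permanent setting might look like this in the docker-compose file (the service name cti_backend is an assumption; use whatever name your shipped docker-compose file gives the CTI backend service):

```yaml
services:
  cti_backend:
    environment:
      # External address from which the CTI frontend is reached ('CORS policy')
      - ALLOWED_ORIGINS=http://<external_ip_of_server>:8080
```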

Please note that the images are only imported here; the containers are not actually started. Do not start them yourself. They are started automatically after the SDK is started using the docker-compose up command (see next section).


Running the SDK

The SDK can be started using the following commands. The first start may take a few minutes, as the Docker image is automatically built from the provided Dockerfile.

cd mobius_sdk
docker-compose up -d

After a while, the HTTP REST API of the SDK will be usable at localhost:5000.

In case the Customer Training Interface (CTI) is included in your license, it is started automatically afterwards and is then available at http://<ip_of_your_server>:8080. The default username is user, and the default password is user as well. Additional users can be created after logging in with the username admin and the password admin.

You can verify that the docker container is running with:

docker container ps

Stopping the SDK

You can stop the SDK by executing the following command in the same directory:

docker-compose down

Configuring the Setup

Optionally, the following variables can be changed in the docker-compose file before it is executed to adapt the setup to your needs:

  • SDK_PORT: port to access the API (default: 5000)
  • NUM_WORKERS: number of workers for parallel processing, see note below (default: 20)
  • MOBIUS_TOKEN: token to verify the validity of a particular SDK according to the license agreement (default: already set to the one included in your license)
  • CUDA_VISIBLE_DEVICES: define which GPUs are used in case multiple GPUs are available (default: all)

NOTE: NUM_WORKERS should be carefully adjusted according to the features in the SDK (the more features are shipped in the SDK, the lower this value should be) and the available hardware (the more cores are available, the higher this value can be). We usually recommend a value between 5 and 50 for this environment variable.
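For illustration, these variables might appear in the docker-compose file roughly as follows (values are examples only; the service name mobius_sdk matches the one referenced later in this document):

```yaml
services:
  mobius_sdk:
    environment:
      - SDK_PORT=5000
      - NUM_WORKERS=20            # tune per shipped features and available cores
      - MOBIUS_TOKEN=<your_license_token>
      - CUDA_VISIBLE_DEVICES=0,1  # restrict processing to the first two GPUs
```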

The following environment variables are available for the Customer Training Interface (CTI):

  • ALLOWED_ORIGINS: set this to the external IP of the server to prevent misuse of the CTI backend or set it to "*" to disable this security mechanism (not recommended) (default: http://localhost:8080)
  • POSTGRES_PASSWORD and JWT_SECRET_KEY: random strings used for additional security (default: random passwords)

Usage of Docker Volumes

By modifying the docker-compose file, the volumes to be used for the user data can be changed, too:

  • mobius_vision_data: docker volume used to store user data (metadata, indexes etc.)
  • mobius_vision_redis: docker volume used to store redis data (database used for task status and scheduling)

You can also mount a local drive or folder to the container for faster predictions or uploads of images and videos (see the path parameter on those endpoints). In the volumes section of the mobius_sdk service in the docker-compose file, add <path_on_host>:<path_on_container>:<flags> where <path_on_host> is the full path to the directory to be mounted, and <path_on_container> is the point at which it will be mounted. <path_on_container> can either be a fully qualified path, in other words beginning with /, or it can be a relative path. If it is a relative path, it is interpreted as relative to a configurable base path which defaults to /external. It is recommended to keep this default to ensure there are no conflicts with Linux or Docker system files. <flags> can be any Docker volume mount flags, but ro (for read-only within the container) is strongly recommended.

For example, include /mnt/nfs/image_archive:/external/image_archive:ro in the docker-compose file, and then add a path parameter to requests as follows: "path": "image_archive/image0001.jpg".
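Putting it together, the volumes section of the mobius_sdk service might look like this (a sketch; the container-internal mount points for the two named volumes are assumptions, not documented values):

```yaml
services:
  mobius_sdk:
    volumes:
      - mobius_vision_data:/data    # user data (assumed mount point)
      - mobius_vision_redis:/redis  # redis data (assumed mount point)
      - /mnt/nfs/image_archive:/external/image_archive:ro  # read-only media archive
```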

Checking the SDK Status

The Mobius SDK does not have a dedicated endpoint to check the availability of a module. However, availability can easily be checked by passing a query image or video for prediction.

Image prediction

A simple example of calling image prediction on a query image image.jpg with default parameters:

 curl "http://localhost:5000/image/predict" \
 -X POST \
 -F "data=@./image.jpg" \
 -F params='{}'

If the SDK is running properly and the image file can be read in the preprocessing, the SDK returns a 200 response with the status success.
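The check above is easy to script. Below is a minimal Python sketch: the helper that inspects the response body is separated out so it can be reused, and the commented-out part shows how the request itself could be sent with the third-party requests package (the host localhost:5000 is the default from the setup section):

```python
import json

def sdk_is_healthy(response_body: str) -> bool:
    """Return True when a predict response body reports status 'success'."""
    try:
        return json.loads(response_body).get("status") == "success"
    except (json.JSONDecodeError, AttributeError):
        return False

# Sending the request needs the third-party 'requests' package:
#
#   import requests
#   with open("image.jpg", "rb") as f:
#       r = requests.post("http://localhost:5000/image/predict",
#                         files={"data": f}, data={"params": "{}"})
#   healthy = r.status_code == 200 and sdk_is_healthy(r.text)
```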

Video prediction

A simple example of calling video prediction on a query video video.mp4 with default parameters:

 curl "http://localhost:5000/video/predict" \
 -X POST \
 -F "data=@./video.mp4" \
 -F params='{}'

If the SDK is running properly and the video file can be read in the preprocessing, the SDK returns a 200 response with the status success.


Image predictions are a core functionality of the Mobius on-premise SDK. All modules for images that have been shipped with your SDK can be queried for prediction output with the predict endpoint.

Pre-trained Modules and Train Endpoints

Most modules are pre-trained and can be used out of the box; module-dependent parameters can be used to customize the modules to your use case. Some modules need to be trained first in order to be used (e.g., customized training).

Please refer to the corresponding module description section in the sidebar (or with the links in the parameter description) to learn more on how to implement workflows for the train endpoints.

Input Parameters

This endpoint comes with a range of parameters to customize its behaviour. The modules parameter is used to pass an array specifying which modules to predict with. The remaining parameters are grouped by the relevant module and submodule. You can find detailed descriptions of the parameters in the explanation section of each module.

The path and url parameters may be used to specify a data file on the local system or on a remote host, respectively, instead of including an image file in the request form data. Only one of a data file, the path parameter, and the url parameter may be specified in a single request.
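As an illustration, this mutual exclusivity can be enforced client-side before sending a request. The sketch below is our own helper (not part of the SDK), and it assumes path and url are passed inside the params JSON, as in the mounted-volume example elsewhere in this documentation:

```python
import json

def build_predict_fields(modules, path=None, url=None, upload_file=False):
    """Assemble the multipart form fields for /image/predict.

    Exactly one image source must be chosen: an uploaded file,
    a local `path` visible to the SDK container, or a remote `url`.
    """
    sources = [upload_file, path is not None, url is not None]
    if sum(sources) != 1:
        raise ValueError("specify exactly one of: file upload, path, url")
    params = {"modules": modules}
    if path is not None:
        params["path"] = path
    if url is not None:
        params["url"] = url
    return {"params": json.dumps(params)}

fields = build_predict_fields(["tags"], path="image_archive/image0001.jpg")
```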

Parallel Processing

To get maximum performance out of the SDK, run multiple requests at the same time. The difference between parallel and sequential processing can be dramatic: for example, it takes 17 seconds to process 1,000 images in parallel mode versus 144 seconds in sequential mode (times may differ on your instance and with your set of features).

Here is example code in Python that can be used to process images in parallel.

import requests, json
from concurrent.futures import ThreadPoolExecutor

images = ['./image.jpg', './image2.jpg', './image3.jpg']
host = 'http://localhost:5000/image/predict'
params = {}

def predict_on_image(path, params, host):
    # Upload the image together with the JSON-encoded parameters
    with open(path, 'rb') as image:
        r = requests.post(
            host,
            files={'data': image},
            data={'params': json.dumps(params)}
        )
        return r.json()

# Process all images with up to 20 parallel workers
with ThreadPoolExecutor(max_workers=20) as executor:
    def worker_func(path):
        return predict_on_image(path, params, host)

    results = list(zip(images, executor.map(worker_func, images)))

Predict on Image

post /image/predict


Endpoint for predictions on a query image with module selection and a range of optional parameters.

Request Body schema: multipart/form-data

  • params: object (image_predict_params)
  • data: string <binary> (image file)

Request samples

Content type: multipart/form-data

{
  "params": {
    "modules": ["tags", "search", "aesthetics", "face_recognition"],
    "tags": {
      "standard_concepts": {
        "confidence_threshold": 0.5,
        "top_n": 100,
        "categories_enabled": true
      },
      "custom_concepts": {
        "custom_concept_id_list": ["leather jacket"]
      }
    },
    "search": {
      "similarity": {
        "top_n": 5,
        "filter": []
      },
      "identities": {
        "top_n": 5
      }
    },
    "aesthetics": {
      "custom_styles": {
        "custom_style_id_list": ["still life"]
      }
    },
    "face_recognition": {
      "identities": {
        "group_id_list": []
      }
    }
  },
  "data": "..."
}
Response samples

Content type: application/json

{
  "tags": { ... },
  "aesthetics": { ... },
  "face_recognition": { ... },
  "search": { ... },
  "detection": { ... },
  "status": "success",
  "params": { ... }
}