Easier Development with Image Sequences as Tracking Input

By using image sequences as input sources instead of live camera input, AR development gets a whole lot easier. VisionLib has supported this since day one. Here is how you can use it to ease your development.


Corona has struck private and business life, forcing many businesses to shut down temporarily and making (business) travel much more complicated. Not to mention returning from the “home/mobile office” to offices or production facilities to get a little closer back to normality.

So how do you develop an AR app or service involving object tracking when you work from home, the items of interest are in the office or far away, and you can’t easily go to where these objects are located?

But even without the constraints of a global pandemic, Augmented Reality development can be challenging at times. It is not always the big problems, but the many small chores in the workflow that slow down development over time.

 

[Image: Recording an image sequence of your tracking targets with a tablet.]

 

With model-based object tracking, development usually starts with setting up the tracking and testing it continuously against the physical item, to ensure good tracking results and, consequently, a good user experience.

In the early stages of a project, but also throughout, testing usually involves aiming a camera at the physical object to have it tracked. That sounds like a small thing, but it can quickly become very annoying when all you want is to test your code and check how your app logic interacts and performs in AR.

It is also very time-consuming, and the effect increases with larger objects, as you might have to walk over to them or re-deploy your current progress to a mobile device several times.

VisionLib makes this easier for developers: you can test on the mobile device, use USB camera input on your development machine in VisLab or inside the Unity editor, or, even more conveniently, use recorded data of your tracking targets during development instead of a live input.

Image sequences enable you to test the tracking at your desk without having the targets physically present or aiming the camera at them repeatedly. Thus, testing and development get tremendously easier.
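
On disk, such a recording is simply a folder of sequentially numbered frames, which the tracking configuration later references with a wildcard URI (see the excerpt further below). Here is a minimal sketch of such a layout; the folder and file names are illustrative, as the actual naming depends on how the recording was created:

    my_object_rec-1/
        image_0001.jpg
        image_0002.jpg
        image_0003.jpg
        ...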

To use sequences, you first have to record them. Once created, you can store them on your desktop and use them as an input source in your VisionLib tracking configuration. We’ve created a video tutorial on our YouTube channel that explains the workflow in detail.

When you use the example scene from VisionLib’s Unity SDK to create recordings on a mobile device, you can also save and replay ARKit or ARCore data and simulate the SLAM pose on your desktop, which again is incredibly handy and time-saving. Also see our documentation for more details on Recording & Replaying Image Sequences.
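
In the tracking configuration, this replay mode corresponds to a single flag in the tracker section, which you will also see in context in the excerpt below:

    "tracker": {
        // ... tracker definition
        // replays the recorded ARKit/ARCore (SLAM) pose on the desktop
        "simulateExternalSLAM": true
    }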

You can use the sequences in native as well as Unity-based VisionLib projects. Using VisLab, you can set up tracking configurations with image sequences and use the resulting configuration file in your Unity project.

For those familiar with VisionLib configuration files, here is an excerpt that declares two image sequences as input sources and uses the first one as input for the tracking. Note how the URIs are declared differently: in the second case, using the project_dir scheme, the recorded data is located inside the Unity project, while in the first case, it is stored somewhere else on the computer and referenced via an absolute file path.

 "tracker" {
  // ... tracker definition
  "simulateExternalSLAM":true
  },
  "input": {
    "useImageSource": "imageSequence01",
    "imageSources": 
    [
      {
          "name": "imageSequence01",
          "type": "imageSequence",
          "data": {
              "uri": "/Users/visionlib/Dev/unity/_sequences/my_object_rec-1/*.jpg"
          }
      },
      {
          "name": "imageSequence02",
          "type": "imageSequence",
          "data": {
              "uri": "project_dir/my_object_rec-2/*.jpg"
          }
      }
    ]        
  }  
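
To replay the second recording instead, you would only switch the selected source, a one-line change in the input section above:

    "useImageSource": "imageSequence02"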

Image sequences aren’t meant for final deployment. They are a time saver during development and perfect for quick prototyping during onboarding or the early stage of an AR project: you could also record sequences of your client’s objects, assess them, and make recommendations over a few iterations during the concept phase. This not only eases communication, but ultimately lowers the need for conceptual changes as the project moves on.

That’s it on image sequences. Tell us what you think, or share your experiences with image sequences by writing to hello@visionlib.com.