The Spider: Using the SteamVR dev kit to create a glasses tracker.

August 30 | 17

In the lab, I have been working for some time with a SteamVR tracking dev kit we got. I started a small project to create trackable shutter glasses that we can use in our VR experiments. I have been trying to write about this for a while, and I finally found some time to do it!

In this post I'm going to describe how we created, in the lab, a trackable prototype frame that uses SteamVR and attaches to the shutter glasses for our projects here at the EAC. If you are interested in more details on how we did this or on the whole workflow, just write me an email and I will try to explain in more detail.

SteamVR Overview

The SteamVR tracking system is based on timing. The wireless connection uses a proprietary protocol, and the only difference between a controller and an HMD is that the HMD has a display; besides that, they work in exactly the same way.

The system uses two base stations to track objects. These base stations are called "lighthouses" and need to be synchronized either by wire or by optical flashes (if the lighthouses are in each other's field of view).

Each lighthouse contains two motors that spin at 60 Hz (one sweeping horizontally, one vertically) and emits IR signals modulated at 1.8 MHz, with timestamps generated at 48 MHz. The time difference between a lighthouse's sync flash and a laser hit on the tracked object's sensors yields an angle. These angles, plus some precise timing, produce a system of equations whose solution gives the position and rotation of the tracked object.

Each tracked object contains a set of optical receivers; each receiver uses a photodiode to convert the IR light from the base stations into a signal that the FPGA on the tracked object registers as a hit.

The FPGA then uses the timestamp ticks to calculate the angles between laser hits from the base stations and, combined with the known position of each sensor, builds the system of equations that solves for the position and rotation of the tracked object.
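
To make the timing-to-angle step concrete, here is a minimal sketch using the constants quoted above (60 Hz rotor, 48 MHz ticks); this is my own illustration, not actual tracker firmware:

```cpp
// Converting a lighthouse hit timestamp into a sweep angle.
// Illustrative only; constants are the ones quoted above.
#include <cstdint>
#include <iostream>

constexpr double kTickRateHz = 48e6;  // timestamp resolution (48 MHz)
constexpr double kRotorHz    = 60.0;  // sweep frequency of each motor

// Angle (in degrees) swept since the sync flash, given tick counts.
double hitAngleDeg(std::uint32_t syncTick, std::uint32_t hitTick) {
    const double seconds = (hitTick - syncTick) / kTickRateHz;  // elapsed time
    return seconds * kRotorHz * 360.0;  // one full revolution per 1/60 s
}

int main() {
    // e.g. a hit 400000 ticks (~8.33 ms) after the sync flash sits at ~180 deg.
    std::cout << hitAngleDeg(0, 400000) << " deg\n";
    return 0;
}
```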

Design

In order to design the Spider we had to take several factors into account: the tracked object had to be light, it should not occlude the view through the glasses it was going to be attached to, and the sensors had to be placed with the translation and rotation errors that can arise in mind.

Translation errors: these errors arise when the tracked object is moved around the tracked area. As the distance from the lighthouse increases, the tangential velocity of the sweep from the spinning motors in the base stations also increases, decreasing the time between sensor hits; the error then begins to dominate. To mitigate this type of error, the sensors should be placed as far apart as possible.
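
A quick back-of-the-envelope sketch of this effect, with illustrative numbers of my own choosing (8 cm baseline, distances of 1 to 5 m):

```cpp
// Time between sweep hits on two sensors a fixed baseline apart,
// as a function of distance to the lighthouse (illustrative numbers).
#include <cmath>
#include <iostream>

int main() {
    constexpr double kPi = 3.14159265358979323846;
    const double baseline = 0.08;             // 8 cm between two sensors
    const double omega = 2.0 * kPi * 60.0;    // sweep rate in rad/s (60 Hz rotor)
    for (double dist : {1.0, 3.0, 5.0}) {     // distance to lighthouse in meters
        const double angle = baseline / dist; // small-angle separation in radians
        const double dt_us = angle / omega * 1e6;
        std::cout << dist << " m -> " << dt_us << " us between hits\n";
    }
    // Output: roughly 212 us at 1 m, shrinking to ~42 us at 5 m, so the
    // same timing jitter costs more positional accuracy farther away.
    return 0;
}
```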

Rotational errors: these errors arise when the user rotates the tracked object. Rotation of sensors orthogonal to a plane yields significant displacement, while rotation within the plane yields less; to avoid this type of error, sensors should be placed out of plane.
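
And a similar sketch for the rotational case, again with illustrative numbers: a sensor sitting on the rotation axis barely moves under a small rotation, while one offset out of plane sweeps a measurable arc.

```cpp
// Displacement of a sensor under a small rotation about the x-axis
// (illustrative geometry, not taken from the actual prototype).
#include <cmath>
#include <iostream>

// Magnitude of displacement of point (x, y, z) after rotating 'deg' about the x-axis.
double displacement(double x, double y, double z, double deg) {
    constexpr double kPi = 3.14159265358979323846;
    const double r = std::hypot(y, z);     // distance from the rotation axis
    const double phi = deg * kPi / 180.0;
    return 2.0 * r * std::sin(phi / 2.0);  // chord length of the swept arc
}

int main() {
    const double deg = 5.0;  // a small head rotation
    std::cout << "on-axis sensor: " << displacement(0.07, 0.0, 0.0, deg) << " m\n";  // ~0
    std::cout << "offset sensor:  " << displacement(0.0, 0.0, 0.07, deg) << " m\n";  // ~6 mm
    return 0;
}
```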

In order to achieve these requirements, we decided to 3D print a structure that "hooks" onto a set of shutter glasses while respecting the limitations on sensor placement mentioned above. After three iterations, we came up with a final prototype that satisfies all of them.

Simulation

After designing the frame with the specific position of each sensor, we ran the resulting model through a simulator provided with the SteamVR dev kit, which assesses how well a proposed model will track.

This simulator offers two types of views: a 3D view looking out from the lighthouse and a 2D unwrapped view.

Each view shows the translation error, the rotation error, the initial pose, and the number of sensors visible from that viewpoint. Each view is also colored from blue (good tracking) to red (tracking not possible). In our case, we only need to track the front of the glasses, since that is the side facing the projection screen the user looks at.

As one can see in the 3D figures, the front of our tracked object shows good results for both rotational and translation errors.

Physical Prototype Results

After 3D printing the different parts and positioning and calibrating the sensors and the IMU (gyroscope) of the tracked object, we ran a few tests, and so far it works promisingly.

Finally, a small video that shows the Spider in action. I can definitely say that I look like a cyborg 🙂

ARMeet: Augmented Reality for Meeting reports

March 22 | 17

TL;DR: I created an app that makes project reports easier to understand in meetings by embedding data visualizations in the report itself through Augmented Reality; check the video.

Over the past two months I have been working on a proof of concept that actually performed surprisingly well on my phone. I called this project ARMeet, and basically it's an attempt to ease the reading and understanding of the reports handed out in a meeting with the help of Augmented Reality.

For this proof of concept I wanted to visualize an interesting (to me) dataset that I found on Kaggle about video game sales with ratings. I wanted to combine the ratings with, say, the size of each visualized game, or experiment with other factors like positioning or even rotation; but after some failed experiments I decided not to use the ratings, since they were complete for only ~40% of the games in the dataset (around 6,900 out of ~16,700 games had complete ratings).

So I stuck with games represented as bars, organized by platform, with each platform belonging to a console; I defined six major console groups: Nintendo, Sony, Microsoft, Sega, PC, and Other. The Other group covers all the smaller platforms that don't have renowned or still-active vendors, such as the Atari 2600, WonderSwan, and TurboGrafx-16.

Each game can consist of several stacked bars, where each color represents a region: blue for the US, yellow for the EU, red for Japan, and white for other countries, along with a purple cylinder enclosing the stacked boxes that represents the game's total sales. Games are organized by year and belong to platforms, which in turn belong to consoles. To visualize a platform, the user just points the application at the image target at the end of each chapter to see the sales.
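
As a rough sketch of how this hierarchy can be laid out (my own reconstruction; the type and field names are hypothetical, not from the ARMeet source):

```cpp
// Illustrative data layout for the console -> platform -> year -> game hierarchy;
// names are hypothetical, not taken from the actual ARMeet code.
#include <string>
#include <vector>

struct GameSales {
    std::string title;
    double naSales;     // blue bar:   North America
    double euSales;     // yellow bar: Europe
    double jpSales;     // red bar:    Japan
    double otherSales;  // white bar:  other countries
    // The purple cylinder's height is just the sum of the stacked bars.
    double totalSales() const { return naSales + euSales + jpSales + otherSales; }
};

struct YearGroup { int year; std::vector<GameSales> games; };
struct Platform  { std::string name; std::vector<YearGroup> years; };      // e.g. "SNES"
struct Console   { std::string vendor; std::vector<Platform> platforms; }; // e.g. "Nintendo"
```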

Without further ado, here's a video of how ARMeet works:

If you want to read the report, you can get it here, and if you are interested in the source code of the project, feel free to check it out here as well 🙂

Getting a timestamp from livestream video in OpenCV, quick and dirty.

September 14 | 16

So I have been testing some SLAM methods, specifically ORB-SLAM2, and in order to run it on a webcam one needs to pass, along with each OpenCV frame, a timestamp for that frame. In theory this should be dead easy: just call the Get method on the cv::VideoCapture class with the CV_CAP_PROP_POS_MSEC property, according to the OpenCV documentation. But it turns out this is only half implemented, so for live streams it just doesn't work. According to this, there is a way to patch the OpenCV library to implement it, but I just needed a timestamp really quickly to feed my SLAM method. So what I did was this:
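
Here is a minimal sketch of the trick, assuming the std::chrono timing pattern that the ORB-SLAM2 examples use (the exact structure of my snippet may differ):

```cpp
// Minimal sketch: timestamping webcam frames relative to program start,
// standing in for CV_CAP_PROP_POS_MSEC, which is unimplemented for live streams.
#include <chrono>
#include <iostream>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);  // open the default webcam
    if (!cap.isOpened()) return -1;

    const auto t0 = std::chrono::steady_clock::now();  // reference point

    cv::Mat frame;
    while (cap.read(frame)) {
        // Elapsed seconds since t0, used as the frame timestamp.
        const double tframe =
            std::chrono::duration_cast<std::chrono::duration<double>>(
                std::chrono::steady_clock::now() - t0).count();

        // Feed 'frame' and 'tframe' to the SLAM system here, e.g.:
        // SLAM.TrackMonocular(frame, tframe);
        std::cout << "timestamp: " << tframe << " s\n";
    }
    return 0;
}
```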

I took this piece of code from the same ORB-SLAM2 repo and adapted it to time the querying of each frame from the webcam. It's a quick and dirty trick, but it does the job 🙂

Procedural city visualization and movable agent coverage.

April 18 | 16

[Figure: AgentCoverageNiceViz]

For the information visualization class, I had an old project that I wanted to revive, called the "ambulances" program: a project from my college Data Structures and Algorithms class where we had to come up with an algorithm to visualize ambulance coverage in a city.

The whole idea of the class project was to visualize information, and the whole idea of the ambulances project was: given a set of N ambulances, decide where to place them on a city graph so as to minimize the response time of each movable agent (ambulance).
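
As a rough illustration of the placement idea, here is a k-medoids-style heuristic sketch of my own (not necessarily the algorithm we used back then; the graph representation and names are assumptions): assign every node to its nearest agent via Dijkstra, then move each agent to the node that minimizes its cluster's total response time, and repeat.

```cpp
// Sketch of an iterative placement heuristic for N agents on a city graph.
#include <limits>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

using Graph = vector<vector<pair<int, double>>>;  // node -> (neighbor, travel time)

// Single-source shortest paths (Dijkstra): travel time from 'src' to every node.
vector<double> dijkstra(const Graph& g, int src) {
    vector<double> d(g.size(), numeric_limits<double>::infinity());
    priority_queue<pair<double, int>, vector<pair<double, int>>, greater<>> pq;
    d[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [du, u] = pq.top();
        pq.pop();
        if (du > d[u]) continue;
        for (auto [v, w] : g[u])
            if (du + w < d[v]) { d[v] = du + w; pq.push({d[v], v}); }
    }
    return d;
}

// Move 'agents' (node indices) to reduce average response time: cluster nodes
// by nearest agent, then re-center each agent within its own cluster; repeat.
void improvePlacement(const Graph& g, vector<int>& agents, int iters = 10) {
    const int n = (int)g.size(), k = (int)agents.size();
    for (int it = 0; it < iters; ++it) {
        vector<vector<double>> dist;            // agent -> distances to all nodes
        for (int a : agents) dist.push_back(dijkstra(g, a));

        vector<vector<int>> cluster(k);         // nodes grouped by nearest agent
        for (int v = 0; v < n; ++v) {
            int best = 0;
            for (int a = 1; a < k; ++a)
                if (dist[a][v] < dist[best][v]) best = a;
            cluster[best].push_back(v);
        }

        for (int a = 0; a < k; ++a) {           // re-center each agent
            double bestCost = numeric_limits<double>::infinity();
            for (int cand : cluster[a]) {
                auto d = dijkstra(g, cand);
                double cost = 0;
                for (int v : cluster[a]) cost += d[v];
                if (cost < bestCost) { bestCost = cost; agents[a] = cand; }
            }
        }
    }
}
```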

Unfortunately the data that I had didn't have spatial coordinates, so I had to generate the data on my own. I used a modified version of CityGen to dump the data for the generated cities, and then I created an app in Unreal Engine to visualize the generated data and my algorithm.

The idea of this project, besides visualizing information, was also to push myself to learn some Unreal Engine and dust off my C++ skills :-).

In the end the project was finished, and I could see that the algorithm we created worked reasonably well for circular cities but not as well for elongated ones.

If you are interested in how this project works, feel free to fork it from here. Also, if you are interested in what specifically I tried and which experiments I ran, feel free to read the report I wrote (full of images); otherwise, just look at some of the images below.

[Figure: AgentCoverageGreenToRed]

Coverage per agent in a response-time visualization. For this agent arrangement, and supposing you live in this city, you might want to avoid the places with red nodes if you get sick often, as ambulances will take the longest to reach them hehe ;)

[Figure: CityBaseViz]

A basic procedural city loaded into Unreal.

[Figure: ElongatedCityProblems]

The algorithm failing on elongated cities; notice how two agents are close to each other (not ideal).

[Figure: MainPage]

17 agents with their respective clusters being covered.

[Figure: NodeCoverage3Agents]

Three agents visualized with their respective clusters. You can see that the brown cluster is bigger, which is not a good sign that the algorithm is working as expected.

[Figure: SpeedVisualization]

Finally, some segment speed visualization: the greener, the faster; the yellower, the slower the speed in each segment.

Where’s my daughter?! Our Entry for the Global Game Jam 2016

February 3 | 16

For those of you who don't know, the Global Game Jam (http://globalgamejam.org/) is a hackathon where the purpose is to develop a game based on a given topic.

This year, my Colombian friends from Indie Level and I came up with the idea for a puzzle game where the main goal is to sneak into a castle, get to the princess's room, hypnotize her, and take her back to the starting point. We were inspired by Monument Valley when creating this game.

Without further ado, here's a video of how the game ended up looking after 48 hours. This was made by two artists and two programmers.

We plan to release the game for mobile devices with 10 levels initially, but there's no release date yet. If you like it, let me know 🙂
