Creating a tool for changing the fork seals on my R1150GS Adventure

November 9 | 17

So I’m experimenting here: this short post, even though it has to do with programming, is more about how to make a tool for doing some work on my bike.

Anyway, my fork seals started to leak on my beloved R1150GS Adventure and, as I have to do everything here on my own (there is no BMW motorcycle dealership here in AR), I ordered the fork seals and the dust seals from Tom Cutter at Rubber Chicken Garage (great guy, and he sells OEM BMW parts). But I was still missing a tool for pushing the seals into the forks. Some people mention using PVC pipes, others that a 34mm socket would work, but I wanted to do it precisely (so no PVC piping) and I didn’t want to spend ~10usd on a socket just for that.

So I started to think about a device that I could attach a socket to and that would push evenly on the hard circular surface of the seals.

I ended up designing a mix between a nut and a flat cylinder that touches only the hard surface of the seals in order to push them into the forks.

It was pretty straightforward to do this in OpenSCAD, and the final result:

And of course the code to produce this is pretty simple:
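A minimal OpenSCAD sketch of such a tool (every dimension below is an illustrative placeholder, not my actual measurements — measure your own seals and socket before printing anything):

```openscad
// Fork seal driver: a flat ring that presses only on the hard
// outer rim of the seal, with a hex recess on top so a socket
// (plus an extension bar) can drive it.
// NOTE: all dimensions are placeholders -- measure your own parts!

seal_od   = 54;    // seal outer diameter (mm) -- placeholder
rim_width = 4;     // width of the hard rim being pushed (mm)
ring_h    = 10;    // height of the pushing ring (mm)
hex_af    = 34;    // across-flats size of the socket recess (mm)
hex_depth = 6;     // depth of the hex recess (mm)
$fn = 128;         // smooth circles

difference() {
    // solid body: pushing ring plus a cap that holds the recess
    cylinder(d = seal_od, h = ring_h + hex_depth);

    // hollow out the center so only the rim contacts the seal
    translate([0, 0, -1])
        cylinder(d = seal_od - 2 * rim_width, h = ring_h + 1);

    // hexagonal recess on top ($fn = 6 turns a cylinder into a hex;
    // d is the circumscribed diameter, hence the cos(30) correction)
    translate([0, 0, ring_h])
        cylinder(d = hex_af / cos(30), h = hex_depth + 1, $fn = 6);
}
```

Render with F6 and export the STL for printing.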

Getting SteamVR tracking data in Unity without a HMD

November 8 | 17

TL;DR: I managed to get SteamVR tracking data (position and rotation) in Unity from a VIVE Tracker / SteamVR controller without the Head Mounted Display (HMD) being connected. It runs on my 2012 MacBook Pro and I’m pretty happy with the results.

Long version :

For the VR project I’m working on, I only need the pose of the tracked objects from SteamVR. I’m not using a Head Mounted Display (HMD) because 1. I don’t need it and 2. it wouldn’t run on my 2012 MacBook Pro.

I followed the tutorials proposed here and here (which are basically the same), but unfortunately I couldn’t manage to make them work with Unity. Even though I got to the state where the SteamVR status icon said “NOT READY” (which according to them is fine for their tutorials), it wasn’t enough for Unity to be able to initialize the trackers and get data.

Turns out you need to tell SteamVR in the config files not only to set “requireHmd” to false, but also to force SteamVR to load a null (headless) driver, and to let SteamVR load more than one driver.

In order to do this, and to be 100% sure which config files are being loaded and from where, the best way is to check the log files, specifically “vrserver.txt”.

Finding the log files for SteamVR.

In order to find where SteamVR stores the config files, just start SteamVR, click on Settings -> Developer and then click on “Set log directory”. With this you now know where SteamVR stores the vrserver.txt log.

The vrserver.txt log is really important because it tells you where each .vrsettings file is loaded from! To my surprise — I thought I was only loading one custom file I made — it turns out that’s not the case: there were 5 files in different places being loaded!

Diving into vrserver.txt in SteamVR

Now that we know where the vrserver.txt log file is, we can open it. Fear not! It contains a lot of stuff, but we are only after a few specific lines: the ones that say from where SteamVR loads each *.vrsettings file it uses to set itself up.

Now what we basically need to do is open each of those files and check where the “requireHmd”, “activateMultipleDrivers” and “forcedDriver” variables are set.

Wherever you find them, set their values to:
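From the variable names above, the relevant section of a *.vrsettings file ends up looking roughly like this (a sketch — the surrounding structure of your files may differ):

```json
{
   "steamvr" : {
      "requireHmd" : false,
      "forcedDriver" : "null",
      "activateMultipleDrivers" : true
   }
}
```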

If you cannot find them in any place, just add them.

So what we are doing here is forcing SteamVR to load the null (headless) driver.

The null driver looks like this:
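That is, the default.vrsettings that ships with the null driver (inside Steam’s drivers/null folder), which looks roughly like this — values from memory, and they may differ between SteamVR versions:

```json
{
   "driver_null" : {
      "enable" : false,
      "serialNumber" : "Null Serial Number",
      "modelNumber" : "Null Model Number",
      "windowWidth" : 1920,
      "windowHeight" : 1080,
      "renderWidth" : 1512,
      "renderHeight" : 1680,
      "secondsFromVsyncToPhotons" : 0.01111111,
      "displayFrequency" : 90.0
   }
}
```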

You also need to set the “enable” var in the null driver to true to make it work.

Here you can set a bunch of settings depending on how you want it to behave. “driver_null” is a non-physical HMD; you can add any features you want to it.

Now close SteamVR and open it again. Instead of showing the “Not Ready” label when it starts, you should see this:

In case you don’t see this, check the vrserver.txt log file again and read it; it will tell you if you made a mistake in the JSON files (*.vrsettings) you just modified.

Getting tracking data in Unity!

After you have done this you should be golden to try it out in Unity3D: just create a simple project, drag and drop the SteamVR camera rig prefab and set the target eye to “None”.

Notice how the log says the loaded driver is “connected to null: Null Serial Number”; that means we successfully connected to the headless driver.

It worked for me on OSX, but the same workflow should also work for you on Windows / Linux. If you have issues, just drop me a line; I could take a look at your settings and maybe help out 😉

ProDrawCall Optimizer vs ProDrawCall Optimizer Free – Differences

September 1 | 17

I have been asked several times about the differences between ProDrawCall Optimizer and ProDrawCall Optimizer Light, besides the price of course :-). In this post I will try my best to explain what the Light version contains, what the Pro version offers, the differences between them and how similar this tool is to ProMaterial Combiner.

ProDrawCall Optimizer is a tool that automatically gathers all the game objects in a Unity scene and groups them by type of shader. Then it processes each GameObject’s mesh and remaps the UV coordinates of each vertex to a generated atlas of all your textures, pointing each object at its specific part of the atlas.
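To give a feel for what that UV remapping means (this is the general idea behind texture atlasing, not the tool’s actual code): if a texture ends up occupying a rectangle of the atlas starting at (u0, v0) and spanning (w, h) in normalized atlas coordinates, every original vertex UV (u, v) of that texture gets rewritten as:

```
u' = u0 + u * w
v' = v0 + v * h
```

So a vertex that pointed at the center (0.5, 0.5) of its original texture now points at the center of that texture’s rectangle inside the atlas, and every object can sample from the single combined texture.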

This process results in fewer draw commands being issued to the video card, as all your game objects share one material that contains an atlas of all the textures; hence reducing render time and increasing performance.

If you are interested in seeing how draw calls build up a single frame, check out this video. It’s not Unity per se, but the behavior is exactly the same.

Anyways, without further ado, the main differences between the light version and the paid version are as follows:

ProDrawCall

  • Supports any type of shader (even the Standard shader and custom shaders!).
  • Supports skinned mesh renderers.
  • Code is provided in case you need to tweak the app.
  • Can also combine multiple meshes, besides atlasing.
  • Supports specific UV channels.
  • Can preserve hierarchies when generating atlases (your generated game objects will be parented like your original ones).
  • Lets you reuse textures in your atlas to save space.
  • Advanced game object search by UV correctness or by UV values (lets you search all your objects for the ones that fulfill certain UV values).
  • Can search objects by tag or by layer.
  • Can generate power-of-2 atlas textures.
  • If atlases are too big, it splits the generated textures into several atlases.
  • Can combine meshes selectively with the atlased objects.
  • Supports combining multiple materials per game object (if the shaders are the same).

ProDrawCall Light

  • Free for use in any of your projects (also commercial).
  • Doesn’t come with source code.
  • Doesn’t support skinned mesh renderers.
  • Only supports Legacy Shaders/Diffuse and Legacy Shaders/Bumped Diffuse.

Difference with ProMaterial Combiner

I have also been asked: what’s the difference between ProDrawCall and ProMaterial? When should I use ProMaterial and when ProDrawCall?

The reasoning is really simple: ProDrawCall Optimizer does everything that ProMaterial Combiner does. ProMaterial Combiner only combines materials within a single game object, while ProDrawCall Optimizer combines multiple materials per game object and also groups and combines materials across game objects that share the same type of shader. Hope this clears up some misconceptions/questions you might have :-).

If you are still interested in how to manually reduce draw calls, check out this post; it explains in more detail how it’s done.

On a final note: The support for the free version is exactly the same as for the pro version, so if you find any issue in the free version, feel free to drop me an email and I will happily reply and help you out.

The Spider: Using a SteamVR dev kit to create a glasses tracker.

August 30 | 17

In the lab, I have been working for some time with a SteamVR tracking dev kit we got. I started a small project to create trackable shutter glasses that we can use in our VR experiments. I have been trying to write about this for a while, but I finally found some time to do it!

In this post I’m going to write about how we created, in the lab, a trackable prototype frame that uses SteamVR and attaches to the shutter glasses we use for our projects here at the EAC. If you are interested in more details on the whole workflow, just write me an email and I will try to explain it in more detail.

Steam VR Overview

The SteamVR tracking system is based on timing. The wireless connection uses a proprietary protocol, and the only difference between a controller and an HMD is that the HMD has a display; besides that, they work exactly the same way.

The system uses two base stations to track objects. These base stations are called “lighthouses” and need to be synchronized either by wire or by optical flashes (if the lighthouses are in each other’s field of view).

These lighthouses contain two motors that spin at 60Hz (one horizontal and one vertical). They produce IR signals modulated at 1.8MHz, with time stamps generated at 48MHz. The time difference between the sync flash from the lighthouse and a laser hit on the tracked object’s sensors yields an angle. These angles, plus some precise timing, produce a system of equations that provides the position and rotation of the tracked object.
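As a back-of-the-envelope illustration of that timing (my own sketch, not Valve’s exact formulation): a rotor spinning at 60Hz sweeps a full revolution in 1/60 s, so a sensor hit Δt seconds after the sync flash corresponds to a sweep angle of

```
θ = 360° × 60 Hz × Δt,   where Δt = ticks / 48 MHz
e.g. 200 000 ticks  →  Δt = 1/240 s  →  θ = 90°
```

Two such angles per lighthouse (one from the horizontal sweep, one from the vertical) define a ray from that base station to the sensor.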

Each tracked object contains a set of optical receivers; these detect the reference signals from the base stations and, with a photodiode, convert the IR light into a signal that the FPGA on the tracked object understands as a hit.

The FPGA then uses the tick counts to calculate the angles between laser hits from the base stations which, together with the known position of each sensor, generate the system of equations that solves for the position and rotation of the tracked object.

Design

In order to design the spider we had to take several factors into account: the tracked object had to be light, it shouldn’t occlude the view through the glasses it was going to be attached to, and the sensors had to be placed with the translation and rotation errors that can arise in mind.

Translation errors: these arise when the tracked object is moved around the tracked area. As the distance from the lighthouse increases, the tangential velocity of the sweeping beams from the base stations also increases, decreasing the time between sensor hits, and the timing error begins to dominate. To minimize this type of error, the sensors should be as far apart as possible.

Rotational errors: these arise when the user rotates the tracked object. Rotation of sensors orthogonal to a plane yields significant displacement, while rotation within a plane yields less; in order to avoid this type of error, sensors should be placed out of plane.

In order to meet these requirements, we decided to 3D print a structure that “hooks” itself onto a set of shutter glasses, taking into account the sensor placement limitations mentioned above. After 3 iterations, we came up with a final prototype that complies with all the aforementioned requirements.

Simulation

After designing the frame with the specific position of each sensor, we ran the generated model through a simulator that SteamVR provides, which assesses how well the proposed model will track.

This simulator offers two types of views: a 3D view looking from the lighthouse and a 2D unwrapped view.

Each view shows translation error, rotation error, an initial pose view and the number of sensors visible from that viewpoint. The views are colored from blue (good tracking) to red (tracking not possible); in our case, we only need good tracking at the front of the glasses (as that is where the user looks at the projection screen).

As one can see in the 3D figures, the front of our tracked object shows good results for both rotational and translation errors.

Physical Prototype Results

After 3D printing the different parts and positioning and calibrating the sensors and the IMU (gyroscope) of the tracked object, we gave it a few tests, and so far it works promisingly.

Finally, a small video that shows the spider in action. I can definitely say that I look like a cyborg 🙂

ARMeet: Augmented Reality for Meeting reports

March 22 | 17

TL;DR: I created an app that eases the understanding of project reports in meetings by adding data visualization through Augmented Reality on the report itself; check the video.

Over the past 2 months I have been working on a proof of concept that actually performed surprisingly well on my phone. I called this project ARMeet, and basically it’s an attempt to ease the reading / understanding of the reports given in a meeting with the help of Augmented Reality.

For this proof of concept I wanted to visualize an interesting (to me) dataset that I found on Kaggle about video game sales with ratings. I wanted to mix the ratings with, say, the size of each visualized game, or experiment with other factors like positioning or even rotation; but after some failed experiments I decided not to use the ratings, as they were complete for only ~40% of the games the dataset contained (around ~6,900 games had complete ratings out of ~16,700).

So I just stuck to games represented by bars, organized by platforms that belong to a console; I created 6 major console groups: Nintendo, Sony, Microsoft, Sega, PC and Other. The Other group represents all the smaller platforms that don’t have renowned / still-alive vendors; among them are platforms like the Atari 2600, WonderSwan, TurboGrafx-16, etc.

Each game can consist of several stacked bars, where each color represents a region: blue for the US, yellow for the EU, red for Japan and white for other countries. A purple cylinder covering all the stacked colored boxes represents the total sales that game had. Games are organized by year and belong to platforms, which in turn belong to consoles. To visualize a platform, the user just points the application at the image target at the end of each chapter to see the sales.

Without further ado; here’s a video of how ARMeet works:

If you want to read the report you can get it here, and if you are interested in the source code of the project, feel free to check it out here too 🙂