Multi-user VR Displays Or… What is my PhD about in simple words.

January 23 | 19

TL;DR: Over the years, different ways have been attempted to generate multiple images from the same display for different users. In this post I write about six categories of approaches that try to multiplex imagery from the same display and, if the reader is interested, a paper on the state of the art in multiplexing images up to 2018 is shared at the end (or also grab it here).


This topic is part of my PhD thesis and I have been wanting to write about it for the past few weeks so that people in general can understand what I am working on without too many technicalities.

The core idea of my thesis is to show different images to different users on the same display. As simple as it sounds. One could naively argue, “Well Juan, that already exists; TVs nowadays offer to split the screen so I can watch one channel while my friends watch something else at the same time.” To which I reply: well, yes… but the idea is to display images from the same display without users being able to see each other’s views. Think of person A watching the Discovery Channel on the TV while, at the same time, person B is watching something else on Netflix, without either of them being able to see what the other is watching.

Another question one could ask is: why would this be important? Why would people care? It turns out that a really bright person found in a paper (Pollock [1]) that collaboration times increase significantly when users are at locations away from the center of perception. That is, when people work in groups looking at 3D imagery on a screen and cannot perceive images crafted with the correct perspective for where they are relative to the screen, collaborations take significantly longer and the perspective distortions prevent precise collaboration scenarios.

Ok Juan, but where would this be useful? This is also a good question: pretty much anywhere that involves collaboration and screen-projected VR (yes, VR is not only head-mounted displays :) ). Some examples are surgery simulation training, mechanical design, general teaching and training, and pretty much anything that involves collaborative object manipulation or discussion.

Pretty much the problem the PhD thesis addresses is that, when different people point at a certain part of a 3D object, they currently point at different places in physical space. In other words, what I want is for several people to coincide in physical space when pointing at things.

With this, collaborative scenarios would result in correct perspective views for each of the participants, like this (and yes, you could eventually watch Netflix and other things at the same time with this ;) ).

So the design challenge lies in: “How can we create a system that displays multiple images in a common area, occluding all but the appropriate image for each user?”

Sounds like magic, right? Well… it turns out this has been researched for a long time and there have been several attempts to achieve it. Another bright researcher proposes in a paper (Bolas [2]) four categories for generating multiple images from the same screen for different users. These categories are based on the approach used for generating the images: spatial barriers, optical routing, optical filtering and time multiplexing. In addition to these categories, I think it is worth adding volumetric displays and light field displays.


Spatial Barriers:

Pretty much, these approaches take advantage of the display’s physical configuration and user placement to generate specific views. Depending on the approach, brightness is reduced as a result of the physical barriers, and resolution can also suffer because several images are generated on the same screen.

Some examples of this approach are parallax barriers, the Protein Interactive Theater [3] and the IllusionHole [4,5], among other projects:


Optical Routing:

This approach uses the angle-sensitive optical characteristics of certain materials to direct or occlude images based on the viewer’s position. Color banding, loss of resolution and moiré patterns are some of the challenges of this approach.

Pretty much everything that routes light follows this approach; some examples include lenticular lenses (like the cards that used to come in cereal boxes and show different images as you flip them), holographic screen materials that reflect light depending on where users are, etc.

Optical Filtering:

This category involves systems that filter viewpoints using light’s electromagnetic properties. The most commonly used devices in this category are polarization filters (like the glasses for viewing 3D movies), anaglyph filters, and pretty much anything that filters light or manipulates its wavelength. Brightness reduction and color washing/loss are some of the problems associated with this approach.

Time Multiplexing:

This category encompasses solutions that use time-sequenced light and shutters to determine which user sees an image at a given point in time. Brightness reduction, flickering, crosstalk and high refresh rate requirements are some of the challenges of this approach. Normally it involves highly custom circuitry for syncing shutters and high refresh rates to allow image multiplexing.
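As a back-of-the-envelope illustration (mine, not from any of the cited papers), the refresh rate cost of time multiplexing can be sketched in a few lines: every view gets its own time slice, so the display must refresh that many times faster than the rate each eye actually perceives.

```python
# Rough arithmetic for time-multiplexed displays (illustrative only):
# each eye of each user gets its own time slice, so the panel must
# refresh users * eyes_per_user times faster than the per-eye rate.

def required_refresh_hz(users, per_eye_hz=60, eyes_per_user=2):
    """Panel refresh rate needed so each eye still sees per_eye_hz."""
    return users * eyes_per_user * per_eye_hz

# One stereo user already needs a 120 Hz display...
print(required_refresh_hz(1))  # -> 120
# ...and four stereo users push it to 480 Hz.
print(required_refresh_hz(4))  # -> 480
```

This is why the category tends to require custom high-refresh projectors: the budget is eaten very quickly as users are added.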

Among the most prominent projects are Bernd Froehlich’s multi-viewer stereo displays [11,12]; he uses a projector per eye for each user and, with shutter glasses and shuttered projectors, sequences between users to generate 3D views and separate the users.

Volumetric Displays:

In volumetric displays, the image is produced within a volume of space, so the content is always confined within the physical enclosure of the device. Imagery is normally displayed with spinning mirrors or screens, translucent discs, LEDs stacked on a grid, etc. Resolution is one of the biggest drawbacks, and the volumetric content normally cannot be interacted with directly, only through virtual wands.

On the bright side, the content can be seen from any point of view and no special glasses are needed to perceive the rendered images.

Light field Displays:

Finally, in light field displays the light emitted from a point on the screen varies with direction, hence the generation of different views. Without going into too much detail, this is an active area of research: unfortunately there are no out-of-the-box solutions and almost everything is built from scratch, the field of view from which users can see the content is small (~10 degrees), and the processing times needed to generate frames are prohibitive for real-time rendering.

Which approach is better?

No approach is strictly better than another. Depending on the needs of your project, one approach will work better than the others. One thing to note is that hybrid approaches try to exploit the best of different techniques and are commonly used either to generate stereoscopy or to increase the number of users.

Parallax methods face challenges regarding brightness (with some exceptions). Optical routing approaches still pose drawbacks that today’s technology can address, for example with increased display resolution. Optical filtering approaches, on the other hand, are very limited by the nature of their filters for separating users, and brightness is again affected. Time multiplexing is an interesting technique that, with fast enough refresh rates, can increase the number of views; still, the added synchronization, custom circuitry and shuttering nature of the approach pose challenges of their own, such as brightness reduction and flickering.

Volumetric displays have challenges of resolution and occlusion, and by the nature of these displays users cannot point inside the rendered volume. Finally, light field displays are an active area of research. One of the biggest challenges this approach faces is that there are no out-of-the-box solutions to build on; long processing times for generating frames and reduced viewing angles also leave a lot of room for research.

If you managed to read up to here and you are still interested in the state of the art up to 2018 on multi-user displays, feel free to read this paper that I published in Electronic Imaging 2019 at the Stereoscopic Displays & Applications conference (SD&A).


  • [1] Pollock, B., Burton, M., Kelly, J. W., Gilbert, S., & Winer, E. (2012). The right view from the wrong location: Depth perception in stereoscopic multi-user virtual environments. IEEE Transactions on Visualization and Computer Graphics, 18(4), 581-588.
  • [2] Bolas, M., McDowall, I., & Corr, D. (2004). New research and explorations into multiuser immersive display systems. IEEE Computer Graphics and Applications, 24(1), 18-21.
  • [3] W. (1998, May). Designing and building the pit: a head-tracked stereo workspace for two users. In 2nd International Immersive Projection Technology Workshop (pp. 11-12).
  • [4] Kitamura, Yoshifumi, et al. “Interactive stereoscopic display for three or more users.” Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. ACM, 2001.
  • [5] Kitamura, Yoshifumi, et al. “The IllusionHole with polarization filters.” Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM, 2006.
  • [6] Nguyen, D., & Canny, J. (2005, April). MultiView: spatially faithful group video conferencing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 799-808). ACM.
  • [7] Nguyen, D. T., & Canny, J. (2007, April). MultiView: improving trust in group video conferencing through spatial faithfulness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1465-1474). ACM.
  • [8] Jorke, H., & Fritz, M. (2006, February). Stereo projection using interference filters. In Proc. SPIE (Vol. 6055, p. 60550G).
  • [9] Jorke, H., Simon, A., & Fritz, M. (2008, May). Advanced stereo projection using interference filters. In 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 2008 (pp. 177-180). IEEE.
  • [10] Jorke, H., Simon, A., & Fritz, M. (2009). Advanced stereo projection using interference filters. Journal of the Society for Information Display, 17(5), 407-410.
  • [11] Froehlich, B., Hoffmann, J., Klueger, K., & Hochstrate, J. (2004, May). Implementing multi-viewer time-sequential stereo displays based on shuttered LCD projectors. In 4th Immersive Projection Technology Workshop, Ames, Iowa.
  • [12] Froehlich, Bernd, et al. Implementing multi-viewer stereo displays. (2005).
  • [13] Favalora, Gregg E., et al. “Volumetric three-dimensional display system with rasterization hardware.” Photonics West 2001 - Electronic Imaging. International Society for Optics and Photonics, 2001.
  • [14] Jones, A., McDowall, I., Yamada, H., Bolas, M., & Debevec, P. (2007). Rendering for an interactive 360 light field display. ACM Transactions on Graphics (TOG), 26(3), 40.
  • [15] Lanman, D., Wetzstein, G., Hirsch, M., Heidrich, W., & Raskar, R. (2011). Polarization fields: dynamic light field display using multi-layer LCDs. ACM Transactions on Graphics (TOG), 30(6), 186.

Selective Highlighting of Vertices in Unity3D meshes

August 23 | 18

TLDR; I created a behaviour script that lets you highlight vertices on any type of mesh in Unity3D, written while my wife was sleeping comfortably on her pregnancy pillow.

I have been playing lately with mesh deformations and I needed a way to visually see the selected vertices (and neighbor vertex connections) of the meshes I was manipulating. To do this I created a simple MonoBehaviour script that one attaches to an empty game object that is a child of the mesh being manipulated, and then one simply calls two functions to highlight or un-highlight vertices of said mesh.

Someone might find this utility useful. There might be other ways to do it but this one that I developed works pretty well for my needs. If you find it useful let me know; it makes me happy to know that I helped someone on the interwebs with my work :).

It’s worth mentioning that the mesh to manipulate *needs* to have a MeshCollider. This is how it looks:

Anyways. Without further ado. Here it goes:

And a simple script that makes it work by “Painting” with the left click on certain parts of the mesh.

To make it work, simply create an empty Game Object as a child of the mesh to highlight and attach the “NeighborVertexHighlighter” script component to it. To test the script you can attach the “PaintRemoveVerticesExample” script to the mesh to highlight. *It’s important to remember that the mesh to be highlighted requires a MeshCollider component.*

How it works:

The way this works is by creating, for each vertex, an array of integers containing the indices of its neighboring vertices. This neighbor list is calculated on a separate thread (so as not to block the main thread) and runs in O(T), where T is the number of triangles in the mesh.
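The neighbor computation can be sketched language-agnostically (here in Python rather than the C# the script actually uses): walk the flat triangle index list once and record, for every vertex index, the vertex indices it shares a triangle with.

```python
# Sketch of the neighbor computation described above: one pass over the
# triangle index list (3 indices per triangle), so O(T) for T triangles.

def build_neighbors(triangles):
    """triangles: flat list of vertex indices, 3 per triangle.
    Returns a dict mapping each vertex index to its neighbor set."""
    neighbors = {}
    for t in range(0, len(triangles), 3):
        a, b, c = triangles[t], triangles[t + 1], triangles[t + 2]
        for v, others in ((a, (b, c)), (b, (a, c)), (c, (a, b))):
            neighbors.setdefault(v, set()).update(others)
    return neighbors

# Two triangles sharing the edge 1-2 (a quad):
quad = [0, 1, 2,  2, 1, 3]
print(sorted(build_neighbors(quad)[1]))  # -> [0, 2, 3]
```

In Unity the flat index list would come from `Mesh.triangles`; the rest is plain bookkeeping.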

Two main functions can be called in the highlighter script: AddIndex(int i) and RemoveIndex(int i). One just needs to pass the index of the vertex to highlight and the script automatically creates a mesh with the indices that are going to be highlighted.

Internally, a Dictionary<Vector3, VertexNeighbor> contains the indices currently being highlighted. This dictionary is modified through the AddIndex and RemoveIndex functions. It is worth noting that I used the position of the vertex as the key instead of its index in the vertex array, because I found that some meshes (like the base cube in Unity) contain several separate vertices at the exact same position, so a purely index-based algorithm would highlight only the neighbors of that one index. Using the position as the key handles meshes that contain different vertices at the same position.
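To see why position-keyed lookups matter, here is a small Python sketch (the coordinates are made up for illustration): a cube mesh with per-face normals stores each corner several times, so grouping indices by position recovers all the co-located duplicates that an index-based lookup would miss.

```python
# Group vertex indices by position, as the highlighter's dictionary does.
# Meshes with per-face normals (e.g. Unity's base cube) repeat each corner
# position once per face, so one position can map to several indices.

def group_by_position(vertices):
    """vertices: list of (x, y, z) tuples; returns position -> index list."""
    groups = {}
    for i, pos in enumerate(vertices):
        groups.setdefault(pos, []).append(i)
    return groups

verts = [(0, 0, 0), (1, 0, 0), (0, 0, 0)]  # indices 0 and 2 coincide
print(group_by_position(verts)[(0, 0, 0)])  # -> [0, 2]
```

(In C# the key is a Vector3; keying floats only works here because the duplicated vertices are bit-identical copies, not recomputed values.)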

To map vertices of the highlighted mesh back to the original mesh, another dictionary called indexRemap<int, int> is used. This dictionary easily lets one translate vertex indices of the mesh to highlight into vertex indices of the highlighted mesh.

Finally, after an index is added or removed, a line mesh is created from the indices already stored in the class. The idea is pretty simple but works quite well.

Future work:

Perhaps this could be optimized further by not creating a mesh every time an index is added or removed, but instead keeping all the indices of the original mesh and adding or removing triangles. Unfortunately that would consume more memory, so I will have to assess how well it works on huge meshes.

How to know when a SteamVR controller or a VIVE tracker gets connected in Unity

March 5 | 18

For my PhD thesis I need to know in SteamVR, when a device gets connected, whether it is a generic tracker or a typical SteamVR controller. After searching for a while I found this solution here, which works but relies on comparing the render model of the device to a string. I wasn’t satisfied with that solution.

After digging for a while inside the SteamVR scripts, I found that there is an event you can subscribe to that lets you know when a device gets connected or disconnected, and by doing a reverse lookup on the tracked device class with the device index you can tell whether it is a SteamVR controller or a VIVE tracker.

This code snippet prints when a controller or a generic tracker gets connected.

It’s a pretty handy script if you want to know whether a controller or a tracker got connected! Hope this helps you out :).

Creating a tool for changing the fork seals on my R1150gsa

November 9 | 17

So I’m experimenting here: this short post, even though it has to do with programming, is more about how to make a tool for doing some work on my bike.

Anyways, the fork seals started to leak on my beloved R1150GS Adventure and, as I have to do everything here on my own (there is no BMW motorcycle dealership here in AR), I ordered the fork seals and the dust seals from Tom Cutter at Rubber Chicken Garage (great guy, and he sells OEM BMW parts). But I was missing a tool for pushing the seals inside the forks. Some people mention using PVC pipes, others say a 34mm socket would work, but I wanted to do it precisely (so no PVC piping) and I didn’t want to spend ~10 USD on a socket just for that.

So I started to think of a device that I could attach a socket to and that would push evenly around the hard circular surface of the seals.

I ended up designing a mix between a nut and a flat cylinder that touches only the hard surface of the seals in order to push them inside the forks.

It was pretty straightforward doing this in OpenSCAD, and the final result:

And of course the code to produce this is pretty simple:


The Spider: Using SteamVR dev kit for creating a glasses tracker.

August 30 | 17

In the lab, I have been working for some time with a SteamVR tracking dev kit we got. I started a small project of creating trackable shutter glasses that we can use in our VR experiments. I have been meaning to write about this for a while, and I finally found some time to do it!

In this post I’m going to write about how we created in the lab a trackable prototype frame, using SteamVR tracking, that attaches to the shutter glasses for our projects here at the EAC. If you are interested in more details on the whole workflow, just write me an email and I will try to explain it in more depth.

Steam VR Overview

The SteamVR tracking system is based on timing. The wireless connection uses a proprietary protocol, and the only difference between a controller and an HMD is that the HMD has a display; besides that, they work exactly the same way.

The system uses two base stations to track objects. These base stations are called “lighthouses” and need to be synchronized either by wire or by optical flashes (if the lighthouses are in each other’s field of view).

Each lighthouse contains two motors that spin at 60Hz (one horizontal and one vertical). They produce IR signals modulated at 1.8MHz, generating timestamps at 48MHz. The time difference between the sync flash from a lighthouse and a laser hit on the tracked object’s sensors yields an angle. These angles, plus some precise timing, produce an equation system that provides the position and rotation of the tracked object.

Each tracked object contains a set of optical receivers; these receivers detect the reference signals from the base stations and, with a photodiode, convert the IR light into a signal that the FPGA on the tracked object registers as a hit.

The FPGA then uses timer ticks to calculate the angles between laser hits from the base stations and, together with the known position of each sensor, builds the equation system that solves for the position and rotation of the tracked object.
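The timing-to-angle step can be illustrated with the numbers above (60 Hz rotors, 48 MHz tick clock); this is only my back-of-the-envelope sketch, not Valve’s actual solver, and the 200,000-tick example is made up.

```python
import math

# Illustrative math only: a lighthouse rotor spins at 60 Hz, so the angle
# it sweeps between the sync flash and a laser hit is proportional to the
# elapsed time, which the FPGA measures in 48 MHz ticks.

ROTOR_HZ = 60
TICK_HZ = 48_000_000

def hit_angle(ticks_since_sync):
    """Rotor angle (radians) at the moment the sensor saw the laser."""
    seconds = ticks_since_sync / TICK_HZ
    return 2 * math.pi * ROTOR_HZ * seconds

# A hit 200,000 ticks after the sync flash sits a quarter turn in:
print(math.degrees(hit_angle(200_000)))  # close to 90 degrees
```

One such angle per rotor per lighthouse, combined with the known sensor positions on the rigid body, is what feeds the pose equation system.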


In order to design the spider we had to take several factors into account: the tracked object had to be light, it should not occlude the view through the glasses it was going to be attached to, and the sensors had to be placed with the translation and rotation errors that can arise in mind.

Translation errors: these errors arise when the tracked object is moved around the tracked area. As the distance from the lighthouse increases, the tangential velocity of the spinning motors in the base stations also increases, decreasing the time between sensor hits until the timing error begins to dominate. To mitigate this type of error, the sensors should be as far apart as possible.
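A rough sketch of why distance hurts (my own illustration with made-up sensor spacings, using the 60 Hz / 48 MHz figures from above): the angle the rotor sweeps between two sensor hits shrinks with distance, so fewer timer ticks separate the hits and quantization error grows relative to the measurement.

```python
import math

ROTOR_HZ = 60
TICK_HZ = 48_000_000

def ticks_between_hits(sensor_spacing_m, distance_m):
    """Timer ticks between laser hits on two sensors, as seen by a
    lighthouse whose rotor sweeps past both of them."""
    # Angle subtended by the two sensors from the lighthouse:
    angle = 2 * math.atan(sensor_spacing_m / (2 * distance_m))
    # Time the rotor needs to sweep that angle, converted to ticks:
    sweep_time = angle / (2 * math.pi * ROTOR_HZ)
    return sweep_time * TICK_HZ

# Sensors 5 cm apart: plenty of ticks at 1 m, far fewer at 5 m.
print(round(ticks_between_hits(0.05, 1.0)))  # close to the lighthouse
print(round(ticks_between_hits(0.05, 5.0)))  # five times farther away
```

Spreading the sensors farther apart buys back ticks, which is exactly the design guideline stated above.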

Rotational errors: these errors arise when the user rotates the tracked object. Rotating sensors orthogonal to a plane yields significant displacement, while rotating them within the plane yields much less; to mitigate this type of error, sensors should be placed out of plane.

To achieve these requirements, we decided to 3D print a structure that “hooks” itself onto a set of shutter glasses while respecting the sensor placement constraints mentioned above. After three iterations, we came up with a final prototype that complies with all the aforementioned requirements.


After designing the frame with the specific position of each sensor, we ran the generated model through a simulator that SteamVR provides, which assesses how well or poorly the proposed model tracks.

This simulator offers two types of views: a 3D view looking from the lighthouse and a 2D unwrapped view.

Each view shows the translation error, the rotation error, an initial pose view and the number of sensors visible from that specific viewpoint. Each view also shows colors going from blue (good tracking) to red (tracking not possible); in our case, we only need to track the front of the glasses (as that is what faces the projection screen the user is looking at).

As one can see in the 3D figures, the front of our tracked object shows good results for both rotational and translation errors.

Physical Prototype Results

After 3D printing the different parts and positioning and calibrating the sensors and the IMU (gyroscope) of the tracked object, we gave it a few tests and so far it works promisingly.

Finally, a small video that shows the spider in action. I can definitely say that I look like a cyborg 🙂