R1150GSA to R1200GSA adaptor plate

April 15 | 19

TL;DR: I modified an adaptor plate design from someone on UKGSer and I’m sharing the adventure. If you want to make your own, you can download the DWF files, cut a piece of steel, and mount it on your bike.

Long version:

A couple of months ago I purchased the Holan pannier rack adaptors for my R1150GSA. With these I could mount R1200GSA OEM aluminum panniers on my R1150GSA :).

Unfortunately, there was still no option for an adaptor plate for the top case… Luckily, on the UKGSer forums there is a great guy called Garry (cbrgaz) who designed an adaptor plate for the R1150GS a long time ago. I contacted him and he was kind enough to send me his DWF file so I could get started! Upon receiving it, the very first thing I did was visualize it, and here’s how it looks:

Great! So after playing with it a bit in Fusion 360, I bent the corners as shown in the diagram and proceeded to 3D print the part. This is the result:

It fits perfectly! However, I wanted a pocket between the plate and the top case rack, to be able to fit a 1 qt bottle of oil. So I modified the design slightly and came up with this:

As you can notice, it’s pretty much the same; I just modified the lengths of the edges. With this, the modified part ended up looking like this:

Pretty much what I wanted: a small cavity where I can place my quart of oil, in a box that I plan to make in the future. Anyways, after this the process was pretty much straightforward. I purchased a 1 x 2 ft 316 stainless steel plate and sent it out to be cut with a water jet.

After that, I bent the corners and was pretty much done! 🙂

That’s pretty much it! I’m really happy with the outcome, and even happier that I now have a place for the engine oil!

If you are interested in doing this, you can find Garry’s original file here, or my modified design with the pocket for the quart of oil here.

UPDATE:

First of all, thanks to Richard Evans for sharing these pictures.

Be careful where you do the bending, as you could easily make your part unusable. It’s important that the bends are made exactly where the lines are marked for the part to work.

If possible, 3D print the part first, or use it to cut a piece of cardboard, to test where the bends will go and avoid ending up with scrap material. I’m attaching some pictures of how incorrect bending will not let you attach the carrier (thanks Richard).

Multi-user VR Displays Or… What is my PhD about in simple words.

January 23 | 19

TL;DR: Different ways of generating multiple images from the same display for different users have been attempted over the years. In this post I write about six categories of approaches that multiplex imagery from a single display, and if you are interested, a paper on the state of the art of image multiplexing up to 2018 is shared at the end (or grab it here).

This topic is part of my PhD thesis, and I have been wanting to write about it for the past few weeks so that people in general can understand what I am working on without too many technicalities.

The core idea of my thesis is to show different images to different users on the same display. As simple as it sounds. One could naively argue, “Well Juan, that already exists; TVs nowadays can split the screen so I can watch one channel while my friends watch other things at the same time.” To which I reply: well, yes… but the idea is to display the images on the same display without each user being able to see the other users’ views. Like person A watching the Discovery Channel on the TV while person B is simultaneously watching something else on Netflix, without either being able to see what the other person is watching.

Another question one could ask: why would this be important? Why would people care? It turns out that a really bright person (Pollock [1]) found that collaboration times increase significantly when users view a display from locations other than the center of projection. That is, when people work in groups looking at 3D content on a screen and cannot perceive 3D images rendered with the correct perspective for where they are relative to the screen, collaboration takes significantly longer and the perspective distortions prevent precise collaboration scenarios.

Ok Juan, but where would this be useful? This is also a good question: pretty much anywhere that involves collaboration and screen-projected VR (yes, VR is not only head-mounted displays) :). Some examples are surgery simulation training, mechanical design, general teaching and training, and pretty much anything that involves collaborative object manipulation and discussion.

The problem the PhD thesis addresses is that, with a single shared view, when different people point at the same part of a 3D object they actually point at different places in physical space. In other words, what I want is for several people to coincide in physical space when pointing at things.

With this, collaborative scenarios would result in correct perspective views for each participant, like this (and yes, you could eventually watch Netflix and other things at the same time this way ;)).

So the design challenge lies in: “How can we create a system that displays multiple images in a common area, occluding all but the appropriate image for each user?”

Sounds like magic, right? Well… it turns out this has been researched for a long time and there have been several attempts to achieve it. Another bright researcher (Bolas [2]) proposes four categories for generating multiple images from the same screen for different users, based on the approach used to generate the images: spatial barriers, optical routing, optical filtering, and time multiplexing. In addition to these categories, I think it is worth adding volumetric displays and light field displays.

Spatial Barriers:

These approaches take advantage of the display’s physical configuration and user placement to generate specific views. Depending on the approach, brightness is reduced as a result of the physical barriers, and resolution can also suffer because the images are generated on the same screen.

Some examples of this approach are parallax barriers, the Protein Interactive Theater [3], and the IllusionHole [4, 5], among other projects.

Optical Routing:

This approach uses the angle-sensitive optical characteristics of certain materials to direct or occlude images based on the viewer’s position. Color banding, loss of resolution, and moiré patterns are some of the challenges of this approach.

Pretty much everything that routes light follows this approach; some examples include lenticular lenses (like the cards that used to come in cereal boxes and display different images as you flip them), holographic screen materials that reflect light depending on where the users are, etc.

Optical Filtering:

This category involves systems that filter viewpoints using light’s electromagnetic properties. The most commonly used devices in this category are polarization filters (like the glasses for watching 3D movies), anaglyph filters, etc.: pretty much anything that filters light or manipulates its wavelength. Brightness reduction and color washing/loss are some of the problems associated with this approach.

Time Multiplexing:

This category encompasses solutions that use time-sequenced light and shutters to determine which user sees an image at a given point in time. Brightness reduction, flickering, crosstalk, and high refresh rate requirements are some of the challenges of this approach. Normally it involves highly custom circuitry to sync the shutters, plus high-refresh-rate systems to allow image multiplexing.
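
As a back-of-the-envelope illustration (the 360 Hz display below is an assumed example, not a figure from any particular system), time-slicing a display among N stereo users divides its refresh rate by 2N:

```latex
f_{\text{per eye}} = \frac{f_{\text{display}}}{2N} = \frac{360\ \mathrm{Hz}}{2 \times 3} = 60\ \mathrm{Hz}
```

So three stereo users on a 360 Hz display each get only 60 Hz per eye, which is why brightness and flickering become pressing so quickly.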

Among the most prominent projects are Bernd Froehlich’s multi-viewer stereo displays [11, 12]: using one projector per eye for each user, together with shutter glasses and shuttered projectors, he sequences between users to generate 3D views and separate the users.

Volumetric Displays:

In volumetric displays, the image is produced within a volume of space, and the content is always confined within the physical device enclosure. Normally the imagery is displayed with spinning mirrors, screens, translucent discs, LEDs stacked in a grid, etc. Resolution is one of the biggest drawbacks, and the volumetric content normally cannot be interacted with directly, only through virtual wands.

On the bright side, the content can be seen from any point of view, and no special glasses are needed to perceive the rendered images.

Light Field Displays:

Finally, in light field displays the light emitted from a point on the screen varies with direction, hence generating different views. Without going into too much detail, this is an active area of research: there are no out-of-the-box solutions and almost everything is built from scratch; there is a small field of view (~10°) from which users can see the image, and the processing times needed to generate frames are prohibitive for real-time rendering.

Which approach is better?

No approach is strictly better than another. Depending on the needs of your project, one approach will work better than another; one thing to note is that hybrid approaches try to exploit the best of different techniques and are commonly used either to generate stereoscopy or to increase the number of users.

Parallax methods face challenges regarding brightness (with some exceptions). Optical routing approaches still pose drawbacks, some of which today’s technology can solve with increased display resolution, among other things. Optical filtering approaches, on the other hand, are very limited by the nature of their filters for separating users, and brightness is again affected. Time multiplexing is an interesting technique that, with fast enough refresh rates, can increase the number of views; still, the added synchronization, custom circuitry, and shuttering nature of the approach pose new challenges to solve, like brightness and flickering, among others.

Volumetric displays have resolution and occlusion challenges, and by the nature of these displays, users cannot point inside the rendered volume. Finally, light field displays are an active area of research. One of the biggest challenges this approach faces is that there are no out-of-the-box solutions to build on; long frame-generation times and reduced viewing angles also leave a lot of room for research.

If you managed to read this far and are still interested in the state of the art up to 2018 on multi-user displays, feel free to read this paper that I published in Electronic Imaging 2019 at the Stereoscopic Displays & Applications conference (SD&A, http://www.stereoscopic.org/).

References

  • [1] Pollock, B., Burton, M., Kelly, J. W., Gilbert, S., & Winer, E. (2012). The right view from the wrong location: Depth perception in stereoscopic multi-user virtual environments. IEEE Transactions on Visualization and Computer Graphics, 18(4), 581-588.
  • [2] Bolas, M., McDowall, I., & Corr, D. (2004). New research and explorations into multiuser immersive display systems. IEEE Computer Graphics and Applications, 24(1), 18-21.
  • [3] W. (1998, May). Designing and building the PIT: a head-tracked stereo workspace for two users. In 2nd International Immersive Projection Technology Workshop (pp. 11-12).
  • [4] Kitamura, Y., et al. (2001). Interactive stereoscopic display for three or more users. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. ACM.
  • [5] Kitamura, Y., et al. (2006). The IllusionHole with polarization filters. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM.
  • [6] Nguyen, D., & Canny, J. (2005, April). MultiView: spatially faithful group video conferencing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 799-808). ACM.
  • [7] Nguyen, D. T., & Canny, J. (2007, April). MultiView: improving trust in group video conferencing through spatial faithfulness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1465-1474). ACM.
  • [8] Jorke, H., & Fritz, M. (2006, February). Stereo projection using interference filters. In Proc. SPIE (Vol. 6055, p. 60550G).
  • [9] Jorke, H., Simon, A., & Fritz, M. (2008, May). Advanced stereo projection using interference filters. In 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video, 2008 (pp. 177-180). IEEE.
  • [10] Jorke, H., Simon, A., & Fritz, M. (2009). Advanced stereo projection using interference filters. Journal of the Society for Information Display, 17(5), 407-410.
  • [11] Froehlich, B., Hoffmann, J., Klueger, K., & Hochstrate, J. (2004, May). Implementing multi-viewer time-sequential stereo displays based on shuttered LCD projectors. In 4th Immersive Projection Technology Workshop, Ames, Iowa.
  • [12] Froehlich, B., et al. (2005). Implementing multi-viewer stereo displays.
  • [13] Favalora, G. E., et al. (2001). Volumetric three-dimensional display system with rasterization hardware. In Photonics West 2001-Electronic Imaging. International Society for Optics and Photonics.
  • [14] Jones, A., McDowall, I., Yamada, H., Bolas, M., & Debevec, P. (2007). Rendering for an interactive 360° light field display. ACM Transactions on Graphics (TOG), 26(3), 40.
  • [15] Lanman, D., Wetzstein, G., Hirsch, M., Heidrich, W., & Raskar, R. (2011). Polarization fields: dynamic light field display using multi-layer LCDs. ACM Transactions on Graphics (TOG), 30(6), 186.

Selective Highlighting of Vertices in Unity3D meshes

August 23 | 18

TL;DR: I created a behaviour script that lets you highlight vertices on any type of mesh in Unity3D, written while my wife was sleeping comfortably on her pregnancy pillow.

I have been playing lately with mesh deformations, and I needed a way to visually see the selected vertices (and neighbor vertex connections) of the meshes I was manipulating. To do this I created a simple MonoBehaviour script: you attach it to an empty game object that is a child of the mesh being manipulated, and simply call two functions to highlight or un-highlight vertices of said mesh.

Someone might find this utility useful. There might be other ways to do it, but the one I developed works pretty well for my needs. If you find it useful let me know; it makes me happy to know that I helped someone on the interwebs with my work :).

It’s worth mentioning that the mesh to manipulate *needs* to have a MeshCollider. This is how it looks:

Anyways, without further ado, here it goes:
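
In case the embed doesn’t load, here is a minimal sketch of what the behaviour can look like. The AddIndex/RemoveIndex entry points and the threaded neighbor computation match what is described below; the rest is a simplification of my own (for instance, it tracks highlights with a HashSet<int> instead of the position-keyed dictionary discussed later):

```csharp
using System.Collections.Generic;
using System.Threading;
using UnityEngine;

// Sketch, not the exact script: attach to an empty child of the mesh
// to highlight. Draws neighbor edges of highlighted vertices as lines.
public class NeighborVertexHighlighter : MonoBehaviour
{
    Vector3[] vertices;    // copy of the parent's vertices (local space)
    List<int>[] neighbors; // neighbor indices per vertex, built on a thread
    readonly HashSet<int> highlighted = new HashSet<int>();
    Mesh lineMesh;

    void Start()
    {
        Mesh source = GetComponentInParent<MeshFilter>().mesh;
        vertices = source.vertices;
        int[] triangles = source.triangles; // copy now: Unity API is main-thread only

        lineMesh = new Mesh();
        gameObject.AddComponent<MeshFilter>().mesh = lineMesh;
        gameObject.AddComponent<MeshRenderer>(); // assign a material in the Inspector

        // Build the adjacency off the main thread; a single pass over the
        // triangle list, so it is O(#triangles).
        new Thread(() => neighbors = BuildNeighbors(vertices.Length, triangles)).Start();
    }

    static List<int>[] BuildNeighbors(int vertexCount, int[] tris)
    {
        var n = new List<int>[vertexCount];
        for (int i = 0; i < vertexCount; i++) n[i] = new List<int>();
        for (int t = 0; t < tris.Length; t += 3)
        {
            AddPair(n, tris[t], tris[t + 1]);
            AddPair(n, tris[t + 1], tris[t + 2]);
            AddPair(n, tris[t + 2], tris[t]);
        }
        return n;
    }

    static void AddPair(List<int>[] n, int a, int b)
    {
        if (!n[a].Contains(b)) n[a].Add(b);
        if (!n[b].Contains(a)) n[b].Add(a);
    }

    public void AddIndex(int i)
    {
        if (neighbors == null) return; // adjacency not ready yet
        if (highlighted.Add(i)) RebuildLineMesh();
    }

    public void RemoveIndex(int i)
    {
        if (highlighted.Remove(i)) RebuildLineMesh();
    }

    // One line segment per neighbor edge of every highlighted vertex.
    void RebuildLineMesh()
    {
        var points = new List<Vector3>();
        var indices = new List<int>();
        foreach (int v in highlighted)
            foreach (int nb in neighbors[v])
            {
                indices.Add(points.Count); points.Add(vertices[v]);
                indices.Add(points.Count); points.Add(vertices[nb]);
            }
        lineMesh.Clear();
        lineMesh.SetVertices(points);
        lineMesh.SetIndices(indices.ToArray(), MeshTopology.Lines, 0);
    }
}
```

Note the sketch assumes the child sits at an identity local transform, so its line mesh lines up with the parent mesh’s local-space vertices.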

And a simple script that makes it work by “painting” certain parts of the mesh with the left click:
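
A sketch of what that example can look like (the raycast-and-closest-corner logic is my assumption of how the original works; the right-click removal is an added convenience):

```csharp
using UnityEngine;

// Sketch of the paint example. Attach to the mesh itself; it needs a
// MeshCollider so RaycastHit.triangleIndex is valid.
public class PaintRemoveVerticesExample : MonoBehaviour
{
    NeighborVertexHighlighter highlighter;
    Vector3[] vertices;
    int[] triangles;

    void Start()
    {
        highlighter = GetComponentInChildren<NeighborVertexHighlighter>();
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        vertices = mesh.vertices;   // cache: these properties allocate copies
        triangles = mesh.triangles;
    }

    void Update()
    {
        bool add = Input.GetMouseButton(0);
        bool remove = Input.GetMouseButton(1);
        if (!add && !remove) return;

        RaycastHit hit;
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out hit)) return;
        if (hit.collider.gameObject != gameObject) return;

        // Take the corner of the hit triangle closest to the hit point.
        int best = -1;
        float bestDist = float.MaxValue;
        for (int k = 0; k < 3; k++)
        {
            int idx = triangles[hit.triangleIndex * 3 + k];
            float d = (transform.TransformPoint(vertices[idx]) - hit.point).sqrMagnitude;
            if (d < bestDist) { bestDist = d; best = idx; }
        }

        if (add) highlighter.AddIndex(best);
        else highlighter.RemoveIndex(best);
    }
}
```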

To make it work, simply create an empty GameObject as a child of the mesh to highlight and attach the “NeighborVertexHighlighter” script component to it. To test the script you can attach the “PaintRemoveVerticesExample” script to the mesh to highlight. *It’s important to remember that the mesh to be highlighted requires a MeshCollider component.*

How it works:

This works by creating a list of arrays of integers, one per vertex; for any given vertex, it contains the indices of that vertex’s neighbors. This neighbor list is calculated on a separate thread (so it doesn’t block the main thread) and runs in O(number of triangles in the mesh).

Two main functions can be called on the highlighter script: AddIndex(int i) and RemoveIndex(int i). One just passes the index of the vertex to highlight, and the script automatically creates a mesh with the indices that are to be highlighted.

Internally, a Dictionary<Vector3, VertexNeighbor> holds the indices currently being highlighted; this dictionary is modified by the AddIndex and RemoveIndex functions. It is worth noting that I used the position of the vertex as the key instead of its index in the vertex array, because I found that some meshes (like the default cube in Unity) contain several separate vertices at exactly the same position, so an index-based lookup would highlight only that particular index’s neighbors. Keying by position handles meshes that contain different vertices at the same position.
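
As a sketch, grouping duplicate vertices under one position key can be done like this (the names here are illustrative, not the original script’s):

```csharp
using System.Collections.Generic;
using UnityEngine;

static class VertexGrouping
{
    // Group all vertex indices that share the exact same position, so
    // duplicated vertices (e.g. Unity's default cube) highlight together.
    public static Dictionary<Vector3, List<int>> GroupByPosition(Mesh mesh)
    {
        Vector3[] verts = mesh.vertices; // cache: the property allocates a copy
        var byPosition = new Dictionary<Vector3, List<int>>();
        for (int i = 0; i < verts.Length; i++)
        {
            List<int> sharing;
            if (!byPosition.TryGetValue(verts[i], out sharing))
                byPosition[verts[i]] = sharing = new List<int>();
            sharing.Add(i); // co-located duplicates share one entry
        }
        return byPosition;
    }
}
```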

To map vertices of the highlight mesh back to the original mesh, another dictionary is used, called indexRemap<int, int>. This dictionary easily translates vertex indices of the mesh being highlighted into vertex indices of the highlight mesh.

Finally, after an index is added or removed, a line mesh is created from the indices already saved in the class. The idea is pretty simple but works pretty well.

Future work:

Perhaps this could be optimized further by not creating a new mesh every time an index is added or removed, but instead keeping all the indices of the original mesh and adding/removing triangles. Unfortunately that would consume more memory, so I will have to assess how well it would work on huge meshes.

How to know when a SteamVR controller or a VIVE tracker gets connected in Unity

March 5 | 18

For my PhD thesis I need to know, when a device gets connected in SteamVR, whether it’s a generic tracker or a typical SteamVR controller. After searching for a while I found this solution here that works, but it relies on comparing the device’s render model against a string. I wasn’t satisfied with that solution.

After digging around inside the SteamVR scripts for a while, I found that there is an event you can subscribe to that tells you when a device gets connected or disconnected; by doing a reverse lookup on the tracked device class with the device index, you can tell whether it’s a SteamVR controller or a VIVE tracker.

This code snippet prints when a controller or a generic tracker gets connected.
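
Something along these lines (a sketch against the SteamVR Unity plugin 1.x API; the event and enum names below exist in that plugin, but your version may differ):

```csharp
using UnityEngine;
using Valve.VR;

// Prints a message when a SteamVR controller or a VIVE tracker connects.
public class DeviceConnectionWatcher : MonoBehaviour
{
    void OnEnable()
    {
        SteamVR_Events.DeviceConnected.Listen(OnDeviceConnected);
    }

    void OnDisable()
    {
        SteamVR_Events.DeviceConnected.Remove(OnDeviceConnected);
    }

    void OnDeviceConnected(int index, bool connected)
    {
        if (!connected || OpenVR.System == null)
            return;

        // Reverse lookup: ask OpenVR what class of device sits at this index.
        ETrackedDeviceClass deviceClass =
            OpenVR.System.GetTrackedDeviceClass((uint)index);

        if (deviceClass == ETrackedDeviceClass.Controller)
            Debug.Log("SteamVR controller connected at index " + index);
        else if (deviceClass == ETrackedDeviceClass.GenericTracker)
            Debug.Log("VIVE tracker connected at index " + index);
    }
}
```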

It’s a pretty handy script if you want to know when a controller or a tracker gets connected! Hope this helps you out :).

Creating a tool for changing the fork seals on my R1150gsa

November 9 | 17

So I’m experimenting here: this short post, even though it has to do with programming, is more about how to make a tool to do some work on my bike.

Anyways, the fork seals started to leak on my beloved R1150GS Adventure, and as I have to do everything on my own here (there is no BMW motorcycle dealership in AR), I ordered the fork seals and dust seals from Tom Cutter at Rubber Chicken Garage (great guy, and he sells OEM BMW parts). But I was missing a tool for pushing the seals into the forks. Some people suggest using PVC pipe, others say a 34 mm socket would work, but I wanted to do it precisely (so no PVC piping) and I didn’t want to spend ~10 USD on a socket just for that.

So I started to think about a device that I could attach a socket to, and that would push on the hard circular surface area of the seals.

I ended up designing a mix between a nut and a flat cylinder that touches only the hard surface of the seals, in order to push them into the forks.

Doing this in OpenSCAD was pretty straightforward, and here is the final result:

And of course the code to produce this is pretty simple:
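
In case the embedded code doesn’t show, here is a hedged OpenSCAD sketch of the idea. Every dimension below is a placeholder you must replace with measurements from your own seals and socket, and the geometry is a simplified reconstruction of the concept, not the exact part in the photos:

```openscad
// Fork seal driver sketch: a flat disc whose underside is relieved so that
// only the outer rim bears on the seal's hard edge, plus a hex boss on top
// that a socket can grip to drive it.

seal_od = 41;   // seal outer diameter (mm) -- placeholder, measure yours
rim_w   = 3;    // width of the contact rim on the seal's hard surface
disc_h  = 8;    // disc thickness
hex_af  = 19;   // hex boss across-flats (match your socket)
hex_h   = 10;   // hex boss height
relief  = 2;    // depth of the underside relief

$fn = 120;

difference() {
    union() {
        cylinder(d = seal_od, h = disc_h);                      // main disc
        translate([0, 0, disc_h])
            cylinder(d = hex_af / cos(30), h = hex_h, $fn = 6); // hex boss
    }
    // Relieve the underside so only the outer rim touches the seal.
    translate([0, 0, -0.01])
        cylinder(d = seal_od - 2 * rim_w, h = relief + 0.01);
}
```

The `$fn = 6` cylinder is the usual OpenSCAD trick for a hexagonal prism; dividing the across-flats size by cos(30°) gives the circumscribed diameter the socket expects.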