ARMeet: Augmented Reality for Meeting reports

March 22 | 17

TL;DR: I created an app that makes project reports easier to understand in meetings by augmenting the report itself with data visualizations through Augmented Reality; check the video.

Over the past 2 months I have been working on a proof of concept that actually performed surprisingly well on my phone. I called this project ARMeet, and basically it's an attempt to make the reading and understanding of reports handed out in a meeting easier with the help of Augmented Reality.

For this proof of concept I wanted to visualize a dataset that I found interesting on Kaggle, about video game sales with ratings. I wanted to mix the ratings with, say, the size of each visualized game, or experiment with other factors like positioning or even rotation; but after some failed experiments I decided not to use the ratings, since they were only complete for ~40% of the games in the dataset (around 6,900 games had complete ratings out of ~16,700).

So I stuck to games represented as bars, organized by platform, where each platform belongs to one of six console groups I created: Nintendo, Sony, Microsoft, Sega, PC and Other. The Other group represents all the smaller platforms whose vendors are no longer around or well known; it includes platforms like the Atari 2600, WonderSwan, TurboGrafx-16, etc.

Each game can consist of several stacked bars, where each color represents a region: blue for the US, yellow for the EU, red for Japan and white for other countries. A purple cylinder that wraps all the stacked colored boxes represents the game's total sales. Games are organized by year and belong to platforms, which in turn belong to console groups. To visualize a platform, the user just points the application at the image target at the end of each chapter of the report to see the sales.
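To make that encoding concrete, the data layout it implies would look roughly like this (a hypothetical sketch, not the actual ARMeet code; all names are made up):

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the hierarchy described above: console groups own
// platforms, platforms own games, and each game carries per-region sales
// that become the stacked colored boxes (US, EU, Japan, Other).
struct GameSales {
    std::string name;
    int year = 0;                                   // games are grouped by year
    float salesUS = 0, salesEU = 0, salesJP = 0, salesOther = 0;
    float total() const {                           // height of the purple cylinder
        return salesUS + salesEU + salesJP + salesOther;
    }
};

struct Platform {
    std::string name;                               // e.g. "SNES", "PS2", "Atari 2600"
    std::vector<GameSales> games;
};

struct ConsoleGroup {
    std::string vendor;                             // Nintendo, Sony, Microsoft, Sega, PC, Other
    std::vector<Platform> platforms;                // one image target per platform chapter
};
```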

Without further ado, here’s a video of how ARMeet works:

If you want to read the report you can get it here, and if you are interested in the source code of the project, feel free to check it out here as well 🙂

Getting a timestamp for livestream video in OpenCV, quick and dirty.

September 14 | 16

So I have been testing some SLAM methods, specifically ORB-SLAM2, and to run it on a webcam you need to pass, along with each OpenCV frame, a timestamp for that frame. In theory this should be dead easy: just call get() on the cv::VideoCapture class with the “CV_CAP_PROP_POS_MSEC” property, according to the OpenCV documentation. But it turns out this is only half implemented, so for live streams it simply doesn't work. According to this, there is a way to patch the OpenCV library and implement it, but I just needed a timestamp really quickly to feed to my SLAM method. So what I did was this:

I took this piece of code from the ORB-SLAM2 repo itself and adapted it to measure the time it takes to query a frame from the webcam. It's a quick and dirty trick, but it does the job 🙂
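The original embedded snippet isn't preserved in this post, but a minimal sketch of the idea (timing frames with std::chrono, the way the ORB-SLAM2 example programs time their tracking loop; the SLAM call in the comment is just illustrative) looks something like this:

```cpp
#include <chrono>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                       // live webcam stream
    if (!cap.isOpened()) return -1;

    // CV_CAP_PROP_POS_MSEC doesn't work on live streams, so measure the
    // elapsed time ourselves with a monotonic clock instead.
    const auto t0 = std::chrono::steady_clock::now();

    cv::Mat frame;
    while (cap.read(frame)) {
        const auto now = std::chrono::steady_clock::now();
        const double tframe = std::chrono::duration<double>(now - t0).count();

        // hand the frame and its timestamp (in seconds) to the SLAM system,
        // e.g. SLAM.TrackMonocular(frame, tframe);
    }
    return 0;
}
```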

Procedural city visualization and movable agent coverage.

April 18 | 16

[Image: AgentCoverageNiceViz]

For the Information Visualization class I had an old project that I wanted to revive, the “ambulances” program: a project from my Data Structures and Algorithms class back in college, where we had to come up with an algorithm to visualize ambulance coverage in a city.

The whole idea of the class project was to visualize information, and the whole idea of the ambulances project was: given a set of N ambulances, where to place them in a city graph so as to minimize the response time of each movable agent (ambulance).
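For reference, here is a rough sketch of one way to attack that placement problem: a k-medoids-style loop over graph travel times, where nodes are assigned to their closest agent and each agent then moves to the node that minimizes the worst response time in its cluster. This is only an illustrative reconstruction, not the exact algorithm from the report, and the graph representation is made up:

```cpp
#include <algorithm>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Adjacency list: node -> list of (neighbor, travel time).
using Graph = std::vector<std::vector<std::pair<int, double>>>;

// Dijkstra from a single source; returns the travel time to every node.
std::vector<double> travelTimes(const Graph& g, int src) {
    std::vector<double> dist(g.size(), std::numeric_limits<double>::infinity());
    using Item = std::pair<double, int>;
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[src] = 0.0;
    pq.push({0.0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;
        for (auto [v, w] : g[u])
            if (dist[u] + w < dist[v]) { dist[v] = dist[u] + w; pq.push({dist[v], v}); }
    }
    return dist;
}

// Assign every node to its closest agent, then move each agent to the node
// that minimizes the worst response time inside its own cluster; repeat.
std::vector<int> placeAgents(const Graph& g, std::vector<int> agents, int iterations = 10) {
    for (int it = 0; it < iterations; ++it) {
        std::vector<std::vector<double>> dist;
        for (int a : agents) dist.push_back(travelTimes(g, a));

        // cluster nodes by nearest agent
        std::vector<std::vector<int>> clusters(agents.size());
        for (int v = 0; v < (int)g.size(); ++v) {
            int best = 0;
            for (int i = 1; i < (int)agents.size(); ++i)
                if (dist[i][v] < dist[best][v]) best = i;
            clusters[best].push_back(v);
        }

        // relocate each agent to the node with the smallest worst-case time
        for (std::size_t i = 0; i < agents.size(); ++i) {
            double bestRadius = std::numeric_limits<double>::infinity();
            for (int candidate : clusters[i]) {
                std::vector<double> d = travelTimes(g, candidate);
                double radius = 0.0;
                for (int v : clusters[i]) radius = std::max(radius, d[v]);
                if (radius < bestRadius) { bestRadius = radius; agents[i] = candidate; }
            }
        }
    }
    return agents;  // final node index for each ambulance
}
```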

Unfortunately, the data I had didn't have spatial coordinates, so I had to generate the data on my own. I used a modified version of CityGen to dump the data for the generated cities, and then I created an app in Unreal Engine to visualize the generated data and my algorithm.

Besides visualizing information, the idea of this project was also to push myself to learn some Unreal Engine and shake the rust off my C++ skills :-).

In the end the project was finished, and I could see that the algorithm we created worked reasonably well for circular cities, but not as well for elongated ones.

If you are interested in how this project works, feel free to fork it from here. Also, if you are interested in what exactly I tried and which experiments I ran, feel free to read the report I made (full of images); otherwise, just take a look at some of the images below.

[Image: AgentCoverageGreenToRed]

Coverage per agent on a response-time visualization. For this agent arrangement, and supposing you live in this city, you might want to avoid the places with red nodes if you get sick often, since ambulances will take the longest to reach those places hehe ;)

[Image: CityBaseViz]

A basic procedural city loaded into Unreal.

[Image: ElongatedCityProblems]

Algorithm failing on elongated cities; notice how 2 agents are close to each other (not ideal).

[Image: MainPage]

17 agents with their respective clusters being covered.

[Image: NodeCoverage3Agents]

3 agents being visualized with their respective clusters; you can see that the brown cluster is bigger, which is not a good sign that the algorithm is working as expected.
[Image: SpeedVisualization]

Finally, some segment speed visualization: the greener a segment, the faster; the more yellow, the slower.

Introducing Potel: A pottery maker simulator for VR.

January 11 | 16

Potel is a small VR application that focuses on exploring the art of pottery creation through virtual reality. Inside Potel, users are able to create virtual pots with the aid of a spinning wheel that lets them shape virtual “clay” the same way you would if you were making pottery in real life.

[Image: Screenshot3]

The way players interact with the world is with their hands, modeling the pottery to their will by pushing or pulling an initial blob of clay.
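Conceptually, the deformation boils down to displacing the clay mesh vertices that fall inside the palm's sphere of influence, pushing them toward or away from the wheel's axis. The snippet below is only an illustrative sketch of that idea, not the actual Potel code, and every name in it is made up:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Displace vertices near the tracked palm. A positive "amount" pushes the
// surface toward the wheel's axis, a negative one pulls it outward.
void deformClay(std::vector<Vec3>& vertices, const Vec3& palm,
                const Vec3& wheelAxis, float radius, float amount) {
    for (Vec3& v : vertices) {
        const float d = distance(v, palm);
        if (d > radius) continue;                  // outside the palm's influence
        const float falloff = 1.0f - d / radius;   // stronger effect closer to the palm

        // move the vertex along the radial direction from the wheel's axis,
        // which keeps the pot roughly symmetric while it spins
        Vec3 dir = { v.x - wheelAxis.x, 0.0f, v.z - wheelAxis.z };
        const float len = std::sqrt(dir.x * dir.x + dir.z * dir.z);
        if (len < 1e-5f) continue;
        dir.x /= len; dir.z /= len;

        v.x -= dir.x * amount * falloff;
        v.z -= dir.z * amount * falloff;
    }
}
```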

We use a Leap Motion to track the user's hand movements in the world, an Oculus Rift to immerse them in the generated scene, and an optional set of in-house hand vibrators that let players know when they are interacting with the world.

[Image: Screenshot1]

Finally, after the desired shape has been created, the user can 3D print the resulting art.

If you happen to have an Oculus Rift and a Leap Motion, feel free to download it from here.

We have noticed that a large portion of the users who tried the system without haptic feedback (but knowing about the depth cues) try to place their hands just above the surface of the clay during their first sessions.

Other users, after the first attempts, once they are fairly immersed in the environment, try to reach for a bucket of water placed next to the chair as a prop, thinking that everything in the world is interactable.

The system currently supports clay deformation per finger, but we decided to stick to the palms, because the Leap tracker sometimes loses the hands and the clay model gets distorted by the tracking issues.

Here’s a small video of the app working:

And in case you are interested in a more in-depth explanation of how the system works, you can read the paper we wrote here.

If you like it, let me know :).

How to know if a point is inside a polygon

April 16 | 15

Here’s a quick code snippet to determine whether a given point lies inside a polygon defined by an array of vertices, based on the crossing number algorithm.
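The embedded snippet isn't reproduced here, but a minimal version of the crossing number test (cast a horizontal ray from the point and count how many polygon edges it crosses; an odd count means the point is inside) looks like this:

```cpp
#include <vector>

struct Point { double x, y; };

// Crossing number (even-odd) test: count how many edges a rightward ray
// from p crosses; an odd count means p is inside the polygon.
bool pointInPolygon(const Point& p, const std::vector<Point>& poly) {
    int crossings = 0;
    const std::size_t n = poly.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Point& a = poly[i];
        const Point& b = poly[(i + 1) % n];
        // does the edge (a, b) straddle the horizontal line y = p.y?
        if ((a.y <= p.y && b.y > p.y) || (b.y <= p.y && a.y > p.y)) {
            // x coordinate where the edge meets that horizontal line
            const double t = (p.y - a.y) / (b.y - a.y);
            if (p.x < a.x + t * (b.x - a.x))
                ++crossings;                      // the ray crosses this edge
        }
    }
    return (crossings % 2) == 1;
}
```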

This can also be extended to 3D space if necessary, but that’s for the reader.