Changing parameters through scripting on Unity's Post Processing Stack V2.

February 13 | 18

For my PhD thesis I need to write an image effect that lets me configure a lens on a screen. In the calibration phase I need to modify the image effect values through scripting.

To my surprise, there is a tutorial for making these changes in V1 that you can check here, or in a forum post here, but for V2 there were no guides.

Luckily enough, the forum post helped me get started modifying values on the Post Processing Stack V2.

 

So let's say you have an image effect called "PitchTestImgEffect.cs".

Easy enough: my image effect has 2 float values and a texture.
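A minimal sketch of what such a settings class can look like on the Post Processing Stack V2 (the field names pitch, intensity and lensTexture, the shader name, and the renderer below are illustrative assumptions, not the exact thesis code):

```csharp
using System;
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Sketch of a PPSv2 effect: two float parameters and a texture.
// Field and shader names are illustrative.
[Serializable]
[PostProcess(typeof(PitchTestImgEffectRenderer), PostProcessEvent.AfterStack, "Custom/PitchTestImgEffect")]
public sealed class PitchTestImgEffect : PostProcessEffectSettings
{
    public FloatParameter pitch = new FloatParameter { value = 0f };
    public FloatParameter intensity = new FloatParameter { value = 1f };
    public TextureParameter lensTexture = new TextureParameter { value = null };
}

public sealed class PitchTestImgEffectRenderer : PostProcessEffectRenderer<PitchTestImgEffect>
{
    public override void Render(PostProcessRenderContext context)
    {
        // Push the settings into a (hypothetical) shader and blit.
        var sheet = context.propertySheets.Get(Shader.Find("Hidden/Custom/PitchTest"));
        sheet.properties.SetFloat("_Pitch", settings.pitch);
        sheet.properties.SetFloat("_Intensity", settings.intensity);
        if (settings.lensTexture.value != null)
            sheet.properties.SetTexture("_LensTex", settings.lensTexture);
        context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
    }
}
```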


What we need to do is access the PostProcessVolume, get the PostProcessEffectSettings that contains our "PitchTestImgEffect" class, and then modify its settings :-).

So with a script we can access the value of the pitch and set a new one.
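A minimal sketch of such a script, assuming the settings class sketched above (the volume field and the newPitch value are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Sketch: reads and overrides the pitch value of our effect at runtime.
public class PitchCalibrator : MonoBehaviour
{
    public PostProcessVolume volume; // assign your volume in the inspector
    public float newPitch = 0.5f;    // illustrative calibration value

    void Start()
    {
        // Grab our effect's settings from the volume's profile.
        PitchTestImgEffect effect;
        if (volume.profile.TryGetSettings(out effect))
        {
            Debug.Log("Current pitch: " + effect.pitch.value);
            effect.pitch.Override(newPitch); // the change takes effect immediately
        }
    }
}
```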

Hopefully this helps you out :). If it helped you, drop me a line and let me know.

Getting SteamVR tracking data in Unity without an HMD

November 8 | 17

TL;DR: I managed to get SteamVR tracking data (position and rotation) in Unity from a VIVE tracker / SteamVR controller without the Head Mounted Display (HMD) being connected. It runs on my 2012 MacBook Pro and I'm pretty happy with the results.

Long version:

For the VR project I'm working on, I only need the pose of tracked objects from SteamVR. I'm not using a Head Mounted Display (HMD) because 1. I don't need it and 2. it wouldn't run on my 2012 MacBook Pro.

I followed the tutorials proposed here and here (which are basically the same), but unfortunately I couldn't make them work with Unity. Even though I got to the state where the SteamVR status icon said "NOT READY" (which, according to them, is fine for their tutorials), it wasn't enough for Unity to be able to initialize the trackers and get data.


It turns out you need to tell SteamVR in the config files to not only set:
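From the variable names discussed below, this is presumably the requireHmd flag:

```json
"requireHmd": false
```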

But you also need to force SteamVR to load a null driver (a headless driver) and to allow SteamVR to load more than one driver.

To do this, and to be 100% sure which config files are being loaded and from where, the best approach is to check the log files, specifically "vrserver.txt".

Finding the log files for SteamVR.

To find out where SteamVR stores the log files, just start SteamVR, click Settings -> Developer, and then click "Set log directory". With this you now know where SteamVR stores the vrserver.txt log.


The vrserver.txt log is really important because it tells you which .vrsettings files get loaded, and from where! To my surprise, I thought I was only loading one custom file I made, but it turns out that's not the case: there were 5 files in different places being loaded!

Diving into vrserver.txt in SteamVR

Now that we know where the vrserver.txt log file is, we can open it. Fear not! It contains a lot of stuff, but we are only looking for a few specific lines: the ones that report which .vrsettings files are being loaded.

These lines basically say from where SteamVR loads each *.vrsettings file it uses to set itself up.

Now what we need to do is open each of those files and check where the "requireHmd", "activateMultipleDrivers" and "forcedDriver" vars are set.

Wherever you find them, set their values to:
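Based on the variable names above, the steamvr section should end up looking like this (a sketch; the exact formatting in your files may vary):

```json
"steamvr": {
    "requireHmd": false,
    "forcedDriver": "null",
    "activateMultipleDrivers": true
}
```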

If you cannot find them anywhere, just add them.

So what we are doing here is forcing SteamVR to load the null (headless) driver.

The null driver looks like this:
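A sketch of the driver_null section (the key names follow the null driver's default.vrsettings, but the exact values on your install may differ):

```json
"driver_null": {
    "enable": false,
    "serialNumber": "Null Serial Number",
    "modelNumber": "Null Model Number",
    "windowWidth": 1920,
    "windowHeight": 1080,
    "renderWidth": 1512,
    "renderHeight": 1680,
    "secondsFromVsyncToPhotons": 0.01111111,
    "displayFrequency": 90.0
}
```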

You also need to set the “enable” var in the null driver to true to make it work.

Here you can set a bunch of settings depending on how you want it to behave. "driver_null" is a non-physical HMD; you can add any features you want to it.

Now close SteamVR and open it again. This time the status window should no longer show the "Not Ready" label when it starts.


In case you still see "Not Ready", check the vrserver.txt log file again and read it; it will tell you if you made a mistake in the JSON files (*.vrsettings) you just modified.

Getting tracking data in Unity!

After you have done this you should be golden to try it out in Unity3D: just create a simple project, drag and drop the SteamVR camera rig prefab, and set the camera's target eye to "None".
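If you also want to do it from code and then read the pose, here is a minimal sketch (it assumes the SteamVR Unity plugin is imported; the component wiring is illustrative):

```csharp
using UnityEngine;

// Sketch: reads the pose of a SteamVR tracked device without an HMD.
public class TrackerPoseReader : MonoBehaviour
{
    public SteamVR_TrackedObject trackedObject; // on the tracker's GameObject
    public Camera rigCamera;                    // the rig's camera, if any

    void Start()
    {
        // Same effect as setting "Target Eye" to None in the inspector.
        if (rigCamera != null)
            rigCamera.stereoTargetEye = StereoTargetEyeMask.None;
    }

    void Update()
    {
        if (trackedObject != null && trackedObject.isValid)
        {
            // The plugin writes the tracked pose into the transform each frame.
            Debug.Log("Pos: " + trackedObject.transform.position +
                      " Rot: " + trackedObject.transform.rotation.eulerAngles);
        }
    }
}
```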

Notice how the log says the loaded driver is "connected to null: Null Serial Number"; that means we successfully connected to the headless driver.


It worked for me on OSX, but following the same workflow it should also work on Windows / Linux. If you have issues, just drop me a line; I could take a look at your settings and maybe help out 😉

ProDrawCall Optimizer vs ProDrawCall Optimizer Light – Differences

September 1 | 17

I have been asked several times about the differences between ProDrawCall Optimizer and ProDrawCall Optimizer Light, besides the price of course :-). In this post I will try my best to explain what the Light version contains, what the Pro version offers, the differences between them, and how similar this tool is to ProMaterial Combiner.

ProDrawCall Optimizer is a tool that automatically gathers all the game objects in a Unity scene and groups them by shader type. It then processes each GameObject's mesh and remaps the UVs of every vertex so they point to the corresponding region of a generated atlas that contains all your textures.

This process results in fewer draw commands being issued to the video card, as all your game objects share one material containing an atlas of all the textures, hence reducing render time and increasing performance.
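To make the remapping step concrete, here is a minimal sketch of the idea (not the actual tool's code; it assumes the mesh's UVs are in the [0,1] range and that you already know which normalized sub-rectangle of the atlas the original texture landed in):

```csharp
using UnityEngine;

public static class AtlasUvRemapper
{
    // Remaps a mesh's UVs into the normalized sub-rectangle that its
    // original texture occupies inside the generated atlas.
    public static void RemapToAtlasRegion(Mesh mesh, Rect atlasRegion)
    {
        Vector2[] uvs = mesh.uv;
        for (int i = 0; i < uvs.Length; i++)
        {
            uvs[i] = new Vector2(
                atlasRegion.x + uvs[i].x * atlasRegion.width,
                atlasRegion.y + uvs[i].y * atlasRegion.height);
        }
        mesh.uv = uvs;
    }
}
```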

If you are interested in seeing how the draw calls of a single frame are built, check out this video. It's not Unity per se, but the behavior is exactly the same.

Anyways, without further ado, the main differences between the Light version and the paid version are as follows:

 

ProDrawCall

  • Supports any type of shader (even the Standard shader and custom shaders!).
  • Supports skinned mesh renderers.
  • Code is provided in case you need to tweak the app.
  • Can also combine multiple meshes, besides atlasing.
  • Supports specific UV channels.
  • Can preserve hierarchies when generating atlases (your generated game objects will be parented like your original ones).
  • Lets you reuse textures in your atlas to save space.
  • Advanced game object search by UV correctness or by UV values (lets you search all your objects for the specific game objects that fulfill certain UV values).
  • Can search objects by tag or by layer.
  • Can generate power-of-2 atlas textures.
  • If atlases are too big, it splits the generated textures into several atlases.
  • Can combine meshes selectively with the atlased objects.
  • Supports combining multiple materials per game object (if the shaders are the same).

ProDrawCall Light

  • Free for use in any of your projects (including commercial ones).
  • Doesn't come with source code.
  • Doesn't support skinned mesh renderers.
  • Only supports Legacy Shaders/Diffuse and Legacy Shaders/Bumped Diffuse.

Difference with ProMaterial Combiner

I have also been asked: what's the difference between ProDrawCall and ProMaterial? When should I use ProMaterial, and when ProDrawCall?

The reasoning is really simple: ProDrawCall Optimizer does everything that ProMaterial Combiner does. ProMaterial Combiner only combines materials within a single game object, while ProDrawCall Optimizer combines multiple materials per game object and also groups and combines the materials of game objects that share the same shader type. Hope this clears up some misconceptions/questions you might have :-).

 

If you are still interested in how to manually reduce draw calls, check out this post; it explains in more detail how it's done.

On a final note: The support for the free version is exactly the same as for the pro version, so if you find any issue in the free version, feel free to drop me an email and I will happily reply and help you out.

ARMeet: Augmented Reality for Meeting reports

March 22 | 17

TL;DR: I created an app that eases the understanding of project reports in meetings by aiding data visualization through Augmented Reality on the report itself; check the video.

Over the past 2 months I have been working on a proof of concept that actually performed surprisingly well on my phone. I called this project ARMeet, and basically it's an attempt to ease the reading / understanding of the reports handed out in a meeting with the help of Augmented Reality.

For this proof of concept I wanted to visualize an interesting (to me) dataset that I found on Kaggle about video game sales with ratings. I wanted to mix the ratings with, say, the size of each visualized game, or experiment with other factors like positioning or even rotation; but after some failed experiments I decided not to use the ratings, as they were only complete for ~40% of the games the dataset contained (around ~6,900 games had complete ratings out of ~16,700).

So I just stuck to games represented by bars, organized by platforms that belong to a console group; I created 6 major groups: Nintendo, Sony, Microsoft, Sega, PC and Other. The Other group represents all the smaller platforms whose vendors are not renowned or no longer alive; among them are platforms like the Atari 2600, WonderSwan, TurboGrafx-16, etc.

Each game can consist of several stacked bars, where each color represents a region: blue for the US, yellow for the EU, red for Japan and white for other countries. A purple cylinder covering all the stacked colored boxes represents the total sales of that game. Games are organized by year and belong to platforms, which in turn belong to console groups. To visualize a platform, the user just points the application at the image target at the end of each chapter to see the sales.
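As a rough idea of how each game's stack of bars can be built in Unity, here is an illustrative sketch (not the actual ARMeet code; the bar dimensions and scale factor are made up):

```csharp
using UnityEngine;

// Sketch: builds the stacked bars for one game.
// Bar width and the sales-to-meters scale are illustrative values.
public class GameSalesBar : MonoBehaviour
{
    static readonly Color[] RegionColors =
        { Color.blue, Color.yellow, Color.red, Color.white }; // US, EU, Japan, Other

    public void Build(float[] regionSales, float unitsPerMeter = 10f)
    {
        float y = 0f;
        for (int i = 0; i < regionSales.Length; i++)
        {
            float height = regionSales[i] / unitsPerMeter;
            var box = GameObject.CreatePrimitive(PrimitiveType.Cube);
            box.transform.SetParent(transform, false);
            box.transform.localScale = new Vector3(0.1f, height, 0.1f);
            // Stack each region's box on top of the previous one.
            box.transform.localPosition = new Vector3(0f, y + height / 2f, 0f);
            box.GetComponent<Renderer>().material.color = RegionColors[i % RegionColors.Length];
            y += height;
        }
    }
}
```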

Without further ado, here's a video of how ARMeet works:

If you want to read the report you can get it here, and if you are interested in the source code of the project, feel free to check it out here as well 🙂

Getting a timestamp on live stream video in OpenCV, quick and dirty

September 14 | 16

So I have been working on testing some SLAM methods, specifically ORB-SLAM2, and to make it run on a webcam one needs to pass, along with each OpenCV frame, a timestamp for that frame. In theory it should be dead easy: just call the get method of the cv::VideoCapture class with the "CV_CAP_PROP_POS_MSEC" property, according to the OpenCV documentation. But it turns out this is only half implemented, so for live streams it just doesn't work. According to this, there is a way to patch the OpenCV library and implement it, but I just needed a timestamp really quickly to feed to my SLAM method. So what I did was this:
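A minimal sketch of the idea, timing frames with std::chrono the way ORB-SLAM2's monocular examples time things (the TrackMonocular call in the comment just marks where my SLAM method would consume the frame):

```cpp
#include <chrono>
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0); // open the webcam
    if (!cap.isOpened())
        return -1;

    // Reference point: the moment we start grabbing frames.
    auto t0 = std::chrono::steady_clock::now();

    cv::Mat frame;
    while (cap.read(frame))
    {
        // Timestamp in seconds since t0; good enough for a live SLAM feed.
        auto now = std::chrono::steady_clock::now();
        double tframe =
            std::chrono::duration_cast<std::chrono::duration<double>>(now - t0).count();

        // Feed the frame and its timestamp to the SLAM system here,
        // e.g. SLAM.TrackMonocular(frame, tframe);
    }
    return 0;
}
```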

I took this piece of code from the same ORB-SLAM2 repo and adapted it to measure the time at which each frame is queried from the webcam. It's a quick and dirty trick, but it does its job 🙂