Blender Interactive Console

by dfelinto


Have you ever found yourself needing to change a .blend file remotely and VNC / Remote Desktop is not working?

In my case I finished rendering the left eye of a video, and wanted to do the same for the right one. I did it in parts due to the memory limit of the rendering station. And VNC is not working because … no idea. But it’s Friday and I won’t have physical access to the remote computer until next week.

Blender Interactive Console to the rescue!

$ ssh MY_REMOTE_COMPUTER
$ blender -b MYFILE.blend --python-console
(...)
(InteractiveConsole)
>>> import bpy
>>> bpy.context.scene.render.views['left'].use = False
>>> bpy.context.scene.render.views['right'].use = True
>>> bpy.ops.wm.save_mainfile()

Now all you need to do is resume your tmux session, and kick off the render once again. For other Blender command-line options, try blender --help.
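If you prefer a one-shot command over the interactive console, the same change should also work with the --python-expr option (assuming your build is recent enough to ship it):

$ blender -b MYFILE.blend --python-expr "import bpy; bpy.context.scene.render.views['right'].use = True; bpy.ops.wm.save_mainfile()"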

This post is obviously based on real events! Have a nice weekend 😉

Immersive Storyboarding

by dfelinto

If you read my blog you will know that I'm repeating myself. I can't stress this enough though.

Part of the challenge of stereo movie making is to work in 3D as soon as possible in your pipeline. This is the main reason the Multi-View implementation ranges from the 3D viewport all the way to the sequencer.


VR (Virtual Reality) movie making is no different. Even more so, if we consider the uniqueness of the immersive experience.

So what if … What if we could preview our work in VR since the first stroke of the storyboard?

Here I’m demoing the Oculus Addon I’ve been developing as part of an ongoing research at a virtual reality lab in Rio (Visgraf/IMPA).

Notice that I'm not even drawing in VR. I'm merely experiencing the work done by Daniel “Pepeland” Lara in his demo file.

The applications of this addon are various, but it mainly focuses on supporting HMDs (head-mounted displays) in the Blender viewport.

At the moment the support is restricted to Oculus (Rift and DK2), and it excels on Windows, since the fastest direct mode is only supported on Oculus's latest (Windows-only) SDK.

Links:

  • Oculus Addon
  • Grease Pencil Demo .blend by Daniel “Pepeland” Lara

Blender and Oculus, food for thought

by dfelinto

Disclaimer 1: A similar reflection should be valid for other HMDs (Head Mounted Displays).
Disclaimer 2: This is a personal essay, based on my personal experience. Take the following words with a grain of salt.


The buzz about VR (Virtual Reality) is far from over. And just as with regular stereo 3D movie pipelines, we want to work in VR as soon as possible.

That doesn't necessarily mean sculpting, animating, and grease-penciling all in VR. But we should at least have a quick way to preview our work in VR – before, after, and during every single one of those steps.

At some point we may want special interactions when exploring the scene in VR, but for the scope of this post I will stick to exploring a way to toggle in and out of VR mode.

That raises the question: how do we “see” in VR? Basically we need two “renders” of the scene, each one with a unique projection matrix and a modelview matrix that reflects the head tracking and the in-Blender camera transformations.

These should be updated as often as possible (e.g., 75Hz), even if nothing changes in your Blender scene (since the head can always move around). Just to be clear, by render here I mean the same real-time render we see in the viewport.
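To make this concrete, here is a minimal sketch of how those two modelview matrices could be composed with mathutils. Everything here is illustrative: head_matrix stands for whatever pose the HMD tracker reports, and a fixed eye offset is a simplification.

from mathutils import Matrix

def eye_modelviews(camera_world, head_matrix, ipd=0.064):
    """Build one modelview matrix per eye (illustrative sketch only).

    camera_world: the Blender camera's matrix_world.
    head_matrix: pose reported by the HMD tracker (hypothetical input).
    ipd: interpupillary distance in meters.
    """
    views = []
    for side in (-1.0, +1.0):  # left eye, right eye
        eye_offset = Matrix.Translation((side * ipd * 0.5, 0.0, 0.0))
        # Head tracking and the eye offset are applied on top of the camera pose.
        eye_world = camera_world * head_matrix * eye_offset
        views.append(eye_world.inverted())  # modelview = inverse of the eye pose
    return views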

There are different ways of accomplishing this, but I would like to see an addon approach, to make it as flexible as possible to be adapted to new upcoming HMDs.

At this very moment, some of this is doable with the “Virtual Reality Viewport Addon”. I'm using a 3rd-party Python wrapper of the Oculus SDK (generated partly with ctypesgen) that uses ctypes to access the library directly. Some details of this approach:

  • The libraries used are from SDK 0.5 (while Oculus is soon releasing SDK 0.7)
  • The wrapper was generated by someone else; I'm yet to learn how to re-create it
  • Direct mode is not implemented – basically I'm turning multiview on with side-by-side, screen-grabbing the viewport, and applying a GLSL shader on it manually
  • The wrapper is not being fully used; the projection matrix and the barrel distortion shaders are poorly done on the addon end

Virtual Reality Viewport Addon in action – sample scene from Creature Factory 2 by Andy Goralczyk

Not supporting Direct Mode (nor the latest Direct Driver Mode) seems to be a major drawback of this approach (Extended mode is deprecated in the latest SDKs). The positive points are: it's cross-platform, non-intrusive, and (potentially) HMD-agnostic.

The opposite approach would be to integrate the Oculus SDK directly into Blender. We could create the FBOs, gather the tracking data from the Oculus, force a drawing update every time (~75Hz), and send the frame to the Oculus via Direct Mode. The downsides of this solution:

  • License issue – dynamic linking may solve that
  • Maintenance burden: if this doesn't make it into master, the branch has to be kept up to date with the latest Blender developments
  • Platform specific – which is hard to prevent, since the Oculus SDK is Windows-only
  • HMD specific – this solution is tailored towards Oculus only
  • Performance as good as you could get

All considered, this is not a bad solution, and it may be the easiest one to implement. In fact, once we go this way, the same solution could be implemented in the Blender Game Engine.

That said, I would like to see a compromise: a solution that could eventually be expanded to different HMDs and other OSs (Operating Systems). Thus the ideal scenario would be to implement it as an addon. I like the idea of using ctypes with the Oculus SDK, but we would still need the following changes in Blender (see the sketch after the list):

  • Blender Python API to allow off-screen rendering straight to an FBO bind id
  • Blender Python API OpenGL Wrapper to be extended to support glBindFramebuffer among other required GL functions
  • Bonus: Blender Game Engine Video Texture ImageRender to support FBO rendering and custom OpenGL matrices
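To give a feel for how these pieces would fit together, here is a hypothetical sketch of an addon draw loop. Every name in it is made up: offscreen_render() stands for the proposed off-screen rendering API from the first bullet, and hmd stands for the ctypes wrapper around the Oculus SDK.

def draw_to_hmd(hmd, fbo_left, fbo_right):
    # Hypothetical sketch: none of these functions exist (yet).
    for fbo, eye in ((fbo_left, 'LEFT'), (fbo_right, 'RIGHT')):
        projection = hmd.projection_matrix(eye)  # from the Oculus SDK, via ctypes
        modelview = hmd.modelview_matrix(eye)    # head tracking + camera pose
        # The proposed API: draw the viewport off-screen, straight
        # into the given FBO bind id, at the HMD resolution.
        offscreen_render(projection, modelview, hmd.resolution, fbo)
    hmd.submit_frame(fbo_left, fbo_right)        # present via Direct Mode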

The OpenGL Wrapper change should be straightforward – I've done this a few times myself. The main off-screen rendering change may be self-contained enough to be incorporated into Blender without much hassle. The function should receive a projection matrix and a modelview matrix as input, as well as the resolution and the FBO bind id.

The BGE change would be a nice addition and would illustrate the strength of this approach. Given that the heavy lifting is still done in C, Python shouldn't affect the performance much, and this could work in a game environment as well. The other advantage is that multiple versions of the SDK can be kept, in order to keep supporting OSX and Linux until a new cross-platform SDK is shipped.

That's pretty much it. If you have any related reflections, please share them in the comments below.

Dalai Felinto
Rio de Janeiro, September 18th, 2015

 

Blender Models in Facebook

by dfelinto

Thanks to Blend4Web, there is a very straightforward way of embedding a 3D model from Blender into Facebook. I couldn’t let this pass, and I gave it a go today. If you want to check out the “app”, it is publicly available here (if the app fails to load, disable Ghostery or similar active addons).


The trickiest part for me was to enable https on my server (which is required by Facebook). Apart from that, everything was pretty straightforward (the navigation in the app is the default built-in viewer from Blend4Web).

I would like to thank Cícero Moraes and his team, for sharing the 3D model with me, and allowing me to re-share it via Facebook.

Technical sheet:

Saint Anthony, facial forensic reconstruction in 3D by Cícero Moraes, Paulo Miamoto, PhD, and team.

This project was part of Ebrafol (Equipe Brasileira de Antropologia Forense e Odontologia Legal) activities, made possible thanks to a plethora of open source tools.

The final digital file was made with Blender 3D, and is shown here exported via Blend4Web.

Planovision and BlenderVR

by dfelinto

If you follow my work (aka my annual blog update 😉) you know I'm a great enthusiast of anything slightly resembling sci-fi, geek, gadget things. And sometimes I'm lucky enough to team up with amazing people in order to put those toys to some good use.

In this video I showcase the Planovision system – a '3D Table' composed of a head-tracking device, a 3D projector, and 3D glasses. I'm currently working towards integrating the Planovision with an authoring tool in order to build real demos and help the project kick off.

After the initial integration with Blender via the Blender Game Engine (after all, we don't want just to see the 3D models, but to interact with them), today I got the system to work with BlenderVR, to help the integration with different inputs (head-tracker, 3D mouse, Leap Motion, …). I've been helping the development of BlenderVR since last October, and we only recently released its 1.0 version. BlenderVR is a well-behaved guinea pig, I must say.

The Planovision has been developed under the guidance of professor Luiz Velho, director of Visgraf/IMPA in Rio de Janeiro, Brazil.

BlenderVR is an open source virtual-reality framework built on top of the Blender Game Engine. BlenderVR was created and is developed by LIMSI/CNRS in Orsay, France, and is aimed at Oculus, CAVEs, and video walls, among other VR display types.

Links:

  • Blender: blender.org
  • BlenderVR: blendervr.limsi.fr
  • Visgraf: visgraf.impa.br
  • Planovision: [outdated site]

If you want to learn more about the Blender Game Engine, don’t forget to check the book Game Development with Blender, written by Mike Pan and yours truly.

I guess this is one of those times when the line between work and play gets really blurry. I hope it stays this way for a long time 😉

Gooseberry + Multiview + Oculus = Mind Blown

by dfelinto

Dear visitor, welcome!

This week I visited the Blender Institute and decided to wrap up the multiview project. But since I had an Oculus DK2 with me I decided to patch multiview to support Virtual Reality gadgets. Cool, right?


Gooseberry Benchmark viewed with an Oculus DK2

There is something tricky about VR panoramas, though. You can't just render a pair of panoramas and expect them to work. The image would work great for the virtual objects in front of you, but the stereo eyes would be swapped when you look behind you.

How to solve that? Do you remember the 3D Fulldome Teaser? Well, the technique is exactly the same one. We start by determining an interocular distance and a convergence distance based on the stereo depth we want to convey. From there the software (Cycles) will rotate a 'virtual' stereo camera pair for each pixel to be rendered, so that both cameras' rays converge at the specified distance.


Oculus barrel correction screen shader applied to a view inside the panorama

This may sound complicated, but it’s all done under the hood. If you want to read more about this technique I recommend this paper from Paul Bourke on Synthetic stereoscopic panoramic images. The paper is from 2006 so there is nothing new under the Sun.
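For the curious, here is a back-of-the-envelope sketch of that per-pixel eye placement in Python. It only illustrates the idea (it ignores refinements such as shrinking the interocular distance towards the poles) and is not the actual Cycles code; the default distances are arbitrary examples.

import math

def spherical_stereo_ray(theta, phi, side, interocular=0.065, convergence=1.9):
    """Ray origin and direction for one eye at a given panorama pixel.

    theta: pixel longitude (radians), phi: pixel latitude (radians),
    side: -1.0 for the left eye, +1.0 for the right eye.
    """
    half = 0.5 * interocular
    # Slide the eye sideways, perpendicular to this pixel's view direction.
    origin = (side * half * math.cos(theta),
              -side * half * math.sin(theta),
              0.0)
    # Toe the ray in so both eyes' rays meet at the convergence distance.
    t = theta - side * math.atan2(half, convergence)
    direction = (math.sin(t) * math.cos(phi),
                 math.cos(t) * math.cos(phi),
                 math.sin(phi))
    return origin, direction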

If you have an Oculus DK2 or similar device, you can grab the final image below to play with. I used Whirligig to visualize the stereo panorama, but there are other alternatives out there.


Top-Bottom Spherical Stereo Equirectangular Panorama – click to save the original image

This image was generated with a spin-off branch of multiview named Multiview Spherical Stereo. I'm still looking for an industry-standard name for this method. But in the meanwhile that name is growing on me.

I would also like to remark on the relevance of open projects such as Gooseberry. The always warm-welcoming Gooseberry team just released their benchmark file, which I ended up using for those tests. To be able to get a production-quality shot and run whatever multi-vr-pano-full-thing you may think of is priceless.

Builds

If you want to try to render your own Spherical Stereo Panoramas, I built the patch for the three main platforms.

  • Windows 64 [link] *
  • Mac 64  [link] *
  • Linux 64 [link] *

* Don’t get frustrated if the links are dead. As soon as this feature is officially supported by Blender I will remove them. So if that’s the case, get a new Blender.

How to render in three steps

  1. Enable ‘Views’ in the Render Layer panel
  2. Change camera to panorama
  3. Panorama type to Equirectangular

And leave 'Spherical Stereo' marked (it's on by default at the moment). The snippet below shows the same setup via Python. Remember to post in the comments the work you did with it!
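If you prefer to do the setup from the Python console, it could look like this. Treat the property names as a sketch based on the 2.7x multiview builds, not gospel.

import bpy

scene = bpy.context.scene
scene.render.use_multiview = True             # 1. Enable 'Views'
cam = scene.camera.data
cam.type = 'PANO'                             # 2. Panorama camera
cam.cycles.panorama_type = 'EQUIRECTANGULAR'  # 3. Equirectangular (Cycles)
cam.stereo.use_spherical_stereo = True        # leave 'Spherical Stereo' on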

 

Last and perhaps least is the small demo video above. The experience of seeing a 3D set doesn't translate well to video. But I can guarantee you that the overall impression from the Gooseberry team was super positive.

Also, this particular feature was the exact reason I was drawn towards implementing multiview in Blender. All I wanted was to be able to render stereo content for fulldomes with Blender. In order to do that, I had to design a proper 3D stereoscopic pipeline for it.

What started as a personal project in 2013 ended up being embraced by the Blender Foundation in 2014, which supported me for a 2-month work period at the Blender Institute via the Development Fund. And now in 2015, so close to the Multiview completion, we finally get the icing on the cake.

No, wait … the cake is a lie!

Links

  • Multiview Spherical Stereo branch [link] *
  • Multiview: Cycles Spherical Stereo Support Official Patch [link] *
  • Gooseberry Production Benchmark File [link]
  • Support the Gooseberry project by signing up in the Blender Cloud [link]
  • Support further Blender Development by joining the Development Fund [link]

* Time traveller from the future, hi! If the branch doesn’t exist anymore, it means that the work was merged into master.

Nice Oculus

Thanks! This is not mine though. Oculus is one of the supported platforms of the Blender-VR project, to be presented at the IEEEVR 2015 next week.

If you are interested in interactive virtual reality and need an open source solution for your CAVE, multiple Oculus, or video wall, give Blender-VR a visit. I'm participating in the development of a framework built on top of the Blender Game Engine.

Also if Oculus feels like sending me my own Oculus, I wouldn’t mind. If you do, though, consider sending one to the Blender Foundation as well. I will feel bad when I take the device away from them next week.

Have a good one,
Dalai

Update:

Due to the long review process the patch is not yet in Blender. That said, since there were enough people interested in this feature, I just updated the links above with a more recent build (on top of the current Blender 2.76 RC3).

Update:

The build now also supports regular perspective cameras. This is required for cubemap VR renders. For this I also recommend an addon that I was commissioned to build, to render or simply to set up cubemap renders [link].

Note: remember to change your camera pivot to center.

* Last build update: October 2nd, 2015

Blender Conference 2014

by dfelinto

Hi there! This is a bit of a last-minute post, but here it is. This week (24-26/10/2014) I'll be presenting two talks at the Blender Conference 2014 in Amsterdam.


Both talks will be available via live streaming, and as videos to be watched later.

  • Multi-View and Stereo 3D: Show and Tell [link]
  • Blender as a Mission Preparation Tool for Drones [link]

The first one is on Friday at 14:00 (10:00 in Brazil), while the second one is on Saturday at 16:30 (12:30 in Brazil).

If you have any questions about either presentation, please let me know in the comments so I can get back to you.

Cycles Baking – Development Teaser

by dfelinto

Baking is a popular 'technique' to flatten your shading work into easy-to-use images (textures) that can be applied to your 3D models without any concerns about lighting calculation. This can help game development, online visualization, 3D printing, archviz animations, and many other fields.


Koro, from Caminandes project, fully baked

Since last September I've been working part-time for the Blender Foundation to help implement game-related features in Blender. So far I have worked on bug fixes and a few nice features such as improvements to the Triangulate modifier, Photoshop PSD support, and the Walk Navigation system. Then comes December, and with it the possibility of tackling something new. We decided it was time to give baking a go.

Supported Maps

The Cycles renderer is based on physically based lighting calculations. That means the passes we can bake in Cycles are different from what you may be used to in the Blender Internal renderer.

Data Passes

  • Normal
  • UV
  • Diffuse/Glossy/Transmission/Subsurface/Emit Color

Light Passes

  • AO
  • Combined
  • Shadow
  • Diffuse/Glossy/Transmission/Subsurface/Emit Direct/Indirect


Koro Ambient Occlusion Bake Map


Koro Combined Bake Map

The above maps illustrate Ambient Occlusion and Combined baking. Ambient Occlusion can be used to light the game scene, while Combined emulates what you get out of a full render of your object, which can be used in shadeless engines.
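As a taste of how the new baking can be driven from Python, here is a minimal sketch using the bake operator. The image name, size, and the AO pass are arbitrary choices, and the object is assumed to be UV-unwrapped with node-based materials.

import bpy

obj = bpy.context.active_object
bpy.context.scene.render.engine = 'CYCLES'

# Cycles bakes into the image of the active Image Texture node,
# so add one to each material and point it to a fresh image.
img = bpy.data.images.new("bake_target", 1024, 1024)
for slot in obj.material_slots:
    slot.material.use_nodes = True
    nodes = slot.material.node_tree.nodes
    tex = nodes.new('ShaderNodeTexImage')
    tex.image = img
    nodes.active = tex

bpy.ops.object.bake(type='AO')  # or 'COMBINED', 'NORMAL', 'UV', ...

img.filepath_raw = "//bake_target.png"
img.file_format = 'PNG'
img.save()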

The character baked here is Koro, from the Caminandes project. Koro was kindly made available as CC-BY, so while I take no credit for the making of it, I did enjoy supporting their project and using Koro in my tests. Koro and all the other production files from Caminandes: Gran Dillama are part of the uber-cool customized USB card you can buy to learn the nitty-gritty of their production, and to help support the project and the Blender Foundation.

Open Shading Language

Open Shading Language (OSL) is a shading language created and maintained by Sony Pictures Imageworks and already used by them in many blockbusters (Amazing Spider-Man, MIB III