Aras Pranckevičius
Latest answers

Hi Aras! Any info about multi-channel signed distance fields in Unity? Is it going to be implemented? It's great for small UI text/icons! Reference:

I don't know the context of why @chman was playing around with them, but for general font/text rendering, yes, someone is looking at improving the current situation. Bitmap-based glyphs are not really ideal, and indeed some sort of distance-field-based rendering is one of the options to look into. There are also approaches that directly evaluate the glyph bezier curve outlines in the shader, though I'm not sure what the advantages/disadvantages of that method are.

So yeah, TLDR: someone is looking into this area, but I don't know their roadmap/plans.


Hello Aras! Is Unity going to add Volumetric Clouds/Atmospheric Scattering in Unity 2017.x?

It's not in the "must do immediately" plans, but it is on the "items to do" lists somewhere. Whether that will make it in 2017, we'll see. "Improve atmosphere rendering situation" is something we will try to, well, improve.


Hi Aras! I wonder if you have information on how to use SV_RenderTargetArrayIndex in the Unity rendering pipeline. Basically, I'm looking for a way to call an equivalent of Graphics.SetRenderTarget, but with multiple slices set. Thanks!

Space Toad

Passing -1 for the depthSlice argument of Graphics.SetRenderTarget should bind the whole array or 3D resource. Then you can use SV_RenderTargetArrayIndex.

Note that not all graphics APIs support that, e.g. OpenGL ES does not.
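To make that concrete, the shader side might look roughly like this. This is only a sketch (names and the exact C# call in the comment are illustrative, not taken from the question); also note that writing SV_RenderTargetArrayIndex directly from the vertex stage needs hardware support for that -- the baseline D3D11 path is to write it from a geometry shader instead.

```hlsl
// Sketch: routing geometry into slices of a render target array that was
// bound as a whole on the C# side, e.g. (hypothetical usage):
//   Graphics.SetRenderTarget(rt, 0, CubemapFace.Unknown, -1); // -1 = all slices
struct v2f
{
    float4 pos   : SV_POSITION;
    uint   slice : SV_RenderTargetArrayIndex; // which array slice to rasterize into
};

v2f vert(float4 vertex : POSITION, uint instanceID : SV_InstanceID)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, vertex);
    o.slice = instanceID; // e.g. draw N instances, one per slice
    return o;
}
```

The instance ID is just one convenient way to pick the slice; a constant or any other per-draw value would work too.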


So... can I consider the appdata_full struct to be 8 interpolators?

(this seems to be a reference to -- gosh, this is terrible for discussions :))

"appdata*" structures in the standard Unity shader includes are for vertex shader inputs. The vertex-to-fragment structures are typically named "v2f*". The interpolators/varyings talk is about the latter.


Hello, Aras. I'm a Chinese developer. Maybe my English is poor, please forgive me. Question: in the Unity shader official documentation, target 2.0 supports 8 interpolators. What is the meaning of "interpolators"? How do I understand it?

"interpolators" is a DirectX term; in OpenGL, for example, they are sometimes called "varyings". It's basically the "things" you pass from the vertex shader into the pixel/fragment shader. In shaders these typically have "TEXCOORDn" semantics on them.

Platforms like OpenGL ES 2.0 are often limited to 8 four-component vectors (so in total up to 32 numbers) that can be written to from the vertex shader and read by the fragment shader. DirectX9 shader model 2 (SM2.0) is slightly more complicated, as it allows up to 8 TEXCOORDn interpolators (each being float, float2, float3 or float4), and additionally two low-precision COLORn interpolators (again each being float..float4).

Later shader models/APIs do away with that split between "texcoord" and "color" interpolators; e.g. DirectX9 shader model 3 says "up to 10 float..float4 interpolators", and OpenGL ES 3.0 and DirectX10 say "up to 16", etc.
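As a concrete illustration of the counting (hypothetical struct and field names, just to show the budget), a vertex-to-fragment struct that uses exactly the DX9 SM2.0 limits described above might look like:

```hlsl
// Hypothetical v2f struct sized against the DX9 SM2.0 limits:
// up to 8 TEXCOORDn interpolators, plus 2 low-precision COLORn interpolators.
struct v2f
{
    float4 pos      : SV_POSITION; // clip-space position; not one of the 8 TEXCOORDs
    float2 uv       : TEXCOORD0;
    float3 normal   : TEXCOORD1;
    float3 viewDir  : TEXCOORD2;
    float4 shadowUV : TEXCOORD3;
    float3 tangent  : TEXCOORD4;
    float3 binormal : TEXCOORD5;
    float4 lightUV  : TEXCOORD6;
    float3 worldPos : TEXCOORD7; // 8th and last TEXCOORD on SM2.0
    fixed4 diffuse  : COLOR0;    // low-precision color interpolators, allowed
    fixed4 specular : COLOR1;    // in addition to the TEXCOORDs on DX9 SM2.0
};
```

Each TEXCOORDn slot can be float, float2, float3 or float4; the slot is used up either way, which is why people often pack several small values into one float4.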


As an aspiring gfx dev, I always wanted to ask this -stupid?- question to a real one. In most AAA games, colors look heavily saturated (Uncharted, FF15, for instance). Is it an artistic decision? Or is it because something is missing in the lighting step (absorption, GI)? Or is it just my eyes? :) Thx

I think it's mostly artistic direction. Some AAA games go for high saturation. Some go for a brown/gray look, or used to a few years ago -- see this half-joke. Some go for orange+teal lighting setups. etc. :)


Hello, can I edit and deploy to the asset store the script you created on this page?

I guess, but there's one right in the scripting docs - - so I kinda don't see the point. Unless you'll put it onto the asset store and earn millions selling it, in which case more power to you! :)


Hi Aras! We are using a Unity native rendering plugin for our recently released game "Mushroom Wars 2", and it seems there is an issue in Unity 5.x for iOS: the methods UnityPluginLoad and UnityPluginUnload are never called. This causes some issues for us. Is this a known issue, and are there any plans to fix it? THANX

Andrey Korotkov

Sounds like the same question as in -- TLDR: I know nothing about iOS plugins, but I know they are different from most other platforms "somehow".


Hey Aras, I wrote you before, but I am not sure if you received my message because my account was not validated. Can you tell me how you got the geometry in the picture of step 4 from your earring project? I am desperately looking for a solution to get this effect on smooth geometry. THANKS

I forgot the exact steps, so I'd go with what I have written in that blog post -- I think I did Filters -> Remeshing -> Surface Reconstruction: Poisson in MeshLab.


Hi Aras. I saw the project you did with the earrings. i am very interested in what kind of values you used for the marching cubes in meshlab. I want to transform a smooth geometry into a geometry with these kind of steps. Even rougher. Looking forward to your response thank you very much Best Alex

I actually forgot the details; this was some years ago, and apparently I did not write them down.

I think I did the marching cubes myself, with some simple C++ code (but now I can't find it; apparently I did not save/push it anywhere?). Then I did additional smoothing in MeshLab, since my raw marching cubes output was a very noisy mesh.

Sorry, but I really forgot the details :/


What is Lithuania's national dessert?

I'd guess it would be either "Tinginys" (lazy cake) or "Šakotis" (tree cake).


What do you think about Unreal Engine's decision to use C++ as the game logic language? What is your general opinion of managed vs unmanaged languages in game dev context?

Ivan Poliakov

I think every approach has pros & cons.

In UE4's case, they seemingly have a split of "high-level logic should be in Blueprints, low-level logic in C++". That's a valid approach, though personally it feels to me that the downside is that there's no "middle" -- you either need to get down to the C++ level (many can't or don't want to), or you need to work with Blueprints (many can't or don't want to).

In Unity's case, it's mostly about this "middle level" (C#/.NET); however, the things we lack are indeed the ones "on the edges" (super high-level visual programming for people who don't want to or can't program, and super low-level scripting for people who need to get there). While each can be worked around (via plugins, or visual scripting extensions), it's indeed not ideal right now.

I think managed languages are fine for a lot of game code. They do have some downsides (garbage collection is probably the major one), but on the other hand, game scripting has been using some sort of "higher-level language" for a very long time by now (e.g. Lua, C#, Python, UnrealScript, other custom languages). In Unity's case in particular, the GC situation is not ideal; I think once we get a more modern GC, things should be a bit better.

So yeah, basically different approaches, and each of them has some advantages & disadvantages.


Having worked on the graphics of a multi-platform engine, do you think supporting Windows is difficult enough that, when a AAA game has horrible performance on day one, it is not completely unjustified?

I don't think it's so much about "supporting Windows", but more about "supporting a wide range of hardware/software configs".

For big AAA games, usually (not always) most of the revenue comes from consoles. So it's natural that this is where most of the effort goes into, and most of the optimizations, and most of quality assurance.

Now, pretty much everyone also has the game running on the PC in some state all the time -- after all, all the development tools are on PC, and so on. These days, with PCs & consoles having very similar hardware (no "exotic" hardware like Cell/360), that's even easier. However, a PC has a ton of things you don't have to worry about on consoles: varying numbers of CPU cores and speeds (with an unknown amount taken by other applications & background processes), all the different GPUs out there and their various driver versions, unknown amounts & speeds of memory and storage, etc.

Getting most of that working acceptably is usually not rocket surgery, but it requires quite a bit of QA, and then some amount of development work to fix or work around the problems that are uncovered. Game development timelines often do not leave "extra time at the end" for PC optimization -- for as long as humanly possible, teams usually try to make the best game they can on the main platforms (that being consoles), and then they ship. And once that is done & shipped, they turn to "oh, we should do some QA & fixes for PC", I think -- that's just the natural course of things with consoles being the major money bringers.


Now that Putin will seize the Baltic states, what country do you consider emigrating to? (asking for a friend)


I really hope it won't come to that! But yeah, you never know :(

Places that I'd like to live in: Iceland, New Zealand. Well, I have no idea if I'd really like to live there, but both are kinda remote and have some awesome nature. Sounds like a good deal to me.


Hi Aras, I'm working with your Graphics Demos example project. First of all, thanks for the great example! Really helps me a lot. I was able to run it on a Mac, but couldn't get it to work on an iPhone. What changes are required to make it work on an iPhone? Thanks, Matan

Matan Rubin

If you mean the native C++ plugin example project on Bitbucket, then the short answer is "I don't know".

I have never done a native code Unity plugin for iOS. I do know that iOS has a slightly different build pipeline, i.e. the way people do "plugins" is typically by having them as C++ or Objective-C source files, which are packaged up into the Xcode project that Unity generates. That project is then compiled in Xcode, and the plugin is linked together with the Unity engine static library and your script code to produce the final executable.

So my guess would be that the plugin sources need to be added to the final Xcode project, or the Unity project itself and somehow marked up as "hey this is plugin source for iOS". Maybe the documentation helps?


We have tried turning on vertex compression in Unity 5.4.0p3, and we cannot detect any difference on imported meshes or terrain (vert data looks same in PIX). Are we missing something?

(side note: I'd probably like to avoid this turning into a Unity support channel...)

There are several kinds of vertex compression in Unity.

One is the compression setting in the mesh importer; this one only saves storage (game data size). It is lossy, storing fewer bits, but after loading, the data is otherwise still in the usual format.

Another is the compression in player settings somewhere (the one where it can be set separately for each vertex component). That one means different things on different platforms (e.g. some build platforms don't support some types of compression), and it also depends on the mesh settings -- e.g. meshes that are marked "CPU readable" don't get compressed. Some meshes generated at runtime probably aren't compressed either; I guess Terrain patches would fall under that too.

If all that is not documented clearly, then it should be! Please file a documentation bug with what is missing.


Is a CS degree a must-have at Unity? How much is it taken into account when evaluating a candidate? Do years of experience and dedication matter? And besides the technical stuff, what do you look for when evaluating people?

Filipe Scur

Realistically all these questions have an answer of "it depends" :) (on the position, team, department, etc.).

Typically a CS degree is not a requirement.
Experience matters if it's for a senior or tech lead position.
Besides technical stuff, we try to evaluate the "no asshole" factor, and whether a person could work well without much handholding or direction (in most teams there's not terribly much supervision or detailed "management", etc.).

But really, depends on which team/position.


Last I looked at creating shaders that work across various platforms & APIs (i.e. shader translation hlsl -> glsl) it was quite a mess. I know you wrote about this topic on your blog a while ago. Have things improved since?

I hope that they did to some extent, however I have not personally used any of these new shader cross-compilers, so can't vouch for them.

The situation right now in 2016 seems to be:

If you need DX9-level HLSL (with a tiny bit of DX10 things, like instance IDs and some texture arrays), then using hlsl2glslfork + glsl-optimizer is still probably the most "battle-tested" solution (being used in Unity and so on). However, the DX9 HLSL syntax is starting to get old. This can convert DX9 HLSL into GL2.x, GLES2.0, GLES3.0 and Metal.

For "mostly DX9 HLSL but with somewhat more DX10 stuff", I'd look at HLSLParser, specifically Thekla's fork of it. It recently got a Metal conversion backend too (from The Witness port to Metal, I guess), and a bunch of improvements from the ROBLOX folks. This, as far as I can tell, can convert into OpenGL (possibly ES too?) and Metal.

Khronos' glslang has recently been getting an HLSL parsing frontend, which seems to be targeted at the full DX11 HLSL syntax and is under very active development, with compute shader bits being done as we speak. So this can take GLSL or HLSL as input, and can output SPIR-V (which can be used directly in Vulkan). Another tool, SPIRV-Cross, could be used to convert that into GLSL or Metal, possibly with an optimization step via SPIRV-Tools in the middle.

There's also a DX11 bytecode level translator (as in: compile HLSL with the actual D3DCompiler/fxc, and translate the bytecode into GLSL), HLSLCrossCompiler -- my impression is that it needs "a lot" of tweaks on top to be "production ready". We use a fork of it in Unity, but the people working on it haven't gotten around to pushing their changes somewhere public. I just know they made *a lot* of changes :)

And then at GDC 2016, Microsoft talked about their upcoming open source HLSL compiler, which would be built on top of clang+LLVM, and I think they mentioned "end of 2016" as a potential release date. But I haven't heard updates on that. This of course would only be an HLSL -> DXIL toolchain, but if it were open source, then I guess someone could make a DXIL -> SPIR-V translator, and go from there to other backends via SPIRV-Cross.

So, in summary: right now, for modern HLSL I'd take a look at Khronos' glslang + SPIRV-Cross. If you can wait a bit until Microsoft ships their new HLSL compiler, it would be worth taking a look at that too.


What are your favourite static analysis tools for c++?

Karl Bergström

Short answer: I don't know :)

Longer: at work we have this group of people doing "developer tooling", and they are dealing with compiler versions, static analysis / coverage tools, code formatters, build system etc. etc. I just use whatever they have set up and generally move on with my work.

Myself, I have manually used "/analyze" in Visual C++ and the static analyzer in Xcode a few times. They were useful, but from what I remember, there were quite a few false positives. The visualization of results with neat arrows in Xcode is quite lovely.


Like the questions in your recent interview blog post, what (or where) is a good way to learn about GPU details? I've found this series really useful. Is there a good place for learning this stuff?

Bonifacio Costiniano

This series is excellent indeed!

I found some books to be useful too, e.g. "Real-Time Rendering" (Moller, Haines, Hoffman) has a very good overview of common real-time rendering algorithms and approaches, while "Physically Based Rendering" (Pharr, Jakob, Humphreys) is a really solid book on the whole physically based rendering thing (more towards offline rendering focus, but extremely solid foundation).

In regards to how GPUs work, Fatahalian's "Running Code at a Teraflop" is a really good "no marketing bs" look into the GPU :)


How far from optimal do you think modern game engines are at utilizing the GPU generally speaking? Do you think this gap between potential and utilized performance grows or lessens with current API & platform explosion and hardware development?

I think GPU utilization is more up to the individual game, and less up to the engine itself. For example, something like Unity allows a game developer to write their own shaders and compute shaders, make their own rendering effects, and so on -- which means the game developer is fully empowered to utilize whatever the GPU has (as long as Unity exposes access to that functionality).

The new graphics APIs (Vulkan, DX12, Metal) are first and foremost targeted at increasing CPU efficiency (via multithreading, lower driver overhead etc.). They do enable some more GPU features or more efficiency there too, but that's not their primary goal.

That said, there are very interesting "non-standard" ways to use the GPU these days (e.g. see Media Molecule Dreams or Q-Games Tomorrow Children). But I think that's more towards the "interesting use" axis, and not necessarily "use the GPU 100%" axis.

Using the GPU "to the max" is quite hard on the PC due to market realities -- you have to make sure your game works on GPUs that easily span 10x in performance between low end and high end, and in some cases even more. And the fastest ones typically hold quite a small share of the market, making it not very viable to spend significant time developing something specifically for them. On mobile, the differences between high end and low end are even more extreme. Even on consoles, we are no longer in a "fixed hardware for 7 years" cycle -- see PS4 and PS4 Pro; that platform already has two quite different hardware configurations.


How was the interview process when applying for a job at Unity? What language did you use in the interview, and what are some of the most important areas someone should learn if they want to be a graphics programmer? Thanks in advance, sir.

When I joined, the company was in a very different position than it is now :) I was hired at the end of 2005, when the company was 3 people. I actually got an email from them asking whether I wanted to join (they knew my name from the blog I had, and also from the ODE physics engine mailing list).

To which I -- obviously -- declined! I mean, this was a company I had never heard about, making an engine I had never heard about, with a fairly weird website; the engine was Mac-only at the time, and being in Eastern Europe I had never *seen* a Mac before (this was 2005, Macs were not hip yet). The whole thing looked somewhere between shady, naive and improbable.

But they invited me to a game jam they were organizing in the office and bought me a plane ticket. So I got there, and the founders looked both smart & the good kind of crazy, and I thought that this still had maybe a 1% chance of getting anywhere, but at least it would be fun while it lasted. I had a fairly boring job at the time doing regular database/website programming; this contributed too.

The actual "interview" was that game jam. Me & the CTO (Joachim) basically pair-programmed everything for the game. I did some modifications to Unity itself that we needed (this was Unity 1.1, which basically had almost no features to begin with :)). This was mostly C++ programming, and I knew C++ beforehand (having worked professionally with it for 5 years, and some more time at home before that).

I started by doing the Unity Web Player browser plugin for Windows (this was a task someone needed to do, and while I knew nothing about browser plugins, I knew more about Windows than the other people in the company). Only later did I start specializing more towards graphics programming.

These days, the interview for graphics programmers at Unity mostly looks for graphics (realtime or offline; some graphics APIs; GPUs; graphics algorithms) and C++ knowledge. Right now it's a "programming challenge" (write a C++ program that solves a stated problem, at your own home/pace) that gets evaluated, followed by a phone/Skype interview with one or two people, followed by an onsite interview with more people. I wrote about some of the interview questions I've used in the past.

But it really depends on the exact position. Sometimes we are looking for senior people with lots of existing experience (e.g. to be a technical lead of some graphics sub-team); sometimes we are looking for less experience.

About what to learn: "just start doing graphics and learn everything you run into" I guess is not a terribly useful answer, but really, that's the guideline :) Learn typical graphics algorithms (read books), shader programming and some 3D API; use some existing engine/toolset; learn C++ or some other systems-level language (Rust, Go, Swift); learn some higher-level language (C#, Python, JS); learn how GPUs work; etc.


Thanks for the great answer! bgfx is very interesting. I presume it is recommended to move on to an actual graphics API after a sufficient amount of abstract graphics knowledge is achieved. Am I correct? Also, talk more about Unity! Also, how important is the CPU side language (C#/C++/JS..etc)?

Mohammed ( ͡° ͜ʖ ͡°) Arabiat

That very much depends on what you want to learn/achieve.

If you want to learn a graphics API, then yes, at some point you have to use one :)


What do you think of learning WebGL to grasp the fundamentals of graphics programming so you don't have to deal with handling input or window management?

Mohammed ( ͡° ͜ʖ ͡°) Arabiat

That's probably a good idea! WebGL is very nice in terms of "availability" -- you just need a decent browser and that's it. What's perhaps not so nice is that it's built on top of OpenGL ES, which itself has a bunch of messy parts in it. However, it's still probably the easiest way to get into graphics programming indeed. Maybe use some helper libraries like three.js or similar too.

Another alternative might be something like Unity (but hey I might be biased). If you want to learn lower level graphics, Unity still allows you to write your own shaders, manually create and setup render targets and so on, while abstracting away most of platform differences, input handling and other "boring" bits.

If you are into C/C++, I'd suggest trying something like bgfx as a graphics API abstraction library that also deals with most of the "boring bits", allowing you to focus on what your graphics algorithm actually tries to achieve.


What is your proudest achievement in Unity for 2016?

Seon Rozenblum

Haven't shipped it yet, but kickstarting scriptable render loops. I've been thinking about something like this for a few years, but never got past the "jotting down some notes" stage. This year, we got a small team together for a week doing nothing but that. And that initial prototype turned out to be way more viable than any of us expected! Now of course a lot of work is left to make it shippable / production-ready, etc. But it feels like this, and all the low-level graphics improvements we are doing lately, have a chance of being a really solid base to build future graphics on. Super happy about that; can't wait to ship.



About Aras Pranckevičius:

Graphics programmer and code plumber at Unity

Kaunas, Lithuania

#graphics #programming