Aras Pranckevičius -- Top answers
Now that Putin will seize the Baltic states, what country are you considering emigrating to? (asking for a friend) -- R.Pole
I really hope it won't come to that! But yeah, you never know :(
Places that I'd like to live in: Iceland, New Zealand. Well, I have no idea if I'd really like to live there, but both are kinda remote and have some awesome nature. Sounds like a good deal to me.
How was the interview process when applying for a job at Unity? What language did you use in the interview, and what are some of the most important areas someone should learn if they want to be a graphics programmer? Thanks in advance sir
When I joined, the company was in a very different position than it is now :) I was hired at the end of 2005, when the company was 3 people. I actually got an email from them, asking whether I wanted to join (they knew my name from the blog I had, and also from the ODE physics engine mailing list).
Which I -- obviously -- declined! I mean, this was a company I had never heard about, making an engine I had never heard about, with a fairly weird website; the engine was Mac-only at the time, and being in Eastern Europe I had never *seen* a Mac before (this was 2005, Macs were not hip yet). The whole thing looked somewhere between shady, naive and improbable.
But they invited me to a game jam they were organizing in the office and bought me a plane ticket. So I got there, and the founders looked both smart & the good kind of crazy, and I thought that this still had like a 1% chance of getting anywhere, but at least it would be fun while it lasted. I had a fairly boring job at the time doing regular databases/websites programming; that contributed too.
The actual "interview" was that game jam. Me & the CTO (Joachim) basically pair-programmed everything for the game. I did some modifications to Unity itself that we needed (this was Unity 1.1, which basically had almost no features to begin with :)). This was mostly C++ programming, and I knew C++ before (having worked professionally with it for 5 years, and some more time at home before that).
I started by doing the Unity Web Player browser plugin for Windows (this was a task that someone needed to do, and while I knew nothing about plugins, I knew more about Windows than the other people in the company). Only later did I start specializing more towards graphics programming.
These days, the interview for graphics programmers at Unity mostly looks for graphics knowledge (realtime or offline; some graphics APIs; GPUs; graphics algorithms) and C++ knowledge. Right now it's a "programming challenge" (write a C++ program that solves a stated problem at your own home/pace) that gets evaluated, followed by a phone/skype interview with one or two people, followed by an onsite interview with more people. I wrote about some of the interview questions I've used in the past: http://aras-p.info/blog/2016/11/05/Interview-questions/
But it really depends on the exact position. Sometimes we are looking for senior people with lots of existing experience (e.g. to be a technical lead of some graphics sub-team), sometimes for less experienced people.
About what to learn: "just start doing graphics and learn everything you run into" I guess is not a terribly useful answer, but really that's the guideline :) Learn typical graphics algorithms (read books), shader programming, some 3D API, use some existing engine/toolset, learn C++ or some other systems-level language (Rust, Go, Swift), learn some higher level language (C#, Python, JS), learn how GPUs work etc. etc.
As an aspiring gfx dev, I always wanted to ask a real one this -- stupid? -- question. In most AAA games, colors look heavily saturated (Uncharted, FF15, for instance). Is it an artistic decision? Or is it because something is missing in the lighting step (absorption, GI)? Or is it just my eyes? :) Thx
I think it's mostly artistic direction. Some AAA games go for high saturation. Some go for brown/gray look, or used to a few years ago -- see this half-joke http://www.codersnotes.com/notes/two-channel/ Some go for orange+teal lighting setups. etc. :)
What do you think about Unreal Engine's decision to use C++ as the game logic language? What is your general opinion of managed vs unmanaged languages in a game dev context? -- Ivan Poliakov
I think every approach has pros & cons.
In UE4 case, they seemingly have a split of "high level logic should be in Blueprints, low level logic in C++". That's a valid approach, though personally to me it feels that the downside is that there's no "middle" -- you either need to get to C++ level (many can't or don't want to), or you need to work with Blueprints (many can't or don't want to).
In Unity's case, it's mostly about this "middle level" (C#/.NET); however, the things we lack are indeed the ones "on the edges" (super high level visual programming for people who don't want to or can't program; and super low level scripting for people who need to get there). While each can be worked around (via plugins, or visual scripting extensions), indeed it's not ideal right now.
I think managed languages are fine for a lot of game code. They do have some downsides (garbage collection is probably the major one), but on the other hand, game scripting has been using some sort of "higher level languages" for a very long time by now (e.g. Lua, C#, Python, UnrealScript, other custom languages). In Unity's case in particular, the GC situation is not ideal; I think once we get a more modern GC, things should be a bit better.
So yeah, basically different approaches, and each of them has some advantages & disadvantages.
Having worked on the graphics of a multi-platform engine, do you think supporting Windows is difficult enough that it's not completely unjustified when a AAA game has horrible performance on day one?
I don't think it's so much "supporting Windows" as "supporting a wide range of hardware/software configs".
For big AAA games, usually (not always) most of the revenue comes from consoles. So it's natural that this is where most of the effort goes into, and most of the optimizations, and most of quality assurance.
Now, pretty much everyone also has the game running on the PC in some state all the time - after all, all the development tools are on PC, and so on. These days, with PC & consoles having very similar hardware (no "exotic" hardware like Cell/360), that's even easier. However, a PC has a ton of things that you don't have to worry about on consoles - various numbers of CPU cores and speeds (with an unknown amount of that taken by other applications & background processes), all the different GPUs out there and their various driver versions, an unknown amount & speed of memory and storage, etc. etc.
Getting most of that working acceptably is usually not rocket surgery, but it requires quite some QA, and then some amount of development work to fix or work around the problems that are uncovered. Game development timelines often do not leave "extra time at the end" for PC optimization -- for as long as humanly possible, the teams usually try to make the best game they can on the main platforms (that being consoles), and then they ship. And only once that is done & shipped do they turn to "oh, we should do some QA & fixes for PC" -- that's just the natural course of things when consoles are the major money bringers.
Is a CS degree a must-have at Unity? How much is it taken into account when evaluating a candidate? Do years of experience and dedication matter? And besides the technical stuff, what do you look for when evaluating people? -- Filipe Scur
Realistically all these questions have an answer of "it depends" :) (on the position, team, department, etc.).
Typically a CS degree is not a requirement.
Experience matters if it's for a senior or tech lead position.
Besides technical stuff, we're trying to evaluate the "no asshole" factor, whether a person could work well without much handholding or direction (in most teams there's not terribly much supervision or detailed "management" etc.).
But really, it depends on the team/position.
Like the questions in your recent interview blog post, what (or where) is a good way to learn about GPU details? I've found this series really useful: https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/ Is there a good place for learning this stuff? -- Bonifacio Costiniano
This series is excellent indeed!
I found some books to be useful too, e.g. "Real-Time Rendering" (Akenine-Möller, Haines, Hoffman) has a very good overview of common real-time rendering algorithms and approaches, while "Physically Based Rendering" (Pharr, Jakob, Humphreys) is a really solid book on the whole physically based rendering thing (more towards an offline rendering focus, but an extremely solid foundation).
Regarding how GPUs work, Fatahalian's "Running Code at a Teraflop" (http://bps10.idav.ucdavis.edu/) is a really good "no marketing bs" look into the GPU :)
What do you think of learning WebGL to grasp the fundamentals of graphics programming, so you don't have to deal with handling input or window management? -- Mohammed ( ͡° ͜ʖ ͡°) Arabiat
That's probably a good idea! WebGL is very nice in terms of "availability" -- you just need a decent browser and that's it. What's perhaps not so nice is that it's built on top of OpenGL ES, which itself has a bunch of messy parts in it. Still, it's probably the easiest way to get into graphics programming. Maybe use some helper libraries like three.js or similar too.
Another alternative might be something like Unity (but hey, I might be biased). If you want to learn lower level graphics, Unity still allows you to write your own shaders, manually create and set up render targets and so on, while abstracting away most of the platform differences, input handling and other "boring" bits.
If you are into C/C++, I'd suggest trying something like bgfx (https://github.com/bkaradzic/bgfx) as a graphics API abstraction library that also deals with most of "boring bits", allowing you to focus on what your graphics algorithm tries to actually achieve.
When is GI applied in the deferred pipeline? Is it applied with all the lights or after Final Pass? (or somewhere else?)
During the G-buffer pass, ambient & lightmaps & emissive things are rendered into the emission buffer.
This is fairly easy to see using the Frame Debugger by the way.
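To sketch what that looks like in shader terms: below is an illustrative DX11-style HLSL fragment, not Unity's actual shader source. The render target layout roughly follows Unity 5's deferred G-buffer (RT0 albedo, RT1 specular/smoothness, RT2 normal, RT3 emission + baked lighting), and the struct/helper names are made up.

```hlsl
// Hypothetical G-buffer pixel shader output. Note there is no separate
// "GI pass": baked GI, ambient and emissive all land in the emission
// target here, and realtime lights are later added on top of it.
struct GBufferOutput
{
    float4 albedo   : SV_Target0; // diffuse color + occlusion
    float4 specular : SV_Target1; // specular color + smoothness
    float4 normal   : SV_Target2; // world-space normal
    float4 emission : SV_Target3; // emissive + lightmaps + ambient
};

GBufferOutput frag(Interpolators i) // Interpolators: made-up input struct
{
    GBufferOutput o;
    // ... fill albedo/specular/normal from the material ...
    float3 bakedGI = SampleLightmapOrAmbient(i); // hypothetical helper
    o.emission = float4(MaterialEmission(i) + bakedGI, 1);
    return o;
}
```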
Hello Aras! What's your favorite game(s) made in Unity? -- Jacob Smaga
In the last year, probably INSIDE and Firewatch.
I loved Monument Valley, TIS-100 and Year Walk in the years before.
Hi Aras, since somebody asked about volumetric clouds: do you happen to know of any good real-time approaches that are capable of rendering clouds from afar as well as up close? I'm looking for a solution where I have a vehicle on the ground that could jump up into the sky and fly through clouds. -- Elvar Orn Unnthorsson
Nothing comes to mind right now, but I have not been following that area.
My guess is that "flying through clouds" and "clouds in the distance" likely need somewhat different systems/approaches for rendering.
"The Real-time Volumetric Cloudscapes of Horizon: Zero Dawn" from http://advances.realtimerendering.com/s2015/index.html I remember being fairly interesting, but I forget whether they handled "flying through clouds" case.
"A Novel Sampling Algorithm for Fast and Stable Real-Time Volume Rendering" from the same siggraph course might be useful for cloud rendering part too.
Just noticed my color ID texture gets blurry when overriding its max size in the Asset Importer. Setting the filter mode to Point doesn't help. Is there any way to get a smaller resolution and keep hard pixel edges? Currently I resize it in Photoshop with Resampling set to Nearest Neighbor :) -- Simon Kratz
Yes, today clamping the max texture size in Unity always downsamples the texture with something like a Mitchell filter, and ignores the GPU filtering settings. So you have to do that externally. Maybe worth filing a bug so that someone remembers to fix it one day.
Do you think Microsoft is trying to force people into UWP and turn Windows into a closed platform? Do you anticipate Unity will ever have a UWP version that circumvents the Windows Store? -- Salim
I don't know wrt Microsoft's plans. Personally, I don't pay much (or any?) attention to UWP. Never used the Windows Store.
Unity can already build apps for UWP of course, as well as regular Win32/64 apps. The Unity editor itself is a Win32/64 application, and I don't see it becoming a UWP app anytime soon. Or reasons to do that.
Hi, Aras! Thank you for your work. I have a question about GLSL. I need to build Ogre with this library, but as a shared library. I recompiled the package with the -fPIC flag, but cmake does not see these libs, even though I made a symbolic link and ran ldconfig. Please tell me how I can build glsl-optimizer as a shared library.
I'll assume you want to integrate glsl-optimizer into Ogre...
However, since you're talking about -fPIC and cmake, I'll also assume this is about a platform that I know nothing about (Linux, by chance?). So, uhh... no idea. I know how to build things on Windows and macOS, but about Linux I have zero knowledge. glsl-optimizer itself is just a bunch of C++ code that needs to be compiled, without any special things done for it. So "whatever Linux people do to build dynamic libraries on Linux" is the best answer I can give :)
Hi Aras! I noticed textures set to Sprite in Unity don't seem to have an option to handle non-power-of-2 textures. Is there some hidden way to do so? It would be great for us to get the advantages of PVRTC and ETC compression for our Android/iOS project. -- Simon Kratz
I don't know, really. Maybe it's supposed to be there but got removed for some reason? Best probably file a bug or ask on forums, and people working with 2D would know.
Hi Aras! Any info about multi-channel signed distance fields in Unity? Is it going to be implemented? It's great for small UI text/icons! Reference: https://twitter.com/Chman/status/794870701501124612
I don't know the context of why @chman was playing around with them, but for general font/text rendering, yes, someone is looking at improving the current situation. Bitmap-based glyphs are not really ideal, and indeed some sort of distance field based rendering is one of the options to look into. There are also approaches that directly evaluate the glyph bezier curve outlines in the shader, though I'm not sure what the advantages/disadvantages of that method are.
So yeah TLDR: someone is looking into this area, but I don't know their roadmap/plans.
Hello, Aras. I'm a Chinese developer; maybe my English is poor, please forgive me. Question: the official Unity shader documentation says target 2.0 supports 8 interpolators. What does "interpolators" mean? How do I understand it?
"interpolators" is a DirectX term, for example in OpenGL they are called "varyings" sometimes. It's basically the "things" you pass from the vertex shader into the pixel/fragment shader. In shaders these typically have "TEXCOORDn" semantics on them.
Platforms like OpenGL ES 2.0 are often limited to 8 four-component vectors (so in total up to 32 numbers) that can be written to from the vertex shader and read by the fragment shader. DirectX9 shader model 2 (SM2.0) is slightly more complicated, as it allows up to 8 TEXCOORDn interpolators (each being float, float2, float3 or float4), and additionally two low-precision COLORn interpolators (again each being float..float4).
Later shader models/APIs do away with that split between "texcoord vs color" interpolators; e.g. DirectX9 shader model 3 says "up to 10 float..float4 interpolators", OpenGL ES 3.0 and DirectX10 say "up to 16", etc.
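To make that concrete, here is a small illustrative HLSL struct (the names are made up); each TEXCOORDn field is one interpolator, written by the vertex shader and read, after per-triangle interpolation, by the pixel shader:

```hlsl
// Vertex-to-pixel data. On SM2.0 you'd get up to 8 TEXCOORDn slots
// (each float..float4) plus two low-precision COLORn slots.
struct VertexToFragment
{
    float4 pos     : SV_POSITION; // clip-space position; not a user interpolator
    float2 uv      : TEXCOORD0;   // 1st interpolator (2 of 4 components used)
    float3 normal  : TEXCOORD1;   // 2nd interpolator
    float3 viewDir : TEXCOORD2;   // 3rd interpolator
    float4 color   : COLOR0;      // low-precision color interpolator
};
```

Packing several small values into one float4 (e.g. two float2 UVs) is the usual trick for staying under the limit.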
Hello, can I edit the script you created on this page and deploy it to the Asset Store? http://wiki.unity3d.com/index.php?title=FramesPerSecond
I guess, but there's one right in the scripting docs -- https://docs.unity3d.com/ScriptReference/Time-realtimeSinceStartup.html -- so I kinda don't see the point. Unless you'll put it on the Asset Store and earn millions selling it, in which case more power to you! :)
Last I looked at creating shaders that work across various platforms & APIs (i.e. shader translation hlsl -> glsl) it was quite a mess. I know you wrote about this topic on your blog a while ago. Have things improved since?
I hope that they did to some extent, however I have not personally used any of these new shader cross-compilers, so can't vouch for them.
The situation right now in 2016 seems to be:
If you need DX9-level HLSL (with a tiny bit of DX10 things, like instance IDs and some texture arrays), then using hlsl2glslfork + glsl-optimizer is still probably the most "battle tested" solution (being used in Unity and so on). However, the DX9 HLSL syntax is starting to get old. This can get conversion from DX9 HLSL into: GL2.x, GLES2.0, GLES3.0, Metal.
For "mostly DX9 HLSL but with somewhat more DX10 stuff", I'd look at HLSLParser, specifically Thekla's fork of it (https://github.com/Thekla/hlslparser). This recently got a Metal conversion backend too (from The Witness game's port to Metal, I guess), and a bunch of improvements from ROBLOX folks. This, as far as I can tell, can get you conversion into OpenGL (possibly ES too?) and Metal.
Khronos' glslang (https://github.com/KhronosGroup/glslang) is getting a HLSL parsing frontend recently, which seems to be targeted at full DX11 HLSL syntax, and is under very active development, with compute shader bits being done as we speak. So this can take GLSL or HLSL as input, and can output SPIR-V (which can be used directly in Vulkan). Another tool, SPIRV-Cross (https://github.com/KhronosGroup/SPIRV-Cross) could be used to convert that into GLSL or Metal. Possibly with some optimization step via SPIRV-Tools in the middle (https://github.com/KhronosGroup/SPIRV-Tools).
There's a DX11 bytecode level translator (as in, compile HLSL with the actual D3DCompiler/fxc, and translate the bytecode into GLSL): HLSLCrossCompiler https://github.com/James-Jones/HLSLCrossCompiler -- my impression is that it needs "a lot" of tweaks on top to be "production ready". We use a fork of it in Unity, but the people working on it haven't gotten around to pushing their changes somewhere public. I just know they did *a lot* of changes :)
And then Microsoft at GDC2016 talked about their upcoming open source HLSL compiler, that would be built on top of clang+llvm, and I think they talked about "end of 2016" as potential release date. But I haven't heard updates on that. This of course would only be a HLSL -> DXIL toolchain, but if it were open source then I guess someone could make DXIL -> SPIR-V translator, and from there to other backends via SPIRV-Cross.
So, in summary: right now, for modern HLSL I'd take a look at Khronos' glslang + SPIRV-Cross. If you can wait a bit until Microsoft ships their new HLSL compiler, then would be worth taking a look at that too.
Thanks for the great answer! bgfx is very interesting. I presume it is recommended to move on to an actual graphics API after a sufficient amount of abstract graphics knowledge is achieved -- am I correct? Also, talk more about Unity! Also, how important is the CPU-side language (C#/C++/JS, etc.)? -- Mohammed ( ͡° ͜ʖ ͡°) Arabiat
That very much depends on what you want to learn/achieve.
If you want to learn a graphics API, then yes, at some point you have to use one :)
What is your proudest achievement in Unity for 2016? -- Seon Rozenblum
Haven't shipped it yet, but kickstarting scriptable render loops (https://docs.google.com/document/d/1e2jkr_-v5iaZRuHdnMrSv978LuJKYZhsIYnrDkNAuvQ/edit). I've been thinking about something like this for a few years, but it never got past the "jotting down some notes" stage. This year, we got a small team together for a week doing nothing but that. And that initial prototype turned out to be way more viable than any of us expected! Now of course a lot of work is left to make it shippable / production ready etc. etc. But it feels like this and all the low-level graphics improvements we've been doing lately have a chance of being a really solid base to build future graphics on. Super happy about that; can't wait to ship.