AAA Quality on a Mobile Device - What I learned after landing in the VR industry.

From working with top developers for consoles and PC to starting a new adventure as an artist working on mobile VR devices.

It has been an interesting 8 years. I started my career as a 3D environment artist at an outsourcing company. My first published title was Sony's VR Worlds, for which I made around 10 assets. I thought I would never touch a VR-ready model ever again.

Since I started making games, my passion and focus have been console AAA titles. One after another, I've had the pleasure of participating in games such as Homefront: The Revolution, Red Dead Redemption II, and Star Wars Battlefront II, amongst others.

If you have played any of these, you might have noticed the extreme attention to detail these titles are known for: the lighting, the texturing, the model definition… Everything you see on your screen while playing probably carries 15 to 20 hours of work, if not more. Even a nail, an eyelash… all these little things are polished to the highest level of perfection.

I think it's clear by now: AAA titles have the best of everything, from technology to artistry.


8 years have passed, and I found myself interviewing for a Senior Artist position at a VR fitness company. For those who might not be aware of the term, a senior position is for people very experienced with the processes and techniques required to succeed in a particular role.

Somehow I got the job. How? There might have been a couple of important factors. My portfolio is good for a Senior Artist, I have experience art directing for AAAs, and I am easy to talk to, which helps in interviews. By the way, talking about myself takes real effort for me, and I'm not comfortable doing it, so please excuse me if I come across in any particularly negative way.

Back to the point. I was probably selected because I presented myself as someone who can learn a particular workflow quickly. And I did, fast. There's still a lot to learn, but I was provided with everything I needed, which made it easier.

I remember feeling like I was holding an inferior machine when I tested my first build on the development VR kit the company provided (especially after using state-of-the-art technology for developing games for high-end consoles and PCs). Still, I am now at a point where I feel comfortable researching and developing, and to some extent, I'm even able to develop tools and workflows to bring that AAA quality to a mobile-like VR device.

I say mobile-like because the target for this app is not a PC-dependent VR headset. It's an Oculus Quest 2, which runs on Android with nothing but its own components, just like a phone.

I felt like this article needed this long introduction, but without further ado, here’s what I learned this past month and how I solved the big challenges I faced when making a mobile app look like a AAA title.


To understand our goals, we need to keep our limitations in mind, and they are big. Everything running on a VR headset is rendered twice, once per eye. That means your resource budget (models, textures, materials, etc.) is roughly half of what you could allocate in a regular real-time application.

Also, mobile devices don't handle dynamic lighting well. That means no specular highlights, no real-time shadows for moving objects… nothing. And if you decide to make some sacrifices to get part of that back, be ready to give up a lot elsewhere.

So in two paragraphs, I've discarded two of the major features that define a AAA product: good, believable lighting and high fidelity in models, textures, and materials. Then how do we make it look just as good?

Let's take it step by step, as there is a lot to do.


Removing everything the eye won't see is a must. Every single polygon counts, even more so now that we model everything in full detail. Every face that isn't seen in the final composition has to go. If it stays, it occupies space in the lightmaps and still gets rendered. If you don't see it, you don't need it.

There's an added issue on top of the low polygon budgets: everything in VR looks twice as close to you as it does in the editor. Of course, you can set the camera's FOV to a similar value and so on, but it won't be the same.

This makes the geometry problem worse because you need to maintain high fidelity in your models; it will be noticeable if you don't. So here's what I do to tackle the problem: choose your assets wisely. You don't want an empty scene, but you also don't want a scene full of rounded objects that need many edges to maintain their silhouette. Here's an example:

Let's say you are making a kitchen. It's full of rounded objects, but you can modify the look of those objects to make them more geometric. Pans will almost always be round, but glasses, dishes, bowls, and so on can be made square or hexagonal. It might even give the scene a more modern look, if that's what you are after.

It's also worth mentioning that anything that doesn't contribute to the silhouette can be reduced geometry-wise. You literally won't notice the difference if you bake good normal maps.

A round glass uses almost the resources of three square ones.

This way, you can spend the resources you saved on those objects to give the ones you kept rounded more fidelity, and therefore more quality. It's also important to classify objects by their distance to the camera: you don't want something very far away looking super detailed while something too close to the player's view looks under-detailed.

So that’s one example for models, but what about textures and materials?

There's no easy way to reduce these other than compressing them and reducing their sizes. For materials, you can and should unify as many assets as possible into one material via texture atlases, which combine many textures into one.

In this example, we merged 4 draw calls into 1
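On the shader side, sampling an atlas is just a scale and an offset into the tile. Normally you re-unwrap the assets so their UVs already point at the right tile, but if you ever need to remap in the shader instead, here's a minimal sketch; _AtlasTex and _AtlasTile are placeholder names I made up:

```hlsl
// Hypothetical helper: remap a mesh's 0-1 UVs into one tile of a texture atlas.
// _AtlasTile.xy = tile scale (e.g. 0.5, 0.5 for a 2x2 atlas),
// _AtlasTile.zw = tile offset (e.g. 0.5, 0.0 for the tile next to the first).
sampler2D _AtlasTex;
float4 _AtlasTile;

fixed4 SampleAtlas(float2 uv)
{
    float2 atlasUV = uv * _AtlasTile.xy + _AtlasTile.zw;
    return tex2D(_AtlasTex, atlasUV);
}
```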

One handy thing I noticed is that in many cases, after baking the lightmap (which we'll cover later), you can remove the normal maps from the materials and ultimately delete them to save some space.

Keep the distance to the player in mind here as well. Objects further away can take up less space in your UVs than the ones close to you.

Lastly, once you've made sure all your textures look good at the lowest resolution possible, you can get away with further reducing the size of the normal maps you didn't delete and the roughness maps. You won't notice it nearly as much as reducing the albedo/color map. It's magical.

Before proceeding to the next steps, it's important to double-check that you're using exactly what you need: you don't want unused resources sitting in memory either, so keep that in mind!


So now we've managed to keep things cheap poly- and texture-wise, and we've reduced the material count to keep the draw call count low. The graphics memory required to run our level is exactly what it needs to be.

Now there’s another problem. The FPS count is low. The app is calculating too much per frame and isn’t able to maintain a good frame rate.

There are many solutions to this, but the easiest one is to keep your real-time lighting and physics to the absolute minimum. All specularity and shadows calculated per frame will have to go, and with them gone, your level will look plain and lifeless, so we're going to have to find a replacement.

The first thing I would do is set all my lights to static and bake the lightmaps. Once that's done, you'll notice the lighting coming back, but something fundamental will be gone: the specularity, and all your beautiful material definition with it.

Bringing this back requires some intermediate knowledge of shaders, but it's fine if you don't have it, as the idea isn't difficult to understand. I personally translate these lines of code into nodes in whatever shader editor I'm using; Unity's Shader Graph can do this perfectly, as can UE's material editor, but for the sake of readability, I'll explain it using a code example.

I’m not sharing the full code as this can violate the project’s NDA, so don’t use this to replicate the shader unless you know what you are doing. Also, I have only been coding shaders for around 2 weeks now, so you might know more than I do!

A snippet of the code we’re using to fake specularity.

To begin with, it's important to define the different variables we are going to use in the shader.

We are going to call the surface we grab the properties from “Surface”.

From it, we are going to calculate the normal direction. Then, by subtracting the surface position from the camera position, both in world space, we'll obtain the view direction.

For the light direction, we subtract the world origin (0, 0, 0) from the position of the light, which in my case is hardcoded into the shader (you can expose it as a material property if you want). Then we normalize the result.

We'll also add two variables to control the look of the specularity: "light size" and "strength".

Finally, to calculate the fake specularity, we saturate (clamp between 0 and 1) the dot product of the normal direction and the light direction, both calculated in the previous steps. We multiply that by the saturated dot product of the view direction and the inverted light direction reflected around the normal, raised to the power of light size multiplied by strength.

When everything is calculated, we multiply the result by the albedo and mask it out by multiplying by the roughness map.

For more realism, you can mask the specular by the lightmaps to fake the shadows. This really takes it to the next level.
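To make that wall of words concrete, here's a minimal HLSL-style sketch of the technique. It is not the project's shader (NDA and all), just my own reconstruction of the steps described above; _LightPos, _LightSize, and _Strength are made-up names:

```hlsl
// Hypothetical reconstruction of the fake specular described above.
float3 _LightPos;   // light position in world space (hardcoded or exposed)
float _LightSize;   // controls the size of the highlight
float _Strength;    // controls how sharp the highlight gets

float3 FakeSpecular(float3 normalDir, float3 surfacePos, float3 cameraPos,
                    float3 albedo, float roughness, float lightmapMask)
{
    // View direction: camera position minus surface position, in world space.
    float3 viewDir = normalize(cameraPos - surfacePos);

    // Light direction: light position minus the world origin, normalized.
    float3 lightDir = normalize(_LightPos - float3(0, 0, 0));

    // Saturated N dot L, so the highlight fades on faces turned away.
    float nDotL = saturate(dot(normalDir, lightDir));

    // Reflect the inverted light direction around the normal, compare it
    // with the view direction, and sharpen with a power.
    float3 reflected = reflect(-lightDir, normalDir);
    float highlight = pow(saturate(dot(reflected, viewDir)), _LightSize * _Strength);

    // Tint by albedo, mask by roughness, and optionally by the lightmap
    // so baked shadows kill the highlight too.
    return nDotL * highlight * albedo * roughness * lightmapMask;
}
```

Since it's a single hardcoded light and a couple of dot products, this costs next to nothing per pixel.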

Before I continue, I want to emphasize that I am not, by any means, a shader coder. I learned about this on the go and started with it less than two weeks ago. Also, wording this was not easy. Now, this being said, the code works, and the material does what I want it to do, and it’s cheap!

With this little shader magic applied to our material, we have something resembling a surface's specularity. I'm still researching how to add the normal map to this to make it even better, but for now, all I can do is add the green and red channels to the albedo map as an overlay. That seems to work really well for now.
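In case "as an overlay" sounds vague, here's a hypothetical sketch of what I mean using the standard overlay blend formula; the function name and inputs are mine, not from any particular library:

```hlsl
// Hypothetical sketch: overlay-blend the normal map's red and green channels
// onto the albedo to fake a bit of surface detail without real normal mapping.
float3 OverlayNormalDetail(float3 albedo, float2 normalRG)
{
    // Average the two channels into one grayscale detail value.
    float detail = (normalRG.r + normalRG.g) * 0.5;

    // Standard overlay blend: darkens where the base is below 0.5,
    // brightens where it is above.
    return albedo < 0.5
        ? 2.0 * albedo * detail
        : 1.0 - 2.0 * (1.0 - albedo) * (1.0 - detail);
}
```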


Now for dynamics and things that move. Even AAA titles suffer from all the limitations I've stated above, but mobile makes sure you notice them, as you know by now. Not many particle FX are required in my scenes, but I always like to bring things to life with a bit of movement.

Using noise and panners in the shader allows me to displace some vertices, giving the illusion of movement in the trees, for example. On the backdrops, distorting the UVs with the same techniques lets me bring static HDRIs to life. These things are pretty standard, but I think they're worth a mention too.
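Both tricks boil down to feeding time into the shader. Here's a minimal sketch of the idea; _WindStrength, _WindSpeed, and _PanSpeed are placeholder names, and _Time is Unity's built-in time vector:

```hlsl
// Hypothetical vertex trick: sway vertices with a cheap sine so foliage moves.
float _WindStrength;
float _WindSpeed;
float2 _PanSpeed;

float3 SwayVertex(float3 worldPos)
{
    // Offset the phase by world position so trees don't sway in unison.
    float wave = sin(_Time.y * _WindSpeed + worldPos.x + worldPos.z);
    worldPos.x += wave * _WindStrength;
    return worldPos;
}

// Hypothetical UV trick: pan the backdrop UVs over time so a static HDRI
// reads as a living sky; a noise texture can distort them further.
float2 PanUV(float2 uv)
{
    return uv + _Time.y * _PanSpeed;
}
```

In practice, you'd mask the sway (by vertex color or height) so trunks stay planted while the leaves move.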

I am not a big fan of making posts too long, and this one already is a bit of a heavy read, so I’m choosing to leave it here. It’s also late in the night, and my brain needs some rest to absorb new techniques to share with you, so thanks a lot for reading, and please feel free to ask questions or post any critiques you might have!

See you around! :)