Most of the words that follow aren’t necessary to see why the reveal of Unreal Engine 5, Epic Games’ video game engine set for release in 2021, blew up the internet last week.
You can watch the following video, know that this is a demo of a video game and not footage of anything real, forget the technical aspects being discussed, and simply see that video game graphics are about to take a big leap forward.
Video games during the 2020s could look almost like scenes ripped straight from real life. The overwhelming response from game developers was like a collective jaw dropping to the floor.
The current generation of gaming consoles, including Sony’s PlayStation 4 and Microsoft’s Xbox One, is built on seven-year-old hardware first released in 2013. Both Sony and Microsoft are releasing their next generation of consoles this year. Tim Sweeney, CEO of Epic Games, credits the upcoming PlayStation 5, the system running this demo, with providing the capabilities to render much of the scene. He also says the features shown in the demo will work on all next-generation consoles.
To be sure, this video is aimed at game developers, 3D artists, and anyone Epic Games hopes to convince to use its technology instead of a competitor’s. While it’s full of technical detail, that didn’t prevent millions of people from watching the demo within the first few hours of its release, and the response was overwhelmingly positive.
As I followed the meme-soaked conversation online, I sensed 3D artists were enthusiastic, but not being a 3D artist myself, I wanted to know exactly what made this so special. So, I reached out to one of the experts I saw tweeting about it to see if she could help me better understand.
Estella Tse is an augmented reality and virtual reality (AR/VR) creative director and artist based in Oakland, California, who has worked as an artist-in-residence with Google, Adobe, and others.
She was kind enough to answer my questions and explain the demo in simple terms.
Aaron Frank: Can we start at the beginning? What is a game engine?
Estella Tse: A game engine is a computer program developers use to make games, interactive experiences, and AR/VR apps. You can place assets like 3D models, 3D environments, images, sound effects, and music into a game engine and add interactions to those elements. It can be as simple as controlling a ball to collect points or as involved as adding the complex interactions we see in major release titles.
In the AR/VR industry, we create most experiences with one of two game engines: Unity or Unreal Engine.
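To make that idea concrete, here is a toy sketch of the concept in Python. It is not Unity’s or Unreal’s actual API; the names and the “collect a coin” interaction are made up purely to illustrate what placing assets and adding interactions to them means.

```python
# A toy sketch of what a game engine does conceptually: hold assets in a
# scene and run interaction logic every frame. The names here are
# hypothetical, not Unity's or Unreal's real API.
from dataclasses import dataclass

@dataclass
class Ball:
    x: float = 0.0
    score: int = 0

@dataclass
class Coin:
    x: float
    collected: bool = False

def update(ball: Ball, coins: list[Coin]) -> None:
    """One frame of 'interaction' logic: collect any coin the ball touches."""
    for coin in coins:
        if not coin.collected and abs(ball.x - coin.x) < 0.5:
            coin.collected = True
            ball.score += 1

ball = Ball()
coins = [Coin(x=float(i)) for i in range(1, 4)]  # three coins placed in the scene
for _ in range(40):        # the engine's main loop, normally running ~60 times a second
    ball.x += 0.1          # "move the ball" (real input handling omitted)
    update(ball, coins)
print(ball.score)          # -> 3 once the ball has rolled past every coin
```

A real engine layers rendering, physics, audio, and input on top of a loop like this, but the core shape, assets plus per-frame interaction logic, is the same.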
AF: I discovered your work when I came across your enthusiastic tweet about the number of “triangles” that can be rendered in the engine. The video mentions “hundreds of billions” of triangles rendered across the whole demo. What does that mean exactly, and what are triangles?
ET: For cameras, “megapixels” measure how much detail a photo can capture. For Photoshop artists, “pixels per inch” is an indicator of image resolution. And in 3D modeling terms, “polygons” are the number of faces a model has. A cube, for instance, has six faces.
These polygon faces, which commonly consist of triangles (as the Unreal demo calls them), are a major factor in the graphics quality of games and AR/VR experiences. The more polygons a model has, the smoother and more realistic it can look. It’s a bit like how you can see the individual pixels in a low-resolution photograph, while in a high-resolution photograph the pixels are nearly undetectable. The more polygons a 3D model has, the more information and detail it holds.
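As a rough illustration of what a triangle count is, here is a small Python sketch that counts the triangles in a 3D model stored as a Wavefront .obj file, a common plain-text mesh format where each “f” line describes one polygon face. The file name is hypothetical, and real pipelines use dedicated tools rather than a script like this.

```python
# A rough sketch of counting the triangles in a 3D model, assuming it is a
# Wavefront .obj file. Each "f" line lists one face's vertices; a face with
# n vertices triangulates into n - 2 triangles. The path is hypothetical.
def count_triangles(obj_path: str) -> int:
    triangles = 0
    with open(obj_path) as f:
        for line in f:
            if line.startswith("f "):
                vertex_count = len(line.split()) - 1   # tokens after "f" are vertices
                triangles += max(vertex_count - 2, 0)  # n-sided face -> n - 2 triangles
    return triangles

print(count_triangles("statue.obj"))  # a low-poly prop might print a few hundred; a detailed sculpt, millions
```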
One might want the highest-quality models possible in their game, but hardware is a limiting factor. A mobile phone, for instance, can’t handle intricate 3D models with millions of polygons. Even most high-end VR experiences require 3D models to be optimized, which reduces the number of polygons so that more bandwidth is available for interactivity and for other models loading at the same time. This is all done so that everything runs without lag.
You may have heard the term “low-poly” models. These are models with very few polygons, which tend to create an aesthetic reminiscent of classic video games. Here is an example of a low-poly model made with Google Blocks. This kind of model is versatile and can be integrated into anything from experiences built for mobile devices up to high-fidelity game experiences.
On the other hand, here is an example of a very high-poly painting made with Google Tilt Brush. My Tilt Brush paintings won’t load on mobile devices without optimization, or “decimation,” and even then will likely crash devices because of their millions of polygons. There are easily over a million polygons in this painting, and as a 3D artist, this is something I pay close attention to when I make AR/VR creations. I have to keep in mind that not every device or experience can handle my work at its full detail.
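For a sense of what optimization or “decimation” looks like in practice, here is a hedged sketch using the open-source Open3D library, which offers quadric decimation to reduce a mesh down to a target triangle budget. The file names and the 20,000-triangle target are illustrative only, not figures from the demo or from Estella’s workflow.

```python
# A sketch of "decimation": reducing a high-poly mesh to a target triangle
# budget so it can run on weaker hardware. This assumes the open-source
# Open3D library; file names and the 20,000-triangle budget are illustrative.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("tilt_brush_painting.obj")
print("before:", len(mesh.triangles), "triangles")

# Quadric decimation collapses edges until roughly the requested number of
# triangles remains, trying to preserve the model's overall shape.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=20_000)
print("after: ", len(simplified.triangles), "triangles")

o3d.io.write_triangle_mesh("tilt_brush_painting_mobile.obj", simplified)
```

The trade-off is exactly the one described above: the decimated copy loads on a phone, but fine brush strokes and surface detail are lost along with the discarded triangles.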
AF: Another concept I routinely saw is captured in this tweet, referencing “normal” and “occlusion” baking. (Also this one and this one.) Can you explain what “baking” means?
ET: Another incredible piece of news from the UE5 demo was Lumen’s real-time lighting capability, on top of the capacity for high-fidelity models. This is really powerful!
Often, developers have to “bake” lighting—which is essentially faked lighting to save on graphics processing—into 3D models to create the illusion of lighting in an environment. This saves a lot on processing and loading time, prevents lag, and allows for more interactivity and smoother experiences. Here is an example of how lighting is baked onto a “normal” map.
Baking can become a production nightmare once models already have lighting baked in. Let’s say a director wants to change the location and color of a scene’s light. Every asset in that scene with lighting baked in then has to be re-rendered with the new lighting. Baked lighting is like “faking” the light by manually painting the lights and shadows onto a statue rather than shining a real light onto the statue. Lumen would allow us to actually light models for real, as opposed to manually faking the lighting. I believe it’ll allow for much more dynamic visual design in the future.
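To see why baked lighting breaks when the light moves, here is a simplified Python sketch using basic Lambert (“N dot L”) shading, the cosine-of-angle rule at the heart of diffuse lighting. The scene and numbers are invented for illustration; real engines, and Lumen in particular, are vastly more sophisticated than this.

```python
# A simplified sketch of baked vs. real-time lighting using Lambert shading.
# The surface point, light directions, and values are made up for illustration.
import math

def lambert(normal: tuple, light_dir: tuple) -> float:
    """Diffuse brightness of a surface point: cosine of the angle between its
    normal and the direction toward the light, clamped at zero."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(dot, 0.0)

normal = (0.0, 1.0, 0.0)                                  # a surface facing straight up
light_from_above = (0.0, 1.0, 0.0)
light_from_side = (1 / math.sqrt(2), 1 / math.sqrt(2), 0.0)

# "Baked": brightness computed once and stored with the asset. If the director
# later moves the light, this frozen value is simply wrong until a re-bake.
baked_brightness = lambert(normal, light_from_above)      # 1.0, frozen into the model

# "Real-time": recomputed every frame, so moving the light just changes the
# next frame's result with no re-bake needed.
dynamic_brightness = lambert(normal, light_from_side)     # ~0.71, updates automatically

print(baked_brightness, round(dynamic_brightness, 2))
```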
AF: To pull back to a high level, what breakthroughs in the demo are getting developers like yourself so excited?
ET: Oh my god.
For AR/VR development, especially as it pertains to my highly articulated and complex VR paintings, optimization and polygons are a huge limitation for where my models can exist. My paintings are often stress tests for AR apps, and I’ve caused developers plenty of anxiety because of how “giant” my paintings are in terms of triangle count!
When I create AR/VR backgrounds, props, and environments, I have to design while still keeping graphics limitations in mind. Because of this, I have to use tricks—like 2D sprites, flattened skyboxes, and old theater and movie tricks—to create visually stunning pieces that can also be integrated into apps. It’s navigating the balance between amazing graphics and technical restraints and finding a way to design within these limits.
When I paint just for paintings, I don’t have to worry about polys. I can just make and fully express myself. I’ve been painting in VR with the knowledge that one day I won’t be limited by hardware and graphics optimization. I’ve been waiting years for this to happen! This means my paintings can soon be experienced in their full glory without their quality being lowered or decimated! I can just make!
Of course, not every device will support this right away, but it does show that this kind of capability is coming. And that’s super exciting.
AF: Briefly, given your work in augmented and virtual reality, is there anything reflected in this demo that has implications for fully immersive VR experiences in the future?
ET: Even a casual viewer can already see how realistic the UE5 demo looks. It’s stunning. This has huge implications for AR/VR creation and the industry!
Historically, learning to model things in 3D has required years of mastering tools like Maya and ZBrush. It meant learning a 3D modeling program on a 2D screen. Now, with more VR creativity tools, people can create 3D models in 3D, within very intuitive sculpting and painting environments. These models can now be exported directly into game engines, and this will evolve the aesthetics of future games as well.
I believe that the more creators have access to emerging technologies, the more diverse that landscape will become. It’ll be an opportunity for new and unique points of view, and hopefully more problems will be solved with diverse voices.
For my own work, I’m excited at the prospect of concentrating more on the artistic vision and execution and less on triangle counts. I’m sure each artist has their own reasons to be extremely excited at this news! Less troubleshooting, more creation!
More of Estella’s work can be found here.
Image credit: Epic Games
"Demo" - Google News
May 24, 2020 at 02:00PM
https://ift.tt/2A9cf92
Epic Games' Insane Video Game Graphics Demo Explained in Simple Terms - Singularity Hub
"Demo" - Google News
https://ift.tt/35q1UQ2
https://ift.tt/2Wis8la
No comments:
Post a Comment