The new Genie 2 model needs only a single piece of concept art to generate a game world. The neural network itself decomposes the image into components, animates the character, and simulates light sources.
The game engine works in the manner of GameNGen: the neural network writes no code and models no three-dimensional space; it simply generates a short video stream in real time, conditioned on the player's keyboard input.
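To make the idea concrete, here is a minimal sketch of such an action-conditioned autoregressive loop. Genie 2's actual architecture and API have not been published, so the `WorldModel` class, the `next_frame` method, and the `read_action` callback below are all hypothetical placeholders: the point is only that each frame is predicted from recent frames plus the latest keypress, with no game code or 3D scene anywhere.

```python
import numpy as np

class WorldModel:
    """Hypothetical stand-in for the trained world model.

    The real model is a learned video generator; this stub just
    returns random pixels so the loop structure is runnable.
    """

    def __init__(self, height=360, width=640, context_len=16):
        self.height, self.width = height, width
        self.context_len = context_len  # how many past frames the model conditions on

    def next_frame(self, frames, action):
        # A real model would run inference here, conditioned on the
        # recent frames and the player's latest action.
        return np.random.randint(
            0, 256, (self.height, self.width, 3), dtype=np.uint8
        )

def play(model, first_frame, read_action, num_steps=240):
    # Autoregressive loop: generate a frame, append it to the context,
    # repeat. The "game state" is nothing but the frame history.
    frames = [first_frame]
    for _ in range(num_steps):
        action = read_action()  # e.g. "W", "A", "SPACE"
        frame = model.next_frame(frames[-model.context_len:], action)
        frames.append(frame)
    return frames

if __name__ == "__main__":
    model = WorldModel()
    # The single concept-art image that seeds the whole world.
    concept_art = np.zeros((360, 640, 3), dtype=np.uint8)
    video = play(model, concept_art, read_action=lambda: "W", num_steps=10)
    print(f"Generated {len(video)} frames")
```

This setup also explains the failure modes described below: because the world exists only as a window of recent frames, anything that scrolls out of the context can be "re-imagined" differently when it comes back into view.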
Google DeepMind trained Genie 2 on footage of real games, including No Man's Sky, Valheim, and Teardown, which is why the generated video faithfully reproduces the lighting artifacts and shadow-rendering problems typical of last-generation games.
There are concerns about the implications for intellectual property. DeepMind, a subsidiary of Google, has unrestricted access to YouTube, and Google has previously suggested that its Terms of Service permit using YouTube videos to train models. The question is whether Genie 2 actually creates unauthorized copies of the video games it has watched.
The neural network often hallucinates on the fly: sometimes the walls of a house turn into a cave, and a character descending a mountain drops the snowboard and starts running down the slope on foot. In first-person games, ghostly silhouettes reminiscent of Skyrim NPCs appear in static scenes.
Google does not specify the resolution or frame rate of the working prototypes, and the maximum duration of a game demo does not exceed 60 seconds.
It seems to me that such a model will mainly help people prototype ideas for locations or characters.
This text was written using Gemini.