SmartViewpointComputationLib is a C++ library that computes the best viewpoint in a 3D scene satisfying a set of visual properties, such as the size, visibility, or angle of selected objects in the scene. For example, suppose you have a building floor scene like this one:
and you want a viewpoint that shows the monitor, mouse and keyboard in the bottom-left room (provided each of these is an individual object in the scene). Using methods in the library, you declare that you want each object to be inside the viewpoint's view volume, as unoccluded as possible, and preferably of a certain size on screen. The library, in a few tens of milliseconds, will return a result like this (AABBs added just to highlight the objects):
Other kinds of properties include the orientation of objects with respect to the viewpoint (i.e., the camera angle in photography terms), the framing of an object in a specific part of the screen, and the relative on-screen position of two objects (e.g., one to the left of the other). In other words, you can find viewpoints much as you would compose a picture, leaving to the library the task of positioning and aiming the camera.
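To give an idea of what such a declarative style looks like, here is a minimal, purely illustrative C++ sketch. The names used below (ViewpointProblem, add, the property identifiers) are assumptions made up for this example, not the actual SmartViewpointComputationLib API, which is documented in the BitBucket repository.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Purely illustrative: these types and names are assumptions for this sketch,
// not the actual SmartViewpointComputationLib API.
struct Property {
    std::string object;  // name of the scene object the property refers to
    std::string kind;    // e.g. "inViewVolume", "occlusionMax", "projectedSize"
    double target;       // desired value (fraction of screen, occlusion level, ...)
    double weight;       // relative importance when properties conflict
};

struct ViewpointProblem {
    std::vector<Property> properties;
    void add(const std::string& obj, const std::string& kind,
             double target, double weight) {
        properties.push_back({obj, kind, target, weight});
    }
    // A real solver would then search for camera position, orientation and FOV
    // that maximize the overall satisfaction of the declared properties.
};

int main() {
    ViewpointProblem problem;
    // Each object must lie inside the view volume.
    problem.add("monitor",  "inViewVolume", 1.0, 1.0);
    problem.add("mouse",    "inViewVolume", 1.0, 1.0);
    problem.add("keyboard", "inViewVolume", 1.0, 1.0);
    // Objects should be as unoccluded as possible ...
    problem.add("monitor",  "occlusionMax", 0.0, 0.8);
    // ... and the monitor should cover roughly 20% of the screen.
    problem.add("monitor",  "projectedSize", 0.2, 0.5);

    for (const auto& p : problem.properties)
        std::cout << p.object << " : " << p.kind << " -> " << p.target << '\n';
    return 0;
}
```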
The library can be easily connected to any rendering engine, and comes with bindings for Ogre. It was written by two authors of this blog, Roberto Ranon and Tommaso Urli.
Source code and binary demos are hosted on BitBucket.
The Workshop on Intelligent Cinematography and Editing (WICED) aims to bring together an interdisciplinary group of researchers and industrial experts from fields including 3D graphics, artificial intelligence, visualization, interactive narrative, cognitive and perceptual psychology, computational linguistics, computational aesthetics, visual effects and others who are working on the many related aspects of automatic camera control.
In a previous post, we talked about motion-tracked camera controllers for shooting CG movies. The Director’s Lens system, recently presented at ACM Multimedia 2011 and at the ParisFX conference, combines such camera controllers with state-of-the-art automatic virtual camera control algorithms to provide smart assistance while shooting.
More specifically, the system is able to compute, cluster and display (step 1 in the figure) a large set of viewpoint suggestions for starting a shot. The viewpoints follow classical cinematic conventions in screen composition and ensure the visibility of the actions occurring in the scene. The filmmaker browses the suggestions to explore the many cinematic possibilities, selects one, refines it by aiming the motion-tracked camera, and starts shooting (step 2 in the figure).
Furthermore, the viewpoint suggestions computed for subsequent shots are adapted to what has already been filmed, taking into account continuity of gaze, continuity of motion, the line of interest, and the filmmaker’s previous choices.
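As an illustration of one of these continuity rules, the sketch below checks the line of interest (the 180-degree rule): a candidate viewpoint for the next shot should stay on the same side of the line joining the two main subjects as the camera of the previous shot. This is a generic, self-contained example, not code from the Director’s Lens system.

```cpp
#include <iostream>

// Minimal sketch of the line-of-interest (180-degree) rule, not Director's Lens code.
struct Vec2 { double x, y; };  // positions on the ground plane

// Sign of the cross product tells on which side of line AB the point P lies.
double sideOfLine(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// A candidate camera respects the rule if it lies on the same side of the
// subjects' line as the camera used for the previous shot.
bool respectsLineOfInterest(const Vec2& subjectA, const Vec2& subjectB,
                            const Vec2& previousCamera, const Vec2& candidateCamera) {
    return sideOfLine(subjectA, subjectB, previousCamera) *
           sideOfLine(subjectA, subjectB, candidateCamera) > 0.0;
}

int main() {
    Vec2 actorA{0, 0}, actorB{4, 0};
    Vec2 prevCam{2, 3};  // previous shot was taken from above the line
    std::cout << respectsLineOfInterest(actorA, actorB, prevCam, {1,  2}) << '\n';  // 1: same side
    std::cout << respectsLineOfInterest(actorA, actorB, prevCam, {1, -2}) << '\n';  // 0: crosses the line
    return 0;
}
```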
Director's Lens workflow
You can read more about the technical details of the system here. The Director’s Lens, a patent-pending work, is a joint project between some of the authors of this blog (Christophe Lino, Marc Christie, and Roberto Ranon) and William Bares from Millsaps College, USA.
At the Center For Computer Games Research, we are conducting an experiment to evaluate the impact of different types of camera control in computer games. To participate, follow this link! All you need is a Windows or Mac computer, a mouse and approximately 15 minutes of your time.
Several years have passed since the first 3D video games made their appearance on the gaming scene. In the same period, many virtual camera control approaches have been proposed in the research literature (take a look at our tagged bibliography for some examples); however, their adoption in computer games is very limited (if not completely lacking).
So how did camera control evolve in this period of time? Let’s take a look at some outstanding examples of virtual camera control systems in early (i.e. from the '90s) and contemporary 3D computer games, to try to figure out what changed. This is not intended as an extensive survey of virtual camera control in video games (e.g. we decided to avoid first-person shooters because, in those games, the beauty of the camera is usually limited to special effects and cinematic scenes) and, if you see anything you can add or that you disagree with, you are welcome to comment.
Back in the '90s
The first 3D game made its way into players’ houses in 1982[ref]. However, it wasn’t until the early-to-mid '90s, with games such as Alone in the Dark or Quake, that 3D computer games became popular. These games introduced a new problem: camera control.
Where should the camera be placed? Should the player control it? If so, how? These questions arose quickly in the industry. In first-person shooters the answers were clear, but what about platform games? Adventure games? Alone in the Dark, a survival horror game released in 1992 by Infogrames, is a remarkable attempt to answer these questions. The game exploits a set of hand-placed cameras to portray the character’s actions while conveying a sensation of suspense and expectation to the player (take a look at the video below). This solution (which is the same for almost the whole Alone in the Dark saga and was subsequently adopted by Resident Evil) makes it possible both to portray the scene from an evocative point of view and to do so at low computational cost (backgrounds can be pre-rendered). On the other hand, the fact that the camera is fixed can lead to occlusions if the character is not constrained to visible areas of the scene.
To a certain extent, this simple virtual camera control approach can be considered a 3D adaptation of the background-switch technique used in 2D graphic adventures; think of LucasArts’ Monkey Island or Indiana Jones games, where the background is updated as the character walks out of the frame.
A more challenging task was translating the mechanics of games such as Super Mario Bros or Sonic the Hedgehog, since in these games the camera had to move to follow the events. One computer game that became famous for its virtual camera control system is Super Mario 64[ref]. The game implements two switchable camera modes: the Lakitu-mode (automatic), in which the camera system is embodied by a helper character who tries to follow Mario in his adventures while maintaining interesting views of the gameplay, and the Mario-mode (assisted), in which the camera follows the main character at a short distance and the user can adjust the camera angle using a dedicated joystick on the controller. A special feature of the Mario-mode allowed the player to stop and get a 360-degree view of the surroundings. While this pioneering camera system has often been criticized, mainly because of occlusion and collision issues and a lack of aesthetics in camera placement, it proved quite effective in supporting the novel gameplay of Super Mario 64. In particular, the Lakitu-mode has been a fundamental step for most virtual camera systems to come.
These two games certainly represent two precursors of the 3D era and set standards for camera control techniques for the decades to come. Resident Evil and Silent Hill followed in the footsteps of Alone in the Dark, and hundreds of games developed since Super Mario 64 have employed a camera control system inspired by the one in the original Nintendo game. But how do current games tackle the camera control problem? Do we see much improvement? Let’s try to answer these questions by analyzing the cameras of two ideal “successors” of the aforementioned games: Heavy Rain and Super Mario Galaxy.
Camera control in the Harry Potter era
Heavy Rain is an action/adventure computer game created by Quantic Dream in 2010. The game is modeled after film noir, featuring four main characters involved in the mystery of a serial killer who drowns his victims during intense rainfalls. The player explores the different environments composing the virtual world and interacts with the other characters by performing actions highlighted on screen. Much of the game experience depends on the cinematographic style of the camera: its parameters are carefully tailored to each location of the game to generate the feeling of being a character in a thriller movie. The resulting visuals are of very high quality; however, to achieve this result, the player’s movements are very limited and each location of the game has a limited size. In the final analysis, the Heavy Rain camera system is a refined evolution of the cameras seen in Alone in the Dark.
Super Mario Galaxy is a Nintendo Wii game released in 2007 as the latest successful sequel in the Mario series. The setting consists of a number of tiny planets, each one with its own peculiar gravity provided by a physics engine. The virtual camera system is, at least apparently, very simple: the camera (a third-person view) is always aligned with a fixed horizon. Since the main character can circumnavigate planets (sometimes finding himself upside down), the camera is able to perform a rapid turn to align with the character’s natural horizon. On the whole, the camera system is an evolved (read: fixed) version of the Lakitu-mode in Super Mario 64. Occlusions are handled with a hack, by giving the character an outer stroke (visible through objects) when it is occluded; collisions never happen because of the camera altitude and the way the planetary scenarios are built. Some objects (e.g. big buildings) can force the camera to follow the character from the side by performing pans. The gameplay is often enriched with pre-built cinematic scenes to support the game plot. Since the Wii controller carries only one joystick, which is used to drive the main character, there is no equivalent of Super Mario 64’s Mario-mode; however, this doesn’t seem to be a real limitation, and overall the gameplay is very well supported by the camera.
Lookin’ back on the track
Virtual cameras in today’s video games are strikingly similar to the ones used in games from the nineties, the main differences being special effects (e.g. trembling cameras, depth of field, motion blur, etc.) that make games look more like movies, and some facilities that help the player control the camera in particular situations (e.g. automatic occlusion avoidance, often limited to games employing geometrically simple scenes).
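For readers unfamiliar with how such occlusion-avoidance facilities typically work, here is a generic sketch of the common "raycast and pull in" approach: cast a ray from the character toward the desired camera position and, if scene geometry blocks it, move the camera just in front of the first occluder. This is not taken from any particular game, and the Raycast callback merely stands in for whatever scene query a given engine provides.

```cpp
#include <cmath>
#include <functional>
#include <iostream>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
    double length() const { return std::sqrt(x * x + y * y + z * z); }
};

// Stand-in for the engine's scene query (an assumption, not a real API):
// returns true and sets hitDist if geometry blocks the segment of length
// maxDist starting at origin along the normalized direction dir.
using Raycast = std::function<bool(const Vec3& origin, const Vec3& dir,
                                   double maxDist, double& hitDist)>;

Vec3 resolveCameraPosition(const Vec3& character, const Vec3& desiredCamera,
                           const Raycast& raycast, double margin = 0.3) {
    Vec3 toCamera = desiredCamera - character;
    double dist = toCamera.length();
    Vec3 dir = toCamera * (1.0 / dist);

    double hitDist = 0.0;
    if (raycast(character, dir, dist, hitDist))
        return character + dir * (hitDist - margin);  // stop just in front of the occluder
    return desiredCamera;                             // line of sight is clear
}

int main() {
    // Stub query: pretend a wall blocks the view 2 units behind the character.
    Raycast stub = [](const Vec3&, const Vec3&, double maxDist, double& hitDist) {
        hitDist = 2.0;
        return maxDist > 2.0;
    };
    Vec3 cam = resolveCameraPosition({0, 0, 0}, {0, 0, 6}, stub);
    std::cout << "camera pulled in to z = " << cam.z << '\n';  // 1.7
    return 0;
}
```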
So, what has really changed in virtual camera control over the last 20 years? In our opinion: not much, at least not in the video game industry. The reason for this lack of improvement is not clear. Maybe the approaches developed in academia are not considered game-designer-friendly, maybe their computational cost is too high for today’s gaming hardware, or maybe there is simply no real need for more sophisticated virtual camera control in video games. From this perspective, a question comes easily to mind: is academic virtual camera control a solution looking for a problem? Have your say.