In this Art Asset Overview, I demonstrate some experiments with 2D/3D art: assets that are painted by hand, but have some 3D structure to support layering and parallax effects. We were thinking this might be an appropriate style for the Overgrowth “metagame”, where the player makes higher-level decisions, compared to the concrete moment-to-moment decisions of stealth and combat.
I’m experimenting with bringing some hand-painted assets into the background of the arena scenes, to visually distinguish important gameplay areas from cosmetic background elements, and to bridge the gap between the metagame and the combat. You can judge for yourself how well the idea is working by checking out the video:
Be sure to watch it in HD!
Next time I will show more work on the metagame UI screens! Follow our Twitter feed at @wolfire for realtime updates!
Receiver was finally released on Steam today -- check it out here! There is a one-week launch sale bringing the price down to $3.99. If you already have Receiver or Overgrowth, then you can claim your Steam key on your Humble Store download page (linked from your purchase email).
Whenever a game is greenlit, there are always two questions that come up over and over: "How did it take THIS LONG for this game to get greenlit?", and "How did THIS game get greenlit?" In this post I will try to answer both questions about Receiver!
We put Receiver up on Greenlight the day it was announced, because … why not? There was no fee yet, and we had no luck getting it on Steam via the traditional method. Receiver already had a pretty significant following thanks to our supportive and vocal community, and great ‘Let’s Play’ videos like this one from Robbaz. So, when we got set up on Greenlight, it didn’t take too long to reach the top 20!
Our first thought was that once we’re in the top 20, it’s just a matter of time until we get picked. But that’s not necessarily true! There are great new games joining Greenlight every day, and some of them shoot straight to the top. So, when we were #11, and the top 10 got greenlit, we were briefly in first place! But sure enough, along came 10 super-hot games that pushed us right back to #11 right before the next batch was picked. This happened over and over, until it seemed like Receiver might never get on Steam.
Finally, we got a surge in votes following an Idle Thumbs podcast about Receiver, and that kept us solidly in the top 10, and we got greenlit in the fifth batch! It’s tempting to say that this final surge is what got Receiver greenlit, just like a tie-breaking point wins a sports game, but ultimately every vote is just as important as any other when the total is what matters. In the end, Receiver got Greenlit by accumulating 54,266 "Yes" votes. Let’s consider this graph of daily votes over time for Receiver:
The first thing we see is a massive spike at the very beginning. I suspect this is partly because we got on Greenlight so early, when there were so many users rating games. If you look up Steam Greenlight on Google Trends, you can see that there was a big spike in interest when it was first announced in September 2012, followed by a steep decline. Since only "yes" votes matter, it was a big win to be there for that rush of voters! For the remaining traffic, I don’t have access to any extra analytics data for the Greenlight votes themselves, but I do have access to the stats for "wolfire.com/receiver"! That traffic seems to correlate fairly well with the Greenlight traffic, so I marked some corresponding spikes in the picture above.
There is a pretty significant noise floor of apparently random spikes, from small threads on Reddit, Tigsource, and even TVTropes. However, the first large spike came from a Penny Arcade article about a different game jam, mentioning Receiver as a notable example of what can be done in seven days. Our next significant spike came when we announced Linux support, which got us a lot of traffic from sites like Ubuntu Vibes, and made us a useful example in discussions of the Unity game engine’s Linux support. When we released Desperate Gods, that brought some traffic back to Receiver, because whenever anyone mentioned that game, it made sense to mention our previous work as well.
Finally, the case of the Idle Thumbs podcast is the most interesting, because as an audio stream, it could not drive any direct traffic. However, it did seem to kick off a lot of discussion of the game across the Internet, which manifested as a large spike in traffic from seemingly unrelated viral sources. First there was a spike in Twitter links (blue), then a spike from the SomethingAwful forums (green), and finally, a spike in Google searches (red) along with a video by Giant Bomb. I don’t really know how all these events are related, but they all happened very close together!
To answer the first question we started with, Receiver took so long to get Greenlit because there’s very stiff competition on Greenlight. There are serious contenders joining every day, and only a handful make it through each month, so you can be accumulating thousands of votes, and still see your rank decrease.
To answer the second question, Receiver got Greenlit for several reasons, including help from the faithful Wolfire community that has gathered over the years! Another important factor is that Greenlight voters are starving for novelty, and satiated with polish. Why else would they be spending their time voting on Greenlight instead of playing the latest AAA games? We put a lot of novel ideas in Receiver, and while it wasn’t the most polished game on Greenlight, apparently it was what voters were looking for! Here is the design overview video, for anyone who hasn’t seen it yet:
Be sure to watch it in HD!
Thank you everyone who voted for Receiver on Greenlight, and I hope you enjoy adding it to your Steam library!
This talk was really ambitious, attempting to discuss the entire context of animation and story, rather than the mechanics of animation itself. I didn’t take many notes at this point because my phone ran out of battery, so this is largely from memory. I hope the speakers forgive me if I mixed up or left out anything important!
Designing a Performance
Ed Hooks, Acting for Animators
Mike Jungbluth, Senior Animator, Zenimax Online
Ed Hooks is an actor who has appeared on screen in over 100 different roles, and is also a renowned acting teacher, famous for his “Acting for Animators” workshops and book of the same name. Mike Jungbluth is a senior animator at Zenimax Online, and organized the whole animation bootcamp. He has worked on a number of games including Elder Scrolls: Online and Call of Duty: Black Ops, and co-hosts the Reanimators podcast.
What is the point of animation? One of the most important goals of animation is to create performances that elicit empathy from the audience towards the character. It’s certainly necessary to succeed at the illusion of life, but that’s only the first ingredient for an effective performance by this metric. A second ingredient is distance.
To evoke empathy, the player must have a certain amount of distance from the character -- it’s not possible to empathize with yourself! The speakers brought up one of the first scenes in Wall-E, when he watches a movie of two lovers having fun and holding hands. He wants to experience the same feeling himself, and looks for someone to hold his hand, but there is nobody else, so he tries to hold his own hand. It’s sad, because it doesn’t work: he can’t connect with himself in that way. However, we, the audience, empathize with him, because we have the distance needed to establish that kind of connection. Additionally, we see him empathizing with the human characters he sees on the screen, and his own empathy opens the door for our own.
Games have a special challenge because we are often controlling the character that the writers want us to empathize with, and like Wall-E holding his own hand, it just doesn’t work. We don’t have distance. Games often try to establish this distance through cutscenes, but that is usually clumsy and not very effective -- we need more real-time and in-game tools to enable empathy.
So how do we create an empathetic NPC? We need them to have goals and personality and all of the other characteristics of an actual character, no matter how simple. In Skyward Sword, there is a shopkeeper that bounces excitedly and claps his hands as you approach, eager to make a sale. If you walk away, he slumps down, defeated, and slowly trudges back to his bench. Two animations, and a state change, and we have a more empathetic NPC than most!
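The shopkeeper example boils down to a tiny state machine: two animations and a state change driven by player proximity. Here is a minimal sketch of that idea -- all names, thresholds, and animation labels are my own illustrations, not from Skyward Sword's actual code:

```python
class Shopkeeper:
    """NPC whose idle behavior reflects a simple emotional state."""

    EXCITED_RANGE = 5.0  # meters; assumed threshold for noticing the player

    def __init__(self):
        self.state = "idle"

    def update(self, distance_to_player):
        """Return an animation request when the state changes, else None."""
        if distance_to_player < self.EXCITED_RANGE:
            if self.state != "excited":
                self.state = "excited"
                return "play: bounce_and_clap"   # eager to make a sale
        else:
            if self.state == "excited":
                self.state = "dejected"
                return "play: slump_and_trudge"  # player walked away
        return None  # keep playing the current animation
```

That's the whole trick: the character appears to have goals and feelings, but the implementation is just a threshold and two transitions.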
Similarly, the Big Daddies in Bioshock are usually peaceful, but they become enraged if you mess with the little sisters, and that is enough to make them somewhat successful as empathetic NPCs. A common misunderstanding is that we have to understand why an emotion is being felt in order to empathize with it. That is irrelevant! It’s the emotion itself that we respond to, and the transitions between them. If someone is happy and then receives a phone call, and suddenly breaks down in tears -- we can empathize. The words themselves are not important. Characters, emotions, and transitions are the keys to empathy, not plot details.
Instead of focusing on plot points, we should focus on showing character through transitions. If a character is going from A to B, the interesting part is how they move -- not necessarily where they are going or where they are coming from. Do they go in a straight line at a uniform speed? Do they zigzag around, and get lost? Do they stop at each point to take in the sights?
So we’ve discussed NPCs a bit, but what about the player character itself? How do we achieve the distance we need for empathy? One technique is to restrict the choices of the player so that they match the choices of the character. In Metal Gear Solid 4, there are key moments where Snake (and the player) has no choice, such as this scene where he must move through a microwave hallway, while being slowly cooked alive. The plot itself might be ridiculous (why would anyone ever build a microwave hallway, let alone enter one?), but we can empathize with his pain, and with the act of sacrifice for a greater cause.
This confusion about the importance of plot results in games that have far too many words. In games, characters tend to have conversations, where they exchange facts. In good stories, conversation is rare; instead there are negotiations: exchanges of power. The words themselves are not important for good acting. Dialogue itself is very often redundant and unnecessary. Again, Wall-E very effectively established empathetic characters and dramatic scenes with almost no words at all. Shadow of the Colossus wordlessly created a strong relationship between the player and their horse just through time and cooperation.
Games tend to rely heavily on the techniques of film, and ignore the tools that are unique to their own medium. One such tool is “adrenaline moments”, putting the player in a situation that is important to their future. This is not possible in games that rely heavily on authored moments, because the experience is not unique, and there is only one possible outcome. However, mechanics and systems can allow for adrenaline moments that are unique to each player, like if you spy a creeper in your carefully-built Minecraft house, or a swarm of zombies in DayZ. We need to start crafting better scenarios that are appropriate for games if we want to achieve meaningful, empathetic performances -- it doesn’t matter how skillfully we craft animations if they are undermined by the context.
There was a lot more, but that’s the best I can do from memory of this talk. Sorry to Ed Hooks and Mike Jungbluth if I misrepresented your content! This was the last talk of the animation bootcamp, but I’m really looking forward to the next one. If the stars align, I might even be able to participate myself somehow, maybe discussing practical uses for procedural and physics-based animation techniques.
Giving Purpose to 1st Person Animation
Ryan Duffin, Senior Animator, EA/Danger Close
Ryan has animated games since 2001, including first-person weapon animation for Battlefield 3, Killzone 2, and Medal of Honor: Warfighter.
He began by demonstrating how first-person animations look in third-person, showing footage from games like Mirror’s Edge and Dead Island. They look bizarre, with very little center-of-mass movement, and cartoony, spider-like arm movements. However, that is perfectly acceptable, because it looks great in first person, and that’s what the player sees! Animations that look good in third-person typically do not work in first-person: they are heavily distorted by perspective, low in detail, and poorly staged. We almost always need very different animations for first person and for third person.
The first-person view is all about giving the player feedback -- communicating which weapon is equipped, which weapons are more or less powerful, what you are doing with them, whether they are ready to fire. Like the rest of the HUD, the first-person animations are the window through which the player views the world. They must be communicative as well as visually pleasing. You can think of the different first-person elements as actors on a stage, and figure out how to block them to be most visible and communicative to the player, while occluding as little of the game world as possible.
When experimenting with gun placement for Medal of Honor: Warfighter, and surveying gun placement in other games, he found that the most pleasing weapon location corresponded to the spiral formed by the Golden Ratio as applied to the screen. By lining up all the game’s weapons according to this rule, he achieved a consistent and intentional look, instead of just placing them arbitrarily on the screen.
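One common way to read that placement rule (this is my assumption, not the exact formula from the talk) is to put the weapon's focal point at the golden-section division of the screen. A quick sketch:

```python
# Golden-section point of a screen: each axis divided in the ratio 1 : 1/phi.
# This is an illustrative reading of the rule, not Ryan Duffin's actual method.
PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.618

def golden_point(width, height):
    """Screen-space point dividing each axis in the golden ratio."""
    return width / PHI, height / PHI

x, y = golden_point(1920, 1080)  # roughly (1187, 668) on a 1080p screen
```

Measured from whichever corner the weapon sits in, aligning every weapon's visual anchor to the same such point is what produces the consistent look he described.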
The first person view can be used to convey feelings as well, such as pain, strain, exertion and weight. For example, the scene in Bioshock where the player character first encounters plasmids is conveyed mostly through the first-person hand animations. Similarly, the pain animations in Far Cry 2 and 3 really emphasize the damage that the player is receiving, and discourage the player from getting injured. The faces in Wolfenstein and Doom are early examples of this idea.
Modern shooters often tweak the timing details of first-person animations to convey subtle differences between the weight and handling of different weapons. They have different “aim down sights” timing, reload timing, weapon switch timing, and movement speed multipliers. Different weapon quality can also affect timing, so an old rusty gun might be more difficult to reload than a new, well-maintained one. A skilled marine might also reload in one fluid motion, while an untrained civilian might fumble and take longer.
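Those timing tweaks usually live as per-weapon tuning data rather than code. Here is a hypothetical sketch of such a structure -- the field names and numbers are invented for illustration, not taken from any shipped game:

```python
from dataclasses import dataclass

@dataclass
class WeaponTiming:
    aim_down_sights_time: float   # seconds to raise the sights
    reload_time: float            # seconds for a full reload
    switch_time: float            # seconds to swap to this weapon
    move_speed_multiplier: float  # applied to player speed while equipped

    def with_condition(self, condition):
        """Scale handling times by weapon condition (1.0 = pristine).
        A rusty gun (condition < 1.0) is slower to handle."""
        slow = 1.0 / max(condition, 0.1)
        return WeaponTiming(
            self.aim_down_sights_time * slow,
            self.reload_time * slow,
            self.switch_time * slow,
            self.move_speed_multiplier,
        )

smg = WeaponTiming(0.20, 1.8, 0.5, 0.95)
rusty_smg = smg.with_condition(0.5)  # twice as slow to handle
```

The same scaling hook could model the skilled-marine-versus-fumbling-civilian difference by deriving the multiplier from the character instead of the weapon.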
Reference is key for believability and accuracy -- you should never make something unrealistic by accident! Know the rules so that you break them with intention. How does a gun work? Keep track of when a bullet is already in the gun’s chamber, whether your character is using good trigger discipline, where your magazines are coming from. In Battlefield 3 they tried to increase the realism of reloading by adding vest foley sound effects when the player retrieves new magazines.
Sometimes you really do have to break the rules though. The highest priority is to show the player what is happening, so everything has to be in front of the camera. Nobody would really reload right in front of their face like that, and the first-person camera FOV would not be able to see a weapon that is fired from the hip, but we have to compromise on realism to keep all the important action on screen.
There are special technical challenges when creating a rig for first-person animation, because objects are switching ‘spaces’ so often. They are frequently attached and detached from hands, guns, and the player’s body. The rig must be set up so this can happen smoothly and freely, so a pin can start out attached to a grenade, then be pulled out by the left hand, and finally discarded into world space as a physics object. Similarly, world-space objects must have the ability to interact with the first-person view, like if a dog jumps out and latches onto your arm for a QTE, or an enemy player stabs you in the face with an assassination animation.
It is also possible to animate the camera itself as part of first-person animations. You must be careful with these effects so you don’t induce motion sickness, but it can be really useful as punctuation for forceful animations. You can jar the screen to the side slightly as you ram a magazine home, or shake and roll it around to simulate the massive recoil of a .50 cal rifle shot. Camera movement interferes with aiming, so it’s best to save disruptive effects for when the player can’t aim anyway, such as when reloading, striking with a melee weapon, or sprinting.
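The gating idea is simple enough to sketch: only let disruptive camera animation play at full strength while the player is in a state where aiming is impossible anyway. State names here are illustrative, not from any particular engine:

```python
# States during which the player cannot aim, so camera shake is harmless.
DISRUPTIVE_OK_STATES = {"reloading", "melee", "sprinting"}

def camera_kick_scale(player_state, base_kick):
    """Return how strongly a camera-shake effect should play."""
    if player_state in DISRUPTIVE_OK_STATES:
        return base_kick      # full effect; aiming is impossible anyway
    return base_kick * 0.2    # heavily damped so aim isn't disturbed
```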
What about first-person body awareness, so you can look down and see your legs? There are several approaches to this, but usually you will need a special system that works similarly to the third-person character animation, but has special animations adapted to first-person. It’s possible to avoid this issue by only showing the legs when needed, such as the kick in Duke Nukem 3D, or the Strogg transformation in Quake 4. There are also special details to watch out for, such as shadows. You don’t want to use the actual first-person animations for shadows, because they look weird in third-person, but you also need the shadows to line up nicely with the first-person legs.
I hope you liked this summary! The last one is all about the big picture, how to create context that allows for meaningful animations.
In this series of posts, I summarize my takeaways from some of the GDC 2013 sessions for anyone who couldn’t be there, starting with the animation bootcamp sessions on the first day. These are reconstructed from notes and memory, and may not exactly match what the speakers said.
Animating the 3rd Assassin
Jonathan Cooper, Animation Director, Ubisoft Montreal
Jonathan has been animating games for 13 years, including lead roles on Mass Effect 1 & 2, Deus Ex: Human Revolution, and Assassin’s Creed 3, and recently won the DICE award for “Outstanding Achievement in Animation”.
What is Assassin’s Creed? It was originally going to be a new Prince of Persia game, but ended up becoming its own IP, focusing on efficient parkour movements through realistic environments instead of flashy freerunning stunts through designed puzzles. The movement of the assassin is meant to be practical and smooth, with no interruptions from backflips or other tricks.
Every Ubisoft game is pitched to the executives using a 7-minute “target video” showing how the game would look in action, so they put together a video demonstrating the key features they wanted in Assassin’s Creed 3 -- such as tree navigation, rope darts, and deep snow. It ended up being greenlit, but the executives said the assassin in the video was too light and airy. They wanted him to feel heavier and tougher, so that was one of the major animation challenges this time around.
They created many thumbnails and character concepts to figure out how this stronger assassin should look. They decided to remove a lot of the armor and other extra clothing that the old assassins wore, and replace it with muscle mass, giving him a broader, but more streamlined look. He has new animations to make his movements look like they use more effort -- in particular, he has a “sprint propulsion” animation at the beginning of every run which emphasizes how much effort he puts into overcoming his significant inertia.
His attacks and assassinations have more follow-through to show how powerful they are, which at first made them seem less responsive. The team solved this problem with a brute force approach, by animating every possible transition in advance. For example, there is one assassination that starts with a jump off a box, and then ends with a run to the right. Or another that starts with a walking approach from behind, and ends with a walk to the left. It all added up to hundreds of animations just for the wrist-blade assassination move, in a detailed matrix including all the possible start and end states.
Many of the animation problems were solved with similar parametric matrices to blend between a large set of animations. For example, there were many jump animations for different heights and distances, and the system would interpolate between the closest ones for any particular jump. They even used a pneumatic jump pad to physically launch stuntmen into the air for various heights and distances, and used those performances as mocap foundations to add more believability to the jump animations.
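A one-dimensional version of that interpolation idea can be sketched quickly: clips are authored at a few sample distances, and any requested jump blends the two closest clips. Clip names and distances below are made up for illustration, and Ubisoft's real system blends across multiple parameters at once:

```python
import bisect

# (jump distance in meters, clip name), sorted by distance.
JUMP_CLIPS = [(1.0, "jump_short"), (3.0, "jump_medium"), (6.0, "jump_long")]

def blend_jump(distance):
    """Return (clip_a, clip_b, weight_of_b) for the requested distance."""
    distances = [d for d, _ in JUMP_CLIPS]
    i = bisect.bisect_left(distances, distance)
    if i == 0:  # shorter than any authored clip: clamp
        return JUMP_CLIPS[0][1], JUMP_CLIPS[0][1], 0.0
    if i == len(JUMP_CLIPS):  # longer than any authored clip: clamp
        return JUMP_CLIPS[-1][1], JUMP_CLIPS[-1][1], 0.0
    (d0, a), (d1, b) = JUMP_CLIPS[i - 1], JUMP_CLIPS[i]
    t = (distance - d0) / (d1 - d0)  # linear blend weight
    return a, b, t
```

A 2-meter jump, for instance, plays the short and medium clips blended half-and-half.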
The technical animation team experimented with many different physics-based and procedural animation techniques, but most of them were rejected -- they created this odd effect in which the characters mindlessly react to the environment, and never anticipate anything. They ended up only using a few procedural effects. Here are two examples: First, there is a detailed lean and foot-placement system for turning while running, which was created procedurally, and then fine-tuned by comparing it to real motion capture data. Second, Assassin’s Creed games have always used IK for the detailed hand and foot placement in climbing, but now there is an additional procedural layer for overall body movement, used for reacting to jumps and landings.
One of the most important secrets of Assassin’s Creed animation has always been the secondary animation -- such as the cloth simulation on the cape, and the spring physics on the bits of equipment attached to the character. Since these items are driven by the physics system instead of directly keyframed in the animations, they really help smooth out animation transitions and increase the perceived fidelity of the game’s animation. That is why the animation in the assassin levels looks smoother and more detailed than the animation in the Desmond levels.
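The spring physics on dangling equipment works roughly like this: the attachment point is keyframed, and the item springs toward it, so any pop in the underlying animation gets smoothed into an ease. A minimal one-dimensional sketch with illustrative constants (not Ubisoft's actual values):

```python
def spring_step(pos, vel, target, dt, stiffness=80.0, damping=12.0):
    """Advance a damped spring one step toward target (semi-implicit Euler)."""
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel

# If the animation snaps the attachment point from 0 to 1, the dangling
# item eases toward it over a few frames instead of popping:
pos, vel = 0.0, 0.0
for _ in range(200):  # ~3.3 seconds at 60 fps
    pos, vel = spring_step(pos, vel, 1.0, 1.0 / 60.0)
```

Because the motion is computed from the current state rather than keyframes, it stays continuous across animation transitions, which is exactly what makes the blends feel smoother.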
The force of the new assassin’s attacks is also communicated through the camera: each finishing move has an associated camera animation, which is applied when defeating the last enemy in a group. This allows for close-ups on enemy faces as they die, and impacts reinforced with camera movements. The combat camera is much closer in general, dynamically zooming in and out to be as close as it can possibly be without leaving important enemies out of the frame.
He demonstrated the combat animation process in detail, showing how the actors act out the basic movements, but the curves are all wrong because they don’t actually want to hit each other or move at a dangerous speed. The mocap data is then heavily modified by hand to emphasize impacts, ensure that blows actually connect, and generally increase the speed and energy level.
The non-interactive cinematics were recorded using full performance capture -- that is, simultaneous face, body and voice recording, as popularized by Avatar. As Simon mentioned, mocap is just a starting point for the animation team, but the simultaneous recording is really valuable for getting a natural performance from the actors. With the face cameras, the actors are more confident that details of their facial expressions and body language will be conveyed, so they don’t feel the need to exaggerate as much, or use cartoonish voice inflection.
The talk ended with a demo reel of animation from Assassin’s Creed 3, along with text conveying the sheer quantity of animations of each type. The assassin has 330 jump animations, 220 basic locomotion animations, 280 climbing animations, 210 new assassinations, and 3200 fight animations. There are 3400 crowd animations, and 3000 animal animations! Each animal had its own set of parkour moves so it could follow the player around in the environment, including bears, wolves, foxes, deer, and so on. There were 50 animators and animation coders on the project.
I hope you liked this summary! The next one will be all about first-person animation.