Friday, April 13, 2018

Unreal Engine Experiments: Dishonored's Blink Ability

A couple of months ago, I finally managed to finish Dishonored. I had tried playing it a couple of times in the past but was turned off both times by the opening section, which I still think is one of the game's weakest parts. It was clearly trying to make the player develop an emotional attachment to one of the primary characters, but it felt more like a chore to me. The protagonist was obviously close to said character, but none of that resonated with me as a player completely new to this world. I was more interested in exploring the setting, with its huge whale-hunting ships and original premise, but instead you have to slog through a linear and somewhat uninteresting gameplay section. Frankly, I'd rather the game had taken me straight to the scripted story sequences before moving on to the first real mission. That aside, having now played the game through to completion, I can definitely say that I thoroughly enjoyed the rest of it once the world opened up and provided opportunities to explore and study its various intricacies. What really made the game stand out for me, though, was its Blink ability, and the game does not wait long to hand it to the player.

Once you gain access to the ability, a whole new array of gameplay possibilities opens up. It's essentially a single mechanic tailored to multiple playstyles. You can become an explorer, navigating everywhere from the tallest rooftops to the deepest alleyways with a freedom of movement games rarely allow (once you discount the crawling-through-vents school of design). Or you can play like a ninja, appearing suddenly from the shadows to strike an opponent, only to disappear again in an instant. If you prefer a more aggressive playstyle, Blink also gives you a tool to quickly close the distance before plunging a blade into an enemy's throat. Honestly, it's the closest I've come to feeling like an anime character in a first-person game, moving swiftly across the battlefield and taking down opponents with finesse. And as is usually the case when I get excited about something like this, I had to learn how it works and recreate it on my own. Fortunately, it didn't just end up as another entry in my backlog of cool experiments to try out; I actually got around to working on it.

I started by studying the ability, looking for any hints of its design that might be visible under close inspection. The first and easiest thing to notice was that it isn't an instant teleport: the player character is physically moved to the targeted destination while visual effects play out on screen. With my extremely limited knowledge of materials and VFX, the only effect obvious to me was the field of view modification, and that would have to do; the main goal was understanding the workings of its movement system.

Upon further inspection, I came across a more obscure design choice: the game does not use a line trace for its targeting system. This becomes noticeable when using the ability near waist-high walls. Even if the aiming direction passes only slightly above the wall, the target marker still appears right in front of the wall. So it seems a sphere trace (or a sweep of some other simple 3D shape) is being used to ensure uninterrupted movement to the destination.
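The waist-high-wall observation can be illustrated with a little geometry. The sketch below is not from the game or from my Blueprints; it is a 2D simplification where the wall's top edge is reduced to a single corner point, showing why a swept sphere stops short where a zero-width line trace would sail over the wall (the numbers in the usage are made up):

```python
import math

def sphere_sweep_vs_point(origin, direction, radius, point, max_dist):
    """Sweep a sphere of `radius` from `origin` along unit `direction` (2D).
    Returns the travel distance before the sphere touches `point`, or
    max_dist if it passes clear. A toy stand-in for a sphere trace."""
    # vector from origin to the obstacle point
    ox, oy = point[0] - origin[0], point[1] - origin[1]
    t_closest = ox * direction[0] + oy * direction[1]  # projection onto ray
    # closest approach of the ray to the point
    cx = origin[0] + direction[0] * t_closest - point[0]
    cy = origin[1] + direction[1] * t_closest - point[1]
    d_min = math.hypot(cx, cy)
    if t_closest < 0 or d_min >= radius:
        return max_dist  # the swept sphere never grazes the edge
    # pull back to where the sphere surface first contacts the point
    return min(max_dist, t_closest - math.sqrt(radius * radius - d_min * d_min))
```

Aiming from eye height (1.6) just over a waist-high wall top at (3.0, 1.0), a near-zero radius (a line trace) reports the full range, while a 0.3-radius sweep stops in front of the wall — matching the in-game behaviour.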

So, with a basic idea of how things might work under the hood, I began the implementation. The first task was simply to move the player toward where the camera was aimed. The built-in 'Move Component To' function took care of this requirement. I then added a couple of timelines to change and revert the field of view during the move. By this point, my character was already darting around the map using the ability.
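The two FOV timelines can be approximated as a single widen-then-restore curve. This is only a sketch of the idea, not the actual timeline assets; the curve shape (smoothstep) and the base/peak FOV values are my own guesses:

```python
def smoothstep(x):
    """Standard ease-in/ease-out ramp on [0, 1]."""
    x = max(0.0, min(1.0, x))
    return x * x * (3.0 - 2.0 * x)

def blink_fov(t, duration, base_fov=90.0, peak_fov=110.0):
    """FOV widens toward peak during the first half of the Blink and eases
    back to base during the second half — a stand-in for the pair of UE
    timelines driving the field-of-view change and its reversal."""
    half = duration / 2.0
    if t < half:
        k = smoothstep(t / half)          # ramp up
    else:
        k = smoothstep(1.0 - (t - half) / half)  # ramp back down
    return base_fov + (peak_fov - base_fov) * k
```

Sampling this each tick while 'Move Component To' runs gives the stretched, speed-lines feel without ever leaving the base FOV permanently changed.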

Next up on the itinerary was the targeting system. My intention here was not to spend time making effects that looked exactly like the original; a basic cylinder mesh with a gradient material whose transparency increases along the +Z direction would do just fine. My lack of experience with materials became an issue here too, but after scrounging through a few pages on the net, I came across a solution that did exactly what was required. With the gradient material set up, I just needed to move the target marker actor based on the results of a sphere trace fired at regular intervals.
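The per-tick update reduces to a small amount of logic. Below is a sketch of that loop body, with the trace passed in as a plain function standing in for the engine's sphere trace node; the names and the 500-unit range in the usage are illustrative, not from the project:

```python
def update_target_marker(camera_pos, camera_dir, max_range, sphere_trace):
    """One targeting tick: run a sphere trace out to max_range along the
    camera direction and return where the marker should sit — the impact
    point if something was hit, otherwise the point at max range.
    `sphere_trace` is a stand-in for the engine trace call."""
    hit = sphere_trace(camera_pos, camera_dir, max_range)
    if hit is None:
        # nothing in the way: marker sits at the far end of the trace
        return tuple(camera_pos[i] + camera_dir[i] * max_range for i in range(3))
    return hit
```

Running this on a short timer (rather than every frame) keeps the marker responsive without paying for a trace per tick.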

Now all that was left was the wall-scaling system. I already had a placeholder that used the 'Launch Character' function to propel the character up the wall when necessary, but it was too slow and felt out of sync with the swift Blink movement, and I wasn't really sure how to get it right. Another potential approach would have been to interpolate along a parabolic curve to the top of the wall. I wasn't particularly fond of that idea and was hoping it wouldn't come to it. Fortunately, I tried the 'Move Component To' node again in this scenario, and it actually worked out quite well.
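For reference, the parabolic-interpolation alternative I decided against would have amounted to something like the sketch below (my own construction, not anything from the project): blend linearly between start and end while adding a parabolic vertical bulge that is zero at both endpoints.

```python
def parabola_point(start, end, apex_height, t):
    """Point at parameter t in [0, 1] along an arc from start to end that
    rises apex_height above the straight-line path at t = 0.5. The 4t(1-t)
    term is a parabola equal to 0 at t=0 and t=1 and 1 at t=0.5."""
    pos = [start[i] + (end[i] - start[i]) * t for i in range(3)]
    pos[2] += 4.0 * apex_height * t * (1.0 - t)  # vertical bulge
    return tuple(pos)
```

Stepping t each tick would trace the arc, but it means hand-managing the motion state that 'Move Component To' handles for free, which is partly why I was glad to avoid it.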

Next, I added a check to see whether obstacles encountered by the targeting system fall into the category of 'walls'. If so, I followed it up with a line trace to determine the distance to the top of the wall and to confirm that the wall meets a minimum depth/thickness requirement. If both conditions are met, a further sphere trace is performed from a calculated point just above the wall's top surface, in the upward (+Z) direction, to ensure there is free space for the player character to stand upright. If this condition is satisfied as well, a direction pointer is displayed to convey that the character will automatically scale the wall in that direction after the Blink movement. With the wall-scaling mechanism already in place as mentioned earlier, the ability was finally working as intended to the fullest extent.
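The chain of checks above can be condensed into one predicate. In this sketch the trace results arrive as plain numbers, and every threshold (scale height, minimum depth, headroom) is an illustrative guess rather than a value from the project:

```python
def can_scale_wall(hit_normal, top_height, wall_depth, free_headroom,
                   max_scale_height=2.5, min_depth=0.5, required_headroom=1.9):
    """Mirrors the three-stage eligibility test:
    1. the obstacle is a wall (impact normal is roughly horizontal),
    2. its top is reachable and it is thick enough to stand on,
    3. the upward sphere trace found headroom above the top surface."""
    is_wall = abs(hit_normal[2]) < 0.1       # near-zero Z component => vertical surface
    reachable = top_height <= max_scale_height
    thick_enough = wall_depth >= min_depth
    has_headroom = free_headroom >= required_headroom
    return is_wall and reachable and thick_enough and has_headroom
```

Only when this returns true does the direction pointer appear and the post-Blink scaling move get queued.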

With all of the required features working in tandem, all that was left was to clean up the code. I created a new custom actor component to house the Blink execution logic, freeing the player character to handle only the input controls and a simple interface function for controlling the field of view. This component-driven design should allow the ability to be linked to new player characters quite easily.
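The shape of that decoupling looks roughly like this. The class and method names are hypothetical stand-ins for the Blueprint component and interface, not the actual identifiers in the repository:

```python
class FovInterface:
    """Stand-in for the small interface the player character implements."""
    def set_field_of_view(self, fov):
        ...

class BlinkComponent:
    """Blink execution logic isolated in a component; it talks to its
    owner only through FovInterface, so any character exposing that
    interface can host the ability unchanged."""
    def __init__(self, owner):
        assert isinstance(owner, FovInterface)
        self.owner = owner

    def execute_blink(self, target):
        self.owner.set_field_of_view(110.0)   # widen during the dash
        # ... drive the movement to `target`, wall scaling, etc. ...
        self.owner.set_field_of_view(90.0)    # restore afterwards
        return target
```

The character stays a thin shell: it forwards input to the component and answers the FOV calls, which is what makes dropping the ability onto a new character cheap.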

In the end, I must say it felt really good to work on something that can pretty much be classified as finished. It's a huge contrast to my normal work on the toolkits, which will require updates well into the future. So I'm excited to keep working on more of these small offshoot projects. Anyways, the source code (blueprints) for the project has been published on GitHub, so feel free to check it out at: