Sunday, September 23, 2018

Unreal Engine Experiments: Last Known Position Visualization

The blog has been dark for a while now. But the past few months have been quite a fun experience, as I got to experiment with a whole host of interesting gameplay systems in Unreal Engine. I have to admit that the prospect of writing about them is not nearly as exciting as working on them. But I have finally summoned the willpower to get one article published over this weekend. So I figured that I'll go ahead and write about the most exciting project I've worked on (since the recreation of the Blink ability from Dishonored): the Last Known Position mechanic from Splinter Cell Conviction.

As the title suggests, we're going to cover the process of visualizing the player character's last known position (as perceived by the AI). The mechanic itself should be quite familiar to those who have played either of the last two entries in the Splinter Cell franchise. But in case you haven't, here is a short animated preview of what exactly the end product is going to look like:

Alright, so with that out of the way, let's get into the nitty-gritty of the experiment. Basically, there are three main steps required for implementing the visualization system: 
  • Create a translucent silhouette material
  • Set up an animation pose capture & mirror system
  • Implement a basic AI perception system for tracking purposes
Now let's go over each of them in order, starting with the material creation process.

Silhouette Material

I started out with this because I had absolutely no clue how to get it working. So if anything was going to be a showstopper, it was probably going to be this one. You can't just throw in a basic translucent material and call it a day; the material also needs to cull the inner triangles of the mesh. Being a complete noob at materials, I turned to the internet for help. Thankfully, Tom Looman had already posted a custom depth-based solution on his blog, which involves the use of two similar overlapping meshes: a translucent mesh rendered in the main pass, and an opaque one rendered in custom depth. Here is a preview of the final result:

Well, with that taken care of, let's head over to the next step in the process.

Visualization Pose Capture

I'm not very familiar with the animation side of UE4, but this part of the process actually had a relatively straightforward solution. While the first idea that came to my mind was to copy the player character's animation poses over to a new skeletal mesh component, I wasn't particularly keen on going down that route, as there was no real need for a full-fledged animation system for our visualization mesh. We just need to set a pose once and then forget about it. Fortunately, after doing some research, I stumbled upon this neat little thing called the Poseable Mesh component.

The Poseable Mesh component was exactly what was required for this scenario. It is intended to be used for one thing and one thing only: mirroring a single pose from another skeletal mesh. No unnecessary features involved. And it comes with a built-in function that does exactly that when you pass in a reference to the target skeletal mesh component. Just copy over the target's world transform as well and we're done.
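The "copy once and forget" idea can be modeled with a rough, engine-agnostic Python sketch. The actual project uses Blueprints and the Poseable Mesh component's built-in copy-pose function, so every class and attribute name below is illustrative, not real UE4 API:

```python
class SkeletalMeshSnapshot:
    """Illustrative stand-in for the player's skeletal mesh at one instant."""
    def __init__(self, bone_transforms, world_transform):
        self.bone_transforms = bone_transforms  # e.g. {bone_name: (location, rotation)}
        self.world_transform = world_transform

class VisualizationMesh:
    """Stand-in for the Poseable Mesh component: holds a single static pose."""
    def __init__(self):
        self.bone_transforms = {}
        self.world_transform = None

    def copy_pose_from(self, target):
        # Mirror the target's pose exactly once; no animation system is
        # needed afterwards, since the pose never changes again.
        self.bone_transforms = dict(target.bone_transforms)
        self.world_transform = target.world_transform
```

The important point is that the copy is a snapshot: once taken, the visualization mesh is completely decoupled from whatever the player character does next.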

And now on to the final part of the experiment.

AI Perception

I went ahead with Unreal's built-in AI Perception system for this one. I'm not going over the details here, as there are quite a few good resources available within the community already. But the basic gist is that I'm using it to detect when AI agents gain or lose sight of the player character.

With this information, we just plop down our visualization actor every time the player evades the AI. 
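The tracking logic boils down to reacting to a "lost sight" transition. Here is a minimal Python sketch of that idea; the names are hypothetical (the real project wires this up through AI Perception events in Blueprints):

```python
class LastKnownPositionTracker:
    """Records where the player was last seen by an AI agent."""
    def __init__(self):
        self.visible = False
        self.last_known_position = None

    def on_perception_updated(self, player_position, successfully_sensed):
        """Returns True when the visualization actor should be (re)placed."""
        if successfully_sensed:
            self.visible = True
        elif self.visible:
            # The AI just lost sight of the player: remember this spot and
            # tell the caller to spawn the silhouette actor here.
            self.visible = False
            self.last_known_position = player_position
            return True
        return False
```

Repeated "not sensed" updates after the first one are ignored, so the silhouette only gets placed at the moment the player actually evades the AI.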

And there you have it: a recreation of the Last Known Position mechanic from Splinter Cell Conviction. Here is a video preview of the system in action:

With that, we have come to the end of another experiment. I've shared the project files on GitHub, so feel free to use them in your work. Also, head over to my YouTube channel if you're interested in checking out more cool experiments in Unreal Engine.

Alright, so that's it. I hope to publish the next post sometime during the next weekend. Until then, goodbye.

Monday, June 4, 2018

Unreal Engine Experiments: Waypoint Generator

A few weeks ago, I came across an article on Gamasutra about the various types of UI systems used in video games. I was never particularly interested in UI design, but this article piqued my interest in the subject. So I started reading up on it and played through a few games like Dead Space and Tom Clancy's Splinter Cell: Conviction, both of which were lauded for their innovations in the UI design space. Even with the games being almost a decade old at this point, the UI systems they employ are starkly different from those of most of their contemporaries.

Anyway, playing through Splinter Cell: Conviction got me really interested in the concept of spatial UI design. Basically, this form of design covers UI elements that are displayed within the game world but are not actually a part of the world/setting. After doing some research on the various types of systems that fall under this category, I decided to recreate some of these UI components in Unreal Engine. To that end, I started work on a couple of projects, the first one being the Waypoint Generator. Now, I had previously developed a couple of functional waypoint generation systems as part of my Tower Defense toolkits. So instead of starting from scratch, I just migrated the required blueprints over to a new project and worked from there.

The basic underlying logic revolves around using the nav mesh to obtain path points from the player character to the active objective. The path thus obtained is then divided into smaller segments before being added to a spline component. Generating these additional path points removes the weird twisting spline artifacts that occur around sharp corners when dealing with a very limited set of spline points. With that potential problem taken care of, all that's left is to lay down instanced static meshes to display waypoints along the path.
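The subdivision step is plain geometry and can be sketched independently of the engine. This is a hedged Python illustration of the idea (the real implementation does the same thing in Blueprints before feeding points into the spline component):

```python
import math

def subdivide_path(points, segment_length):
    """Split a nav path polyline into roughly equal segments so the spline
    gets enough control points to avoid twisting around sharp corners."""
    result = [points[0]]
    for start, end in zip(points, points[1:]):
        dist = math.dist(start, end)
        # Always take at least one step so the original corner points survive.
        steps = max(1, math.ceil(dist / segment_length))
        for i in range(1, steps + 1):
            t = i / steps
            result.append(tuple(s + (e - s) * t for s, e in zip(start, end)))
    return result
```

For example, a single 100-unit straight segment subdivided at 25-unit intervals yields five spline points instead of two, which keeps the spline tangents well-behaved around corners.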

Moving on to the design structure of the implementation, it uses a child actor component to attach the waypoint generator to the player. The construction script of the generator also includes an option to try out the system in the editor for debugging purposes, as shown below:

The system, however, does have a limitation when it comes to displaying waypoints along certain types of inclined surfaces. From what I've gathered, the navigation system in Unreal Engine tries to reduce redundancy as much as possible while generating path points. This can sometimes lead to a situation where a line drawn from one path point to the next ends up passing under the surface, or quite a bit above it, when dealing with stairs and other steeply inclined surfaces. To my knowledge, there's nothing that can be done about this in blueprints, as the only solution seems to be to get more path points from the navigation system itself. Splitting up the path into smaller segments as mentioned earlier will not help in this scenario, because it doesn't take the navigational paths into account; it's basically just dividing a line without any other concern.

In any case, I've added a system that can mitigate this issue to some extent by using line traces to check the ground location at all points before placing down the waypoint meshes. It may not be able to correct the rotational data between path points in certain scenarios, but it always ensures that the meshes are placed just above the ground. If anyone knows of a better way to get around this issue using blueprints, I would really like to hear about it. So feel free to post it in the comments section.
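The ground-snapping mitigation can be sketched as follows. In this hedged Python illustration, `ground_height_at` stands in for the downward line trace performed in the actual Blueprint; the function name and signature are mine, not part of any real API:

```python
def snap_waypoints_to_ground(points, ground_height_at, offset=5.0):
    """Place each waypoint just above the ground, regardless of where the
    straight line between sparse nav path points happens to pass."""
    snapped = []
    for x, y, _z in points:
        # Discard the interpolated height and trust the "trace" result instead,
        # keeping a small offset so meshes sit just above the surface.
        snapped.append((x, y, ground_height_at(x, y) + offset))
    return snapped
```

Note that this only corrects the height of each mesh; as mentioned above, the rotation between consecutive points can still be slightly off on stairs.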

Alright, so that brings us to the end of this post. I've released the Waypoint Generator for free on GitHub. Feel free to grab the source code at:

Monday, May 7, 2018

Unreal Engine Experiments: Prototype Menu System v2.0 Update

About three years ago, I created a menu system with the intent of having UI elements that could be easily tacked on to all of my projects. The project was released for free on GitHub and received a slew of updates for a while. But after shifting my focus over to creating content for the Unreal Engine Marketplace, I found myself with very little breathing room for side projects. Eventually, work on the menu system was abandoned, though it remained available for public use in its Unreal Engine v4.9 iteration. Lately, however, I've been investing more of my spare time in some fun little side projects, and to be honest, I'm finding it quite enjoyable and refreshing. So after my recent foray into recreating the Blink ability from Dishonored, I found myself thinking about bringing the project back online and actually seeing it through to completion.

Loading up the project again in the latest version of Unreal Engine, I was surprised to find that it was still quite compatible. But as I went through the code, it became glaringly obvious that most of it would have to be completely revamped. The menu system was working well enough, but three years is a long time, and I had originally worked on it just a few months after I first started using Unreal Engine. Going through the project again, the code spoke for itself as to how cringeworthy some of the workflows were. As a result, most of the time spent on this new update went into improving the existing codebase. In any case, the work is done, and since I absolutely suck at making video demonstrations, I'll just briefly go over the various menu screens available in the v2.0 edition.

Main Menu

The main menu allows you to either start a new game, go to the options menu, or quit the game.

Options Menu

While the options menu has four different sub-options available, only the display and graphics options are functional in the current state.

Display Options Menu

Players can control the screen resolution and window mode settings through this menu.

Graphics Options Menu

As shown in the screenshot, the graphics options menu allows you to control the following settings:

  • AA Quality
  • Foliage Quality
  • Post-Processing Quality
  • Texture Quality
  • Shadow Quality
  • View Distance Quality
  • Effects Quality
  • VSync

Loading Screen

It's basically a screenshot that gets displayed for a specified period of time. A throbber is placed to indicate that the level is being loaded.

Pause Menu

The pause menu provides the options to either resume the game, exit to the main menu or to quit directly to the desktop.

Well, that covers all the major features of the Prototype Menu System in its current state. I'm planning to introduce more features over future updates in order to make it a more robust and complete system. But for now, you can grab the source code from GitHub at:

Friday, April 13, 2018

Unreal Engine Experiments: Dishonored's Blink Ability

Around a couple of months ago, I finally managed to finish Dishonored. I had tried playing it a couple of times in the past but got turned off both times by the opening section, which I still think is one of the weakest parts of the game. Even though it was very clearly trying to make the player develop an emotional attachment to one of the primary characters, it felt more like a chore to me. The protagonist was obviously close to said character, but none of that resonated with me as a player who was completely new to this world. I was more interested in exploring the world, with its huge whale-hunting ships and its new and original setting, but instead I had to go through a linear and uninteresting gameplay section. Frankly, I'd rather have the game take me through the scripted story sequences sooner before throwing me into the first real mission. But leaving that aside, after having played the game through to completion, I can definitely say that the experience got a whole lot better once the world opened up and gave me a chance to explore and study its various intricacies. What really made the game stand out for me, however, was its Blink ability, and the game does not wait long to present it to the player.

Once you get access to the ability, a whole new array of gameplay possibilities opens up to you. It's essentially a single gameplay mechanic tailored towards multiple playstyles. You can become an explorer, navigating from the tallest buildings to the deepest alleyways with a freedom of movement not usually allowed in games (when you factor out the crawling-through-vents design). Or you can choose to play like a ninja, appearing suddenly from the shadows to strike an opponent, only to disappear again in an instant. If you prefer a more aggressive playstyle, Blink also provides a tool to quickly close the distance to opponents before plunging a blade into their throats. To be honest, it is the closest I've come to feeling like an anime character in a first-person game, moving swiftly across the battlefield and taking down opponents with finesse. And as is usually the case when I get excited about something like this, I had to learn how it works and recreate it on my own. Fortunately, it didn't just end up as another entry in the backlog of cool experiments, and I actually got around to working on it soon after.

I first started out by studying the ability, looking for any hints of its design that might be visible under close inspection. The first and easiest thing to notice was that it isn't a true teleport: the player character is actually moved to the targeted destination while visual effects play out on the screen. With my extremely limited knowledge of materials and VFX, the only effect obvious to me was the field of view modification. And that would have to do, since I was far more interested in the workings of its physical movement system.

Upon further inspection, I came across a more obscure design choice. The game is not using a line trace for the targeting system. This is easily noticeable when using the ability near waist-high walls: if the aiming direction is only slightly above the wall, the game will display the target location right in front of the wall. So it seems that a sphere trace (or some other simple 3D shape) is being used to ensure uninterrupted movement to the destination.

So with a basic idea of how things might be working under the hood, I began work on the implementation. The first task was to just move the player towards where the camera was aimed. The built-in 'Move Component To' function took care of this requirement. I then added a couple of timelines to change and revert the field of view values during the process. Already by this point, I could easily dart around the map using the ability.
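The FOV change driven by the pair of timelines can be sketched as a simple function of elapsed time. This is an illustrative Python approximation, not the actual Blueprint; the function name and the specific FOV values are my own placeholders:

```python
def fov_during_blink(t, duration, base_fov=90.0, peak_fov=110.0):
    """Widen the FOV towards the midpoint of the dash and ease it back,
    mimicking the change/revert pair of timelines used in the Blueprint."""
    if t <= 0 or t >= duration:
        return base_fov
    half = duration / 2
    # Ramp up for the first half of the dash, ramp back down for the second.
    alpha = t / half if t < half else (duration - t) / half
    return base_fov + (peak_fov - base_fov) * alpha
```

Sampling this every frame during the dash produces the brief "speed stretch" feeling without ever leaving the FOV permanently altered.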

Next up on the itinerary was the targeting system. My intention here was not to spend time on effects that looked exactly like the original inspiration. Instead, I was going for something along the lines of a basic cylinder mesh with a gradient material, its transparency increasing along the +z direction. Once again, my limited experience with materials became an issue. Fortunately, after scrounging through a few pages on the net, I came across a solution that did exactly what I wanted. With the gradient material set up, I just needed to move the target display actor based on the results of a sphere trace fired at regular intervals.

Now all that was left was the wall scaling system. I already had a placeholder system that used the 'Launch Character' function to propel the character up the wall when necessary. However, it was too slow and felt out of sync when used in conjunction with the swift Blink movement, and I wasn't really sure how to get it right. Another idea that came to mind was to use linear interpolation along a parabolic curve to the top of the wall. I wasn't particularly fond of that idea and was hoping it wouldn't come to it. Fortunately, I tried out the 'Move Component To' node again in this scenario, and it actually worked out quite well. That was all the confirmation I needed to go ahead with the implementation.

First, I added a check to see if the obstacle encountered by the targeting system falls into the category of 'walls'. If it does, I follow it up with a line trace to determine the distance to the top of the wall, as well as to confirm that the wall meets the minimum thickness requirement. If both checks return satisfactory results, a further sphere trace is performed from a calculated point just above the top surface of the wall, in the upward (+z) direction, to ensure there is enough free space for the player character to stand upright. If this condition is satisfied as well, a direction pointer gets displayed to convey that the character will automatically scale the wall along said direction after the Blink movement. With the wall scaling mechanism already in place as mentioned earlier, the ability was finally working as intended to the fullest extent.
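The chain of checks above can be condensed into a single predicate. In this hedged Python sketch, each parameter stands in for the result of one of the traces described in the text; the names are illustrative, not real engine API:

```python
def can_scale_wall(hit_is_wall, wall_thickness, min_thickness,
                   clearance_above_top, character_height):
    """Mirror the Blueprint's gating logic: category check, thickness check,
    then a headroom check above the wall top."""
    if not hit_is_wall:
        return False  # obstacle isn't categorized as a wall
    if wall_thickness < min_thickness:
        return False  # line trace says the wall is too thin to stand on
    # "Sphere trace" upward from just above the wall top: is there room
    # for the character to stand upright?
    return clearance_above_top >= character_height
```

Only when all three gates pass does the direction pointer get displayed and the automatic wall scale get queued after the Blink movement.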

With all of the required features working in tandem, all that was left was to clean up the code. A new custom actor component was created to house the Blink execution logic. This frees up the player character to handle only the input controls and a few simple interface functions that control the field of view. This component-driven design should allow the ability to be linked to new player characters quite easily.

I must say that it felt really good to work on something that can pretty much be classified as finished. It's a huge contrast to my normal work on the toolkits, which I consider to be continuously evolving projects. So I'm excited to keep working on more of these small offshoot projects. Meanwhile, the source code (blueprints) for the Blink ability project has been published on GitHub. Feel free to check it out at:

Thursday, April 5, 2018

FPS Tower Defense Toolkit v3.1 Dev Log #3: Performance Optimizations

In the first part of the v3.1 dev log series for FPS Tower Defense Toolkit, I briefly covered the process of adding support for multiple power cores. While the implementation itself required only minor modifications to the existing systems, it did introduce an additional layer of expensive nav path calculations. This caused a visible dip in performance when using multiple power cores, and as a result, I had to spend more time on optimizations before releasing the update.

Since the new performance issues arose as a direct result of the additional navmesh updates, it made sense to try and reduce the cost of these operations. While contemplating what to do in that regard, I remembered an old Unreal Engine livestream session with Mieszko, in which he talked about the various recast navmesh parameters exposed to the editor. So I went and checked it out again in the hopes of scavenging something useful for the current scenario. And fortunately, it turns out that decreasing the 'Tile Size UU' attribute of the recast navmesh reduces the amount of navmesh area that needs to be rebuilt at runtime. Since the tower placement operations essentially modify only a small part of the navmesh at any instant, I reduced the tile size to bring down the number of calculations required. The 'Cell Size' parameter was also increased in order to improve performance at the cost of navmesh resolution, as mentioned in the video.

Next up on the list was rendering optimizations. The grid cells used for tower placement were basically planar meshes with a masked grid material applied to them. Since having lots of masked materials across the map can be taxing on the rendering system, I replaced them with a square window frame shaped mesh that uses a basic unlit material.

The next area for cutting costs came in the form of memory optimizations. Static Meshes within Unreal Engine have an attribute, 'HasNavigationData', that determines whether collision data needs to be saved for navmesh calculations. As it happens, most static meshes used within the context of the toolkit do not interfere with the navmesh. Even the tower bases, one of the few exceptions that do impact the underlying navmesh, use custom nav modifier volumes to create impassable regions. So it made sense to turn the attribute off wherever it's not required, and thus save memory on collision data.

Now on to the final optimization. While I had some idea about the benefits of the three measures above, the next one was completely new information to me.

During the tower construction phase before a wave, navigational paths between enemy spawn points and their linked power cores are evaluated in order to ensure that a valid path exists for the AI bots. Since this is a continuous operation, I was looking for ways to make it more efficient. A couple of ideas did cross my mind, but they would require major alterations to the holographic tower display system. Not wanting to delay the update any further unless absolutely necessary, I searched for other alternatives. And that was when I stumbled upon a thread on the Unreal Engine forums with a potential fix for the issue. As per the instructions posted in the thread, I added the 'Enable Recalculation on Invalidation' node while calculating nav paths and set its 'Do Recalculation' enum to No. This prevents the 'Find Path to Location Synchronously' function from automatically recalculating the path points if the previously calculated path gets invalidated due to changes in the underlying navmesh. And fortunately, coupled with all the other measures, this brought the toolkit back to a smoothly performing state.

With that, we've reached the end of this dev log. As mentioned earlier, I think there's still room for cost-cutting within the holographic tower display system. But it might require some major changes to the system and hence I'm pushing it onto the next update. Apart from that, there are a couple of bug fixes making it into this update. More details about the same will be posted as part of the v3.1 changelog in the Unreal Engine forum support thread.

As a final note, I'd like to point out that the primary focus of the v3.x series of updates is to bring improvements to the following three facets of the toolkit: the wave spawning systems, AI logic, and visual design. So these areas will keep getting a lot of attention over the course of the next few updates. However, the idea of adding support for multiple cores was something that I got from the Unreal Engine community and was not part of the planned features. And I'm really glad to have had someone point it out. So if anyone has any feature requests that they think will increase the value of the toolkit, feel free to reach out to me with the same.

Wednesday, April 4, 2018

FPS Tower Defense Toolkit v3.1 Dev Log #2: Improvements to the Weighted Wave Spawning System

The v3.0 update of the FPS Tower Defense Toolkit introduced the concept of a weighted wave spawning system. As part of the dev log for that update, I covered potential methods for improving the system in the future. One of those plans was to give designers more control over when different types of AI start making their appearance over the course of a level.

The Weighted Wave Spawn Controller in its native form enabled automated generation of wave spawn patterns based on weighted probability distributions. By controlling the weightings for different AI classes, one could create randomized wave patterns using this system. It also ensured that only units with a certain threat rating relative to the active wave's threat rating would be allowed to spawn. Coupled with the option to specify whether a certain type of unit could be spawned during a mission, it provided some amount of control over the randomness of the system.

However, a limitation still existed in the form of not being able to precisely determine when different types of units would start spawning. In order to negate this issue, I added a new parameter: 'SpawnStartingFromWave'.

If the 'CanBeSpawned?' parameter is set to true, the spawn controller will now check for the new condition as well, thus providing designers with a tool to control changes in wave composition over time. I think this new feature will greatly increase the viability of the weighted wave spawning system. And with that, we've come to the end of this dev log. The next and final v3.1 dev log will go over the optimizations and bug fixes that will make it into the final release.
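Putting the three gates together, the selection logic might look roughly like this Python sketch. The dictionary keys and the exact interpretation of the threat-rating check are my own assumptions; the toolkit itself implements this in Blueprints:

```python
import random

def pick_unit(unit_table, wave_number, wave_threat_rating):
    """Weighted random pick over units that pass all three gates:
    CanBeSpawned, SpawnStartingFromWave, and the threat rating threshold."""
    candidates = [u for u in unit_table
                  if u["can_be_spawned"]
                  and wave_number >= u["spawn_starting_from_wave"]
                  and u["threat_rating"] <= wave_threat_rating]
    if not candidates:
        return None
    # Higher weight -> proportionally higher chance of being picked.
    weights = [u["weight"] for u in candidates]
    return random.choices(candidates, weights=weights)[0]["name"]
```

With this setup, a heavy unit can be given a high weight yet still be kept out of early waves simply by raising its `spawn_starting_from_wave` value.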

FPS Tower Defense Toolkit v3.1 Dev Log #1: The Introduction of Multiple Power Cores

[Minor Spoilers ahead: The opening paragraph goes into minor spoiler territory on the story of Sanctum 2]

The ending of the third act of Sanctum 2 marks a pivotal point in its story-driven campaign. Players are finally presented with the reason behind the recent waves of coordinated attacks against human settlements and outposts by the planet's native life forms. However, that's not the only thing that makes it interesting. The final mission of the chapter, titled Abandoned Lab, is one of only two maps from the base game where players are tasked with defending multiple cores. The inclusion of an extra core added a completely new layer of challenge, as tower placement needed to be balanced along two paths that couldn't be merged. Having two cores to protect also forced players to split up their team in order to defend multiple paths, especially during later waves when powerful enemies start advancing against both cores simultaneously. I've found myself returning to this map quite a few times even after finishing the main campaign. Since the FPS Tower Defense Toolkit was inspired by Sanctum 2, adding support for multiple cores was among the features planned for the first round of post-release updates. But due to more pressing feature requests, it got sidelined. And that remained the case for quite some time, until recently, when a customer brought up the idea again.

So that got me thinking again about integrating support for multiple cores into the toolkit, to give people the power to design their own versions of the iconic Abandoned Lab. After going over a rough design plan, I realized that it would barely require any changes at all. The core idea centers around creating a link between enemy spawn points and power cores: all AI bots spawned from a specific enemy spawn point receive directives to set the linked power core as their prime target.

To start with the implementation, I added a new variable to the BP_EnemySpawnPoint class for storing a reference to its linked Power Core. The variable was then publicly exposed so that this link could be easily set up by the designer directly from the editor.

The next thing to consider was to give the AI bots access to this information. This is necessary because the player can pull enemy units from their standard paths. If the bots later lose interest in the player, they should be able to return to the core assigned to them. And that required making some modifications to the Enemy AI Manager class.

The Enemy AI Manager class keeps a dynamically updated list of all entities that can be targeted by the enemy AI bots. Until now, this list included only the power core and the player character. So I made some slight alterations to ensure that references to all the power cores get placed at the top of the array. The underlying reason for doing so is to have static references to the cores within the list: while the player character can get destroyed during a wave, the indices of the cores will remain constant irrespective of changes made to the list at runtime. Due to the static nature of these indices, individual AI units can use the index of their assigned core to fall back to their primary objective whenever required. After making a few more minor changes within the AI blueprints to reflect these design alterations, the AI was fully capable of functioning within the updated feature set.
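The index-stability argument can be demonstrated with a small Python sketch. Class and method names here are illustrative stand-ins for the Blueprint array logic, not actual toolkit API:

```python
class TargetList:
    """Keeps power core references at fixed indices at the front of the
    list, so each AI unit can fall back to 'its' core by index even as
    other targets (like the player) come and go."""
    def __init__(self, power_cores):
        self.targets = list(power_cores)   # cores occupy indices 0..n-1
        self.num_cores = len(power_cores)

    def add_target(self, target):
        self.targets.append(target)        # players etc. go after the cores

    def remove_target(self, target):
        # Only search past the core block, so core indices stay untouched.
        for i in range(self.num_cores, len(self.targets)):
            if self.targets[i] == target:
                del self.targets[i]
                break

    def core_at(self, index):
        return self.targets[index]
```

Because removals never touch the leading core block, an AI unit that stored "core index 1" at spawn time can always resolve it back to the same core, no matter how the rest of the list has churned.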

With that, we've come to the end of this dev log. Stay tuned for the next update where I'll go through the process of making improvements to the weighted wave spawning system.