Tuesday, October 23, 2018

Unreal Engine Experiments: Enemy Tagging System

I've been playing a fair number of stealth and tactical action games of late and noticed that most of them have some form of enemy tagging system to help players build better tactical awareness of the game world. I generally note down interesting gameplay systems that I come across in order to study them in detail at a later time. But since this particular mechanic didn't seem like it would take up much time, I decided to jump into the process right away.

I started doing some research on the concept and the various approaches different games have taken to implement it. I took a particular liking to Metal Gear Solid V's take on tagging, with its added support for range display as well as highlighting of occluded targets. So I decided to go ahead and recreate it in Unreal Engine, and this post is a high-level retrospective overview of the implementation process. But before getting into the details, here is a super quick preview of what the end result is going to look like:



Alright, so without further ado, let's dive into the design process behind the experiment. The first order of business is to create a custom widget that can display the tag image as well as the distance to the player character. Since the tag needs to hover above the target at all times, we can use a widget component and leverage its inbuilt ability to attach itself to actors. And since highlighting occluded targets is also part of the agenda, it makes sense to render the widget in screen space.
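Here is a minimal C++ sketch of that setup (the blueprint version is just a Widget Component with its Space property set to Screen); the enemy class and the tag widget class member are placeholders of my own, not names from the actual project:

```cpp
// Rough sketch: a widget component attached to an enemy actor, rendered in
// screen space so that it stays readable regardless of scene occlusion.
#include "Components/WidgetComponent.h"

void AEnemyCharacter::SetupTagWidget()
{
    UWidgetComponent* TagWidget = NewObject<UWidgetComponent>(this, TEXT("TagWidget"));
    TagWidget->SetupAttachment(GetRootComponent());
    TagWidget->SetRelativeLocation(FVector(0.f, 0.f, 120.f)); // hover above the target
    TagWidget->SetWidgetSpace(EWidgetSpace::Screen);          // screen-space rendering
    TagWidget->SetDrawSize(FVector2D(64.f, 64.f));
    TagWidget->SetWidgetClass(TagWidgetClass);                // assumed UMG widget class member
    TagWidget->SetVisibility(false);                          // hidden until the player tags it
    TagWidget->RegisterComponent();
}
```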


However, if we try this out in the editor, we'll notice an issue once we start moving away from the tagged actors. The widget starts covering more and more of the actor until, at very large distances, the actor is barely visible at all. This happens because the relative distance from the actor to the widget component remains the same, while the widget, being rendered in screen space, retains its default size. And that is not desirable.

Now there are two ways to resolve this. The first involves changing the widget size dynamically, while the other revolves around updating the relative location of the component at runtime. After taking another look at the workings of tagging systems in a few shipped games, I noticed that the second approach is generally favored, and that's the one we're going to take here. I tried out multiple types of alignment correction models, but it finally came down to just using a simple linear multiplier based on the distance involved.
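In code, the correction could look something like this hypothetical sketch, assuming the component from the earlier snippet is stored in a TagWidget member; the base height and multiplier are values you would tune by eye:

```cpp
// Push the widget component higher above the actor as the viewer moves away,
// using a simple linear multiplier on the distance.
void AEnemyCharacter::UpdateTagPlacement(const FVector& PlayerViewLocation)
{
    const float Distance = FVector::Dist(GetActorLocation(), PlayerViewLocation);
    const float BaseHeight = 120.f;    // offset at point-blank range
    const float HeightPerUnit = 0.05f; // linear correction factor
    TagWidget->SetRelativeLocation(FVector(0.f, 0.f, BaseHeight + Distance * HeightPerUnit));
}
```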



Here is a comparison of the tagging system with and without the distance-based alignment corrections (I also threw in some script to display the distance text):



Now that we have a working tag widget, we can throw in some player-input-driven logic for adding tags to actors manually. A simple line trace check suffices in this regard: we take the hit result data and request activation of the tag display if the hit actor has a tag widget component.
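A minimal sketch of that check, assuming the tag lives in a widget component as set up earlier:

```cpp
// Fire a line trace from the player's viewpoint and activate the tag display
// if the hit actor carries a tag widget component.
#include "Components/WidgetComponent.h"

void ATaggingCharacter::TryTagTarget()
{
    FVector ViewLocation;
    FRotator ViewRotation;
    GetController()->GetPlayerViewPoint(ViewLocation, ViewRotation);

    const FVector TraceEnd = ViewLocation + ViewRotation.Vector() * 10000.f;
    FCollisionQueryParams Params;
    Params.AddIgnoredActor(this);

    FHitResult Hit;
    if (GetWorld()->LineTraceSingleByChannel(Hit, ViewLocation, TraceEnd, ECC_Visibility, Params))
    {
        if (AActor* HitActor = Hit.GetActor())
        {
            // Only actors equipped with a tag widget component can be tagged.
            if (UWidgetComponent* Tag = HitActor->FindComponentByClass<UWidgetComponent>())
            {
                Tag->SetVisibility(true);
            }
        }
    }
}
```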


And that brings us to the final section: implementing the occluded object highlights. To this end, we can use a post-process material with custom stencils enabled. I'm not particularly good with materials, but fortunately, Rodrigo Villani has already created an awesome tutorial on creating outlines in Unreal Engine. I took the basic material setup explained in the tutorial and threw in some additional script to add a translucent fill within the outline area.
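The material side follows the tutorial, so I'll just note the engine hookup it relies on: the tagged mesh has to be pushed into the custom depth/stencil pass. A minimal sketch, assuming the project has 'Custom Depth-Stencil Pass' set to 'Enabled with Stencil' under the rendering settings:

```cpp
// Flag the tagged enemy's mesh into custom depth so the post-process outline
// material can highlight it through walls.
void AEnemyCharacter::EnableOcclusionHighlight()
{
    GetMesh()->SetRenderCustomDepth(true);
    GetMesh()->SetCustomDepthStencilValue(1); // the stencil value the material keys on
}
```

And that's about it. Here is a preview video of the enemy tagging system in action: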



With that, we have come to the end of another experiment. Eventually, I hope to use this space to write about all of my experiments, but that is probably going to take a while, given my trajectory so far. In the meantime, I generally keep my YouTube channel updated with the latest projects. So if you'd like to see more cool experiments in Unreal Engine, you know where to find them. 😉

Sunday, September 23, 2018

Unreal Engine Experiments: Last Known Position Visualization

The blog has been dark for a while now, but the past few months have been quite a fun experience, as I got to experiment with a whole host of interesting gameplay systems in Unreal Engine. I have to admit that the prospect of writing about them is not nearly as exciting as working on them. But I have finally summoned the willpower to get one article published this weekend. So I figured I'd write about the most exciting project I've worked on since recreating the Blink ability from Dishonored: the Last Known Position mechanic from Splinter Cell: Conviction.

As the title suggests, we're going to cover the process of visualizing the player character's last known position (as perceived by the AI). The mechanic itself should be quite familiar to those who have played either of the last two entries in the Splinter Cell franchise. But in case you haven't, here is a short animated preview of what exactly the end product is going to look like:




Alright, so with that out of the way, let's get into the nitty-gritty of the experiment. Basically, there are three main steps required for implementing the visualization system: 
  • Create a translucent silhouette material
  • Set up an animation pose capture & mirror system
  • Implement a basic AI perception system for tracking purposes
Now let's go over each of them in order, starting with the material creation process.

Silhouette Material

I started out with this because I had absolutely no clue how to get it working. So if anything was going to be a showstopper, it was probably going to be this one. I mean, you can't just throw in a basic translucent material and call it a day; the material script also needs to be able to cull the inner triangles of the mesh. Being a complete noob at materials, I turned to the internet for help. Thankfully, Tom Looman had already posted a custom depth-based solution on his blog. It involves the use of two similar overlapping meshes: a translucent mesh rendered in the main pass, and an opaque one rendered only in custom depth.
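At the component level, the setup boils down to something like this hedged sketch (the two mesh component members and the material reference are placeholders of mine):

```cpp
// Two-mesh silhouette setup following Tom Looman's approach: a translucent
// mesh drawn in the main pass, plus a duplicate that renders only into the
// custom depth buffer so the material can cull the inner triangles.
void ALastKnownPositionActor::SetupSilhouetteMeshes()
{
    // Visible mesh carrying the translucent silhouette material.
    VisibleMesh->SetMaterial(0, SilhouetteMaterial);

    // Duplicate mesh: pulled out of the main pass, rendered in custom depth only.
    DepthMesh->SetRenderInMainPass(false);
    DepthMesh->SetRenderCustomDepth(true);
}
```

Here is a preview of the final result: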


Well, with that taken care of, let's head over to the next step in the process.

Visualization Pose Capture

I'm not very familiar with the animation side of UE4, but this part of the process actually had a relatively straightforward solution. The first idea that came to mind was to copy the player character's animation poses over to a new skeletal mesh component, but I wasn't particularly keen on going down that route. There was no real need for a full-fledged animation system on our visualization mesh; we just need to set a pose once and then forget about it. Fortunately, after doing some research, I stumbled upon this neat little thing called the Poseable Mesh component.

The Poseable Mesh component was exactly what this scenario required. It is intended for one and only one thing: mirroring a pose from another skeletal mesh, with no unnecessary features involved. And it comes with an inbuilt function that does just that when you pass in a reference to the target skeletal mesh component. Copy over the target's transform as well, and we're done.
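A minimal sketch of the capture step, assuming the visualization actor holds the poseable mesh in a PoseMesh member:

```cpp
#include "Components/PoseableMeshComponent.h"

void ALastKnownPositionActor::CapturePose(USkeletalMeshComponent* PlayerMesh)
{
    // Mirror a single frame of the player's current animation pose.
    PoseMesh->CopyPoseFromSkeletalComponent(PlayerMesh);

    // Match the player's transform so the frozen pose appears in place.
    SetActorTransform(PlayerMesh->GetOwner()->GetActorTransform());
}
```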


And now on to the final part of the experiment.

AI Perception

I went with Unreal's inbuilt AI Perception system for this one. I won't go over the details here, as there are quite a few good resources available within the community already. But the basic gist is that I'm using it to keep track of AI agents gaining/losing track of the player character.


With this information, we can simply plop down our visualization actor every time the player evades the AI, as sketched below.
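A hedged sketch of that hookup from an AI controller; the handler must be a UFUNCTION for the dynamic binding to work, and LKPActorClass is a hypothetical class reference for the visualization actor:

```cpp
#include "AIController.h"
#include "Perception/AIPerceptionComponent.h"

void AStealthAIController::BeginPlay()
{
    Super::BeginPlay();
    // Listen for perception updates on this controller's perception component.
    PerceptionComponent->OnTargetPerceptionUpdated.AddDynamic(
        this, &AStealthAIController::HandlePerceptionUpdated);
}

void AStealthAIController::HandlePerceptionUpdated(AActor* Actor, FAIStimulus Stimulus)
{
    // A failed sense event means this agent just lost track of the target.
    if (!Stimulus.WasSuccessfullySensed() && Actor->ActorHasTag(TEXT("Player")))
    {
        // Spawn the visualization at the last location the AI perceived.
        GetWorld()->SpawnActor<ALastKnownPositionActor>(
            LKPActorClass, Stimulus.StimulusLocation, Actor->GetActorRotation());
    }
}
```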


And there you have it: a recreation of the Last Known Position mechanic from Splinter Cell Conviction. Here is a video preview of the system in action:


With that, we have come to the end of another experiment. I've shared the project files on GitHub, so feel free to use them in your work. Also, head over to my YouTube channel if you're interested in checking out more cool experiments in Unreal Engine.

Alright, so that's it. I hope to publish the next post sometime during the next weekend. Until then, goodbye.

Monday, June 4, 2018

Unreal Engine Experiments: Waypoint Generator

A few weeks ago, I came across an article on Gamasutra about the various types of UI systems used in video games. I was never particularly interested in UI design, but the article piqued my interest in the subject. So I started reading up on it and played through a few games like Dead Space and Tom Clancy's Splinter Cell: Conviction, both of which were lauded for their innovations in UI design. Even though these games are almost a decade old at this point, their UI systems remain starkly different from those of most of their contemporaries.

Anyways, playing through Splinter Cell: Conviction got me really interested in the concept of spatial UI design. Basically, this form of design covers UI elements that are displayed within the game world but are not actually part of the world/setting. After doing some research on the various types of systems that fall under this category, I decided to recreate some of these UI components in Unreal Engine. To that end, I started work on a couple of projects, the first one being the Waypoint Generator. Now, I had previously developed a couple of functional waypoint generation systems as part of my Tower Defense toolkits. So instead of starting from scratch, I just migrated the required blueprints over to a new project and worked from there.

The basic underlying logic revolves around using the nav mesh to obtain path points from the player character to the active objective. The path thus obtained is then divided into smaller segments before being added to a spline component. Generating these additional path points removes the weird twisting spline artifacts that occur around sharp corners when dealing with a very limited set of spline points. With that potential problem taken care of, all that's left is to lay down instanced static meshes to display waypoints along the path.
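Here is a hedged C++ sketch of that step (the original is in blueprints; the UNavigationSystemV1 naming assumes UE 4.20+, and PathSpline is an assumed spline component member):

```cpp
#include "NavigationSystem.h"
#include "NavigationPath.h"
#include "Components/SplineComponent.h"

void AWaypointGenerator::BuildWaypointSpline(const FVector& Start, const FVector& Goal)
{
    // Query the navigation system for a path between the two points.
    UNavigationPath* Path =
        UNavigationSystemV1::FindPathToLocationSynchronously(this, Start, Goal);
    if (!Path || !Path->IsValid())
    {
        return;
    }

    PathSpline->ClearSplinePoints();
    const TArray<FVector>& Points = Path->PathPoints;
    const int32 SubdivisionsPerSegment = 4; // extra points to tame sharp corners

    // Subdivide each nav path segment before feeding it to the spline.
    for (int32 i = 0; i + 1 < Points.Num(); ++i)
    {
        for (int32 Step = 0; Step < SubdivisionsPerSegment; ++Step)
        {
            const float Alpha = static_cast<float>(Step) / SubdivisionsPerSegment;
            PathSpline->AddSplinePoint(FMath::Lerp(Points[i], Points[i + 1], Alpha),
                                       ESplineCoordinateSpace::World, false);
        }
    }
    PathSpline->AddSplinePoint(Points.Last(), ESplineCoordinateSpace::World, true);
}
```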



Moving on to the design structure of the implementation: a child actor component is used to attach the waypoint generator to the player. The construction script of the generator also includes an option to try out the system in the editor for debugging purposes, as shown below:




The system, however, does have a limitation when it comes to displaying waypoints along certain types of inclined surfaces. From what I've gathered, the navigation system in Unreal Engine tries to reduce redundancy as much as possible while generating path points. This can sometimes lead to situations where a line drawn from one path point to the next ends up passing under the surface, or quite a bit above it, when dealing with stairs and other steeply inclined surfaces. To my knowledge, there's nothing that can be done about this in blueprints, as the only real solution seems to be to get more path points. Splitting up the path into smaller segments, as mentioned earlier, will not help in this scenario because it doesn't take the navigational paths into account; it's just dividing a line without any other concern.

In any case, I've added a system that mitigates the issue to some extent by using line traces to check the ground location at every point before placing the waypoint meshes (as sketched below). It may not correct the rotational data between path points in certain scenarios, but it always ensures that the meshes are placed just above the ground. If anyone knows a better way to get around this issue using blueprints, I would really like to hear about it, so feel free to post it in the comments section.
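A minimal sketch of that mitigation, assuming the waypoint meshes live in an instanced static mesh component; the trace distances are placeholder values:

```cpp
void AWaypointGenerator::PlaceWaypointAt(const FVector& SplinePoint)
{
    // Trace straight down from above the spline point to find the ground.
    const FVector TraceStart = SplinePoint + FVector(0.f, 0.f, 200.f);
    const FVector TraceEnd = SplinePoint - FVector(0.f, 0.f, 500.f);

    FHitResult Hit;
    FVector Location = SplinePoint;
    if (GetWorld()->LineTraceSingleByChannel(Hit, TraceStart, TraceEnd, ECC_Visibility))
    {
        Location = Hit.ImpactPoint + FVector(0.f, 0.f, 5.f); // sit just above the surface
    }

    WaypointMeshes->AddInstance(FTransform(Location));
}
```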


Alright, so that brings us to the end of this post. I've released the Waypoint Generator for free on GitHub. Feel free to grab the source code at:  https://github.com/RohitKotiveetil/UnrealEngine--WaypointGenerator

Monday, May 7, 2018

Unreal Engine Experiments: Prototype Menu System v2.0 Update

About three years ago, I created a menu system with the intent of having UI elements that could easily be tacked onto all of my projects. The project was released for free on GitHub and received a slew of updates for a while. But after shifting my focus to creating content for the Unreal Engine Marketplace, I found myself with very little breathing room for side projects, and work on the menu system was eventually abandoned, though it remained available for public use in its Unreal Engine 4.9 iteration. Lately, however, I've been investing more of my spare time in fun little side projects, and to be honest, I'm finding it quite enjoyable and refreshing. So after my recent foray into recreating the Blink ability from Dishonored, I found myself thinking about bringing the project back online and actually seeing it through to completion.

Loading up the project in the latest version of Unreal Engine, I was surprised to find that it was still quite compatible. But as I went through the code, it became glaringly obvious that most of it would have to be completely revamped. The menu system worked well enough, but three years is a long time, and I had originally built it just a few months after I first started using Unreal Engine. Going through the project again, the code spoke for itself as to how cringeworthy some of the workflows were. As a result, most of the time spent on this update went into improving the existing codebase. In any case, the work is done, and since I absolutely suck at making video demonstrations, I'll just briefly go over the various menu screens available in the v2.0 edition.


Main Menu




The main menu allows you to either start a new game, go to the options menu, or quit the game.

Options Menu




While the options menu has four different sub-options available, only the display and graphics options are functional in the current state.

Display Options Menu




Players can control the screen resolution and window mode settings through this menu.

Graphics Options Menu




As shown in the screenshot, the graphics options menu allows you to control the following settings (a rough sketch of how these map onto the engine's user settings API follows the list):


  • AA Quality
  • Foliage Quality
  • Post-Processing Quality
  • Texture Quality
  • Shadow Quality
  • View Distance Quality
  • Effects Quality
  • VSync
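
The menu system itself drives these through blueprints, but for illustration, here is a hedged sketch of the same options expressed through the engine's UGameUserSettings API (the preset values are arbitrary):

```cpp
#include "Engine/Engine.h"
#include "GameFramework/GameUserSettings.h"

void ApplyGraphicsPreset()
{
    UGameUserSettings* Settings = GEngine->GetGameUserSettings();

    // Quality levels run from 0 (low) to 3 (epic).
    Settings->SetAntiAliasingQuality(2);
    Settings->SetFoliageQuality(2);
    Settings->SetPostProcessingQuality(2);
    Settings->SetTextureQuality(2);
    Settings->SetShadowQuality(2);
    Settings->SetViewDistanceQuality(2);
    Settings->SetVisualEffectQuality(2);
    Settings->SetVSyncEnabled(true);

    Settings->ApplySettings(/*bCheckForCommandLineOverrides=*/false);
}
```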

Loading Screen



It's basically a screenshot that gets displayed for a specified period of time, with a throbber to indicate that the level is being loaded.

Pause Menu



The pause menu provides the options to either resume the game, exit to the main menu, or quit directly to the desktop.


Well, that covers all the major features of the Prototype Menu System in its current state. I'm planning to introduce more features over future updates in order to make it a more robust and complete system. But for now, you can grab the source code from GitHub at: https://github.com/RohitKotiveetil/UnrealEngine--PrototypeMenuSystem

Friday, April 13, 2018

Unreal Engine Experiments: Dishonored's Blink Ability

Around a couple of months ago, I finally managed to finish Dishonored. I had tried playing it a couple of times in the past but got turned off both times by the opening section, which I still think is one of the weakest parts of the game. Even though it very clearly tries to make the player develop an emotional attachment to one of the primary characters, it felt more like a chore to me. The protagonist is obviously close to said character, but none of that resonated with me as a player completely new to this world. I was more interested in exploring the world, with its huge whale-hunting ships and its fresh, original setting, yet I had to slog through a linear and somewhat uninteresting gameplay section first. Frankly, I'd rather the game had taken me to the scripted story sequences sooner before moving on to the first real mission. But leaving that aside, having played the game through to completion, I can definitely say that I thoroughly enjoyed the rest of it once the world opened up and provided opportunities to explore and study its various intricacies. What really made the game stand out for me, though, was its Blink ability, and the game does not wait long to present it to the player.

Once you get access to the ability, a whole new array of gameplay possibilities opens up to you. It's essentially a single gameplay mechanic tailored to multiple playstyles. You can become an explorer, navigating from the tallest buildings to the deepest alleyways with a freedom of movement not usually allowed in games (once you factor out the crawling-through-vents design). Or you can choose to play like a ninja, appearing suddenly from the shadows to strike an opponent, only to disappear again in an instant. If you prefer a more aggressive playstyle, Blink also provides a tool to quickly close the distance to opponents before plunging a blade into their throats. To be honest, it's the closest I've come to feeling like an anime character in a first-person game, moving swiftly across the battlefield and taking down opponents with finesse. And as is usually the case when I get excited about something like this, I had to learn how it works and recreate it on my own. Fortunately, it didn't just end up as another entry in the backlog of cool experiments to try out; I actually got around to working on it.

I started out by studying the ability, looking for any hints of its design that might be visible under close inspection. The first and easiest thing to notice was that it isn't a straight teleport: the player character is physically moved to the targeted destination while visual effects play out on screen. With my extremely limited knowledge of materials and VFX, the only effect obvious to me was the field of view modification, and that would have to do. The main goal was understanding the workings of its physical movement system.


Upon further inspection, I came across a more obscure design choice. The game does not use a line trace for its targeting system. This is easily noticeable when using the ability near waist-high walls: if the aiming direction is only slightly above the wall, the game displays the target location right in front of the wall. So it seems that a sphere trace (or some other simple 3D shape) is being used to ensure uninterrupted movement to the destination.
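A hedged sketch of what such a sphere-based targeting check might look like; BlinkRange and the probe radius are placeholder values of mine:

```cpp
// Sweep a sphere from the camera instead of tracing a line, so that aiming
// just barely over a waist-high wall still registers the wall as an obstacle.
bool ABlinkCharacter::FindBlinkTarget(FVector& OutLocation) const
{
    FVector ViewLocation;
    FRotator ViewRotation;
    GetController()->GetPlayerViewPoint(ViewLocation, ViewRotation);

    const FVector TraceEnd = ViewLocation + ViewRotation.Vector() * BlinkRange;
    FCollisionQueryParams Params;
    Params.AddIgnoredActor(this);

    FHitResult Hit;
    const bool bHit = GetWorld()->SweepSingleByChannel(
        Hit, ViewLocation, TraceEnd, FQuat::Identity, ECC_Visibility,
        FCollisionShape::MakeSphere(35.f), Params); // roughly character-sized probe

    OutLocation = bHit ? Hit.Location : TraceEnd;
    return bHit;
}
```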



So with a basic idea of how things might work under the hood, I began work on the implementation. The first task was simply to move the player toward where the camera was aimed. The built-in 'Move Component To' function took care of this requirement, and I added a couple of timelines to change and revert the field of view values during the process. By this point, my character was already darting around the map using the ability.
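For reference, a minimal sketch of the movement step through the same latent functionality the blueprint node wraps (the duration and easing values are guesses, and the FOV timelines are omitted):

```cpp
#include "Kismet/KismetSystemLibrary.h"
#include "Components/CapsuleComponent.h"

void ABlinkCharacter::ExecuteBlink(const FVector& TargetLocation)
{
    FLatentActionInfo LatentInfo;
    LatentInfo.CallbackTarget = this;
    LatentInfo.UUID = __LINE__; // unique id for the latent action manager

    // Move the root capsule to the blink destination over a short duration.
    // (For an unattached root component, relative location equals world location.)
    UKismetSystemLibrary::MoveComponentTo(
        GetCapsuleComponent(), TargetLocation, GetActorRotation(),
        /*bEaseOut=*/true, /*bEaseIn=*/true, /*OverTime=*/0.25f,
        /*bForceShortestRotationPath=*/false,
        EMoveComponentAction::Move, LatentInfo);
}
```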


Next up on the itinerary was the targeting display. My intention here was not to spend time making effects that look exactly like the original inspiration; a basic cylinder mesh with a gradient material whose transparency increases along the +Z direction would do just fine. My lack of experience with materials became an issue here again, but after scrounging through a few pages on the net, I came across a solution that did exactly what was required. With the gradient material set up, I just needed to move the target display actor based on the results of a sphere trace fired at regular intervals.


Now all that was left was the wall scaling system. I already had a placeholder system that used the 'Launch Character' function to propel the character up the wall when necessary. However, it was too slow and felt out of sync with the swift Blink movement, and I wasn't really sure how to get it right. Another potential approach would have been linear interpolation along a parabolic curve to the top of the wall, but I wasn't particularly fond of that idea and was hoping it wouldn't come to it. Fortunately, I tried the 'Move Component To' node again in this scenario, and it actually worked out quite well.

Next, I added a check to see if the obstacles encountered by the targeting system fall into the category of 'walls'. If so, I followed it up with a line trace to determine the distance to the top of the wall and to confirm that the wall meets a minimum depth/thickness requirement. If both checks pass, a further sphere trace is performed from a calculated point just above the top surface of the wall, in the upward (+Z) direction, to ensure there is free space for the player character to stand upright. If this condition is satisfied as well, a direction pointer is displayed to convey that the character will automatically scale the wall in that direction after the Blink movement. With the wall scaling mechanism already in place as mentioned earlier, the ability was finally working as intended to the fullest extent.
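Pieced together, the validation chain might look something like this rough sketch; all the distances and thresholds here are placeholders rather than the tuned values from the project:

```cpp
bool ABlinkCharacter::CanScaleWall(const FHitResult& WallHit, FVector& OutTopLocation) const
{
    // 1. Find the wall's top surface: trace downward from a point pushed past
    //    the wall face along the aim direction, which also verifies thickness.
    const FVector InsideWall = WallHit.ImpactPoint - WallHit.ImpactNormal * MinWallDepth;
    const FVector Above = InsideWall + FVector(0.f, 0.f, MaxScalableHeight);

    FHitResult TopHit;
    if (!GetWorld()->LineTraceSingleByChannel(TopHit, Above, InsideWall, ECC_Visibility))
    {
        return false; // no top surface within the scalable height range
    }

    // 2. Sphere sweep upward from just above the top surface to confirm there
    //    is enough headroom for the character to stand upright.
    const FVector StandPoint = TopHit.ImpactPoint + FVector(0.f, 0.f, 10.f);
    FHitResult HeadroomHit;
    const bool bBlocked = GetWorld()->SweepSingleByChannel(
        HeadroomHit, StandPoint, StandPoint + FVector(0.f, 0.f, 180.f),
        FQuat::Identity, ECC_Visibility, FCollisionShape::MakeSphere(35.f));

    OutTopLocation = StandPoint;
    return !bBlocked;
}
```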


With all of the required features working in tandem, all that remained was cleaning up the code. A new custom actor component was created to house the Blink execution logic, leaving the player character to handle only the input controls and a simple interface function to control the field of view. This component-driven design should allow the ability to be linked to new player characters quite easily.

In the end, I must say that it felt really good to work on something that can pretty much be classified as finished. It's a huge contrast to my normal work on the toolkits, which will require updates long into the future. So I'm excited to keep working on more of these small offshoot projects. Anyways, the source code (blueprints) for the project has been published on GitHub, so feel free to check it out at: https://github.com/RohitKotiveetil/UnrealEngine--BlinkAbility

Thursday, April 5, 2018

FPS Tower Defense Toolkit v3.1 Dev Log #3: Performance Optimizations

In the first part of the v3.1 dev log series for FPS Tower Defense Toolkit, I briefly covered the process of adding support for multiple power cores. While the implementation itself required only minor modifications to the existing systems, it did introduce an additional layer of expensive nav path calculations. This caused a visible dip in performance when using multiple power cores, and as a result, I had to spend more time on optimizations before releasing the update.

Since the new performance issues arose as a direct result of the additional navmesh updates, it made sense to try to reduce the cost of these operations. While contemplating what to do in that regard, I remembered an old Unreal Engine livestream with Mieszko, in which he talked about the various recast navmesh parameters exposed to the editor. So I went back and checked it out again, hoping to scavenge something useful for the current scenario. Fortunately, it turns out that decreasing the 'Tile Size UU' attribute of the recast nav mesh reduces the amount of nav mesh area that needs to be rebuilt at runtime. Since the tower placement operations essentially modify only a small part of the navmesh at any instant, I reduced the tile size to bring down the number of calculations required. The 'Cell Size' parameter was also increased in order to improve performance at the cost of nav mesh resolution, as mentioned in the video.


Next up on the list were rendering optimizations. The grid cells used for tower placement were basically planar meshes with a masked grid material applied to them. Since having lots of masked materials across the map can be taxing on the rendering system, I replaced them with a square window-frame-shaped mesh that uses a basic unlit material.


The next area for cutting costs came in the form of memory optimizations. Static meshes in Unreal Engine have a 'Has Navigation Data' attribute that determines whether collision data needs to be saved for navmesh calculations. As it happens, most static meshes used within the context of the toolkit do not interfere with the navmesh. Even the tower bases, one of the few exceptions that do impact the underlying nav mesh, use custom nav modifier volumes to create impassable regions. So it made sense to turn the attribute off wherever it wasn't required and save memory on collision data.


Now onto the final optimization. While I had some idea about the benefits of all the above three cases, the next one was completely new information to me.

During the tower construction phase before a wave, navigational paths between enemy spawn points and their linked power cores are evaluated in order to ensure that a valid path exists for the AI bots. Since this is a continuous operation, I was looking for ways to make it more efficient. A couple of ideas did cross my mind, but they would have required major alterations to the holographic tower display system, and I didn't want to delay the update any further unless absolutely necessary. And that was when I stumbled upon a thread on the Unreal Engine forums with a potential fix for the issue. As per the instructions posted in the thread, I added the 'Enable Recalculation on Invalidation' node while calculating nav paths and set its 'Do Recalculation' enum to No. This prevents the 'Find Path to Location Synchronously' function from automatically recalculating the path points whenever a previously calculated path gets invalidated by changes to the underlying navmesh. Coupled with all the other measures, this brought the toolkit back to a smoothly performing state.
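For reference, a hedged C++ sketch of the same fix (the blueprint nodes wrap these calls; the function and its parameters here are illustrative, not the toolkit's actual code):

```cpp
#include "NavigationSystem.h"
#include "NavigationPath.h"

bool HasValidPathToCore(UObject* WorldContext, const FVector& SpawnPoint, const FVector& CorePoint)
{
    UNavigationPath* Path = UNavigationSystemV1::FindPathToLocationSynchronously(
        WorldContext, SpawnPoint, CorePoint);
    if (!Path)
    {
        return false;
    }

    // Equivalent of the 'Enable Recalculation on Invalidation' node with its
    // 'Do Recalculation' enum set to No: stop automatic path updates.
    Path->EnableRecalculationOnInvalidation(ENavigationOptionFlag::Disable);

    return Path->IsValid() && !Path->IsPartial();
}
```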


With that, we've reached the end of this dev log. As mentioned earlier, I think there's still room for cost-cutting within the holographic tower display system. But it might require some major changes to the system, and hence I'm pushing it to the next update. Apart from that, there are a couple of bug fixes making it into this update. More details will be posted as part of the v3.1 changelog in the Unreal Engine forum support thread.

As a final note, I'd like to point out that the primary focus of the v3.x series of updates is to improve the following three facets of the toolkit: the wave spawning systems, the AI logic, and the visual design. These areas will keep getting a lot of attention over the course of the next few updates. However, the idea of adding support for multiple cores came from the Unreal Engine community and was not part of the planned features, and I'm really glad to have had someone point it out. So if anyone has feature requests that they think would increase the value of the toolkit, feel free to reach out to me.

Wednesday, April 4, 2018

FPS Tower Defense Toolkit v3.1 Dev Log #2: Improvements to the Weighted Wave Spawning System

The v3.0 update of FPS Tower Defense Toolkit introduced the concept of a weighted wave spawning system. As part of the dev log for that update, I covered potential methods for improving the system in the future. One of those plans was to give designers more control over when different types of AI start making their appearance over the course of a level.

The Weighted Wave Spawn Controller in its native form enables automated generation of wave spawn patterns based on weighted probability distributions. By controlling the weightings for different AI classes, one can create randomized wave patterns with this system. It also ensures that only units with a certain threat rating relative to the active wave's threat rating are allowed to spawn. Coupled with the option to specify whether a certain type of unit can be spawned at all during a mission, this provides some amount of control over the randomness of the system.

However, a limitation still existed: there was no way to precisely determine when different types of units would start spawning. In order to address this, I added a new parameter: 'SpawnStartingFromWave'.


If the 'CanBeSpawned?' parameter is set to true, the spawn controller now checks this new condition as well, providing designers with a tool to control changes in wave constituency over time (see the sketch below). I think this new feature will greatly increase the viability of the weighted wave spawning system. And with that, we've come to the end of this dev log. The next and final v3.1 dev log will go over the optimizations and bug fixes that will make it into the final release.
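To illustrate how the check might fit together, here is a hypothetical sketch; the field names approximate the toolkit's parameters rather than quoting its actual code:

```cpp
USTRUCT(BlueprintType)
struct FWaveSpawnUnitData
{
    GENERATED_BODY()

    // Master switch: can this unit type appear in the mission at all?
    UPROPERTY(EditAnywhere)
    bool bCanBeSpawned = true;

    // The new v3.1 parameter: earliest wave this unit may appear in.
    UPROPERTY(EditAnywhere)
    int32 SpawnStartingFromWave = 1;

    // Relative threat rating used by the existing weighted spawn gate.
    UPROPERTY(EditAnywhere)
    float ThreatRating = 1.f;
};

bool IsUnitEligibleForWave(const FWaveSpawnUnitData& Unit, int32 CurrentWave, float WaveThreatRating)
{
    return Unit.bCanBeSpawned
        && CurrentWave >= Unit.SpawnStartingFromWave // new wave-based condition
        && Unit.ThreatRating <= WaveThreatRating;    // existing threat gate
}
```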