Saturday, February 24, 2018

FPS Tower Defense Toolkit v3.0 Dev Log #1: Revamping the Visual Design

The FPS Tower Defense Toolkit was published on the Unreal Engine Marketplace more than two years ago, and throughout most of its life cycle, I concentrated my efforts on adding new gameplay systems and refining existing ones. Very few updates targeted the visual design specifically, and almost all of those focused on creating a clean, minimalist UI.

Towards the end of last year, I rolled out a series of mini updates focused on the introduction of new weapons to the player's arsenal. After spending most of the prior updates on refinements to existing systems, this was a huge breath of fresh air to me. First of all, it meant that I had to do research on a diverse set of weapon systems from a wide variety of games. I thoroughly enjoyed this process. There were so many cool weapons and I just wanted to figure out how they all worked. Then came the act of creating working replicas of a selected few of these weapons in Unreal Engine. And this process had a certain playful feel to it, unlike say, working on something that merely increased the overall efficiency without any visible feedback beyond a change in the number of frames per second. I even enjoyed the final testing process, which is usually kind of a mood buster for me. This experience kind of reminded me of what it felt like during the early days of the toolkit's production.

Maybe it was the tonal shift in the experience, or perhaps the fact that I cared enough about these systems to create at least some basic materials and particle systems (which I do not enjoy working on at all), but something opened up my mind to the possibilities outside the narrow path I had been following over the past year of the project's life. I felt the need to add some touch of beauty to all parts of the toolkit, even though I had very little experience working on anything other than blueprint scripting. I did not know exactly how to go about doing that, but before I could figure out the answer, life took a sudden turn.

Due to some harrowing personal issues, I had to return to my parents' place. Things quickly deteriorated further, and I realized that all my plans would have to wait, as it became more and more clear that the prospects of getting back to work before the end of the year were rather bleak. With access only to my phone, all I could really do from a work standpoint was provide customer support for the marketplace products. So I spent most of my time studying games and reading up on what was happening in the industry. Perhaps as a result of being separated from work, and being exposed to a lot of interesting ideas, I started getting excited about all the things that I wanted to do after getting my life back together.

Fast forward a couple of months, and I was back at work. While going over what course of action to take, I decided to do something about the visual design of FPSTDT. I noticed that the sample map provided with the toolkit had a very dark grey color on all walls and floors, which, to be honest, looked quite bland and unappealing. There were also no distinguishing features on the map, but I figured I'd lay the groundwork for the artistic design first before trying my hand at level design. To provide a frame of reference for the starting position, this is a screenshot from the vanilla map.




The first step was to come up with a basic vision of what I wanted the level to look like. One idea that had crossed my mind during the brief stint away from work was to emulate the visual design of Mirror's Edge and Superhot: basically, two contrasting colors used predominantly over the entire level layout. After trying out a few colors in the editor, I really took a liking to the white+red combination. However, the whites were not looking as clean as they should have, and the reds turned out a bit too bright. So I tried editing the light source to see if its color was causing this tinted result. I also tried adjusting its brightness to get the desired result, but to no avail.

After getting frustrated with my lack of experience working on this side of the engine, I tried using unlit emissive materials to see if they could produce what I was looking for. Not only did it not help, but it also made it impossible to see the edges of the meshes, as shown below:




After not making any headway, I started out with a new map from a blank project. I added in a skylight and a sphere reflection capture and applied an alternating white/red color design to the building blocks. The materials were also given a slightly glossy look. Even though it was a simple barebones map made out of cuboid shapes, it finally started looking kind of beautiful, at least compared to the sample map from the toolkit. I guess anything would look beautiful compared to that, but to be honest, I really liked the way the white and red complemented each other. It created a clear distinction between the floor, the tower bases and the towers.



However, there was one major problem with this design. When I tried to spawn towers, it was a bit too hard to clearly identify the visual silhouette of the holograms against the bright white floors.



So what started as an exciting prospect of replicating the aesthetic design of Mirror's Edge came to a halt. I did not have any alternative design choices in mind, and going back to the default style was not an option. So I went ahead and experimented with the entire spectrum of colors to identify other interesting combinations. The floors definitely needed something on the darker side that contrasted well with the holograms. Black was the first color that came to mind from that viewpoint, and it actually looked ok. It did not look as clean as the alternating white and red design, but it would have to do for now.



With the holographic display issue kind of sorted out, the next problem to tackle was the washed out in-game look. The skylight intensity had been ramped up earlier to make all the surfaces look smooth and shiny, and bringing the value down made everything look very dull. I tried to work around this by adjusting the intensity of the directional light, but without making any progress towards the intended result. So I started going over the skysphere settings to see if they would make any difference and noticed that the sun height of the actor was set to -1, which basically makes it display a night sky. Out of curiosity, I decided to check if it had any impact on the actual lighting of the level by resetting the parameter back to 1. After doing so, I built the lighting again and tested out the level. With the already ramped up skylight intensity, it was like getting caught in Tien's Solar Flare from the intro of Dragon Ball Z.



Now I could finally bring the intensity of the skylight down much further, reducing the washed out look. Around this time, I was also reminded of Racer X's car from Speed Racer, with its striking combination of yellow and black.



So I threw a few yellow blocks into the scene as well, just to get an idea of how the color fit in. I could find no immediate use for it, but I kept it around in case some idea popped up in the future.



The level was then expanded outwards to match the scale of the original sample map, as I wanted a good comparison. It definitely did look better than before (unfortunately, I do not have any screenshots from this particular phase), but it somehow felt like there was a definite lack of visual cohesion. I hardly have any experience working on the art/level design aspects of the engine, but the power core, with its holographic material, looked kind of out of place for some reason. It later occurred to me that there was no unifying theme holding everything together. Since I did not know where to find one, I decided to go back to the roots and think about why I started working on this toolkit and what it felt like back then.

During the early stages of the project, I had no intention of publishing it as a toolkit on the Unreal Engine Marketplace. I was very much into Sanctum 2 back then and just wanted to try building the underlying gameplay systems on my own. So it was more of a fun little experiment than anything else at that point. It was, in essence, kind of like playing with a Lego set. You may start by trying to create what was shown in the pictures that come with the box, but then the experience transforms into one where you go around exploring your own ideas. It's similar to how you start out trying to understand and recreate a game, but then end up throwing in your own ideas to see what happens. At that point, it feels like a playground, with you designing the systems that form this space more than actually playing with these creations. So I decided to run with this theme of a virtual playground.

With a basic overarching theme set in place, I went around using basic colors to categorize the different building blocks of the level. For example, yellow floor surfaces to signify enemy spawning areas, green for the power core, etc. That went on for a while, experimenting with minor tweaks to the material and lighting setups, and finally, the new visual design for the toolkit was complete.






There's most certainly a lot more room for improvement. The level design, for one, could really use an upgrade. But this was a good start, and I will continue pursuing this course of action over the upcoming updates.

Sunday, February 18, 2018

Top Down Stealth Toolkit FAQ #2

Q: I want to change the size of the vision arcs. Where can I find the variables that control it? Is there a way to do it from the editor window?

A: You can customize the radius & angle of the vision arcs for all types of AI through their perception components. The toolkit uses a custom AI Perception system (BPC_AIPerception) that enables the AI bots to perceive different types of stimuli. It supports four different types of perception (for further details, check out Top Down Stealth Toolkit Basics: AI Perception), out of which the Visual Perception system hosts the parameters that determine how far & wide a bot can see. These include 'VisionRange' & 'HalfVisionAngle', which, among other things, also control the size of the Vision Arcs.
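To make the roles of these two parameters concrete, here is a rough Python sketch of a vision-arc check. This is purely illustrative and not the toolkit's Blueprint logic; the function name and the 2D coordinate setup are my own assumptions.

```python
import math

def is_in_vision_arc(guard_pos, guard_facing_deg, target_pos,
                     vision_range, half_vision_angle):
    """Check whether a target lies inside a guard's vision arc.

    guard_pos / target_pos are (x, y) tuples; guard_facing_deg is the
    direction the guard faces, in degrees.
    """
    dx = target_pos[0] - guard_pos[0]
    dy = target_pos[1] - guard_pos[1]
    # VisionRange caps how far the guard can see.
    if math.hypot(dx, dy) > vision_range:
        return False
    # HalfVisionAngle caps the deviation from the facing direction.
    angle_to_target = math.degrees(math.atan2(dy, dx))
    delta = abs((angle_to_target - guard_facing_deg + 180) % 360 - 180)
    return delta <= half_vision_angle

# A guard at the origin facing along +X, with a 1000-unit range
# and a 45-degree half angle:
print(is_in_vision_arc((0, 0), 0, (500, 100), 1000, 45))  # True
print(is_in_vision_arc((0, 0), 0, (0, 500), 1000, 45))    # False
```

Doubling 'HalfVisionAngle' widens the arc, while 'VisionRange' scales its radius, which is exactly why both also drive the size of the on-screen vision arc.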

Ideally, these variables could be made public (instance editable), thus enabling customization directly through the editor. However, due to an engine bug (https://issues.unrealengine.com/issue/UE-46795) that automatically resets public struct values stored in components, I had to revert them to being editable only from the parent blueprint to which the component is attached. This means that the aforementioned attributes will require editing through the details panel of the perception components in the AI blueprints (as shown in the screenshot below). Doing so will instantly apply the changes to all actors of that particular AI class.



I understand that this is a bit tedious compared to directly editing these attributes from the editor details panel, but Epic has marked the bug as fixed for the v4.19 release. So if the fix does make it into the final release, it should then be possible to set the variable to public and directly test out changes through the editor itself.

Wednesday, February 14, 2018

Tower Defense Starter Kit v2.2 Dev Log #2: Data Driven AI Design

In the previous dev log for the v2.2 update of Tower Defense Starter Kit, I talked about the new improvements to the wave spawning system, & in particular, the introduction of weighted spawn probabilities for different AI classes. If you're interested in knowing more about the same, you can get an in-depth read at Tower Defense Starter Kit Dev Log #1: Weighted Wave Spawning System. Getting back to the topic at hand, it was while working on the idea of weighted probability distributions that I decided to try out a data-driven approach to spawning AI bots. The tower spawning system had received a similar upgrade during the early stages of the toolkit and ever since then, I've occasionally had this thought cross my mind.



The introduction of a data-driven design ended up greatly streamlining the process of adding new tower-related features across a wide variety of scenarios. Most notable among these was that further changes to the HUD systems required a far less hands-on approach, as most of the associated UI elements were already being displayed dynamically based on the underlying data. And this drastically reduced the time required to release subsequent updates. However, all these advantages did not exactly warrant doing the same for the AI systems, as, unlike towers, there were very few processes interlinked with them. Plus most of the updates were focused on other aspects of the toolkit. But that is about to change.

The primary focus of the next few updates is to improve the AI & Wave design elements of the toolkit. As mentioned in the previous dev log, I'd like to first create a foundational framework which facilitates the introduction of further customizations with ease. So as implemented in the case of the tower spawning systems, I decided to make a small step towards taking a data-driven approach to AI design. No changes have been made to the core AI logic at this point. Instead, the basic attributes common to all AI classes (like HP, Attack Damage, Vision Range, etc.) have been migrated over to a new data table as shown below:



Over the course of the next few updates, I would like to transfer more of the AI control parameters into the data table. For example, the Healer class AI has additional parameters that determine how often & by how much it can heal allied bots in the vicinity. The same can be said of the Tower Disabler bot, which can temporarily disable all nearby towers at periodic intervals. As of now, the variables that control these specialized functions still reside within their respective classes. In the future, I would like to have these unique parameters set through the data table as well. This could potentially allow us to have all data pertaining to the AI systems in a centralized & easily editable location, & also to dynamically add special abilities like healing based on the associated information.
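As a rough illustration of what the data-driven approach buys, here is a minimal Python sketch. The table contents, class names, and attribute names below are hypothetical stand-ins; the actual attributes live in an Unreal data table asset, not in code.

```python
# Hypothetical stand-in for the AI data table; the real attribute
# names and values live in an Unreal data table asset.
AI_DATA_TABLE = {
    "Grunt":  {"hp": 100, "attack_damage": 10, "vision_range": 800},
    "Healer": {"hp": 80,  "attack_damage": 5,  "vision_range": 600},
}

class AIBot:
    def __init__(self, ai_class):
        # Shared attributes come from the table instead of being
        # hard-coded per class; a new AI type only needs a new row.
        row = AI_DATA_TABLE[ai_class]
        self.ai_class = ai_class
        self.hp = row["hp"]
        self.attack_damage = row["attack_damage"]
        self.vision_range = row["vision_range"]

grunt = AIBot("Grunt")
print(grunt.hp)  # 100
```

The point of the pattern is that spawning logic and UI can read everything they need from one place, which is what made the tower-side updates so much faster to ship.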

Anyway, I don't want to rush down that path without going over all the potential alternatives. So I'm taking it one step at a time right now. The v2.2 update will introduce the data-driven approach at a basic and limited level, with the wave spawning systems already synchronized with this design philosophy. I hope to develop it into a full-fledged system soon enough.

Tuesday, February 13, 2018

Tower Defense Starter Kit v2.2 Dev Log #1: Weighted Wave Spawning System

The v2.0 update of Tower Defense Starter Kit introduced some major improvements to the wave spawning system. Prior to this update, the toolkit came equipped with a single Wave Spawn Controller class that handled three different types of spawning models. This design structure was replaced in favor of an inheritance based system with a parent class that handled the base logic, and two new child classes that individually specialized in user-defined (batched) and randomized (threat based) wave spawn patterns. The third model of wave spawning system, which allowed the user to control the spawn data of every individual unit in a wave, was removed since the batched spawning system could essentially achieve the same desired result while making it far easier to edit the wave data.

The reason behind introducing an inheritance based design was self-explanatory, but the true long-term goal of the v2.0 update was to act as a pivot point for the design direction of future improvements to the wave spawning system. Getting to the point, the batched spawning model works quite well within its intended design: the creation of user-defined wave patterns. The unit-based system, as mentioned earlier, did not serve any real purpose and hence did not warrant an upgrade. While it was easy to use when working with small waves (the reason behind its inclusion in the first place), it got increasingly cumbersome once you started dealing with larger waves of AI bots. It essentially ended up being feature creep that bloated the toolkit, and hence it was removed.

As for the threat based model, the primary motivation behind its creation was to provide a more automated approach to wave generation. It did not depend on the user to explicitly define the exact nature of the AI units spawned during a wave. However, to be honest, it had very little intelligent design going for it, instead relying a bit too much on randomness. And this had been bugging me for quite some time. So I got around to thinking about whether it was truly adding anything relevant to the toolkit's core feature set. Was it providing customers with a tool that allowed them to replicate popular tower defense design tropes? While it did serve the purpose of being an alternative design choice to the batched spawning system, it offered too little in the way of giving end users any degree of control over the wave generation mechanic. Ultimately, which units got spawned boiled down to a random number, with no rules beyond an upper threshold for the threat rating associated with a wave. And this brings us to the core theme of the v2.2 update: the introduction of weighted spawn probabilities to the wave spawning system.

In an effort to improve the design of threat-based spawning system, I started thinking about ideas that could reduce the element of randomness associated with the model. I noticed that while the existing system allowed the designer to specify enemy AI classes that can be spawned in any particular wave, it did not provide any means to control the probability distribution among them. So I introduced a new weight parameter which can be defined for all types of AI that need to be spawned during a mission.




Now, this facilitated the creation of special conditions like an increased spawn chance for weaker enemies and a reduced one for the really powerful ones. However, this in itself wouldn't outright prevent higher tier enemies from spawning during the first wave. That led to the addition of a new conditional check to ensure that only enemies with a threat rating lower than a certain specified percentage of the wave's threat rating get spawned. I plan to extend this further in future updates by having unlock levels for different classes of AI. I'm also looking into the possibility of a dynamically changing weight system, which could reduce the weighting of lower tier enemies over each passing wave while increasing that of the higher tier ones.
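The combined effect of the spawn weights and the threat-rating check can be sketched in a few lines of Python. The roster values, the `max_threat_fraction` parameter, and the function name are all hypothetical, not the toolkit's actual implementation.

```python
import random

# Hypothetical enemy roster: class name -> (threat rating, spawn weight).
ENEMY_ROSTER = {
    "Grunt":   (1, 6),
    "Soldier": (3, 3),
    "Tank":    (8, 1),
}

def pick_enemy(wave_threat_rating, max_threat_fraction=0.5):
    # Only enemies whose threat rating stays below a fraction of the
    # wave's total threat are eligible, so heavy units cannot appear
    # in early, low-threat waves.
    cap = wave_threat_rating * max_threat_fraction
    eligible = [(name, weight) for name, (threat, weight)
                in ENEMY_ROSTER.items() if threat <= cap]
    names = [n for n, _ in eligible]
    weights = [w for _, w in eligible]
    # Weighted random pick among the eligible classes.
    return random.choices(names, weights=weights, k=1)[0]

# Early wave: only Grunts fit under the threat cap.
print(pick_enemy(wave_threat_rating=4))  # Grunt
```

The weights bias the distribution without eliminating randomness entirely, while the threat cap adds the hard rule that the purely random model lacked.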


Another area that received an overhaul was the repeating wave cycle design. The batched wave spawning system retains wave cycles where they make sense, but I wanted the new weighted system to focus more on generating waves in an automated fashion (without increasing randomness within individual waves), as opposed to having the user define every aspect of the wave. Hence the wave cycles were replaced with a system that raises the threat rating of each wave through a user-defined growth function.



I've implemented a simple linear growth function as part of the new update, but it can easily be modified to have an exponential growth curve. Maybe even a curve that dips after a certain number of waves to allow for some quiet time before raising the threat levels higher than before. I would definitely like to try out these scenarios in the future, but that's not what this update is about. Right now, I hope to build a foundational framework upon which future updates can be built.
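For illustration, the two growth curves mentioned above might look like this in Python; the parameter names and default values are made up for the example, not taken from the toolkit.

```python
def wave_threat_rating(wave_index, base_threat=10.0, growth_rate=5.0):
    # Linear growth: every wave adds a fixed amount of threat.
    return base_threat + growth_rate * wave_index

def wave_threat_rating_exp(wave_index, base_threat=10.0,
                           growth_factor=1.25):
    # A possible exponential alternative to the linear curve.
    return base_threat * growth_factor ** wave_index

print(wave_threat_rating(0))  # 10.0
print(wave_threat_rating(4))  # 30.0
```

Swapping the function out (or blending the two, or adding a periodic dip) changes the pacing of the whole mission without touching the spawning logic itself, which is the appeal of isolating growth in a single user-defined function.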

Friday, February 9, 2018

Top Down Stealth Toolkit FAQ #1

Q: I noticed that the turrets are disabled when I start a new game. But then they sometimes get activated over the course of a game. Why is it behaving this way, and how can it be enabled right at the start of a mission?

A: The turret AI in Top Down Stealth Toolkit is set to a deactivated state by default. This is an intended feature designed to showcase the use of automated security devices as a form of backup system for the AI. The default behavior is to activate them once the Global Alert Level escalates to Stage I, which is why they seem to get turned on sometimes during the mission.

However, this design is not set in stone, and it can easily be modified to have the turrets turned on at the start of a level. If you want all turrets to be activated by default, open up the 'BP_AutomatedSurveillance_Turret' blueprint and set the 'UseDelayedInitializationModel' variable to False. Basically, this variable determines if an AI agent gets enabled by default, or on a need basis over the course of a mission. On the other hand, if you want only certain turrets placed in the level to be turned on, then just select those actors in the editor and set the aforementioned variable (check the screenshot below) to False through their details panels.


Thursday, February 8, 2018

Top Down Stealth Toolkit Basics: Patrol Movement Systems

The patrol movement systems for the Guard AI in Top Down Stealth Toolkit are driven by a modular, component-based design that ensures minimal coupling between the associated feature & the parent entities that use it. This essentially means that you can easily reuse the functionality wherever required, without getting bogged down in deep inheritance hierarchies or copy pasting large chunks of code between classes that are otherwise not related by a parent-child relationship.

The 'BPC_PatrolMovementControl' component attached to the Patrol Guard AI parent class supports three different types of patrol modes: Stationary, Fixed Waypoints, & Random Waypoints. These can be set for each individual patrol guard from the editor through the publicly editable variable 'PatrolMovementControl' as shown below:





Here is a brief overview of the three patrol modes:


Stationary: The Stationary mode setting is used when AI bots need to be assigned to guard a specified location without any actual patrol movement. This does not impact their decision-making process when it comes to responding to stimuli & hence they can move out to investigate any perceived suspicious activities in their vicinity. However, in the event that these guards fail to find the player after doing so, they will always return to their original guard location.

Fixed Waypoints: The Fixed Waypoints system enables the creation of well-defined paths for patrol movement. This is accomplished through the use of custom waypoint actors as well as an array to store references to these waypoints in a fixed order.

To create a patrol path using this system, drag & drop a set of 'BP_WaypointNode' actors into the level. They will serve as the path points that will be assigned to the AI bot. Now select the required Patrol Guard, & add a few elements (at most as many as available waypoints) to the 'FixedWaypointsArray' in 'BPC_PatrolMovementControl' component. To assign these waypoints, select the individual elements of the array & set the waypoint actors from the drop-down menu in their order of traversal along the required path (check screenshot below).


Guards that were interrupted from their path due to an external stimulus will always return to their designated path once they've completed the investigations.

Random Waypoints: Agents assigned to this mode will keep patrolling between randomly selected destinations within the level. The 'RandomWaypointRadius' parameter within the 'BPC_PatrolMovementControl' component can be used to control the search radius for acquiring new waypoints.
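For anyone curious how the three modes might fit together behind a single setting, here is a hypothetical Python sketch. The class layout and parameter names only loosely mirror the actual 'BPC_PatrolMovementControl' component, and the destination-picking details are my own assumptions.

```python
import math
import random
from enum import Enum

class PatrolMode(Enum):
    STATIONARY = 0
    FIXED_WAYPOINTS = 1
    RANDOM_WAYPOINTS = 2

class PatrolMovementControl:
    """Illustrative destination-selection logic for the three modes."""

    def __init__(self, mode, home, fixed_waypoints=None,
                 random_waypoint_radius=500.0):
        self.mode = mode
        self.home = home                  # assigned guard location
        self.fixed_waypoints = fixed_waypoints or []
        self.next_index = 0               # position in the waypoint array
        self.radius = random_waypoint_radius

    def next_destination(self, current_pos):
        if self.mode is PatrolMode.STATIONARY:
            # Stationary guards always return to their assigned spot.
            return self.home
        if self.mode is PatrolMode.FIXED_WAYPOINTS:
            # Cycle through the waypoint array in its defined order.
            dest = self.fixed_waypoints[self.next_index]
            self.next_index = (self.next_index + 1) % len(self.fixed_waypoints)
            return dest
        # RANDOM_WAYPOINTS: pick a point within the search radius.
        angle = random.uniform(0.0, 2.0 * math.pi)
        dist = random.uniform(0.0, self.radius)
        return (current_pos[0] + dist * math.cos(angle),
                current_pos[1] + dist * math.sin(angle))

guard = PatrolMovementControl(PatrolMode.FIXED_WAYPOINTS, home=(0, 0),
                              fixed_waypoints=[(100, 0), (100, 100)])
print(guard.next_destination((0, 0)))  # (100, 0)
```

Keeping all three behaviors behind one component setting is what lets each guard in the level be assigned a different idle behavior purely from the editor.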


Using these three patrol modes, the AI entities in a level can be assigned different idle behaviors based on the roles they fulfill within the game's theme as well as the area/object they're guarding within the physical game space.

Wednesday, February 7, 2018

Top Down Stealth Toolkit Tutorial: How to create a new level

1. First, ensure that the default Game Mode & Game Instance class parameters in the Project Settings are set to 'BP_GameMode' & 'BP_GameInstance' classes respectively.

2. Now create a new map, open it, & lay down the floor meshes. Add a Nav Mesh Bounds Volume & extend it to encapsulate all the floor meshes. This will ensure that the AI agents/bots, once added, will be able to traverse the level.


3. Add a Lightmass Importance Volume around the core game space.


4. Now drag & drop the following blueprints as actors into the level: BP_AISensoryManager, BP_AISurveillanceController, BP_GlobalAlertLevelController, BP_PatrolGuardSpawnPoint (multiple, if necessary), & BP_ExitPoint. Before moving on to the next step, here is a brief overview of what each of these actors brings to the toolkit:

  • The AI Sensory Manager continuously evaluates all stimuli against various agents & dynamically assigns new objectives to the AI agents based on the results. It basically is kind of like a task manager for the AI, functioning at a level higher than each of the individual agents.
  • The AI Surveillance Controller directs the activation of all AI agents within the level. This system can be leveraged to create different starting situations for each level, choosing to activate all security measures by default or have them activated dynamically based on the overall threat perceived by the AI.
  • The Global Alert Level Controller uses event dispatchers to continuously listen in on new stimuli being perceived by AI agents across the level, & updates the Global Alert Meter based on the threat rating of the perceived stimulus. This meter enables the aforementioned AI Surveillance Controller to dynamically increase the AI presence in the level in order to counter the threat posed by the player.
  • The Patrol Guard Spawn Points, as the name suggests, act as spawn points to bring in additional Patrol Guards as backup. Unlike the other four actors on this list, these spawn points can be added at multiple spots across the level.
  • The Exit Point essentially serves as a sort of final objective marker for the level & gets activated once the player collects all the gems placed in the level.

Together, these five actors drive the core logic that is essential for the toolkit to function as intended.
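As a rough sketch of the dispatcher-driven flow between these actors, consider this hypothetical Python stand-in for the Global Alert Level Controller; the threshold values, method names, and callback wiring are made up for illustration and do not reflect the actual Blueprint event dispatchers.

```python
class GlobalAlertLevelController:
    """Hypothetical stand-in for the toolkit's alert level logic."""

    def __init__(self, stage_thresholds=(10.0, 25.0)):
        self.alert_meter = 0.0
        self.stage_thresholds = stage_thresholds
        self.listeners = []  # e.g. the AI Surveillance Controller

    def bind(self, callback):
        # Stand-in for binding to the controller's event dispatcher.
        self.listeners.append(callback)

    def on_stimulus_perceived(self, threat_rating):
        # Called whenever an AI agent reports a newly perceived stimulus.
        self.alert_meter += threat_rating
        # The alert stage is the number of thresholds crossed so far.
        stage = sum(self.alert_meter >= t for t in self.stage_thresholds)
        for callback in self.listeners:
            callback(stage)

controller = GlobalAlertLevelController()
controller.bind(lambda stage: print("alert stage:", stage))
controller.on_stimulus_perceived(4.0)   # meter at 4  -> stage 0
controller.on_stimulus_perceived(8.0)   # meter at 12 -> stage 1
```

In the toolkit, the Surveillance Controller would be one such listener, activating extra security measures (like the turrets from the FAQ above) as the stage rises.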

5. Now it's time to add the various AI agents & interactive actors into the level. These include collectible Gems, Patrol Guards, Cameras, Motion Sensors, Automated Turrets, Gadget Pickups, etc.


That's all there is to it. You should now have a fully functional level at your disposal. If you have any queries regarding the workflow, feel free to let me know in the comments section.

If you're interested in the toolkit, it's now available for purchase through the Unreal Engine Marketplace:  https://www.unrealengine.com/marketplace/top-down-stealth-toolkit