
Leadwerks 3 Update Available

Josh

3,140 views

This update adds lightmapping across curved surfaces with smooth groups. The image below shows a set of lightmapped CSG brushes, not mesh lighting. You can read a detailed account of our implementation of this feature here.

 

[Screenshot: lightmapped CSG brushes with smooth groups]

 

The project manager now includes an "Update" button, so you can easily update your project any time we modify files in the template folders. Although our testing turned up no problems, we recommend backing up your project before using this feature. As an added precaution, the editor will make a copy of any files it overwrites.

 

[Screenshot: the project manager with the new Update button]

 

The new Options dialog gives you full control over the features and settings of the Leadwerks editor. You can toggle grid snapping, set the snap angle, and adjust lots of other program behavior.

 

[Screenshot: the new Options dialog]

 

We have made one change to the script system. Our multiple-script design worked reasonably well during the development of Darkness Awaits. The flowgraph interactions were clean, but when it came to AI and player interaction, things got messy. For example, when the player pushes a button, we perform a raycast or proximity test to get the entity hit. Then we have to go through all the scripts attached to that entity, looking for the relevant one:

--Check if the GoblinAI component is present
if entity.script.GoblinAI ~= nil then
	--Call the TakeDamage() function
	entity.script.GoblinAI:TakeDamage(10)
end

 

That works okay in our example, but when we consider other enemies we want to add, it suddenly gets ugly. We have a few choices:

  1. Add an if statement for every new AI script, checking to see if it is present.
     
  2. Separate the health value out into its own script.
     
  3. Loop through all attached scripts looking for a TakeDamage() function.

 

It should be obvious why option #1 is a bad idea: it would make our code highly interdependent. Encapsulation is one of our goals in game scripting, so we can achieve drag-and-drop functionality (or as close to that as is reasonably achievable without limiting ourselves).

 

The second option would solve our immediate problem, but it means that any value two scripts need to access would have to live in its own separate script. The thought of this just shuts my creativity down. I already find it tedious to attach both an AI and an AnimationManager script to an entity, and can't imagine working with even more pieces.

 

The third option is the most reasonable, but it greatly impedes our coding freedom. It means every time two entities interact, you would have to do something like this:

--Check if any components are attached
if entity.components ~= nil then
	--Iterate through all attached components
	for k, v in pairs(entity.components) do
		if type(v.TakeDamage) == "function" then
			--Call the TakeDamage() function
			v:TakeDamage(10)
		end
	end
end

 

If you actually wanted to return a value, you would have to take the first value found and exit the loop:

local enemyhealth = 0

--Check if the components table is present
if entity.components ~= nil then
	--Iterate through all attached components
	for k, v in pairs(entity.components) do
		if type(v.GetHealth) == "function" then
			--Call the GetHealth() function and stop at the first result
			enemyhealth = v:GetHealth()
			break
		end
	end
end

 

This is a major problem, because it means there is no firm "health" value for an entity. Sure, we could standardize on a "HealthManager.health" value, but therein lies the issue: instead of having plug-and-play scripts everyone can share, we have to devise a system of expected script names and variables. This breaks compartmentalization, which is what we were going for in the first place. Both Chris and I realized this approach was fundamentally wrong.

 

After careful consideration, and based on our experience working on Darkness Awaits, we have restructured the system to work as follows, with a 1:1 entity:script relationship:

--Check if a script is present
if entity.script ~= nil then
	if type(entity.script.TakeDamage) == "function" then
		--Call the TakeDamage() function
		entity.script:TakeDamage(10)
	end
end

 

You can set an entity's script with the new function:

Entity::SetScript(const std::string& path)

 

You can also set and get entity script values right from C++:

virtual void SetString(const std::string& name, const std::string& s);
virtual void SetObject(const std::string& name, Object* o);
virtual void SetFloat(const std::string& name, const float f);

 

And it's easy to call a script function from your C++ code:

virtual bool CallFunction(const std::string& name, Object* extra=NULL);
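
For instance, here is a minimal sketch of how these calls might fit together from C++. This is a hypothetical usage example: the script path, the "health" property, and the Respawn() function are illustrative names only, and 'entity' is assumed to be an Entity* obtained elsewhere.

//Hypothetical sketch: attach a script, set one of its values, and call a function
entity->SetScript("Scripts/GoblinAI.lua");

//Push a starting value into the script's property table
entity->SetFloat("health", 100.0f);

//Call a function defined in the attached script; the bool return is
//assumed to report whether the call could be made
if (!entity->CallFunction("Respawn"))
{
	//React to the missing function here
}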

 

The properties dialog has changed slightly, with a consistent script tab that always stays visible. Here you can set the script, and its properties will appear below, instead of being spread across a bunch of different tabs:

[Screenshot: the properties dialog with the persistent script tab]

 

Maps with entities using only one script (which is almost everything we see) are unaffected. Objects with multiple scripts need to have their code combined into one script, or split across multiple entities. I am very reluctant to make changes to the way our system works. Our API has been very stable since day one of release, and I know it's important for people to have a solid foundation to build on. However, I knew we had made a design mistake, and it was better to correct it sooner rather than later.
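
As a rough illustration of combining scripts, here is a minimal sketch of how two formerly separate behaviors, rotation and movement, might merge into one script under the new 1:1 model. The property names and values are hypothetical, not actual template scripts.

--Hypothetical merged platform script: behaviors that previously lived
--in two attached scripts now share one script's UpdateWorld() hook
Script.turnspeed = 1.0 --float "Turn speed"
Script.movespeed = 0.5 --float "Move speed"

function Script:UpdateWorld()
	--Formerly the rotation script
	self.entity:Turn(0, self.turnspeed, 0)

	--Formerly the movement script
	self.entity:Move(0, 0, self.movespeed)
end

Alternatively, each behavior could stay in its own script, with each script attached to a separate entity (such as a pivot) parented together.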

 

Probably the best aspect of the script system in Leadwerks 3 has been the flowgraph connections between scripted objects. That's been a big winner:

 

[Screenshot: flowgraph connections between scripted objects]

 

As we use the editor more and hear about the community's experience, we can keep refining our tools. One of the more subtle changes we made is in the properties editor. One annoyance I experienced was setting a property like mass or position when an entire hierarchy of entities was selected. Obviously I didn't want to set the mass for every single bone in a character, but how do you tell the editor that? I went through all the properties, and for the ones a user is unlikely to want set across a hierarchy, I made the following rule: if an entity has a selected parent anywhere in its hierarchy, it gets ignored when properties are retrieved and applied. (Other things like color will still work uniformly.) Just try it, and you'll see. It speeds up editing dramatically.
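
In pseudo-Lua, the rule amounts to something like the following sketch; HasSelectedParent() and the selected flag are illustrative stand-ins, not the editor's actual code.

--An entity is skipped when properties are retrieved and applied
--if any ancestor in its hierarchy is also selected
function HasSelectedParent(entity)
	local parent = entity:GetParent()
	while parent ~= nil do
		if parent.selected then return true end
		parent = parent:GetParent()
	end
	return false
end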

 

In a similar vein, we finally solved the problem of "too many selection widgets": we now display only the top-most control widget for each group of selected objects. Again, it's easier for you to just try it and see than for me to explain in detail. If we did our job right, you might not even notice, because the editor will just do what you want without any extra thought.

 

You've also got control over the color scheme, and can use it to customize the look and feel of the 3D viewports and code editor.

 

[Screenshot: color scheme customization]

 

On Windows we fixed an annoyance that triggered menu shortcuts when the user tried to copy and paste in text fields. This tended to happen in the properties editor. We're working to resolve the same issue in the Cocoa UI for Mac.

 

I'm a big fan of the 3ds Max MaxScript system, so we've added a real-time Lua console to the output panel:

[Screenshot: the real-time Lua console in the output panel]

 

You can type Lua commands in and interact with Leadwerks in real-time. Just for fun, try pasting in this line of code and see what happens:

for i=0,10 do a = Model:Box(); a:SetColor(math.random(0,1),math.random(0,1),math.random(0,1)); a:SetPosition(i*2,0,0); end

 

You can't create objects the editor will recognize (yet), but it's fun to play with and should give you an idea of where we're going with the editor.

 

This is an important update, with new features and enhancements that improve usability and workflow. Thank you for your feedback; it's great to see all the activity taking place as people learn Leadwerks 3.



19 Comments


Recommended Comments

Sweet list of updates, Josh.

I do find the script changes a little weird, though. It completely removes the component-based behaviour. Instead of dedicated scripts we are now getting massive scripts. I think this makes scripting more complicated.


I didn't understand all the script changes, but I'll take a look tonight and try it.

 

Scripts and flowgraphs are coming back. What is recommended, finally: using C++ or scripts/flowgraphs to make games?

 

Wouldn't it be easier to call C++ from a script attached to characters, for example? (C++ managing collisions, AI, and other stuff.)

 

Will we have small examples of the new script system and flowgraphs?

 

This is a lot of changes, many of which I didn't expect, as the script system was okay for me.

 

That's a lot of changes, but where are the new gizmos? Just kidding, we'll wait, and if they don't appear we'll create some 3ds Max-style scripts for them, since this is a new incoming feature.


I like this too, but I think I'm having trouble understanding the script change.
Why is it now impossible to have multiple scripts attached?
Why is it now impossible to have multiple scripts attached?

 

Maybe some sort of index for attached scripts could work?

 

I think scripts can now sometimes end up with repeated code in them.

 

For example, the Volume Trigger script together with the Sound Node script attached to one entity. Now I need to have two entities, or put both scripts together into a single script.

 

Or maybe the scripting system wasn't made for such small script parts in the first place.


There are three major aspects of the script system in Leadwerks 3. First, scripts are attached to objects from a file, rather than having to be named the same as a loaded model. Second, we tried attaching multiple scripts per object. Third, the flowgraph allows the user to create visual connections between the scripts.

 

All of these were new ideas, and they were all theoretical designs. We didn't really know how practical they would be until we started using them in depth. Fortunately, two out of those three ideas turned out to be good ones.

 

Once we got into more advanced scripted gameplay, we found that a component approach turned out to be very tedious from a programming perspective. It creates an extra layer of abstraction that's confusing for both beginners and advanced users.

 

We believe we should expand on the strength of the flowgraph editor in the future. One way we can do this is by adding purely logical entities: that is, entities that only exist in the flowgraph and get used for connecting logic.


I like the idea about pure logic entities. That would solve the case I was thinking about.


Just updated; looks good. A few changes are needed in my project, but for the better.


Smooth rounded BSP lighting is great now :)

 

And being able to copy/paste in the scene panel is good also.

 

I see the player has only one script now; I'll have to dig into how it works later.


Can't say I'm thrilled about losing multiple attached scripts. The animation manager, along with other stuff, was a good example of the benefit of that system. I guess we have to copy/paste the animation manager code into our one entity script now, or rewrite the animation code for each entity?

 

At least logical entities will come to the flowgraph, which will add some power to it, even though we can already do that today by using a pivot to attach the logic script to.

 

"Once we got into more advanced scripted gameplay, we found that a component approach turned out to be very tedious from a programming perspective."

 

Can you explain how you found it tedious, or maybe explain why you didn't see the benefit of reusable components that other people could use, or better yet that programmers could eventually sell? Multiple scripts were meant for the programmer who wants the benefit of reusable scripts, and also for the non-programmer who just wants plug-and-play functionality. I would rather add five different specific scripts that I can reuse over and over in different situations than recode or copy/paste code into one master script per object.

 

Also, couldn't you just leave in the ability to add multiple scripts, so those who don't want to do that can just make one script? Allowing multiple scripts doesn't hinder the person who wants just one script, unless it was causing you bugs behind the scenes and you just didn't want to deal with it. ;)

 

 

Rereading: in my view, components need to interact via events, and the attachment of those events is what's specific to the object, so as not to break the encapsulation of the scripts. This is basically what the flowgraph is doing, and why it's needed in a situation like this; it does work if we have multiple scripts, just like in the script link I provided above.

 

Another option for programmers would be to make one script that is just the glue, attaching events to the functions of the other attached scripts, which gives a more event-oriented programming model (see the sketch below).
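
As a rough sketch of that idea, a hypothetical glue script might look like the following, assuming the multiple-script model where an entity carries a components table. The Connect() and FireEvent() helpers and all the component names are made up for illustration; none of this is actual Leadwerks API.

--Hypothetical event-wiring helper: registers a listener on the source
--script so that firing the named event calls a method on the target
function Connect(source, eventname, target, funcname)
	source.listeners = source.listeners or {}
	source.listeners[eventname] = source.listeners[eventname] or {}
	table.insert(source.listeners[eventname], function(...)
		target[funcname](target, ...)
	end)
end

--Components would fire their events through a helper like this
function FireEvent(source, eventname, ...)
	if source.listeners and source.listeners[eventname] then
		for _, listener in ipairs(source.listeners[eventname]) do
			listener(...)
		end
	end
end

--The glue script wires generic components together for one specific game,
--so the components themselves never have to know about each other
function Script:Start()
	local c = self.entity.components --hypothetical table of attached scripts
	Connect(c.VolumeTrigger, "Enter", c.SoundNode, "Play")
	Connect(c.Health, "Death", c.Ragdoll, "Activate")
end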


I did like multiple scripts, but if we're going away from that, it was better to do it now rather than further down the line.


How about messages? Send a message to all attached components.

--Send a message from script 1
SendMessage("Damage", 10.0)

--Handler in script 2
function HandleMessage(message, extra)
	if message == "Damage" then
		health = health - extra
	elseif message == "Repair" then
		health = health + extra
	end
end

Overhead is really low, and when a component's HandleMessage() body is empty it simply ignores messages.


Yeah, SendMessage() would be good too; however, there is still a level of dependency with that, it's just not as hard a dependency as an actual object or function. I still think a script that attaches functionality from script to script, created specifically for the game in question, would be cool too.

 

Oh well. :/

 

edit: The other side of this coin is being able to attach multiple scripts that don't have anything to do with each other or need to communicate at all. For example, having a rotation script and a movement script attached to a platform or something like that. They don't need to talk; they just need to do their own thing. Without this we are duplicating code inside the one script. That's a simple example, but there are more complex situations just like it.

 

I just don't see what the harm is. If you only want to use one script, then use one script, but leave the multiple-script functionality there for those who want to use it.


Hi Rick. I see it this way at the moment:

The attached script is really only for the entity's behavior.

So if you have a moving/rotating platform, add both the rotation and movement functions to it. You don't need to use both when you only need one kind of movement. Then I would use the pure logic flowgraph entities for the events: button press -> start the movement of the platform.

 

I know you would need another entity for the button anyway; I just couldn't think of a better example.

 

With multiple scripts attached, you first need to find the correct script, as Josh shows in the post.


@beo6 I understand the other side of the coin; I just would like the option to really do component programming (and I think leaving the multiple-script logic in doesn't really hurt much).

 

From the example Josh gave, he/they weren't going about it "correctly". With component design you don't hook the wiring up directly inside the scripts/components, because that creates exactly the issue he ran into. Those scripts are meant to be generic and to do only their specific task, without knowing about any other script. How can that be possible, you might ask? Well, it takes more thought about each component. You need to expose events/messages, fire events/messages, and do the hooking up of these in what would be a game-specific script. The flowgraph is one way to interact with this, so it's not a total washout for us.

 

So even though Aggror's way is decent too, if you do that inside the actual components/scripts there is a reliance there that you don't want.

 

This is a really different way of thinking about how a game works. Instead of following the logical flow top-down and programming it that way, you have to think about what events each component could trigger from within and what functionality it should expose. Then at the start of the game you do all the wiring between components, instead of interacting directly with the components at run-time.

 

The best thing about component design is making very robust, reusable components that are completely isolated. This means working in teams becomes that much easier: as long as you have thought out the design of what components you need, you can give components to people to build with no fear of them colliding or requiring other components. It also makes your code more reliable and stable, since something with no outside dependencies is easier to test and produces fewer errors.

 

In short, I think Josh tried to mix conventional programming with component programming, and that doesn't work. I don't see the harm in allowing multiple scripts for those who want to use that style; if you don't want to, you can still just make one script. All the code for multiple scripts is already there, so why disable it? It just doesn't make sense.


Am I the only one who thinks LE3 is focused too much on mobile development? All the updates I see are things like mobile enhancements or something similar. The core of gaming is the PC (in my opinion), and right now all the effort is going to the mobile and tablet world. Why?


You'd have to define "core of gaming" and then decide if the money is there or not. My mom and dad aren't buying PC games, but they will spend $0.99 on a mobile game for my daughter to play while they have her. Can we simply throw casual games to the side just because they aren't the OMGSWEETFPS game we all want to make? If you are running a game company and aren't hitting mobile in some way, you are either massively established or about to go under :)

 

Mobile is a fact of life and it's not going away.


I know that is true, but still: it all started with the PC, and now it's almost fully based on mobile devices (which will all have quad-core CPUs and high-end graphics eventually); it's just a matter of time until mobile devices are even more powerful than PCs. So why make it all mobile-oriented (in graphics terms)? People who say 'that isn't going to happen' are just hypocrites, because in the 70s they said 'mobile devices are never going to happen', and look what we have achieved nowadays. By the way, sorry for my English, it has been a long time.


