Leadwerks 3 Update Available

Josh

This update adds lightmapping across curved surfaces with smooth groups. The image below shows a set of lightmapped CSG brushes, not mesh lighting. You can read a detailed account of our implementation of this feature here.

 

blogentry-1-0-89405400-1369109633_thumb.jpg

 

The project manager now includes an "Update" button, so you can easily update your project any time we modify files in the template folders. Although our testing turned up no problems, we recommend backing up your project before using this feature. As an added precaution, the editor will make a copy of any files it overwrites.

 

blogentry-1-0-09492400-1369110424_thumb.jpg

 

The new Options dialog gives you full control over the features and settings of the Leadwerks editor. You can toggle grid snapping, control the snap angle, and adjust lots of other program behaviors.

 

blogentry-1-0-37868400-1369107369_thumb.jpg

 

We have made one change to the script system. Our multiple-script design worked reasonably well during the development of Darkness Awaits. The flowgraph interactions were clean, but when it came to AI and player interaction, things got messy. For example, when the player pushes a button we perform a raycast or proximity test to get the entity hit. Then we have to go through all the scripts attached to that entity, looking for the relevant one:

--Check if GoblinAI component is present
if entity.script.GoblinAI ~= nil then
    --Call the TakeDamage() function
    entity.script.GoblinAI:TakeDamage(10)
end

 

That works okay in our example, but it gets ugly when we consider the other enemies we want to add. We have a few choices:

  1. Add an if statement for every new AI script, checking to see if it is present.
     
  2. Separate the health value out into its own script.
     
  3. Loop through all attached scripts looking for a TakeDamage() function.

 

It should be obvious why option #1 is a bad idea: it would make our code highly interdependent. Encapsulation is one of our goals in game scripting, so we can achieve drag-and-drop functionality (or as close to that as is reasonably achievable without limiting ourselves).

 

The second option would solve our immediate problem, but this approach means that every single value two scripts both access has to live in its own separate script. The thought of this just shuts my creativity down. I already think it's tedious to have to attach an AI and an AnimationManager script to an entity, and I can't imagine working with even more pieces.

 

The third option is the most reasonable, but it greatly impedes our coding freedom. It means every time two entities interact, you would have to do something like this:

--Check if a components table is present
if entity.components ~= nil then
    --Iterate through all attached components
    for k, v in pairs(entity.components) do
        if type(v.TakeDamage) == "function" then
            --Call the TakeDamage() function
            v:TakeDamage(10)
        end
    end
end

 

If you actually wanted to retrieve a value, you would have to take the first one found and exit the loop:

local enemyhealth = 0

--Check if a components table is present
if entity.components ~= nil then
    --Iterate through all attached components
    for k, v in pairs(entity.components) do
        if type(v.GetHealth) == "function" then
            --Call the GetHealth() function
            enemyhealth = v:GetHealth()
            break
        end
    end
end

 

This is a major problem, because it means there is no firm "health" value for an entity. Sure, we could treat it as a "HealthManager.health" value, but therein lies the issue: instead of having plug-and-play scripts everyone can share, we have to devise a system of expected script names and variables. This breaks compartmentalization, which is what we were going for in the first place. Both Chris and I realized this approach was fundamentally wrong.

 

After careful consideration, and based on our experience working on Darkness Awaits, we have restructured the system to work as follows, with a 1:1 entity:script relationship:

--Check if a script is present
if entity.script ~= nil then
    if type(entity.script.TakeDamage) == "function" then
        --Call the TakeDamage() function
        entity.script:TakeDamage(10)
    end
end

 

You can set an entity's script with the new function:

Entity::SetScript(const std::string& path)

 

You can also set and get entity script values right from C++:

virtual void SetString(const std::string& name, const std::string& s);
virtual void SetObject(const std::string& name, Object* o);
virtual void SetFloat(const std::string& name, const float f);

 

And it's easy to call a script function from your C++ code:

virtual bool CallFunction(const std::string& name, Object* extra=NULL);
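
Putting those together, attaching and driving a script from C++ might look something like this. (This is only a sketch; the model loading call, paths, and the "health" value name are made-up examples, not confirmed API usage.)

//Load a model and attach a script to it (paths are placeholders)
Model* enemy = Model::Load("Models/goblin.mdl");
enemy->SetScript("Scripts/MonsterAI.lua");

//Set a script value from C++
enemy->SetFloat("health", 100.0f);

//Call a script function, if the script defines one
enemy->CallFunction("TakeDamage");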

 

The properties dialog has changed slightly, with a consistent script tab that always stays visible. Here you can set the script, and its properties will appear below instead of being spread across a bunch of different tabs:

blogentry-1-0-16203900-1369107653.jpg

 

Maps with entities using only one script (which has been almost everything we see) are unaffected. Objects with multiple scripts need to have their code combined into one, or split across multiple entities. I am very reluctant to make changes to the way our system works. Our API has been very stable since day one of release, and I know it's important for people to have a solid foundation to build on. However, I knew we had made a design mistake, and it was better to correct it sooner rather than later.

 

Probably the best aspect of the script system in Leadwerks 3 has been the flowgraph connections between scripted objects. That's been a big winner:

 

blogentry-1-0-19423100-1369180339_thumb.jpg

 

As we use the editor more and hear about the community's experience, we keep refining our tools. One of the more subtle changes we made is in the properties editor. One of the annoyances I experienced was setting a property like mass or position when an entire hierarchy of entities was selected. Obviously I didn't want to set the mass for every single bone in a character, but how do you tell the editor that? I went through all the properties, and for the ones a user is unlikely to want applied across a whole hierarchy, I made the following rule: if an entity has a selected parent anywhere in its hierarchy, it gets ignored when properties are retrieved and applied. (Other things like color will still work uniformly.) Just try it, and you'll see. It speeds up editing dramatically.
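
In pseudocode, the rule amounts to something like this (an illustrative Lua-style sketch, not actual editor code; the helper function name is invented):

--Sketch of the selection rule: skip any entity whose ancestor is also selected
function SelectionContainsParent(entity, selection)
    local parent = entity:GetParent()
    while parent ~= nil do
        if selection[parent] then return true end
        parent = parent:GetParent()
    end
    return false
end

--Apply a property like mass only to top-most selected entities
for entity in pairs(selection) do
    if not SelectionContainsParent(entity, selection) then
        entity:SetMass(mass)
    end
end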

 

In a similar vein, we finally solved the problem of "too many selection widgets": we now only display the top-most control widget for each group of selected objects. Again, it's easier for you to just try it and see than for me to explain in detail. If we did our job right, you might not even notice, because the editor will just do what you want without any thought.

 

You've also got control over the color scheme, and can use it to customize the look and feel of the 3D viewports and code editor.

 

blogentry-1-0-22146600-1369107531_thumb.jpg

 

On Windows we fixed an annoyance that triggered menu shortcuts when the user tried to copy and paste in text fields. This tended to happen in the properties editor. We're working to resolve the same issue in the Cocoa UI for Mac.

 

I'm a big fan of the 3ds Max MaxScript system, so in the output panel we've added a real-time Lua console:

blogentry-1-0-00636400-1369180649_thumb.png

 

You can type Lua commands in and interact with Leadwerks in real-time. Just for fun, try pasting in this line of code and see what happens:

for i=0,10 do a = Model:Box(); a:SetColor(math.random(0,1),math.random(0,1),math.random(0,1)); a:SetPosition(i*2,0,0); end

 

You can't create objects the editor will recognize (yet), but it's fun to play with and should give you an idea of where we're going with the editor.

 

This is an important update, with new features and enhancements that improve usability and workflow. Thank you for your feedback. It's great to see all the activity taking place as people learn Leadwerks 3.



19 Comments


Recommended Comments

Sweet list of updates, Josh.

I do find the script changes a little weird, though. It completely removes the component-based behaviour. Instead of dedicated scripts, we now get massive scripts. I think this makes scripting more complicated.


I didn't understand all the script changes, but I'll take a look tonight and try it.

Scripts and flowgraphs are coming back. What is recommended, finally? Using C++ or scripts/flowgraphs to make games?

Wouldn't it be easier to call C++ from a script attached to characters, for example? (C++ managing collisions, AI, and other stuff.)

Will we have small examples of the new script system and flowgraphs?

This is a lot of changes, and a lot I didn't expect, as the script system was OK for me.

That's a lot of changes, but where are the new gizmos? Just kidding, we'll wait, and if they don't appear we will create some 3ds Max-style scripts for them, since this is a new incoming feature.


I like this too, but I think I have trouble understanding the script change. Why is it now impossible to have multiple scripts attached?

Maybe some sort of index for attached scripts could work?

I think scripts can now sometimes end up with repeated code.

For example, the Volume Trigger script together with the Sound Node script attached to one entity. Now I need to have two entities, or put both scripts together into a single script.

Or maybe the scripting system wasn't made for such small script parts in the first place.


There are three major aspects of the script system in Leadwerks 3. First, scripts are attached to objects from a file, rather than having to be named the same as a loaded model. Second, we tried attaching multiple scripts per object. Third, the flowgraph allows the user to create visual connections between the scripts.

 

All of these were new ideas, and they were all theoretical designs. We didn't really know how practical they would be until we started using them in depth. Fortunately, two out of those three ideas turned out to be good ones.

 

Once we got into more advanced scripted gameplay, we found that a component approach turned out to be very tedious from a programming perspective. It creates an extra layer of abstraction that's confusing for both beginners and advanced users.

 

We believe we should expand on the strength of the flowgraph editor in the future. One way we can do this is by adding purely logical entities...that is, an entity that only exists in the flowgraph, and gets used for connecting logic.


I like the idea about pure logic entities. That would solve the case I was thinking about.


Just updated; looks good. A few changes are needed to my project, but for the better.


Smooth rounded BSP lighting is great now :)

 

And being able to copy/paste in the scene panel is good also.

 

I see the player has only one script now; I'll have to dig into how it works later.


Can't say I'm thrilled about losing multiple attached scripts. The animation manager, along with other stuff, was a good example of the benefit of this system. Guess we have to copy/paste the animation manager code into our one entity script now, or rewrite animation code for each entity?

 

At least logical entities will come to the flowgraph, which will add some power to it, even though we can do that today by using a pivot to attach the logical script to.

 

Once we got into more advanced scripted gameplay, we found that a component approach turned out to be very tedious from a programming perspective.

 

Can you explain how you found it tedious, or maybe explain how you didn't see the benefit of reusable components that other people could use, or better yet that programmers could eventually sell? Multiple scripts were meant for the programmer who wants the benefit of reusable scripts, and also for the non-programmer who just wants plug-and-play functionality. I would rather add five different specific scripts that I can reuse over and over again in different situations than recode or copy/paste code into one master script per object.

 

Also, couldn't you just leave in the ability to add multiple scripts, so those who don't want that can just make the one script? Allowing multiple scripts doesn't hinder the person who wants just one script, unless it was causing bugs behind the scenes and you just didn't want to deal with it ;)

 

 

Rereading: in my view, components need to interact via events, and the attachment of these events is what's specific to the object, so as not to break encapsulation of the scripts. This is basically what the flowgraph does, and why it's needed in a situation like this; it does work if we have multiple scripts, just like in the script link I provided above.

 

Another option for programmers would be to make one script that is just the glue, attaching events to the functions of the other attached scripts, which gives a more event-oriented programming model.


I did like multiple scripts, but if we're moving away from that, it was better to do it now rather than further down the line.


How about messages? Send a message to all components attached.

--Send a message to all scripts attached to an entity, from script 1
entity:SendMessage("Damage", 10.0)

--Handler in script 2
function Script:HandleMessage(message, extra)
    if message == "Damage" then
        self.health = self.health - extra
    elseif message == "Repair" then
        self.health = self.health + extra
    end
end

Overhead is really low, and when a component's HandleMessage body is empty, it simply ignores messages.
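
The broadcast side could be a simple loop over the attached scripts. A minimal sketch, assuming a hypothetical scripts table on the entity (this is not actual Leadwerks API):

--Hypothetical sketch of the SendMessage() broadcast
function Entity:SendMessage(message, extra)
    for _, script in ipairs(self.scripts) do
        --Only scripts that define a handler receive the message
        if type(script.HandleMessage) == "function" then
            script:HandleMessage(message, extra)
        end
    end
end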


Yeah, SendMessage() would be good too; however, there is still a level of dependency with that, it's just not as hard a dependency as an actual object or function. I still think a script that attaches functionality from script to script, created specifically for the game in question, would be cool too.

 

Oh well. :/

 

edit: The other side of this coin is being able to attach multiple scripts that don't have anything to do with each other or need to communicate at all. For example, having a rotation script and a movement script attached to a platform or something like that. They don't need to talk; they just need to do their own thing. Without this we are duplicating code inside the one script. That's a simple example, but there are more complex situations just like it.

 

I just don't see what the harm is. If you only want to use one script, then use one script, but leave the multiple-script functionality there for those who want it.


Hi Rick. I see it this way at the moment:

The attached script is really only for the entity behavior.

So if you have a moving/rotating platform, add the rotation and movement functions to it. You don't need to use both when you only need one movement. Then I would see the pure logical flowgraph entities handling the events: button press -> start the movement of the platform.

 

I know you would need another entity for a button anyway; I just couldn't think of a better example.

 

With multiple scripts attached, you first need to find the correct script, as Josh shows in the post.

Share this comment


Link to comment

@beo6 I understand the other side of the coin; I just would like the option to really do component programming (and I think leaving the multiple-script logic in doesn't really hurt much).

 

From the example Josh gave, they weren't going about it "correctly". With component design you don't hook the wiring up directly inside the scripts/components, because that creates exactly the issue he ran into. Those scripts are meant to be generic and only do their specific task, without knowing about any other script. How can that be possible, you might ask? Well, it takes more thought about each component. You need to expose events/messages, fire events/messages, and do the hooking up of these in what would be a game-specific script. The flowgraph is one way to do this, so it's not a total washout for us.

 

So even though Aggror's way is decent too, if you do that inside the actual components/scripts there is a dependency there that you don't want.

 

This is a really different way of thinking about how a game works. Instead of following the logical flow top-down and programming it that way, you have to think about what events each component could trigger and what functionality it should expose. Then at the start of the game you do all the wiring between components, instead of interacting directly with the components at run-time.
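
As a sketch of that wiring step (all of these script names and events are hypothetical; this just illustrates the pattern):

--Hypothetical game-specific glue script that wires generic components together
function Script:Start()
    --When the button component fires its Pressed event, route it to the door and alarm
    self.button.script.Pressed = function()
        self.door.script:Open()
        self.alarm.script:Play()
    end
end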

 

The best thing about the component design is making very robust, reusable components that are completely isolated. This means working in teams becomes that much easier: as long as you have thought out the design of which components you need, you can give components to people to make with no fear of them colliding or requiring other components. It also makes your code more reliable and stable, since something with no outside dependencies is easy to test and make fewer errors with.

 

In short, I think Josh tried to use conventional programming mixed with component programming, and that doesn't work. I don't see the harm in allowing multiple scripts for those who want to use that style. If you don't want to, you can still just make one script. All the code for multiple scripts is already there, so why disable it? It just doesn't make sense.


Am I the only one who thinks that LE3 is focused too much on mobile development? All the updates I see are things like mobile enhancements. The core of gaming is the PC (in my opinion), and right now all the effort is going to the mobile and tablet world. Why?


You'd have to define "core of gaming" and then decide if the money is there or not. My mom and dad aren't buying PC games, but they will spend $0.99 on a mobile game for my daughter to play while they have her. Can we simply throw casual games to the side just because they aren't the OMGSWEETFPS game we all want to make? If you are running a game company and aren't hitting mobile in some way, you are either massively established or about to go under :)

 

Mobile is a fact of life and it's not going away.


I know that is true, but still: it all started with the PC, and now it's almost fully based on mobile devices (which will all have quad-core CPUs and high-end graphics cards eventually). It's just a matter of time until mobile devices are even more powerful than PCs. So why make it all mobile-oriented (in graphics terms)? People who say 'that isn't going to happen' are just hypocrites, because in the '70s they said 'mobile devices are never going to happen', and look what we have achieved nowadays. By the way, sorry for my English, it has been a long time.

