Blog Entries posted by Josh

  1. Josh
    The various parts of the new editor are coming together. Today I got the properties list box drag and drop functionality working. I actually had to go back and rewrite some of the list control rendering code because I haven't used it in a while, but it didn't take long.
     
    I'm trying to attack known unknowns first, so we're focusing on the flow graph, AI, and game logic, and everything required to build up to support that. I've already written a CSG editor and an engine with great graphics, so I know I can do that. Gameplay support is what I want to focus on now before I cover old familiar ground.
     


     
    In other news, it has occurred to me that an OpenGL 2 renderer is needed to provide a PC/Mac equivalent of what the iPhone and Android renderer produces. Leadwerks Engine 2 uses OpenGL 2.1, but this OpenGL 2 renderer won't be anywhere near as advanced. I'm just going to derive a bunch of classes from the OpenGLES 2 renderer, so we can get as close a match as possible to the Android / iPhone renderer.
  2. Josh
    Below you can see the properties editor. When you select a script attached to an entity, the properties for that script appear on the right. Here we have a simple "Pulse" script that changes the color of the entity along a sine curve. This can be used for making lights that pulsate slowly, or continually turn on and off.

     
    Here's what the script looks like. The "--in" tag at the end of the Pause and Resume functions indicates that these functions can be activated in the flowgraph editor. (A node can be connected to them to call them.)

    --float speed 1
    --color color0 0,0,0,0
    --color color1 1,1,1,1
    --bool recursive
    
    function actor:Pause()--in
        self.paused=true
    end
    
    function actor:Resume()--in
        self.paused=false
    end
    
    function actor:Update()
        if not self.paused then
            local i = math.sin(AppTime()*self.speed*0.01)
            self.entity:SetColor(i*self.color0+(1-i)*self.color1,self.recursive)
        end
    end
    Here's our "mover" script, which just performs simple movement and rotation without physics:

    --vec3 translation 0,0,0
    --vec3 rotation 0,0,0
    --bool global true
    
    function actor:Start()
        self.paused=false
    end
    
    function actor:Pause()--in
        self.paused=true
    end
    
    function actor:Resume()--in
        self.paused=false
    end
    
    function actor:SetTranslation(translation)--in
        self.translation = translation
    end
    
    function actor:SetRotation(rotation)--in
        self.rotation = rotation
    end
    
    function actor:Update()
        if not self.paused then
            if self.translation~=Vec3(0) then
                self.entity:Move(self.translation[0],self.translation[1],self.translation[2],self.global)
            end
            if self.rotation~=Vec3(0) then
                self.entity:Turn(self.rotation[0],self.rotation[1],self.rotation[2],self.global)
            end
        end
    end
    Want an entity to animate in a loop? Attach this script to the entity:

    --int sequence
    --float speed 1
    --bool paused false
    
    function actor:Start()
        self.frame=0
    end
    
    function actor:Pause()--in
        self.paused=true
    end
    
    function actor:Resume()--in
        self.paused=false
    end
    
    function actor:Draw()
        if not self.paused then
            self.frame=self.frame+AppSpeed()
            self.entity:Animate(self.frame,self.sequence)
        end
    end
    What if we wanted a one-shot animation that plays when something triggers it? We can define an "Enable" function, connect another node to it, and some other event can cause the script to play the animation through once. Here's "PlayOnce.lua":

    --int sequence
    --float speed 1
    --bool enabled false
    
    function actor:Start()
        self.frame=0
    end
    
    function actor:Enable()--in
        self.enabled=true
        self.frame=0
    end
    
    function actor:Disable()--in
        self.enabled=false
    end
    
    function actor:Draw()
        if self.enabled then
            local length=self.entity:GetAnimationLength(self.sequence)
            self.frame=self.frame+AppSpeed()
            if self.frame>length then
                self.frame=length
                self.enabled=false
            end
            self.entity:Animate(self.frame,self.sequence)
        end
    end
    By using these general-purpose scripts we can set up game interactions and interesting behaviors, without coding...but if we need to code something new, the tools are a couple clicks away.
  3. Josh
    Leadwerks 3 is compiling for Android. There's presently a problem with the file system that is preventing any 3D rendering, but we'll get that worked out shortly. We're targeting Android 2.2.
     
    In order to compile C++ for Android on Windows, we had to install the following:
    -Java SDK
    -Eclipse IDE
    -Android SDK
    -Android NDK
    -Cygwin
    -GDB
     
    We also learned that OpenGL ES 2.0 does not run in the Android emulator. For the time being, we have to run the engine on an actual Android device. That was rather surprising, but I think Google will have this functionality added fairly soon. I also learned there is an x86 version of Android, but no one uses it.
     
    Debugging C++ on Android is done with GDB, a command-line debugger. You definitely don't want to use this to do any heavy work. In this case, the cross-platform nature of coding with Leadwerks comes in handy, and you can debug on Windows or OSX and then just compile your finished code for Android, without much testing.
     
    The plan is to allow one-step publishing for Android when you use Lua. You write your program in script, test it on PC or Mac, then you can export a package for Android that's ready to install on your phone, without even having to install the Android SDK. You can also use C++, and it takes more work, but it's not too hard and we'll have instructions on how to get set up.
     
    Behold the mighty blue screen running on an HTC Evo, and tremble!:


  4. Josh
    Once again, I like to attack the things I haven't done in the past first. I know how to make a CSG editor. What I haven't done in the past is a visual editor for gameplay and logic.
     
    In the shot below, you can see the properties editor, with the "Scripts" tab selected. I moved the name and angle editor outside the tabbed panel, with the idea that these are so important they belong in the main window, but the real reason was to break up the boredom of a full-window tabbed panel. Under the Scripts tab, there's a list on the left onto which you can drag scripts to assign them to an entity. You can also drag the scripts around within the list to rearrange the order in which they are executed. When a script item is selected in the list, its properties appear on the right in the bordered area. This has a border because if the number of controls exceeds the available height, a scrollbar will appear to let you scroll down and access all of them.
     
    Now I personally am not a fan of this, because I think it makes things more complicated, but the user feedback overwhelmingly suggests users want multiple scripts attached to a single object. I think this is so users can combine behaviors and adjust them through trial and error. This doesn't occur to me because for me it's no big deal to just open a single script and program it to do exactly what I want. However, this isn't as trivial for everyone, and for every programmer out there, there are probably 99 non-programmers. This is a good example of why we need to understand how everyone uses the software. The resolution we have satisfies the need for trial-and-error design, without restricting what you can do with it.

     
    You can drag an entity script or an entity into the flow graph editor to make its attached scripts appear. Each script has inputs and outputs, and you can connect one to another. I think I am going to put both script output functions and exposed attributes on the right, with a different icon for each. Output functions can be linked to inputs on other scripts, while attributes can be linked along the curved line connecting the two, as function arguments. This is a cool idea that I think will make it unique, and it only works due to the flexibility of Lua: you can skip function arguments, and the function call will still be executed without any problem.
     
    I'll have some special flowgraph nodes that only appear in the flowgraph editor for logic, timing, etc. It is not my goal to make a visual script editor. I don't think the challenge of programming comes from people not understanding text code; the real difficulty is thinking in an extremely logical and meticulous manner. A visual script doesn't get rid of this requirement; it just uses a different method to display the logic, one that will baffle programmers and non-programmers alike. Think about (most) artists trying to design "Pong" with a flowgraph; I don't think it would work. Trying to code behavior through a visual representation of script is not something I want happening in the flowgraph, except on a fairly lightweight level. People are welcome to try it, but if we expected everyone to be able to write complex programs with it, I think it would fail and we'd be back to the drawing board.
     
    My goal for the flowgraph system is to allow a designer to connect behaviors to create interesting sequences of events in a game. Individual behavior of objects will still be coded in script, in a manner designed to be as general-purpose as possible. A button opens a door, a cat chases a mouse, a switch turns an elevator on, then opens a trap door, and turns on a machine ten seconds later.
     
    Leadwerks Engine 2 had individual entity scripts, but without a standard system for interactions, there's little you can code in a self-contained script, other than some looping behavior. Setting names and targets is functional but rather abstract and cumbersome when working with complex interactions. With the Leadwerks flowgraph editor, we're all working together on the same game...yet the programmer still has complete control and freedom.
  5. Josh
    The scene browser lets you see every entity and brush in the scene, and also lets you arrange entities in a hierarchy. You can add a light to the scene, position it onto a lamppost, then use the scene browser to parent the light to the lamppost model and save the whole thing as a prefab.
     
    There's no "mesh" class in the new engine, and entities inherit a lot of the properties that models in LE2 had, like scripts and physics. Models are just MDL files, and generally just contain model and bone entities. A prefab, on the other hand, can contain models loaded from other files, lights, particle emitters, and anything else you can create in the engine. Whereas in Leadwerks Engine 2 you would use a script to create special entities in code and parent them to a model, in the new engine you can place and parent them visually, then save them as a prefab to be used again and again.
     
    Top-level models can be dragged around in the hierarchy, but limbs (children contained within a model file) are a bit more problematic. If we allowed the user to create a character, then parent the character's arm to another model in the scene, then delete the character, leaving the arm...you can see how that could cause a lot of issues. What I'm leaning towards is a rule that it's fine to parent the top-level model to something else, and you can add entities within the model's sub-hierarchy, but changing the parent of a model's limb is not allowed. I hope this doesn't confuse people, but it makes sense; if you really need to alter a model file's hierarchy, it should be done in the model editor (similar to the texture and material editors you have seen).
     
    You can still set different properties like color, script settings, etc. for limbs of different instances of the same model. The map file will just store data that says "attach these scripts to limb number 7 of this model after you load it". This way models will always be reloaded from their original files, and if you alter your model the whole map will be updated. Actually, the more I think about it, the more this makes sense...this is a map editor, not a modeling program.
     
    Not shown here, but I am adding a filter at the bottom so you can type in a name and a list will appear of all objects in the scene that contain that name, so you can type in "tree" and find all relevant objects. Hmmmm, come to think of it, maybe I should also add a drop-down box to search by either object name or the name of the file the object was loaded from.
     
    The check marks will toggle visibility recursively, though they don't work yet. I'm writing the grid and object selection rendering into the engine, so there will be some commands like Entity::SetSelectionState(bool mode). I found this easier than a bunch of rendering callbacks, especially when the engine is designed to run multiple renderers. This might come in handy if you are writing any kind of editor.
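The idea of storing selection state on the entity itself, rather than wiring up rendering callbacks, can be sketched in a few lines. This is an illustrative reconstruction, not actual Leadwerks code; the `Entity` and `CountHighlighted` names here are mine:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: the editor flips a flag on the entity, and the
// renderer checks that flag during its normal pass. No callbacks needed.
struct Entity {
    bool selected = false;
    void SetSelectionState(bool mode) { selected = mode; }
};

// Any renderer (GL2, GLES2, ...) can consult the same flag. Here we just
// count how many entities would get a selection outline drawn.
int CountHighlighted(const std::vector<Entity>& scene) {
    int n = 0;
    for (const auto& e : scene)
        if (e.selected) ++n;  // a real renderer would draw the outline here
    return n;
}
```

Because the state lives in the engine, every renderer gets selection highlighting for free, which is the advantage over per-editor callbacks.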
     

  6. Josh
    I implemented resizable viewports this morning. This is so much more fun than fiddling around with compiler settings.
     
    You can grab the bars between viewports, or the very center space, and drag it around to resize the four viewports. I usually prefer a smaller perspective view, and like to devote most of my space to the 2D views when doing serious editing:

     
    You can also drag the views around to make a single big viewport, or just have two viewports visible:

     
    In 3D World Studio, when you resize the window the program instantly redraws, even as you are still moving the window. This is nice looking but it can get slow when you have a large map loaded, and it makes the program feel slow. The problem is even worse with a deferred renderer, because you are constantly creating and deleting those big gbuffers. I don't like hearing the graphics card fan kick up just because I decided to resize a window.
     
    I'm getting better results by waiting for the user to let go of the mouse before redrawing viewports, both for window and for viewport resizing. This has the effect of displaying "bad"/undefined screen regions while the user is resizing things, but it makes the program very fast and responsive. I used to consider those artifacts a sort of amateurish trait, but this actually works better; the program feels very fast and light, so I think it's a pretty good tradeoff. As soon as you let go of the mouse button, the viewports redraw at their new size.
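The "redraw on release" idea boils down to a dirty flag: drag events are cheap, and the expensive redraw (including gbuffer reallocation) happens once, on mouse-up. A minimal sketch, with names of my own invention rather than actual editor API:

```cpp
#include <cassert>

// Sketch: resize events only mark the viewport dirty; the costly redraw
// is deferred until the user releases the mouse button.
struct Viewport {
    int width = 0, height = 0;
    int redraws = 0;    // counts expensive redraws (gbuffer recreation, etc.)
    bool dirty = false;

    void OnDrag(int w, int h) {   // fired many times while dragging
        width = w;
        height = h;
        dirty = true;             // cheap: just remember we need a redraw
    }
    void OnMouseUp() {            // fired once when the user lets go
        if (dirty) { Redraw(); dirty = false; }
    }
    void Redraw() { ++redraws; }  // stand-in for the real resize + render
};
```

Ten drag events followed by one mouse-up produce a single redraw, which is why the program stays responsive even with a deferred renderer.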

     
    --EDIT--
    And now we have grids. Starting to look familiar, but new:

  7. Josh
    We have our design parameters. We know the editor should let the user control every aspect of the engine in a visual manner, without having to use any other tools or do any editing of text files. We have 3D World Studio to serve as inspiration for the design and feel of the program.
     
    The sky's the limit. You, the users, have told me that you want to invest in a system that does more for you and makes your life easier. I'm happy to provide the basis for your next projects. Thank you for letting me make this for you, because creating 3D tools is what I really love doing.
     
    Look at me, I see a CSG editor grid and I get all sentimental.
     

     
    I like having the scene tree and asset browser in tabs on the right. I tried a couple of variations that kept them both visible at once, and I think it looks silly. The downside of using tabs is you can never drag anything from the asset browser to the scene tree directly. Therefore, it makes the most sense to me to have sounds, scripts, and anything else in the properties editor, so you can drag assets from the asset browser onto it. This means scripts, sounds, etc. do not go in the scene tree, and it gets used only for entities and brushes. It seems like a good layout and will support all the drag-and-drop features we need.
  8. Josh
    The fantastic FatCow icon set was recommended to me in the comments on a previous blog post. These are very similar to the Silk icon set, but come in 32x32 and 16x16 versions, have more icons, and better coloration. The Silk set has a sort of desaturated pastel tone I don't like, while FatCow's icons are more saturated and bright. I had actually considered running the Silk icons through a color-correction algorithm, but FatCow has already done the work for me.
     
    I knew from the start I wanted our editor to have lots of colorful toyish-looking icons. I love Valve's editing utilities as well as Nem's Tools, even though they are both rather dated. What I miss about these programs is that they embedded 3D tools into the standard Windows UI, so it was easier to focus on the task at hand instead of being distracted by a flashy custom UI you had never seen before. I've always considered homemade GUIs to be amateurish, and even if one looks cool at first, I quickly get tired of it. My goal is to capture the utility and ease of use of programs like this and 3D World Studio, while adding more modern features to make it more friendly to artists.
     
    The editor is starting to take on its own unique character:

     

    Designing the Interface
    What you see here is the result of a lot of different approaches to try to determine the best way to design the layout of a modern 3D editor. 
    The asset browser is built into the main window, but can be resized or hidden by clicking the window divider. The user will frequently be accessing the asset browser, and having it in the main window is better than having it in a separate window that constantly has to be moved out of the way. Additionally, I felt that having child windows open from a child window (a windowed asset browser) was too confusing...more on that below.
     
    The default layout of the asset browser is a vertical split with the treeview on the far right. After using a horizontal split for a while I felt really cramped for space. The vertical layout gives you a better distribution of space and plenty of room to view thumbnails.
     
    Originally, I had the asset files themselves in the tree with small icons, but I found that 16x16 was too small for a meaningful preview of each file. I tried making the tree view icons bigger, but it wasted a lot of space. Folder icons looked ridiculous at 32x32 and above, and took up more room than they needed.
     
    I tried using generic 16x16 "texture", "material", and "model" icons in the treeview, but found it was much easier to identify files by thumbnail rather than reading their name.
     
    Finally the decision was made to keep folders in the tree view, with small icons, and have a separate thumbnail browser with bigger icons. The user can adjust the thumbnail icon size, from 32 to 512. I also liked this approach because it is similar to the texture browser in 3D World Studio.
     
    The folder treeview is on the far right-hand side by default because the white background would make the dark 2D and 3D viewports look even darker. Your eye will actually perceive a color as darker in an area of high contrast, so keeping the mostly-white treeview off to the side makes your scenes more easily visible.
     
    When you double-click on an asset file, a window for viewing and editing the asset is opened. I considered building a preview window and controls into the main window, but it didn't fit the workflow. Sometimes you will want to drag asset files into an asset editor. For example, you can drag a texture into the material editor to assign a texture to the material. In order for this to work, you have to be able to open the material in the material editor, then navigate the asset browser to where the texture resides, then click and drag that file into the material editor. This wouldn't work with a single preview window.
     
    There's also a good case for having multiple asset editors open at the same time. You might be working on a shader and want to see its effect on a material in real-time. In that case, you would keep both assets open in their editors while working.
     
    I also considered making the asset editors a separate application the editor opened, but didn't for two reasons. First, drag and drop operations would probably be harder to code, especially across multiple platforms. Second, and most importantly, I wanted the asset editor windows to always appear on top of the main window, and that wouldn't happen if they were external applications.
     
    I also like having the asset editors in their own windows because at any time you can maximize the window and get a nice big fullscreen view of the model, texture, or material you are looking at. This is naturally useful for scripts and shaders as well, where you are editing code and need lots of space.
     
    The font editor is shown, and supports both .ttf and bitmap fonts. I implemented my own .fnt format (I think this is an old Windows 3.0 file extension, but it's mine now) and a .ttf to .fnt converter. As with all the Leadwerks 3 converters, the editor automatically handles the conversion for you, so you just drop your .ttf files into your project folder and you're done. I don't know of any cross-platform font maker programs, but I will publish the specification for our font format so they can add exporters. It always seems to work best when we just make our own optimal file formats and publish the spec for everyone to follow.
     

    Bringing Back Level Design
    As I approach our CSG brush implementation, I find myself growing quite excited over something that was originally an annoyance I did not want to deal with: the users insisted we merge 3D World Studio with Leadwerks 3. When Leadwerks Engine 2 was designed, I thought CSG was obsolete once dynamic lighting became possible and we no longer needed special geometry for optimal lightmapping. However, with the loss of CSG I think we lost a reference point in our virtual worlds. It was easy to drag out a room in a few minutes for playtesting. CSG is also fantastic for buildings, and can produce results in a fraction of the time it would take with 3ds Max or another polygonal modeler. 
    I'm very eager to see what CSG will look like in a modern implementation, especially if I can work hardware tessellation into it. CSG is an area of expertise for Leadwerks as a company, and I am glad to be drawing on that knowledge to bring something new and different to game design tools. I'm looking forward to a return of the lost art of level design, with a modern flair.
     
    Finally, here's a song for you that can be vaguely connected to the subject of this blog, from Alice in Chains' album "Facelift". (See what I did there?)

    http://www.youtube.com/watch?v=QXPA44Ebg_o
  9. Josh
    I am usually very excited to read about new graphical techniques, and even new hardware approaches, even if they are presently impractical for real use. I was very interested in Intel's Larrabee project, even though I didn't expect to see usable results for years.
     
    However, sometimes articles get published which are nothing but snake oil to raise stock prices. The uninformed reader doesn't know the difference, and these articles are usually written in such a way that they sound authoritative and knowledgeable. It's unfair to consumers, it's unfair to stockholders, and it hurts the industry, because customers become unable to differentiate between legitimate claims and marketing nonsense. This one is so over-the-top, I have to say something.
     
    In an attempt to stay relevant in real-time graphics, Intel, the company that single-handedly destroyed the PC gaming market with their integrated graphics chips, is touting anti-aliasing on the CPU.
     
    There's a nice explanation with diagrams that make this sound like an exciting new technique Intel engineers came up with. The algorithm looks for edges and attempts to smooth them out:

     
    It's so advanced, that I wrote this exact same algorithm back in 2007, just for fun. Below are my images from it.
     
    Original:

     
    Processed:

     
    The reason this is dishonest is that you would never do this in real-time on the CPU. It may be possible, but you can always perform antialiasing on the GPU an order of magnitude faster, whether the device is a PC, a netbook, or a cell phone. I don't think Sebastian Anthony has any clue what he is writing about, nor should he be expected to, since he isn't a graphics programmer.
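For the curious, the kind of edge-smoothing filter being described can be sketched in a few lines: find pixels sitting on a strong luminance step, then blend them with their neighbors. This is an illustrative reconstruction of the general technique, not Intel's code or my 2007 implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Grayscale image as rows of floats in 0..1 (illustrative representation).
using Image = std::vector<std::vector<float>>;

// Detect pixels with a large horizontal or vertical gradient, and average
// each such pixel with its four neighbors to soften the edge.
Image SmoothEdges(const Image& src, float threshold) {
    int h = (int)src.size(), w = (int)src[0].size();
    Image out = src;
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float gx = std::fabs(src[y][x + 1] - src[y][x - 1]);
            float gy = std::fabs(src[y + 1][x] - src[y - 1][x]);
            if (gx + gy > threshold) {
                // Edge pixel: blend with its 4-neighborhood.
                out[y][x] = (src[y][x] + src[y][x + 1] + src[y][x - 1]
                           + src[y + 1][x] + src[y - 1][x]) / 5.0f;
            }
        }
    }
    return out;
}
```

On a hard black/white boundary this turns the 0-to-1 step into intermediate grays, which is the visual effect shown in the processed image above. It is also exactly the sort of per-pixel filter a GPU fragment shader chews through trivially.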
     
    Furthermore, swapping images between the GPU and the CPU requires the CPU to wait for the GPU to "catch up" to the current instructions. You can see they completely gloss over this important aspect of the graphics pipeline:
    Normally, graphics are a one-way street from the CPU, to the GPU, to the monitor. The CPU throws instructions at the GPU and says "get this done ASAP". The GPU renders as fast as it can, but there is a delay of a few milliseconds between when the CPU says to do something and when the GPU actually does it. Sending data back to the CPU forces the CPU to wait and sync with what the GPU is doing, causing a delay significant enough that you NEVER do this in a real-time renderer. This is why occlusion queries have a short delay when used to hide occluded objects; the CPU doesn't get the results of the query until a few frames later. If I made the CPU wait for the results before proceeding, the savings gained by hiding occluded geometry would be completely negligible compared to the enormous slowdown!
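The occlusion-query situation can be modeled with a toy simulation: the "GPU" answers a query a few frames after it was issued, and the CPU acts on the stale answer rather than stalling. All names here are illustrative, not real OpenGL or engine API:

```cpp
#include <cassert>
#include <deque>

// A query issued on some frame, carrying the answer the GPU will eventually give.
struct PendingQuery { int frame_issued; bool visible; };

// Toy model of asynchronous occlusion queries: results become available
// `latency` frames after they are issued.
class OcclusionSim {
    std::deque<PendingQuery> inflight;
    int latency;                 // frames until a result is ready
public:
    bool last_result = true;     // assume visible until the GPU says otherwise
    explicit OcclusionSim(int latencyFrames) : latency(latencyFrames) {}

    // Each frame: issue this frame's query, poll for any finished ones,
    // and return the newest result we actually have.
    bool Frame(int frame, bool actuallyVisible) {
        inflight.push_back({frame, actuallyVisible});
        while (!inflight.empty() &&
               frame - inflight.front().frame_issued >= latency) {
            last_result = inflight.front().visible;  // result finally arrived
            inflight.pop_front();
        }
        return last_result;  // the CPU acts on an answer `latency` frames old
    }
};
```

With a latency of two frames, an object that becomes occluded on frame 0 is still drawn on frames 0 and 1; only on frame 2 does the CPU learn it was hidden. That short lag is the price of never stalling the pipeline.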
     
    What Intel is suggesting would be like if you went to the post office to mail a letter, and you weren't allowed to leave the building until the person you were sending it to received your letter and wrote back. They're making these claims with full knowledge of how ridiculous they are, and counting on the public's ignorance to let it slide by unchallenged.
     
    So no, Sebastian, this is not going to "take a little wind out of AMD’s heterogeneous computing sails". Please check with me first next time before you reprint Intel's claims on anything related to graphics. If any Intel executives would like to discuss this with me over lunch (your treat) so that I can explain to you how to turn the graphics division of your company around, I live near your main headquarters.
  10. Josh
    I hired a local Android developer to get Leadwerks 3.0 running on Android devices. We don't know a lot yet, other than that we have an OpenGLES renderer, and everything should be cross-platform compilable. The Android version of LE3 is using a minimum requirement of Android 2.2, which is the lowest version that supports OpenGL ES 2.0. This will run on about 75% of Android devices:
     

     
    As you can see here, the proportion of 2.1 devices is steadily dropping. If a linear rate of decrease is maintained, they will be all but gone in six months:
     

     
    Interestingly, in the LE3 platform poll about 62% of respondents were more interested in Android than iOS support.
     
    I'll let you know when we have something more to show!
  11. Josh
    It took a while to figure out, but I was able to get drag-and-drop interaction working on OSX. (Thanks to Seb Hollington for helping with the coordinate transformation stuff.) Now the editor lets you assign textures by dragging them onto texture slots, just the same as on Windows. There's one last step before the editor code becomes totally platform-agnostic: a file system watcher needs to be implemented on Mac. On Windows, this was fairly straightforward, because there is a built-in API for detecting all file events. OSX has something called FSEvents, but it only detects changes to a directory; it doesn't actually tell you which file changed, or how it changed. Qt has a file system watcher, but from what I can tell, it doesn't detect new folders being created. I think this is something I am going to put off for now and slowly research.

     

    As can be expected, OpenGL support on OSX Lion has some issues to be worked out. I haven't actually had a confirmed bug where the driver's behavior violates the specification, but there are some weird things. Due to the difference between Mac and Windows file paths, I was accidentally not attaching any shader objects to a shader program, yet it successfully linked! This doesn't actually contradict the OpenGL specification, but ATI and NVidia drivers would have raised an error. The commands for MSAA textures are working, but the highest supported multisample level is one, meaning no multisampling. Again, this is all in accordance with the OpenGL specification, but the GPU in my iMac is capable of 16x MSAA textures. I got around a few issues and sent Apple a demo of LE3 running on Lion; the rendering flashes on and off every few seconds, and I have no idea why, but there's nothing more for me to do since it isn't raising any errors. I think Apple is moving in the right direction, and they'll eventually get things worked out, but I would not recommend buying a Mac for high-end graphics just yet.
     

    I met with an Android developer yesterday about getting LE3 running on Android devices. Since we already have it running on Windows, OSX, iPhone, and iPad, adding one additional platform should not be too hard at this point. Compiling C++ for Android is a pretty complex task, so it makes sense to have someone else work on this area while I focus on the core engine and editor. Once we have it running, the "platform milestone" for LE3 will be reached, with support for all platforms we intend to target at launch.
  12. Josh
    So here's what it took to get Leadwerks running on the iPhone:
     
    Let's start at the point where we had the C++ code building and running with an OpenGL 1 renderer on OSX. I was worried OpenGLES might be a dramatically different API that required a whole new learning curve, but I was pleasantly surprised. It's just a stripped-down version of OpenGL, more like OpenGL 3.3 than anything else. Making the OpenGLES 2 renderer was mostly just a copy-and-paste operation, after which I had to comment out a few unsupported commands.
     
    Building for the iPhone simulator was a bit harder. The first problem I ran into was that file paths on iOS are case-sensitive. (They aren't on OSX.) This was problematic, because my asset management code stored file paths in lowercase, so that had to be changed.
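The lowercase-path pitfall is easy to illustrate: a lookup keyed on lowercased paths behaves identically to the real file system on Windows and OSX, but silently misses files on iOS, where casing matters. A hypothetical sketch (these helper names are mine, not engine code):

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Lowercase a path for use as a case-insensitive lookup key.
std::string Lowercase(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return (char)std::tolower(c); });
    return s;
}

// This comparison matches how Windows/OSX file systems resolve names.
// On iOS, "Models/Crate.mdl" and "models/crate.mdl" are DIFFERENT paths,
// so the asset manager must preserve the original on-disk casing instead.
bool SamePathCaseInsensitive(const std::string& a, const std::string& b) {
    return Lowercase(a) == Lowercase(b);
}
```

The fix implied above is to stop normalizing: keep the path exactly as the file appears on disk, and only fall back to case-insensitive comparison on platforms where that is safe.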
     
    Then there was the matter of setting the application's file path. A special function had to be written to set this at application startup.
     
    The hardest part was the context handling. The flow of an iOS app all revolves around Objective-C events, and this doesn't mix well with a C++ program. What I ended up doing was creating an App class like this:

    class App
    {
    public:
        bool Start();
        bool Continue();
        int Finish();
    };
    Objective-C calls these class functions to keep the program loop running. Once the Continue() function returns false, the application calls Finish() and exits.
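The shape of that loop, written in plain C++, might look like the sketch below. On iOS the equivalent calls come from Objective-C events rather than a `while` loop; the `Run` function and the trivial member bodies here are illustrative stand-ins, not engine code:

```cpp
#include <cassert>

// Minimal stand-in for the App class described above: Start() sets up,
// Continue() runs one iteration, Finish() cleans up and returns an exit code.
struct App {
    int frames = 0;
    bool Start()    { return true; }           // initialization succeeded
    bool Continue() { return ++frames < 3; }   // pretend game loop: run 3 frames
    int  Finish()   { return 0; }              // shutdown, exit code 0
};

// The platform layer drives the loop; on iOS, Objective-C event handlers
// would make these same three calls instead.
int Run(App& app) {
    if (!app.Start()) return 1;
    while (app.Continue()) {}  // pump frames until Continue() returns false
    return app.Finish();
}
```

The point of the convention is that the platform owns the loop, not your code, which is what lets the same `App` class run under a desktop `main()` and an iOS event-driven shell alike.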
     
    I'm okay with using this same design across all platforms, if it means your C++ code will run on everything. Of course, this is just a convention in the generated project files; if you want, you can code the engine with any program structure you like. This is just the best way to ensure your code runs on everything.
     
    The last challenge was setting up the provisioning profiles, certificate, and some other stuff I don't even remember. This was pretty tricky, and took most of the day to figure out. In the end, I just ran through the instructions a second time, and it magically worked.
     
    At the end of all that, it was nice to see Leadwerks running on the iPhone. iOS was the hardest platform to add support for, and I don't expect the same difficulty when we add support for Android:


     
    Thanks to Ed Upton and Simon Armstrong for their help and advice.
  13. Josh
    There's a new set of pages for Leadwerks Engine 2 up. I think these do a better job communicating all the features of LE2, and explain why it's unique and awesome. Existing users might even learn something from the material:
    http://www.leadwerks.com/werkspace/page/Products/le2
     
    This also integrates into the forum skin. The only page left on the site that isn't using the global forum template is the 3D World Studio page, and that will be changed shortly.
     
    The 2.43 evaluation kit will be released soon. This will allow some programming and generally be less restrictive than the current 2.3 evaluation kit.
     
    You've now got a URL, Facebook, and Twitter share button in almost every page of the site, up in the header. Use them and help us spread the word about all the great stuff people are making.
  14. Josh
    A few weeks ago, I was getting pretty nervous about all the known unknowns in going cross-platform with the Leadwerks Engine 3 code. All my libraries and languages are platform-agnostic, but there's always going to be small issues, and I wanted to get those worked out while the code was still malleable. Here's a few things I found.
     
    -OSX file paths don't have a drive letter, so the only way to specify a relative path is to add "./" to the beginning. This had some significant implications for the asset reloading system.
     
    -iOS file paths are case-sensitive.
     
    -iOS projects require that all external files be included in the project as resources! Good thing the editor will be able to gather these up and generate a project automatically.
     
    -OSX child windows can't have their own menus. This means I have to make sure the tool windows in the editor have a complete toolbar for every function you might need to use.
     
    -OpenGLES doesn't allow shader uniform initializers. Although I rely on these to define default uniform values in the editor for the visual controls, the editor is actually never going to be running OpenGLES, since it's for embedded systems only. That's odd but convenient.
     
    -The default OSX font is Lucida Grande at 13 pt, a bit larger than the default Windows font, which required more space on some interface elements.
     
    -The minimum button dimensions on OSX you should use are 31x31. Otherwise those rounded corners won't have enough space to display properly.
     
    -There is no consistent OpenGL compressed texture format. Windows and OSX support DXTC, and iOS uses PVRTC. This is annoying, because it means uncompressed RGBA is the only cross-platform texture format. However, I think we can do an automatic PVRTC compression in the editor's "Publish" step for iOS, on DXTC-compressed textures only. I don't even know if the PVRTC format will ever appear in the editor.
     
    -Aside from deferred rendering, OpenGLES2 can do pretty much anything OpenGL3 can. Pretty amazing potential here.
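    On the texture format point above, the publish-time decision could be sketched like this; the platform names and return values are illustrative, not actual engine API:

```cpp
#include <string>

// Hypothetical mapping from target platform to the compressed texture
// format a "Publish" step would transcode DXTC source textures into.
std::string PublishTextureFormat(const std::string& platform)
{
    if (platform == "windows" || platform == "osx") return "DXTC";
    if (platform == "ios") return "PVRTC"; // transcoded from DXTC at publish time
    return "RGBA";                         // uncompressed fallback works everywhere
}
```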
     
    Here's the current editor, showing the texture, material, and shader systems fully working together to make this skybox material. It presently works with OpenGL 3.2 on Windows, and OpenGL 1.4 on Windows and OSX. OSX Lion comes out this month, at which point I hope to have the GL3 renderer running on Mac as well:

     
    Android is the platform I am least worried about, in terms of initial implementation. I'm not sure if the NDK supports pure C++ programming yet, but that seems to be the direction they are heading in. The OpenGLES implementation for iOS should work the same on Android devices. iOS was definitely the hardest implementation to write, and it's not nearly complete, but I've seen enough that I am confident we can become platform-agnostic with our code.
     
    I'm going to take a stab at the ATI terrain bug this afternoon. I don't think I'll get it fixed today, but it looks like ATI isn't going to fix their drivers, so I need to find a way around it.
  15. Josh
    So after a lot of learning and mistakes, I finally have Leadwerks Engine 3 running on OSX Snow Leopard, Lion, and iOS. Rather than write out a detailed blog, I'll just throw a lot of random thoughts your way:
     
    -OpenGLES2 is a nice blend of OpenGL 2 and OpenGL 3.3. I'm really surprised my OpenGL 3 shaders translate easily into OpenGLES2 shader code, with only a few changes. In fact, the iOS renderer looks exactly like the deferred OpenGL 3 renderer, except for shadows and some post-effects. The games I have seen running on the iPhone are severely underpowered compared to what they could look like. This little device has quite a lot of power, and you can be sure I will find the way to max out the visuals.
     

     
    -iOS uses its own texture compression format. This sucks, because it means either you are using uncompressed textures for cross-platform compatibility, or you have to have separate textures for your iOS build. The OpenGL spec really should have a defined compressed format, but I guess there was some patent issue with DXTC compression, or something like that.
     
    -Lua and my other cross-platform libraries seem to work just fine with iOS, which is fantastic. I really had no idea when I started this project whether it would really work like I hoped or not, but everything looks good.
     
    -The iOS port was probably the hardest one, due to the use of Objective-C. Android should be pretty easy after this. The PS3 is another platform I am really interested in, and my guess is LE3 could be ported to PS3 in a few days or less.
     
    -OSX Lion has some very appealing characteristics related to Leadwerks Engine 3. I'm under NDA and can't say exactly what they are, but it's very good for Mac and graphics. BTW, the gestures they are adding to OSX Lion are really fantastic, and reason enough to get the upgrade.
     

     
    There's still a ton of work to do before I have an actual product ready to release, but the plan is working and we're on track for a fantastic 3D development system for a wide variety of platforms.
  16. Josh
    Leadwerks Engine 2.43
    ...will be released tomorrow, along with a new source distro. I've fixed a number of bugs, but I don't like compiling releases when I am tired because there's a lot of little steps to mess up, so I will do it in the morning.
     
    Leadwerks Engine 3
    Optics was always my favorite subject in physics, and I've been getting some amazing results lately by modeling computer graphics after real lighting phenomena.
     
    Once I decided to make the materials system like 3ds max, everything became easy. The engine chooses an "ubershader" variation based on what texture slots a material has a texture assigned to. Creating a normal mapped material is as easy as creating a material and adding two textures. The engine will assume texture slot 0 is the diffuse map and slot 1 is the normal map, and will load a shader based on that. Predefined slots include diffuse, normal, specular, displacement, reflection, emission, refraction, and opacity maps. Of course, you can still explicitly assign a shader if you need something special. The material below was created just by dragging some textures into different slots and adjusting their strength:

     
    Cubemaps are built into the model ubershader, and there's support for reflection, refraction, or both using a fresnel term to combine them. Chromatic aberration is also supported, which splits refracted light into its RGB components:

     
    While I was getting into all these advanced optics, I decided to take a stab at color grading, and the results are great. You create a 3D texture (it's easier than it sounds) which gets used as a color lookup table in the post-processing filter. To make a new color table you can just run the source 2D image through a Photoshop filter and save it. Color grading gives a scene an overall feel and makes colors look consistent. It's a technique that's used extensively in film, starting with 2000's O Brother, Where Art Thou?:

     
    Here's another example:

     
    And here's a simple shot in the engine. The original:

     
    And graded with a "cool" color table:

     
    It's more advanced than just tinting the screen a certain color, because this effect will actually emphasize a range of colors.
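    For the curious, the lookup itself is simple; here's a CPU-side sketch of the idea using a nearest-neighbor lookup (the real filter samples the 3D texture in a shader with trilinear filtering, and the ColorLUT structure here is hypothetical):

```cpp
#include <vector>

// Hypothetical color lookup table: size*size*size RGB triplets,
// with the red axis varying fastest.
struct ColorLUT
{
    int size;
    std::vector<float> data;

    // Nearest-neighbor lookup; r, g, b are in [0,1] and are replaced
    // by the graded color stored in the table.
    void Grade(float& r, float& g, float& b) const
    {
        int ri = (int)(r * (size - 1) + 0.5f);
        int gi = (int)(g * (size - 1) + 0.5f);
        int bi = (int)(b * (size - 1) + 0.5f);
        int i = ((bi * size + gi) * size + ri) * 3;
        r = data[i]; g = data[i + 1]; b = data[i + 2];
    }
};

// Identity LUT: grading with it leaves colors unchanged.
ColorLUT MakeIdentityLUT(int size)
{
    ColorLUT lut{size, {}};
    for (int b = 0; b < size; ++b)
        for (int g = 0; g < size; ++g)
            for (int r = 0; r < size; ++r)
            {
                lut.data.push_back((float)r / (size - 1));
                lut.data.push_back((float)g / (size - 1));
                lut.data.push_back((float)b / (size - 1));
            }
    return lut;
}
```

    Grading with the identity table leaves the image unchanged, which is exactly why you can author new tables by running the identity image through a Photoshop filter and saving the result.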
     
    Preparing the Asset Store
    Now that Leadwerks.com is hosted on our own dedicated server, I was able to create a more secure folder to store asset store files in that can only be accessed by the forum system. Previously, we had dynamic download URLs working, but the same file could still be downloaded from any browser within a few minutes before the dynamic URL changed. This will give better security for asset store merchants. According to my big flowchart, the only things left to do are to fix the side-scrolling main page and set up a merchant account with our bank, and then we'll be ready to launch.
     
    Great movie and soundtrack, by the way:


  17. Josh
    The C++ object debugger for Lua is working now. It was a little tricky, but I implemented a method to view all members of C++ objects. The debugger does handle dynamic object fetching, so if you expand a node in the debug tree representing a C++ object, the contents of that object will be loaded and displayed. Let's say you have an entity parented to another entity. This allows you to expand the parent member of the child, then find the child in the parent's child list, and so on, ad infinitum.

     
    If you're interested in how this works internally, every class in LE3 is derived from a base class called "Object". The Object class has a virtual function called Debug() which returns a string in a format like the following:
    entity={position={x=0,y=0,z=0},rotation={x=0,y=0,z=0},scale={x=1,y=1,z=1},parent=0x00fh7fC}
     
    This is read by the debugger and used to create more readable nodes in the debug tree. The hex address listed for all pointers can be compared to values in C++ or another language, if needed. Classes that are used as direct objects (like the math classes and, well, nothing else) simply output their data like this:
    v={0.0,0.0,0.0}
     
    Hard to explain, but easy to understand when you just look at the debugger.
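    If you're wondering what a Debug() implementation amounts to, it's basically formatted printing; here's a sketch for a math class matching the output above (the Vec3 shown is illustrative, not the engine's actual class):

```cpp
#include <cstdio>
#include <string>

// Hypothetical math class mirroring the "v={0.0,0.0,0.0}" output format.
struct Vec3
{
    float x, y, z;

    // Build the debug string the tree view parses.
    std::string Debug(const std::string& name) const
    {
        char buf[128];
        snprintf(buf, sizeof(buf), "%s={%.1f,%.1f,%.1f}", name.c_str(), x, y, z);
        return buf;
    }
};
```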
  18. Josh
    The script editor and Lua implementation are very close to being a usable programming environment. You can actually see the pointers to the C++ objects you create, and step through code. It's still missing features, but the parts I was worried about are working.
     
    I am adding Lua commands like crazy, and it's easy to keep track of them because they all reside in one header file. The function overloading is great, as you can see in my example here, where I use a single float value to set the ambient light color. There's also a new Camera::SetFOV command.



    I also fired up the editor for the first time in a couple weeks, and it immediately struck me that there should be a single texture, shader, model, etc. editor, instead of creating a new one in a window every time an asset is opened. That, and making the interface more like 3D World Studio, are the two things I will be working on in the editor. I'm nearly ready to start writing the scene editor. That's not an area of new research, so it will go a lot faster. It's just a lot of work and careful planning, but no more crazy unknown parts that I'm not sure will actually work.
     
    All in all, the tools I am using now give me complete control to give you the best experience possible, and I am really happy with Leadwerks Engine 3 so far!
  19. Josh
    Two issues in the art pipeline were giving me some doubt. I stopped working on them a couple weeks ago, and I'm glad I did because the solutions became clear to me.
     

    Shader Files
    First, there is the matter of shaders. A shader can actually consist of half a dozen files:
     
    bumpmapped.opengl3.vert
    bumpmapped.opengl3.frag
    bumpmapped.opengl4.vert
    bumpmapped.opengl4.frag
    bumpmapped.opengles.vert
    bumpmapped.opengles.frag
     
    I originally thought the .shd extension would be good for shaders, in keeping with our extension naming scheme, but there's actually no information an .shd file even needs to contain! I also considered creating a simple format that would pack all the shader strings into a single file, and display the different ones for whatever graphics driver was in use at the time, but that approach reeked of future confusion. I'd like to still be able to open .vert and .frag files in Notepad++ or another text editor.
     
    I came up with the idea to use a .shd file that contains all these shader files, but instead of packing them into a file format, the .shd file will just be a renamed .pak. That way you can easily extract the original text files, but shaders can be distributed and loaded as a single file.
     

    Material Textures
    I've been working with a system that uses texture names defined in the shader file. It works and it's really cool, but I think I have something even cooler. Instead of requiring a shader to define texture unit names, it would be less confusing to just decide on a fixed naming scheme and stick with it, something like this:
     
    texture0 - Diffuse
    texture1 - Normal
    texture2 - Specular
    texture3 - Reflection
    texture4 - Emission
    texture5 - Height
     
    And in C++ you would have constants like this:

    #define TEXTURE_DIFFUSE 0
    #define TEXTURE_NORMAL 1
    ...
    If you want to create a bumpmapped material, you can simply assign textures 0 and 1, and the material will choose a shader with the assumption those are supposed to be the diffuse and normal map. If the current surface being rendered contains bone weights, an animated version of the shader will be used. If a shader is explicitly defined, that of course will override the material's automatic selection. This way, all you have to do is drag a few textures onto a material, and 95% of the time no shader will even have to be explicitly defined. This is similar to the materials system in 3ds max.
     
    The programmer in me screams that this makes no sense, since texture units are completely arbitrary, but from a usage standpoint it makes sense, since the exceptions to this paradigm probably make up less than 5% of the materials in any scene.
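    The automatic selection can be thought of as a bitmask of assigned slots mapped to an ubershader variation. Here's a rough sketch of the idea; the constants follow the defines above, but the Material class and shader names are hypothetical:

```cpp
#include <string>

// Slot indices, as in the defines above.
enum { TEXTURE_DIFFUSE = 0, TEXTURE_NORMAL = 1, TEXTURE_SPECULAR = 2 };

// Hypothetical material: tracks which texture slots have a texture assigned.
struct Material
{
    unsigned slots = 0;

    void SetTexture(int slot) { slots |= 1u << slot; }

    // Pick an ubershader variation from the assigned slots.
    std::string AutoShader() const
    {
        if (slots & (1u << TEXTURE_NORMAL))
        {
            if (slots & (1u << TEXTURE_SPECULAR)) return "diffuse+normal+specular";
            return "diffuse+normal"; // bumpmapped: slots 0 and 1 assigned
        }
        return "diffuse";
    }
};
```

    Dragging a normal map into slot 1 is all it takes to get the bumpmapped variation; an explicitly assigned shader would simply bypass this selection.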
     
    Now for custom shaders, you might end up with a situation where you are dragging a texture into the "reflection" slot, when it's really going to be used to indicate team colors or something, but it's still far simpler to say "set the reflection texture with...". Everyone knows what you mean, textures can be set in code without having to look up the slot number in the shader, and there's no ambiguity.
     
    Anyways, that's just two small design problems overcome that I thought I would share with you. I think the texture issue really demonstrates the difference between engineering and product design.
     

    http://www.youtube.com/watch?v=5U6Hzjz5220
  20. Josh
    While working with zlib, I implemented the package system, and it turned out really cool. Here's some stuff you can do:
     
    First, let's load the package and register it in the file system:

    Package* pak = ReadPackage("mystuff.pak"); //Read the package
    RegisterPackage(pak); //Register package into file system
    Read a file from a package, just as if it were on the hard drive:

    Stream* stream = ReadFile("mystuff/new document.txt"); //Read a file straight out of the package!
    Write a file to a package, just as if it were on the hard drive:

    Stream* stream = WriteFile("mystuff/new document.txt"); //Write a file straight into the package!
    To save your changes permanently in the package, make sure you close it before the program ends:

    pak->Close(); //Close the package when finished to save the changes
    This should work in the editor, too, so you can store all your files in packages and the editor will automatically save any packages that change.
     
    When a file exists both on the hard drive and in a package, the engine will preferentially read the file on the hard drive. When a file is written, the engine will preferentially write to a package file, if one is registered that contains the directory the file is being written to. Zip folders and file hierarchy are kept intact.
     
    The idea is you can register all your packages, probably with a recursive function, and then read files from and write them to the packages, without knowing or caring whether you are dealing with a file on the hard drive or a file in a package. This is different from the abstract file system, because each file has a precise file path. The only ambiguity is when you have a file on the hard drive and a file in a package with the same name, in the same directory. The package file names are treated as folders. So if I had a model "teapot.mdl" in a package file "Models.pak" I would load it by calling LoadModel("Models/teapot.mdl"). If you want, you can leave all your files on the hard drive during development, and then let Leadwerks automatically package them up for publishing, and you won't have to change any code to load your files.
     
    Once a package is registered, all the file system commands will include them in the file hierarchy, and there is no need to worry about packages at all. If you do write to any, you should close them before ending the program so your changes are saved.
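    The recursive registration could be as simple as walking the project folder for .pak files. Here's a sketch using the C++17 standard filesystem library; each collected path would then be passed to the ReadPackage/RegisterPackage calls shown above (omitted here so the sketch stands alone):

```cpp
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// True for files the registration pass should treat as packages.
bool IsPackage(const fs::path& p)
{
    return p.extension() == ".pak";
}

// Walk a directory tree and collect every package file found; each result
// would then go through ReadPackage() and RegisterPackage().
std::vector<std::string> CollectPackages(const std::string& root)
{
    std::vector<std::string> paks;
    for (const auto& entry : fs::recursive_directory_iterator(root))
        if (entry.is_regular_file() && IsPackage(entry.path()))
            paks.push_back(entry.path().string());
    return paks;
}
```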
     
    My idea from last time about putting shaders in zip packages and calling them .shd files won't work, because that involves packages within packages, and I don't want to do that. So the shaders are simply going to be left as collections of associated .vert, .frag, .geom, .ctrl, and .tess files, which will be easier to keep track of when editing anyways.
  21. Josh
    This blog is going to actually be about business rather than technology. Here's what's going to happen this summer:
     
    First, we need to get this ATI driver bug fixed ASAP. Nothing else can happen until that gets fixed:
    http://www.leadwerks.com/werkspace/tracker/issue-165-terrain-textures-bug-radeon-hd-5850
     
    The official documentation is coming along well, and I am really pleased with how this has turned out. (Thanks, Aggror!)
     
    An updated evaluation kit with some limited programming will be released. One of the biggest appeals of Leadwerks Engine 2 is the ease of C++ programming, and we need to demonstrate that better.
     
    With the release of the evaluation kit, we're going to begin an affiliate program. My preference is to have Facebook and Twitter share buttons in the header that copy the current URL and add your affiliate ID into the PHP arguments in the URL. Then you can just click that to share any page on the site, with your embedded affiliate ID. When a new user comes to the site from that link, your affiliate ID will be detected, and any sale they make will give you 15% of the sale price as credit to use in the asset store. If you are a Leadwerks Merchant, you can also have the cash sent to your PayPal account.
     
    The Asset Store will be "launched" at this point, meaning I'll put out a lot of news announcements and PR to make a big deal of it. This requires the front page be fixed, so you have the nice scrolling rows of products. I also have to set up a merchant account, which will lower the transaction costs by a lot. Right now, I am losing about $8 on each engine order through PayPal, and that needs to stop. I also need to formulate an agreement for Leadwerks Merchants, so that we can give more people access to sell their stuff in the Asset Store. The tax issues here are pretty serious, so I need formal paperwork to do this.
     
    We're opening a new section of the site for videos. You can check out the beta here. I like searching for "Leadwerks" on YouTube and clicking around on videos. The idea here is that if someone is wasting time clicking around on Leadwerks videos, they might as well do it on our site. There's a lot of great videos out there that are easy to miss, so this will gather them in one spot. Best of all, it doesn't require any bandwidth from our server, as YouTube provides the data transfer, and even more importantly, users don't have to upload their videos to two different sites. To add a video, just enter the 11-character YouTube video ID, title, and description. If I have already added your video, and you would rather have it appear under your name, just add it to the database yourself, and I will delete my original entry.
     
    My goal with the affiliate system and video gallery is to raise site traffic. Right now we have about 36,000 visits per month. I want to raise that to 100,000 visits per month. Our new dedicated server can definitely handle the load.
     
    Finally, we're going to implement advertisements in the video gallery. I want it to be tasteful and relevant. I tried Google Adsense, but it displayed a lot of irrelevant ads. I tried "section targeting" and even set all information to be blocked except a few keywords I chose, but it made no difference. I inquired about AdBrite, and they seem to be the same sort of system of automatic analysis and placement of irrelevant ads; artificial stupidity, if you will. I know my users are interested in video games, 3D models, programming, computer hardware, and art programs. If a system won't let me choose what content is displayed, I won't use it. Therefore, I am going to attempt to find advertisers with relevant content that have affiliate systems. I'd prefer not to sell advertising space and manage accounts, at least at first.
     
    So, the ultimate goal here is to raise traffic. More traffic = more LE2 license sales, more asset store sales, more advertising revenue, and more opportunities for you guys. There's potential with this we can only begin to imagine.
     
    There is so much to do, I had to draw out a diagram showing the dependency of events, so I can cross them off as they are accomplished:

     
    And last, I will leave you with this video, by a band I am really starting to like, for some reason:


  22. Josh
    With Luabind, it turns out we don't even need a table associated with an entity. We can just add values and functions straight to Lua's representation of that entity in the virtual machine! So instead of this:

    object.health = 100
    object.entity:SetPosition(1,2,3)
    You can just do this, which is much nicer:

    entity.health = 100
    entity:SetPosition(1,2,3)
    So there's no object/actor nonsense, you just work directly with the entity itself.
     

    Entity Keys
    The Get/SetKey convention from Leadwerks Engine 2 is going away, to be replaced with more direct entity access functions. You can still store "keys" with strings, but this will directly set the values of the entity table in Lua, so they can be accessed from script more easily:
    void Entity::SetString(const std::string& name, const std::string& value);
    std::string Entity::GetString(const std::string& name);
    void Entity::SetFloat(const std::string& name, const float& value);
    float Entity::GetFloat(const std::string& name);
    void Entity::SetInt(const std::string& name, const int& value);
    int Entity::GetInt(const std::string& name);
    void Entity::SetObject(const std::string& name, Object* o);
    Object* Entity::GetObject(const std::string& name);
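    Internally, this sort of typed key/value access just amounts to maps from names to values; here's a minimal sketch of the idea (hypothetical, not the engine's actual storage):

```cpp
#include <map>
#include <string>

// Hypothetical backing store for an entity's named values.
class KeyStore
{
    std::map<std::string, std::string> strings;
    std::map<std::string, float> floats;
public:
    void SetString(const std::string& name, const std::string& value) { strings[name] = value; }

    // Missing keys return a sensible default instead of failing.
    std::string GetString(const std::string& name)
    {
        auto it = strings.find(name);
        return it != strings.end() ? it->second : "";
    }

    void SetFloat(const std::string& name, float value) { floats[name] = value; }

    float GetFloat(const std::string& name)
    {
        auto it = floats.find(name);
        return it != floats.end() ? it->second : 0.0f;
    }
};
```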
    Here's a sample "mover" script that performs simple movement and rotation each frame:

    function entity:Start()
    	self.movespeed = Vec3(0)
    	self.turnspeed = Vec3(0)
    end
    
    function entity:Update()
    	self:Move(self.movespeed)
    	self:Turn(self.turnspeed)
    end
    And this is how we would set an entity up in C++ to turn 2 degrees on the Y axis each frame:

    Entity* box = CreateBox();
    box->AttachScript("Scripts/Entity/Mover.lua");
    box->SetObject("turnspeed", Vec3(0,2,0));

    Getters and Setters
    LuaBind does support getter and setter functions, and I decided it would be nice if we could have an object-oriented command set for the surface commands (even though vertices are NOT an object-oriented structure, at all!). If I can get vectors to be accessible in Lua, then we will be able to write code like this script, which performs a "breathing" effect on any model, ala Quake 3 Arena:
    function entity:Start()
    	self.speed = 1
    	self.amplitude = 0.1
    end
    
    function entity:Update()
    	--Make sure this is a model entity before calling surface commands
    	if self.class == CLASS_MODEL then
    		--Get the distance each vertex should move
    		local d = math.sin(AppTime() * self.speed) * self.amplitude - self.amplitude
    		--Apply movement to all vertices
    		for n = 0, self.surface:length() - 1 do
    			local surface = self.surface[n]
    			for v = 0, surface.vertex:length() - 1 do
    				local vertex = surface.vertex[v]
    				vertex.position = vertex.position + d * vertex.normal
    			end
    		end
    	end
    end

    Multiple Script Attachments
    This is a very tricky thing to handle, because the behavior is so hard to define. Right now I have it set up so predefined functions will be called, in order of their attachment. As for user-defined functions, that's a lot harder to pin down. If two scripts contain a function called "Kill()", and the script itself calls the function, should both functions be called? It will take more time and testing to see how this should work. A big problem is if the user defines a function that returns a value, and two scripts contain the same function. If another script calls the function, what should be returned? So then I started thinking about separate spaces for each script attachment, with their own sets of members and functions, and I realized how incredibly hard that would be to understand if you were accessing the entity from an outside script.
    In the absence of any compelling technological advantage, simple is best. For now, only the predefined functions get executed in sequence, and none of those return a value. My prediction is multiple script attachments will be used primarily by non-programmers who just want to combine a few behaviors and see results without touching any code. When you get into more complex behavior, I think script programmers will generally use one script per entity that does what they want.
     
    It's still only 1:30 in the afternoon, so I am going to go get lunch and spend the rest of the day working bug reports. The model reloading issue that's active is a tough one. There's also a PHP issue uploading files to the site, so I will try to get that resolved.
  23. Josh
    In Leadwerks Engine 3, you can load a script and attach it to any entity. You can attach multiple scripts to any entity, and they will be called in the order they were attached. Right now, I am calling the script-side table "actor" because it is a little more descriptive than "object" but I'm not sure about all the syntax yet. Below is a sample script for a sliding door. When you attach this to a model, the script will control its behavior:

    -----------------------------
    -- Sliding door script
    -----------------------------
    
    --Attach this to any entity to turn it into a sliding door.
    --You can make the door move in any direction by adjusting the movement vector.
    --When the door opens or closes, a one-shot noise will play, along with a looping
    --sound that will continue until the door comes to rest.
    
    --expose Sound opennoise
    --expose Sound closenoise
    --expose Sound movenoise
    --expose Sound stopnoise
    --expose Vec3 movement
    --expose function Open
    --expose function Close
    
    function actor:Start()
    	self.openstate = 0
    	self.movestate = 0
    	if self.movement ~= nil then
    		self.openposition = self.entity.position + self.movement
    		self.closeposition = self.entity.position
    	end
    end
    
    function actor:Open()
    	if self.openstate == 0 then
    		self.movestate = 1
    		--Play one-shot noise, if it is set
    		if self.opennoise ~= nil then
    			self.entity:EmitSound(self.opennoise)
    		end
    		--Play looping move noise if it is set
    		if self.movenoise ~= nil then
    			self.movesource = self.entity:EmitSound(self.movenoise, true)
    		end
    	end
    end
    
    function actor:Close()
    	if self.openstate == 1 then
    		self.movestate = -1
    		--Play one-shot noise, if it is set
    		if self.closenoise ~= nil then
    			self.entity:EmitSound(self.closenoise)
    		end
    		--Play looping move noise if it is set
    		if self.movenoise ~= nil then
    			self.movesource = self.entity:EmitSound(self.movenoise, true)
    		end
    	end
    end
    
    function actor:Update()
    	if self.movestate ~= 0 then
    		--Calculate the difference between where we are and where we should be
    		local d
    		if self.movestate == 1 then
    			d = self.openposition - self.entity.position
    		else
    			d = self.closeposition - self.entity.position
    		end
    		--Check to see if there is any difference
    		local l = d:Length()
    		if l > 0 then
    			--Limit the movement if it is greater than the move speed of the door
    			if l > self.movespeed then
    				d = d:Normalize() * self.movespeed
    			end
    			self.entity:Move(d, false)
    		else
    			--The door has arrived, so record its state and stop moving
    			if self.movestate == 1 then self.openstate = 1 else self.openstate = 0 end
    			self.movestate = 0
    			--Disable looping noise source if it exists
    			if self.movesource ~= nil then
    				self.movesource:Stop()
    			end
    			--Play one-shot stop noise if it exists
    			if self.stopnoise ~= nil then
    				self.entity:EmitSound(self.stopnoise)
    			end
    		end
    	end
    end
    Lua is pretty flexible, so I can make it work pretty much any way we want. When designing stuff like this, I find it's best to start with what the end user wants, and then work your way backwards to make it happen. What do you think?
  24. Josh
    We've transferred the site data to a dedicated server hosted with WiredTree. I've filed a ticket with Invision Power Services to configure the new server. When that is confirmed to be working, I'll retransfer the database to make sure all posts are saved, and change the domain nameservers and A record.
     
    I also have an update for the site skin that is supposed to fix most of the issues reported, but I want to make sure things are working right before installing it.
     
    Then we'll get back to work on docs and other stuff. Stay tuned.
  25. Josh
    Below is raw output from the Lua debugger. It displays all variables, tables, and functions in the Lua script call stack. This will be displayed in a nice treeview interface that looks much better, but I found this terribly exciting.
     
    You'll notice that tables don't get expanded, even though I have the ability to display all the values in a table. The reason they don't get expanded is because tables can contain tables, including tables that might be found elsewhere in the program. This can easily cause an infinite loop of tables leading to tables leading to tables. This is why the Lua debugger must be a networked program that talks to the running process. When the user opens a treeview node to view the contents of that table, the main program will return all the values in the table so you can view them, but not before.
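    The cycle problem above is the classic reason debuggers expand lazily; eager expansion would need a visited set to terminate. Here's a small illustrative sketch of that idea in C++, with a Node standing in for a Lua table (this is not the debugger's actual code):

```cpp
#include <set>
#include <vector>

// Stands in for a Lua table that may reference other tables,
// including itself.
struct Node
{
    std::vector<Node*> children;
};

// Count reachable nodes without looping forever on cycles:
// any node already visited is skipped.
int CountReachable(Node* root, std::set<Node*>& visited)
{
    if (!root || visited.count(root)) return 0;
    visited.insert(root);
    int n = 1;
    for (Node* child : root->children)
        n += CountReachable(child, visited);
    return n;
}
```

    Lazy expansion sidesteps the bookkeeping entirely: the debugger only ever walks one level of the graph per request.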
     
    Here's what the Lua script looks like:

    print("Script is running...")
    
    local a = 2
    local b = 3
    someglobalvalue = "hello!"
    someglobaltable = {}
    
    local mytable = {}
    mytable.color = "red"
    mytable.mood = "happy"
    mytable.subtable = {}
    mytable.subtable.n = 9
    
    function dostuff()
    	local test = "dog"
    	RuntimeError("An error has occurred!")
    end
    
    dostuff()
     
    This will allow you to examine the entire contents of the virtual machine of any Leadwerks Engine 3 program built with debugging enabled, regardless of what language the main loop is coded in. The Lua implementation in Leadwerks Engine 2 was well-received, but we found in advanced programs we needed better tools to debug and analyze scripts. The script debugger in Leadwerks Engine 3 will make everything perfectly transparent so you can easily identify and fix problems. It also gave me a start on networking, because networking commands were needed to set this up.
     
    My plan for the networking API is to have commands for sending raw data with a message id:

    bool Send(const int& message, Bank* data=NULL, const int& channel=0, const int& flags=MESSAGE_SEQUENCED)
     
    As well as a few game-oriented commands to easily set up basic behavior:

    bool Say(const std::string& text)
    bool TeamSay(const std::string& text)
    bool Join(const int& team)
     
    The real magic is the entity syncing, which handles networked physics and makes it so any command you call on the server affects the corresponding entities on all clients. If you call entity->SetColor(1,0,0) to make an object red, it will turn red on all clients. I've tested an earlier prototype of this, and it worked well, even across continents. This makes network programming fairly easy, and a lot of fun.
     
    I'd really like to have a simple open-source tournament shooter game the whole community can play around with. Only by building a good networking base with a high-level entity syncing system does this become convenient and easy to modify.
     

    http://www.youtube.com/watch?v=7Vae_AkLb4Q
     
    ---------------------------------------------------------------------------------------
     
    And here's the debug output displayed in a tree view. The tables each get one blank child node, and when the user expands the table, the debugger will send a request for that table's data:
