Posts posted by gamecreator
-
When you create a bone in either Blender or 3DS Max, the program actually creates two bones to give the bone length. In Blender it's just a single click, but if you export the file and import it into Max, you'll see the two bones (Bone and Bone_end by default). In 3DS Max you have to drag the bone out to create a bone with length.
What's confusing to me now is that you can delete the second bone in Max, but it still leaves the "length" of the bone. If you export it as FBX and reimport it, it will have the one bone with no length. But if you save and load it as a Max file, it still keeps the length.
-
If you know the next bone that it's attached to, you can get the distance between the two bones with GetDistance.
-
-
1 hour ago, Josh said:
It’s not a 3D engine. It’s just for making programs that need a desktop UI.
That doesn't really answer any of my questions.
-
On 10/23/2020 at 8:53 PM, havenphillip said:
I'm not sure what this is
I'm a little confused by this too. Is this a separate product from Ultra Engine? Or will this App Kit eventually gain all the features of a game engine (like terrain, etc.)? If so, will backing this Kickstarter get you all the future features?
-
Whoa! Thank you! I'm gonna read through it. I don't have a specific one in mind but this opens up a lot of possibilities, if it's as straightforward as it seems. Josh moved it to the programming forum but here's the link:
Thanks again!
-
2 hours ago, havenphillip said:
Tons of shaders here. Flipping them over to Leadwerks isn't that hard:
https://shaderfrog.com/app/
https://www.shadertoy.com/
It would be great to have a simple tutorial or walkthrough for this process as I know nothing about shaders but I'm pretty comfortable learning by modifying.
-
Right. The host would still be authoritative and could check that the client didn't cheat. But that's down the line. Right now I'm just trying to get to step 1: making sure the client is where the host was.
-
6 hours ago, JMichaelK said:
Why not let the server control all movement so there is no discrepancy to resolve?
Well yes, that's the plan. But I'm trying to figure out how the client would have the proper position for the character, since you can't just use GoToPoint() on the client (I think). The ideal way to do it would be for the host to not need to send anything to the client, once it sent the client the start and end points of the navigation (with timestamps). The client should then be able to calculate where the character needs to be at any given time.
1 hour ago, Rick said:
To me it sounds like you're talking about 2 issues really. The first one being GoToPoint() with high speeds isn't doing what you think it should be doing.
Yes. I was debating whether to include that but it demonstrated well that the controller might not even be where we expect. Probably should have been a separate discussion.
-
I was just theorizing about how to make sure that a pathfinding character on a host ends up correctly in the same position for a client. Actual network code would come later.
And yes, I call GoToPoint only once (when I hit the spacebar). Ignore that it's possible to hit space multiple times in that code.
-
I'm trying to figure out how to do pathfinding over the network and I'm not having any luck so far. I found this nice thread and post where you could supposedly get the navmesh points. I threw together a project using it but my tests weren't too promising. The problem is that if the character is going fast enough, he'll miss points entirely, kind of acting like he's drunk, making all but the start and end points useless for network prediction/interpolation. Here's the video:
The red markers represent all the points that FindPath returns (10 points + beginning and end). Note that I'm only using GoToPoint once. I have a feeling that FindPath only works for the default speed (and perhaps acceleration). Here's the code:
#include "Leadwerks.h"

using namespace Leadwerks;

Entity* player = NULL;
Entity* navpointbox[100];

int main(int argc, const char *argv[])
{
    int j = 0;

    Leadwerks::Window* window = Leadwerks::Window::Create("Navmesh Point Test", 0, 0, 1280, 720);
    Context* context = Context::Create(window);
    World* world = World::Create();

    Camera* camera = Camera::Create();
    camera->SetRotation(65, 0, 0);
    camera->Move(0, -2, -12);

    Light* light = DirectionalLight::Create();
    light->SetRotation(35, 35, 0);

    Map::Load("Maps/map.map");
    NavMesh* navMesh = world->navmesh;

    // Enable navmesh debugging
    // camera->SetDebugNavigationMode(true);

    // Start: -15.0, -2
    // End: 9, 7

    // Create a character
    player = Pivot::Create();
    Entity* visiblecapsule = Model::Cylinder(16, player);
    visiblecapsule->SetScale(1, 2, 1);
    visiblecapsule->SetPosition(0, 1, 0);
    player->SetPosition(-15, 0, 2);
    player->SetMass(1);
    player->SetPhysicsMode(Entity::CharacterPhysics);

    while (true)
    {
        if (window->Closed() || window->KeyDown(Key::Escape)) return false;

        if (window->KeyHit(Key::Space))
        {
            player->GoToPoint(9, 0, 7, 10.0, 5);
            // player->GoToPoint(9, 0, 7, 5.0, 5);

            NavPath* path = new NavPath();
            vector<Vec3> pathpoints;
            path->navmesh = navMesh;
            path->navmesh->FindPath(player->GetPosition(true), Vec3(9, 0, 7), pathpoints);

            for (std::vector<Vec3>::iterator it = pathpoints.begin(); it != pathpoints.end(); ++it)
            {
                std::cout << "Point: " << it->x << ", " << it->z << endl;
                navpointbox[j] = Model::Box();
                navpointbox[j]->SetColor(1.0, 0.0, 0.0);
                navpointbox[j]->SetScale(0.3, 1.0, 0.3);
                navpointbox[j]->SetPosition(it->x, 0.5, it->z);
                j++;
            }
        }

        Leadwerks::Time::Update();
        world->Update();
        world->Render();
        context->Sync();
    }

    return 0;
}
So, is there a way to get ALL the ACTUAL points the character will walk through (not just markers it might miss anyway)? Meaning, if a host lags for a second, I want the client to know exactly where the character should be at any given time.
If this can't be done for Leadwerks 4, maybe we could have a FindPathPoint in Leadwerks 5 that returns a Vec3 at a given time. I understand that a character could be pushed out of the way or whatever too so it won't be 100% reliable.
-
You can detect if two objects collide with Collision but I don't think it'll give you the shape of the collision. Might be able to do it with Newton somehow.
-
You're a few months late:
Also, the article you linked explains how they do it: they stream it in from an SSD, which is newer storage and reads much faster than a hard drive.
-
6 hours ago, JMK said:
It's also best to do things on a time basis.
That was an issue in the past. Using GetSpeed isn't accurate so your game won't play the same on different computers.
-
9 hours ago, JMK said:
Define "alter"? You mean change the source code? Shaders in Vulkan are loaded from compiled SPV files. This is nice because there is never any question whether a shader will compile successfully on one card or another. It would be possible to integrate the shader compiler code into the engine, but that's not really something I see a need for.
Say a game is a fixed 60fps and you want to add that blur effect I talked about over 5 seconds.
effect->SetProperty("strength", bluramount);
bluramount += 1.0 / 60 / 5; // at 60 fps, adds enough blur per frame to reach full strength after 5 seconds
Edit: Actually, isn't this what SetFloat does now? I don't know much about shaders but would this be a uniform?
-
5 hours ago, JMK said:
Yes, that should be possible. Instead of using shader uniforms you will just have access to a set of up to maybe 16 float values, or something like that, and the JSON strings will give them names for identification. So you could call something like effect->SetProperty("strength", 0.5).
That would be a great step toward flexibility but I wonder: is this not something that can be manipulated in memory? Can you alter shaders after you load them from a file? Guessing that you can't from what you're saying.
5 hours ago, JMK said:
You don't need a shader to render to a texture. There is a command called Camera::SetTarget() or Camera::SetRenderTarget (check the docs) that does this.
Oh man. Didn't even know this was an official function. Testing it now... thanks! Edit: it works! Yay!
-
7 hours ago, JMK said:
adjustable shader settings the user can modify
Do you mean that these will be able to be changed by code? Like you could create an effect where you blur the screen more and more as someone gets more and more drunk or dizzy? Or use it as a fade in/out effect for menus? I'd love for this to be the case in the new engine.
Quote
Am I missing anything else important? Is there another effect I should add?
Yes! Thank you for asking. A shader to project a camera view onto a surface, like a TV or a screen. I was just researching how to do this and saw other people on the forums trying to do it. If you know shows like Star Trek, I was thinking of putting on screen the view of a camera in space looking at a ship:
Or use the same trick to show someone talking to you:
Other than that, whatever helps make the games look beautiful, which I know you're already working on.
-
That might work. You can also try just not including collision meshes/shapes for your models.
-
It's pretty straightforward with the directions you can find online but you can either right-click an exe (like Steam) and run that program through Sandboxie (shell extension) or you can run Sandboxie and find a program to run through it directly. There's maybe one or two more extra steps but that's basically all there is to it. It's kind of like a virtual machine, if you've ever used those.
-
You don't even need a second computer. I've tested it with a free program called Sandboxie ("open-source sandboxing program for Microsoft Windows") and I was able to run a second Steam account on the same computer.
-
This might help:
I tried to look at the Lua code but I couldn't really understand how it's supposed to connect people. I don't know if you use Lua or C.
-
True, I was thinking about how I could just substitute different models directly. It's not really hard to do. The only real difficulty is turning vegetation into normal models without billboards. I don't think that's possible now.
-
To speed up development, I sometimes use a separate barebones map that still lets me test character changes. It helps to not have to wait for all the normal assets to load every time I run the program. But I was wondering if we could implement (or do we already have?) a sort of model substitution system. This would still allow the loading of the normal map but all the trees, rocks, houses, whatever would be substituted by low-poly meshes of our choice. For example:
tree01.mdl -> treelowpoly.mdl
tree02.mdl -> treelowpoly.mdl
tree03.mdl -> treelowpoly.mdl
tree04.mdl -> treelowpoly.mdl
tree05.mdl -> treelowpoly.mdl
house01.mdl -> houselowpoly.mdl
house02.mdl -> houselowpoly.mdl
house03.mdl -> houselowpoly.mdl
house04.mdl -> houselowpoly.mdl
house05.mdl -> houselowpoly.mdl
etc.
Would be even better to have the option to turn billboards off since those take a while to load too. I guess this would change it from vegetation to a static model.
The other option is maybe to have developers be able to stream in assets as they play. Like, have the terrain and collisions load at the beginning but then all other assets stream in. Kind of like how we have that option for textures.
Just some thoughts to speed up development and testing.
-
Did you see if it detects just the entity if you don't check for the key?
Release date for ultraengine 5?
in General Discussion
Posted
Curious if there's an update on this as this Sunday would end the 6 weeks. No need to answer for me as I'm using another engine now but others might want to know.