
Michael_J
Members · 202 posts
Everything posted by Michael_J

  1. Yeah, MAX has an option to choose which axis is up, Y or Z, but neither setting alters the above outcome (unless the setting is bugged and never actually swaps the axis). Again, if I had to guess, I'd say it's PROBABLY a bug in MAX's exporter, but it still might be worth a quick look at some point....
  2. No, if you convert from FBX to MDL you'll see a global rotation of 89.98022, 0.0, 0.0 instead of the 0.0, 0.0, 0.0 you'd expect. There is no option in MAX's FBX exporter to do anything like this, other than converting the up axis from Z to Y (MAX uses a Z-up axis). The only thing I could imagine causing this is not resetting the XForm in MAX before export, but I did, so it can't be that. I'd be willing to bet this is an issue in MAX's FBX exporter itself, especially if files from, say, Blender are fine...
  3. Josh is already aware of this and has been provided test art, but I thought I'd post here as a reminder as well as a heads-up...

When exporting an object from MAX in .fbx format (I'm using MAX 2010 with the most up-to-date version of the FBX plugin), the object, on import into the editor, will be visually oriented correctly but will show a 90-degree tilt along the x-axis in its transform. This happens regardless of the "convert up axis" setting in MAX's FBX exporter. The problem becomes apparent when you grab the global rotation from, say, a pivot and then apply it to an object imported from MAX--it will end up visually oriented incorrectly.

From an art standpoint, the quick "fix" for this is:

--Use MAX's "Affect Pivot Only" mode and, using transform type-in, enter a +90-degree x-axis rotation.
--Export the object.
--Undo the pivot rotation you just did (if you want to keep things "proper" within the MAX file).

Edit: I ALWAYS reset the XForm on an object before I export it. I don't know if the above method will work properly if you don't...

Now, when you open the editor and the object is converted to .mdl format, it will still be oriented properly, but it will no longer show a 90-degree tilt along the x-axis. This allows MAX-created objects to translate as you'd expect alongside engine-created pivots and geometry.
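To see why the pivot trick works, think of the exporter's Z-up-to-Y-up conversion as a 90-degree rotation about x baked into the file; a 90-degree pre-rotation in the opposite sense cancels it exactly. A quick C++ sketch (the signs here are illustrative--I haven't confirmed which direction MAX's exporter actually bakes in):

```cpp
#include <cassert>
#include <cmath>

// Minimal 3x3 rotation matrices, enough to show the cancellation.
struct Mat3 { double m[3][3]; };

// Rotation about the x-axis by `deg` degrees.
Mat3 rotX(double deg) {
    const double PI = 3.14159265358979323846;
    double r = deg * PI / 180.0, c = std::cos(r), s = std::sin(r);
    return {{{1, 0, 0}, {0, c, -s}, {0, s, c}}};
}

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                out.m[i][j] += a.m[i][k] * b.m[k][j];
    return out;
}

// Is `a` the identity matrix (within floating-point tolerance)?
bool isIdentity(const Mat3& a) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (std::fabs(a.m[i][j] - (i == j ? 1.0 : 0.0)) > 1e-9)
                return false;
    return true;
}
```

Composing the exporter's assumed -90-degree conversion with the +90-degree pivot pre-rotation, `mul(rotX(-90.0), rotX(90.0))`, gives the identity--no leftover tilt in the transform.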
  4. Just to interject here (and this may get a tad long as I ramble on), as I have experience with this (I was a lead environment artist/director by profession before moving full-time to Rogue System).

The "typical" workflow I've seen used to great effect (if you have the manpower) is to have level "teams". A team is made up of a level designer, an environment art lead, and a prop artist and/or texture artist. The level designer lays out the level and builds a very rough, but playable, 3D version of it. When that's done it's passed off to the lead artist, who begins to build out the environment (if you have a concept artist, they get it first, obviously), and the designer begins working on a new area. The team's lead artist assigns prop tasks to the prop artist, who sends each finished prop back when done. At ALL times the only person allowed to alter the level itself is the team's lead artist. Versions are submitted for the designer to check daily, and any changes are addressed immediately by the lead artist.

We even used this approach on rFactor 1 and 2 when it came to track creation. In that case the lead artist (me) doubled as the designer. Once the track ribbon was laid out and placeholder objects put down for buildings and such, I'd hand those props off. When a model was done it was passed to the texture artist for mapping and texturing.

Point is, in my experience, sub-teams with a clearly defined art lead and design lead are the way to go. Each is responsible for their section, they manage the assigned sub-tasks, and you only ever have one set of hands messing about in version control.

As for using prefabs--3D MAX has something called XRefs. These are other MAX scenes that are linked into a main scene; when an XRef is updated, it updates in the main scene. Hence, you can have multiple people working on the same project without stomping on each other's toes (as long as you don't have two people trying to work on the same XRef).

In this sense, the prefab idea should in theory work the same way. You have the "lead" artist and designer work on the main scene, and they pass out prop work to others via prefabs. The most important thing (and typically the hardest to achieve) is communication. The lead has to be able to effectively delegate and communicate with the prop artists, and the art and design leads NEED to communicate constantly. There is absolutely NO reason two people should stomp on each other's toes IF they have good communication. If you want to make a change, DON'T do it until you talk to the other person/people who also work in that area. As a lead artist, one of my biggest jobs was making sure everyone communicated effectively and that new hires fit within the team. I had no issue firing someone who disrupted that communication...
  5. PolyMesh is for non-moving meshes only (e.g. static level geometry). ConvexHull should work, but obviously you won't have any concave elements. What needs to happen (and I believe Josh said he would be re-implementing it) is support for convex decomposition. This takes a concave mesh, breaks it apart into multiple optimized convex elements, and then uses those pieces with Newton's ability to combine multiple shapes for one object. For now, I THINK, you'd have to break your column elements into separate objects and link them to the parent floor...
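To illustrate what decomposition buys you, here's a toy C++ sketch: a concave L-shaped footprint hand-split into two convex boxes that act as one compound collision shape (2D and axis-aligned to keep it short--a real decomposer works on arbitrary 3D meshes and the names here are mine, not Newton's API):

```cpp
#include <cassert>
#include <vector>

// Axis-aligned box: a trivially convex collision piece.
struct Box {
    double x0, y0, x1, y1;
    bool contains(double x, double y) const {
        return x >= x0 && x <= x1 && y >= y0 && y <= y1;
    }
};

// A concave L-shaped footprint (floor slab plus a column at one end),
// hand-decomposed into two convex boxes. A convex-decomposition pass
// would produce pieces like these automatically and hand them to the
// physics engine as one compound shape.
std::vector<Box> decomposeL() {
    return { {0, 0, 4, 1},   // floor slab
             {0, 1, 1, 4} }; // column rising from one end
}

// The compound shape "contains" a point if any convex piece does;
// the concave notch of the L is correctly left empty.
bool compoundContains(const std::vector<Box>& parts, double x, double y) {
    for (const Box& b : parts)
        if (b.contains(x, y)) return true;
    return false;
}
```

Querying the compound gives the concave result you want: points inside the column or the slab hit, while points in the notch of the L (say, 3.0, 3.0) miss--something a single convex hull couldn't do.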
  6. It's not the preferred method, no--it seems a waste of potential not to use stereoscopic rendering. It is a valid option, though, as it does save a few FPS, and stereoscopic rendering has been known to cause headaches and/or nausea for some people (but then, just USING the Rift can do the same thing, too). It'd be fantastic if we could get both views to render in one pass, yes
  7. You don't HAVE to render both eyes if you don't want stereo 3D--you could just render the scene and post FX to a texture and display that in two side-by-side viewports (I haven't tried this implementation myself yet, but I've heard it works). What I was planning to do with Rogue System was offer an option for "Simple" or "Advanced"--Simple being non-3D (which SHOULD be lighter on FPS) and Advanced being true 3D. Also, don't forget that the post effect you run for the Oculus to correct display distortion also trims away a lot of unneeded pixels that fall outside the view, so that helps FPS a bit, from what I understand... All that said, there are cool things you can do when rendering both eyes, such as having a specific UI for each eye (didn't the earlier Apache have a left-eye targeting display, or am I thinking of a different aircraft?)...
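For the non-stereo route, the viewport math is about as simple as it sounds--one rendered texture drawn into two side-by-side halves of the window. A minimal C++ sketch (the names are mine, not LE's API):

```cpp
#include <cassert>

// Viewport rectangle in pixels.
struct Viewport { int x, y, w, h; };

// Split a window into two side-by-side viewports. For the non-stereo
// ("Simple") mode described above, the same rendered scene texture
// would simply be drawn into both halves.
void splitForHmd(int winW, int winH, Viewport& left, Viewport& right) {
    left  = { 0,        0, winW / 2, winH };
    right = { winW / 2, 0, winW / 2, winH };
}
```

So for a 1280x800 Rift panel you'd get two 640x800 viewports, with the right one starting at x = 640; true stereo would render the scene twice (once per eye offset) into those same rectangles.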
  8. Try this (I've used it in the past): if you don't want to fog out the distance, create a disc shape for your water plane, made of a few concentric rings of polygons. Select the outermost vertices of the disc and set those vertices' alpha to zero (or, if your water plane needs a LOT of verts, or uses tessellation, you can use a texture channel that controls the alpha--map accordingly to create the fade). This transparency allows the water disc to fade gradually into the sky sphere. Obviously, the bottom half of the sky sphere should be painted to look like ocean so the water disc blends into something similar....

Note: if you're using a skybox, you'd use a square plane instead of a disc so it fits better. The special need here would be a shader that allows vertex alpha to control transparency, if one doesn't already exist. Also, generally speaking, fogging out the water plane before the edge can be seen is typically cheaper, but not by much. The water plane's transparency doesn't need to be sorted, since the only thing that draws "behind" it is the skybox (assuming you can control the render order).
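For what it's worth, here's a quick C++ sketch of building such a disc--concentric rings of flat vertices with alpha dropping to zero on the rim (the vertex layout and the hard 1-to-0 falloff are just one way to do it; a vertex-alpha shader would then read the alpha channel to blend the edge out):

```cpp
#include <cmath>
#include <vector>

// One water-plane vertex: position on the y=0 plane plus a vertex alpha
// for the edge fade.
struct WaterVert { float x, y, z, alpha; };

// Build a flat disc out of `rings` concentric rings of `segments`
// vertices each. Alpha is 1 on the inner rings and 0 on the outermost
// ring, so the disc fades into the sky sphere at its rim.
std::vector<WaterVert> buildWaterDisc(int rings, int segments, float radius) {
    const float PI = 3.14159265f;
    std::vector<WaterVert> verts;
    for (int r = 1; r <= rings; ++r) {
        float t = float(r) / float(rings);      // 0..1, center to edge
        float a = (r == rings) ? 0.0f : 1.0f;   // fully transparent rim
        for (int s = 0; s < segments; ++s) {
            float ang = 2.0f * PI * float(s) / float(segments);
            verts.push_back({ radius * t * std::cos(ang), 0.0f,
                              radius * t * std::sin(ang), a });
        }
    }
    return verts;
}
```

The triangulation between rings is omitted for brevity; the point is just that only the outermost ring's vertices carry alpha 0, so the fade costs nothing extra per frame.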
  9. What you would want here (I've seen it done before, and I believe it's what he's asking for) is to be able to set an elevation Z value. Then, when you paint, the terrain vertices under the brush would snap to that elevation (making stepped terraces down a mountainside, flattening the ground under a building, etc.). An extension to this would be to set the brush to World or Local mode. World would obviously snap to the world Z value you enter, while Local would alter the vertices +/- relative to their current value (useful for cutting trenches to a precise depth that follow the dips and rolls of the terrain, or flattening a roadbed). NOTE: I haven't used the editor yet, so I'm not sure of its current functionality--just offering my opinion based on other editors I've worked with. Feel free to tell me to mind my own business
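A toy C++ sketch of the brush behavior I mean, on a 1D strip of heights for brevity (the names and details are mine, not the editor's):

```cpp
#include <vector>

enum class BrushSpace { World, Local };

// Apply a flatten brush to a strip of terrain heights. In World mode,
// every vertex under the brush snaps to `value` (flat pads, terraces).
// In Local mode, `value` is an offset added to each vertex, preserving
// the terrain's dips and rolls (trenches of a precise depth, roadbeds).
void flatten(std::vector<float>& heights, int center, int radius,
             float value, BrushSpace space) {
    for (int i = center - radius; i <= center + radius; ++i) {
        if (i < 0 || i >= (int)heights.size()) continue;  // clip to terrain
        if (space == BrushSpace::World)
            heights[i] = value;       // snap to the entered elevation
        else
            heights[i] += value;      // offset relative to current height
    }
}
```

On a slope like {0, 1, 2, 3, 4}, a World brush at value 10 stamps a flat shelf into the middle, while a Local brush at -2 cuts a trench that still follows the slope.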
  10. Thanks for looking into this. Let me know if I can help in any way...
  11. This would be a welcome addition. I have a lot of large vessels that obviously move, but also have hangars that can be flown into. It'd be a LOT more convenient to represent these concave areas with one collision mesh and convex decomposition rather than building all the convex pieces "by hand". Thanks in advance for when you do put this in...
  12. Yeah, I've tested my SDL implementation with a Saitek X52 and a G25 wheel-and-pedal set--works great. Edit: what DudeAwesome said
  13. In preparation for switching to LE, I recently implemented SDL 2.0 in my game code for joystick/gamepad support. I know it's not native, but it's a cross-platform library, easy to implement, and free. I was reading data from my controllers in about 20 minutes. It's a C++ lib, so obviously this probably wouldn't help the Lua users too much. Maybe Josh could implement it directly in something like LE, assuming the license allows for it? You only activate the modules you want to use, so it's pretty low-overhead...
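For anyone curious, the core of an SDL 2.0 joystick read is only a handful of calls. This is a bare-bones C++ sketch of the standard SDL2 joystick API (not my actual Rogue System code); it needs SDL2 installed and linked (e.g. `-lSDL2`), and a controller attached, to print anything interesting:

```cpp
#include <SDL2/SDL.h>
#include <cstdio>

int main() {
    // Activate only the joystick module, keeping overhead low.
    if (SDL_Init(SDL_INIT_JOYSTICK) != 0) {
        std::fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }
    if (SDL_NumJoysticks() > 0) {
        SDL_Joystick* js = SDL_JoystickOpen(0);  // first attached device
        if (js) {
            std::printf("device: %s, axes: %d, buttons: %d\n",
                        SDL_JoystickName(js),
                        SDL_JoystickNumAxes(js),
                        SDL_JoystickNumButtons(js));
            SDL_JoystickUpdate();                  // poll current state
            Sint16 x = SDL_JoystickGetAxis(js, 0); // range -32768..32767
            std::printf("axis 0: %d\n", x);
            SDL_JoystickClose(js);
        }
    } else {
        std::printf("no joysticks attached\n");
    }
    SDL_Quit();
    return 0;
}
```

In a real game loop you'd poll `SDL_JoystickUpdate()` (or pump SDL events) once per frame and map the raw axis values into your control code.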
  14. Yeah, tessellation is great for adding this sort of detail. I was planning on using something similar for asteroids and such if you fly up close to them. We'll definitely have to talk
  15. Well, the Kickstarter was successful in that the prototype I made for it drew enough attention that Image Space Inc. realized I was serious enough about it to partially fund it. I had to streamline the design a bit for the initial version, but that's okay--the missing features will get added after the first release. The first version will be a modestly priced Alpha in VERY early 2015. All proceeds from that will go towards the development budget, so that will help. I know not everyone likes the idea of pay-to-test (it's not my favorite business model, to be sure), but I've had a lot of requests for it, too. The hope is it'll make enough that I can bring on a full-time artist to help with asset creation in the second half of the dev cycle. I'm already very impressed with the LW forums--the support from the devs, as well as the communication between members, is a nice change of pace
  16. Thanks so much for the kind words! Shader help would be most appreciated, as I already have a LOT to do--so, thanks for the offer. I'm 99% certain of switching to Leadwerks. I certainly can't use an engine that has no chance of being supported or updated. I'm pretty disappointed about the whole situation: up until now the dev has been very quick to respond on the forums, offer help, etc., but he hasn't made a post since mid-November. That is NOT a good sign. I spent a long time researching engines early on to try and avoid this type of thing, but what are you gonna do, right? Anyway, LW has a VERY similar framework, and I've spent the past several evenings studying the docs and watching the tuts--very straightforward stuff, so the switch shouldn't be too bad. Plus, my current engine runs through a wrapper I wrote, so in general that's the only code I'll have to update. Now, if I can just get them to release 3.1 so I can get started <jk> Anyway, thanks again!
  17. Hi all--my first post here. I'm currently working on a hardcore space sim (think Falcon 4 in space) called Rogue System. The engine I was using seems to have fallen out of development, which is what brings me to Leadwerks. ANYWAY, most of my shader work has been in .fx format, so while hunting around for various GLSL information I stumbled on this (it may be old news here, but a search didn't reveal anything): http://forum.nvidia-arc.com/showthread.php?12284-mental-mill-1-2-release-and-download Apparently, while it's no longer supported, nVidia has released mental mill, which at one time was included with MAX. I don't THINK it requires a MAX license anymore (someone may want to check that). Hopefully not... If you're unfamiliar with MM, it's a node-based WYSIWYG shader editor that can export, among other formats, GLSL. As I don't actually have LW yet (I'm waiting on 3.1 to release), I'm not sure the code will work directly. If not, maybe it will give a good base that can be finished off in LW's editor? Anyway, I thought this might be useful for some, so I thought I'd post. Michael