
Using Blender (open source modeling program) for content creation.

Started by eggman121, December 29, 2016, 06:01:10 PM



null45

Quote from: eggman121 on March 05, 2017, 05:22:45 PM
I wonder if a tool like GoFSH could be adapted to take the renders of the model and apply the required FSHs in the BAT?  :-\

The FSH conversion in BAT4Max 4.5 and later uses a command line tool that I wrote to replace FSHTool.
The DXT compression algorithm used by FSHTool produced a large number of compression artifacts when used by the DarkNite truNite export process in BAT4Max.
SimFox posted more on that issue in the BAT4Max 4.5 thread.

Handyman

I have looked through the links provided by tigerbuilder, and a few more. At this point, I don't think this will serve our purpose, as it all seems to be aimed at using a Python API to wrap MAXScripts for use with Max. Even though Blender runs mostly on Python, we would still need an environment that can support the MAXScripts. I did a number of searches with this in mind and came up empty, so far. :'(

mgb204

I think perhaps there is some misunderstanding regarding my previous posts here.

In short, the only thing we need to get Blender to do is render a model into BMP files, producing each texture with the correct camera angle for each zoom/rotation needed.

The export process would be identical to 3DS Max + BAT4MAX, i.e.:


  • Create model in Blender
  • Create LODs (Manually) using Blender
  • Export LODs from Blender as 3DS files
  • Import 3DS LODs into SC4BAT
  • Export (Render) LODs using SC4BAT, creating an SC4Model file with U/V Mapping and blank textures
  • Render model in Blender, exporting the 20 or more valid BMP files
  • From here on in, the exact same command-line tools can take over to correctly ID/package the textures into the SC4Model with the LODs

So in short, the only part we need to concern ourselves with is getting the correct textures from Blender; a rough sketch of that step follows below. Everything else required for this to work already exists and can be reused. UV mapping is handled by SC4BAT, because it is applied based on the LODs, which will be 100% accurate since they originate from the model in Blender.
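For illustration, here is a minimal sketch of that Blender-side render step, assuming Blender 2.80's Python API. The elevation, base azimuth, resolution and orthographic scale are placeholder assumptions to be matched against the real SC4BAT rig, not verified BAT numbers.

Code:
import math
import bpy

# Orthographic camera orbiting a LOD-centre target, one BMP per rotation.
# All numeric values below are placeholders.
RANGE_M = 190.0                 # camera distance discussed later in the thread
LOOK_AT = (0.0, 0.0, 8.0)       # centre of a 16x16x16 LOD (assumption)
ELEVATION = math.radians(45.0)  # placeholder elevation for this zoom

scene = bpy.context.scene
scene.render.image_settings.file_format = 'BMP'
scene.render.resolution_x = 256  # assumed texture size
scene.render.resolution_y = 256

# Empty target the camera will always point at.
target = bpy.data.objects.new("SC4Target", None)
target.location = LOOK_AT
scene.collection.objects.link(target)

cam_data = bpy.data.cameras.new("SC4Cam")
cam_data.type = 'ORTHO'
cam_data.ortho_scale = 26.1      # eyeballed zoom-5 value (see later posts)
cam = bpy.data.objects.new("SC4Cam", cam_data)
scene.collection.objects.link(cam)
scene.camera = cam

track = cam.constraints.new('TRACK_TO')
track.target = target
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'

for rotation in range(4):
    az = math.radians(45.0 + 90.0 * rotation)  # assumed base azimuth
    cam.location = (
        LOOK_AT[0] + RANGE_M * math.cos(ELEVATION) * math.sin(az),
        LOOK_AT[1] - RANGE_M * math.cos(ELEVATION) * math.cos(az),
        LOOK_AT[2] + RANGE_M * math.sin(ELEVATION),
    )
    scene.render.filepath = f"//z5_r{rotation}.bmp"
    bpy.ops.render.render(write_still=True)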

I hope that clarifies things.

tomvsotis

Has anyone managed to make any more progress w this? I've picked up modeling again recently, in Blender, and have come up against the same wall, i.e. that it seems nigh on impossible to import a Blender model into Gmax. It's driving me insane.

eggman121

Quote from: tomvsotis on November 16, 2017, 08:22:02 PM
Has anyone managed to make any more progress w this? I've picked up modeling again recently, in Blender, and have come up against the same wall, i.e. that it seems nigh on impossible to import a Blender model into Gmax. It's driving me insane.

You can save a model as .3ds and import from there.

I am pondering whether I should make public the method of making true 3D models with FSHs attached, like transit pieces. I have been able to get models out through the Quake .md3 method to make true 3D models for the NAM, and thus ditch 3ds Max (with its excessive power and prohibitive license price).

Theoretically you could have true 3D for some parts and combine them with a normal BAT from Gmax. This could be done with Cogeo's Model Tweaker, which combines different types of models into one. That is one tool I am truly grateful for.

That way you could assign FSHs to the model parts that can be modified and pasted onto a BAT to give the illusion of different texture types.

Thoughts?

-eggman121

tomvsotis

Yeah, to clarify, the problem I'm having isn't importing the model; it's successfully applying the textures I baked in Blender. Has anyone managed to do this? I baked a texture map, but I can't seem to get Gmax to import the UV map from Blender with the .3ds file, and even if I do manage to get the map into Gmax (via exporting from Blender as an .obj file), the baked textures don't seem to follow the map and thus show up on the model as a giant mess. Ugh.

And eggman, your method sounds fascinating, but it's well beyond my experience at this point!

????? .

Blender to Gmax (importer for Gmax).
UV map OK.
I am using Google Translate.

eggman121

Thanks for the reply!

This will be very useful for getting models out of Gmax and into Ilive's Reader tool.

Many thanks to you  ;)

I use the Quake route for making models, but this is equally valid.

-eggman121

tomvsotis

Oh, AWESOME. I will look forward to playing with this!

vortext

Played around a bit over the weekend with the 'new' Blender (thanks tomvsotis for mentioning that  :thumbsup:) and first I must say the UI definitely is an improvement.

At any rate, with the help of rivit's article and some digging around in the game files to find the sun angle, I managed to recreate the game perspective & lighting somewhat decently, I reckon (for the closest zoom anyway, and no foreshortening applied yet). There are some poorly documented settings for the orthographic camera though, so I eyeballed a few things.
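For reference, a sun lamp along those lines can be scripted too; here is a sketch assuming the 2.80 API, with both angles as placeholders (the thread only establishes that the sun sits 90° to the left, with a zoom-dependent elevation):

Code:
import math
import bpy

# Sun lamps are directional: only the rotation matters, not the location.
sun_data = bpy.data.lights.new("SC4Sun", type='SUN')
sun = bpy.data.objects.new("SC4Sun", sun_data)
bpy.context.scene.collection.objects.link(sun)

# Placeholder euler mapping; tune until shadows fall horizontally as in-game.
elevation = math.radians(45.0)  # assumed; varies per zoom
azimuth = math.radians(90.0)    # "90 from the left" per rivit's later post
sun.rotation_euler = (elevation, 0.0, azimuth)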

Here's the viewport perspective (with Eevee render) showing a 16x16x16 cube with a 'roof' on top:


And here's the actual render (using the Cycles renderer):


Also got a bare-bones addon script working, because as it turns out enabling/disabling addons is the fastest way to reload Python scripts in Blender. Lots of things to tackle still, but it definitely seems doable.
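For anyone following along, such a bare-bones add-on is essentially just a registration wrapper. A minimal sketch (Blender 2.80; all names are placeholders):

Code:
bl_info = {
    "name": "SC4 Render Rig (sketch)",
    "blender": (2, 80, 0),
    "category": "Object",
}

import bpy

class SC4_OT_setup_rig(bpy.types.Operator):
    """Add the SC4 orthographic camera and sun to the scene"""
    bl_idname = "object.sc4_setup_rig"
    bl_label = "Set Up SC4 Render Rig"

    def execute(self, context):
        # Camera/light creation would go here (see the earlier sketches).
        self.report({'INFO'}, "SC4 rig created")
        return {'FINISHED'}

def register():
    bpy.utils.register_class(SC4_OT_setup_rig)

def unregister():
    bpy.utils.unregister_class(SC4_OT_setup_rig)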

The major question on my mind atm is how gmax/Max handles the camera alignment for rendering. That is, if you look at an FSH file it always shows the rendered model aligned to the left. I'm guessing camera alignment is somehow related to LOD size, but howww?  :D

mgb204

Hats off to you for getting this far. Does this work for all 4 rotations in zoom 5?

Re: camera alignment, I guess the first question is: does the exported texture map correctly to the LOD exported by SC4BAT? Because if it does, then whilst it may not be completely optimised, at least it's a working render.

fantozzi

The first BAT rendered with Cycles would be like stepping on the moon, I guess.
 

vortext

Quote from: mgb204 on January 22, 2019, 08:05:44 AM
Does this work for all 4 rotations in zoom 5?

No, atm it just adds a camera & light to the scene, that's it.

Quote from: mgb204 on January 22, 2019, 08:05:44 AM
Re: camera alignment, I guess the first question is: does the exported texture map correctly to the LOD exported by SC4BAT? Because if it does, then whilst it may not be completely optimised, at least it's a working render.

Hm yeah, about that. I looked for the 3DS export but couldn't find it anywhere, and a quick search seems to indicate the 3DS exporter has not been ported (yet?) to version 2.8. Guess it's back to 2.79 for the time being, or importing the LOD into gmax in another format, e.g. the .obj route mentioned below.  :-\

And perhaps I should've been a bit clearer: currently no view rendered with Blender has been brought in-game, nor has a 'LOD' been exported.

eggman121

Great work here vortext (Erik) :thumbsup:

This is a really solid breakthrough!

I really appreciate you making this happen and look forward to what we can do with Blender.

+1 for your effort

-eggman121 (Stephen)

vortext

Thanks Stephen!

So, as a rudimentary answer to the question:

Quote from: mgb204 on January 22, 2019, 08:05:44 AM
does the exported texture map correctly to the LOD exported by SC4BAT?

A very quick trial indicates 'no', along with a host of additional issues. Then again, I opted to give the .obj route a go, and who's to say that's desirable to begin with.

At any rate, I exported a 'unit' cube as .obj from Blender, then imported it into gmax using this script. No scaling was required, but I had to change the axes on export from Blender (see the export sketch below). Next I saved the rendered image from Blender as BMP, and used GoFSH to import it into a .dat. Finally I replaced the appropriate texture in the SC4Model.
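For reference, the axis change can be baked into the export call itself; a sketch using the OBJ exporter bundled with 2.79/2.80, where the axis mapping is an assumption that may need flipping per model:

Code:
import bpy

# Hypothetical export call; the axes below assume a Y-up target like gmax
# (Blender itself is Z-up).
bpy.ops.export_scene.obj(
    filepath="/tmp/unit_cube.obj",
    axis_forward='-Z',   # assumed forward axis for the gmax importer
    axis_up='Y',         # assumed up axis
    use_selection=True,  # export only the selected LOD object
)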

sc4model export from gmax 


rendered texture applied


Note the shadow is also not correct, probably due to lacking the proper alpha (though the exported BMP was ARGB, it appeared to be lacking alpha in GoFSH as well; perhaps it would need to be composited in Blender?)
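One hedged guess for the missing alpha: render over a transparent film with an alpha-capable output format. In the 2.80 API that looks roughly like:

Code:
import bpy

scene = bpy.context.scene
# Transparent background, so the alpha channel carries the model
# silhouette instead of the world colour.
scene.render.film_transparent = True
scene.render.image_settings.color_mode = 'RGBA'
# BMP output may not preserve alpha; PNG does (which also matches the
# advice further down the thread).
scene.render.image_settings.file_format = 'PNG'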

Here's a better indication of what's going on with regard to the UV.


All things considered it looks like the camera needs to be pulled back a bit, or would it solely be due to no foreshortening being applied yet? In the article Ron also mentions the camera having a 'range' of 190 meters in the default rig. However, I don't quite understand what that means; is range the distance from the location of the camera to the origin of the scene?

rivit

@vortext,
   that's already a great advance on where we were yesterday, so to speak.

I interpret "a 'range' of 190 meter" as the camera distance from the object, everything else is done by the zoom i.e. changing the field of view by making the render wider/narrower in the given ratios. Alternatively the Range*146/Zoom Ratio will do the same ie 146,73,32,16,8  for the Zoom5 down to 1. As you've deduced SUN is always 90 from the left so that shadows fall horizontally. Elevation varies with Zoom. Foreshortening is entirely handled in the projection parameters - ie is intrinsic to them. so you don't need to do anything for that. Just make sure you're not rendering in normal 3d perspective.

As for alignment, I remember that gmax has a variable called viewportslop, which is the left margin of the render, and I'm pretty sure it's aligned by finding the leftmost projection point from the model and the view (i.e. maths), but there may be intrinsic functions in Blender to help deduce this.
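Blender does expose such a function: bpy_extras.object_utils.world_to_camera_view projects a world-space point into the camera frame. A sketch of the leftmost-point idea (the object name is a placeholder):

Code:
import bpy
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = scene.camera
obj = bpy.data.objects["LOD"]  # hypothetical LOD object name

# Project every vertex into the normalised camera frame
# (x = 0.0 is the left edge of the render, x = 1.0 the right edge).
xs = [
    world_to_camera_view(scene, cam, obj.matrix_world @ v.co).x
    for v in obj.data.vertices
]
print("leftmost projected x:", min(xs))

# Adjusting the camera's shift_x by this amount should align the model
# to the left edge, analogous to the gmax behaviour described above.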

If you have the original render BMP it might pay to check that there is indeed alpha in it by looking in Photoshop. Windows doesn't play nicely with alpha, especially when exported from a program relying on Windows libraries; hence it may once have had alpha that has now become all ones. If you can export as PNG, that will prove it. That will also load into GoFSH correctly.

tomvsotis

Btw, I've done all my modeling up to this point in Blender 2.79, because it's what I'm familiar with, but I'm now playing around with 2.80, and it's SUPER impressive. If you guys aren't familiar with Blender, this is probably a good time to investigate it: the UI is way better than it used to be, and it's also got a fancy new real-time rendering engine (Eevee), which is interchangeable with the more photorealistic built-in Cycles renderer. It's really pretty great, especially because it's free and open source.

vortext

Quote from: rivit on January 22, 2019, 02:19:18 PM
Foreshortening is entirely handled in the projection parameters, i.e. it is intrinsic to them, so you don't need to do anything for that.

Thanks for clarifying that Ron; indeed, I was under the assumption foreshortening had to be applied, but reading up on orthographic projection it makes a lot of sense that it's a result of the projection itself.

So since the dreaded 'm' word has come up: it seems I've managed to calculate the camera position for all zooms using a fixed range of 190 meters. In each render the camera is looking at the center of mass of the LOD (0,0,8) and uses the same orthographic scale and no offsets (more on that in a bit). On a side note, in gmax I noticed the camera looks at a position slightly above the LOD; it would be interesting to know what the look-at point is in 3ds Max.




Now onto what seems crucial to get right: the orthographic scale of the camera and, closely related, the x & y camera offsets. Blender describes the scale as being 'similar to zoom', and online I found the following explanation;
Quote: The Orthographic Scale factor represents the maximum dimension (in scene units) of the portion of space captured from the camera.
Or to put that yet another way: when the camera is looking straight down at the ground, the dimensions of the surface area captured by the camera correspond to the scaling factor (and this holds true irrespective of camera height).

So if, for example, an orthographic scale of 32 is used (as I had done initially for zoom 5, since it seemed to make some intuitive sense) and the camera looks down, the area captured by the camera measures 32 x 32 meters. However, by trial-and-error I arrived at the following settings for the zoom 5 render. Also note the offsets to manoeuvre the model to the top-left corner. The offsets use a 0..1 range whereby an offset of 1 moves the camera by the scale value along the axis, so I reckon it should be viable to use them as viewport slop values (see the sketch below).
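Those offsets map onto the camera data's shift values in the Python API; a sketch with placeholder numbers (sign conventions left to experiment):

Code:
import bpy

cam_data = bpy.data.cameras["SC4Cam"]  # hypothetical camera from earlier
cam_data.type = 'ORTHO'
cam_data.ortho_scale = 26.1  # eyeballed zoom-5 value from this post
# shift_x / shift_y are fractions of the camera frame: a shift of 1.0
# moves the view by one full frame width/height, matching the 0..1
# offset behaviour described above.
cam_data.shift_x = 0.25      # placeholder
cam_data.shift_y = -0.25     # placeholder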



And a side-by-side comparison.



Also made an interesting find playing with some of the numbers in Ron's article. Here are the relevant numbers for zoom 5:




Level  Zoom x  Rel Range  Equiv Range m  Pixel in m
Z5     146     1          26.1           0.102

The finding is that if an orthographic scale of 26.1 is used, i.e. the Equivalent Range in meters, the 16x16x16 model fits snugly into the camera viewport height-wise (provided no offsets are used, and the camera is looking at the center of mass). I'm intrigued that the corresponding Relative Range of 1 seems to indicate some 'true' scale, echoed by the model fitting the camera exactly height-wise.





So yeah, all things considered the orthographic scale seems to be quite crucial indeed, and I strongly suspect it can be calculated from the known camera positions and render dimensions, but alas, my maths game is not that strong.
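For what it's worth, one plausible link between the numbers above, assuming the closest zoom renders to a 256 px texture (an assumption, not stated in the thread):

Code:
# If zoom 5 resolves 0.102 m per pixel (per the table above) and the
# render target is 256 px tall (assumed), the scale falls out directly:
meters_per_pixel = 0.102
render_px = 256
print(meters_per_pixel * render_px)  # 26.112, ~ the 26.1 'Equivalent Range'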

Barroco Hispano

@vortext



Z5: 8 m, Z4: 16 m, Z3: 32 m, Z2: 73 m, Z1: 292 m (Z axis)