render to texture and « vectorized »
textures with stage3D – Agal

Moussa Dembélé – www.mousman.com – mousman@hotmail.com

download projection project

 

While I was on my way to learning how to make shadows (and it’s a big topic),
I came across the render-to-texture method.
As I was digging into this technique,
I decided to delay my journey into shadowland and spend some time exploring
rendered textures and texture projections, because they are the first step of
some great techniques such as light mapping, dynamic displacement maps and, of course, shadows.

So I started to play with render to texture and decided to experiment with this method to do what frameworks like Starling
do, but with 3D models instead of bitmaps.
Why?
Because I could use the projected model a little bit like we use BitmapData:
render it once to a texture and then project it onto as many other models as I want,
plus other things like mirrors and so on.
And if we zoom into the projected model the same way we zoom into the model on which the texture is projected,
it will work as a vectorized texture!

Easier said than done.
It took me time and several headaches to achieve it.
But… here it is:

Drag to rotate and use the mouse wheel to zoom.

 

(Interactive Flash demo embedded here.)

So how do we do this?
First, to get a better understanding of what’s going on, I recommend reading this article by Marco Scabia:
perspective projection

The principle :

The major magic trick, and I’m kind of proud to have found it (let me know if I shouldn’t be),
is to use the matrix of the model on which the texture is projected (let’s call it the plan) to dynamically compute the UV mapping.
Let me explain:
The x, y, z coordinates of the plan are calculated with a model-view-projection matrix in the shader.
Once divided by w, these coordinates lie in clip space, that is to say between -1 and 1 for x and y,
and between 0 and 1 for z.
As we’re looking for u, v values, we only need x and y.
What we have to do is add 1 to x and y and divide by 2
(the y coordinate has to be negated first, since v grows downward while clip-space y grows upward).
Now we have coordinates between 0 and 1, exactly what we want :-)
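
The mapping above can be sketched in a few lines (Python here for readability; in the article it is done in the AGAL vertex shader):

```python
# Illustrative sketch of the clip-space-to-UV mapping described above
# (plain Python, not the AGAL shader).

def clip_to_uv(x, y, w):
    """Map clip-space x, y (before the perspective divide) to [0, 1] UVs."""
    nx = x / w          # perspective divide: now in [-1, 1]
    ny = -(y / w)       # negate y: texture v grows downward
    u = (nx + 1) / 2    # shift and scale to [0, 1]
    v = (ny + 1) / 2
    return u, v

# A point at the center of the screen maps to the center of the texture:
print(clip_to_uv(0.0, 0.0, 1.0))   # (0.5, 0.5)
```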

Hmm… well… okay, we have some coordinates that could be a UV mapping, but what are these coordinates exactly?
They are the x and y values of the plan projected onto the screen, if the screen had a width and a height of 1.

Second trick:
If we can put the projected model at the same place and orientation as the plan,
the u, v mapping will fit exactly! So simple, so beautiful… but this part gave me some headaches to figure out (simple doesn’t always mean easy).
Well… actually it will not fit exactly, at least not the way we want,
because the z value of the plan is not necessarily the one we want for the model.

So, next trick:
First, we use two cameras,
one for the plan and one for the model.
We place the model where we want it in space.
In order for the model to be drawn on the plan exactly as it appears in the scene,
we must place the plan at a precise position:
the position where it fits the screen exactly. Let’s call it the zoom frontier.
We can calculate this position (we’ll see how later).
Now that we know this z position, we can move the plan wherever we want, calculate the ratio of this move with respect to the zoom frontier,
and apply this ratio to the model.
In my example I only move the z position of the plan.
The same has to be done with x and y if we want to change them too.
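
The ratio idea can be sketched as follows (a hedged Python illustration; the function name and the sample distances are mine, not the article's AS3 code, which appears in the render loop below):

```python
# Sketch of the zoom-ratio trick: when the plan moves relative to the
# zoom frontier, the model's camera distance is scaled by the same ratio.

def model_camera_z(plan_z, plan_camera_z, zoom_frontier, original_model_camera_z):
    delta_z = plan_z - plan_camera_z        # plan-to-camera distance
    zoom_ratio = zoom_frontier / delta_z    # exactly 1.0 at the frontier
    return original_model_camera_z / zoom_ratio

# At the frontier the ratio is 1, so the model camera keeps its distance:
print(model_camera_z(0.0, -500.0, 500.0, -300.0))   # -300.0
```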

Last trick:
When rotating the model along with the plan, a problem remains:
the model doesn’t behave like a plane, it’s a 3D model.
So what we have to do is flatten it once it is rotated.
After that we can render it to a texture and it will work just fine :-)
Well… almost, because if we flatten it too much we lose the z-sorting.
So we have to flatten it just enough for it to act like a plane while keeping the z-sorting.
Not perfect, but it works.
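
The flattening compromise can be shown with a toy sketch (Python for illustration; the flattening factor here is an arbitrary assumption, while in the shader it is the single constant uploaded in vc12.x):

```python
# Dividing z by a large factor makes the model nearly planar, yet the
# relative order of the z values (the z-sorting) is preserved.

FLATTEN = 1000.0  # illustrative constant, plays the role of vc12.x

def flatten(vertices):
    return [(x, y, z / FLATTEN) for (x, y, z) in vertices]

verts = [(0, 0, 30.0), (1, 0, -20.0), (0, 1, 5.0)]
flat = flatten(verts)

# depth order (front to back) is unchanged by the division
order_before = sorted(range(3), key=lambda i: verts[i][2])
order_after = sorted(range(3), key=lambda i: flat[i][2])
print(order_before == order_after)   # True
```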

And … that’s it  !
Cooool.

Now the details with some code explanation.

Calculate the « zoom frontier » :

In order to do that we need the size of the plan.
For simplicity, in this article I just define it in the computeZoomFrontier() method (in World.as):
var deltax:Number = 178;
var deltay:Number = 178;

I then create a Vector3D with the width and height of the plan as x and y,
and apply the perspective matrix to this vector.
If you have read Marco Scabia’s article, you know that applying the perspective matrix
results in an xP’ value equal to K1*xW and a yP’ value equal to K2*yW, where
« K1 and K2 are constants that are derived from geometrical factors such as the aspect ratio of your projection plane (your viewport) and the “field of view” of your eye, which takes into account the degree of wide-angle vision »,
and xW and yW are the coordinates of a point in the world:

xP’ = K1 * xW
yP’ = K2 * yW

These values, once divided by zW, fit in clip space (between -1 and 1):
xP = xP’ / zW
yP = yP’ / zW

Since our vector holds the full width and height of the plan, we are looking for the zW at which xP and yP equal 2 (the full span of clip space, from -1 to 1):
2 = xP’ / zW
2 = yP’ / zW

We can now get the z value:
zWx = xP’ / 2
zWy = yP’ / 2

That’s it:
we apply the transformVector() method of the perspective matrix to our vector,
divide the resulting x and y values by two,
take the max of these two values, and we have our zoom frontier.

The function in World.as :

private function computeZoomFrontier():void 
{
	var meshPlan:IMMesh = _meshes[1];
	var zx:Number;
	var zy:Number;

	var matrix:Matrix3D = new Matrix3D();
	matrix.append(_projectionMatrix);

	var minX:Number = _lowestHighestPlan.lowestX;
	var maxX:Number = _lowestHighestPlan.highestX;
	var minY:Number = _lowestHighestPlan.lowestY;
	var maxY:Number = _lowestHighestPlan.highestY;

	var deltax:Number = maxX - minX;
	var deltay:Number = maxY - minY;

	var vec1:Vector3D = new Vector3D(deltax, deltay);
	vec1 = matrix.transformVector(vec1);

	zx = vec1.x / 2;
	zy = vec1.y / 2;

	_zZoomFrontier = Math.max(zx , zy);
}

Flatten the model:
This happens in the shader (MPhongFlatShader.as).
It’s a classical Phong shader with some modifications.

We can’t apply the model-view-projection matrix directly to the model.
We need to first apply the model matrix (line 72),
then flatten the model (line 73),
and then apply the view-projection matrix (line 75).

"mov vt0        ,va0                \n" + // x,y,z coordinates
"m44 vt0        ,vt0     ,vc4       \n" + // apply model matrix
"div vt0.z      ,vt0.z   ,vc12.x    \n" + // flatten the model
"m44 vt0        ,vt0     ,vc13      \n" + // apply plan rotation matrix
"m44 vt0        ,vt0     ,vc0       \n" + // apply view-projection matrix
"mov op         ,vt0                \n" + //

What about line 74?
Because I want the model to have the same orientation as the plan,
I also pass a matrix representing the plan’s rotation to MPhongFlatShader and apply it to the model.

Now let’s take a look at the render loop (in World.as).

Lines 352 to 357 calculate the model position according to the zoom of the plan:

var deltaZ:Number = meshPlan.modelMatrix.position.z - meshPlan.cameraMatrix.position.z;
zoomRatio = _zZoomFrontier / deltaZ;
var cx:Number = meshModel.cameraMatrix.position.x;
var cy:Number = meshModel.cameraMatrix.position.y;
var cz:Number = _originalModelCameraZ / zoomRatio;
meshModel.cameraMatrix.position = new Vector3D(cx,cy,cz);

In line 352 I calculate the distance between the plan and its camera,
then in line 353 I calculate the ratio that I’ll use in line 356 to set the model position.

Lines 366 and 367 are just a constant rotation I apply to the model:

var axis:Vector3D = new Vector3D(0, 1, 0);
modelMatrix.appendRotation(2, axis, modelMatrix.position);

From lines 370 to 374 I extract a rotation matrix from the plan and give it to the model.
(The rotation matrix instance is passed to the mesh in line 185: mesh.texMatrix = _textMatrix;)

var quat:Vector3D = meshPlan.modelMatrix.decompose(Orientation3D.QUATERNION)[1];
_textMatrix.identity();
_vecsTextMatrix = _textMatrix.decompose(Orientation3D.QUATERNION);
_vecsTextMatrix[1] = quat;
_textMatrix.recompose(_vecsTextMatrix, Orientation3D.QUATERNION);

From lines 376 to 381 I compute the model-view-projection matrix of the model,
which is in fact just a view-projection matrix (remember, we can’t apply the model-view-projection matrix directly to the model):

From lines 386 to 396 I draw the model and the plan
(the model into a render-to-texture target, the plan into the back buffer):

_context.setRenderToTexture(_textureMap,true);
_context.clear(0.9, 0.9, 0.9, 1);			 
meshModel.drawTriangles();

// draw plan	
 //set render target to backbuffer
_context.setRenderToBackBuffer();
_context.clear(_bgColor.red, _bgColor.green, _bgColor.blue, 1);					
meshPlan.drawTriangles();

_context.present();

Complete render loop :

public function render(e:Event ):void 
{
	if ( !_context ) return;

	var rot:Number;
	var rx:Number;
	var ry:Number;
	var rz:Number;
	var meshModel:IMMesh = _meshes[0];
	var meshPlan:PlanMesh = _meshes[1] as PlanMesh;
	var modelMatrix:Matrix3D;
	var planRation:Number;
	var zoomRatio:Number;
	var modelPosition:Vector3D;
	var matrix:Matrix3D;

	///////////////////
	// plan model
	///////////////////
	// move plan
	if (!_mouseDown) {
		modelMatrix = meshPlan.modelMatrix;				
		rx = (t / 4 * 0.35) % 360;
		ry = (t / 4 * 0.35) % 360;
		rz = (t / 4 * 0.35) % 360;
		 //moving plan
		modelMatrix.appendRotation(rx, Vector3D.X_AXIS,modelMatrix.position);
		modelMatrix.appendRotation(ry, Vector3D.Y_AXIS,modelMatrix.position);
		modelMatrix.appendRotation(rz, Vector3D.Z_AXIS,modelMatrix.position);

	}
	else {
		// moving plan
		_arcBall.update();
	}

	 //compute the modelViewProjection matrix
	meshPlan.modelViewProjection = computeMVMatrix(meshPlan);

	// determine zoom ratio
	var deltaZ:Number = meshPlan.modelMatrix.position.z - meshPlan.cameraMatrix.position.z;
	zoomRatio = _zZoomFrontier / deltaZ;
	var cx:Number = meshModel.cameraMatrix.position.x;
	var cy:Number = meshModel.cameraMatrix.position.y;
	var cz:Number = _originalModelCameraZ / zoomRatio;
	meshModel.cameraMatrix.position = new Vector3D(cx,cy,cz);

	///////////////////
	// textured model
	///////////////////
	 modelMatrix = meshModel.modelMatrix;

	// rotating the model
	var axis:Vector3D = new Vector3D(0, 1, 0);
	modelMatrix.appendRotation(2, axis, modelMatrix.position);

	// apply plan rotation to model
	var quat:Vector3D = meshPlan.modelMatrix.decompose(Orientation3D.QUATERNION)[1];
	_textMatrix.identity();
	_vecsTextMatrix = _textMatrix.decompose(Orientation3D.QUATERNION);
	_vecsTextMatrix[1] = quat;
	_textMatrix.recompose(_vecsTextMatrix, Orientation3D.QUATERNION);

	matrix = new Matrix3D();
	var viewMatrix:Matrix3D = meshModel.cameraMatrix.clone();
	viewMatrix.invert();
	matrix.append(viewMatrix);
	matrix.append(_projectionMatrix);				
	meshModel.modelViewProjection = matrix;

	// draw model
	  //set render target to texture
	_context.setRenderToTexture(_textureMap,true);
	_context.clear(0.9, 0.9, 0.9, 1);			 
	meshModel.drawTriangles();

	// draw plan	
	 //set render target to backbuffer
	_context.setRenderToBackBuffer();
	_context.clear(_bgColor.red, _bgColor.green, _bgColor.blue, 1);					
	meshPlan.drawTriangles();

	_context.present();			

	// stats
	fpsTicks++;
	var now:uint = getTimer();
	var delta:uint = now - fpsLast;
	// only update the display once a second
	if (delta >= 1000){
		var fps:int = (0.5 + fpsTicks / delta * 1000);
		fpsTf.text = fps + "/60 fps";
		fpsTicks = 0;
		fpsLast = now;
	}
	addChild(fpsTf);
}

Done !!
Hourrraaa !

Any questions, any comments, don’t hesitate !

4 comments to render to texture and « vectorized » textures with stage3D – Agal

  • Jazzcat

    Beautiful. Amazing. Brilliant! :)

  • Julio

    Hi, first of all, awesome work. All the projects you have on your page are very good. Now, I wanted to ask you for advice. I am doing a rig in Stage3D; the rig part is going fine, but I have the problem that, regardless of its position on the Z axis, whatever gets rendered last is always on top when I upload to the op register in my vertex shader. I am asking you here because you mention something about z-sorting. I have never found any specific information about this, which makes me think it is simple to solve. I have done some sort of sorting in AS3 prior to uploading the vertex buffer to the GPU, but I’m afraid that this will cause performance issues as the number of vertices grows. I would love to do this z-sorting on the GPU via a shader or something, and only with one drawTriangles function call. I hope you can help me on this, thanks.

  • Julio

    I realized my mistake: I had depth and stencil set to false in Context3D’s configureBackBuffer. There was no depth buffer enabled.
