Scott Pilgrim is AWESOME!

I’ve started reading the Scott Pilgrim comics recently, and I watched the movie about half a year ago and loved it :) I also bought Scott Pilgrim VS The World: The Game on PSN. The gameplay is okay, and the soundtrack is excellent. I love the BGM for the opening stage. I’d say I’ve instantly become a big Scott Pilgrim fan!

Currently I’m reading the third volume (there are six). To prepare for my upcoming three-day family trip, I found a nice little tool called Mangle, which optimizes comic pictures for the Amazon Kindle. I used it to pack the rest of the unread volumes onto my Kindle DX Graphite, so that I have something to read during the ride.

Out of love for the two main girls in the comic, Ramona Flowers and Knives Chau, I couldn’t resist drawing them in bunny form. So here they are (click on the thumbnails for the full view):

When I was in the Marines, I spent lots of time doodling with my pencil, mostly character drawings. I also got down to practicing view perspectives of characters that I hadn’t tried before, kind of like a self-challenge. The result is that I’m now much more willing to draw with foreshortened views. Sweet 🙂

Posted in Uncategorized | Leave a comment

Stardust LEET Day Demo

Genie Capital held a LEET Day contest on May 29, 2011. The contest was all about demonstrating “cool” personal projects; mine was Stardust. Everyone was given 15-20 minutes to show their project, and afterwards each contestant voted for 3 other contestants. The one with the most votes won an iPad 2, and that was me 🙂
(Actually, another contestant got the same number of votes as me, and we were both given an iPad 2.)

I didn’t intend to keep the iPad 2 in the first place. Hell, I’m a Flash developer! I gave it to my girlfriend, which was exactly what I had had in mind before the contest.

And this is the recording of the demonstration. Enjoy 🙂
(Genie Capital was busy with their Gulu project, hence the late upload.)

Posted in Stardust | 2 Comments

Packing My Stuff

For those who don’t know: I’m going to study at DigiPen Institute of Technology in the Bachelor of Science in Game Design program. Yes, I already have a Bachelor’s degree in Electrical Engineering, and I’m going to go through another 4 years of college. This is what I’ve dreamed of ever since I was a kid: to study game design. And finally I can fulfill my dream. DigiPen is a great school with a professionally designed course sequence. It is not the degree that concerns me, but what I can learn.

I’m very, very lucky to have such supportive parents. My mom and dad gave me an instant yes when I told them I’d be going to a second college. They’re also happy for me, because they’ve always known that I wanted to study game design. Mom, Dad, you’re the best parents ever 🙂

I won’t regret spending 4 years studying at the Electrical Engineering Department of National Taiwan University. In fact, I think it’s far better to first study in a more general and broader field. Now I can begin my life as a game design student, fully prepared with knowledge in electrical engineering, computer science, computer architecture, and programming. I’ve heard that DigiPen’s alumni actually advise people to spend at least two years in a Computer Science, Electrical Engineering, or Mathematics department before studying at DigiPen. Students who went to DigiPen directly after graduating from high school said that the programming courses are pretty heavy for a high school graduate.

I’ll begin packing my stuff pretty soon, and I’ll be flying to Seattle on August 10. Before that, I shall call up my friends from elementary school, junior high, high school, and college; we’ll go out for dinner and have a good long chat. Because soon afterwards, I’ll be very far away from my parents and my buddies.

Man, time flies. I’ve already graduated, I’ve finished my service in the Marines, and finally I’ll be starting my brand-new life as a game design student, which is what I’ve always dreamed of!

Posted in Uncategorized | 3 Comments

Picking Up My Life Where I Left Off

Finally, I finished my one-year service in the Marines last weekend, and my life has gone back to normal. No more going back to the base on Sundays, no more taking long train rides between home and the base, and no more saying yes to jackass officers. This is all too sweet. :p

I spent quite a while in the largest electronics shopping mall in Taipei upgrading my electronic arsenal. I’ve got my first smartphone, an HTC Desire S, which fits nicely into my hand, and my favorite feature of Android phones is the ability to sync with Gmail, Google Calendar, and Google Contacts.

I also bought a new notebook to replace my 5-year-old one, which always overheated easily. The new notebook runs CPU-intensive applications quite nicely, and it doesn’t produce much heat. I can finally play StarCraft II smoothly with all the VFX options set to Ultimate 🙂

Changing to a new computer comes with a price: I had to re-install and re-download all the handy tools I need, so there went another entire day. Also, I got myself a Logitech wireless keyboard and mouse set; Logitech’s Unifying technology is pretty cool and convenient: it allows your computer to communicate with all of Logitech’s wireless devices through a single USB receiver. Now I can lie back in my seat and use my wireless keyboard.

I will fly to Seattle on August 10th to study at DigiPen in the Bachelor of Science in Game Design program. This is another milestone in my life: finally, I mean FINALLY, I can study the thing I truly love – Game Design! This probably means I’ll have to switch to C++ coding and pay less attention to ActionScript. Hell, I think I’ll still do some ActionScript coding in my spare time, implementing stuff I learn at DigiPen.

Before the semester starts, I’ll have to complete DigiPen’s Sketchbook Assignment for students taking the ART101 course. It looks quite heavy, so I’ll have to get to it ASAP. I’ve also taken the free online math refresher course, which, according to DigiPen’s description, requires approximately 15 hours per week.

That’s pretty much all I have to say right now. It feels nice to be back to my normal life 🙂

Posted in Uncategorized | 3 Comments

Simple Bunnyhill Demos

As you may have heard, Flash Player Incubator is available now. This means you can view Flash content powered by the Molehill API! Yay for GPU-accelerated Flash movies! Richard Davey has posted an article full of awesome Flash Player 11 Molehill demos; be sure to check out the stunning 3D visuals. In order to view the Molehill-powered Flash content, you’ll need to download and install the Flash Player Incubator, available at Adobe Labs.

I’ve created two simple demos for Bunnyhill, my upcoming 2D rendering engine based on the Molehill API: one demonstrating the scene graph structure of Bunnyhill, and one demonstrating the integration of Stardust with Bunnyhill.

You may download the current revision of Bunnyhill here (CJSignals is also required). Please note that this revision is still a work in progress, so anything is subject to change. This source should only be used to compile the following two demos.

Also, you need to do some setup for FlashDevelop in order to make it work properly. Michael Baczynski posted an article on how to set things up.

Bitmap Drawing

View online (requires Flash Player 11)

Download source

This demo shows the scene graph structure in Bunnyhill. Each renderer has a scene graph, which is traversed by the render visitor on each render call.

Here’s a code snippet from the main class. A Group node acts as the root node and contains several Transform nodes, which all contain a common DrawBitmapData child node.

group = new Group();
group.add(new Clear());

draw = new DrawBitmapData(new Pic2().bitmapData);

for (var i:int = 0; i < 8; ++i) {
	for (var j:int = 0; j < 8; ++j) {
		var t:Transform = new Transform(draw);
		t.rotation = 22.5 * (j + i * 4);
		t.position.set(125 * i + 62.5, 100 * j + 50);
		t.center.set(125, 100);

		group.add(t);
	}
}

renderer.rootNode = group;

The root node in this demo is a Group node, which simply passes the render visitor on to its children. Each Transform node pushes a transform matrix onto the render visitor’s matrix stack, passes the render visitor to its only child, and then pops the matrix off the stack. Finally, the DrawBitmapData node is what actually draws things onto the stage: it draws images based on the top matrix of the matrix stack.
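
Conceptually, a Transform node’s part of the traversal looks something like the sketch below. This is not the actual Bunnyhill source; the RenderVisitor type, the visit method name, and the matrix math are stand-ins, just to show the push-visit-pop flow around the shared child.

//a conceptual sketch, not the actual Bunnyhill implementation
public function visit(visitor:RenderVisitor):void {
	//build the local matrix from center, rotation, and position
	var local:Matrix = new Matrix();
	local.translate(-center.x, -center.y);
	local.rotate(rotation * Math.PI / 180);
	local.translate(position.x, position.y);

	//combine with the parent's global transform (the current stack top)
	var global:Matrix = local.clone();
	global.concat(visitor.matrixStack.top);

	visitor.matrixStack.push(global); //push for the child
	child.visit(visitor);             //e.g. the shared DrawBitmapData node
	visitor.matrixStack.pop();        //restore the parent transform
}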

Notice how only one DrawBitmapData node is used in the scene graph, as a common child of all the Transform nodes. This is the power of the scene graph structure: it avoids unnecessary memory usage. This is different from Flash Player’s native display list structure, which forces each display object to have exactly one parent; if you want to draw two identical shapes onto the stage at different locations, you have to instantiate two such objects.
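
For instance, drawing the same bitmap twice with the native display list requires two display objects, even though they can share the same BitmapData (this snippet reuses the Pic2 asset from the demo above):

var data:BitmapData = new Pic2().bitmapData;

//each on-screen copy needs its own Bitmap display object
var bmp1:Bitmap = new Bitmap(data);
var bmp2:Bitmap = new Bitmap(data);
bmp2.x = 100;

addChild(bmp1);
addChild(bmp2);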

Nova

View online (Flash Player 11 required)

Download source

Bunnyhill currently provides a quick implementation of a particle handler that works with Stardust. This demo uses the particle handler to link the two libraries together.

var group:IGroup = new Group();
emitter.particleHandler = new BHGroupHandler(group);

//clear the frame, use additive blending, then render the particle group
renderer.rootNode = new Group(
	new Clear(),
	new BlendAdd(),
	group
);

That’s it. I’m pretty happy to see how far Molehill has gone. I’ll keep working on Bunnyhill in order to provide an easy-to-use, GPU-accelerated 2D rendering API.

Posted in Bunnyhill, Stardust | 13 Comments

Stardust Touch-Ups for Brilliant Scenic Demos

Wonderfl is, as its name suggests, a wonderful website that provides an online ActionScript 3.0 compilation service, where you may post, share, and fork code. Recently, I ran into two fantastic scenery simulation demos by Yonatan: the Super Express Desert Sunset and Sea and Clouds demos.

I decided to use Stardust to apply some touch-ups to these two demos, to see how much I could boost the visuals with particle effects. And here they are.

Super Express Desert Sunset (Stardust ver.)

In this demo, I used simple horizontal rectangles as particles and applied the blur filter to create a motion blur effect. The particles are perturbed by some random drift and are drawn to the left by a uniform gravity field. I really love this train-window view of rain (or snow) rushing by during sunset 🙂

Sea and Clouds and Fireflies


The water wave reflection effect here is pure brilliance. I just couldn’t help adding some firefly effects to make it even more fantasy-like. The original code draws the entire scene to generate the water reflection, which made it quite convenient for me to add particle effects, since the particles are automatically drawn onto the water without extra coding effort.

Posted in Stardust | Leave a comment

Bunnyhill Logo Prototype

This is what I came up with late at night at the Marine base: a prototype logo for Bunnyhill. It’s just a prototype and is subject to change.

BTW, don’t you think the green grass part and the text look a little like the Burger King logo?

Posted in Bunnyhill | Leave a comment

Switch vs Strategy

Recently, I’ve been working on the blend mode feature for Bunnyhill, my upcoming 2D rendering engine based on Molehill, and I’ve used strategy objects to replace what could have been written as a switch statement. I think this makes a nice example of replacing switch statements with strategies, so I’ll share some of the details here.

The Switch Approach

First, let’s take a look at the naive switch approach to setting the blend mode of a render engine. Say we have a render engine that implements the interface below.

interface IRenderEngine {
	function setBlendMode(value:String):void;
}

And the BlendMode class provides static constants representing different blend modes.

class BlendMode {
	public static const ADD:String = "add";
	public static const ALPHA:String = "alpha";
	public static const NORMAL:String = "normal";
}

A possible render engine implementation for the switch approach is shown below.

class RenderEngine implements IRenderEngine {
	public function setBlendMode(value:String):void {
		switch (value) {
			case BlendMode.ADD:
				//use add blend mode
				break;
			case BlendMode.ALPHA:
				//use alpha blend mode
				break;
			case BlendMode.NORMAL:
				//use normal blend mode
				break;
		}
	}
}

In our main program, we could set the blend mode of the render engine like this.

renderEngine.setBlendMode(BlendMode.ADD);

The use of switch seems quite reasonable at first glance, but it is identified as a “bad smell” in Refactoring by Martin Fowler. If we were to add more blend modes to the render engine, we would have to add an extra constant to the BlendMode class and an extra case to the switch statement. One feature change results in changes in two places. No good!

This is when the use of strategy objects should be considered.

The Strategy Approach

We could define an interface for strategy objects.

interface IBlendMode {
	function setupBlendMode(engine:IRenderEngine):void;
}

And the new BlendMode class looks like this.

class BlendMode {
	public static const ADD:IBlendMode = new Add();
	public static const ALPHA:IBlendMode = new Alpha();
	public static const NORMAL:IBlendMode = new Normal();
}

class Add implements IBlendMode {
	public function setupBlendMode(engine:IRenderEngine):void {
		//use add blend mode
	}
}

class Alpha implements IBlendMode {
	public function setupBlendMode(engine:IRenderEngine):void {
		//use alpha blend mode
	}
}

class Normal implements IBlendMode {
	public function setupBlendMode(engine:IRenderEngine):void {
		//use normal blend mode
	}
}

Now the implementation of the render engine is as follows. Note that setBlendMode now takes an IBlendMode instead of a String, so the IRenderEngine interface needs the same update.

class RenderEngine implements IRenderEngine {
	public function setBlendMode(value:IBlendMode):void {
		value.setupBlendMode(this);
	}
}

And in the main program, the code that sets the blend mode of the render engine is, surprisingly, exactly the same. However, the underlying implementation has switched from a switch statement to strategy objects.

renderEngine.setBlendMode(BlendMode.ADD);

If you want to add new blend modes, all you have to do is create new IBlendMode strategy classes; you don’t have to add any code to the RenderEngine class. This is pretty much the essence of the Strategy Pattern: adding new features and functionality to a system by simply extending the strategy base class, or implementing a common strategy interface.
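
For instance, a hypothetical multiply mode (not an actual Bunnyhill class, just for illustration) only needs a new strategy class; the RenderEngine class stays untouched.

class Multiply implements IBlendMode {
	public function setupBlendMode(engine:IRenderEngine):void {
		//use multiply blend mode
	}
}

//either expose it as a new constant on BlendMode, or pass an instance directly
renderEngine.setBlendMode(new Multiply());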

Posted in Design Patterns | 6 Comments

Shader Primer

I’ve been studying shaders recently, in order to get myself prepared for shader programming with Molehill, the upcoming update to Flash Player that exposes low-level GPU-accelerated APIs. For those who are not familiar with shaders, and are also eager to get ready for shader programming with Molehill, I’ll share what I have learned about shaders in this post.

To present the concepts cleanly, I’ll write in pseudocode. You may apply the concepts in any shader language you like, such as GLSL, HLSL, or the shading language in Molehill.

Shaders

A shader is a compiled program that resides in GPU memory and is executed by the GPU. It’s pretty much like a program in main memory that is executed by the CPU. Unlike an ordinary CPU-executed program, however, a shader consists of instructions that can be executed extremely fast by dedicated hardware, the GPU.

There are generally two types of shaders: vertex shaders and fragment shaders (or pixel shaders). Normally, in a rendering pipeline, vertex data is passed into the vertex shader, transformed, and then passed on to the fragment shader; finally, the fragment shader decides the final result of each pixel on the screen, based on the output of the vertex shader (e.g. the pixels within a triangle, if you’re using the vertices to draw triangles). A vertex shader is run once for each vertex, and a fragment shader is run once for each pixel on the screen.

To get the most out of the GPU, shaders are essentially executed in parallel for each vertex and each pixel. As a consequence, each shader invocation is independent of, and has no access to, the results of other input vertices and pixels. For those familiar with Pixel Bender, its programs also have this property, since Pixel Bender is shader-based.

Vertex Shaders

Okay, let’s take a look at an example of a vertex shader. We’d like to create a very simple vertex shader that transforms a vertex with a matrix. This is actually useful enough if you want to project a vertex in 3D space into 2D screen space.

The input is the single vertex to be transformed, and the transform matrix is declared as a parameter, which can be set by the main program. In the pseudocode, the vertex in 3D space is stored as a 4D float vector, and the matrix representing a 3D transform is stored as a 4-by-4 float matrix; this way, we can express translation in 3D space with a single matrix multiplication, which is not possible with 3D float vectors and 3-by-3 float matrices.
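
To see why, here is what a pure translation by (tx, ty, tz) looks like as a 4-by-4 matrix acting on a vertex stored as (x, y, z, 1):

| 1  0  0  tx |   | x |   | x + tx |
| 0  1  0  ty | * | y | = | y + ty |
| 0  0  1  tz |   | z |   | z + tz |
| 0  0  0  1  |   | 1 |   |   1    |

And here is the vertex shader itself: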

input float4 vertex;
parameter matrix44 transform;
output float4 result;

void main()
{
	result = transform * vertex;
}

Fragment Shaders

Usually, in the main program you associate custom data with each vertex. For instance, a very common piece of extra vertex data is the UV coordinate, which is mainly used for UV mapping, a texturing technique. Each pixel on the screen most likely lies inside a triangle formed by three vertices projected onto the screen, and for each pixel, an interpolated value of this extra data can be used in the fragment shader. Generally, you mark a variable as interpolated with a modifier in the shader program, and its value is automatically interpolated by the GPU when the shader program is run.

Below is a fragment shader example, which interpolates the UV coordinate for the output pixel, takes a texture as input (an image4 parameter representing a 4-channel bitmap), and outputs the final color of the pixel. Note that the interpolation is not ordinary linear interpolation; it is done in a perspective-correct manner, meaning that the sample points become more “sparse” for farther points (“deeper” pixels in screen coordinates). Normally, a shader language allows you to specify how the interpolation is done (perspective-correct, non-perspective, etc.), and usually perspective-correct interpolation is the default.

input float2 uvCoord;
interpolated float2 interpolatedUVCoord;
parameter image4 texture;
output float4 result;

void main()
{
	//interpolated automatically
	interpolatedUVCoord = uvCoord;

	result = sample(texture, interpolatedUVCoord);
}

Done! With these two example shaders combined, you can render with basic matrix transformation and UV mapping. I’ve pretty much covered the concepts you need to know to get started with shader programming.

Get yourself prepared for Molehill! 🙂

Posted in Bunnyhill, Shader | 3 Comments

Matrix Stack and the Visitor Pattern

Scene Graphs and Scene Trees

When building a scene graph framework, such as the display list in ActionScript 3.0, two types of nodes are involved: internal nodes and leaf nodes. The transformation properties (x, y, rotation, etc.) are passed down from parents to children; that is, if the parent’s local coordinates are (10, 10), and a child’s local coordinates are (5, 5), then the child’s global coordinates are (15, 15).

ZedBox, my 2.5D billboard engine, internally manages objects in a scene tree structure (a more specific type of scene graph where each child can have only one parent), just like the display list structure in ActionScript 3.0. Each node holds a reference to an object containing its global transformation. When the scene tree is traversed in the render routine, the global transformation of each node is updated by its parent; in other words, the global transformations are updated recursively from the root down.

Back then, this seemed like a very reasonable approach to me. Later, however, I found that it eventually made my code very messy and hard to maintain. Also, this approach does not apply to scene graphs: since in a graph each node can have more than one parent, it’s not right for each node to hold a reference to a single global transformation of its own. After having a little chat with Jean-Marc Le Roux (the author of the Minko 3D engine), I got to know that he used the Visitor Pattern in Minko to traverse the nodes managed in a scene graph structure. Suddenly, it struck me that this pattern is the key to calculating the global transformations of nodes in a scene graph: instead of storing the global transformation in each node, every node gets a chance to access the visitor when the visitor visits it, and the visitor holds a reference to a matrix stack.

Matrix Stack

A matrix stack is essentially, well, a stack that contains matrices (here a matrix represents a spatial transformation). The interface of a matrix stack may look like this.

interface IMatrixStack {
	//returns a reference to the top matrix
	function get top():Matrix;

	//pushes a matrix to the stack
	function push(matrix:Matrix):void;

	//returns the top matrix popped from the stack
	function pop():Matrix;
}
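
Here’s a minimal sketch of one possible implementation, assuming flash.geom.Matrix for the Matrix type; the stack keeps an identity matrix at the bottom so that top is always valid.

class MatrixStack implements IMatrixStack {
	private var _matrices:Vector.<Matrix> = new Vector.<Matrix>();

	public function MatrixStack() {
		//start with an identity matrix so the stack is never empty
		_matrices.push(new Matrix());
	}

	//returns a reference to the top matrix
	public function get top():Matrix {
		return _matrices[_matrices.length - 1];
	}

	//pushes a matrix onto the stack
	public function push(matrix:Matrix):void {
		_matrices.push(matrix);
	}

	//pops the top matrix off the stack and returns it
	public function pop():Matrix {
		return _matrices.pop();
	}
}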

The following code pushes a matrix A onto the stack, and then pushes the product of matrix A and matrix B onto the stack. Of course, there is no operator overloading in ActionScript 3.0; I’m using the multiplication operator just to keep the code clean.

matrixStack.push(A);
matrixStack.push(
	matrixStack.top * B
);

Visitor Pattern

With the Visitor Pattern, the scene graph is traversed by letting a visitor “visit” each node, carrying information from one node to the next. The visitor for a scene graph holds a matrix stack, whose top matrix is the global transformation of the current node’s parent.

This is the interface of a node.

interface INode {
	function get localTransform():Matrix;
	function visit(visitor:IVisitor):void;
}

And this is the interface of a visitor.

interface IVisitor {
	function get matrixStack():IMatrixStack;
}

Below is a possible implementation of a container (internal) node class.

class ContainerNode implements INode {

	private var _localTransform:Matrix = new Matrix();
	public function get localTransform():Matrix {
		return _localTransform;
	}

	private var _nodes:Vector.<INode> = new Vector.<INode>();

	public function add(child:INode):void {
		if (_nodes.indexOf(child) < 0) {
			_nodes.push(child);
		}
	}

	public function remove(child:INode):void {
		var index:int = _nodes.indexOf(child);
		if (index >= 0) {
			_nodes.splice(index, 1);
		}
	}

	public function visit(visitor:IVisitor):void {
		//push the global transform of the group to the stack
		visitor.matrixStack.push(
			visitor.matrixStack.top * localTransform
		);

		//make the visitor visit the children
		for each (var child:INode in _nodes) {
			child.visit(visitor);
		}

		//pop the global transform of the group from the stack
		visitor.matrixStack.pop();
	}
}

So, in a child node, the following code calculates the global transformation of the node itself.

function visit(visitor:IVisitor):void {
	var globalTransform:Matrix =
		visitor.matrixStack.top * localTransform;

	//do something with the global transform
}
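
For completeness, a hypothetical leaf node (just for illustration, not taken from any of my engines) could wrap that up like this:

class BitmapLeafNode implements INode {
	private var _localTransform:Matrix = new Matrix();
	public function get localTransform():Matrix {
		return _localTransform;
	}

	public function visit(visitor:IVisitor):void {
		var globalTransform:Matrix =
			visitor.matrixStack.top * localTransform;

		//do something with the global transform,
		//e.g. draw a bitmap with it
	}
}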

This is pretty much my current approach to managing scene graphs in Pyronova, my bitmap rendering engine, and Bunnyhill, my 2D rendering engine based on Molehill, the upcoming ActionScript 3.0 API that supports GPU acceleration.

Posted in Bunnyhill, Design Patterns, Pyronova, ZedBox | Leave a comment