How Does An Entity System Work?

In a previous article I attempted to make a case for using an Entity System as an architecture for game engineering. In this second article, I explain the core mechanism that makes an Entity System tick, and broadly how it is implemented.

Games in which the states of objects are in constant flux – arcade games – are very different beasts from other kinds of software application. Most of the time, an application has a stable state, and responds to external input by updating the state, and then surfacing that change to users. Arcade games by contrast demand change at regular intervals. The simple expression of this is that at the heart of every arcade game is a ‘main game loop’, which is fundamentally a list of state-changes that happen with or without external input, culminating in the surfacing of those changes to users by re-rendering the screen.

An Entity System manages that loop for you by offering a simple interface for all the algorithms that update the game: the System, and a mechanism for pulling into those algorithms all the data that needs to be updated, the Collection.

Entities, Components and Systems

An Entity System is an architecture for developing games that maintains a strong separation between the data that describe the game’s state, and the algorithms that express the game’s behaviour.

Entities are the atoms of an Entity System. An entity is a stateful, independent thing that persists over a period of time during a game. In an Entity System, an entity begins life as a stateless container and is populated with components, which ascribe state to the entity.

Systems contain the algorithms that read, manipulate and write the entities’ states. Commonly, systems iterate once per-frame, and iterate over a collection of entities (although several edge-case scenarios exist which complicate this simple structure).

Defining Collections

Collections are subsets of entities defined in terms of their components. A collection is defined as an array of necessary components: an entity that contains all the necessary components is a member of the collection; an entity that does not have one necessary component is not a member.

The collection is the primary mechanism that bonds together the entities, components, and systems. Most systems depend on and iterate over one collection per frame.

Imagine a simple game in which some entities with a Position are in motion defined by a Velocity. This imaginary MovementSystem will iterate over all and only those entities that have the two components Position and Velocity.

In Dust the MovementSystem would be defined like this:

systems
  .map(MovementSystem)
  .toCollection([Position, Velocity]);

Systems are executed in the order that they are mapped, though they may also be weighted, so that higher-weighted systems precede lower-weighted systems. In general, the iterate method of each system is called each frame.
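The ordering scheme might be sketched like this (TypeScript is used here purely for illustration; SystemRunner, its add method and the weight field are hypothetical names, not Dust's actual API):

```typescript
// A weighted runner: higher-weighted systems run before lower-weighted ones,
// and a stable sort keeps mapping order among systems with equal weights
// (Array.prototype.sort is guaranteed stable since ES2019).
type System = { name: string; weight: number; iterate: (dt: number) => void };

class SystemRunner {
  private systems: System[] = [];

  add(system: System): void {
    this.systems.push(system);
    // Higher-weighted systems sort to the front, so they run first.
    this.systems.sort((a, b) => b.weight - a.weight);
  }

  // Called once per frame by the main game loop.
  frame(dt: number): void {
    for (const s of this.systems) {
      s.iterate(dt);
    }
  }
}
```

A system mapped with the default weight will then run after any explicitly higher-weighted system, regardless of mapping order.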

The system itself may then resemble something like this:

class MovementSystem
{
    @inject public var collection:Collection;

    public function iterate(deltaTime:Float)
    {
        for (entity in collection)
        {
            var position:Position = entity.get(Position);
            var velocity:Velocity = entity.get(Velocity);
            position.x += velocity.dx * deltaTime;
            position.y += velocity.dy * deltaTime;
        }
    }
}

The system relies on the collection having only those entities that have both Position and Velocity components. If such an entity has the Velocity component removed, then the next time the main game loop runs, it will not be a member of the MovementSystem’s collection.

Maintaining Collections

Importantly, there are usually many systems that use any given component. A Position may be written in a MovementSystem but will be read in lots of other systems. The diagram below shows the structure of an imaginary arcade game. Each color represents a collection for the corresponding system, defined by the corresponding components.

[Diagram: the systems of an imaginary arcade game, each collection colour-coded with its defining components]

My preferred way to maintain which entity is a member of which collection is to assign to each component an integer, and then maintain for each Entity and each Collection a Bitfield that describes their structure.

Consider the following diagram, which reflects the structure of an entity as various components are added. As components are added, the corresponding bits are set; as components are removed, those bits are cleared, maintaining a bitfield that can be described as a single integer. (For applications with more components than the biggest int has bits – commonly 32 – the bitfield has to be defined as an array of integers instead.)

[Diagram: an entity’s bitfield as components are added and removed]

Collections are similarly defined as a bitfield corresponding to their necessary components. In this way, whether an entity is a member of a collection or not is reduced to a simple bitwise calculation, (~entityBits & collectionBits) == 0:

[Diagram: the collection-satisfaction test applied to entity and collection bitfields]
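The test can be sketched in a few lines (TypeScript for illustration; the component ids and helper names are illustrative, not Dust's actual API):

```typescript
// Each component type is assigned one bit.
const POSITION = 1 << 0;
const VELOCITY = 1 << 1;
const SPRITE = 1 << 2;

// An entity's bitfield is the OR of the bits of its components.
function componentsToBits(...components: number[]): number {
  return components.reduce((bits, c) => bits | c, 0);
}

// An entity is a member of a collection when every bit the collection
// requires is set on the entity: any required bit the entity lacks
// survives the mask, so the result must be zero.
function isMember(entityBits: number, collectionBits: number): boolean {
  return (~entityBits & collectionBits) === 0;
}
```

An entity with Position, Velocity and Sprite satisfies a [Position, Velocity] collection; an entity with only Position does not.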

Membership of a collection is not checked every iteration. Generally, entities’ structures remain fairly stable, so when a component is added or removed the change is cached, and then at the start of the iteration loop collection membership is updated. In Dust, this process is itself defined in a system, the UpdateCollectionsSystem.
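The deferred update might look something like this (TypeScript for illustration; CollectionUpdater is a hypothetical stand-in for Dust's UpdateCollectionsSystem, not its actual code):

```typescript
// Structural changes only mark an entity dirty; a dedicated step at the top
// of the loop re-tests dirty entities against each collection's bitfield.
type Entity = { bits: number };

class Collection {
  constructor(public requiredBits: number, public members = new Set<Entity>()) {}
}

class CollectionUpdater {
  private dirty = new Set<Entity>();

  constructor(private collections: Collection[]) {}

  // Called whenever a component is added to or removed from an entity.
  markDirty(entity: Entity): void {
    this.dirty.add(entity);
  }

  // Run once at the start of each frame, before any other system iterates.
  update(): void {
    for (const entity of this.dirty) {
      for (const c of this.collections) {
        if ((~entity.bits & c.requiredBits) === 0) {
          c.members.add(entity);
        } else {
          c.members.delete(entity);
        }
      }
    }
    this.dirty.clear();
  }
}
```

Only entities whose structure actually changed are re-tested, so a stable population costs nothing per frame.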

Criticisms

A common criticism of this approach is that it is wasteful. If an entity’s velocity is zero, why update the position; if it doesn’t move, why redraw it? At first look, it appears that this approach to architecture will lead to an extremely slow, expensive application. The response to this is in two parts: we approach these problems scientifically (because we can), and when the criticism is true, we fix it.

The structure of an Entity System lends itself to measurement. If all the algorithms in your main game loop are wrapped in a unified interface and called from a centralized place, then it is relatively simple to measure how much time each of them takes. Each system can be wrapped in a mechanism that records how long it takes to run. As needed, rolling or total means and distributions can be calculated, surfaced and analyzed.
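Such a timing wrapper might be sketched as follows (TypeScript for illustration; TimedSystem and its rolling window are hypothetical, not part of Dust):

```typescript
// Wraps any system exposing iterate(dt), recording elapsed milliseconds
// per call and keeping a rolling window for a running mean.
type System = { iterate: (dt: number) => void };

class TimedSystem {
  samples: number[] = [];

  constructor(private inner: System, private window = 60) {}

  iterate(dt: number): void {
    const start = Date.now();
    this.inner.iterate(dt);
    this.samples.push(Date.now() - start);
    // Drop the oldest sample once the window is full.
    if (this.samples.length > this.window) {
      this.samples.shift();
    }
  }

  meanMs(): number {
    if (this.samples.length === 0) return 0;
    return this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
  }
}
```

Because the wrapper shares the system interface, the main loop can run timed and untimed systems interchangeably.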

The simplicity of measurement leads to the approach of writing systems in the simplest possible way to achieve functionality, and then moving on. If, later on, performance becomes a concern, your metrics will identify the main problems for you. Premature optimization is a waste, and is easily worked around.

When a problem is identified, there are many options for remedying it. If an AI loop that makes decisions based on surrounding objects is slowing down the application, it may be enough to adjust how often it runs from once-per-frame to once-per-second, or slower. If the renderer is causing headaches, then the solution becomes more complex. Moving from a full-blitter to a dirty-blitter on a CPU, or ensuring batched draw calls to the GPU, are difficult problems. In these sorts of cases, I have found the problematic system can be refactored to produce a data structure that is then passed through to a separate, architecturally-agnostic module specific to the high-cost task.

Of course, if for a particular arcade game many or most systems are merely placeholders or input aggregators for other modules, then there is a strong case that an Entity System architecture is not appropriate for that game. To that application’s developer: Good luck! It must be really something; please let me know what architecture does work for you!

Conclusions

The Entity System architecture is an opinionated architecture; it wants you to code in a particular way. Instead of creating classes for your game entities and encapsulating functionality, it expects you to treat all entities in the game as fundamentally the same kind of thing: the Entity, to which stateful Components are added.

The architecture’s utility then comes from how it allows the definition and management of Collections that ensure systems only iterate over entities appropriate to them. A simple, efficient mechanism for this is to express the configuration of an Entity and the definition of a Collection through bitfields, which can then be manipulated quickly to establish collection membership.

In an up-coming article I will demonstrate how to build a simple arcade game with my current Haxe Entity System library, Dust. Dust will remain as an early alpha until I release my first dust-based game (in production), after which I will tighten up the API and release it on haxelib.

Why Use Entity Systems for Game Engineering?

An Entity System is an architecture for developing games that avoids many of the problems characteristic of games developed using Object Oriented Programming (OOP). This article explains how moving from the OOP paradigm to an Entity System approach requires a change in engineers’ thought processes and the abandonment of the principle of encapsulation. Making this change simplifies development and allows code to be more modular. This in turn enables the engineer to better respond to issues of scale and changing requirements.

Software Architecture

The secondary value of software is its behavior … The ability of software to tolerate and facilitate such ongoing change is the primary value of software. The primary value of software is that it’s soft

Bob Martin, Clean Code, Episode 9

All software is a combination of data and algorithms. The challenge for software engineering is how best to structure the data and algorithms such that it remains open to future change. Software architecture helps the engineer meet these challenges by providing a simple, consistent way to structure the data and algorithms into modules, and provides mechanisms for linking these modules together.

Over time, breaking an application down into modules is necessary because it keeps each part of the application relatively simple. However, as soon as an application is modular those modules must be linked together and linking is the source of rigidity. The biggest problem for software architecture lies in solving how to link two modules of code together so that changing one module does not require changing those modules to which it is linked; how to keep the software soft.

Entities, States, Nouns and Verbs

Games are comprised of entities. An entity is a stateful, independent thing that persists over a period of time during a game. How a game works can be described in terms of these entities using three distinct concepts: state, nouns and verbs. The nouns in a game describe the different categorisations of entity that the game has: alien, bullet, hero, timer, score, or camera are all kinds of entity. The state of an entity describes the different data structures they contain: position, color, shape, intention… The verbs in a game describe change: an entity may move, fight, die, collide, rotate, inflate, blink… In code, verbs are expressed by the algorithms that manipulate the entities’ state.

Given a design, the software engineer building the game has two main options for choosing how to organize her code. She may choose as her central organizing units of code the nouns of the game, or she may choose the verbs.

Object Oriented Programming – Organizing Code Around Nouns

For an engineer that has a lot of experience in OOP this is no choice at all. OOP is based on the organizing principle of encapsulation: that the data and the algorithms that manipulate that data comprise a module, or class. Since for one entity many verbs will manipulate the same data structures (move and collide may both reference an entity’s position, for example), encapsulation dictates that they must all be part of the same module. So, the OOP engineer will start to define her class Alien as follows:

class Alien
{
	var position:Vector2;
	var velocity:Vector2;

	public function move(deltaTime:Float);
	public function collide(other:?);
}

There are two main criticisms of this approach: using OOP the nouns define the top-level modules of the code, and the states and verbs are tightly coupled to them. The strong links define a rigid internal structure for each object, which makes it difficult to reuse the verbs or states in different contexts.

The experienced OOP engineer may counter that there are many design patterns and strategies available to her that let her mitigate this problem. The engineer may avoid repeating herself and share the algorithm for movement between two different classes, Alien and Hero, by defining a Humanoid class from which Alien and Hero inherit. Alternatively she could use a strategy-pattern to define a Movement behaviour and composite the behaviour into the Alien and Hero classes… For the OOP developer to avoid the problems of big, rigid modules she has to make lots of difficult architectural decisions. Sadly, this is in itself a problem: she must sacrifice consistency to achieve modularity. Instead of wielding a few, big architectural designs, her code becomes separated into many small concepts that interact with each other in lots of different, complicated ways.

The source of the engineer’s difficulties is that the initial choice of organizing her code around the nouns is a mistake. It is a premise of modularity that it is better to have many, smaller modules than to have few, big modules, and it is equally better to have a few, simple ways that modules interact with each other. There will always be fewer nouns than verbs in a game’s design, so organizing code around nouns runs against the best interests of modularity.

There are many other general criticisms of OOP that won’t be discussed here. OOP is a very intuitive way to structure code and has many benefits. This article does not intend to criticise OOP but rather suggest that there is an approach for games engineering that is more robust and scalable.

Entity Systems and Component Systems – Organizing Code Around Verbs

The engineer’s alternative route is to treat as the central organizing modules of her code the verbs. A consequence of this is that the nouns of the system are not centrally defined at all. Instead, the class Entity is defined as a dynamic, runtime composition of Components. Components define the different aspects of the entity’s state, and drive which verbs are applicable to it. For example, the verb ‘to move’ is defined by the Movement Component, which contains (minimally) the entity’s velocity. The algorithm that applies movement is applied to all and only those entities that have the movement and position components; to trigger movement the Movement component is added to the entity, to cancel movement it is removed.

There are two main variations to how the algorithm is applied. In what I term a Component System, the movement algorithm is defined in the Component itself, in an update method. Every component will have an update method, so that as part of the main game-loop every component in every entity is iterated through and update() called.

The structure of the code in a Component System looks something like this:

class Position
{
	public var position:Vector2;

	public function update(deltaTime:Float);
}

[Require(Position)] // an indicator that Movement can only be added to an entity that has a Position
class Movement
{
	[Inject] public var entity:Entity; // the entity to which this instance of Movement is added

	public var velocity:Vector2;

	public function update(deltaTime:Float)
	{
		var position = entity.get(Position).position;
		position.offset(velocity * deltaTime);
	}
}

At first glance this approach may look like it maintains encapsulation, though this is not the case. For the Movement component’s update algorithm to do anything it must access and modify the Position component’s data. The central problem for this architecture is how to reference other components. The most popular incarnation of this approach is currently Unity3D. The Unity3D team solved this problem by forcing all components to inherit from a base class MonoBehaviour, which stores a reference to the entity and allows inter-component communication.

In what I term an Entity System the data and algorithms are kept separate. Components are simply value objects containing the necessary state for a given concept. Algorithms are defined in what I term a System. Every system has an iterate() method. When systems are registered to the main game-loop iterate() is called as part of that loop.

The structure of the code in an Entity System looks something like this:

class Position
{
	public var position:Vector2;
}

class Movement
{
	public var velocity:Vector2;
}

class MovementSystem
{
	[Inject] public var collection:Collection; // a collection of entities with position and movement components

	public function iterate(deltaTime:Float)
	{
		for (entity in collection)
		{
			var position = entity.get(Position).position;
			var velocity = entity.get(Movement).velocity;
			position.offset(velocity * deltaTime);
		}
	}
}

The central problem for this approach is how the MovementSystem maintains a reference to all and only those entities that have both a Movement and a Position component. How I do this in dust (Haxe Entity System) will be the subject of an upcoming article. The clear disadvantage to an Entity System approach is the amount of CPU needed each cycle to maintain these references. However, this disadvantage can be mitigated by the architects developing efficient ways to maintain these structures, so that the gameplay engineer can concentrate on implementing the systems needed.

These approaches avoid the problems caused by organizing code around the nouns of the game. If more verbs are added to the game design document, more modules can be added to the code without touching any of the existing functionality. The software engineer can develop and test new features independently, without fear of causing regressions.

Of course, problems with this approach can emerge when two or more systems modify the same data structures. Independent modules for Movement and Collision may contradict one another, for example, in which case the order in which they are run can also become an issue. These problems are not magically handled by an Entity System. In this case, it may make sense to remove movement and collision as separate modules and create a bigger, more over-arching Physics system. These sorts of problems are an inevitable issue for game engineering, irrespective of the architecture. To solve such problems in an Entity System or Component System is to create a new system; the structure of the code remains consistent.

The primary advantage to this approach is that the code remains extremely consistent and extremely modular. Every system has exactly one entry point. Systems can be turned off and on by adding and removing them from the main game-loop, their speed can be monitored and compared to one another, so their impact on the game can be carefully measured and assessed.

Summary

An Entity System is a software architecture for game development. Entity (and Component) System Architecture combines a clear separation of data structures and algorithms with a core mechanism to manage which data is passed into which algorithm and when. The separation of data and algorithms encourages good separation of concerns, enabling highly modular application development.

The Object Oriented Programming approach to game engineering is problematic because it leads to either too few modules with big rigid structures, or modules that are linked to each other in complicated, inconsistent ways. By contrast, Entity System architecture seeks to minimize the links between the different parts of game code and define a few, extremely simple ways that they can interact.

The software engineer developing a game in an Entity System architecture must organize her thoughts and code around the verbs of the game, rather than the nouns. This can feel like an alien way to organize code initially, but eventually it rewards the developer with simpler, more consistent code that scales and responds to change better.

Entity Systems are a better way to organize code for game engineering.

Entity Systems

On Wednesday August 1st I gave a talk at Games@Codame. The lead-up to the talk was fraught. My wife was taken to the ER the week before, and I spent the week before the talk at home looking after her. It threw all my preparations into a spin, though with hindsight I should have been much better prepared much earlier. Then, during the talk, problems with the projector, Adobe Connect, power supply, mics and anything else that could go wrong spoiled much of what I wanted to say. Not my finest half-hour.

I missed out on a lot of things that I wanted to implement. Chief among them was my intention to implement save-state and load-state console commands to demonstrate the versatility of the serialization/deserialization. I also intended to show off property inspectors and to look more closely at how the framework actually fits together.

However, I am pleased with the latest slideshow app that I created. The entire thing is built with a combination of my Entity System and RobotLegs. Press tab to bring up the console and type “list” to get a list of available commands. 1 moves you to the next slide, and 2 to the previous slide. Some slides have steps in them; look out for when extra buttons appear in the bottom right, as they’ll give you the other functionality. Enjoy!

Source code for the talk can be found here: github.com/alecmce/es

Interviewing through Pair Programming

The best interview I ever had was a pair-programming session with Luke Bayes and Ali Mills. I was put at ease by their demeanour and attitude. They are coders, and I’m a coder – all that was left was to work out if I was as good as they hoped I was, and whether we could stomach working with one another. As it happened, all of the above, but I had to decide between enterprise work and gaming work, and I took a different route. But the interview was fantastic.

Recruiting good developers is a difficult discipline. Most of the time you get the impression that developers conducting interviews haven’t given the interview any thought until about 5 minutes before it starts. This is a mistake; recruiting good colleagues helps to solidify a team culture. Hiring the right candidate is important. Not hiring a wrong candidate is doubly important.

I have now been the interviewer more often than the interviewee. In my experience, the most reliable approach for interviewing interactive developers has been to conduct the entire interview as a sort-of game, played through pair-programming. I have yet to recommend a candidate that I later regretted recommending as a result of this game, which is as good a testimony as an interview technique is likely to receive.

The Game

The game itself is trivially simple. We start with a minimal structure – for example, a static circle in the middle of the screen – and add functionality in turns, as a sort of free-association game.

If we’re interviewing an interactive developer most familiar with AS3, I might start with something like this:

package
{
	import flash.display.Sprite;

	[SWF(width="800", height="600", backgroundColor="#FFFFFF")]
	public class Main extends Sprite
	{
		public function Main()
		{
			graphics.beginFill(0x1E90FF);
			graphics.drawCircle(400, 300, 50);
			graphics.endFill();
		}
	}
}

Imagine you’re an interviewee: We’re going to take it in turns to add features to this code. You choose the first feature, and I’ll drive implementing it. Then, I’ll choose a feature and you drive it… and so on. We are pair-programming however; we need to work together all of the time. If there’s something I do you dislike, you disagree with or you don’t understand, you should say so immediately.

This simple structure leads to all sorts of interesting opportunities. When using the circle as a starting point, most often the developer chooses to have the ball fall and ‘bounce’, but some want to make it a speaker and hook it into a sound file, or to make some art by putting other circles next to it, or have it wobble, or chase the cursor, or rotate…

What Is An Interview For?

Principally, an interview is a mechanism for filtering candidates for suitability for a job. A candidate is suitable for a job if they can do the work that will be expected of them. A candidate is not suitable for a job if they are disruptive or poorly suited to the existing team culture.

Can They Code?

Evaluating whether a programmer is good enough to do the work expected of them requires evaluating their ability to code. This is commonly achieved by posing technical problems on a whiteboard. Whiteboard problems happen away from a computer, so that there is no safety net of a test suite or just running the code to make sure it doesn’t break. For this reason there is an impetus to solve the problem mentally and then to write down the results. This feeling is familiar: most students’ experience of school is very similar; the interview becomes a test. A maxim in education is that testing students is a fantastic way of measuring how good students are at doing tests. I don’t want to hire the best interviewee, I want to hire the best-fitting programmer.

Pair programming is extremely good at exposing an interviewee’s programming style. The game established above offers the green-field of a project with no built-in dependencies or external concerns. Every developer loves to write code at the start of a project! That’s why developers start so many projects and finish so few. At this point all potentialities are open. Interviewees may write solid, conservative code; they may write wildly creative prototype ideas; they might write badly designed spaghetti code… However the interviewee codes instinctively becomes apparent, because the format is familiar and achievable.

What Are They Like To Work With?

After an hour of programming with someone you get to know them a little. Whether someone is thoughtful, open, aggressive or competitive, it is very difficult for them to sit side-by-side discussing code without their personality expressing itself clearly. Programming is a coder’s natural environment, and they tend to act naturally while doing it.

How Creative Are They?

Creativity is an important part of interactive developers’ jobs. Most of the time, requirements that come to interactive developers are poorly defined, or not thought through. An interactive developer needs to fill in the gaps or make the best out of obscure requirements. The reason for the open-choice task is to attempt to see the interviewee’s creative faculties.

The game borrows from improvised comedy the idea of accepting your partner’s idea and extrapolating. I tend to offer conservative extrapolations to the interviewee’s ideas, allowing them to drive the direction of the feature-set more. In almost every interview that I have conducted in this way, interviewees drive the agenda to more complex features, to find interesting features and interesting problems within the context we have made for ourselves. Sometimes the game can break down, or we have to back-track. All of this should be embraced as part of the game. Allowing and encouraging interviewees to run with an idea is informative, as well as being fun for both parties.

Can They Solve Complex Problems?

Another developer who has used pair-programming for interviews, Ivan Moore, argues that it is not a good technique for evaluating a developer’s problem-solving skills.

However, he also points out that:

In typical enterprise IT projects, more problems are caused by over-complex solutions to simple problems, or by solving the wrong problem, than having problems that are unique and difficult and require a brand-new algorithm.

Ivan Moore

I would contend that this is true not only for typical enterprise IT projects, but for almost all software. There exist very few software problems that can’t be solved by a team of software engineers iteratively using simple, elegant solutions written in clean, easy-to-read and easy-to-change code.

Of course, there are arcane areas in software development and there are a handful of extreme cases where programmers are being hired because of their niche abilities. In this case, of course I would not advocate using the pair-programming game to evaluate their niche abilities. This is a 0.1% problem however; if you’re reading this and think that you’re a niche programmer, you’re probably delusional.

But What About Problem Solving?

OK. Let’s suppose you really need developers to solve some fiendishly complex problems. Some companies ask quirky brain-teaser questions. Most notably, Google’s interviews ask famously impenetrable questions to challenge interviewees’ inventiveness.

You are shrunk to the height of a nickel and thrown into a blender. Your mass is reduced so that your density is the same as usual. The blades start moving in 60 seconds. What do you do? (Google Interview question, from How to Ace a Google Interview)

This question is measuring a candidate’s ability to make some mental leap of logic. A good leap of logic in this case would be to propose a physical leap out of the blender; if you’re the same density then the ratio of strength to mass will have increased in your favour, so you can jump out of the blender.

Demonstrating such a leap of logic is a good positive indicator that an interviewee can think laterally, invent and innovate. It tests exactly those aspects that Moore argues are not well tested by the game. For this reason, it is worth considering asking such questions during an interview.

Conversely however, not demonstrating such a leap of logic is not a good negative indicator; an inventive problem-solver may fail to answer such a question successfully. Perhaps the question just didn’t chime with them; perhaps they were too nervous.

The utility of these questions for Google is more obvious when you consider that Google have around 130 candidates for each job. A question that reliably filters inappropriate candidates but also filters many good candidates is not a problem in a scenario where there are more candidates than can be readily handled.

Most silicon valley companies have difficulty keeping their engineers, let alone elsewhere in the world. There are too few developers for the work available (worth reflecting on, if you’re dissatisfied with your current working conditions). In a scenario with very few potentially appropriate candidates for a job, using questions like this may not be an appropriate strategy; there is no point in asking a question that nobody can answer!

Conclusions

I am not advocating pair-programming as an ideal interviewing technique, but it has served as a functional approach for me. Interviewing people for coding positions is complex, and many good techniques and ideas are available. Different styles will fit different cultures better, but it is important and instructive to reflect on how you interview and what you are looking for while you interview, to ensure that you maximise your chances of hiring the best candidates for your team.

Future Of Flash

Flash Is Dead!

Or so you’d think. It’s been a fascinating couple of days in the Flash community. Everyone has had their say. ZDNet are telling everyone that Without Mobile, Adobe Flash is irrelevant. CNN talks about the Beginning of the End for Adobe Flash.

Adobe Shrinks!

This week, Adobe laid off 750 jobs. We should remember that 1950 jobs have been lost at Adobe in the last three years (750 this week, 600 in 2009 and 600 in 2008 – from daringfireball.net).

Part of this belt-tightening was the loss of the entire US-based Flash Authoring team. This article blithely jumps from this premise to the conclusion that “offering a free Flash Player runtime subsidized by selling tools is no longer a business Adobe is interested in”. Does it? Had Adobe said that they no longer have that business interest, then things would be a lot clearer. They haven’t said that at all. The media and the community are jumping to apocalyptic conclusions. (The community should know better.)

Tired Communities

Last night I watched a chunk of an online meeting about the Flash Platform, attended by around 80 developers, that notably included @seantheflexguy, @j0eflash, Lee Brimelow and Thibault Imbert.

The most noticeable thing about that conversation for me was the exasperation and ill-feeling in the room directed towards Adobe. The Flash Platform Community feels that it has been ill-treated by Adobe.

On one level, that’s absurd. Without Adobe the community would be fragmented across myriad other communities. Adobe’s investment into the platform is a primary reason the community exists. For many years now, Adobe has given us tools that allow us to make best-in-class products (or bad, buggy products as per our abilities) and display them on the web, desktop and mobile.

Diluting Juice

There has definitely been a dilution of Adobe’s lead in the interactive experience space. HTML5 has taken a lot of the wind out of Flash’s sails. The player performance on Apple products has been frustrating. The gradual loss of Flash’s performance advantage over other platforms has been demoralizing. But it is hardly surprising that other platforms and developers have looked at what Flash did right and have tried to implement similar functionality targeting other platforms and other languages. What would we have Adobe do about this? Somehow continually out-compete all other vendors? If they don’t, should we consider it a betrayal?

Buoyant Communities

Right now, if you attend a JavaScript developer meetup there are a hundred kids in their 20s inventing the stuff we invented over the last decade (a lot of which was invented before us for other platforms, of course). They’re excited and young and vibrant.

At the equivalent Flash developer meetup down the street the picture is very different. There’s a mash of artists, Flex developers, game developers, developers who make banner ads, developers who use Flash to produce graphics for massive stadium gigs for rock stars. Meetup after meetup, fewer developers attend, and those who do are older, less enthusiastic, jaded.

So the apocalyptic conclusions are understandable, if misguided. You can understand why developers might be angry with Adobe, having watched the platform lose out to the new kids on the block over several years. Now that Adobe seem finally to have admitted as much, they are venting that frustration.

Flash Is Alive!

Could it be that the Flash Player has tried to be all things to all people? That the banner-ad guy, the RIA guy building data grids and the game developer are all targeting the same platform could be a problem. Add into that mix a requirement that these games, RIAs and ads display on every browser on every desktop, but also on every browser on every mobile, and you can see what a headache Adobe has been contending with over the years.

The rise of HTML5 simplifies this issue: gradually, banner ads and simple RIAs will be doable in pure JavaScript. That may upset a lot of Flex developers, but seriously, if you can actually do it better on another platform, wouldn’t you?

But games, and those more powerful RIAs that really juice the Flash Player are not going anywhere. They need not only the current Flash Player, but a beefed-up, more aggressively powerful Flash Player. These apps may remain in the browser for desktops, but will take over the screen on mobiles, wrapped in an AIR wrapper.

Flash Player can still deliver a particular set of experiences across desktop and mobile better than any other platform. The stats about Flash Player are indisputable and compelling. In an attempt to bolster their business case against the brunt of this week’s criticism, Adobe published some of these stats.

Everything dies, even the Flash Player. But just because it isn’t a kid anymore doesn’t mean it’s dead, and these stats demonstrate that pretty convincingly.

Where Does Flash Go?

Thibault Imbert wrote an article yesterday called Focusing, in which he made the case that what has happened this week is a good thing. I broadly agree with this commentary, if this frees up the team to innovate on the current Flash Player.

I have been arguing the case for generics in AS3 for so long that it has become a standing joke. AS3 should also adopt inline functions, enums, typedefs and every other good idea that Nicolas Cannasse has baked into haXe. In fact, Adobe should have brought Nicolas into the fold long ago, and would be well advised to do so now, if they can. This sort of language innovation would free up the open-source developers to create better, faster tools and architectures for other people to build their games on.

They should also be working hard to develop better tooling. That FDT remains broadly better than Flash Builder should be a cause of embarrassment. That we have to use a creaky, buggy platform like Eclipse in the first place, when we are using tooling from the company that leads the world in creating tools for creative professionals, is continually frustrating. It makes you think that they still don’t really understand what coders want.

They need to continue to improve the performance of the player and the compiler. That people are still doing things like byte-weaving or using Apparat means that Adobe continue to miss really simple tricks in this area.

Conclusions

The community has gone nuts, but you can understand why. They’re mostly wrong, and are probably largely jealous of those JS script kiddies who seem to be having so much fun! But, they also like their not-quite Java, not-quite JS middle-road language, and want to stick with it. They want Adobe to help them stick with it. Adobe are helping! It just doesn’t feel much like it, because Flash isn’t new anymore, and like all old things, it has its problems.

After a week or two, calmer heads will prevail, and the pendulum will swing back. People will think to themselves “when was the last time I used a Flash app in the browser on a mobile?” and also think “wasn’t the Flash IDE mostly junk?”, and on reflection, may even have something positive to say about Adobe.

Did I miss anything?

TDD-ing Game Of Life in haXe

Last week I attended the inaugural Try Harder collaborative training week with a group of talented and dedicated developers. It was an extraordinary week; my brain is still fizzing with ideas and techniques that I learned there.

Image showing game of life implementation

One afternoon Mike Cann and I put together The Game Of Life following a largely test-driven development process, using Mike Stead’s MassiveUnit unit-testing implementation for haXe.

This was Mike’s first time using a TDD approach, and one of the first times I’ve built anything in haXe, and it was fun and informative. You can read Mike’s post about the project here, and browse the source-code on Github here.
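For reference, the rules we were test-driving can be sketched as follows (in TypeScript rather than haXe, purely as an illustration of the logic; the real implementation in the repository differs):

```typescript
type Grid = boolean[][]; // grid[y][x] === true means the cell is alive

function liveNeighbours(grid: Grid, x: number, y: number): number {
  let count = 0;
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      if (dx === 0 && dy === 0) continue;
      const row = grid[y + dy];
      if (row !== undefined && row[x + dx]) count++;
    }
  }
  return count;
}

function step(grid: Grid): Grid {
  return grid.map((row, y) => row.map((alive, x) => {
    const n = liveNeighbours(grid, x, y);
    // a live cell survives with 2 or 3 neighbours; a dead cell births with exactly 3
    return alive ? n === 2 || n === 3 : n === 3;
  }));
}
```

A blinker oscillator makes a good first test case: three live cells in a row flip between horizontal and vertical each generation.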

(Unfolding) Platonic Solids

Before I was an ActionScript coder I was a mathematics teacher. It may have been a giveaway that coding was more my style than teaching when I made this, originally in AS2: (roll-over to activate)

Later on I ported it to AS3 for a game that was never published. I made these:

Then, I forgot about it, for a long time, but I just came across it again! It’s actually built on my own 3D library, from back in the days before I had heard of Papervision or Away3D, or any of the others. It draws the polygons using Graphics. That part is worthless.

However, I haven’t seen elsewhere the dihedral-angle structure that allows me to define the solid to open and close. Perhaps this will be useful to someone. If you would like it, then you’re very welcome!

The code is here: https://github.com/alecmce/ptolemy3D/tree/master/src. The unfolding parts are here.
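For the curious, the dihedral angle that the unfolding hinges on can be computed directly from a solid’s Schläfli symbol {p, q}, via sin(θ/2) = cos(π/q) / sin(π/p); the unfolding then interpolates each face’s fold from θ (closed) to π (flat). A sketch (TypeScript here, though the original is AS3, and these helper names are my own):

```typescript
// dihedral angle of the Platonic solid with Schläfli symbol {p, q}:
// p-gon faces, q of them meeting at each vertex
function dihedralAngle(p: number, q: number): number {
  return 2 * Math.asin(Math.cos(Math.PI / q) / Math.sin(Math.PI / p));
}

const degrees = (radians: number): number => radians * 180 / Math.PI;

// cube {4,3} → 90°, tetrahedron {3,3} → ~70.53°, icosahedron {3,5} → ~138.19°
```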

CommandFlow – Another Approach to (RobotLegs) Asynchronicity

In this post I present another first draft of an extension for RobotLegs, my RobotLegs Flow Extension, which seeks to cope with problems surrounding asynchronicity in command-level code.

In my last blog post, I argued that RobotLegs is poorly equipped to cope with asynchronous processes, and offered an extension to RobotLegs that seeks to solve this problem, the “Async Processes Extension”. This extension follows in the footsteps of other interesting libraries, such as Shaun Smith’s Oil Extension, among many others.

Reflecting about Processes

Since I wrote that code and blog post, I have been reflecting on my approach. I have a few problems with it:

  • RobotLegs is fundamentally lightweight. It is my framework of choice because it’s hardly a framework at all. The Processes extension adds a lot of conceptual overhead and does a lot of work under-the-hood.
  • It separates the concept of a process from the concept of a command, but it can often be the case that previously synchronous code can become asynchronous (for example: when moving from mocked data to live data), or the other way (for example: once data is pre-loaded that was previously loaded just-in-time). Refactoring between commands and processes seems more trouble than it’s worth.
  • The under-the-hood part of the Process class means that it’s impossible to duck-type a Process. Duck-typing is good insofar as you know that a framework which accepts duck-typeable objects has minimal dependencies.

These criticisms of the library led me to think about a more lightweight approach.

The Central Premise of Process

The central premise of my critique of asynchronous code is that this sort of code is bad because the Command contains both its own logic and sequencing logic:

// pseudo-code!

class Context
{
	signalCommandMap.map(runSecond, SecondCommand);
	signalCommandMap.execute(FirstCommand);
}

class FirstCommand
{
	[Inject]
	public var runSecond:Signal;
	
	private var process:SomeProcess;
	
	public function execute():void
	{
		process = new SomeProcess();
		process.run().addOnce(onComplete);
	}
	
	private function onComplete():void
	{
		runSecond.dispatch();
	}
}

It would be preferable to abstract the sequencing logic, so that the FirstCommand can just report that it has completed what it needs to do without having to know what’s next.

I have often found that you end up drawing diagrams for code written this way with each command in a box with arrows between them. I want a class that encapsulates that diagram, ensuring that the individual commands are truly agnostic with respect to their context in the application.

A Different Approach

If we abandon the notion of a Process, then how can we retain this separation? My second approach has been to abstract the sequencing logic into a class, called CommandFlow. The intention is to use it like this:

// pseudo-code!

class Context
{
	injector.mapClass(CommandFlow, CommandFlow);
	
	signalCommandMap.execute(InitCommand);
}

class InitCommand
{
	[Inject]
	public var flow:CommandFlow;
	
	public function execute():void
	{
		flow.push(FirstCommand);
		flow.push(SecondCommand);
		flow.next();
	}
	
}

class FirstCommand
{
	[Inject]
	public var flow:CommandFlow;
	
	private var process:SomeProcess;
	
	public function execute():void
	{
		process = new SomeProcess();
		process.run().addOnce(onComplete);
	}
	
	private function onComplete():void
	{
		flow.next();
	}
}

The command sequencing is abstracted into InitCommand, so that FirstCommand doesn’t need to know anything more than that it is part of a sequence. CommandFlow is a helper that allows asynchronous command sequencing to take place with minimal impact on the commands themselves.

Each CommandFlow injected into a command is a separate instance, so nested command logic is possible if required. For example:

// pseudo-code!

class Context
{
	signalCommandMap.execute(InitCommand);
}

class InitCommand
{
	[Inject]
	public var flow:CommandFlow;
	
	public function execute():void
	{
		flow.push(FirstCommand);
		flow.push(LastCommand);
		flow.next();
	}
}

class FirstCommand
{
	[Inject]
	public var flow:CommandFlow;
	
	public function execute():void
	{
		flow.push(SecondCommand);
		flow.push(ThirdCommand);
	}
}

In this structure, FirstCommand will trigger SecondCommand and ThirdCommand before LastCommand is executed.

Payloads can be passed in explicitly through the push method, as CommandFlow exposes this method:

public function push(command:Class, ...args):Boolean;

A working demo of the code-in-action is provided in the github repository here.
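As a sketch of the mechanics, the queueing behaviour described above might look like this (TypeScript rather than AS3, with the flow passed through each command’s constructor in place of dependency injection; the names are hypothetical and the repository’s implementation differs):

```typescript
// a command is anything constructible whose instances expose execute()
type CommandClass = new (...args: any[]) => { execute(): void };

class CommandFlow {
  private queue: Array<{ type: CommandClass; args: any[] }> = [];

  // queues a command class with an optional payload,
  // mirroring push(command:Class, ...args):Boolean from the post
  push(type: CommandClass, ...args: any[]): boolean {
    this.queue.push({ type, args });
    return true;
  }

  // shifts the head of the queue, constructs it and executes it; each command
  // calls flow.next() when its own (possibly asynchronous) work is done
  next(): void {
    const entry = this.queue.shift();
    if (entry === undefined) return; // queue exhausted: the flow is complete
    new entry.type(this, ...entry.args).execute();
  }
}
```

Note that this toy gives every command the same flow instance; as described above, the real extension hands each command its own CommandFlow so that nested sequences run ahead of the outer queue.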

Notes

At the moment it is not clear how to handle commands being pushed into a CommandFlow once the flow has been started with a next() call.

If the CommandFlow approach is to be useful, I will need to add branching into the CommandFlow class, and it could quickly become very difficult to maintain. Careful thought will be needed to cope with this.

After my previous post, Camille Reynders made several good criticisms. One important one is how to handle failures. I think that this might be handled elegantly through branching, or possibly through some CommandFlow.error method that is a general ‘abandon-ship’ method. I’m not sure yet how to implement this, since I’m not yet committed to using this sort of approach to solving asynchronicity problems.

Brian Heylin also pointed me towards some interesting resources that led me to adjust my Notices classes (my implementation of the Signals concept) to expose what I am calling a ‘Future’; essentially a single-dispatch Signal/Notice, such that if you bind to it post-dispatch, the bound method is immediately called with the data that was originally dispatched.

Last Thought

I am keen to emphasise that these ideas are not complete; they need refining, tweaking and using. Perhaps I have gone down another blind alley? Perhaps the Processes idea is better after all? Or perhaps I’m misunderstanding something fundamental about how to wire up applications with asynchronous functionality that you can enlighten me about?

A blog is for nothing if not shared learning. If you have any comments (positive or negative) then I would really appreciate hearing them!

Asynchronous Processes and RobotLegs

Asynchronous processes are a common feature of ActionScript applications: we often need to initiate some asynchronous process, wait for a response and handle it. We have good language tools and design patterns for solving these sorts of issues.

RobotLegs is an MVCS framework that aims to decrease inter-dependency between different parts of code, so that code is more robust and more reusable. It does this through a combination of dependency injection and the model-view-controller (+services) design pattern. The ‘controller’ portion of RobotLegs’ MVCS implementation is achieved by offering coders a simple way of implementing the Command pattern and of binding commands to events that can be dispatched from other areas of the code.

Problematically, commands are stateless and synchronous. They are not wired up to handle asynchronicity under the hood. “Async Commands” amend the Command structure to attempt to solve this problem, but they still contain architectural limitations that force us to restrict the way we code to the potential of the framework. Using Commands for asynchronous processes often requires us to ‘double-wire’ between different application elements to solve the problem of passing data between classes.

What I am calling Processes are a redesign of the Async Command concept that seek to resolve these issues. They can be used in parallel with Commands to elegantly handle the problems of asynchronous code in RobotLegs.

The Async Pattern

The event model exists explicitly because we often need to wire up code to be triggered at an indeterminate future event. Fundamentally, client-side programming is about state and asynchronicity: events handle the asynchronicity. We handle them with this sort of pattern:

public function init():void
{
	var process:AsyncProcess = new MyAsyncProcess();
	process.addEventListener(AsyncProcessEvent.COMPLETE, onProcessComplete);
	process.init();
}

private function onProcessComplete(event:AsyncProcessEvent):void
{
	var process:AsyncProcess = event.process;
	process.removeEventListener(AsyncProcessEvent.COMPLETE, onProcessComplete);
	
	parse(process.data);
}

Signals improves upon this structure by removing the need for event classes and by offering added features, like automatically removing listener functions:

public function init():void
{
	var process:AsyncProcess = new MyAsyncProcess();
	process.completed.addOnce(onProcessComplete);
	process.init();
}

private function onProcessComplete(process:AsyncProcess):void
{
	parse(process.data);
}

This pattern is simple but powerful: create an object to tokenise or act as delegate to the process, bind to some generic event/signal defined on the token, then initialise the process. I take the signal implementation to be a substantial improvement on the original.
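For readers unfamiliar with Signals, the addOnce mechanics relied on above can be sketched in a few lines (a TypeScript toy, not the real as3-signals API):

```typescript
type Listener<T> = (payload: T) => void;

class Signal<T> {
  private listeners: Array<{ fn: Listener<T>; once: boolean }> = [];

  // add keeps the listener across dispatches
  add(fn: Listener<T>): void {
    this.listeners.push({ fn, once: false });
  }

  // addOnce removes the listener automatically after the first dispatch
  addOnce(fn: Listener<T>): void {
    this.listeners.push({ fn, once: true });
  }

  dispatch(payload: T): void {
    const current = this.listeners.slice();
    this.listeners = this.listeners.filter(l => !l.once); // drop one-shot listeners
    for (const l of current) l.fn(payload);
  }
}
```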

The Async Pattern across different objects

A more complex case emerges if one class wants to launch a process in another class and retrieve data from its result. Ideally we want to keep the same three-step process.

class Model
{
	public var service:Service;

	public function init():void
	{
		service.completed.addOnce(onServiceComplete);
		service.init();
	}
	
	private function onServiceComplete(process:AsyncProcess):void
	{
		parse(process.data);
	}
}

interface Service
{
	function get completed:Signal;
	
	function init():void;
}

Unfortunately, this pattern is now insufficient, because two different objects can trigger an asynchronous process on the same service at overlapping times (such that the second is triggered before the first response arrives).

class InitModelsCommand
{
	public var a:Model;
	public var b:Model;

	public function execute():void
	{
		a.init();
		b.init();
	}
}

In this scenario, if a and b both call init() in turn, then b will receive the data intended for a, then remove its listener before its own data is retrieved (assuming that a’s data is returned first; otherwise vice-versa).

This problem comes about because the service exposes a single Signal for multiple requests. There are two options to remedy this limitation:

// option A - the calling service tells the response where to go
interface Service
{
	function init(signal:Signal):void;
}

// option B - the service generates a response signal on a per-call basis
interface Service
{
	function init():Signal;
}

There’s not a lot between the two solutions, but I prefer Option B because it keeps the responsibilities for creating and managing delegates to the asynchronous process in the same place as the process itself.
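To make Option B concrete, here is a sketch of a service that generates a response signal per call (TypeScript, with a toy one-shot signal standing in for a real Signal; all the names are illustrative):

```typescript
type Callback = (data: string) => void;

class OneShotSignal {
  private listeners: Callback[] = [];
  addOnce(fn: Callback): void { this.listeners.push(fn); }
  dispatch(data: string): void {
    const pending = this.listeners;
    this.listeners = []; // one-shot: listeners are cleared after dispatch
    for (const fn of pending) fn(data);
  }
}

class Service {
  // one pending signal per in-flight request
  private pending: OneShotSignal[] = [];

  init(): OneShotSignal {
    const signal = new OneShotSignal();
    this.pending.push(signal);
    return signal; // each caller binds to its own signal, not a shared one
  }

  // stand-in for the asynchronous responses arriving, in request order
  respond(data: string): void {
    const signal = this.pending.shift();
    if (signal !== undefined) signal.dispatch(data);
  }
}
```

Because each caller binds to its own signal, the overlapping-requests problem above disappears.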

The pattern morphs to this:

class Model
{
	public var service:Service;

	public function init():void
	{
		service.init().addOnce(onServiceComplete);
	}
	
	private function onServiceComplete(process:AsyncProcess):void
	{
		parse(process.data);
	}
}

interface Service
{
	function init():Signal;
}

Here, be aware that if the signal dispatches synchronously within the init() method, then the response will never reach onServiceComplete. This can be handled by defining a signal that, once dispatched, will dispatch immediately to any methods that are subsequently added.
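Such a replay-on-late-bind signal might be sketched like this (a TypeScript toy with invented names; not code from any real Signals library):

```typescript
class Future<T> {
  private dispatched = false;
  private value!: T;
  private listeners: Array<(payload: T) => void> = [];

  addOnce(fn: (payload: T) => void): void {
    if (this.dispatched) {
      fn(this.value); // already resolved: call back immediately with the stored payload
    } else {
      this.listeners.push(fn);
    }
  }

  dispatch(payload: T): void {
    if (this.dispatched) return; // single-dispatch only
    this.dispatched = true;
    this.value = payload;
    const pending = this.listeners;
    this.listeners = [];
    for (const fn of pending) fn(payload);
  }
}
```

With a Future as the return type of init(), a listener added after a synchronous dispatch still receives the response.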

Asynchronous Processes in RobotLegs

The point of RobotLegs is to keep as few dependencies between different parts of the code as possible. Ideally a model and a service shouldn’t reference each other, so RobotLegs best practice would suggest you separate their interaction into a Command:

class InitModelCommand
{
	[Inject]
	public var model:Model;
	
	[Inject]
	public var service:Service;

	public function execute():void
	{
		service.completed.addOnce(onResponse);
		service.init();
		
		commandMap.detain(this); // if we don't do this for an async command everything could explode
	}
	
	private function onResponse(process:AsyncProcess):void
	{
		commandMap.release(this); // if we don't do this for an async command commands stay in memory
	
		model.parse(process.data);
	}
}

There are several problems with this implementation. Imagine that this command is called as part of an initialisation routine, and that once it completes, another command should be called:

class Context
{
	public function onStartup():void
	{
		signalCommandMap.mapCommand(InitModel, InitModelCommand);
		signalCommandMap.mapCommand(InitView, InitViewCommand);
	}
}

How should InitViewCommand be triggered? The only place where InitModelCommand is known to be complete is in its own onResponse handler:

class InitModelCommand
{
	[Inject]
	public var initView:InitView;
	
	...
	
	private function onResponse(process:AsyncProcess):void
	{
		model.parse(process.data);
		initView.dispatch();
	}
}

This is unsatisfactory. Without the added code, the command is neatly encapsulated and simple to describe and understand. Once this code is added, the command takes on two responsibilities: firstly to initialise the model, and secondly to kick off the InitViewCommand.

The stateless command design pattern is ill-suited for asynchronous processes, and the attempt to use them leads to poor code design.

An asynchronous process for RobotLegs

The simple cases at the start of this article are instructive if we try to design a preferred architecture from the ground up. While retaining the power of RobotLegs to remove class dependencies, we want to find a way to use the simple asynchronous process pattern to wire up processes.

These design considerations have led me to what I’ve tentatively called the robotlegs-async-process-extension. It’s very early stages, and as usual I’m sharing the code more to get feedback than as a polished piece of work, but it functions to provide what I think is a superior alternative to Async Commands.

The structure is roughly like this:

class Context
{
	public function onStartup():void
	{
		// a delegate is a new concept...
		processMap.map(InitModelDelegate, InitModelProcess);
		processMap.map(InitViewDelegate, InitViewProcess);
	}
}

class InitModelDelegate extends ProcessDelegate {}

class InitViewDelegate extends ProcessDelegate {}

class InitProcess extends Process
{
	[Inject]
	public var initModel:InitModelDelegate;
	
	[Inject]
	public var initView:InitViewDelegate;

	public function execute():void
	{
		initModel.execute().addOnce(onModelInited);
	}
	
	private function onModelInited():void
	{
		initView.execute().addOnce(onViewInited);
	}
	
	private function onViewInited():void
	{
		complete();
	}
}

Note the features here:

  • Like the SignalCommandMap extension, a different core structure, the ProcessMap, is defined, which allows Processes to be mapped in a way similar to Commands;
  • Rather than binding an event or signal to a Process, we bind a ProcessDelegate. We need a little extra functionality in the ProcessDelegate for the structure to work satisfactorily, but this is not functionally different from the SignalCommandMap extension;
  • When a delegate is executed, it passes back a signal that the calling object binds to in order to know when the process completes (the process may also send back data through this callback);
  • Processes can’t be duck-typed, because they need to inherit functionality from their base class, Process. Process exposes a complete() method that is called when the Process completes. If complete() is called inside execute(), then the Process becomes functionally equivalent to a Command.
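Pulling the bullet points together, the Process base class might be sketched like this (TypeScript rather than AS3, with a toy signal in place of Signals/Notices; this is my reading of the design, not the extension’s actual source):

```typescript
// minimal no-payload signal standing in for Signals/Notices
class Signal0 {
  private listeners: Array<() => void> = [];
  addOnce(fn: () => void): void { this.listeners.push(fn); }
  dispatch(): void {
    const pending = this.listeners;
    this.listeners = [];
    for (const fn of pending) fn();
  }
}

class Process {
  // callers bind to this to know when the process has finished
  readonly completed = new Signal0();

  execute(): void {
    // subclasses override this; if a subclass calls complete() synchronously
    // inside execute(), the Process behaves exactly like a Command
    this.complete();
  }

  protected complete(): void {
    this.completed.dispatch();
  }
}
```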

In fact, this is not quite the library as developed, because I have been using Notices rather than Signals; I have ported the examples across to Signals here, since they have much broader adoption. Notices are pared-down, simple implementations of Signals that allow you to expose very simple interfaces, like the SingularNotice interface. It’s a preference thing; if anyone would like the original implementation with Notices, I’m happy to publish it.

I hope that this gives you some food for thought with respect to asynchronous processes. I’d love to know what you think about the idea, the implementation, and the implied critique of RobotLegs. Let me know!