JavaScript MVC Frameworks

I’m going to start giving this blog some love over the coming days, including fixing the broken examples in my post about iframe content injection which has turned out to be quite popular.

But first, just wanted to give some love to Steve Sanderson’s summary post on the top MVC frameworks. I don’t really have any color to add on the topic right now. I’ve used Backbone and KnockoutJS before and like both. My eye used to do that stress twinge thing when working with frameworks that extend HTML with custom attributes to achieve declarative bindings, but it’s growing on me. I’m excited to start playing with AngularJS.

Anyway, here’s Steve’s post on the main JavaScript MVC frameworks.

Oregon Farmers Surprised to Find Fish in Fields

I’m surprised that they’re surprised. Given the natural ecological history of the valley floor as a large flood plain it makes sense. Perhaps the surprise is in the diversity of fish species.

It would be interesting to hear more about how these fish populations are affected by agricultural pesticides and herbicides.

Inject Content into a new IFrame

In online ad delivery we use the <iframe> tag quite a lot, and for good reason: communication between an iframe and its parent window is restricted when the two documents are served from different domains, which helps protect users as well as site publishers from malicious activity.

Iframes also provide advantages in the speed of delivery of an overall page. The parsing and display of an iframe’s content happens asynchronously to the rest of the parent window’s resources. This means that when an iframe is encountered during an initial page load it does not prevent the rest of the page from loading while the content of the iframe is loaded.

At AppNexus we’ve been developing some improved ad delivery mechanisms which leverage the advantages of iframes; in particular we’ve been leveraging dynamically created iframes to serve ad content after some user interaction. This has been a bit of an adventure, however, due to idiosyncrasies in how browsers handle the population of content in iframes and the rendering of their documents.

Let me illustrate.

Here’s how we would use JavaScript to create a new iframe, give it some properties, set its initial source, and add it to the existing document.

var newIframe = document.createElement('iframe');
newIframe.width = '200';
newIframe.height = '200';
newIframe.src = 'about:blank';
document.body.appendChild(newIframe);

I’ve set the src attribute to ‘about:blank’. This is generally a good practice because IE can behave strangely when an iframe is appended to the document without a source. Setting the src to about:blank makes this a “friendly” iframe, which means that JavaScript code running inside the iframe can interact with the DOM and JavaScript defined in its parent window, and vice versa. In other words, there are no restrictions on the communication and code execution between the iframe window and the parent window.

There are several different ways we can add content to our new iframe:

  • Change the src attribute to an external document URL
  • Use the DOM’s open(), write(), close() API to inject content
  • Use the javascript: URI scheme.

Here are examples of each approach.

I can set the src attribute to an external file:

newIframe.src = 'myIframeContent.html';

If this were my intent all along, then there’s no need to start with the ‘about:blank’ src. I can set this as the src directly.

This is not convenient if we don’t have access to the site’s domain and we want to maintain the “friendly” nature of the iframe. In such a case we can use the DOM API to open the iframe’s document and dynamically write content into it.

var myContent = '<!DOCTYPE html>'
    + '<html><head><title>My dynamic document</title></head>'
    + '<body><p>Hello world</p></body></html>';

newIframe.contentWindow.document.open('text/html', 'replace');
newIframe.contentWindow.document.write(myContent);
newIframe.contentWindow.document.close();

First, I’d like to call out that the contentWindow isn’t created until we’ve added the iframe to the parent window’s DOM (which we did in the first code block above).

This approach is adequate for dynamically injecting content into an iframe in most cases. However, if the content contains references to external <script> resources you can run into problems in IE.

Look at this example in Chrome, and you will see an iframe populated with content using the approach above, and it includes a reference to an external script which provides content for the iframe.

However, if you look at the same page in IE, you will see a JavaScript error and no content is displayed in the iframe.

Here’s the code for the iframe:

var iframe = document.createElement('iframe');
var ctnr = document.getElementById('ctnr');
var content = '<!DOCTYPE html>'
 + '<html><head><title>Dynamic iframe</title></head>'
 + '<body><div id="innerCtnr"></div>'
 + '<script type="text/javascript" src="external.js"><\/script>'
 + '<script type="text/javascript">'
 + 'document.getElementById("innerCtnr").innerHTML = externalVar;'
 + '<\/script>'
 + '</body></html>';

ctnr.appendChild(iframe);
iframe.contentWindow.document.open('text/html', 'replace');
iframe.contentWindow.document.write(content);
iframe.contentWindow.document.close();

external.js defines a variable which is referenced by the inline code being added in our document.write() call. In IE, external.js is loaded asynchronously relative to the parsing of the written content, therefore at the time that we invoke this:

document.getElementById("innerCtnr").innerHTML = externalVar;

externalVar has not yet been defined and causes a ReferenceError.

To work around this problem in IE we can make use of the javascript: URI scheme, the same mechanism that makes bookmarklets and scriptlets possible in all browsers. Instead of the open()/write()/close() calls, we use the following approach:

iframe.contentWindow.contents = content;
iframe.src = 'javascript:window["contents"]';

First, we assign the dynamic content to a variable on the iframe’s window object. Then we invoke it via the javascript: scheme. This not only renders the HTML properly, but loads and executes the scripts in the desired order.

Here’s an example using the javascript URI scheme.


Dynamic iframes are a resourceful way to add new content to a web page, but care and attention have to be paid to how those iframes are injected with content, to avoid errors caused by browser differences in loading and executing scripts added to those iframes.

HOWTO: Build node.js on a CentOS machine

I recently had the “pleasure” to build node.js on a relatively vanilla CentOS machine. I was a little surprised at the number of dependencies which were missing.

So, I wanted to take a little time to document the experience. I don’t know whether these steps will work for anyone else; that will depend on your initial CentOS set-up and which yum repositories you have access to.

My first surprise during installation was that I did not have the expected C++ packages installed, even though gcc was installed and available on my path. I had to grab the following package:

sudo yum install gcc-c++

Then, node.js began complaining about openssl not being available, even though it was. What node.js meant is that it wanted the SSL libs that are packaged in openssl-devel. So…

sudo yum install openssl-devel

After that, everything went as advertised. Here’s all the install steps together:

sudo yum install gcc-c++
sudo yum install openssl-devel
sudo yum install python
sudo yum install git 
git clone git://
cd node
git checkout v0.4
sudo ./configure
sudo make
sudo make install
curl | sh

My Dream iPad

There are rumors that Apple is on the verge of updating the Macbook Air. Will it be an A5 based Macbook Air?

I have been reluctant to get an iPad so far, mainly because it doesn’t offer enough differentiation from my iPhone, and it doesn’t enable me to do anything close to the amount of activity I can achieve with my laptop. It doesn’t justify the need to have one more device.

What I want is for the Macbook Air and the iPad to come together. There are a number of hybrid notebook/tablets on the market. One of the slickest is Dell’s Inspiron Duo.

I want the best of both worlds. I want a Macbook Air laptop…

  • That can run OS X and OS X applications
  • That has a multi-touch screen which can pivot into a tablet form
  • That can run iOS applications

This would be the ultimate road warrior machine for me as a telecommuting employee: lightweight, powerful, and diverse.

Reasons why it may not happen?

  • Apple would have to reconcile iOS and Mac OS X in some way, either by porting OS X to the A5 architecture and providing an emulator for OS X apps, or by providing an iOS emulator (something they kind of already have in the iOS SDK) for OS X running on Intel.
  • Apple would be cannibalizing some of their device stack. They’ve put a lot of marketing effort behind the iPad in trying to convince consumers that they need a laptop/desktop, an iPad, and an iPhone (plus an Apple TV and an Airport Express).

Reasons why it may happen?

  • It would be an AWESOME device. Apple enjoys building awesome devices.
  • The market will force Apple to make this convergence some day. As Android improves for tablet devices, you’re going to see more hybrid form factors that make Apple’s notebook entry more irrelevant than it already is. And as notebooks become more powerful, in terms of the things a variety of users can do on them, the Macbook and Macbook Pro product lines will begin to seem like bulky dinosaurs compared to the laptop market of the future.

I don’t think we’ll see this dream device this month, but we may see an A5-based Macbook Air running OS X which would probably be the first step. 

Non-Destructive Spies in Jasmine

Over the past few months I’ve been building out our UI test framework tools at AppNexus. This gave me the opportunity to research and play with the latest automated testing tools currently available for JavaScript.

I’ve been extremely impressed with the Jasmine BDD framework which is maintained by Pivotal Labs. We’ve begun incorporating it heavily in our development processes and it is helping our code base to mature.

Why Jasmine?

I love the BDD-style interface, both for writing tests, and the reporting output. Jasmine patterns itself after RSpec. Jasmine is designed to take advantage of JavaScript’s strengths. The assertion API has a fluent interface, and anonymous functions are leveraged throughout. Nested describe blocks make it easy to encapsulate repetitive test set-up actions into limited-scope utility functions, which keeps my tests DRY.

Jasmine is extensible. It’s easy to add additional matchers, and since we utilize jQuery heavily at AppNexus, the jasmine-jquery package is extremely helpful.

Jasmine provides a lightweight spy interface (i.e. mocks, test doubles). This is an essential tool for testing our JavaScript in isolation from its dependencies. These dependencies are usually an AJAX back-end. Using Jasmine’s spies I can fake calls to a non-existent server in a test environment. This reduces the complexity of my test environment, and allows me to run tests faster.
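As a sketch of the idea (plain JavaScript rather than Jasmine’s actual spy API, with a hypothetical fetchUsers helper), swapping the AJAX dependency for a fake that invokes the success callback synchronously with canned data lets the code under test run without any server:

```javascript
// Hypothetical module under test: takes an ajax function as a dependency,
// so a test can swap in a fake.
function fetchUsers(ajax, onDone) {
  ajax({ url: '/users', success: onDone });
}

// The fake calls the success callback synchronously with canned data --
// no server and no real network involved.
var received = null;
function fakeAjax(options) {
  options.success([{ name: 'Ada' }]);
}

fetchUsers(fakeAjax, function (users) { received = users; });
console.log(received[0].name); // 'Ada'
```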

One of the first issues I ran into with Jasmine was the destructive nature of its spy creation.

In Jasmine, you create a spy like so:

spyOn($, 'ajax');

In the above example, I’m replacing the $.ajax() method with a spy. The spy will record when $.ajax() is invoked in my application code, and what input was given to $.ajax() when it was called. After I’ve executed my application code, I can verify that it correctly invoked $.ajax().
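Conceptually, a spy is just a replacement function that records its calls (a plain-JavaScript sketch of the idea, not Jasmine’s actual implementation):

```javascript
// A minimal spy: a replacement function that records every call's arguments.
function makeSpy() {
  var spy = function () {
    spy.calls.push(Array.prototype.slice.call(arguments));
  };
  spy.calls = [];
  return spy;
}

var ajaxSpy = makeSpy();
ajaxSpy({ url: '/users' });

console.log(ajaxSpy.calls.length);    // 1
console.log(ajaxSpy.calls[0][0].url); // '/users'
```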


In JavaScript, functions can have properties of their own. I can define a function like so:

function sayHello() {
	console.log('Hello!');
}

sayHello.makeGuestsComfortable = function() {
	console.log('Please sit down');
};
In this example, the sayHello() function has a method called makeGuestsComfortable().

Here’s a Jasmine spec that spies on sayHello():

describe("Greeter", function() {
   it('says hello', function() {
	spyOn(window, 'sayHello');
	sayHello.makeGuestsComfortable();
   });
});

But this would fail with something close to the following error: TypeError: Object function () { … } has no method ‘makeGuestsComfortable’. This is because the spyOn() call destroyed the properties of the sayHello function.
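The mechanism is easy to see outside Jasmine (a plain-JavaScript sketch; the function names are illustrative): a replacement function starts with none of the original function’s own properties unless we copy them across.

```javascript
// Why the spec fails: replacing a function drops its own properties.
function sayHello() { return 'hello'; }
sayHello.makeGuestsComfortable = function () { return 'Please sit down'; };

// A naive replacement loses the property...
var bareSpy = function () {};
var lost = typeof bareSpy.makeGuestsComfortable; // 'undefined'

// ...unless we copy the original's own properties onto the replacement.
var preservingSpy = function () {};
for (var key in sayHello) {
  if (sayHello.hasOwnProperty(key)) preservingSpy[key] = sayHello[key];
}

console.log(lost);                                  // 'undefined'
console.log(preservingSpy.makeGuestsComfortable()); // 'Please sit down'
```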

We can avoid this by wrapping our call to spyOn in a function which preserves the properties of the spied function.

describe("Greeter", function() {
  var niceSpy = function(obj, funcName) {
	var original = obj[funcName];
	var spy = spyOn(obj, funcName);
	for (var i in original) {
		if (original.hasOwnProperty(i)) spy[i] = original[i];
	}
	return spy;
  };

  it('says hello', function() {
	niceSpy(window, 'sayHello');
	sayHello.makeGuestsComfortable();
  });
});

Now the test will pass because the makeGuestsComfortable() function has been preserved.

Jasmine is a fantastic testing tool, and part of a new breed of JavaScript testing frameworks that allow software developers to keep their JavaScript code tested. I’m really excited to see this project grow and help in the development of quality software.

Getting Productive, Staying Productive with AutoFocus 4 and The Pomodoro Technique

I’m continually evaluating ways to stay productive in both my professional life as well as my personal life. Over the past two years I’ve come to rely on Mark Forster’s AutoFocus system (v4) in conjunction with the Pomodoro Technique to manage my time effectively.

Adopting these processes has helped me become more productive and helped to reduce the amount of stress and drain I experience throughout my work day.

AutoFocus v4

AutoFocus really just starts with a long list of the things I need to do. The system itself is actually a method for managing that list.

Building the List

I start the process by listing out in no particular order the tasks I need to get done. Once I’ve listed out everything I can think of I draw a line across the bottom of the list. As I think of new items to put on the list they are written below the line.


Moving through the List

I start my day by reading through each of the tasks above the line. I don’t stop until I’ve read through them all. I then pick one of the tasks and work on it. If I read through the whole list and cannot start on any of the tasks, then I read through the list of tasks below the line and start on one that is ready.

If I read through the list of tasks above the line and don’t work on any of them, then I highlight them. Next time I read through the list above the line I either have to work on a highlighted item or cross it off and either leave it off, or add it onto the bottom of the list.

Once all items above the line have been worked on then a new line is drawn at the bottom of the entire list and the process repeats itself.

In addition to this workflow, here are some extra tips that I’ve found extremely helpful:

  • Always read fully through the list before starting on a task
  • After completing a task, always come back and read through the list
  • Keep separate lists for work and for home/personal projects. Don’t allow one to interfere with the other
  • If I find myself continually crossing off a task and re-adding it (and it’s not a recurring task like ‘Check e-mail’) then either this task is too large in its current form and needs to be broken down, or it’s not high enough priority to be on the list.

I wouldn’t use AutoFocus for team project management. And I wouldn’t use it for long-term planning. This is more of a day-to-day management tool. Items you’re unlikely to find on my list are: Buy a vacation home. Manage retirement. Start my own business. These are all long-term goals or plans that are composed of hundreds and thousands of tiny tasks which AutoFocus can help me manage.

The Pomodoro Technique

When I worked at Adchemy one of my co-workers turned me on to The Pomodoro Technique which is popular among pair programmers as a way of time-boxing programming sessions to avoid burnout. But it’s equally effective working on your own.

The technique is very simple. It simply dictates that you work for 25 minutes focused exclusively on a certain task. After that 25 minutes is up, you take a 5 minute break. This can be a 5 minute physical and/or mental break.

A good pomodoro is a little tricky to pull off. First I should note that I really only use Pomodoro for tasks at the computer. I don’t use it for gardening. Since I only use the technique at the computer, it’s nice to have an app for it. This is my favorite.

It has a lot of configurable options, and it even has some statistics to show how effective I’ve been throughout the day.

Here are some tips I recommend for a successful pomodoro:

Close all programs on your computer that aren’t essential to the task at hand, especially anything that’s likely to distract you including non-essential chat windows and e-mail. I do leave my IM connection open, but I will generally sign out from non-work accounts or mark myself busy on those accounts to minimize their distraction.

E-mail is a constant attention drain. I generally close my e-mail client during a pomodoro and open it back up during a pomodoro break to catch up.

Reset the pomodoro when you’re distracted. If you do find yourself getting distracted, either by your own wanderings, or the outside world intruding, then simply reset the pomodoro and restart later.

Honor the 5 minute break. One of the seemingly more annoying things about the Pomodoro Technique is the arbitrary time limit. What about when a pomodoro ends right when I’m in the middle of typing a line of code? Well, I find the best thing to do is finish the immediate train of thought I’m on and then start my break.

The time limit is not as arbitrary as it seems: there are studies which put the average sustained attention span of a healthy adult at around 20-30 minutes.

Despite the annoyance of having to take a break in the midst of working on a project, I find that if I stick with a strict pomodoro schedule throughout the day I am less mentally drained at the end of it. I also notice less of a decrease in my mental capacity as the day goes on.

Use the break for mental and physical respite. The first thing I do when a pomodoro ends is I usually refill my water glass or coffee cup. Getting away from the computer and moving the legs is a good physical and mental break. I usually use the remainder of the break to peruse e-mail or my news reader before starting in on another round. Since I work from home I also occasionally use the breaks to do some brief physical exercise, a few push-ups, sit-ups or deep stretches help me maintain some semblance of decent posture and increase blood flow to the brain. Also, if the sun’s out and it’s warm I’ll usually step out my backdoor to get a little vitamin D intake.

If I ignore the pomodoro technique and work on a project until I can’t see straight my hunch is that the final several hours of that work were largely distracted and unproductive.

AF4 + Pomodoro

So how do I use these techniques together? You may have already guessed it. For most tasks on my list I devote a pomodoro to them. If I finish the task before the pomodoro is up then I start on a new task and take a break once the pomodoro is completed.

If a task requires more than a full pomodoro, then I usually put a checkmark next to the task for each pomodoro. If I spend more than four pomodoros on a task then I consider breaking up the remainder of the work into new tasks. That’s a sign that the task I have written down is more involved than I initially anticipated.


I hope you find one or both of these techniques useful in your own life. If you find tips or tricks which improve your productivity, please share them!

Keeping Things in Sync with Syncer

At almost every company where I’ve worked as a developer I’ve had both a personal work computer and a development machine. The development machine may be a virtual machine sitting in a datacenter somewhere, or it may be a Dell desktop tucked under my desk next to the file cabinet.

In all cases, I’ve preferred to work on my code locally and sync it with my development machine for testing. This allows me to write code in an off-line mode, and allows me to take advantage of many of the bells and whistles of my IDE.

The process of keeping files in sync between my local and remote computers has always been different, and never ideal… until now.

One of my co-workers recently clued me in to Mac OS X’s File System Events API. This is a low-level OS API that emits events corresponding to very common actions on files, like saving, opening, and closing.

I did some hunting and found this great Ruby script which takes advantage of the API. When the script is running it listens for save events in the current working directory tree and syncs the entire directory tree to a remote location using rsync.

I made some improvements to the script so that it can take command-line options like username, hostname, and a remote directory location. The updated script is available in its own repository from GitHub. Feel free to grab Syncer and play with it.
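The core of the idea can be sketched in a few lines (JavaScript here rather than the Ruby of the actual script; the option names and rsync flags are illustrative, not Syncer’s real interface): build the rsync invocation from the command-line options, then run it on each save event.

```javascript
// Illustrative sketch: assemble an rsync command from user/host/directory
// options, the way a Syncer-style tool would before each sync.
function buildRsyncCommand(opts) {
  var dest = opts.user + '@' + opts.host + ':' + opts.remoteDir;
  // -a: archive mode, -z: compress; trailing slash syncs directory contents.
  return ['rsync', '-az', '--delete', opts.localDir + '/', dest].join(' ');
}

var cmd = buildRsyncCommand({
  user: 'deploy',
  host: 'devbox',
  localDir: '.',
  remoteDir: '/home/deploy/src'
});
console.log(cmd); // rsync -az --delete ./ deploy@devbox:/home/deploy/src
```

In the real tool this command would be executed by the file-system-events listener each time a save event fires in the watched tree.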

As it stands right now Syncer makes two assumptions about your operating environment:

  1. You have an ssh-agent running which allows you to connect to your remote server without having to type a password every time. If you don’t already have this, read this great tutorial on ssh-agent. It will tell you everything you need to know.
  2. The script takes advantage of growlnotify, a command-line interface for Growl. growlnotify is an extra in the Growl distribution. So, you need to install it separately.

If you find this script useful let me know.

View Files Beginning with a ‘.’ in Eclipse

I recently rebuilt my Eclipse installation on Eclipse Helios and set up a new workspace. In the process, I lost one of my long-time PHP Development Tools (PDT) settings that lets me view files beginning with a . in the PHP development perspective’s file browser.

It took me a while to find the setting again. So, I thought I’d blog about it in case any others out there are having trouble finding this.

In the PHP Explorer view, click on the inverted triangle in the upper-right corner of the pane to reveal the view’s contextual menu.

The .* resources item may be in the menu. If it is, click on it to remove the filter. If it isn’t then select the ‘Filters…’ option.

Uncheck the .* filter in the Filters… dialog window.

Now you should be able to browse and edit .htaccess files in the PDT perspective.

Bill Gates and the Public Schools Pay Structure

This week Bill Gates and Secretary of Education Arne Duncan argued for merit-based rewards for public school teachers, and criticized a system which gives automatic pay increases to teachers who have earned master’s degrees in education, regardless of whether the degree improves their teaching.

Obviously teachers’ unions are concerned about the criticism, as their task is to protect teachers’ interests and welfare. What I see missing in the debate is the question of what drives education institutions to reward master’s degrees in the first place.

My assumption is that there is a wide variety of young teaching candidates coming out of four year universities across the country. Some of these candidates may not have had education as their primary focus of study. A master’s degree in education would help solidify their training and make them stand out in a crowded job market.

Additionally, teachers are chronically underpaid. A two year master’s program that can be done at night and on the weekends may be a good investment for most teachers who want to get paid a reasonable wage for their work.

Gates and Duncan are probably right that for our public education system to be successful there needs to be an emphasis on merit-based performance rewards, although determining the nature and structure of such rewards is a thorny issue.

But, what is missing in this recent discussion is an interrogation of why this problem exists in the first place. We need to determine what pressures are causing school districts to reward master’s degree trained teachers more, regardless of experience or ability.

If master’s programs aren’t turning out better candidates, why are districts rewarding them? If master’s programs ARE turning out better candidates, then what is wrong with our bachelor’s degree system for teachers that is turning out sub-par candidates?