RequireJS 2.0 Delayed Module Evaluation and Google Maps

The RequireJS behavior changed from version 1.0 to 2.0: module dependencies are now only loaded after an explicit require() call, or if they are in the dependency tree of one of the require’d modules. That means the module factory (the define callback) won’t execute unless it is needed. That is a huge win for many reasons: less work for the JS engine, and the code behaves the same before and after the build. It also makes it possible to create aliases to complex modules without triggering a download.

On my current project I have ~15 JS widgets that can be placed on any page. The amount of JS required by each widget is small, so it makes sense to bundle all the JS into a single file for production (<30KB minified + gzipped, excluding jQuery) instead of loading things on demand (a single request has better performance than multiple requests in many cases).

I will show a very simple technique I used on the project to create an alias to the Google Maps API, since I think it can be useful to other people in different contexts as well.

Read more…


Node.js as a build script

There are a lot of build tools that cover specific use cases and/or try to cover as many scenarios as possible; among the most famous are make, rake, Ant and Maven. I’m going to talk about why I’ve been favoring plain Node.js scripts as my “build tool” and how to do some simple things. This workflow may not be the best one for you and your team; understand the reasoning behind it and pick the tools based on your needs and preferences.

Read more…


FTP protip: move files instead of deleting

Lately I’ve been favoring SSH to upload/delete files from a server since it’s faster and I feel it gives me more control/power, but I still use a regular FTP client for most of my daily work (since I don’t have SSH access to all servers). This is just a very basic tip that some people may not be aware of, and it can boost productivity and reduce downtime during updates.

The Workflow

  1. Upload new files to a _swap directory.
  2. Move current files to a _backup directory.
  3. Move new files from _swap to the parent directory.
  4. Test live files.
  5. Delete files inside the _backup directory.
  6. Win one free internet.


The main reason is that moving files over FTP is way faster than deleting them, especially if you are moving the files to a parent directory. It can reduce the downtime from a couple of minutes to a few seconds, depending on how many files you are updating.

Another reason is that you will have a backup of the current files on the server, so you can easily revert the changes in case anything goes wrong. You should also keep all source files in a version control system and make updates when the site has as few users as possible (usually from 9pm to 9am) to reduce the chance of ruining everything…

That’s it!


Generating “non-biased” random integers

If you know number theory or took algorithms classes in college you can probably skip this post. I studied Graphic Design, so I had to learn these things by myself, and I will try to explain it using as little math as possible. I know there is a lot of info about this subject on the internet and in algorithms books, but today I spent a couple of minutes searching for it and most of it is too hard to understand if you are not a mathematician (which, BTW, I’m not). So after giving up on the hard reading I decided to come up with my own solution to the problem; it may not be the most elegant solution, but it is at least a logical one and the results are acceptable.
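For reference, the standard textbook answer to this problem is rejection sampling: when mapping a large random integer down to a small range with modulo, the values at the top of the source range would slightly favor low results, so you simply re-draw any sample from that biased tail. This sketch shows that general technique, not necessarily the solution described in the post:

```javascript
// Unbiased random integer in [min, max] via rejection sampling.
function randomInt(min, max) {
  var range = max - min + 1;                    // how many distinct values we want
  var maxUint = 0x100000000;                    // 2^32, size of the raw integer source
  var limit = maxUint - (maxUint % range);      // largest multiple of range we accept
  var raw;
  do {
    raw = Math.floor(Math.random() * maxUint);  // uniform integer in [0, 2^32)
  } while (raw >= limit);                       // reject the biased tail and re-draw
  return min + (raw % range);                   // now every value is equally likely
}

console.log(randomInt(1, 6)); // a die roll: always between 1 and 6
```

Because `limit` is an exact multiple of `range`, every accepted `raw` maps to each output value the same number of times, so the modulo no longer introduces bias; the rare rejected draws just cost an extra iteration.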

Read more…


JavaScript chaining and Trainwreck.js

source images from public domain (1, 2) and edited by Miller Medeiros

Yesterday I released a piece of code that I wrote a long time ago. It’s called trainwreck.js, and its main purpose is to provide easy method chaining.

The main reason behind it is that a lot of people don’t really understand how chaining works and end up extending jQuery or underscore.js just to be able to use chaining…
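For anyone wondering how chaining works at all: each method simply returns the instance itself (`return this`), so the next call operates on the same object. `Counter` below is a made-up example of the pattern, not trainwreck.js’s API:

```javascript
// Plain-JS method chaining: every method ends with "return this".
function Counter() {
  this.value = 0;
}
Counter.prototype.add = function (n) {
  this.value += n;
  return this; // returning the instance is what enables chaining
};
Counter.prototype.reset = function () {
  this.value = 0;
  return this;
};

var total = new Counter().add(2).add(3).value;
console.log(total); // 5
```

That is the whole trick — no jQuery or underscore.js machinery is required to get a chainable API.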

Read more…