3Aug

When I first heard about ReactJS, CoffeeScript, Gulp and all these other new libraries I immediately thought “oh god why.” Over the past few months I’ve used each one separately, and learned about why they are cool or useful. Mostly the benefits involved less typing or some performance enhancement.

Slowly, I learned how to integrate all of these libraries and construct a usable workflow. Here’s my report on all of that.

Gulp/Browserify

Setting up gulp to use browserify is easy. You should, however, configure browserify to use the great `coffee-reactify` transform. Make sure to first `npm install coffee-reactify --save-dev`.
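Something along these lines should do it (the task name, entry/output paths and the vinyl-source-stream adapter are placeholders, so adjust them for your project):

// gulpfile.js
var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream'); // npm install vinyl-source-stream --save-dev

gulp.task('build', function() {
    return browserify({
            entries: ['./src/app.coffee'],
            extensions: ['.coffee', '.cjsx']
        })
        .transform('coffee-reactify') // compiles CoffeeScript/CJSX while bundling
        .bundle()
        .pipe(source('bundle.js'))    // adapt the bundle stream into a vinyl file for gulp
        .pipe(gulp.dest('./build'));
});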

SublimeText

Install the “Better CoffeeScript” and “ReactJS” packages. They now support the “.cjsx” extension out of the box. I went ahead and configured all my .coffee files to use that flavor of CoffeeScript. To enable this, click “View” -> “Syntax” -> “Open all with current extension as…” -> “ReactJS” -> “Coffeescript”. You may need to restart Sublime after this (I needed to).

Unit Testing with Mocha

First you should: `npm install coffee-react --save-dev`

Next you should set up your mocha.opts:
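Mine ended up looking roughly like this (this assumes coffee-react exposes a require hook at coffee-react/register, which should register the .coffee/.cjsx extensions with mocha):

--compilers coffee:coffee-react/register
--reporter spec
--recursive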

Finally, you’ll need to set up jsdom. You should start by running `npm install jsdom --save-dev`

Then set up a spec_helper like so:
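Here is a rough sketch of the idea in plain JavaScript; drop it in your test directory so mocha loads it alongside your specs. The jsdom API has changed between versions, so treat the exact calls as a starting point:

// test/spec_helper.js
var jsdom = require('jsdom');
var sinon = require('sinon');

// expose document and window as globals, which ReactJS needs
global.document = jsdom.jsdom('<html><body></body></html>');
global.window = global.document.parentWindow;
// some React code paths peek at navigator, so give it one too
global.navigator = { userAgent: 'node.js' };

beforeEach(function() {
    // attach a fresh sinon sandbox to @ (this) for every test
    this.sinon = sinon.sandbox.create();
});

afterEach(function() {
    // restore anything the test stubbed
    this.sinon.restore();
});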

This spec_helper attaches sinon to @ for every test. It also exposes document and window as globals, which ReactJS needs.

Now let’s write some tests:
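Here is a rough sketch in plain JavaScript of the kind of test I mean (in the actual workflow you would write it in CJSX). LoginButton, the stubbed Parse.User.logIn call and the “Welcome” text are made-up stand-ins, and the TestUtils calls assume a React version that exposes React.createElement:

// test/login_button_spec.js
var assert = require('assert');
var React = require('react/addons');
var TestUtils = React.addons.TestUtils;
var Parse = require('parse').Parse;
var LoginButton = require('../src/login_button');

describe('LoginButton', function() {
    it('shows a welcome message after a click', function() {
        // stub out a Parse method so no network request is made
        this.sinon.stub(Parse.User, 'logIn');

        // render into the jsdom document that spec_helper set up
        var component = TestUtils.renderIntoDocument(React.createElement(LoginButton));

        // Simulate a click and make sure some text is present
        TestUtils.Simulate.click(component.getDOMNode());
        assert(component.getDOMNode().textContent.indexOf('Welcome') !== -1);
    });
});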

In this test we stub out some Parse methods, create some fake elements using jsdom, Simulate a click and make sure some text is present. Everything a happy, healthy test could ever want, now with some great syntactic sugar.

Conclusions

The great thing about this workflow is that it is 100% browser independent. This makes your unit tests, well, more unit-test-like. You should of course create some sort of integration suite that tests your product in every browser.

If you need to integrate just one more library, it is likely someone has already created something, so just go explore! :) To see some more code, check out a new project I’m working on: https://github.com/parris/inventoryjs

17Sep

Preface: I am taking a class on recommendation systems. One of our assignments was to analyze some recommendation system. I chose Spotify!

Link to Recommender: https://play.spotify.com/discover

On this page, if you have Spotify and have listened to some music before, you will find a list of music and concert recommendations. The music recommendations come in a variety of forms and will be the focus of this analysis.

Related Artist/Album/Song Recommendations

The first type of recommender that you will see on Spotify is one that recommends music based on what you have previously listened to. This includes similar artists and new songs/albums by the same artists. Other times it recommends that you listen to something you haven’t played in a while. All of these recommendation types seem to operate in a similar manner, using a combination of aggregation and categorization-based filtering.

Domain: Music, Songs, Albums
Purpose: Education, learn about new content
Recommendation context: Users looking for something to listen to either during a listen or before a listen
Whose opinion: Experts; most of these recommendations seem to be based on time period and genre groupings. For example, if you listen to 80s rock it won’t recommend 2000s rock or vice versa unless you listen to rock from both time periods. These recommendations either come from some automated categorization algorithm or are perhaps curated by a team that manually dissects the genome of songs and artists.
Personalization level: Persistent, based on previous usage. In fact, they are so persistent that if you haven’t listened to something in a while it will recommend that you give that song or artists another listen.
Privacy and Trustworthiness: Low risk. Your discover page is not automatically shared and other people’s opinions/tastes do not appear with this particular recommendation type. It also seems that there are not many business rules involved. There does not seem to be a motive beyond discovery for this recommender.
Interfaces: Input: previous listens (implicit), favorites (explicit); Output: Recommendations about similar artists.
Recommendation algorithms: Content based filtering

Friend recommendations

The second type of recommendation that appears is caused by a user’s influencers’ implicit or explicit shares. Influencers include Facebook friends, musicians and artists that a user subscribes to on Spotify. Within this category there are a few recommendation types. One of these types is a pure aggregation of music that your influencers have publicly shared. A share can be an actual “share” behavior, or can be the result of a more subtle action like adding music to a public playlist. A second type of recommendation that is more implicit comes from influencers describing their own artist influencers. For instance, Spotify will tell me when a friend subscribes to some artist and tells me that maybe I’d be interested in subscribing to them as well.

Domain: Music, Songs, Albums
Purpose: Community
Recommendation context: Users looking for something to listen to either during a listen or before a listen
Whose opinion: Ordinary “PHOAKS” (people helping one another know stuff), like-minded listeners that you opted in to following.
Personalization level: Persistent and Ephemeral. Your friends stay persistent, but what they are interested in from week to week changes. The recommendations here also change as a result.
Privacy and Trustworthiness: Medium risk. Spotify has often been criticized for revealing too much data about what people listen to and are interested in. They do allow you to be more private about listening habits; however, it is very easy to publicize more information than intended on the platform.
Interfaces: Inputs: Opt-in following of a friend, musician or playlist (explicit); Outputs: Recommendations about what people are interested in. There may also be some filtering done here or at least sorted by best match.
Recommendation algorithms: Aggregation and personalized collaborative filtering

Conclusions

Spotify has a number of other features that funnel into the discover page. While listening to music on their desktop app they expose music that your friends are listening to (when they are publicly listening). This mechanism allows you to follow your friends immediately and listen to what they are listening to. At other times Spotify notifies you when you have a new follower and asks if you would like to reciprocate and follow back. Spotify also has a radio feature that operates in a similar manner to Pandora. All of these features seem to feed off of one another, continually increasing the amount of data Spotify is able to collect about a given listener.

12Sep

You remember that really awesome conditional comment in HTML that you used to prevent content from showing up in Outlook’s desktop client? Yeah, that doesn’t work anymore.

If you use:

<!--[if !gte mso 9 ]><!-->
content here
<!--<![endif]-->

You’ll feel great… but only for half a second, until all the Hotmail, Outlook.com and iCloud users complain that their emails aren’t rendering properly. Oh, and trust me, those people exist. The above code actually breaks the page quite magnificently in those webmail clients, and various parts of your email will go missing.

So what can you do? Well, I found this neat CSS property:

<div style='mso-hide: all;'>
</div>

It works similarly, but there are caveats, of course. This property hides the internal content, but things like height, padding and margin may still take up space, so you need to zero those out as well. Even then, it does not seem to work on all elements.
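So in practice it ends up looking more like this (the exact list of zeroed properties depends on the element, so test in your clients):

<div style='mso-hide: all; height: 0; padding: 0; margin: 0;'>
content here
</div>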

When is HTML email going to start sucking less? Why can’t we get all this information in one place? This is why we can’t have nice things, people!

7Oct

Lately I’ve been working on a client/server side validation library. I needed to chain a bunch of methods together, which meant modifying how the original functions were called without changing them. That led to a need to pass some variables into a function and still accept more parameters later. I already knew about currying, which I’ll loosely summarize as being able to supply a function’s arguments a few at a time instead of all at once. A couple of other techniques were needed to solve this.

First I learned about the apply and call functions (I know, I am late to the party…). In this case apply was extremely helpful: it let me set a context (change the value of this), collect some parameters together, and pass them into a function.
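For reference, here is a tiny illustration of the difference:

function greet(greeting, punctuation) {
    return greeting + ', ' + this.name + punctuation;
}
var user = { name: 'Sam' };
greet.apply(user, ['Hello', '!']); // "Hello, Sam!" -- `this` is user, arguments come from the array
greet.call(user, 'Hello', '!');    // same result, but arguments are passed individually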

Next I discovered partial application. This is more of a technique than anything else. In iz we have the following source code:

function validator_partial(fn) {
    //get all arguments except the first, which is the function name
    var args = Array.prototype.slice.call(arguments, 1);
    //pass the "value" in as the first parameter so that the user of this library doesn't need to
    //("value" is the value being validated, captured from the enclosing Iz scope)
    args.unshift(value);
    //return a new function
    return function() {
        //combine all arguments made to this function with the ones above
        var allArguments = args.concat(Array.prototype.slice.call(arguments)),
            //get the result
            result = validators[fn].apply(null, allArguments);
        //update this object
        if (!result) {
            if (typeof this.error_messages[fn] !== "undefined") {
                this.errors.push(this.error_messages[fn]);
            } else {
                this.errors.push(fn);
            }
            this.valid = false;
        }
        //return "this" to allow for chaining of methods
        return this;
    };
}

for (var fn in validators) {
    //for each function, call the partial and pass in the function
    if (validators.hasOwnProperty(fn)) {
        Iz.prototype[fn] = validator_partial(fn);
    }
}

At the bottom I am grabbing the validators and assigning them to the prototype of the Iz object. Before the validators get called, though, they go through a partial. This partial is a function that returns a function. The closure allows you to house some variables within it. When the returned function gets called it still has access to the outer one’s scope, which is how you are able to pass in parameters both before and after the function is called. On top of that, this system lets you pass in as many params as you need and simply forwards everything on.

With this method I was able to replace all of the first parameters (the ‘values’) with the value from the Iz object. This means less typing, which is always nice!
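To give a feel for the result, here is roughly the kind of chaining this enables (the validator names are illustrative; see the repo for the real API):

var check = iz(5).int().between(1, 10); // 5 is forwarded as the first argument to every validator
check.valid;  // true if every validator in the chain passed
check.errors; // messages pushed by any validators that failed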

6Aug
There are two problems with creating classes in JavaScript: 1) if you create classes in separate files you may need some system to tie the files together for speed purposes, and 2) you might lose your function names while debugging if you aren’t careful. In other words, you might be seeing tons of anonymous functions when you debug. There is a balancing act that goes on here.
How to define a class and methods typically:
Method 1:
function MyClass() {
}
MyClass.prototype.myfunc = function(){
}
Method 2:
function MyClass() {
   this.myfunc = function(){
   }
}
Method 3:
var MyClass = {
   myfunc : function() {
   }
}
In all 3 of these techniques you lose the name of the function in the debugger. Why? Well because your functions are technically anonymous.
function() {}
vs
function bob(){}
So how can we get the names to show up? Well… how about this:
MyClass.prototype.myfunc = function myfunc(){}
Yea, that named function expression actually won’t work everywhere unfortunately (older versions of IE mishandle them), but this will:
function MyClass() {
   function MyClass_myFunc() {
   }
   this.myFunc = MyClass_myFunc;
}
It is a bit repetitive, but this will help you find where issues are occurring in a debugger. Your debugger will now state the issue came from MyClass_myFunc() instead of (?). I believe some modern browsers do actually convert the (?) to the right function, but it seems to stop at some point in the call stack. So this is great!
Well, you might also wonder how to namespace. What is a namespace in JavaScript anyway? A namespace, like anything other than a “primitive” in JavaScript, is an object. It is defined like so:
var mynamespace = mynamespace || {};
This will either define a namespace named “mynamespace” as a new object literal OR use the existing mynamespace object in the current scope. So to add a class to our namespace we would just add our class to it:
function MyClass() {
   function MyClass_myFunc() {}
   this.myFunc = MyClass_myFunc;
}
mynamespace.MyClass = MyClass;
The problem with this is that we have both: MyClass the global defined and mynamespace.MyClass. To prevent this from occurring we wrap everything up like this:
var mynamespace = mynamespace || {};
(function() {
   function MyClass() {
      function MyClass_myFunc() {
      }
      this.myFunc = MyClass_myFunc;
   }
   mynamespace.MyClass = MyClass;
})();
Using the power of closures we have limited our globals to one: we ONLY have mynamespace exposed globally. Now, there is one other thing we should think about. What happens if our class depends on other classes? I’ve been looking for a good way to mimic Node.js’s CommonJS module system on the client side, and I think require.js will work well for this. Here is how we might want to define our files with require.js:
var mynamespace = mynamespace || {};
require(["jquery"], function($) {
   function MyClass() {
      function MyClass_myFunc() {
      }
      this.myFunc = MyClass_myFunc;
   }
   mynamespace.MyClass = MyClass;
});
Maybe? I think this looks good. It mimics the functionality we had earlier by limiting our global usage. You might be wondering what require is doing. Basically, it acts like a map: you specify where jquery is located, and if it isn’t already included it will get loaded. You can do this with all your classes in the app.build.js file defined by require. Note that this does, to some degree, break from our notion of joining files together to optimize a website. While I am still investigating this, it seems that once a dependency is loaded requirejs will not load it again. This means that require might actually test for the existence of the dependency and only load that file if it isn’t around. This should allow you to use require’s optimizer to pull in various portions of a project at load time and the rest later. Still reading, but this seems like the most optimal setup! :)
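For reference, that mapping lives in a paths config and the optimizer reads a build file along these lines (file names and paths are placeholders):
// main.js -- tell require where to find dependencies
require.config({
    paths: {
        jquery: 'libs/jquery' // maps the "jquery" id to libs/jquery.js
    }
});
// app.build.js -- read by the r.js optimizer to join files together
({
    baseUrl: '.',
    mainConfigFile: 'main.js',
    name: 'main',
    out: 'main-built.js'
})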
By the way, using backbone.js also forces you to adhere to the above conventions!