Preface: I am taking a class on recommendation systems. One of our assignments was to analyze some recommendation system. I chose Spotify!

Link to Recommender: https://play.spotify.com/discover

On this page, if you have Spotify and have listened to some music before, you will find a list of music and concert recommendations. The music recommendations come in a variety of forms and will be the focus of this analysis.

Related Artist/Album/Song Recommendations

The first type of recommender that you will see on Spotify is one that recommends music based on what you have previously listened to. This includes similar artists and new songs/albums by the same artists. Other times it recommends something you haven’t played in a while. All of these recommendation types seem to operate in a similar manner, using a combination of aggregation and categorization-based filtering.

Domain: Music, Songs, Albums
Purpose: Education, learn about new content
Recommendation context: Users looking for something to listen to either during a listen or before a listen
Whose opinion: Experts; most of these recommendations seem to be based on time period and genre groupings. For example, if you listen to 80s rock it won’t recommend 2000s rock or vice versa, unless you listen to rock from both time periods. These recommendations are either due to some automated categorization algorithm or perhaps curated by a team who manually dissects the genome of songs and artists.
Personalization level: Persistent, based on previous usage. In fact, they are so persistent that if you haven’t listened to something in a while it will recommend that you give that song or artist another listen.
Privacy and Trustworthiness: Low risk. Your discover page is not automatically shared and other people’s opinions/tastes do not appear with this particular recommendation type. It also seems that there are not many business rules involved. There does not seem to be a motive beyond discovery for this recommender.
Interfaces: Input: previous listens (implicit), favorites (explicit); Output: recommendations about similar artists.
Recommendation algorithms: Content-based filtering
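The time-period-and-genre grouping described above can be sketched with a toy scorer. To be clear, this is an illustration of content-based filtering in general, not Spotify’s actual algorithm; the tracks, attributes, and weights are all made up:

```javascript
// Toy content-based filter: score each candidate track by how many
// attributes (genre, era) it shares with tracks the user has played.
var history = [
    { title: "Sweet Child o' Mine", genre: "rock", era: "80s" },
    { title: "Livin' on a Prayer",  genre: "rock", era: "80s" }
];

var candidates = [
    { title: "Here I Go Again", genre: "rock", era: "80s" },
    { title: "Kryptonite",      genre: "rock", era: "2000s" },
    { title: "Take On Me",      genre: "pop",  era: "80s" }
];

function score(track, history) {
    // +1 for each listened track sharing the genre, +1 for each sharing the era
    return history.reduce(function(sum, played) {
        return sum +
            (played.genre === track.genre ? 1 : 0) +
            (played.era === track.era ? 1 : 0);
    }, 0);
}

var recommendations = candidates
    .map(function(t) { return { title: t.title, score: score(t, history) }; })
    .sort(function(a, b) { return b.score - a.score; });
// "Here I Go Again" (rock + 80s) outscores the tracks matching on only one axis,
// which matches the 80s-rock-vs-2000s-rock behavior described above.
```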

Friend recommendations

The second type of recommendation that appears is caused by a user’s influencers’ implicit or explicit shares. Influencers include Facebook friends, musicians and artists that a user subscribes to on Spotify. Within this category there are a few recommendation types. One of these types is a pure aggregation of music that your influencers have publicly shared. A share can be an actual “share” action, or can be the result of a more subtle action like adding music to a public playlist. A second, more implicit, type of recommendation comes from influencers describing their own artist influencers. For instance, Spotify will tell me when a friend subscribes to some artist and suggest that maybe I’d be interested in subscribing to them as well.

Domain: Music, Songs, Albums
Purpose: Community
Recommendation context: Users looking for something to listen to either during a listen or before a listen
Whose opinion: Ordinary “PHOAKS” (people helping one another know stuff): like-minded listeners that you opted in to following.
Personalization level: Persistent and Ephemeral. Your friends stay persistent, but what they are interested in from week to week changes. The recommendations here also change as a result.
Privacy and Trustworthiness: Medium risk. Spotify has often been criticized for revealing too much data about what people listen to and are interested in. They do allow you to be more private about listening habits; however, it is very easy to publicize more information than intended on the platform.
Interfaces: Inputs: opt-in following of a friend, musician or playlist (explicit); Outputs: recommendations about what people are interested in. There may also be some filtering done here, or at least sorting by best match.
Recommendation algorithms: Aggregation and personalized collaborative filtering


Spotify has a number of other features that funnel into the discover page. While you listen to music in their desktop app, they expose music that your friends are listening to (when they are listening publicly). This mechanism allows you to follow your friends immediately and listen to what they are listening to. At other times Spotify notifies you when you have a new follower and asks if you would like to reciprocate and follow back. Spotify also has a radio feature that operates in a similar manner to Pandora. All of these features seem to feed off of one another, continually increasing the amount of data they are able to collect about a given listener.


Remember that really awesome conditional in HTML that you used to prevent content from showing up in Outlook’s desktop client? Yeah, that doesn’t work anymore.

If you use:

<!--[if !gte mso 9]><!-->
content here
<!--<![endif]-->

You’ll feel great… but only for half a second. Only until all the Hotmail, Outlook.com and iCloud users complain that their emails aren’t rendering properly. Oh, and trust me, those people exist. The above code actually breaks the page quite magnificently in those webmail clients, and various parts of your email will go missing.

So what can you do? Well, I found this neat CSS property:

<div style='mso-hide: all;'>

It works similarly, but there are caveats, of course. This property hides the internal content, but things like height, padding and margin may still take up space, so you need to zero those out as well. And it doesn’t always seem to work on all elements.
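Putting those caveats together, a wrapper might look something like the sketch below. Treat it as a starting point, not a guaranteed fix: mso- prefixed properties are only read by Outlook desktop, the zeroed properties are just the ones mentioned above, and which ones you actually need varies by client, so test everywhere you care about:

```html
<!-- mso-hide is an Outlook-only property: this div is hidden in Outlook
     desktop but rendered by other clients. The zeroed margin/padding are
     meant to reduce the phantom spacing Outlook can leave behind. -->
<div style="mso-hide: all; margin: 0; padding: 0;">
  content here
</div>
```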

When is HTML email going to start sucking less? Why can’t we get all this information in one place? This is why we can’t have nice things, people!


Lately I’ve been working on a client/server-side validation library. I needed to chain a bunch of methods together, which meant modifying how the original functions were called without changing them. This led to a need to pass some variables into a function and still accept more parameters later. I already knew about currying, which I will summarize as being able to pass as many variables as you like into a function. A couple of other techniques were needed to solve this.

First I learned about the apply and call functions (I know I am late to the party…). In this case apply was extremely helpful: it let me set a context (change the value of this), collect some parameters together, and pass them into a function.
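For anyone else late to the party, a minimal sketch of the difference between the two (the describe function and song object here are made up for illustration):

```javascript
// call and apply both invoke a function with an explicit `this`;
// call takes arguments one by one, apply takes them as an array.
function describe(adjective, punctuation) {
    return this.name + " is " + adjective + punctuation;
}

var song = { name: "Bohemian Rhapsody" };

var a = describe.call(song, "epic", "!");     // arguments listed directly
var b = describe.apply(song, ["epic", "!"]);  // arguments as an array
// both produce "Bohemian Rhapsody is epic!"
```

apply is the useful one for forwarding: you can collect whatever arguments you were given into an array and pass them along untouched.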

Next I discovered partial application. This is more of a technique than anything else. In iz we have the following source code:

function validator_partial(fn) {
    //get all arguments except the first, which is the function name
    var args = Array.prototype.slice.call(arguments, 1);
    //pass the "value" in as the first parameter so that the user of this
    //library doesn't need to pass it explicitly
    return function() {
        //combine all arguments made to this function with the ones above
        var allArguments = args.concat(Array.prototype.slice.call(arguments)),
            //get the result
            result = validators[fn].apply(null, allArguments);
        //update this object
        if (!result) {
            //record a custom error message if one was set for this rule
            if (typeof this.error_messages[fn] !== "undefined") {
                this.errors.push(this.error_messages[fn]);
            } else {
                this.errors.push(fn);
            }
            this.valid = false;
        }
        //return "this" to allow for chaining of methods
        return this;
    };
}

for (var fn in validators) {
    //for each function, call the partial and pass in the function name
    if (validators.hasOwnProperty(fn)) {
        Iz.prototype[fn] = validator_partial(fn);
    }
}
At the bottom I am grabbing the validators and assigning them to the prototype of the Iz object. Before the validators get called, though, they go through a partial. This partial is a function that returns a function. The closure allows you to house some variables within it. When the returned function gets called it has access to the outer one’s scope, which is how you are able to pass in parameters both before and after the function is called. On top of that, this system lets you pass in as many params as you need and simply forwards everything on.

With this method I was able to replace all of the first parameters (the ‘values’) with the value from the Iz object. This means less typing, which is always nice!
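The whole pattern can be reproduced in miniature. Note that the checks object and its two rules below are hypothetical stand-ins for illustration, not iz’s real rule set:

```javascript
// Hypothetical rule set: each rule takes the value first, then extra params.
var checks = {
    minLength: function(value, n) { return value.length >= n; },
    startsWith: function(value, prefix) { return value.indexOf(prefix) === 0; }
};

function Check(value) {
    this.value = value;
    this.valid = true;
}

// Partial application: pre-bind the value as the first argument and
// forward whatever else the caller passes in later.
function partial(name) {
    return function() {
        var args = [this.value].concat(Array.prototype.slice.call(arguments));
        if (!checks[name].apply(null, args)) {
            this.valid = false;
        }
        return this; // enables chaining
    };
}

// Turn every rule into a chainable method on the prototype.
for (var name in checks) {
    if (checks.hasOwnProperty(name)) {
        Check.prototype[name] = partial(name);
    }
}

var ok = new Check("hello world").minLength(5).startsWith("hello").valid;
var failed = new Check("hi").minLength(5).valid;
```

Here the user never types the value again after the constructor; each chained call gets it injected as the first argument by the closure.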

There are 2 problems with creating classes in JavaScript. 1) If you create classes in separate files, you may need some system to tie the files together for speed purposes. 2) You might lose your function names while you debug if you aren’t careful. In other words, you might be seeing tons of anonymous functions when you debug. There is a balancing act that goes on here.
How to define a class and its methods, typically:

Method 1:

function MyClass() {}
MyClass.prototype.myfunc = function() {};

Method 2:

function MyClass() {
   this.myfunc = function() {};
}

Method 3:

var MyClass = {
   myfunc: function() {}
};
In all 3 of these techniques you lose the name of the function in the debugger. Why? Because your functions are technically anonymous:

function() {}     // anonymous
function bob() {} // named
So how can we get the names back? Well… how about this:

MyClass.prototype.myfunc = function myfunc() {};

Yeah, that actually won’t work everywhere, unfortunately (named function expressions are buggy in older versions of IE), but this will:
function MyClass() {
   function MyClass_myFunc() {}
   this.myFunc = MyClass_myFunc;
}
It is a bit repetitive, but this will help you find where issues are occurring in a debugger. Your debugger will now state that the issue came from MyClass_myFunc() instead of (?). I believe some modern browsers do actually convert the (?) to the right function, but that this stops at some point in the call stack. So this is great!
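You can verify this from a console: the function’s name property is what surfaces in stack traces. (The array-literal trick below just stops newer engines from inferring a name for the anonymous function; nothing else about it matters.)

```javascript
// The named inner function keeps its name; a truly anonymous one has none.
function MyClass() {
    function MyClass_myFunc() {}
    this.myFunc = MyClass_myFunc;
}

var instance = new MyClass();
var anonymous = [function() {}][0]; // array literal prevents name inference

// instance.myFunc.name is "MyClass_myFunc" (what the debugger shows);
// anonymous.name is "" (what shows up as "(?)" or "anonymous")
```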
You might also wonder how to namespace. What is a namespace in JavaScript anyway? Well, a namespace, like anything other than a “primitive” in JavaScript, is an object. It is defined:
var mynamespace = mynamespace || {};
This will either define a namespace named “mynamespace” as a new object literal OR it will use the existing mynamespace object in the current scope. So to add a class to our namespace we would just add our class to it:
function MyClass() {
   function MyClass_myFunc() {}
   this.myFunc = MyClass_myFunc;
}
mynamespace.MyClass = MyClass;
The problem with this is that we now have both a global MyClass and mynamespace.MyClass defined. To prevent this from occurring we wrap everything up like this:
var mynamespace = mynamespace || {};
(function() {
   function MyClass() {
      function MyClass_myFunc() {}
      this.myFunc = MyClass_myFunc;
   }
   mynamespace.MyClass = MyClass;
})();
Using the power of closures we have limited our globals to 1: we ONLY have mynamespace exposed globally. Now, there is one other thing we should think about. What happens if our class depends on other classes? I’ve been looking for a good way to mimic Node.js’s CommonJS module system on the client side, and I think require.js will work well for this. Here is how we might define our files with require.js:
var mynamespace = mynamespace || {};
require(["jquery"], function($) {
   function MyClass() {
      function MyClass_myFunc() {}
      this.myFunc = MyClass_myFunc;
   }
   mynamespace.MyClass = MyClass;
});
Maybe? I think this looks good. It mimics the functionality we had earlier by limiting our global usage. You might be wondering what require is doing. Basically, it is acting like a map: you specify where jquery is located, and if it isn’t loaded yet it will get loaded. You can do this with all your classes in the app.build.js file defined by require. Note that this does, to some degree, break from our notion of joining files together to optimize a website. While I am still investigating this, it seems that once a dependency is loaded, require.js will not load it again. This means that require might actually test for the existence of the dependency and only load that file if it isn’t around. This should allow you to use require’s optimizer to pull in various portions of a project at load time and the rest later. Still reading, but this seems like an optimal setup! :)
By the way, using Backbone.js also forces you to adhere to the above conventions!

Ok, so it has been a while since I’ve posted. It has been a busy last couple of years! I’ve been working at NetApp but just gave notice to start working somewhere new. I had a few abandoned side projects and I am now embarking on a brand new one. I’ve learned a ton. Mostly, I’ve learned what might make a project successful and what makes it fall apart. Today I am going to talk about a little game project we are putting on hold.

There was a project a few of my friends and I were working on where we decided to merge music and gaming. We had this epic plotline with awesome episodic content in mind, and various other great concepts. We even had a working prototype of the game engine with some programmer art and some actual artist content. We had plenty of concepts, but when it came time to tie things together and define deadlines, everything would fall apart. We ended up in endless development time for various animations. Promises of “it will take 4 hours” magically turned into 4 months. When it came down to it, it was too hard for a half-committed team to execute, and definitely too hard for some of the team’s first game.

I stepped back. I knew we had a talented team, but I really wanted to make a big impact at a smaller scope, so I went back to the drawing board. We needed to get our little team to start thinking smaller and simpler, so I floated the idea of putting our current project off in favor of something easier to execute. I proposed a standard shoot ’em up with a few twists.

The other thing I realized is that our team was missing a dedicated game designer: someone who would really drive the game mechanics and be a champion for awesome level and game design. I reached out to one of my old friends from the Game Development Club at SJSU. He grilled me initially (mostly about how I didn’t think the iPhone was all that special when it came out, which he scoffed at), but I think he liked the new premise for the game. I proposed a simple shoot ’em up targeted at the OUYA with timed music elements for combos and such. The convo took on a life of its own. We both proposed extremely stupid things and expanded on them to make them great. We thought about making this a purely cooperative game. We instantly considered latency issues and how they might impact the gameplay mechanics. We decided to go back, do some research, and mock some things up. I immediately opted to get an early developer OUYA, which should be coming in sometime by the end of the year, and in the meantime I’ll start developing a simple engine to get us started!

Starting over is somewhat disheartening, but I think it can be refreshing and freeing as well. I am not considering anything a failure. They were all good ideas and could all potentially become something with the right direction. They all taught me a little something, though, and aided in focusing and explicitly defining my intentions. There is a concept in software development called “coding by coincidence”: it occurs when you achieve your desired outcome but aren’t quite sure why. I now think that this rule could be altered for the business world as well. You should always be deliberate when designing a product. Magical spurts of popularity do occur, but you should know exactly why and how they occurred. It is possible to get lucky, but I believe intent will deliver more guaranteed results.