Extra Fox – A blog by Christopher Taylor

Opinionated Guidelines for Designing a Truly RESTful Web Application

Posted in Uncategorized by extrafox on January 6, 2014

It seems that every vendor and startup these days has a “REST API” that they are advertising. Yet, when I examine most of these APIs, they are clearly not RESTful. Most of them are really Remote Procedure Call (RPC) interfaces, but since they are using cacheable HTTP GET requests and other HTTP methods like PUT and DELETE, they call themselves RESTful. While there are advantages to designing RPC style interfaces based on RESTful principles, these APIs are missing out on the full power of a truly RESTful web application design.

Of course, not every web application should utilize REST, and that is fine. If you don’t need to build a truly RESTful web application, then I would direct you to Vinay Sahni’s great set of best practices [2] for building RPC interfaces that call themselves RESTful. In fact, I will be recommending many of the same practices where appropriate and will generally follow the same format for this article.

I have organized this document for both linear and random access. If you’re in a hurry, you can just start with the guidelines and follow the links for detailed explanations. For everyone else, I recommend starting at the top and reading to the bottom.

Guidelines

REST is a network-based software architecture based on specific design constraints

The authoritative source for the Representational State Transfer (REST) software architecture is Roy Fielding’s 2000 doctoral dissertation, “Architectural Styles and the Design of Network-based Software Architectures“. In it he defines a number of different network software architectures that are in common use. He then defines REST as a network software architecture created based on a process where [1],

a designer starts with the system needs as a whole, without constraints, and then incrementally identifies and applies constraints to elements of the system in order to differentiate the design space and allow the forces that influence system behavior to flow naturally, in harmony with the system.

The constraints Fielding identifies are,

  • Client-Server – Allows for a separation of concerns. In particular, allows components to evolve separately
  • Stateless – All state is handled on the client and necessary state is passed to the server with each request
  • Cache – Caches, often transparent to the client and/or server, can be used to improve performance
  • Uniform Interface – Simplifies architecture and improves visibility of interactions
  • Layered System – Allows for improved scalability, performance and security policy enforcement
  • Code-On-Demand – Optional constraint that allows for transfer of executable code

Of all of these constraints, Uniform Interface is probably the most misunderstood and the most fundamental to the REST architectural style. Because it is misunderstood, it is often one of the first constraints to get relaxed or ignored during web application design. The key properties of Uniform Interface are,

  • Identification of Resources
  • Manipulation of Resources through Representations
  • Self-Descriptive messages
  • Hypermedia as the engine of application state (HATEOAS)

Very simply, if you aren’t abiding by these constraints then your web application is not RESTful. It may still be useful, and you may have good justifications for the tradeoffs you make, but there is no point in calling it a REST architecture.

Further Reading: Roy T. Fielding, Architectural Styles and the Design of Network-based Software Architectures [1]

Avoid the tendency to build RPC interfaces and call them RESTful

Most of us come from a background of Object Oriented, Functional and/or Relational Database programming so, when we go to start building web applications, we tend to resort to the paradigms that we are familiar with. As a result, most APIs that claim to be RESTful are really RPC interfaces that use only some of the architectural features of REST. From Fielding [1],

What distinguishes HTTP from RPC isn’t the syntax. It isn’t even the different characteristics gained from using a stream as a parameter, though that helps to explain why existing RPC mechanisms were not usable for the Web. What makes HTTP significantly different from RPC is that the requests are directed to resources using a generic interface with standard semantics that can be interpreted by intermediaries almost as well as by the machines that originate services. The result is an application that allows for layers of transformation and indirection that are independent of the information origin, which is very useful for an Internet-scale, multi-organization, anarchically scalable information system. RPC mechanisms, in contrast, are defined in terms of language APIs, not network-based applications.

The constraints that tend to get relaxed the most are the Stateless and Uniform Interface constraints.

The Stateless constraint is often relaxed to pass session data, usually for the purposes of authentication. This isn’t the worst choice, but it does negatively affect scalability as Servers are required to manage the session state across a cluster.
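As a minimal sketch (the endpoint and token here are hypothetical), a stateless exchange carries everything the Server needs with each request, rather than establishing a server-side session:

```ruby
require 'net/http'
require 'uri'

# Hypothetical endpoint and token, for illustration only. Because the
# Server keeps no session, every request must carry the credentials
# and any other state the Server needs in order to process it.
def build_request(token)
  uri = URI('https://api.example.com/account')
  req = Net::HTTP::Get.new(uri)
  req['Authorization'] = "Bearer #{token}"
  req['Accept'] = 'application/ld+json'
  req
end

req = build_request('s3cr3t')
```

Nothing about the Client is remembered between requests, so any Server in a cluster can answer any request.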

The Uniform Interface constraint is almost completely ignored in nearly every so-called REST API out there. There are a lot of reasons for this and many of them are valid. To maintain this constraint, it is important to realize that HTTP already fully defines the interface for Server and Client communication. When semantics are encoded in URIs that need to be understood by Servers and Clients, then this constraint has not been met. When Representations are not Self-Descriptive, then this constraint has not been met. When allowable state transitions are not provided by the Server as links, then this constraint has not been met.

Further Reading: Roy Fielding, REST APIs must be hypertext-driven [18]

Resources are the key abstraction of a RESTful web application

I can’t really say it better than Fielding, so let’s start with this [1],

The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service (e.g. “today’s weather in Los Angeles”), a collection of other resources, a non-virtual object (e.g. a person), and so on. In other words, any concept that might be the target of an author’s hypertext reference must fit within the definition of a resource. A resource is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time.

Most of your design effort should be concentrated on identifying Resources and formalizing the Representations of these Resources. This is especially true in a RESTful web application, because the Uniform Interface constraint ensures that the interface to the Server is straightforward and doesn’t change from one REST application to another.

Keep in mind that Resources are nouns (not verbs) and they are coarse grained (not fine grained).

Maintain a separation of concerns between the Server and the Client

In the REST architecture, the Server and the Client have clearly defined responsibilities. From Fielding [1],

Separation of concerns is the principle behind the client-server constraints. By separating the user interface concerns from the data storage concerns, we improve the portability of the user interface across multiple platforms and improve scalability by simplifying the server components. Perhaps most significant to the Web, however, is that the separation allows the components to evolve independently, thus supporting the Internet-scale requirement of multiple organizational domains.

The division of responsibilities between Servers and Clients is another misunderstood principle underlying the REST architecture. I would even go so far as to say that most developers have the responsibilities reversed. This isn’t surprising, given that in most programming environments the developer builds applications that utilize APIs provided by libraries. In REST, however, the Server is the application. The Client provides views for the user to interact with that application.

Representations need to meet the Self-Descriptive and HATEOAS constraints

Representations are how all state is communicated in a RESTful web application. From Fielding [1],

Depending on the message control data, a given representation may indicate the current state of the requested resource, the desired state for the requested resource, or the value of some other resource, such as a representation of the input data within a client’s query form, or a representation of some error condition for a response.

In order to meet the Self-Descriptive constraint, the format of the Representation must support formal definition of semantics. XML and JSON-LD both have provisions for meeting this requirement; however, JSON-LD is much more developer-friendly.

The HATEOAS constraint is probably the constraint that is most unfamiliar to application developers. However, it becomes clearer once you understand the separation of concerns that Fielding has defined. Since the Server is responsible for managing the Identifier namespace (URIs), it is the Server that should define which URIs correspond to the different state transitions. Clients are responsible for presentation to the user and for handling user input, and should only concern themselves with manipulating Representations, not Identifiers.

Further Reading: Parastatidis, et al., The Role of Hypermedia in Distributed System Development [16]

Use JSON-LD as the primary Representation format

REST is an architecture that is in desperate need of some standardization around media types. Of course, the W3C has been working on the XML specifications for over a decade to meet the needs of REST. Unfortunately, XML has proven to be less popular than JSON for most developers. From Mark Lanthaler et al. [13],

JSON-LD is an attempt to create a simple method to not only express Linked Data in JSON but also to add semantics to existing JSON documents. It has been designed to be as simple as possible, very terse, and human readable. Furthermore, it was a goal to require as little effort as possible from developers to transform their plain old JSON to semantically rich JSON-LD.

With JSON-LD, one of the missing pieces falls into place, making it much more feasible to create truly RESTful interfaces. JSON-LD makes it possible to meet the Self-Descriptive and HATEOAS constraints while also opening up the power of the Semantic Web.
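As a sketch of what such a Representation might look like (the URIs are hypothetical; the terms come from the schema.org vocabulary): the @context gives every term a formal definition (Self-Descriptive), and the Server embeds the allowable next actions as links (HATEOAS).

```ruby
require 'json'

# A hypothetical order Representation. The @context maps terms to
# well-defined vocabulary IRIs, and the Server supplies the allowable
# next step (cancellation) as a link the Client can follow.
order = {
  '@context' => 'https://schema.org/',
  '@id'      => 'https://api.example.com/orders/1234',
  '@type'    => 'Order',
  'orderStatus' => 'OrderProcessing',
  'potentialAction' => [
    { '@type' => 'CancelAction',
      'target' => 'https://api.example.com/orders/1234/cancellation' }
  ]
}

doc    = JSON.generate(order)
parsed = JSON.parse(doc)
```

A Client that understands the media type and the vocabulary can interpret this document without any out-of-band API documentation.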

Further Reading: Mark Lanthaler and Christian Gütl, On Using JSON-LD to Create Evolvable RESTful Services [13]

Servers are responsible for managing Resource Identifiers

For HTTP based REST web applications, the Resource Identifier will be a URI. From Fielding [1],

REST uses a resource identifier to identify the particular resource involved in an interaction between components. REST connectors provide a generic interface for accessing and manipulating the value set of a resource, regardless of how the membership function is defined or the type of software that is handling the request. The naming authority that assigned the resource identifier, making it possible to reference the resource, is responsible for maintaining the semantic validity of the mapping over time.

The separation of concerns between Servers and Clients implies that the Client should not need to parse or construct URIs. The internal structure of a URI is an implementation detail of the Server and should be opaque to the Client. This doesn’t mean that URIs can’t have internal structure, only that this structure is not part of the published interface to the application. Instead, published interfaces should focus on constructing and modifying the Representations that are passed between Client and Server.
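To make this concrete, here is a Client sketch (with a hypothetical Representation) that follows a link the Server supplied rather than assembling the URI itself:

```ruby
require 'json'

# The Client never builds "/orders/1234/cancellation" from parts; it
# treats the Server-supplied URI as an opaque string and follows it.
representation = JSON.parse(<<~DOC)
  {
    "@id": "https://api.example.com/orders/1234",
    "potentialAction": [
      { "@type": "CancelAction",
        "target": "https://api.example.com/orders/1234/cancellation" }
    ]
  }
DOC

cancel = representation['potentialAction']
           .find { |a| a['@type'] == 'CancelAction' }
target = cancel['target']
```

If the Server later reorganizes its URI namespace, this Client keeps working unchanged, because it only ever dereferences what it was given.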

API version numbers do not belong in the URI

You will find conflicting opinions on the Web about whether version information for your web application should be included in the URI. When building an RPC style interface where the Server and Client are tightly coupled, assuring that both versions are synchronized becomes a primary concern. When building a truly RESTful web application, the need to maintain versions should be mitigated by adherence to the Uniform Interface constraint.

The expectation for a REST Resource Identifier is that the semantics of the Resource it identifies are maintained over time. Since a Resource as a concept is usually unrelated to the version of the application, the two should not be tied together in the URI.

You should be choosing REST because you need to build an “Internet-scale, multi-organization, anarchically scalable information system.” This means that your Representations need to be extensible and your application needs to be resilient to having different Client versions out in the wild. The Web itself is a perfect example of how this works in practice, which is not to say that it always works well.

Further Reading: Robbie Clutton, API Versioning [15]

Use HTTP Status Codes correctly

HTTP Status Codes are a principal kind of Control Data that should be set properly and returned with every request. The complete list of codes and full descriptions can be found in the HTTP/1.1 specification [10] and in RFC 6585 [11]. Here are a few that have particular significance for building RESTful web applications.

  • 200 OK – The request has succeeded
  • 201 Created – The request has been fulfilled and resulted in a new Resource being created. Should include a Location header indicating the URI of the created Resource.
  • 202 Accepted – The request has been accepted for processing, but the processing has not been completed. This is the proper response for asynchronous processes and, like 201, should include a Location header.
  • 204 No Content – The server has fulfilled the request but does not need to return an entity body. This is a good response to a successful DELETE call.
  • 400 Bad Request – The request could not be understood by the server due to malformed syntax.
  • 401 Unauthorized – The request requires user authentication.
  • 403 Forbidden – The server understood the request, but is refusing to fulfill it.
  • 404 Not Found – The server has not found anything matching the Request-URI.
  • 429 Too Many Requests – The user has sent too many requests in a given amount of time.

In addition to status codes, error conditions should contain an entity body that provides further details of the error that are useful for debugging and/or displaying to the user.

Further Reading: IETF, RFC 2616, Hypertext Transfer Protocol — HTTP/1.1 [10]

Provide a useful response for errors

Though HTTP Status Codes provide a basic set of codes for signalling errors, you will also want to make sure to provide a useful response from your application. Error responses should be well structured JSON-LD data just like any other Representation that you would communicate to the Client.
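As a sketch (the field names here are illustrative, not from any standard), an error Representation might pair the HTTP status code with both machine-readable and human-readable detail:

```ruby
require 'json'

# Illustrative error body: the status code signals the class of error,
# while the entity body explains what went wrong and how to fix it.
error = {
  '@type'  => 'Error',
  'status' => 400,
  'title'  => 'Validation failed',
  'detail' => 'The "quantity" field must be a positive integer.'
}

body = JSON.generate(error)
```

The Client can show `title` to the user and log `detail` for developers, without having to parse free-form text.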

Further Reading: Vinay Sahni, Best Practices for Designing a Pragmatic RESTful API [2]

Clients manipulate Representations and signal actions by following links

Now we’re getting to the meat of what it means for an application to be RESTful. Again, Fielding [18],

A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

Most APIs define semantics in developer documentation and expect developers to build clients that follow the API semantics. In REST, the media type should define the semantics. When a client receives a Representation as a given media type, it should follow the semantics defined by that media type. The interface to the server should be uniform, i.e. no special rules for parsing or accessing URIs.

Further Reading: PaySwarm Web Payments Specification [19]

Servers provide links within Representations to identify allowable transitions

Need I repeat that REST applications are link driven? No, then I’ll let Fielding do it [18],

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly.

Again, semantics are defined by the media type and this includes how to access other Resources as well as the types of actions that are allowed. These Resources and actions should be referenced as links within the Representation.

Clients should not parse or construct URIs

This is really just a corollary to the statement above, “Servers are responsible for managing Resource Identifiers”. If clients are parsing URIs, then there is tight coupling between the client and the server. Tight coupling makes changes to the application (client and server) difficult as you run the risk of breaking systems that are already in use. So, the server is responsible for creating URIs and the client just follows them.

Resources and References

[1] Roy T. Fielding, Architectural Styles and the Design of Network-based Software Architectures, http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

[2] Vinay Sahni, Best Practices for Designing a Pragmatic RESTful API, http://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api

[3] IETF, RFC 3987, Internationalized Resource Identifier, http://www.ietf.org/rfc/rfc3987

[4] Richardson, Leonard; Ruby, Sam (May 2007). RESTful Web Services. O’Reilly Media. ISBN 978-0-596-52926-0

[5] JSON for Linked Data, http://json-ld.org/

[6] REST API Anti-Patterns, http://www.infoq.com/articles/rest-anti-patterns

[7] Model Your Application Domain, Not Your JSON Structures, http://www.markus-lanthaler.com/research/model-your-application-domain-not-your-json-structures.pdf

[8] The RESTful CookBook, http://restcookbook.com/

[9] Manu Sporny, Why should you use json-ld, https://plus.google.com/+ManuSporny/posts/T5WkpieNrjJ

[10] IETF, RFC 2616, Hypertext Transfer Protocol — HTTP/1.1, http://www.ietf.org/rfc/rfc2616

[11] IETF, RFC 6585, Additional HTTP Status Codes, http://tools.ietf.org/search/rfc6585

[12] Manu Sporny, JSON-LD is the Bee’s Knees, http://manu.sporny.org/2013/json-ld-is-the-bees-knees/

[13] Mark Lanthaler and Christian Gütl, On Using JSON-LD to Create Evolvable RESTful Services, http://www.markus-lanthaler.com/research/on-using-json-ld-to-create-evolvable-restful-services.pdf

[14] StackOverflow.com, Best Practices for API Versioning, http://stackoverflow.com/questions/389169/best-practices-for-api-versioning

[15] Robbie Clutton, API Versioning, http://pivotallabs.com/api-versioning/

[16] Parastatidis, Webber, Silveira and Robinson, The Role of Hypermedia in Distributed System Development, http://www.researchgate.net/publication/221271423_The_role_of_hypermedia_in_distributed_system_development/file/9fcfd50f236cf1ef99.pdf

[17] Curtis Schlak, HATEOAS: A Follow-Up Discussion About REST, http://curtis.schlak.com/2012/01/23/hateoas-a-follow-up-to-rest-for-r33lz.html

[18] Roy Fielding, REST APIs must be hypertext-driven, http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

[19] PaySwarm Web Payments Specification, https://web-payments.org/specs/


Using a PRP to efficiently iterate randomly over a large list

Posted in Uncategorized by extrafox on April 4, 2012

Iterating over lists of data is one of the most fundamental operations that computer programs perform. It is commonly done by incrementing a counter or, in a modern programming language, by using an iterator. However, I have often wanted to iterate over lists in a random order, and this turns out to be a big challenge. For a short list, it’s easy enough: simply shuffle the contents of the list. Unfortunately, for very large lists, storing the state of a shuffled list can take more memory than your system has available. What is desired is a way to visit every element of a very large list exactly once without extensive processor or memory requirements. Here, I will show you how to randomly traverse an arbitrarily large list using a Pseudorandom Permutation (PRP). The method uses a Feistel construction along with a Pseudorandom Function (PRF).

Every block cipher, like AES, is actually a PRP. The only problem is that the block size of AES is much larger than would be required for any set we would practically want to traverse. So, what we need is a PRP that permutes a set of the size we care about. The Feistel construction can be used to take any PRF and use it to create a PRP of the size we need.

Block Size

We are accustomed to thinking of block ciphers as taking a plaintext as input and returning a ciphertext. When we look at the mathematics of a PRP, however, it is defined in terms of sets. A PRP takes an integer in a set and returns another integer in the same set. So, in order to return a permutation of a large list, we can think of the PRP as taking an index into the list and returning the permuted index into the same list. The block size we want to use, then, is the smallest number of bits that can store the largest index into our list.

In Ruby, we might define this as follows:

@block_size = Math.log2(@num_elements).ceil

For this implementation, I have chosen to use a balanced Feistel network, so I need to make sure that the number of bits is divisible by two.
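Combining the two requirements, a small helper (the name is mine) computes the smallest even bit-width able to index every element:

```ruby
# Smallest bit-width that can hold the largest index, rounded up to an
# even number so the balanced Feistel network can split it in half.
def even_block_size(num_elements)
  bits = Math.log2(num_elements).ceil
  bits.even? ? bits : bits + 1
end

even_block_size(10_000_000)  # => 24 (already even)
even_block_size(5_000)       # => 14 (13 bits, rounded up)
```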

Pseudorandom Function

The randomness in the Feistel construction comes from a PRF. In this case, I have chosen to use SHA-256. At each round of the network, we need to generate a random value that is XORed with half the block. Since SHA-256 returns more bits than we need, we simply throw away the extra bits.

def f(i, x)
   hex_digest = @sha256.hexdigest("#{@seed},#{i},#{x}")
   return hex_digest.to_i(16) >> (256-(@block_size/2))
end

Permutations

Each round of the Feistel construction generates a random value seeded by the right half of the block and XORs it with the left half; then the halves are swapped. This should be done at least three times (I use seven rounds).

def permute(x)
   xl = x >> (@block_size/2)
   xr = x - (xl << (@block_size/2))
   0.upto(6) do |i|
      xl = (xl ^ f(i, xr))
      xl, xr = [xl, xr].reverse
   end
   return (xl << (@block_size/2)) + xr
end

Fitting the set

Most of the time, the number of items you want to iterate over will not be an exact power of two. This means the permutation will operate over a set that is larger than your actual set. To handle this case, we simply permute repeatedly until the value falls within the proper range, a technique known as cycle walking.

n = permute(@position)
while (n >= @num_elements)
   n = permute(n)
end
return n

Example

Let’s say there is an Internet resource with 10,000,000 elements that you would like to randomly traverse. Here’s how you might implement the traversal.

size = 10_000_000
seed = Random.new.bytes(32)
re = RandomEnum.new(seed, size)
0.upto(size-1) do |i|
   n = re.next
   puts "next: #{n}"
end

Summary

I have given an overview and a sample implementation of how to use a PRP to traverse a large set or list. The full Ruby source code is provided below:

require 'digest'

class RandomEnum
   def initialize(seed, num_elements)
      @num_elements = num_elements
      @seed = seed

      @position = -1
      @sha256 = Digest::SHA256.new

      # Smallest number of bits that can hold any index, rounded up to
      # an even size so the balanced Feistel halves are equal.
      @block_size = Math.log2(@num_elements).ceil
      if (@block_size % 2 != 0)
         @block_size += 1
      end
   end

   def next
      @position += 1
      # Cycle-walk: re-permute until the value lands inside the set.
      n = permute(@position)
      while (n >= @num_elements)
         n = permute(n)
      end
      return n
   end

   # The PRF: SHA-256 of (seed, round, input), truncated to half a block.
   def f(i, x)
      hex_digest = @sha256.hexdigest("#{@seed},#{i},#{x}")
      return hex_digest.to_i(16) >> (256-(@block_size/2))
   end

   # A seven-round balanced Feistel network over @block_size bits.
   def permute(x)
      xl = x >> (@block_size/2)
      xr = x - (xl << (@block_size/2))
      0.upto(6) do |i|
         xl = (xl ^ f(i, xr))
         xl, xr = xr, xl
      end
      return (xl << (@block_size/2)) + xr
   end
end
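To convince yourself that the construction really is a permutation, a self-contained check (mirroring the class above, with an inlined PRF and Feistel round) can enumerate a small list and confirm every index appears exactly once:

```ruby
require 'digest'
require 'set'

SIZE = 1_000
SEED = 'test-seed'
bits = Math.log2(SIZE).ceil
bits += 1 if bits.odd?
HALF = bits / 2

# Same PRF as the class above: SHA-256 truncated to half a block.
def f(i, x)
  Digest::SHA256.hexdigest("#{SEED},#{i},#{x}").to_i(16) >> (256 - HALF)
end

# Seven-round balanced Feistel network.
def permute(x)
  xl = x >> HALF
  xr = x - (xl << HALF)
  0.upto(6) do |i|
    xl ^= f(i, xr)
    xl, xr = xr, xl
  end
  (xl << HALF) + xr
end

seen = Set.new
0.upto(SIZE - 1) do |pos|
  n = permute(pos)
  n = permute(n) while n >= SIZE  # cycle walking
  seen << n
end

seen.size  # 1000: every index visited exactly once
```

Because each Feistel round is a bijection and cycle walking stays within the cycle containing the starting point, the traversal can never repeat or skip an element.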

References

[1] How to Encipher Messages on a Small Domain: Deterministic Encryption and the Thorp Shuffle. B. Morris, P. Rogaway, T. Stegers, Crypto 2009

Fetch.IO is cancelling my account

Posted in Uncategorized by extrafox on July 7, 2011

It seems that I owe $0.00 to Fetch.IO. Interestingly, I have owed this money since a non-existent date from before Jesus’ birth. I think I’ll just let them cancel it…

On Thu, Jul 7, 2011 at 10:35 AM, Fetch.io <noreply@fetch.io> wrote:

Dear ,

This is a notification that your service has now been suspended. The details of this suspension are below:

Product/Service: Free
Amount: $0.00 USD
Due Date: 00/00/0000
Suspension Reason: Overdue on Payment

Please contact us as soon as possible to get your service reactivated.


One month of the Rippetoe-Ferris workout

Posted in Uncategorized by extrafox on May 13, 2011

Last month, after finishing “4-Hour Body”, I crafted the “Rippetoe-Ferris” workout. So far, it has been extremely productive, accelerating my gains far beyond my expectations.

To recap, I took the basic plan from the “Effortless Superhuman” section of Timothy Ferris’ “4-Hour Body”. I then substituted bench press with overhead press and added back squats, which constitute the three primary lifts from Mark Rippetoe’s “Starting Strength”.

For whatever reason, my overhead press got stuck at 135 lbs, but I finally completed that weight on my last workout and will be starting 145 lbs on my next.

One other thing I should mention. I have been drinking between 1/3 and 1/2 gallon of milk a day in addition to my regular meals. I think my gains are partially attributable to that change, since I really have a hard time putting on weight. I’ve gained about 4 kg (8.8 lbs) since I began.

I’ll keep you updated with how things progress.

Workout period: April 6th to May 11th

Max Working Weight

Back Squats: 255 lbs -> 315 lbs

Overhead Press: 115 lbs -> 135 lbs

Deadlift: 315 lbs -> 365 lbs

Progress

(Progress charts for Back Squats, Overhead Press, and Deadlift were embedded here.)

Mixing Rippetoe and Ferris in the weight room

Posted in Uncategorized by extrafox on April 14, 2011

As a soccer player, one of my greatest strengths when I was younger was my speed. As my top speed declined in my 30’s, I started to get frustrated and decided I would need to take some steps to slow the decline. After doing some research and consulting with my wife, who has a degree in physical education, I decided that weight training was needed.

I had a couple of false starts until a friend of mine introduced me to “Starting Strength” by Mark Rippetoe. I followed the Rippetoe method as closely as possible and made great progress. The added strength translated into huge improvements in my performance on the soccer field.

Recently, I kept hearing a lot of buzz about “4-Hour Body” by Timothy Ferris. All within about two weeks, I read about it in Men’s Journal, read a few chapters at Barnes & Noble, overheard two guys in my office building elevator talking about it (one of them was just getting started on the “Occam’s Protocol”) and then found out another of my colleagues had tried the “Slow Carb” diet from the book. I had to go read it.

The workout that interested me most from 4HB was from the “Effortless Superhuman” chapter. This workout was designed by a collegiate track coach to build strength for his runners while using the minimum of training effort and also keeping them ready to compete. What attracted me to this workout was how much it had in common with Rippetoe and how minimalistic it was.

My new training regimen takes elements from both Rippetoe and Ferris. I adopted the warm up, set and rep prescription from “Superhuman” and applied the three main workouts from Rippetoe to create a workout that I think will be absolutely killer for my purposes.

The Workout

1. Over-and-unders

  • 1 set of 6 reps, no more than 5 minutes

2. Back squats

  • 1 set of 2-3 @ 95% 1RM
  • 1 set of 5 @ 85% 1RM
  • within one minute, 7 18-inch box jumps
  • rest 5 minutes from plyometrics to next set

3. Overhead press

  • 1 set of 2-3 @ 95% 1RM
  • 1 set of 5 @ 85% 1RM
  • within one minute, 4 12-inch box push ups
  • rest 5 minutes from plyometrics to next set

4. Deadlift

  • 1 set of 2-3 @ 95% 1RM
  • 1 set of 5 @ 85% 1RM
  • within one minute, 7 18-inch box jumps
  • rest 5 minutes from plyometrics to next set

5. Torture Twist

  • 3-5 sets of 3-5 reps (30 seconds between sets)
  • Start with 3 sets x 3 reps of 3 seconds on each side. Increase to 5 sets of 3 second holds, then increase time, one second at a time, up to a max of 15 second holds for 5 sets (each set = 3 holds per side).

That’s it. The whole workout takes from 30 to 45 minutes and should be done three days a week.

Cubeduel is not Hot or Not, it’s Kittenwar

Posted in Uncategorized by extrafox on January 17, 2011

Over the past week Cubeduel was released onto the Social Web where it went viral almost immediately. In the first 36 hours alone, more than 240,000 users joined the service. The service hit a temporary speed-bump on Friday when LinkedIn blocked access to its developer API due to an automated rate limit. That was all cleared up by Saturday and the service is now back online and, undoubtedly, still growing fast.

The concept behind Cubeduel is simple and compelling. When you first use the service, you are asked to sign into your LinkedIn account. From there, the service presents you with two of your LinkedIn connections and asks you to choose “who you would rather work with.” It’s a lot of fun to do and it wasn’t long before I had over a hundred duels under my belt.

Of course, as fun as it is to vote on the system, the motivation we all share is to see how we stack up against our co-workers. In order to find that out, you need to get your colleagues to rate you against other people they’ve worked with in the past. The service makes it easy to invite people from your Twitter, Facebook or LinkedIn networks.

No kittens were hurt making this film

A number of articles I’ve read about Cubeduel have compared it to Hot or Not, a site where you can upload a photo of yourself and the community will rate you on a scale of one to ten based on “how hot” you are. I don’t think this is a good comparison. If Cubeduel were to follow the Hot or Not methodology, then it would show you one connection at a time and ask you “how much would you like to work with” that connection. This is a much more direct value judgement than that asked by Cubeduel.

The site that the Cubeduel service more closely follows is my old favorite, Kittenwar. When two kittens are battling it out on Kittenwar, choosing one as cuter does not necessarily mean that you don’t think the other is cute. There are certainly some butt ugly kittens on the site, but many times they are both equally cute and you choose one of them anyway. It doesn’t really matter in the end and no kittens get their feelings hurt in the process.

Cubeduel is just about as arbitrary as Kittenwar. When you choose between two people in a duel, there are a large number of factors that could go into the choice of one over the other. Did I work closely with this person? Were they in different departments? Was I friends with one of them? Are they smart? Are they inept? Are they funny? Could one of them help me in my future career? Do they look good in tight jeans? These and many more considerations get boiled down into a split-second decision between the two people in the duel. Saying that you would rather work with Jane instead of Joe doesn’t mean you don’t want to work with Joe; it simply means, for whatever arbitrary reason (possibly tight jeans), that you would prefer to work with Jane.

I’m sorry, but your kitten is ugly

Of course, kittens cannot understand the results on Kittenwar. People will take it personally.

According to Tony Wright, co-creator of Cubeduel, “no one shows up on cubeduel lists who haven’t signed into the system and opted in to being part of the high score system.” He goes on to say, “we also suppress duel data for anyone who has lost 60% or more of their duels.” I suppose that’s a good compromise. If you don’t want to know where you stand, don’t opt in, and neither you nor your connections will know.
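The suppression rule Wright describes is easy to picture in code. This is just an illustrative sketch; the data shape and function name are my own assumptions, not anything Cubeduel has published.

```python
def visible_profiles(records, threshold=0.60):
    """Return names whose loss rate is below the threshold.

    records maps a name to a (wins, losses) tuple. Anyone who has
    lost 60% or more of their duels is suppressed, per Wright's
    description; users with no duels yet stay visible.
    """
    visible = []
    for name, (wins, losses) in records.items():
        total = wins + losses
        if total == 0 or losses / total < threshold:
            visible.append(name)
    return visible

records = {"Jane": (7, 3), "Joe": (3, 7)}  # (wins, losses)
print(visible_profiles(records))  # ['Jane'] -- Joe lost 70% of his duels
```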

At the end of the day, I don’t think the duel methodology is very efficient at getting to the more practical question of “is this person a good employee.” I would love to hear some statisticians chime in on that point. Instead, Cubeduel should be looked at as a diversion.
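For the statisticians who do want to chime in: one plausible way to turn a pile of arbitrary pairwise duels into a ranking is an Elo-style rating update, the same scheme used in chess. Cubeduel has not said how it actually ranks people, so treat this as a hypothetical sketch.

```python
# Elo-style rating from pairwise duels. K and the starting rating
# of 1500 are conventional choices, not anything Cubeduel has published.
K = 32  # update step size per duel

def expected_score(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def duel(ratings, winner, loser):
    """Apply one duel result in place: winner gains, loser drops."""
    exp_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1.0 - exp_w)
    ratings[loser] -= K * (1.0 - exp_w)

ratings = {"Jane": 1500.0, "Joe": 1500.0}
duel(ratings, winner="Jane", loser="Joe")
# Jane now rates slightly above Joe -- without implying Joe is a
# bad colleague, only that he lost this particular duel.
```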

On the other hand, Kittenwar’s Winningest Kittens are pretty cute!

Connecting a PC to a Sony Bravia HD Display

Posted in Uncategorized by extrafox on February 6, 2009

I have had a media PC for a couple of years that I have used with a regular 26″ LCD monitor. So, when I got my Sony HDTV (model KDL-46V25L1) I thought that it would be incredibly simple to connect my PC up to my new HDTV. In fact, it was simple, but the results were disappointing and should be an embarrassment to Sony. However, with some determination, I was able to take advantage of the full resolution of my Sony display. Hopefully, these steps will help you if you find yourself in the same position.

My Sony monitor comes with a DSUB/RGB/VGA connector and two HDMI connectors. Sony’s instructions for connecting a PC to the monitor are to use the DSUB connection. This seems pretty obvious, but it turns out that the DSUB connector only supports resolutions up to 1400×1050. There are two problems with this. One, this is much lower than the full 1920×1080 native resolution of the monitor. Two, this is a 4:3 aspect ratio. So, the best you can hope for using the DSUB connector is a fuzzy, distorted, upscaled image.

LESSON: Don’t use the DSUB connector

The next obvious direction would be to use the HDMI inputs to the monitor. You will see below that this, in fact, will work; however, there are complications. First, Sony explicitly states in the KDL-46V25L1 Operating Instructions manual, on page 14:

Do not connect a PC to the TV’s HDMI input. Use the PC IN (RGB IN) input instead when connecting a PC.

Unfortunately, they give no reason why. I can’t imagine that this would do any damage to either your PC or your TV. My best guess is that they tell you this so that they don’t have to attempt the difficult explanation for how to properly configure your monitor to display the PC input correctly.

LESSON: Sony’s instructions sound like FUD, but follow the rest of my instructions at your own risk.

So, without further ado, here’s what you need to do to get your PC to use the full HD resolution on your Sony Bravia HD display.

1. Make sure the video card on your PC has a DVI connector and supports 1920×1080 resolution.
2. Get a DVI to HDMI cable (male to male). You could also use a DVI cable and a DVI to HDMI adapter. I recommend the cable because it takes up less room behind your TV and should fit more snugly.
3. Connect your PC’s DVI port to one of your TV’s HDMI ports. You may need to restart your computer.
4. Make sure your display resolution is set to 1920×1080.
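If your media PC happens to run Linux with an X11 desktop, step 4 can also be done from the command line with `xrandr`. The output name `HDMI-1` below is an assumption; it varies by graphics driver, so check the first command’s output for yours.

```shell
# List connected outputs and the modes each one supports;
# confirm the actual name of the HDMI output first.
xrandr

# Force the full-HD mode on the HDMI output (name varies by driver).
xrandr --output HDMI-1 --mode 1920x1080
```

On Windows, the equivalent is simply picking 1920×1080 in the display settings or your video card’s control panel.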

What you will probably notice at this point is that the edges of your computer display are not visible on the TV. This is the reason I think Sony doesn’t want people connecting DVI to HDMI. However, you should know that a DVI signal is compatible with HDMI, so there is no reason to worry here. You just need to do some configuration on your TV.

Using your TV’s remote…
5. Hit the “MENU” button, then choose “Settings”
6. Using the arrows, move to the “Screen” settings tab
7. Find the “Display Area” setting and make sure its value is “Full Pixel”
8. Hit the “MENU” button again to save the setting and exit
9. (optional) Hit the “PICTURE” button until the setting is “Standard”. My setting was “Vivid”, which made the PC’s display look really washed out. This is just personal taste, however.

That’s what worked for me. Your video card may have other settings, but you should be able to ignore most or all of those. My Nvidia card has a lot of settings related to displaying on HDTVs, but I found most of them to be useless. YMMV.

Good luck and happy HDTV viewing!

Excellent list of online music services

Posted in Uncategorized by extrafox on July 10, 2007

What more can I say? Click the link [Mashable.com].

The Traveler’s Dilemma: What dilemma?

Posted in Uncategorized by extrafox on June 1, 2007

In the June issue of Scientific American, there was an interesting article about game theory [The Traveler’s Dilemma]. I often read the Letters section of the magazine, but I have never sent one in myself. Today, however, I wrote the following letter.

From: Christopher Taylor
Sent: Friday, June 01, 2007 10:26 AM
To: ‘editors@sciam.com’
Subject: What Dilemma?

Dear Editors,

With Lucy and Pete standing beside each other and facing the airline manager across the counter, “The Traveler’s Dilemma”, by Kaushik Basu, frames the question as one of Lucy vs. Pete; each traveler trying to get more than the other. What Mr. Basu seems to overlook is that Lucy and Pete are each more likely to identify with the other than either of them is to identify with the airline manager. Framed this way, the competition is not one pitting Lucy against Pete, but pitting the travelers against the airline manager. With this shift in perspective, the only “logical” choice is for both travelers to ask for $100.

Christopher Taylor
Software Engineer

It would be quite cool if the magazine published the letter. But, if not, at least I can still put it on my weblog.
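For readers who haven’t seen Basu’s article: in the standard formulation of the Traveler’s Dilemma, both travelers claim a value between $2 and $100, both are paid the lower of the two claims, and the lower claimant gets a $2 reward while the higher one takes a $2 penalty. A quick sketch of those payoffs shows both the cooperative outcome my letter argues for and the undercutting incentive that drives the game-theoretic “dilemma”:

```python
def payoff(a, b, reward=2):
    """Traveler's Dilemma payoffs for claims a and b.

    Both travelers are paid the lower claim; the lower claimant
    gets a reward on top and the higher claimant pays a penalty.
    """
    if a == b:
        return a, a
    low = min(a, b)
    if a < b:
        return low + reward, low - reward
    return low - reward, low + reward

print(payoff(100, 100))  # (100, 100): the travelers vs. the airline manager
print(payoff(99, 100))   # (101, 97): why "logic" says to undercut
```

Classical backward induction unravels this all the way down to $2 for both, which is exactly the framing the letter pushes back on.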

Someone smart blogs about Furigana.jp

Posted in Uncategorized by extrafox on April 26, 2007

I was browsing through the referrer lists for Furigana.jp and I found a very nice post in the Blogosphere.

I found this awesome tool for reading Japanese from web pages and text files. If you plug a URL into Furigana.jp, the engine will turn out your text with the wee hiragana gloss that is called furigana. If you are semi literate in Japanese like me, then you can quickly read a text loaded with unknown kanji using this handy phonetic script. [erizabesu]

Quite flattering considering this person actually knows what they are talking about. Read the full post because it has some very interesting tidbits about the Japanese syllabary and even some things I had never heard about before.