New application: Stopwatch

Stopwatch product shot

This stopwatch is a little bit special. It does not stop laps, but sessions. The difference between a lap and a session is that you can resume a session. A simple example is a chess game: each player's clock is stopped at the end of the turn and resumed when it is that player's move again. Stopwatch is not limited to 2 sessions; you can have as many as you need.

Stopwatch is a web application, implemented using HTML and AngularJS. It uses localStorage in order to keep track of time even when the application is not running. It is also capable of running offline and as an iPhone application pinned to the home screen.

Get started here or drag this to your bookmarks bar: Stopwatch.

Authenticating single-page HTML applications with Angular JS and Flickr

In this blog post I want to demonstrate how to use Flickr in a single-page HTML application with Angular JS. The following picture roughly illustrates the execution flow:


The use case starts with the location /photos, which should display the photo stream of the current user. Current user? Our application doesn’t know anything about the current user yet.

If the user is already authenticated, the client holds a clientToken in its local storage, which is sent with every HTTP AJAX request. I'm using an HTTP interceptor, like this:

app.config(function($httpProvider) {
  $httpProvider.interceptors.push(function($q, $log, $location) {
    return {
      'request': function(config) {
        config.headers['clientToken'] = localStorage['clientToken'];
        return config;
      },
      'responseError': function(response) {
        if (response.status === 401) {
          localStorage['locationPath'] = $location.path();
          window.location.href = '/api/flickr/authorize';
        }
        return $q.reject(response);
      }
    };
  });
});

This interceptor is responsible for:

  1. Taking the clientToken from the local storage and putting it into the HTTP header ('request').
  2. In case of a response error with status code 401, putting the current location (/photos) into local storage for later retrieval and calling the FlickrAuthorize resource (URL: /api/flickr/authorize) ('responseError').

The FlickrAuthorize resource performs a Flickr authorization and redirects to the Flickr login page. After successful login, Flickr then redirects to the FlickrValidate resource, which retrieves the token and verifier. Once successful, the server saves all relevant OAuth tokens and secrets to its datastore and creates a clientToken. This token is sent to the client using a redirect:

resp.sendRedirect("/index.html#/authorize/" + account.getClientToken().getToken());

The client defines appropriate routes:

app.config(function($routeProvider) {
  $routeProvider
    .when('/photos', {
      controller: PhotosCtrl,
      templateUrl: '/templates/photos.html'
    })
    .when('/authorize/:clientToken', {
      controller: AuthorizeCtrl,
      templateUrl: '/templates/authorize.html'
    });
});

and the AuthorizeCtrl controller

function AuthorizeCtrl($location, $routeParams) {
  var clientToken = $routeParams['clientToken'];
  localStorage['clientToken'] = clientToken;
  var path = localStorage['locationPath'];
  $location.path(path);
}

puts the clientToken into local storage and redirects to the previously saved path (the one that failed before).

The full source code can be downloaded from Github.

Very funny, Flickr!

If you look at the source code of the Flickr website, you find this:

           . -  ` : `   '.' ``  .            - '` ` .
         ' ,gi$@$q  pggq   pggq .            ' pggq
        + j@@@P*\7  @@@@   @@@@         _    : @@@@ !  ._  , .  _  - .
     . .  @@@K      @@@@        ;  -` `_,_ ` . @@@@ ;/           ` _,,_ `
     ; pgg@@@@gggq  @@@@   @@@@ .' ,iS@@@@@Si  @@@@  .6@@@P' !!!! j!!!!7 ;
       @@@@@@@@@@@  @@@@   @@@@ ` j@@@P*"*+Y7  @@@@ .6@@@P   !!!!47*"*+;
     `_   @@@@      @@@@   @@@@  .@@@7  .   `  @@@@.6@@@P  ` !!!!;  .    '
       .  @@@@   '  @@@@   @@@@  :@@@!  !:     @@@@7@@@K  `; !!!!  '  ` '
          @@@@   .  @@@@   @@@@  `%@@@.     .  @@@@`7@@@b  . !!!!  :
       !  @@@@      @@@@   @@@@   \@@@$+,,+4b  @@@@ `7@@@b   !!!!
          @@@@   :  @@@@   @@@@    `7%S@@hX!P' @@@@  `7@@@b  !!!!  .
       :  """"      """"   """"  :.   `^"^`    """"   `""""" ''''
        ` -  .   .       _._    `                 _._        _  . -
                , ` ,glllllllllg,    `-: '    .~ . . . ~.  `
                 ,jlllllllllllllllp,  .!'  .+. . . . . . .+. `.
              ` jllllllllllllllllllll  `  +. . . . . . . . .+  .
            .  jllllllllllllllllllllll   . . . . . . . . . . .
              .l@@@@@@@lllllllllllllll. j. . . . . . . :::::::l `
            ; ;@@@@@@@@@@@@@@@@@@@lllll :. . :::::::::::::::::: ;
              :l@@@@@@@@@@@@@@@@@@@@@l; ::::::::::::::::::::::;
            `  Y@@@@@@@@@@@@@@@@@@@@@P   :::::::::::::::::::::  '
             -  Y@@@@@@@@@@@@@@@@@@@P  .  :::::::::::::::::::  .
                 `*@@@@@@@@@@@@@@@*` `  `  `:::::::::::::::`
                `.  `*%@@@@@@@%*`  .      `  `+:::::::::+`  '
                    .    ```   _ '          - .   ```     -
                       `  '                     `  '  `

    You're reading. We're hiring.



Interoperable AES encryption with Java and JavaScript

AES implementations are available in many languages, including Java and JavaScript. In Java, the javax.crypto.* packages are part of the standard, and in JavaScript, the excellent CryptoJS provides an implementation of many cryptographic algorithms. However, due to different default settings and various implementation details, it is not trivial to use the APIs in a way that produces the same result on all platforms.

This example demonstrates implementations of the algorithm in Java and JavaScript that produce identical results using passphrase-based encryption. For AES encryption, you cannot (or shouldn't) simply use a password to encrypt data. Instead, several parameters need to be defined, such as:

  • iteration count used for the salting process
  • padding mode
  • key derivation function
  • key length

Then, additional initialization parameters need to be defined, such as the salt and the initialization vector (IV). With all parameters defined, the encryption process is the same for both Java and JavaScript:

  1. Generate salt and IV (this is typically done using a secure pseudo-random number generator; in my example tests both are fixed in order to produce predictable results).
  2. Generate the key (using the PBKDF2 function) from the given passphrase, salt, key size and number of iterations (for the salting process).
  3. Encrypt the plaintext using key and IV.

The decryption process is even simpler, because IV and salt have already been generated. They have to be reused to successfully reproduce the plaintext. Therefore, for successful decryption, you have to store IV, salt and iteration count (as long as the count is not fixed for your application) along with the ciphertext. Since these parameters don't need to be generated, the decryption process only has 2 steps:

  1. Generate key (same as step 2. above).
  2. Decrypt cipher text using key and IV.
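Since IV, salt and iteration count must be stored along with the ciphertext, a trivial container format is enough. The following sketch is hypothetical (it is not part of the example project); a colon works as separator because neither the hex nor the base64 alphabet contains it:

```java
public class CipherMessage {
    // Packs hex-encoded salt and IV plus the base64 ciphertext into one string.
    public static String pack(String saltHex, String ivHex, String ciphertextBase64) {
        return saltHex + ":" + ivHex + ":" + ciphertextBase64;
    }

    // Splits the message back into [salt, iv, ciphertext].
    public static String[] unpack(String message) {
        return message.split(":");
    }
}
```

If the iteration count varies per message, it can simply be appended as a fourth component.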

In this example, I have created a utility class for each language: AesUtil.java and AesUtil.js. In the tests, all data (salt, passphrase, IV, plaintext, ciphertext) are represented as Strings. The ciphertext is encoded using base64 in order to get a proper and compact representation of the bytes (AES produces a byte array, not a String). The other parameters, salt and IV, are encoded in hex. This makes it easy to count and read the number of bytes used (and to check that both parameters have the correct length).

JavaScript implementation AesUtil.js

  1. Generate key:

      var key = CryptoJS.PBKDF2(
          passPhrase,
          CryptoJS.enc.Hex.parse(salt),
          { keySize: this.keySize, iterations: this.iterationCount });

    Note that this.keySize is the size of the key in 32-bit words. So, if you want to use a 128-bit key, you have to divide the number of bits by 32 (128 / 32 = 4) to get the keySize used for CryptoJS.

  2. Encrypt plaintext:

    The object returned by the encrypt method is not a String, but an object that contains the parameters of the algorithm and the ciphertext.

      var encrypted = CryptoJS.AES.encrypt(
          plainText,
          key,
          { iv: CryptoJS.enc.Hex.parse(iv) });

    To convert the encryption result into base64 format, you have to use the toString() function:

      var ciphertext = encrypted.ciphertext.toString(CryptoJS.enc.Base64);
  3. Decrypt ciphertext:

    To decrypt, a parameter object is created first that contains the ciphertext (note that it is decoded from base64 here):

      var cipherParams = CryptoJS.lib.CipherParams.create({
        ciphertext: CryptoJS.enc.Base64.parse(cipherText)
      });
      var decrypted = CryptoJS.AES.decrypt(
          cipherParams,
          key,
          { iv: CryptoJS.enc.Hex.parse(iv) });

    Again, to get the result in text form, you use the toString() function:

      var plaintext = decrypted.toString(CryptoJS.enc.Utf8);

Java implementation

The Java implementation looks a bit different, but the structure is the same:

  1. Create a cipher instance:

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
  2. Generate key:

        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        KeySpec spec = new PBEKeySpec(passphrase.toCharArray(), hex(salt), iterationCount, keySize);
        SecretKey key = new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");
  3. Encrypt:

        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(hex(iv)));
        byte[] encrypted = cipher.doFinal(bytes);
  4. Decrypt:

        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(hex(iv)));
        byte[] decrypted = cipher.doFinal(bytes);
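Putting the Java steps together, here is a self-contained round trip following the flow described above. The hex() helper from the snippets is included as a minimal stand-in, and salt and IV are fixed only to keep the example reproducible (in production they would come from a SecureRandom):

```java
import java.nio.charset.StandardCharsets;
import java.security.spec.KeySpec;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class AesRoundTrip {
    static final int KEY_SIZE = 128;          // key length in bits
    static final int ITERATION_COUNT = 1000;  // PBKDF2 iterations

    public static String encrypt(String salt, String iv, String passphrase, String plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, generateKey(salt, passphrase), new IvParameterSpec(hex(iv)));
        byte[] encrypted = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(encrypted);
    }

    public static String decrypt(String salt, String iv, String passphrase, String ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, generateKey(salt, passphrase), new IvParameterSpec(hex(iv)));
        byte[] decrypted = cipher.doFinal(Base64.getDecoder().decode(ciphertext));
        return new String(decrypted, StandardCharsets.UTF_8);
    }

    // Derives the AES key from the passphrase with PBKDF2, as described above.
    static SecretKey generateKey(String salt, String passphrase) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        KeySpec spec = new PBEKeySpec(passphrase.toCharArray(), hex(salt), ITERATION_COUNT, KEY_SIZE);
        return new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");
    }

    // Minimal hex decoder, standing in for the hex() helper of the snippets.
    static byte[] hex(String s) {
        byte[] bytes = new byte[s.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        }
        return bytes;
    }

    public static void main(String[] args) throws Exception {
        String salt = "dc0da04af8fee58593442bf834b30739"; // 16 bytes, hex
        String iv = "dc0da04af8fee58593442bf834b30739";   // 16 bytes, hex
        String encrypted = encrypt(salt, iv, "secret passphrase", "Hello, World!");
        System.out.println(decrypt(salt, iv, "secret passphrase", encrypted)); // prints Hello, World!
    }
}
```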

Download example code

I have created a small Github project that contains all required source code and tests. If you want to read more about it, please have a look there.

Flickr API with Scribe: Easy OAuth in Java

In this post I want to show you how easy OAuth in Java can be if you use the Scribe framework. It is focused on Flickr, but you can use any other OAuth web service you like.

OAuth requires a lot of work, and if you don't use a framework, it can be quite cumbersome. Luckily, Scribe does most of it for you, such as signing requests and creating timestamps and nonces. It even provides a Flickr API implementation (even though the project is no longer accepting new custom APIs). This API, however, only covers the authentication sequence, so you have to do some more work to actually integrate Flickr into your application.

This is the standard sequence for the authentication and authorization process (see also the description at the Flickr developer website):

Flickr OAuth Sequence

In this example, I use standard Java Servlets, but any other API would do as well. I'm using Servlets because they need no additional dependencies and are pretty standard, so you don't get confused by any particular framework implementation.

My example consists of three servlets:

  1. The FlickrServlet encapsulates some reusable stuff and provides a session scoped store for the Request Token (more on that later).
  2. The FlickrLoginServlet triggers the authorization and authentication process.
  3. The FlickrCallbackServlet provides a callback that is called by the Flickr API if authorization was successful.

This is what the FlickrServlet looks like:

public class FlickrServlet extends HttpServlet {
    private static final String SESSION_NAME_REQUEST_TOKEN = "flickr.requestToken";

    protected Token getRequestToken(HttpServletRequest req) {
        HttpSession session = req.getSession();
        try {
            return (Token) session.getAttribute(SESSION_NAME_REQUEST_TOKEN);
        } finally {
            // The Request Token is only needed once, so remove it after retrieval.
            session.removeAttribute(SESSION_NAME_REQUEST_TOKEN);
        }
    }

    protected void setRequestToken(HttpServletRequest req, Token token) {
        HttpSession session = req.getSession();
        session.setAttribute(SESSION_NAME_REQUEST_TOKEN, token);
    }

    protected OAuthRequest createRequest(String method) {
        OAuthRequest request = new OAuthRequest(Verb.GET, "https://api.flickr.com/services/rest/");
        request.addQuerystringParameter("format", "json");
        request.addQuerystringParameter("nojsoncallback", "1");
        request.addQuerystringParameter("method", method);
        return request;
    }
}

It contains the createRequest() method, which creates a request to the Flickr API that returns the response in JSON format.

The FlickrLoginServlet creates an OAuthService with the FlickrApi implementation and a callback URL pointing to the FlickrCallbackServlet below. Your Flickr API key and API secret are stored in constants (I created a separate class FlickrProperties for that). As you can see, Scribe completely handles signing, encoding and sending the request.

Calling service.getRequestToken() executes an HTTP request to the Flickr API and parses the response.

After the Request Token has been successfully retrieved, it is stored in the session and the authorization URL is created. It is important to store the Request Token temporarily (in this case I use the session, but of course you could use a database, memcache or whatever your architecture provides). This token is later used to retrieve the Access Token after authorization was successful.

public class FlickrLoginServlet extends FlickrServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        String callback = "http://" + req.getServerName() + ":" + req.getServerPort() + "/flickr/callback";
        OAuthService service = new ServiceBuilder()
                .provider(FlickrApi.class)
                .apiKey(FlickrProperties.API_KEY)
                .apiSecret(FlickrProperties.API_SECRET)
                .callback(callback)
                .build();
        Token requestToken = service.getRequestToken();
        setRequestToken(req, requestToken);
        resp.sendRedirect(service.getAuthorizationUrl(requestToken));
    }
}

Now, the redirect is performed, and you see the Flickr authentication page. After successful authorization, the FlickrCallbackServlet is called:

public class FlickrCallbackServlet extends FlickrServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        OAuthService service = new ServiceBuilder()
                .provider(FlickrApi.class)
                .apiKey(FlickrProperties.API_KEY)
                .apiSecret(FlickrProperties.API_SECRET)
                .build();
        Token requestToken = getRequestToken(req);
        // TODO: Check if the requestToken matches the token of this request.
        String verifier = req.getParameter(OAuthConstants.VERIFIER);
        Token accessToken = service.getAccessToken(requestToken, new Verifier(verifier));

        OAuthRequest request = createRequest("flickr.test.login");
        service.signRequest(accessToken, request);
        Response response = request.send();
        String body = response.getBody();
    }
}

Here, once again an OAuthService is created. Then the Access Token is retrieved using the Request Token previously stored in the session and the verifier that is passed as a query parameter to this Servlet. Please note that the request also contains the token, which allows you to check whether this call to the Servlet is actually a callback coming from Flickr, or something else. In this example, I have not implemented this security check, but in a real application it is strongly recommended.
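The omitted check could look like the following sketch; isValidCallback is a hypothetical helper (it is not part of Scribe) that compares the oauth_token parameter of the callback with the Request Token stored in the session, in constant time to avoid timing side channels:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class CallbackCheck {
    // Returns true only if the token from the callback matches the stored one.
    public static boolean isValidCallback(String callbackToken, String storedToken) {
        if (callbackToken == null || storedToken == null) {
            return false;
        }
        // MessageDigest.isEqual performs a constant-time comparison.
        return MessageDigest.isEqual(
                callbackToken.getBytes(StandardCharsets.UTF_8),
                storedToken.getBytes(StandardCharsets.UTF_8));
    }
}
```

In the Servlet, you would pass req.getParameter(OAuthConstants.TOKEN) and the stored Request Token's token string, and abort with an error status if the check fails.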

The Access Token can now be used to call any Flickr API method. In this case, I call the flickr.test.login method, which returns something like this:

{"user":{"id":"21207597@N07", "username":{"_content":"jamalfanaian"}}, "stat":"ok"}

Note how simple it is to create, sign and send a request with the Scribe framework.

Evolution of a Cache API

As written previously, a Cache API can reduce clutter in your business code, and ensure consistency in the cache.

However, a good API should also be as unintrusive as possible. Therefore, the latest version of the Trafalgar Cache API introduces some annotations that define the cache keys of stored objects:

public class Person implements Serializable {
    @CacheKey private String name;
    @CacheKey private Date birthday;
}

You use it with the cache in your application like this (of course, the name is quite a bad cache key):

Cache<Person> cache = new Cache<>(Person.class);
cache.get("Thomas", new Callback<String, Person>() {
    public Person execute(String name) {
        // Load person with the given name...
    }
});

As you can see, when using the cache, you don’t have to provide any index or key; it is generated automatically from the @CacheKey annotation.
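To illustrate how the keys can be derived automatically, here is a small self-contained sketch using reflection. The annotation and extractor are modeled after the API above; the actual Trafalgar implementation may differ:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class CacheKeyExtractor {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface CacheKey { }

    // Example entity: only the annotated field contributes a key.
    public static class Person {
        @CacheKey String name = "Thomas";
        int age = 42;
    }

    // Collects the values of all @CacheKey fields of the given entity.
    public static List<Object> extractKeys(Object entity) {
        List<Object> keys = new ArrayList<>();
        for (Field field : entity.getClass().getDeclaredFields()) {
            if (field.isAnnotationPresent(CacheKey.class)) {
                field.setAccessible(true);
                try {
                    keys.add(field.get(entity));
                } catch (IllegalAccessException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        return keys;
    }
}
```

The reflection scan is also where the performance overhead mentioned below comes from; a real implementation would cache the Field lookups per class.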

Nested cache keys

The Cache API also supports nested keys, as illustrated in the following example:

public class EmailAddress {
    private String address;
    private String name;
}

public class Person implements Serializable {
    @CacheKey( "address" ) private Collection<EmailAddress> emailAddresses;
}

Now, you can use each address of all emailAddresses as a cache key.

cache.get("", new Callback<String, Person>() {
    public Person execute(String address) {
        // Load person with the given email address ...
    }
});

I think this approach can reduce a large amount of boiler-plate code. Of course, there are some negative aspects in an API like this:

  1. Using a String in the annotation is not robust against refactorings.
  2. The annotation based approach has some performance overhead.
  3. You cannot use composite cache keys, keys that consist of multiple fields.

For the first issue, you have to weigh convenience against robustness. To address the other issues, I have added another feature: if the class implements the Indexable interface, it can generate keys of any type:

public class Image implements Indexable, Serializable {
    private Long id;
    private String format;
    private String type;

    public void index(Keys keys) {
        keys.add(id, format, type);
    }
}

Here, a composite key consisting of the components id, format and type is generated.
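How could Keys combine the components into one key? One possible sketch (hypothetical, not the actual Trafalgar code) relies on the value-based equals() and hashCode() of a list:

```java
import java.util.Arrays;

public class Keys {
    private Object compositeKey;

    // Combines all components into one composite key. Arrays.asList gives
    // value-based equals() and hashCode(), so equal components yield equal keys.
    public void add(Object... components) {
        compositeKey = Arrays.asList(components);
    }

    public Object getCompositeKey() {
        return compositeKey;
    }
}
```

With this, two Image instances with the same id, format and type produce keys that are equal and hash to the same bucket, which is all a map-backed cache needs.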

Of course, using Indexable lets the Cache API intrude into your business code to a certain degree, but I think it is a good tradeoff altogether.

Keeping cache entries synchronized with multiple keys

As explained in an earlier post, you can dramatically reduce cluttered business code if the cache API abstracts from the traditional map-like approach and uses callback handlers to load missing cache entries.

The real challenge with caches, however, is not (just) keeping business code clean, but keeping the cached values consistent.

Let's assume you have a simple, map-like cache backend and more or less unlimited cache memory, but your business logic uses two different keys to access cache entries. In this example, you have a User which can be queried either by ID or by nickname:

public class User {
    private Long id;
    private String nickname;
    private String realName;
}


Let's further assume that you have a UserService, which provides typical accessors for your user objects:

public class UserService {
    public User getById(Long id) { ... }
    public User getByNickname(String nickname) { ... }
    public void put(User user) { ... }
}

Now you query your users either by ID or by nickname, and with the approach described in the previous post you would fill the cache with either the ID or the nickname as key. You would end up with two different user instances in the cache, representing the same user.

If you update the user, e.g. by changing the realName, how do you put it back into the cache so that both instances are updated?

To solve this issue, I have introduced IdMappers that create the required keys:

public class UserService {
    private final Cache<User> cache = new Cache<>(User.class, new SimpleIdMapper<String, User>() {
        public String asId(User user) {
            return user.getNickname();
        }
    }, new SimpleIdMapper<Long, User>() {
        public Long asId(User user) {
            return user.getId();
        }
    });

    public User getById(Long id) {
        return cache.get(id, new Callback<Long, User>() {
            public User execute(Long id) {
                // fetch by ID
            }
        });
    }

Now, if you put an entry into the cache, it automatically updates all other entries, by using the IdMappers.

    public void put(User user) {
        cache.put(user);
    }
}
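The multi-key behaviour can be sketched with a plain HashMap; the names and types here are illustrative and not the actual Trafalgar API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class MultiKeyCache<V> {
    private final Map<Object, V> entries = new HashMap<>();
    private final Function<V, Object>[] idMappers;

    @SafeVarargs
    public MultiKeyCache(Function<V, Object>... idMappers) {
        this.idMappers = idMappers;
    }

    // Registers the same instance under every key the mappers derive,
    // so a single put refreshes all lookup paths at once.
    public void put(V value) {
        for (Function<V, Object> mapper : idMappers) {
            entries.put(mapper.apply(value), value);
        }
    }

    public V get(Object key) {
        return entries.get(key);
    }
}
```

A cache built with an ID mapper and a nickname mapper then returns the same instance for get(id) and get(nickname), so updating the realName and calling put once keeps both lookups consistent.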

You can have a look at the full cache API of the Trafalgar project at Github.

Another approach for a caching API

A typical caching API implements a Map API. There is nothing wrong with that, because caching is typically putting data in and getting it out by a key.

As I wrote in an earlier post, the Map interface can be hidden behind a Producer pattern.

Now I want to demonstrate another caching API that is similar to the Producer pattern but also provides two additional features:

  1. Typed caching
  2. Efficient cache access to retrieve multiple values at a time

With typed caching, you can transparently store values by their ID along with their type in order to avoid duplicate keys. Typically an application has one big cache, and if two values of two different types have the same key, it would otherwise be impossible to avoid conflicts. The proposed solution automatically converts between an ID and a typed cache key.
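The CacheKey class used in the code below is not shown in this post; a possible shape, combining the ID with the value type and implementing value-based equality, might look like this (a sketch, not necessarily the actual implementation):

```java
import java.io.Serializable;
import java.util.Objects;

public class CacheKey<ID extends Serializable, VALUE extends Serializable> implements Serializable {
    private final ID id;
    private final Class<VALUE> type;

    public CacheKey(ID id, Class<VALUE> type) {
        this.id = id;
        this.type = type;
    }

    public ID getId() {
        return id;
    }

    // Two keys are equal only if both ID and value type match, so equal IDs
    // of different types cannot collide in the cache.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof CacheKey)) {
            return false;
        }
        CacheKey<?, ?> other = (CacheKey<?, ?>) o;
        return id.equals(other.id) && type.equals(other.type);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, type);
    }
}
```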

Quite often, it is more efficient to retrieve multiple values at a time, especially if you are using distributed caches. In this case, you can pass a list of IDs (or keys) and retrieve a Map that maps each ID to the corresponding cached value.

The central class of this API is the Cache class, which here uses the Google Appengine MemcacheService:

public class Cache<ID extends Serializable, VALUE extends Serializable> {
    private final Class<VALUE> type;
    private final MemcacheService cache;

    public Cache(Class<VALUE> type) {
        this.type = type;
        cache = MemcacheServiceFactory.getMemcacheService();
    }

    public Map<ID, VALUE> getAll(Iterable<ID> ids, Executor<Map<ID, VALUE>, Collection<ID>> executor) {
        Collection<CacheKey<ID, VALUE>> keys = convertToKeys(ids);
        @SuppressWarnings( "unchecked" )
        Map<CacheKey<ID, VALUE>, VALUE> cached = (Map<CacheKey<ID, VALUE>, VALUE>) cache.getAll(keys);
        return convertToResult(executor, keys, cached);
    }

    private Map<ID, VALUE> convertToResult(Executor<Map<ID, VALUE>, Collection<ID>> executor,
                                           Collection<CacheKey<ID, VALUE>> keys,
                                           Map<CacheKey<ID, VALUE>, VALUE> cached) {
        Map<ID, VALUE> result = new HashMap<>();
        Collection<ID> missingIds = new LinkedList<>();
        for (CacheKey<ID, VALUE> key : keys) {
            VALUE value = cached.get(key);
            if (value != null) {
                result.put(key.getId(), value);
            } else {
                missingIds.add(key.getId());
            }
        }
        if (!missingIds.isEmpty()) {
            fetchMissingValues(executor, result, missingIds);
        }
        return result;
    }

    private void fetchMissingValues(Executor<Map<ID, VALUE>, Collection<ID>> executor,
                                    Map<ID, VALUE> result,
                                    Collection<ID> missingIds) {
        Map<CacheKey<ID, VALUE>, VALUE> missing = new HashMap<>();
        for (Entry<ID, VALUE> entry : executor.execute(missingIds).entrySet()) {
            ID id = entry.getKey();
            VALUE value = entry.getValue();
            missing.put(new CacheKey<ID, VALUE>(id, type), value);
            result.put(id, value);
        }
        cache.putAll(missing);
    }

    private Collection<CacheKey<ID, VALUE>> convertToKeys(Iterable<ID> ids) {
        Collection<CacheKey<ID, VALUE>> keys = new LinkedList<>();
        for (ID id : ids) {
            keys.add(new CacheKey<ID, VALUE>(id, type));
        }
        return keys;
    }
}

When creating a cache instance, the type of the value is defined in order to enable typed caching. When using the cache, a set of IDs and an Executor are passed. The Executor is used to fetch missing values, again using multiple IDs to allow efficient implementations.

The cache executes in four steps:

  1. Convert IDs to keys.
  2. Fetch values from cache.
  3. Convert to a Map without CacheKeys, but IDs as keys.
  4. Fetch missing values through the Executor and add those values to the cache.

And this is how it looks if you use the cache:

public Map<Long, Account> getByIds(Iterable<Long> ids) {
    Cache<Long, Account> cache = new Cache<>(Account.class);
    return cache.getAll(ids, new Executor<Map<Long, Account>, Collection<Long>>() {
        public Map<Long, Account> execute(Collection<Long> ids) {
            return accountDao.findByIds(ids);
        }
    });
}

The usage consists of two steps:

  1. Create the cache instance.
  2. Call getAll() with an Executor that fetches missing values from the datastore (in this case through a DAO implementation).

I think this implementation is quite elegant, as it reduces the number of if-else constructs in the application code and moves cache handling to a technical component.

Strange exceptions with Objectify 4, or: what happens if you query unindexed fields

When working with the Google Appengine Datastore, Objectify is the framework to use. Currently it is available in version 4, and the API is simple and beautiful.

One important rule when working with the Datastore – regardless of the framework you use – is that all fields that are used in a query filter have to be indexed. But what happens if you forget to add the @Index annotation?

Here is a simple example:

public class Person {
    @Id private Long id;
    private String name;
    private final Collection<String> interests = new ArrayList<>();
}


Let’s run a quick test:

    private final LocalServiceTestHelper helper = new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());

    @Before
    public void setUp() {
        helper.setUp();
        ObjectifyService.register(Person.class);
    }

    @After
    public void tearDown() {
        helper.tearDown();
    }

    @Test
    public void testFindByName() {
        Person p = new Person();
        p.setName("Max"); // assuming the usual getters and setters
        ofy().save().entity(p).now();

        List<Person> persons = ofy().load().type(Person.class).filter("name", "Max").list();
        assertEquals(p.getId(), persons.get(0).getId());
    }

What do you think will happen? Yes, an exception is thrown:

	at $Proxy12.get(Unknown Source)
	at org.cloudme.Person.testFindByName(
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at com.googlecode.objectify.util.ResultProxy.invoke(
	... 27 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
	at java.util.ArrayList.rangeCheck(
	at java.util.ArrayList.get(
	... 32 more

How do you fix it? Just add @Index to the name field.

Now, let's run another test and query on a collection field:

public class Person {
    @Id private Long id;
    @Index private String name;
    @Index private final Collection<String> interests = new ArrayList<>();
}


    @Test
    public void testFindByInterest() {
        Person p = new Person();
        p.getInterests().addAll(Arrays.asList("Coding", "Jogging", "Reading"));
        ofy().save().entity(p).now();

        List<Person> persons = ofy().load().type(Person.class).filter("interests", "Reading").list();
        assertEquals(p.getId(), persons.get(0).getId());
    }

What happens now? The same exception! How is that possible? The reason is that final fields are not persisted, and therefore you can't query them either.
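To spot such fields in your own entities, a quick reflection check helps; this is a generic sketch, not an Objectify API:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class FinalFieldCheck {
    // Example entity with a final collection field, like the Person above.
    public static class Sample {
        private Long id;
        private final java.util.List<String> interests = new java.util.ArrayList<>();
    }

    // Final, static and transient fields are typically not persisted.
    public static boolean isPersistable(Field field) {
        int mods = field.getModifiers();
        return !Modifier.isFinal(mods) && !Modifier.isStatic(mods) && !Modifier.isTransient(mods);
    }
}
```

Running this check over an entity's declared fields in a unit test catches the "accidentally final" mistake long before you see an obscure exception.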

Summary: if you start making “stupid” mistakes (forgetting some annotations, making fields final, etc.), you might come across exceptions that don’t really tell you what your problem is. As a rule of thumb: whenever you have problems querying, check whether the fields are indexed.