Lark is a REST interface for Redis #

You might have seen our post on webdis a couple years ago. Like webdis, Lark is a REST interface for Redis.

At its core, it’s just a way of transforming HTTP requests into Redis commands, but it comes with a few additions to make this a little more sane.

It comes with a Flask blueprint and a Django app, but it should work with any Python web framework.
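To illustrate the general idea (this is a hypothetical sketch, not Lark’s actual routing code), a REST-to-Redis bridge boils down to mapping URL path segments onto a Redis command and its arguments:

```python
# Hypothetical sketch of REST-to-Redis mapping (not Lark's actual code):
# the URL path supplies the command name and its arguments.

def path_to_command(path):
    """Split a path like '/SET/hello/world' into ['SET', 'hello', 'world']."""
    parts = [p for p in path.strip('/').split('/') if p]
    if not parts:
        raise ValueError('empty path')
    # Redis command names are case-insensitive; normalize for readability
    return [parts[0].upper()] + parts[1:]

print(path_to_command('/set/hello/world'))  # ['SET', 'hello', 'world']
```

The resulting list is exactly what a Redis client library would send over the wire.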

Disclaimer: Alex (this post’s author) is the creator of Lark.

Earn a Sidekiq blackbelt by breaking a few boards

Wynn posted about Sidekiq last February, briefly introducing a new way of handling background workers. Those of us who took on the challenge of switching from Resque to Sidekiq can probably agree that it brought a new set of challenges to tackle. The upside, though, is that tackling those […]

Redis-faina – query analyzer for Redis #

From folks who know something about scale, the Instagram team has released Redis-faina, a tool that parses the output of Redis’ MONITOR command to provide stats on Redis queries:

# reading from stdin
redis-cli -p 6490 MONITOR | head -n <NUMBER OF LINES TO ANALYZE> | ./redis-faina.py

Overall Stats
Lines Processed     117773
Commands/Sec        11483.44

Top Prefixes
friendlist          69945
followedbycounter   25419
followingcounter    10139
recentcomments      3276
queued              7

Top Keys
friendlist:zzz:1:2     534
followingcount:zzz     227
friendlist:zxz:1:2     167
friendlist:xzz:1:2     165
friendlist:yzz:1:2     160
friendlist:gzz:1:2     160
friendlist:zdz:1:2     160
friendlist:zpz:1:2     156
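Under the hood the idea is simple: each MONITOR line names a command and a key, and the key’s first colon-delimited segment is its prefix. A rough sketch of that counting in Python (assuming a simplified MONITOR line format; redis-faina itself handles more detail, like timing):

```python
import re
from collections import Counter

# Simplified sketch of redis-faina-style prefix counting. A MONITOR line
# looks roughly like:
#   1339518083.107412 [0 127.0.0.1:4444] "GET" "friendlist:zzz:1:2"
LINE_RE = re.compile(r'"(\w+)"\s+"([^"]+)"')

def top_prefixes(lines):
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        _, key = m.groups()
        counts[key.split(':')[0]] += 1   # first segment is the prefix
    return counts.most_common()

sample = [
    '1339518083.107412 [0 127.0.0.1:4444] "GET" "friendlist:zzz:1:2"',
    '1339518083.107567 [0 127.0.0.1:4444] "INCR" "followedbycounter:zzz"',
    '1339518083.107801 [0 127.0.0.1:4444] "GET" "friendlist:zxz:1:2"',
]
print(top_prefixes(sample))  # [('friendlist', 2), ('followedbycounter', 1)]
```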


Check out the source on GitHub. If you’re new to Redis, Episode 0.4.5 with @antirez is a classic.

rq – Simple job queues for Python #

Vincent Driessen of git flow fame has released rq, a simple, Redis-backed queuing library for Python.

Long-running function calls can be added to a queue with a familiar enqueue method:

import requests

def count_words_at_url(url):
    resp = requests.get(url)
    return len(resp.text.split())

# elsewhere
from redis import Redis
from rq import Queue

from my_module import count_words_at_url

q = Queue(connection=Redis())
result = q.enqueue(count_words_at_url, 'http://nvie.com')

As always, Vincent has a stylish project page and detailed introductory blog post.

If you like Resque but sling Python and would rather use something whose internals you could hack, you might give rq a shot.

Recommendify – Ruby/Redis-based recommendation engine #

Once application content grows to a certain size, it becomes a challenge to help users find what interests them. Sites like Amazon have offered product recommendations for years based on shoppers’ browsing and buying habits. Recommendify from Paul Asmuth brings that sort of collaborative filtering to your Ruby application. Using a Redis backend, Recommendify lets you build “interaction sets” and retrieve recommendations:

# Our similarity matrix; we calculate the similarity via co-occurrence
# of products in "orders" using the Jaccard similarity measure.
class MyRecommender < Recommendify::Base

  # store only the top fifty neighbors per item
  max_neighbors 50

  # define an input data set "order_items". we'll add "order_id->product_id"
  # pairs to this input and use the jaccard coefficient to retrieve a
  # "customers that ordered item i1 also ordered item i2" statement and apply
  # the result to the item<->item similarity matrix with a weight of 5.0
  input_matrix :order_items,
    # :native => true,
    :similarity_func => :jaccard,
    :weight => 5.0

end

recommender = MyRecommender.new

# add `order_id->product_id` interactions to the order_items input.
# you can add data incrementally and call process! to update
# the similarity matrix at any time.
recommender.order_items.add_set("order1", ["product23", "product65", "productm23"])
recommender.order_items.add_set("order2", ["product14", "product23"])

# calculate all elements of the similarity matrix
recommender.process!

# ...or calculate a specific row of the similarity matrix (a specific item);
# use this to avoid re-processing the whole matrix after incremental updates
recommender.process_item!("product23")

# retrieve similar products to "product23"
recommender.for("product23")
  => [ <Recommendify::Neighbor item_id:"product65" similarity:0.23>, (...) ]
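The Jaccard coefficient the snippet refers to is just intersection over union of the two items’ order sets. A standalone sketch of the calculation (plain Python, independent of Recommendify’s Redis-backed Ruby implementation):

```python
def jaccard(a, b):
    """Jaccard similarity between two sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# the set of orders in which each product appeared
orders_with_p23 = {"order1", "order2"}
orders_with_p65 = {"order1"}

print(jaccard(orders_with_p23, orders_with_p65))  # 0.5
```

Two products that always appear in the same orders score 1.0; products that never co-occur score 0.0.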

I can’t wait for a Spree plugin. Almost 400 watchers in the last week; grab the source on GitHub.

sidekiq – More efficient, Resque-compatible message processing for Rails 3 #

What if 1 Sidekiq process could do the work of 20 Resque processes?

Sidekiq from Mike Perham has a multiple-messages-per-process approach and boosts efficiency in a (mostly) Resque-compatible package. Since your workers must be threadsafe, Resque users will notice Sidekiq’s API is slightly different:

# app/workers/hard_worker.rb
class HardWorker
  include Sidekiq::Worker

  def perform(name, count)
    puts 'Doing hard work'
  end
end

HardWorker.perform_async('bob', 5)
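The efficiency claim comes from running many jobs concurrently on threads inside one process, which is exactly why your workers must be threadsafe. A language-neutral sketch of that model (in Python, standing in for Sidekiq’s Ruby internals):

```python
import queue
import threading

# Sketch of the one-process, many-threads worker model Sidekiq uses
# (illustrative only; Sidekiq itself is Ruby on top of Redis).
jobs = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        job = jobs.get()
        if job is None:      # poison pill: shut this thread down
            break
        with lock:           # worker code must be threadsafe
            results.append(job * 2)
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    jobs.put(n)
for _ in threads:            # one poison pill per thread
    jobs.put(None)
for t in threads:
    t.join()

print(sorted(results))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Four threads drain one shared queue, so one process does the work that would otherwise need several single-threaded workers.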

Be sure to check out Mike’s blog post, the project web site, or the GitHub wiki for more.

/via Karthik Hariharan

FnordMetric – beautiful real-time metrics dashboard powered by EventMachine and Redis #

Paul Asmuth has released FnordMetric, a great looking tracking dashboard app that lets you measure and visualize events within your application.


FnordMetric ships with a standalone webserver, and sports a nice Ruby DSL:

# numeric (delta) gauge, 1-hour tick
gauge :messages_sent,
  :tick => 1.hour.to_i,
  :title => "Messages (sent) per Hour"

# on every event like { _type: 'message_sent' }
event(:message_sent) do
  # increment the messages_sent gauge by 1
  incr :messages_sent
end

# draw a list of the most visited urls (url, visits + percentage), auto-refresh every 20s
widget 'Overview', {
  :title => "Top Pages",
  :type => :toplist,
  :autoupdate => 20,
  :gauges => [ :pageviews_per_url_daily ]
}
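The gauge/incr pattern above is just time-bucketed counting: each event increments a counter for the tick its timestamp falls into. A sketch of that bucketing (plain Python, not FnordMetric’s Redis implementation):

```python
from collections import defaultdict

# Sketch of a tick-bucketed gauge: events increment the counter for the
# bucket their timestamp falls into (FnordMetric keeps these in Redis).
TICK = 3600  # a 1-hour tick, in seconds

buckets = defaultdict(int)

def incr(gauge, at):
    bucket = int(at // TICK) * TICK   # floor timestamp to tick boundary
    buckets[(gauge, bucket)] += 1

incr('messages_sent', at=7200)    # 02:00 -> bucket 7200
incr('messages_sent', at=7260)    # 02:01 -> same bucket
incr('messages_sent', at=10800)   # 03:00 -> next bucket

print(buckets[('messages_sent', 7200)])   # 2
print(buckets[('messages_sent', 10800)])  # 1
```

Rendering a per-hour chart is then just reading the counters for a range of buckets.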

Check out Paul’s screencast or the README for more.

Slanger – Open Source Pusher protocol powered by Ruby and Redis #

Pusher has become a favorite for developers looking to add real-time events to their applications quickly and reliably. For developers who would rather keep everything in-house, Stevie Graham has released Slanger, an open source Ruby implementation of the Pusher protocol that uses Redis on the backend.

Presence channel state is shared using Redis. Channels are lazily instantiated internally within a given Slanger node when the first subscriber connects. When a presence channel is instantiated within a Slanger node, it queries Redis for the global state across all nodes within the system for that channel, and then copies that state internally. Afterwards, when subscribers connect or disconnect the node publishes a presence message to all interested nodes, i.e. all nodes with at least one subscriber interested in the given channel.
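In other words, each node bootstraps a channel’s presence state from the shared store on first subscribe, then keeps it current from the message stream. A toy sketch of that bootstrap-then-track pattern (an in-memory dict standing in for Redis; not Slanger’s actual code):

```python
# Toy sketch of Slanger-style presence bootstrapping: a shared store
# (a dict standing in for Redis) holds global channel state; a node
# copies it when the channel is lazily instantiated, then applies
# subsequent subscribe events incrementally.
shared_store = {'presence-room': {'alice'}}   # state from other nodes

class Node:
    def __init__(self):
        self.channels = {}

    def subscribe(self, channel, user):
        if channel not in self.channels:
            # lazy instantiation: copy global state from the store
            self.channels[channel] = set(shared_store.get(channel, set()))
        self.channels[channel].add(user)
        shared_store.setdefault(channel, set()).add(user)  # publish to peers

node = Node()
node.subscribe('presence-room', 'bob')
print(sorted(node.channels['presence-room']))  # ['alice', 'bob']
```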

With the gem installed, and Redis running, you can fire up Slanger, passing it your Pusher API keys:

$ slanger --app_key 765ec374ae0a69f4ce44 --secret your-pusher-secret

You’ll also need to modify the host and port settings in your server-side Ruby and client-side JavaScript:

# Ruby
Pusher.host   = 'your.slanger.host'  # wherever your Slanger instance runs
Pusher.port   = 4567

// JavaScript
Pusher.host    = 'your.slanger.host'
Pusher.ws_port = 8080

The project is brand new, but it looks to be a promising alternative to Pusher if you need to control data end-to-end or just need to run a development environment when your Internet connection is dodgy.

If you’re new to Pusher, websockets, and the real-time web, be sure and check out Episode 0.3.1 for a proper introduction.


Make your Ruby objects Likeable with Redis #

The Gowalla development team has released Likeable, a Ruby library to make it simple to add social ‘likes’ to your Ruby classes.

With just a few lines of Ruby, you can track likes using Redis as your data store:

class Comment
  include Likeable

  # ...
end

class User
  include Likeable::UserMethods

  # ...
end

comment = Comment.find(15)
comment.like_count                  # => 0
current_user.like!(comment)         # => #<Likeable::Like ... >
comment.like_count                  # => 1
comment.likes                       # => [#<Likeable::Like ... >]
comment.likes.last.user             # => #<User ... >
comment.likes.last.created_at       # => Wed Jul 27 19:34:32 -0500 2011
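Under the hood, a library like this typically keeps a set of user ids per liked object, which makes double-likes free and counting cheap. A minimal in-memory sketch of the same bookkeeping (hypothetical Python, not Likeable’s actual Redis-set implementation):

```python
from collections import defaultdict

# Like-tracking with sets, mirroring what a Redis SADD/SCARD based
# implementation stores (hypothetical sketch, not Likeable's code).
likes = defaultdict(set)   # "comment:15" -> {user ids}

def like(user_id, target_key):
    likes[target_key].add(user_id)

def like_count(target_key):
    return len(likes[target_key])

like("user:1", "comment:15")
like("user:2", "comment:15")
like("user:1", "comment:15")   # liking twice is a no-op, as with SADD

print(like_count("comment:15"))  # 2
```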

Richard Schneeman walks through an example in this screencast:

Check out the announcement or GitHub repo for more.

Qu – Background job queue for Ruby, Redis, and MongoDB #

Qu is an interesting project for doing background jobs in Ruby from Brandon Keepers, one of the maintainers of delayed_job:

class ProcessPresentation
  def self.perform(presentation_id)
    presentation = Presentation.find(presentation_id)
    presentation.process!
  end
end

job = Qu.enqueue ProcessPresentation, @presentation.id

Check out the README for usage and answers on why another Ruby queuing library.

Thoonk: Persistent, Redis-powered push feeds and queues for Python and JavaScript #

As more and more applications become publishers and subscribers of data
in external systems, open source projects continue to innovate in the background jobs space. Redis, featured in Episode 0.4.5, seems to be a recurring theme with these types of projects. This week, Nathan Fritz and Lance Stout at &yet have released Thoonk, a project that takes a unified approach to feeds, jobs, and queues, all with a Redis backend.

Thoonk defines its different item types this way:

A Feed is a subject that you can publish items to (string, binary, json, xml, whatever), each with a unique id (assigned or generated).

Queues are stored and interacted with in similar ways to feeds, except instead of publishes being broadcast, clients may do a “blocking get” to claim an item, ensuring that they’re the only one to get it. When an item is delivered, it is deleted from the queue.

Jobs are like Queues in that one client claims an item, but that client is also required to report that the item is finished or cancel execution.
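The difference between the three types comes down to delivery semantics: feeds broadcast every item to every subscriber, while queues hand each item to exactly one consumer and delete it on delivery. A small sketch of that distinction (plain Python, not Thoonk’s Redis implementation):

```python
class Feed:
    """Broadcast: every subscriber sees every published item."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, item):
        for cb in self.subscribers:
            cb(item)

class ClaimQueue:
    """Claim-based: each get() removes the item, so only one client sees it."""
    def __init__(self):
        self.items = []

    def put(self, item):
        self.items.append(item)

    def get(self):
        return self.items.pop(0)   # the claimed item is deleted

seen = []
feed = Feed()
feed.subscribe(lambda i: seen.append(('a', i)))
feed.subscribe(lambda i: seen.append(('b', i)))
feed.publish('hello')

q = ClaimQueue()
q.put('job1')
print(seen)     # [('a', 'hello'), ('b', 'hello')]
print(q.get())  # job1
```

A Thoonk job adds one more step on top of the queue: the claiming client must also report the item finished or cancel it.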

Thoonk comes in two flavors: Python and JavaScript for Node.js. Compare these examples for subscribing to a feed, first in Python:

thoonk.register_handler("create_notice", create_handler_pointer)
# args: feedname

thoonk.register_handler("delete_notice", delete_handler_pointer)
# args: feedname

thoonk.register_handler("publish_notice", publish_handler_pointer)
# args: feedname, item, id

thoonk.register_handler("retract_notice", retract_handler_pointer)
# args: feedname, id

… and the equivalent Node.js example:

thoonk.subscribe({
    publish: function(id, item) {
        // publish event
    },
    edit: function(id, item) {
        // edit event
    },
    retract: function(id) {
        // retract event
    },
    position: function(id, position) {
        // position event for sorted feed
        // position is begin:, :end, :X, X: where X is the relative id
    },
    done: function() {
        // subscription ready
    }
});

Be sure and check out the READMEs for each project for advanced usage and tips on contributing.

[Thoonk.py Source on GitHub] [Thoonk.js Source on GitHub] [&yet Blog post]

travis: Distributed CI for the Ruby community using Rails, Websockets, and Redis #

Berlin-based Rubyist Sven Fuchs asks if Java-based Jenkins is the best CI tool for open source Ruby projects.

Sven writes:

Instead, imagine a simple and slim build server tool that is maintained by the Ruby community itself (just like Gemcutter is, or many other infrastructure/tool-level projects are) in order to support all the open-source Ruby projects/gems we’re using every day.

Instead of just imagining, Sven and others have been working toward that vision with Travis, an extremely alpha Rails project. Travis is a single-page application built on Rails that uses Backbone.js as a client-side MVC frontend.

How it works

Once you configure a post-receive URL in your GitHub project settings, GitHub will ping Travis when new git commits are pushed. Travis then schedules a build in Resque, a Redis-backed queue, and uses Websockets, courtesy of Pusher, to update registered browsers on build status as the build runs in the background.

travis architecture

Take a look at some of the projects getting built over at the project’s new home page, or check out Sven’s quick tour of Travis in this screencast:


Currently, the hosted edition of Travis is open to anyone with a GitHub account. Just sign in with GitHub. Once you’re in, grab your Travis build token and configure a post-receive URL in your GitHub project’s Service Hooks page:


Host Travis yourself

If you want to run your own instance, you’ll need to set up configuration settings:

$ cp config/travis.example.yml config/travis.yml

If you want to run on Heroku, you’ll need to set some ENV variables:

$ rake heroku:config

If you’re running locally, you can start a worker with:

$ RAILS_ENV=production VERBOSE=true QUEUE=builds rake resque:work

… or if you’re using God:

$ cp config/resque.god.example config/resque.god
$ god -c config/resque.god

How you can help

Travis is in EARLY ALPHA. Sven and gang are looking for folks to help test, log issues, and submit patches. If you want to join the community, join the Google Group or hang out in #travis on IRC.

Special thanks

Sven and team would like to offer a special thanks to Pusher App for donating a Big Boy account for the project. If you’d like to pitch in with the compute side of the project, (we’re looking at you Heroku or Linode), please ping Sven.

[Source on GitHub] [Blog post] [Discuss on HN]

super-nginx: nginx on steroids serves up async Lua apps #

Ezra Zygmuntowicz, Engine Yard founder now at VMware, has released a “killer build of nginx” that bundles seventeen popular nginx modules as well as LuaJIT, a just-in-time compiler for Lua.


Among the bundled modules are built-in Redis and Drizzle support. As a nice complement, Ezra has also included his own script to build LuaJIT, which lets you use nginx as an evented Lua web server in the style of EventMachine or Node.js.

[Source on GitHub]

webdis: HTTP + JSON API for Redis #

HTTP is the dial tone of the web. Apps that speak HTTP tend to grow in popularity, such as CouchDB, whose built-in HTTP-based RESTful API makes it super easy to store and retrieve JSON data.

Nicolas Favre-Felix gives some web love to Redis fans with Webdis, an HTTP interface for Redis that serves up JSON. Built in C, Webdis aims to be fast like Redis itself. The URL structure takes the pattern of /COMMAND/KEY/[VALUE]:

Let’s look at some concrete examples:

curl http://127.0.0.1:7379/SET/hello/world
→ {"SET":[true,"OK"]}

curl http://127.0.0.1:7379/GET/hello
→ {"GET":"world"}

curl -d "GET/hello" http://127.0.0.1:7379/
→ {"GET":"world"}

While still early in development, the feature set is impressive. Webdis currently offers:

  • Support for GET and POST
  • JSON and JSONP parameter (?jsonp=myFunction).
  • Raw Redis 2.0 protocol output with .raw suffix
  • HTTP 1.1 pipelining
  • TCP or UNIX socket Redis connections
  • CIDR or HTTP Basic Auth security
  • Pub/Sub using Transfer-Encoding: chunked. Coupled with JSONP, Webdis can be used as a Comet server
  • Built-in support for json, txt, html, xml, xhtml, png, and jpg
  • Custom Content-Type types based on file extension, or ?type=some/thing
  • Cross-origin XHR, if compiled with libevent2 (for OPTIONS support)
  • File upload with PUT, if compiled with libevent2 (for PUT support)

Check the README for the complete list. If you’d like to contribute, Nicolas is thinking about how to add other HTTP verb support like PUT and DELETE, Websockets, and Etags.

Also, if you missed it, be sure and catch Episode 0.4.5 with @antirez, the creator of Redis.

[Source on GitHub] [Comment on Hacker News]

#45: Redis with Salvatore Sanfilippo

Wynn caught up with Salvatore Sanfilippo, aka @antirez, to talk about Redis, the super hot key-value store. Items mentioned in the show: VMware signs the paychecks for Salvatore and Pieter Noordhuis; Redis is an open source, advanced key-value store and data structure server wherein keys can contain strings, hashes, lists, sets, and sorted sets […]

Octobot: Fast, reliable Java/Scala task queue for RabbitMQ, Beanstalk, Redis and more #

octobot logo

Background tasks are crucial for any non-trivial web application so it’s no wonder that the landscape of queuing technologies is rapidly evolving.

The latest entry is Octobot, a Java-based task queue worker from C. Scott Andreas designed to be reliable, easy to use, and powerful.


Octobot uses best-of-breed tools for queuing, supporting AMQP/RabbitMQ, Beanstalk, and Redis (Pub/Sub) as backends out of the box. The architecture is extensible so additional backends can be added in the future.

Easy to use

Octobot tasks are simply classes with a static run method that accepts a JSON object.

package com.example.tasks;

import org.apache.log4j.Logger;
import org.json.simple.JSONObject;

public class TacoTask {
  private static Logger logger = Logger.getLogger("TacoTask");

  public static void run(JSONObject task) {
    String payload = (String) task.get("payload");
    logger.info("OMG, GOT A TACO: " + payload);
  }
}

… or in Scala:

package com.example.tasks

import org.apache.log4j.Logger
import org.json.simple.JSONObject

object TacoTask {
  val log = Logger.getLogger("TacoTask")

  def run(task: JSONObject) {
    val payload = task.get("payload")
    log.info("OMG, GOT A SCALA TACO: " + payload)
  }
}

Octobot also has a simple YAML-based config file format:

queues:
    - { name: tacotruck,
        protocol: AMQP,
        host: localhost,
        port: 5672,
        vhost: /,
        priority: 5,
        workers: 1,
        username: cilantro,
        password: burrito }

metrics_port: 1228

email_enabled: false
email_hostname: localhost
email_port: 465
email_ssl: true
email_auth: true
email_username: username
email_password: password


Octobot is designed for high throughput, heavy workloads, and ultra-low latency. Early benchmarks using AMQP and MongoDB lookups demonstrate that task execution actually improves over time as the JIT optimizes execution paths.

[Source on GitHub] [Homepage] [Download Docs]

ACLatraz: Redis-powered access control for your Ruby apps #

Authentication options get a lot of press these days, but there is another Auth that can still be a pain: Authorization. ACLatraz from Kriss Kowalik caught our eye because it’s inspired by *nix Access Control Lists (ACLs), powered by Redis, and has a sense of humor.

Install ACLatraz via RubyGems:

gem install aclatraz

and configure your Redis-based storage:

Aclatraz.init :redis, "redis://localhost:6379/0"

Everyone is a Suspect

In keeping with the Alcatraz theme, actors in your authorization system are deemed Suspects:

class Account < ActiveRecord::Base
  include Aclatraz::Suspect
end

ACLatraz supports global, class-related, and object-related roles:

# global admin role
@account.roles.assign(:admin) # or @account.is.admin!

# Page class-related role
@account.roles.assign(:responsible, Page) # or @account.is.responsible_for!(Page)

# object-related role for page 15
@account.roles.assign(:author, Page.find(15)) # or @account.is.author_of!(Page.find(15))

Once assigned, you can interrogate your suspects a couple of ways using has?:

@account.roles.has?(:admin)                 # => true
@account.roles.has?(:responsible, Page)     # => true
@account.roles.has?(:author, Page.find(15)) # => true

… or the more natural semantic shortcuts:

@account.is_not.admin?                 # => false
@account.is_not.responsible_for?(Page) # => false

Guarding The Rock

To enable access control on an object, include the Aclatraz::Guard module:

class Page
  include Aclatraz::Guard

  suspects :account do
    deny all # notice that it's a method, not a symbol
    allow :admin
  end
end
Check the README for even more features including custom actions, aliases, and class inheritance.

[Source on GitHub]

Rollout: Conditionally roll out features with Redis #

Ever wanted to roll out new features in your web app to only select users, bit by bit, testing performance as you go? James Golick has harnessed the power of Redis to do just that with Rollout.

Rollout is a gem, so just install from the command line:

gem install rollout

To get started, create your Rollout:

$redis   = Redis.new
$rollout = Rollout.new($redis)

You can then activate/deactivate features in a number of ways:

by group,

$rollout.activate_group(:chat, :all)
$rollout.deactivate_group(:chat, :all)

by user,

$rollout.activate_user(:chat, @user)
$rollout.deactivate_user(:chat, @user)

or even for a percentage of users:

$rollout.activate_percentage(:chat, 20)
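For the percentage case to be useful, the same users must stay in the bucket between requests, so the gating has to be deterministic. A sketch of the idea in Python (assuming CRC32-based bucketing, similar in spirit to the gem; the function name here is hypothetical):

```python
import zlib

def active_for(feature, user_id, percentage):
    """Deterministically place user_id into a 0-99 bucket; a given user
    always lands in the same bucket for a given feature."""
    bucket = zlib.crc32(f"{feature}:{user_id}".encode()) % 100
    return bucket < percentage

# 0% excludes everyone, 100% includes everyone, and the answer is stable
# across calls, so a user never flickers in and out of the feature:
print(active_for("chat", 42, 100))  # True
print(active_for("chat", 42, 0))    # False
```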

As a failsafe, you can back out a broken feature with the nuclear option:

$rollout.deactivate_all(:chat)
[Source on GitHub]

#19: James Edward Gray II on Ruby, TextMate, and Red Dirt Ruby Conf

While in OKC for OpenBeta4, Adam and Wynn sat down with James Edward Gray II and talked about his many Ruby gems, TextMate bundles, and his upcoming Ruby conference Red Dirt Ruby Conf this May. Challenge your Ruby fu, feel dumb, learn something, repeat. OpenBeta4 We were blown away by the startup community in […]

QR – Easy Redis queues with Python #

QR from Ted Nyman makes it simple to create double-ended queues, queues, and stacks in Redis from Python. Double-ended queues allow items to be added to the beginning or end of the queue, while normal queues and stacks are first-in-first-out (FIFO) and last-in-first-out (LIFO) respectively.

To create a queue simply import QR:

>> from qr import Queue

and create the queue:

>> bqueue = Queue('Beatles', 3)

Here we created a regular Queue, but you can also create a double-ended queue with Dequeue or a stack with Stack.

To add data, use push:

>> bqueue.push('Ringo')
PUSHED: 'Ringo'

>> bqueue.push('Paul')
PUSHED: 'Paul'

>> bqueue.push('John')
PUSHED: 'John'

>> bqueue.push('George')
PUSHED: 'George' 

To grab the next item, simply call pop:

>> bqueue.pop()

You can also grab all items in the queue, even as JSON:

>> bqueue.elements()
['Ringo', 'John', 'George']

>> bqueue.elements_as_json()
'["Ringo", "John", "George"]'

Another cool feature of QR is that all three queue varieties may be set up as bounded or unbounded:

  • Bounded: once the structure reaches a specified number of elements, it will pop an element to make room.

  • Unbounded: the structure can grow to any size, and will not pop elements unless you explicitly ask it to.
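Python’s standard library can model the bounded behavior directly: a deque with maxlen evicts from the opposite end once full. This is a stand-in for QR’s Redis-backed structures, not QR itself:

```python
from collections import deque

# A bounded structure: once it holds 3 elements, pushing a new one
# evicts the oldest (similar in spirit to QR's bounded queues).
beatles = deque(maxlen=3)
for name in ['Ringo', 'Paul', 'John', 'George']:
    beatles.append(name)

print(list(beatles))  # ['Paul', 'John', 'George']

# An unbounded deque just grows until you pop explicitly:
unbounded = deque(['Ringo', 'Paul', 'John', 'George'])
print(unbounded.popleft())  # Ringo
```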

[Source on GitHub]