Tuesday, October 04, 2011

Cacheismo Cluster Update 2

Today I could set a value on server 1 and get it back from server 2. The script on server 2 was changed a bit:


function handleGET(command, key)
    local cacheItem = getHashMap():get(key)
    if (cacheItem ~= nil) then
        command:writeCacheItem(cacheItem)
        cacheItem:delete()
    else
        local result = command:getFromExternalServer("127.0.0.1:11212", key)
        if (result ~= nil) then
            writeStringAsValue(command, key, result)
        end
    end
end

This is where I feel lua beats javascript, thanks to its excellent coroutine support. I could easily change the above code into a loop that goes through all the servers in the cluster looking for a value, and it would not change the complexity of the code a bit; try doing that with javascript callbacks and it would look like a mess.
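As a sketch of that loop: the `servers` list below is an assumption, not part of the shipped script, and `getFromExternalServer` is what suspends the coroutine until the remote server replies.

```lua
-- Hypothetical list of peer servers; only 127.0.0.1:11212 appears in the
-- original script, the rest are made up for illustration.
local servers = { "127.0.0.1:11212", "127.0.0.1:11213", "127.0.0.1:11214" }

function handleGET(command, key)
    local cacheItem = getHashMap():get(key)
    if (cacheItem ~= nil) then
        command:writeCacheItem(cacheItem)
        cacheItem:delete()
        return
    end
    -- Each call yields the coroutine until the remote server answers, so
    -- this reads like plain sequential code with no callbacks.
    for _, server in ipairs(servers) do
        local result = command:getFromExternalServer(server, key)
        if (result ~= nil) then
            writeStringAsValue(command, key, result)
            return
        end
    end
end
```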

I am now thinking that the original idea of using dataKey and virtualKey is a bit lame. What I would do instead is make the server an explicit argument, just like I used above. For consistent hashing there will be an object exposed in lua which can be used to find the correct server, but it will be optional to use. This gives users the flexibility to use some cacheismo instances as pure proxies and others as caches. Some of the servers can be part of one consistent hashing group and others can be part of another. If for some reason users want to use one of the servers as a naming server to map keys to servers, they can use it that way. Essentially users have full flexibility with respect to how keys are mapped to servers: consistent hashing, multiple servers hidden behind a proxy (cacheismo), a naming server mapping keys to servers, or any combination of these.
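A rough sketch of what that optional consistent hashing object might look like from lua; the `consistentHash.new` and `ring:getServer` names are hypothetical, not a final API.

```lua
-- Hypothetical API: consistentHash.new and ring:getServer are assumptions.
local ring = consistentHash.new({ "10.0.0.1:11211", "10.0.0.2:11211" })

function handleGET(command, key)
    local server = ring:getServer(key)   -- pick the owning server for this key
    local result = command:getFromExternalServer(server, key)
    if (result ~= nil) then
        writeStringAsValue(command, key, result)
    end
end
```

Because the ring is just another lua object, a script that ignores it and hardcodes servers (proxy style) or asks a naming server instead would look exactly the same from cacheismo's point of view.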

Other possibilities include replication and quorum. These can be accomplished with the current API itself, but I will add parallel get support to make them as fast as possible.
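For example, a majority-read quorum can be sketched with the current sequential API like this; the `replicas` list is an assumption, and with parallel get support these lookups would be issued concurrently instead of one by one.

```lua
-- Hypothetical replica set; addresses are made up for illustration.
local replicas = { "10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211" }

function quorumGET(command, key)
    local votes = {}
    for _, server in ipairs(replicas) do
        local result = command:getFromExternalServer(server, key)
        if (result ~= nil) then
            votes[result] = (votes[result] or 0) + 1
            if (votes[result] * 2 > #replicas) then  -- a majority agrees
                writeStringAsValue(command, key, result)
                return
            end
        end
    end
end
```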

Friday, September 30, 2011

Cacheismo Lua API


The scripting capabilities of cacheismo are very powerful. The code for handling commands like get/set/add etc. is completely written in lua using the lua interface. Here is the code for handling the SET command:


local function handleSET(command) 
  local hashMap   = getHashMap()   
  local cacheItem = hashMap:get(command:getKey())
  if (cacheItem ~= nil) then 
      hashMap:delete(command:getKey())
      cacheItem:delete()
      cacheItem = nil
  end 
  cacheItem = command:newCacheItem()
  if (cacheItem ~= nil) then 
      hashMap:put(cacheItem)
      command:writeString("STORED\r\n")
  else 
     command:writeString("SERVER_ERROR Not Enough Memory\r\n")
  end
  return 0
end



This is the list of lua methods which can be called by new scripts:

getHashMap()
- Global method, returns an instance of the global hashmap object.

setLogLevel(level)
- Sets the logging level. Four levels are defined: DEBUG=0, INFO=1,
  WARN=2 and ERR=3. The function takes a numeric argument.

Table 
The standard table object in lua is extended to support the following methods

marshal(table) 
- returns a string object which represents the serialized table.

unmarshal(serializedTable)
- returns a fully constructed table from the given serialized table string

clone(table) 
- returns a deep copy of the given table


HashMap
The global hashmap object retrieved from getHashMap() supports following methods

get(key) 
- returns a cacheItem object for the given key if found, else nil

put(cacheItem) 
- takes a cacheItem object and puts it in the hashMap
     
delete(key) 
- deletes the cacheItem object associated with the given key from the hashMap.

deleteLRU(requiredSpace) 
- deletes as many LRU objects as required to free up at least requiredSpace.


CacheItem
The data in the hashMap is stored in the form of cacheItem objects. We need a
cacheItem object to store anything in the map, and a cacheItem is what we get
back from the map when we do a get. These objects are read only and support
the following methods.

getKey()
- returns the key associated with the cacheItem

getKeySize()
- returns the number of bytes in the key

getExpiryTime()
- returns the time in seconds since epoch when the current item will expire.

getFlags()
- returns the flags associated with the cacheItem

getDataSize()
- returns the size of the data stored in this cacheItem

getData()
- returns the data stored in the cacheItem as a lua string. This is normally not
  used because lua scripts don't need access to the actual data unless using virtual keys.

delete()
- deletes the reference to the cacheItem object. It is important to call this
  after a get from the hashMap, once you are done using the cacheItem.


Command
This represents the request coming from the client. This can be modified if
required by the script.

getCommand()
- returns the memcached command being used.
  One of get, add, set, replace, prepend, append, cas, incr, decr, gets, delete,
  stats, flush_all, version, quit or verbosity
     
getKey()
- returns the key used in the command. This is nil in case of multi-get and other
  commands which don't take a key as an argument.

setKey(newKey)
- sets the given string newKey as the key for the command object.

getKeySize()
- returns the size in bytes for the key

getExpiryTime()
- returns the expiryTime specified in the command.

setExpiryTime(newTime)
- sets the expiry time to new value newTime.

getFlags()
- returns the flags specified in the command. It is also used by the verbosity
  command to return the logging level.
 
setFlags(newFlags)
- sets the flags in the command object to the new flags.

getDelta()
- returns the delta value used in incr/decr commands

setDelta(newDelta)
- sets the delta value to new delta value

getNoReply()
- returns the boolean noReply from the command.

setNoReply(newNoReply)
- sets the value for the noReply

getDataSize()
- returns the size of data in the command, in bytes. This works for set, add, etc.

setData(newData)
- replaces the data in the command object with the newData string

getData()
- returns the data in the command object as lua string

newCacheItem()
- creates and returns a cacheItem object using the data in the command object.
  This is used with set/add/etc

writeCacheItem(cacheItem)
- writes the cacheItem in the memcached response format on the client socket.
   "VALUE key flags size\r\ndata\r\n"
  cas is not supported.

writeString(dataString)
- writes arbitrary string to the client socket. This is useful for writing
  "END\r\n", "STORED" and other standard memcached response strings.

hasMultipleKeys()
- returns the number of multi-get keys in the command

getMultipleKeys()
- returns a table of all the keys in the multi-get command

Apart from these, a few helper methods defined in config.lua are also useful.

writeStringAsValue(command, key, value) 
- writes an arbitrary string in the "VALUE key flags size\r\ndata\r\n" format.
  flags is 0 and size is calculated using string.len


executeReadOnly(command, originalKey, objectType, cacheKey, func, ...) 
- helper function for read-only operations on lua tables stored in the cache.
  It retrieves the data from the cache, uses table.unmarshal to construct the
  lua table and calls the function func with this lua table as the first
  argument. Any extra args passed to this function are passed on to func.


executeReadWrite(command, originalKey, objectType, cacheKey, func, ...) 
- helper function for read/write operations on lua tables stored in the cache.
  The original object is deleted and a new object is created with the same key
  and the latest data values.
 

executeNew(command, originalKey, objectType, cacheKey, func, ...)
- helper function for creation of new objects based on lua tables. If the key
  is in use, it is deleted before creating the new object.

See set.lua, map.lua, quota.lua and swcounter.lua for example usage.


Cacheismo sliding window counter

Sliding window counters are useful for maintaining information about the last n transactions. One example could be a site like pingdom storing the last n ping times for a site. Cacheismo comes with a default sliding window counter example which can be modified to suit your needs. The current implementation supports new, add, getlast, getmin, getmax and getavg.

Note that these are memcached get requests with special keys encoding our intent.  The keys follow this syntax:
<ObjectType>:<Method>:<ObjectKey>:<Arg1>:<Arg2>

Object type is "swcounter". Cacheismo has other object types as well, and more can be added by adding new lua scripts. Methods for the swcounter object are "new", "add", "getmin", "getmax", "getavg" and "getlast". The object key we are using is "myCounter"; you would need a key per counter. The number of arguments varies depending upon the method. Here new takes 1 arg (number of values to store) and add takes 1 arg (value to store); the rest of the methods take no arguments.

get swcounter:new:myCounter:2
VALUE swcounter:new:myCounter:2 0 7
CREATED
-- Create a new counter which stores the last 2 values
get swcounter:add:myCounter:100
VALUE  swcounter:add:myCounter:100 0 7 
SUCCESS 
--store 100 as first value 
get swcounter:add:myCounter:50
VALUE  swcounter:add:myCounter:50 0 7 
SUCCESS 
-- store 50 as second value
get swcounter:getmin:myCounter
VALUE  swcounter:getmin:myCounter 0 2
50
-- get the min entry among all the stored values
get swcounter:getmax:myCounter
VALUE  swcounter:getmax:myCounter 0 3
100
-- get the max entry among all the stored values
get swcounter:getavg:myCounter
VALUE  swcounter:getavg:myCounter 0 2
75
-- get the average value for all the stored values

The complete implementation in lua looks like this.
====================================================
-- sliding window counter 
local function new(windowsize)
   local object = {}
   if (windowsize == nil) then
       windowsize = "16"
   end
   object.size   = tonumber(windowsize)
   object.index  = 0
   object.values = {}
   for count = 1, object.size do
       object.values[count] = 0
   end
   return object
end

local function add(object, value) 
   object.index = object.index+1
   if (object.index > object.size) then 
       object.index = 1 
   end 
   object.values[object.index] = tonumber (value) 
end

local function getlast(object)
   if (object.index ~= 0) then
       return object.values[object.index]
   end
   return 0
end

local function getmin(object)
   local min = math.huge
   for count = 1, object.size do
       if (object.values[count] < min) then
          min = object.values[count]
       end
   end
   return min
end

local function getmax(object)
   local max = 0
   for count = 1, object.size do
       if (object.values[count] > max) then
          max = object.values[count]
       end
   end
   return max
end

local function getavg(object)
   local total = 0
   for count = 1, object.size do
       total = total + object.values[count]
   end
   return (total/object.size)
end
=============================================================


Rate Limiting/Quota with Cacheismo

Cacheismo is a non-blocking, single threaded, high performance scriptable memcache. Instead of storing bytes or blobs in the cache, it can store objects defined in lua. No need to get the object, modify it and then save it back (with retry): directly modify object state in the cache itself by invoking methods on your objects. Because it is single threaded, modifications are automatically atomic. It is best suited for managing volatile application state at super speed (70K requests per second per core). Memcached would be more useful if your objects don't change much, but if they change frequently, memcached's update mechanism using retries would give you significantly lower performance.

It comes with a default implementation of a quota object. You can change it to suit your needs without touching the C code. Here is how to use it.

Note that these are memcached get requests with special keys encoding our intent.  The keys follow this syntax:
<ObjectType>:<Method>:<ObjectKey>:<Arg1>:<Arg2>

Object type is "quota". Cacheismo has other object types as well, and more can be added by adding new lua scripts. Methods for the quota object are "new", "addandcheck" and "reset". The object key we are using is "userKey"; you would need a key per user. The number of arguments varies depending upon the method. Here new takes 2 args, addandcheck takes 1 and reset takes 0.

get quota:new:userKey:10:hour 
VALUE quota:new:userKey:10:hour 0 7 
CREATED
-- Creates a new quota object in cacheismo, that enforces 10 requests per hour.
get quota:addandcheck:userKey:10
VALUE quota:addandcheck:userKey:10 0 8
QUOTA_OK
-- Checking if user can use 10 units of quota. This succeeds. It is easy to create a default user object if it doesn't exist in this call also, instead of forcing users to always do a new first.
get quota:addandcheck:userKey:1
VALUE quota:addandcheck:userKey:1 0 14
QUOTA_EXCEEDED
-- Checking if user can use 1 more unit of quota. This fails as user exhausted it in previous call.
get quota:reset:userKey
VALUE quota:reset:userKey 0 8
QUOTA_OK
-- Reset the usage count back to 0.


Cacheismo can do around 70K such operations per second running on a single core. Like memcached, if a single instance is not enough, multiple instances can be used. Cacheismo supports the memcached ascii protocol, so any memcached client can be used to access it.


=========================================================
-- simple quota implementation 
local quotaTypesMap = { month = 1, day = 1, hour = 1 }


local function findExpiresAt(quotaType)
    local current = os.date("*t")
    if (quotaType == "month") then
        current.month = current.month + 1
        current.day   = 1
        current.hour  = 0
        current.min   = 0
        current.sec   = 1
    elseif (quotaType == "day") then
        current.day   = current.day + 1
        current.hour  = 0
        current.min   = 0
        current.sec   = 1
    else
        current.hour  = current.hour + 1
        current.min   = 0
        current.sec   = 1
    end
    return os.time(current)
end


local function new(limit, quotaType) 
  if (quotaTypesMap[quotaType] == nil) then
      return nil
  end
  local  object    = {}
  object.limit     = tonumber(limit)
  object.type      = quotaType
  object.count     = 0
  object.expiresAt = findExpiresAt(quotaType)
  return object
end


local function addandcheck(object, value)
   local toUseValue = 1 
   if (value ~= nil) then
      toUseValue = tonumber(value)
   end
   if (os.time() > object.expiresAt) then
object.count = 0
       object.expiresAt = findExpiresAt(object.type) 
   end
   if ((object.count + toUseValue) > object.limit) then
       return -1
   else 
       object.count = object.count + toUseValue
       return 0
   end
end


local function reset(object) 
   if (os.time() > object.expiresAt) then
       object.count = 0
       object.expiresAt = findExpiresAt(object.type) 
   end
   object.count = 0
   return 0
end
===================================================================

Cacheismo Memory Allocation

Cacheismo uses two different kinds of memory allocators. One is called chunkpool, which is used to store the items; the other is called fallocator, which is used for request processing. The two have different design goals: chunkpool is designed to be reasonably fast but very compact, trying to minimize memory wastage, while fallocator doesn't care about memory wastage but tries to be very fast. Here are the details.

fallocator 
fallocator is used when we are allocating memory temporarily and it is certain that it will be freed very soon. For example, when reading data from the socket and parsing the request, many small strings are generated, but these are released as soon as the request is over, which could be a matter of microseconds. Cacheismo creates a fallocator per connection. The fallocator not only allocates memory but also keeps track of it. Thus if, because of bad code, we miss freeing some memory, it will be reclaimed when the fallocator is destroyed, essentially when the connection is closed.

fallocator internally allocates a big buffer (4KB) and keeps track of how much of the buffer is used. Malloc simply returns buffer+used as the pointer and increments used by the size of the malloc. If the current buffer cannot handle the request, a new buffer is allocated and memory is given from the new buffer. Used buffers are kept in a linked list. On free, the fallocator decreases the refcount on the buffer which was used to allocate that memory. If the refcount becomes zero, the buffer is removed from the used buffers list. The benefit of this refcount based memory management is that it keeps memory usage to a minimum. So if you are doing some mallocs/frees in a loop, they will not exhaust the memory, because the refcount mechanism will ensure that the buffers are released. If the malloc size is bigger than 4KB, the fallocator falls back to system malloc but keeps track of the buffer.

Moreover, the 4KB buffers are cached to avoid going to malloc every time. This scheme gives us O(1) malloc and O(1) free in the normal case (when not going to system malloc). It avoids fragmentation of the memory, but could result in more memory being used than strictly required at any point in time. It also gives us some protection against memory leaks.
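The idea can be modelled in a few lines of lua (a toy model only: the real fallocator is written in C, keeps partially-freed buffers in a used list and caches freed buffers for reuse):

```lua
local BUFFER_SIZE = 4096

local function newFallocator()
    local f = { buffer = { used = 0, refs = 0 } }
    function f:malloc(size)
        if (self.buffer.used + size > BUFFER_SIZE) then
            -- the real fallocator links the old buffer into a used list;
            -- here we simply start a fresh one
            self.buffer = { used = 0, refs = 0 }
        end
        local b = self.buffer
        b.used = b.used + size        -- O(1) bump allocation
        b.refs = b.refs + 1
        return { buffer = b, size = size }
    end
    function f:free(ptr)
        local b = ptr.buffer
        b.refs = b.refs - 1           -- O(1) free
        if (b.refs == 0) then
            b.used = 0                -- whole buffer reusable again
        end
    end
    return f
end
```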

chunkpool
chunkpool is used for storing the cacheItems and the goal here is to maximize memory utilization. chunkpool imposes two constraints on the application. One, it never allocates more than 4KB of contiguous memory. That is, if the application needs 8KB of memory, it can at most get two 4KB blocks and never a single 8KB block; applications need to be designed around this limitation. The second one is not a constraint, but an advice. Apart from malloc, chunkpool exposes another function called relaxedMalloc which can return memory smaller than what the application asked for. Thus if the application asks for 4KB of memory, it might return 256 bytes or 1KB if it doesn't have a 4KB block. If the returned memory is too small, the application can free it and deal with the out-of-memory condition, or, if possible, use a large number of small buffers instead of a small number of large buffers.

Because of these restrictions, fragmentation of memory doesn't make the system unusable, but only adds more memory management overhead. Internally we have slabs of size 16 + n·16, up to 4096 bytes. Initially all memory is allocated to the 4KB slab. If the application asks for, say, 1KB of memory, we split a 4KB buffer into two buffers, one of 3KB and another of 1KB. The 1KB buffer goes to the application and the 3KB buffer is stored in the 3KB slab. chunkpool uses a skiplist to keep track of non-empty slabs. At runtime, if the slab of the requested malloc size is empty, it quickly finds the next non-empty slab, splits a buffer and returns memory to the application. On free, chunkpool tries to merge the free buffers to create bigger buffers. One limitation that I will fix later is that chunkpool uses the first four bytes of each buffer for memory management. The implication is that any free buffer can only merge with the logically next buffer and not the logically previous one; fixing this would require storing the buffer size at the end of the buffer as well, not just at the start. To circumvent this problem, chunkpool requires an extra GC call where it tries to merge 8MB of pages at a time. This is called once every second, and is triggered only if at least 12% of memory is free and the average chunk size is less than 256 bytes; otherwise it is a no-op. Even though chunkpool is capable of addressing up to 64GB of memory, it is advisable not to go beyond, say, 8GB.
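The split step can be modelled like this (again a toy lua model; the real chunkpool is C, tracks non-empty slabs with a skiplist and merges adjacent free buffers):

```lua
local slabs = {}   -- slabs[size] = list of free blocks of that size

local function putBlock(size)
    slabs[size] = slabs[size] or {}
    table.insert(slabs[size], { size = size })
end

local function splitAlloc(want, from)
    table.remove(slabs[from])     -- consume one block from the bigger slab
    putBlock(from - want)         -- remainder is filed under its own slab
    return { size = want }        -- caller gets exactly `want` bytes
end

putBlock(4096)
local chunk = splitAlloc(1024, 4096)
-- chunk.size is 1024 and slabs[3072] now holds the 3KB remainder
```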

chunkpool is a very efficient allocation manager for cache data when compared with a slab allocator. It saves the user the trouble of first thinking about what slab sizes should be used, and maintains a high HIT rate when object sizes change because of changes in application code. Cacheismo is not faster than memcached when comparing raw TPS, but when you take HIT rate into account, it saves you lots of db accesses and increases overall application performance. You can download it from here.