Today I could set a value on server 1 and get it back from server 2. The script on server 2 was changed a bit:
function handleGET(command, key)
    -- first look in the local hashmap
    local cacheItem = getHashMap():get(key)
    if (cacheItem ~= nil) then
        command:writeCacheItem(cacheItem)
        -- release the reference to the cache item
        cacheItem:delete()
    else
        -- not found locally, ask the other server
        local result = command:getFromExternalServer("127.0.0.1:11212", key)
        if (result ~= nil) then
            writeStringAsValue(command, key, result)
        end
    end
end
This is where I feel Lua beats JavaScript because of its excellent coroutine support. I could easily change the above code to a while loop that goes through all the servers in the cluster looking for the value, and it would not change the complexity of the code a bit; try doing that with JavaScript callbacks and it would look like a mess.
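Just to illustrate, here is a minimal sketch of that loop. The servers table is a hypothetical list of peer addresses that I am assuming for the example; everything else uses the same calls as the script above.

-- Hypothetical list of peer addresses, assumed for this sketch.
local servers = { "127.0.0.1:11212", "127.0.0.1:11213", "127.0.0.1:11214" }

function handleGET(command, key)
    local cacheItem = getHashMap():get(key)
    if (cacheItem ~= nil) then
        command:writeCacheItem(cacheItem)
        cacheItem:delete()
        return
    end
    -- Walk the peers one by one. Each getFromExternalServer call yields the
    -- coroutine, so this reads like blocking code without blocking the server.
    local index = 1
    while (index <= #servers) do
        local result = command:getFromExternalServer(servers[index], key)
        if (result ~= nil) then
            writeStringAsValue(command, key, result)
            return
        end
        index = index + 1
    end
end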
I am now thinking that the original idea of using dataKey and virtualKey is a bit lame. What I would do instead is make the server an explicit argument, just as I did above. For consistent hashing there will be an object exposed in Lua that can be used to find the correct server, but using it will be optional. This gives users the flexibility to run some cacheismo instances as pure proxies and others as caches. Some of the servers can be part of one consistent hashing group and others can be part of another. If users want to use one of the servers as a naming server that maps keys to servers, they can use it that way. Essentially users have full flexibility in how they map keys to servers: consistent hashing, multiple servers hidden behind a proxy (cacheismo), a naming server that maps keys to servers, or any combination of these.
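Roughly, I imagine it looking something like the sketch below. The consistentHash object and its getServer method are only my working assumptions for the planned API, not something that exists today; the rest is the same handler as before.

-- Sketch of the planned API: consistentHash:getServer(key) is assumed,
-- not an existing call. Using it would be optional.
function handleGET(command, key)
    local cacheItem = getHashMap():get(key)
    if (cacheItem ~= nil) then
        command:writeCacheItem(cacheItem)
        cacheItem:delete()
    else
        -- Let consistent hashing pick the owning server; a script could just
        -- as well hard-code a server or ask a naming server instead.
        local server = consistentHash:getServer(key)
        local result = command:getFromExternalServer(server, key)
        if (result ~= nil) then
            writeStringAsValue(command, key, result)
        end
    end
end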
Other possibilities include replication and quorum. These can be accomplished with the current API itself, but I will add parallel get support to make them as fast as possible.
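As a rough sketch of what a quorum read could look like with today's sequential API (the replicas table is again a hypothetical list of peers, and parallel get would make this much faster):

-- Rough sketch of a quorum read with the current sequential API.
local replicas = { "127.0.0.1:11212", "127.0.0.1:11213", "127.0.0.1:11214" }

function quorumGET(command, key)
    local counts = {}
    for i = 1, #replicas do
        local result = command:getFromExternalServer(replicas[i], key)
        if (result ~= nil) then
            counts[result] = (counts[result] or 0) + 1
            if (counts[result] * 2 > #replicas) then
                -- A majority of replicas agree on this value.
                writeStringAsValue(command, key, result)
                return
            end
        end
    end
end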