NoSQL

Call me maybe: Redis

Posted by irrlab

“Redis is a fantastic data structure server, typically deployed as a shared heap. It provides fast access to strings, lists, sets, maps, and other structures with a simple text protocol. Since it runs on a single server, and that server is single-threaded, it offers linearizable consistency by default: all operations happen in a single, well-defined order. There’s also support for basic transactions, which are atomic and isolated from one another.

Because of this easy-to-understand consistency model, many users treat Redis as a message queue, lock service, session store, or even their primary database. Redis running on a single server is a CP system, so it is consistent for these purposes…”
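The single-threaded execution model the quote describes can be sketched as a toy in Python (a simulation for intuition only, not real Redis): one worker thread drains a command queue, so every operation lands in a single, well-defined order and concurrent read-modify-writes never interleave.

```python
import threading
import queue

class TinyStore:
    """Toy single-threaded key-value server: one worker drains a command
    queue, so all operations execute in one well-defined order."""
    def __init__(self):
        self._data = {}
        self._q = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        # Single consumer thread: commands run one at a time, in arrival
        # order -- the source of Redis-style linearizable behavior.
        while True:
            fn, reply = self._q.get()
            reply.put(fn(self._data))

    def execute(self, fn):
        """Run fn(data) on the server thread and wait for the result."""
        reply = queue.Queue(maxsize=1)
        self._q.put((fn, reply))
        return reply.get()

    def incr(self, key):
        # Read-modify-write runs as one queued command, so it is atomic.
        return self.execute(
            lambda d: d.__setitem__(key, d.get(key, 0) + 1) or d[key])

store = TinyStore()
threads = [threading.Thread(
               target=lambda: [store.incr("hits") for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store.execute(lambda d: d["hits"]))  # 4000: no lost updates
```

Four threads race on the same counter, yet the final count is exact because the store serializes every command; with a shared dict and no queue, increments would be lost.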

aphyr.com/posts/283-call-me-maybe-redis

Billion Messages – Art of Architecting scalable ElastiCache Redis tier

Posted by irrlab

“Whenever we design highly scalable architectures on AWS running thousands of application servers and supporting millions of requests, NoSQL solutions have become an inevitable part of the stack. One such solution we have been using for years on AWS is Redis. We love Redis. AWS introduced ElastiCache Redis in 2013, and we started using it since it eased management and operational effort. In this article I am going to share my experience designing large-scale Redis tiers supporting billions of messages per day on AWS: a step-by-step guide on how to deploy them, the implications you face at scale, best practices to adopt while designing sharded+replicated Redis tiers, etc…”
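Sharded Redis tiers like the ones described are often fronted by client-side consistent hashing, so each key maps deterministically to one shard and adding a node remaps only a fraction of keys. Here is a minimal, hypothetical sketch in Python (the shard names and vnode count are invented; ElastiCache and real client libraries have their own schemes):

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring for picking a Redis shard per key.
    A sketch of one common client-side sharding scheme, not ElastiCache's."""
    def __init__(self, shards, vnodes=64):
        # Place each shard at many virtual points for even key spread.
        self._ring = sorted(
            (self._hash(f"{shard}#{i}"), shard)
            for shard in shards
            for i in range(vnodes))

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def shard_for(self, key):
        # First ring point clockwise from the key's hash owns the key.
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

shards = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]
ring = HashRing(shards)
print(ring.shard_for("user:42"))
```

The lookup is deterministic, so every application server agrees on which shard holds a given key without any coordination.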

harish11g.blogspot.co.uk/2014/08/art-of-architecting-highly-scalable-available-elasticache-redis-tier.html

How Twitter Uses Redis to Scale – 105TB RAM, 39MM QPS, 10,000+ Instances

Posted by irrlab

“Yao Yue has worked on Twitter’s Cache team since 2010. She recently gave a really great talk: Scaling Redis at Twitter. It’s about Redis of course, but it’s not just about Redis.

Yao has worked at Twitter for a few years. She’s seen some things. She’s watched the cache service at Twitter explode from being used by just one project to nearly a hundred projects. That’s many thousands of machines, many clusters, and many terabytes of RAM.

It’s clear from her talk that she’s coming from a place of real personal experience, and that shines through in the practical way she explores issues. It’s a talk well worth watching.

As you might expect, Twitter has a lot of cache…”

highscalability.com/blog/2014/9/8/how-twitter-uses-redis-to-scale-105tb-ram-39mm-qps-10000-ins.html

Dissecting Message Queues

Posted by irrlab

“Continuing my series on message queues, I spent this weekend dissecting various libraries for performing distributed messaging. In this analysis, I look at a few different aspects, including API characteristics, ease of deployment and maintenance, and performance qualities. The message queues have been categorized into two groups: brokerless and brokered. Brokerless message queues are peer-to-peer such that there is no middleman involved in the transmission of messages, while brokered queues have some sort of server in between endpoints…”
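The brokered topology the author describes can be illustrated with a toy in-process broker in Python (an illustration of the shape only; real brokers add networking, persistence, and acknowledgements): producers and consumers only know the broker, never each other.

```python
import queue
from collections import defaultdict

class Broker:
    """Toy in-process broker: the middleman between endpoints.
    Producers publish to named topics; consumers pull from them."""
    def __init__(self):
        # One FIFO queue per topic, created on first use.
        self._topics = defaultdict(queue.Queue)

    def publish(self, topic, msg):
        self._topics[topic].put(msg)

    def consume(self, topic, timeout=1.0):
        # Blocks until a message arrives or the timeout expires.
        return self._topics[topic].get(timeout=timeout)

broker = Broker()
broker.publish("orders", {"id": 1, "sku": "ABC"})
print(broker.consume("orders"))  # {'id': 1, 'sku': 'ABC'}
```

A brokerless design would instead connect the two endpoints directly (as ZeroMQ-style sockets do), trading the broker's decoupling and buffering for one less hop.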

www.bravenewgeek.com/dissecting-message-queues/

ArangoDB

Posted by irrlab

“A distributed open-source database with a flexible data model for documents, graphs, and key-values. Build high-performance applications using a convenient SQL-like query language or JavaScript extensions…”

https://www.arangodb.org/

Raft: The Understandable Distributed Consensus Protocol

Posted by irrlab

“Ben Johnson has ten years of software development experience working in database architecture, distributed systems and data visualization. He is the lead developer of the Sky behavioural database project (skydb.io/) and lead developer of the Go implementation of the Raft protocol (https://github.com/benbjohnson/go-raft)…”
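As a flavor of why Raft is pitched as the understandable consensus protocol, here is a sketch in Python of its RequestVote receiver rule (field names are my own shorthand, not go-raft's API): a node grants at most one vote per term, and only to candidates whose log is at least as up-to-date as its own.

```python
def grant_vote(state, req):
    """Sketch of Raft's RequestVote receiver rule.
    `state` is this node's persistent state; `req` is the candidate's RPC."""
    if req["term"] < state["current_term"]:
        return False                       # stale candidate, reject
    if req["term"] > state["current_term"]:
        state["current_term"] = req["term"]
        state["voted_for"] = None          # new term frees up our vote
    # At most one vote per term.
    if state["voted_for"] not in (None, req["candidate_id"]):
        return False
    # Candidate's log must be at least as up-to-date as ours:
    # compare last log term first, then last log index.
    up_to_date = (req["last_log_term"], req["last_log_index"]) >= \
                 (state["last_log_term"], state["last_log_index"])
    if up_to_date:
        state["voted_for"] = req["candidate_id"]
    return up_to_date

follower = {"current_term": 2, "voted_for": None,
            "last_log_term": 2, "last_log_index": 5}
req = {"term": 3, "candidate_id": "n1",
       "last_log_term": 2, "last_log_index": 5}
print(grant_vote(follower, req))  # True
```

Because each node votes at most once per term, two candidates can never both win a majority in the same term; the full protocol layers leader election timeouts and log replication on top of this rule.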

www.infoq.com/presentations/raft

Redis on steroids: Autocomplete using Redis, Nginx and Lua

Posted by irrlab

“Serving autocomplete instantaneously has always been a top priority for us, and until recently we hacked our way through by caching autocomplete entries on the client side (embarrassing, I know) and syncing from the backend. This helped us serve results at “Google Instant” level until the data began to hit memory limits (client-side array sizes)…”
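Server-side autocomplete of this kind is commonly built on a Redis sorted set queried by lexicographic range (ZRANGEBYLEX with all members at score 0). The same trick can be sketched in plain Python with a sorted list standing in for the Redis structure (my own illustration, not the article's actual code):

```python
import bisect

def autocomplete(sorted_entries, prefix, limit=5):
    """Prefix lookup over a sorted list -- the lexicographic-range trick
    Redis exposes as ZRANGEBYLEX on a sorted set with uniform scores."""
    lo = bisect.bisect_left(sorted_entries, prefix)
    # "\xff" sorts after any continuation of the prefix, closing the range.
    hi = bisect.bisect_left(sorted_entries, prefix + "\xff")
    return sorted_entries[lo:hi][:limit]

recipes = sorted(["pancake", "pasta", "paella", "pizza",
                  "pan pizza", "pesto"])
print(autocomplete(recipes, "pa"))
# ['paella', 'pan pizza', 'pancake', 'pasta']
```

Both lookups are binary searches, so each keystroke costs O(log n) regardless of how many entries share the prefix, which is what makes "instant" suggestions cheap to serve from the backend.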

www.cucumbertown.com/craft/autocomplete-using-redis-nginx-lua/