Solving “Big Data” – from the other side of the tracks.

Every two years, the amount of digital information created globally more than DOUBLES. In 2012, over 2.8 zettabytes (YES, ZETTA!) of digital information was created, and IDC projects that by 2020 this number will exceed 40 ZB! Keep in mind this is just NEW data created within a single year.


To put it in perspective (from a recent industry study):

  • If we could save all 40 ZB onto today’s Blu-ray discs, the weight of those discs (without any sleeves or cases) would equal that of 424 Nimitz-class aircraft carriers.
  • There are 700,500,000,000,000,000,000 grains of sand on all the beaches on earth (seven hundred quintillion, five hundred quadrillion). It would take the sand of 57 Earths (yes, 57) to equal all the data in 40 ZB.
  • In 2020, 40 ZB will work out to roughly 5 terabytes per person worldwide (do you still remember your 2 GB iPod?); the quick arithmetic is sketched just below this list.
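For anyone who wants to sanity-check that last bullet, the arithmetic is simple. Here is a quick Python sketch; the 2020 world population of roughly 8 billion is my own round assumption, not a figure from the study:

# Rough arithmetic behind the '5 TB per person' bullet.
# The ~8 billion population figure for 2020 is an assumption.
zettabyte = 10 ** 21                  # bytes
total_data = 40 * zettabyte           # 40 ZB of new data projected for 2020
population_2020 = 8_000_000_000

per_person_tb = total_data / population_2020 / 10 ** 12
print(f"~{per_person_tb:.1f} TB per person")   # prints ~5.0 TB per person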

“40 Zettabytes” is just a couple of characters on a screen to you and me, and let’s be honest, at first glance it really doesn’t seem like that big of a deal. That is, until you think about what it equates to; then it becomes daunting to imagine. So, where on earth does all this data get stored?

I spend a lot of my time talking with clients about “Big Data” — what it is, how to leverage it, where to store it, etc. There isn’t a silver bullet. The goal is to derive benefit from all the information that people create and companies store. The challenge is simply the sheer volume of information that must be stored, managed, accessed, analyzed and understood. Many businesses turn to companies like EMC to help them solve their storage challenges. Companies like EMC offer tried, tested, proven and supported solutions like Isilon, Greenplum and VMAX to address “Big Data”. This is the world where I spend most of my time: the world of enterprise solutions.


Today however, I’d like to take you to the other side of the tracks — where innovation and ideas are often incubated — the world of free and open-source. Enter Ceph…


About a week ago I set out on a journey to find an open-source solution for highly scalable storage. Most of these solutions were developed to support HPC (High Performance Computing); technologies like GFS, Lustre, OCFS, etc. were all born out of this space. Most of what is available is based on file system technology, and at first nothing really made me go “wow, revolutionary!” — that is, until I came across Ceph.

Ceph sets out to build a flexible framework for storage that you can leverage in many different ways (Block, File, Object or API). The fundamental building block is an object-based storage engine that is flexible, performance oriented and contains no single point of failure (even cooler is the fact that every component scales as little or as much as you want!). After doing some research I immediately thought — “way too good to be true — should I even waste my time?”.
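To make the “many different ways” point a bit more concrete, here is a minimal sketch of the object path through the librados Python bindings. The config path and the pool name (‘data’) are my assumptions for illustration; nothing below comes from the test later in this post:

import rados

# Connect to the cluster using a local ceph.conf (path is an assumption).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

try:
    # Open an I/O context against a pool named 'data' (also an assumption).
    ioctx = cluster.open_ioctx('data')
    try:
        # Write an object into the cluster, then read it back.
        ioctx.write_full('hello-object', b'stored via librados')
        print(ioctx.read('hello-object'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

The same underlying object store backs the block and file interfaces as well, which is what makes the framework so flexible.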

So far, I’m very glad I did.

I spent the weekend tinkering with version 0.56.2. My primary concern was understanding the latency associated with the technology, since my interest is really in primary storage. I wanted to understand whether Ceph could be considered for tier 1 applications and data, where performance and latency are of the utmost concern.

Here are the initial single-threaded (queue depth of 1) performance results from my lab environment (this isn’t a rigorous test, but I wanted to see what kind of latency I was going to get before spending any more time and effort):


IO Size: 4 KB
Threads: 1
Profile: Random Read
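For reference, the test itself was a plain fio run. The sketch below simply wraps an equivalent invocation in Python so the parameters above are explicit; the target directory is an assumption and should point at wherever the Ceph client is mounted:

import subprocess

# fio job matching the parameters above: 4 KB random reads, a single
# synchronous thread (queue depth 1), against a 3000 MB test file.
fio_cmd = [
    "fio",
    "--name=random-read",
    "--rw=randread",
    "--bs=4k",
    "--ioengine=sync",
    "--iodepth=1",
    "--numjobs=1",
    "--size=3000M",
    "--directory=/mnt/ceph",   # assumed mount point for the Ceph client
]

subprocess.run(fio_cmd, check=True)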

random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
2.0.8
Starting 1 process
random-read: Laying out IO file(s) (1 file(s) / 3000MB)
Jobs: 1 (f=1): [r] [100.0% done] [4879K/0K /s] [1219 /0  iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=3141
  read : io=3000.0MB, bw=5177.3KB/s, iops=1294 , runt=593368msec
    clat (usec): min=262 , max=38206 , avg=769.11, stdev=279.80
     lat (usec): min=262 , max=38206 , avg=769.27, stdev=279.80
    clat percentiles (usec):
     |  1.00th=[  406],  5.00th=[  474], 10.00th=[  510], 20.00th=[  604],
     | 30.00th=[  668], 40.00th=[  716], 50.00th=[  756], 60.00th=[  780],
     | 70.00th=[  804], 80.00th=[  844], 90.00th=[  996], 95.00th=[ 1144],
     | 99.00th=[ 1960], 99.50th=[ 2256], 99.90th=[ 3024], 99.95th=[ 3664],
     | 99.99th=[ 6688]
    bw (KB/s)  : min= 3216, max= 5568, per=100.00%, avg=5181.41, stdev=184.08
    lat (usec) : 500=8.57%, 750=40.28%, 1000=41.31%
    lat (msec) : 2=8.95%, 4=0.85%, 10=0.04%, 20=0.01%, 50=0.01%
  cpu          : usr=0.93%, sys=6.46%, ctx=769325, majf=0, minf=23
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=768000/w=0/d=0, short=r=0/w=0/d=0

This is the fio output from a quick I/O test. The key results are:

IOPS: 1294
THROUGHPUT: ~5 MB/s
AVG LATENCY: 769 µs (less than 1 ms)
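Those numbers also hang together, which is a reasonable sanity check. A quick back-of-the-envelope in Python, using only the figures reported above:

# With a queue depth of 1, throughput and average latency both follow
# directly from the measured IOPS.
iops = 1294
block_size_kb = 4          # 4 KB random reads
avg_latency_us = 769

throughput_kb_s = iops * block_size_kb
print(f"Throughput ~= {throughput_kb_s} KB/s "
      f"({throughput_kb_s / 1024:.1f} MB/s)")      # ~5176 KB/s, ~5.1 MB/s

# One outstanding I/O means IOPS is roughly the inverse of average latency.
print(f"1 / avg latency ~= {1_000_000 / avg_latency_us:.0f} IOPS")   # ~1300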

From my perspective these were AWESOME results. Keep in mind, I’m using a single thread with no tuning, tweaking or advanced features like caching. After seeing these results I decided Ceph was worth additional time and discovery. It is by no means enterprise ready (the client software crashed the virtual servers it was running on a few times early in my setup and testing), but my instincts tell me Ceph has a bright future ahead of it.

Over the next couple of weeks I’ll be working to bring you more information regarding Ceph.

I’ll keep you posted!

</my two cents>
