From f2fc5b580a3e559d50fb8ecf76e7f8fa07b91a0d Mon Sep 17 00:00:00 2001
From: Dru Lavigne
Date: Fri, 8 Jul 2016 02:42:45 +0000
Subject: [PATCH] Add Ceph status report submitted by wjw@digiware.nl.

Reviewed by: wblock
Sponsored by: iXsystems
---
 .../news/status/report-2016-04-2016-06.xml | 168 ++++++++++++++++++
 1 file changed, 168 insertions(+)

diff --git a/en_US.ISO8859-1/htdocs/news/status/report-2016-04-2016-06.xml b/en_US.ISO8859-1/htdocs/news/status/report-2016-04-2016-06.xml
index b913b93dff..fd0bd61852 100644
--- a/en_US.ISO8859-1/htdocs/news/status/report-2016-04-2016-06.xml
+++ b/en_US.ISO8859-1/htdocs/news/status/report-2016-04-2016-06.xml
@@ -680,4 +680,172 @@ Microsoft

Ceph on FreeBSD

  Contact: Willem Jan Withagen <wjw@digiware.nl>

  Links:
    Ceph main site
    Main repository
    My Fork
    The git PULL with all changes

Ceph is a distributed object store and file system designed + to provide excellent performance, reliability, and + scalability. It provides the following features:

  1. Object Storage: Ceph provides seamless access to objects using native language bindings or radosgw, a REST interface that is compatible with applications written for S3 and Swift.

  2. Block Storage: Ceph’s RADOS Block Device (RBD) provides access to block device images that are striped and replicated across the entire storage cluster.

  3. File System: Ceph provides a POSIX-compliant network file system that aims for high performance, large data storage, and maximum compatibility with legacy applications.
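As a quick illustration of these three access paths (this example is not from the report; the pool, image, and mount point names are hypothetical, and the upstream client tools shown here include exactly the RBD and CephFS pieces that are not yet available on &os;):

    # Object storage: store and retrieve an object with the rados tool
    rados -p mypool put backup.img ./backup.img
    rados -p mypool get backup.img /tmp/backup.img

    # Block storage: create a 10 GiB RBD image for a VM disk
    rbd create mypool/vmdisk0 --size 10240

    # File system: mount CephFS through the FUSE client
    ceph-fuse /mnt/cephfs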

I started looking into Ceph because HAST with CARP and ggate did not meet my requirements. My primary goal with Ceph is to run a storage cluster of ZFS storage nodes where the clients run bhyve on RBD disks stored in Ceph.


The &os; build process can build most of the tools in + Ceph. However, the RBD-dependent items do not work since + &os; does not yet provide RBD support.


Since the last quarterly report, the following progress was + made:

  1. The changeover from using Automake to CMake results in a much cleaner development environment and better test output. The changes can be found in the wip-wjw-freebsd-cmake branch. (A sketch of the corresponding build and test run follows this list.)

  2. Throttling code has been overhauled to prevent live locks. These mainly occur on &os; but also manifest on Linux.

  3. Fixed a few more tests. On one occasion, I was able to complete the full test set without errors.
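A minimal sketch of an out-of-tree CMake build and test run on that branch (the exact options and parallelism flags are assumptions, not taken from the report):

    git checkout wip-wjw-freebsd-cmake
    mkdir build && cd build
    cmake ..
    cmake --build . -- -j$(sysctl -n hw.ncpu)
    ctest --output-on-failure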

&os; 11-CURRENT is used to compile, build, and test Ceph. The Clang toolset needs to be at least version 3.7, as Clang 3.4 does not have all of the capabilities required to compile everything.
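A hedged example of checking the toolchain and, if needed, pulling a newer one from ports (the llvm37 package and binary names are assumptions about the ports tree of that time, not part of the report):

    # Check the base-system compiler; 11-CURRENT ships a recent Clang
    cc --version
    # If it reports something older than 3.7, a toolchain from ports can be
    # used instead (package and binary names assumed):
    pkg install llvm37
    export CC=clang37 CXX=clang++37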


This setup will get things running for &os;:
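A plausible sketch of such a setup, assuming the fork and the wip-wjw-freebsd-cmake branch named above and a top-level do_freebsd.sh build script (the repository URL and script name are assumptions):

    git clone https://github.com/wjwithagen/ceph.git
    cd ceph
    git checkout wip-wjw-freebsd-cmake
    ./do_freebsd.sh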


Parts Not Yet Included:


Tests Not Yet Included:

Open tasks:

  - The current and foremost task is to get the test set to complete without errors.

  - Build an automated test platform that will build ceph/master on &os; and report the results back to the Ceph developers. This will increase the maintainability of the &os; side of things, as developers are signaled that they are using Linux-isms that will not compile or run on &os;. Ceph has several projects that support this: Jenkins, teuthology, and pulpito. But even a while { compile } loop that reports the build data on a static webpage is a good start.

  - Run integration tests to see if the &os; daemons will work with a Linux Ceph platform.

  - Get the currently excluded Python tests to work.

  - Compile and test the user-space RBD (RADOS Block Device).

  - Investigate whether an in-kernel RBD device could be developed, à la ggate.

  - Investigate the keystore, which currently prevents building CephFS and some other parts.

  - Integrate the &os; /etc/rc.d init scripts into the Ceph stack, for testing and for running Ceph on production machines (a minimal sketch of such a script follows this list).
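A minimal sketch of what rc.d integration could look like for the monitor daemon, following the standard &os; rc.subr conventions (the variable names, paths, and arguments are assumptions for illustration, not an existing Ceph script):

    #!/bin/sh
    # Hypothetical rc.d script for ceph-mon; names and paths are assumptions.
    #
    # PROVIDE: ceph_mon
    # REQUIRE: NETWORKING
    # KEYWORD: shutdown

    . /etc/rc.subr

    name="ceph_mon"
    rcvar="ceph_mon_enable"
    command="/usr/local/bin/ceph-mon"
    command_args="-i $(hostname -s)"

    load_rc_config $name
    : ${ceph_mon_enable:="NO"}

    run_rc_command "$1"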