Ceph
This article is about the computer file system. For the orchid genus, see Cephalanthera.
Developer(s): Sage Weil, Yehuda Sadeh Weinraub, Gregory Farnum
Stable release: 0.38 / November 10, 2011
Operating system: Linux
Type: Distributed file system
License: LGPL
Website: ceph.newdream.net

Ceph is a free software distributed file system initially created by Sage Weil. Ceph's main goals are to be POSIX-compatible and completely distributed, without a single point of failure. Data is seamlessly replicated, making the system fault tolerant.[1]
Clients mount the file system using a Linux kernel client. On March 19, 2010, Linus Torvalds merged the Ceph client for Linux kernel 2.6.34[2] which was released on May 16, 2010. An older FUSE-based client is also available. The servers run as regular Unix daemons.
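As a rough illustration of how a client might attach the file system, the sketch below invokes the in-kernel client through the mount(2) system call with file system type "ceph". The monitor address, mount point, and option string are hypothetical placeholders, and the call requires root privileges.

    /* Minimal sketch: mount a Ceph file system via the kernel client.
     * The monitor address, mount point, and option string below are
     * hypothetical; run as root. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Source is a monitor address followed by the path within the
         * Ceph namespace to mount (":/" is the root). */
        const char *src  = "192.168.0.1:6789:/";
        const char *dst  = "/mnt/ceph";
        const char *opts = "name=admin";  /* cephx user, if enabled */

        if (mount(src, dst, "ceph", 0, opts) != 0) {
            perror("mount");
            return 1;
        }
        printf("mounted %s on %s\n", src, dst);
        return 0;
    }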
History
Ceph was initially created by Sage Weil (developer of the Webring concept and co-founder of DreamHost) for his doctoral dissertation,[3] which was supervised by Professor Scott A. Brandt in the Jack Baskin School of Engineering at the University of California, Santa Cruz.
After his graduation in fall 2007, Weil continued to work on Ceph full time, and the core development team expanded to include Yehuda Sadeh Weinraub and Gregory Farnum.
Design
Ceph employs three distinct kinds of daemons:
- Cluster monitors (MON), which keep track of active and failed cluster nodes.
- Metadata servers (MDS), which store the metadata of inodes and directories.
- Object storage devices (OSDs), which actually store the content of files. Ideally, OSDs store their data on a local btrfs file system, though other local file systems can be used instead.[4]
All of these are fully distributed, and may run on the same set of servers. Clients directly interact with all of them.[5]
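The division of labor among the daemons is visible from a client's perspective: object reads and writes go straight to the OSDs, with the monitors supplying the cluster map and the MDS involved only for the POSIX metadata layer. The sketch below stores one object through the librados C API; the pool name, object name, and configuration path are hypothetical, and error handling is abbreviated.

    /* Hedged sketch: store one object directly on the OSDs through the
     * librados C API (link with -lrados). The pool "data", the object
     * name, and the configuration path are hypothetical. */
    #include <stdio.h>
    #include <string.h>
    #include <rados/librados.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        const char payload[] = "hello ceph";

        /* Connect as client.admin; the monitors provide the cluster map. */
        if (rados_create(&cluster, "admin") < 0 ||
            rados_conf_read_file(cluster, "/etc/ceph/ceph.conf") < 0 ||
            rados_connect(cluster) < 0) {
            fprintf(stderr, "could not connect to cluster\n");
            return 1;
        }

        /* Write one object into a pool; placement on OSDs is automatic. */
        if (rados_ioctx_create(cluster, "data", &io) == 0) {
            rados_write(io, "greeting", payload, strlen(payload), 0);
            rados_ioctx_destroy(io);
        }
        rados_shutdown(cluster);
        return 0;
    }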
Ceph stripes individual files across multiple nodes to achieve higher throughput, much as RAID 0 stripes data across multiple hard drives. A planned extension of this feature is adaptive load balancing, whereby frequently accessed objects are replicated over more nodes.[5]
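A simplified model of this striping is sketched below: a byte offset in a file is converted to a stripe object index and an offset within that object. The stripe parameters are illustrative values only, and real Ceph layouts additionally bound objects by an object size and group stripes into object sets.

    /* Simplified RAID 0-style striping arithmetic: map a file offset to
     * a stripe object and an offset inside it. The 4 MiB stripe unit
     * and stripe count of 4 are illustrative values only. */
    #include <stdio.h>
    #include <stdint.h>

    #define STRIPE_UNIT  (4ULL * 1024 * 1024)  /* bytes per stripe unit */
    #define STRIPE_COUNT 4ULL                  /* objects striped across */

    int main(void)
    {
        uint64_t off = 21ULL * 1024 * 1024;    /* byte 21 MiB of the file */

        uint64_t unit    = off / STRIPE_UNIT;      /* which stripe unit */
        uint64_t object  = unit % STRIPE_COUNT;    /* which object/node */
        uint64_t obj_off = (unit / STRIPE_COUNT) * STRIPE_UNIT
                         + off % STRIPE_UNIT;      /* offset in object  */

        printf("file offset %llu -> object %llu, object offset %llu\n",
               (unsigned long long)off, (unsigned long long)object,
               (unsigned long long)obj_off);
        return 0;
    }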
Etymology
The name "Ceph" is a common nickname given to pet octopus and derives from cephalopods, a class of molluscs, and ultimately from Ancient Greek κεφαλή (kephalē), meaning "head". The name (emphasized by the logo) suggests the highly parallel behavior of an octopus.
See also
- Distributed file system
- List of file systems § Distributed parallel fault-tolerant file systems
- Fraunhofer Parallel File System (FhGFS)
- GlusterFS
- Lustre
- MooseFS
- PVFS2
References
- ^ Jeremy Andrews (2007-11-15). "Ceph Distributed Network File System". KernelTrap. http://kerneltrap.org/Linux/Ceph_Distributed_Network_File_System.
- ^ Sage Weil (2010-03-19). "Client merged for 2.6.34". ceph.newdream.net. http://ceph.newdream.net/2010/03/client-merged-for-2-6-34/.
- ^ Sage Weil (2007-12-01). "Ceph: Reliable, Scalable, and High-Performance Distributed Storage". University of California, Santa Cruz. http://ceph.newdream.net/weil-thesis.pdf.
- ^ "Btrfs - Ceph Wiki". http://ceph.newdream.net/wiki/Btrfs. Retrieved 2010-04-27.
- ^ a b Jake Edge (2007-11-14). "The Ceph filesystem". LWN.net. http://lwn.net/Articles/258516/.
Further reading
- M. Tim Jones (2010-05-04). "Ceph: A Linux petabyte-scale distributed file system". developerWorks > Linux > Technical library. http://www.ibm.com/developerworks/linux/library/l-ceph/index.html. Retrieved 2010-05-06.
- Jeffrey B. Layton (2010-04-20). "Ceph: The Distributed File System Creature from the Object Lagoon". Linux Magazine. http://www.linux-mag.com/cache/7744/1.html. Retrieved 2010-04-24.
- Carlos Maltzahn, Esteban Molina-Estolano, Amandeep Khurana, Alex J. Nelson, Scott A. Brandt, and Sage Weil (August 2010). "Ceph as a scalable alternative to the Hadoop Distributed File System". ;login: 35 (4). http://www.usenix.org/publications/login/2010-08/openpdfs/maltzahn.pdf. Retrieved 2010-11-30.
Categories: Distributed file systems | Linux file systems | Network file systems | User space file systems