Condor High-Throughput Computing System

Condor
Developer(s): University of Wisconsin–Madison
Stable release: 7.6.4 / October 24, 2011
Preview release: 7.7.2 / October 12, 2011
Operating system: Microsoft Windows, Mac OS X, Linux
Type: High-throughput computing
License: Apache License 2.0

Condor is an open source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks.[1] It can be used to manage workload on a dedicated cluster of computers, or to farm out work to idle desktop computers (so-called cycle scavenging). Condor runs on Linux, Unix, Mac OS X, FreeBSD, and contemporary Windows operating systems, and can seamlessly integrate dedicated resources (such as rack-mounted clusters) and non-dedicated desktop machines into a single computing environment.

Condor is developed by the Condor team at the University of Wisconsin–Madison and is freely available for use under the Apache License 2.0.[2] It can be downloaded from the project's website and is also packaged in Linux distributions such as Fedora and Ubuntu.

By way of example, the Condor pool at the NASA Advanced Supercomputing facility (NAS) consists of approximately 350 SGI and Sun workstations purchased and used for software development, visualization, email, document preparation, and so on.[3] Each workstation runs a daemon that watches user I/O and CPU load. When a workstation has been idle for two hours, a job from the batch queue is assigned to it and runs until the daemon detects a keystroke, mouse motion, or high non-Condor CPU usage, at which point the job is removed from the workstation and placed back on the batch queue.
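Policies of this kind are expressed in Condor's ClassAd expression language in the configuration of each execute machine. The following is a minimal sketch; the macro names follow the style of the Condor manual's policy examples, but the thresholds are illustrative, not the NAS pool's actual settings:

```
## Illustrative execute-machine policy (condor_config).
## Thresholds are examples only.
NonCondorLoadAvg = (LoadAvg - CondorLoadAvg)
MachineBusy      = (KeyboardIdle < 2 * 60 * 60) || ($(NonCondorLoadAvg) > 0.3)

START   = ($(MachineBusy)) == False
SUSPEND = $(MachineBusy)
PREEMPT = $(MachineBusy)
```

Here START gates when the machine will accept a job, while SUSPEND and PREEMPT cause a running job to be paused and then vacated once the owner returns.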

Condor can run both sequential and parallel jobs. Sequential jobs can be run in several different "universes", including the "vanilla" universe, which can run most batch-ready programs, and the "standard" universe, in which the target application is re-linked with the Condor I/O library, providing remote job I/O and job checkpointing. Condor also provides a "local" universe, which allows jobs to run on the submit host.
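A vanilla-universe job is described in a submit description file and queued with condor_submit. A minimal sketch (the executable and file names are placeholders):

```
# hello.sub -- minimal vanilla-universe submit description
universe   = vanilla
executable = hello
arguments  = world
output     = hello.out
error      = hello.err
log        = hello.log
queue
```

The job is then submitted with `condor_submit hello.sub`; switching the job to the standard universe would additionally require re-linking the executable with condor_compile.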

For parallel jobs, Condor supports the standard MPI and PVM interfaces (Goux et al., 2000) in addition to its own Master-Worker ("MW") library for embarrassingly parallel tasks.

Condor-G allows Condor jobs to use resources not under Condor's direct control. It is mostly used to submit jobs to grid and cloud resources, such as pre-WS and WS Globus, NorduGrid ARC, UNICORE, and Amazon EC2, but it can also be used to talk to other batch systems, such as Torque/PBS and LSF. Support for Sun Grid Engine is under development as part of the EGEE project.
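A Condor-G submission uses the "grid" universe together with a grid_resource line naming the remote gatekeeper. A sketch for a pre-WS Globus (GRAM2) resource, with placeholder host and file names:

```
# Condor-G sketch: route a job to a remote Globus GRAM gatekeeper
universe      = grid
grid_resource = gt2 gatekeeper.example.org/jobmanager-pbs
executable    = analyze
output        = analyze.out
log           = analyze.log
queue
```

The first token of grid_resource selects the grid type (for example gt2 for pre-WS Globus); other middleware listed above uses a different grid type and resource string.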

Condor supports the DRMAA job API. This allows DRMAA compliant clients to submit and monitor Condor jobs. The SAGA C++ Reference Implementation provides a Condor plug-in (adaptor), which makes Condor job submission and monitoring available via SAGA's Python and C++ APIs.
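As a pseudocode-style sketch, a DRMAA-compliant client might submit and wait on a Condor job through the DRMAA 1.0 Python bindings roughly as follows. This assumes a working Condor installation with its DRMAA library configured, and the command and arguments are placeholders:

```
# Sketch: submitting a Condor job via the DRMAA 1.0 Python bindings.
# Requires a live Condor pool with DRMAA support; names are illustrative.
import drmaa

with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "/bin/sleep"   # placeholder executable
    jt.args = ["10"]
    job_id = session.runJob(jt)
    info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print("job", job_id, "exited with status", info.exitStatus)
    session.deleteJobTemplate(jt)
```

Because DRMAA is scheduler-neutral, the same client code can target other DRMAA-compliant batch systems without modification.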

Other Condor features include DAGMan, which provides a mechanism to describe job dependencies.
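DAGMan dependencies are declared in a plain-text DAG file that names each node's submit file and the parent/child edges between nodes; the file names below are placeholders:

```
# diamond.dag -- a diamond-shaped dependency graph:
# A runs first, B and C run in parallel after A, D runs last.
JOB  A  a.sub
JOB  B  b.sub
JOB  C  c.sub
JOB  D  d.sub
PARENT A CHILD B C
PARENT B C CHILD D
```

The DAG is submitted with `condor_submit_dag diamond.dag`; DAGMan then submits each node's job only once all of its parents have completed successfully.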

Condor is one of the job scheduler mechanisms supported by GRAM (Grid Resource Allocation Manager), a component of the Globus Toolkit.

Condor was the scheduler software used to distribute jobs for the first draft assembly of the Human Genome.

Whilst Condor makes good use of unused computing time, leaving computers turned on for use with Condor increases energy consumption and associated costs. The University of Liverpool[4] has demonstrated an effective solution to this problem using a mixture of Wake-on-LAN and the commercial power-management software PowerMAN.[5] Starting from version 7.1.1, Condor can hibernate and wake machines based on user-specified policies without the need for third-party software.[6]
