High-throughput computing
High-throughput computing (HTC) is a computer-science term describing the use of many computing resources over long periods of time to accomplish a computational task.
Challenges
The HTC community is also concerned with the robustness and reliability of jobs over long time scales: that is, with building a reliable system from unreliable components. This research is similar to transaction processing, but at a much larger and more distributed scale.
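As an illustrative sketch only (not any particular system's implementation), the following Python fragment shows the basic idea of obtaining reliability from unreliable parts by requeuing failed jobs until they succeed; run_job, the failure rate, and the retry limits are hypothetical stand-ins.

```python
import random
import time

def run_job(job_id):
    """Stand-in for dispatching a job to an execute node.
    Fails at random to model an unreliable component."""
    if random.random() < 0.3:          # 30% simulated node failure
        raise RuntimeError(f"node running job {job_id} failed")
    return f"result of job {job_id}"

def run_reliably(job_id, max_attempts=5, backoff=1.0):
    """Resubmit a failed job until it succeeds, so the system as a
    whole is reliable even though individual nodes are not."""
    for attempt in range(1, max_attempts + 1):
        try:
            return run_job(job_id)
        except RuntimeError:
            time.sleep(backoff * attempt)   # simple linear backoff
    raise RuntimeError(f"job {job_id} exhausted {max_attempts} attempts")

results = [run_reliably(j) for j in range(10)]
```

Real HTC systems apply the same requeue-on-failure logic across thousands of machines and over far longer time scales.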
Some HTC systems, such as Condor and PBS, can run tasks on opportunistic resources. Operating in this environment is difficult, however: the system must provide a reliable operating environment for the user's jobs while not compromising the integrity of the execute node, so that the owner always retains full control of their resources.
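A minimal sketch of opportunistic execution, assuming a hypothetical idleness probe based on the Unix load average (real systems such as Condor also watch keyboard and mouse activity and enforce owner-defined policies): the worker runs queued jobs only while the machine appears idle, and evicts and requeues a running job the moment the owner seems to return.

```python
import os
import queue
import subprocess
import time

job_queue = queue.Queue()
for i in range(5):
    job_queue.put(["./task", str(i)])   # hypothetical job commands

def machine_is_idle():
    # Crude stand-in: treat a low load average as "owner away".
    # A production system would use richer signals and policies.
    return os.getloadavg()[0] < 0.5

def opportunistic_worker():
    while not job_queue.empty():
        if not machine_is_idle():
            time.sleep(60)                 # owner active: stay out of the way
            continue
        cmd = job_queue.get()
        proc = subprocess.Popen(cmd)
        while proc.poll() is None:
            if not machine_is_idle():      # owner returned mid-job:
                proc.kill()                # evict the job and
                job_queue.put(cmd)         # requeue it for another node
                break
            time.sleep(1)

opportunistic_worker()
```

The immediate eviction is what preserves the owner's full control; the requeue is what keeps the user's job reliable despite the eviction.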
High-throughput computing vs. high-performance computing
There are many differences between high-throughput computing and high-performance computing (HPC). HPC tasks are characterized as needing large amounts of computing power for short periods of time, whereas HTC tasks also require large amounts of computing, but for much longer times (months and years, rather than hours and days).[1] HPC environments are often measured in terms of FLOPS. The HTC community, however, is not concerned with operations per second, but with operations per month or per year. The HTC field is therefore more interested in how many jobs can be completed over a long period of time than in how fast an individual job can complete.
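To make the distinction concrete, here is a back-of-the-envelope calculation in Python. The node speed, node count, and availability figure are invented for illustration and do not describe any real system; the point is that a sustained-over-a-year measure rewards many modest, mostly-available resources rather than one briefly fast one.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600      # 31,536,000

# Hypothetical numbers, for illustration only.
node_flops   = 1e10     # each commodity node is individually slow
node_count   = 10_000
availability = 0.7      # opportunistic nodes are often busy or offline

# Sustained floating-point operations per year:
flopy = node_flops * node_count * availability * SECONDS_PER_YEAR
print(f"{flopy:.2e} operations per year")   # ~2.21e+21
```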
As a general rule, HPC workloads are tightly coupled parallel jobs, and as such they must execute within a single site with low-latency interconnects. Conversely, HTC workloads are typically independent, sequential jobs that can be individually scheduled on many different computing resources across multiple administrative boundaries, which HTC systems achieve using various grid computing technologies and techniques. HTC is not limited to sequential jobs, however; parallel jobs can also be run.
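On a single machine, this pattern of independently schedulable jobs can be sketched with Python's standard library; the workload below is a toy stand-in for a real sequential job, and the process pool plays the role of the HTC system matching jobs to whichever resources are free.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def simulate(seed):
    """One independent, sequential job: it never communicates with
    other jobs, so it can run on any available resource."""
    total, x = 0, seed
    for _ in range(1_000_000):
        x = (1103515245 * x + 12345) % 2**31   # toy LCG workload
        total += x & 1
    return seed, total

if __name__ == "__main__":
    # Each future is scheduled independently, the single-machine
    # analogue of an HTC system dispatching jobs to idle nodes.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(simulate, s) for s in range(100)]
        for f in as_completed(futures):
            seed, total = f.result()
```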
See also
* Batch processing
* Condor High-Throughput Computing System
References
1. Beck, Alan. "High Throughput Computing: An Interview with Miron Livny". HPCWire, 1997-06-27. http://www.hpcwire.com/hpc-bin/artread.pl?direction=Current&articlenumber=11444
External links
* [http://www.cs.wisc.edu/condor/htc.html Condor's Definition of high throughput computing]