Automated Tissue Image Systems
Automated Tissue Image Systems (ATIS) are computer-controlled
automatic test equipment (ATE) systems classified as medical devices and used as pathology laboratory tools (tissue-based cancer diagnostics) to characterize a stained tissue sample mounted on a bar-coded glass slide. After careful tissue sample preparation onto the slide, characterization of the sample is based on detection of cellular traits (color, shape, and numeration) observed in high-powered microscopic visual images (so-called bright-field microscopy).
A typical ATIS application includes measuring aggregate cellular activity or expression in a breast cancer tumor (carcinoma) biopsy to determine the stage of disease and prescribe the appropriate course of treatment. The technologies utilized in the seven basic functions of Automated Tissue Image Systems are state-of-the-art and include: specimen preparation; image acquisition; image analysis; assessment reporting; data storage management; network communication; and system self-diagnostics.
Peripheral system equipment includes, but is not limited to, automated
staining, hybridizing, pressurizing, and heating of the tissue sample. The integrated system software is complex and, like the entire system, highly regulated by the U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health (CDRH), under Code of Federal Regulations (CFR) Title 21. As such, stringent compliance with validation and verification (V&V) methods with respect to safety, efficacy, and documentation is paramount.
Automated Tissue Image Systems (ATIS) redefine the typical Histology Laboratory (HL), much as ATE systems redefined manufacturing testing, by applying
Lean Manufacturing (LM) and Six Sigma (SS) tools to the Histology Laboratory production floor. Production in this context fundamentally means turn-around time and diagnosis quality. Although there are exceptions, typical HLs today do not utilize the modern industrial lean methods (concurrent engineering methods) that provide world-class manufacturing environments in other industries; they lack LM/SS methodology, automation, and connectivity. ATIS provide the automation component of a modern, safe, effective, and efficient Histology Laboratory system.
Breast cancer diagnosis
A large number of ATIS applications are for testing gene amplification or protein
over-expression to assess HER-2 (ErbB-2) status and determine eligibility for appropriate adjuvant breast cancer treatments. There is good reason why this particular application is of such interest: ATIS were conceived for the purpose of significantly reducing the variability involved in the manual and complex process of characterizing breast cancer tumors.
ATIS significantly reduce the procedural uncertainty involved in characterizing tumors, and hence in determining whether or not a specific type of treatment is appropriate, compared with the same procedure performed manually by various histotechnologists. Adjuvant treatments may include chemotherapy, hormone therapy, radiation therapy, pharmacotherapy (Trastuzumab, antibody therapy), or biological therapy.
The reduction in uncertainty is attributable to the inherent consistency of machine action. The word "significantly" above is not used in the
statistical sense, but in the sense that even a minuscule reduction of process variability can mean fewer cancer cases (so-called equivocal cases) in which a beneficial therapeutic treatment is bypassed. Test procedures include immunohistochemistry, in situ hybridization, and polymerase chain reaction, with test target receptors ErbB-2, NR3A1, NR3A2, NR3C3, Ki-67, p53, and ErbB-1.
High-performance CCD cameras are used for digital image restoration because of their superior quantitative imaging characteristics. Coupled with advanced widefield microscopes and numerous image restoration algorithms, this approach not only provides improved results over confocal techniques but can do so at comparable speeds and lower cost. And, unlike a dedicated confocal instrument, a high-performance CCD camera can be utilized for numerous other applications:
*Fluorescence recovery after photobleaching (FRAP).
*Förster resonance energy transfer (FRET) microscopy.
*Green Fluorescent Protein (GFP) imaging in living cells.
*Total Internal Reflection Fluorescence (TIRF) microscopy.
*Membrane research - quantify intensity of membranous immunohistochemistry stains.
*Nuclear research - quantify intensity of nuclear immunohistochemistry stains.
*Cytoplasmic research - analyze cellular cytoplasm staining.
*Single-Molecule Fluorescence (SMF) Imaging.
*Micro-Vessel Density research - analyze micro-vessels and density.
*Rare Event in Tissue research - detection of rare positive cells in stained tissue sections.
*Rare Event in Cytospin research - detection of rare positive cells in stained cytospin preparations.
*DNA Ploidy research - analysis of DNA ploidy status.
*Fluorescent speckle microscopy.
Technologies & Methods
Classified by the FDA as medical devices, ATIS fall into the general instrumentation category of
automatic test equipment and are subject to the same basic principles of design, development, validation/verification, documentation, and support as non-medical automatic test equipment (ATE) systems. These systems generally undergo computer-aided design and simulation, prototype development, validation and verification of design, various operation and specification documentation throughout the process, and manufacturing and customer support, not unlike a new product development process.
What differentiates ATIS from more traditional ATE systems, for example the Agilent (HP) 3070 with
multi-site test capability for testing electronics hardware and firmware, is the value placed in the unit under test (UUT): an inanimate object versus the urgency of an animate-related UUT. Of course, in the latter case, regulatory controls and validation/verification play a significantly stronger role in development.
ATIS have seven basic functions (sample preparation, image acquisition, image analysis, results reporting, data storage, network communication, and system self-diagnostics), and realization of these functions requires complex system integration of hardware and software from several state-of-the-art technologies in engineering disciplines that include chemical, computer science, electrical, electronics, hydraulics, mechanical, optical, pneumatics, and quality engineering.
The preparation of the quantity-limited tumor
specimen is absolutely critical to successful characterization and generally, for example in fluorescence in situ hybridization or FISH (the so-called gold standard for HER-2 status), encompasses two basic processes over a period of a few days.
The first process involves a complex multi-step procedure that includes cutting the breast cancer tumor or
biopsy to appropriate size (typically 4 mm), 24-hour fixation in buffered formalin, ethanol-xylene dehydration, paraffin embedding (heated), and mounting thin (typically 4 µm) slices onto at least two barcoded slides (control and test).
The second process involves staining, which includes preparation of
reagents, equilibration of reagents, removal of paraffin from slides, rehydration, pre-treatment, enzyme digestion, denaturation and hybridization, a wash series, and mounting. Any variation or inconsistency introduced in these procedures from process to process or case to case will result in unpredictable irregularities and uncertainties in the outcome of the test or analysis findings. It is precisely these inconsistencies in analysis results that motivated the development of Automated Tissue Image Systems.
It is estimated (based on Mod Pathol. 2005 Aug;18(8):1015-21) that 8% of HER-2 status findings have major scoring discrepancies (i.e., non-amplification versus amplification), meaning that 8% of cases are either deprived of proper treatment or treated with an inappropriate therapy, which could include chemotherapy, hormone therapy, radiation, Trastuzumab/Herceptin, or surgery.
Charge-coupled device (CCD)
Image acquisition involves digitizing photographic two-dimensional microscopic views of the stained specimen on a glass slide. The photographs are taken by a set of three
charge-coupled device (an array of photoelectric light sensors composed of a focal plane array and readout integrated circuit) color cameras integrated in a microscope field of view, then digitized into ultra-high-resolution terapixel (10^12 pixel) digital computer images. A color filter array (Bayer filter) arranges a red filter, a green filter, and a blue filter over a grid of sensors, each of which outputs an electric signal in response to the light impinging upon it. Because each sensor input is color-filtered, each sensor essentially measures the analog intensity of incident light at a particular frequency (red, green, or blue). The combination functions as a pseudo-pixel.
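As a minimal illustration of the pseudo-pixel idea, the three color-filtered intensities can be combined into one 24-bit value once each has been digitized to 8 bits. This is a sketch; the function name and value ranges are illustrative, not from any camera API.

```python
def pack_pixel(red, green, blue):
    """Combine three color-filtered sensor readings (each already digitized
    to 8 bits, 0-255) into a single 24-bit pseudo-pixel value."""
    for v in (red, green, blue):
        if not 0 <= v <= 255:
            raise ValueError("channel values must be 8-bit (0-255)")
    # Red occupies the top byte, green the middle byte, blue the bottom byte.
    return (red << 16) | (green << 8) | blue
```

For example, a pure-red pseudo-pixel packs to the value 0xFF0000.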
CCDs are ranked according to the number and type of defective pixels. The manufacturing yield of each CCD affects its cost: the more defects, the cheaper the CCD. The CCD is obviously a large cost in the overall system; hence, an imaging system is largely ranked on its CCD quality. There are several parameters for ranking CCDs, including: point defects; cluster defects; row/column defects; charge traps; neighbor defects; and hot defects.
There are frame-transfer and full-frame CCDs. A frame-transfer CCD has its parallel register divided into two areas: image array (where images are focused) and storage array (where the integrated image is temporarily stored before readout). The full-frame CCD design employs a single parallel register for photon exposure, charge integration, and charge transport. A shutter is used to control the exposure and block light from striking the CCD during readout.
All these sensors require calibration so that each measurement is made with respect to the same reference source. CCD calibration techniques can be single-point or multi-point. Single-point calibration is simple but cannot correct for non-linear effects across the spectrum. Multi-point calibration is more complex, using B-spline fitting techniques. The analog signal or intensity measurement from the sensor undergoes signal conditioning and high-frequency sampling for conversion from an analog scale to a digital scale (binary digits for computer use) and eventual image file formation (in bitmap or vector-based format) for computer processing and storage. This is the digitizing process.
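The simple (linear) case can be sketched as follows: a dark reading and one known reference reading map each raw ADC count onto a common scale. The readings and the function name are hypothetical.

```python
def calibrate(raw, dark, ref, ref_value=1.0):
    """Linear calibration: map a raw ADC reading onto a common scale using
    a dark reading (no light) and a known reference reading. This cannot
    correct nonlinearity across the spectrum; that requires multi-point
    correction such as B-spline fitting."""
    return (raw - dark) / (ref - dark) * ref_value
```

With a dark level of 100 counts and a reference of 1000 counts, a raw reading of 550 calibrates to 0.5 of the reference value.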
Modern high-performance imaging applications require
FPGA-based coprocessor systems to supplement traditional DSP (digital signal processor) performance. For example, a 7x7 two-dimensional preprocessing filter kernel applied to broadcast HDTV 1080p video at 1920 by 1080 resolution, 30 frames per second, and 24 bits per pixel requires more than 9 giga-MACs (multiply-accumulate operations) per second. This is more than the fastest commercial DSP can provide.
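The 9 giga-MAC figure follows directly from the video parameters quoted above, assuming one multiply-accumulate per filter tap per output sample and that all three 8-bit channels are filtered:

```python
width, height = 1920, 1080   # 1080p resolution
fps = 30                     # frames per second
kernel_taps = 7 * 7          # one MAC per tap of the 7x7 filter
channels = 3                 # 24 bits per pixel = three 8-bit channels

macs_per_second = width * height * fps * kernel_taps * channels
print(f"{macs_per_second / 1e9:.2f} giga-MACs per second")  # 9.14
```

The product works out to roughly 9.14 billion multiply-accumulates per second, consistent with the "more than 9 giga-MACs" claim.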
For video compression systems, FPGA coprocessing architectures are better solutions than platforms based on multiple DSPs. High-definition broadcast-quality encoding with video
codecs MPEG-2, MPEG-4, and H.264 is better implemented with a single FPGA-and-DSP solution.
The FPGA can implement the algorithms that require the most cycles on the DSP, including the
motion estimation block, entropy coding, and the de-blocking filter. The DSP can focus on algorithms that are more control-oriented and better mapped to a C-code implementation. Newer entropy coding techniques, such as Context-Adaptive Variable Length Coding (CAVLC) and the Context-Adaptive Binary Arithmetic Coder (CABAC), are best realized as hardware-accelerated blocks on the FPGA.
At 24 bits per pixel, there are 2^24 (over 16 million) different combination mixtures of red, green, and blue attributes, leading to very large file sizes unsuitable for transfer over a network but very suitable for discriminating minute color differences in the specimen image. It is this discrimination or resolution capacity, unmatched by humans, that makes digital image analysis a powerful pathology tool in the quest toward greater consistency. The digitizing process allows quantification of a finite number of well-defined measurements, as opposed to analog-based vision with infinite degrees of subjectively defined measurements among histotechnologists.
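The color count and the resulting raw file sizes are simple arithmetic. The sizes below are for uncompressed data; the terapixel figure assumes the 10^12-pixel scans mentioned in the Acquisition discussion above.

```python
colors = 2 ** 24                         # distinct 24-bit RGB mixtures
bytes_per_pixel = 3                      # 24 bits = 3 bytes
hd_frame_bytes = 1920 * 1080 * bytes_per_pixel
terapixel_bytes = 10 ** 12 * bytes_per_pixel

print(colors)                            # 16777216 color mixtures
print(hd_frame_bytes / 1e6)              # ~6.2 MB for one uncompressed HD frame
print(terapixel_bytes / 1e12)            # 3.0 TB for one uncompressed terapixel image
```

A single uncompressed terapixel slide image at 24 bits per pixel would occupy 3 terabytes, which makes clear why these files are unsuitable for casual network transfer.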
Prior to digitizing, a binning process may be performed whereby charge from adjacent pixels in the CCD is combined during readout into a single superpixel. This is performed in the embedded circuitry of the CCD by serial and parallel control registers. The benefits of
data binning are an improved signal-to-noise ratio (SNR) and an increased frame rate. Binning neighboring pixels on the CCD array may allow reaching a photon-limited signal more quickly, but at the expense of spatial resolution.
Higher SNR can be obtained due to reduced read noise. CCD read noise is added during each readout event, and in normal operation read noise is added to each pixel. In binning mode, however, read noise is added once to each superpixel, which carries the combined signal from multiple pixels. In the ideal case, this produces an SNR improvement equal to the number of pixels binned.
Unlike read noise, dark current noise is not reduced by binning since each pixel will contribute dark current noise to the superpixel. To ensure that dark current noise does not lower SNR during binning, it is essential that the CCD be cooled sufficiently to reduce the dark current noise to a negligible level relative to the read noise.
Binning also increases frame rate. Since the slowest step in the readout sequence is the digitization of a given pixel, binning can be used to increase the effective total frame rate of a given system. Thus, highly binned, low-resolution images can be obtained when high speed is required and full-frame, high-resolution images can be obtained when the ultimate resolution is required.
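A software model of on-chip binning can illustrate the superpixel arithmetic (the real operation happens in the CCD's control registers before digitization; the list-of-lists frame here is only a sketch):

```python
def bin_frame(frame, factor):
    """Combine charge from factor x factor neighborhoods into superpixels.
    `frame` is a list of rows of pixel charges whose dimensions divide
    evenly by `factor`. Each superpixel holds the summed charge, so signal
    grows with the number of binned pixels while read noise is added only
    once per superpixel."""
    h, w = len(frame), len(frame[0])
    return [
        [sum(frame[r + dr][c + dc]
             for dr in range(factor) for dc in range(factor))
         for c in range(0, w, factor)]
        for r in range(0, h, factor)]
```

2x2 binning of a uniform 4x4 frame of unit charges yields a 2x2 frame of superpixels, each holding 4 units of charge, and only a quarter as many pixels need digitizing, which is where the frame-rate gain comes from.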
Signal-to-noise ratio(SNR) describes the quality of a measurement. In CCD imaging, SNR refers to the relative magnitude of the signal compared to the uncertainty in that signal on a per-pixel basis. It is the ratio of the measured signal to the overall measured noise (frame-to-frame) at that pixel. High SNR is particularly important in applications requiring precise light measurement.
Photons incident on the CCD convert to photoelectrons within the silicon layer. These photoelectrons comprise the signal but also carry a statistical fluctuation arising from variation in the photon arrival rate at a given point. This phenomenon is known as photon noise and follows
Poisson statistics. Additionally, inherent CCD noise sources create electrons that are indistinguishable from the photoelectrons. When calculating overall SNR, all noise sources need to be taken into consideration, including photon noise, read noise, and dark noise.
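These noise sources combine in quadrature, giving the textbook per-pixel SNR expression sketched below. The parameter names and the example numbers are illustrative, not taken from a specific camera datasheet.

```python
import math

def ccd_snr(photon_flux, quantum_efficiency, exposure_s, dark_current, read_noise):
    """Per-pixel CCD SNR: collected signal over the root-sum-of-squares of
    photon (shot) noise, dark-current noise, and read noise. Fluxes are in
    photons or electrons per second; read_noise is in electrons RMS."""
    signal = photon_flux * quantum_efficiency * exposure_s  # photoelectrons
    shot_variance = signal            # Poisson statistics: variance = mean
    dark_variance = dark_current * exposure_s
    read_variance = read_noise ** 2   # added once per readout
    return signal / math.sqrt(shot_variance + dark_variance + read_variance)
```

For instance, 1000 photons/s at 60% quantum efficiency over a 1 s exposure, with 5 e-/s dark current and 10 e- read noise, gives an SNR of about 22.6.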
Image analysis involves complex computer algorithms which identify and characterize cellular color, shape, and quantity in the tissue sample using image pattern recognition based on
vector quantization. Vector representations of objects in the image, as opposed to bitmap representations, have superior zoom-in ability. Once the sample image has been acquired and is resident in the computer's random access memory as a large array of 0s and 1s, a programmer knowledgeable in cellular architecture can develop deterministic algorithms, applied to the entire memory space, to detect cell patterns from previously defined cellular structures and formations known to be significant.
The aggregate algorithm outcome is a set of measurements far superior to any human sensitivity to intensity or
luminance and color hue, while at the same time improving test consistency from observer to observer.
For example, the process of signal enumeration in FISH (as described above in Preparation) is a scoring scheme that entails counting red and green signals (colored dots inside well-bounded, non-necrotic nuclei) representing, respectively, probe hybridized to normal and over-amplified ErbB-2 and the control probe hybridized to the centromeric region of
Chromosome 17. Signal enumeration (counting) is a subjective process among histotechnologists, who may debate what constitutes an ambiguous nuclear border. No such ambiguity exists in an algorithm: whether clinically right or wrong, the algorithm will follow a deterministic set of rules to yield consistent outcomes, in this case counting the number of red and green pseudo-pixels within the border.
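A deterministic counting rule of this kind can be sketched on a toy pixel grid. The label scheme, the mask, and the 2.0 red/green ratio cutoff (a commonly cited FISH amplification threshold) are illustrative only, not a validated scoring algorithm.

```python
def score_nucleus(labels, mask):
    """Count red ('R') and green ('G') signal pixels inside a nucleus mask
    and classify by the red/green ratio. Every run over the same data yields
    the same counts - the consistency argument made above."""
    red = green = 0
    for r, row in enumerate(labels):
        for c, px in enumerate(row):
            if mask[r][c]:
                if px == "R":
                    red += 1
                elif px == "G":
                    green += 1
    ratio = red / green if green else float("inf")
    status = "amplified" if ratio >= 2.0 else "not amplified"
    return red, green, status
```

Given the same grid and mask, the function always returns the same counts and classification, unlike a human reader weighing an ambiguous border.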
High-performance CCD cameras are used for fluorescence in situ hybridization (FISH) applications such as whole-chromosome painting, locus-specific analysis, gene mapping, and comparative genomic hybridization (CGH). CCD cameras are used because of their ability to see beyond the capabilities of the human eye. Visualization of small cDNA clones, CGH experiments, and combinatorially labeled probes all require low-light sensitivity. Cameras are optically coupled to the microscope such that the pixel-to-pixel resolution matches that of the microscope optics. Gene mapping is limited by the theoretical resolution limit, not by the imaging device.
Communicating test results involves presenting data (in text and graphic forms) to the system user in a format that is not only friendly to oncologists or other qualified system users for verification and further analysis, but is also as realistic as the actual specimen, introducing no additional artifacts. Specimen artifacts already exist, including edge, retraction, thermal, crush, and decal artifacts. The media for data presentation are largely high-quality computer monitors but can also include printers.
An appropriate type of high-quality display for ultra-high-resolution image acquisition is the High Definition (HD) monitor or television commonly used in operating rooms, where a 3-CCD telescopic camera head on an endoscope displays surgical areas of interest inside the body. These high-definition monitors generally use progressive (non-interlaced) scanning at refresh rates between 24 and 60 Hz.
A typical wide-screen HD device has an aspect ratio of 16:9, which has been found to offer a more natural, panoramic view; since the human horizontal field of view is wider than the vertical field, it causes less surgeon fatigue during long procedures. A hi-def device can also provide improved depth perception and improved recognition of landmarks or rare-event sightings that may go undetected by computer algorithms. A 1080p (progressive) HD signal requires no interlacing conversion for a 1920x1080 HD monitor: the vertical lines are all painted sequentially in one frame, which minimizes possible noise between adjacent lines during the second scan of an interlaced process for fast-moving images.
Furthermore, as hospitals and laboratories plan to adopt HD technology for their endoscopic surgery programs, compatibility with both existing and future technologies will be a critical factor in assessing the overall cost of ownership. Planning must include whether an HD system under consideration will be compatible with existing components, or will require the purchase of entirely new HD components. It must also include whether the system will readily accommodate future generations of HD technologies and components as they are developed. This includes large-format touchtable display technology.
Computer printers, as relatively low-resolution image devices, are used mostly to present final test reports (pathology reports) that could include text and graphics to match department, laboratory, or hospital formats. A typical pathology report may include: laboratory name, address, and logo; patient demographics; a graphical display of the antibody markers (including the control marker); a graphical display of bar charts representing numerical results; a tabular representation of
assay results with reference ranges; a comment section; and a system user signature.
An additional reporting feature of modern Automated Tissue Image Systems is the ability to provide real-time
Statistical Process Control (SPC) metrics of the applicable Histology Laboratory processes undertaken by the automated system. For example, ATIS reporting includes real-time laboratory workflow control charts which monitor the ongoing procedure against the metrics of all previous procedures and alert when the current procedure is "out-of-control" with respect to process variability or any other desired metric. SPC, a Lean Manufacturing and Six Sigma tool, is but one of the tools available in sophisticated automated systems in a manufacturing or production environment.
Storage of the acquired data (graphical digital slide files and text data) involves saving system information in a
data storage device system with a well-defined schema and hierarchy for reference traceability, fast retrieval, and overall management. A fibre channel storage area network (SAN) is an appropriate data storage device system.
When ATIS is implemented in a laboratory with an existing database, or
Laboratory Information System (LIS), the two storage systems may not be compatible in terms of content or format and will require integration if the ATIS does not comply with industry standards. There are several ways of integrating ATIS with an LIS or LIMS, depending on the size and activities of the laboratory sites.
Medical imaging industry standards include Picture Archiving and Communication Systems (
PACS): image and information management solutions in computer networks that enable hospitals and clinics to acquire, distribute, and archive medical images and diagnostic reports across the enterprise. Another standard, of European origin, is the Data and Picture Archiving and Communication System (DPACS). Although medical images can be stored in various formats, a common format has been Digital Imaging and Communications in Medicine (DICOM).
DICOM is a standard developed by the National Electrical Manufacturers Association (
NEMA) for handling, storing, printing, and transmitting medical images. It includes file format and network protocol definitions. The file format is relatively peculiar and has met implementation resistance in North America. The communication protocol is an application protocol that uses TCP/IP. DICOM files can be exchanged between two entities that are capable of receiving image and patient data in DICOM format. Software languages for writing applications have been a mixture of Java and ANSI C/C++; applications include DICOM-JPEG viewers. DICOM offers solutions for many network and off-line communication applications. However, DICOM is no guarantee of plug-and-play integration of all information systems in a hospital. Such a scenario requires a careful combination of all the partial solutions offered by DICOM.
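One concrete detail of the DICOM file format: a Part 10 file starts with a 128-byte preamble followed by the four magic bytes "DICM", which makes a quick format sniff easy to write. This is a sketch, not a full DICOM parser.

```python
def looks_like_dicom(data: bytes) -> bool:
    """DICOM Part 10 files begin with a 128-byte preamble (often all zeros)
    followed by the 4-byte magic marker b'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"
```

Note that a file beginning directly with "DICM" (no preamble) does not conform to Part 10 and is rejected by this check.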
Open architecture using industry-standard
Web Services (Web Application Programming Interface, API) to communicate between clients and servers via Extensible Markup Language (XML) messaging under the Simple Object Access Protocol (SOAP) standard provides the best flexibility. For example, a laboratory already using barcodes for slide identification could benefit from established industry API standards for barcodes and simple implementation by having internal IT personnel set up appropriate internal and ATIS servers (SQL-based, for interacting with a database table manager such as MS SQL Server). SOAP would be used to exchange XML messages over the network using HTTP/HTTPS. ATIS could take requests from users via TCP/IP sockets to interact privately with the MS SQL Server. Another common example providing a standard integration solution is a laboratory using the international industry standard Health Level Seven (HL7), an ASCII-based protocol for communication among participants of the health management community.
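A SOAP request of the kind described can be assembled with nothing more than an XML library. The service namespace and element names below are hypothetical, not a published ATIS API; only the SOAP 1.1 envelope namespace is standard.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope
ATIS_NS = "urn:example:atis"                           # hypothetical service namespace

def slide_lookup_request(barcode: str) -> str:
    """Build a minimal SOAP 1.1 envelope asking an ATIS server to look up
    a slide by its barcode."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    request = ET.SubElement(body, f"{{{ATIS_NS}}}GetSlide")
    ET.SubElement(request, f"{{{ATIS_NS}}}Barcode").text = barcode
    return ET.tostring(envelope, encoding="unicode")
```

The resulting XML string would be posted to the ATIS server over HTTP/HTTPS, as described above.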
Sharing saved system information among the medical community involves linking various computers located throughout a facility on a local area network using a fibre channel
storage area network (SAN) with fibre channel switches and host bus adaptors (HBA), or linking computers elsewhere throughout the world using an encrypted tunneling technique called a Virtual Private Network (VPN) over a public network (the Internet) with integrated VoIP; both methods facilitate real-time group analysis. This is state-of-the-art connectivity, not unlike telepresence. In fact, modern imaging applications, such as those being undertaken by General Electric Healthcare in conjunction with the University of Pittsburgh Medical Center, are currently being based on technologies more advanced than VPN, such as Microsoft's Office Communications Server (OCS), to replace legacy databases such as the Picture Archiving and Communications System (PACS). These new technologies, OCS 2007 and Exchange Server 2007, provide a unified communications solution with VoIP, web conferencing, videoconferencing, e-mail, voice mail, instant messaging, and presence.
A VPN is a private network that uses a public network (usually the Internet) to connect remote sites or users. Instead of using costly dedicated connections such as leased lines, a VPN uses "virtual" connections routed through the Internet from the laboratory's private network to the pathologist's remote site. The cost-effective solution provided by Internet-based VPN has allowed low-budget hospitals to comply with state regulations. A VPN solution involves a
firewall, router, proxy server, VPN software, or all of these. Authentication and encryption are critical components of a VPN; its security is a function of how tightly authentication, encryption, and access controls are connected. There are a variety of security schemes used by Virtual Private Networks, including but not limited to IPsec (Internet Protocol Security), SSL/TLS (Secure Sockets Layer/Transport Layer Security), and OpenVPN (open source).
As mentioned in the Acquisition section above, the bitmap format is not suitable for file transfer over a network, nor is it suitable for high-power zoom-in. The latter is the principal reason why vector-based formats are preferred over bitmaps: the ability to zoom in without loss of image quality. The standard format used for image files captured by CCD cameras for scientific image processing is the
Tagged Image File Format (TIFF), or BigTIFF for files greater than 4 GB, and it is a more suitable format than BMP (used by the MS Windows operating system). A popular computer data transfer standard is IEEE 1394, known as FireWire. FireWire has advantages over most differential serial buses for 16-bits/pixel transfers, including the ability for direct memory access (DMA). Structured Query Language (SQL) is used for local communications between archive servers and dedicated reporting workstations, as it is faster than DICOM. Likewise, HTML with Java support should be employed whenever possible between general-purpose clients and system servers.
Fibre Channel HBAs are available for all major open systems, computer architectures, and buses, including
peripheral component interconnect (PCI). Each HBA has a unique World Wide Name (WWN), which is similar to an Ethernet MAC address in that it uses an OUI assigned by the IEEE. There are two types of WWNs on an HBA: a node WWN, which is shared by all ports on a host bus adaptor, and a port WWN, which is unique to each port. HBA speeds vary from 2 to 8 gigabits per second.
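For an NAA type-5 WWN (a 4-bit NAA field, a 24-bit IEEE OUI, and a 36-bit vendor-assigned sequence), the OUI can be pulled out with a few bit shifts. The sample WWN below is made up for illustration and is not claimed to belong to any particular vendor.

```python
def wwn_oui(wwn: str) -> str:
    """Extract the IEEE OUI from an NAA type-5 World Wide Name.
    Layout: 4-bit NAA (0x5) | 24-bit OUI | 36-bit vendor sequence."""
    value = int(wwn.replace(":", ""), 16)
    if value >> 60 != 0x5:   # top nibble is the NAA field
        raise ValueError("not an NAA type-5 WWN")
    oui = (value >> 36) & 0xFFFFFF
    return ":".join(f"{(oui >> s) & 0xFF:02X}" for s in (16, 8, 0))
```

Other NAA types (1, 2, 6) lay the fields out differently, so a general parser would dispatch on the NAA nibble first.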
Built-In Self Test (BIST) is a design-for-test concept implemented in complex, sophisticated systems to provide real-time health checks of components and general operation, including power-up self-testing, continuous background monitoring, and post-processing troubleshooting. System diagnostics includes, but is not limited to, testing electromechanical components such as linear actuators, mixers, pressure/vacuum status, heaters, and waste overflow. An intelligence library database structured as a failure reporting, analysis, and corrective action system (
FRACAS), together with a failure mode and effects analysis (FMEA) catalog, provides guidance to the system operator for root cause analysis, troubleshooting, and repair upon BIST failures.
The idea is to provide the system operator with on-the-spot guidance for problems like: the interface between instrument software and the laboratory's information system; software upgrades that change the performance of the assay (e.g., assay cut-off); or a virus/worm/spyware that infects the device operating software.
Software System Control
Integrated system software for Automated Tissue Image Systems has to reflect the procedural workflow in Histology Laboratories; hence, concurrent engineering integration is highly complex. For example, the following are system modules (which are also stand-alone system applications) found in a typical ATIS with peripheral options:
*ATIS System Software
*Built-In-Self Diagnosis and Support Software
*iShare Connectivity Professional Software
*Rare Event in Tissue Application Software
*Rare Event in Cytospin Application Software
*Nuclear Application Software
*Micro-Vessel Density Application Software
*Membrane Application Software
*IOD Ploidy Application Software
*DNA Ploidy Application Software
*Cytoplasmic Application Software
*PR Application Software
*ER Application Software
*HER2Test Application Software
*Tissue Micro-Array Application Software
*Pre-Treatment Link, Module for Tissue Specimens
*Autostainer Plus Link
*Autostainer Link 48
This degree of integration complexity requires rigorous validation/verification compliance with one or more regulatory agencies. As mentioned in the section on Technologies & Methods, ATIS as ATE generally undergo computer-aided design and simulation, prototype development, validation and verification of design, various operation and specification documentation throughout the process, and manufacturing and customer support. However, because of the critical importance regulatory agencies place on system software, and the inherent integration complexity of ATIS, software design V&V is discussed in the Safety and Efficacy section below. Finally, the software development platform for ATIS design can make a competitive difference if it is chosen to be flexible enough to accommodate new technologies. Clearly, the choice between traditional Windows-based applications and web-based (client/server) applications is now straightforward. Further, the .NET platform, with its wide flexibility for technology integration, especially now in communications, is very compelling for state-of-the-art ATIS development.
Many traditional ATIS applications have been and continue to be written in Java. However, with emerging technologies in multimedia Internet communications constantly evolving, there may be competitive advantages to using C# and the .NET platform.
Java & C# Differences
Although there are similarities between these two languages, many significant differences exist.
Java has been around longer than C#. Java is compiled to bytecode, as opposed to C++, which compiles to native object code; C# similarly compiles to an intermediate language executed by the .NET runtime. Bytecode was designed to run on any processor (Intel, AMD, Motorola) thanks to the Java Virtual Machine (
JVM), which is somewhat analogous to the .NET Framework. The feature of running on any processor was useful for web-based cross-platform applications such as Java Applets, but this technology has become relatively slow. Java, however, continues to be used for server-side development. On the other hand, C#/.NET is used extensively on Windows-based PCs.
Real time operating systems
An operating system for embedded applications (DSP/
FPGA implementations) is defined as integrated firmware that runs a computer, controlling and scheduling the execution of subroutines or programs, and managing storage, input/output, and communications. There are many standard operating systems, such as Windows, Mac OS, Unix, and Linux, which can also be embedded. The definition of real-time is not black and white. A system is said to be a real-time system if its correct operation depends on both the logical correctness of its operation and the time in which it is performed.
A true (hard) real-time system is one in which a late result leads to critical or catastrophic failure, such as physical damage or loss of life; examples include pacemakers and car braking systems. A pseudo (soft) real-time system tolerates lateness with decreased service quality but no critical consequences; examples include DVD players and mobile telephones.
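The distinction above can be sketched in Java, the language the article itself cites for ATIS development. The `DeadlineCheck` class, its enum, and its outcome strings are purely illustrative assumptions, not part of any real RTOS API:

```java
// Illustrative sketch: a result is fully correct only when it is logically
// correct AND delivered on time; the cost of a late result depends on whether
// the system is hard (true) or soft (pseudo) real-time.
public class DeadlineCheck {
    public enum SystemType { HARD_REAL_TIME, SOFT_REAL_TIME }

    public static String assess(SystemType type, boolean logicallyCorrect,
                                long completionMs, long deadlineMs) {
        if (!logicallyCorrect) return "FAILURE";       // wrong answer is always a failure
        if (completionMs <= deadlineMs) return "OK";   // correct and on time
        return (type == SystemType.HARD_REAL_TIME)
                ? "CRITICAL_FAILURE"   // e.g. pacemaker, car braking system
                : "DEGRADED_SERVICE";  // e.g. DVD player, mobile telephone
    }
}
```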
A real-time operating system (RTOS) is a multitasking operating system intended for real-time applications. An RTOS does not by itself guarantee real-time results; this depends on correct development of the application software. Nor does an RTOS necessarily have high throughput: predictability is preferred over raw speed. Key performance factors are minimal interrupt latency and minimal task-switching time.
Why are RTOS beginning to be used in imaging systems? Computer speeds and non-volatile memory capacities have been increasing and will continue to increase. A combination of factors has led to larger, more complex imaging systems, including more sophisticated detection, increasingly connected devices, and faster processors with larger memories.
The multitasking and inter-task communication features of an RTOS allow a complex application to be partitioned into a set of smaller, more manageable tasks. Such partitioning can make software testing, work breakdown within teams, and code re-use easier. Complex timing, sequencing, and memory management details can be removed from the application code and become the responsibility of the operating system.
Key components of an RTOS include memory allocation/protection, inter-task communication (messaging), and task scheduling (multitasking). An RTOS architecture can support applications written in C++, ANSI C, and Ada.
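Inter-task messaging of the kind listed above can be sketched with Java's standard `BlockingQueue`. The task roles (an acquisition task posting to an analysis task) and the message text are hypothetical:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of RTOS-style message passing between two tasks: a producer task
// posts to a bounded mailbox while the consumer blocks until a message arrives.
public class MessagePassing {
    public static String exchangeOne(String payload) {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(4);
        Thread producer = new Thread(() -> {
            try {
                mailbox.put(payload);      // blocks if the mailbox is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        try {
            String msg = mailbox.take();   // blocks until a message is posted
            producer.join();
            return msg;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

The bounded queue plays the role of an RTOS message queue: it decouples the two tasks and moves the synchronization details out of the application logic.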
An RTOS requires a Board Support Package (BSP) to run on a given board. The BSP contains implementation-specific support code for that board (address map, access to I/O devices, etc.) and initializes the hardware during the boot process.
Multitasking - a conventional single-core microprocessor can execute only one task at a time. By rapidly switching between tasks, an RTOS can make it appear as if each task is executing concurrently.
Task scheduling - the RTOS scheduler is responsible for deciding which task should be executing at any particular time. The scheduler can suspend and later resume a task many times. The scheduling of tasks can be a complex and important part of the software development.
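A common RTOS scheduling policy, priority-based dispatch, can be sketched as a ready queue ordered by priority. The task names and priority values below are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of a priority-based ready queue: the scheduler always dispatches
// the highest-priority ready task first (larger number = higher priority).
public class Scheduler {
    public static class Task {
        public final String name;
        public final int priority;
        public Task(String name, int priority) { this.name = name; this.priority = priority; }
    }

    public static List<String> dispatchOrder(List<Task> ready) {
        PriorityQueue<Task> queue =
                new PriorityQueue<>((a, b) -> Integer.compare(b.priority, a.priority));
        queue.addAll(ready);
        List<String> order = new ArrayList<>();
        while (!queue.isEmpty()) order.add(queue.poll().name);  // highest priority first
        return order;
    }
}
```

Here a hypothetical motor-control task at priority 9 would always be dispatched before image capture at 5 or logging at 1. A real RTOS scheduler adds preemption: a newly readied higher-priority task suspends the running one, which is later resumed.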
Industry standards that affect RTOS do so by: defining a standard interface (API) between medical application software and the RTOS; allowing application software to be developed concurrently with, and independently of, the RTOS; and allowing for memory and time partitioning.
What does all this mean to software developers for imaging systems? Hardware upgrades will reduce the number of microprocessors in the box; the processors will be much faster and contain much more memory. Future imaging system upgrades will include an RTOS. Large routines will be broken into smaller ones, and multiple applications will share the same computing resources. New software architectures will lead to smaller routines, less regression testing, more code re-use, and shorter release cycles.
Quality & Regulatory
Areas of concern in the regulation of Automated Tissue Image Systems include, but are not limited to: safety; efficacy; and documentation.
Safety and Efficacy
A search for software-related Adverse Event Reports in the FDA MAUDE database quickly reveals over eight thousand software-related medical device problems, with patient outcomes ranging from none to disability to death. Although with an Automated Tissue Image System the patient never comes in direct contact with the instrumentation, an improper diagnosis of the patient’s tumor resulting from faulty hardware, software, or both can either deprive the patient of appropriate cancer therapy or induce unnecessary, harmful treatment; in either case the result can be death.
Some of the recognized standards bodies in the industry include: DHHS – Department of Health and Human Services; ANSI – American National Standards Institute; NCVHS – National Committee on Vital and Health Statistics (United States); CHI – Consolidated Health Informatics Initiative (United States); DICOM – Digital Imaging and Communications in Medicine; HL7 – Health Level 7; X12 – Accredited Standards Committee; IOM – Institute of Medicine (United States); and CISB – United Kingdom Clinical Information Standards Board.
An ongoing industry problem has been worldwide standardization of Histology Laboratory procedures, specifically process inconsistencies introduced by histotechnologists in sample preparation from case to case and lab to lab. The College of American Pathologists (CAP) can only impose standard operating procedures within each lab, not between them. A CAP/ISO working advocacy group is needed to bring about enforceable worldwide standards. Standardization is an ever-growing necessity for efficacy improvements and progress in general.
FDA 21 CFR Part 820 Quality System Regulation; FDA 21 CFR Part 864; ISO 13485:2003 Medical Devices - Quality Management Systems; ISO 9001:2000 Quality Management Systems; SOR/98-282 Canadian Medical Device Regulations; and European Union 98/79/EC (IVD Directive) all have direct influence on Automated Tissue Image Systems and their peripherals. For network connectivity: ISO/IEC 80001, joint IEC TC 62A, and ISO TC 215.
Specific to software and firmware V&V are FDA 21 CFR 820.30 Design Controls; the May 11, 2005 Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices; and the January 11, 2002 General Principles of Software Validation. Others include IEC-61508; IEC-60601; IEC-62304; AAMI SW68:2001 Medical device software - Software life cycle processes; and ISO-14971 Medical devices - Risk management - Part 1: Application of risk analysis.
Design controls (21 CFR 820.30) are a general requirement for controlling the hardware and/or software design process. The regulation requires a design & development plan; inputs (specs); outputs (features); design review meetings; verification; validation; transfer to manufacturing; design changes; and documentation (the design history file).
Software V&V relates to efficacy. In a nutshell, software validation answers and documents the question “Is it the right software?”, while software verification answers and documents the question “Is the software right?”. Software is evaluated and reviewed against the software specifications during the ongoing development of the device design. When a final prototype is available, the software and hardware are validated to make certain that manufacturer specifications for the device and process are met. Before testing the software in actual use, the detailed code should be visually reviewed against flow charts and specifications. All cases, especially decision points and error/limit handling, should be reviewed and the results documented.
In all cases, algorithms should be checked for accuracy. Recalls have occurred because algorithms were incorrectly copied from a source and, in other cases, because the source algorithm itself was incorrect. During the development phase, complex algorithms may need to be checked using a test subroutine written in a high-order language if the operational program is written in a low-level language. The validation program is planned and executed such that all relevant elements of the software and hardware are exercised and evaluated. Testing of software usually involves the use of an emulator and should include testing of the software in the finished device.
Testing includes normal operation of the complete device; this phase of the validation program may be completed first to make certain that the device meets the fundamental performance, safety, and labeling specifications. The combined system of hardware and software should then be challenged with abnormal inputs and conditions. These include: operator errors; induced failure of sensors, cables, or other interconnects; induced failure of output equipment; exposure to static electricity; power loss and restart; simultaneous inputs or interrupts; and deliberate application of no, low, high, positive, negative, and extremely high input values. Design validation shall include software validation and risk analysis (risk analysis is not covered in 21 CFR 820.30, but ISO-14971 is a good reference).
General heuristic software V&V testing might include one, a combination, or all of the following test techniques: functional testing; specification-based testing; domain testing; risk-based testing; scenario testing; regression testing; stress testing; all-pairs testing; combination testing; user testing; state-model-based testing; and high-volume automated testing. Risk-based testing is helpful in meeting the FDA risk analysis requirement, while specification-based testing can assist in compliance with 21 CFR 820.30(f).
A test case exercises one particular situation or condition of the system being tested; it describes what to test and how. IEEE Std 610 defines a test case as a set of test inputs, execution conditions, and expected results developed for a particular program path or to verify compliance with a specific requirement. The test plan and test cases should be developed based on a risk assessment, performed early in the validation process to determine the degree of validation necessary given the identified risks.
Each test case in the plan includes the input, expected output, actual output, acceptance criteria, whether the test passed or failed, the name or initials of the person performing the test, and the date the test was performed. Test cases also include normal results (results within the “normal” range), abnormal results (unacceptable results or those outside the “normal” range), and boundary results or values. Boundary (domain) testing is done at the spec boundary: at the limit, just below the limit, and just over the limit.
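Boundary (domain) testing as described above can be sketched in Java against an assumed specification limit. The measured quantity and the 0–100 range are hypothetical, not taken from any real ATIS specification:

```java
// Sketch: an acceptance check for a hypothetical measurement whose "normal"
// range is 0..100 inclusive. Boundary testing exercises values at the limit,
// just below it, and just over it, plus the lower boundary.
public class BoundaryDemo {
    public static final int LIMIT = 100;

    public static boolean withinSpec(int value) {
        return value >= 0 && value <= LIMIT;
    }
}
```

Each boundary probe (at the limit, just below, just over) becomes one documented test case with its input, expected output, and pass/fail result.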
Validation records are retained, including documented evidence of all test cases, test input data, and test results. Test results include screen shots. For traceability, and to facilitate quality assurance review and follow-up, supporting documentation such as screen shots is identified so as to link it to the specific test case. Retained test cases that previously passed can be reused later for regression testing.
In 1997 the FDA issued 21 CFR Part 11, which applies to all electronic records that are created, modified, maintained, archived, retrieved, or transmitted in companies or departments working under FDA regulation. Medical devices must comply with these regulations; this applies specifically to system software capable of managing digital images, electronic data, and documents, as found in typical Automated Tissue Image Systems.
Hence, for regulatory purposes, ATIS software, which includes integrated image acquisition and analysis as well as client/server-based data management, will also need to provide the following compliance features: full computer-generated audit trails; validated time stamps; version control of all files; advanced user access management; an electronic long-term archive option; document life-cycle management; and electronic signatures (optional).
21 CFR Part 11 restricts system access. Images, data sheets, diagrams, and text files saved in the system must be tracked via audit trail and version control. Data cannot be deleted, and any time an image is opened for viewing, the system must make the user indicate whether it is being opened Read-Only or Not-Read-Only (in which case a new version is created).
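The Read-Only / Not-Read-Only rule can be sketched as a minimal version-controlled record with an audit trail. The class, its fields, and the user names are illustrative assumptions, not a real Part 11 implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Part 11-style behavior: records are never deleted; a read-only
// view only logs an audit entry, while opening Not-Read-Only creates a new
// version and logs that as well.
public class ImageRecord {
    private int version = 1;
    private final List<String> auditTrail = new ArrayList<>();

    public ImageRecord(String user) {
        auditTrail.add("v1 created by " + user);
    }

    public int open(String user, boolean readOnly) {
        if (readOnly) {
            auditTrail.add("v" + version + " viewed read-only by " + user);
        } else {
            version++;  // Not-Read-Only: a new version is created, never an overwrite
            auditTrail.add("v" + version + " opened for edit by " + user);
        }
        return version;
    }

    public int currentVersion() { return version; }
    public List<String> auditTrail() { return auditTrail; }
}
```

Because every operation appends to the trail and versions only ever increase, the history needed for audit and traceability is preserved by construction.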
Design Controls is a regulatory requirement used by the Food and Drug Administration (FDA) to validate, verify, and control the development process of medical devices. Formulated in 1996, Design Controls (21 CFR 820.30) has nine parts: planning; input; output; review; verification; validation; transfer; changes; and files. The objective of Design Controls is to demonstrate compliance with the various "tests" imposed by the Code, which in turn suggests a minimal but acceptable level of medical device efficacy and safety.
Corrective and Preventive Action (CAPA)
The initial step in the process is to clearly define the problem. It is important to accurately and completely describe the situation as it exists now. This should include the source of the information, a detailed explanation of the problem, and the available evidence that a problem exists.
The situation that has been described and documented in the “Identification” section should now be evaluated to determine first, the need for action and then the level of action required. The potential impact of the problem and the actual risks to the company or customers must be determined. Essentially, the reasons that this problem is a concern must be documented.
In this step of the process a procedure is written for conducting an investigation into the problem. A written plan provides assurance that the investigation is complete and nothing is missed. The procedure should include: an objective for the actions that will be taken, the procedure to be followed, the personnel that will be responsible, and any other anticipated resources needed.
The investigation procedure that was created is now used to investigate the cause of the problem. The goal of this analysis is primarily to determine the root cause of the problem described, but any contributing causes are also identified. This process involves collecting relevant data, investigating all possible causes, and using the information available to determine the cause of the problem. It is very important to distinguish between the observed symptoms of a problem and the fundamental (root) cause of the problem.
Root Cause Analysis
Determining the root cause often requires answering a series of “why?” questions and digging deep until the fundamental reason for the problem is found. For example, in the out-of-tolerance parts situation described earlier, the investigation revealed that the operator had not been properly trained and had forgotten an essential step in the machining process. The improperly trained operator is the immediate cause of the problem, but may not be the root cause. Why was the operator not trained properly? Are the existing training programs adequate, and are they being implemented properly? Further investigation revealed that the operator was on vacation when the training was given and, therefore, did not receive the training when the other operators did. The root cause of the problem was a lack of follow-up in the training program: no mechanism existed to cross-check training records and ensure that a missed training session was rescheduled. The root cause of the problem is documented; this will be essential for determining the appropriate corrective or preventive actions that must be taken.
By using the results from the Analysis, the optimum method for correcting the situation (or preventing a future occurrence) is determined and an action plan developed. The plan should include, as appropriate: the items to be completed; document changes; any process, procedure, or system changes required; employee training; and any monitors or controls necessary to prevent the problem or a recurrence of the problem. The action plan should also identify the person or persons responsible for completing each task.
The corrective / preventive action plan that has been created is now implemented. All of the required tasks listed and described in the action plan are initiated, completed, and documented.
One of the most fundamental steps in the CAPA process is an evaluation of the actions that were taken. Several key questions must be answered: Have all of the objectives of this CAPA been met? (Did the actions correct or prevent the problem and are there assurances that the same situation will not happen again?) Have all recommended changes been completed and verified? Has appropriate communications and training been implemented to assure that all relevant employees understand the situation and the changes that have been made? Is there any chance that the actions taken may have had any additional adverse effect on the product or service?
Marketing, Competition, Future Development
The industry has clearly seen pathologists willing to accept Automated Tissue Image Systems (ATIS), which move them away from the microscope and to a telepresence-based computer touch-table monitor. This has motivated GE Healthcare's entry into the histology laboratory, in partnership with the University of Pittsburgh Medical Center (UPMC) digital pathology group to form Omnyx LLC, as well as Danaher's acquisition of Leica Microsystems, and should signal the digital pathology industry that competition is alive and well.
Market share is at risk for traditional companies that manufacture ATIS principally to generate revenue from the reagents and ingredients the systems consume, not unlike the computer printer/ink revenue loop. As histology laboratory processes become leaner and regulatory policies become better defined at the national level (e.g., proposed universal healthcare), overall budgets for supplies such as reagents and buffers will undergo further waste-management analysis, which in turn will act as a feedback mechanism for further automation. Continuous Improvement (Kaizen) in the laboratory will reduce cost to labs and reduce revenue to reagent suppliers. Reagent suppliers are advised to increase their portfolio R&D investments, in both staffing and funding, for a speedier and more aggressive attack on cancer diagnostics techniques.
Future areas of cancer diagnostic techniques and competitive high-technology applications include: three-dimensional views of glass-slide specimens; pre-treatment preparation automation; specimen characterization methods and quantification variables; nanotechnology applications to characterization; treatment eligibility, validation, and verification of epigenetic therapy; and applications in other cancer diagnostic techniques.
References and External Links
* FDA [http://www.fda.gov/ohrms/dockets/ac/01/briefing/3815b1_08_HER2%20FISH.htm]
* FDA MAUDE [http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfCFR/CFRSearch.cfm]
* CAP [http://www.cap.org/apps/cap.portal]
* Clarient, Inc [http://www.clarientinc.com/Default.aspx?tabid=357]
* HD-Endoscopy [http://www.hd-endoscopy.com/monitor.html]
* Auroramsc [http://www.auroramsc.com]
* Zeiss [http://www.zeiss.com/micro]
* Dmetrix [http://www.dmetrix.net]
* Bioimagene [http://www.bioimagene.com]
* OlympusAmerica [http://www.olympusamerica.com/seg_section/msfive/ms5_features.asp]
* 3dhistech [http://www.3dhistech.com]
* Omnyx [http://www.omnyx.com]
* Ventanamed [http://www.ventanamed.com/products/files/VIAS_specs.pdf]
* Leica-Microsystems [http://www.leica-microsystems.com]
* Biocaremed [http://www.biocaremed.com/biocare-equipment-instruments.html]
* Aperio [http://www.aperio.com/productsservices/prod-imageanalysis.asp]
* The Doctor's Doctor [http://www.thedoctorsdoctor.com/labtests/Her_2.htm#ref]
* Nikon Imaging Center (site: Harvard Medical School) [http://nic.med.harvard.edu/mission.html]
* Agilent [http://www.home.agilent.com/agilent/product.jspx?cc=CA&lc=eng&ckey=1410602&nid=-536900432.786036.00&id=1410602]
* DICOM [http://dicom.offis.de/standard.php.en]
* PACS [http://www.wma.net/e/publications/pdf/2000/inchingolo.pdf]
* MS OCS2007 & Exchange Server [http://www.microsoft.com/uc/what.mspx]
Wikimedia Foundation. 2010.