Frequently Asked Questions
Updated 19 January 1999.
What is teleradiology?
Teleradiology is the process of sending radiologic images from one point to another through digital, computer-assisted transmission, typically over standard telephone lines, a wide area network (WAN), or a local area network (LAN). Through teleradiology, images can be sent to another part of the hospital or around the world.
How are images captured from modalities such as CT, MR, ultrasound, and nuclear medicine?
Images can be captured either by a video-capture (frame-grabber) board, which connects directly to the composite video signal of the image processor or the console, or digitally, by connecting a modality directly to a workstation over a network (such as Ethernet). The least expensive method of acquiring digital data is a DICOM file transfer.
How are plain film (standard radiograph) images captured?
Standard radiographs can be digitized by either a video camera or a film scanner. Video cameras (commonly referred to as a "camera on a stick") were in fact the method of digitizing any image for transmission as recently as five years ago. Typical camera systems utilize a light box designed to illuminate radiographs, an extension arm for holding the camera above the film, and a high-sensitivity video camera with a zoom lens. This is an inexpensive but poor-quality method of image acquisition.
Film scanners and digitizers arrived on the teleradiology scene a few years ago. There are two basic types of film scanners: (1) CCD (charge coupled device) digitizers and (2) laser digitizers. There is considerable debate about which technology is superior. Laser digitizers seem to be perceived as providing better images; CCD digitizers seem to be perceived as offering more value for the money. Certainly, CCD technology has improved considerably over the past few years, and today high quality CCD digitizers produce images as good as (or nearly as good as) top-of-the-line laser digitizers. The major difference between images produced with laser digitizers and CCD digitizers is in the optical density captured from the film.
What is DICOM?
DICOM (Digital Imaging and Communications in Medicine) is a standard that provides a framework for medical-imaging communication. Based upon the Open Systems Interconnection (OSI) reference model, which defines a 7-layer protocol stack, DICOM is an application-level standard, which means it exists inside layer 7 (the uppermost layer). The standard was developed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) with input from various vendors, academia, and industry groups. It is referred to as "version 3.0" because it replaces versions 1.0 and 2.0 of the standard previously issued by ACR and NEMA, which was called the "ACR-NEMA" standard.
DICOM provides standardized formats for images, a common information model, application service definitions, and protocols for communication.
Which telecommunication media can be used in teleradiology?
Depending on data-transfer rate requirements and economic considerations, images can be transmitted by means of common telephone lines (twisted pairs of copper wire), digital phone lines (ISDN, switched-56, etc.), coaxial cable, fiber-optic cable, microwave, satellite, and frame relay or T1 telecommunication links.
Today most teleradiology systems run over standard telephone lines. Over the next couple of years, we should see a substantial migration to switched-56 and ISDN (Integrated Services Digital Network) lines, which offer higher speed and better line quality than standard dial-up phone lines. Other high-speed lines, including T1 and SMDS (Switched Multimegabit Data Service), will also become more popular as prices continue to drop.
What is meant by image-bit size?
Digital images, whether viewed on a computer monitor, transmitted over a phone line, or stored on a hard disk or archival medium, are pictures that have a certain spatial resolution.
The spatial resolution, or size, of a digital image is defined as a matrix with a certain number of pixels (information dots) across the width of the image and down the length of the image. The more pixels, the better the resolution. This matrix also has depth. This depth is usually measured in bits and determines the number of shades of gray (an n-bit image has 2^n shades): a 6-bit image contains 64 shades of gray; 7-bit, 128 shades; 8-bit, 256 shades; and 12-bit, 4,096 shades.
The size of a particular image is referenced by the number of horizontal pixels "by" (or "times") the number of vertical pixels, and then by the number of bits of gray-scale depth. For example, an image might have a resolution of 640 x 480 and 256 shades of gray, or 8 bits deep. The number of bits in the data set can be calculated by multiplying: 640 x 480 x 8 = 2,457,600 bits. Since there are 8 bits in a byte, the 640 x 480 image with 256 shades of gray contains 307,200 bytes, or about 0.31 megabytes, of information.
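The arithmetic above can be sketched as a small function. This is an illustrative helper (the function name is our own, not part of any teleradiology product), computing uncompressed image size from matrix dimensions and bit depth:

```python
def image_size_bytes(width, height, bit_depth):
    """Uncompressed size, in bytes, of a width x height image
    with bit_depth bits per pixel (8 bits = 1 byte)."""
    total_bits = width * height * bit_depth
    return total_bits // 8

# The 640 x 480, 8-bit example from the text:
print(image_size_bytes(640, 480, 8))    # 307200 bytes (about 0.31 MB)

# A 512 x 512, 12-bit (4,096 shades of gray) image:
print(image_size_bytes(512, 512, 12))   # 393216 bytes
```

Note how quickly file size grows with matrix size and depth; this is why compression and transmission time matter so much in teleradiology.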
What does image compression mean?
Although images should be permanently archived as raw data or with only lossless data compression (no data is destroyed), hardware and software technology exists that allows teleradiology systems to compress digital images into smaller file sizes so that the images can be transmitted faster.
Compression is usually expressed as a ratio: 3:1, 10:1, or 15:1. A 10:1 compression ratio means that, on average, ten bits of the original image's data are represented by a single bit in the compressed file.
Certain images can withstand a substantial amount of compression without a visual difference: CT and MR images have large areas of black surrounding the actual patient image information in virtually every slice. The loss of some of those pixels neither affects the perceived quality of the image nor significantly changes reader-interpretive performance.
How do you calculate image-transmission time?
"How fast can you transmit an image?" is probably the question asked most often of teleradiology sales reps. However, the answer is not simple. An "image" is what is shown on the display monitor; it could be a single CT slice or an entire 14" x 17" film. If a vendor answers the question with "15 seconds," he or she is only giving you half of the answer. Image-transmission time is directly proportional to the file size of the digital image. The greater the amount of digital information in an image (that is, the larger the matrix and the larger the number of bits per pixel), the greater the time required to transmit the image from one location to another. A radiological image contains a large amount of digital information. For example, an image with a relatively low resolution of 512 x 512 x 8 bits contains 2,097,152 bits of data, and a 1,024 x 1,024 x 8-bit image has 8,388,608 bits of data.
Transmission time is governed by file size. The only way to decrease the transmission time is either to increase the speed of the modem or to reduce the number of bits being sent (compress the image).
The following formula is used to calculate the time to transmit an image:
(Matrix Size) x (Matrix Depth + 2 bits) x (Percentage of Compression) / (Modem Speed) = Seconds to Transmit.
Matrix Depth is the number of bits of gray scale: 256 shades of gray equal 8 bits; 128 shades equal 7 bits; 64 shades equal 6 bits. For modem control, all modems add 2 framing bits when transmitting.
Modem protocol overhead, turnaround time, and typical phone-line interference cause all modems to be slower than their published capacity, usually by a factor of 20 to 30 percent. When a phone line has noise, such as music from a local radio station, other voices from a crossed line, or clicks caused by line switches, the modem will "hear" the noise, interpret it as bad data, and try to retransmit. If the disruption exceeds a modem-determined threshold, the modem will reduce its speed. The amount by which a modem slows down is called its fall-back rate. The fall-back rate is different for each manufacturer's modem and will vary between phone calls. (A typical modem drops from 28,800 bits per second to 26,400 bits per second at the first sign of line interference.) A smaller fall-back rate makes phone-line interference less noticeable to the user. Because fall-back rates make modem speed such a variable, this formula and discussion assume a completely clear line and both modems transmitting at their top speed.
As you investigate teleradiology choices, you should pay attention to the fall-back rate of a system's modem. A large fall-back rate would be a source of frustration. (Some modems have a fall-back rate of 2,400 bits per second, some even as much as half the modem's speed.)
What is client/server computing?
Client/server computing developed from the need to move systems used for application development and operations from expensive mainframes to more efficient, less expensive--yet just as powerful--workstations. Client/server architecture involves the use of two types of computers: a client computer, which runs applications and makes requests for data and other resources, and a server, which processes the client's requests by distributing the requested resources. Client/server computing is known as a cooperative distribution system because both the client and the server cooperate in performing a task. For example, if a client requests a record from the server, the server uses its resources to process the entire file, while the client computer uses its resources to run an application that reads and writes individual records in the file. The server does not need to send the entire file to the client, thus diminishing network traffic or traffic over communication lines.
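The record-retrieval example above can be illustrated with a toy sketch (this is not a real PACS or DICOM protocol; the record key and contents are invented). The server searches its whole data set and sends back only the record the client asked for, so the full file never crosses the network:

```python
import socket
import threading

# The server's data set; only matching records ever leave the server.
RECORDS = {b"patient-17": b"CT chest, 1999-01-12, 40 slices"}

def serve_one(sock):
    """Accept one connection, look up the requested record, reply."""
    conn, _ = sock.accept()
    with conn:
        key = conn.recv(1024)                          # the client's request
        conn.sendall(RECORDS.get(key, b"not found"))   # only one record returns

server = socket.socket()
server.bind(("127.0.0.1", 0))      # any free local port
server.listen(1)
threading.Thread(target=serve_one, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(server.getsockname())
client.sendall(b"patient-17")      # request a single record, not the whole file
reply = client.recv(1024)
print(reply.decode())              # CT chest, 1999-01-12, 40 slices
client.close()
server.close()
```

The division of labor is the point: the server spends its resources searching, the client spends its resources displaying, and the network carries only the answer.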
Client/server computing has numerous advantages for the medical world and for the field of radiology in particular. It permits workstations to achieve computing power previously only available from mainframes--at a fraction of mainframe costs. By efficiently dividing resources, client/server computing reduces network traffic and improves response time.
This efficiency offers a significant advantage to physicians who need to receive images quickly and who require real-time image navigation and manipulation to perform diagnostic tasks effectively. Client/server computing facilitates the use of graphical user interfaces, making teleradiology and PACS applications easier to use and more responsive. Additionally, clients and servers can run on different platforms, allowing end users to free themselves from particular proprietary architectures. Software applications designed for client/server computing can interface seamlessly with most HIS or RIS systems while providing rapid soft-copy image distribution.
What is a RAID?
RAID stands for "Redundant Array of Inexpensive (or Independent) Disks." RAID employs a group of hard disks and a system that sorts and stores data in various forms to improve data-access speed and provide improved data protection. To accomplish this, a system of numbered levels "mirrors," "stripes," and "duplexes" data onto a group of hard disks.
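Two of the techniques named above can be sketched in a few lines (an illustration of the ideas, not real RAID firmware): mirroring writes an identical copy of the data to each disk, while striping deals the data out across the disks so reads and writes can proceed in parallel.

```python
def mirror(data, disks=2):
    """Mirroring (RAID 1 style): every disk holds a full copy."""
    return [bytes(data) for _ in range(disks)]

def stripe(data, disks=2):
    """Striping (RAID 0 style): bytes are dealt round-robin across disks."""
    return [data[i::disks] for i in range(disks)]

payload = b"RADIOLOGY"
print(mirror(payload))   # [b'RADIOLOGY', b'RADIOLOGY']
print(stripe(payload))   # [b'RDOOY', b'AILG']
```

Mirroring costs capacity but survives a disk failure; striping alone is faster but offers no redundancy, which is why real RAID levels combine these techniques with parity.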
What is a teleradiology "overread network"?
While most teleradiology systems purchased over the last decade were intended for on-call purposes, the past two years have seen a rapid increase in the use of teleradiology to link hospitals and affiliated satellite facilities, other primary hospitals, and imaging centers. A number of the enabling technologies needed for effective overread networks, such as more affordable high-speed telecommunications networks and improved data compression techniques, have matured in recent years.
In addition, health-care reform in the United States has emphasized increasing the quality of services and access to them while decreasing cost. By providing rapid, accurate, and cost-effective radiology consultations, overread networks are enhancing the accessibility and quality of health care, especially for rural hospitals, which make up about 60 percent of all hospitals in the U.S.A.
As radiologists experience a decrease in reimbursement for services, teleradiology allows them to use their time more efficiently, thus increasing volume without substantially increasing costs.