Publishing on the WWW. Part 3 - Scanning medical images for screen presentation.

P Zammit.

Abstract

The purpose of this article is to remove the myths and black art surrounding electronic imaging and prove beyond doubt that any mind competent in the workings of the human body is capable of producing good on-screen images from adequate equipment. Note that, in this day and age of SI units, imaging still uses inches to describe diagonal screen size and pixels per inch to describe resolution.

Keywords:  Internet; Medical Illustration; Publishing

Year:  2001        PMID: 22368594      PMCID: PMC3232498     

Source DB:  PubMed          Journal:  Images Paediatr Cardiol        ISSN: 1729-441X


Foveal human vision is explained by the trichromatic theory. This stipulates that an image is made up of a matrix of red, green, and blue dots of intensity varying from black to the maximum saturation possible for each of the respective colours. This theory also conveniently explains image scanning and monitor images, as in both instances, the image is made up of coloured dots of light. It is also the most popular way of encoding digital colour images.

The screen image

An on-screen image is composed of a number of little squares (picture cells = pixels) of different colours, typically 800 × 600 or 1024 × 768. Actually, each pixel is made up of a finite screen area with red, green, and blue phosphors at varying levels of excitation. Colour monitors are commonly 15 or 17 inch diagonal, corresponding to 10½ and 12½ inches of horizontal viewable image respectively. The actual resolution (pixels per inch = ppi) is obtained by dividing the horizontal number of pixels by the horizontal screen size. The same image will occupy a different screen size if viewed on a different size of monitor, or on the same monitor at a different screen resolution setting. It should be apparent that there is no universal screen resolution, and also that it is not possible to determine the exact size of an on-screen image published on the web or presented on-screen. It is however very easy to envisage the proportion of screen that will be occupied. Thus, for an on-screen presentation at 1024 × 768, an image of 500 × 700 pixels will occupy almost half the screen, leaving the other half for text. Furthermore, if an image is to fill most of the screen without being cropped on a web page optimised for 800 × 600 (allowing for toolbars), it must not be larger than 600 × 450 pixels.

Images for on-screen viewing have no such thing as dpi, ppi or size; they are simply described in pixels. However, images intended for printing must have a final size defined, as the printer driver will process the image again, from a matrix of red, green and blue values to a raster of cyan, magenta, yellow plus black dots, and therefore needs to be told the exact size of the final image. Thus one can talk of a 6 × 4 inch image at 150 ppi that may be printed on a colour inkjet (at 300, 600, 720 or 1440 dpi depending on the quality of the printer).
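The screen arithmetic above can be sketched in a few lines of Python (the function name is illustrative, not from any imaging library):

```python
# Sketch of the on-screen size arithmetic discussed above.

def effective_ppi(horizontal_pixels, viewable_inches):
    """Actual screen resolution: horizontal pixels divided by
    horizontal viewable screen size in inches."""
    return horizontal_pixels / viewable_inches

# A 17-inch monitor (~12.5 inches horizontal viewable image) at 1024 x 768:
print(round(effective_ppi(1024, 12.5)))  # ~82 ppi

# Fraction of a 1024 x 768 screen occupied by a 500 x 700 pixel image:
frac = (500 * 700) / (1024 * 768)
print(f"{frac:.0%}")  # about 45% - almost half the screen
```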

The scanner

A typical colour scanner consists of a linear white light source and a linear camera assembly with a mechanism to move the subject in relation to the camera plus some electronics to control this motion and make the camera communicate with the computer. The linear camera usually consists of a lens to reduce the size and form an image, and a linear array of light sensitive devices. These devices are conveniently etched on a piece of silicon and are known collectively as a charge-coupled device (CCD). As the lens and CCD assembly does not physically change, the actual number of light sensitive dots and their arrangement does not change either. Therefore the scanner can only see the subject at one resolving power, and this is known as the optical resolution of the scanner. A typical example may be 600 × 1200 or 1200 × 2400. The lesser figure depends on the closeness of the packing of the light sensitive array on the CCD while the other figure is the ability of the motor moving the assembly to move in very small steps.

Resampling

How does the scanner scan at different resolutions? The simple answer is, it does not. The different resolutions are obtained mathematically by a process called resampling (or interpolation) plus variation of the motor step size. It should by now be obvious that the best scans are obtained by scanning at the optical resolution of the scanner or submultiples thereof. Any scanning above the optical resolution is a waste of time. Scanning at strange ratios uses heavy interpolation and unless the final required image is obtained at this step, any further resampling will potentially further degrade the image quality. For instance, with a 600 ppi scanner it is best to stick to 600, 300, 200, 150, 120, 100 ppi etc. and arrive at the final image with a single resample. For least loss during an interpolation the image reduction factor should also be close to an integer value. It is also possible to increase the image resolution by interpolation but note that the information in the image will not change and the results are usually poor. With the notable exception of drum scanners, most scanners use CCD image sensors and the above applies to all these devices including dedicated transparency scanners.
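The recommendation to stick to integer submultiples of the optical resolution can be expressed directly (an illustrative sketch, not part of any scanner driver):

```python
# List the "safe" scanning resolutions for a scanner of a given optical
# resolution: integer submultiples, which avoid heavy interpolation.

def submultiple_resolutions(optical_ppi, limit=8):
    """Return optical_ppi divided by each integer up to `limit`
    that divides it exactly."""
    return [optical_ppi // n for n in range(1, limit + 1)
            if optical_ppi % n == 0]

print(submultiple_resolutions(600))
# [600, 300, 200, 150, 120, 100, 75] - the values recommended above
```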

Worked example

If one intends to fill most of the screen (without scrolling) in a web presentation designed for viewing at a screen resolution of 800 × 600 with a 6 × 4 inch photo, what is the best scanning resolution? Screen size 800 × 600 less margins (approx 20%) = 640 × 480 pixel image. The photo may be scanned at 200 ppi or 120 ppi and then resampled to 107 ppi.
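The same calculation as plain arithmetic (variable names illustrative; a 600 ppi optical-resolution flatbed is assumed):

```python
# The worked example: target pixel size, final ppi, and a submultiple
# scanning resolution for an assumed 600 ppi flatbed.

screen_w, screen_h = 800, 600
margin = 0.20                              # approx 20% lost to margins
target_w = round(screen_w * (1 - margin))  # 640 pixels
target_h = round(screen_h * (1 - margin))  # 480 pixels

photo_w_in = 6                       # 6 x 4 inch photo, 6 inches wide
final_ppi = target_w / photo_w_in    # ~107 ppi after resampling
scan_ppi = 120                       # nearest submultiple of 600 above 107

print(target_w, target_h, round(final_ppi), scan_ppi)  # 640 480 107 120
```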

Optimisation of levels

The number of discrete values possible for a pixel is called bit depth. This is a physical property of the electronics of a scanner. A typical value is 36 bit, and this works out as 12 bits (2¹² = 4096 discrete values) for each of the red, green, and blue channels. RGB images are normally handled at 24 bits, the extra information being discarded after scanning. Inspection of the image level histogram of the first unoptimised scan below reveals that the plot only occupies two thirds of the total space; the scan looks rather dull. The image levels are optimised by setting the black and the white levels (little arrows at the bottom of the histogram) to the points that represent black and white on the image. The optimised image is seen to be brighter and all the steps on the colour chart are clearly resolved. The histogram of this image neatly fills up the whole of the space. If this manipulation were to be performed after scanning, the image information would already have been truncated to 24 bits (256 steps for each of the red, green, and blue channels, as opposed to the previous 4096). Although the manipulated image might superficially look the same, the histogram below clearly shows the information loss as a discontinuity. A similar argument can be applied to further image manipulation of gamut or heavy cast removal. The loss of information will easily show up as bands on continuous gradients, as can be clearly seen in the two images below, where one can actually count the number of steps in the gradient. The take home message is: aggressive manipulation of the image is best done at the scanning stage, to make use of all the information available. Note however that some scanning software and professional image editing packages (e.g. Photoshop) will allow importation of 48 bit files. These very large image files will conveniently allow much more flexibility at the image editing stage, although some attention at the scanning stage will go a long way even here.
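The truncation effect can be illustrated numerically. This sketch (plain Python, illustrative values) stretches a histogram occupying two thirds of the range to the full range, once on data already truncated to 8 bits and once on 12-bit data; counting distinct output levels shows where the comb-like gaps in the histogram come from:

```python
# Why level stretching after truncation to 8 bits loses information:
# the 8-bit stretch leaves gaps (a "comb" histogram), the 12-bit one does not.

def stretch_to_8bit(values, black, white):
    """Map the [black, white] interval onto the full 0-255 range."""
    out = []
    for v in values:
        s = (v - black) / (white - black) * 255
        out.append(min(255, max(0, int(s + 0.5))))
    return out

# A scan whose histogram occupies only two thirds of the range:
eight_bit = stretch_to_8bit(range(0, 171), 0, 170)       # 8-bit data, 0..170
print(len(set(eight_bit)))    # only 171 distinct levels survive the stretch

twelve_bit = stretch_to_8bit(range(0, 2731), 0, 2730)    # 12-bit data, 0..2730
print(len(set(twelve_bit)))   # all 256 output levels are present
```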

Gamut

You will have noticed a third pointer in the middle of the histogram. This is used to define the mid tone. In an underexposed image one would move the pointer to the left to increase the number of pixels that are in the lighter tones. Conversely in an over exposed image the pointer may be moved to the right to decrease the number of pixels in the lighter tones. This is a convenient way of coarsely adjusting what is referred to as the image gamut. Finer control of the precise image gamut is usually also available as curve control.
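The mid-tone adjustment is conventionally a gamma (power-law) correction applied to normalised pixel values; a minimal sketch (function name illustrative), where moving the pointer to lighten an underexposed image corresponds to gamma greater than 1:

```python
# Mid-tone (gamma) adjustment on an 8-bit value:
# output = (input / 255) ** (1 / gamma) * 255

def apply_gamma(value_0_255, gamma):
    normalised = value_0_255 / 255.0
    return round((normalised ** (1.0 / gamma)) * 255)

mid = 128
print(apply_gamma(mid, 1.0))  # 128 - unchanged
print(apply_gamma(mid, 2.0))  # 181 - mid tones lifted (underexposed image)
print(apply_gamma(mid, 0.5))  # 64  - mid tones depressed (overexposed image)
```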

Exposure

Most scanner interfaces will sport an exposure control. This will control the time allowed for the CCD to accumulate charge before reading. CCDs saturate (overexpose) very easily and when they do they transfer the extra charge onto the neighbouring cells. This results in spilling of the highlights or blooming. Correct exposure is thus effectively determined by the highlights in an image and is best left to automatic. Exceptions will occur, the commonest being a backlit subject where the subject is in shadows and highlight detail is irrelevant. Judicious adjustment of the exposure control in this situation will optimise the scan for the subject increasing shadow detail and decreasing noise (see below).

Other scanner quality parameters

Dmax is the logarithm of the density ratio between the whitest highlight and the darkest shadow resolvable. A good flatbed scanner should have a Dmax of 3.0 or better. This is more important when scanning transparencies than when scanning reflective targets. Complex multi-scanning techniques with mixing are possible to enhance the limited Dmax of CCD scanners when scanning transparencies. These may be built into some more upmarket transparency scanners, or one may make use of a third party scanning package, notably Hamrick VueScan http://www.hamrick.com/vsm.html. This package is cheap and has most of the important settings available automatically. Noise is a random electronic phenomenon that appears as multicoloured dots in dark areas. Good scanners do not show noticeable noise on most scans, but if a very dark image is scanned and then enhanced, noise usually shows up. Improvements can be achieved by averaging multiple scans. Multiple scans can be a laborious manual process or may be part of the scanning software. Alternatively, a third party scanner controller package may be used.
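Since Dmax is a base-10 logarithm of the density ratio, the figures quoted translate directly into highlight-to-shadow ratios:

```python
import math

# Dmax = log10(highlight/shadow density ratio), so a Dmax of 3.0
# means the scanner resolves a 1000:1 range.
dmax = 3.0
print(10 ** dmax)         # 1000.0 - the ratio a Dmax 3.0 scanner resolves
print(10 ** 3.2)          # ~1585  - the Dmax 3.2 quoted for transparencies
print(math.log10(1000))   # 3.0    - back again
```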

Blur

Image blurring occurs at three discrete steps:

- optical blurring involved in forming an image on the CCD;
- CCD charge leakage, resulting in electronic blur;
- image degradation on resampling.

Blurring may be improved, at times dramatically, by using a filter called unsharp mask. There are no standard settings, but setting the radius to 0.5 pixels, the threshold to 1 level and the amount to 150% is a good start. Again, probably the best time to apply the unsharp mask is after the final resampling.
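A minimal one-dimensional sketch of the unsharp mask principle (pure Python, for illustration only; real editors work in two dimensions with a Gaussian blur of the stated radius). The formula is sharpened = original + amount × (original − blurred), applied only where the difference exceeds the threshold:

```python
# Unsharp mask on a 1-D signal, with a 3-tap box blur standing in
# for the Gaussian blur a real editor would use.

def unsharp_mask(signal, amount=1.5, threshold=1):
    """sharpened = original + amount * (original - blurred),
    applied only where |original - blurred| > threshold."""
    blurred = [signal[0]] + [
        (signal[i - 1] + signal[i] + signal[i + 1]) / 3
        for i in range(1, len(signal) - 1)
    ] + [signal[-1]]
    out = []
    for orig, blur in zip(signal, blurred):
        diff = orig - blur
        if abs(diff) > threshold:
            orig = min(255, max(0, round(orig + amount * diff)))
        out.append(orig)
    return out

edge = [50, 50, 50, 200, 200, 200]
print(unsharp_mask(edge))
# [50, 50, 0, 255, 200, 200] - the overshoot either side of the edge
# (clamped to 0 and 255 here) is the characteristic sharpening halo.
```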

Colours

Colour is a very complex subject and a detailed discussion is beyond the scope of this article. Fortunately, some standardisation exists in the presentation of pictures on the Internet. Although most devices, with the exception of printers, see colour images as a matrix of red, green, and blue dots, this does not mean that they all use exactly the same wavelengths, or that what a scanner sees as maximally saturated red corresponds to maximum red on the monitor. This goes on for each of the colours plus black and white, so that each device will have its own “profile”. An elegant way to sort this out was proposed by the International Colour Consortium http://www.color.org/ (ICC; read ICM for Windows), whereby each device has its own profile, and digital images are rendered standard by using a standardised device independent profile.

Now the problems start. Changing a scanner's exposure will change its profile, and changing a monitor's brightness or contrast will also change its profile, as will ageing of the device. Device independent profiles abound (although the most popular are Adobe RGB (1998) and sRGB). I dare not mention printers, with all their different settings and papers. Add to this different system gamuts and colour temperature, and you have all the ingredients for a perfect mishmash. The only proper way to work professionally is to profile the whole setup for each job using standard targets, colorimeters and dedicated software. It is no longer surprising why there is such a black art in the production of a decent colour print.

Fear not, a workaround exists. The device independent profile to use for the Internet is sRGB. This is assumed by Windows from Win98SE and Win2000 onwards (ICC has been on Macs for a long time now). A decent monitor may well have bundled generic profiles, or one may find them on the Internet. These, although not accurate, provide a starting point.
Even if these are not available an adequate working profile for a monitor can be created using the Adobe gamma utility. In Windows, start Adobe Gamma, located in the Control Panels folder or in the Program Files/Common Files/Adobe/Calibration folder on your hard drive. In Mac OS, from the Apple menu, choose Control Panels > Adobe Gamma. Just follow the on-screen instructions. It does not require much genius to figure out that if you like splashy multicoloured desktops you will have problems in achieving a decent calibration and it is best to resort to a neutral grey. If the monitor is adequately calibrated and the image saved in an appropriate device independent profile (one can conveniently set this up as the workspace in Adobe Photoshop in our case sRGB) then one can be confident that the image files created will have a good degree of fidelity to what was seen when the image was being scanned and edited. This is the most valid reason that I find for commending Adobe Photoshop. In addition it is undisputedly the best bitmap editor available although rather complex and with a steep learning curve. http://www.adobe.com/products/photoshop/main.html

In summary

1. Decide on the screen area in pixels that the final image should occupy.
2. Prescan the original and decide on the final cropping.
3. Decide on the scanning resolution (approx 2× the final resolution; this depends on the scanner optical resolution).
4. Apply the image corrections possible at this stage (histogram range and gamut).
5. Scan the image.
6. Edit the image as desired using a calibrated monitor (this will be the topic of a further article).
7. Resample the image to the final resolution decided at step 1.
8. Apply unsharp mask to the image.
9. Save (usually 24 bit JPEG) in the sRGB device independent profile.

Note that if you want to archive the image or work on it later, you should save it first in a lossless format, e.g. TIFF.

Additional notes

Buying a scanner

If the intention is to scan photographic images only, then a 36 bit A4 flatbed scanner with a Dmax of 3.0 or more and an optical resolution of 600 ppi or more is sufficient. Transparency is a different kettle of fish altogether. If one needs to scan 35mm film or slides for screen viewing only, a 1200 ppi scanner with a dedicated transparency adapter (Dmax 3.2) will be excellent, but if one intends to print the scans at A4 then a dedicated 35mm scanner is required. These dedicated 35mm scanners will have a resolution of 2500 ppi or better with a minimum Dmax of 3.2. Although the prices are coming down, these scanners are quite expensive. An easy workaround is to get one's film scanned onto Kodak Photo CD. These will accommodate 40 high quality or 100 standard quality images per CD and usually cost around Lm5. As far as I know, this service is not available locally. If one intends to scan X-rays, then an A3 flatbed with a transparency hood is ideal. I must say that these devices are expensive and cumbersome, and the results obtained with some care using a good quality digital camera and a viewing box are adequate. Interfaces come in three flavours: USB, SCSI and parallel. SCSI scanners are considered more upmarket but require the installation of a card inside the computer, which, if not bundled with the scanner, will add to the cost. Parallel port scanners tend to be cheap but cause a lot of problems with the printer if they share the same port. USB is reasonably fast and just involves plugging in the unit. The latter is probably the best option unless one has specific reasons to go for SCSI.

Monitors

Any image scanned can only be seen or manipulated via a monitor, unless it is printed. If your only aim is producing images for screen (i.e. sRGB, low pixel count), any decent 15-inch monitor will do. Any other aspiration must be served by at least a high quality 17-inch device with a modern tube. TFT screens, although very good, are usually not considered suitable for image editing. This is due to the light emission physics: CRT phosphor has an exponential response to excitation, while a TFT has a sigmoid response. Furthermore, even if a TFT screen is accurately calibrated using a colorimeter and special software, changing the viewing angle by a few degrees will change the viewed image so much as to make editing impossible.

Other system requirements

There is one feature that graphics systems crave, and that is RAM, a lot of it. If you are just scanning for screen, then the now standard 128MB is more than enough; in fact one will even get away with 64MB. But if you intend to do any other image processing, 256MB or more will be needed. All the information has to be sent to the monitor via the graphics adapter. This must be capable of at the very least 1024 × 768 at a minimum 24-bit colour depth with a sufficiently fast refresh rate. Such an adapter allows viewing of 800 × 600 images (required to fill the screen at this resolution for presentations). If you are buying a system anew, aim for a card/monitor combination that is capable of 1600 × 1200 at 32-bit colour depth. Note that it is the 2D performance rather than the 3D acceleration that will get a bitmap image on your screen.
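The adapter requirement comes down to frame buffer arithmetic (an illustrative sketch; the function name is not from any graphics API):

```python
# Frame buffer size for the resolutions quoted above.

def framebuffer_bytes(width, height, bits_per_pixel):
    """Memory needed to hold one full screen at the given colour depth."""
    return width * height * bits_per_pixel // 8

print(round(framebuffer_bytes(1024, 768, 24) / 2**20, 2))   # 2.25 MB
print(round(framebuffer_bytes(1600, 1200, 32) / 2**20, 2))  # 7.32 MB
```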

Further reading

A few scanning tips by Wayne Fulton http://www.scantips.com/
Accurate image manipulation for desktop publishing http://www.aim-dtp.net/aim/index.htm
Colour Management Workflows for Photoshop http://www.adobe.com/support/techguides/photoshop/cms2/cmwork.html
Sullivan's scanning tips online http://www.hsdesign.com/scanning/tipswelcome.html
An Introduction to Digital Scanning http://www.agfabooks.com/ncuav.html
Kodak Photo CD http://www.kodak.com/US/en/digital/products/photoCD.shtml