The present invention relates to graphic image processing techniques and, more particularly, to a method for producing truly scaleable typeface data that is capable of providing bit-map font data at any resolution and at any point size.
The problem of representing graphic images on digitally controlled machines is classic. Graphic images are conceived and usually executed in an "analogue" fashion with continuous smooth flowing lines, infinite angles and subtle variations in their internal dimensions.
The study of a graphic image as simple as a dot drawn in ink on a piece of paper would reveal an infinite number of angles measured along the contour that encircles the dot through 360 degrees. Likewise, an infinite number of measurements could be taken across the dot at an infinite number of locations.
However, the representation of such a graphic image in "digital" form places major restrictions on its appearance. First, the infinite and subtle variation of dimensions must be represented by a discrete set of dimensions as measured between a finite set of locations on a two-dimensional coordinate plane. Additionally, in many digital systems, there exist only two discrete angles: vertical and horizontal.
Under these restrictions, one can only produce an "illusion" of the dot, but its size, smoothness and position will be compromised. The success of the illusion (or the degree to which the image is compromised) is directly dependent on two factors.
First, the resolution of the digital system determines the number of addressable locations, and consequently the number of representable dimensions available to aid in the illusion--the higher the resolution, the better the quality of the illusion.
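The dependence on resolution can be made concrete with a small sketch (my own illustration, not from the source): a continuously varying "analogue" dimension must snap to the nearest addressable device unit, so higher resolutions admit a finer set of representable dimensions. The stem width and resolutions below are assumed values chosen for illustration.

```python
# Illustrative sketch: at a given output resolution, only a finite set of
# dimensions is representable, so an "analogue" width must snap to the
# nearest whole device unit.

def representable_width(analog_width_inches: float, resolution_dpi: int) -> int:
    """Snap an analogue dimension to the nearest addressable device unit."""
    return round(analog_width_inches * resolution_dpi)

# The same hypothetical 0.014-inch stem at two resolutions:
low = representable_width(0.014, 300)     # coarse grid: few representable widths
high = representable_width(0.014, 2400)   # fine grid: a far closer illusion
```

At 300 dpi the stem collapses to 4 device units, while at 2400 dpi it is rendered with 34 units, leaving far more room to preserve subtle variations.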
Secondly, the selection of appropriate key angles, lines, and dimensions used to represent the image contributes significantly to the success of the illusion. In the example of the dot, the key points to be established in order to create an illusion of the dot would be its height and width. However, these alone make no distinction between a square dot and a round dot. The round-dot illusion, therefore, could be promoted by notching the corners of the square. In this way, an illusion of angles other than vertical and horizontal is created.
An additional complication arises when digitally defined images are to be scaled. Essentially, this involves translating the image from one digital system to another (i.e., from one "illusion" to another).
These digital images can exist as either contours defining the bounds of the images or as dots or strokes filling the interiors of the images. In either case, the fundamental problems are the same. Neither representation can define the graphic image without some loss of detail. Scaling the image can contribute to further loss of detail. Conversion from contours to strokes and vice versa likewise destroys detail.
Because the basic interest is in producing solid filled images, the following comments are confined to the generation of dots or strokes filling the interiors of graphic images. Essentially, there are two methods of producing solid filled graphic images at multiple sizes.
The first method is to utilize a digital representation composed of strokes or dots which have been carefully created to form the optimum illusion of the image. Scaling is then accomplished by replicating stroke or dot patterns for increased scale and by throwing away strokes and dots for reduced scale. The problem arises in determining which dots or strokes are key to the creation of an optimal illusion at the desired scale. Because important information regarding the original "analogue" graphic image is missing, this method cannot produce optimal-quality images.
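A minimal sketch of this first method (my own illustration, not the patent's algorithm) is nearest-neighbour scaling of a fixed bitmap: rows and columns are replicated to enlarge and discarded to reduce, with no knowledge of which dots are "key" to the design. The sample glyph below is an assumed stand-in for a carefully hand-tuned pattern.

```python
# Sketch of method one: scale a hand-tuned bitmap by replicating rows and
# columns (enlargement) or discarding them (reduction). Nothing tells the
# algorithm which dots matter, so thin features can vanish on reduction.

def scale_bitmap(rows: list[str], new_w: int, new_h: int) -> list[str]:
    """Nearest-neighbour scaling: each target cell copies one source cell."""
    src_h, src_w = len(rows), len(rows[0])
    return [
        "".join(rows[y * src_h // new_h][x * src_w // new_w]
                for x in range(new_w))
        for y in range(new_h)
    ]

# A hypothetical 4x4 ring-shaped dot pattern:
glyph = [".##.",
         "#..#",
         "#..#",
         ".##."]

small = scale_bitmap(glyph, 2, 2)   # halving discards alternate rows/columns
```

Halving this glyph leaves only two diagonal dots; the ring's closed contour is destroyed, exactly the loss of detail described above.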
The second method is to define the digital image as a contour initially. This contour can then be scaled and mathematically "filled" with strokes or dots. If the original contour is described in sufficient detail, the resulting illusion can be good. However, because the contour is only a description of a hollow shell, and the strokes or dots represent a digital system, once again the size, position, smoothness, and internal dimensions of the image are compromised. The unavoidable mathematical errors caused by interpreting "analogue" images as digital produce unpredictable results.
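A hedged sketch of this second method (my own illustration under simplifying assumptions: the contour is approximated as a polygon, and filling uses the common even-odd scan-line rule): the contour is scaled mathematically, then "filled" with horizontal runs of dots. Rounding the scaled crossing positions onto the pixel grid is precisely where the unavoidable mathematical errors enter.

```python
# Sketch of method two: scale a contour, then fill it with horizontal
# strokes using even-odd scan-line crossings. Rounding crossings to the
# pixel grid compromises size, position, and internal dimensions.

def fill_polygon(points, width, height):
    """Return bitmap rows with '#' inside the polygon (even-odd rule)."""
    rows = []
    for y in range(height):
        yc = y + 0.5                      # sample at pixel centres
        xs = []
        n = len(points)
        for i in range(n):
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
            if (y0 <= yc) != (y1 <= yc):  # edge crosses this scan line
                xs.append(x0 + (yc - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        row = ["."] * width
        for left, right in zip(xs[::2], xs[1::2]):
            for x in range(max(0, round(left)), min(width, round(right))):
                row[x] = "#"
        rows.append("".join(row))
    return rows

# A unit-square contour scaled by a factor of 5, filled on a 6x6 grid:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
scaled = [(5 * x, 5 * y) for x, y in square]
bitmap = fill_polygon(scaled, 6, 6)
```

Because the fill samples a hollow shell at discrete scan lines, any non-integral scale factor forces crossings between pixels, and the rounded result drifts from the true contour.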
The output of characters (letterforms) as graphic images on an output graphic device requires the scaling of digital images. If the contour of the character is described at a sufficiently high resolution, one can consider the digital image as "analogue", that is to say, it has lost no substantial detail. Scaling this digital representation for the creation of strokes or dots for lower-resolution output requires some special consideration in order to create optimal character illusions. The following is a basic description of a system designed to scale typefaces over a variety of sizes and output resolutions while still maintaining optimal character design.