Image Processing Techniques: Point & Histogram Processing, Exams of Digital Image Processing

Point processing and mask filtering

Typology: Exams

2016/2017

Uploaded on 04/19/2017 by shabnam-khanna


Here s denotes the gray level of the processed image g(x,y) at any point (x,y), and r denotes the gray level of the input image f(x,y) at that point; the two are related by a transformation s = T(r).

Because enhancement at any point in an image depends only on the gray level at that point, techniques in this category are referred to as point processing. There are basically three kinds of functions in gray-level transformation: linear, logarithmic, and power-law.

2.1.2.1 Point Processing

2.1.2.1.1 Contrast stretching -

It produces an image of higher contrast than the original. The operation is performed by darkening the levels below a value m and brightening the levels above m in the original image.

In this technique the values of r below m are compressed by the transformation function into a narrow range of s toward black. The opposite effect takes place for the values of r above m.

2.1.2.1.2 Thresholding function -

It is a limiting case in which T(r) produces a two-level (binary) image: values below m are transformed to black and values above m are transformed to white.
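The thresholding function can be sketched per pixel as follows, assuming an 8-bit gray scale:

```python
def threshold(r, m, L=256):
    """Limiting case of contrast stretching: levels below m become
    black (0); levels at or above m become white (L - 1)."""
    return 0 if r < m else L - 1
```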

2.1.2.2 Basic Gray Level Transformation

These are the simplest image enhancement techniques.

2.1.2.2.1 Image Negative -

The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation. The expression of the transformation is

s = L - 1 - r

Reversing the intensity levels of an image in this manner produces the equivalent of a photographic negative. This type of processing is particularly suited for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.

2.1.2.2.2 Log transformations

The general form of the log transformation is

s = c log(1 + r)

where c is a constant and r ≥ 0. This transformation maps a narrow range of low gray-level values in the input image into a wider range of output gray levels; the opposite is true for higher values of input levels. We would use this transformation to expand the values of dark pixels in an image while compressing the higher-level values. The inverse log transformation has the opposite effect.

The log transformation function has an important characteristic: it compresses the dynamic range of images with large variations in pixel values, e.g., a Fourier spectrum.
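The transformation above can be sketched as follows; choosing c = (L-1)/log(L), so that the maximum input maps to the maximum output, is an assumption of this sketch, not stated in the text:

```python
import math

def log_transform(r, L=256):
    """Log transformation s = c*log(1 + r). The scaling constant c
    is chosen (an assumption) so that r = L-1 maps to s = L-1."""
    c = (L - 1) / math.log(L)
    return int(round(c * math.log(1 + r)))
```

Low input values are spread out (r = 10 already maps to 110) while high values are compressed, as the text describes.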

2.1.2.2.3 Power Law Transformation

Power-law transformations have the basic form

s = c r^γ

where c and γ (gamma) are positive constants. Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input gray levels. We may obtain a family of curves by varying the value of γ.
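A sketch of the power-law (gamma) transformation; normalizing intensities to [0, 1] before applying the exponent and rescaling afterwards is an implementation choice, not stated in the text:

```python
def power_law(r, gamma, c=1.0, L=256):
    """Power-law transformation s = c * r**gamma, applied to
    intensities normalized to [0, 1] and rescaled to [0, L-1]."""
    rn = r / (L - 1)                 # normalize input to [0, 1]
    s = c * (rn ** gamma)            # apply s = c * r^gamma
    return max(0, min(L - 1, int(round(s * (L - 1)))))
```

With γ < 1 dark values are expanded; with γ > 1 they are compressed, matching the family of curves described above.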

a) If r1 = r2 and s1 = s2, the transformation is a linear function that produces no change in gray levels.

b) If r1 = r2, s1 = 0, and s2 = L-1, the transformation becomes a thresholding function that creates a binary image.

c) Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray values of the output image, thus affecting its contrast.

Generally r1 ≤ r2 and s1 ≤ s2, so that the function is single-valued and monotonically increasing.

2.1.2.3.2 Gray Level Slicing

Highlighting a specific range of gray levels in an image is often desirable, for example when enhancing features such as masses of water in satellite images or flaws in X-ray images. There are two ways of doing this:

(1) One method is to display a high value for all gray levels in the range of interest and a low value for all other gray levels.

(2) The second method is to brighten the desired range of gray levels but preserve the background and gray-level tonalities of the image.
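Both slicing methods can be sketched per pixel as follows; the particular high and low display values are illustrative assumptions:

```python
def slice_discard_background(r, a, b, high=255, low=10):
    """Method (1): high value inside the range of interest [a, b],
    low value everywhere else."""
    return high if a <= r <= b else low

def slice_keep_background(r, a, b, high=255):
    """Method (2): brighten the range of interest [a, b] but leave
    the remaining gray-level tonalities untouched."""
    return high if a <= r <= b else r
```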

2.1.2.3.3 Bit Plane Slicing

Sometimes it is important to highlight the contribution made to the total image appearance by specific bits. Suppose that each pixel is represented by 8 bits. Imagine that an image is composed of eight 1-bit planes ranging from bit plane 0 for the least significant bit to bit plane 7 for the most significant bit. In terms of 8-bit bytes, plane 0 contains all the lowest order bits in the image and plane 7 contains all the high order bits.

The higher-order bits contain the majority of the visually significant data, while the lower-order bits contribute the more subtle details in the image.

Separating a digital image into its bit planes is useful for analyzing the relative importance of each bit of the image.

It helps in determining the adequacy of the number of bits used to quantize each pixel. It is also useful for image compression.
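Extracting one bit plane of an 8-bit pixel can be sketched as follows; scaling the result to 0/255 for display is an assumption of this sketch:

```python
def bit_plane(r, k):
    """Extract bit plane k of an 8-bit pixel value r
    (k = 0 is the least significant bit, k = 7 the most
    significant), scaled to 0/255 for display."""
    return 255 if (r >> k) & 1 else 0
```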

The transformation function should be single-valued, so that the inverse transformation exists, and monotonically increasing, which preserves the increasing order from black to white in the output image. The second condition guarantees that the output gray levels will be in the same range as the input levels. The gray levels of the image may be viewed as random variables in the interval [0, 1]. The most fundamental descriptor of a random variable is its probability density function (PDF). Let pr(r) and ps(s) denote the probability density functions of the random variables r and s respectively. A basic result from elementary probability theory states that if pr(r) and T(r) are known and T⁻¹(s) satisfies the single-valued, monotonically increasing condition, then the probability density function ps(s) of the transformed variable s is given by

ps(s) = pr(r) |dr/ds|

Thus the PDF of the transformed variable s is determined by the gray-level PDF of the input image and by the chosen transformation function. A transformation function of particular importance in image processing is

s = T(r) = ∫_0^r pr(w) dw

This is the cumulative distribution function of r.

Using this definition of T, we see that the derivative of s with respect to r is

ds/dr = pr(r)

Substituting this back into the expression for ps, we get

ps(s) = pr(r) |dr/ds| = pr(r) · 1/pr(r) = 1, for 0 ≤ s ≤ 1

An important point here is that T(r) depends on pr(r), but the resulting ps(s) is always uniform, independent of the form of pr(r). For discrete values we deal with probabilities and summations instead of probability density functions and integrals. The probability of occurrence of gray level r_k in an image is approximated by

pr(r_k) = n_k / N, k = 0, 1, …, L-1

where N is the total number of pixels in the image, n_k is the number of pixels that have gray level r_k, and L is the total number of possible gray levels in the image. The discrete form of the transformation function is given by

s_k = T(r_k) = Σ_{j=0..k} n_j / N = Σ_{j=0..k} pr(r_j), k = 0, 1, …, L-1

Thus a processed image is obtained by mapping each pixel with level r_k in the input image into a corresponding pixel with level s_k in the output image. A plot of pr(r_k) versus r_k is called a histogram. The transformation given by the above equation is called histogram equalization or linearization. Given an image, the process of histogram equalization consists simply of implementing the transformation function, which is based on information that can be extracted directly from the given image, without the need for further parameter specification.

Equalization automatically determines a transformation function that seeks to produce an output image with a uniform histogram. It is a good approach when automatic enhancement is needed.
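The discrete transformation can be sketched end to end for a flat list of pixel values; scaling s_k by (L-1) and rounding, so the result lands back on display levels, is an implementation choice of this sketch:

```python
def histogram_equalize(pixels, L=256):
    """Histogram equalization: each level is mapped through the
    cumulative normalized histogram of the input,
    s_k = (L-1) * sum_{j<=k} n_j / N, rounded to [0, L-1]."""
    N = len(pixels)
    hist = [0] * L
    for p in pixels:
        hist[p] += 1                  # n_k
    cdf, running = [0] * L, 0
    for k in range(L):
        running += hist[k]            # sum_{j<=k} n_j
        cdf[k] = running
    return [round((L - 1) * cdf[p] / N) for p in pixels]
```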

2.1.3.2 Histogram Matching (Specification)

In some cases it may be desirable to specify the shape of the histogram that we wish the processed image to have. Histogram equalization does not allow interactive image enhancement and generates only one result: an approximation to a uniform histogram. Sometimes we need to be able to specify particular histogram shapes capable of highlighting certain gray-level ranges. The method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification.

Algorithm:

  1. Compute sk = Pf (k), k = 0, …, L-1, the cumulative normalized histogram of f.
  2. Compute G(k), k = 0, …, L-1, the transformation function, from the given histogram hz.
  3. Compute G⁻¹(sk) for each k = 0, …, L-1 using an iterative method (iterate on z), or in effect directly compute G⁻¹(Pf (k)).
  4. Transform f using G⁻¹(Pf (k)).
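The steps above can be sketched as follows; representing G⁻¹ as a smallest-z search over the target CDF is one common way of realizing the iteration:

```python
def histogram_match(pixels, target_hist, L=256):
    """Histogram specification: map each level through the source
    CDF (s_k = Pf(k)), then through the inverse of the target CDF
    G, approximated as the smallest z with G(z) >= s_k."""
    def cdf(hist, total):
        out, run = [], 0
        for h in hist:
            run += h
            out.append(run / total)
        return out

    src_hist = [0] * L
    for p in pixels:
        src_hist[p] += 1
    s = cdf(src_hist, len(pixels))          # step 1: s_k = Pf(k)
    G = cdf(target_hist, sum(target_hist))  # step 2: G(k) from hz
    # step 3: G^{-1}(s_k), smallest z whose target CDF reaches s_k
    lut = [next(z for z, g in enumerate(G) if g >= s[k] - 1e-12)
           for k in range(L)]
    return [lut[p] for p in pixels]         # step 4: transform f
```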

2.1.4 Local Enhancement

In the earlier methods, pixels were modified by a transformation function based on the gray-level content of the entire image. This is not suitable when enhancement is to be done in some small areas of the image. The problem can be solved by local enhancement, where the transformation function is applied only over the neighborhood of the pixels in the region of interest. Define a square or rectangular neighborhood (mask) and move its center from pixel to pixel. For each neighborhood:

  1. Calculate histogram of the points in the neighborhood
  2. Obtain histogram equalization/specification function
  3. Map gray level of pixel centered in neighborhood
  4. The center of the neighborhood region is then moved to an adjacent pixel location and the procedure is repeated.
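A minimal sketch of this procedure using local histogram equalization on a small 2-D image stored as a list of rows; clamping the neighborhood at the borders is an implementation assumption:

```python
def local_equalize(img, radius=1, L=256):
    """Local histogram equalization: each pixel is remapped using
    only the histogram of its (2*radius+1)^2 neighborhood.
    Border pixels reuse the nearest valid pixel (clamping)."""
    H, W = len(img), len(img[0])
    out = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # step 1: gather the neighborhood values
            vals = [img[min(max(y + dy, 0), H - 1)]
                       [min(max(x + dx, 0), W - 1)]
                    for dy in range(-radius, radius + 1)
                    for dx in range(-radius, radius + 1)]
            # steps 2-3: equalize the center pixel via its rank
            # in the local histogram
            rank = sum(1 for v in vals if v <= img[y][x])
            out[y][x] = round((L - 1) * rank / len(vals))
    return out
```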

The use of image subtraction is seen in medical imaging, in an area named mask mode radiography. The mask h(x,y) is an X-ray image of a region of a patient's body, captured by an intensified TV camera located opposite the X-ray source. A contrast medium is injected into the patient's bloodstream, and then a series of images is taken of the same region as h(x,y). The mask is then subtracted from each incoming image. The subtraction yields the difference between f(x,y) and h(x,y), which appears as enhanced detail in the output image.

This procedure produces a movie showing how the contrast medium propagates through the various arteries of the area being viewed. Most images in use today are 8-bit images, so pixel values lie in the range 0 to 255. The values in a difference image can lie from -255 to 255, so some sort of scaling is needed to display the results. There are two methods to scale a difference image:

(i) Add 255 to every pixel and then divide by 2.

This guarantees that pixel values will be in the range 0 to 255, but not that they will cover the entire 8-bit range. The method is simple and fast to implement, but it will not necessarily utilize the entire gray-scale range of the display.

(ii) Another approach is

(a) Obtain the value of the minimum difference.
(b) Add the negative of the minimum value to the pixels in the difference image (this gives a modified image whose minimum value is 0).
(c) Perform scaling on the difference image by multiplying each pixel by the quantity 255/max.

This approach is more complicated and more difficult to implement.
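Both scaling methods can be sketched over a flat list of difference values:

```python
def scale_simple(diff):
    """Method (i): add 255 and divide by 2. Fast, always lands in
    0..255, but need not span the full 8-bit range."""
    return [(d + 255) // 2 for d in diff]

def scale_full_range(diff):
    """Method (ii): shift so the minimum becomes 0, then multiply
    by 255/max so the result spans the full 0..255 range."""
    lo = min(diff)
    shifted = [d - lo for d in diff]
    hi = max(shifted)
    if hi == 0:                      # constant difference image
        return [0] * len(diff)
    return [round(255 * d / hi) for d in shifted]
```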

Image subtraction is also used in segmentation applications.

2.1.5.2 Image Averaging

Consider a noisy image g(x,y) formed by the addition of noise n(x,y) to an original image f(x,y):

g(x,y) = f(x,y) + n(x,y)

Assume that at every pair of coordinates (x,y) the noise is uncorrelated and has zero average value. The objective of image averaging is to reduce the noise content by averaging a set of noisy images {gi(x,y)}. If an image ḡ(x,y) is formed by averaging K different noisy images,

ḡ(x,y) = (1/K) Σ_{i=1..K} gi(x,y)

then E{ḡ(x,y)} = f(x,y).
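The averaging operation can be sketched over K noisy realizations stored as flat pixel lists; with zero-mean uncorrelated noise, the noise variance at each pixel falls by a factor of K:

```python
def average_images(noisy_images):
    """Pixel-by-pixel average of K noisy images g_i:
    g_bar = (1/K) * sum_i g_i."""
    K = len(noisy_images)
    n = len(noisy_images[0])
    return [sum(img[j] for img in noisy_images) / K
            for j in range(n)]
```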

where a = (m-1)/2 and b = (n-1)/2. To generate a complete filtered image, this equation must be applied for x = 0, 1, 2, …, M-1 and y = 0, 1, 2, …, N-1; thus the mask processes all the pixels in the image. The process of linear filtering is similar to the frequency-domain concept called convolution. For this reason, linear spatial filtering is often referred to as convolving a mask with an image, and filter masks are sometimes called convolution masks.

R = w1 z1 + w2 z2 + … + wmn zmn

where the w's are mask coefficients, the z's are the values of the image gray levels corresponding to those coefficients, and mn is the total number of coefficients in the mask.

An important point in implementing neighborhood operations for spatial filtering is the issue of what happens when the center of the filter approaches the border of the image. There are several ways to handle this situation.

i) Limit the excursions of the center of the mask to be at a distance of no less than (n-1)/2 pixels from the border. The resulting filtered image will be smaller than the original, but all of its pixels will have been processed with the full mask.

ii) Filter all pixels only with the section of the mask that is fully contained in the image. It will create bands of pixels near the border that will be processed with a partial mask.

iii) Pad the image by adding rows and columns of 0's, or pad by replicating rows and columns. The padding is removed at the end of the process.
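The mask response R = w1 z1 + … + wmn zmn with zero padding (option iii) can be sketched as follows for a small 2-D image stored as a list of rows:

```python
def filter2d(img, mask):
    """Linear spatial filtering: at each pixel, sum the products
    of the mask coefficients w and the underlying gray levels z.
    Out-of-image neighbors are treated as 0 (zero padding)."""
    H, W = len(img), len(img[0])
    m, n = len(mask), len(mask[0])
    a, b = (m - 1) // 2, (n - 1) // 2
    out = [[0.0] * W for _ in range(H)]
    for x in range(H):
        for y in range(W):
            acc = 0.0
            for s in range(-a, a + 1):
                for t in range(-b, b + 1):
                    xi, yj = x + s, y + t
                    if 0 <= xi < H and 0 <= yj < W:  # zero padding
                        acc += mask[s + a][t + b] * img[xi][yj]
            out[x][y] = acc
    return out
```

Applying a 3×3 averaging mask (all coefficients 1/9) leaves the interior of a constant image unchanged but darkens the border, illustrating why the choice of padding matters.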

2.1.6.1 Smoothing Spatial Filters

These filters are used for blurring and noise reduction. Blurring is used in preprocessing steps, such as the removal of small details from an image prior to object extraction and the bridging of small gaps in lines or curves.

2.1.6.1.1 Smoothing Linear Filters

The output of a smoothing linear spatial filter is simply the average of the pixels contained in the neighborhood of the filter mask. These filters are also called averaging filters or low-pass filters.

The operation is performed by replacing the value of every pixel in the image by the average of the gray levels in the neighborhood defined by the filter mask. This process reduces sharp transitions in gray levels in the image.

The median ξ of a set of values is such that half the values in the set are less than or equal to ξ and half are greater than or equal to it. To perform median filtering at a point in an image, we first sort the values of the pixel in question and its neighbors, determine their median, and assign this value to that pixel.

We now introduce some additional order-statistics filters. Order-statistics filters are spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter; the response of the filter at any point is determined by the ranking result.

2.1.6.1.2.1 Median filter

The best-known order-statistics filter is the median filter, which, as its name implies, replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel:

The original value of the pixel is included in the computation of the median. Median filters are quite popular because, for certain types of random noise, they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size. Median filters are particularly effective in the presence of both bipolar and unipolar impulse noise. In fact, the median filter yields excellent results for images corrupted by this type of noise.
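A 1-D sketch of the median filter (the 2-D case sorts the mask neighborhood in the same way); truncating the window at the borders is an implementation assumption:

```python
def median_filter(signal, radius=1):
    """Median filtering: replace each sample by the median of its
    neighborhood (shown in 1-D for brevity; the window is simply
    truncated at the borders)."""
    n = len(signal)
    out = []
    for i in range(n):
        window = sorted(signal[max(0, i - radius):
                               min(n, i + radius + 1)])
        out.append(window[len(window) // 2])
    return out
```

An isolated impulse is removed outright, with far less blurring than averaging would cause: median_filter([10, 10, 200, 10, 10]) returns [10, 10, 10, 10, 10].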

2.1.6.1.2.2 Max and min filters

Although the median filter is by far the order-statistics filter most used in image processing, it is by no means the only one. The median represents the 50th percentile of a ranked set of numbers, but the reader will recall from basic statistics that ranking lends itself to many other possibilities. For example, using the 100th percentile results in the so-called max filter, given by:

This filter is useful for finding the brightest points in an image. Also, because pepper noise has very low values, it is reduced by this filter as a result of the max selection process in the subimage area S. The 0th percentile filter is the Min filter.
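Both percentile extremes can be sketched in 1-D:

```python
def max_filter(signal, radius=1):
    """100th-percentile (max) filter: finds bright points and
    reduces pepper (low-value) noise."""
    n = len(signal)
    return [max(signal[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

def min_filter(signal, radius=1):
    """0th-percentile (min) filter: the dual, reduces salt
    (high-value) noise."""
    n = len(signal)
    return [min(signal[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]
```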

2.1.6.2 Sharpening Spatial Filters

The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that has been blurred, either in error or as a natural effect of a particular method of image acquisition. The applications of image sharpening range from electronic printing and medical imaging to industrial inspection and autonomous guidance in military systems. Just as smoothing can be achieved by integration, sharpening can be achieved by spatial differentiation. The strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied. Thus image differentiation enhances edges and other discontinuities and deemphasizes areas with slowly varying gray levels.

Similarly, we can define a second-order derivative as the difference

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)

2.1.6.2.1 The LAPLACIAN

The second-order derivative is calculated using the Laplacian. It is the simplest isotropic derivative operator; isotropic filters are those whose response is independent of the direction of the discontinuities in the image to which the operator is applied. The Laplacian of a two-dimensional function f(x,y) is defined as

∇²f = ∂²f/∂x² + ∂²f/∂y²

The partial second-order derivative in the x-direction is

∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)

and similarly in the y-direction

∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)

The digital implementation of the two-dimensional Laplacian is obtained by summing the two components:

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

The equation can be represented using any one of the following masks

The Laplacian highlights gray-level discontinuities in an image and deemphasizes regions of slowly varying gray levels, which tends to produce images with a dark, featureless background. The background features can be recovered by adding the original image to the Laplacian image.
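The digital Laplacian for an interior pixel of an image stored as a list of rows can be sketched as:

```python
def laplacian(img, x, y):
    """Digital Laplacian at interior pixel (x, y):
    f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4*f(x,y)."""
    return (img[x + 1][y] + img[x - 1][y]
            + img[x][y + 1] + img[x][y - 1]
            - 4 * img[x][y])
```

The response is zero over constant regions and large in magnitude at discontinuities, e.g. -40 at the center of an isolated bright pixel of height 10.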


The strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied. Thus image differentiation enhances edges and other discontinuities and deemphasizes areas with slowly varying gray levels. The derivative of a digital function is defined in terms of differences. Any definition of a first derivative:

(1) Must be zero in flat segments (areas of constant gray-level values);
(2) Must be nonzero at the onset of a gray-level step or ramp;
(3) Must be nonzero along ramps.

Any definition of a second derivative:

(1) Must be zero in flat areas;
(2) Must be nonzero at the onset and end of a gray-level step or ramp;
(3) Must be zero along ramps of constant slope.

It is common practice to approximate the magnitude of the gradient by using absolute values instead of squares and square roots:

∇f ≈ |Gx| + |Gy|

Roberts cross-gradient operators

For the digital implementation of the gradient operators, let z5 denote f(x,y) at the center point of a 3×3 region, z1 denote f(x-1, y-1), and so on.
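Using that numbering, the Roberts cross-gradient with the absolute-value approximation can be sketched as follows (z5, z6, z8, z9 form the 2×2 lower-right corner of the 3×3 region):

```python
def roberts_gradient(img, x, y):
    """Roberts cross-gradient with the absolute-value
    approximation: grad ~= |z9 - z5| + |z8 - z6|, where
    z5 = f(x,y), z6 = f(x,y+1), z8 = f(x+1,y), z9 = f(x+1,y+1)."""
    z5, z6 = img[x][y], img[x][y + 1]
    z8, z9 = img[x + 1][y], img[x + 1][y + 1]
    return abs(z9 - z5) + abs(z8 - z6)
```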