What is Image Processing?
Image processing is a set of techniques for enhancing raw images received from cameras or sensors placed on satellites, space probes and aircraft, or pictures taken in normal day-to-day life, for various applications.
There are two methods of image processing:
- Analog Image Processing
- Digital Image Processing
Analog Image Processing – Analog image processing refers to the alteration of an image through electrical means. The most common example is the television image.
Digital Image Processing – In this case, digital computers are used to process the image. The image is first converted to digital form using a scanner-digitizer and then processed.
Purpose of Image processing
- Visualization – Observe objects that are not visible.
- Image sharpening and restoration – To create a better image.
- Image retrieval – Seek for the image of interest.
- Measurement of pattern – Measures various objects in an image.
- Image Recognition – Distinguish the objects in an image.
Image Processing is used in various applications such as:
- Remote Sensing
- Textiles
- Non-destructive Evaluation
- Forensic Studies
- Medical Imaging
- Material Science
- Military
- Film industry
- Document processing
- Graphic arts
- Printing Industry
The various Image Processing techniques are:
- Image representation
- Image preprocessing
- Image enhancement
- Image restoration
- Image analysis
- Image reconstruction
- Image data compression
Digital Image Processing
- Human vision – perceive and understand the world
- Computer vision – image understanding/interpretation, image processing
- 3D world -> sensors (TV cameras) -> 2D images
- Dimension reduction -> loss of information
- Low-level image processing
  - transforms one image into another
- High-level image understanding
  - knowledge based – imitates human cognition
  - makes decisions according to the information in the image
Low level digital image processing
- Low level computer vision ~ digital image processing
- Image Acquisition
- image captured by a sensor (TV camera) and digitized
- Preprocessing
- suppresses noise (image pre-processing)
- enhances some object features relevant to understanding the image
- edge extraction, smoothing, thresholding.
- Image segmentation
- separate objects from the image background
- colour segmentation, region growing, edge linking
- Object description and classification
- after segmentation
Fundamental Steps in DIP
Step 1: Image Acquisition
The image is captured by a sensor (e.g., a camera) and digitized, using an analogue-to-digital converter, if the output of the camera or sensor is not already in digital form.
Step 2: Image Enhancement
The process of manipulating an image so that the result is more suitable than the original for specific applications.
The idea behind enhancement techniques is to bring out details that are hidden, or simply to highlight certain features of interest in an image.
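A common enhancement technique is contrast stretching, which spreads a narrow range of gray levels across the full output range. The sketch below is a minimal pure-Python illustration; the function name and the sample pixel values are illustrative, not from the source, and the "image" is simply a flat list of 8-bit gray levels.

```python
def stretch_contrast(pixels, out_min=0, out_max=255):
    """Linearly map the input range [min, max] onto [out_min, out_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                        # flat image: nothing to stretch
        return list(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

dim_image = [50, 60, 70, 80, 90, 100]   # low-contrast gray levels
print(stretch_contrast(dim_image))      # [0, 51, 102, 153, 204, 255]
```

After stretching, the darkest pixel becomes 0 and the brightest becomes 255, making hidden detail easier to see.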
Step 3: Image Restoration
Image restoration also improves the appearance of an image, but restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a “good” enhancement result.
Step 4: Colour Image Processing
Use the colour of the image to extract features of interest in an image
Step 5: Wavelets
Wavelets are the foundation for representing images at various degrees of resolution, and are used for image data compression.
Step 6: Compression
Techniques for reducing the storage required to save an image or the bandwidth required to transmit it.
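One of the simplest lossless compression ideas is run-length encoding, which stores each run of identical gray levels as a (value, count) pair; it pays off on images with large uniform regions. This is a minimal sketch, not the method any particular format uses, and the function names are illustrative.

```python
def rle_encode(pixels):
    """Run-length encode a sequence of gray levels as (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([p, 1])         # start a new run
    return [(v, c) for v, c in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    return [v for v, c in runs for _ in range(c)]

row = [0, 0, 0, 255, 255, 0]            # one row of a black-and-white image
print(rle_encode(row))                  # [(0, 3), (255, 2), (0, 1)]
assert rle_decode(rle_encode(row)) == row   # lossless round trip
```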
Step 7: Morphological Processing
Tools for extracting image components that are useful in the representation and description of shape.
In this step, there would be a transition from processes that output images, to processes that output image attributes.
Step 8: Image Segmentation
Segmentation procedures partition an image into its constituent parts or objects.
Step 9: Representation and Description
Representation: Decide whether the data should be represented as a boundary or as a complete region. Representation almost always follows the output of a segmentation stage.
Boundary Representation: Focus on external shape characteristics, such as corners and inflections.
Region Representation: Focus on internal properties, such as texture or skeleton shape
Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing (mainly recognition)
Description: also called feature selection, deals with extracting attributes that yield some information of interest.
Recognition: the process that assigns a label to an object based on the information provided by its description.
Step 10: Knowledge Base
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database.
Components of an Image Processing System
1. Image Sensors
Two elements are required to acquire digital images. The first is the physical device that is sensitive to the energy radiated by the object we wish to image (Sensor). The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form.
2. Specialized Image Processing Hardware
Usually consists of the digitizer, mentioned before, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic and logical operations in parallel on entire images.
This type of hardware sometimes is called a front-end subsystem, and its most distinguishing characteristic is speed. In other words, this unit performs functions that require fast data throughputs that the typical main computer cannot handle.
3. Computer
The computer in an image processing system is a general purpose computer and can range from a PC to a supercomputer. In dedicated applications, sometimes specially designed computers are used to achieve a required level of performance.
4. Image Processing Software
Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules.
5. Mass Storage Capability
Mass storage capability is a must in image processing applications. An image of size 1024 × 1024 pixels, in which each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed.
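The one-megabyte figure can be verified with a quick back-of-the-envelope calculation, assuming 8 bits (one byte) per uncompressed grayscale pixel:

```python
width = height = 1024            # image dimensions in pixels
bits_per_pixel = 8               # 8-bit grayscale (assumption)
size_bytes = width * height * bits_per_pixel // 8
print(size_bytes // 1024**2, "MB")   # 1 MB
```

A colour image at 24 bits per pixel would need three times as much space, which is why compression matters for storage and transmission alike.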
Digital storage for image processing applications falls into three principal categories:
- Short-term storage for use during processing
- On-line storage for relatively fast recall
- Archival storage, characterized by infrequent access
One method of providing short-term storage is computer memory. Another is specialized boards, called frame buffers, that store one or more images and can be accessed rapidly; this method allows virtually instantaneous image zoom, as well as scroll (vertical shifts) and pan (horizontal shifts).
On-line storage generally takes the form of magnetic disks or optical-media storage. The key factor characterizing on-line storage is frequent access to the stored data.
Finally, archival storage is characterized by massive storage requirements but infrequent need for access.
6. Image Displays
The displays in use today are mainly color (preferably flat screen) TV monitors. Monitors are driven by the outputs of the image and graphics display cards that are an integral part of a computer system.
7. Hardcopy devices
Used for recording images, these include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.
8. Networking
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.
In dedicated networks, this typically is not a problem, but communications with remote sites via the internet are not always as efficient.
Histogram
- A histogram is a representation of the total number of pixels of an image at each gray level.
- Histogram information is used in a number of different processes, including thresholding.
- The shape of the histogram of an image gives us useful information about the possibility for contrast enhancement.
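Computing a histogram is just a matter of counting pixels per gray level. A minimal sketch, treating the image as a flat list of gray levels (the function name and sample values are illustrative):

```python
def gray_histogram(pixels, levels=256):
    """Count how many pixels fall at each gray level."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    return hist

image = [0, 0, 1, 3, 3, 3]              # tiny 2-bit "image"
print(gray_histogram(image, levels=4))  # [2, 1, 0, 3]
```

A histogram bunched at the low end indicates a dark, low-contrast image; one spread across all levels indicates good contrast.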
Histogram Equalisation
The objective is to map an input image to an output image such that its histogram is uniform after the mapping.
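The standard way to build this mapping is to pass each gray level through the normalized cumulative histogram, scaled by the top gray level. A minimal pure-Python sketch of that transfer function (names and sample values are illustrative):

```python
def equalize(pixels, levels=256):
    """Map gray levels through the normalized cumulative histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:                   # cumulative histogram
        running += count
        cdf.append(running)
    n = len(pixels)
    # Transfer function: s = round((L - 1) * cdf(r) / n)
    lut = [round((levels - 1) * c / n) for c in cdf]
    return [lut[p] for p in pixels]

flat = [7] * 6                           # a constant mid-gray image
print(equalize(flat, levels=8))          # maps to the top level: [7, 7, 7, 7, 7, 7]
```

In practice the mapping stretches heavily populated gray-level ranges apart and compresses sparse ones, flattening the histogram.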
Thresholding
- Image thresholding is a simple, yet effective, way of partitioning an image into a foreground and background.
- This image analysis technique is a type of image segmentation that isolates objects by converting grayscale images into binary images.
- Image thresholding is most effective in images with high levels of contrast.
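The core operation is a single comparison per pixel: anything above the threshold becomes foreground, everything else background. A minimal sketch (threshold value and sample pixels are illustrative):

```python
def threshold(pixels, t):
    """Binarize: foreground (1) where the gray level exceeds t, else background (0)."""
    return [1 if p > t else 0 for p in pixels]

row = [12, 200, 45, 230, 90]     # mixed dark and bright pixels
print(threshold(row, 128))       # [0, 1, 0, 1, 0]
```

Choosing `t` well is the hard part; a common heuristic is to place it in the valley between the two peaks of a bimodal histogram, which is one reason histogram information feeds into thresholding.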
Edge Detection
- Edges are significant local changes of intensity in an image.
- Edges typically occur on the boundary between two different regions in an image
Goal of edge detection
- Produce a line drawing of a scene from an image of that scene.
- Important features can be extracted from the edges of an image (e.g., corners, lines, curves).
- These features are used by higher level computer vision algorithms (e.g., recognition).
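Since edges are local intensity changes, the simplest detector approximates the gradient with first differences between neighbouring pixels. The sketch below is a rough illustration (not a full operator such as Sobel or Canny); the image is a small 2D list of gray levels and the function name is illustrative.

```python
def edge_strength(img):
    """Approximate gradient magnitude |dI/dx| + |dI/dy| with first differences."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]   # horizontal intensity change
            gy = img[y + 1][x] - img[y][x]   # vertical intensity change
            out[y][x] = abs(gx) + abs(gy)
    return out

# A dark region (0) beside a bright region (255): the edge sits on the boundary
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 0, 255, 255]]
for row in edge_strength(img):
    print(row)                # the nonzero column marks the vertical edge
```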
Image Filtering
- Image processing converts an input image into an enhanced image from which information about the image can be retrieved.
- To enhance images any unwanted information or distortions called noise has to be removed.
- Filtering is the process that removes noise from an image; it may also lighten darker regions to enhance image quality, or suppress unwanted information or regions.
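A simple noise-suppressing filter is the mean (box) filter, which replaces each pixel by the average of its neighbourhood. The sketch below works on one image row for brevity; the names and sample values are illustrative.

```python
def mean_filter(pixels, k=3):
    """Replace each pixel by the average of its k-wide neighbourhood (box filter)."""
    half = k // 2
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - half): i + half + 1]   # clipped at the borders
        out.append(round(sum(window) / len(window)))
    return out

noisy = [10, 10, 90, 10, 10]     # a single noisy spike at the centre
print(mean_filter(noisy))        # [10, 37, 37, 37, 10] - the spike is smeared out
```

Averaging suppresses isolated noise but also blurs edges, which is why median or edge-preserving filters are often preferred when sharp boundaries matter.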