Showing posts with label image processing.

Tuesday 31 October 2023

Image processing

Image processing refers to the manipulation of digital images to enhance or extract information from them. It is a field of computer science and engineering that has applications in various domains, including photography, medical imaging, remote sensing, computer vision, and more. 

Image processing techniques can be broadly categorized into two main types:

  1. Digital Image Enhancement: This involves improving the quality of an image for human perception or for further processing.
  2. Digital Image Analysis: This involves extracting information or features from an image for machine interpretation. Common techniques in each category include:

Digital Image Enhancement:

  • Brightness and Contrast Adjustment: Changing the overall intensity or contrast of an image to make it more visually appealing or to reveal hidden details. 
  • Noise Reduction: Reducing unwanted random variations in pixel values caused by factors like sensor noise.
  • Image Sharpening: Enhancing the edges and fine details in an image to make it appear clearer.
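
As a rough illustration of the three enhancement steps above, the sketch below uses OpenCV's Python bindings on a synthetic test image; the parameter values (alpha, beta, kernel size) are arbitrary choices for the example, not recommendations.

    import cv2
    import numpy as np

    # Synthetic noisy test image: a horizontal grayscale gradient plus Gaussian noise.
    gradient = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (256, 1))
    noisy = np.clip(gradient + np.random.normal(0, 15, gradient.shape), 0, 255).astype(np.uint8)

    # Brightness/contrast adjustment: out = alpha * pixel + beta, clipped to [0, 255].
    adjusted = cv2.convertScaleAbs(noisy, alpha=1.2, beta=20)

    # Noise reduction with a 5x5 Gaussian blur.
    denoised = cv2.GaussianBlur(noisy, (5, 5), 0)

    # Sharpening via unsharp masking: boost the difference between the image and its blurred copy.
    sharpened = cv2.addWeighted(noisy, 1.5, denoised, -0.5, 0)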

Digital Image Analysis:

  • Segmentation: Dividing an image into meaningful regions, such as identifying objects in a scene.
  • Object Detection: Locating specific objects within an image.
  • Pattern Recognition: Identifying patterns or shapes within an image, which can be used for tasks like character recognition or face detection.
  • Image Classification: Categorizing images into predefined classes or categories.
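
As a small example of the analysis side, the sketch below segments a synthetic image with Otsu thresholding and counts the resulting regions; a real application would load an actual image instead of generating one.

    import cv2
    import numpy as np

    # Synthetic image: a bright square (the "object") on a darker, noisy background.
    img = np.full((200, 200), 60, np.uint8)
    img[60:140, 60:140] = 190
    img = np.clip(img + np.random.normal(0, 10, img.shape), 0, 255).astype(np.uint8)

    # Otsu's method picks a threshold that separates the two intensity populations.
    thresh, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Connected-component labelling assigns one label per segmented region.
    num_labels, labels = cv2.connectedComponents(mask)
    print(f"Otsu threshold: {thresh:.0f}, foreground regions: {num_labels - 1}")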

There are various software tools and programming libraries available for image processing. Popular libraries include OpenCV, scikit-image (Python), and MATLAB's Image Processing Toolbox. These provide a wide range of functions and algorithms for tasks such as filtering, edge detection, and image transformation.
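
For instance, scikit-image exposes many of these operations as plain functions; the snippet below, which runs a Sobel edge detector on the library's bundled camera test image, is just one possible starting point.

    from skimage import data, filters

    image = data.camera()          # built-in 512x512 grayscale test image
    edges = filters.sobel(image)   # Sobel edge-magnitude map (floating point)
    print(edges.shape, float(edges.max()))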

Image processing is widely used in several applications, such as medical image analysis for diagnosing diseases, satellite imagery for remote sensing, facial recognition, autonomous vehicles, and many others. It plays a crucial role in extracting valuable information from images and making them more useful for various purposes.

More broadly, image processing sits at the intersection of computer science and digital signal processing, and it encompasses a wide variety of techniques for altering and interpreting visual data, from photographs to other kinds of imagery.

Image processing can be used for a wide range of applications, including:

  1. Image Enhancement: This involves improving the quality of an image, making it more visually appealing or easier to interpret. Techniques such as contrast adjustment, brightness correction, and noise reduction fall under this category.
  2. Image Restoration: Image restoration aims to recover the original image from a degraded or corrupted version. This is particularly useful in applications such as medical imaging and historical photo restoration.
  3. Image Compression: Image compression techniques reduce the size of an image while trying to maintain acceptable visual quality. Common examples are the lossy JPEG format and the lossless PNG format.
  4. Image Filtering: Filtering techniques are used to highlight or extract specific features from an image. Examples include edge detection, blurring, and sharpening (a sketch after this list illustrates this, together with items 7 and 8).
  5. Object Detection and Recognition: Image processing is crucial in computer vision applications. It's used to detect and recognize objects within images or video streams. Techniques like Haar cascades and convolutional neural networks (CNNs) are often employed.
  6. Image Segmentation: Image segmentation involves dividing an image into distinct regions based on some criteria, such as color, intensity, or texture. This is useful in medical imaging, robotics, and object tracking.
  7. Morphological Operations: Morphological operations work with the shape and structure of objects in an image. They are commonly used for tasks like noise reduction, object detection, and text extraction.
  8. Geometric Transformation: Geometric transformations include tasks like rotation, scaling, and image warping. These are used to correct distortions or modify the spatial orientation of objects in images.
  9. Pattern Recognition: Image processing is used to extract meaningful patterns or information from images. This can include facial recognition, fingerprint analysis, and document text extraction.
  10. Color Image Processing: This branch of image processing focuses on manipulating color images. Techniques include color correction, color space conversions, and color-based object tracking.
  11. Medical Image Processing: Image processing is widely used in the medical field for tasks like diagnosis, treatment planning, and monitoring. Examples include CT scans, MRI images, and X-rays.
  12. Remote Sensing: Image processing is crucial in the analysis of satellite and aerial imagery for applications like land cover classification, weather forecasting, and environmental monitoring.
  13. Entertainment and Art: Image processing is employed in various creative applications, such as video game graphics, special effects in movies, and digital art creation.
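
To make items 4, 7, and 8 above a little more concrete, here is a rough OpenCV sketch; the synthetic test image, kernel size, and rotation angle are arbitrary example values.

    import cv2
    import numpy as np

    # Synthetic binary test image: a white rectangle with a few specks of salt noise.
    img = np.zeros((200, 200), np.uint8)
    cv2.rectangle(img, (50, 50), (150, 150), 255, -1)
    img[np.random.rand(200, 200) > 0.995] = 255

    # Item 4 - filtering: Canny edge detection outlines the rectangle.
    edges = cv2.Canny(img, 100, 200)

    # Item 7 - morphological opening (erosion followed by dilation) removes the salt noise.
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)

    # Item 8 - geometric transformation: rotate the image 30 degrees about its centre.
    M = cv2.getRotationMatrix2D((100, 100), 30, 1.0)
    rotated = cv2.warpAffine(img, M, (200, 200))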

Image processing can be performed using both traditional techniques and modern machine learning methods. The choice of method depends on the specific application and the nature of the image data being processed.

Monday 30 October 2023

Computer vision

Computer vision is a field of artificial intelligence (AI) and computer science that focuses on enabling computers to interpret and understand visual information from the world, typically in the form of images and videos. It seeks to replicate and improve upon the human visual system's ability to perceive and comprehend the surrounding environment.

Key components and concepts of computer vision include:

  1. Image Processing: This involves basic operations like filtering, edge detection, and image enhancement to preprocess and improve the quality of images before further analysis.
  2. Object Detection: Object detection is the process of identifying and locating specific objects within an image or video stream. Techniques range from classical Haar cascades (the Viola-Jones detector) to deep learning-based methods such as YOLO (You Only Look Once) and Faster R-CNN; a minimal detection sketch follows this list.
  3. Image Classification: Image classification involves assigning a label or category to an image based on its content. Deep learning models, especially convolutional neural networks (CNNs), have significantly improved image classification accuracy.
  4. Image Segmentation: Image segmentation involves dividing an image into meaningful regions or segments. It's particularly useful for identifying object boundaries within an image. Common techniques include semantic segmentation and instance segmentation.
  5. Object Recognition: Object recognition goes beyond detection by not only identifying objects but also understanding their context and attributes. This may include identifying object categories and their relationships within a scene.
  6. Feature Extraction: Feature extraction is the process of extracting relevant information or features from images to be used for further analysis. Features can include edges, corners, textures, or higher-level descriptors.
  7. 3D Vision: This aspect of computer vision deals with understanding three-dimensional space and depth perception from two-dimensional images, often using stereo vision or structured light techniques.
  8. Motion Analysis: Computer vision can be used to track the motion of objects over time, allowing for applications like video surveillance and human-computer interaction.
  9. Face Recognition: Face recognition is a specialized area of computer vision that involves identifying and verifying individuals based on their facial features. It has applications in security, authentication, and personalization.
  10. Image Generation: Some computer vision models are capable of generating images, either by combining existing images or creating entirely new ones. This can be used for tasks like image synthesis and style transfer.
  11. Robotics and Autonomous Systems: Computer vision is a crucial component in robotics and autonomous systems, enabling robots and vehicles to perceive and navigate their environments.
  12. Medical Imaging: Computer vision plays a vital role in medical fields, helping with tasks such as diagnosing diseases from medical images like X-rays, CT scans, and MRIs.
  13. Augmented and Virtual Reality: Computer vision is fundamental to creating immersive experiences in augmented reality (AR) and virtual reality (VR) applications, where the real world is combined with digital information.
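
As the minimal sketch promised in item 2, the snippet below runs the Haar-cascade face detector that ships with the opencv-python package; the file name "group_photo.jpg" is only a placeholder for a real input image.

    import cv2

    # Haar-cascade face detector bundled with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("group_photo.jpg")           # placeholder path, replace with a real image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Returns one (x, y, w, h) rectangle per detected face.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"Detected {len(faces)} face(s)")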

Computer vision relies heavily on machine learning and deep learning techniques, with the use of neural networks, especially convolutional neural networks (CNNs), being prevalent in recent advances. It has numerous real-world applications, including in industries such as healthcare, automotive, manufacturing, retail, and entertainment.
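
As a minimal, illustrative sketch of the kind of CNN mentioned above (written with PyTorch purely as an example framework), the model below maps a 32x32 RGB image to ten class scores; in practice it would still need to be trained on labelled data.

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes a 32x32 input

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = TinyCNN()
    dummy = torch.randn(1, 3, 32, 32)   # one random "image" standing in for real data
    print(model(dummy).shape)           # torch.Size([1, 10])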

Sunday 22 May 2016

Erosion

Basic Morphological Operation: Erosion
Erosion is the opposite of dilation (Putra, 2010). Where dilation produces a larger (expanded) object, erosion produces a smaller (shrunken) one. The erosion of A by the structuring element B, E(A,B) = A ⊖ B, is given by equation (3.13): A ⊖ B = { z | (B)z ⊆ A }, the set of positions z at which B, translated by z, fits entirely inside A.
The erosion process is carried out as follows:
1.    Compare each pixel of the input image with the centre of the structuring element (SE) by overlaying the SE on the image so that the centre of the SE coincides with the pixel being processed.
2.    If every pixel covered by the SE matches the object (foreground) value of the image, the corresponding output pixel is set to the foreground value; otherwise it is set to the background value.
3.    The same procedure is repeated by translating the SE pixel by pixel across the input image.
Example of erosion:
A         = {(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)}
B         = {(0,0),(0,1),(1,0)}
A ⊖ B  = {(1,1),(1,2),(2,1),(2,2)}
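
This result can be checked with a few lines of Python that apply the set definition of erosion directly (a small sketch, not taken from the references):

    # Erosion of A by B: keep z when B translated by z fits entirely inside A.
    A = {(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)}
    B = {(0,0),(0,1),(1,0)}

    def erode(A, B):
        # Since the origin (0,0) is in B, only points of A can survive erosion,
        # so it is enough to test each z in A.
        return {z for z in A if all((z[0] + bx, z[1] + by) in A for (bx, by) in B)}

    print(sorted(erode(A, B)))   # [(1, 1), (1, 2), (2, 1), (2, 2)]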

Table: Erosion of A by B

Anchor position (x,y) ∈ A     B translated by (x,y)
(1,1)                         {(1,1),(1,2),(2,1)}
(1,2)                         {(1,2),(1,3),(2,2)}
(1,3)                         {(1,3),(1,4),(2,3)}
(2,1)                         {(2,1),(2,2),(3,1)}
(2,2)                         {(2,2),(2,3),(3,2)}
(2,3)                         {(2,3),(2,4),(3,3)}
(3,1)                         {(3,1),(3,2),(4,1)}
(3,2)                         {(3,2),(3,3),(4,2)}
(3,3)                         {(3,3),(3,4),(4,3)}

Only the anchors (1,1), (1,2), (2,1), and (2,2) give translations of B that lie entirely inside A, so these four points form A ⊖ B.


Figure: Erosion of A by B

References:
  1. Gonzalez, R.C. and Woods, R.E., 2008, Digital Image Processing, Addison-Wesley Publishing Company, USA.
  2. Pujiastuti, A., 2016, Sistem Perhitungan Lama Penyinaran Matahari dengan Metode Otsu Threshold (Studi Kasus: St. Klimatologi Barongan), Master's thesis, Program Studi S2 Ilmu Komputer, Fakultas Matematika dan Ilmu Pengetahuan Alam, Universitas Gadjah Mada, Yogyakarta.
  3. Putra, D., 2010, Pengolahan Citra Digital, Andi, Yogyakarta.