Abstract

The image object labeling problem plays an important role in the computer vision and artificial intelligence fields [5]. Several parallel processing approaches have been proposed to obtain solutions at the various stages of this problem [4]. This paper describes a new parallel object labeling algorithm for multi-processor systems with a hypercube interconnection network; the system may be either SIMD or MIMD. The computational time complexity is discussed and compared with that of sequential methods and other parallel methods.

The labeling problem can be briefly described as follows. Let A = {a1, a2, …, an} be a set of objects detected in an image, and let L = {l1, l2, …, lm} be a set of labels, i.e., names or identifications of objects. Li is the subset of L containing all possible labels for the object ai. Consider an element L of L1 × L2 × … × Ln in which the labelings of all neighboring objects are compatible. Such an L is called an "unambiguous labeling", which is a logical explanation of the image. The objective of the labeling problem is to find all such L's.

A "consistent labeling" is a collection of subsets L'i, where L'i is a subset of Li for 1 ≤ i ≤ n, such that for i ≠ j, for each label li in L'i there exists a label lj in L'j that is compatible with li. Finding the greatest consistent labeling is a necessary step toward deriving unambiguous labelings. The computation time needed to detect an unambiguous labeling is determined by the size of the consistent labeling found in this first step.

To find consistent labelings, we first define a compatibility matrix C = {Cpq}, in which each row and each column corresponds to a possible label for an object; the size of C is Σ|Li| × Σ|Li|. Cpq = 1 if the pair of labelings is compatible, and Cpq = 0 if it is not.

In regular relaxation approaches, the elements of C corresponding to objects that are not neighbors of each other are given the value "1", i.e., left unconstrained. The submatrices corresponding to neighboring objects are examined and used to reduce the size of the matrix C.

Our approach requires several steps to find the consistent labeling of smallest size. In the first step, the submatrices corresponding to neighboring objects are examined; rows or columns that are not compatible are crossed out. All pairs of objects are then classified according to the length of the shortest path connecting them in the neighborhood graph; immediately neighboring pairs have path length 1.

The second step processes the object pairs of path length 2. A labeling of an object pair of path length 2 is considered compatible if the labelings along all paths of length 2 between the two objects are compatible. With this definition, the submatrices of C corresponding to object pairs of path length 2 can be used to reduce the size of C significantly, in the same way that regular relaxation approaches use the submatrices corresponding to neighboring objects. Further processing of C is greatly simplified after this second step.

Since each step in our approach involves a large number of matrix operations, a multi-processor machine with a hypercube interconnection network can execute the computation and communication of our algorithm effectively. A Gray sequence is used on the hypercube network to eliminate unnecessary data movements; a similar technique has been used for image correlation computation [1,2,3]. A simulation study of the above labeling algorithm will be reported in a separate paper.
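
The following NumPy sketch is not taken from the paper; it illustrates one possible sequential reading of the two reduction steps, assuming the compatibility matrix is stored as one boolean block per object pair. All identifiers (prune_consistent, path2_block, alive, etc.) are hypothetical, and the parallel hypercube execution described in the abstract is not modeled here.

```python
import numpy as np

def prune_consistent(C, neighbors, label_counts):
    """Step 1 (sketch): cross out labels with no compatible partner at a neighbor.

    C[(i, j)] is a |Li| x |Lj| boolean block of the compatibility matrix;
    alive[i][p] marks whether label p is still a candidate for object a_i.
    Repeats until no label can be removed, so the surviving label sets
    form a consistent labeling in the sense defined above.
    """
    alive = [np.ones(n, dtype=bool) for n in label_counts]
    changed = True
    while changed:
        changed = False
        for (i, j) in neighbors:
            block = C[(i, j)]
            # A label of a_i survives only if it is compatible with at least
            # one surviving label of the neighboring object a_j.
            new_i = alive[i] & block[:, alive[j]].any(axis=1)
            if not np.array_equal(new_i, alive[i]):
                alive[i], changed = new_i, True
            # Symmetric check for the labels of a_j.
            new_j = alive[j] & block[alive[i], :].any(axis=0)
            if not np.array_equal(new_j, alive[j]):
                alive[j], changed = new_j, True
    return alive

def path2_block(C, i, k, intermediates, alive):
    """Step 2 (sketch): compatibility block for a pair (a_i, a_k) at path length 2.

    A label pair (p, r) is kept only if, for every intermediate object a_j
    on a length-2 path, some surviving label of a_j is compatible with both
    p and r (a boolean product of the two neighboring blocks).
    """
    block = None
    for j in intermediates:
        left = C[(i, j)] & alive[j]                    # mask pruned labels of a_j
        via_j = (left.astype(int) @ C[(j, k)].astype(int)) > 0
        block = via_j if block is None else (block & via_j)
    return block
```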
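The Gray sequence mentioned above can be produced with the standard binary-reflected construction; the snippet below only illustrates its defining property (consecutive codes differ in exactly one bit, i.e., they index neighboring hypercube nodes) and is not the paper's processor-mapping scheme.

```python
def gray_sequence(dim):
    """Binary-reflected Gray code of length 2**dim.

    Walking the sequence visits hypercube nodes neighbor-to-neighbor,
    which is the property used to avoid unnecessary data movements.
    """
    return [g ^ (g >> 1) for g in range(2 ** dim)]

# Example: gray_sequence(3) -> [0, 1, 3, 2, 6, 7, 5, 4]
```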
