Abstract
This paper proposes an efficient, massively parallel GPU implementation of the
edge-directed adaptive intra-field deinterlacing method, which interpolates each
missing pixel using the deinterlaced covariance estimated from the interlaced
covariance through the geometric duality between the two. Although the
edge-directed adaptive method achieves better visual quality than conventional
intra-field deinterlacing methods, its heavy computational cost is usually the
bottleneck. To address this problem, Graphics Processing Units (GPUs), rather
than traditional CPU architectures, are better candidates for accelerating the
computation. The proposed method interpolates more than one missing pixel at a
time, which yields a significant speedup over interpolating a single missing
pixel at a time. Experimental results show a speedup of 94.6$\times$, with the
I/O transfer time taken into account, over the original single-threaded C CPU
code compiled with the -O2 optimization.
© 2014 IEEE
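
For context, a minimal sketch of the covariance-based estimate underlying the edge-directed interpolation, assuming the standard four-neighbour formulation; the paper's exact window size and neighbour layout may differ:

\[
  \hat{x}(i,j) \;=\; \sum_{k=1}^{4} \alpha_k\, x_k ,
  \qquad
  (\alpha_1,\dots,\alpha_4)^{\top} \;=\; R^{-1}\,\mathbf{r},
\]

where $x_1,\dots,x_4$ are the nearest existing pixels in the lines above and below the missing pixel, $R$ is the $4\times 4$ local covariance of those neighbours, and $\mathbf{r}$ is the cross-covariance between the missing pixel and its neighbours. Both are estimated from the available interlaced lines over a local window, invoking the geometric duality between the interlaced and deinterlaced covariances.

A hypothetical CUDA sketch of the thread mapping in which each thread fills several missing pixels at a time is given below. All names (deinterlace_kernel, interpolate_pixel, PIXELS_PER_THREAD) and the equal-weight placeholder are assumptions for illustration, not the paper's implementation, which solves for the covariance-based weights at each pixel.

#include <cuda_runtime.h>

#define PIXELS_PER_THREAD 2  /* assumed grouping; the paper's value may differ */

/* Placeholder for the edge-directed estimate. The full method accumulates the
   4x4 interlaced covariance R and cross-covariance r over a local window and
   applies the weights alpha = R^{-1} r to the four neighbours (geometric
   duality); only the neighbour access pattern is shown here. */
__device__ float interpolate_pixel(const float* frame, int width, int x, int y)
{
    float nw = frame[(y - 1) * width + max(x - 1, 0)];
    float ne = frame[(y - 1) * width + min(x + 1, width - 1)];
    float sw = frame[(y + 1) * width + max(x - 1, 0)];
    float se = frame[(y + 1) * width + min(x + 1, width - 1)];
    /* Equal weights are a stand-in for the covariance-based weights. */
    return 0.25f * (nw + ne + sw + se);
}

/* frame holds the existing (even) lines of the field at full frame resolution;
   each thread writes PIXELS_PER_THREAD consecutive pixels of a missing (odd) line. */
__global__ void deinterlace_kernel(float* frame, int width, int height)
{
    int x0 = (blockIdx.x * blockDim.x + threadIdx.x) * PIXELS_PER_THREAD;
    int y  = 2 * (blockIdx.y * blockDim.y + threadIdx.y) + 1;  /* odd rows are missing */
    if (y >= height - 1) return;

    for (int k = 0; k < PIXELS_PER_THREAD; ++k) {
        int x = x0 + k;
        if (x >= width) break;
        frame[y * width + x] = interpolate_pixel(frame, width, x, y);
    }
}

A launch such as dim3 block(16, 16) with a grid covering width / PIXELS_PER_THREAD columns and height / 2 missing rows would invoke the kernel once per field; grouping several missing pixels per thread lets neighbouring loads and covariance windows be reused, which is the source of the speedup over a one-pixel-per-thread mapping.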