TY - JOUR
T1 - Benefits of using parallelized non-progressive network coding
AU - Kim, Minwoo
AU - Park, Karam
AU - Ro, Won W.
PY - 2013/1
Y1 - 2013/1
N2 - Network coding improves communication rates and saves bandwidth by performing a special coding operation at sending or intermediate nodes. However, encoding and decoding at these nodes introduce computational overhead on large input data, which causes coding delays. Previous work therefore proposed progressive methods, which hide the decoding delay within the packet-waiting time. Network speeds have since increased greatly, and progressive schemes are no longer the most efficient decoding approach. We therefore present a non-progressive decoding algorithm that can be parallelized more aggressively than progressive network coding and, by exploiting multi-core processors, offsets the hidden-decoding-time advantage of progressive methods. Moreover, the block (tiling) algorithm used in non-progressive decoding reduces cache misses. In our experiments, the proposed scheme, which relies on matrix inversion and multiplication, improves execution time by 46.0% and reduces last-level cache misses by 89.2% compared with the progressive method on multi-core systems.
KW - Matrix inversion
KW - Matrix multiplication
KW - Network coding
KW - Non-progressive decoder
KW - Parallel algorithm
KW - Tiling algorithm
UR - http://www.scopus.com/inward/record.url?scp=84870652815&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84870652815&partnerID=8YFLogxK
U2 - 10.1016/j.jnca.2012.05.014
DO - 10.1016/j.jnca.2012.05.014
M3 - Article
AN - SCOPUS:84870652815
SN - 1084-8045
VL - 36
SP - 293
EP - 305
JO - Journal of Network and Computer Applications
JF - Journal of Network and Computer Applications
IS - 1
ER -