Optics Express

Editor: Michael Duncan
Vol. 12, Iss. 20 — Oct. 4, 2004, pp. 4881–4895
Network layer packet redundancy in optical packet switched networks

Harald Øverby


http://dx.doi.org/10.1364/OPEX.12.004881



Abstract

A crucial issue in optical packet switched (OPS) networks is packet loss at the network layer caused by contentions. This paper presents the network layer packet redundancy scheme (NLPRS), a novel approach to reduce the end-to-end data packet loss rate in OPS networks. By introducing redundancy packets in the OPS network, the NLPRS enables a possible reconstruction of data packets that are lost due to contentions. An analytical model of the NLPRS based on reduced load Erlang fixed-point analysis is presented. Simulations of an OPS ring network show that the NLPRS is particularly efficient in small networks operating at low system loads. Results also show how the arrival process, packet length distribution, network size and redundancy packet scheduling mechanism influence the NLPRS performance.

© 2004 Optical Society of America

1. Introduction

Wavelength Division Multiplexing (WDM) has emerged as the most promising technology to increase available capacity in future core networks [1–5]. Today, WDM is utilized in a point-to-point architecture, where electronic switches terminate optical fibres. However, future WDM core networks are predicted to evolve into all-optical architectures (where optical switches replace electronic switches), such as Wavelength Routed (WR) networks [1], Optical Packet Switched (OPS) networks [2,3] and Optical Burst Switched networks [4,5]. Among these all-optical network architectures, OPS is viewed as the most promising (at least in the long run) due to its good resource utilization and adaptation to changes in the traffic pattern [2].

The main aim of this paper is to investigate the performance gain that can be achieved from using the NLPRS under various network scenarios. After explaining the NLPRS in detail in Section 2, we propose an analytical model of the NLPRS based on reduced load Erlang fixed-point analysis in Section 3. A simulation model of the NLPRS is presented in Section 4, while analytical and simulation results are reported in Section 5. Finally, Section 6 concludes the paper.

2. The network layer packet redundancy scheme (NLPRS)

The NLPRS proposed in this paper is identical to the FSRaid application in how it generates redundancy packets from a set of data packets. However, unlike the FSRaid application, which considers file transfers between two Internet hosts at the application layer, the NLPRS considers packet transfers between an ingress router and an egress router in an OPS core network. Also, the FSRaid application’s primary goal is to combat packet losses due to erroneous files on bad Internet servers, while the NLPRS’s primary goal is to combat packet losses due to contentions in OPS networks.

With the NLPRS, data packets with different destinations arrive from an access network at an electronic ingress router in the considered OPS network, as seen in Fig. 1. Data packets may, e.g., be independent IP packets arriving from end-hosts in the Internet. Upon arrival at the ingress router, packets are grouped according to their access network destination. Denote a packet set as the ms subsequent data packets that arrive at an ingress router with the same access network destination. Data packets that arrive at the ingress router are immediately transmitted to the core network, but copies of the data packets are temporarily stored in the ingress router. When all packets in a set have arrived (i.e. a total of ms data packets), rs redundancy packets are created from the ms data packet copies stored in the ingress router using the FSRaid specification. The redundancy packets must all have the same size as the largest data packet in the packet set [12]. After being created, the redundancy packets are transmitted to the same egress router as the data packets in the considered set, and the data packets stored in the ingress router are deleted.

Due to contentions, both data and redundancy packets may be dropped in the OPS network. Consider a certain packet set and let mr (mr≤ms) and rr (rr≤rs) be the number of data and redundancy packets successfully received at the egress node, respectively. This means that ms-mr data packets and rs-rr redundancy packets are lost due to contentions. If mr+rr≥ms, lost data packets can be reconstructed from successful data and redundancy packet arrivals according to the FSRaid specification. However, if mr+rr<ms, reconstruction is not possible, which means that the number of lost data packets equals ms-mr [12]. Figure 1 shows the case with ms=4, rs=1, mr=3 and rr=1. Here, reconstruction is possible since mr+rr=3+1=4≥ms=4, which ultimately results in no data packet loss regarding this packet set when the NLPRS is utilized. Note that the NLPRS is not a contention resolution scheme, which means that it can be combined with any contention resolution architecture [6].
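The reconstruction rule above (recovery is possible iff mr+rr≥ms) can be sketched in a toy example. FSRaid-style erasure coding is Reed-Solomon based; for the special case rs=1 it reduces to a single XOR parity packet, which is what the illustrative sketch below implements (function names are ours, not from the FSRaid specification):

```python
def can_reconstruct(m_s, m_r, r_r):
    """A packet set is recoverable iff received packets (data + redundancy)
    number at least as many as the original data packets: m_r + r_r >= m_s."""
    return m_r + r_r >= m_s


def make_parity(data_packets):
    """Build one parity packet, sized to the largest data packet in the set."""
    size = max(len(p) for p in data_packets)
    parity = bytearray(size)
    for p in data_packets:
        for i, b in enumerate(p):  # shorter packets are implicitly zero-padded
            parity[i] ^= b
    return bytes(parity)


def recover_missing(received, parity, lost_len):
    """Recover the single missing data packet by XOR-ing the parity packet
    with all successfully received data packets."""
    rec = bytearray(parity)
    for p in received:
        for i, b in enumerate(p):
            rec[i] ^= b
    return bytes(rec[:lost_len])
```

For example, with ms=4 and rs=1 as in Fig. 1, losing any single data packet still leaves mr+rr=4≥ms, and recover_missing rebuilds the lost packet from the parity and the three received packets.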

Fig. 1. Illustration of the NLPRS. One redundancy packet is created from a set of four data packets at ingress router j. All five packets are transmitted to egress router k. One data packet is dropped in the network due to contention. However, the lost data packet is reconstructed at egress router k using the successful data- and redundancy packet arrivals.

The benefit of using the NLPRS is that lost data packets can be reconstructed at the egress router, which results in a reduced data PLR. We term this effect the redundancy effect. However, using the NLPRS has two major drawbacks:

  • The introduced redundancy packets contribute to an increased system load.
  • The introduced redundancy packets may contribute to an increased burstiness.

Both of these effects lead to an increased PLR; we group them and term them the altered traffic pattern effect. As will be demonstrated in this paper, the viability of the NLPRS depends on whether the redundancy effect dominates the altered traffic pattern effect or not. We will show that this depends on the parameters ms and rs, the system load, the data packet arrival process, the packet length distribution, the redundancy packet scheduling mechanism and the size of the OPS network.

3. Analytical model

We consider an OPS network equal to the basic OPS architecture presented in [6], and assume the following:

  • The OPS network operates in asynchronous mode.
  • The switches in the network are non-blocking, bufferless, but utilize full wavelength conversion to resolve contentions.
  • Denote an output fibre in a switch as an output link, and let there be a total of C output links in the OPS network. The output links are termed ei (1≤i≤C).
  • Let the number of wavelengths and the normalized system load at output link ei be Ni (Ni≥1) and Ai (Ai≥0), respectively.
  • Let πk be a uni-directional end-to-end path in the OPS network, which traverses Hk output links given in the ordered set ek.
  • Let ℜ={πk, ∀k} denote the set of all end-to-end paths in the network.
  • When the NLPRS is used, r redundancy packets are added to a set of m data packets.
  • We assume that data packets are offered to an end-to-end path πk according to a Poisson arrival process with constant arrival intensity λk. When the NLPRS is utilized, we assume that the arrival process is still Poisson, but with intensity λk(r+m)/m. Hence, the increased system load due to the redundancy packets is reflected in the analytical model, but the increased burstiness introduced by the redundancy packets is not.
  • Since there is no queuing in the network, the arrival process to output link ei is a sum of Poisson processes, which equals a single Poisson process with intensity equal to the sum of the added Poisson processes.
  • Blocking (or contention) occurs at the output links only, and we assume that blocking events occur independently between the output links.
  • We assume that the packet service times are deterministic and i.i.d., equal to 1/μ (i.e. both data and redundancy packets have the same size).
  • The effect of the switching time is ignored.
  • Denote the following terms as:
    • Overall PLR (OPLR): The relative share of lost data and redundancy packets offered to path πk after the NLPRS has been deployed, but before the redundancy effect is accounted for.
    • Data PLR (DPLR): The relative share of lost data packets offered to path πk after the NLPRS has been deployed, and after a possible reconstruction of lost data packets has taken place.
    • Reference PLR (RPLR): The relative share of lost packets offered to path πk before the NLPRS has been deployed. In this case, all packets are data packets.

Since injected redundancy packets lead to an increased system load, it is obvious that OPLR>RPLR. However, due to the redundancy effect we have that DPLR<OPLR. The crucial question is under what conditions the inequality DPLR<RPLR is true. In the remainder of this section we derive analytical expressions for the OPLR, DPLR and RPLR based on Erlang fixed-point analysis [14]. Erlang fixed-point analysis has recently been adapted to OPS/OBS [15].

$$B_i = E(N_i, A_i) = \frac{(N_i A_i)^{N_i} / N_i!}{\sum_{j=0}^{N_i} (N_i A_i)^j / j!}$$
(1)

$$A_i = \frac{1}{\mu N_i} \sum_{\pi_k : e_i \in \mathbf{e}_k} \lambda_k \prod_{p=1}^{C} \left( 1 - I(e_p, e_i, \pi_k)\, E(N_p, A_p) \right)$$
(2)

where the indicator I(e_p, e_i, π_k) equals 1 if output link e_p is traversed before e_i on path π_k, and 0 otherwise.

$$B(\pi_k) = 1 - \prod_{e_i \in \mathbf{e}_k} (1 - B_i)$$
(3)

where B(πk) is the end-to-end PLR for traffic flowing on path πk.

$$A_{i,r,m} = \frac{1}{\mu N_i} \sum_{\pi_k : e_i \in \mathbf{e}_k} \lambda_k \frac{r+m}{m} \prod_{p=1}^{C} \left( 1 - I(e_p, e_i, \pi_k)\, E(N_p, A_{p,r,m}) \right)$$
$$= \frac{r+m}{m} \cdot \frac{1}{\mu N_i} \sum_{\pi_k : e_i \in \mathbf{e}_k} \lambda_k \prod_{p=1}^{C} \left( 1 - I(e_p, e_i, \pi_k)\, E(N_p, A_{p,r,m}) \right) = \frac{r+m}{m} A_i$$
(4)

$$B_{i,r,m} = E(N_i, A_{i,r,m}) = \frac{(N_i A_{i,r,m})^{N_i} / N_i!}{\sum_{j=0}^{N_i} (N_i A_{i,r,m})^j / j!}$$
(5)

$$B(\pi_k, r, m) = 1 - \prod_{e_i \in \mathbf{e}_k} (1 - B_{i,r,m})$$
(6)

  1. Initially set Bi,r,m=0.
  2. Calculate the normalized system load at output link ei using Eq. (4).
  3. Use the normalized system load calculated in step 2 to calculate the blocking probability at the output link ei using Eq. (5).
  4. Repeat from step 2 until the desired accuracy of the Bi,r,m is achieved.

Finally, after all the Bi,r,m’s in the considered end-to-end path have been found, use Eq. (6) to calculate the end-to-end blocking probability on path πk.
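Steps 1–4 above can be sketched as follows (an assumption-laden illustration in Python: the topology is hypothetical, and we read the indicator I(ep,ei,πk) in Eq. (4) as selecting the links traversed before ei on πk):

```python
def erlang_b(n, a_norm):
    """Erlang B loss E(N, A) for N wavelengths and normalized load A
    (offered traffic N*A erlangs)."""
    offered = n * a_norm
    if offered == 0.0:
        return 0.0
    inv_b = 1.0  # 1/B for zero servers
    for k in range(1, n + 1):
        # numerically stable recursion: 1/B_k = 1 + (k/offered) * 1/B_{k-1}
        inv_b = 1.0 + (k / offered) * inv_b
    return 1.0 / inv_b


def fixed_point(paths, n_wl, lam, mu, r, m, iters=50):
    """Reduced-load Erlang fixed point, steps 1-4.

    paths: list of end-to-end paths, each an ordered tuple of link names.
    Every path is offered Poisson traffic with intensity lam*(r+m)/m.
    Returns the per-link blocking probabilities B_{i,r,m}.
    """
    links = sorted({e for p in paths for e in p})
    B = {e: 0.0 for e in links}                      # step 1: initially B = 0
    for _ in range(iters):                           # step 4: iterate to convergence
        for ei in links:
            load = 0.0                               # step 2: Eq. (4)
            for p in paths:
                if ei not in p:
                    continue
                thin = 1.0
                for ep in p[:p.index(ei)]:           # thin by blocking on upstream links
                    thin *= 1.0 - B[ep]
                load += lam * (r + m) / m * thin
            B[ei] = erlang_b(n_wl, load / (mu * n_wl))   # step 3: Eq. (5)
    return B


def path_blocking(path, B):
    """Eq. (6): end-to-end blocking probability on a path."""
    prod = 1.0
    for e in path:
        prod *= 1.0 - B[e]
    return 1.0 - prod
```

For a single-link "network" the iteration converges immediately to the plain Erlang B loss, which provides a simple sanity check of the sketch.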

We now turn our attention to the special characteristics of the NLPRS, and calculate the number of lost packets when the NLPRS is utilized. We assume that whether a packet transmitted on path πk is lost or not is a Bernoulli trial, where the probability of being lost (success) is B(πk,r,m) as given in Eq. (6). We also assume that packet losses occur independently within the same packet set, which means that the probabilities of s lost DPs and s lost RPs are binomially distributed, and given in Eqs. (7) and (8), respectively:

$$Q_s = \binom{m}{s} B(\pi_k, r, m)^s \left(1 - B(\pi_k, r, m)\right)^{m-s}$$
(7)
$$R_s = \binom{r}{s} B(\pi_k, r, m)^s \left(1 - B(\pi_k, r, m)\right)^{r-s}$$
(8)

Equations (7) and (8) give the probabilities of lost DPs and RPs before a possible reconstruction has taken place. However, since lost DPs may be reconstructed from successful DP and RP arrivals, the number of lost DPs may decrease after a possible reconstruction. Let the term ‘lost data packets after reconstruction’ (DPAR) denote the number of data packets lost from the original DP set after a possible reconstruction has taken place. Note that lost DPAR ≤ lost DPs. If the sum of the number of lost DPs (i) and lost RPs (j) in a set is greater than the total number of transmitted RPs (r), i.e. i+j>r, reconstruction is not possible, and the number of lost DPAR equals the number of lost DPs. Otherwise, reconstruction is possible, and there are no lost DPAR, as summarized in Table 1.

Table 1. The number of lost DPAR in a packet set as a function of the number of lost DPs and RPs.


We obtain the mean number of lost DPAR for a packet set consisting of m DPs and r RPs transmitted on path πk by using Eqs. (7) and (8), and Table 1:

$$T_m(\pi_k, r, m) = \sum_{i=1}^{m} \sum_{j=\max[r-i+1,\,0]}^{r} i\, Q_i R_j = \sum_{i=1}^{m} \sum_{j=\max[r-i+1,\,0]}^{r} i \binom{m}{i} B(\pi_k,r,m)^{i} \left(1-B(\pi_k,r,m)\right)^{m-i} \binom{r}{j} B(\pi_k,r,m)^{j} \left(1-B(\pi_k,r,m)\right)^{r-j}$$
(9)

where B(πk,r,m) is given by Eq. (6).

The end-to-end data packet loss rate (i.e., the DPLR, since the redundancy effect from the NLPRS is taken into account) for path πk is then:

$$T(\pi_k, r, m) = \frac{1}{m} T_m(\pi_k, r, m)$$
(10)

Eq. (10) gives the DPLR on a single path in the OPS network. In order to calculate the average PLR in the network, we must consider the PLR on all paths in the network. The RPLR and the DPLR for the considered network are calculated as:

$$RPLR_{NET} = \frac{1}{\lambda_{TOT}} \sum_{\pi_k \in \Re} \lambda_k B(\pi_k)$$
(11)
$$DPLR_{NET} = \frac{1}{\lambda_{TOT}} \sum_{\pi_k \in \Re} \lambda_k T(\pi_k, r, m)$$
(12)

where $\lambda_{TOT} = \sum_{\pi_k \in \Re} \lambda_k$.
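Equations (7)–(12) are straightforward to evaluate numerically. The sketch below takes the per-path blocking probabilities B(πk) and B(πk,r,m) as precomputed inputs (e.g. from the fixed-point iteration above); treating them as inputs is our illustrative reading of the model, not the author's code:

```python
from math import comb


def mean_lost_dpar(B_path, r, m):
    """Eq. (9): mean number of lost DPAR in a packet set of m DPs and r RPs,
    where B_path = B(pi_k, r, m) from Eq. (6)."""
    total = 0.0
    for i in range(1, m + 1):                        # i lost data packets
        Qi = comb(m, i) * B_path**i * (1.0 - B_path)**(m - i)       # Eq. (7)
        for j in range(max(r - i + 1, 0), r + 1):    # j lost redundancy packets
            Rj = comb(r, j) * B_path**j * (1.0 - B_path)**(r - j)   # Eq. (8)
            total += i * Qi * Rj                     # i + j > r: no reconstruction
    return total


def dplr_path(B_path, r, m):
    """Eq. (10): per-path data PLR after reconstruction."""
    return mean_lost_dpar(B_path, r, m) / m


def network_plr(per_path, r, m):
    """Eqs. (11)-(12): traffic-weighted network RPLR and DPLR.
    per_path: list of (lambda_k, B(pi_k), B(pi_k, r, m)) triples."""
    lam_tot = sum(lam for lam, _, _ in per_path)
    rplr = sum(lam * b_ref for lam, b_ref, _ in per_path) / lam_tot
    dplr = sum(lam * dplr_path(b_n, r, m) for lam, _, b_n in per_path) / lam_tot
    return rplr, dplr
```

A useful sanity check: with r=0 reconstruction is never possible, so dplr_path reduces to the mean of a Binomial(m, B) divided by m, i.e. B itself.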

4. Simulation model

A simulation model of the NLPRS has been implemented in the Discrete Event Modelling on Simula (DEMOS) software [16]. We present the considered OPS network in Section 4.1, followed by a description of the arrival models in Section 4.2, the packet length distributions in Section 4.3 and the redundancy packet scheduling mechanisms in Section 4.4. If not stated differently, we use the same assumptions and notation in this section as in Section 3.

4.1. OPS ring architecture

Fig. 2. The considered optical packet switch and its corresponding electronic edge router.

4.2. Arrival models (AM)

We consider two different arrival models, i.e. the Poisson arrival model and the 2-stage hyper-exponential arrival model (H2 arrival model). Compared to the Poisson arrival model, the H2 arrival model has a larger coefficient of variation, which makes it burstier.

With the Poisson arrival model, data packets arrive at an electronic edge router according to a Poisson process with constant arrival intensity. We assume a uniform traffic pattern, which means that data packets arrive with the same intensity γ at all electronic edge routers. The packets are uniformly routed to the other nodes in the network, and no packets are routed back to the node they originate from. The normalized system load imposed by the data packets (ρD) is obtained by dividing the average number of bits offered by the sources by the wavelength capacities, as seen in Eq. (13) [6].

$$\rho_D = \frac{G\,\gamma \cdot av.hop \cdot L}{3\, G\, N\, C} = \frac{\gamma \cdot av.hop}{3}$$
(13)

Let ρR denote the normalized system load imposed by the redundancy packets. When the NLPRS is utilized, the total normalized system load on the network is ρTOT=ρD+ρR. If not stated otherwise, for the rest of this paper we use the term ‘system load’ to describe the normalized system load due to the data packets only, as defined in Eq. (13). Regarding the analytical model, the total data packet arrival intensity to a path is λk=λ=λTOT/(G(G-1))=γ/(G-1), and the system load (due to data packets) is calculated using Eq. (13).

$$a = \frac{1}{2}\left(1 + \sqrt{\frac{c^2 - 1}{c^2 + 1}}\right), \qquad \theta_0 = 2a\lambda, \qquad \theta_1 = 2(1-a)\lambda$$
(14)

The burstiness of an arrival model can be characterized by its coefficient of variation. The Poisson arrival model has a coefficient of variation c=1, while the H2 arrival model has a coefficient of variation c>1 when θ0≠θ1 and a>0. For the H2 arrival model, the coefficient of variation is given as [18]:

$$c^2 = \left(\frac{\sigma}{m_1}\right)^2 = \frac{m_2}{m_1^2} - 1 = \varepsilon - 1 = \frac{2\left(a + k^2 - a k^2\right)}{\left(a + k - a k\right)^2} - 1$$
(15)

where m_i is the i’th moment of the H2 arrival process, σ is its standard deviation, ε is Palm’s form factor and k=θ0/θ1. Further details regarding the H2 arrival model can be found in [18].
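Equations (14) and (15) are mutually consistent and can be checked numerically. The sketch below fits a balanced-means H2 process to a target coefficient of variation c and mean rate λ; the sampling helper is our illustrative addition:

```python
import math
import random


def h2_fit(lam, c):
    """Fit a balanced-means H2 process to mean rate lam and coefficient of
    variation c >= 1, following Eq. (14). Returns (a, theta_0, theta_1)."""
    a = 0.5 * (1.0 + math.sqrt((c * c - 1.0) / (c * c + 1.0)))
    return a, 2.0 * a * lam, 2.0 * (1.0 - a) * lam


def h2_c2(a, theta0, theta1):
    """Squared coefficient of variation from Eq. (15), with k = theta_0/theta_1."""
    k = theta0 / theta1
    return 2.0 * (a + k * k - a * k * k) / (a + k - a * k) ** 2 - 1.0


def h2_sample(a, theta0, theta1, rng=random):
    """Draw one H2 inter-arrival time: exponential branch 0 w.p. a, else branch 1."""
    return rng.expovariate(theta0 if rng.random() < a else theta1)
```

For instance, h2_fit(1.0, 2.0) gives a≈0.887, and plugging the result into h2_c2 recovers c²=4; c=1 degenerates to θ0=θ1=λ, i.e. the exponential (Poisson) case.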

4.3. Packet length distribution (PLD)

We consider both deterministic and empirical PLDs (the analytical model considers only deterministic packet lengths). For the deterministic PLD, all packets (both data and redundancy packets) have the same size, L=500 bytes. In this case, we have that ρR=ρD·r/m and ρTOT=ρD+ρR=ρD(r+m)/m. The empirical PLD can be seen in Fig. 3(a), and is in accordance with recent measurements of Internet packet lengths [19]. The mean packet length is L=431 bytes. When the empirical PLD is used, the size of the redundancy packets equals that of the largest data packet in a packet set. On average, the redundancy packets will be larger than the data packets, which means that the normalized system load due to redundancy packets will be larger than when the deterministic PLD is used, i.e. we have that ρTOT>ρD(r+m)/m.
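The claim that ρTOT exceeds ρD(r+m)/m under the empirical PLD can be illustrated with a small Monte Carlo sketch (the two-point packet length mix below is a hypothetical stand-in for the measured distribution, not the data from [19]):

```python
import random


def load_ratio(m, r, sampler, n_sets=20000, rng=None):
    """Monte Carlo estimate of rho_R / rho_D when each of the r redundancy
    packets is sized to the largest of the m data packets in its set."""
    rng = rng or random.Random(1)
    data_bits = 0
    red_bits = 0
    for _ in range(n_sets):
        sizes = [sampler(rng) for _ in range(m)]
        data_bits += sum(sizes)
        red_bits += r * max(sizes)
    return red_bits / data_bits


def two_point(rng):
    """Hypothetical 40/1500-byte packet mix standing in for the empirical PLD."""
    return 40 if rng.random() < 0.5 else 1500
```

For a deterministic PLD the ratio is exactly r/m, while with the mixed sizes above it comes out well above r/m, since max(sizes) exceeds the mean size in almost every set.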

4.4. Redundancy packet scheduling mechanism (RPSM)

Redundancy packets are scheduled to the queue Q according to one of the following redundancy packet scheduling mechanisms (RPSM):

  • Transmit right away (TRA): When the m data packets in a packet set have arrived, r redundancy packets are created and immediately put into queue Q.
  • Back to back (BTB): When the m data packets in a packet set have arrived, r redundancy packets are created and put into queue Q back-to-back. More precisely, the time at which redundancy packet j (1≤j≤r) is put into queue Q is tj = tj-1 + 1/μj-1, where 1/μj-1 is the duration of packet j-1.
  • Exponential back-to-back (EBTB): This scheduling mechanism resembles the BTB, except that the redundancy packet inter-arrival time is extended by an exponential i.i.d. time with intensity rγ/(G-1).
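The enqueue times of the three RPSMs can be sketched as follows (assuming, as an illustration, that the first redundancy packet enters queue Q at the set-completion time t0; the text specifies only the spacings):

```python
import random


def tra_times(t0, durations):
    """TRA: all r redundancy packets enter queue Q immediately at t0."""
    return [t0] * len(durations)


def btb_times(t0, durations):
    """BTB: packet j enters Q one packet-duration (1/mu_{j-1}) after packet j-1."""
    times = [t0]
    for d in durations[:-1]:
        times.append(times[-1] + d)
    return times


def ebtb_times(t0, durations, rate, rng=random):
    """EBTB: BTB spacing extended by an i.i.d. exponential gap of the given intensity."""
    times = [t0]
    for d in durations[:-1]:
        times.append(times[-1] + d + rng.expovariate(rate))
    return times
```

With equal packet durations 1/μ, BTB simply spaces the redundancy packets 1/μ apart, while EBTB stretches each gap by a random exponential amount, smoothing the offered traffic.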

The various RPSMs will influence the burstiness of the traffic offered to the OPS network. Since a smoother traffic pattern leads to a reduced PLR [18], we expect the EBTB to give the best performance, followed by the BTB and the TRA.

Fig. 3. The cumulative packet length distribution before the NLPRS has been applied (a). The cumulative end-to-end delay distribution before the NLPRS has been applied. ρD=0.2, G=7, AM=Poisson, PLD=Deterministic (b).

5. Results

Both the simulation model (Section 4) and the analytical model (Section 3) have been utilized to evaluate the NLPRS. Red and blue graphs/plots indicate results obtained from the simulation and analytical models, respectively. Regarding the simulation results, 10 independent runs were performed for each plot, and the 95 % confidence intervals are plotted. The performance metrics considered are the data PLR (DPLR), the reference PLR (RPLR) and the end-to-end delay. The DPLR and the RPLR are used as defined in Section 3. The end-to-end delay is the aggregated delay from the moment a packet arrives at the electronic edge router until it enters the access network at the correct node, and is composed of propagation delay, queuing delay (ingress router) and transmission delay. The parameters used and their initial values are summarized in Table 2. We first consider the NLPRS basic performance in Section 5.1. Sections 5.2–5.4 present how the arrival process, packet length distribution and redundancy packet scheduling mechanism influence the NLPRS performance, respectively.

Table 2. The parameters used in the performance evaluation. Unless stated differently, the initial values of these parameters are used in the rest of the section.


5.1. NLPRS basic performance

Figure 4 shows the DPLR as a function of the ratio r/m for different values of the parameter m. We see that using the NLPRS results in a reduced DPLR compared to the case where the NLPRS is not utilized, which means that the redundancy effect dominates the altered traffic pattern effect for this scenario. Starting from r/m=0 and for a given value of the parameter m>0, we see that increasing the ratio r/m leads to a reduced DPLR, which continues until the DPLR reaches a minimum at r/m≈1.2. At this minimum point, the DPLR has been reduced from 1.8×10⁻² when the NLPRS is not utilized to 4.2×10⁻⁴, 3.4×10⁻⁵ and 8.7×10⁻⁶ when m=5, m=10 and m=15, respectively. Also note that increasing the ratio r/m further leads to an increased DPLR compared to the minimum values. Furthermore, we see that increasing the parameter m for a fixed value of the ratio r/m leads to a reduced DPLR. The analytical model is a good approximation to the simulation model when r/m<0.8. However, when r/m>2.0, the analytical model yields optimistic results regarding the NLPRS performance. This is because the analytical model does not consider the increased burstiness due to the introduced redundancy packets, while this has been accounted for in the simulations. Hence, as the ratio r/m increases, the burstiness in the network increases since a larger portion of the traffic originates from the redundancy packets, which leads to a larger bias between the analytical and simulation results. Also note that the total normalized system load in the network is larger than ρD=0.2 when the NLPRS is utilized, e.g., ρTOT=ρD+ρR=0.2+2.0×0.2=0.6 when r/m=2.0.

Fig. 4. The DPLR as a function of the ratio r/m for various values of the parameter m. ρD=0.2, G=7, AM=Poisson, PLD=Deterministic, RPSM=BTB.
Fig. 5. The DPLR as a function of the system load ρD for various values of the parameter r. m=10, G=7, AM=Poisson, PLD=Deterministic, RPSM=BTB.
Fig. 6. The DPLR as a function of the ratio r/m for various values of the parameter G. ρD=0.2, m=10, AM=Poisson, PLD=Deterministic, RPSM=BTB.

Figure 5 shows the DPLR as a function of the system load (ρD) for various values of the parameter r. We see that an increase in the system load leads to a relatively larger increase in the DPLR when the NLPRS is utilized compared to the case where the NLPRS is not utilized. When ρD≈0.4, there is no DPLR reduction from using the NLPRS compared to not using it. This indicates that the NLPRS is efficient only in networks operating at low system loads, and that the performance degrades as the system load increases. The results obtained from the analytical model match the results from the simulation model when the parameter r is small (r≤8) or for moderate system loads (ρD≥0.3).

5.2. Arrival models

Figure 7 shows the DPLR as a function of the ratio r/m when the H2 arrival model is used. We see that the DPLR increases as the coefficient of variation increases (for all values of r/m), because the traffic received from the access network becomes increasingly bursty (note that c=1 corresponds to the Poisson arrival model). This is in accordance with earlier work, which showed that an increasing coefficient of variation leads to an increased PLR in OPS networks [18]. In particular, note that the increased burstiness has a greater impact on the DPLR when the NLPRS is utilized (i.e. when r/m>0). For instance, when r/m=0, the DPLR increases from 1.79×10⁻² to 3.25×10⁻² as c goes from 1 to 8, while when r/m=0.8, the DPLR increases from 4.68×10⁻⁵ to 2.07×10⁻³. However, using the NLPRS reduces the DPLR even when the arrival process is highly bursty, e.g. with c=8, the DPLR is reduced from 3.25×10⁻² (r/m=0) to 2.07×10⁻³ (r/m=0.8).

5.3. Empirical packet length distribution

Figure 8 shows the DPLR as a function of the ratio r/m for various values of the parameter m when the packet lengths follow the empirical distribution. Compared to Fig. 4, we see that the NLPRS performs worse with the empirical PLD. For instance, when m=10, we see an increase in the DPLR when the NLPRS is utilized for r/m>0.3, which means that using the NLPRS actually decreases the network performance. This is because the size of the redundancy packets equals that of the largest data packet in the packet set. With the empirical PLD, the introduced redundancy packets increase the average packet size, as seen in Fig. 9(a). Hence, due to the increased packet size, the system load imposed by the redundancy packets is larger for the empirical PLD scenario than for the deterministic PLD scenario. The cumulative packet length distribution in the network when m=10 and r=20 is seen in Fig. 9(b). Note that the share of 12000-bit packets has increased significantly compared to Fig. 3(a). However, even with the empirical PLD, the NLPRS is able to reduce the DPLR, but only for large values of m, as seen in Fig. 8.

Fig. 7. The DPLR as a function of the ratio r/m for various values of the parameter c. ρD=0.2, m=10, G=7, PLD=Deterministic, RPSM=BTB.
Fig. 8. The DPLR as a function of the ratio r/m for various values of the parameter m. ρD=0.2, G=7, AM=Poisson, PLD=Empirical, RPSM=BTB.

5.4. Redundancy packet scheduling mechanisms

Fig. 9. The mean packet length as a function of the ratio r/m for various values of the parameter m. ρD=0.2, G=7, AM=Poisson, PLD=Empirical, RPSM=BTB (a). The cumulative packet length distribution when ρD=0.2, m=10, r=20, G=7, AM=Poisson, PLD=Empirical, RPSM=BTB (b).
Fig. 10. The DPLR as a function of the ratio r/m for various redundancy packet scheduling mechanisms. ρD=0.2, m=5, G=7, AM=Poisson, PLD=Deterministic.

5.5. End-to-end delay

Figure 11 shows the cumulative end-to-end delay for two different network scenarios. Compared to the results in Fig. 3(b), we see in Fig. 11(a) that the cumulative end-to-end delay has a long tail, which is because of the large value of the parameter m. Lost packets must wait for all packets in the set to arrive before they can be reconstructed, which means that the end-to-end delay increases as the parameter m increases. However, the largest observed delay when m=500 is 6.3 ms, and only 2.28 % of the packets have an end-to-end delay larger than 1 ms. Hence, the extra delay imposed by the NLPRS is not significant unless the parameter m is large. Figure 11(b) shows the end-to-end delay when ρTOT=0.8. The cumulative end-to-end delay has become smoother because the queuing delay in queue Q becomes significant (due to the high system load).

Fig. 11. The cumulative end-to-end delay (in μs) when ρD=0.2, m=500, r=500, G=7, AM=Poisson, PLD=Empirical, RPSM=BTB (a). The cumulative end-to-end delay when ρD=0.4, m=10, r=10, G=7, AM=Poisson, PLD=Empirical, RPSM=BTB (b).

6. Conclusions

In this paper we have presented the NLPRS, a novel approach to reduce the end-to-end data PLR in OPS networks. We have derived an analytical model of the NLPRS for asynchronous OPS using reduced load Erlang fixed-point analysis. An OPS ring network has been simulated, and the major findings include the following:

  • The NLPRS is able to reduce the PLR by several orders of magnitude, depending on the parameters r and m, the system load, network size, data packet arrival process, redundancy packet scheduling mechanism and packet length distribution.
  • Increasing the parameter m while holding the ratio r/m constant leads to improved performance, but also to an increased end-to-end delay.
  • The NLPRS performance is degraded for an increasing system load.
  • The NLPRS performance is degraded for an increasing network size.
  • The NLPRS performance is degraded as the burstiness of the data packet arrival process increases.
  • For the empirical PLD, we have shown that the NLPRS is efficient for large values of the parameter m only. For the deterministic PLD, the NLPRS is efficient for small values of the parameter m as well.
  • The redundancy packet scheduling mechanism influences the performance of the NLPRS significantly. That is, using the TRA scheme results in no performance gain from using the NLPRS, while using the BTB and EBTB schemes results in a significant improvement in the network performance when using the NLPRS.
  • The proposed analytical model is a rough approximation to the results obtained from the simulation model. The observed bias between the analytical and simulation results is mainly due to the increased burstiness caused by the redundancy packets, which is not reflected in the analytical model. More precisely, for high values of r/m, the analytical model overestimates the performance of the NLPRS.

Future work should consider deployment issues regarding the NLPRS.

Acknowledgments

The author gratefully acknowledges Telenor AS for financially supporting this work and Associate Professor Norvald Stol for valuable discussions.

References and links

1. R. Ramaswami and K. N. Sivarajan, Optical Networks: A Practical Perspective (Morgan Kaufmann, 2002).
2. M. J. O’Mahony, D. Simeonidou, D. K. Hunter, and A. Tzanakaki, “The Application of Optical Packet Switching in Future Communication Networks,” IEEE Communications Magazine 39(3) (2001) 128–135.
3. L. Dittmann et al., “The European IST Project DAVID: A Viable Approach Toward Optical Packet Switching,” IEEE Journal on Selected Areas in Communications 21(7) (2003) 1026–1040.
4. J. S. Turner, “Terabit burst switching,” Journal of High Speed Networks 8(1) (1999) 3–16.
5. M. Yoo, C. Qiao, and S. Dixit, “QoS Performance of Optical Burst Switching in IP-Over-WDM Networks,” IEEE Journal on Selected Areas in Communications 18(10) (2000) 2062–2071.
6. S. Yao, B. Mukherjee, S. J. Ben Yoo, and S. Dixit, “A Unified Study of Contention-Resolution Schemes in Optical Packet-Switched Networks,” IEEE Journal of Lightwave Technology 21(3) (2003) 672–683.
7. S. L. Danielsen, C. Joergensen, B. Mikkelsen, and K. E. Stubkjaer, “Optical Packet Switched Network Layer Without Optical Buffers,” IEEE Photonics Technology Letters 10(6) (1998) 896–898.
8. Y. Chen, H. Wu, D. Xu, and C. Qiao, “Performance Analysis of Optical Burst Switched Node with Deflection Routing,” in Proceedings of International Conference on Communication, pp. 1355–1359, 2003.
9. D. K. Hunter, M. C. Chia, and I. Andonovic, “Buffering in Optical Packet Switches,” IEEE Journal of Lightwave Technology 16(12) (1998) 2081–2094.
10. A. S. Tanenbaum, Computer Networks (Prentice Hall, 1996).
11. V. Santonja, “Dependability models of RAID using stochastic activity networks,” in Proceedings of Dependable Computing Conference, pp. 141–158, 1996.
12. Fluid Studios, FSRaid documentation, http://www.fluidstudios.com/fsraid.html (accessed March 2004).
13. The Smart Par Primer, http://usenethelp.codeccorner.com/SPar_Primer.html (accessed March 2004).
14. L. Kleinrock, Queueing Systems Volume I: Theory (John Wiley & Sons, 1975).
15. Z. Rosberg, H. L. Vu, M. Zukerman, and J. White, “Performance Analyses of Optical Burst-Switching Networks,” IEEE Journal on Selected Areas in Communications 21(7) (2003) 1187–1197.
16. G. Birtwistle, DEMOS: Discrete Event Modelling on Simula (MacMillan, 1978).
17. E. V. Breusegem, J. Cheyns, B. Lannoo, A. Ackaert, M. Pickavet, and P. Demeester, “Implications of using offsets in all-optical packet switched networks,” in Proceedings of IEEE Optical Network Design and Modelling, 2003.
18. H. Øverby and N. Stol, “Effects of bursty traffic in service differentiated Optical Packet Switched networks,” Optics Express 12(3) (2004) 410–415.
19. H. W. Braun, NLANR/Measurement and Network analysis, http://www.caida.org/analysis/AIX/plen_hist/.

OCIS Codes
(060.4250) Fiber optics and optical communications : Networks
(060.4510) Fiber optics and optical communications : Optical communications

ToC Category:
Research Papers

History
Original Manuscript: July 27, 2004
Revised Manuscript: September 23, 2004
Published: October 4, 2004

Citation
Harald Øverby, "Network layer packet redundancy in optical packet switched networks," Opt. Express 12, 4881-4895 (2004)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-12-20-4881

