Category Archives: Results

ECOC 2018 Demonstration: VNF Implementation of the Virtual DBA

We demonstrate a VNF implementation of a sliceable PON architecture that has been optimised using DPDK data plane acceleration techniques. This gives Virtual Network Operators optimal control over capacity scheduling in a large-scale multi-tenant PON environment.

Experimental Setup

Our demonstrator (see Fig. 1) implements a shared PON scenario with a number of real and virtualised ONUs, each with 3 T-CONTs. The main components of the testbed are: a physical PON, a set of emulated ONUs, a traffic generator and a multi-access edge computing node. The physical PON is based on one OLT and two ONUs (with the ONUs multiplexed onto the same physical board), implemented on FPGA development boards [13] offering 10 Gb/s symmetric capacity. The emulated ONUs, running in software, are used to increase the number of users and generate typical self-similar traffic. The traffic generator produces both real-time-sensitive and best-effort traffic flows (such as file transfer and video streaming) through the physical PON. Traffic flows are VLAN-tagged and then mapped to specific T-CONTs at the ONUs. OpenStack runs the Network Function Virtualisation (NFV) implementation of the PON, hosting the virtual DBA, the Merging Engine (the element that merges all virtual bandwidth maps from the different VNOs into one physical bandwidth map allocation) and the SDN control plane. The virtualisation node is logically composed of the Virtual Network Functions (VNFs), an OpenStack virtualisation platform, a DPDK data plane acceleration toolset and an orchestration and control layer.
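
To make the Merging Engine's role concrete, the following minimal C++ sketch merges per-VNO virtual bandwidth maps into one physical bandwidth map. The structure names and the simple first-come, truncate-on-overflow layout policy are illustrative assumptions, not the demonstrator's actual implementation.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One grant within a 125 us upstream frame. A virtual bandwidth map is
// the list of grants a VNO's vDBA computed for its own ONUs/T-CONTs.
struct VirtualGrant  { uint16_t alloc_id; uint32_t size; };
struct PhysicalGrant { uint16_t alloc_id; uint32_t start; uint32_t size; };

// Merge the per-VNO virtual maps into the single physical bandwidth map
// broadcast with the downstream frame: grants are laid out back to back
// and truncated once the frame capacity is exhausted.
std::vector<PhysicalGrant> merge(
        const std::vector<std::vector<VirtualGrant>>& virtual_maps,
        uint32_t frame_capacity) {
    std::vector<PhysicalGrant> bmap;
    uint32_t cursor = 0;  // next free position in the upstream frame
    for (const auto& vmap : virtual_maps) {   // one virtual map per VNO
        for (const auto& g : vmap) {
            if (cursor >= frame_capacity) return bmap;  // frame is full
            const uint32_t size = std::min(g.size, frame_capacity - cursor);
            bmap.push_back({g.alloc_id, cursor, size});
            cursor += size;
        }
    }
    return bmap;
}
```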

We have implemented the Merging Engine (ME) and the vDBA functions for the Virtual Network Operators (VNOs) as Virtual Network Functions (VNFs), allowing these functions to be instantiated and scaled independently. The virtualised infrastructure, shown in Fig. 1, leverages Single Root Input/Output Virtualization (SR-IOV) technology and Open vSwitch [6] with Data Plane Development Kit (DPDK) enhancements [7]. DPDK offers a set of lightweight software libraries and optimised drivers to accelerate packet processing. It relies on polling threads, huge pages, NUMA locality, zero-copy packet handling, lockless queues and multi-core processing to achieve low latencies and a high packet processing rate. All VNFs therefore leverage the DPDK drivers and libraries to minimise the I/O and packet processing cost. The PCI Special Interest Group [8] on I/O Virtualization proposed the SR-IOV standard for scalable device assignment: PCI devices supporting the standard present themselves to host software as multiple virtual PCI devices, introducing the concepts of physical functions (PFs) and virtual functions (VFs). PFs are the full-featured PCIe functions and represent the physical hardware ports; VFs are lightweight functions that can be assigned to VMs. The userspace VF driver lets the VM hosting the Merging Engine VNF access the FPGA interface directly, providing near line-rate packet I/O performance.

OVS-DPDK replaces the standard OVS kernel datapath with a DPDK-based datapath, creating a user-space vSwitch on the host for faster connectivity between VMs. The OVS-DPDK ports expose vhost-user interfaces through which packets are fetched from and delivered to the VMs. Furthermore, all VNFs in the different VMs employ a para-virtualised interface that uses the DPDK userspace virtio poll mode driver to accelerate packet I/O to and from OVS-DPDK. Each of the VNFs used for the VNOs implements the vDBA mechanism and thus has identical functionality in terms of packet processing. The VM running the Merging Engine VNF has two interfaces: a VF interface for packet I/O with the FPGA, and a virtio interface to exchange packets with the OVS-DPDK switch. Traffic flows through this virtualised system in two directions: North/South and East/West. In the North/South pattern, traffic is received from the network through the FPGA interface and sent back out to the network. In the East/West pattern, traffic is processed by a VNF and sent to another VNF through OVS-DPDK for further processing.
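
The VNF fast path follows the usual DPDK poll-mode pattern described above. As a rough illustration only (port number, pool sizing and error handling here are assumptions, not the demonstrator's code), a minimal receive loop looks like this:

```cpp
// Build against DPDK, e.g. with: pkg-config --cflags --libs libdpdk
#include <cstdlib>
#include <rte_debug.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

static constexpr uint16_t BURST = 32;      // mbufs fetched per poll
static constexpr uint16_t RX_RING = 1024;  // rx descriptor ring size

int main(int argc, char** argv) {
    // EAL init: hugepages, lcore pinning, PCI/vdev probing.
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    // Pool of packet buffers shared with the NIC / virtio backend.
    rte_mempool* pool = rte_pktmbuf_pool_create(
        "rx_pool", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (!pool) rte_exit(EXIT_FAILURE, "mempool creation failed\n");

    // One rx queue on port 0 (an SR-IOV VF or virtio interface).
    const uint16_t port = 0;
    rte_eth_conf conf{};
    if (rte_eth_dev_configure(port, 1, 0, &conf) < 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_RING,
                               rte_eth_dev_socket_id(port), nullptr, pool) < 0 ||
        rte_eth_dev_start(port) < 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    rte_mbuf* bufs[BURST];
    for (;;) {
        // Busy-poll: no interrupts or syscalls on the datapath, which is
        // what keeps per-packet latency low and deterministic.
        const uint16_t n = rte_eth_rx_burst(port, 0, bufs, BURST);
        for (uint16_t i = 0; i < n; ++i) {
            /* ... vDBA / merging-engine processing would happen here ... */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}
```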


F. Slyne, R. Giller, J. Singh and M. Ruffini, “Experimental Demonstration of DPDK Optimised VNF Implementation of Virtual DBA in a Multi-Tenant PON,” 2018 European Conference on Optical Communication (ECOC), Rome, Italy, 2018, pp. 1-3.
doi: 10.1109/ECOC.2018.8535109
keywords: {Acceleration;Bandwidth;Passive optical networks;Merging;Engines;Real-time systems;Cloud computing},
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8535109&isnumber=8535099

OFC 2017 Paper on Antenna, Spectrum and Capacity Trade-off for Next Generation PONs

We propose a cost-optimal antenna vs. spectrum resource allocation strategy for mobile 5G MD-MIMO over Next-Generation PONs. Comparing wavelength overlay and shared wavelength approaches, split-PHY leads to solutions with higher mobile capacity than fronthaul.

Figure 1: Optimal antenna numbers (a1) and bandwidth (a2) for the fronthaul overlay model; optimal antenna numbers (b1) and bandwidth (b2) for the fronthaul shared wavelength model; optimal antenna numbers (c1) and bandwidth (c2) for the split-PHY shared wavelength model. Overall system rate for the fronthaul (b3) and split-PHY (c3) for the wavelength sharing model.

The results discussed in this section refer to a scenario where a number of users are randomly distributed in an area of 1 km². The scenario uses an average urban population density of 1,350 inhabitants and 570 dwellings per km² (we take the city of Dublin, Ireland, as a reference); we analyse an MD-MIMO system with 20 active users and 64 RRHs. We have also simulated a city centre scenario with a density 4 times higher, scaling the number of active users and antennas proportionally. However, since the results were similar to those of the first scenario (Fig. 1), they are not reported in this paper. We assume the maximum spectrum bandwidth available is 50 MHz and that the number of PONs (we assume a 64-way split) is enough to cover all dwellings and RRHs. We used exhaustive search in Matlab to solve the optimisation problems previously described; the number of antennas used in each PON (mi) is determined by distributing the overall optimal number of antennas uniformly among the PONs.

The results presented in Fig. 1 show the optimal number of antennas and spectrum resources used for cost ratios of wireless spectrum to PON capacity (Rwb) and PON capacity to antenna site (Rbm) varying over several orders of magnitude. We have also attempted to estimate a potential reference value for Rwb and Rbm, taking into consideration estimated costs for spectrum, antennas and PON channels. The cost of the spectrum (at 1.8 GHz) was approximated to 0.1138 GBP per MHz per inhabitant for a 20-year lease, following data in [9]. The cost of antenna site leasing was set to $1,900 per month, according to [10]. The cost of leasing one of the eight NG-PON2 wavelengths was calculated at $1,510 per year, by carrying out a discounted cash flow model over the costs reported in [11] (we considered 1% OPEX on passive and 4% on active infrastructure, a return on investment of 5% and a Weighted Average Cost of Capital of 10%). All costs were brought back to a common currency and normalised to a one-year period; since we only consider cost ratios, we assume that similar ratios might still be valid when the lease time operates over much shorter time scales for highly dynamic resource allocation. The approximate reference value for Rbm is thus calculated at 0.066 (although, due to the high variability of antenna site costs, we highlight in the plots a two-order-of-magnitude shaded area from 0.006 to 0.6), while the approximate value for Rwb is 0.0065.
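
As a quick sanity check, the Rbm reference value follows directly from the annualised figures quoted above (the Rwb value additionally depends on the GBP-to-USD conversion applied in the paper, so it is not reproduced here):

$$ R_{bm} = \frac{\text{PON wavelength lease per year}}{\text{antenna site lease per year}} = \frac{\$1{,}510}{\$1{,}900 \times 12} \approx 0.066 $$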

The plots are organised as follows: the first three in the upper line, (a1), (b1) and (c1), report the optimal number of antennas for, respectively, the fronthaul overlay model, the fronthaul shared wavelength model, and the split-PHY shared wavelength model. In the lower line, (a2), (b2) and (c2) report the associated optimal spectrum bandwidth. Plots (b3) and (c3) in the last column report instead the wireless data rate, across all users, for the wavelength sharing model with, respectively, fronthaul (b3) and split-PHY (c3). The plots show that the higher Rwb, the higher the sensitivity of the optimal number of antennas to Rbm: indeed, low cost in optical transport facilitates the use of more antennas as the PON capacity cost cb decreases relative to the spectrum cost cw. From plot (a1) we can see that for the fronthaul overlay model the reference value (between the red and green curves) has low sensitivity to changes in the Rbm ratio, meaning that the optimal strategy is to use the lowest number of antennas necessary for MD-MIMO. This is due to the high cost of optical transport with respect to spectrum; only when this ratio changes considerably (i.e., by 100 times, blue curve) does the system become sensitive to Rbm, with the optimal strategy quite variable with it. When fronthaul is considered (b1), the situation does not change considerably within the reference shaded zone, although the strategy becomes more sensitive to changes in Rbm and Rwb. The sensitivity becomes instead more pronounced for the wavelength sharing over split-PHY case (c1), as the red and green curves (i.e., around the reference value) become steeper for values near the shaded area. In this scenario, in fact, the split-PHY drastically reduces the C-RAN bit rate, which, combined with the ability to share PON wavelengths between multiple RRH signals, lowers considerably the cost of optical transport. Thus, for split-PHY the optimal MD-MIMO strategy is visibly dependent on the cost ratios, making resource allocation optimisation a necessity in dynamic markets where costs change with demand, or across different countries and geotypes.

Looking at the bandwidth plots in the lower line, (a2), (b2) and (c2), we can see that while the spectrum used tends to decrease as more antennas are used, the relation is not strictly inversely proportional, because the model objective is the minimisation of the cost per bit. Moreover, the optimal bandwidth is less sensitive to Rbm for the fronthaul overlay model, as the high optical transport cost makes it difficult to use more antennas. For split-PHY, the lower cost of optical transport instead allows the use of more antennas even when more spectrum is utilised, leading to an increase in the overall capacity (as visible in plot (c3)). Finally, the last-column plots (b3) and (c3) show the overall MD-MIMO system rate (according to Shannon capacity) for fronthaul and split-PHY over shared wavelength models: the lower cost of split-PHY transport enables higher wireless data rates compared to fronthaul.
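
For reference, a Shannon-style sum rate of the kind reported in (b3) and (c3) takes the usual form (the paper's exact MD-MIMO SINR model is not reproduced here):

$$ R = \sum_{k=1}^{K} B \log_2\!\left(1 + \mathrm{SINR}_k\right), $$

where B is the allocated spectrum bandwidth and K is the number of active users.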


I. Macaluso, B. Cornaglia and M. Ruffini, “Antenna, spectrum and capacity trade-off for cloud-RAN Massive distributed MIMO over next generation PONs,” 2017 Optical Fiber Communications Conference and Exhibition (OFC), Los Angeles, CA, 2017, pp. 1-3.
keywords: {5G mobile communication;antennas;MIMO communication;next generation networks;passive optical networks;radio access networks;resource allocation;cost-optimal antenna;spectrum resource allocation;capacity trade-off;cloud-RAN massive distributed MIMO;next generation PON;mobile 5G MD-MIMO;passive optical networks;wavelength overlay;shared wavelength approaches;split-PHY;mobile capacity;fronthaul;radio access network;massive distributed multiple input multiple output system;Antennas;Passive optical networks;Bandwidth;Mobile communication;Wireless communication;Mobile computing;Dynamic scheduling},
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7936823&isnumber=7936771


OFC 2018 Paper on PON Capacity Auctions

We propose an economic-robust auction mechanism for capacity sharing in multi-tenant PONs, operating within the DBA process. We demonstrate that it improves PON utilization by providing economic sharing incentives to VNOs and infrastructure providers.

We have simulated a multi-tenant PON market considering a 10 Gb/s symmetrical PON (i.e., XGS-PON). The simulation duration is 6 seconds, which allows us to average our results over 48,000 frames, each of 125 μs duration. The PON is shared amongst 10 VNOs, each serving 10 Optical Network Units (ONUs). Although not reported, we have repeated the same simulations for different numbers of ONUs and VNOs, obtaining similar results.

Our results are reported in Fig. 2, comparing the three sharing mechanisms described above. Fig. 2a shows the network utilization for an unbalanced load scenario (i.e., the mean of the traffic generated by the ONUs is assigned according to a uniform random distribution) and confirms that our proposed economic-robust mechanism outperforms Non-Sharing by achieving higher utilization across all offered loads. The Upper-Bound scenario reflects the case with no trade reduction, which increases the number of trades and thus leads to higher utilization. It is important to note, though, that the upper bound is idealistic: without incentives to report their truthful value, VNOs will likely manipulate their bids to achieve higher utility, with buyer VNOs shading their bids and sellers reporting higher, untruthful values. This leads to a higher price per item from the sellers and a lower offer per item from the buyers, causing a natural reduction of trades; our Upper-Bound results do not account for this manipulative bidding behavior. In Fig. 2b we report, for completeness, the scenario with balanced load across the ONUs, although this is less realistic. As expected, while the trend is confirmed, the difference between the three mechanisms is much less marked, as the number and value of the trades are far lower when the VNOs all carry similar traffic. Fig. 2c compares, for the unbalanced load scenario, the average VNOs' and InP's utility against the average number of trades conducted during each frame using the proposed mechanism. We define a VNO's utility as the difference between its trading price and its valuation for the FU, i.e., how close its final payment is to its perceived value. The InP's utility is the difference between the trading prices of the seller and buyer VNOs, i.e., the price gap arising from the demand-to-supply ratio. Both Fig. 2c and Fig. 2d show that, moving right along the X-axis, the ratio of demand to supply increases and, as a natural reaction, the market adapts by raising the price. As the number of trades increases, the VNOs and the InP gain more utility. Once the overloading ratio exceeds a factor of 2, the VNOs become more demanding; at the same time the supply declines, leading to fewer trades and eventually almost none at saturation, as all the VNOs are asking for more than their negotiated share. By design, while supply exceeds demand the trading price is equal to the base price, so the utility of the InP remains zero. Once demand grows beyond supply, the price rises and the InP's utility starts to grow. The InP's utility is highest when the number of trades is at its maximum and the average price of an FU is high.
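
To illustrate the trade-reduction idea, here is a minimal C++ sketch of a McAfee-style trade-reduction double auction; this is the classic mechanism family the text alludes to, and the paper's actual mechanism and pricing rule may differ in detail. Buyer VNOs bid for excess frame capacity, while seller VNOs ask a price for their unused share.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

struct Outcome {
    std::size_t trades;    // number of buyer/seller pairs that trade
    double buyer_price;    // price each trading buyer pays per unit
    double seller_price;   // price each trading seller receives per unit
};

Outcome trade_reduction_auction(std::vector<double> bids,
                                std::vector<double> asks) {
    std::sort(bids.begin(), bids.end(), std::greater<double>());  // high to low
    std::sort(asks.begin(), asks.end());                          // low to high
    // Breakeven index k: the last pair whose bid still covers the ask.
    std::size_t k = 0;
    while (k < bids.size() && k < asks.size() && bids[k] >= asks[k]) ++k;
    if (k < 2) return {0, 0.0, 0.0};  // at most the sacrificed trade exists
    // Sacrifice the k-th trade so the remaining k-1 clear at prices set by
    // the k-th pair: no trader can then improve its price by misreporting.
    return {k - 1, bids[k - 1], asks[k - 1]};
}
```

Run once per frame on the VNOs' reported valuations, the returned prices feed directly into the utility definitions above: the gap between bids[k-1] and asks[k-1] is exactly what accrues to the InP on each trade, and removing the trade reduction recovers the idealistic Upper-Bound behavior.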

N. Afraz, A. Elrasad and M. Ruffini, 2018 Optical Fiber Communication Conference and Exposition (OFC), 2018.

ONDM 2017 Paper on Virtual DBA

Frame Level Sharing for DBA Virtualisation in Multi-Tenant PONs



Fig. 2: Frame-level sharing architecture, sharing frames among VNOs.

Performance Evaluation

We developed a C++ XGS-PON simulator (i.e., symmetric 10 Gb/s upstream/downstream rates) and used it to simulate one OLT and 60 ONUs with a maximum physical distance of 40 km. The upstream capacity was set to 9.95328 Gb/s, according to the standard. The Ethernet frame size for the packet load generator ranges from 64 to 1518 bytes with a trimodal size distribution, as reported in [14]. We employed self-similar traffic with long range dependence (LRD) and a Hurst parameter of 0.8. The ONUs are divided equally among the VNOs, and all VNOs employ the GIANT [9] DBA algorithm with three T-CONTs, namely assured, non-assured and best effort, with service intervals of 4, 8 and 8 frames respectively. The ONU buffer size is set to 3 MB. The offered load is uniformly distributed among ONUs and T-CONTs. We assume the total PON assured traffic capacity is divided homogeneously among the VNOs, but they are allowed to exceed this figure with non-assured and best-effort traffic.
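
A trimodal frame-size sampler of the kind used by such a load generator can be sketched as below. The mode positions (64/594/1518 bytes) and their weights are illustrative assumptions; the measured distribution actually used is the one reported in [14].

```cpp
#include <cstdint>
#include <random>

class TrimodalFrameSize {
public:
    explicit TrimodalFrameSize(uint32_t seed)
        : gen_(seed), mode_({0.5, 0.1, 0.4}) {}  // mode weights: assumption

    // Draw one Ethernet frame size in bytes from the three modes.
    uint16_t sample() {
        static constexpr uint16_t kSizes[3] = {64, 594, 1518};
        return kSizes[mode_(gen_)];
    }

private:
    std::mt19937 gen_;                        // deterministic, seedable PRNG
    std::discrete_distribution<int> mode_;    // picks one of the three modes
};

// Usage: TrimodalFrameSize gen(42); uint16_t bytes = gen.sample();
```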

Regarding the number of VNOs and offered load distribution, we consider three simulation scenarios as follows:

  • Scenario 1: we consider two VNOs with the offered load divided equally between them.
  • Scenario 2: we consider five VNOs with the offered load divided equally among them.
  • Scenario 3: we consider two VNOs with the offered load divided in a 1:2 ratio between them.

The performance of our FLS algorithm is tested against the SS framework [7], discussed in Section II. In order to achieve a fair comparison between the FLS and SS capacity sharing policies, we use the same number of VNOs and the same service intervals (both mechanisms are based on the GIANT DBA) and set the maximum service rate to the full XGS-PON capacity. The forgetting factor was set to 0.125 [7], while the minimum committed service rate was set equal to each VNO's share of the total PON capacity. Our main performance metric is the average packet delay, while we also investigate the frame loss rate.
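
Assuming the forgetting factor denotes the standard exponentially weighted update (the exact quantity being smoothed is defined in [7]), each service-rate estimate would be updated per frame as

$$ \hat{R}_{t+1} = (1 - \alpha)\,\hat{R}_t + \alpha R_t, \qquad \alpha = 0.125, $$

so the most recent frame contributes one eighth of the running estimate.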

A. Scenario 1

The average packet delay for our proposed FLS and the benchmark SS is shown in Fig. 5. It can be noted that both the static and dynamic SS switching mechanisms have the same performance for assured, non-assured and best-effort traffic. Regarding FLS, the capacity sharing and no capacity sharing policies perform very closely: capacity sharing has lower delay than no capacity sharing at high load for best-effort traffic, while the situation is reversed for non-assured traffic. Comparing SS and FLS, we find that the FLS delay is 50% lower than the SS delay for assured and non-assured traffic; this also holds for best-effort traffic at offered loads below 9 Gb/s. The minimum delay achieved by SS matches our observation in Section II: since the service interval is 4 frames for assured traffic and there are two VNOs, the assured T-CONTs are polled every 8 frames, giving a minimum average delay of 12 frames (1500 μs), i.e., an average wait of half a polling cycle (4 frames) before a packet is reported, plus a full 8-frame cycle before the grant can be used. The results reported in the original SS paper [7] show the same behavior. FLS, on the other hand, allows assured T-CONTs to be polled every 4 frames, hence it achieves 50% lower delay. The frame loss rate is similar in both SS and FLS, so we do not report its plot.


Fig. 5: Average delay (scenario 1): (a) assured bandwidth, (b) non-assured bandwidth, (c) best effort.

B. Scenario 2

In scenario 2, the number of VNOs is set to 5. There are three interesting points to note. First, in FLS the capacity sharing policy shows significantly lower delay for best-effort traffic at high load compared to no capacity sharing. Second, the minimum latency achieved by FLS is still the same as in scenario 1, showing that FLS is more resilient to the number of VNOs than SS. Third, the minimum latency achieved by SS increases by a factor of 2.5 since, as explained in the subsection above, it is proportional to the number of VNOs in the system. This shows that the SS framework's performance is highly dependent on the number of VNOs.

C. Scenario 3

In scenario 3, the number of VNOs is set to 2, but the offered load of one VNO is twice that of the other. The delay performance of the low-loaded operator is shown in Fig. 7, and that of the high-loaded one in Fig. 8. Comparing the assured bandwidth performance of both operators, we see that FLS achieves higher isolation than SS: the assured bandwidth delay of the low-loaded operator is almost constant over the load range for both the capacity sharing and no capacity sharing policies. For SS, on the other hand, the dynamic switching mechanism yields increasing delay for the low-loaded operator and decreasing delay for the high-loaded operator at high offered load. This is because, as the offered load increases, the (dynamic) SS layer assigns more upstream frames to the high-loaded operator, reducing its delay while increasing that of the low-loaded one. Regarding best-effort traffic, the FLS capacity sharing policy achieves significantly lower delay than the no capacity sharing policy and both SS switching mechanisms.

The frame loss rate is reported in Fig. 9. For the low-loaded operator, the SS dynamic approach increases the frame loss rate by a small amount. For the high-loaded operator, the SS dynamic approach and the FLS capacity sharing policy are more stable than the SS static approach and the FLS no capacity sharing policy. The FLS capacity sharing approach, however, has the advantage of raising neither the low-loaded operator's frame loss rate nor its average delay, thus again providing good isolation between the two VNOs.


Fig. 7: Average delay (scenario 3, low-loaded VNO): (a) assured bandwidth, (b) non-assured bandwidth, (c) best effort.


Fig. 8: Average delay (scenario 3, high-loaded VNO): (a) assured bandwidth, (b) non-assured bandwidth, (c) best effort.


Fig. 9: Scenario 3 frame loss rate: (a) low-loaded VNO, (b) high-loaded VNO.

Conclusion

In this work, we proposed a novel virtualised PON sharing architecture called Frame Level Sharing (FLS). FLS virtualises the DBA function by migrating it from the physical OLT (owned by the infrastructure provider) to a virtual PON slice controlled by the virtual network operator. FLS is designed to achieve upstream frame-level sharing among VNOs while maintaining service isolation among them, by introducing a new sharing engine layer on top of the TC layer. The sharing engine is responsible for merging the received virtual bandwidth maps into the physical bandwidth map transmitted along with the downstream frame. Simulation results in balanced load scenarios show that FLS achieves lower delay than a benchmark scheme from the literature (the Slice Scheduler) and a low dependency on the number of VNOs sharing the PON. In addition, even in the unbalanced load scenario, FLS achieves excellent service isolation among VNOs.

A. Elrasad and M. Ruffini, “Frame Level Sharing for DBA virtualization in multi-tenant PONs,” 2017 International Conference on Optical Network Design and Modeling (ONDM), Budapest, 2017, pp. 1-6.
doi: 10.23919/ONDM.2017.7958528
keywords: {channel capacity;passive optical networks;frame level sharing;DBA virtualization;PON;fiber-to-the-premises access network;ubiquitous fiber infrastructure;passive optical networks;point-to-point solutions;virtual network operators;capacity scheduling;virtual dynamic bandwidth assignment;Passive optical networks;Bandwidth;Delays;Engines;Switches;Business;Scheduling algorithms},
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7958528&isnumber=7958513

OFC 2017 Paper: Introduction to DBA Virtualisation

We propose a virtual-DBA architecture enabling true PON multi-tenancy, giving Virtual Network Operators full control over capacity assignment algorithms. Our virtualization enables efficient capacity sharing without increasing scheduling delay compared to traditional (non-virtualized) PONs.

Fig. 2: (a) Average delay, load divided 1:1; (b) average delay, load divided 1:2; (c) frame loss rate.

The aim of our analysis is to show that the vDBA mechanism we propose does not add delay to the capacity scheduling, while it gives VNOs full control over their vDBA algorithm and is able to re-assign unused capacity among the VNOs.
Fig. 2 shows the average delay versus the offered load for the sharing capacity, non-sharing capacity and traditional (non-virtualized) PON, as well as the frame loss rate. From Fig. 2a and Fig. 2b, we can see that employing virtualization in the DBA does not affect the delay performance of the assured and non-assured bandwidth. For these two cases the plots show the same constant delay performance for the traditional PON and the two virtualized PONs. Assured bandwidth is the most important to consider, as it is the most likely to carry traffic with higher QoS requirements.
Regarding best-effort traffic, in Fig. 2a we see that as the load increases towards saturation this traffic experiences delay, which is however similar across traditional and virtual PONs (the red, black and blue curves increase together). From Fig. 2b, however, we can see that when the traffic is unbalanced between VNOs, the MT-PON with capacity sharing (blue) outperforms the MT-PON with non-sharing capacity (red) and performs similarly to a traditional PON (black). This advantage is also clear from Fig. 2c, which shows that the MT-PON with capacity sharing does not experience any noticeable frame loss when the load is unbalanced, while the non-sharing capacity MT-PON experiences a noticeable loss rate (here we also show the case where the traffic is unbalanced by a factor of 3).
In conclusion, our approach to DBA virtualization has shown that it is possible to achieve true multi-tenancy in PONs, giving operators full control over capacity scheduling, without degrading delay performance and without wasting PON capacity when the load is unbalanced among the VNOs.
A. Elrasad, N. Afraz and M. Ruffini, “Virtual dynamic bandwidth allocation enabling true PON multi-tenancy,” 2017 Optical Fiber Communications Conference and Exhibition (OFC), Los Angeles, CA, 2017, pp. 1-3.
keywords: {optical communication equipment;optical fibre networks;passive optical networks;virtual dynamic bandwidth allocation;true PON multitenancy;virtual-DBA architecture;virtual network operators;capacity assignment algorithms},
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7936877&isnumber=7936771