Energy Implications of Photonic Networks With Speculative Transmission
Spotlight summary: Power consumption is becoming a primary research topic, driven by economic, environmental and physical concerns. In the computing field in particular, years of development aimed at maximum performance, while neglecting power consumption, now make improving energy efficiency a compelling first priority. Power density is also reaching the physical limits of electronic chips and interconnections, and is seriously hindering the development of chip multiprocessors (CMPs) by limiting the transistor scaling predicted by Moore's law.
Photonic solutions are seen as a viable way to reduce the energy consumption of point-to-point interconnects and interconnection networks, which are used in data centers for server communication as well as in computers for on-chip and chip-to-chip communication. In on-chip interconnection networks, not only must power consumption be as low as possible, but stringent latency requirements must also be satisfied, for example to support cache coherence in high-performance shared-memory systems.
Latency reduction has been the driving motivation behind fast and efficient scheduling algorithms for interconnection networks. Round-robin algorithms are typically used for their ease of implementation, whereas speculative algorithms achieve better performance but are more complex, as they must predict the slot allocation without waiting for the final grant.
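To make the round-robin baseline concrete, the sketch below shows a minimal software model of a round-robin arbiter. It is purely illustrative: the function name, interface, and list-of-booleans representation are assumptions for this sketch and are not taken from the paper under review, whose allocators are hardware (Verilog) implementations.

```python
def round_robin_grant(requests, last_granted):
    """Grant one of N request lines, starting the search just after the
    previously granted index so every requester is served in turn.

    `requests` is a list of booleans (True = port is requesting);
    returns the index of the granted port, or None if nothing requests.
    Illustrative sketch only, not the paper's implementation.
    """
    n = len(requests)
    for offset in range(1, n + 1):
        idx = (last_granted + offset) % n
        if requests[idx]:
            return idx
    return None

# Port 1 was granted last, so the search starts at port 2:
grant = round_robin_grant([False, True, True, False], last_granted=1)  # -> 2
```

Because the search pointer always advances past the last winner, no requester can be starved, which is why this scheme is cheap to implement in hardware as a rotating priority encoder.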
Watts et al., in their work entitled "Energy Implications of Photonic Networks with Speculative Transmission," investigate for the first time the energy implications of using a round-robin scheduling algorithm versus a pipelined speculative algorithm to schedule transmissions in on-chip interconnection networks. To derive the power consumption profile as a function of network load, a Verilog-based model of the scheduled and speculative algorithms was developed. Power consumption data were collected from the literature, while values for key components, such as the scheduling algorithms themselves and the network-interface electronic buffers, were obtained from an actual implementation in a commercial CMOS process. The results show that the reduced latency of the speculative algorithm comes at an energy cost: higher power is consumed because of the increased number of retransmissions per packet required by the algorithm and the more complex adapter. However, both of the investigated scheduling algorithms allow significant energy savings compared with traditional electronic interconnections.
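The latency-versus-energy trade-off described above can be illustrated with a first-order per-packet energy model. This is a hypothetical sketch, not the paper's model: the function, parameter names, and numeric values are assumptions chosen only to show how an expected retransmission count inflates per-packet energy.

```python
def energy_per_packet(e_tx, e_overhead, expected_retx):
    """First-order illustrative model (not from the paper):
    each transmission attempt costs e_tx, so the expected number of
    retransmissions multiplies the transmission energy; e_overhead
    lumps the scheduler/adapter energy charged once per packet."""
    return e_overhead + e_tx * (1.0 + expected_retx)

# Hypothetical numbers, for illustration only (arbitrary units):
scheduled = energy_per_packet(e_tx=1.0, e_overhead=0.1, expected_retx=0.0)
speculative = energy_per_packet(e_tx=1.0, e_overhead=0.2, expected_retx=0.5)
# Speculation cuts latency, but its retries and more complex adapter
# make `speculative` exceed `scheduled` in this toy model.
```

At low load a speculative grant almost always succeeds (few retries), so the energy gap narrows; as load rises, contention increases the expected retransmissions and the gap widens, matching the load-dependent profile the authors derive.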
Nevertheless, looking at the overall power consumption, the allocator circuits, the electronic circuits in which the scheduling algorithms are implemented, are found to consume a negligible amount of power compared with other elements (such as the power sources). This finding opens the field to the use of more complex and better-performing scheduling algorithms without significantly affecting the total power consumption.
In summary, this timely and cutting-edge work represents the first analysis of how the choice of scheduler affects the power consumption of on-chip interconnection networks. The work is expected to have a strong impact and to open the way for future research that considers the dependence of power consumption on further elements such as coded signals, more complex scheduling algorithms, or different assumptions for the energy figures of the considered components.
--Pier Giorgio Raponi
OCIS Codes: (060.4250) Fiber optics and optical communications: Networks; (200.4650) Optics in computing: Optical interconnects