The history of optical signal processing and computing can be divided into two main periods. The first period contains the rapid growth of the field and its decline while the second period is the more conservative, but secure revitalization that is leading us to a much brighter future. As often happens in science and technology, the evolution of the field has not necessarily followed the track anticipated by its initiators.
The modern era of optical signal processing and computing started with the introduction of the coherent optical processors by Cutrona, et al. and VanderLugt.1, 2 These coherent processors exploited the main attributes of optics, namely its massive parallelism and speed. In particular, the success of the VanderLugt correlator raised much interest and extended applications were anticipated. As a result, intensive research efforts were started and they continued for about two decades. Unfortunately, the attributes of optics were offset by severe technical difficulties, the lack of proper devices and their inflexibility. Consequently, researchers turned toward digital computing and attempted to replace electrons with photons, an approach that was doomed by the fundamental differences between the behavior of light and electrons.
The fact that nature exploits optics so extensively indicates that optics must have some attributes that cannot be matched by other media. One of these attributes is the capability of photons to solve the wave equation with any given set of boundary conditions. Moreover, this wave equation is solved almost instantaneously, in parallel, and with no expenditure of energy. Energy is dissipated only at the moment when the final result is detected. In contrast, digital computers dissipate energy for each intermediary step of a calculation even if those intermediary calculation results are not interesting. As pointed out by Caulfield and Shamir in 1990,3 this attribute alone is an adequate incentive to pursue optical signal processing. Optical Fredkin gates and gate-arrays4–7 were also introduced within the effort to exploit the non-dissipative nature of processing with light.
Optical signal processing and computing has thus far been limited to certain narrow niches. Apart from technological difficulties, the main reason that has prevented a wider applicability is the relative inflexibility of optical architectures as compared to their electronic counterparts. In this article we introduce a novel structure for digital computing networks specifically designed to exploit the attributes of optics. The new concepts introduced here enable the implementation of all combinational Boolean operations in a reversible way without the need to dump information or interchange signal and control inputs as required by Fredkin’s approach.
In the next section we present an overview of the structure followed by a discussion of its advantages and limitations. Section 4 is devoted to an overview of various technological aspects of implementing optical gates and their assembly into complete computing networks. Section 5 discusses some prospects for the future and this is followed by concluding remarks.
2. Directed Logic: An Overview
We believe that the failure of optical computation has been due, in large part, to the fact that researchers have primarily attempted to make optics behave like electronics. That is, optical computation researchers have adopted the paradigm of logic used in electronics, but, as indicated above, this paradigm does not recognize the inherent differences between optics and electronics. In this section we introduce a new logical paradigm, "Directed Logic", which is specially adapted to the features and promise of optics, and describe its main characteristics.
Directed logic circuits are networks of simple elements. The primary input to the circuit is a vector, not the traditional Boolean scalar. Each element performs a specific operation on its input vector. The cumulative effect of these operations yields the value of the function in question. Locally, each element performs one of two operations on its input vector. Which operation is performed is determined by a separate Boolean input to the element. The output vector is passed on either in part or in whole to subsequent elements in the network, each of which performs an operation determined by its Boolean input. In short, directed logic architecture performs distributed parallel computation of a function and its negation, using a computational method based on vector operations.
The most obvious difference between directed logic and traditional logic is the lack of anything corresponding to a Boolean logic gate. Computation in a directed logic circuit is performed by a network of elements each of which performs a simple switching operation. The operation of each element is independent of the operation of the other elements in the circuit. Computation of the logical function is performed only by the circuit as a whole; one cannot in general identify portions of the circuit as computing sub-functions. A second noticeable feature is that directed logic computation inherently computes both a function and its inverse simultaneously. Thus any circuit that computes AND also, and at the same time, computes NAND.
Directed logic operates on vectors represented as ordered pairs of Boolean values. Thus (0,0), (0,1), and (1,0) are admissible values. It will turn out that (1,1) is not admissible, corresponding in some ways to the notion of contradiction. There are two operators in directed logic, which we call "pass" and "switch". Both operators are monadic: they take only one argument. Pass (hereafter P) is the identity operation; its output is the same as its input. Switch (hereafter S) reverses its input vector. Thus S(1,0) = P(0,1) = (0,1), S(0,1) = P(1,0) = (1,0), and S(0,0) = P(0,0) = (0,0). If we interpret (1,0) as "True" and (0,1) as "False", then sequences of S and P can be used to calculate certain Boolean functions on given arguments. Trivially, S calculates Boolean negation and P calculates Boolean identity. Less trivially, we can compute Boolean XOR and XNOR with a string of S and P elements, as illustrated in Figure 1 and described in the following paragraph.
Suppose we want to calculate XOR(v1, ..., vn). This is computed by the string of elements E1, ..., En, where Ei is P if vi = 0 and S if vi = 1. Importantly, this means that the nature of each element is determined by the value of the corresponding variable. In Figure 1 this control is represented as a separate input at the top of each element. E1 receives the input (1,0) and thereafter the output of each element is the input to the next.
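As a minimal sketch (the function names and the labeling of which output line is XOR versus XNOR are our assumptions, not the paper's), the P/S chain of Figure 1 can be simulated directly:

```python
def P(v):
    """Pass: identity on the ordered-pair vector."""
    return v

def S(v):
    """Switch: reverse the two components of the vector."""
    return (v[1], v[0])

def xor_xnor(bits):
    """Element Ei is P if vi = 0 and S if vi = 1; the chain is fed (1, 0).
    The final vector carries XOR on one line and XNOR on the other."""
    v = (1, 0)
    for b in bits:
        v = S(v) if b else P(v)
    return v[1], v[0]  # (XOR of bits, XNOR of bits) under our labeling

assert xor_xnor([1, 0, 1]) == (0, 1)  # XOR = 0, XNOR = 1
assert xor_xnor([1, 1, 1]) == (1, 0)  # XOR = 1, XNOR = 0
```

Note that no element ever waits on another: each Ei is fixed by its own control bit, and the (1,0) pair is merely redirected through the chain.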
In the simple XOR/XNOR circuit just described, the output vector of one element is used as the input vector of a subsequent element. However, in other circuits the output vector of one element may be decomposed before being used as input. Here we use the OR/NOR circuit as an example.
Fig. 1. XOR/XNOR computed with a series of directed logic elements
Fig. 2. OR/NOR circuit in directed logic
It is worth spending a bit of time understanding what is going on in Figure 2. The output vector of the first A element is split into two paths; it may be helpful to think of the top path as the negative path and the bottom path as the positive path. One of these two paths will carry the scalar 1, and the other will carry the scalar 0. In this sense the position of the scalar 1 carries the information of the value of A. If A is positive (the scalar takes the bottom path), then it is switched to the OR output line without the need to check the value of B. If, on the other hand, A is negative, then it becomes necessary to check the value of B. A negative result from B yields the scalar 1 at NOR; a positive result sends it down to the second A element. Since the scalar 1 only passes through B when A is negative, it will be passed through the second A element to the OR output. The reader should take the time to be convinced that the 1 will always arrive at either OR or NOR while the other output will carry a scalar 0. The extra 0 input at B merely ensures that every path carries either a 1 or a 0 scalar. It should be reiterated here that the (1,0) input vector elements are maintained throughout the whole network and are merely redirected. Minor changes to the OR/NOR circuit produce circuits for all of the other two-input Boolean functions (diagrams are given in Appendix A).
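The routing just described can be checked with a short simulation (the wiring below is our reconstruction of Fig. 2, and the names are ours):

```python
def element(ctrl, v):
    """Controlled switch: pass (P) when ctrl is 0, switch (S) when ctrl is 1."""
    return (v[1], v[0]) if ctrl else v

def or_nor(a, b):
    """Our reconstruction of the Fig. 2 wiring: the first A element's outputs
    feed a B element (negative path) and a second A element (positive path)."""
    top, bottom = element(a, (1, 0))   # top: A-negative path, bottom: A-positive path
    nor, down = element(b, (top, 0))   # B only sees the 1 when A is negative
    or_out, zero = element(a, (down, bottom))
    # conservation check: exactly one 1 and two 0s leave the circuit
    assert sorted((or_out, nor, zero)) == [0, 0, 1]
    return or_out, nor

for a in (0, 1):
    for b in (0, 1):
        assert or_nor(a, b) == (a | b, 1 - (a | b))
```

The `zero` output is the conserved extra scalar: nothing is destroyed, the 1 is only redirected to OR or NOR.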
2.1. Directed logic and Fredkin Gates
Readers familiar with conservative logic will recognize the elements in the above circuit as similar to Fredkin gates. Fredkin used controlled switches as gates, and proved their completeness with respect to Boolean logic in Ref. 8. However, there are crucial differences between Fredkin's implementation and ours, despite their being based on the same fundamental elements.
Fredkin conceived of the three inputs as being interchangeable, that is, any output of one gate could be used as any input for a subsequent gate. His proof of the universality of Fredkin Gates essentially depends on this interchangeability. In all our circuits the controlling input is kept entirely separate from the other two lines. This careful separation facilitates the optical implementation of our circuits along with their generalizability to other media.
Current versions of optical Fredkin gates4,9 require that the gates be controlled by a signal which is different in character from the other inputs. In most cases the control signal is electronic. However, even in ‘all optical’ solutions the controlling signal differs from the other inputs, for example by being of a different wavelength. This difference means that optical Fredkin gates have not been cascadeable in the way that Fredkin envisaged. This in turn has meant that they cannot be shown to produce all of Boolean logic. We resolve this problem by reinventing cascading.
2.2. Directed logic cascading
2.2.1. From syntax to circuits
It is commonplace that logical syntax may be developed using different grammars. We wish to highlight here the difference between the grammars of infix and suffix notation. Infix notation places the operator between its arguments while suffix notation places the operator after the arguments. Thus ‘p OR q’ is in infix notation while ‘pq OR’ is in suffix notation.
As we scan an expression in suffix notation from left to right, we encounter the arguments for each function prior to the operator itself. In a somewhat similar way, if we look at the operation of gates cascaded in the traditional way, the inputs for each gate are computed temporally prior to the output for the gate. Thus there is a certain analogy between the temporal ordering of computation in traditional cascading and the spatial ordering of symbols in suffix notation. To be sure, the analogy is not perfect. For example, expressions in suffix notation are linearly ordered while the corresponding circuits are only partially ordered. However, analogies may be instructive even when imperfect. Infix notation is substantially different than suffix notation in that the arguments for the main operator in infix notation are typically not scanned until well after the operator is scanned.
Directed logic circuits cascade in a way that we suggest is analogous to the infix notation rather than suffix notation. Directed logic circuits are cascaded by nesting within each other rather than chaining one after another. In fact, there is typically no explicit computation of the operator that is temporally distinct from the computation of the operands. Instead, the operator
is computed by computing the operands within the context of a particular type of structure. The structure within which computation is performed determines the function computed. This is a vastly different model than that used for traditional cascading.
The crucial observation underlying this new model is that directed logic circuits are themselves controlled switches in many ways similar to the elements of which they are composed. Both have a constant 1 and a constant 0 input. Both have two outputs, one of which is the negation of the other, and both are controlled by one or more control lines that may be of a different type than the data lines. What this suggests is that we may treat the argument positions in the structures as composed of “black boxes” which may in turn be replaced either by individual elements or by directed logic circuits. Circuits for complex functions may thus be built by recursive nesting. We begin with the circuit for the main operator. Into each argument position we place the directed logic circuit that computes the appropriate function. We continue placing circuits into argument positions until we reach arguments which may be computed with a single element, i.e. the level of literals. There is one caveat to this process. When an argument position is marked with a′, that indicates that the circuit filling that position is to be reversed. When the position is filled by a single switch this does not matter as the functionality of a single switch is the same whether reversed or not. However, when the position is occupied by a more complex circuit the ‘decomputation’ of the two inputs can only be accomplished by reversing the entire circuit. (It is possible to replace the reversed circuit with other devices. This can be accomplished optically, for example, by a coupler followed by an amplifier. However, non-switch based circuits may lack the speed advantages of section 3.1 and the energy efficiency of conservative and reversible circuits. For these reasons we concentrate on the use of reversed circuits. We believe it is important to realize that the decomputation can be done entirely within the logic without the need for extra-logical devices.)
2.2.2. Building a complex circuit
As an example, in this section we demonstrate how the new notion of cascading is used in constructing the circuit for (A OR B) AND C. The circuit for (A OR B) AND C is obtained by inserting circuits for A OR B into the A and A′ argument places of the circuit for AND, as indicated in Fig. 3. The end result of the nesting is shown in Figure 4.
In this way a circuit for any logic formula can be 'read off' of the structure of the formula in much the same way that traditional logic circuits can be. This point cannot be stressed too much. The circuit of Figure 4 follows simply from the syntax of the formula. It is not necessary to have the truth table or any normal form of the formula; a correct circuit follows from the formula itself.
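The reading-off process can be sketched as a recursive construction. This is our formalization, not the paper's: it leans on the observation of section 2.2.1 that a whole circuit behaves as a controlled switch driven by the value it computes, and for compactness it builds AND from the OR structure via De Morgan, using the complement output that every DL circuit provides for free:

```python
def switch(sub):
    """Treat a whole circuit as a controlled switch: it swaps its data pair
    exactly when the function it computes is true."""
    return lambda pair, env: (pair[1], pair[0]) if sub(env) else pair

def var(name):
    """A literal: a single element controlled directly by one input variable."""
    return lambda env: env[name]

def or_circuit(f, g):
    """The OR/NOR structure with arbitrary sub-circuits in its argument
    positions (the complementary NOR output is computed but not returned)."""
    sf, sg = switch(f), switch(g)
    def run(env):
        top, bottom = sf((1, 0), env)
        nor, down = sg((top, 0), env)
        out, _zero = sf((down, bottom), env)
        return out
    return run

def and_circuit(f, g):
    """AND via De Morgan over the complement outputs of the sub-circuits;
    in DL the complement of any sub-circuit is always available."""
    not_f = lambda env: 1 - f(env)
    not_g = lambda env: 1 - g(env)
    return lambda env: 1 - or_circuit(not_f, not_g)(env)

# Read the circuit for (A OR B) AND C straight off the formula's syntax:
formula = and_circuit(or_circuit(var("A"), var("B")), var("C"))
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert formula({"A": a, "B": b, "C": c}) == (a | b) & c
```

No truth table or normal form is consulted; the nesting of constructors mirrors the nesting of the formula.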
Of course the circuit that can be simply read off a formula may not be the most efficient circuit for computing the function represented by the formula. For example, the circuit of Figure 4 can be simplified by noting that (A OR B) AND C is equivalent to C AND (A OR B) and then reading the circuit off of the latter formula. The result is shown in Figure 5. Moreover, if we are not interested in the complement output, a significant fraction of the circuit can be discarded (i.e., the right-hand decomputing OR gate in Fig. 4).
3. Advantages and limitations of directed logic
3.1. Slower is Faster
In traditional logic architectures each gate must wait for the result of previous gates before computing its result. Upon receiving all inputs the gate effects a change of state depending on the inputs and shortly thereafter the output stabilizes at a particular value. The time between the initial presentation of the inputs and the time the output signal stabilizes is known as the ‘gate delay’. Gate delay is influenced by two factors: the speed of the state change and the size of the gate.
Fig. 3. Recursive cascading to produce a complex circuit: OR circuits are nested within an AND circuit as indicated by the arrows.
Fig. 4. The final circuit for (A OR B) AND C
Fig. 5. A simpler circuit computing the same function as Figure 4
Size is important because it takes a given type of signal a certain amount of time to traverse a given distance in a given medium. For a toggle switch (e.g. a typical light switch), this is the length of time it takes for a signal to cross the switch when the switch is already in the correct position. The longer the path the signal must travel, the longer it will take to travel it. This is one reason why smaller gates are preferable to larger gates, and why it is preferable to have gates packed as closely together as possible. We will call this sort of delay ‘path delay’ as it is the kind of delay that is present even in the paths that connect the various logic elements. Path delay is reduced simply by making the circuit physically smaller, even if each gate switches at the same speed as larger versions.
A second kind of delay stems from the fact that each logic element must make a state change based upon its inputs. Although a signal may travel across the gate prior to the completion of the state change, its value is unpredictable and cannot be used as a logic output until the signal has stabilized after the state change. For a toggle switch, this is the length of time it takes for the switch to change from "on" to "off". Let us call this type of delay 'state delay'. State delay is reduced by using faster switches even if the size of the circuit is not altered.
Together, the path and state delays determine the speed at which logic elements can operate. Delay times vary with the type of gate and the specifics of its construction, but typically are on the order of a few tenths of a nanosecond. The portion due to path delay is, of course, cumulative. However, since each gate depends on the previous gates for its input, state delays also add up. Later gates on a path cannot begin their state changes until all previous gates in the path have completed theirs. So each additional gate on a path adds both path delay and state delay to the circuit as a whole. For example, a path involving 100 gates, each of which has a state delay of 0.5ns, will have a delay of 50ns above and beyond the time it would take for the signal to traverse a wire of the same length as the circuit (the path delay of the circuit). This is one of the reasons that minimization is so important in circuit design; minimized circuits are significantly faster than non-minimized ones. By reducing the number of gates, logical minimization of a circuit increases the speed of the circuit more than simply shrinking it would.
The situation is quite different in directed logic. Each element needs to make a state change just as electronic gates do. However, because the signals that determine the state changes do not pass through previous gates in the circuit, all elements can perform their state changes simultaneously. As a result, the circuit is slowed by only the duration of a single state delay, not by the cumulative state delays of the entire path. The upshot of this is that directed logic circuits can have markedly less state delay than traditional circuits, even when they are built of elements which are individually slower.
Returning to the above example of a circuit with a path length of 100 gates or elements, and assuming a conservative 5 ns state delay for the directed logic elements, the overall state delay remains 5ns for the whole circuit. The advantage of directed logic circuits increases as the circuits become larger.
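The arithmetic behind this comparison fits in a few lines; the 10 ns whole-path transit time below is our illustrative assumption, while the 0.5 ns and 5 ns state delays are the figures used in the text:

```python
def total_delay_ns(levels, state_delay_ns, path_delay_ns, cumulative_state):
    """Traditional cascading accumulates one state delay per level of logic;
    in directed logic all elements switch at once, so a single state delay
    covers the entire path."""
    state = levels * state_delay_ns if cumulative_state else state_delay_ns
    return path_delay_ns + state

PATH_NS = 10.0  # assumed signal transit time over the whole path (illustrative)
traditional = total_delay_ns(100, 0.5, PATH_NS, cumulative_state=True)   # 60.0 ns
directed    = total_delay_ns(100, 5.0, PATH_NS, cumulative_state=False)  # 15.0 ns
assert traditional == 60.0 and directed == 15.0
```

Doubling the path length doubles the traditional state-delay term but leaves the directed-logic term at a single 5 ns, which is why the advantage grows with circuit size.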
The speed advantage of optical directed logic circuits cannot be overemphasized. Although controlled optical switches are currently large compared to their electronic counterparts, we expect that technology will continue to make them smaller. Because all switches in a DL circuit operate simultaneously, reducing the path delay in this way is much more significant than similar reductions in traditional implementations. Assuming that switches can eventually be fabricated at about a 1 micron pitch, directed logic could potentially compute on the order of 10^5 levels of logic in a clock period of 1/3 nanosecond. This represents a factor of about 10^4 over current logic implementations.
While in this paper we are targeting optical computing paradigms, we should reiterate our earlier remark that directed logic circuits can be implemented in many other ways, including conventional electronic components. This should be remembered when considering the present state of the art, with integrated optics still being in its infancy. Thus, at present the elements of which optical directed logic circuits are built are both larger and slower than comparable electronic elements. Nonetheless, optical directed logic circuits can already experience substantially less delay than traditional circuits. This advantage is likely to increase substantially as the field of integrated optics matures.
3.2. Conservative and Reversible
The fact that directed logic is based on Fredkin-like gates, combined with the fact that there is no detection during computation, means that computational processes are reversible and conservative. As a result there is no theoretical lower bound to the energy dissipated in computation as there is with traditional electronic logic.
Conventional implementations of Boolean logic destroy information and so also incur an energy cost. This point was originally made in Ref. 10. A clear exposition of the claim along with a discussion of the implications for logic implementations is provided in Ref. 8. A typical Boolean operation, say AND, takes two bits of input and returns only one bit of output. There is thus less information at the output than at the inputs. The lost information must be dissipated as heat. Although the heat of information loss is quite small on a per gate basis, it can be significant for large structures.
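To put a number on this cost (standard physics, not a figure from the paper), the Landauer limit of kT ln 2 per erased bit can be evaluated directly; the 10^9 operations-per-second rate is our illustrative assumption:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer limit: minimum heat dissipated per bit of information erased.
E_bit = k_B * T * math.log(2)   # ~2.87e-21 J per erased bit

# By the paper's counting, a two-input one-output gate such as AND erases
# one bit per operation; at an assumed 1e9 operations per second that is
# only ~3 pW per gate, yet the floor is nonzero and grows with gate count.
# Conservative logic avoids it entirely.
P_gate = E_bit * 1e9
assert 2.8e-21 < E_bit < 3.0e-21
assert P_gate < 5e-12
```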
Directed logic is conservative and reversible. (For expositional reasons we have shown the control inputs as terminating at the elements they control. To be fully conservative, they must lead beyond the elements and be gathered at the end of the complete circuit. This is a trivial exercise, but the additional lines make the diagrams harder to read, thus we have left them out.) Every circuit has just as many ’1’ and ’0’ inputs as ’1’ and ’0’ outputs respectively. No information is lost as a result of the logic, and thus the logic, by itself, does not require the expenditure of energy. To be sure, the operation of the gate will require energy; there must be energy input into the system for it to work. But there is no loss due solely to the logic as there is in traditional logic systems.
Of course conservative logic can also be implemented in other ways. As with non-conservative logic, electronics is ahead of optics in this regard (cf. Refs. 11, 12 and 13). DL makes no claim to being the best or most complete implementation of conservative logic. Indeed, our primary point is simply that DL is an optically implementable logic, something that has been hard to come by. The fact that it is conservative is an added bonus.
3.3. Two limitations
In its present form DL was designed to implement logic functions. Obviously, general computing is not limited to the evaluation of a single logic operation and much more work is needed to expand the DL paradigm toward more general computing. It is quite likely that such an expansion will require a departure from pure DL but it will still maintain an advantage over conventional systems. Below we discuss the two main limitations of pure DL.
3.3.1. Fan-out
One important logical operation that is missing from DL is fan-out. Of course specific implementations of DL may have readily accessible fan-out operations. For example, if DL is implemented optically with outputs encoded by amplitude, fan-out may be implemented with reversed y-couplers and some form of amplification (see 4.2.1). However, this requires more than the simple switching networks of DL and so is not part of DL per se.
3.3.2. Sequential Logic
As we have presented it here, directed logic is only able to perform combinational logic. Full computation requires sequential logic in addition. We have reserved discussion of DL-based sequential logic for a subsequent publication, as it requires further elements, such as the fan-out discussed above, which are not properly part of DL and which may depend on the specifics of physical implementation in ways that DL itself does not.
4. Toward optical implementation
This section is devoted to a discussion of possible optical implementations of directed logic networks. Starting from the basic switching elements (the optical Fredkin gates), we then address aspects of advanced implementations of large networks.
4.1. Optical controlled switches: an updated survey
For applications in logic networks one is usually interested in logic gates containing nonlinear bistable elements. This is not the case for the directed logic networks. Moreover, the basic configuration of a controlled switch is not restricted to digital signals; in principle, one may use these gates for processing analog signals as well. Since the introduction of optical Fredkin gates in Ref. 4, technology has evolved and one may compile a new list of possibilities. The following list contains the most obvious ones, many other options exist with more to come in the future.
4.1.1. Polarization switching gate
Polarization switching gates were recently considered in Ref. 14. In such a gate the input and output lines correspond to two orthogonal polarizations of a light beam (or a waveguide channel of an integrated optical system) traversing a single controlled polarization rotator, such as a liquid crystal light modulator, an electro-optic (Kerr) modulator or any other means of polarization rotation. If desired by architectural requirements, the ’0’ and ’1’ signals can be converted into intensities by properly positioned polarizing beam splitters.
The main advantages of this gate are its relative simplicity and its robustness. The fact that the control input has a different nature from the signals that propagate through the gate is a problem for conventional applications of Fredkin gates, but our architecture is designed with this characteristic in mind.
4.1.2. Acousto-optic gate
The two input lines are laser beams incident on an acousto-optic deflector (either bulk or integrated surface acoustic wave devices) at the Bragg angle. Considering the acoustic signal as the control input, if there is no acoustic signal, the two beams continue unaffected. An acoustic signal of the proper frequency will deflect each beam into the direction of the second one, interchanging the two outputs. This is also a simple gate but less robust than the polarization gate. It also has a control signal different from the propagating signals. For our application, this kind of gate can be easily cascaded and integrated. For example, a single acoustic pulse may activate many gates as it travels along the system as will be required by the systolic architecture to be discussed below.
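To make the Bragg condition concrete, the deflection geometry follows from sin θ_B = λ/(2Λ). The numbers below are our illustrative assumptions (HeNe-range light, a 100 MHz acoustic control signal, and a 4200 m/s acoustic velocity typical of bulk acousto-optic media), not values from the paper:

```python
import math

lam = 633e-9            # optical wavelength inside the medium, m (illustrative)
v_acoustic = 4200.0     # acoustic velocity, m/s (illustrative)
f_acoustic = 100e6      # control-signal frequency, Hz (illustrative)

Lambda = v_acoustic / f_acoustic           # acoustic wavelength: 42 um
theta_B = math.asin(lam / (2 * Lambda))    # Bragg angle, radians
# A beam incident at theta_B is deflected by 2*theta_B when the control is on,
# so the two input beams exchange directions, i.e. the switch operation.
assert abs(math.degrees(theta_B) - 0.432) < 0.005
```

The sub-degree Bragg angle illustrates why the two beam paths can be kept nearly collinear, which eases cascading many such gates along one acoustic column.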
4.1.3. Photorefractive gate
Photorefractive materials change their refractive index as a function of illumination. Thus, they are ideal candidates for all-optical gates in which light provides the propagating signals as well as the control signals. In general, photorefractive media will be used as phase modulators controlled by light, as discussed below. In a more sophisticated architecture the photorefractive gate is based on four-wave mixing. In this gate, two counter-propagating, coaxial beams are the two input beams. The control signal consists of the two other counter-propagating pump beams. The two inputs are transmitted if the control beam is absent and are phase-conjugated when the pump is present, resulting in switching between the outputs.
4.1.4. Waveguide coupler gates
In optical communication and integrated optical systems, controlled waveguide and fiber couplers are widely employed. 2x2 controlled couplers perform exactly the task of a controlled switch. While state-of-the-art couplers are based on electronic control, it is straightforward to use photodetection combined with the electro-optic coupler to facilitate optical control. A more advanced technology would be the use of photorefractive material for direct optical control of the coupling constant. Optical control signals can be applied from outside, normal to the plane of the waveguides, or within the waveguide itself.
4.1.5. Mach-Zehnder gates
Mach-Zehnder interferometers are also ideal as controlled switches, and they are highly developed for communications technology. Although the Mach-Zehnder interferometer has two input ports and two output ports, conventional applications utilize only one of each. In our architectures we exploit both ports, and a controlled phase modulator (liquid crystal, photorefractive medium, electro-optic phase modulator, etc.) can switch between the two outputs, implementing a Fredkin gate.
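For the ideal lossless case (standard interferometer relations, not specific to any device in the paper), the two output intensities vary as cos² and sin² of half the induced phase, so a π phase shift performs a complete switch:

```python
import math

def mz_outputs(phi):
    """Ideal lossless Mach-Zehnder with 50/50 couplers and a phase shift phi
    in one arm: intensities at the two output ports. Which port is 'bar' and
    which is 'cross' is a labeling convention and may be swapped in practice."""
    return math.cos(phi / 2) ** 2, math.sin(phi / 2) ** 2

bar, cross = mz_outputs(0.0)       # control off: the signal passes straight through
assert abs(bar - 1.0) < 1e-12 and abs(cross) < 1e-12
bar, cross = mz_outputs(math.pi)   # control on: pi phase shift switches ports
assert abs(cross - 1.0) < 1e-12
```

Intermediate phases split the light between both ports, which is why this configuration can in principle also route the analog signals mentioned in subsection 4.1.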
4.2. Advanced network implementations
For a specific application, a complete network will be assembled by a large number of elements. Usually these will be constructed of the same kind of embodiment, such as one of those listed above, but, for some applications, it will be advantageous to use more than one kind. Moreover, additional components may be incorporated in the network as well. In this subsection we consider several concepts that will be handy for actual implementations of these networks.
4.2.1. Loss compensation
Ideally, controlled switches are lossless. However, any practical device has losses, and in a large network these losses must be compensated for. The obvious approach is to insert amplification within the network. This can be done for each gate or periodically along the net. Possible implementations include the use of Erbium-doped optical fiber amplifiers, semiconductor optical amplifiers, quantum dot amplifiers and any other method that can regenerate a weakened optical signal. To maintain the attributes of the present computing paradigm, one should avoid signal regeneration by a detector-laser combination unless extremely fast systems can be incorporated.
4.2.2. Parallel addressing and smart pixels
Up to this point we have mainly discussed the conceptual layout of the computing network and its components concentrating primarily on the propagation of the signal along the network. However, we have not specifically addressed the technical issue of the information input, which must be directed to the control line of each element. As indicated in section 2 the information vector must be distributed throughout the whole logic network with each vector element activating one or more gates in parallel. Until now it was tentatively assumed that the individual gates were hard wired to the input vector elements, thus implementing a fixed operation for a given network. To execute any other operation with the same network, the wiring to the control elements must be altered.
A significant improvement can be achieved if the wiring is replaced by a separate logic circuit which establishes the connection layout in a way that can be easily modified according to the required operation. An efficient way to implement such a connection is through an array of smart pixels that are optically addressed. There are several possibilities for such an optical addressing scheme out of which a particularly attractive one is a spatial light modulator which can project the complete control layout in parallel.
4.2.3. Systolic process
As noted in subsection 3.1, unlike conventional logic arrays, the operating speed of a directed logic network is limited only by the propagation time of the signal through the network. Since in some of the embodiments of optical controlled switches the transit time is determined only by the propagation speed of light through the medium of the net this can be very fast. Nevertheless, at computing rates practiced today, even this speed sets practical limits if the network has a reasonable length. Moreover, for most applications, existing interfaces between the network and the external world will usually set an even more severe limit to the computing speed.
The speed limitation indicated above can be partially mitigated if the information is introduced into the control elements sequentially, in synchronization with the signal propagating within the network. With such an arrangement, after the signal passes a certain cross-section of the net, it is ready to accept the next information vector. The result is a systolic processor that can be operated in a pulsed mode: a pulse of light is injected into the first layer of controlled switches together with the control information. The controls of the switches in subsequent layers are activated only just before the pulse reaches that layer. Meanwhile the first layer is ready to accept the next light pulse together with the next control sequence. One way to achieve proper synchronization in the parallel architecture described above is a projection system which is inclined with respect to the plane of the logic network.
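A toy schedule makes the pipelining concrete. This is a sketch under our own simplifying assumptions: one layer traversed per clock tick, with each layer's controls applied as the pulse arrives:

```python
def systolic_schedule(n_layers, n_vectors):
    """Return sorted (tick, vector, layer) events: pulse v reaches layer l at
    tick v + l, so a new input vector can enter the net every single tick
    instead of waiting for the previous one to traverse the whole network."""
    return sorted((v + l, v, l)
                  for v in range(n_vectors)
                  for l in range(n_layers))

events = systolic_schedule(n_layers=3, n_vectors=2)
# At tick 1, vector 1 enters layer 0 while vector 0 is already at layer 1:
assert (1, 1, 0) in events and (1, 0, 1) in events
```

The diagonal structure of the schedule is exactly what an inclined projection system provides optically: each successive layer receives its control pattern one transit time later.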
5. Prospects for the Future
5.1. Scaling with Technology
Much of the work in optical logic has focused on developing specific devices, a switch here, a NOR gate there. The current proposal is, to a certain extent, technology independent. There are many ways of implementing optical Fredkin gates4,15 and as optical technology advances, we expect ever faster, smaller, and more energy-efficient Fredkin gates to appear. Because the current proposal is based on a re-envisioning of the logical paradigm, these new technologies may be seamlessly incorporated in much the way that improved transistors have been incorporated in electronic logic.
5.2. 2-D Multi-channel computation
Most logic functions as discussed in subsection 2.2 can be implemented by a directed logic network containing two or three rows of gates. Considering these rows as a computing channel, it is straightforward to extend the system in two or three dimensions5 to form a multichannel computing system for highly parallel computation.
5.3. Beyond logic operations
As already indicated elsewhere in this paper, DL in its present form was developed to evaluate logic functions in an optics-friendly way. This development led to a new concept for implementing logic operations, but it still lacks a general computing scenario. Future work will be dedicated to mitigating the present limitations, such as fan-out and fan-in, as well as conventional cascading and feedback operations. It is quite likely that these extensions will require a payment in terms of reversibility, energy gain and speed. Nevertheless, we expect to maintain the advantages of DL at least within the sections where logic operations are performed.