Advanced search in Research products

33,765 Research products, page 1 of 3,377
Filters: Preprint · IL · AE · arXiv.org e-Print Archive
Sort: Date (most recent), 10 results per page
  • Open Access English
    Authors: 
    Izack Cohen; Krzysztof Postek; Shimrit Shtern;

    Real-life parallel machine scheduling problems can be characterized by: (i) limited information about the exact task duration at scheduling time, and (ii) an opportunity to reschedule the remaining tasks each time a task processing is completed and a machine becomes idle. Robust optimization is the natural methodology to cope with the first characteristic of duration uncertainty, yet the existing literature on robust scheduling does not explicitly consider the second characteristic - the possibility to adjust decisions as more information about the tasks' duration becomes available - even though re-optimizing the schedule every time new information emerges is standard practice. In this paper, we develop a scheduling approach that takes into account, at the beginning of the planning horizon, the possibility that scheduling decisions can be adjusted. We demonstrate that the suggested approach can lead to better here-and-now decisions and better makespan guarantees. To that end, we develop the first mixed integer linear programming model for adjustable robust scheduling, and a scalable two-stage approximation heuristic, where we minimize the worst-case makespan. Using this model, we show via a numerical study that adjustable scheduling leads to solutions with better and more stable makespan realizations compared to static approaches.
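    The static-versus-adjustable contrast in this abstract can be illustrated with a toy sketch (our own illustration, not the paper's MILP model or heuristic): a static schedule commits tasks to machines up front, while an adjustable policy dispatches the next task whenever a machine becomes idle; we compare worst-case makespans using the upper bounds of the uncertain durations.

```python
import heapq

def static_makespan(assignment, upper):
    # assignment: one list of task indices per machine, fixed up front
    return max(sum(upper[t] for t in tasks) for tasks in assignment)

def adjustable_makespan(order, upper, n_machines):
    # Greedy list scheduling: whenever a machine becomes idle it takes the
    # next remaining task, mimicking rescheduling as information arrives.
    finish = [0.0] * n_machines
    heapq.heapify(finish)
    for t in order:
        idle_at = heapq.heappop(finish)
        heapq.heappush(finish, idle_at + upper[t])
    return max(finish)

upper = [4, 3, 3, 2, 2]                      # worst-case task durations
static = static_makespan([[0, 1, 2], [3, 4]], upper)
adjustable = adjustable_makespan([0, 1, 2, 3, 4], upper, 2)
print(static, adjustable)
```

    On this instance the fixed assignment is load-imbalanced in the worst case (makespan 10), while the adjustable dispatch policy rebalances as machines free up (makespan 8) - the kind of improvement the paper quantifies rigorously.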

  • Publication . Preprint . Article . 2023 . Embargo End Date: 01 Jan 2020
    Open Access
    Authors: 
    Ron Levie; Haim Avron; Gitta Kutyniok;
    Publisher: arXiv

    We study signal processing tasks in which the signal is mapped via some generalized time-frequency transform to a higher dimensional time-frequency space, processed there, and synthesized to an output signal. We show how to approximate such methods using a quasi-Monte Carlo (QMC) approach. We consider cases where the time-frequency representation is redundant, having feature axes in addition to the time and frequency axes. The proposed QMC method allows sampling such redundant time-frequency representations both efficiently and evenly. Indeed, 1) the number of samples required for a certain accuracy is log-linear in the resolution of the signal space, and depends only weakly on the dimension of the redundant time-frequency space, and 2) the quasi-random samples have low discrepancy, so they are spread evenly in the redundant time-frequency space. One example of such redundant representation is the localizing time-frequency transform (LTFT), where the time-frequency plane is enhanced by a third axis. This higher dimensional time-frequency space improves the quality of some time-frequency signal processing tasks, like the phase vocoder (an audio signal processing effect). Since the computational complexity of the QMC is log-linear in the resolution of the signal space, this higher dimensional time-frequency space does not degrade the computational complexity of the proposed QMC method. The proposed QMC method is more efficient than standard Monte Carlo methods, since the deterministic QMC sample points are optimally spread in the time-frequency space, while random samples are not.
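    As a minimal illustration of the quasi-Monte Carlo idea behind this abstract (our own sketch; the LTFT pipeline itself is not reproduced), deterministic low-discrepancy Halton points estimate an integral over the unit square, standing in for integration over a time-frequency domain:

```python
import random

def halton(i, base):
    # Van der Corput radical-inverse of index i in the given base;
    # pairing bases 2 and 3 gives a 2-D low-discrepancy (Halton) sequence.
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def estimate(points):
    # Estimate the integral of f(x, y) = x*y over [0,1]^2 (exact value 0.25).
    return sum(x * y for x, y in points) / len(points)

n = 2048
qmc_pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
random.seed(0)
mc_pts = [(random.random(), random.random()) for _ in range(n)]
print(abs(estimate(qmc_pts) - 0.25), abs(estimate(mc_pts) - 0.25))
```

    Because the Halton points are spread evenly, their integration error typically shrinks faster with n than the Monte Carlo error, which is the efficiency advantage the abstract describes.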

  • Publication . Preprint . Article . 2022
    Open Access English
    Authors: 
    Barak Hoffer; Nicolas Wainstein; Christopher M. Neumann; Eric Pop; Eilam Yalon; Shahar Kvatinsky;
    Project: EC | Real-PIM-System (757259)

    Stateful logic is a digital processing-in-memory technique that could address von Neumann memory bottleneck challenges while maintaining backward compatibility with standard von Neumann architectures. In stateful logic, memory cells are used to perform the logic operations without reading or moving any data outside the memory array. Stateful logic has been previously demonstrated using several resistive memory types, mostly with resistive RAM (RRAM). Here we present a new method to design stateful logic using a different resistive memory - phase change memory (PCM). We propose and experimentally demonstrate four logic gate types (NOR, IMPLY, OR, NIMP) using commonly used PCM materials. Our stateful logic circuits are different from previously proposed circuits due to the different switching mechanism and functionality of PCM compared to RRAM. Since the proposed stateful logic gates form a functionally complete set, these gates enable sequential execution of any logic function within the memory, paving the way to PCM-based digital processing-in-memory systems.
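    The "logic stays in the array" idea can be sketched as a purely behavioural toy model (our own abstraction; it assumes nothing about PCM or RRAM device physics): the output cell is initialized to logical 1 and conditionally switched to 0 when an input cell stores 1, so the result is written in place rather than read out.

```python
# Behavioural toy model of a stateful NOR gate: initialize, then
# conditionally switch the output cell based on the input cells' states.
def stateful_nor(cells, in_a, in_b, out):
    cells[out] = 1                      # initialization step (set output cell)
    if cells[in_a] == 1 or cells[in_b] == 1:
        cells[out] = 0                  # conditional switching step
    return cells[out]

mem = {"a": 0, "b": 0, "r": 0}          # a tiny "memory array"
truth = []
for a in (0, 1):
    for b in (0, 1):
        mem["a"], mem["b"] = a, b
        truth.append(stateful_nor(mem, "a", "b", "r"))
print(truth)
```

    The resulting truth table is that of NOR, which is functionally complete - the property the abstract invokes to conclude that any logic function can be executed sequentially inside the memory.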

  • Open Access
    Authors: 
    Nadia Figueroa; Haiwei Dong; Abdulmotaleb El Saddik;
    Publisher: Association for Computing Machinery (ACM)
    Country: Switzerland

    We propose a 6D RGB-D odometry approach that finds the relative camera pose between consecutive RGB-D frames by keypoint extraction and feature matching both on the RGB and depth image planes. Furthermore, we feed the estimated pose to the highly accurate KinectFusion algorithm, which uses a fast ICP (Iterative Closest Point) to fine-tune the frame-to-frame relative pose and fuse the depth data into a global implicit surface. We evaluate our method on a publicly available RGB-D SLAM benchmark dataset by Sturm et al. The experimental results show that our proposed reconstruction method, based solely on visual odometry and KinectFusion, outperforms state-of-the-art RGB-D SLAM systems in accuracy. Moreover, our algorithm outputs a ready-to-use polygon mesh (highly suitable for creating 3D virtual worlds) without any postprocessing steps.
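    The core of any such odometry step is recovering a rigid pose from matched keypoints. The following self-contained sketch (our own illustration, reduced to 2-D; the paper's pipeline works in 6D and refines with KinectFusion's ICP) recovers rotation and translation from point matches in closed form:

```python
import math

def estimate_pose_2d(src, dst):
    # Closed-form least-squares rigid fit (2-D analogue of Kabsch):
    # center both point sets, then recover the rotation angle from
    # summed dot and cross products, and the translation from centroids.
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s = c = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy
        bx, by = u - cdx, v - cdy
        c += ax * bx + ay * by          # dot products -> cosine term
        s += ax * by - ay * bx          # cross products -> sine term
    theta = math.atan2(s, c)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# Synthetic "frames": rotate keypoints by 30 degrees and translate them.
src = [(0, 0), (1, 0), (0, 2), (3, 1)]
a = math.radians(30)
dst = [(x * math.cos(a) - y * math.sin(a) + 0.5,
        x * math.sin(a) + y * math.cos(a) - 1.0) for x, y in src]
theta, tx, ty = estimate_pose_2d(src, dst)
print(math.degrees(theta), tx, ty)
```

    With noise-free matches the estimator recovers the pose exactly; in a real pipeline the matches come from feature descriptors and the estimate seeds ICP refinement.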

  • Publication . Preprint . Article . 2022
    Open Access English
    Authors: 
    Goldreich, Oded; Ron, Dana;
    Project: EC | VERICOMP (819702)

    We initiate a study of a new model of property testing that is a hybrid of testing properties of distributions and testing properties of strings. Specifically, the new model refers to testing properties of distributions, but these are distributions over huge objects (i.e., very long strings). Accordingly, the model accounts for the total number of local probes into these objects (resp., queries to the strings) as well as for the distance between objects (resp., strings). Specifically, the distance between distributions is defined as the earth mover's distance with respect to the relative Hamming distance between strings. We study the query complexity of testing in this new model, focusing on three directions. First, we try to relate the query complexity of testing properties in the new model to the sample complexity of testing these properties in the standard distribution testing model. Second, we consider the complexity of testing properties that arise naturally in the new model (e.g., distributions that capture random variations of fixed strings). Third, we consider the complexity of testing properties that were extensively studied in the standard distribution testing model: Two such cases are uniform distributions and pairs of identical distributions, where we obtain the following results. - Testing whether a distribution over n-bit long strings is uniform on some set of size m can be done with query complexity Õ(m/ε³), where ε > (log₂ m)/n is the proximity parameter. - Testing whether two distributions over n-bit long strings that have support size at most m are identical can be done with query complexity Õ(m^{2/3}/ε³). Both upper bounds are quite tight; that is, for ε = Ω(1), the first task requires Ω(m^c) queries for any c < 1 and n = ω(log m), whereas the second task requires Ω(m^{2/3}) queries.
Note that the query complexity of the first task is higher than the sample complexity of the corresponding task in the standard distribution testing model, whereas in the case of the second task the bounds almost match. LIPIcs, Vol. 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022), pages 78:1-78:19
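    For intuition about the uniformity-testing task mentioned above, here is the classic collision statistic from standard distribution testing (our own sketch; it ignores the per-string probe cost that the huge-object model additionally charges): a distribution that is uniform over m elements has pairwise collision probability about 1/m, while distributions far from uniform collide strictly more often.

```python
import random
from collections import Counter

def collision_rate(samples):
    # Fraction of ordered sample pairs that collide; a distribution that is
    # uniform over m elements gives about 1/m, skewed ones strictly more.
    n = len(samples)
    counts = Counter(samples)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

random.seed(1)
m = 100
uniform = [random.randrange(m) for _ in range(2000)]
# Skewed distribution: half the mass sits on a single element.
skewed = [0 if random.random() < 0.5 else random.randrange(m) for _ in range(2000)]
print(collision_rate(uniform), collision_rate(skewed))
```

    The uniform samples collide at a rate near 1/m = 0.01, while the skewed samples collide far more often, so thresholding the statistic separates the two cases.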

  • Open Access English
    Authors: 
    Omri Lesser; Yuval Oreg; Ady Stern;
    Project: EC | LEGOTOP (788715)

    Topological superconductivity in one dimension requires time-reversal symmetry breaking, but at the same time it is hindered by external magnetic fields. We offer a general prescription for inducing topological superconductivity in planar superconductor-normal-superconductor-normal-superconductor (SNSNS) Josephson junctions without applying any magnetic fields on the junctions. Our platform relies on two key ingredients: the three parallel superconductors form two SNS junctions with phase winding, and the Fermi velocities for the two spin branches transverse to the junction must be different from one another. The two phase differences between the three superconductors define a parameter plane which includes large topological regions. We analytically derive the critical curves where the topological phase transitions occur, and corroborate the result with a numerical calculation based on a tight-binding model. We further propose material platforms with unequal Fermi velocities, establishing the experimental feasibility of our approach. 5+10 pages, 3+8 figures

  • Open Access English
    Authors: 
    Augeri, Fanny; Butez, Raphael; Zeitouni, Ofer;
    Publisher: HAL CCSD
    Country: France
    Project: EC | LogCorrelatedFields (692452)

    We prove a central limit theorem for the logarithm of the characteristic polynomial of random Jacobi matrices. Our results cover the G$\beta$E models for $\beta>0$. Comment: Corrected a mistake in computation of centering, improved error estimates through section 4, various typos corrected

  • Open Access
    Authors: 
    Christian Boudreault; Hichem Eleuch; Michael Hilke; Richard MacKenzie;
    Publisher: American Physical Society (APS)
    Project: NSERC

    One of the most challenging problems for the realization of a scalable quantum computer is to design a physical device that keeps the error rate for each quantum processing operation low. These errors can originate from the accuracy of quantum manipulation, such as the sweeping of a gate voltage in solid state qubits or the duration of a laser pulse in optical schemes. Errors also result from decoherence, which is often regarded as more crucial in the sense that it is inherent to the quantum system, being fundamentally a consequence of the coupling to the external environment. Grouping small collections of qubits into clusters with symmetries can protect parts of the calculation from decoherence. We use 4-level cores with a straightforward generalization of discrete rotational symmetry, omega-rotation invariance, to encode pairs of coupled qubits and universal 2-qubit logical gates. We include quantum errors as a main source of decoherence, and show that symmetry makes logical operations particularly resilient to untimely anisotropic qubit rotations. We propose a scalable scheme for universal quantum computation where cores play the role of quantum-computational transistors, quansistors. Initialization and readout are achieved by coupling to leads. The external leads are explicitly considered and are assumed to be the other main source of decoherence. We show that quansistors can be dynamically decoupled from the leads by tuning their internal parameters, giving them the versatility required to act as controllable quantum memory units. With this dynamical decoupling, logical operations within quansistors are also symmetry-protected from unbiased noise in their parameters. We identify technologies that could implement omega-rotation invariance. Many of our results can be generalized to higher-level omega-rotation-invariant systems, or adapted to clusters with other symmetries. Comment: 23 pages, 19 figures

  • Publication . Preprint . Article . 2022 . Embargo End Date: 01 Jan 2020
    Open Access
    Authors: 
    Alexander Apelblat; Francesco Mainardi;
    Publisher: arXiv

    In this survey we discuss derivatives of the Wright functions (of the first and the second kind) with respect to parameters. Differentiation of these functions leads to infinite power series whose coefficients are quotients of the digamma (psi) and gamma functions. Only in a few cases is it possible to obtain the sums of these series in closed form. The functional form of the power series resembles that derived for the Mittag-Leffler functions. If the Wright functions are treated as generalized Bessel functions, differentiation operations can be expressed in terms of the Bessel functions and their derivatives with respect to the order. It is demonstrated that in many cases it is possible to derive the explicit form of the Mittag-Leffler functions by performing simple operations with the Laplace transforms of the Wright functions. The Laplace transform pairs of both kinds of the Wright functions are discussed for particular values of the parameters. Some transform pairs serve to obtain functional limits by applying the shifted Dirac delta function. Comment: 21 pages, 4 figures
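    The series underlying this survey can be summed directly. The sketch below (our own numerical illustration, not the survey's analysis) evaluates the Wright function of the first kind, W(λ, μ; z) = Σₖ zᵏ / (k! Γ(λk + μ)), and checks two known reductions: W(0, 1; z) = eᶻ, and W(1, 1; z) = I₀(2√z), the Bessel connection the abstract alludes to.

```python
import math

def wright(lam, mu, z, terms=60):
    # Direct partial sum of the defining series of the Wright function
    # W(lam, mu; z) = sum_k z**k / (k! * Gamma(lam*k + mu)).
    total = 0.0
    for k in range(terms):
        total += z ** k / (math.factorial(k) * math.gamma(lam * k + mu))
    return total

# W(0, 1; z) reduces to exp(z); W(1, 1; z) reduces to I_0(2*sqrt(z)).
print(wright(0.0, 1.0, 1.0), math.e)
print(wright(1.0, 1.0, 1.0))   # should be close to I_0(2) ~ 2.27959
```

    Truncating at 60 terms is more than enough here since the factorial and gamma factors make the tail vanish rapidly; for negative λ (the second-kind regime) the poles of Γ would need more care.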

  • Publication . Preprint . Article . Conference object . 2022
    Open Access
    Authors: 
    Arnold Filtser; Omrit Filtser; Matthew J. Katz;
    Publisher: Springer Science and Business Media LLC

    In the $(1+\varepsilon,r)$-approximate near-neighbor problem for curves (ANNC) under some distance measure $\delta$, the goal is to construct a data structure for a given set $\mathcal{C}$ of curves that supports approximate near-neighbor queries: Given a query curve $Q$, if there exists a curve $C\in\mathcal{C}$ such that $\delta(Q,C)\le r$, then return a curve $C'\in\mathcal{C}$ with $\delta(Q,C')\le(1+\varepsilon)r$. There exists an efficient reduction from the $(1+\varepsilon)$-approximate nearest-neighbor problem to ANNC, where in the former problem the answer to a query is a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)\cdot\delta(Q,C^*)$, where $C^*$ is the curve of $\mathcal{C}$ closest to $Q$. Given a set $\mathcal{C}$ of $n$ curves, each consisting of $m$ points in $d$ dimensions, we construct a data structure for ANNC that uses $n\cdot O(\frac{1}{\varepsilon})^{md}$ storage space and has $O(md)$ query time (for a query curve of length $m$), where the similarity between two curves is their discrete Fr\'echet or dynamic time warping distance. Our method is simple to implement, deterministic, and results in an exponential improvement in both query time and storage space compared to all previous bounds. Further, we also consider the asymmetric version of ANNC, where the length of the query curves is $k \ll m$, and obtain essentially the same storage and query bounds as above, except that $m$ is replaced by $k$. Finally, we apply our method to a version of approximate range counting for curves and achieve similar bounds.
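    The similarity measure this data structure is built on can be computed with a short dynamic program. The following sketch (our own illustration of the standard discrete Fréchet distance; the paper's contribution is the near-neighbor structure on top of it, not reproduced here) evaluates the distance between two point sequences:

```python
from math import dist
from functools import lru_cache

def discrete_frechet(P, Q):
    # Standard DP: c(i, j) is the discrete Frechet distance between the
    # prefixes P[..i] and Q[..j]; memoized via lru_cache.
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))
```

    For these two parallel curves the optimal traversal pairs corresponding points, giving distance 1.0; the DP runs in O(|P||Q|) time, which motivates preprocessing into a data structure when many queries must be answered.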
