The following is a list of publications about the Julia language, its standard library, Julia packages, and technical computing applications using code written in Julia.

The references are available for download as a BibTeX file.

We welcome additions to this list in the form of pull requests.

Julia language and standard library

1. Julia: A fresh approach to numerical computing. Jeff Bezanson, Alan Edelman, Stefan Karpinski, Viral B. Shah (2014)

Abstract: The Julia programming language is gaining enormous popularity. Julia was designed to be easy and fast. Most importantly, Julia shatters deeply established notions widely held in the applied community: 1. High-level, dynamic code has to be slow by some sort of law of nature. 2. It is sensible to prototype in one language and then recode in another language. 3. There are parts of a system for the programmer, and other parts best left untouched as they are built by the experts. Julia began with a deep understanding of the needs of the scientific programmer and the needs of the computer in mind. Bridging cultures that have often been distant, Julia combines expertise from computer science and computational science creating a new approach to scientific computing. This note introduces the programmer to the language and the underlying design theory. It invites the reader to rethink the fundamental foundations of numerical computing systems. In particular, there is the fascinating dance between specialization and abstraction. Specialization allows for custom treatment. We can pick just the right algorithm for the right circumstance and this can happen at runtime based on argument types (code selection via multiple dispatch). Abstraction recognizes what remains the same after differences are stripped away and ignored as irrelevant. The recognition of abstraction allows for code reuse (generic programming). A simple idea that yields incredible power. The Julia design facilitates this interplay in many explicit and subtle ways for machine performance and, most importantly, human convenience.
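
As a taste of that interplay, consider this minimal toy sketch (ours, not the paper's): one generic definition supplies the reusable abstraction, while multiple dispatch selects a specialized method whenever the argument type permits a better algorithm.

    # Abstraction: one generic definition, reused for any element type with + and zero
    mysum(xs) = reduce(+, xs; init = zero(eltype(xs)))

    # Specialization: multiple dispatch picks an O(1) formula for integer ranges
    mysum(xs::AbstractRange{<:Integer}) = (first(xs) + last(xs)) * length(xs) ÷ 2

    mysum([1.5, 2.5])     # generic method: 4.0
    mysum(1:1_000_000)    # specialized method, no loop: 500000500000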

2. Array operators using multiple dispatch: A design methodology for array implementations in dynamic languages. Jeff Bezanson, Jiahao Chen, Stefan Karpinski, Viral Shah, Alan Edelman (2014), ARRAY’14 Proceedings of the ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming, 56–61, doi:10.1145/2627373.2627383

Abstract: Arrays are such a rich and fundamental data type that they tend to be built into a language, either in the compiler or in a large low-level library. Defining this functionality at the user level instead provides greater flexibility for application domains not envisioned by the language designer. Only a few languages, such as C++ and Haskell, provide the necessary power to define n-dimensional arrays, but these systems rely on compile-time abstraction, sacrificing some flexibility. In contrast, dynamic languages make it straightforward for the user to define any behavior they might want, but at the possible expense of performance. As part of the Julia language project, we have developed an approach that yields a novel trade-off between flexibility and compile-time analysis. The core abstraction we use is multiple dispatch. We have come to believe that while multiple dispatch has not been especially popular in most kinds of programming, technical computing is its killer application. By expressing key functions such as array indexing using multi-method signatures, a surprising range of behaviors can be obtained, in a way that is both relatively easy to write and amenable to compiler analysis. The compact factoring of concerns provided by these methods makes it easier for user-defined types to behave consistently with types in the standard library.
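
A hedged sketch of what user-level array definitions look like in practice (the type and its names are ours, not code from the paper): a handful of multi-method definitions is enough for a new type to participate in the standard library's generic array machinery.

    # A lazily evaluated constant array, defined entirely at the user level
    struct ConstArray{T,N} <: AbstractArray{T,N}
        value::T
        dims::NTuple{N,Int}
    end

    Base.size(A::ConstArray) = A.dims
    Base.getindex(A::ConstArray, i::Int) = A.value                              # linear indexing
    Base.getindex(A::ConstArray{T,N}, I::Vararg{Int,N}) where {T,N} = A.value   # cartesian indexing

    A = ConstArray(2.5, (3, 4))
    A[2, 3]    # 2.5 (dispatch selects the cartesian-index method)
    sum(A)     # 30.0 (generic code in Base works unchanged on the new type)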

3. Julia: A fast dynamic language for technical computing. Jeff Bezanson, Stefan Karpinski, Viral B. Shah, Alan Edelman (2012)

Abstract: Dynamic languages have become popular for scientific computing. They are generally considered highly productive, but lacking in performance. This paper presents Julia, a new dynamic language for technical computing, designed for performance from the beginning by adapting and extending modern programming language techniques. A design based on generic functions and a rich type system simultaneously enables an expressive programming model and successful type inference, leading to good performance for a wide range of programs. This makes it possible for much of the Julia library to be written in Julia itself, while also incorporating best-of-breed C and Fortran libraries.

Publications relating to Julia packages


4. Symbolic manipulation of flows of nonlinear evolution equations, with application in the analysis of split-step time integrators. Winfried Auzinger, Harald Hofstätter, Othmar Koch (2016), Computer Algebra in Scientific Computing: 18th International Workshop, CASC 2016, Bucharest, Romania, September 19-23, 2016, Proceedings, 9890:30–42, doi:10.1007/978-3-319-45641-6_4

Abstract: We describe a package realized in the Julia programming language which performs symbolic manipulations applied to nonlinear evolution equations, their flows, and commutators of such objects. This tool was employed to perform contrived computations arising in the analysis of the local error of operator splitting methods. It enabled the proof of the convergence of the basic method and of the asymptotical correctness of a defect-based error estimator. The performance of our package is illustrated on several examples.


5. CauseMap: Fast inference of causality from complex time series. M. Cyrus Maher, Ryan D. Hernandez (2015), doi:10.7287/peerj.preprints.583v2

Abstract: Background: Establishing health-related causal relationships is a central pursuit in biomedical research. Yet, the interdependent non-linearity of biological systems renders causal dynamics laborious and at times impractical to disentangle. This pursuit is further impeded by the dearth of time series that are sufficiently long to observe and understand recurrent patterns of flux. However, as data generation costs plummet and technologies like wearable devices democratize data collection, we anticipate a coming surge in the availability of biomedically-relevant time series data. Given the life-saving potential of these burgeoning resources, it is critical to invest in the development of open source software tools that are capable of drawing meaningful insight from vast amounts of time series data. Results: Here we present CauseMap, the first open source implementation of convergent cross mapping (CCM), a method for establishing causality from long time series data (> 25 observations). Compared to existing time series methods, CCM has the advantage of being model-free and robust to unmeasured confounding that could otherwise induce spurious associations. CCM builds on Takens’ Theorem, a well-established result from dynamical systems theory that requires only mild assumptions. This theorem allows us to reconstruct high dimensional system dynamics using a time series of only a single variable. These reconstructions can be thought of as shadows of the true causal system. If the reconstructed shadows can predict points from the opposing time series, we can infer that the corresponding variables are providing views of the same causal system, and so are causally related. Unlike traditional metrics, this test can establish the directionality of causation, even in the presence of feedback loops. Furthermore, since CCM can extract causal relationships from time series of, e.g., a single individual, it may be a valuable tool for personalized medicine. We implement CCM in Julia, a high-performance programming language designed for facile technical computing. Our software package, CauseMap, is platform-independent and freely available as an official Julia package. Conclusions: CauseMap is an efficient implementation of a state-of-the-art algorithm for detecting causality from time series data. We believe this tool will be a valuable resource for biomedical research and personalized medicine.
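
The reconstruction step that CCM builds on is easy to state in code. Below is a minimal delay-coordinate embedding (our toy sketch, not CauseMap's API): it maps a scalar series to the E-dimensional "shadow" vectors on which CCM then performs nearest-neighbour cross-prediction.

    # Delay-coordinate embedding (Takens): map a scalar series x to the
    # E-dimensional shadow vectors (x[t], x[t+τ], ..., x[t+(E-1)τ])
    function embed(x::AbstractVector, E::Int, τ::Int)
        n = length(x) - (E - 1) * τ
        return [[x[t + (e - 1) * τ] for e in 1:E] for t in 1:n]
    end

    x = sin.(0.1 .* (1:200))
    M = embed(x, 3, 2)    # the shadow manifold of x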

Numerical optimization and operations research


6. Convex optimization in Julia. Madeleine Udell, Karanveer Mohan, David Zeng, Jenny Hong, Steven Diamond, Stephen Boyd (2014), HPTCDL’14 Proceedings of the 1st Workshop on High Performance Technical Computing in Dynamic Languages, 18–28, doi:10.1109/HPTCDL.2014.5

Abstract: This paper describes Convex, a convex optimization modeling framework in Julia. Convex translates problems from a user-friendly functional language into an abstract syntax tree describing the problem. This concise representation of the global structure of the problem allows Convex to infer whether the problem complies with the rules of disciplined convex programming (DCP), and to pass the problem to a suitable solver. These operations are carried out in Julia using multiple dispatch, which dramatically reduces the time required to verify DCP compliance and to parse a problem into conic form. Convex then automatically chooses an appropriate backend solver to solve the conic form problem.
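
For flavor, here is a small DCP problem expressed with Convex.jl, roughly as in the package documentation for recent versions (the solver choice is illustrative):

    using Convex, SCS

    A, b = randn(20, 5), randn(20)
    x = Variable(5)
    problem = minimize(sumsquares(A * x - b), [x >= 0])   # DCP-compliant by construction
    solve!(problem, SCS.Optimizer)
    problem.optval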


7. Parallel algebraic modeling for stochastic optimization. Joey Huchette, Miles Lubin, Cosmin Petra (2014), HPTCDL’14 Proceedings of the 1st Workshop on High Performance Technical Computing in Dynamic Languages, 29–35, doi:10.1109/HPTCDL.2014.6

Abstract: We present scalable algebraic modeling software, StochJuMP, for stochastic optimization as applied to power grid economic dispatch. It enables the user to express the problem in a high-level algebraic format with minimal boilerplate. StochJuMP allows efficient parallel model instantiation across nodes and efficient data localization. Computational results are presented showing that the model construction is efficient, requiring less than one percent of solve time. StochJuMP is configured with the parallel interior-point solver PIPS-IPM but is sufficiently generic to allow straightforward adaptation to other solvers.


8. Computing in operations research using Julia. Miles Lubin, Iain Dunning (2015), INFORMS Journal on Computing, 27:238–248, doi:10.1287/ijoc.2014.0623

Abstract: The state of numerical computing is currently characterized by a divide between highly efficient yet typically cumbersome low-level languages such as C, C++, and Fortran and highly expressive yet typically slow high-level languages such as Python and MATLAB. This paper explores how Julia, a modern programming language for numerical computing which claims to bridge this divide by incorporating recent advances in language and compiler design (such as just-in-time compilation), can be used for implementing software and algorithms fundamental to the field of operations research, with a focus on mathematical optimization. In particular, we demonstrate algebraic modeling for linear and nonlinear optimization and a partial implementation of a practical simplex code. Extensive cross-language benchmarks suggest that Julia is capable of obtaining state-of-the-art performance.
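
As an illustration of the algebraic modeling style the paper demonstrates, here is a small linear program in JuMP's modern syntax (which postdates the paper's own code):

    using JuMP, GLPK

    model = Model(GLPK.Optimizer)
    @variable(model, 0 <= x <= 2)
    @variable(model, 0 <= y <= 30)
    @objective(model, Max, 5x + 3y)
    @constraint(model, x + 5y <= 3)
    optimize!(model)
    value(x), value(y)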


9. Second-order switching time optimization for switched dynamical systems. B. Stellato, S. Ober-Blöbaum, P. J. Goulart (2016)

Abstract: Switching time optimization arises in finite-horizon optimal control for switched systems where, given a sequence of continuous dynamics, one minimizes a cost function with respect to the switching times. We propose an efficient method for computing the optimal switching times for switched linear and nonlinear systems. A novel second-order optimization algorithm is introduced where, at each iteration, the dynamics are linearized over an underlying time grid to compute the cost function, the gradient and the Hessian efficiently. With the proposed method, the most expensive operations at each iteration are shared between the cost function and its derivatives, thereby greatly reducing the computational burden. We implemented the algorithm in the Julia package SwitchTimeOpt allowing the user to easily solve switching time optimization problems. In the case of linear dynamics, many operations can be further simplified and benchmarks show that our approach is able to provide optimal solutions in just a few ms. In the case of nonlinear dynamics, two examples show that our method provides optimal solutions with up to two orders of magnitude time reductions over state-of-the-art approaches.

Numerical linear algebra


10. The automatic solution of partial differential equations using a global spectral method. Alex Townsend, Sheehan Olver (2014)


11. A practical framework for infinite-dimensional linear algebra. Sheehan Olver, Alex Townsend (2014), HPTCDL’14 Proceedings of the 1st Workshop on High Performance Technical Computing in Dynamic Languages, 57–62, doi:10.1109/HPTCDL.2014.10



12. Accurate eigenvalue decomposition of real symmetric arrowhead matrices and applications. N. Jakovcevic Stor, I. Slapnicar, J. L. Barlow (2015), Linear Algebra and Its Applications, 464:62–89, doi:10.1016/j.laa.2013.10.007


13. Forward stable eigenvalue decomposition of rank-one modifications of diagonal matrices. N. Jakovcevic Stor, I. Slapnicar, J. L. Barlow (2015), Linear Algebra and Its Applications, 487:301–315, doi:10.1016/j.laa.2015.09.025

Abstract: We present a new algorithm for solving an eigenvalue problem for a real symmetric matrix which is a rank-one modification of a diagonal matrix. The algorithm computes each eigenvalue and all components of the corresponding eigenvector with high relative accuracy in O(n) operations. The algorithm is based on a shift-and-invert approach. Only a single element of the inverse of the shifted matrix eventually needs to be computed with double the working precision. Each eigenvalue and the corresponding eigenvector can be computed separately, which makes the algorithm adaptable for parallel computing. Our results extend to the complex Hermitian case. The algorithm is similar to the algorithm for solving the eigenvalue problem for real symmetric arrowhead matrices from N. Jakovčević Stor et al. (2015) [16].

14. Forward stable computation of roots of real polynomials with only real distinct roots. N. Jakovcevic Stor, I. Slapnicar (2015), 1–15

Abstract: As shown in (Fiedler, 1990), any polynomial can be expressed as a characteristic polynomial of a complex symmetric arrowhead matrix. This expression is not unique. If the polynomial is real with only real distinct roots, the matrix can be chosen real. By using the accurate forward stable algorithm for computing eigenvalues of real symmetric arrowhead matrices from (Jakovcevic Stor, Slapnicar, Barlow, 2015), we derive a forward stable algorithm for computation of roots of such polynomials in O(n^2) operations. The algorithm computes each root to almost full accuracy. In some cases, the algorithm invokes extended precision routines, but only in the non-iterative part. Our examples include numerically difficult problems, like the well-known Wilkinson’s polynomials. Our algorithm compares favourably to other methods for polynomial root-finding, like MPSolve or Newton’s method.
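
The idea of computing roots as matrix eigenvalues fits in a few lines. The toy version below uses the ordinary companion matrix and a dense eigensolver; the paper's algorithm instead works with a symmetric arrowhead representation and a forward stable O(n^2) eigensolver.

    using LinearAlgebra

    # Roots of the monic polynomial x^n + c[n]x^(n-1) + ... + c[2]x + c[1]
    # as eigenvalues of its companion matrix (coefficients lowest order first)
    function companion_roots(c::Vector{Float64})
        n = length(c)
        C = zeros(n, n)
        C[2:end, 1:end-1] = Matrix{Float64}(I, n - 1, n - 1)
        C[:, end] = -c
        return eigvals(C)
    end

    companion_roots([-6.0, 11.0, -6.0])   # roots of x^3 - 6x^2 + 11x - 6: 1, 2, 3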


15. Sampling unitary invariant ensembles. Sheehan Olver, Raj Rao Nadakuditi, Thomas Trogdon (2014)

Abstract: We develop an algorithm for sampling from the unitary invariant random matrix ensembles. The algorithm is based on the representation of their eigenvalues as a determinantal point process whose kernel is given in terms of orthogonal polynomials. Using this algorithm, statistics beyond those known through analysis are calculable through Monte Carlo simulation. Unexpected phenomena are observed in the simulations.


16. Matrix Depot: An extensible test matrix collection for Julia. Weijian Zhang, Nicholas J. Higham (2015), MIMS EPrint, 25

Abstract: Matrix Depot is a Julia software package that provides easy access to a large and diverse collection of test matrices. Its novelty is threefold. First, it is extensible by the user, and so can be adapted to include the user’s own test problems. In doing so it facilitates experimentation and makes it easier to carry out reproducible research. Second, it amalgamates in a single framework three different types of matrix collections, comprising parametrized test matrices, regularization test problems, and real-life sparse matrix data. Third, it fully exploits the Julia language. It uses multiple dispatch to help provide a simple interface and, in particular, to allow matrices to be generated in any of the numeric data types supported by the language.
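
Typical usage looks roughly like this (the calls follow the package README of the time; exact signatures vary between versions):

    using MatrixDepot

    A = matrixdepot("hilb", 5)    # 5×5 Hilbert matrix, Float64 by default
    # a type argument selects another numeric type, e.g.
    B = matrixdepot("hilb", Float32, 5)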

Numerical quadrature


17. On the use of conformal maps for the acceleration of convergence of the trapezoidal rule and sinc numerical methods. Richard Mikaël Slevinsky, Sheehan Olver (2014)

Abstract: We investigate the use of conformal maps for the acceleration of convergence of the trapezoidal rule and Sinc numerical methods. The conformal map is a polynomial adjustment to the sinh map, and allows the treatment of a finite number of singularities in the complex plane. In the case where locations are unknown, the so-called Sinc-Padé approximants are used to provide approximate results. This adaptive method is shown to have almost the same convergence properties. We use the conformal maps to generate high accuracy solutions to several challenging integrals, nonlinear waves, and multidimensional integrals.


18. Fast computation of Gauss quadrature nodes and weights on the whole real line. Alex Townsend, Thomas Trogdon, Sheehan Olver (2014)

Abstract: A fast and accurate algorithm for the computation of Gauss-Hermite and generalized Gauss-Hermite quadrature nodes and weights is presented. The algorithm is based on Newton’s method with carefully selected initial guesses for the nodes and a fast evaluation scheme for the associated orthogonal polynomial. In the Gauss-Hermite case the initial guesses and evaluation scheme rely on explicit asymptotic formulas. For generalized Gauss-Hermite, the initial guesses are furnished by sampling a certain equilibrium measure and the associated polynomial evaluated via a Riemann-Hilbert reformulation. In both cases the n-point quadrature rule is computed in O(n) operations to an accuracy that is close to machine precision. For sufficiently large n, some of the quadrature weights have a value less than the smallest positive normalized floating-point number in double precision and we exploit this fact to achieve a complexity as low as O(sqrt(n)).
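
A stripped-down sketch of the Newton iteration (ours, and deliberately simplified: the initial guesses below come from the Golub-Welsch Jacobi matrix rather than from the asymptotic formulas that give the paper's algorithm its O(n) complexity):

    using LinearAlgebra

    # Evaluate H_n(x) and H_n'(x) by the three-term recurrence (n ≥ 1)
    function hermite(n, x)
        h0, h1 = 1.0, 2x
        for k in 1:n-1
            h0, h1 = h1, 2x * h1 - 2k * h0
        end
        return h1, 2n * h0    # H_n'(x) = 2n H_{n-1}(x)
    end

    function gausshermite_sketch(n)   # assumes n ≥ 2
        x = eigvals(SymTridiagonal(zeros(n), sqrt.((1:n-1) ./ 2)))  # initial guesses
        for _ in 1:3, i in 1:n        # a few Newton polishing steps per node
            h, dh = hermite(n, x[i])
            x[i] -= h / dh
        end
        w = [2.0^(n - 1) * prod(1.0:n) * sqrt(pi) / (n * hermite(n - 1, xi)[1])^2 for xi in x]
        return x, w
    end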

Technical computing applications

19. RNA editing generates cellular subsets with diverse sequence within populations. Dewi Harjanto, Theodore Papamarkou, Chris J. Oates, Violeta Rayon-Estrada, F. Nina Papavasiliou, Anastasia Papavasiliou (2016), Nature Communications, 7:


Code available in CellwiseEditingDifferentiation.jl.

20. Introduction to the equilibrium Green’s functions: Condensed matter examples with numerical implementations. Mariana M. Odashima, Beatriz G. Prado, E. Vernek (2016), 1–20

Abstract: The Green’s function method has applications in several fields in Physics, from classical differential equations to quantum many-body problems. In the quantum context, Green’s functions are correlation functions, from which it is possible to extract information from the system under study, such as the density of states, relaxation times and response functions. Despite its power and versatility, it is known as a laborious and sometimes cumbersome method. Here we introduce the equilibrium Green’s functions and the equation-of-motion technique, exemplifying the method in discrete lattices of non-interacting electrons. We start with simple models, such as the two-site molecule, the infinite and semi-infinite one-dimensional chains, and the two-dimensional ladder. Numerical implementations are developed via the recursive Green’s function, implemented in Julia, an open-source, efficient and easy-to-learn scientific language. We also present a new variation of the surface recursive Green’s function method, which can be of interest when simulating simultaneously the properties of surface and bulk.
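
The recursive scheme is compact enough to sketch here. Below, the surface Green's function of a semi-infinite one-dimensional chain is obtained by iterating its self-consistency relation to convergence (our notation, not the authors' code):

    # Surface Green's function of a semi-infinite 1D tight-binding chain with
    # on-site energy ε0 and hopping t: iterate g = 1 / (E + iη - ε0 - t²g)
    function surface_green(E; t = 1.0, ε0 = 0.0, η = 1e-6, tol = 1e-12)
        z, g = E + im * η, 0.0 + 0.0im
        for _ in 1:100_000
            gnew = 1 / (z - ε0 - t^2 * g)
            abs(gnew - g) < tol && return gnew
            g = gnew
        end
        return g
    end

    ldos(E) = -imag(surface_green(E)) / π    # local density of states at the surface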

21. Experimental multi-threading support for the Julia programming language. Tobias Knopp (2014), HPTCDL’14 Proceedings of the 1st Workshop on High Performance Technical Computing in Dynamic Languages, 1–5, doi:10.1109/HPTCDL.2014.11

Abstract: Julia is a young programming language that is designed for technical computing. Although Julia is dynamically typed it is very fast and usually yields C speed by utilizing a just-in-time compiler. Still, Julia has a simple syntax that is similar to Matlab, which is widely known as an easy-to-use programming environment. While Julia is very versatile and provides asynchronous programming facilities in the form of tasks (coroutines) as well as distributed multi-process parallelism, one missing feature is shared memory multi-threading. In this paper we present our experiment on introducing multi-threading support in the Julia programming environment. While our implementation has some restrictions that have to be taken into account when using threads, the results are promising, yielding almost full speedup for perfectly parallelizable tasks.
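
The experimental support described here eventually matured into Julia's built-in threading. In today's Julia a perfectly parallelizable loop can be written as follows (start Julia with, e.g., julia -t 4):

    using Base.Threads

    function parallel_square!(y, x)
        @threads for i in eachindex(x, y)   # iterations are split across threads
            y[i] = x[i]^2
        end
        return y
    end

    x = rand(10^7)
    parallel_square!(similar(x), x)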

22. Julia and the numerical homogenization of PDEs. Clemens Heitzinger, Gerhard Tulzer (2014), HPTCDL’14 Proceedings of the 1st Workshop on High Performance Technical Computing in Dynamic Languages, 36–40, doi:10.1109/HPTCDL.2014.8

Abstract: We discuss the advantages of using Julia for solving multiscale problems involving partial differential equations (PDEs). Multiscale problems are problems where the coefficients of a PDE oscillate rapidly on a microscopic length scale, but solutions are sought on a much larger, macroscopic domain. Solving multiscale problems requires both a theoretic result, i.e., a homogenization result yielding effective coefficients, as well as numerical solutions of the PDE at the microscopic and the macroscopic length scales. Numerical homogenization of PDEs with stochastic coefficients is especially computationally expensive. Under certain assumptions, effective coefficients can be found, but their calculation involves subtle numerical problems. The computational cost is huge due to the generally large number of stochastic dimensions. Multiscale problems arise in many applications, e.g., in uncertainty quantification, in the rational design of nanoscale sensors, and in the rational design of materials. Our code for the numerical stochastic homogenization of elliptic problems is implemented in Julia. Since multiscale problems pose new numerical problems, it is in any case necessary to develop new numerical codes. Julia is a dynamic language inspired by the Lisp family of languages, it is open-source, and it provides native-code compilation, access to highly optimized linear-algebra routines, support for parallel computing, and a powerful macro system. We describe our experience in using Julia and discuss the advantages of Julia’s features in this problem domain.

23. Parallel prefix polymorphism permits parallelization, presentation & proof. Jiahao Chen, Alan Edelman (2014), HPTCDL’14 Proceedings of the 1st Workshop on High Performance Technical Computing in Dynamic Languages, 47–56, doi:10.1109/HPTCDL.2014.9

Abstract: Polymorphism in programming languages enables code reuse. Here, we show that polymorphism has broad applicability far beyond computations for technical computing: parallelism in distributed computing, presentation of visualizations of runtime data flow, and proofs for formal verification of correctness. The ability to reuse a single codebase for all these purposes provides new ways to understand and verify parallel programs.
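
The essence of the claim fits in a few lines. The sketch below (ours, and serial for clarity; the paper uses a parallel Brent-Kung prefix network) writes one scan against an arbitrary associative operator and reuses it verbatim for sums, products, and string concatenation:

    # One generic in-place inclusive scan over any associative operator ⊕
    function prefix!(⊕, y::AbstractVector)
        for i in 2:length(y)
            y[i] = ⊕(y[i-1], y[i])
        end
        return y
    end

    prefix!(+, [1, 2, 3, 4])      # [1, 3, 6, 10]: running sums
    prefix!(*, [1, 2, 3, 4])      # [1, 2, 6, 24]: running products
    prefix!(*, ["p", "q", "r"])   # ["p", "pq", "pqr"]: * concatenates strings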

24. Generalized low rank models. Madeleine Udell, Corinne Horn, Reza Zadeh, Stephen Boyd (2014)

Abstract: Principal components analysis (PCA) is a well-known technique for approximating a data set represented by a matrix by a low rank matrix. Here, we extend the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types. This framework encompasses many well known techniques in data analysis, such as nonnegative matrix factorization, matrix completion, sparse and robust PCA, k-means, k-SVD, and maximum margin matrix factorization. The method handles heterogeneous data sets, and leads to coherent schemes for compressing, denoising, and imputing missing entries across all data types simultaneously. It also admits a number of interesting interpretations of the low rank factors, which allow clustering of examples or of features. We propose several parallel algorithms for fitting generalized low rank models, and describe implementations and numerical results.
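
These ideas are implemented in the authors' LowRankModels.jl package. A rough usage sketch follows (the constructor signature is taken from the package README and may differ between versions):

    using LowRankModels

    A = randn(100, 50)
    k = 5
    # quadratic loss with no regularization recovers ordinary PCA
    glrm = GLRM(A, QuadLoss(), ZeroReg(), ZeroReg(), k)
    X, Y, ch = fit!(glrm)    # A is approximated by X'Y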

25. Computing energy eigenvalues of anharmonic oscillators using the double exponential sinc collocation method. Philippe Gaudreau, Richard Slevinsky, Hassan Safouhi (2014)

Abstract: A quantum anharmonic oscillator is defined by the Hamiltonian H, where the potential is given by V. Using the Sinc collocation method combined with the double exponential transformation, we develop a method to efficiently compute highly accurate approximations of energy eigenvalues for anharmonic oscillators. Convergence properties of the proposed method are presented. Using the principle of minimal sensitivity, we introduce an alternate expression for the mesh size for the Sinc collocation method which improves considerably the accuracy in computing eigenvalues for potentials with multiple wells. We apply our method to a number of potentials including potentials with multiple wells. The numerical results section clearly illustrates the high efficiency and accuracy of the proposed method. All our codes are written using the programming language Julia and are available upon request.

26. Bayesian estimation of discretely observed multi-dimensional diffusion processes using guided proposals. Frank van der Meulen, Moritz Schauer (2014)

Abstract: Bayesian estimation of parameters of a diffusion based on discrete time observations poses a difficult problem due to the lack of a closed form expression for the likelihood. Data-augmentation has been proposed for obtaining draws from the posterior distribution of the parameters. Within this approach, the discrete time observations are augmented with diffusion bridges connecting these observations. This poses two challenges: (i) efficiently generating diffusion bridges; (ii) if unknown parameters appear in the diffusion coefficient, then direct implementation of data-augmentation results in an induced Markov chain which is reducible. In this paper we show how both challenges can be addressed in continuous time (before discretisation) by using guided proposals. These are Markov processes with dynamics described by the stochastic differential equation of the diffusion process with an additional term added to the drift coefficient to guide the process to hit the right end point of the bridge. The form of these proposals naturally provides a mapping that decouples the dependence between the diffusion coefficient and diffusion bridge using the driving Brownian motion of the proposals. As the guiding term has a singularity at the right end point, care is needed when discretisation is applied for implementation purposes. We show that this problem can be dealt with by appropriately time changing and scaling of the guided proposal process. In two examples we illustrate the performance of the algorithms we propose. The second of these concerns a diffusion approximation of a chemical reaction network with a four-dimensional diffusion driven by an eight-dimensional Brownian motion.

Code available in ChemicalLangevin.jl.
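
A stripped-down illustration of the guiding idea (our toy sketch, not the authors' construction): for Brownian motion, adding the drift (v - x)/(T - t) produces a bridge that hits v at time T. Note that the drift is singular at the endpoint, which is precisely the discretisation issue the paper addresses.

    # Euler scheme for a Brownian bridge from x0 at time 0 to v at time T,
    # using the guiding drift (v - x) / (T - t)
    function guided_bridge(x0, v, T, N)
        dt = T / N
        x = zeros(N + 1); x[1] = x0
        for k in 1:N-1
            t = (k - 1) * dt
            x[k+1] = x[k] + (v - x[k]) / (T - t) * dt + sqrt(dt) * randn()
        end
        x[end] = v    # the guiding drift is singular at t = T, so pin the endpoint
        return x
    end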

27. Annealed important sampling for models with latent variables. M.-N. Tran, C. Strickland, M. K. Pitt, R. Kohn (2014)

Abstract: This paper is concerned with Bayesian inference when the likelihood is analytically intractable but can be unbiasedly estimated. We propose an annealed importance sampling procedure for estimating expectations with respect to the posterior. The proposed algorithm is useful in cases where finding a good proposal density is challenging, and when estimates of the marginal likelihood are required. The effect of likelihood estimation is investigated, and the results provide guidelines on how to set up the precision of the likelihood estimation in order to optimally implement the procedure. The methodological results are empirically demonstrated in several simulated and real data examples.

28. Compositional security modelling. Tristan Caulfield, David Pym, Julian Williams (2014), Human Aspects of Information Security, Privacy, and Trust, 8533:233–245, doi:10.1007/978-3-319-07620-1_21

Abstract: Security managers face the challenge of formulating and implementing policies that deliver their desired system security postures — for example, their preferred balance of confidentiality, integrity, and availability — within budget (monetary and otherwise). In this paper, we describe a security modelling methodology, grounded in rigorous mathematical systems modelling and economics, that captures the managers’ policies and the behavioural choices of agents operating within the system. Models are executable, so allowing systematic experimental exploration of the system-policy co-design space, and compositional, so managing the complexity of large-scale systems.

29. Stochastic collapsed variational Bayesian inference for latent Dirichlet allocation. James Foulds, Levi Boyles, Christopher DuBois, Padhraic Smyth, Max Welling (2013), Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 446–454, doi:10.1145/2487575.2487697

Abstract: There has been an explosion in the amount of digital text information available in recent years, leading to challenges of scale for traditional inference algorithms for topic models. Recent advances in stochastic variational inference algorithms for latent Dirichlet allocation (LDA) have made it feasible to learn topic models on very large-scale corpora, but these methods do not currently take full advantage of the collapsed representation of the model. We propose a stochastic algorithm for collapsed variational Bayesian inference for LDA, which is simpler and more efficient than the state of the art method. In experiments on large-scale text corpora, the algorithm was found to converge faster and often to a better solution than previous methods. Human-subject experiments also demonstrated that the method can learn coherent topics in seconds on small corpora, facilitating the use of topic models in interactive document analysis software.

30. Online learning of nonparametric mixture models via sequential variational approximation. Dahua Lin (2013), Advances in Neural Information Processing Systems 26, 395–403

Abstract: Reliance on computationally expensive algorithms for inference has been limiting the use of Bayesian nonparametric models in large scale applications. To tackle this problem, we propose a Bayesian learning algorithm for DP mixture models. Instead of following the conventional paradigm – random initialization plus iterative update, we take a progressive approach. Starting with a given prior, our method recursively transforms it into an approximate posterior through sequential variational approximation. In this process, new components will be incorporated on the fly when needed. The algorithm can reliably estimate a DP mixture model in one pass, making it particularly suited for applications with massive data. Experiments on both synthetic data and real datasets demonstrate remarkable improvement on efficiency – orders of magnitude speed-up compared to the state-of-the-art.

31. Novel algebras for advanced analytics in Julia. Viral B. Shah, Alan Edelman, Stefan Karpinski, Jeff Bezanson, Jeremy Kepner (2013), High Performance Extreme Computing Conference (HPEC), 2013 IEEE, 1–4, doi:10.1109/HPEC.2013.6670347

Abstract: A linear algebraic approach to graph algorithms that exploits the sparse adjacency matrix representation of graphs can provide a variety of benefits. These benefits include syntactic simplicity, easier implementation, and higher performance. One way to employ linear algebra techniques for graph algorithms is to use a broader definition of matrix and vector multiplication. We demonstrate through the use of the Julia language system how easy it is to explore semirings using linear algebraic methodologies.
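
For instance (a toy sketch in the spirit of the paper, not code from it), passing (min, +) in place of (+, *) turns an ordinary matrix-vector product into one relaxation step of single-source shortest-path distances:

    # Generalized mat-vec: y[i] combines A[i,j] ⊗ x[j] over j with ⊕
    function semiring_matvec(⊕, ⊗, A, x)
        n, m = size(A)
        return [reduce(⊕, (⊗(A[i, j], x[j]) for j in 1:m)) for i in 1:n]
    end

    W = [0.0 3.0 Inf; 3.0 0.0 1.0; Inf 1.0 0.0]   # edge weights, Inf = no edge
    d = [0.0, Inf, Inf]                           # distances from vertex 1
    d = semiring_matvec(min, +, W, d)             # one Bellman-Ford relaxation: [0, 3, Inf]
    d = semiring_matvec(min, +, W, d)             # after two steps: [0, 3, 4]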

32. Fast and accurate multivariate Gaussian modeling of protein families: Predicting residue contacts and protein-interaction partners. Carlo Baldassi, Marco Zamparo, Christoph Feinauer, Andrea Procaccini, Riccardo Zecchina, Martin Weigt, Andrea Pagnani (2014), PLoS ONE, 9:e92721, doi:10.1371/journal.pone.0092721

Abstract: In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available online.

Code available in GaussDCA.jl.

33. Approximate Bayesian computation for estimating number concentrations of monodisperse nanoparticles in suspension by optical microscopy. Magnus Röding, Elisa Zagato, Katrien Remaut, Kevin Braeckmans (2016), Physical Review E, 93:063311

Abstract: We present an approximate Bayesian computation scheme for estimating number concentrations of monodisperse diffusing nanoparticles in suspension by optical particle tracking microscopy. The method is based on the probability distribution of the time spent by a particle inside a detection region. We validate the method on suspensions of well-controlled reference particles. We illustrate its usefulness with an application in gene therapy, applying the method to estimate number concentrations of plasmid DNA molecules and the average number of DNA molecules complexed with liposomal drug delivery particles.

34. Computational high-throughput screening of fluid permeability in heterogeneous fiber materials. Magnus Röding, Erich Schuster, Katarina Logg, Malin Lundman, Per Bergström, Charlotta Hanson, Tobias Gebäck, Niklas Lorén (2016), Soft Matter, 12:6293–6299

Abstract: We explore computational high-throughput screening as a design strategy for heterogeneous, isotropic fiber materials. Fluid permeability, a key property in the design of soft porous materials, is systematically studied using a multi-scale lattice Boltzmann framework. After characterizing microscopic permeability as a function of solid volume fraction in the microstructure, we perform high-throughput computational screening of in excess of 35 000 macrostructures consisting of a continuous bulk interrupted by spherical/elliptical domains with either lower or higher microscopic permeability (hence with two distinct microscopic solid volume fractions and therefore two distinct microscopic permeabilities) to assess which parameters determine macroscopic permeability for a fixed average solid volume fraction. We conclude that the fractions of bulk and domains and the distribution of solid volume fraction between them are the primary determinants of macroscopic permeability, and that a substantial increase in permeability compared to the corresponding homogeneous material is attainable.


35. Parallelle abstracties voor het programmeren van GPU’s in Julia (Parallel abstractions for programming GPUs in Julia). Pieter Verstraete (2014)

Abstract: This master’s thesis explores the possibility to provide access to the computing power of a GPU from the high-level programming language Julia. An important requirement here is to keep the programmer’s productivity at the same high level as if Julia were used without a GPU. Indeed, very specialized and detailed technical knowledge is needed in order to program a GPU, making it complex and time-consuming. In many modern scientific domains quite a lot of brute computing power is required, but often these domains lack the technical expertise to use GPUs in an efficient manner. The purpose of this thesis is to provide access to a GPU from Julia in a way that shields the GPU details from the programmer. In a first step we define and implement in Julia abstractions that can be executed in parallel on the GPU. Next we adapt the Julia compiler such that it can translate these abstractions to GPU code. The resulting compiler infrastructure manages the GPU in a way that is transparent to the programmer. Finally we evaluate the abstractions and compiler infrastructure in the context of a concrete application, namely the trace transform.

Book chapters
