This page is designed to improve discoverability of projects. You can, for example, search this page for specific keywords and find all of the relevant projects.
MLJ is a machine learning framework for Julia aiming to provide a convenient way to use and combine a multitude of tools and models available in the Julia ML/Stats ecosystem.
MLJ is released under the MIT license and sponsored by the Alan Turing Institute.
Implement survival analysis models for use in the MLJ machine learning platform.
Difficulty. Moderate - hard. Duration. 350 hours
Survival/time-to-event analysis is an important field of Statistics concerned with understanding the distribution of events over time. Survival analysis presents a unique challenge as we are also interested in events that do not take place, which we refer to as 'censoring'. Survival analysis methods are important in many real-world settings, such as health care (disease prognosis), finance and economics (risk of default), commercial ventures (customer churn), engineering (component lifetime), and many more. This project aims to implement models for performing survival analysis with the MLJ machine learning framework.
mlr3proba is currently the most complete survival analysis interface; let's get SurvivalAnalysis.jl to the same standard - but learning from mistakes along the way.
Mentors. Sebastian Vollmer, Anthony Blaom.
Julia language fluency is essential.
Git-workflow familiarity is strongly preferred.
Some experience with survival analysis.
Familiarity with MLJ's API a plus.
A passing familiarity with machine learning goals and workflow is preferred.
You will work towards creating a survival analysis package with a range of metrics, capable of making distribution predictions for classical and ML models. You will bake in competing risks early, as well as prediction transformations, and include both left and interval censoring. You will code up basic models (Cox PH and AFT), as well as one ML model as a proof of concept (probably a decision tree, which is simplest, or Coxnet).
Specifically, you will:
Familiarize yourself with the training and evaluation of machine learning models in MLJ.
For SurvivalAnalysis.jl, implement the MLJ model interface (see the sketch after this list).
Consider explainability of SurvivalAnalysis.jl through SurvSHAP(t).
Develop a proof of concept for newer advanced survival analysis
models not currently implemented in Julia.
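For a flavour of what implementing the MLJ model interface involves, below is a deliberately toy sketch using MLJModelInterface.jl. `ToyExponentialSurvival` and its fitting rule are illustrative placeholders (they ignore censoring entirely) and are not the intended SurvivalAnalysis.jl design.

```julia
import MLJModelInterface as MMI
using Distributions, Statistics

# Toy probabilistic "survival" model: fits a single exponential scale to the observed
# times (ignoring censoring!) and predicts that distribution for every new observation.
mutable struct ToyExponentialSurvival <: MMI.Probabilistic end

function MMI.fit(::ToyExponentialSurvival, verbosity, X, y)
    fitresult = mean(y)                  # MLE of the exponential scale parameter
    return fitresult, nothing, NamedTuple()
end

# Distribution predictions: one survival-time distribution per row of Xnew.
MMI.predict(::ToyExponentialSurvival, fitresult, Xnew) =
    [Exponential(fitresult) for _ in 1:MMI.nrows(Xnew)]
```

A real implementation would additionally declare metadata (input/target scitypes) and handle censored targets, but the `fit`/`predict` contract shown above is the core of the interface.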
Mateusz Krzyziński et al., SurvSHAP(t): Time-Dependent Explanations of Machine Learning Survival Models, Knowledge-Based Systems 262 (February 2023): 110234
Kvamme, H., Borgan, Ø., & Scheel, I. (2019). Time-to-event prediction with neural networks and Cox regression. Journal of Machine Learning Research, 20(129), 1–30.
Lee, C., Zame, W. R., Yoon, J., & van der Schaar, M. (2018). Deephit: A deep learning approach to survival analysis with competing risks. In Thirty-Second AAAI Conference on Artificial Intelligence.
Katzman, J. L., Shaham, U., Cloninger, A., Bates, J., Jiang, T., & Kluger, Y. (2018). DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network. BMC Medical Research Methodology, 18(1), 24.
Gensheimer, M. F., & Narasimhan, B. (2019). A scalable discrete-time survival model for neural networks (https://peerj.com/articles/6257/). PeerJ, 7, e6257.
Bayesian methods and probabilistic supervised learning provide uncertainty quantification. This project aims to increase integration between Bayesian and non-Bayesian methods using Turing.
Difficulty. Difficult. Duration. 350 hours.
As an initial step, reproduce SOSSMLJ in Turing. The bulk of the project is to implement methods that combine multiple predictive distributions.
Interface between Turing and MLJ
Comparisons of ensembling and stacking of predictive distributions (see the sketch after this list)
Reproducible benchmarks across various settings.
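As a minimal illustration of combining predictive distributions (the ensembling/stacking item above), several per-model predictions for a single test point can be pooled into a mixture with Distributions.jl. The component distributions and weights below are arbitrary; in the project they would come from stacking or ensembling.

```julia
using Distributions

# Per-model predictive distributions for one test point (illustrative values).
preds   = [Normal(1.0, 0.5), Normal(1.3, 0.8), Normal(0.7, 0.6)]
weights = [0.5, 0.3, 0.2]

# Linear opinion pool: a mixture of the individual predictive distributions.
pooled = MixtureModel(preds, weights)
mean(pooled), pdf(pooled, 1.0)
```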
Mentors: Hong Ge, Sebastian Vollmer
Help data scientists using MLJ track and share their machine learning experiments using MLFlow.
Difficulty. Moderate. Duration. 350 hours.
MLFlow is an open source platform for the machine learning life cycle. It allows the data scientist to upload experiment metadata and outputs to the platform for reproducing and sharing purposes. This project aims to integrate the MLJ machine learning platform with MLFlow.
Julia language fluency essential.
Git-workflow familiarity strongly preferred.
General familiarity with data science workflows
You will familiarize yourself with MLJ, MLFlow and MLFlowClient.jl client APIs.
Implement functionality to upload machine learning model hyper-parameters, performance evaluations, and artifacts encapsulating the trained model to MLFlow (see the sketch after this list).
Implement functionality allowing for the live tracking of learning for iterative models, such as neural networks, by hooking into MLJIteration.jl.
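A rough sketch of the information such an integration would collect from MLJ for a single run is shown below. The model choice and dictionary layout are illustrative only, and the actual upload step (via MLFlowClient.jl) is left as a comment, since designing that bridge is precisely what this project is about.

```julia
using MLJ

# Fit and evaluate a model, then gather what an MLJ–MLFlow bridge would need to log.
Tree = @load DecisionTreeClassifier pkg=DecisionTree
model = Tree(max_depth=3)

X, y = @load_iris
e = evaluate(model, X, y; resampling=CV(nfolds=5), measure=log_loss)

run_params  = Dict("model" => string(typeof(model)), "max_depth" => model.max_depth)
run_metrics = Dict("cv_log_loss" => e.measurement[1])
# The integration would push `run_params` and `run_metrics` (plus the serialized machine
# as an artifact) to an MLFlow tracking server through MLFlowClient.jl.
```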
MLFlow website.
Mentors. Deyan Dyankov (to be confirmed), Anthony Blaom, Diego Arenas.
Diagnose and exploit opportunities for speeding up common MLJ workflows.
Difficulty. Moderate. Duration. 350 hours.
In addition to investigating a number of known performance bottlenecks, you will have some free rein in this project to identify opportunities to speed up common MLJ workflows, as well as making better use of memory resources.
Julia language fluency essential.
Experience with multi-threading and multi-processor computing essential, preferably in Julia.
Git-workflow familiarity strongly preferred.
Familiarity with machine learning goals and workflow preferred
In this project you will:
familiarize yourself with the training, evaluation and tuning of machine learning models in MLJ
benchmark and profile common workflows to identify opportunities for further code optimizations, with a focus on the most popular models (see the sketch after this list)
work to address problems identified
roll out new data front-end for iterative models, to avoid unnecessary copying of data
experiment with adding multi-processor parallelism to the current learning networks scheduler
implement some of these optimizations
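A starting point for the benchmarking work might look like the following. The model and data sizes are arbitrary; the point is simply to establish reproducible baselines before attempting optimizations.

```julia
using MLJ, BenchmarkTools

Ridge = @load RidgeRegressor pkg=MLJLinearModels
X, y = make_regression(10_000, 20)          # synthetic regression data from MLJ
mach = machine(Ridge(), X, y)

# Baseline timing/allocation profile for repeated training of a single machine.
@benchmark fit!($mach; force=true, verbosity=0)
```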
MLJ Roadmap. See, in particular "Scalability" section.
Data front end for MLJ models.
Mentors. Anthony Blaom, Okon Samuel.
Improve and extend Julia's offering of algorithms for correcting class imbalance, with a view to integration into MLJ and elsewhere.
Difficulty. Easy - moderate. Duration. 350 hours
Many classification algorithms do not perform well when there is a class imbalance in the target variable (for example, many more positives than negatives). There are a number of well-known data preprocessing algorithms, such as oversampling, for compensating for class imbalance. See for instance the Python package imbalanced-learn.
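As a point of reference, the simplest such algorithm, naive random oversampling of the minority class, fits in a few lines of plain Julia; a real package would add SMOTE-style synthetic sampling, table support, and a transformer interface.

```julia
# Naive random oversampling: duplicate randomly chosen minority-class rows until the
# classes are (roughly) balanced. Illustrative only.
y = vcat(fill(:neg, 95), fill(:pos, 5))
X = randn(100, 3)

minority = findall(==(:pos), y)
extra = rand(minority, 90)                   # minority indices resampled with replacement
X_bal, y_bal = vcat(X, X[extra, :]), vcat(y, y[extra])
```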
The Julia package ClassImbalance.jl provides some native Julia class imbalance algorithms. For wider adoption it is proposed that:
ClassImbalance.jl be made more data-generic by supporting the MLUtils.jl getobs interface, which now (mostly) includes tabular data implementing the Tables.jl API. Currently there is only support for an old version of DataFrames.jl.
ClassImbalance.jl implement one or more general transformer APIs, such as the ones provided by TableTransforms.jl, MLJ, and FeatureTransforms.jl (a longer-term goal is for MLJ to support the TableTransforms.jl API)
Other Julia-native class imbalance algorithms be added
Mentor. Anthony Blaom.
Julia language fluency is essential.
An understanding of the class imbalance phenomenon is essential. A detailed understanding of at least one class imbalance algorithm is essential.
Git-workflow familiarity is strongly preferred.
A familiarity with machine learning goals and workflow preferred
Familiarize yourself with the existing ClassImbalance package, including known issues
Familiarize yourself with the Tables.jl interface
Assess the merits of different transformer API choices and choose one in consultation with your mentor
Implement the proposed improvements in parallel with testing and documentation additions to the package. Testing and documentation must be up-to-date before new algorithms are added.
ClassImbalance.jl repository.
Bayesian optimization is a global optimization strategy for (potentially noisy) functions with unknown derivatives. With well-chosen priors, it can find optima with fewer function evaluations than alternatives, making it well suited for the optimization of costly objective functions.
Well known examples include hyper-parameter tuning of machine learning models (see e.g. Taking the Human Out of the Loop: A Review of Bayesian Optimization). The Julia package BayesianOptimization.jl currently supports only basic Bayesian optimization methods. There are multiple directions to improve the package, including (but not limited to)
Hybrid Bayesian Optimization (duration: 175h, expected difficulty: medium) with discrete and continuous variables: implement e.g. HyBO (see also here).
Scalable Bayesian Optimization (duration: 175h, expected difficulty: medium): implement e.g. TuRBO or SCBO.
Better Defaults (duration: 175h, expected difficulty: easy): write an extensive test suite and implement better defaults; draw inspiration from e.g. dragonfly.
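To give a sense of the machinery shared by these methods, a core ingredient is the acquisition function. A minimal expected-improvement implementation (for minimization, given a Gaussian surrogate posterior at a candidate point) might look like this; it is independent of the eventual BayesianOptimization.jl API.

```julia
using Distributions

# Expected improvement over the best value seen so far, for a candidate point where the
# surrogate posterior is Normal(μ, σ). ξ is an optional exploration bonus.
function expected_improvement(μ, σ, best; ξ = 0.01)
    σ ≤ 0 && return 0.0
    z = (best - μ - ξ) / σ
    return (best - μ - ξ) * cdf(Normal(), z) + σ * pdf(Normal(), z)
end

expected_improvement(0.2, 0.3, 0.5)   # larger when the candidate looks promising
```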
Recommended Skills: Familiarity with Bayesian inference, non-linear optimization, writing Julia code and reading Python code.
Expected Outcome: Well-tested and well-documented new features.
Mentor: Johanni Brea
There are a number of compiler projects that are currently being worked on. Please contact Jameson Nash for additional details and let us know what specifically interests you about this area of contribution. That way, we can tailor your project to better suit your interests and skillset.
LLVM AliasAnalysis (175-350 hours) The Julia language utilizes LLVM as a backend for code generation, so the quality of code generation is very important for performance. This means that there are plenty of opportunities for those with knowledge of or interest in LLVM to contribute via working on Julia's code generation process. We have recently encountered issues with memcpy information only accepting a single aliasing metadata argument, rather than separate information for the source and destination. There are other similar missing descriptive or optimization steps in the aliasing information we produce or consume by LLVM's passes.
Expected Outcomes: Improve upon the alias information "LLVM level" of Julia codegen.
Skills: C/C++ programming
Difficulty: Hard
Parser improvement and replacement (175 hours)
Error messages and infrastructure could use some work to track source locations more precisely. This may be a large project. Contact me and @c42f for more details if this interests you.
See https://github.com/JuliaLang/julia/pull/46372 for the current progress.
Expected Outcomes: Improve upon Julia parser error messages.
Skills: Some familiarity with parsers
Difficulty: Medium
Macro hygiene re-implementation, to eliminate incorrect predictions inherent in current approach (350 hours)
This may be a good project for someone that wants to learn lisp/scheme! Our current algorithm runs in multiple passes, which means we sometimes compute the wrong scope for a variable in an earlier pass, before the actual scope is assigned to each value. See https://github.com/JuliaLang/julia/labels/macros, and particularly issues such as https://github.com/JuliaLang/julia/issues/20241 and https://github.com/JuliaLang/julia/issues/34164.
Expected Outcomes: Ideally, a re-implementation of hygienic macros. Realistically, resolving some or any of the macro issues.
Skills: Lisp/Scheme/Racket experience desired but not necessarily required.
Difficulty: Medium
Better debug information output for variables (175 hours)
We have part of the infrastructure in place for representing DWARF information for our variables, but only from limited places. We could do much better since there are numerous opportunities for improvement!
Expected Outcomes: Ability to see more variable, argument, and object details in gdb.
Recommended Skills: Most of these projects involve algorithms work, requiring a willingness and interest in seeing how to integrate with a large system.
Difficulty: Medium
Mentors: Jameson Nash, Gabriel Baraldi
Code coverage reports show very good coverage of all of the Julia Stdlib packages, but they are not complete. Additionally, the coverage tools themselves (--code-coverage and https://github.com/JuliaCI/Coverage.jl) could be further enhanced, such as to give better accuracy of statement coverage, or more precision. A successful project may combine a bit of both building code and finding faults in others' code.
Another related side-project might be to explore adding type information to the coverage reports.
Recommended Skills: An eye for detail, a thrill for filing code issues, and the skill of breaking things.
Contact: Jameson Nash
A few ideas to get you started, in brief:
Measure and optimize the performance of the partr algorithm, and add the ability to dynamically scale it by workload size.
Automatic insertion, and subsequent optimization, of GC safe-points/regions, particularly around loops.
Higher performance multi-threaded parallel-safe allocator for whole new pages.
More optimized atomic intrinsics for Julia.
Join the regularly scheduled multithreading call to discuss any of these; see the #multithreading BoF calendar invite on the Julia Language Public Events calendar.
Recommended Skills: Varies by project, but generally some multi-threading and C experience is needed
Contact: Jameson Nash
The Nanosoldier.jl project (and the related https://github.com/JuliaCI/BaseBenchmarks.jl) tests for performance impacts of some changes. However, there remain many areas that are not covered (such as compile time), while other areas are over-covered (greatly increasing the duration of the test for no benefit), and some tests may not be configured appropriately for statistical power. Furthermore, the current reports are very primitive and can only do a basic pair-wise comparison, while graphs and other interactive tooling would be more valuable. Thus, there would be many great projects for a summer contributor to tackle here!
Expected Outcomes: Improvement of Julia's automated testing/benchmarking framework.
Skills: Interest in and/or experience with CI systems.
Difficulty: Medium
Contact: Jameson Nash, Tim Besard
The Julia manual and the documentation for a large chunk of the ecosystem is generated using Documenter.jl – essentially a static site generator that integrates with Julia and its docsystem. There are tons of opportunities for improvements for anyone interested in working on the interface of Julia, documentation and various front-end technologies (web, LaTeX).
ElasticSearch-based search backend for Documenter. (350 hours) Loading the search page of Julia manual is slow because the index is huge and needs to be downloaded and constructed on the client side on every page load (currently implemented with lunr.js). Instead, we should look at hosting the search server-side. The goal is to implement an ElasticSearch-based search backend for Documenter.
Recommended skills: Basic knowledge of web-development (JS, CSS, HTML).
Mentors: Morten Piibeleht
Ferrite.jl is a Julia package providing the basic building blocks to develop finite element simulations of partial differential equations. The package provides extensive examples to start from and is designed as a compromise between simplicity and generality, trying to map finite element concepts 1:1 with the code at a low level. Ferrite is actively used in teaching the finite element method to students at several universities across different countries (e.g. Ruhr-University Bochum and Chalmers University of Technology). Further infrastructure is provided in the form of different mesh parsers and a Julia-based visualizer called FerriteViz.jl.
Below we provide a few potential project ideas in Ferrite.jl. However, interested students should feel free to explore ideas they are interested in. Please contact any of the mentors listed below, or join the #ferrite-fem channel on the Julia Slack to discuss. Projects in finite element visualization are also possible with FerriteViz.jl.
Difficulty: Easy-Medium (depending on your specific background)
Project size: 150-300 hours
Problem: Ferrite.jl is designed with the possibility to define partial differential equations on subdomains. This makes it well-suited for interface-coupled multi-physics problems, as for example fluid-structure interaction problems. However, we currently do not have an example showing this capability in our documentation. We also do not provide all necessary utilities for interface-coupled problems.
Minimum goal: The minimal goal of this project is to create a functional and documented linear fluid-structure interaction example coupling linear elasticity with a Stokes flow in a simple setup. The code should come with proper test coverage.
Extended goal: With this minimally functional example it is possible to extend the project into different directions, e.g. optimized solvers or nonlinear fluid-structure interaction.
Recommended skills:
Basic knowledge of the finite element method
Basic knowledge about solids or fluids
The ability (or eagerness to learn) to write fast code
Mentors: Dennis Ogiermann and Fredrik Ekre
Difficulty: Medium
Project size: 250-350 hours
Problem: Ferrite.jl has outstanding performance in single-threaded finite element simulations due to elaborate elimination of redundant workloads. However, we recently identified that the way the single-threaded assembly works makes parallel assembly memory bound, rendering the implementation for "cheap" assembly loops not scalable on a wide range of systems. This problem will also translate to high-order schemes, where the single-threaded strategy as is prevents certain common optimization strategies (e.g. sum factorization).
Minimum goal: As a first step towards better parallel assembly performance, different assembly strategies should be investigated. Local and global matrix-free schemes are a possibility to explore here. The code has to be properly benchmarked and tested to identify different performance problems.
Extended goal: With this minimally functional example it is possible to extend the project into different directions, e.g. optimized matrix-free solvers or GPU assembly.
Recommended skills:
Basic knowledge of the finite element method
Basic knowledge about benchmarking
The ability (or eagerness to learn) to write fast code
Mentors: Maximilian Köhler and Dennis Ogiermann
Graph Neural Networks (GNN) are deep learning models well adapted to data that takes the form of graphs with feature vectors associated to nodes and edges. GNNs are a growing area of research and find many applications in complex networks analysis, relational reasoning, combinatorial optimization, molecule generation, and many other fields.
GraphNeuralNetworks.jl is a pure Julia package for GNNs equipped with many features. It implements common graph convolutional layers, with CUDA support and graph batching for fast parallel operations. There are a number of ways by which the package could be improved.
While we implement a good variety of graph convolutional layers, there is still a vast zoology yet to be implemented. Preprocessing tools, pooling operators, and other GNN-related functionalities can be considered as well.
Duration: 175h.
Expected difficulty: easy to medium.
Expected outcome: Enrich the package with a variety of new layers and operators.
As part of the documentation and for bootstrapping new projects, we want to add fully worked out examples and applications of graph neural networks. We can start with entry-level tutorials and progressively introduce the reader to more advanced features.
Duration: 175h.
Expected difficulty: medium.
Expected outcome: A few pedagogical and more advanced examples of graph neural networks applications.
Provide Julia-friendly wrappers for common graph datasets in MLDatasets.jl. Create convenient interfaces for the Julia ML and data ecosystem.
Duration: 175h.
Expected difficulty: easy.
Expected outcome: A large collection of graph datasets easily available to the Julia ecosystem.
In some complex networks, the relations expressed by edges can be of different types. We currently support this with the GNNHeteroGraph type, but none of the current graph convolutional layers support heterogeneous graphs as inputs. With this project we will implement a few layers for heterographs.
Duration: 350h.
Expected difficulty: hard.
Expected outcome: The implementation of a new graph type for heterogeneous networks and corresponding graph convolutional layers.
Graphs containing several million nodes are too large for GPU memory. Mini-batch training is performed on subgraphs, as in the GraphSAGE algorithm.
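The core primitive here is neighbour sampling. A minimal, package-agnostic sketch of uniform neighbour sampling over an adjacency list (not the GraphNeuralNetworks.jl API) is:

```julia
using Random

# For each seed node, keep at most k uniformly chosen neighbours; the sampled
# neighbourhoods then define the subgraph used for one mini-batch.
function sample_neighbors(adj::Vector{Vector{Int}}, seeds::Vector{Int}, k::Int)
    Dict(s => shuffle(adj[s])[1:min(k, length(adj[s]))] for s in seeds)
end

adj = [[2, 3], [1, 3, 4], [1, 2], [2]]       # toy adjacency list
sample_neighbors(adj, [1, 2], 2)
```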
Duration: 175h.
Expected difficulty: hard.
Expected outcome: The necessary algorithmic components to scale GNN training to very large graphs.
We aim at implementing temporal graph convolutions for time-varying graph and/or node features. The design of an efficient dynamical graph type is a crucial part of this project.
Duration: 350h.
Expected difficulty: hard.
Expected outcome: A new dynamical graph type and corresponding convolutional layers.
Many graph convolutional layers can be expressed as non-materializing algebraic operations involving the adjacency matrix instead of the slower and more memory-consuming gather/scatter mechanism. We aim at extending these fused implementations as far as possible and in a GPU-friendly way.
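For example, GCN-style propagation can be written entirely as sparse linear algebra. The sketch below (plain SparseArrays, not the package internals) computes H′ = D̂^(-1/2) (A + I) D̂^(-1/2) X W without any explicit gather/scatter over edges:

```julia
using SparseArrays, LinearAlgebra

n, fin, fout = 4, 3, 2
A = sparse([1, 2, 2, 3, 3, 4], [2, 1, 3, 2, 4, 3], ones(6), n, n)  # path graph 1-2-3-4
Â = A + I                                   # add self-loops
d = vec(sum(Â; dims=2))
D̂ = Diagonal(1 ./ sqrt.(d))                 # symmetric normalization
X, W = randn(n, fin), randn(fin, fout)

H = D̂ * Â * D̂ * X * W                       # fused propagation, no per-edge gather/scatter
```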
Duration: 175h.
Expected difficulty: hard.
Expected outcome: A noticeable performance increase for many graph convolutional operations.
Familiarity with graph neural networks and Flux.jl.
Carlo Lucibello (author of GraphNeuralNetworks.jl). For linear algebra, co-mentoring by Will Kimmerer (lead developer of SuiteSparseGraphBLAS.jl). Feel free to contact us on the Julia Slack Workspace or by opening an issue in the GitHub repo.
The QML.jl package provides Julia bindings for Qt QML on Windows, OS X and Linux. In the current state, basic GUI functionality exists, and rough integration with Makie.jl is available, allowing overlaying QML GUI elements over Makie visualizations.
Split off the QML code for Makie into a separate package. This will allow specifying proper package compatibility between QML and Makie, without making Makie a mandatory dependency for QML (currently we use Requires.jl for that)
Improve the integration. Currently, connections between Makie and QML need to be set up mostly manually. We need to implement some commonly used functionality, such as the registration of clicks in a viewport with proper coordinate conversion and navigation of 3D viewports.
Recommended Skills: Familiarity with both Julia and the Qt framework, some basic C++ skills, affinity with 3D graphics and OpenGL.
Duration: 175h, expected difficulty: medium
Mentors: Bart Janssens and Simon Danish
Makie.jl is a visualization ecosystem for the Julia programming language, with a focus on interactivity and performance. JSServe.jl is the core infrastructure library that makes Makie's web-based backend possible.
At the moment, all the necessary ingredients exist for designing web-based User Interfaces (UI) in Makie, but the process itself is quite low-level and time-consuming. The aim of this project is to streamline that process.
Implement novel UI components and refine existing ones.
Introduce data structures suitable for representing complex UIs.
Add simpler syntaxes for common scenarios, akin to Interact's @manipulate macro.
Improve documentation and tutorials.
Streamline the deployment process.
Bonus tasks. If time allows, one of the following directions could be pursued.
Make Makie web-based plots more suitable for general web apps (move more computation to the client side, improve interactivity and responsiveness).
Generalize the UI infrastructure to native widgets, which are already implemented in Makie but with a different interface.
Desired skills. Familiarity with HTML, JavaScript, and CSS, as well as reactive programming. Experience with the Julia visualization and UI ecosystem.
Duration. 350h.
Difficulty. Medium.
Mentors. Pietro Vertechi and Simon Danisch.
Julia is emerging as a serious tool for technical computing and is ideally suited for the ever-growing needs of big data analytics. This set of proposed projects addresses specific areas for improvement in analytics algorithms and distributed data management.
Difficulty: Medium (175h)
Dagger.jl is a native Julia framework and scheduler for distributed execution of Julia code and general purpose data parallelism, using dynamic, runtime-generated task graphs which are flexible enough to describe multiple classes of parallel algorithms. This project proposes to implement different scheduling algorithms for Dagger to optimize scheduling of certain classes of distributed algorithms, such as mapreduce and merge sort, and properly utilizing heterogeneous compute resources. Contributors will be expected to find published distributed scheduling algorithms and implement them on top of the Dagger framework, benchmarking scheduling performance on a variety of micro-benchmarks and real problems.
Mentors: Julian Samaroo, Krystian Guliński
Difficulty: Hard (350h)
Add a distributed training API for Flux models built on top of Dagger.jl. More detailed milestones include building Dagger.jl abstractions for UCX.jl, then building tools to map Flux models into data-parallel Dagger DAGs. The final result should demonstrate a Flux model training with multiple devices in parallel via the Dagger.jl APIs. A stretch goal will include mapping operations within a model to a DAG to facilitate model parallelism as well.
There are projects now that host the building blocks: DaggerFlux.jl and Distributed Data Parallel Training which can serve as jumping off points.
Skills: Familiarity with UCX, representing execution models as DAGs, Flux.jl, CUDA.jl and data/model parallelism in machine learning
Mentors: Julian Samaroo, and Dhairya Gandhi
Difficulty: Medium (175h)
Array programming is possibly the most powerful abstraction in Julia, yet our distributed arrays support leaves much to be desired. This project's goal is to implement a new distributed array type on top of the Dagger.jl framework, which will allow this new array type to be easily distributed, multithreaded, and support GPU execution. Contributors will be expected to implement a variety of operations, such as mapreduce, sorting, slicing, and linear algebra, on top of their distributed array implementation. Final results will include extensive scaling benchmarks on a range of configurations, as well as an extensive test suite for supported operations.
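As a flavour of the building blocks available, a chunk-wise mapreduce can already be expressed with Dagger's task API. The distributed array type proposed here would wrap patterns like this behind a standard AbstractArray interface (sketch only, not the planned design):

```julia
using Dagger

chunks = [randn(10_000) for _ in 1:8]                     # stand-ins for array blocks
partials = [Dagger.@spawn sum(abs2, c) for c in chunks]   # one task per chunk
total = sum(fetch.(partials))                             # reduce the partial results
```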
Mentors: Julian Samaroo, Evelyne Ringoot
JuliaImages (see the documentation) is a framework in Julia for multidimensional arrays, image processing, and computer vision (CV). It has an active development community and offers many features that unify CV and biomedical 3D/4D image processing, support big data, and promote interactive exploration.
Often the best ideas are the ones that candidate SoC contributors come up with on their own. We are happy to discuss such ideas and help you refine your proposal. Below are some potential project ideas that might help spur some thoughts. In general, anything that is missing in JuliaImages and worth three months' development can be considered a potential GSoC idea. See the bottom of this page for information about mentors.
Difficulty: Medium (175h) (High priority)
JuliaImages provides high-quality implementations of many algorithms; however, as yet there is no set of benchmarks that compare our code against that of other image-processing frameworks. Developing such benchmarks would allow us to advertise our strengths and/or identify opportunities for further improvement. See also the OpenCV project below.
Benchmark several performance-sensitive packages (e.g., ImageFiltering, ImageTransformations, ImageMorphology, ImageContrastAdjustment, ImageEdgeDetection, ImageFeatures, and/or ImageSegmentation) against frameworks like scikit-image and OpenCV, and optionally others like ITK, ImageMagick, and Matlab/Octave. See also the image benchmarks repository.
This task splits into at least two pieces:
developing frameworks for collecting the data, and
visualizing the results.
One should also be aware of the fact that differences in implementation (which may include differences in quality) may complicate the interpretation of some benchmarks.
Skills: Experience with JuliaImages is required. Some familiarity with other image-processing frameworks is preferred.
Mentors: Tim Holy
Interested contributors are encouraged to open a discussion in Images.jl to introduce themselves and discuss the detailed project ideas. To increase the chance of getting useful feedback, please provide detailed plans and ideas (don't just copy the contents here).
The CxxWrap.jl package provides a way to load compiled C++ code into Julia. It exposes a small fraction of the C++ standard library to Julia, but many more functions and containers (e.g. std::map) still need to be exposed. The objective of this project is to improve C++ standard library coverage.
Add missing STL container types (easy)
Add support for STL algorithms (intermediate)
Investigate improvement of compile times and selection of included types (advanced)
Recommended Skills: Familiarity with both Julia and C++
Duration: 175h, expected difficulty: hard
Mentor: Bart Janssens
Take a look at the hyper.rs project, listed on the "Pluto" page, about wrapping a Rust HTTP server in a Julia package.
JuliaConstraints is an organization supporting packages for Constraint Programming in Julia. Although it is independent of it, it aims for a tight integration with JuMP.jl over time. For a detailed overview of basic Constraint Programming in Julia, please have a look at our video from JuliaCon 2021 Put some constraints into your life with JuliaCon(straints).
Often, problem-solving involves taking two actions: model and solve. Typically, there is a trade-off between ease of modeling and efficiency of solving. Therefore, one is often required to be a specialist to model and solve an optimization problem efficiently. We investigate the theoretical fundamentals and the implementation of tools to automate optimization frameworks, so that a general user can focus on modeling practical problems, regardless of the software or hardware available. Furthermore, we aim to encourage technical users to use our tools to improve their solving efficiency.
Mentor: Jean-Francois Baffier (azzaare@github)
This package consists of a set of tools designed to check the performance of packages over time and versions. The targeted audience is the whole community of package developers in Julia (not only JuliaConstraints).
The README provides a short demo on how PerfChecker can be used.
Basic features to implement (length ≈ 175 hours)
- PerfCheck environment similar to Test.jl and Pkg.jl
- Sugar syntax `@bench`, `@alloc`, `@profile` similar to Test.jl and Pkg.jl
- Interactive REPL interface
- Interactive GUI interface (using for instance Makie)
- Automatic profiling? (not sure how; there already is a bunch of super cool packages)
- Automatic plotting of the previous features
Advanced features (length +≈ 175 hours)
- *Smart* semi-automatic analysis of performances
  - performance bottlenecks
  - regressions
  - allocations vs speed trade-offs
  - descriptive plot captions
- Handle Julia and other packages' versions
  - integrates with juliaup
  - automatically generate a parametric space of versions for both packages and Julia
Note that some features are interchangeable depending on the interest of the candidate. For candidates with a special interest in the JuliaConstraints ecosystem, checking the performances of some packages is an option.
Length: 175 hours – 350 hours (depending on features)
Recommended Skills (any of the following):
- Familiarity with package development
- REPL and/or GUI interfaces
- Coverage, benchmarking, and profiling tools
Difficulty: Easy to Medium, depending on the features implemented
Constraints.jl provides an interface to work with and store information about constraints, a predicate over a set of variables that is the core of modeling and solving in constraint programming. Recently, a few common constraints have been natively integrated into the JuMP modeling language for mathematical optimization. In Constraints.jl, we provide an implementation of about 20 core constraints from XCSP³-core. Our target here is to integrate the constraints currently missing in JuMP to provide a wider spectrum to CP and mathematical optimization users when using JuMP. Additionally, these constraints should be plugged with CBLS.jl, the JuMP interface of LocalSearchSolvers.jl.
Ideally, we want to provide basic JuMP(MOI) bridges to translate those constraints to classical optimization sets.
(Note: at the time of writing this proposal, CBLS.jl is yet to be updated to interface JuMP v1 which would be a nice first issue to get familiar with the project)
Length: 175 hours – 350 hours (depending on features such as bridges)
Recommended Skills:
- Familiarity with JuMP and MOI packages
- Understanding of basic constraint programming
Difficulty: Medium to hard, depending on the features implemented
Difficulty: Medium to Hard.
Length: 350 hours.
Agents.jl is a pure Julia framework for agent-based modeling (ABM). It has an extensive list of features, excellent performance and is easy to learn, use, and extend. Comparisons with other popular frameworks written in Python or Java (NetLogo, MASON, Mesa) show that Agents.jl outperforms all of them in computational speed, list of features, and usability.
In this project, contributors will be paired with lead developers of Agents.jl to improve Agents.jl with more features, better performance, and overall higher polish. Possible features to implement are:
Automatic performance increase of mixed-agent models by eliminating dynamic dispatch on the stepping function
GPU support in Agents.jl
New type of space representing a planet, which can be used in climate policy or human evolution modelling, and new interface for an overarching ABM composed of several smaller ABMs
Recommended Skills: Familiarity with agent based modelling, Agents.jl and Julia's Type System. Background in complex systems, sociology, or nonlinear dynamics is not required but would be advantageous.
Expected Results: Well-documented, well-tested useful new features for Agents.jl.
Mentors: George Datseris.
Difficulty: Easy to Medium, depending on the algorithms chosen to implement.
Length: 175 hours.
DynamicalSystems.jl is an award-winning Julia software library for dynamical systems, nonlinear dynamics, deterministic chaos and nonlinear time series analysis. It has an impressive list of features, but one can never have enough. In this project, contributors will be able to enrich DynamicalSystems.jl with new algorithms and enrich their knowledge of nonlinear dynamics and computer-assisted exploration of complex systems.
Possible projects are summarized in the wanted-features of the library
Recommended Skills: Familiarity with nonlinear dynamics and/or differential equations and the Julia language.
Expected Results: Well-documented, well-tested new algorithms for DynamicalSystems.jl.
Mentors: George Datseris
JuliaHealth is an organization dedicated to improving healthcare by promoting open-source technologies and data standards. Our community is made up of researchers, data scientists, software developers, and healthcare professionals who are passionate about using technology to improve patient outcomes and promote data-driven decision-making. We believe that by working together and sharing our knowledge and expertise, we can create powerful tools and solutions that have the potential to transform healthcare.
Description: The OMOP Common Data Model (OMOP CDM) is a widely used data standard that allows researchers to analyze large, heterogeneous healthcare datasets in a consistent and efficient manner. JuliaHealth has several packages that can interact with databases that adhere to the OMOP CDM (such as OMOPCDMCohortCreator.jl or OMOPCDMDatabaseConnector.jl). For this project, we are looking for students interested in further developing the tooling in Julia to interact with OMOP CDM databases.
Mentor: Jacob Zelko (aka TheCedarPrince) [email: jacobszelko@gmail.com]
Difficulty: Medium
Duration: 175 hours
Suggested Skills and Background:
Experience with Julia
Familiarity with some of the following Julia packages would be a strong asset:
FunSQL.jl
DataFrames.jl
Distributed.jl
OMOPCDMCohortCreator.jl
OMOPCDMDatabaseConnector.jl
OMOPCommonDataModel.jl
Comfort with the OMOP Common Data Model (or a willingness to learn!)
Potential Outcomes:
Some potential project outcomes could be:
Expanding OMOPCDMCohortCreator.jl to enable users to add constraints to potential patient populations they want to create such as conditional date ranges for a given drug or disease diagnosis.
Support parallelization of OMOPCDMCohortCreator.jl based queries when developing a patient population.
Develop and explore novel ways for how population filters within OMOPCDMCohortCreator.jl can be composed together for rapid analysis.
For whatever functionality gets developed for tools within JuliaHealth, students will also be expected to contribute to the existing package documentation to highlight how new features can be used. Although not required, if students would like to submit lightning talks, posters, etc. to JuliaCon in the future about their work, they will be supported in this endeavor!
Please contact the mentor for this project if interested and want to discuss what else could be pursued in the course of this project.
Description: Patient level prediction (PLP) is an important area of research in healthcare that involves using patient data to predict outcomes such as disease progression, response to treatment, and hospital readmissions. JuliaHealth is interested in developing tooling for PLP that utilizes historical patient data, such as patient medical claims or electronic health records, that follow the OMOP Common Data Model (OMOP CDM), a widely used data standard that allows researchers to analyze large, heterogeneous healthcare datasets in a consistent and efficient manner. For this project, we are looking for students interested in developing PLP tooling within Julia.
Mentor: Sebastian Vollmer [email: sjvollmer@gmail.com], Jacob Zelko (aka TheCedarPrince) [email: jacobszelko@gmail.com]
Difficulty: Hard
Duration: 350 hours
Suggested Skills and Background:
Experience with Julia
Exposure to machine learning concepts and ideas
Familiarity with some of the following Julia packages would be a strong asset:
DataFrames.jl
OMOPCDMCohortCreator.jl
MLJ.jl
ModelingToolkit.jl
Comfort with the OMOP Common Data Model (or a willingness to learn)
Outcomes:
This project will be very experimental and exploratory in nature. To constrain the expectations for this project, here is a possible approach students will follow while working on this project:
Review existing literature on approaches to PLP
Familiarize oneself with tools for machine learning and prediction within the Julia ecosystem
Determine PLP research question to drive package development
Develop PLP package utilizing JuliaHealth tools to work with an OMOP CDM database
Test and validate PLP package for investigating the research question
Document findings and draft JuliaCon talk
For whatever functionality gets developed for tools within JuliaHealth, students will also be expected to contribute to the existing package documentation to highlight how new features can be used. For this project, it will be expected as part of the proposal to pursue drafting and giving a talk at JuliaCon. Furthermore, although not required, publishing in the JuliaCon Proceedings will be both encouraged and supported by project mentors.
Additionally, depending on the success of the package, there is a potential to run experiments on actual patient data to generate actual patient population insights based on a chosen research question. This could possibly turn into a separate research paper, conference submission, or poster submission. Whatever may occur in this situation will be supported by project mentors.
JuliaMusic is an organization providing packages and functionalities that allow analyzing the properties of music performances.
Difficulty: Medium.
Length: 350 hours.
It is easy to analyze timing and intensity fluctuations in music that is in the form of MIDI data. This format is already digitized, and packages such as MIDI.jl and MusicManipulations.jl allow for seamless data processing. But arguably the most interesting kind of music to analyze is live music. Live music performances are recorded in wave formats. Some algorithms exist that can detect the "onsets" of music hits, but they typically focus only on the timing information and hence forgo detecting, e.g., the intensity of the played note. Plus, there are very few code implementations online for this problem, almost all of which are old and unmaintained. We would like to implement an algorithm in MusicProcessing.jl that, given a recording of a single instrument, can "MIDIfy" it, which means to digitize it into the MIDI format.
Recommended Skills: Background in music, familiarity with digital signal processing.
Expected results: A well-tested, well-documented function midify in MusicProcessing.jl.
Mentors: George Datseris.
JuliaReach is the Julia ecosystem for reachability computations of dynamical systems.
Difficulty: Medium.
Description. LazySets is a Julia library for computing with geometric sets, whose focus is on lazy set representations and efficient high-dimensional processing. The main interest in this project is to develop algorithms that leverage the structure of the sets. The special focus will be on low-dimensional (typically 2D and 3D) cases.
Expected Results. The goal is to implement certain efficient algorithms from the literature. The code is to be documented, tested, and evaluated in benchmarks. Specific tasks may include: efficient vertex enumeration of zonotopes; operations on zonotope bundles; efficient disjointness checks between different set types; complex zonotopes.
Expected Length. 175 hours.
Recommended Skills. Familiarity with Julia and Git/GitHub is mandatory. Familiarity with LazySets is recommended. Basic knowledge of geometric terminology is appreciated but not required.
Mentors: Marcelo Forets, Christian Schilling.
Difficulty: Hard.
Description. Sparse polynomial zonotopes are a new non-convex set representation that is well-suited for reachability analysis of nonlinear dynamical systems. The task is to add efficient Julia implementations of:
(1) sparse polynomial zonotopes in LazySets,
(2) the corresponding reachability algorithm for dynamical systems in ReachabilityAnalysis.
Expected Results. The goal is to efficiently implement sparse polynomial zonotopes and the corresponding reachability algorithms. The code is to be documented, tested, and evaluated extensively in benchmarks. If the candidate is interested, it is possible to replace task (2) with
(3) an integration of the new set representation for neural-network control systems in NeuralNetworkAnalysis.
Expected Length. 350 hours.
Recommended Skills. Familiarity with Julia and Git/GitHub is mandatory. Familiarity with the mentioned Julia packages is appreciated but not required. The project does not require theoretical contributions, but it requires reading a research article (see below); hence a certain level of academic experience is recommended.
Literature and related packages. This video explains the concept of polynomial zonotopes (slides here). The relevant theory is described in this research article. There exists a Matlab implementation in CORA (the implementation of polynomial zonotopes can be found in this folder).
Mentors: Marcelo Forets, Christian Schilling.
Difficulty: Hard.
Description. ReachabilityAnalysis is a Julia library for set propagation of dynamical systems. One of the main aims is to handle systems with mixed discrete-continuous behaviors (known as hybrid systems in the literature). This project will focus on enhancing the capabilities of the library and overall improvement of the ecosystem for users.
Expected Results. Specific tasks may include: problem-specific heuristics for hybrid systems; API for time-varying input sets; flowpipe underapproximations. The code is to be documented, tested, and evaluated in benchmarks. Integration with ModelingToolkit.jl can also be considered if there is interest.
Expected Length. 350 hours.
Recommended Skills. Familiarity with Julia and Git/GitHub is mandatory. Familiarity with LazySets and ReachabilityAnalysis is also required.
Mentors: Marcelo Forets, Christian Schilling.
JuliaStats is an organization dedicated to providing high-quality packages for statistics in Julia.
Implement panel analysis models and estimators in Julia.
Difficulty. Moderate. Duration. 350 hours
Panel data is an important kind of statistical data that deals with observations of multiple units across time. Common examples of panel data include economic statistics (where it is common to observe figures for several countries over time). This combination of longitudinal and cross-sectional data can be powerful for extracting causal structure from data.
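As a concrete example of the kind of estimator involved, the classic fixed-effects (within) estimator can be sketched with DataFrames.jl in a few lines. The data below are toy values, and a package would of course expose this through a proper model API rather than raw linear algebra.

```julia
using DataFrames, Statistics

# Toy long-format panel: unit id, time, regressor x, outcome y.
df = DataFrame(id = repeat(1:3, inner=4), t = repeat(1:4, outer=3),
               x = randn(12), y = randn(12))

# Within transformation: demean x and y by unit, absorbing unit fixed effects.
g = groupby(df, :id)
dfw = transform(g, :x => (v -> v .- mean(v)) => :xw,
                   :y => (v -> v .- mean(v)) => :yw)

Xw = hcat(dfw.xw)         # n×1 design matrix of the demeaned regressor
β̂ = Xw \ dfw.yw           # pooled OLS on demeaned data = fixed-effects estimator
```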
Mentors. Nils Gudat, José Bayoán Santiago Calderón, Carlos Parada
Must be fluent in at least one language for statistical computing, and willing to learn Julia before the start of projects.
Knowledge of basic statistical inference covering topics such as maximum likelihood estimation, confidence intervals, and hypothesis testing. (Must know before applying.)
Basic familiarity with time series statistics (e.g. ARIMA models, autocorrelations) or panel data. (Can be learned after applying.)
Participants will:
Learn and build on past approaches and packages for panel data analysis, such as those in Econometrics.jl and SynthControl.jl.
Generalize TreatmentPanels.jl into an abstract interface for dealing with and manipulating panel data.
Integrate existing estimators provided by packages such as Econometrics.jl into a single package for panel data estimation.
Econometric Analysis of Cross Section and Panel Data by Jeffrey Wooldridge
Distributions.jl is a package providing basic probability distributions and associated functions.
Difficulty. Easy-Medium. Duration. 175-350 hours
Must be fluent in Julia.
A college-level introduction to probability covering topics such as probability density functions, moments and cumulants, and multivariate distributions.
Possible improvements to Distributions.jl include:
New distribution families, such as elliptical distributions or distributions of order statistics.
Additional parametrizations and keyword constructors for current distributions.
Extended support for distributions of transformed variables.
HypothesisTests.jl is a package that implements a range of hypothesis tests.
Difficulty. Medium. Duration. 350 hours
Mentors. Sourish Das, Mousum Dutta
Must be fluent in Julia.
A college-level introduction to probability covering topics such as probability density functions, moments and cumulants, and multivariate distributions.
Improvements to HypothesisTests.jl include:
Develop a Breusch-Pagan test against heteroskedasticity (see the sketch after this list)
Develop Harvey-Collier Test for linearity
Develop Bartlett Rank Test for randomness
Develop an exact dynamic programming solution to Wilcoxon–Mann–Whitney (WMW) test
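For the first item, the Breusch-Pagan statistic is straightforward to compute from an auxiliary regression. The following sketch uses GLM.jl on simulated data and is not the eventual HypothesisTests.jl API:

```julia
using GLM, DataFrames, Distributions

# Simulated data with heteroskedastic noise.
df = DataFrame(x = randn(200))
df.y = 1 .+ 2 .* df.x .+ randn(200) .* (1 .+ abs.(df.x))

ols = lm(@formula(y ~ x), df)
df.e2 = residuals(ols) .^ 2                 # squared OLS residuals
aux = lm(@formula(e2 ~ x), df)              # auxiliary regression

LM = nrow(df) * r2(aux)                     # Breusch–Pagan LM statistic = n·R²
p = ccdf(Chisq(1), LM)                      # 1 regressor in the auxiliary regression
```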
Alexander Marx et al. (2016), “Exact Dynamic Programing Solution of the Wilcoxon–Mann–Whitney Test”, Genomics Proteomics Bioinformatics, 14, 55–61.
Implement consistent APIs for statistical modeling in Julia.
Difficulty. Medium. Duration. 350 hours
Currently, the Julia statistics ecosystem is quite fragmented. There is value in having a consistent API for a wide variety of statistical models. The CRRao.jl package offers this design.
Mentors. Sourish Das, Ayush Patnaik
Must be fluent in Julia.
Basic statistical inference covering topics such as maximum likelihood estimation, confidence intervals, and hypothesis testing.
Participants will:
Help create, test, and document standard statistical APIs for Julia.
Integrate MixedModels.jl
General improvements to JuliaStats packages, depending on the interests of participants.
Difficulty. Easy-Hard. Duration. 175-350 hours.
JuliaStats provides many of the most popular packages in Julia, including:
StatsBase.jl for basic statistics (e.g. weights, sample statistics, moments).
MixedModels.jl for random and mixed-effects linear models.
GLM.jl for generalized linear models.
All of these packages are critically important to the Julia statistics community, and all could be improved.
Mentors. Mousum Dutta, Ayush Patnaik, Carlos Parada
Must be fluent in at least one language for statistical computing, and willing to learn Julia before the start of projects.
Knowledge of basic statistical inference covering topics such as maximum likelihood estimation, confidence intervals, and hypothesis testing.
Participants will:
Make JuliaStats better! This can include additional estimators, new features, performance improvements, or anything else you're interested in.
StatsBase.jl improvements could include support for cumulants, L-moments, or additional estimators.
Improved nonparametric density estimators, e.g. those in R's Locfit.
This package is used to study complex survey data. Examples of real-world surveys include official government surveys in areas like economics, health and agriculture; financial and commercial surveys. Social and behavioural scientists like political scientists, sociologists, psychologists, biologists and macroeconomists also analyse surveys in academic and theoretical settings. The prevalence of "big" survey datasets has exploded with the ease of administering surveys online. The project aims to use performance enhancements of Julia to create a fast package for modern "large" surveys.
Difficulty. Easy-Hard. Duration. 175-350 hours
Mentors. Ayush Patnaik
Experience with at least one language for statistical computing (Julia, R, Python, SAS, Stata, etc.), and willingness to learn Julia before the start of projects.
Knowledge of basic statistical and probability concepts, preferably covered from academic course(s).
(Bonus) Any prior experience or coursework with survey analysis, using any software or tool.
The project can be tailored around the background and interests of participants and depending on ability, several standalone mini-projects can be created. Participants can potentially work on:
Generalised variance estimation methods using Taylor linearisation (see the sketch after this list)
Post-stratification, raking or calibration, GREG estimation and related methods.
Connect Survey.jl with FreqTables.jl for contingency table analysis, or to survival analysis, or a machine learning library.
Improve support for multistage and Probability Proportional to Size (PPS) sampling with or without replacement.
Association tests (with contingency tables), Rao-Scott, likelihood ratio tests for GLMs, Cox models, loglinear models.
Handling missing data, imputation like mitools.
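A minimal illustration of Taylor linearisation for a ratio estimator under simple random sampling is shown below; this is plain Julia, not the Survey.jl API, which works on survey design objects.

```julia
using Statistics

# Ratio estimator R̂ = ȳ/x̄ with linearised variance based on zᵢ = (yᵢ − R̂ xᵢ)/x̄,
# including the finite population correction. N is the (known) population size.
function ratio_with_se(y, x, N)
    n = length(y)
    R̂ = mean(y) / mean(x)
    z = (y .- R̂ .* x) ./ mean(x)
    var_R̂ = (1 - n / N) * var(z) / n
    return R̂, sqrt(var_R̂)
end

ratio_with_se(rand(100) .+ 1, rand(100) .+ 2, 10_000)
```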
Survey.jl - see some issues, past PR's and milestone ideas
Julia discourse post asking for community suggestions here
JuliaCon Statistics Symposium clip for Survey
Model Assisted Survey Sampling - Sarndal, Swensson, Wretman (1992)
Survey analysis in R for high-level topics that can be implemented for Julia
The contributor implements a state-of-the-art smoother for continuous-time systems with additive Gaussian noise. The system's dynamics can be described as an ordinary differential equation with locally additive Gaussian random fluctuations, in other words a stochastic ordinary differential equation.
Given a series of measurements observed over time, containing statistical noise and other inaccuracies, the task is to produce an estimate of the unknown trajectory of the system that led to the observations.
Linear continuous-time systems are smoothed with the fixed-lag Kalman-Bucy smoother (related to the Kalman-Bucy filter). It relies on coupled ODEs describing how the mean and covariance of the conditional distribution of the latent system state evolve over time. A versatile implementation in Julia is missing.
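For orientation, the forward (filtering) pass amounts to integrating two coupled ODEs for the conditional mean m and covariance P. A minimal explicit-Euler step might look as follows; a smoother additionally requires a backward pass, and a serious implementation would use better integrators than this sketch.

```julia
using LinearAlgebra

# One Euler step of the Kalman–Bucy filter for dx = A x dt + noise (intensity Q),
# observed through dy = C x dt + noise (intensity R):
#   dm = A m dt + K (dy − C m dt),  dP/dt = A P + P A' + Q − K R K',  K = P C' R⁻¹.
function kalman_bucy_step(m, P, A, C, Q, R, dy, dt)
    K = P * C' / R
    m_new = m + A * m * dt + K * (dy - C * m * dt)
    P_new = P + (A * P + P * A' + Q - K * R * K') * dt
    return m_new, P_new
end
```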
Expected Results: Build efficient implementation of non-linear smoothing of continuous stochastic dynamical systems.
Recommended Skills: Gaussian random variables, Bayes' formula, Stochastic Differential Equations
Mentors: Moritz Schauer
Rating: Hard, 350 hours
LoopModels is the successor to LoopVectorization.jl, supporting more sophisticated analysis and transforms so that it may correctly optimize a much broader set of loop nests. It uses an internal representation of loops that represents the iteration space of each constituent operation as well as their dependencies. The iteration spaces of inner loops are allowed to be functions of the outer loops, and multiple loops are allowed to exist at each level of a loopnest. LoopModels aims to support optimizations including fusion, splitting, permuting loops, unrolling, and vectorization to maximize throughput. Broadly, this functionality can be divided into five pieces:
The Julia interface / support for custom LLVM pipelines.
The internal representation of the loops (Loop IR).
Building the internal representation from LLVM IR.
Analyze the representation to determine an optimal, correct, and target-specific schedule.
Transform the IR according to the schedule.
Open projects on this effort include:
Difficulty: Hard.
Description: In order to be able to use LoopModels from Julia, we must be able to apply a custom pass pipeline. This is likely something other packages will want to be able to do in the future, and something some packages (Enzyme.jl) do already. In this project, your aim will be to create a package that provides infrastructure others can depend on to simplify applying custom pass pipelines.
Expected Results: Register a package that allows applying custom LLVM pass pipelines to Julia code.
Skills: Julia programming, preferably with some understanding of Julia's IR. Prior familiarity with libraries such as GPUCompiler and StaticCompiler a bonus.
Expected Length: 175 hours.
Difficulty: Medium.
Description: This is open ended, with many potential projects here. These range from using Presburger arithmetic to support decidable polyhedral modeling, working on canonicalizations to handle more kinds of loops frequently encountered from Julia (e.g. from CartesianIndices), and modeling the costs of different schedules, to efficiently searching the iteration space and finding the fastest way to evaluate a loop nest. We can discuss your interests and find a task you'll enjoy and make substantive contributions to.
Expected Results: Help develop some aspect of the loop modeling and/or optimization.
Skills: C++, knowledge of LLVM, loop optimization, SIMD, and optimizing compute kernels such as GEMM preferred. A passion for performance is a must!
Expected Length: 350 hours.
Mentors: Chris Elrod, Yingbo Ma.
Note: FluxML participates as a NumFOCUS sub-organization. Head to the FluxML GSoC page for their idea list.
Time: 175h
Develop a series of reinforcement learning environments, in the spirit of the OpenAI Gym. Although we have wrappers for the gym available, it is hard to install (due to the Python dependency) and, since it's written in Python and C code, we can't do more interesting things with it (such as differentiate through the environments).
A pure-Julia version of selected environments that supports a similar API and visualisation options would be valuable to anyone doing RL with Flux.
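A pure-Julia environment can be very small. The sketch below uses a hypothetical reset!/step! convention purely for illustration; the project would adopt an established interface such as CommonRLInterface.jl rather than this ad-hoc one.

```julia
# Toy 1-D environment: the agent applies a force to drive a noisy state to the origin.
mutable struct RandomWalkEnv
    state::Float64
end

reset!(env::RandomWalkEnv) = (env.state = randn(); env.state)

function step!(env::RandomWalkEnv, action::Float64)
    env.state += action + 0.1 * randn()
    reward = -abs(env.state)                # closer to the origin is better
    done = abs(env.state) < 0.05
    return env.state, reward, done
end

env = RandomWalkEnv(0.0)
reset!(env)
step!(env, -0.2)
```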
Mentors: Dhairya Gandhi.
The philosophy of the AlphaZero.jl project is to provide an implementation of AlphaZero that is simple enough to be widely accessible for contributors and researchers, while also being sufficiently powerful and fast to enable meaningful experiments on limited computing resources (our latest release is consistently between one and two orders of magnitude faster than competing Python implementations).
Here are a few project ideas that build on AlphaZero.jl. Please contact us for additional details and let us know about your experience and interests so that we can build a project that best suits your profile.
[Easy (175h)] Integrate AlphaZero.jl with the OpenSpiel game library and benchmark it on a series of simple board games.
[Medium (175h)] Use AlphaZero.jl to train a chess agent. In order to save computing resources and allow faster bootstrapping, you may train an initial policy using supervised learning.
[Hard (350h)] Build on AlphaZero.jl to implement the MuZero algorithm.
[Hard (350h)] Explore applications of AlphaZero beyond board games (e.g. theorem proving, chip design, chemical synthesis...).
In all these projects, the goal is not only to showcase the current Julia ecosystem and test its limits, but also to push it forward through concrete contributions that other people can build on. Such contributions include:
Improvements to existing Julia packages (e.g. AlphaZero, ReinforcementLearning, CommonRLInterface, Dagger, Distributed, CUDA...) through code, documentation or benchmarks.
A well-documented and replicable artifact to be added to AlphaZero.Examples, ReinforcementLearningZoo or released in its own package.
A blog post that details your experience, discusses the challenges you went through and identifies promising areas for future work.
Mentors: Jonathan Laurent
Much of science can be explained by the movement and interaction of molecules. Molecular dynamics (MD) is a computational technique used to explore these phenomena, from noble gases to biological macromolecules. Molly.jl is a pure Julia package for MD, and for the simulation of physical systems more broadly. The package is currently under development for research with a focus on proteins and differentiable molecular simulation. There are a number of ways that the package could be improved:
Machine learning potentials (duration: 175h, expected difficulty: easy to medium): in the last few years machine learning potentials have been improved significantly. Models such as ANI, ACE, NequIP and Allegro can be added to Molly.
Simulators (duration: 175h, expected difficulty: easy to medium): a variety of standard approaches to simulating molecules can be added including FIRE minimisation, pressure coupling (NPT ensemble) and enhanced sampling approaches.
Constraint algorithms (duration: 175h, expected difficulty: medium): many simulations keep fast degrees of freedom such as bond lengths and bond angles fixed using approaches such as SHAKE, RATTLE and SETTLE. A fast implementation of these algorithms would be a valuable contribution; a minimal SHAKE sketch is shown after this list.
Electrostatic summation (duration: 175h, expected difficulty: medium to hard): methods such as particle-mesh Ewald (PME) are in wide use for molecular simulation. Developing fast, flexible implementations and exploring compatibility with GPU acceleration and automatic differentiation would be an important contribution.
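For orientation, the core of the SHAKE iteration for a single distance constraint is small. The following is a plain-Julia sketch of the textbook update, not Molly.jl's internal API; a real implementation would generalise it to many coupled constraints and optimise it heavily.

```julia
# Minimal sketch of the SHAKE correction for a single distance constraint.
# r1, r2 are the positions after an unconstrained update, r12_old is the bond
# vector before the step, d the target bond length, m1/m2 the particle masses.
using LinearAlgebra

function shake_pair!(r1, r2, r12_old, d, m1, m2; tol=1e-10, maxiter=100)
    for _ in 1:maxiter
        r12 = r1 .- r2
        σ = dot(r12, r12) - d^2                        # constraint violation
        abs(σ) < tol && break
        # Lagrange-multiplier-like correction along the old bond direction,
        # chosen so the violation cancels to first order.
        g = σ / (2 * (1/m1 + 1/m2) * dot(r12, r12_old))
        r1 .-= (g / m1) .* r12_old
        r2 .+= (g / m2) .* r12_old
    end
    return r1, r2
end

r1, r2 = [0.0, 0.0, 0.0], [1.2, 0.0, 0.0]
r12_old = [1.0, 0.0, 0.0]
shake_pair!(r1, r2, r12_old, 1.0, 1.0, 1.0)   # enforce a bond length of 1.0
```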
Recommended skills: familiarity with computational chemistry, structural bioinformatics or simulating physical systems.
Expected results: new features added to the package along with tests and relevant documentation.
Mentor: Joe Greener
Contact: feel free to ask questions via email or the JuliaMolSim Zulip.
Matrix functions map matrices onto other matrices, and can often be interpreted as generalizations of ordinary functions like sine and exponential, which map numbers to numbers. Once considered a niche province of numerical algorithms, matrix functions now appear routinely in applications to cryptography, aircraft design, nonlinear dynamics, and finance.
This project proposes to implement state of the art algorithms that extend the currently available matrix functions in Julia, as outlined in issue #5840. In addition to matrix generalizations of standard functions such as real matrix powers, surds and logarithms, contributors will be challenged to design generic interfaces for lifting general scalar-valued functions to their matrix analogues for the efficient computation of arbitrary (well-behaved) matrix functions and their derivatives.
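As a point of reference, the naive way to lift a scalar function to a diagonalizable matrix goes through the eigendecomposition. The sketch below only illustrates this idea; the project targets the robust algorithms (e.g. Schur-Parlett-type methods) needed when this approach breaks down.

```julia
using LinearAlgebra

# Naive lifting of a scalar function f to a diagonalizable matrix A:
# A = V Λ V⁻¹  implies  f(A) = V f(Λ) V⁻¹. This fails for defective matrices
# and is inaccurate when V is ill-conditioned, which is why more robust
# algorithms are the real goal of this project.
naive_matfun(f, A::AbstractMatrix) =
    (F = eigen(A); F.vectors * Diagonal(f.(F.values)) / F.vectors)

A = [2.0 1.0; 1.0 3.0]                 # symmetric, so safely diagonalizable
naive_matfun(exp, A) ≈ exp(A)          # agrees with the built-in matrix exponential
```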
Recommended Skills: A strong understanding of calculus and numerical analysis.
Expected Results: New and faster methods for evaluating matrix functions.
Mentors: Jiahao Chen, Steven Johnson.
Difficulty: Hard
Julia currently supports big integers and rationals, making use of the GMP library. However, GMP currently doesn't permit good integration with a garbage collector.
This project therefore involves exploring ways to improve BigInt, possibly including:
Modifying GMP to support high-performance garbage-collection
Reimplementation of aspects of BigInt in Julia
Lazy graph style APIs which can rewrite terms or apply optimisations
This experimentation could be carried out as a package with a new implementation, or as patches over the existing implementation in Base.
Expected Results: An implementation of BigInt in Julia with increased performance over the current one.
Required Skills: Familiarity with extended precision numerics OR performance considerations. Familiarity with either Julia or GMP.
Mentors: Jameson Nash
Difficulty: Hard
As a technical computing language, Julia provides a huge number of special functions, both in Base as well as packages such as StatsFuns.jl. At the moment, many of these are implemented in external libraries such as Rmath and openspecfun. This project would involve implementing these functions in native Julia (possibly utilising the work in SpecialFunctions.jl), seeking out opportunities for possible improvements along the way, such as supporting Float32 and BigFloat, exploiting fused multiply-add operations, and improving errors and boundary cases.
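For illustration only, a kernel written in native Julia with evalpoly is automatically generic over Float32, Float64 and BigFloat, and Horner evaluation via muladd lowers to fused multiply-adds where the hardware supports them. A real implementation would use carefully fitted minimax coefficients and argument reduction rather than this truncated Taylor series.

```julia
# Truncated Taylor polynomial of exp, purely to illustrate the implementation
# pattern: evalpoly uses Horner's rule with muladd, and computing the
# coefficients in the argument's type keeps the kernel type-generic.
exp_kernel(x) = evalpoly(x, ntuple(k -> one(x) / factorial(k - 1), 8))

exp_kernel(0.1f0)             # Float32 path
exp_kernel(big"0.1")          # BigFloat path
exp_kernel(0.1) ≈ exp(0.1)    # accurate only for small |x|
```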
Recommended Skills: A strong understanding of calculus.
Expected Results: New and faster methods for evaluating properties of special functions.
Mentors: Steven Johnson, Oscar Smith. Ask on Discourse or on Slack.
The CCSA algorithm by Svanberg (2001) is a nonlinear programming algorithm widely used in topology optimization and for other large-scale optimization problems: it is a robust algorithm that can handle arbitrary nonlinear inequality constraints and huge numbers of degrees of freedom. Moreover, the relative simplicity of the algorithm makes it possible to easily incorporate sparsity in the Jacobian matrix (for handling huge numbers of constraints), approximate-Hessian preconditioners, and as special-case optimizations for affine terms in the objective or constraints. However, currently it is only available in Julia via the NLopt.jl interface to an external C implementation, which greatly limits its flexibility.
Recommended Skills: Experience with nonlinear optimization algorithms and understanding of Lagrange duality, familiarity with sparse matrices and other Julia data structures.
Expected Results: A package implementing a native-Julia CCSA algorithm.
Mentors: Steven Johnson.
At JuliaCon 2021 a new Monte Carlo sampling method (for example, as a sampling algorithm for the posterior in Bayesian inference) was introduced [1]. The method exploits the factorization structure to sample a single continuous-time Markov chain targeting a joint distribution in parallel. In contrast to parallel Gibbs sampling, in this method at no time is a subset of coordinates kept fixed. In Gibbs sampling, keeping a subset fixed is the main device to achieve massive parallelism: given a separating set of coordinates, the conditional posterior factorizes into independent subproblems. In the presented method, a particle representing a parameter vector sampled from the posterior never ceases to move, and it is only the decisions about changes in the direction of the movement which happen in parallel on subsets of coordinates.
There are already two implementations available which make use of Julia's multithreading capabilities. Starting from these, the contributor will implement a version of the algorithm using GPU computing techniques, as the method is well suited to these approaches.
Expected Results: Implement massive parallel factorized bouncy particle sampler [1,2] using GPU computing.
Recommended Skills: GPU computing, Markov processes, Bayesian inference.
Mentors: Moritz Schauer
Rating: Hard, 350 hours
[1] Moritz Schauer: ZigZagBoomerang.jl - parallel inference and variable selection. JuliaCon 2021 contribution [https://pretalx.com/juliacon2021/talk/LUVWJZ/], Youtube: [https://www.youtube.com/watch?v=wJAjP_I1BnQ], 2021.
[2] Joris Bierkens, Paul Fearnhead, Gareth Roberts: The Zig-Zag Process and Super-Efficient Sampling for Bayesian Analysis of Big Data. The Annals of Statistics, 2019, 47. Vol., Nr. 3, pp. 1288-1320. [https://arxiv.org/abs/1607.03188].
Unfortunately we won't have time to mentor this year. Check back next year!
Pythia is a package for scalable machine learning time series forecasting and nowcasting in Julia.
The project mentors are Andrii Babii and Sebastian Vollmer.
This project involves developing scalable machine learning time series regressions for nowcasting and forecasting. Nowcasting in economics is the prediction of the present, the very near future, and the very recent past state of an economic indicator. The term is a contraction of "now" and "forecasting" and originates in meteorology.
The objective of this project is to introduce scalable regression-based nowcasting and forecasting methodologies that have recently demonstrated empirical success in data-rich environments. Examples of existing popular packages for regression-based nowcasting on other platforms include the "MIDAS Matlab Toolbox", as well as the 'midasr' and 'midasml' packages in R. The starting point for this project is porting the 'midasml' package from R to Julia. Currently, Pythia has sparse-group LASSO regression functionality for forecasting.
The following functions are of interest: in-sample and out-of-sample forecasts/nowcasts, regularized MIDAS with Legendre polynomials, visualization of nowcasts, AIC/BIC and time series cross-validation tuning, forecast evaluation, pooled and fixed effects panel data regressions for forecasting and nowcasting, HAC-based inference for sparse-group LASSO, and high-dimensional Granger causality tests. Other widely used existing functions from R/Python/Matlab are also of interest.
Recommended skills: Graduate-level knowledge of time series analysis, machine learning, and optimization is helpful.
Expected output: The contributor is expected to produce code, documentation, visualization, and real-data examples.
References: Contact project mentors for references.
Modern business applications often involve forecasting hundreds of thousands of time series. Producing such a gigantic number of reliable and high-quality forecasts is computationally challenging, which limits the scope of potential methods that can be used in practice; see, e.g., the 'forecast', 'fable', or 'prophet' packages in R. Currently, Julia lacks scalable time series forecasting functionality, and this project aims to develop automated, data-driven, and scalable time series forecasting methods.
The following functionality is of interest: forecasting intermittent demand (Croston, adjusted Croston, INARMA), scalable seasonal ARIMA with covariates, loss-based forecasting (gradient boosting), unsupervised time series clustering, forecast combinations, unit root tests (ADF, KPSS). Other widely used existing functions from R/Python/Matlab are also of interest.
Recommended skills: Graduate-level knowledge of time series analysis is helpful.
Expected output: The contributor is expected to produce code, documentation, visualization, and real-data examples.
References: Contact project mentors for references.
Clifford circuits are a class of quantum circuits that can be simulated efficiently on a classical computer. As such, they do not provide the computational advantage expected of universal quantum computers. Nonetheless, they are extremely important, as they underpin most techniques for quantum error correction and quantum networking. Software that efficiently simulates such circuits, at the scale of thousands or more qubits, is essential to the design of quantum hardware. The QuantumClifford.jl Julia project enables such simulations.
Simulation of Clifford circuits involves significant amounts of linear algebra with boolean matrices. This enables the use of many standard computation accelerators like GPUs, as long as these accelerators support bit-wise operations. The main complication is that the elements of the matrices under consideration are usually packed in order to increase performance and lower memory usage, i.e. a vector of 64 elements would be stored as a single 64-bit integer instead of as an array of 64 bools. A Summer of Code project could consist of implementing the aforementioned linear algebra operations in GPU kernels and then seamlessly integrating them into the rest of the QuantumClifford library. At a minimum this would include Pauli-Pauli products and certain small Clifford operators, but it could extend to general stabilizer tableau multiplication and even tableau diagonalization.
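To make the packed representation concrete, here is a small plain-Julia sketch (not QuantumClifford.jl's internal layout) of how 64 boolean entries fit into one UInt64 and how row operations reduce to word-level bit operations, which is exactly the kind of work the GPU kernels would do.

```julia
# Pack a boolean vector into 64-bit words: 6400 bits become 100 UInt64s.
pack(bits::AbstractVector{Bool}) =
    [reduce(|, UInt64(b) << (i - 1) for (i, b) in enumerate(chunk))
     for chunk in Iterators.partition(bits, 64)]

# Elementwise XOR of two packed rows: 6400 boolean XORs in 100 word operations.
function xor_rows!(dest::Vector{UInt64}, src::Vector{UInt64})
    dest .⊻= src
    return dest
end

a = pack(rand(Bool, 6400))
b = pack(rand(Bool, 6400))
xor_rows!(a, b)
```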
Recommended skills: Basic knowledge of the stabilizer formalism used for simulating Clifford circuits. Familiarity with performance profiling tools in Julia and Julia's GPU stack, including KernelAbstractions and Tullio.
Mentors: Stefan Krastanov
Expected duration: 175 hours (but applicants can scope it to a longer project by including work on GPU-accelerated Gaussian elimination used in the canonicalization routines)
Difficulty: Medium if the applicant is familiar with Julia, even without understanding of Quantum Information Science (but applicants can scope it to "hard" by including the aforementioned additional topics)
Often, stabilizer circuit simulations are structured as a repeated simulation of the same circuit with random Pauli errors superimposed on it. This is useful, for instance, when studying the performance of error-correcting codes. In such simulations it is possible to run one single relatively expensive simulation of the noise-less circuit in order to get a reference and then run a large number of much faster "Pauli Frame" simulations that include the random noise. By utilizing the reference simulation, the random noise simulations could more efficiently provide samples of the performance of the circuit under noise. This project would involve creating an API for such simulations in QuantumClifford.jl. A useful reference would be the Stim C++ library.
Recommended skills: Knowledge of the stabilizer formalism used for simulating Clifford circuits. Familiarity with performance profiling tools in Julia.
Mentors: Stefan Krastanov
Expected duration: 350 hours
Difficulty: Hard, due to requiring in-depth knowledge of the stabilizer formalism.
Quantum Error Correcting codes are typically represented in a form similar to the parity check matrix of a classical code. This form is called a stabilizer tableau. This project would involve creating a comprehensive library of frequently used quantum error correcting codes. As an initial step, that would involve implementing the tableaux corresponding to simple pedagogical codes like the Steane and Shor codes, toric and surface codes, some CSS codes, etc. The project can be extended to a much longer one by including work on decoders for some of these codes. A large part of this project would involve literature surveys.
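As a toy example of what such a library stores, the sketch below (plain Julia, not necessarily QuantumClifford.jl's data layout) converts the stabilizer generators of the 3-qubit bit-flip repetition code, ZZI and IZZ, into the binary (X|Z) tableau form used throughout the stabilizer formalism.

```julia
# Each Pauli string becomes a row of an (X | Z) boolean matrix, which is the
# kind of tableau a code library would store and manipulate.
function pauli_to_binary(p::String)
    n = length(p)
    x = falses(n); z = falses(n)
    for (i, c) in enumerate(p)
        c == 'X' && (x[i] = true)
        c == 'Z' && (z[i] = true)
        c == 'Y' && (x[i] = true; z[i] = true)
    end
    return vcat(x, z)      # length-2n row of the tableau
end

tableau = permutedims(reduce(hcat, pauli_to_binary.(["ZZI", "IZZ"])))
# 2×6 BitMatrix: the X-part is all zeros and the Z-part is the repetition
# code's classical parity-check matrix [1 1 0; 0 1 1].
```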
Recommended skills: Knowledge of the stabilizer formalism used for simulating Clifford circuits.
Mentors: Stefan Krastanov
Expected duration: 175 hours (but applicants can scope it as longer, depending on the list of functionality they plan to implement)
Difficulty: Medium. Easy with some basic knowledge of quantum error correction
Applying an n-qubit Clifford gate to an n-qubit state (tableau) is an operation similar to matrix multiplication, requiring O(n^3) steps. However, applying a single-qubit or two-qubit gate to an n-qubit tableau is much faster, as it needs to address only one or two columns of the tableau. This project would focus on extending the left-multiplication special cases already started in symbolic_cliffords.jl and creating additional right-multiplication special cases (for which the Stim library is a good reference).
Recommended skills: Knowledge of the stabilizer formalism used for simulating Clifford circuits. Familiarity with performance profiling tools in Julia. Understanding of C/C++ if you plan to use the Stim library as a reference.
Mentors: Stefan Krastanov
Expected duration: 175 hours (but applicants can scope it as longer if they wish)
Difficulty: Easy
Symbolics.jl has robust ways to convert symbolic expressions into multivariate polynomials. There is now a robust Groebner basis implementation in Groebner.jl. Finding roots and varieties of sets of polynomials would be extremely useful in many applications. This project would involve implementing various techniques for solving polynomial systems, and where possible other non-linear equation systems. A good proposal should try to enumerate a number of techniques that are worth implementing, for example:
Analytical solutions for polynomial systems of degree <= 4
Use of HomotopyContinuation.jl for testing solvability and finding numerical solutions
Newton-Raphson methods
Using Groebner basis computations to find varieties
The API for these features should be extremely user-friendly:
A single roots function should take the sets of equations and return the right type of roots as output (either varieties or numerical answers)
It should automatically find the fastest strategy to solve the set of equations and apply it.
It should fail with descriptive error messages when equations are not solvable, or degenerate in some way.
This should allow implementing symbolic eigenvalue computation when eigs is called.
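A hypothetical usage sketch of such an interface is given below; the roots function does not exist yet, and its name and signature are for the contributor to design.

```julia
using Symbolics

@variables x y

# Hypothetical API: solve the system x^2 + y^2 = 1, x = y.
sols = roots([x^2 + y^2 - 1, x - y])
# Ideally this would return exact solutions when an analytical route exists,
# fall back to numerical solutions (e.g. via HomotopyContinuation.jl) when it
# does not, and raise a descriptive error for unsolvable or degenerate systems.
```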
Mentors: Shashi Gowda, Alexander Demin Duration: 350 hours
Implement the heuristic approach to symbolic integration. Then hook it into a repository of rules such as RUBI. See also the potential of using symbolic-numeric integration techniques (https://github.com/SciML/SymbolicNumericIntegration.jl).
Recommended Skills: High school/Freshman Calculus
Expected Results: A working implementation of symbolic integration in the Symbolics.jl library, along with documentation and tutorials demonstrating its use in scientific disciplines.
Mentors: Shashi Gowda, Yingbo Ma
Duration: 350 hours
Julia functions that take arrays and output arrays or scalars can be traced using Symbolics.jl variables to produce a trace of operations. This output can be optimized to use fused operations or to call highly specific NNlib functions. In this project you will trace through Flux.jl neural-network functions and apply optimizations on the resultant symbolic expressions. This can mostly be implemented as rewrite rules (see https://github.com/JuliaSymbolics/Symbolics.jl/pull/514).
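A minimal illustration of the tracing step is below, assuming Symbolics.variables for programmatically creating arrays of symbolic variables; the project's work starts once such a trace exists and rewrite rules are applied to it.

```julia
using Symbolics

x = Symbolics.variables(:x, 1:3)        # vector of scalar symbolic variables
W = Symbolics.variables(:W, 1:2, 1:3)   # 2×3 matrix of symbolic variables

dense(W, x) = tanh.(W * x)              # an ordinary Julia "layer"
trace = dense(W, x)                     # 2-element vector of symbolic expressions
```

Each entry of trace is a symbolic expression tree that rewrite rules can fuse or map onto specialised kernels.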
Recommended Skills: Knowledge of space and time complexities of array operations, experience in optimizing array code.
Mentors: Shashi Gowda
Duration: 175 hours
Herbie documents a way to optimize floating point functions so as to reduce instruction count while reorganizing operations such that floating point inaccuracies do not get magnified. It would be a great addition to have this written in Julia and have it work on Symbolics.jl expressions. An ideal implementation would use the e-graph facilities of Metatheory.jl to implement this.
Mentors: Shashi Gowda, Alessandro Cheli
Duration: 350 hours
Difficulty: Medium
Duration: 350 hours
FlashFill is a mechanism for creating data manipulation pipelines using programming by example (PBE). As an example, see this implementation in Microsoft Excel. We want a version of FlashFill that can work against Julia tabular data structures, such as DataFrames and Tables.jl.
Resources:
A presentation by Sumit Gulwani of Microsoft Research
A video
Recommended Skills: Compiler techniques, DSL generation, Program synthesis
Expected Output: A practical flashfill implementation that can be used on any tabular data structure in Julia
Mentors: Avik Sengupta
Difficulty: Medium
Duration: 175 hours
Apache Parquet is a binary data format for tabular data. It has features for compression and memory-mapping of datasets on disk. A decent implementation of Parquet in Julia is likely to be highly performant. It will be useful as a standard format for distributing tabular data in binary form. There exists a Parquet.jl package that has a Parquet reader and a writer. It currently conforms to the Julia Tabular file IO interface at a very basic level. It needs more work to add support for critical elements that would make Parquet.jl usable for fast large-scale parallel data processing. Each of these goals can be targeted as a single, short duration (175 hrs) project.
Lazy loading and support for out-of-core processing, with Arrow.jl and Tables.jl integration. Improved usability and performance of Parquet reader and writer for large files.
Reading from and writing data on to cloud data stores, including support for partitioned data.
Support for missing data types and encodings making the Julia implementation fully featured.
Resources:
The Parquet file format (there are also many articles and talks on the Parquet storage format on the internet)
Recommended skills: Good knowledge of Julia language, Julia data stack and writing performant Julia code.
Expected Results: Depends on the specific projects we would agree on.
Mentors: Tanmay Mohapatra
Difficulty: Hard
Duration: 175 hours
DataFrames.jl is one of the more popular implementations of a tabular data type for Julia. One of the features it supports is data frame joining. However, more work is needed to improve this functionality. The specific targets for this project are as follows (a final list of targets included in the scope of the project can be decided later):
fully implement multi-threading support for joins, reduce the memory requirements of the join algorithms used (which should additionally improve their performance), verify the efficiency of alternative joining strategies in comparison to those currently used, and implement them along with an adaptive algorithm that chooses the right joining strategy depending on the passed data;
implement joins allowing for efficient matching on non-equal keys; special attention should be paid to matching on keys that are date/time and spatial objects;
implement a join allowing for an in-place update of columns of one data frame by values stored in another data frame, based on a matching key and a condition specifying when an update should be performed;
implement a more flexible mechanism than is currently available for defining output data frame column names when performing a join.
Resources:
Recommended skills: Good knowledge of Julia language, Julia data stack and writing performant multi-threaded Julia code. Experience with benchmarking code and writing tests. Knowledge of join algorithms (as e.g. used in databases like DuckDB or other tabular data manipulation ecosystems e.g. Polars or data.table).
Expected Results: Depends on the specific projects we would agree on.
Mentors: Bogumił Kamiński
TopOpt.jl is a topology optimisation package written in pure Julia. Topology optimisation is an exciting field at the intersection of shape representation, physics simulations and mathematical optimisation, and the Julia language is a great fit for this field. To learn more about TopOpt.jl, check the following JuliaCon talk.
The following is a tentative list of projects in topology optimisation that you could be working on in the coming Julia Season of Contributions or Google Summer of Code. If you are interested in exploring any of these topics or if you have other interests related to topology optimisation, please reach out to the main mentor Mohamed Tarek via email.
Project difficulty: Easy to Medium
Work load: 175 or 350 hours
Description: There are numerous ways to use machine learning for design optimisation in topology optimisation. The following are all recent papers with applications of neural networks and machine learning in topology optimisation. There are also exciting research opportunities in this direction.
DNN-based Topology Optimisation: Spatial Invariance and Neural Tangent Kernel
NTopo: Mesh-free Topology Optimization using Implicit Neural Representations
TONR: An exploration for a novel way combining neural network with topology optimization
In this project you will implement one of the algorithms discussed in any of these papers.
Knowledge prerequisites: neural networks, optimisation, Julia programming
Project difficulty: Easy
Work load: 175 hours
Description: There are some topology optimisation formulations that enable the optimisation of the shape of the structure and the material selected simultaneously. In this project, you will implement some multi-material design optimisation formulations, e.g. this paper has a relatively simple approach to integrate in TopOpt.jl. Other methods include using mixed integer nonlinear programming from Nonconvex.jl to select materials in different parts of the design.
Knowledge prerequisites: basic optimisation, Julia programming
Project difficulty: Medium
Work load: 350 hours
Description: Currently, TopOpt.jl only supports unstructured meshes. This is a very flexible type of mesh, but it is not as memory efficient as uniform rectilinear grids, where all the elements are assumed to have the same shape. Uniform rectilinear grids are the most common grid used in topology optimisation in practice, yet TopOpt.jl currently stores them as unstructured meshes, which is unnecessarily inefficient. In this project, you will optimise the finite element analysis and topology optimisation codes in TopOpt.jl for uniform rectilinear grids.
Knowledge prerequisites: knowledge of mesh types, Julia programming
Project difficulty: Medium
Work load: 350 hours
Description: Topology optimisation problems with more mesh elements take longer to simulate and to optimise. In this project, you will explore the use of adaptive mesh refinement starting from a coarse mesh, optimising and only refining the elements that need further optimisation. This is an effective way to accelerate topology optimisation algorithms.
Knowledge prerequisites: adaptive mesh refinement, Julia programming
Project difficulty: Medium
Work load: 175 or 350 hours
Description: All of the examples and problem types in TopOpt.jl currently belong to the quasi-static, linear elasticity class of problems. The goal of this project is to implement more problem types and examples from the field of heat transfer. Both steady-state heat transfer and linear elasticity problems make use of elliptic partial differential equations, so the code for linear elasticity problems should be largely reusable for heat transfer problems with minimal changes.
Knowledge prerequisites: finite element analysis, heat equation, Julia programming
Trixi.jl is a Julia package for adaptive high-order numerical simulations of conservation laws. It is designed to be simple to use for students and researchers, extensible for research and teaching, as well as efficient and suitable for high-performance computing.
Difficulty: Medium (up to hard, depending on the chosen subtasks)
Project size: 175 hours or 350 hours, depending on the chosen subtasks
Enzyme.jl is the Julia frontend of Enzyme, a modern automatic differentiation (AD) framework working at the level of LLVM code. It can provide fast forward and reverse mode AD and - unlike some other AD packages - supports mutating operations. Since Trixi.jl relies on mutating operations and caches for performance, this feature is crucial to obtain an implementation that works efficiently for both simulation runs and AD.
The overall goal of this project is to create a working prototype of Trixi.jl (or a subset thereof) using Enzyme.jl for AD, and to support as many of Trixi's advanced features as possible, such as adaptive mesh refinement, shock capturing etc.
Possible subtasks in this project include
Explore and implement forward/backward mode AD via Enzyme.jl for a simplified simulation of the 1D advection equation or the 1D compressible Euler equations (e.g., compute the Jacobian of the right-hand side evaluation Trixi.rhs! on a simple mesh in serial execution; a minimal sketch follows this list)
Explore and implement forward mode AD via Enzyme.jl of semidiscretizations provided by Trixi.jl, mimicking the functionality that is already available via ForwardDiff.jl
Explore and implement reverse mode AD via Enzyme.jl of semidiscretizations provided by Trixi.jl as required for modern machine learning tasks
Explore and implement AD via Enzyme.jl of full simulations combining semidiscretizations of Trixi.jl with time integration methods of OrdinaryDiffEq.jl
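For the first subtask, a minimal sketch of a reverse-mode Enzyme.jl call on a mutating right-hand side is shown below, using a toy 1D decay term instead of Trixi.rhs!. The Duplicated activity pattern follows Enzyme.jl's documented usage, but exact signatures evolve between versions, so treat this as a starting point only.

```julia
using Enzyme

# Toy stand-in for a semidiscretisation right-hand side: du .= -u.
function toy_rhs!(du, u)
    du .= -u
    return nothing
end

u = rand(4);         du = zeros(4)
u_shadow = zeros(4); du_shadow = ones(4)   # seed the adjoint of the output

# Reverse-mode call with Duplicated (primal, shadow) pairs; Const marks the
# (nothing) return value as inactive.
Enzyme.autodiff(Reverse, toy_rhs!, Const,
                Duplicated(du, du_shadow), Duplicated(u, u_shadow))
# u_shadow now holds the adjoint of u, here -1 for every entry.
```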
Related subtasks in this project not related directly to Enzyme.jl but using other packages include
Explore and implement means to improve the current handling of caches to simplify AD and differentiable programming with semidiscretizations of Trixi.jl in general, e.g., via PreallocationTools.jl.
Extend the current AD support based on ForwardDiff.jl to other functionality of Trixi.jl, e.g., shock capturing discretizations, MPI parallel simulations, and other features currently not supported
This project is good for both software engineers interested in the fields of numerical analysis and scientific machine learning as well as those students who are interested in pursuing graduate research in the field.
Recommended skills: Good knowledge of at least one numerical discretization scheme (e.g., finite volume, discontinuous Galerkin, finite differences); initial knowledge in automatic differentiation; preferably the ability (or eagerness to learn) to write fast code
Expected results: Contributions to state of the art and production-quality automatic differentiation tools for Trixi.jl
Mentors: Hendrik Ranocha, Michael Schlottke-Lakemper
Difficulty: Medium (to hard, depending on the chosen subtasks)
Project size: 175 hours or 350 hours, depending on the chosen subtasks
GPUs can provide considerable speedups compared to CPUs for the computational fluid dynamics simulations of the kind performed in Trixi.jl. Julia provides several ways to implement efficient code on GPUs such as CUDA.jl for Nvidia GPUs, AMDGPU.jl for AMD GPUs, and KernelAbstractions.jl, which provides a single frontend to generate code for multiple GPU backends. In this project, we will likely work with CUDA.jl due to its maturity, but other options can be explored later as well.
The goal of this project is to implement a working subset of the functionality of Trixi.jl on GPUs, starting with a basic numerical scheme on Cartesian meshes in 2D. Based thereon, there are a lot of possibilities for extensions to more complex geometries and sophisticated discretizations.
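For orientation, the kind of pointwise kernel that would be ported might look like the following CUDA.jl sketch. It is a toy update, not Trixi.jl code, and only illustrates the programming model the project would build on.

```julia
using CUDA

# Toy CUDA.jl kernel: a pointwise update of the kind that appears inside a
# finite volume / DG right-hand side evaluation.
function pointwise_kernel!(du, u, a)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(du)
        @inbounds du[i] = a * u[i]
    end
    return nothing
end

u  = CUDA.rand(Float32, 2^16)
du = CUDA.zeros(Float32, 2^16)
threads = 256
blocks  = cld(length(u), threads)
@cuda threads=threads blocks=blocks pointwise_kernel!(du, u, -1.0f0)
```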
Possible subtasks in this project include
Write a simple 1D code for CPUs, taking the methods implemented in Trixi.jl as a blueprint.
Port the simple 1D CPU code to GPUs using one of the GPU packages as a prototype.
Prototype GPU implementations of existing kernels implemented in Trixi.jl by moving data from the CPU to the GPU and back again explicitly.
Keep the data on the GPU after converting all kernels required for a simple simulation.
Extend the GPU implementations to more complex numerical methods and settings.
Extend the GPU implementations to different types of GPUs, using different GPU programming packages in Julia.
Optimize and compare the performance of the implementations.
This project is good for both software engineers interested in the fields of numerical analysis and scientific machine learning as well as those students who are interested in pursuing graduate research in the field.
Recommended skills: Background knowledge in numerical analysis, working knowledge about GPU computing, and the ability to write fast code
Expected results: Draft of a working subset of the functionality of Trixi.jl running efficiently on GPUs.
Mentors: Michael Schlottke-Lakemper, Hendrik Ranocha
Turing is a universal probabilistic programming language embedded in Julia. Turing allows the user to write models in standard Julia syntax and provides a wide range of sampling-based inference methods for solving problems across probabilistic machine learning, Bayesian statistics, data science, and more. Since Turing is implemented in pure Julia code, its compiler and inference methods are amenable to hacking: new model families and inference methods can be easily added. Below is a list of ideas for potential projects, though you are welcome to propose your own to the Turing team.
If you are interested in exploring any of these projects, please reach out to the listed project mentors or Tor Fjelde (at tef30[at]cam.ac.uk). You can find their contact information at turing.ml/team.
Mentors: Seth Axen, Tor Fjelde, Kai Xu, Hong Ge
Project difficulty: Medium
Project length: 175 hrs or 350 hrs
Description: posteriordb is a database of 120 diverse Bayesian models implemented in Stan (with 1 example model in PyMC) with reference posterior draws, data, and metadata. For performance comparison and for showcasing best practices in Turing, it is useful to have Turing implementations of these models. The goal of this project is to implement a large subset of these models in Turing/Julia.
For each model, we consider the following tasks:
Correctness test: when reference posterior draws and sampler configuration are available in posteriordb, the correctness and consistency of the implementation can be tested by sampling the model with the same configuration and comparing the samples to the reference draws.
Best practices: all models must be checked to be differentiable with all Turing-supported AD frameworks.
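To give a flavour of the expected output, a Turing.jl implementation of the classic eight-schools hierarchical model (one of the standard posteriordb examples) could look roughly like the sketch below; the exact priors, parameterisation, and sampler configuration would follow the posteriordb reference.

```julia
using Turing

# Non-centered eight-schools model; priors here are illustrative, not the
# posteriordb reference specification.
@model function eight_schools_nc(y, σ)
    μ ~ Normal(0, 5)
    τ ~ truncated(Cauchy(0, 5), 0, Inf)
    θ_raw ~ filldist(Normal(0, 1), length(y))
    θ = μ .+ τ .* θ_raw
    for i in eachindex(y)
        y[i] ~ Normal(θ[i], σ[i])
    end
end

y = [28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0]
σ = [15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0]
chain = sample(eight_schools_nc(y, σ), NUTS(), 1_000)
```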
Mentors: Tor Fjelde, Cameron Pfiffer, David Widmann
Project difficulty: Easy
Project length: 175 hrs
Description: Most samplers in Turing.jl implement the AbstractMCMC.jl interface, giving the user a unified way to interact with them. The interface of AbstractMCMC.jl is currently very bare-bones and does not lend itself nicely to interoperability between samplers.
For example, it’s completely valid to compose two MCMC kernels, e.g. taking one step using RWMH from AdvancedMH.jl, followed by taking one step using NUTS from AdvancedHMC.jl. Unfortunately, implementing such a composition requires explicitly defining conversions from the state returned by RWMH to the state expected by NUTS, and vice versa. Doing this for one such sampler pair is generally very easy, but once you have to do it for N samplers, the amount of work needed becomes insurmountable.
One way to alleviate this issue would be to add a simple interface for interacting with the states of the samplers, e.g. a method for getting the current values in the state and a method for setting them, in addition to a set of glue methods which can be overridden in specific cases where more information can be shared between the states.
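A hypothetical sketch of what such glue could look like is below; none of these functions exist in AbstractMCMC.jl today, and the point is only to illustrate that composition then needs no per-pair conversion code.

```julia
# Hypothetical interface sketch; these generic functions are not (yet) part
# of AbstractMCMC.jl.
getparams(state) = error("getparams not implemented for $(typeof(state))")
setparams(state, θ) = error("setparams not implemented for $(typeof(state))")

# With only the generic accessors, handing over from one sampler's state to
# another's needs no sampler-pair-specific conversion:
function handoff(from_state, to_state)
    θ = getparams(from_state)
    return setparams(to_state, θ)
end
```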
An example of ongoing work that attempts to take a step in this direction is: https://github.com/TuringLang/AbstractMCMC.jl/pull/86
Even if this PR makes it in before the project starts, additional work will be needed to propagate these changes to the downstream packages (i.e. implementing all these functions) and to determine whether the current approach is really the way to go, or whether we need to change it or just add more features.
Mentors: Tor Fjelde, Xianda Sun, David Widmann, Qingliang Zhuo, Hong Ge
Project difficulty: Hard
Project length: 175 hrs
Description: Tape caching often leads to significant performance improvements for gradient-based sampling algorithms (e.g. HMC/NUTS). At the moment, ReverseDiff only supports tape caching at the level of the complete computation. This project is about implementing a more modular tape caching mechanism for ReverseDiff.jl, i.e. treating functions as caching barriers.
Mentors: Tor Fjelde, Xianda Sun, Kai Xu, Hong Ge
Project difficulty: Hard
Project length: 175 hrs or 350 hrs
Description: GPU support in Turing is not quite there yet for several reasons:
Bijectors.jl, the package which provides transformations of distributions to Turing.jl, is not fully compatible with GPU. For example, many of the transformations make use of scalar indexing, which is slow on GPU.
DynamicPPL.jl, the package providing the DSL of Turing.jl, is not compatible with GPU. Again, a lot of scalar indexing is used, and likely some internal functions are simply incompatible with GPU usage at the moment.
Possibly others.
There might also be other issues along the way. Making Turing.jl fully support GPU usage within the span of the project is very unlikely, but taking a significant step in this direction should be possible and would be very useful.
Mentors: Tor Fjelde, Xianda Sun, Kai Xu, Hong Ge
Project difficulty: Medium
Project length: 350 hrs
Description: The variational inference functionality of Turing.jl was at some point moved into AdvancedVI.jl, but after this move the package has received very little love.
As of right now, the package only supports ADVI and the interface needs to be generalized to support more types of models and variational inference algorithms in an efficient way.
In addition, implementing more recent advances in variational inference is also included in the project.
Mentors: Tor Fjelde, David Widmann, Hong Ge
Project difficulty: Medium
Project length: 350 hrs
Description: At the moment there is no support for running a Turing model in a “batched” mode.
When one wants to run, say, 2 chains in parallel for a given model, the current approach is, in effect, to call sample(model, ...) twice. Of course, one can parallelize these sample calls across multiple cores, etc., and this is already supported in Turing.jl.
What is not yet supported is to, say, run 2 chains at the same time by “stacking” the parameters into a higher-dimensional array, e.g. if the parameters θ form a Vector of values, then we can stack them into a Matrix of size length(θ) × 2 and execute the model on this instead.
It effectively boils down to adding support for calling logdensity(model, θbatch) with θbatch being of size d × N and having the result be a vector of length N. Once we have this, a sampler with a batched mode can work nicely with a Turing.jl model.
This will require: making changes internally to DynamicPPL.jl, the DSL of Turing.jl, to allow batching, and implementing a way to indicate to the code that “this input should be treated as a batch, not a single input”. One approach might be an independent package which implements a wrapper type Batch or something similar, which is simply unwrapped at the appropriate stages, but this needs to be discussed further.
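The idea can be illustrated independently of DynamicPPL internals with a toy log-density in plain Julia: the batched version takes a d × N matrix and returns one value per column, so N chains can be advanced through a single model execution.

```julia
# Toy "model": iid standard normal prior over d parameters.
logdensity(θ::AbstractVector) = sum(-0.5 .* θ .^ 2 .- 0.5 * log(2π))

# Batched mode: θbatch has size d × N, the result has length N.
logdensity_batch(θbatch::AbstractMatrix) =
    vec(sum(-0.5 .* θbatch .^ 2 .- 0.5 * log(2π); dims=1))

θbatch = randn(3, 2)
logdensity_batch(θbatch) ≈ [logdensity(θbatch[:, 1]), logdensity(θbatch[:, 2])]
```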
Mentors: S. T. John, Ross Viljoen
Project difficulty: Medium
Project length: 350 hrs
Description: Adding approximate inference methods for non-Gaussian likelihoods which are available in other GP packages but not yet within JuliaGPs. The project would start by determining which approximate inference method(s) to implement - there’s lots to do, and we’re happy to work with a contributor on whichever method they are most interested in, or to suggest one if they have no strong preference.
Mentors: Ross Viljoen, S. T. John
Project difficulty: Medium
Project length: 350 hrs
Description: This would involve first ensuring that common models are able to run fully on the GPU, then identifying and improving GPU-specific performance bottlenecks. This would begin by implementing a limited end-to-end example involving a GP with a standard kernel, and profiling it to debug any substantial performance bottlenecks. From there, support for a wider range of the functionality available in KernelFunctions.jl and AbstractGPs.jl can be added. Stretch goal: extension of GPU support to some functionality in ApproximateGPs.jl.
We are generally looking for folks that want to help with the Julia VS Code extension. We have a long list of open issues, and some of them amount to significant projects.
Required Skills: TypeScript, Julia, web development.
Expected Results: Depends on the specific projects we would agree on.
Mentors: David Anthoff
The VSCode extension for Julia could provide a simple way to browse available packages and view what's installed on a user's system. To start with, this project could simply provide a GUI that reads in package data from a Project.toml/Manifest.toml and shows some UI elements to add/remove/manage those packages.
This could also be extended by having metadata about the package, such as a readme, github stars, activity and so on (somewhat similar to the VSCode-native extension explorer).
Expected Results: A UI in VSCode for package operations.
Recommended Skills: Familiarity with TypeScript and Julia development.
Mentors: Sebastian Pfitzner
Also take a look at Pluto - VS Code integration!
Julia has early support for targeting WebAssembly and running in the web browser. Please note that this is a rapidly moving area (see the project repository for a more detailed overview), so if you are interested in this work, please make sure to inform yourself of the current state and talk to us to scope out an appropriate project. The below is intended as a set of possible starting points.
Mentor for these projects is Keno Fischer unless otherwise stated.
Because Julia relies on an asynchronous task runtime and WebAssembly currently lacks native support for stack management, Julia needs to explicitly manage task stacks in the wasm heap and perform a compiler transformation to use this stack instead of the native WebAssembly stack. The overhead of this transformation directly impacts the performance of Julia on the wasm platform. Additionally, since all code Julia uses (including arbitrary C/C++ libraries) must be compiled using this transformation, it needs to cover a wide variety of inputs and be coordinated with other users having similar needs (e.g. the Pyodide project to run python on the web). The project would aim to improve the quality, robustness and flexibility of this transformation.
Recommended Skills: Experience with LLVM.
WebAssembly is in the process of standardizing threads. Simultaneously, work is ongoing to introduce a new threading runtime in Julia (see #22631 and related PRs). This project would investigate enabling threading support for Julia on the WebAssembly platform, implementing runtime parallel primitives on the WebAssembly platform and ensuring that high level threading constructs are correctly mapped to the underlying platform. Please note that both the WebAssembly and Julia threading infrastructure is still in active development and may continue to change over the duration of the project. An informed understanding of the state of these projects is a definite prerequisite for this project.
Recommended Skills: Experience with C and multi-threaded programming.
WebAssembly is in the process of adding first class references to native objects to its specification. This capability should allow very high performance integration between Julia and JavaScript objects. Since it is not possible to store references to JavaScript objects in regular memory, adding this capability will require several changes to the runtime system and code generation (possibly including at the LLVM level) in order to properly track these references and emit them either as direct references or as indirect references via the reference table.
Recommended Skills: Experience with C.
While Julia now runs on the web platform, it is not yet a language that's suitable for first-class development of web applications. One of the biggest missing features is integration with and abstraction over more complicated javascript objects and APIs, in particular the DOM. Inspiration may be drawn from similar projects in Rust or other languages.
Recommended Skills: Experience with writing libraries in Julia, experience with JavaScript Web APIs.
Several Julia libraries (e.g. WebIO.jl, Escher.jl) provide input and output capabilities for the web platform. Porting these libraries to run directly on the wasm platform would enable a number of existing UIs to automatically work on the web.
Recommended Skills: Experience with writing libraries in Julia.
The Julia project uses BinaryBuilder to provide binaries of native dependencies of Julia packages. Experimental support exists to extend this support to the wasm platform, but few packages have been ported. This project would consist of attempting to port a significant fraction of the binary dependencies of the Julia ecosystem to the web platform by improving the toolchain support in BinaryBuilder or (if necessary) porting upstream packages to fix assumptions not applicable on the wasm platform.
Recommended Skills: Experience with building native libraries in Unix environments.
The Distributed computing abstractions in Julia provide convenient abstraction for implementing programs that span many communicating Julia processes on different machines. However, the existing abstractions generally assume that all communicating processes are part of the same trust domain (e.g. they allow messages to execute arbitrary code on the remote). With some of the nodes potentially running in the web browser (or multiple browser nodes being part of the same distributed computing cluster via WebRPC), this assumption no longer holds true and new interfaces need to be designed to support multiple trust domains without overly restricting usability.
Recommended Skills: Experience with distributed computing and writing libraries in Julia.
Currently supported use cases for Julia on the web platform are primarily geared towards providing interactive environments to support exploration of the full language. Of course, this leads to significantly larger binaries than would be required for using Julia as part of a production deployment. By disabling dynamic language features (e.g. eval) one could generate small binaries suitable for deployment. Some progress towards this exists in packages like PackageCompiler.jl, though significant work remains to be done.
Recommended Skills: Interest in or experience with Julia internals.