Roadmap 0.4

For the 0.4.* series of releases - November 2020

The napari roadmap captures current development priorities within the project. It aims to serve as a guide for napari core developers, to encourage and inspire contributors, and to provide insights to external developers who are interested in building for the napari ecosystem. For more details on what this document is and is not, see the About this document section below.

The mission of napari is to be a foundational multi-dimensional image viewer for Python and provide graphical user interface (GUI) access to image analysis tools from the Scientific Python ecosystem for scientists across domains and levels of coding experience. To work towards this mission, we have set the following three high-level priorities to guide us over the upcoming months:

  • Make the data viewing and annotation capabilities bug-free, fast, and delightful to use.

  • Support functional and interactive plugins that enable complete image analysis workflows.

  • Add accessible documentation, tutorials, and demos.

You can read more about how this roadmap builds on and continues the work in our 0.3 series of releases in our 0.3 retrospective. We’re continuing to prioritize the robustness and polish of the core viewer, both to ensure that plugin development happens against a solid foundation and because we believe that looking at and annotating your data should be a bug-free and delightful experience, regardless of how large the data is. We also want to build on the success of our reader / writer plugins by adding support for functional and interactive plugins, further empowering the great work already being built on top of napari.

Make the data viewing and annotation capabilities bug-free, fast, and delightful to use

This goal is really the core of the napari vision. In many cases, napari already meets this need, thanks to the fast OpenGL canvas provided by VisPy and the fast loading of out-of-core datasets provided by zarr and dask. There are, however, many use cases where napari still stumbles, such as annotating out-of-core datasets, loading large numbers of points or shapes, or workflows involving multithreaded computation, which can sometimes crash napari in unpredictable ways. We therefore want to continue building napari with this goal in mind, specifically by tackling the following:

  • Clean up and refactor much of our code. As we added more and more features in the 0.3 series of releases, a lot of the code has grown unwieldy and complex. We want to simplify the structure of the code to make developing in napari a fun and friendly experience for new contributors. We launched 0.4.0 with an evented dataclass model, and are now moving towards an even simpler model based on pydantic. Additionally, we will simplify the entire architecture of our models to make the flow of code execution and data transfer easier to follow everywhere (see e.g. #1353).

  • Better support for viewing big datasets. Currently, napari is fast when viewing on-disk datasets that can be naturally sliced along one axis (e.g. a time series) and where loading one slice is fast. However, when loading is slow, the napari UI itself becomes slow, sometimes to the point of being unusable. We now have experimental, opt-in support for asynchronous rendering of sliced images, as well as for multiscale 2D data. In the 0.4 series of releases, we aim to bring this functionality out of experimental status and to extend it to other layer types and to multiscale 3D data; a sketch of the lazy-loading pattern that napari already supports appears after this list.

  • Improving the performance of operations on in-memory data. Even when data is loaded in memory, some operations, such as label and shape painting or slicing layers with large numbers of points, can be slow. We will continue developing our benchmark suite and work to integrate it into our development process. See the performance label for a current list of issues related to performance.

  • Add physical coordinates. We now have a world coordinate system and transforms that can move between data coordinates, world coordinates, and the canvas where things are rendered; however, we still don’t have a concept of physical units. See #1701 for additional discussion.

  • Support for persistent settings #1183 to allow saving of preferences between launches of the viewer. This is particularly important because it allows us to give all users early access to experimental features, helping us to test those features and more quickly bring them to the viewer by default.

  • Add linked multi-canvas support (#760) to allow orthogonal views, or linked views of two datasets, for example to select keypoints for image alignment, or simultaneous 2D slices with 3D rendering.

  • Add layer groups #970, which allow operating on many layers simultaneously, making the viewer easier to use for multispectral or multimodal data, or, in the context of multiple canvases, where one wants to assign different groups to different canvases.

  • Complete serialization of the viewer #851 to enable sharing the entire viewer state. This low-level feature will open the door to a lot of functionality, including the viewing of remote datasets, “deep linking”, i.e. links that allow opening of napari viewing a specific dataset with specific view settings, easy creation of sophisticated animations, and more. See also #1875.

  • Improved error handling and bug reporting, see #1090 for details.

  • Improve the user interface and design of the viewer to make it easier to use. We have conducted a product heuristics analysis to identify design and usability issues, and during the 0.4 series of releases we will work to address them. Also see the design label for more information.
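
The lazy-loading pattern referenced above already works today and is the foundation for the asynchronous rendering work. Below is a minimal sketch, assuming a chunked 3D volume stored at a hypothetical path large_volume.zarr, of viewing out-of-core data with dask; the scale values shown are an illustrative anisotropic voxel spacing, not true physical units, which is what #1701 would add:

```python
# Minimal sketch: view a large on-disk dataset lazily with dask.
# ``large_volume.zarr`` and the scale values are illustrative.
import dask.array as da
import napari

# Nothing is read from disk yet; chunks are only loaded when the
# corresponding slice is displayed in the viewer.
volume = da.from_zarr("large_volume.zarr")

with napari.gui_qt():
    # ``scale`` maps data coordinates into the shared world coordinate
    # system (an anisotropic voxel spacing here); attaching real physical
    # units to these values is the subject of #1701.
    napari.view_image(volume, scale=(2.0, 0.5, 0.5), name="volume")
```

Keeping the UI responsive while slow chunks like these are fetched is exactly what the asynchronous rendering work aims to guarantee.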

Support functional and interactive plugins that enable complete image analysis workflows

We want napari to support not just visualizing data, but also analyzing it. We want to help researchers access the scientific Python ecosystem through napari. To do this, we aim to develop the napari plugin infrastructure to allow plugins to process the data being viewed, and for napari to then visualize and save the results.

0.4.3 brought experimental support for functional plugins and interactive Qt widget plugins. During the 0.4.x series of releases, we aim to work with our community to strengthen that support, identify gaps in functionality, and fill those gaps, either with new plugin types or by modifying the existing plugin architecture to suit developer and user needs. Ideally, before 0.5.0, functional and interactive plugins will move out of experimental status and become permanent features.
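
As a concrete illustration, here is a rough sketch of what a functional plugin looks like under the experimental hook specification introduced in 0.4.3; the threshold function is purely an example, and hook names and type annotations may change as the plugin architecture evolves:

```python
# A rough sketch of a functional plugin using the experimental hook
# specification from 0.4.3; names and signatures may change.
from napari_plugin_engine import napari_hook_implementation
from napari.types import ImageData, LabelsData


def threshold(image: ImageData, level: float = 0.5) -> LabelsData:
    """Return a binary mask of the pixels above ``level``."""
    return (image > level).astype(int)


@napari_hook_implementation
def napari_experimental_provide_function():
    # napari uses the type annotations above to build a widget around the
    # function and to add its result back to the viewer as a new layer.
    return [threshold]
```

Interactive plugins work similarly, returning a Qt widget (or a magicgui-decorated function) from the corresponding experimental dock widget hook.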

For more details, follow the plugins label on GitHub.

Provide accessible documentation, tutorials, and demos

To make the most of napari, users need to find reliable documentation on how to use it. Currently, our documentation has many issues, including autogenerated but poorly-organized API docs, tutorials with outdated screenshots, and inconsistent documentation between the main site, the tutorials, and the API documentation. See the documentation label on GitHub for more details.

Quality documentation is therefore a high priority for the team. We aim to:

  • Improve our website, napari.org, to provide easy access to all napari-related materials, including the four types of documentation: learning-oriented tutorials, goal-oriented how-to guides or galleries, understanding-oriented explanations (including developer resources), and a comprehensive API reference. See #764.

  • Support autogenerated screenshots and movies for our documentation to ensure it always stays up to date as we improve napari. These examples should also be runnable by our users to serve as training materials as they learn to use napari; a sketch of one possible approach appears after this list.

  • Add a napari human interface guide for plugin developers, akin to Apple’s Human Interface Guidelines. We want such a guide to promote best practices and ensure that plugins provide a consistent user experience.
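
For the autogenerated screenshots mentioned above, one possible approach (a sketch only; the helper name and output path are hypothetical, not an existing napari utility) is to render a viewer off-screen during the documentation build and save its canvas:

```python
# Hypothetical docs-build helper: regenerate a screenshot so the image in
# the documentation always matches the current release.
import numpy as np
import napari


def write_example_screenshot(path="docs/_static/example_screenshot.png"):
    viewer = napari.Viewer(show=False)  # assumes a headless-capable setup
    viewer.add_image(np.random.random((64, 64)), name="example")
    viewer.screenshot(path)  # saves the rendered canvas to ``path``
    viewer.close()


if __name__ == "__main__":
    write_example_screenshot()
```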

Work prioritized for future roadmaps

We’re also planning to prioritize the following work in future roadmaps:

  • General support for undo / redo functionality #474, a history feature, and macro generation.

  • Draggable and resizable layers #299 and #989.

  • Linked 1D plots such as histograms, timeseries, or z-profiles #823 and #675.

  • Support for using napari with remote computation.

  • Asynchronous and multiscale support beyond the image layer.

About this document

This document is meant to be a snapshot of tasks and features that the napari team is investing resources in during our 0.4 series of releases starting November 2020. This document should be used to guide napari core developers, encourage and inspire contributors, and provide insights to external developers who are interested in building for the napari ecosystem. It is not meant to limit what is being worked on within napari, and in accordance with our values we remain community-driven, responding to feature requests and proposals on our issue tracker and making decisions that are driven by our users’ requirements, not by the whims of the core team.

This roadmap is also designed to be in accordance with our stated mission to be the multi-dimensional image viewer for Python and to provide graphical user interface (GUI) access to a plugin ecosystem of image analysis tools for scientists to use in their daily work.

For more details on the high level goals and decision making processes within napari you are encouraged to read our mission and values statement and look at our governance model. If you are interested in contributing to napari, we’d love to have your contributions, and you should read our contributing guidelines for information on how to get started.

Another good place to look for information about bigger-picture discussions within napari is our set of issues tagged with the long-term feature label.