
As Project Jupyter Celebrates 20 Years, Fernando Pérez Reflects on How It Started, Open Science’s Impact and the Value of Diversity in Coding

Rachel Leven, Berkeley Computing, Data Science, and Society:

Twenty years ago, UC Berkeley Associate Professor of Statistics Fernando Pérez started one of the foundational tools for analyzing large amounts of data in a transparent and collaborative way. That project, IPython, evolved into Project Jupyter.

Project Jupyter provides a collection of tools, such as the Jupyter Notebook, that support interactive computing: iteratively executing small fragments of programming code to explore, analyze and visualize data and computational ideas. It also allows scientists to view and build on the work of other researchers worldwide.
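To illustrate what that iterative style looks like in practice, here is a minimal sketch in Python. It is not taken from Project Jupyter itself; the dataset and column names are invented for illustration, and the only assumptions are that numpy and pandas are installed, as they commonly are alongside Jupyter. Each commented "cell" below would typically be run and refined one at a time in a notebook.

```python
# A minimal sketch of notebook-style interactive computing: each block below
# would live in its own Jupyter cell, executed and adjusted step by step.
import numpy as np
import pandas as pd

# Cell 1: load (here, synthesize) a small dataset to explore.
# The columns are hypothetical, stand-ins for real measurements.
rng = np.random.default_rng(seed=0)
df = pd.DataFrame({
    "temperature": rng.normal(loc=15.0, scale=5.0, size=100),
    "humidity": rng.uniform(low=20.0, high=90.0, size=100),
})

# Cell 2: inspect summary statistics before deciding on the next step.
print(df.describe())

# Cell 3: check a quick relationship, then decide what to visualize or model.
print(df["temperature"].corr(df["humidity"]))
```

Because the notebook records the code, its output and any narrative text together, another researcher can rerun the same cells and follow the same chain of reasoning.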

Nearly 10 million Jupyter notebooks have been made public by users on GitHub, and Nature has named the tool one of ten computer codes that transformed science.

Jupyter and similar tools have underpinned groundbreaking research like the first image of a black hole. Jupyter has also changed scientific publishing, making it possible for scientists to easily share the data and code behind their conclusions and giving others a way to replicate that work.

We spoke with Pérez, who is also a co-founding investigator at the Berkeley Institute for Data Science and a faculty scientist at Lawrence Berkeley National Laboratory, about why he started this project, what challenges he’s faced and what to expect from him and Project Jupyter next.
