To read the full scrolls, it would be very helpful to have a good 3D segmentation of them. That's why we're announcing a new set of open source prizes focused specifically on segmentation: the Segmentation Tools Prizes. We're particularly looking for tools that make segmentation easier and faster.
Segmentation Tools Prizes
$35,000 total across 6 prizes: 1 × $10,000 and 5 × $5,000.
Any tools, documentation, notebooks, or analysis that help with segmentation and flattening of the scrolls. Submissions must be open source.
What we would like to see:
In general, we would love to see people build on existing open source tools, especially those that have been built in the community. But all submissions are welcome!
Improvements that make the tools faster to use: for example, the ability to start segmenting without first downloading all of the scroll data, by streaming it from our download server. (We are happy to host some server-side software if this helps!)
Making it faster to move between slices in tools like Volume Cartographer, by only loading the parts of the images that are relevant for your current segmentation task.
Fast, keyboard-shortcut-driven interfaces.
AI / computer vision assistance to make segmentation faster and more accurate.
Quick visualizations to ensure that your segmentation makes sense from different angles.
For more details, see here.
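The "load only what you need" ideas above can be sketched with a memory map: the operating system pages in only the bytes a tool actually touches, so jumping between slices doesn't require reading the whole volume. This is a minimal sketch using a small synthetic raw uint16 file (the real scroll data is served as stacks of TIFF slices, so an actual tool would pair this idea with HTTP range requests or a chunked format):

```python
import os
import tempfile

import numpy as np

# Synthetic stand-in for a scroll volume: (slices, height, width).
# Real scans are multi-terabyte; this demo file is ~400 KB.
path = os.path.join(tempfile.gettempdir(), "demo_volume.raw")
shape = (50, 64, 64)
rng = np.random.default_rng(0)
rng.integers(0, 2**16, size=shape, dtype=np.uint16).tofile(path)

# Memory-map the file: nothing is read into RAM yet.
volume = np.memmap(path, dtype=np.uint16, mode="r", shape=shape)

# Only the pages backing this one 64x64 slice are read from disk,
# not the whole file -- the same trick scales to huge volumes.
slice_20 = np.array(volume[20])
print(slice_20.shape)
```

The same access pattern works for 2D tiles within a slice, which is what makes "only load the parts of the images relevant to your current segmentation task" practical.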
To get a sense of what segmentation looks like right now, watch this stream of JP doing some segmentation using Volume Cartographer:
We strongly encourage everyone to try it for themselves. You will quickly feel the pain! Also be sure to try VolumeAnnotate, a Python reimplementation of part of Volume Cartographer, and an open source prize winner (though it can’t yet save files in the same data format as Volume Cartographer).
Community news
Francesca and Oliver shared a tool for efficiently exploring the scroll data, with a live-unwrapping algorithm.
Brent Marin has been working on Python bindings for Volume Cartographer.
More interesting notebooks on Kaggle.
Casey Handmer did an awesome analysis of the scroll data, where he tried to calculate the effective resolution of the scans.
Henrik tried to further flatten the segment data to improve training and inference.
Matthew Russell and Nick Moore created libraries for loading the data, complementary to the work by MatthieuFP and Brett Olsen.
Moshe Levy has been playing with measures of how porous the papyrus is, which appears to make some ink visible.
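Moshe's exact method isn't detailed here, but one simple porosity-style measure is the fraction of low-density (dark) voxels in a patch of the CT volume. A hedged sketch on synthetic data, where the threshold and intensity values are illustrative only:

```python
import numpy as np

def porosity(patch, threshold):
    """Fraction of voxels darker than `threshold`.

    Pores (air) show up as low-density, dark voxels in CT data,
    so a higher fraction means a spongier patch of papyrus.
    """
    return float(np.mean(patch < threshold))

# Tiny synthetic example: a dense patch vs. one with carved-out "pores".
rng = np.random.default_rng(1)
dense = rng.normal(30000, 1000, size=(8, 8, 8))
porous = dense.copy()
porous[::2, ::2, ::2] = 5000  # regularly spaced low-density voxels

print(porosity(dense, 20000), porosity(porous, 20000))
```

Mapping this fraction over a segmented surface gives a scalar field that can be rendered like an intensity image, which is where differences between inked and uninked papyrus might show up.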
Moshe and hu-po tried statistical approaches, which appear to show differences between ink and no-ink regions.
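Their specific statistics aren't described here, but the general shape of such an approach is to compare intensity samples from ink-labeled and unlabeled regions and ask whether the difference in means is larger than sampling noise. A minimal sketch with a Welch-style z statistic on synthetic numbers (the intensity values are made up for illustration):

```python
import numpy as np

def welch_z(a, b):
    """Welch-style z statistic for the difference in means of two
    independent samples (e.g. ink-labeled vs. unlabeled voxels)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return float((a.mean() - b.mean()) / se)

# Hypothetical: ink-bearing papyrus is slightly denser on average.
rng = np.random.default_rng(2)
ink = rng.normal(26000, 1500, size=1000)
no_ink = rng.normal(25000, 1500, size=1000)

print(welch_z(ink, no_ink))
```

A large |z| suggests the two regions really do differ in intensity, which is the kind of signal these statistical approaches appear to be picking up.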
People didn’t realize just how tiny the fragments are until they saw this photo of Françoise Bérard (Director of the Library at the Institut de France) holding them before scanning at Diamond Light Source.
spelufo has been wondering whether the papyrus was woven, and suggests that it might be in this fragment.
There was a question in Discord about how much the scrolls might have shrunk, and this photo from the FAQ might give a rough indication.
There has also been some discussion in Discord about how to make your own campfire scroll, using papyrus and ink.
Fun idea: Noah Thies created a chatbot trained on the Vesuvius Challenge website and related papers.
WayneWayneHello shared a folded T-shirt CT dataset, if you need more segmentation practice. 😉