Gregory M. Kurtzer, Vanessa Sochat, Michael W. Bauer.
Abstract
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
Year: 2017 PMID: 28494014 PMCID: PMC5426675 DOI: 10.1371/journal.pone.0177459
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Container comparison.
| | Singularity | Shifter | Charliecloud | Docker |
|---|---|---|---|---|
| Privilege model | SUID/UserNS | SUID | UserNS | Root Daemon |
| Supports current production Linux distros | Yes | Yes | No | No |
| Internal image build/bootstrap | Yes | No | No | No |
| No privileged or trusted daemons | Yes | Yes | Yes | No |
| No additional network configurations | Yes | Yes | Yes | No |
| No additional hardware | Yes | Maybe | Yes | Maybe |
| Access to host filesystem | Yes | Yes | Yes | Yes |
| Native support for GPU | Yes | No | No | No |
| Native support for InfiniBand | Yes | Yes | Yes | Yes |
| Native support for MPI | Yes | Yes | Yes | Yes |
| Works with all schedulers | Yes | No | Yes | No |
| Designed for general scientific use cases | Yes | Yes | No | No |
| Contained environment has correct perms | Yes | Yes | No | Yes |
| Containers are portable, unmodified by use | Yes | No | No | No |
| Trivial HPC install (one package, zero conf) | Yes | No | Yes | Yes |
| Admins can control and limit capabilities | Yes | Yes | No | No |
*relies on Docker
**with security implications
***depends on upstream
Container formats supported.
| Format | Description |
|---|---|
| directory | Standard Unix directories containing a root container image |
| .tar.gz | Zlib compressed tar archives |
| .tar.bz2 | Bzip2 compressed tar archives |
| .tar | Uncompressed tar archives |
| .cpio.gz | Zlib compressed CPIO archives |
| .cpio | Uncompressed CPIO archives |
In addition to the default Singularity container image, a standard file, Singularity supports numerous other formats described in the table. For each format (except directory) the suffix is necessary for Singularity to identify the image type.
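For instance, importing one of these formats might look as follows (a sketch assuming Singularity 2.x command names; the file names are hypothetical). The .tar.gz suffix is what tells Singularity to treat the input as a zlib-compressed tar archive:

```shell
# Create an empty Singularity image, then populate it from a
# compressed tar archive; the .tar.gz suffix identifies the format.
singularity create rootfs.img
singularity import rootfs.img rootfs.tar.gz
```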
Fig 1. Singularity usage workflow.
The standard Singularity Usage Workflow involves a working endpoint (left) where the user has root, and a container can be created, modified and updated, and then transferred to a shared computational resource (right) to be executed at scale.
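This workflow might be sketched as follows (assuming Singularity 2.x commands; the definition file, image name, and cluster hostname are hypothetical):

```shell
# On a local endpoint where the user has root:
sudo singularity create analysis.img             # allocate a new container image
sudo singularity bootstrap analysis.img app.def  # install OS and software from a definition file

# Transfer the finished container to the shared computational resource:
scp analysis.img user@hpc.example.edu:

# On the HPC system, execute at scale without elevated privileges:
singularity exec analysis.img ./run_pipeline.sh
```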
Singularity commands.
| Global Options | Description |
|---|---|
| -d, --debug | Print debugging information |
| -h, --help | Display usage summary |
| -q, --quiet | Only print errors |
| --version | Show application version |
| -v, --verbose | Increase verbosity +1 |
| -x, --sh-debug | Print shell wrapper debugging information |

| Command | Description |
|---|---|
| help | Show additional help for a command |
| exec | Execute a command within container |
| run | Launch a runscript within container |
| shell | Run a Bourne shell within container |
| test | Execute any test code defined within container |
| bootstrap | Bootstrap a new Singularity image |
| copy | Copy files from your host into the container |
| create | Create a new container image |
| export | Export the contents of a container via a tar pipe |
| import | Import/add container contents via a tar pipe |
| mount | Mount a Singularity container image |
Singularity command descriptions are available via singularity --help. For all of the commands in the table, the general usage is: singularity [global options…] <command> [command options…]
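As a hedged illustration of that usage pattern (the image name box.img and the commands run inside it are hypothetical):

```shell
singularity --version                                 # global option only, no command
singularity --quiet exec box.img cat /etc/os-release  # suppress all output except errors
singularity help exec                                 # additional help for one command
singularity shell box.img                             # interactive shell inside the container
```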
Singularity reported usage.
| Site or Organization | System Name | Size (cores) | Purpose of the System |
|---|---|---|---|
| CSIRO | bragg-gpu | 2048 | broad base scientific |
| Genentech, Inc. | research | | |
| Georgia State University | Orion | 362 | research |
| GSI Helmholtz Center | Greencube | 300,000 | heavy ion physics |
| Holland Computing Center | Crane and Tusker | 14,000 | campus cluster |
| HPC-UGent | golett | 2500 | general scientific research |
| LBNL** | Lawrencium | 30,000 | general scientific research |
| Lunarc | Aurora | 360 | research |
| McGill HPC Centre | guillimin | 22300 | Compute Canada cluster |
| Microway | Microway Research Cluster | 192 | scientific benchmarking |
| MIT | openmind | 1,176 | neuroscience |
| NIH*** | Biowulf | 54,000 | general biomedical research |
| Purdue University | Rice | 11520 | campus HPC resource |
| Purdue University | Conte | 78880 | campus HPC resource |
| Purdue University | Snyder | 2220 | campus HPC resource |
| Purdue University | Hammer | 3960 | campus HPC resource |
| Purdue University | Carter | 10560 | campus HPC resource |
| R Systems NA, Inc. | Oak1 | 1024 | shared resource |
| R Systems NA, Inc. | Oak2 | 2048 | shared resource |
| R Systems NA, Inc. | HOU1 | 5376 | shared resource |
| Rutgers University | sirius | 32 | scientific SMP machine |
| SDSC* | Gordon | 16384 | cluster for XSEDE |
| SDSC* | Comet | 47776 | cluster for XSEDE |
| Stanford University | sherlock | 12764 | compute for Stanford |
| Stanford University | scg4 | 3920 | genomics at Stanford |
| TACC**** | Stampede | 102400 | NSF key resource, all fields |
| UFIT | HiPerGator | 51,000 | research computing cluster |
| Ulm University, Germany | JUSTUS | 550 | computational chemistry |
| UNF | Stark | 64 | fMRI analysis of the brain |
| University of Arizona | Ocelote | 10000 | general research |
| University of Arizona | ElGato | 2300 | GPU cluster |
| UC Berkeley | Savio | 7820 | HPC for research |
| University of Chicago | midway.rcc.uchicago.edu | 24196 | university cluster |
| University of Leeds | MARC1 | 1236 | bioinformatics, analytics |
| University of Manitoba | Grex | 3840 | general HPC cluster |
| WU in St. Louis | | 2000 | general cluster |
HPC Clusters Using Singularity: At the time of this writing, this table shows the site or organization name, the system name, the number of cores, and the purpose for the 36 clusters that have reported Singularity installed.
*San Diego Supercomputer Center
**Lawrence Berkeley National Laboratory
***National Institutes of Health
****Texas Advanced Computing Center