This is a collaborative space. In order to contribute, send an email to maximilien.chaumon@icm-institute.org
On any page, type the letter L on your keyboard to add a "Label" to the page, which will make search easier.
How to upload data on Brainlife
You can upload data to Brainlife. Datasets can be public or private, depending on the nature of the project they are linked to.
Store data on BL
There are two ways to store data on BL:
The Archive tab, found in the Processes section, shows all of the current content of your data archive. Data-objects under this tab are stored permanently in BL archival storage.
You cannot run Apps directly on archived data. Instead, BL automatically stages data-objects from the BL archive and transfers them to the compute resources where the Apps are executed. Data generated on the Processes page are removed after 25 days unless you archive them.
Upload data on BL
To upload data, go to the Processes page, then open the Archive tab. In the right corner you will see a green "Upload data" button. Click it, select the datatype of your data, and fill in the blanks (see the BL documentation).
However, the meg/fif datatype is not listed when you try to select a datatype for your dataset: you cannot upload such data through the BL GUI; you have to use the BL CLI.
There is no need to install all of the dependencies: you can use the BL CLI hosted as a Docker container via Singularity by running the following command:
$ singularity run docker://brainlife/cli login
Running this command prompts you to log in with your Brainlife username and password.
For more details, see the BL documentation.
In order to upload a data-object, Brainlife requires that you supply the project and datatype associated with it:
get the ID of your project
$ singularity run docker://brainlife/cli project query --admin <admin>
get the datatype of the data you want to upload
$ singularity run docker://brainlife/cli datatype query --query <keyword>
The command may return several results; it is up to you to select the one you want.
Once you have these two pieces of information, you can upload your data. For example, for the neuro/meg/fif datatype:
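As a sketch, an upload command could look like the following; the project ID, subject label, and file path are placeholders, and the `--fif` flag is assumed to match the file ID of the neuro/meg/fif datatype (check `datatype query` output for the exact file IDs):

```shell
# Upload a single FIF file as a neuro/meg/fif data-object.
# <project_id> comes from the "project query" command above.
$ singularity run docker://brainlife/cli data upload \
    --project <project_id> \
    --datatype neuro/meg/fif \
    --subject sub-01 \
    --fif /path/to/sub-01_meg.fif
```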
You can upload data by specifying a single directory containing all of the files for its associated datatype. However, you can also specify the path for each individual file ID.
Finally, if you want to get information about your newly uploaded data-object:
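For instance, a hedged sketch using the CLI's `data query` subcommand (the project ID and subject label are placeholders to adapt to your dataset):

```shell
# List the data-objects of the project, filtered by subject
$ singularity run docker://brainlife/cli data query \
    --project <project_id> \
    --subject sub-01
```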
It can be tedious to type singularity run docker://brainlife/cli every time you use the BL CLI, so you can add an alias to your ~/.bashrc:
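For example (the alias name `bl` is only a suggestion; pick any name that does not clash with an existing command):

```shell
# ~/.bashrc
alias bl='singularity run docker://brainlife/cli'
```

After reloading your shell configuration (source ~/.bashrc), the login command shortens to `bl login`.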
It should also be possible to write a bash script that uploads your data automatically.
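A hedged sketch of such a script, looping over subjects: the directory layout, project ID, and file names below are hypothetical, and the upload flags assume the CLI's `data upload` subcommand described earlier.

```shell
#!/bin/bash
# Upload one FIF file per subject as a neuro/meg/fif data-object.
# PROJECT_ID and the data layout are hypothetical examples.
PROJECT_ID="<project_id>"
DATA_DIR="/path/to/meg_data"

for fif in "$DATA_DIR"/sub-*/meg.fif; do
    # Derive the subject label (e.g. "sub-01") from the parent directory name
    subject=$(basename "$(dirname "$fif")")
    singularity run docker://brainlife/cli data upload \
        --project "$PROJECT_ID" \
        --datatype neuro/meg/fif \
        --subject "$subject" \
        --fif "$fif"
done
```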
In our case, the data will stay in the lab and we will use the lab's cluster to run BL jobs. This needs to be discussed with the DSI (see the notes at https://icm-institute.atlassian.net/wiki/spaces/CENIR/pages/1452343301).