

You can upload data to Brainlife. Datasets can be public or private, depending on the nature of the project they are linked to.

Store data on BL

There are two ways to store data on BL:

  • The Archive tab, found in the Processes section, shows all the current content of your data archive. Data-objects under this tab are stored permanently in BL archival storage.

  • You cannot run Apps directly on archived data. Instead, BL automatically stages data-objects from the BL archive and transfers them to compute resources where the Apps can be executed. Data generated on the Processes page will be removed within 25 days unless you archive them.

Upload data on BL

To upload data, go to the Processes page, then to the Archive tab. In the right corner, you will see a green "Upload data" button. Click it, select the datatype of your data, and fill in the blanks (see the BL documentation).

However, the neuro/meg/fif datatype is not listed when you select a datatype for your dataset: you cannot upload this data through the BL GUI; you have to use the BL CLI.

There is no need to install all of the dependencies: you can use the BL CLI shipped as a Docker container, run through Singularity, with the following command:

$ singularity run docker://brainlife/cli login


For more details, see the BL documentation.

To upload a data object, Brainlife requires that you supply the project and the datatype associated with it:

  1. Get the ID of your project:

    $ singularity run docker://brainlife/cli project query --admin <admin>

  2. Get the datatype of the data you want to upload:

    $ singularity run docker://brainlife/cli datatype query --query <keyword>

The command returns several results; it is up to you to select the one you want.

Once you have these two pieces of information, you can upload your data. For example, for the neuro/meg/fif datatype:

$ singularity run docker://brainlife/cli data upload --fif rest1-raw.fif --calibration sss_cal.dat --crosstalk ct_sparse.fif --destination mean_tm-raw.fif --project 5ff32b04116c5cbba4d1929b --datatype neuro/meg/fif --subject rest1-raw

You can upload data by specifying a single directory containing all of the files for its associated datatype. However, you can also specify the path for each individual file ID.

Finally, to get information about your newly uploaded dataset:

$ singularity run docker://brainlife/cli data query --subject rest1-raw

It can be tedious to type singularity run docker://brainlife/cli every time you use the BL CLI, so you can add this function to your ~/.bashrc:

function bl {
    singularity run docker://brainlife/cli "$@"
}
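Note that the wrapper must forward its arguments with "$@", otherwise a call like bl login would invoke the CLI with no subcommand. A minimal sketch of the pattern, using echo as a stand-in for the real singularity invocation so it can be tried without Singularity installed (the function name bl_demo is illustrative):

```shell
# Wrapper pattern: forward all arguments to the wrapped command.
# `echo` stands in for actually executing `singularity run ...` here,
# so the argument forwarding can be checked offline.
bl_demo() {
    echo singularity run docker://brainlife/cli "$@"
}

# Each call prints the full command line that would be executed.
bl_demo login
bl_demo datatype query --query meg
```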

Besides, you could write a bash script that uploads your data automatically.
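For instance, such a script could loop over several raw recordings and build one upload command per file. The sketch below only prints each command (an echo-based dry run) rather than executing it; the function name upload_fifs and the subject-naming convention are illustrative assumptions, and the project ID reuses the example above:

```shell
# Sketch: build one `data upload` command per .fif recording.
# Dry run only: commands are printed with echo, not executed.
# Remove the leading `echo` to actually run the uploads.
upload_fifs() {
    local project_id=$1
    shift
    local fif subject
    for fif in "$@"; do
        # Assumed convention: subject name = file name without .fif
        subject=$(basename "$fif" .fif)
        echo singularity run docker://brainlife/cli data upload \
             --fif "$fif" \
             --project "$project_id" \
             --datatype neuro/meg/fif \
             --subject "$subject"
    done
}

# Print the upload commands for two recordings.
upload_fifs 5ff32b04116c5cbba4d1929b rest1-raw.fif rest2-raw.fif
```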

In our case, the data will stay in the lab, and we will use the lab's cluster to run BL jobs. This needs to be discussed with the DSI (see the notes of Thursday 25 March).
