
You can upload data to Brainlife. Datasets can be public or private, depending on the nature of the project they are linked to.

...

There are two ways to store data on BL:

  • The Archive tab in the Processes section shows all the current content of your data archive. Data-objects under this tab are stored permanently in BL archival storage.

  • You cannot run Apps directly on archived data. Instead, BL automatically stages data-objects from the BL archive and transfers them to the compute resources where the Apps are executed. Data generated on the Processes page is removed within 25 days unless you archive it.

...

Code Block
$ singularity run docker://brainlife/cli login

The result is:

...

Then, to run any BL command, prefix it with singularity run docker://brainlife/cli.

For more details, see the BL documentation.

In order to upload a data object, Brainlife requires that you supply the project and datatype associated with it:

  1. get the ID of your project

    Code Block
    $ singularity run docker://brainlife/cli project query --admin <admin>

  2. get the datatype of the data you want to upload

    Code Block
    $ singularity run docker://brainlife/cli datatype query --query <keyword>

...

The command may return several results; it is up to you to select the one you want.

Once you have these two pieces of information, you can upload your data. For example, for the neuro/meg/fif datatype:

Code Block
$ singularity run docker://brainlife/cli data upload --fif rest1-raw.fif --calibration sss_cal.dat --crosstalk ct_sparse.fif --destination mean_tm-raw.fif --project 5ff32b04116c5cbba4d1929b --datatype neuro/meg/fif --subject rest1-raw

You can upload data by pointing to a single directory containing all of the files for the associated datatype, or by specifying the path of each individual file.

Finally, to get information about your newly uploaded dataset:

Code Block
$ singularity run docker://brainlife/cli data query --subject rest1-raw

...

Info

Typing singularity run docker://brainlife/cli every time you use the BL CLI is tedious, so you can add this function to your ~/.bashrc:

Code Block
function bl {
    singularity run docker://brainlife/cli "$@"
}
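To check that the wrapper forwards its arguments correctly, here is a quick self-contained demo; `echo` stands in for the singularity invocation so it runs without Brainlife or Singularity installed:

```shell
# Demo of argument forwarding in the bl wrapper. `echo` substitutes for the
# real singularity call, so this only prints the command it would run.
function bl {
    echo singularity run docker://brainlife/cli "$@"
}

bl data query --subject rest1-raw
# prints: singularity run docker://brainlife/cli data query --subject rest1-raw
```

Remove the `echo` (as in the function above) to actually execute BL commands.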

Besides, it may be possible to create a bash script that uploads your data automatically.
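Such a script could be sketched as follows. This is a hypothetical example, not a tested pipeline: the project ID is the one used earlier on this page, but the file names and loop are illustrative assumptions. `RUN=echo` makes it a dry run that only prints the commands it would issue.

```shell
#!/usr/bin/env bash
# Hypothetical batch-upload sketch for neuro/meg/fif data-objects.
# PROJECT_ID and the .fif file names are example assumptions.
set -euo pipefail

PROJECT_ID=5ff32b04116c5cbba4d1929b
RUN=echo   # set RUN="" to actually execute the uploads

for fif in rest1-raw.fif rest2-raw.fif; do
    # Derive the subject label from the file name, e.g. rest1-raw
    subject="${fif%.fif}"
    $RUN singularity run docker://brainlife/cli data upload \
        --fif "$fif" \
        --project "$PROJECT_ID" \
        --datatype neuro/meg/fif \
        --subject "$subject"
done
```

Run it once with the dry run enabled to review the generated commands before uploading anything.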

In our case, the data will stay in the lab, and we will use the lab's cluster to run BL jobs. This needs to be discussed with the DSI (see the notes of /wiki/spaces/CENIR/pages/1452343301).