...
Get the ID of your project:

$ singularity run docker://brainlife/cli project query --admin <admin>
Get the ID of the datatype of the data you want to upload:

$ singularity run docker://brainlife/cli datatype query --query <datatype>
...
...
The command may return several results; it is up to you to select the one you want.
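Once you have both values, it can help to keep them in shell variables so the upload command stays readable. A minimal sketch; the values below are the example project ID and datatype that appear in the upload example on this page:

```shell
# Store the IDs returned by the two queries; these are the example
# values used in the upload command on this page.
PROJECT_ID="5ff32b04116c5cbba4d1929b"
DATATYPE="neuro/meg/fif"
echo "project=$PROJECT_ID datatype=$DATATYPE"
# → project=5ff32b04116c5cbba4d1929b datatype=neuro/meg/fif
```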
Once you have these two IDs, you can upload your data. All that is required is a directory to upload, a project ID, and a datatype ID (though other options exist). For example, for the neuro/meg/fif datatype:
$ singularity run docker://brainlife/cli data upload --fif rest1-raw.fif --calibration sss_cal.dat --crosstalk ct_sparse.fif --project 5ff32b04116c5cbba4d1929b --datatype neuro/meg/fif --subject rest1-raw
You can upload data by specifying a single directory containing all of the files for the associated datatype, or you can specify the path of each individual file, as in the example above.
Finally, if you want information about your newly uploaded dataset:

$ singularity run docker://brainlife/cli data query --subject rest1-raw
...
Info: it can be tedious to always type the full singularity command; you can define a shell function as a shortcut:

$ function bl { singularity run docker://brainlife/cli "$@"; }
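One detail worth checking in such a wrapper: the command inside the braces must end with `;`, and it should forward its arguments with `"$@"`, otherwise `bl project query …` would launch the bare container and drop the subcommand. A runnable sketch, with `echo` standing in for `singularity` purely so it can be tried anywhere:

```shell
# Stand-in for the real wrapper: `echo` replaces `singularity` so this
# sketch runs without the container; remove the `echo` for real use.
# Note the "$@" that forwards subcommands and flags, and the `;` before `}`.
function bl { echo singularity run docker://brainlife/cli "$@"; }

bl data query --subject rest1-raw
# → singularity run docker://brainlife/cli data query --subject rest1-raw
```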
It is also possible to write a bash script that uploads your data automatically.
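A minimal sketch of such a script, under assumed conventions: one sub-directory per subject under `data/`, each holding `<subject>.fif` (the directory layout, subject names, and `DRY_RUN` switch are all hypothetical). With `DRY_RUN=1` it only prints the commands so you can review them before running anything:

```shell
#!/bin/bash
# Hypothetical batch-upload loop. Assumes data/<subject>/<subject>.fif;
# adapt the paths and flags to your own layout before use.
set -eu

PROJECT_ID="5ff32b04116c5cbba4d1929b"   # from `project query`
DRY_RUN=${DRY_RUN:-1}                   # 1 = print commands only

upload_subject() {
    local subject="$1"
    local cmd=(singularity run docker://brainlife/cli data upload
               --fif "data/$subject/$subject.fif"
               --project "$PROJECT_ID"
               --datatype neuro/meg/fif
               --subject "$subject")
    if [ "$DRY_RUN" = 1 ]; then
        echo "${cmd[@]}"                # show what would be run
    else
        "${cmd[@]}"                     # actually upload
    fi
}

# Hypothetical subject list; replace with your own subjects.
for s in sub01 sub02; do
    upload_subject "$s"
done
```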
In our case, the data will stay in the lab and we will use the lab's cluster to run BL jobs. This needs to be discussed with the DSI (see notes of /wiki/spaces/CENIR/pages/1452343301).