This is the documentation for the FeTS Platform, developed by CBICA at UPenn in collaboration with Intel Labs, Intel AI, and Intel IoT.
Note the ${fets_root_dir} from Setup.
cd ${download_location}
${fets_root_dir}/bin/FeTS # launches application
Please add the following path to your LD_LIBRARY_PATH when using FeTS: ${fets_root_dir}/lib
export LD_LIBRARY_PATH=${fets_root_dir}/lib:$LD_LIBRARY_PATH
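To avoid setting this in every new shell, the export can optionally be appended to your shell startup file. A minimal sketch, assuming bash and that ${fets_root_dir} is set in the current shell:
echo "export LD_LIBRARY_PATH=${fets_root_dir}/lib:\$LD_LIBRARY_PATH" >> ~/.bashrc  # writes the expanded install path into ~/.bashrc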
Note the ${fets_root_dir} from Setup.
For the first application of FeTS in volumetric brain tumor MRI scans, you should follow the pre-processing pipeline defined in the International Brain Tumor Segmentation (BraTS) challenge. Arrange the raw DICOM data of the four structural modalities (T1, T1GD, T2, T2FLAIR) in the following structure:
Input_Data
│
└───Patient_001
│ │
│ └───T1
│ │ │ image_001.dcm
│ │ │ image_002.dcm
│ │ │ ...
│ └───T1GD
│ │ │ image_001.dcm
│ │ │ image_002.dcm
│ │ │ ...
│ └───T2
│ │ │ image_001.dcm
│ │ │ image_002.dcm
│ │ │ ...
│ └───T2FLAIR
│ │ │ image_001.dcm
│ │ │ image_002.dcm
│ │ │ ...
│
└───Pat_JohnDoe
│ │ ...
│
│ ...
│
└───SmithJoe
│ │ ...
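Before constructing the CSV described below, it can help to verify that every patient folder contains all four modality sub-folders and that none of them is empty. A minimal sketch, assuming the layout above with Input_Data as the cohort root (the directory name is taken from the example and is not fixed by FeTS):
for d in Input_Data/*/; do
  for m in T1 T1GD T2 T2FLAIR; do
    # warn about any patient that is missing a modality folder or has no files in it
    if [ ! -d "${d}${m}" ] || [ -z "$(ls -A "${d}${m}" 2>/dev/null)" ]; then
      echo "WARNING: ${d} is missing data for ${m}"
    fi
  done
done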
NOTE: The patient IDs and file paths should not contain any special characters, such as ! @ # $ % ^ & * ' ; : , ~ (a comma would also break the CSV described below).
Construct a CSV file (for example, raw_data.csv) that lists the first DICOM image of each modality for every patient, with the following structure:
PatientID,T1,T1GD,T2,T2FLAIR
Patient_001,/path/to/Patient_001/T1/image_001.dcm,/path/to/Patient_001/T1GD/image_001.dcm,/path/to/Patient_001/T2/image_001.dcm,/path/to/Patient_001/T2FLAIR/image_001.dcm
Pat_JohnDoe,/path/to/Pat_JohnDoe/T1/image_001.dcm,/path/to/Pat_JohnDoe/T1GD/image_001.dcm,/path/to/Pat_JohnDoe/T2/image_001.dcm,/path/to/Pat_JohnDoe/T2FLAIR/image_001.dcm
...
SmithJoe,/path/to/SmithJoe/T1/image_001.dcm,/path/to/SmithJoe/T1GD/image_001.dcm,/path/to/SmithJoe/T2/image_001.dcm,/path/to/SmithJoe/T2FLAIR/image_001.dcm
echo "PatientID,T1,T1GD,T2,T2FLAIR" > raw_data.csv
for d in $Input_data/*; do sub=`basename $d`; t1=`ls -1 $d/T1/* | head -n1`; tlce=`ls -1 $d/T1GD/* | head -n1`; t2=`ls -1 $d/T2/* | head -n1`; flair=`ls -1 $d/T2FLAIR/* | head -n1`; echo $sub,$t1,$tlce,$t2,$flair >> raw_data.csv; done
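A single missing or mistyped path will cause pre-processing to fail for that subject, so a quick sanity check of the generated CSV can save time. A hedged sketch, assuming the raw_data.csv produced above and the column order documented here:
# verify that every file referenced in raw_data.csv exists
tail -n +2 raw_data.csv | while IFS=, read -r id t1 t1gd t2 flair; do
  for f in "$t1" "$t1gd" "$t2" "$flair"; do
    [ -f "$f" ] || echo "Missing file for ${id}: ${f}"
  done
done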
Once the CSV is ready, pass it to the PrepareDataset executable (which internally calls the BraTSPipeline executable):
${fets_root_dir}/bin/PrepareDataset -i /path/to/raw_data.csv -o /path/to/output
/path/to/output
│ │
│ └───DataForFeTS # this is to be passed for inference/training
│ │ │
│ │ └───Patient_001 # this is constructed from the ${PatientID} header of CSV
│ │ │ │ Patient_001_brain_t1.nii.gz
│ │ │ │ Patient_001_brain_t1ce.nii.gz
│ │ │ │ Patient_001_brain_t2.nii.gz
│ │ │ │ Patient_001_brain_flair.nii.gz
│ │ │
│ │ └───Pat_JohnDoe # this is constructed from the ${PatientID} header of CSV
│ │ │ │ ...
│ │
│ │
│ └───DataForQC # this is to be used for quality-control
│ │ │
│ │ └───Patient_001 # this is constructed from the ${PatientID} header of CSV
│ │ │ │ raw_${modality}.nii.gz
│ │ │ │ raw_rai_${modality}.nii.gz
│ │ │ │ raw_rai_n4_${modality}.nii.gz
│ │ │ │ ${modality}_to_SRI.nii.gz
│ │ │ │ brainMask_SRI.nii.gz # generated using BrainMaGe [https://github.com/CBICA/BrainMaGe/] or DeepMedic [https://cbica.github.io/CaPTk/seg_DL.html]
│ │ │ │ log.txt
│ │ │
│ │ └───Pat_JohnDoe # this is constructed from the ${PatientID} header of CSV
│ │ │ │ ...
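Once PrepareDataset finishes, it can be worth confirming that every subject under DataForFeTS received all four co-registered, skull-stripped images before moving on to training or inference. A minimal sketch, assuming the output layout shown above:
for d in /path/to/output/DataForFeTS/*/; do
  sub=$(basename "$d")
  for m in t1 t1ce t2 flair; do
    # file names follow the ${PatientID}_brain_${modality}.nii.gz convention shown above
    [ -f "${d}${sub}_brain_${m}.nii.gz" ] || echo "WARNING: ${sub} is missing ${sub}_brain_${m}.nii.gz"
  done
done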
NOTE: On some OS variants, the PrepareDataset executable has been observed to cause issues. As an alternative, ${fets_root_dir}/bin/PrepareDataset.py provides the exact same API and can be invoked in the following way:
cd ${fets_root_dir}/bin
# use the virtual environment that was set up in the previous section
./OpenFederatedLearning/venv/bin/python \
  ./PrepareDataset.py -i /path/to/raw_data.csv -o /path/to/output
NOTE: Skip this step if you have used PrepareDataset as described in https://fets-ai.github.io/Front-End/process_data#pre-processing
If you have processed data from a prior study that you would like to include in the FeTS federation, please ensure that all the data is co-registered within each patient and the annotations are in the same space. Once that is assured, follow these steps:
1. Run PrepareDataset as shown above.
2. In the DataForQC output, under each patient, the transformation matrices will be generated per modality. Use the T1CE_to_SRI.mat file (the assumption here is that the data is co-registered within each patient) to transform the annotation (which is in the patient space) in the following manner:
${fets_root_dir}/bin/Preprocessing \
-i /path/to/patient_X/annotation.nii.gz \
-rFI ${fets_root_dir}/data/sri24/atlasImage.nii.gz \
-o /path/to/output/DataForFeTS/patient_X/annotation_final_seg.nii.gz \
-rIA /path/to/output/DataForQC/patient_X/T1CE_to_SRI.mat \
-reg Rigid -rIS 1 -rSg 1
3. Use the following files for training/inference:
/path/to/output/DataForFeTS/patient_X/brain_*
/path/to/output/DataForFeTS/patient_X/annotation_final_seg.nii.gz
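If a whole cohort of prior annotations needs to be brought into the atlas space, the same Preprocessing call can be wrapped in a loop. A sketch under two assumptions that are not part of FeTS itself: each patient's original annotation lives at /path/to/existing_annotations/${sub}/annotation.nii.gz (an illustrative location), and PrepareDataset has already been run so that DataForQC is populated:
for d in /path/to/output/DataForQC/*/; do
  sub=$(basename "$d")
  ${fets_root_dir}/bin/Preprocessing \
    -i /path/to/existing_annotations/${sub}/annotation.nii.gz \
    -rFI ${fets_root_dir}/data/sri24/atlasImage.nii.gz \
    -o /path/to/output/DataForFeTS/${sub}/annotation_final_seg.nii.gz \
    -rIA "${d}"/T1CE_to_SRI.mat \
    -reg Rigid -rIS 1 -rSg 1
done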