## FAQ
#### How can I do a clean slate?
To clean the environment, run the following commands from inside the FLamby folder (`cd FLamby/`):

```bash
conda deactivate
make clean
```
#### I get an error when installing FLamby

```
error: [Errno 2] No such file or directory: 'pip'
```
Try running:

```bash
conda deactivate
make clean
pip3 install --upgrade pip
```

and then run your make installation option again.
#### I am installing FLamby on a machine equipped with macOS and an Intel processor
In that case, you should use

```bash
make install-mac
```

instead of the standard installation. If you have already installed the flamby environment, just run

```bash
conda deactivate
make clean
```

before running the install-mac installation again. This avoids the following error, which would otherwise appear when running scripts:

```
OMP: Error #15
```
#### I or someone else already downloaded a dataset using another copy of the FLamby repository, my copy of FLamby cannot find it, and I don't want to download it again. What can I do?
There are two options. The safest one is to cd into the FLamby directory and run:

```bash
python create_dataset_config.py --dataset-name fed_camelyon16 OR fed_heart_disease OR ... --path /path/where/the/dataset/is/located
```

This will create the required `dataset_location.yaml` file in your copy of the repository, allowing FLamby to find the dataset.
One can also pass the `data_path` argument directly when instantiating the dataset, but this is not recommended:

```python
from flamby.datasets.fed_heart_disease import FedHeartDisease

# Instantiate the training split of the first center from an explicit path
center0 = FedHeartDisease(center=0, train=True, data_path="/path/where/the/dataset/is/located")
```
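FLamby datasets follow the standard `torch.utils.data.Dataset` API, so the resulting object can be used with a regular `DataLoader`. A minimal usage sketch (the batch size is an arbitrary choice for illustration):

```python
from torch.utils.data import DataLoader

from flamby.datasets.fed_heart_disease import FedHeartDisease

# Load center 0's training split and iterate over it like any torch Dataset
center0 = FedHeartDisease(center=0, train=True)
train_loader = DataLoader(center0, batch_size=4, shuffle=True)

for X, y in train_loader:
    print(X.shape, y.shape)  # one batch of features and labels
    break
```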
#### Can I run clients in different threads with FLamby? How does it run under the hood?
FLamby is a lightweight and simple solution, designed to allow researchers to quickly use cleaned datasets with a standard API. As a consequence, the benchmark code performing the FL simulation is minimalistic: all clients run sequentially in the same Python process, without multithreading, and each client's dataset is a distinct Python object.
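Conceptually, a federated round therefore boils down to a plain Python loop over the client datasets. The sketch below is a simplified illustration, not FLamby's actual strategy code; `local_update` and the uniform FedAvg-style averaging are assumptions made for the example, which also assumes a model with floating-point parameters only:

```python
import copy

import torch

def local_update(model, loader, epochs=1, lr=0.01):
    """Hypothetical helper: train a copy of the global model on one client's data."""
    local_model = copy.deepcopy(model)
    optimizer = torch.optim.SGD(local_model.parameters(), lr=lr)
    loss_fn = torch.nn.BCELoss()
    for _ in range(epochs):
        for X, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(local_model(X), y)
            loss.backward()
            optimizer.step()
    return local_model.state_dict()

def fedavg_round(global_model, client_loaders):
    # Clients are just different dataset objects visited one after the other
    # in the same process: no threads, no separate workers.
    client_states = [local_update(global_model, loader) for loader in client_loaders]
    # Uniform FedAvg-style aggregation of the client weights.
    avg_state = {
        key: torch.stack([state[key] for state in client_states]).mean(dim=0)
        for key in client_states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model
```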
#### Does FLamby support GPU acceleration?
FLamby supports GPU acceleration thanks to the underlying deep learning backend (PyTorch for now).
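Since FLamby models and datasets are plain PyTorch objects, moving computation to a GPU follows the usual PyTorch pattern. A minimal sketch, using the `Baseline` model exported by the dataset module:

```python
import torch
from torch.utils.data import DataLoader

from flamby.datasets.fed_heart_disease import Baseline, FedHeartDisease

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = Baseline().to(device)
loader = DataLoader(FedHeartDisease(center=0, train=True), batch_size=4)

for X, y in loader:
    # Move each batch to the same device as the model before the forward pass
    X, y = X.to(device), y.to(device)
    preds = model(X)
    break
```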