Data management on the disk array¶
Partitions, quotas, advice and commands to manage your data on Boreale
The data of the Boreale cluster is stored on a disk array accessible from the whole cluster through a GPFS shared filesystem. The throughput measured at delivery is 3.5 GB/s for the /dlocal partition. These partitions are optimized for reading and writing large files.
If your work involves a lot of small files, contact support to set up access to a partition more suited to this type of processing (see the rest of the documentation for explanations).
Some practical commands¶
How many files do I have?
In the quota report, look at the gpfs1 dlocal line, column "files".
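A minimal sketch, assuming the GPFS device on Boreale is the gpfs1 mentioned above:

```shell
# Show the per-user GPFS quota report; the number of files appears
# in the "files" column of the "gpfs1 dlocal" line
mmlsquota -u "$USER" gpfs1
```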
How much disk space do I consume?
By adding --block-size auto at the end of the command, you get a display in "human" format (with the unit of value most readable for a human).
For a human-format display of quotas on the dlocal spaces only:
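For example (gpfs1:dlocal is an assumption about the device:fileset naming on Boreale):

```shell
# Human-readable quota report restricted to the dlocal fileset
mmlsquota --block-size auto -u "$USER" gpfs1:dlocal
```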
How do I list the temporary job folders of the user name_login?
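One possible approach, assuming the job folders sit directly under /dlocal/run and are owned by their user (name_login is a placeholder):

```shell
# List the temporary job folders under /dlocal/run owned by name_login
find /dlocal/run -mindepth 1 -maxdepth 1 -type d -user name_login
```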
How do I list the jobs submitted to the compute partition between 2022-11-01 and 2022-11-15, in order to clean up after them?
You can add the -l option to display more information.
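With the Slurm scheduler (an assumption based on the partition vocabulary used here), the accounting history can be queried with sacct:

```shell
# Jobs submitted to the "compute" partition between the two dates;
# -X keeps one line per job, and -l displays much more information
sacct -u "$USER" -r compute -S 2022-11-01 -E 2022-11-15 -X
```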
How do I find the number of files in a folder?
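For example, counting recursively with find (DIR stands for any folder of yours):

```shell
# Count the regular files under a directory, recursively
DIR="${DIR:-.}"
find "$DIR" -type f | wc -l
```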
I need to reduce my number of files, but I can't delete anything. How can I do it?
Archive some trees with the tar command: one archive = one file.
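For example (results/ is a hypothetical tree of small files):

```shell
# Turn a whole tree of small files into a single compressed archive,
# then remove the original tree to reclaim file-count quota
tar czf results.tar.gz results/ && rm -rf results/
```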
In the submission scripts...
The retrieval of data is done with an mv command. Do not replace it with a cp command, which duplicates the data and can take a long time to execute. The mv command is immediate between folders of the same filesystem, since it is a simple rename.
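A sketch of the end of a submission script, assuming Slurm and the /dlocal layout described on this page; the paths are illustrative:

```shell
#!/bin/bash
# End of a job script: retrieve results with mv, not cp.
# $SLURM_JOB_ID and the target path are assumptions about the setup.
mv "/dlocal/run/$SLURM_JOB_ID/results" "/dlocal/home/$USER/"
```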
If you develop...
Choose large files in HDF5-type formats rather than a multitude of small files: you will gain performance on computing clusters with large block sizes.
If you generate a lot of files...
Keep an eye on your quota. Display it automatically at login (with an appropriate addition to your shell startup file).
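For example, one could append a line like the following to ~/.bashrc (the startup file depends on your shell, and gpfs1 is an assumed device name):

```shell
# Print a human-readable quota summary at each login
mmlsquota --block-size auto -u "$USER" gpfs1
```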
Some additional information¶
Partitions and their use
The disk array is separated into 2 parts (file systems), each containing subparts (filesets).
The first part is the largest and most powerful; the second part is smaller but uses SSD disks to compensate for the performance loss of its small block size.
/home contains the home folders of users.
/dlocal contains the temporary job folders (/dlocal/run) and some permanent job folders (/dlocal/home) when the need is qualified.
/soft contains software made available by CRIANN.
Warning: no backup is made of user data. Remember to repatriate your codes and data to your laboratories.
We strongly encourage you to use versioning tools, such as Git, at your institutions. Ask your IT department for more information.
A git client is installed on the frontend nodes; no module loading is required.
In order to guarantee good performance, it is necessary to keep the filling of the array at a reasonable level (volume and number of files). For this, CRIANN has set two types of quotas:
/home: default quota of 50 GB / user
/dlocal: quota of 10 million files
In both cases, the limits correspond to "soft" values that can be exceeded temporarily (for 7 days). After this period, the usage must drop back below the soft limit; otherwise, no file creation is possible. A "hard" limit of 10 GB above the soft quota is also set: it cannot be exceeded under any circumstances.
If you feel that these limits are too restrictive for you, do not hesitate to open a ticket with support. These limits can be increased upon justified request.
The mmlsquota command explained at the top of this page allows you to display the quotas and the grace period between the "soft" and "hard" quotas. Once the 7-day time limit is exceeded, any request for additional volume allocation is refused. Only commands that bring usage back under the "soft" quota are allowed (mv commands, for example).
The problem of the number of files
CRIANN has chosen to keep the temporary files of the jobs (/dlocal/run/<jobid>) beyond the lifetime of the jobs. This folder can be used as a working folder for the next job.
These folders are automatically deleted by CRIANN 45 days after the end of the corresponding job. This makes it possible to chain several jobs, and also to recover data that was not retrieved at the end of a job.
For most users, 45 days' worth of jobs corresponds to a few thousand files. For some users of software such as OpenFOAM, this can represent several tens of millions of files. The quota is there to prevent drift, but the submission of new jobs becomes impossible if the quota is exceeded: some cleaning is therefore necessary in addition to the automatic cleanup.
If you have any questions, please contact support: email@example.com