
Dumps project is showing the wrong quota for Gluster storage
Closed, Declined · Public

Description

This was reported a *very* long time ago, but Ryan seems to be ignoring me on IRC. :(

Anyway, the GlusterFS project storage is reporting inconsistent figures between the df -h and du -sh commands (run inside /data/project).

This is the output of the commands:

hydriz@dumps-1:/data/project$ df -h
Filesystem                                 Size  Used Avail Use% Mounted on
projectstorage.pmtpa.wmnet:/dumps-project  300G  173G  128G  58% /data/project

hydriz@dumps-1:/data/project$ du -sh
11G     .

Can someone look into this and resolve it soon? Many thanks!


Version: unspecified
Severity: normal
See Also:
https://bugzilla.redhat.com/show_bug.cgi?id=765478

Details

Reference
bz40742

Event Timeline

bzimport raised the priority of this task to Medium · Nov 22 2014, 12:58 AM
bzimport set Reference to bz40742.
bzimport added a subscriber: Unknown Object (MLST).

You ping me on IRC at random hours and disappear for days or weeks at a time. That's fine, but since you're mostly unreachable during my waking hours, filing bugs like this is always the better option. I had no idea you had reported this.

This seems to be yet another bug in Gluster. It has been reported in numerous places and there has been no response.

Yesterday df claimed projectstorage.pmtpa.wmnet:/dumps-project had 18 TB in total and I don't remember how many TB free, but not a byte could be written to disk because the quota had allegedly been reached.
Now that some stuff has been deleted, df says 234G used vs. 72G according to du, and writing to disk works again.

Now /data/project doesn't show up at all in df.

Which instance are you trying to access this from?

(In reply to comment #5)

Which instance are you trying to access this from?

dumps-1, dumps-2 (just tried) and all the others (tried in the past) behave the same.

root@i-00000355:~# df
Filesystem                                  1K-blocks     Used Available Use% Mounted on
/dev/vda1                                    10309828  2624584   7161636  27% /
udev                                          2020900        8   2020892   1% /dev
tmpfs                                          809992      252    809740   1% /run
none                                             5120        0      5120   0% /run/lock
none                                          2024972        0   2024972   0% /run/shm
/dev/vdb                                     41284928   180240  39007536   1% /mnt
projectstorage.pmtpa.wmnet:/dumps-home       52428800   135296  52293504   1% /home
projectstorage.pmtpa.wmnet:/dumps-project   314572800 96101632 218471168  31% /data/project

^^ Note that the filesystem is an automount and needs to be mounted for df to show it. It will mount itself automatically when it is accessed.
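
For instance (a minimal sketch of the behaviour described above; /data/project is the mount from this thread and the redirect is only there to discard the listing):

$ ls /data/project > /dev/null    # any access to the path triggers the automount
$ df -h /data/project             # the Gluster volume now appears in df output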

(In reply to comment #7)

^^ Note that the filesystem is an automount and needs to be mounted for df to
show it. It will mount itself automatically when it is accessed.

I browsed directories and deleted files; isn't that enough? Anyway, OK, we're now back to the situation in comment 0.

Did you consider switching to NFS?

(In reply to comment #9)

Did you consider switching to NFS?

Who is "you"? Are the project users supposed to do something? Thanks.

(In reply to comment #9)

Did you consider switching to NFS?

Yes, but that created an extremely large load on the host nodes, which was the main factor that crashed the whole of Labs last year (see relevant bug: bug 36993).

This might not necessarily be relevant now since we are not doing much I/O, so we can experiment with that. However, we should be addressing the root cause, which is the Gluster bug, since it can affect other projects in the future.

Problem still current:

$ df -h
Filesystem                                 Size  Used Avail Use% Mounted on
[...]
projectstorage.pmtpa.wmnet:/dumps-project  300G  102G  199G  34% /data/project

$ du -shc /data/project/
11G /data/project/

Yes, as I mentioned, I have no plans to fix this or even investigate it. Are you having issues writing to the filesystem? If not, we'll just wait until Gluster is replaced by NFS.

OK, thanks for clarifying the status on Bugzilla too. How hard would it be to raise the quota to 400 GB so that we can use the standard 300?
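
(For reference, a directory quota on the Gluster side can usually be raised with a one-line CLI call. The sketch below assumes the volume is simply named dumps-project, as suggested by the projectstorage.pmtpa.wmnet:/dumps-project mount, and that quotas are already enabled on it; it is not necessarily the exact command the admins would run.)

gluster volume quota dumps-project limit-usage / 400GB   # raise the limit on the volume root
gluster volume quota dumps-project list                  # verify the configured limit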

(In reply to comment #13)

Are you having issues writing to the filesystem?

Not yet since you fixed it, thanks.

I think the 300GB will actually be usable. I can raise it, but you guys have mentioned in your emails that it isn't necessary.

(In reply to comment #15)

I think the 300GB will actually be usable.

Hm? When?

I can raise it, but you guys have
mentioned in your emails that it isn't necessary.

Well, Hydriz said that we'd try to manage with 300 GB, but not with 200.