Strange issue with permission denied and bizarre mtime #4314
The volume type looks wrong. Did you create the volume with replica count 9, or did you want to create a distributed replicate volume with replica count 3? Please share the volume create command used here. Use the command below to create a distributed replicate volume with replica count 3.
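The command itself did not survive the page extraction. A typical invocation for a 3 × 3 distributed replicate volume over 9 bricks might look like the following (the volume name, hostnames, and brick paths are placeholders, not details from this thread):

```shell
# Distributed replicate: 9 bricks with replica 3 gives 3 subvolumes
# of 3 replicas each. "myvol" and the host/brick names are hypothetical.
gluster volume create myvol replica 3 \
    host1:/bricks/brick1 host2:/bricks/brick1 host3:/bricks/brick1 \
    host4:/bricks/brick1 host5:/bricks/brick1 host6:/bricks/brick1 \
    host7:/bricks/brick1 host8:/bricks/brick1 host9:/bricks/brick1
gluster volume start myvol
```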
The goal is to have 9 replicas.
This is not a supported configuration; only replica counts 2 and 3 are tested and supported. You can explore a Disperse volume, where you get high availability and more storage space with the same number of bricks. For example, create a volume with 6 data bricks and 3 redundancy bricks. Your volume size will be 6 × the size of each brick, and the volume will remain available even if 3 nodes/bricks go down. @xhernandez / @pranithk Is it possible to have a redundancy count greater than the number of data bricks if high availability is more important than storage space?
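The 6-data / 3-redundancy layout suggested above could be created roughly like this (volume name, hostnames, and brick paths are placeholders):

```shell
# Dispersed volume: 6 data + 3 redundancy bricks. Usable size is
# 6 x brick size, and the volume survives the loss of any 3 bricks.
gluster volume create myvol disperse-data 6 redundancy 3 \
    host1:/bricks/brick1 host2:/bricks/brick1 host3:/bricks/brick1 \
    host4:/bricks/brick1 host5:/bricks/brick1 host6:/bricks/brick1 \
    host7:/bricks/brick1 host8:/bricks/brick1 host9:/bricks/brick1
gluster volume start myvol
```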
No, it's not possible. The number of data bricks is enforced to always be greater than half of the total bricks, to guarantee quorum. In this case the maximum redundancy configuration would be 5 + 4.
A thing to consider is that dispersed volumes require more computational power to encode/decode the data, and the performance could differ compared to a replicated volume (in some workloads it could be better and in some slower). Some testing should be done to make sure everything is within the allowed tolerance if they want to go with dispersed volumes.
Xavi
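For 9 bricks, the 5 data + 4 redundancy maximum mentioned above would be requested by specifying the total disperse count and the redundancy count; a hypothetical create command (all names are placeholders, and the shell expands the brace pattern into 9 brick arguments):

```shell
# Maximum-redundancy dispersed layout on 9 bricks: 5 data + 4 redundancy.
# Usable size is 5 x brick size; up to 4 bricks can fail.
gluster volume create myvol disperse 9 redundancy 4 \
    host{1..9}:/bricks/brick1
gluster volume start myvol
```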
We also have a smaller testing environment with the same problem presentation.
Description of problem:
Randomly we'll start getting permission denied errors accompanied by strange mtimes on the fuse mount.
We could not find a way to reproduce the problem, and it happens on directories that have been present for multiple years.
The symptoms are always similar, in that the modified time for the directory is set to some bizarre, inaccurate year.
From the FUSE mount point:
From the brick folder (independent of the brick):
In the logs we see:
Running `sudo touch` on the directory resets the timestamp, and the directories become accessible again.
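The reset behaviour can be sketched locally (outside GlusterFS): give a directory a bogus far-future mtime, then restore it with `touch`. The directory path and the particular bogus date below are illustrative only, not taken from the report:

```shell
# Simulate a directory with a bizarre mtime, then repair it the same way
# the reporter did (they used sudo on the FUSE mount; plain touch suffices
# here because we own the directory).
demo=$(mktemp -d)
touch -d '2106-02-07 06:28:15' "$demo"   # hypothetical far-future timestamp
stat -c '%y' "$demo"                     # shows the bogus year
touch "$demo"                            # rewrite the timestamp
stat -c '%y' "$demo"                     # back to the current time
```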
Expected results:
Access as a normal user
Mandatory info:
- The output of the `gluster volume info` command:
- The output of the `gluster volume status` command:
- The output of the `gluster volume heal` command:
- Is there any crash? Provide the backtrace and coredump
No crash, no coredumps
- The operating system / glusterfs version:
Mix of Ubuntu 20.04 and Ubuntu 22.04
glusterfs 10.1 on Ubuntu 22.04
glusterfs 7.2 on Ubuntu 20.04
The issue happens the same on either version.