
AttributeError: 'NoneType' object has no attribute 'get' #65

Open
madmax01 opened this issue Mar 7, 2022 · 9 comments

Comments


madmax01 commented Mar 7, 2022

Hi everyone,

Just as info: with the latest gstatus and Gluster 10.1, once one brick in a replicated volume is down, the other Gluster nodes get an error, even though everything else is fine (apart from the one brick being down). As long as the volume is up it's not an emergency.

Maybe an easy fix?

Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/bin/gstatus/__main__.py", line 77, in <module>
  File "/usr/bin/gstatus/__main__.py", line 73, in main
  File "/usr/bin/gstatus/glusterlib/cluster.py", line 37, in gather_data
  File "/usr/bin/gstatus/glustercli/cli/volume.py", line 189, in status_detail
  File "/usr/bin/gstatus/glustercli/cli/parsers.py", line 307, in parse_volume_status
AttributeError: 'NoneType' object has no attribute 'get'
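
For context, this is the classic ElementTree failure pattern: find() returns None for an element that is missing from the XML, and calling .get() on that None raises exactly this error. A minimal sketch (illustrative XML, not the real gluster volume status detail --xml schema):

import xml.etree.ElementTree as ET

# When a node is offline, its element is simply absent from the status
# XML, and find() returns None instead of an Element.
doc = ET.fromstring("<volStatus><node hostname='vm1'/></volStatus>")

present = doc.find("node")      # Element for the reporting node
missing = doc.find("peer")      # None: no such element in the document

print(present.get("hostname"))  # prints 'vm1'
print(missing.get("hostname"))  # AttributeError: 'NoneType' object has no attribute 'get'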

thx

sac (Member) commented Mar 15, 2022

Hi, this is what I tried:

root@vm1:~# gstatus -a -v rep 

Cluster:
         Status: Healthy                 GlusterFS: 11dev
         Nodes: 2/2                      Volumes: 1/1

Volumes: 

rep
                Replicate          Started (PARTIAL) - 1/2 Bricks Up 
                                   Capacity: (56.17% used) 11.00 GiB/19.00 GiB (used/total)
                                   Bricks:
                                      Distribute Group 1:
                                         vm1:/data/r1   (Online)
                                         vm2:/data/r2   (Offline)
                                   Note: glusterd/glusterfsd is down in one or more nodes.
                                         Sizes might not be accurate.


And it works fine for me. Can you please provide the volume details and the exact command you executed?

@madmax01
Author

I just used the plain gstatus command as a quick test, nothing extra, and got the error above. This was when one Gluster node was down; all the other nodes then showed the same error.

I had this with earlier Gluster versions too, but I currently can't test with the same setup.

I'll recreate it in another lab setup and paste the results once I've upgraded the stack.


guinoise commented Apr 6, 2022

Same here: when one node is offline I get an error, either without any options (full status) or for a specific volume (like sac did).

gstatus
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/bin/gstatus/__main__.py", line 77, in <module>
  File "/usr/bin/gstatus/__main__.py", line 73, in main
  File "/usr/bin/gstatus/glusterlib/cluster.py", line 37, in gather_data
  File "/usr/bin/gstatus/glustercli/cli/volume.py", line 189, in status_detail
  File "/usr/bin/gstatus/glustercli/cli/parsers.py", line 307, in parse_volume_status
AttributeError: 'NoneType' object has no attribute 'get'

gstatus -a -v HA
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/bin/gstatus/__main__.py", line 77, in <module>
  File "/usr/bin/gstatus/__main__.py", line 73, in main
  File "/usr/bin/gstatus/glusterlib/cluster.py", line 35, in gather_data
  File "/usr/bin/gstatus/glusterlib/cluster.py", line 125, in _get_volume_details
  File "/usr/bin/gstatus/glustercli/cli/volume.py", line 179, in status_detail
  File "/usr/bin/gstatus/glustercli/cli/parsers.py", line 307, in parse_volume_status
AttributeError: 'NoneType' object has no attribute 'get'

@thetuxinator

Same issue here, and it's quite easy to reproduce: take one node offline (systemctl stop glusterd) and execute gstatus -o json. It looks like gstatus tries to get the status of all bricks, but if one node is offline, gluster volume status <volname> detail does not output the detailed status of that node's bricks, so that part of the output is empty. See the sketch below for the shape of a None-safe lookup.
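
For illustration, a None-safe lookup in that spirit could look like this; the element and attribute names are hypothetical, not the actual glustercli-python parser code:

import xml.etree.ElementTree as ET

def node_detail(doc, hostname):
    # Hypothetical schema: one <node> element per reporting brick,
    # absent entirely when the node is offline.
    node = doc.find("./node[@hostname='%s']" % hostname)
    if node is None:
        # An offline node reported nothing; return a placeholder
        # instead of crashing on None.get() like the traceback above.
        return {"hostname": hostname, "online": False}
    return {"hostname": node.get("hostname"), "online": True}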

@thetuxinator

Looks like https://github.com/gluster/glustercli-python/pull/48/files should fix this.

@madmax01
Author

Would it be possible to get a new binary compiled? ;)

I can also confirm: today, in a 4-way replication setup, I tested shutting down network ports, and each Gluster node complained with exactly the same error as above.

But in fact each node's own brick was online; the nodes just couldn't talk to each other.


thetuxinator commented May 19, 2022

You don't actually need to recompile: the fix is not in gstatus but in glustercli-python, and they merged it to master just a few days ago, see gluster/glustercli-python@1b03496. To pick up the fix, install the most recent glustercli-python module from master using pip install git+https://github.com/gluster/glustercli-python
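
To verify which version actually got installed (assuming the PyPI package name glustercli; pkg_resources is used here because the machines in this thread run Python 3.6):

# Print the installed glustercli-python version; anything that includes
# the commit linked above should have the fix.
import pkg_resources
print(pkg_resources.get_distribution("glustercli").version)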

@madmax01
Author

New version 1.0.8 works for me ;)


proligde commented Jul 22, 2022

At first I thought 1.0.8 fixed it for me too, but then I noticed that some of my volumes are now marked as DOWN in the cases where the previous version threw the traceback. Calling gstatus again, all the volumes are shown as UP again. It happens roughly every fourth call or so, just like the error did, and when it happens I notice a slightly longer delay in the call.

The Gluster volumes are in production use, work fine, and don't have any issues, at least none that I can see.

However, I found that gluster volume status sometimes (about as often as the gstatus command fails) shows one of these two messages:

Another transaction is in progress for <MyVolumeName>. Please try again after some time.

OR

Locking failed on <MyNodeName>. Please check log file for details.

My guess is that these transient errors might be what gets interpreted as the volume being DOWN?
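
If those transient lock errors are the cause, a caller-side workaround could be a small retry wrapper around the CLI call. This is only a sketch, not gstatus code; it assumes nothing beyond the standard gluster volume status <vol> detail --xml command:

import subprocess
import time

def volume_status_xml(volname, retries=4, delay=2.0):
    # Retry `gluster volume status <vol> detail --xml` when glusterd
    # reports a transient "Another transaction is in progress" or
    # locking error, so a momentary lock is not misread as DOWN bricks.
    proc = None
    for _ in range(retries):
        proc = subprocess.run(
            ["gluster", "volume", "status", volname, "detail", "--xml"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
            universal_newlines=True)  # text mode that works on Python 3.6
        if proc.returncode == 0 and proc.stdout.strip():
            return proc.stdout
        time.sleep(delay)  # back off while the other transaction finishes
    raise RuntimeError("gluster volume status kept failing: " +
                       proc.stderr.strip())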
