This repository has been archived by the owner on Apr 25, 2024. It is now read-only.

Connector fails on Unsupported verifier flavorAUTH_SYS #4

Open · danielhaviv opened this issue Jan 17, 2016 · 4 comments

@danielhaviv

Hi,
We're trying to use the connector to connect to a standard Linux NFS share, but we receive the following exception:
[root@ip-172-31-11-139 ~]# hadoop fs -ls /
16/01/14 11:40:53 ERROR rpc.RpcClientHandler: RPC: Got an exception
java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
at org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
at org.apache.hadoop.fs.nfs.rpc.RpcClientHandler.messageReceived(RpcClientHandler.java:62)
at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
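
For context: the stack trace shows the failure happens while Hadoop's ONC RPC layer parses the reply verifier, and only a couple of verifier flavors are accepted there; the missing space in "flavorAUTH_SYS" is just string concatenation in the error message. A self-contained sketch of that kind of flavor dispatch (class and enum names below are simplified stand-ins, not the verbatim Hadoop source):

public class VerifierFlavorSketch {

    enum AuthFlavor { AUTH_NONE, AUTH_SYS, RPCSEC_GSS }

    static String readVerifier(AuthFlavor flavor) {
        switch (flavor) {
            case AUTH_NONE:
                return "VerifierNone";   // accepted reply verifier
            case RPCSEC_GSS:
                return "VerifierGSS";    // accepted reply verifier
            default:
                // AUTH_SYS lands here; note the missing space in the
                // concatenated message, matching the log above.
                throw new UnsupportedOperationException(
                        "Unsupported verifier flavor" + flavor);
        }
    }

    public static void main(String[] args) {
        System.out.println(readVerifier(AuthFlavor.AUTH_NONE)); // fine
        readVerifier(AuthFlavor.AUTH_SYS); // reproduces the exception text
    }
}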

@gokulsoundar

Hi. We posted a newer version of this code: https://github.com/NetApp/NetApp-Hadoop-NFS-Connector/releases/tag/v1.0.6. It includes a patch for Hadoop, since stock Hadoop has a bug in this code path.
Try that instead and see if it helps.

@benruland

benruland commented May 17, 2016

Hi, I got the same error as @danielhaviv.

After replacing the nfs-hadoop.jar with your 3.0.0 version, the original error is gone, but there is still an error:

16/05/17 09:34:23 ERROR rpc.RpcClient: RPC: xid=107000001 RpcReply request denied: xid:107000001,messageType:RPC_REPLYverifier_flavor:AUTH_NONErejectState:AUTH_ERROR
16/05/17 09:34:23 ERROR mount.MountClient: Mount MNT operation failed with RpcException RPC: xid=107000001 RpcReply request denied: xid:107000001,messageType:RPC_REPLYverifier_flavor:AUTH_NONErejectState:AUTH_ERROR
16/05/17 09:34:23 DEBUG shell.Command: java.io.IOException
        at org.apache.hadoop.fs.nfs.mount.MountClient.mnt(MountClient.java:101)
        at org.apache.hadoop.fs.nfs.NFSv3FileSystemStore.<init>(NFSv3FileSystemStore.java:111)
        at org.apache.hadoop.fs.nfs.topology.SimpleTopologyRouter.getStore(SimpleTopologyRouter.java:83)
        at org.apache.hadoop.fs.nfs.NFSv3FileSystem.getFileStatus(NFSv3FileSystem.java:854)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:64)
        at org.apache.hadoop.fs.Globber.doGlob(Globber.java:285)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:151)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1634)
        at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
        at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
        at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
        at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:102)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:305)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:362)

I'm using Cloudera CDH 5.5.2.

Do you have any idea how to fix this problem? Mounting the NFS share via the usual Linux commands works like a charm; see the example below.
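
For reference, the manual mount that works is just the stock Linux NFSv3 client; the host and export below are taken from the nfs-mapping.json further down, while the mount point and options are my own choices:

# plain Linux NFSv3 mount of the same export; this succeeds where the connector fails
mount -t nfs -o vers=3 10.231.0.11:/vs01_02 /mnt/nfs-test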

Best regards,
Benjamin

edit: I added the nfs-mapping.json for further reference:

{
  "spaces": [
    {
      "name": "netapp",
      "uri": "nfs://10.231.0.11:2049/",
      "options": {
        "nfsExportPath": "/vs01_02",
        "nfsReadSizeBits": 20,
        "nfsWriteSizeBits": 20,
        "nfsSplitSizeBits": 27,
        "nfsAuthScheme": "AUTH_SYS",
    "nfsUsername": "root",
    "nfsGroupname": "root",
    "nfsUid": 0,
    "nfsGid": 0,
        "nfsPort": 2049,
        "nfsMountPort": -1,
        "nfsRpcbindPort": 111
      },
      "endpoints": [
        {
          "host": "nfs://10.231.0.11:2049/",
          "path": "/"
        }
      ]
    }
  ]
}
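
(For anyone reproducing this setup: the mapping file only takes effect if core-site.xml routes the nfs:// scheme to the connector. The snippet below is a sketch; fs.nfs.impl matches the class name in the stack trace above, but the fs.nfs.configuration property name and the json path are my assumptions and should be verified against the README of your connector version.)

<property>
  <name>fs.nfs.impl</name>
  <value>org.apache.hadoop.fs.nfs.NFSv3FileSystem</value>
</property>
<property>
  <name>fs.nfs.configuration</name>
  <value>/etc/hadoop/conf/nfs-mapping.json</value>
</property>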

@AnkitaD

AnkitaD commented Jun 5, 2016

Hey, check the controller-side "unix-user" and "unix-group" settings:

  1. The root user's UserID and GroupID on your SVM should be 0, not 1.

  2. Also create separate users.json and group.json files (a hypothetical sketch follows below).

You can refer to TR-4382 for creating these files. I guess this might help you get rid of the problem mentioned above.
tr-4382.pdf
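
As a rough illustration only (I have not verified the schema against TR-4382, so the field names below are hypothetical), the idea is a small JSON file mapping names to numeric IDs; users.json might look like:

{
  "users": [
    { "userName": "root", "userID": 0 },
    { "userName": "hdfs", "userID": 800 }
  ]
}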

@potnuruamar

With Hadoop 2.7.1 there is an open issue: https://issues.apache.org/jira/browse/HADOOP-12345. Please try with 2.8.0 or 3.0.0-alpha1.
