1.question
13:07:42,558 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server0/192.168.2.10:9000. Already tried 5 time(s).
answer:

The NameNode has not come up. Check the NameNode log and troubleshoot from there.
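When the client keeps retrying like this, the NameNode process is usually down or unreachable. Before digging into logs, a quick TCP probe of the RPC port can confirm this; the helper below is a hypothetical sketch, not part of Hadoop:

```python
import socket

def namenode_reachable(host, port=9000, timeout=2.0):
    """Return True if a TCP connection to the NameNode RPC port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. namenode_reachable("server0")  # False while the NameNode is down
```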
2.question
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /server/bin/hadoop/data: namenode namespaceID = ; datanode namespaceID =
answer:

The namespaceIDs probably conflict (the NameNode was reformatted while the DataNodes kept their old data). One fix: format, then delete the old name/data files, then format again; optionally format all slaves as well.
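Reformatting wipes HDFS data. An alternative is to align the DataNode's namespaceID with the NameNode's by editing the `VERSION` file under the data directory. A hedged sketch (the path and helper are illustrative, not a Hadoop API):

```python
import re

def patch_namespace_id(version_file, new_id):
    """Rewrite the namespaceID line in a datanode VERSION file
    (e.g. /server/bin/hadoop/data/current/VERSION) so it matches
    the namenode's namespaceID."""
    with open(version_file) as f:
        text = f.read()
    text = re.sub(r"namespaceID=\d+", "namespaceID=%s" % new_id, text)
    with open(version_file, "w") as f:
        f.write(text)
```

Stop the DataNode before editing, and restart it afterwards.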
3.question
17:26:57,748 ERROR namenode.NameNode - java.lang.NullPointerException
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:136)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:176)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:206)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:240)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
answer:

hdfs://server0:9000/

The problem is the trailing "/" after 9000. In the Hadoop configuration files, make sure none of the paths end with "/".
**Remember: after changing this, sync the configuration to all slaves.
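For example, the fs.default.name entry in core-site.xml should carry no trailing slash (hostname and port here follow this cluster's setup):

```xml
<!-- core-site.xml: no trailing "/" after the port -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://server0:9000</value>
  <!-- wrong: <value>hdfs://server0:9000/</value> -->
</property>
```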
4.question
Exception in thread "main" java.io.IOException: Call to server0/192.168.2.10:9000 failed on local exception: java.io.EOFException
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
        at org.apache.hadoop.ipc.Client.call(Client.java:743)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:94)

18:24:57,507 WARN  ipc.Server - Incorrect header or version mismatch from 192.168.2.10:42413 got version 3 expected version 4
answer:

If a slave cannot reach the master and this error appears even though the configuration files are fine, it is a Hadoop version problem: the Hadoop running on the cluster and the Hadoop bundled with Nutch 1.2 are different versions.
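The "got version 3 expected version 4" warning means the two sides speak different Hadoop RPC protocol versions. A purely illustrative sketch of the check (toy code modelled on the idea, not Hadoop's actual wire format): the client sends a connection header carrying a version byte, and the server rejects a mismatch:

```python
import struct

HEADER_MAGIC = b"hrpc"   # illustrative magic bytes
EXPECTED_VERSION = 4     # the protocol version this server speaks

def make_header(version):
    """Build a toy connection header: magic bytes plus one version byte."""
    return HEADER_MAGIC + struct.pack("B", version)

def check_header(header):
    """Mimic the server-side check that produced the warning above."""
    magic, version = header[:4], header[4]
    if magic != HEADER_MAGIC or version != EXPECTED_VERSION:
        return ("Incorrect header or version mismatch: "
                "got version %d expected version %d" % (version, EXPECTED_VERSION))
    return "OK"
```

The actual fix is to make both sides use the same Hadoop jars, for example by replacing the hadoop-core jar bundled inside Nutch 1.2 with the one the cluster runs.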
5.question
17:07:00,946 ERROR datanode.DataNode - DatanodeRegistration(192.168.2.12:50010, storageID=DS--127.0.0.2-5333243, infoPort=50075, ipcPort=50020):DataXceiver
        org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block blk_3 is valid, and cannot be written to.
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:983)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:98)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
        at java.lang.Thread.run(Thread.java:662)
answer:

The cause: in /etc/hosts, each IP must be mapped to its hostname.

For example:
&&&&&& #hadoop master
192.168.2.10&&& server0
192.168.2.11&&& server1
192.168.2.12&&& server2
192.168.2.13&&& server3
**If the problem persists after that change: in /etc/HOSTNAME, the value must be the hostname of the master or slave that machine actually is, not localhost.
For example, on the server1 machine, HOSTNAME should contain server1.
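A quick way to verify the mapping (the helper is hypothetical, not a Hadoop API; the resolver is injectable for illustration): the hostname must resolve to the machine's LAN IP, not a loopback address, or the DataNode registers itself with a useless 127.x address:

```python
import socket

def resolves_to_loopback(hostname, resolve=socket.gethostbyname):
    """True if hostname maps to 127.x (or does not resolve at all),
    which breaks DataNode registration with the NameNode."""
    try:
        ip = resolve(hostname)
    except OSError:
        return True
    return ip.startswith("127.")

# e.g. resolves_to_loopback("server1") should be False on a correctly
# configured node.
```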