Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray

Reposted from: http://www.cnblogs.com/longshiyVip/p/4805418.html

Running Hadoop from Eclipse failed with: Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V. This problem cost me a long time before I found the fix below.

Some background: my machine runs 64-bit Windows 8.1, with a Red Hat virtual machine hosting the Hadoop deployment. Running a Hadoop MapReduce program from Eclipse on Windows threw the NativeCrc32 UnsatisfiedLinkError shown above.

The solution I eventually found online: the error comes from a hadoop.dll version mismatch. Releases before Hadoop 2.4 need a different DLL than later ones, so you have to pick the version that matches both your Hadoop release and your operating system (including 32- vs 64-bit), and replace the file in both Hadoop/bin and C:\windows\system32.
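Before swapping DLLs, it can help to confirm what the JVM actually sees. Here is a minimal diagnostic sketch of my own (not from the original post; the class name NativeCheck is made up), assuming the Hadoop 2.x jars are on the Eclipse project's classpath:

```java
import org.apache.hadoop.util.NativeCodeLoader;

public class NativeCheck {
    public static void main(String[] args) {
        // true only if Hadoop loaded a native library (hadoop.dll on Windows);
        // a version-mismatched DLL can still fail later with UnsatisfiedLinkError
        System.out.println("native code loaded: " + NativeCodeLoader.isNativeCodeLoaded());
        // directories the JVM searches for native libraries; Hadoop/bin
        // (or C:\windows\system32) must be on this path for hadoop.dll to be found
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
    }
}
```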

My Hadoop is 2.5.0, and I had been using the hadoop.dll from 1.2. Following the advice online I swapped in a 2.6 hadoop.dll, but it still failed; it turned out the 2.6 DLL I had found was built for a 32-bit operating system. I was lucky to finally track down a 64-bit 2.6 hadoop.dll, which solved the problem. My sincere thanks to whoever uploaded it; it was very hard to find.
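In other words, the DLL's bitness has to match the JVM that Eclipse launches, not just Windows itself. A quick check (again my own sketch, hypothetical class name):

```java
public class ArchCheck {
    public static void main(String[] args) {
        // "amd64" means a 64-bit JVM, which needs the 64-bit hadoop.dll;
        // loading a 32-bit DLL into a 64-bit process fails with a link error
        System.out.println("os.arch = " + System.getProperty("os.arch"));
        System.out.println("data model = " + System.getProperty("sun.arch.data.model"));
    }
}
```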

A download of the 64-bit hadoop.dll files is provided here: hadoop2.6(x64)V0.2

Original download link: http://download.csdn.net/detail/myamor/8393459


When running the MapReduce program on Hadoop, the following error was reported:

Exception in thread “main” java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:430)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:202)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.close(ChecksumFileSystem.java:400)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:80)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:603)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:71)
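The trace shows exactly where the native code is hit: while submitting the job, the client writes the input-split files through ChecksumFileSystem, whose per-chunk CRC is computed by the native NativeCrc32 code inside hadoop.dll. As an aside that is not part of the original fix: a workaround I have seen for cases where the DLL cannot be replaced is to map the file:// scheme to RawLocalFileSystem, which skips checksumming (and therefore the native CRC call) at the cost of losing the client-side .crc files:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class RawLocalFsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // use the raw (non-checksumming) local filesystem for the file:// scheme
        conf.set("fs.file.impl", "org.apache.hadoop.fs.RawLocalFileSystem");
        FileSystem fs = FileSystem.get(URI.create("file:///"), conf);
        System.out.println(fs.getClass().getName()); // org.apache.hadoop.fs.RawLocalFileSystem
    }
}
```

Setting the same property on the job's Configuration before job.waitForCompletion(true) applies it to the split-writing step shown in the trace.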

Following the blog post above, the problem was finally solved.


Reposted from blog.csdn.net/zimojiang/article/details/80473201