While we were experimenting with MapReduce programs on our Hadoop cluster, we started noticing the following errors:
Java HotSpot(TM) 64-Bit Server VM warning: Insufficient space for shared memory file:
   /tmp/hsperfdata_hdfs/28099
Try using the -Djava.io.tmpdir= option to select an alternate temp location.

Exception in thread "main" java.io.IOException: No space left on device
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:345)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:107)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:81)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:209)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
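Note that the JVM warning itself suggests a workaround: instead of resizing /tmp, you can point the JVM at a larger temp directory. A rough sketch of that approach, using Hadoop's standard HADOOP_OPTS environment variable (the /var/hadoop/tmp path here is a hypothetical example, not a path from our setup):

```shell
# Create a temp directory on a partition with plenty of free space.
# /var/hadoop/tmp is a hypothetical path -- pick any large volume.
sudo mkdir -p /var/hadoop/tmp
sudo chmod 1777 /var/hadoop/tmp

# Have Hadoop client JVMs use it instead of /tmp for scratch files
export HADOOP_OPTS="$HADOOP_OPTS -Djava.io.tmpdir=/var/hadoop/tmp"
```

In our case we chose to fix /tmp itself, since other processes on the node were affected by the small mount as well.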
At first glance it seemed the disk was full and that this was causing the jobs to fail. Further analysis showed that the /tmp directory was mounted with only 300 MB of allocated space. Remounting /tmp with 2 GB of space solved the problem:
sudo mount -o remount,size=2G /tmp
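To confirm the fix took effect, you can check the available space on the mount. Keep in mind the remount above does not survive a reboot; a sketch of both steps, assuming /tmp is a tmpfs mount (which the size= option suggests):

```shell
# Verify the new size and free space on /tmp
df -h /tmp

# To make the change persistent across reboots, update the /tmp
# entry in /etc/fstab (assumption: /tmp is tmpfs on this node):
#   tmpfs  /tmp  tmpfs  defaults,size=2G  0  0
```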