When you try to perform any HDFS write operation, you get the following exception:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/hadoopuser/dir/in. Name node is in safe mode.
What is Safe Mode in Hadoop?
Safe Mode in Hadoop is a maintenance state of the NameNode. While the NameNode is in Safe Mode, the HDFS cluster is read-only: no modifications to files or directories are allowed, and blocks are neither replicated nor deleted.
When the NameNode starts, it automatically enters Safe Mode and performs the following initialization tasks:
- Loads the file system namespace from the last saved fsimage.
- Loads the edit log file.
- Applies the edit log transactions to the fsimage, creating a new, up-to-date file system namespace.
- Receives block reports, containing block-location information, from all DataNodes.
To leave Safe Mode, the NameNode must receive block reports for at least a configurable threshold percentage of blocks, and those blocks must satisfy the minimum replication condition. Even if this threshold is reached quickly, Safe Mode is extended for a configurable amount of time so that the remaining DataNodes can check in before the NameNode starts replicating under-replicated blocks or deleting over-replicated ones. Once this block-maintenance activity is complete, the NameNode leaves Safe Mode automatically.
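The threshold, minimum replication, and extension period described above map to NameNode settings in hdfs-site.xml. A sketch with the commonly documented defaults follows; verify the property names and defaults against your Hadoop version before relying on them:

```xml
<!-- hdfs-site.xml: settings that control when the NameNode leaves Safe Mode -->
<property>
  <!-- Fraction of blocks that must meet minimum replication before exit -->
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.999f</value>
</property>
<property>
  <!-- Minimum number of replicas a block needs to count toward the threshold -->
  <name>dfs.namenode.replication.min</name>
  <value>1</value>
</property>
<property>
  <!-- Extra time (milliseconds) to stay in Safe Mode after the threshold is reached -->
  <name>dfs.namenode.safemode.extension</name>
  <value>30000</value>
</property>
```

Lowering the threshold or the extension makes the NameNode exit Safe Mode sooner, at the cost of starting replication work before all DataNodes have reported in.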
You can check whether your Hadoop cluster is in Safe Mode by running the following command:
hdfs dfsadmin -safemode get
It prints either "Safe mode is ON" or "Safe mode is OFF".
If you have just restarted your cluster, give it ample time to leave Safe Mode on its own; how long this takes depends on the size of the cluster. If the NameNode is stuck in that state, you can force it out with the following command:
hdfs dfsadmin -safemode leave
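Forcing the NameNode out of Safe Mode can leave under-replicated or missing blocks unaddressed, so a gentler pattern is to wait for a normal exit first and audit the file system if you do force it. A sketch using standard dfsadmin and fsck subcommands (run against your own cluster):

```shell
#!/bin/sh
# Check the current state; prints "Safe mode is ON" or "Safe mode is OFF"
hdfs dfsadmin -safemode get

# Block until the NameNode leaves Safe Mode on its own
hdfs dfsadmin -safemode wait

# Only if it stays stuck, force it out...
# hdfs dfsadmin -safemode leave

# ...and then audit the namespace for missing or corrupt blocks
hdfs fsck / | tail -n 20
```

The `-safemode wait` form is convenient in startup scripts, since it returns only once the cluster is writable again.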