Hadoop Trash (Recycle Bin) and the fs.trash Parameters Explained
Preface:
- On Linux, one of the biggest inconveniences, in my opinion, is that there is no recycle bin: an rm -rf can easily cause enormous losses. Hadoop, or more precisely HDFS, does have a trash (recycle bin) concept, so data that was deleted by mistake can still be recovered.
- The trash option in Hadoop is disabled by default, so for it to take effect you need to enable it in advance by editing core-site.xml under the conf directory (a quick way to check the current setting is shown below). Let's test the difference before and after enabling it:
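Before touching the configuration, you can confirm what the client currently sees. A minimal check, assuming the hdfs command is on your PATH; a value of 0 means trash is disabled:

# Print the effective trash retention in minutes (0 = trash disabled)
hdfs getconf -confKey fs.trash.interval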
1. Without trash enabled
[hadoop@hadoop000 ~]$ hdfs dfs -put test.log /
[hadoop@hadoop000 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   1 hadoop supergroup         34 2018-05-23 16:49 /test.log
drwx------   - hadoop supergroup          0 2018-05-19 15:48 /tmp
drwxr-xr-x   - hadoop supergroup          0 2018-05-19 15:48 /user
# Delete test.log; note the message
[hadoop@hadoop000 ~]$ hdfs dfs -rm -r /test.log
Deleted /test.log
# List again: test.log is gone for good
[hadoop@hadoop000 ~]$ hdfs dfs -ls /
Found 2 items
drwx------   - hadoop supergroup          0 2018-05-19 15:48 /tmp
drwxr-xr-x   - hadoop supergroup          0 2018-05-19 15:48 /user
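As an aside not covered in the walkthrough above: even once trash is enabled, this same immediate, unrecoverable delete can still be triggered explicitly. A minimal sketch using the standard -skipTrash option of hdfs dfs -rm (/test.log is just the example file from this post):

# Bypass the trash and delete immediately, even when fs.trash.interval > 0
hdfs dfs -rm -r -skipTrash /test.log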
2. With trash enabled
[hadoop@hadoop000 hadoop]$ pwd
/opt/software/hadoop-2.8.1/etc/hadoop
# Add the fs.trash parameters to enable trash (no process restart required)
[hadoop@hadoop000 hadoop]$ vi core-site.xml
<property>
    <name>fs.trash.interval</name>
    <value>1440</value>
</property>
<property>
    <name>fs.trash.checkpoint.interval</name>
    <value>1440</value>
</property>
# fs.trash.interval is the retention period: within it a deleted file is only moved into the trash directory instead of being removed right away, and HDFS deletes the data for real only once the period expires. The unit is minutes; 1440 minutes = 60 * 24, exactly one day.
# fs.trash.checkpoint.interval is the interval between trash checkpoints (the garbage-collection check) and should be less than or equal to fs.trash.interval.
# See the official documentation: http://hadoop.apache.org/docs/r2.8.4/hadoop-project-dist/hadoop-common/core-default.xml
[hadoop@hadoop000 ~]$ hdfs dfs -put test.log /
[hadoop@hadoop000 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   1 hadoop supergroup         34 2018-05-23 16:54 /test.log
drwx------   - hadoop supergroup          0 2018-05-19 15:48 /tmp
drwxr-xr-x   - hadoop supergroup          0 2018-05-19 15:48 /user
# Delete test.log; note how the message differs this time
[hadoop@hadoop000 ~]$ hdfs dfs -rm -r /test.log
18/05/23 16:54:55 INFO fs.TrashPolicyDefault: Moved: 'hdfs://192.168.6.217:9000/test.log' to trash at: hdfs://192.168.6.217:9000/user/hadoop/.Trash/Current/test.log
# The deleted file now sits in the trash
[hadoop@hadoop000 ~]$ hdfs dfs -ls /user/hadoop/.Trash/Current
Found 1 items
-rw-r--r--   1 hadoop supergroup         34 2018-05-23 16:54 /user/hadoop/.Trash/Current/test.log
# Recover the mistakenly deleted file
[hadoop@hadoop000 ~]$ hdfs dfs -mv /user/hadoop/.Trash/Current/test.log /test.log
[hadoop@hadoop000 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   1 hadoop supergroup         34 2018-05-23 16:54 /test.log
drwx------   - hadoop supergroup          0 2018-05-19 15:48 /tmp
drwxr-xr-x   - hadoop supergroup          0 2018-05-19 15:48 /user
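If you need to reclaim the space held by the trash before the retention period runs out, it can also be emptied by hand. A minimal sketch using the standard expunge command, which creates a new trash checkpoint and permanently removes checkpoints older than the retention threshold:

# Checkpoint the current trash and permanently delete checkpoints older than fs.trash.interval
hdfs dfs -expunge

Keep in mind that recovery with mv, as shown above, only works while the file is still under .Trash; once the checkpoint holding it expires (or expunge removes it), the data is gone for good.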