Please credit the source when reposting: http://blog.csdn.net/l1028386804/article/details/79057521
1. Scenario Description
For data-source preparation, see the post 《Python之——自动上传本地log文件到HDFS(基于Hadoop 2.5.2)》 (Python: Automatically Uploading Local Log Files to HDFS, Based on Hadoop 2.5.2).
The volume of requests a website receives is directly tied to its quality of service, so this metric is worth analyzing and monitoring. In this example we count website visits in one-minute intervals.
2. Implementing the MapReduce Job
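The mapper below assumes a standard Apache/Nginx access-log layout in which the fourth whitespace-separated field is the request timestamp. A hypothetical log line for illustration (not taken from the actual data):

192.168.1.10 - - [14/Jan/2018:14:00:07 +0800] "GET /index.html HTTP/1.1" 200 5326

Splitting such a line on whitespace, field index 3 is [14/Jan/2018:14:00:07; splitting that field on ":" puts the hour at index 1 and the minute at index 2.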
【/usr/local/python/source/http_minute_conn.py】
# -*- coding:UTF-8 -*-
'''
Created on Jan 14, 2018
@author: liuyazhuang
'''
from mrjob.job import MRJob

class MRCounter(MRJob):
    def mapper(self, key, line):
        i = 0
        for dt in line.split():
            # The timestamp is the 4th whitespace-separated field of the
            # log line, e.g. [14/Jan/2018:08:47:24
            if i == 3:
                timeraw = dt.split(":")
                # Use hour:minute as the key
                hm = timeraw[1] + ":" + timeraw[2]
                # Emit (key, 1) so the reducer can simply sum the counts
                yield hm, 1
            i += 1

    def reducer(self, key, occurrences):
        yield key, sum(occurrences)

if __name__ == '__main__':
    MRCounter.run()
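Before submitting to the cluster, the script can be sanity-checked with mrjob's default inline runner, assuming a local copy of the log file is available (the file name access.log here is just an example):

python http_minute_conn.py access.log

This runs the mapper and reducer in a single local process and prints the per-minute counts to stdout, which makes it easy to catch parsing bugs before launching a Hadoop job.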
3. Running the MapReduce Job
python http_minute_conn.py -r hadoop --jobconf mapreduce.job.priority=VERY_HIGH --jobconf mapreduce.map.tasks=2 --jobconf mapreduce.reduce.tasks=1 -o hdfs://liuyazhuang121:9000/output/http_minute_conn hdfs://liuyazhuang121:9000/user/root/website.com/20180114
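Note that in Hadoop 2 (YARN/MRv2) the canonical property names for task counts are mapreduce.job.maps and mapreduce.job.reduces, and mapreduce.map.tasks may be silently ignored. A variant of the command using those names would presumably be:

python http_minute_conn.py -r hadoop --jobconf mapreduce.job.priority=VERY_HIGH --jobconf mapreduce.job.maps=2 --jobconf mapreduce.job.reduces=1 -o hdfs://liuyazhuang121:9000/output/http_minute_conn hdfs://liuyazhuang121:9000/user/root/website.com/20180114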
4. Verifying the Results
Now, list the generated result files with the following command:
hadoop fs -ls /output/http_minute_conn
The output is as follows:
[root@liuyazhuang121 source]# hadoop fs -ls /output/http_minute_conn
Found 2 items
-rw-r--r-- 1 root supergroup 0 2018-01-14 16:50 /output/http_minute_conn/_SUCCESS
-rw-r--r-- 1 root supergroup 7522 2018-01-14 16:50 /output/http_minute_conn/part-00000
Then, view the contents of the result file with:
hadoop fs -cat /output/http_minute_conn/part-00000
The per-minute request counts are as follows:
"14:00" 7
"14:01" 7
"14:02" 10
"14:03" 10
"14:04" 4
"14:05" 25
"14:06" 14
"14:07" 3
"14:08" 3
"14:09" 4
"14:10" 3
"14:11" 3
"14:12" 13
"14:13" 24
"14:14" 13
"14:15" 3
"14:16" 5
"14:17" 3
"14:18" 5
"14:19" 8
"14:20" 2
"14:21" 1
"14:22" 1
"14:23" 2
"14:24" 4
"14:25" 2
"14:26" 2
"14:27" 5
"14:28" 5
"14:29" 3
"14:30" 2
"14:31" 2
"14:32" 2
"14:33" 2
"14:34" 4
"14:35" 2
"14:36" 2
"14:37" 24
"14:38" 23
"14:39" 24
"14:40" 3
"14:41" 23
"14:42" 3
"14:43" 37
"14:44" 4
"14:45" 3
"14:46" 3
"14:47" 3
"14:48" 43
"14:49" 19
"14:50" 3
"14:51" 32
"14:52" 4
"14:53" 24
"14:54" 3
"14:55" 24