
Hadoop Streaming Job failed error in Python


This article covers how to handle a Hadoop Streaming job failure error in Python; the walkthrough below should be a useful reference if you are hitting the same problem.

Problem description


From this guide, I have successfully run the sample exercise. But when I run my own MapReduce job, I get the following error:
                  ERROR streaming.StreamJob: Job not Successful!
                  10/12/16 17:13:38 INFO streaming.StreamJob: killJob...
                  Streaming Job Failed!

Error from the log file:

                  java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
                  at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311)
                  at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
                  at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
                  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
                  at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
                  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
                  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
                  at org.apache.hadoop.mapred.Child.main(Child.java:170)
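
The "subprocess failed with code 2" line means the streaming child process, i.e. the Python script itself, exited abnormally or could not be launched; the Java stack trace is only Hadoop reporting that. A quick way to surface the underlying Python error before resubmitting is to run the same pipeline locally, where any traceback prints straight to the terminal. A minimal check, assuming a small sample input file named sample.txt (hypothetical):

hadoop@ubuntu:/usr/local/hadoop$ cat sample.txt | python /home/hadoop/mapper.py | sort | python /home/hadoop/reducer.py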
                  

mapper.py

import sys

# Line counter, used as an id for each input line (document/tweet id).
i = 0

for line in sys.stdin:
    i += 1
    # Count occurrences of each word within this line.
    count = {}
    for word in line.strip().split():
        count[word] = count.get(word, 0) + 1
    # Emit "word<TAB>line_id:weight" so Hadoop Streaming can split
    # the key (the word) from the value at the tab.
    for word, weight in count.items():
        print '%s\t%s:%s' % (word, str(i), str(weight))
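
As an illustration (not from the original post), if the first input line is "to be or not to be", the mapper emits one tab-separated record per distinct word, in no particular order:

to	1:2
be	1:2
or	1:1
not	1:1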
                  

reducer.py

import sys

keymap = {}          # (unused in this script)
o_tweet = "2323"     # previous key seen; initialised to a sentinel value
id_list = []         # (line_id, weight) pairs already seen for the current key

for line in sys.stdin:
    # Each mapper record is "word<TAB>line_id:weight".
    tweet, tw = line.strip().split()
    tweet_id, w = tw.split(':')
    w = int(w)
    if tweet == o_tweet:
        # Same key as the previous record: pair this line with every
        # earlier line containing the word, summing the weights.
        for i, wt in id_list:
            print '%s:%s\t%s' % (tweet_id, i, str(w + wt))
        id_list.append((tweet_id, w))
    else:
        # New key: reset the state.
        id_list = [(tweet_id, w)]
        o_tweet = tweet
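
To make the reducer's logic concrete (again an illustration, not from the original post): Hadoop sorts the mapper output by key before the reduce phase, so all records for the same word arrive together. Given the sorted records below, the first resets the state and the second emits a pair, meaning lines 1 and 2 share the word "be" with a combined weight of 3:

be	1:2    ->  (state reset; nothing emitted)
be	2:1    ->  2:1	3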
                  

[edit] Command to run the job:

                  hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper /home/hadoop/mapper.py -file /home/hadoop/reducer.py -reducer /home/hadoop/reducer.py -input my-input/* -output my-output
                  

Input is any random sequence of sentences.

Thanks,

Recommended answer

Your -mapper and -reducer should just be the script name.

                  hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper mapper.py -file /home/hadoop/reducer.py -reducer reducer.py -input my-input/* -output my-output
                  

When you ship your scripts with -file, they are placed in the job's working directory on HDFS, and each attempt task executes with that directory as its current directory ("."), so the bare script name is what the task can actually find. (FYI: if you ever want to add another -file, such as a lookup table, you can open it in Python as if it were in the same directory as your scripts while your script runs in the M/R job.)
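That FYI in practice (a hypothetical sketch, assuming a table shipped with -file /home/hadoop/lookup.txt; the file name and format are illustrative):

# Inside mapper.py or reducer.py: files shipped with -file land in the
# task's working directory, so a bare relative name is enough.
lookup = {}
for l in open('lookup.txt'):      # 'lookup.txt' is hypothetical
    k, v = l.strip().split('\t')
    lookup[k] = v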

Also make sure you have run chmod a+x mapper.py and chmod a+x reducer.py.
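One related pitfall (an addition here, not part of the original answer): since the task launches mapper.py and reducer.py directly once -mapper and -reducer are bare script names, each script also needs an interpreter line as its very first line, e.g.:

#!/usr/bin/env python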

That concludes this article on the Hadoop Streaming job failure error in Python; hopefully the recommended answer above helps you solve the problem.


