Hadoop Streaming Job failed error in Python

This article looks at how to deal with the Hadoop Streaming Job failed error in Python; hopefully it is a useful reference if you are running into the same problem.

Problem description


                  From this guide, I have successfully run the sample exercise. But on running my mapreduce job, I am getting the following error
                  ERROR streaming.StreamJob: Job not Successful!
                  10/12/16 17:13:38 INFO streaming.StreamJob: killJob...
                  Streaming Job Failed!

                  Error from the log file

                  java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
                  at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311)
                  at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
                  at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
                  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
                  at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
                  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
                  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
                  at org.apache.hadoop.mapred.Child.main(Child.java:170)
                  

mapper.py

import sys

i=0

for line in sys.stdin:
    i+=1
    count={}
    for word in line.strip().split():
        count[word]=count.get(word,0)+1
    for word,weight in count.items():
        print '%s\t%s:%s' % (word,str(i),str(weight))
                  
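For illustration (this worked example is mine, not part of the original question), if the first input line were "to be or not to be", the mapper would emit one record per distinct word on that line, with the word as the key, a tab separating key from value (which is how Hadoop Streaming splits the two), and line-number:count as the value; ordering may vary because a Python 2 dict is unordered:

to	1:2
be	1:2
or	1:1
not	1:1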

                  reducer.py


import sys

keymap={}
o_tweet="2323"
id_list=[]
for line in sys.stdin:
    tweet,tw=line.strip().split()
    #print tweet,o_tweet,tweet_id,id_list
    tweet_id,w=tw.split(':')
    w=int(w)
    if tweet.__eq__(o_tweet):
        for i,wt in id_list:
            print '%s:%s\t%s' % (tweet_id,i,str(w+wt))
        id_list.append((tweet_id,w))
    else:
        id_list=[(tweet_id,w)]
        o_tweet=tweet
                  


                  [edit] command to run the job:

                  hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper /home/hadoop/mapper.py -file /home/hadoop/reducer.py -reducer /home/hadoop/reducer.py -input my-input/* -output my-output
                  


                  Input is any random sequence of sentences.

Thanks,

Recommended answer


                  Your -mapper and -reducer should just be the script name.

                  hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper mapper.py -file /home/hadoop/reducer.py -reducer reducer.py -input my-input/* -output my-output
                  

The scripts are shipped with the job into a folder on HDFS, and the attempt task executes with that folder as its working directory ".", so the bare script names resolve correctly. (FYI, if you ever want to add another -file, such as a lookup table, you can open it in Python as if it were in the same directory as your scripts while your script is running in the M/R job; a minimal sketch follows.)
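For illustration only (the extra file lookup.txt, its tab-separated contents, and this alternative mapper are hypothetical, not part of the original question): if the job were also submitted with -file /home/hadoop/lookup.txt, the mapper could open the file by its bare name, because every -file argument ends up in the task's working directory. The sketch is written in the same Python 2 style as the question's scripts.

import sys

# lookup.txt is assumed to have been shipped with "-file /home/hadoop/lookup.txt";
# Hadoop Streaming places shipped files in the task's working directory,
# so a plain relative open() works here.
lookup = {}
for entry in open('lookup.txt'):
    key, value = entry.strip().split('\t', 1)
    lookup[key] = value

for line in sys.stdin:
    for word in line.strip().split():
        # emit word<TAB>looked-up value, falling back to a default for unknown words
        print '%s\t%s' % (word, lookup.get(word, 'UNKNOWN'))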

Also make sure you have chmod a+x mapper.py and chmod a+x reducer.py (a quick local check is sketched below).
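As a quick way to catch this kind of failure before submitting to the cluster (a hedged suggestion; sample.txt stands for any small local input file), mark the scripts executable and run the whole pipeline through a shell pipe, which surfaces the same script errors locally that Hadoop reports as "subprocess failed with code 2":

chmod a+x /home/hadoop/mapper.py /home/hadoop/reducer.py
cat sample.txt | python /home/hadoop/mapper.py | sort | python /home/hadoop/reducer.py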

That wraps up this look at the Hadoop Streaming Job failed error in Python; hopefully the answer above helps.


