Question
I'm trying to learn to use Yelp's Python API for MapReduce, MRJob. Their simple word counter example makes sense, but I'm curious how one would handle an application involving multiple inputs. For instance, rather than simply counting the words in a document, multiplying a vector by a matrix. I came up with this solution, which functions, but feels silly:
from mrjob.job import MRJob

class MatrixVectMultiplyTask(MRJob):

    def multiply(self, key, line):
        # Each input line is one matrix column followed by the
        # corresponding vector component.
        line = map(float, line.split(" "))
        v, col = line[-1], line[:-1]
        for i in xrange(len(col)):
            yield i, col[i] * v

    def sum(self, i, occurrences):
        yield i, sum(occurrences)

    def steps(self):
        return [self.mr(self.multiply, self.sum)]

if __name__ == "__main__":
    MatrixVectMultiplyTask.run()
This code is run as ./matrix.py < input.txt, and it works because the matrix is stored in input.txt by columns, with the corresponding vector value at the end of each line.
So, the following matrix and vector:

are represented in input.txt as:
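The concrete matrix and input file from the original post are not reproduced here, but as a hypothetical illustration: for A = [[1, 2], [3, 4]] and x = [5, 6], input.txt would hold one column of A per line with the matching vector component appended ("1 3 5" and "2 4 6"). The mapper/reducer logic above can be checked locally without Hadoop; this Python 3 sketch (the helper names are mine, not MRJob's) mimics the map, shuffle, and sum steps:

```python
from collections import defaultdict

# Hypothetical 2x2 example (not from the original post):
# A = [[1, 2], [3, 4]], x = [5, 6], stored column-wise with
# the vector component at the end of each line.
input_txt = """\
1 3 5
2 4 6
"""

def multiply(line):
    """Mapper logic: emit (row_index, col[i] * v) for one input line."""
    nums = [float(t) for t in line.split(" ")]
    v, col = nums[-1], nums[:-1]
    for i, a in enumerate(col):
        yield i, a * v

# Simulate the shuffle: group the emitted products by row index.
partials = defaultdict(list)
for line in input_txt.strip().split("\n"):
    for i, product in multiply(line):
        partials[i].append(product)

# Reducer logic: sum the partial products for each row.
result = {i: sum(vals) for i, vals in partials.items()}
print(result)  # A . x = [17, 39], i.e. {0: 17.0, 1: 39.0}
```

This matches the direct computation A · x = [1·5 + 2·6, 3·5 + 4·6] = [17, 39].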
In short, how would I go about storing the matrix and vector more naturally in separate files and passing them both into MRJob?
Answer
If you need to process your raw data against another data set (or against the same one, e.g. row_i against row_j), you can either:
1) Create an S3 bucket to store a copy of your data. Pass the location of this copy to your task class, e.g. self.options.bucket and self.options.my_datafile_copy_location in the code below. Caveat: unfortunately, it seems that the whole file must get "downloaded" to the task machines before being processed. If the connection falters or the load takes too long, this job may fail. Here is some Python/MRJob code that does this.
Put this in your mapper function:
d1 = line1.split(' ', 1)
v1, col1 = d1[0], d1[1]

conn = boto.connect_s3(aws_access_key_id=<AWS_ACCESS_KEY_ID>, aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>)
bucket = conn.get_bucket(self.options.bucket)  # or: conn.get_bucket(MY_UNIQUE_BUCKET_NAME_AS_STRING)
### CAVEAT: needs to fetch the whole file before processing the rest.
data_copy = bucket.get_key(self.options.my_datafile_copy_location).get_contents_as_string().rstrip()

for line2 in data_copy.split('\n'):
    d2 = line2.split(' ', 1)
    v2, col2 = d2[0], d2[1]
    ## Now, insert code to do any operations between v1 and v2 (or col1 and col2) here:
    yield <your output key, value pairs>

conn.close()
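The snippet above reads self.options.bucket and self.options.my_datafile_copy_location, but the answer never shows where those options come from. In mrjob versions of that era they would typically be declared as passthrough options; a sketch of that wiring (the option names here are assumptions carried over from the code above, and newer mrjob releases rename these hooks to configure_args/add_passthru_arg):

```python
from mrjob.job import MRJob

class MatrixVectMultiplyTask(MRJob):

    def configure_options(self):
        # Declare the extra options the mapper reads via self.options.
        super(MatrixVectMultiplyTask, self).configure_options()
        self.add_passthrough_option(
            '--bucket',
            help='Name of the S3 bucket holding the data copy')
        self.add_passthrough_option(
            '--my-datafile-copy-location',
            dest='my_datafile_copy_location',
            help='S3 key of the copied data file')
```

The job would then be invoked with something like ./matrix.py --bucket my-bucket --my-datafile-copy-location path/to/copy.txt < input.txt.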
2) Create a SimpleDB domain, and store all of your data in there. Read here on boto and SimpleDB: http://code.google.com/p/boto/wiki/SimpleDbIntro
Your mapper code would look like this:
dline = dline.strip()
d0 = dline.split(' ', 1)
v1, c1 = d0[0], d0[1]

sdb = boto.connect_sdb(aws_access_key_id=<AWS_ACCESS_KEY>, aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>)
domain = sdb.get_domain(MY_DOMAIN_STRING_NAME)

for item in domain:
    v2, c2 = item.name, item['column']
    ## Now, insert code to do any operations between v1 and v2 (or c1 and c2) here:
    yield <your output key, value pairs>

sdb.close()
This second option may perform better if you have very large amounts of data, since it can make a request for each row of data rather than fetching the whole data set at once. Keep in mind that SimpleDB values are limited to 1024 characters, so if your data values are longer than that you may need to compress/decompress them by some method.
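The answer leaves the compression method open; one simple possibility (my suggestion, not part of the original answer) is zlib plus base64, which keeps the stored value as plain text so it can go straight into a SimpleDB attribute. A Python 3 sketch:

```python
import base64
import zlib

def pack(value):
    """Compress a long row so it can fit in a 1024-character SimpleDB value."""
    return base64.b64encode(zlib.compress(value.encode("utf-8"))).decode("ascii")

def unpack(stored):
    """Inverse of pack(): recover the original row string."""
    return zlib.decompress(base64.b64decode(stored)).decode("utf-8")

row = "0.5 " * 400        # 1600 characters of (repetitive) column data
stored = pack(row)
assert unpack(stored) == row
# For repetitive numeric data like this, the packed form ends up
# far below the 1024-character limit.
print(len(row), len(stored))
```

Note that base64 inflates the compressed bytes by about a third, so incompressible rows near the limit may still not fit; in that case the row would have to be split across multiple attributes.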