Question
Why does the code below work only with multiprocessing.dummy, but not with plain multiprocessing?
import urllib.request
#from multiprocessing.dummy import Pool  # this works
from multiprocessing import Pool

urls = ['http://www.python.org', 'http://www.yahoo.com',
        'http://www.scala.org', 'http://www.google.com']

if __name__ == '__main__':
    with Pool(5) as p:
        results = p.map(urllib.request.urlopen, urls)
Error:
Traceback (most recent call last):
  File "urlthreads.py", line 31, in <module>
    results = p.map(urllib.request.urlopen, urls)
  File "C:\Users\patri\Anaconda3\lib\multiprocessing\pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\patri\Anaconda3\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '[<http.client.HTTPResponse object at 0x0000016AEF204198>]'. Reason: 'TypeError("cannot serialize '_io.BufferedReader' object")'
What's missing so that it works without "dummy"?
Answer
The http.client.HTTPResponse object you get back from urlopen() has an _io.BufferedReader object attached, and this object cannot be pickled.
pickle.dumps(urllib.request.urlopen('http://www.python.org').fp)
Traceback (most recent call last):
  ...
TypeError: cannot serialize '_io.BufferedReader' object
multiprocessing.Pool needs to pickle (serialize) the results to send them back to the parent process, and that is what fails here. Since dummy uses threads instead of processes, no pickling takes place, because threads in the same process naturally share their memory.
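Because the thread-based pool never serializes anything, even an unpicklable result passes through unchanged. A quick sketch of this (using an open file as a stand-in for any object that wraps an _io.BufferedReader):

```python
from multiprocessing.dummy import Pool  # thread-based drop-in for multiprocessing.Pool

def open_unpicklable(path):
    # open(..., 'rb') returns an _io.BufferedReader, which pickle rejects
    return open(path, 'rb')

with Pool(2) as p:
    # Threads share memory, so results are handed back directly -- no pickling
    results = p.map(open_unpicklable, [__file__, __file__])

print(all(r.readable() for r in results))  # -> True
for r in results:
    r.close()
```

With `from multiprocessing import Pool` instead, the same map would fail with the MaybeEncodingError shown above.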
A general solution to this TypeError is:
- read out the buffer and save its content (if needed)
- remove the reference to '_io.BufferedReader' from the object you are trying to pickle
In your case, calling .read() on the http.client.HTTPResponse will empty and remove the buffer, so a function for converting the response into something picklable could simply do this:
def read_buffer(response):
    response.text = response.read()
    return response
Example:
r = urllib.request.urlopen('http://www.python.org')
r = read_buffer(r)
pickle.dumps(r)
# Out: b'\x80\x03chttp.client\nHTTPResponse...
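The effect can also be checked without a network connection, using a small stand-in class (hypothetical, for illustration only) that carries an _io.BufferedReader the way HTTPResponse does:

```python
import io
import pickle

class FakeResponse:
    """Hypothetical stand-in for http.client.HTTPResponse."""
    def __init__(self, payload):
        self.fp = io.BufferedReader(io.BytesIO(payload))  # the unpicklable part

    def read(self):
        data = self.fp.read()
        self.fp = None  # drop the buffer once it is exhausted
        return data

def read_buffer(response):
    response.text = response.read()
    return response

r = FakeResponse(b'<html>...</html>')
try:
    pickle.dumps(r)        # fails: the buffer is still attached
except TypeError as e:
    print(e)

r = read_buffer(r)         # empties and removes the buffer
restored = pickle.loads(pickle.dumps(r))
print(restored.text)       # -> b'<html>...</html>'
```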
Before you consider this approach, make sure you really want to use multiprocessing instead of multithreading. For I/O-bound tasks like the one you have here, multithreading is sufficient, since most of the time is spent waiting for the response anyway (no CPU time needed). Multiprocessing and the IPC involved also introduce substantial overhead.
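For comparison, a thread-based sketch with concurrent.futures; the sleep is a hypothetical stand-in for waiting on the network, so the timing can be shown offline:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_fetch(url):
    time.sleep(0.2)              # simulate network latency
    return 'body of ' + url

urls = ['http://www.python.org', 'http://www.yahoo.com',
        'http://www.scala.org', 'http://www.google.com']

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as ex:
    results = list(ex.map(fake_fetch, urls))
elapsed = time.perf_counter() - start

# All four waits overlap, so total time is ~0.2 s instead of ~0.8 s,
# and no pickling of the results is involved.
print(len(results), 'responses in', round(elapsed, 2), 's')
```

In the real program, `fake_fetch` would call urllib.request.urlopen and read the body.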