Efficient file reading in Python with a need to split on '\n'

This article explains how to handle efficient file reading in Python when the input has to be split on '\n'; the discussion below should be a useful reference for anyone facing the same problem.

Problem description

                  I've traditionally been reading in files with:

file = open(fullpath, "r")
allrecords = file.read()
delimited = allrecords.split('\n')
for record in delimited[1:]:
    record_split = record.split(',')
                  

with open(os.path.join(txtdatapath, pathfilename), "r") as data:
  datalines = (line.rstrip('\n') for line in data)
  for record in datalines:
    split_line = record.split(',')
    if len(split_line) > 1:
                  

But it seems that when I process these files in a multiprocessing thread I get a MemoryError. How can I best read in files line by line, when the text file I'm reading needs to be split on '\n'?
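
One memory-friendly pattern (a sketch of the general idea, using the variable names from the snippet above; the helper name iter_records is hypothetical) is to iterate over the open file object itself, which already yields the text split on '\n' one line at a time, so the whole file never has to be held in memory:

import os

def iter_records(txtdatapath, pathfilename):
    # Iterating the file object yields one line at a time (i.e. the text is
    # effectively split on '\n' lazily), so memory use stays roughly constant.
    with open(os.path.join(txtdatapath, pathfilename), "r") as data:
        for line in data:
            split_line = line.rstrip('\n').split(',')
            if len(split_line) > 1:
                yield split_line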

Here is the multiprocessing code:

                  pool = Pool()
                  fixed_args = (targetdirectorytxt, value_dict)
                  varg = ((filename,) + fixed_args for filename in readinfiles)
                  op_list = pool.map_async(PPD_star, list(varg), chunksize=1)     
                  while not op_list.ready():
                    print("Number of files left to process: {}".format(op_list._number_left))
                    time.sleep(60)
                  op_list = op_list.get()
                  pool.close()
                  pool.join()
                  

Here is the error log:

Exception in thread Thread-3:
Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "C:\Python27\lib\multiprocessing\pool.py", line 380, in _handle_results
    task = get()
MemoryError
                  

                  I'm trying to install pathos as Mike has kindly suggested but I'm running into issues. Here is my install command:

                  pip install https://github.com/uqfoundation/pathos/zipball/master --allow-external pathos --pre
                  

                  But here are the error messages that I get:

Downloading/unpacking https://github.com/uqfoundation/pathos/zipball/master
  Running setup.py (path:c:\users\xxx\appdata\local\temp\2\pip-1e4saj-build\setup.py) egg_info for package from https://github.com/uqfoundation/pathos/zipball/master

Downloading/unpacking ppft>=1.6.4.5 (from pathos==0.2a1.dev0)
  Running setup.py (path:c:\users\xxx\appdata\local\temp\2\pip_build_jptyuser\ppft\setup.py) egg_info for package ppft

    warning: no files found matching 'python-restlib.spec'
Requirement already satisfied (use --upgrade to upgrade): dill>=0.2.2 in c:\python27\lib\site-packages\dill-0.2.2-py2.7.egg (from pathos==0.2a1.dev0)
Requirement already satisfied (use --upgrade to upgrade): pox>=0.2.1 in c:\python27\lib\site-packages\pox-0.2.1-py2.7.egg (from pathos==0.2a1.dev0)
Downloading/unpacking pyre==0.8.2.0-pathos (from pathos==0.2a1.dev0)
  Could not find any downloads that satisfy the requirement pyre==0.8.2.0-pathos (from pathos==0.2a1.dev0)
  Some externally hosted files were ignored (use --allow-external pyre to allow).
Cleaning up...
No distributions at all found for pyre==0.8.2.0-pathos (from pathos==0.2a1.dev0)

Storing debug log for failure in C:\Users\xxx\pip\pip.log
                  

                  I'm installing on Windows 7 64 bit. In the end I managed to install with easy_install.

But now I have a failure because I cannot open that many files:

Finished reading in Exposures...
Reading Samples from:  C:\XXXXXXXXX
Traceback (most recent call last):
  File "events.py", line 568, in <module>
    mdrcv_dict = ReadDamages(damage_dir, value_dict)
  File "events.py", line 185, in ReadDamages
    res = thpool.amap(mppool.map, [rstrip]*len(readinfiles), files)
  File "C:\Python27\lib\site-packages\pathos-0.2a1.dev0-py2.7.egg\pathos\multiprocessing.py", line 230, in amap
    return _pool.map_async(star(f), zip(*args)) # chunksize
  File "events.py", line 184, in <genexpr>
    files = (open(name, 'r') for name in readinfiles[0:])
IOError: [Errno 24] Too many open files: 'C:\xx.csv'
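
The generator in events.py line 184 creates an open handle for every file before any of them is processed, which is what exhausts the per-process file limit. One possible workaround (my own sketch, not from the original question or answer; PP_from_filename is a hypothetical name) is to pass only filenames to the pool and let each worker open and close its own file:

import os
from multiprocessing import Pool

def PP_from_filename(args_flat):
    # Hypothetical variant of PP_star: the worker receives a filename,
    # opens the file itself, and the handle is closed when the with-block ends.
    pathfilename, txtdatapath, my_dict = args_flat
    com_dict = {}
    with open(os.path.join(txtdatapath, pathfilename), "r") as data:
        for line in data:
            fields = line.rstrip('\n').split(',')
            # ... build com_dict from fields and my_dict here ...
    return com_dict

# Usage sketch: only filenames travel to the pool, never open handles.
# pool = Pool()
# varg = ((filename, targetdirectorytxt, value_dict) for filename in readinfiles)
# op_list = pool.map_async(PP_from_filename, list(varg), chunksize=1)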
                  

Currently, using the multiprocessing library, I am passing parameters and dictionaries into my function, opening a mapped file, and then outputting a dictionary. Here is an example of how I currently do it; what would be the smart way to do this with pathos?

                  def PP_star(args_flat):
                      return PP(*args_flat)
                  
                  def PP(pathfilename, txtdatapath, my_dict):
                      return com_dict
                  
                  fixed_args = (targetdirectorytxt, my_dict)
                  varg = ((filename,) + fixed_args for filename in readinfiles)
                  op_list = pool.map_async(PP_star, list(varg), chunksize=1)
                  

How do I use pathos.multiprocessing?

Recommended answer

Let's say we have file1.txt:

                  hello35
                  1234123
                  1234123
                  hello32
                  2492wow
                  1234125
                  1251234
                  1234123
                  1234123
                  2342bye
                  1234125
                  1251234
                  1234123
                  1234123
                  1234125
                  1251234
                  1234123
                  

                  file2.txt:

                  1234125
                  1251234
                  1234123
                  hello35
                  2492wow
                  1234125
                  1251234
                  1234123
                  1234123
                  hello32
                  1234125
                  1251234
                  1234123
                  1234123
                  1234123
                  1234123
                  2342bye
                  

and so on, through file5.txt:

                  1234123
                  1234123
                  1234125
                  1251234
                  1234123
                  1234123
                  1234123
                  1234125
                  1251234
                  1234125
                  1251234
                  1234123
                  1234123
                  hello35
                  hello32
                  2492wow
                  2342bye
                  

I'd suggest using a hierarchical parallel map to read your files quickly. A fork of multiprocessing (called pathos.multiprocessing) can do this.

                  >>> import pathos
                  >>> thpool = pathos.multiprocessing.ThreadingPool()
                  >>> mppool = pathos.multiprocessing.ProcessingPool()
                  >>> 
                  >>> def rstrip(line):
                  ...     return line.rstrip()
                  ... 
                  # get your list of files
                  >>> fnames = ['file1.txt', 'file2.txt', 'file3.txt', 'file4.txt', 'file5.txt']
                  >>> # open the files
                  >>> files = (open(name, 'r') for name in fnames)
                  >>> # read each file in asynchronous parallel
                  >>> # while reading and stripping each line in parallel
                  >>> res = thpool.amap(mppool.map, [rstrip]*len(fnames), files)
                  >>> # get the result when it's done
                  >>> res.ready()
                  True
                  >>> data = res.get()
                  >>> # if not using a files iterator -- close each file by uncommenting the next line
                  >>> # files = [file.close() for file in files]
                  >>> data[0]
                  ['hello35', '1234123', '1234123', 'hello32', '2492wow', '1234125', '1251234', '1234123', '1234123', '2342bye', '1234125', '1251234', '1234123', '1234123', '1234125', '1251234', '1234123']
                  >>> data[1]
                  ['1234125', '1251234', '1234123', 'hello35', '2492wow', '1234125', '1251234', '1234123', '1234123', 'hello32', '1234125', '1251234', '1234123', '1234123', '1234123', '1234123', '2342bye']
                  >>> data[-1]
                  ['1234123', '1234123', '1234125', '1251234', '1234123', '1234123', '1234123', '1234125', '1251234', '1234125', '1251234', '1234123', '1234123', 'hello35', 'hello32', '2492wow', '2342bye']
                  

                  However, if you want to check how many files you have left to finish, you might want to use an "iterated" map (imap) instead of an "asynchronous" map (amap). See this post for details: Python multiprocessing - tracking the process of pool.map operation
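
For example, a minimal sketch reusing the pools and the rstrip helper from the session above (the progress loop itself is my own addition, not taken from the linked post):

>>> files = (open(name, 'r') for name in fnames)
>>> data = []
>>> # imap yields each file's lines as soon as that file finishes,
>>> # so progress can be reported while the remaining files are still being read
>>> for i, lines in enumerate(thpool.imap(mppool.map, [rstrip]*len(fnames), files), 1):
...     print("Finished {} of {} files".format(i, len(fnames)))
...     data.append(lines)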

Get pathos here: https://github.com/uqfoundation

This concludes the article on efficient file reading in Python when the input needs to be split on '\n'. We hope the recommended answer above is helpful.
