How to get all pool.apply_async processes to stop once any one process has found a match in Python

How to get all pool.apply_async processes to stop once any one process has found a match in Python
This article describes how to get all pool.apply_async processes to stop once any one process has found a match in Python. It should be a useful reference for anyone solving the same problem.

Problem description


I have the following code that leverages multiprocessing to iterate through a large list and find a match. How can I get all processes to stop once a match is found in any one process? I have seen examples, but none of them seem to fit what I am doing here.

                  #!/usr/bin/env python3.5
                  import sys, itertools, multiprocessing, functools
                  
                  alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ12234567890!@#$%^&*?,()-=+[]/;"
                  num_parts = 4
                  part_size = len(alphabet) // num_parts
                  
                  def do_job(first_bits):
                      for x in itertools.product(first_bits, *itertools.repeat(alphabet, num_parts-1)):
                          # CHECK FOR MATCH HERE
                          print(''.join(x))
                          # EXIT ALL PROCESSES IF MATCH FOUND
                  
                  if __name__ == '__main__':
                      pool = multiprocessing.Pool(processes=4)
                      results = []
                  
                      for i in range(num_parts):
                          if i == num_parts - 1:
                              first_bit = alphabet[part_size * i :]
                          else:
                              first_bit = alphabet[part_size * i : part_size * (i+1)]
                          pool.apply_async(do_job, (first_bit,))
                  
                      pool.close()
                      pool.join()
                  

Thank you for your time.

Update 1:

I have implemented the changes suggested in the great approach by @ShadowRanger, and it is nearly working the way I want it to. I have added some logging to give an indication of progress, and put a 'test' key in there to match. I want to be able to increase/decrease iNumberOfProcessors independently of num_parts. At this stage, when I have them both at 4, everything works as expected: 4 processes spin up (plus one extra for the console). When I change iNumberOfProcessors to 6, 6 processes spin up, but only four of them have any CPU usage, so it appears 2 are idle. With my previous solution above, I was able to set the number of cores higher without increasing num_parts, and all of the processes would get used.

I am not sure how to refactor this new approach to give me the same functionality. Can you have a look and give me some direction on the refactoring needed to be able to set iNumberOfProcessors and num_parts independently of each other and still have all processes used?

Here is the updated code:

                  #!/usr/bin/env python3.5
                  import sys, itertools, multiprocessing, functools
                  
                  alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ12234567890!@#$%^&*?,()-=+[]/;"
                  num_parts = 4
                  part_size = len(alphabet) // num_parts
                  iProgressInterval = 10000
                  iNumberOfProcessors = 6
                  
                  def do_job(first_bits):
                      iAttemptNumber = 0
                      iLastProgressUpdate = 0
                      for x in itertools.product(first_bits, *itertools.repeat(alphabet, num_parts-1)):
                          sKey = ''.join(x)
                          iAttemptNumber = iAttemptNumber + 1
                          if iLastProgressUpdate + iProgressInterval <= iAttemptNumber:
                              iLastProgressUpdate = iLastProgressUpdate + iProgressInterval
                              print("Attempt#:", iAttemptNumber, "Key:", sKey)
                          if sKey == 'test':
                              print("KEY FOUND!! Attempt#:", iAttemptNumber, "Key:", sKey)
                              return True
                  
                  def get_part(i):
                      if i == num_parts - 1:
                          first_bit = alphabet[part_size * i :]
                      else:
                          first_bit = alphabet[part_size * i : part_size * (i+1)]
                      return first_bit
                  
                   if __name__ == '__main__':
                       # The with statement (Py3) terminates the Pool when the block exits
                       with multiprocessing.Pool(processes=iNumberOfProcessors) as pool:
                           # No special case needed for the final block; slices clamp to the end of the sequence
                           for gotmatch in pool.imap_unordered(do_job, map(get_part, range(num_parts))):
                               if gotmatch:
                                   break
                           else:
                               print("No matches found")
                  
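To make the worker count independent of the chunk count with this imap_unordered approach, one option is simply to split the keyspace into more chunks than there are workers; imap_unordered hands each chunk to the next free process, so every process stays busy. Below is a minimal sketch of that idea — the tiny alphabet, key length, and 'fad' target are made-up toy values so the demo finishes quickly, not the poster's actual parameters:

```python
import itertools
import multiprocessing

# Toy parameters for illustration only.
alphabet = "abcdefghij"   # tiny alphabet so the demo finishes quickly
num_parts = 8             # more chunks than workers
key_length = 3
target = "fad"            # hypothetical key to find

def get_part(i):
    # Slice the alphabet into num_parts chunks; the last chunk takes the remainder.
    part_size = len(alphabet) // num_parts
    if i == num_parts - 1:
        return alphabet[part_size * i:]
    return alphabet[part_size * i : part_size * (i + 1)]

def do_job(first_bits):
    # Brute-force all keys starting with characters from this chunk.
    for x in itertools.product(first_bits, *itertools.repeat(alphabet, key_length - 1)):
        if ''.join(x) == target:
            return ''.join(x)
    return None

if __name__ == '__main__':
    # 3 workers, 8 chunks: imap_unordered keeps all 3 busy until a chunk matches.
    with multiprocessing.Pool(processes=3) as pool:
        for result in pool.imap_unordered(do_job, map(get_part, range(num_parts))):
            if result is not None:
                print("found:", result)
                break
```

Note that the remaining caveat is unchanged: a chunk that is already running is not interrupted mid-chunk; the break only stops new chunks from being consumed.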

Update 2:

Ok, here is my attempt at trying @noxdafox's suggestion. I have put together the following based on the link he provided with his suggestion. Unfortunately, when I run it I get the error:

... line 322, in apply_async raise ValueError("Pool not running") ValueError: Pool not running

Can anyone give me some direction on how to get this working?

Basically, the issue is that my first attempt did multiprocessing but did not support canceling all processes once a match was found.

My second attempt (based on @ShadowRanger's suggestion) solved that problem, but broke the ability to scale the number of processes and the num_parts size independently, which is something my first attempt could do.

My third attempt (based on @noxdafox's suggestion) throws the error outlined above.

If anyone can give me some direction on how to maintain the functionality of my first attempt (being able to scale the number of processes and the num_parts size independently), and add the ability to cancel all processes once a match is found, it would be much appreciated.

Thank you for your time.

Here is the code from my third attempt, based on @noxdafox's suggestion:

                  #!/usr/bin/env python3.5
                  import sys, itertools, multiprocessing, functools
                  
                  alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ12234567890!@#$%^&*?,()-=+[]/;"
                  num_parts = 4
                  part_size = len(alphabet) // num_parts
                  iProgressInterval = 10000
                  iNumberOfProcessors = 4
                  
                  
                  def find_match(first_bits):
                      iAttemptNumber = 0
                      iLastProgressUpdate = 0
                      for x in itertools.product(first_bits, *itertools.repeat(alphabet, num_parts-1)):
                          sKey = ''.join(x)
                          iAttemptNumber = iAttemptNumber + 1
                          if iLastProgressUpdate + iProgressInterval <= iAttemptNumber:
                              iLastProgressUpdate = iLastProgressUpdate + iProgressInterval
                              print("Attempt#:", iAttemptNumber, "Key:", sKey)
                          if sKey == 'test':
                              print("KEY FOUND!! Attempt#:", iAttemptNumber, "Key:", sKey)
                              return True
                  
                  def get_part(i):
                      if i == num_parts - 1:
                          first_bit = alphabet[part_size * i :]
                      else:
                          first_bit = alphabet[part_size * i : part_size * (i+1)]
                      return first_bit
                  
                  def grouper(iterable, n, fillvalue=None):
                      args = [iter(iterable)] * n
                      return itertools.zip_longest(*args, fillvalue=fillvalue)
                  
                  class Worker():
                  
                      def __init__(self, workers):
                          self.workers = workers
                  
                      def callback(self, result):
                          if result:
                              self.pool.terminate()
                  
                       def do_job(self):
                           print(self.workers)
                           # keep a reference so the callback can terminate the pool
                           self.pool = multiprocessing.Pool(processes=self.workers)
                           for part in grouper(alphabet, part_size):
                               self.pool.apply_async(find_match, (part,), callback=self.callback)
                           self.pool.close()
                           self.pool.join()
                           print("All Jobs Queued")
                  
                  if __name__ == '__main__':
                      w = Worker(4)
                      w.do_job()
                  

Recommended answer

You can check this question to see an implementation example solving your problem.

This also works with a concurrent.futures pool.

Just replace the map method with apply_async and iterate over your list from the caller.

Something like this:

                  for part in grouper(alphabet, part_size):
                       pool.apply_async(do_job, (part,), callback=self.callback)
                  
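Putting the pieces together, a self-contained sketch of this callback-plus-terminate pattern might look like the following. The names mirror the question's code, but the tiny alphabet, hypothetical 'fad' target, and the try/except around the submission loop (to absorb the ValueError raised if the callback terminates the pool while apply_async calls are still being issued) are illustrative assumptions:

```python
import itertools
import multiprocessing

# Toy parameters for illustration only.
alphabet = "abcdefghij"
key_length = 3
target = "fad"

def find_match(first_bits):
    # grouper pads the last chunk with None; drop the padding.
    first_bits = [c for c in first_bits if c is not None]
    for x in itertools.product(first_bits, *itertools.repeat(alphabet, key_length - 1)):
        if ''.join(x) == target:
            return ''.join(x)
    return None

def grouper(iterable, n, fillvalue=None):
    # Collect data into fixed-length chunks (itertools recipe).
    args = [iter(iterable)] * n
    return itertools.zip_longest(*args, fillvalue=fillvalue)

class Worker:
    def __init__(self, workers):
        self.workers = workers
        self.match = None

    def callback(self, result):
        # Runs in the pool's result-handler thread as each task finishes.
        if result:
            self.match = result
            self.pool.terminate()  # stop every remaining worker

    def do_job(self):
        self.pool = multiprocessing.Pool(processes=self.workers)
        try:
            for part in grouper(alphabet, 3):
                self.pool.apply_async(find_match, (part,), callback=self.callback)
            self.pool.close()
        except ValueError:
            pass  # pool was already terminated by the callback mid-loop
        self.pool.join()
        return self.match

if __name__ == '__main__':
    w = Worker(4)
    print("match:", w.do_job())
```

Because the chunking (grouper over the alphabet) and the pool size are separate arguments, the number of processes and the number of parts scale independently here, which is the behavior the first attempt had.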

The grouper recipe can be found in the itertools documentation.

