Sharing many queues among processes in Python

This article describes an approach to sharing many queues among processes in Python. We hope it provides a useful reference for readers facing the same problem.

Problem Description

                  I am aware of multiprocessing.Manager() and how it can be used to create shared objects, in particular queues which can be shared between workers. There is this question, this question, this question and even one of my own questions.

However, I need to define a great many queues, each of which links a specific pair of processes. Say that each pair of processes and its linking queue is identified by the variable key.

                  I want to use a dictionary to access my queues when I need to put and get data. I cannot make this work. I've tried a number of things. With multiprocessing imported as mp:

Defining a dict like for key in all_keys: DICT[key] = mp.Queue in a config file which is imported by the multiprocessing module (call it multi.py) does not return errors, but the queue DICT[key] is not shared between the processes; each one seems to have its own copy of the queue, and thus no communication happens.
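
One way this can happen is under the spawn start method (the default on Windows and macOS): each child process re-imports the config module and therefore builds its own brand-new queues. A minimal sketch of that failure mode, with hypothetical names and the start method forced to spawn:

    import multiprocessing as mp

    # imagine this line lives in a config module that everyone imports
    QUEUE = mp.Queue()

    def worker():
        # under "spawn" this module is re-imported in the child, so this
        # QUEUE is a fresh object, not the parent's queue
        QUEUE.put("message")

    if __name__ == "__main__":
        mp.set_start_method("spawn")
        p = mp.Process(target=worker)
        p.start()
        p.join()
        print(QUEUE.empty())  # True: the parent's queue never received anything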

                  If I try to define the DICT at the beginning of the main multiprocessing function that defines the processes and starts them, like

                  DICT = mp.Manager().dict()    
                  for key in all_keys:
                      DICT[key] = mp.Queue()
                  

I get the error

                  RuntimeError: Queue objects should only be shared between processes through
                   inheritance
                  

Changing it to

                  DICT = mp.Manager().dict()    
                  for key in all_keys:
                      DICT[key] = mp.Manager().Queue()
                  

                  only makes everything worse. Trying similar definitions at the head of multi.py rather than inside the main function returns similar errors.

                  There must be a way to share many queues between processes without explicitly naming each one in the code. Any ideas?

Edit

Here is the basic architecture of the program:

                  1- load the first module, which defines some variables, imports multi, launches multi.main(), and loads another module which starts a cascade of module loads and code execution. Meanwhile...

                  2- multi.main looks like this:

def main():
    manager = mp.Manager()
    pool = mp.Pool()
    DICT2 = manager.dict()

    for key in all_keys:
        DICT2[key] = manager.Queue()
        proc_1 = pool.apply_async(targ1, (DICT1[key],))  # DICT1 is defined in the config file
        proc_2 = pool.apply_async(targ2, (DICT2[key], otherargs,))
                  

                  Rather than use pool and manager, I was also launching processes with the following:

                  mp.Process(target=targ1, args=(DICT[key],))
                  

                  3 - The function targ1 takes input data that is coming in (sorted by key) from the main process. It is meant to pass the result to DICT[key] so targ2 can do its work. This is the part that is not working. There are an arbitrary number of targ1s, targ2s, etc. and therefore an arbitrary number of queues.

                  4 - The results of some of these processes will be sent to a bunch of different arrays / pandas dataframes which are also indexed by key, and which I would like to be accessible from arbitrary processes, even ones launched in a different module. I have yet to write this part and it might be a different question. (I mention it here because the answer to 3 above might also solve 4 nicely.)

Recommended Answer

                  It sounds like your issues started when you tried to share a multiprocessing.Queue() by passing it as an argument. You can get around this by creating a managed queue instead:

                  import multiprocessing
                  manager = multiprocessing.Manager()
                  passable_queue = manager.Queue()
                  

When you use a manager to create it, you are storing and passing around a proxy to the queue, rather than the queue itself, so even when the object you pass to your worker processes is copied, it will still point at the same underlying data structure: your queue. It's very similar (in concept) to pointers in C/C++. If you create your queues this way, you will be able to pass them when you launch a worker process.
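
For instance, here is a minimal sketch (names hypothetical) of handing a managed queue to a pool worker as an ordinary argument; the same call with a plain mp.Queue() would raise the RuntimeError quoted above:

    import multiprocessing as mp

    def worker(q):
        # q is a proxy; the put is forwarded to the real queue in the manager process
        q.put("hello from the worker")

    if __name__ == "__main__":
        manager = mp.Manager()
        q = manager.Queue()  # a plain mp.Queue() here would trigger the RuntimeError
        pool = mp.Pool()
        pool.apply_async(worker, (q,))
        print(q.get())  # -> "hello from the worker"
        pool.close()
        pool.join()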

                  Since you can pass queues around now, you no longer need your dictionary to be managed. Keep a normal dictionary in main that will store all the mappings, and only give your worker processes the queues they need, so they won't need access to any mappings.

I've written an example of this here. It looks like you are passing objects between your workers, so that's what's done here. Imagine we have two stages of processing, and the data both starts and ends in the control of main. Look at how we can create the queues that connect the workers like a pipeline, but by giving them only the queues they need, there's no need for them to know about any mappings:

import multiprocessing as mp

def stage1(q_in, q_out):
    q_out.put(q_in.get() + "Stage 1 did some work.\n")
    return

def stage2(q_in, q_out):
    q_out.put(q_in.get() + "Stage 2 did some work.\n")
    return

def main():
    pool = mp.Pool()
    manager = mp.Manager()

    # create managed queues
    q_main_to_s1 = manager.Queue()
    q_s1_to_s2 = manager.Queue()
    q_s2_to_main = manager.Queue()

    # launch workers, passing them the queues they need
    results_s1 = pool.apply_async(stage1, (q_main_to_s1, q_s1_to_s2))
    results_s2 = pool.apply_async(stage2, (q_s1_to_s2, q_s2_to_main))

    # Send a message into the pipeline
    q_main_to_s1.put("Main started the job.\n")

    # Wait for work to complete
    print(q_s2_to_main.get() + "Main finished the job.")

    pool.close()
    pool.join()

    return

if __name__ == "__main__":
    main()

The code produces this output:

                  Main started the job.
                  Stage 1 did some work.
                  Stage 2 did some work.
                  Main finished the job.

                  I didn't include an example of storing the queues or AsyncResults objects in dictionaries, because I still don't quite understand how your program is supposed to work. But now that you can pass your queues freely, you can build your dictionary to store the queue/process mappings as needed.
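
As a starting point, here is a minimal sketch (all_keys and worker are hypothetical) of one shape that mapping could take: a plain dict in main holds one managed queue per key, and each worker receives only its own queue:

    import multiprocessing as mp

    def worker(key, q):
        # each worker sees only its own queue, never the mapping
        q.put("result for %s" % key)

    if __name__ == "__main__":
        manager = mp.Manager()
        pool = mp.Pool()
        all_keys = ["a", "b", "c"]  # hypothetical stand-in for your keys

        queues = {}   # an ordinary dict: it lives only in main
        results = {}  # AsyncResult objects, also indexed by key
        for key in all_keys:
            queues[key] = manager.Queue()
            results[key] = pool.apply_async(worker, (key, queues[key]))

        for key in all_keys:
            print(queues[key].get())

        pool.close()
        pool.join()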

                  In fact, if you really do build a pipeline between multiple workers, you don't even need to keep a reference to the "inter-worker" queues in main. Create the queues, pass them to your workers, then only retain references to queues that main will use. I would definitely recommend trying to let old queues be garbage collected as quickly as possible if you really do have "an arbitrary number" of queues.
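
A rough sketch of that idea (build_pipeline and stage are hypothetical names): a helper wires up a chain of workers and returns only the two endpoint queues, so nothing in main keeps the inter-worker queues alive:

    import multiprocessing as mp

    def stage(q_in, q_out):
        # a generic pass-through worker
        q_out.put(q_in.get() + " -> worked on")

    def build_pipeline(pool, manager, n_stages):
        # Chain n_stages workers together. Only the endpoint queues are
        # returned; no other reference to the inter-worker queues is kept,
        # so they can be garbage collected once the workers finish.
        q_first = manager.Queue()
        q = q_first
        for _ in range(n_stages):
            q_next = manager.Queue()
            pool.apply_async(stage, (q, q_next))
            q = q_next
        return q_first, q

    if __name__ == "__main__":
        pool = mp.Pool()
        manager = mp.Manager()
        q_in, q_out = build_pipeline(pool, manager, 3)
        q_in.put("job")
        print(q_out.get())  # "job -> worked on -> worked on -> worked on"
        pool.close()
        pool.join()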

That concludes this article on sharing many queues among processes in Python. We hope the recommended answer above is helpful, and thank you for supporting html5模板网!


Related reading

What exactly is Python multiprocessing Module's .join() Method Doing?
Passing multiple parameters to pool.map() function in Python
multiprocessing.pool.MaybeEncodingError: 'TypeError("cannot serialize '_io.BufferedReader' object",)'
Python Multiprocess Pool. How to exit the script when one of the worker process determines no more work needs to be done?
How do you pass a Queue reference to a function managed by pool.map_async()?
yet another confusion with multiprocessing error, 'module' object has no attribute 'f'