This article walks through the simplest way to read csv files with multiprocessing in Pandas. It may serve as a useful reference for anyone tackling the same problem.
Problem Description
Here is my question.

I have a bunch of .csv files (or other files). Pandas offers an easy way to read them and save them in DataFrame format. But when the number of files is huge, I want to read the files with multiprocessing to save some time.
I manually divide the files into different directories and process each one separately:
import os
import pandas as pd

os.chdir("./task_1")
files = os.listdir('.')
files.sort()
for file in files:
    filename, extname = os.path.splitext(file)
    if extname == '.csv':
        f = pd.read_csv(file)
        # .as_matrix() was removed in pandas 1.0; .to_numpy() is the modern equivalent
        df = f.VALUE.to_numpy().reshape(75, 90)
Then I combine them.

How can I run them with a pool to solve my problem?

Any advice would be appreciated!
Recommended Answer
Use a Pool:
import os
import pandas as pd
from multiprocessing import Pool

# wrap your csv importer in a function that can be mapped
def read_csv(filename):
    '''converts a filename to a pandas dataframe'''
    return pd.read_csv(filename)

def main():
    # get a list of file names
    files = os.listdir('.')
    # splitext is safer than split('.'), which fails on names
    # without a dot and mishandles names with several dots
    file_list = [filename for filename in files
                 if os.path.splitext(filename)[1] == '.csv']

    # set up your pool
    with Pool(processes=8) as pool:  # or whatever your hardware can support
        # have your pool map the file names to dataframes
        df_list = pool.map(read_csv, file_list)

        # reduce the list of dataframes to a single dataframe
        combined_df = pd.concat(df_list, ignore_index=True)

if __name__ == '__main__':
    main()
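The worker function can also carry the per-file transformation from the question, so each process returns the reshaped array instead of a raw dataframe. Below is a minimal sketch under the question's assumptions: each csv has a VALUE column with exactly 75 * 90 rows, and the files live in ./task_1. The process_csv name is illustrative, not part of the original answer.

import os
import pandas as pd
from multiprocessing import Pool

# hypothetical worker: reads one csv and applies the reshape from the question
def process_csv(filename):
    f = pd.read_csv(filename)
    # assumes a VALUE column with 75 * 90 = 6750 rows, as in the question
    return f.VALUE.to_numpy().reshape(75, 90)

def main():
    os.chdir("./task_1")  # illustrative path taken from the question
    file_list = sorted(f for f in os.listdir('.')
                       if os.path.splitext(f)[1] == '.csv')
    with Pool(processes=8) as pool:
        arrays = pool.map(process_csv, file_list)  # one 75x90 array per file
    return arrays

if __name__ == '__main__':
    results = main()

Note that pool.map preserves input order, so the returned list lines up with the sorted file_list; combining the arrays afterwards (for example with numpy.stack) is then straightforward.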
【網(wǎng)站聲明】本站部分內(nèi)容來源于互聯(lián)網(wǎng),旨在幫助大家更快的解決問題,如果有圖片或者內(nèi)容侵犯了您的權(quán)益,請聯(lián)系我們刪除處理,感謝您的支持!