
Finding duplicate files via hashlib?

Problem Description

I know that this question has been asked before, and I've seen some of the answers, but this question is more about my code and the best way of accomplishing this task.

I want to scan a directory and see if there are any duplicates (by checking MD5 hashes) in that directory. The following is my code:

import sys
import os
import hashlib

fileSliceLimitation = 5000000  # bytes

# if the file is big, read it in chunks to avoid loading the whole file into RAM
def getFileHashMD5(filename):
    filesize = os.path.getsize(filename)

    if filesize > fileSliceLimitation:
        m = hashlib.md5()
        with open(filename, 'rb') as fh:
            while True:
                data = fh.read(8192)
                if not data:
                    break
                m.update(data)
        retval = m.hexdigest()
    else:
        retval = hashlib.md5(open(filename, 'rb').read()).hexdigest()

    return retval

searchdirpath = raw_input("Type directory you wish to search: ")
print ""
print ""
text_file = open('outPut.txt', 'w')

for dirname, dirnames, filenames in os.walk(searchdirpath):
    # print path to all filenames.
    for filename in filenames:
        fullname = os.path.join(dirname, filename)
        h_md5 = getFileHashMD5(fullname)
        print h_md5 + " " + fullname
        text_file.write("\n" + h_md5 + " " + fullname)

# close txt file
text_file.close()


print "\n\nReading outPut:"
text_file = open('outPut.txt', 'r')

myListOfHashes = text_file.read()

if h_md5 in myListOfHashes:
    print 'Match: ' + " " + fullname

This gives me the following output:

Please type in directory you wish to search using above syntax: /Users/bubble/Desktop/aF

033808bb457f622b05096c2f7699857v /Users/bubble/Desktop/aF/.DS_Store
409d8c1727960fddb7c8b915a76ebd35 /Users/bubble/Desktop/aF/script copy.py
409d8c1727960fddb7c8b915a76ebd25 /Users/bubble/Desktop/aF/script.py
e9289295caefef66eaf3a4dffc4fe11c /Users/bubble/Desktop/aF/simpsons.mov

Reading outPut:
Match:  /Users/bubble/Desktop/aF/simpsons.mov

My thinking was:

1) Scan directory 2) Write MD5 hashes + Filename to text file 3) Open text file as read only 4) Scan directory AGAIN and check against text file...

I see that this isn't a good way of doing it AND it doesn't work. The 'match' just prints out the very last file that was processed.

How can I get this script to actually find duplicates? Can someone tell me a better/easier way of accomplishing this task.

Thank you very much for any help. Sorry this is a long post.

Recommended Answer

The obvious tool for identifying duplicates is a hash table. Unless you are working with a very large number of files, you could do something like this:

from collections import defaultdict

file_dict = defaultdict(list)
for filename in files:
    file_dict[get_file_hash(filename)].append(filename)

At the end of this process, file_dict will contain a list for every unique hash; when two files have the same hash, they'll both appear in the list for that hash. Then filter the dict looking for value lists longer than 1, and compare the files to make sure they're the same -- something like this:

for duplicates in file_dict.values():   # file_dict.itervalues() in Python 2
    if len(duplicates) > 1:
        # double-check reported duplicates and generate output

Or this:

duplicates = [files for files in file_dict.values() if len(files) > 1]
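Putting the pieces together, here is one way the whole approach might look as a runnable sketch. The names get_file_hash and find_duplicates are illustrative, not from any library; the chunked reading mirrors the technique in the question, applied unconditionally since it is safe for files of any size:

```python
import os
import hashlib
from collections import defaultdict

def get_file_hash(filename, chunk_size=8192):
    """MD5 of a file's contents, read in chunks so large files don't fill RAM."""
    m = hashlib.md5()
    with open(filename, "rb") as fh:
        while True:
            data = fh.read(chunk_size)
            if not data:
                break
            m.update(data)
    return m.hexdigest()

def find_duplicates(directory):
    """Return {hash: [paths]} for every hash shared by more than one file."""
    file_dict = defaultdict(list)
    for dirname, _, filenames in os.walk(directory):
        for filename in filenames:
            fullname = os.path.join(dirname, filename)
            file_dict[get_file_hash(fullname)].append(fullname)
    return {h: paths for h, paths in file_dict.items() if len(paths) > 1}
```

Because the grouping happens in a single pass over the directory, there is no need for the intermediate text file or the second scan from the question.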

get_file_hash could use MD5s; or it could simply get the first and last bytes of the file as Ramchandra Apte suggested in the comments above; or it could simply use file sizes as tdelaney suggested in the comments above. Each of the latter two strategies is more likely to produce false positives, though. You could combine them to reduce the false positive rate.
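The size-based pre-filter can be sketched like this (group_by_size is a hypothetical helper, not part of any library): files with a unique size cannot have a duplicate, so only the files within each same-size group ever need to be hashed:

```python
import os
from collections import defaultdict

def group_by_size(filenames):
    """First pass: bucket files by size; a file with a unique size has no duplicate."""
    by_size = defaultdict(list)
    for name in filenames:
        by_size[os.path.getsize(name)].append(name)
    # Only groups with two or more members are candidates for hashing.
    return [group for group in by_size.values() if len(group) > 1]
```

Feeding only these candidate groups into an MD5-based check keeps the cheap size comparison as a filter while the hash eliminates the size collisions that are not real duplicates.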

If you're working with a very large number of files, you could use a more sophisticated data structure like a Bloom Filter.
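Purely as an illustration of the idea, a toy Bloom filter can be built on hashlib alone; the sizing parameters below are arbitrary, and in practice you would tune the bit-array size and hash count to the expected number of files (a membership test can yield false positives, never false negatives):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions derived from salted SHA-256 digests."""

    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes independent bit positions by salting with an index.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(("%d:%s" % (i, item)).encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

For duplicate detection you would add each file's hash to the filter and only fall back to an exact check (e.g. the dict above) when the filter reports a possible repeat.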
