Problem Description
I know that this question has been asked before, and I've seen some of the answers, but this question is more about my code and the best way of accomplishing this task.
I want to scan a directory and see if there are any duplicates (by checking MD5 hashes) in that directory. The following is my code:
import sys
import os
import hashlib

fileSliceLimitation = 5000000  # bytes

# if the file is big, slice trick to avoid loading the whole file into RAM
def getFileHashMD5(filename):
    retval = 0
    filesize = os.path.getsize(filename)
    if filesize > fileSliceLimitation:
        with open(filename, 'rb') as fh:
            m = hashlib.md5()
            while True:
                data = fh.read(8192)
                if not data:
                    break
                m.update(data)
            retval = m.hexdigest()
    else:
        retval = hashlib.md5(open(filename, 'rb').read()).hexdigest()
    return retval
searchdirpath = raw_input("Type directory you wish to search: ")
print ""
print ""

text_file = open('outPut.txt', 'w')
for dirname, dirnames, filenames in os.walk(searchdirpath):
    # print path to all filenames.
    for filename in filenames:
        fullname = os.path.join(dirname, filename)
        h_md5 = getFileHashMD5(fullname)
        print h_md5 + " " + fullname
        text_file.write("\n" + h_md5 + " " + fullname)

# close txt file
text_file.close()

print "\nReading outPut:"
text_file = open('outPut.txt', 'r')
myListOfHashes = text_file.read()

if h_md5 in myListOfHashes:
    print 'Match: ' + " " + fullname
This gives me the following output:
Please type in directory you wish to search using above syntax: /Users/bubble/Desktop/aF
033808bb457f622b05096c2f7699857v /Users/bubble/Desktop/aF/.DS_Store
409d8c1727960fddb7c8b915a76ebd35 /Users/bubble/Desktop/aF/script copy.py
409d8c1727960fddb7c8b915a76ebd25 /Users/bubble/Desktop/aF/script.py
e9289295caefef66eaf3a4dffc4fe11c /Users/bubble/Desktop/aF/simpsons.mov
Reading outPut:
Match: /Users/bubble/Desktop/aF/simpsons.mov
My thinking was:

1) Scan directory 2) Write MD5 hashes + Filename to text file 3) Open text file as read only 4) Scan directory AGAIN and check against text file...
I see that this isn't a good way of doing it, AND it doesn't work: the 'Match' just prints out the very last file that was processed.
How can I get this script to actually find duplicates? Can someone tell me a better/easier way of accomplishing this task?
Thank you very much for any help. Sorry this is a long post.
Recommended Answer
The obvious tool for identifying duplicates is a hash table. Unless you are working with a very large number of files, you could do something like this:
from collections import defaultdict

file_dict = defaultdict(list)
for filename in files:
    file_dict[get_file_hash(filename)].append(filename)
At the end of this process, file_dict will contain a list for every unique hash; when two files have the same hash, they'll both appear in the list for that hash. Then filter the dict looking for value lists longer than 1, and compare the files to make sure they're the same -- something like this:
for duplicates in file_dict.values():  # file_dict.itervalues() in Python 2
    if len(duplicates) > 1:
        # double-check reported duplicates and generate output
Or this:
duplicates = [files for files in file_dict.values() if len(files) > 1]
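Putting those pieces together, a minimal end-to-end sketch (Python 3; the function names here are illustrative, not from the original post) could look like this:

```python
import hashlib
import os
from collections import defaultdict


def get_file_hash(filename, chunk_size=8192):
    """MD5 of a file, read in chunks so large files don't fill RAM."""
    m = hashlib.md5()
    with open(filename, 'rb') as fh:
        while True:
            data = fh.read(chunk_size)
            if not data:
                break
            m.update(data)
    return m.hexdigest()


def find_duplicates(root):
    """Walk root, group files by MD5, and return groups with more than one file."""
    file_dict = defaultdict(list)
    for dirname, _, filenames in os.walk(root):
        for filename in filenames:
            fullname = os.path.join(dirname, filename)
            file_dict[get_file_hash(fullname)].append(fullname)
    return [group for group in file_dict.values() if len(group) > 1]
```

This replaces the write-to-file-then-reread round trip from the question with a single in-memory pass.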
get_file_hash could use MD5s; or it could simply get the first and last bytes of the file as Ramchandra Apte suggested in the comments above; or it could simply use file sizes as tdelaney suggested in the comments above. Each of the latter two strategies is more likely to produce false positives, though. You could combine them to reduce the false positive rate.
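One way to combine those strategies is to group by file size first (free, via os.path.getsize) and only compute MD5s within size groups that actually collide. A sketch of that idea (function names are mine, not from the answer):

```python
import hashlib
import os
from collections import defaultdict


def md5_of(path):
    """Chunked MD5 so large files aren't loaded into RAM."""
    m = hashlib.md5()
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(8192), b''):
            m.update(chunk)
    return m.hexdigest()


def duplicates_cheap_first(paths):
    """Group by size (cheap), then confirm with MD5 only within size groups."""
    by_size = defaultdict(list)
    for p in paths:
        by_size[os.path.getsize(p)].append(p)

    by_hash = defaultdict(list)
    for group in by_size.values():
        if len(group) > 1:  # only hash files whose size collides
            for p in group:
                by_hash[md5_of(p)].append(p)
    return [g for g in by_hash.values() if len(g) > 1]
```

Files with a unique size are never hashed at all, which can save a lot of I/O on large directories.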
If you're working with a very large number of files, you could use a more sophisticated data structure like a Bloom Filter.
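To make the Bloom filter idea concrete, here is a toy sketch (class name and parameters are mine, not from the answer): the filter cheaply flags hashes you may have seen before, and any hit still needs a real comparison because Bloom filters can return false positives, though never false negatives.

```python
import hashlib


class BloomFilter:
    """Tiny Bloom filter: may report false positives, never false negatives."""

    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # a Python int used as an arbitrary-size bitset

    def _positions(self, item):
        # Derive k bit positions from slices of a single SHA-256 digest.
        digest = hashlib.sha256(item.encode('utf-8')).digest()
        for i in range(self.num_hashes):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, 'big') % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

In the duplicate-finder, you would add each file's hash to the filter and only fall back to the exact hash-table check when might_contain reports a possible repeat.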