C# - Downloading from Google Drive in byte chunks


                  Problem description


                  I'm currently developing for an environment that has poor network connectivity. My application helps to automatically download required Google Drive files for users. It works reasonably well for small files (ranging from 40KB to 2MB), but fails far too often for larger files (9MB). I know these file sizes might seem small, but in terms of my client's network environment, Google Drive API constantly fails with the 9MB file.


                  I've concluded that I need to download files in smaller byte chunks, but I don't see how I can do that with Google Drive API. I've read this over and over again, and I've tried the following code:

                  // with the Drive File ID, and the appropriate export MIME type, I create the export request
                  var request = DriveService.Files.Export(fileId, exportMimeType);
                  
                  // take the message so I can modify it by hand
                  var message = request.CreateRequest();
                  var client = request.Service.HttpClient;
                  
                  // I change the Range headers of both the client, and message
                  client.DefaultRequestHeaders.Range =
                      message.Headers.Range =
                      new System.Net.Http.Headers.RangeHeaderValue(100, 200);
                  var response = await request.Service.HttpClient.SendAsync(message);
                  
                  // if status code = 200, copy to local file
                  if (response.IsSuccessStatusCode)
                  {
                      using (var fileStream = new FileStream(downloadFileName, FileMode.CreateNew, FileAccess.ReadWrite))
                      {
                          await response.Content.CopyToAsync(fileStream);
                      }
                  }
                  


                  The resulting local file (from fileStream), however, is still full-length (i.e. a 40KB file for the 40KB Drive file, and a 500 Internal Server Error for the 9MB file). On a side note, I've also experimented with ExportRequest.MediaDownloader.ChunkSize, but from what I observe it only changes the frequency at which the ExportRequest.MediaDownloader.ProgressChanged callback is called (i.e. the callback triggers every 256KB if ChunkSize is set to 256 * 1024).
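
                  For reference, this is roughly what that ChunkSize experiment looked like (a sketch reusing the same DriveService, fileId, exportMimeType and downloadFileName as above); the download still happens as a single HTTP request, with ProgressChanged simply firing every ChunkSize bytes:

                  // sketch of the ChunkSize / ProgressChanged experiment; not a fix
                  var chunkedRequest = DriveService.Files.Export(fileId, exportMimeType);

                  // only changes how often ProgressChanged fires, not how the bytes are fetched
                  chunkedRequest.MediaDownloader.ChunkSize = 256 * 1024;
                  chunkedRequest.MediaDownloader.ProgressChanged += progress =>
                      Console.WriteLine($"{progress.Status}: {progress.BytesDownloaded} bytes");

                  using (var fileStream = new FileStream(downloadFileName, FileMode.Create, FileAccess.Write))
                  {
                      // still performed by the SDK as one request internally
                      await chunkedRequest.DownloadAsync(fileStream);
                  }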

                  How should I proceed?

                  Recommended answer


                  You seemed to be heading in the right direction. From your last comment, the request will update progress based on the chunk size, so your observation was accurate.

                  Looking at the source code for MediaDownloader in the SDK, I found the following (emphasis mine):


                  The core download logic. We download the media and write it to an output stream ChunkSize bytes at a time, raising the ProgressChanged event after each chunk. The chunking behavior is largely a historical artifact: a previous implementation issued multiple web requests, each for ChunkSize bytes. Now we do everything in one request, but the API and client-visible behavior are retained for compatibility.


                  Your example code will only download one chunk, from byte 100 to 200. Using that approach you would have to keep track of an index and download each chunk manually, copying them to the file stream for each partial download:

                  // requires: using System.Net; and using System.Net.Http.Headers;
                  const int KB = 0x400;
                  int ChunkSize = 256 * KB; // 256KB
                  public async Task ExportFileAsync(string downloadFileName, string fileId, string exportMimeType) {
                  
                      var exportRequest = driveService.Files.Export(fileId, exportMimeType);
                      var client = exportRequest.Service.HttpClient;
                  
                      //you would need to know the file size
                      var size = await GetFileSize(fileId);
                  
                      using (var file = new FileStream(downloadFileName, FileMode.CreateNew, FileAccess.ReadWrite)) {
                  
                          file.SetLength(size);
                  
                          var chunks = (size / ChunkSize) + 1;
                          for (long index = 0; index < chunks; index++) {
                  
                              var request = exportRequest.CreateRequest();
                  
                              var from = index * ChunkSize;
                              var to = from + ChunkSize - 1;
                  
                              request.Headers.Range = new RangeHeaderValue(from, to);
                  
                              var response = await client.SendAsync(request);
                  
                              if (response.StatusCode == HttpStatusCode.PartialContent || response.IsSuccessStatusCode) {
                                  using (var stream = await response.Content.ReadAsStreamAsync()) {
                                      file.Seek(from, SeekOrigin.Begin);
                                      await stream.CopyToAsync(file);
                                  }
                              }
                          }
                      }
                  }
                  
                  private async Task<long> GetFileSize(string fileId) {
                      var request = driveService.Files.Get(fileId);
                      // Drive API v3 returns only basic fields by default, so ask for the size explicitly
                      request.Fields = "size";
                      var file = await request.ExecuteAsync();
                      // Size can be null (e.g. for native Google formats); fall back to 0 here
                      return file.Size ?? 0;
                  }
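
                  As a usage sketch (the local path, file ID and export MIME type below are placeholders, not values from the question), the method could then be called like this:

                  // hypothetical call to the ExportFileAsync method above
                  await ExportFileAsync(
                      downloadFileName: @"C:\temp\export.pdf",   // illustrative local path
                      fileId: "your-drive-file-id",              // placeholder Drive file ID
                      exportMimeType: "application/pdf");        // e.g. export a Google Doc as PDF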
                  


                  This code makes some assumptions about the Drive API/server:

                  • That the server will allow the multiple requests needed to download the file in chunks. Don't know if requests are throttled.
                  • That the server still accepts the Range header as stated in the developer documentation (a quick probe for this is sketched below).
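
                  If you want to check the second assumption against the live server rather than the documentation, a rough probe along these lines (reusing the driveService, fileId and exportMimeType names from above, and the same usings as the export method) would show whether the Range header is honoured:

                  // request a tiny range and see whether the server returns 206 Partial Content
                  var probeRequest = driveService.Files.Export(fileId, exportMimeType);
                  var probeMessage = probeRequest.CreateRequest();
                  probeMessage.Headers.Range = new RangeHeaderValue(0, 1023);

                  var probeResponse = await probeRequest.Service.HttpClient.SendAsync(probeMessage);
                  var honoursRange =
                      probeResponse.StatusCode == HttpStatusCode.PartialContent &&
                      probeResponse.Content.Headers.ContentRange != null;

                  // 206 + Content-Range means ranged downloads should work;
                  // 200 with the full body means the Range header was ignored
                  Console.WriteLine(honoursRange
                      ? "Server honoured the Range header."
                      : "Server ignored the Range header; chunked downloads may not help.");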


