When writing roughly 200 blobs of about 300 KB each to blob storage with Task.WhenAll(list), the writes hang / take noticeably longer than executing each write sequentially.
I'm running the process in a Function App.
Doesn't work:
private async Task WriteToBlobAsync(List<DataSeries> allData)
{
    int blobCount = 0;
    List<Task> blobWriteTasks = new List<Task>();
    foreach (DataSeries series in allData)
    {
        blobCount++;
        string seriesInJson = JsonConvert.SerializeObject(series);
        blobWriteTasks.Add(_destinationBlobStore.WriteBlobAsync(seriesInJson, series.SaveName));
        //await _destinationBlobStore.WriteBlobAsync(seriesInJson, series.SaveName);
        if (blobCount % 100 == 0)
        {
            _flightSummaryDoc.AddLog($"{blobCount} Blobs Complete");
            _log.Info($"{blobCount} Blobs Complete");
        }
    }
    await Task.WhenAll(blobWriteTasks.ToArray());
}
Works significantly faster (but it shouldn't):
private async Task WriteToBlobAsync(List<DataSeries> allData)
{
    int blobCount = 0;
    List<Task> blobWriteTasks = new List<Task>();
    foreach (DataSeries series in allData)
    {
        blobCount++;
        string seriesInJson = JsonConvert.SerializeObject(series);
        //blobWriteTasks.Add(_destinationBlobStore.WriteBlobAsync(seriesInJson, series.SaveName));
        await _destinationBlobStore.WriteBlobAsync(seriesInJson, series.SaveName);
        if (blobCount % 100 == 0)
        {
            _flightSummaryDoc.AddLog($"{blobCount} Blobs Complete");
            _log.Info($"{blobCount} Blobs Complete");
        }
    }
    //await Task.WhenAll(blobWriteTasks.ToArray());
}
Posted on 2019-05-02 16:12:58
It is slowing down and failing because it can't handle 200 concurrent requests.
Consider using SemaphoreSlim
and its built-in throttling mechanism to limit concurrent requests to a more reasonable number.
请看这篇文章:How to limit the amount of concurrent async I/O operations?
https://stackoverflow.com/questions/55944905
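A minimal sketch of what that throttling could look like, reusing the asker's `WriteBlobAsync`, `DataSeries`, and `_destinationBlobStore` names; the concurrency limit of 10 is an arbitrary starting point to tune for your environment:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Newtonsoft.Json;

private async Task WriteToBlobThrottledAsync(List<DataSeries> allData)
{
    // At most 10 writes in flight at once, instead of all ~200.
    using (var throttler = new SemaphoreSlim(10))
    {
        var blobWriteTasks = allData.Select(async series =>
        {
            string seriesInJson = JsonConvert.SerializeObject(series);
            await throttler.WaitAsync();   // blocks when 10 writes are already running
            try
            {
                await _destinationBlobStore.WriteBlobAsync(seriesInJson, series.SaveName);
            }
            finally
            {
                throttler.Release();       // free a slot for the next queued write
            }
        }).ToList();

        await Task.WhenAll(blobWriteTasks);
    }
}
```

You still get the latency benefit of overlapping I/O from the first version, but the semaphore keeps the Function App from opening 200 simultaneous connections.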