Commit aed5ad2b authored by Matt Menke, committed by Commit Bot

Make fewer connections in PacFileFetcherImplTest.IgnoresLimits.

The test was making enough connections to exceed the total socket pool
limit, which is currently 256, and apparently large enough to cause
timeouts on some bots. Switch to exceeding the per-group limit instead.
The socket pool tests themselves should make sure that ignore_limits
bypasses the global pool limit as well; since the only way this test
could exceed the global limit is by exceeding the per-group limit first,
checking just the per-group limit here should be sufficient.

Bug: 820845
Change-Id: I2a661b535492c5bbca4e90710bd2d91e6115edfe
Reviewed-on: https://chromium-review.googlesource.com/964542
Commit-Queue: Matt Menke <mmenke@chromium.org>
Reviewed-by: Eric Roman <eroman@chromium.org>
Cr-Commit-Position: refs/heads/master@{#544391}
parent 26fbdf79
@@ -540,13 +540,13 @@ TEST_F(PacFileFetcherImplTest, DataURLs) {
   }
 }
 
-// Makes sure that a request gets through when the socket pool is full, so
-// PacFileFetcherImpl can use the same URLRequestContext as everything else.
-TEST_F(PacFileFetcherImplTest, Priority) {
-  // Enough requests to exceed the per-pool limit, which is also enough to
-  // exceed the per-group limit.
-  int num_requests = 10 + ClientSocketPoolManager::max_sockets_per_pool(
-                              HttpNetworkSession::NORMAL_SOCKET_POOL);
+// Makes sure that a request gets through when the socket group for the PAC URL
+// is full, so PacFileFetcherImpl can use the same URLRequestContext as
+// everything else.
+TEST_F(PacFileFetcherImplTest, IgnoresLimits) {
+  // Enough requests to exceed the per-group limit.
+  int num_requests = 2 + ClientSocketPoolManager::max_sockets_per_group(
+                             HttpNetworkSession::NORMAL_SOCKET_POOL);
   net::test_server::SimpleConnectionListener connection_listener(
       num_requests, net::test_server::SimpleConnectionListener::
...