A/B test for determining a value for the unused socket timeout.  Currently
the timeout defaults to 10 seconds.  Setting it too low keeps us from taking
full advantage of idle sockets.  Setting it too high could result in more
ERR_CONNECT_RESETs, each costing one RTT to receive the RST packet and
possibly another RTT to re-establish the connection.
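
For illustration only (not part of this change): the experiment could choose
the timeout from the assigned trial group roughly as sketched below.  The
group names and candidate values are hypothetical, not the real trial
configuration.

    #include <string>

    // Hypothetical sketch: map the assigned trial group to a timeout in
    // seconds, falling back to the current 10-second default.
    int UnusedIdleSocketTimeoutSeconds(const std::string& trial_group) {
      if (trial_group == "timeout_5s")
        return 5;
      if (trial_group == "timeout_30s")
        return 30;
      if (trial_group == "timeout_60s")
        return 60;
      return 10;  // Current default.
    }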

r=jar


Review URL: http://codereview.chromium.org/2827016

git-svn-id: svn://svn.chromium.org/chrome/trunk/src@50364 0039d316-1c4b-4281-b951-d872f2087c98
diff --git a/net/socket/client_socket_pool_base_unittest.cc b/net/socket/client_socket_pool_base_unittest.cc
index 286949a..c9ad282 100644
--- a/net/socket/client_socket_pool_base_unittest.cc
+++ b/net/socket/client_socket_pool_base_unittest.cc
@@ -423,7 +423,8 @@
     CreatePoolWithIdleTimeouts(
         max_sockets,
         max_sockets_per_group,
-        base::TimeDelta::FromSeconds(kUnusedIdleSocketTimeout),
+        base::TimeDelta::FromSeconds(
+            ClientSocketPool::unused_idle_socket_timeout()),
         base::TimeDelta::FromSeconds(kUsedIdleSocketTimeout));
   }
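
A rough sketch of the accessor pattern the test now depends on, using a
hypothetical stand-in class rather than the real ClientSocketPool (the class
name, setter, and storage below are illustrative assumptions): a process-wide
value behind a static getter lets the experiment override the default at
startup, while the unit test reads whatever value is in effect instead of a
compile-time constant.

    // Illustrative stand-in, not the actual ClientSocketPool source.
    class SocketPoolTimeouts {
     public:
      static int unused_idle_socket_timeout() {
        return unused_idle_timeout_secs_;
      }
      static void set_unused_idle_socket_timeout(int seconds) {
        unused_idle_timeout_secs_ = seconds;
      }

     private:
      static int unused_idle_timeout_secs_;
    };

    int SocketPoolTimeouts::unused_idle_timeout_secs_ = 10;  // Default: 10s.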