One of the most practical, and yet fundamental, uses of packet capture analysis in today's networks is examining HTTP flows to isolate problems with the protocol or underlying network interactions. If you're writing a web application or trying to debug why a particular service is slow, filtering for and graphing HTTP response times can give you an instant picture of overall performance and outliers.
About HTTP response times
The HTTP response time is the elapsed time between when an HTTP request is transmitted and when its HTTP response is transmitted. It is calculated while the packets are being decoded, and the value is stored in a field called http.time on the HTTP response packet.
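On systems that use Wireshark-style display filters, as CloudShark does, you can see which packets carry this field by filtering on its presence alone:

```
http.time
```

Only packets for which the decoder computed a response time will match.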
Adding http.time to your capture view
Let's take a look at this capture, which we took against a locally hosted CloudShark instance. We've added a custom column called "HTTP Time" which contains the value of the http.time field.
To add this column to our view, we can add a custom field by clicking on Profile -> Custom Columns in the capture viewer. We then add a custom column here:
You can order the fields at the top of that dialog window. When you're done, click Save at the bottom.
We've also added the field http.request_in as a column called "Request Frame". We'll explain why in a moment.
Since http.time is contained within the HTTP response packets, we want to look only at the HTTP responses, using this filter:
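In standard Wireshark-style display filter syntax, that filter is:

```
http.response
```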
Graphing http response time
To get a good view of HTTP response times, we can create a graph. CloudShark lets you graph things like number of bytes, number of packets, etc., but it will also let you graph the average value of numeric fields.
In this graph, we’ve created a filter using the AVG (average) function. The syntax we put in the graph for the y-axis is like this:
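Following the Wireshark I/O graph convention, an averaging y-axis expression wraps the field name in the AVG function:

```
AVG(http.time)
```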
This creates a graph of our average http response times over the duration of the capture:
What can be learned here? Large HTTP response times could be due to network delay, but with a web app, it's most likely that the application is taking a long time to process the request. In our case, that's probably it - we were purposefully running CloudShark on a very slow machine so we could demonstrate some big numbers here (11 seconds, whoa!).
How can we find out what the problem is? Let’s look at that big spike in the average response time. Since it’s the average at that time in the capture, we can make a guess as to what a good threshold would be to find the outliers at that time (let’s pick 6 seconds, since that’s about halfway up the spike).
Using our threshold, we can then build a filter to find those responses that had a time greater than 6 seconds:
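Since http.time is expressed in seconds, a simple comparison in display-filter syntax does the job:

```
http.time > 6
```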
Remember the http.request_in column that we added? We can use it to associate each response with the packet that contained its request. Now we know which GET requests caused those long responses! We can put them all together using the in operator in our filter:
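With Wireshark's set-membership operator, such a filter might look like this (the frame numbers below are made up for illustration; you would use the values from your own Request Frame column):

```
frame.number in {208, 341, 507}
```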
Armed with this, we can point our engineers at a specific GET request that had an exceedingly long response time, maybe getting to the root of a web app issue. Better yet, we can give them a ladder diagram view of the whole problem:
A note about calculating accurate http response times
We should note that the http.time values aren't contained in the packet capture data, but generated behind the scenes and then included as a field. This info is generated by:
- Searching through all streams to find each HTTP request
- Searching through all the responses, once the list of requests is complete
- Matching each response to its respective request
- Calculating the time needed to complete a full response after a full request
- Computing second-order data (minimum, maximum, average) for hosts that receive multiple requests
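The steps above can be sketched in a few lines of Python. This is a simplified model, not the actual decoder: it assumes the requests and responses have already been reassembled from the streams and matched by frame number, so fragments and retransmissions are out of the picture.

```python
# Simplified model of deriving http.time and its second-order stats.
# Frame numbers and timestamps are invented for illustration.

requests = {   # request frame number -> timestamp of the full request
    10: 0.00,
    25: 1.50,
    40: 3.20,
}

responses = [  # (response frame number, timestamp, matched request frame)
    (12, 0.25, 10),
    (30, 8.10, 25),   # the slow one
    (44, 3.45, 40),
]

# Match each response to its request and compute the delta -- this is
# the value that would be stored as http.time on the response packet.
http_time = {resp: ts - requests[req] for resp, ts, req in responses}

# Second-order data: minimum, maximum, average
times = list(http_time.values())
stats = (min(times), max(times), sum(times) / len(times))
```

A graphing step like the one described earlier is then just this averaging, repeated per time bucket across the capture.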
We say “full response” and “full request” due to the possibility of fragments and retransmissions in the stream; some additional work is necessary to get around this. This entire process makes the response times about as accurate as they can be.
Capture analysis tools add other interesting clues like this while they're decoding; http.time is one of them, and there are others that we'll discuss in