Keep in mind that the objective is to (1) understand what the paper is trying to say; (2) assess whether what it is saying is worthwhile; and (3) judge whether the paper is effective in communicating what it says.
A final point: in your career you won't always need to read papers at this depth of analysis, but it is good practice, both for the times you do and for learning to recognize when a paper is effective and how it could be improved.
1997 fell roughly in the middle of the video distribution era. There was certainly plenty of work as far back as the early 1990s discussing all manner of video delivery systems, and the area was generally considered hot until after 2000.
Note: the version of the paper posted on the 290F web site did not have references. But a version with references was available at: http://www.nmsl.cs.ucsb.edu/papers/INFOCOM-97.pdf.
The main goal of the paper was to look at scheduling algorithms for a video delivery system. The paper uses a model where the server and network together can offer a finite number of channels. The question then becomes one of how to do scheduling in these kinds of systems.
This paper is (slightly) different from a traditional video delivery scheduling paper in that it looks at the implications of a scheduling algorithm on longer-term performance. Instead of evaluating how many requests can be satisfied in a given period of time, the paper focuses more on the user-perceived impact of different scheduling algorithms. For example, a particularly noteworthy conclusion is that while average performance over a period of time looks quite stable (after all, that's what averaging does), the instantaneous performance varies widely. If anything, this paper demonstrates how averaging can hide subtle and important behavior.
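To make the averaging point concrete, here is a minimal Python sketch (my own illustration, not from the paper): an instantaneous metric that swings wildly still settles into a perfectly stable running average, so a reader looking only at the average would miss the variation entirely.

```python
def running_average(xs):
    """Cumulative running average of a sequence of samples."""
    total = 0.0
    out = []
    for i, x in enumerate(xs, start=1):
        total += x
        out.append(total / i)
    return out

# Instantaneous performance alternates wildly between 1 and 99...
instantaneous = [1, 99] * 10

# ...yet the running average converges to 50, hiding the swings.
avg = running_average(instantaneous)
print(max(instantaneous) - min(instantaneous))  # spread of raw samples: 98
print(avg[-1])                                  # final running average: 50.0
```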
The system architecture is one of a server, a multicast-capable network, and a number of users connected with generic players. (If you want, you could describe more about what the architecture is, but such a description is more book-report style: useful in thinking through the components of the system and noting the assumptions made by the author, but unless there is something noteworthy, it is really just a summary.)
The paper uses a video system model that makes one particularly novel assumption about user behavior. Very few evaluation models in previous papers considered changes in user behavior. In essence, this is one of the first papers to look at a "flash crowd" (okay, I'm stretching here, but really, dynamic workloads weren't that common). It is through this model that the author exposes some of the more interesting long-term performance characteristics.
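As a rough illustration (the paper's actual workload model is more detailed, and the numbers below are my own), the "flash crowd" idea amounts to a request-rate profile that holds steady, surges sharply for a window of time, and then subsides:

```python
def flash_crowd_profile(n_intervals, base_rate, surge_rate, surge_start, surge_end):
    """Toy request-rate profile: steady load, a surge window, then steady again.

    Illustrative only; not the paper's exact workload model.
    """
    return [surge_rate if surge_start <= t < surge_end else base_rate
            for t in range(n_intervals)]

# 12 intervals, baseline of 5 requests/interval, surging to 50 in intervals 4-6.
profile = flash_crowd_profile(12, base_rate=5, surge_rate=50, surge_start=4, surge_end=7)
print(profile)  # [5, 5, 5, 5, 50, 50, 50, 5, 5, 5, 5, 5]
```

A static workload model, by contrast, would hold the rate constant and never expose how a policy behaves while resources are already committed when the surge hits.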
The author then describes the various allocation policies that will be studied. (Again, like the system description, you can provide a brief summary of these.)
Figure 3 is one of the first results presented in the paper, and it uses a style that is repeated for several figures. The stacking of the graphs allows a reader to see what is happening for different metrics at different points in time. The results are also presented as a time series so that instantaneous performance (over a 5-minute interval) can be displayed. The author also overlays faint running averages.
The author then steps through a series of results (explain if it is helpful).
The author then describes his proposal for a "pure rate control" scheduling strategy and presents an additional set of results. Based on those results, the author proposes another scheduling strategy that gives slightly more flexibility, but at the cost of less stable performance.
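The paper's exact policy is not reproduced here, but the flavor of rate control can be sketched as an allocator that caps channel starts per interval and queues the excess; a load surge is then smoothed into steady starts at the cost of a backlog. The function, parameters, and numbers below are my own illustration, not the author's algorithm:

```python
from collections import deque

def rate_controlled_schedule(requests_per_interval, max_starts_per_interval):
    """Toy rate-control allocator (illustrative; not the paper's exact policy).

    Each interval, at most `max_starts_per_interval` queued requests are
    granted a channel; the rest wait. Returns the number of starts per
    interval and the final queue length.
    """
    queue = deque()
    starts = []
    for arrivals in requests_per_interval:
        queue.extend(range(arrivals))            # enqueue new requests
        n = min(len(queue), max_starts_per_interval)
        for _ in range(n):
            queue.popleft()                      # grant a channel
        starts.append(n)
    return starts, len(queue)

# A surge of 50 requests in interval 2 is absorbed as steady starts of 10.
starts, backlog = rate_controlled_schedule([5, 5, 50, 5, 5], max_starts_per_interval=10)
print(starts)   # [5, 5, 10, 10, 10]
print(backlog)  # 30 requests still waiting
```

The tradeoff the paper wrestles with is visible even in this toy: capping starts stabilizes resource usage, but the surge leaves a backlog that persists well after the load subsides.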
Finally, the author presents a series of additional comparative results. (Explain if anything interesting.)
As an overall contribution, illuminating the problems that dynamic load patterns can cause, especially in systems where resources are allocated for long periods of time, was quite clever. The paper exposed a problem with existing "one-dimensional" scheduling algorithms and offered a solution that incorporated a second order of resource-allocation considerations.
Weaknesses of the paper include the following. First, the paper assumes that multicast communication is free: that the cost of sending to one user is the same as sending to a million users. This oversimplifies the delivery cost and creates an obvious opportunity to scale with no downside. While it might have been difficult to model the cost of multicast, the author should have at least mentioned it as a factor.
Another weakness is the proposed pure rate control scheme. It is fine as a first attempt, but the author makes no claims about whether it is optimal, merely reasonable, etc. It seems the author chose a scheme that would offer some improved performance and flexibility and presented results. There are likely innumerable other rate control strategies that could have been studied, but it would have been useful to say something qualitative about the proposed one. (Okay, well, at the time it seemed like a good, if not obvious, choice. Hindsight is 20/20, and there are probably a fair number of alternatives that could be proposed.)
In terms of organization and readability, the paper is above average. The organization is straightforward and easy to follow, but not particularly inspiring (the author responds: hey, that's not fair, as if "inspiring" has to be a goal for the organization of a paper!!). The abstract is fairly generic about what the contributions and performance of the rate-based policies will be, and about whether the performance is exciting enough to be a contribution unto itself. In other words, the author could have been clearer about whether the performance gains were interesting. (Sometimes you'll see papers report their performance numbers but never say whether the improvement is good or not.) Finally, the paper does not give much detail about the simulator or exactly what the metrics are (both pieces of information are there, but the flow of the paper does not present them in the right sequence).