Faculty Research Seminar

Winter 2000


This seminar introduces graduate students to research. For the first two weeks we will present talks by established researchers on how to conduct research. After that, a different faculty member will speak each week about his or her research. This is a great way to learn about the ongoing research projects in the department.

Course number: 595J Enrollment Code: 68932

Time: Fridays 1-2pm Place: CS conference room

This week's talk

Friday, March 16th, at 1PM:

``Large and Long-Lived Parallel Computation using Java on the Internet'' by Peter Cappello

The research concerns a Java-based infrastructure intended to harness the Internet's vast, growing computational capacity for ultra-large, coarse-grained, parallel applications. The purpose of this research is to: 1) transform large heterogeneous computer networks, even the Internet itself, into a monolithic, multi-user, always-available multiprocessor; 2) solve some world-record-size computational problems; 3) via a simple API, allow designers to focus on a recursive decomposition/composition of the parallelizable part of the computation.
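The abstract does not specify the API, so the following is only a hypothetical sketch of what a recursive decomposition/composition interface of this kind might look like. The `Task` interface, the `SumTask` example, and the sequential `run` loop are all invented stand-ins; in the real infrastructure, decomposed tasks would be farmed out to remote Internet hosts rather than executed in a local loop.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical divide-and-conquer task interface: the designer supplies only
// the decomposition and composition logic for the parallelizable part.
interface Task<T> {
    boolean isAtomic();            // small enough to execute directly?
    T execute();                   // base case
    List<Task<T>> decompose();     // split into subtasks
    T compose(List<T> results);    // combine subtask results
}

// Example: summing the integers in [lo, hi) by halving the range.
class SumTask implements Task<Long> {
    private final long lo, hi;
    SumTask(long lo, long hi) { this.lo = lo; this.hi = hi; }

    public boolean isAtomic() { return hi - lo <= 1000; }

    public Long execute() {
        long s = 0;
        for (long i = lo; i < hi; i++) s += i;
        return s;
    }

    public List<Task<Long>> decompose() {
        long mid = lo + (hi - lo) / 2;
        List<Task<Long>> subs = new ArrayList<>();
        subs.add(new SumTask(lo, mid));
        subs.add(new SumTask(mid, hi));
        return subs;
    }

    public Long compose(List<Long> results) {
        return results.get(0) + results.get(1);
    }
}

public class Demo {
    // Sequential stand-in for the distributed scheduler.
    static <T> T run(Task<T> t) {
        if (t.isAtomic()) return t.execute();
        List<T> results = new ArrayList<>();
        for (Task<T> sub : t.decompose()) results.add(run(sub));
        return t.compose(results);
    }

    public static void main(String[] args) {
        System.out.println(run(new SumTask(0, 1_000_000))); // 499999500000
    }
}
```

Note that the application code above contains no communication protocol, topology, or fault-tolerance logic; in this model, those concerns belong entirely to the infrastructure.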

In short, the application programmer will get the performance benefits of massive parallelism without the costs that typically attend it: adulterating the application logic with an interprocessor communication protocol, topology-specific (e.g., hypercube) interprocessor communication, and fault-tolerance schemes.

The research includes implementing several widely applicable algorithms and deploying parallel implementations on well over a thousand geographically dispersed processors. This sets the stage for using tens of thousands of processors. These computations include the optimization version of some NP-hard problems (e.g., the traveling salesman problem and integer linear programming) and several scientific computations (e.g., the conjugate gradient method for solving linear systems iteratively, and the N-body problem). Perhaps most challenging, and most revealing of the architecture's limits, is the N-body problem.

An application is appropriate if an execution time of a few minutes, say 10, is acceptable. For example, a branch-and-bound problem that takes 100,000 minutes (more than 2 months) on one processor should be solvable in 10 minutes on 10,000 Internetworked processors. However, if a problem takes 10,000 seconds on a single processor, it cannot be solved in 1 second with 10,000 processors; Internet latencies preclude parallelism that fine-grained. Thus virtual reality applications, for example, would not be appropriate for this architecture.
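The granularity argument above can be made concrete with an idealized cost model. The formula and the 5-second per-host overhead below are assumptions for illustration, not measurements from the project: each of p hosts does T/p of the work plus a fixed overhead c for task delivery and result collection over the Internet.

```java
public class Granularity {
    // Idealized model: parallel time = work per host + fixed per-host overhead.
    // Real behavior also depends on scheduling, stragglers, and failures.
    static double parallelTime(double workSeconds, int hosts, double overheadSeconds) {
        return workSeconds / hosts + overheadSeconds;
    }

    public static void main(String[] args) {
        double c = 5.0; // assumed: ~5 s of Internet latency/scheduling per host
        // 100,000 minutes of work on 10,000 hosts: overhead is negligible.
        System.out.println(parallelTime(100_000 * 60.0, 10_000, c)); // 605.0 s, about 10 min
        // 10,000 seconds of work on 10,000 hosts: overhead dominates.
        System.out.println(parallelTime(10_000.0, 10_000, c));       // 6.0 s, not the ideal 1 s
    }
}
```

The second case shows why fine-grained workloads such as virtual reality cannot profit from this architecture: the fixed per-task overhead exceeds the per-task work itself.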

The software for hosting or brokering such computations will be downloadable from the web site, as will the software for developing & deploying applications on the host/broker network. The web site will contain tutorials, demonstrations, and a repository where users can share their work. A mailing list will facilitate communication within the user community.

Network statistics (e.g., how many hosts are available and the average amount of time a host is available) will be gathered, aggregated, and displayed in quasi-real time from a web interface. Another visualization tool will gather/display interprocessor communication for actual computations. Seeing these communications "in action" will give insight into the application's decomposition/composition process, and visually reveal the communication patterns associated with the task scheduling and fault tolerance mechanisms. The use of multicast and JavaSpaces will be investigated in connection with these research goals.

Previous Talks

  • Friday, March 3rd:

    Yuan-Fang Wang

  • Friday, February 25th, at 9AM:

    ``Specification and Automated Verification of Concurrent Software Systems,'' by Tevfik Bultan

    I will talk about my recent work on model checking concurrent software systems. Model checking is an automated technique for analyzing concurrent systems. Research in this area involves developing efficient data structures which are capable of representing large state spaces. Using these data structures a model checker exhaustively searches the state space of a system to find errors. Another component of my research is developing verifiable specification languages for concurrent systems. The goal is to establish a relationship between specification languages and model checkers similar to the relationship between programming languages and compilers. You can find more information about these projects at http://www.cs.ucsb.edu/~bultan/
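    To illustrate what "exhaustively searches the state space" means, here is a toy explicit-state reachability check. This is a generic teaching sketch, not the speaker's tool or technique (which targets symbolic representations of large state spaces): two processes increment a shared counter without locking, and a breadth-first search over all interleavings finds the classic lost-update error.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Checker {
    // State encoding: s[0], s[1] are the two program counters;
    // s[2], s[3] are each process's local copy; s[4] is the shared counter.
    static List<int[]> successors(int[] s) {
        List<int[]> next = new ArrayList<>();
        for (int p = 0; p < 2; p++) {
            int[] t = s.clone();
            if (s[p] == 0) {                       // step 1: load shared into local
                t[2 + p] = t[4]; t[p] = 1; next.add(t);
            } else if (s[p] == 1) {                // step 2: store local + 1 back
                t[4] = t[2 + p] + 1; t[p] = 2; next.add(t);
            }                                      // pc == 2: process finished
        }
        return next;
    }

    // Breadth-first search over every reachable state; returns true if some
    // terminal state violates the property "shared counter == 2".
    static boolean findLostUpdate() {
        Deque<int[]> frontier = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        int[] init = {0, 0, 0, 0, 0};
        frontier.add(init);
        seen.add(Arrays.toString(init));
        while (!frontier.isEmpty()) {
            int[] s = frontier.poll();
            if (s[0] == 2 && s[1] == 2 && s[4] != 2) return true; // error state found
            for (int[] t : successors(s)) {
                if (seen.add(Arrays.toString(t))) frontier.add(t);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(findLostUpdate()); // true: some interleaving loses an update
    }
}
```

    The difficulty the abstract alludes to is that realistic systems have far too many states to enumerate this way, which is why the research focuses on data structures capable of representing large state spaces compactly.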

  • Friday, February 18th, at 1pm: ``Algorithms and Software for Sensitivity Analysis and Optimal Control,'' by Linda Petzold

    A wide variety of scientific and engineering problems can be described by systems of differential-algebraic equations (DAEs). In this talk we outline our current work on algorithms and software for sensitivity analysis and optimal control of large-scale DAE systems, with application to space vehicle trajectory control, chemical vapor deposition of high-temperature superconducting thin films, and design of a bioartificial artery.
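    For readers unfamiliar with the notation, a textbook sketch (not specific to the speaker's software): a DAE system with parameters p has the general form

    F(t, y, y', p) = 0,  y(t_0) = y_0(p),

    and the forward sensitivities s_i = ∂y/∂p_i satisfy the linearized system obtained by differentiating F with respect to p_i:

    (∂F/∂y') s_i' + (∂F/∂y) s_i + ∂F/∂p_i = 0.

    Sensitivity analysis solves these systems alongside the original DAE to quantify how the solution responds to parameter changes, which is the building block for the optimal control applications listed above.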

  • Friday, February 11th, at 1pm: ``Some Research Issues in Databases & Distributed Systems'' by Ambuj Singh
  • Friday, February 4th, at 1pm: ``Research in the Networking and Multimedia Systems Lab'' by Kevin Almeroth

    This research seminar will cover the latest topics in the NMSL. Review the group's WWW site at: http://www.nmsl.cs.ucsb.edu/

  • Friday, January 28, at 1pm: ``Overview of recent research on aggregation and summarization of data,'' by Amr El Abbadi
  • Friday, January 21, at 12:30pm: "You and Your Research" by Richard Hamming (videotaped lecture)

    This talk centers on Hamming's observations and research on the question "Why do so few scientists make significant contributions and so many are forgotten in the long run?" From his more than forty years of experience, thirty of which were at Bell Laboratories, he has made a number of direct observations, asked very pointed questions of scientists about what, how, and why they did things, studied the lives of great scientists and great contributions, and has done introspection and studied theories of creativity. The talk is about what he has learned in terms of the properties of individual scientists: their abilities, traits, working habits, attitudes, and philosophy.

    Richard Hamming (1915-1998) is best known for his pioneering work on error-correcting codes, his work on the numerical integration of differential equations, and the spectral window that bears his name. He received a number of awards, including the ACM Turing Award.

  • Friday, January 14, at 12:30pm: "Desires & Diversions" -- video-taped lecture by Allen Newell

    "What happens over a whole scientific career? Mine isn't over, but it's already 40 years long. What shape and even purpose can be given to such an endeavor? There are many styles of scientific lives...I will tell some of the story of my own total scientific endeavor...to shed light on how to live a science." With this introduction, Allen Newell begins a retrospection of his lifelong pursuit of a single scientific goal: understanding the nature of the human mind. He discusses when a scientific life really starts, how to make diversions and failures useful, and what to do with successes.

    Allen Newell had a productive life as a computer scientist. In addition to AI and cognitive science, he made contributions in programming languages, computer architecture, computer chess, and speech processing. He held a BS in physics and a PhD in industrial administration. He won the Turing Award jointly with Herb Simon. This was his last lecture.