This subproject is one of many research subprojects utilizing the resources provided by a Center grant funded by NIH/NCRR. The subproject and investigator (PI) may have received primary funding from another NIH source, and thus could be represented in other CRISP entries. The institution listed is for the Center, which is not necessarily the institution for the investigator.

Objectives

This R&D project has been expanded to include essentially all control and processing computers employed at five of the six beam lines within the PXRR (X8-C mostly handles its own computing, although we work with that group to keep things as uniform and standardized as possible). It includes centralized data-storage facilities and administrative servers, beam line servers and workstations at all of our experimental stations, additional processing/analysis/data-backup machines, and the networking infrastructure tying it all together. The guiding principle is to purchase small, standardized, commodity components that are inexpensive enough to be replaced at the end of their serviceable lifetime. Parallel objectives are to centralize systems management of this collection of mostly independent systems and to standardize the computing environment (hardware, software, and naming schemes) for users throughout our enterprise. This makes everything much more maintainable: parts are interchangeable, spares can be shared, and maintenance and configuration procedures are the same everywhere. From the user's perspective, it is convenient for users and staff to move from one experimental or computing station to another, since the computing environment is the same from one to the next. We work continually to increase the reliability, availability, and serviceability of critical components.

Results

The rather unusual plan, provided by the NCRR, of having a regular annual budget for computing hardware allows us essentially to keep up with demand for raw computing resources. We organize the components of our computing infrastructure into separate groups dedicated to beam line control, storage, processing, and administration. This approach has enabled a range of improvements: it simplifies the management of each machine, separates administrative and control systems from user resources, isolates embedded systems (motor control, detectors, etc.) on a separate, private network, and improves the interchangeability of similar systems. Rather than maintaining several different physical computers, one for each major service such as authentication, web service, and back-up monitoring, we are working to consolidate these services onto a pair of mirrored machines that will provide all of them quite reliably (sketched below).

Our storage resources total 15 TB of disk space, divided among 11 separate RAID units. Each beam line is flexibly assigned a primary and a secondary space, depending on demand and on the service requirements of the disk arrays, an arrangement that approaches virtualization of storage resources (also sketched below). All of this disk storage, together with our central servers, is installed in an equipment rack with dual uninterruptible power supplies. Each supply is fed by a separate circuit and feeds its own remotely controllable power strips, and each server straddles these power strips via dual power supplies. All combined, this provides extremely robust and serviceable power.
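As a concrete picture of the mirrored administrative pair described above, the Python sketch below shows one way a small set of services (authentication, web service, back-up monitoring) could prefer one member of the pair and fall back to its mirror if that member is unreachable. The host names, port numbers, and the simple TCP-reachability check are illustrative assumptions, not details taken from this report.

    # Illustrative sketch only: host names, ports, and the reachability
    # check are assumptions, not details from this report.
    import socket

    # Hypothetical mirrored pair of administrative servers.
    ADMIN_PAIR = ("admin1.px.example", "admin2.px.example")

    # Services to be consolidated onto the pair, with hypothetical ports
    # used here only for the reachability check.
    SERVICES = {
        "authentication": 389,      # e.g. LDAP
        "web": 80,
        "backup-monitoring": 8080,
    }

    def reachable(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def pick_server(service):
        """Prefer the first member of the pair; fall back to its mirror."""
        port = SERVICES[service]
        for host in ADMIN_PAIR:
            if reachable(host, port):
                return host
        return None  # both members of the pair are down

    if __name__ == "__main__":
        for name in SERVICES:
            print(name, "->", pick_server(name) or "unavailable")

Because either member can serve any of the services, one machine can be taken out for maintenance while the other carries the full load.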
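The flexible primary/secondary storage assignment described above can likewise be pictured as a small indirection table between the logical space a beam line writes to and the physical RAID unit currently backing it. This is a minimal sketch: the beam line names other than X25 and X29, the RAID-unit labels, and the paths are hypothetical.

    # Illustrative sketch only: unit labels, paths, and most beam line
    # names are hypothetical.

    # Physical RAID units (11 in the real system); labels are made up.
    RAID_UNITS = {f"raid{n:02d}": f"/export/raid{n:02d}" for n in range(1, 12)}

    # Flexible assignment of a primary and a secondary unit to each beam
    # line; the table can be re-pointed as demand or the service needs of
    # the arrays change, without altering the logical layout users see.
    ASSIGNMENT = {
        "X25":  {"primary": "raid01", "secondary": "raid02"},
        "X29":  {"primary": "raid03", "secondary": "raid04"},
        "X12B": {"primary": "raid05", "secondary": "raid06"},  # hypothetical
    }

    def data_path(beamline, role="primary"):
        """Physical directory currently backing a beam line's logical space."""
        unit = ASSIGNMENT[beamline][role]
        return f"{RAID_UNITS[unit]}/{beamline}/{role}"

    if __name__ == "__main__":
        print(data_path("X29"))               # primary space for X29
        print(data_path("X25", "secondary"))  # secondary space for X25

Because beam lines and users only ever see the logical assignment, a RAID unit can be serviced or reassigned by editing the table rather than reconfiguring each beam line, which is the sense in which the arrangement approaches storage virtualization.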
The entire computing resources for these six beam lines are connected by a dedicated, gigabit-rate local-area network with its own gigabit uplink out of the NSLS building to the BNL network core. The net result is a system that is more reliable for our users (and quickly recoverable from a failure) and more adaptable for the future, while being easier for our computing staff to maintain and advance further.

Plans

We will continue to make incremental improvements to the systems, with the emphasis this year on improving the reliability of the mass-storage system and on implementing convenient and secure remote access.

Significance

This system provides extremely cost-effective processing power, storage, and networking to the entire PXRR. The system is quite scalable, allowing us to add quanta of disk space, network bandwidth, and processing power fairly independently. It nicely absorbed the data-collection capacity of the Q315 detector system at X29, and we believe it will absorb the increased throughput at X25. It has sufficient standardization, virtualization, redundancy, and management capability to deliver very good reliability, availability, and serviceability. It is also extremely modular and adaptable, allowing us to accommodate not just growth, but change.