This subproject is one of many research subprojects utilizing the resources provided by a Center grant funded by NIH/NCRR. The subproject and investigator (PI) may have received primary funding from another NIH source, and thus could be represented in other CRISP entries. The institution listed is for the Center, which is not necessarily the institution for the investigator.

Objectives

This project represents core computing support, including essentially all control and processing computers employed at five of the six beamlines within the PXRR (X8-C mostly handles its own). It includes centralized data-storage facilities and administrative servers, beamline servers and workstations at all of our experimental stations, additional processing/analysis/data-backup machines, and the networking infrastructure tying it all together. The guiding principle is to purchase small, standardized, commodity components that are inexpensive enough to be replaced at the end of their serviceable lifetime. Parallel objectives are to centralize systems management of this collection of machines and to standardize the computing environment for users throughout our enterprise, including hardware, software, and naming schemes. The result is easy maintainability: since parts are interchangeable, spares can be shared, and maintenance and configuration procedures are the same everywhere. It is also convenient for users to move from one site to another, since the computing environment appears the same. Our original plan for the past year was for incremental improvements, with a focus on storage and remote access. A storm of DOE-mandated cyber-security activity, including a day-long 'standdown' during which the lab was cut off from the internet, consumed a major portion of our manpower and delayed much of our planned work.

Results

The rather unusual plan, provided by the NCRR, of having a regular annual budget for computing hardware allows us essentially to keep up with demand for raw computing resources.
We organize the computing infrastructure into separate groups dedicated to beamline control, storage, processing, and administration. Each beamline has a hidden computer for overall control, one seat where the user/operator may run and monitor the experiment, and usually two seats with high-power processors for data reduction. There are two sites on opposite sides of the NSLS x-ray ring, known as Cyber Cafés, each with four to six processing seats and an automatic DVD-writing machine. Our storage resources total 15 TB of disk space, divided among 11 separate RAID units; each beamline is flexibly assigned a primary and a secondary space. An equipment rack with dual uninterruptible power supplies houses all of this disk storage and the central servers for our gigabit-rate local-area network. It all provides robust service. On the other hand, because we are thinly staffed throughout the PXRR, the year of cyber-security emergencies has delayed our work on storage improvements, remote access, and other refinements. We are forced to be reactive rather than rational. Fortunately our experimental stations rarely suffer, but we are not advancing in the way we would like. Here are some of the things we have been required to do:

1. Installed the 'ordo' (auditing) client on all Linux/Unix systems.
2. Joined all Windows systems to BNL's Microsoft Windows Domain and configured them with Microsoft's Systems Management Server client, as mandated.
3. Created Cyber Security Program Plan 'Subsystem' documentation for PXRR systems (a 24-page document).
4. Our system manager, Matt Cowan, contributed significantly to the NSLS Subsystem documentation, necessary because we are a major NSLS user group.
5. Cowan worked with the NSLS and BNL's Cyber Security Advisory Council to develop 'variance' solutions for 'deficiencies' noted during the standdown.
The variance process started out as a complete duplication and expansion of all the work that went into the subsystem documentation, but it recently collapsed under its own weight; it now takes our existing subsystem documentation into account and has become much more bearable.
6. Significant time was spent dealing with a DOE penetration-testing team (the Red Team) and its aftermath. They were able to get into our systems through a very poorly configured machine belonging to a nearby research group.
7. We still suffer occasional system failures attributable to 'security scans' by BNL's cyber-security teams.
8. We must make other substantial efforts to satisfy DOE's demand for centralized administration of our systems.

The only silver lining to this dark cloud is that it has forced us to create a new system that handles services such as our web site and database, although these were lower priorities among our operational goals.

Plans

We will attempt to foresee places where incremental improvements to the systems will be useful. A serious developmental objective for next year will be to accomplish remote beamline operation while somehow remaining cyber-secure. Unfortunately, our plans must include responding to unfunded DOE cyber-security mandates. One of these is to install a system for user authentication called Centrify, which may in fact eventually provide us with a robust means of maintaining easily manageable system accounts for each of our experimental groups, instead of our current practice of using a single, shared operator account. The objective will be to improve our own cyber-security within reasonable limits while at the same time working on our own scientifically urgent objectives.

Significance

Our computing infrastructure provides extremely cost-effective processing, storage, and networking to the entire PXRR. The system has proven to be nicely scalable and adaptable as our needs change.
Continued refinement will only make it better and reduce how much attention it demands. However, with the goals we would like to achieve (too many details to mention here) and the current, ongoing DOE security initiatives, we find our manpower stretched and the target dates for the items above slipping further away.