
News
Here is all the latest news from the UCF ARCC:
UCF Research Computing Day
- Details
- Written by R. Paul Wiegand
- Published: 07 March 2017
The UCF Advanced Research Computing Center (ARCC) announces UCF Research Computing Day 2017. This half-day event will be held on Friday, April 7th in room 101 of the Harris Engineering Center. It will cover UCF's 3,256-core high performance computing system (known as Stokes) available for research use, the UCF Research Network for high-speed data transfer, access to national resources, and even upcoming visualization resources. In addition, two current users will present their research and how high performance computing helps.
The ARCC also hopes to hear from attendees about future needs and considerations in the area of research computing.
Registration and more information (including a full agenda) are available at https://arcc.ist.ucf.edu/rcday
Changes due to Fall 2016 Maintenance Cycle
- Details
- Written by Super User
- Published: 19 December 2016
Greetings Stokes users,
Stokes has returned to operation! Please remember that we have two such maintenance downtimes per year, one in late Fall (the one we just completed) and one in late Spring.
Most of what was done this cycle should not affect users. However, there is one change to the way Unix groups are handled that deserves particular attention. Please take a moment to read over the changes, which include the following:
- SLURM was upgraded to version 16.05.7.
- The latest Intel Composer (2017) and GCC (6.2.0) compilers were installed.
- The latest OpenMPI (2.0) was built using the latest build tools (Intel Composer 2017 and GCC 6.2.0).
- Some of the Dell nodes were repaired.
- The way Unix groups are handled was changed. See below for more details.
Previously, each user had an account and each PI's user account had a corresponding group. Students and staff that worked with a given PI were in his or her personal group. Now it is different and works as follows:
- Every user (PI or otherwise) has his or her own group and that is one's default group. For instance, if I had a student "Tom Baker" then he would have a user account "tbaker" and a default group "tbaker".
- PIs have an additional group called pi.<PI username>. For instance, in addition to my default group "pwiegand" (in which only I am a member), I also have a group "pi.pwiegand" (in which my students and I are members). Thus, students and staff of each PI are in their own group and *also* in his or her corresponding pi group(s).
- All files in /groups/ have been re-grouped to "pi.<PI username>", regardless of what they were before.
- All files in each /home/<user> directory have been re-grouped to that user's private group. **THIS COULD AFFECT USERS** who were using directories in their user area as a share point. We suggest you use the shared /groups area rather than your personal user directories for this; however, if you still want to do this, you must change the group to "pi.<PI username>".
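For users who shared files from their home directory, a quick way to check and repair a directory's group is sketched below. The directory name `~/shared_data` and the group `pi.pwiegand` are only illustrations (the latter taken from the example above); substitute your own paths and your PI's group:

```shell
# List the groups you belong to; members of a PI's lab should
# see a pi.<PI username> entry here after the change.
id -Gn

# Check the current group ownership of a directory you were sharing.
ls -ld ~/shared_data

# Re-group the directory and everything inside it to the PI group
# so that group members regain access ("pi.pwiegand" is the example PI group).
chgrp -R pi.pwiegand ~/shared_data

# Make sure group members can read files and traverse directories.
chmod -R g+rX ~/shared_data
```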
The ARCC staff appreciate your patience and thank you for using the resources! If you have any questions or concerns, please contact us!
Paul Wiegand & Glenn Martin
Network outage, Tuesday, Dec. 6, 1am-5am
- Details
- Written by R. Paul Wiegand
- Published: 22 November 2016
Fall maintenance cycle downtime, Dec. 12 - Dec. 19
- Details
- Written by R. Paul Wiegand
- Published: 31 October 2016
Stokes and Newton will be taken down in mid-December as part of our twice-yearly routine maintenance cycle. Specifically, the clusters will be unavailable from the morning of Monday, December 12 through the morning of Monday, December 19.
Changes made during downtime will be minimal. The most significant change will be a slight change to the way groups are handled at the Linux level. We will provide more detail in the change log when we bring the system back online.
Recall that we now routinely bring the system down twice a year, once in late Fall and once in late Spring. We will notify users in advance of such downtimes, but we recommend you build these expectations into your workflow. Though we anticipate no data loss during this time, it is never a bad idea to back up your materials, so we suggest you use this opportunity to copy salient data and code off of Stokes prior to the downtime.