Software Infrastructure
Updated on Wed, 2022-06-22 08:22 by testadmin. Originally created by jeromel on 2005-05-16 20:39.
On the menu today ...
- General Information
- Infrastructure & Software: software releases, sanity, ...
- Problem reporting: general RCF information and problem reporting, STAR problem reporting
- General Tools
- Web Sanity, Software & Documentation tools
- HPSS tools & services
General Information
SOFI stands for SOFtware and Infrastructure. It includes any topic related to code standards, the tools for compiling your code, and problems with the base code and infrastructure. SOFI also addresses (or tries to address) your needs in terms of monitoring and easily managing activities and resources in the STAR environment.
- Discussion list: starsoft-l@lists.bnl.gov (old: starsofi-hn@www.star.bnl.gov)
- Web archive
- RCF liaison meeting support documents
- I/O performance of some of our hardware (or hardware we have tested).
Infrastructure & Software
- Current software releases page
- A tutorial exists on Setting up your computing environment, covering what is defined and how to use it ...
- See the Autobuild & Code Sanity page, where you will find the AutoBuild, Insure++ and profiling compilation results, as well as information about valgrind, Jprof etc ...
- If you are searching for quick-start documentation, see Batch system, resource management system, ...
- General RCF problems should be reported using the Computing Facility Issue reporting system (RT). You should NOT use this system to report STAR-specific problems.
- To report STAR-specific problems, use the STAR Request Tracking (bug and issue tracker) system described below.
Submitting a problem (bug), help request or issue to the Request Tracking system using Email
You can always submit a report to the bug tracking system by sending an Email directly. There is no need for a personalized account, and using the Web interface is not mandatory: for each BugTracking category, an equivalent @www.star.bnl.gov mailing list exists.
The currently available queues are:
- bugs-high: problem with ANY STAR software that needs to be fixed without delay
- bugs-medium: problem with ANY STAR software that must be fixed for the next release
- bugs-low: problem with ANY STAR software that should be fixed for the next release
- comp-support: general computing operation support (user, hardware and middleware provisioning)
- issues-infrstruct: any infrastructure issue (general software and libraries, tools, network)
- issues-scheduler: issues related to the SUMS project (STAR Unified Meta-Scheduler)
- issues-xrootd: issues related to (X)rootd distributed data usage
- issues-simu: issues related to simulation
- grid-general: STAR VO general Grid support (job submission, infrastructure, components, testing problems etc ...)
- grid-bnl: STAR VO, BNL Grid operation support
- grid-lbl: STAR VO, LBNL Grid operation support
- wishlist: use it for suggesting what you would wish to see soon, what would be nice to have etc ...
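Since each queue maps to a mailing list, a ticket can also be opened straight from the command line. A minimal sketch, assuming a standard mail client on the interactive nodes (the subject and the report.txt file are hypothetical):
% mail -s "root4star segfault in DEV" bugs-medium@www.star.bnl.gov < report.txt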
You may use the guest account for accessing the Web interface. The magic word is here.
- To create a ticket, select the queue (drop-down menu after the create-ticket button). Queues are currently sorted by problem priority; select the appropriate level. A wishlist queue has been created for your comments and suggestions. After the queue is selected, click on the create-ticket button and fill the form. Please do not forget the usual information, i.e. the result of STAR_LEVELS and of uname -a AND a description of how to reproduce the problem (see the sketch after this list).
- If you want to request a private account instead of using the guest account, send a message to the wishlist queue. There are two main reasons for requesting a personalized account:
- If you are planning to be an administrator or a watcher of the bug tracking system (that is, receive tickets automatically, take responsibility for solving them etc ...) you MUST have a private account.
- If you prefer to see the summary and progress of your own submitted tickets at login instead of seeing all tickets submitted under the guest account, you should also ask for a private account.
- At login, the left-side panels show the tickets you have requested and the tickets you own. The right panel shows the status of all queues. Having a private account set up does NOT mean that you cannot browse other users' tickets; it only affects the left-panel summary.
- To find a particular bug, click on search and follow the instructions.
- Finally, if you would like a new queue created for a particular purpose (sub-system specific problems), feel free to ask for such a queue to be set up.
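As noted above, tickets should include the standard environment information. A quick way to capture it in one file for pasting into the form (a sketch, assuming a STAR login environment where the STAR_LEVELS command is available):
% (STAR_LEVELS; uname -a) > ticket-info.txt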
General Tools
Data location tools
- Several tools exist to locate data both on disk and in HPSS. Some tools are available from the production page; we will list here only the tools we are developing for the future.
- FileCatalog (command line interface get_file_list.pl and Perl module interface).
- User manual
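A typical FileCatalog query sketch (the production tag and conditions are hypothetical; see the user manual for the full keyword dictionary):
% get_file_list.pl -keys 'path,filename' -cond 'production=P08ic,filetype=daq_reco_MuDst,storage=HPSS' -limit 10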
Resource Monitoring tools
- Nova shows jobs running per user, per machine etc ...
- STAR disk space overall
- GPFS occupancy
- Farm monitoring - Ganglia Reports
- STAR Ganglia reports - these pages require the protected or the ganglia password.
- STAR Offline Ganglia monitoring a tool monitoring global resources.
- The Online Ganglia monitoring page
- RCF/STAR Ganglia reports - you need to use your RCF/Kerberos credentials for the below monitoring
- Queue Monitoring
- Users with Heavy NFS (GPFS) IO traffic [coming soon]
- Condor Status and Information : Occupancy plot, Usage plot, Pool/Queue plot and Running users and shares statistics.
- (Old and may break soon: pool usage monitoring and condor pool occupancy and general aggregate statistics)
- CRS jobs monitoring for STAR
- Nagios Based Farm Alert
- Cacti network traffic plots (to/from a few selected nodes including DB and Xrootd)
Browsers
- Database Browser
- Current RunLog Browser
- 2008 RunLog Browser
- 2007 RunLog Browser
- 2006 RunLog Browser
- ...
- Fast-Offline Browser
Web Sanity, Software & documentation tools
Web based access and tools
Web Sanity
- You can consult the server's log by using this cgi.
- Usage statistics using awstats.
- Status interface includes: perl-status, serv-status, serv-info.
Software & documentation auto-generation
- Our STAR Software CVS Repositories browser
Allows browsing the full offline and online CVS repositories, with listings showing days since last modification, modifier, and log message of last commit; display and download (checkout) access to code; access to all file versions and tags; diff'ing between consecutive or arbitrary versions; direct file-level access to the cross-referenced presentation of a file, ... You can also sort the listings by any of these fields.
- Doxygen Code documentation (what is already doxygenized) and the User documentation (a quick startup ...)
Our current Code documentation is generated using the doxygen generator. Two utilities exist to help you with this documentation scheme:
- doxygenize is a utility which takes as argument a list of header files and modifies them to include a "startup" doxygen tag documentation. It tries to guess the comment block, the author and the class name based on content. The current version also documents struct and enum lists. You are INVITED TO CHECK the result before committing anything. I have tested it on several class headers but there is always the exception where the parsing fails ...
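For instance, to seed doxygen tags in a couple of headers (the file names are hypothetical):
% doxygenize StMyMaker.h StMyOtherMaker.h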
- An interface to doxygen named doxycron.pl was created and installed on our Linux machines so that users can generate the documentation by themselves for checking purposes. That same generator interface is used to produce our Code documentation every day, so a simple convention has been chosen to accomplish both tasks. But why doxycron.pl instead of directly using doxygen? If you are a doxygen expert, the answer is 'indeed, why?'. If not, I hope you will appreciate that doxycron.pl not only takes care of everything for you (like creating the directory structure, a default actually-functional configuration file, safely creating a new documentation set etc ...) but also performs a few tasks you would normally have to do yourself when using the base doxygen tools (index creation, sorting of run-time errors etc ...). This being said, let me describe this tool now ...
The syntax for doxycron.pl is
% doxycron.pl [-i] PathWhereDocWillBeGenerated Path(s)ToScanForCode Project(s)Name SubDir(s)Tag
The arguments are:
- -i disables the doxytag execution, a useless pass if you only want to test your documentation.
- PathWhereDocWillBeGenerated is the path where the documentation tree will be created, referred to as TARGETD.
- Path(s)ToScanForCode is the path where the sources are, referred to as INDEXD (default is the comma-separated list /afs/rhic.bnl.gov/star/packages/dev/include,/afs/rhic.bnl.gov/star/packages/dev/StRoot).
- Project(s)Name is a project name (list), referred to as PROJECT (default is the comma-separated list include,StRoot).
- SubDir(s)Tag is an optional tag (list) for an extra tree level, referred to as SUBDIR. The default is the comma-separated list 'include, '. Note that the last element is null, i.e. "". When encountered, the null portion of a SUBDIR list tells doxycron.pl to generate a searchable index based on all previous non-null SUBDIR entries in the list.
Note that if one uses lists instead of single values, then ALL arguments MUST be lists and the first 3 are mandatory.
To pass an empty argument in a list, you must use quotations as in the following example
% doxycron.pl /star/u/jeromel/work/doxygen /star/u/jeromel/work/STAR/.$STAR_HOST_SYS/DEV/include,/star/u/jeromel/work/STAR/DEV/StRoot include,StRoot 'include, '
In order to make it clear what the conventions are, let's describe a step by step example as follow:
Example 1 (simple / brief explanation):
% doxycron.pl `pwd` `pwd`/dev/StRoot StRoot
would create a directory dox/ in `pwd` containing the code documentation generated from the relative tree dev/StRoot for the project named StRoot. Likely, this (or similar) will generate the documentation you need.
Example 2 (fully explained):
% doxycron.pl /star/u/jeromel/work/doxygen /star/u/jeromel/work/STAR/DEV/StRoot Test
In this example, I scan any source code found in my local cvs checked-out area /star/u/jeromel/work/STAR/DEV starting from StRoot. The output tree structure (where the documentation will end up) is requested to be in TARGETD=/star/u/jeromel/work/doxygen. In order to accomplish this, doxycron.pl will check and do the following:
- Check that the doxygen program is installed.
- Create (if it does not exist) the $TARGETD/dox directory where everything will be stored and the tree will start.
- Search for a $TARGETD/dox/$PROJECT.cfg file. If it does not exist, a default configuration file will be created. In our example, the name of the configuration file defaults to /star/u/jeromel/work/doxygen/dox/Test.cfg. You can play with several configuration files by changing the project name. However, changing the project name will not place the documents in a different directory tree; you have to play with the $SUBDIR value for that.
- The $SUBDIR variable is not used in our example. If I had chosen it to be, let's say, /bof, the documentation would have been created in $TARGETD/dox/bof instead, but the template is still expected to be $TARGETD/dox/$PROJECT.cfg.
The configuration file should be considered a template file, not a real configuration file. Any item appearing with a value like Auto-> or Fixed-> will be replaced on the fly by the appropriate value before doxygen is run. This keeps the conventions tidy and clean. You actually do not have to think about it either, it works :) ... If it does not, please let me know. Note that the temporary configuration file will be created in /tmp on the local machine and left there after running.
What else does one need to know? The way doxycron.pl works is the safest I could think of. Each new documentation set is re-generated from scratch, that is, using temporary directories, renaming old ones and deleting very old ones. After doxycron.pl has completed its tasks, you will end up with the directories $TARGETD/dox$SUBDIR/html and $TARGETD/dox$SUBDIR/latex. The result of the preceding execution of doxycron.pl will be in directories named html.old and latex.old.
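For illustration, after two consecutive runs of Example 2 the target area would look something like this (a sketch following the conventions above; $SUBDIR is unset):
% ls /star/u/jeromel/work/doxygen/dox
Test.cfg  html/  html.old/  latex/  latex.old/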
One thing will not work for users though: the indexing. The installation of the indexing mechanism in doxygen is currently not terribly flexible, and fixed values were chosen so that clicking on the Search index link will go to the cgi searching the entire main documentation pages. As a last note, doxygen understands ABSOLUTE path names only; doxycron.pl will therefore die out if you try to use relative paths as arguments. Just as a reminder, /titi/toto is an absolute path while things like ./ or ./tata are relative paths.
HPSS tools & services
- How to retrieve files from HPSS. Please, use the Data Carousel and ONLY the DataCarousel.
Note: DO NOT use hsi to retrieve files from HPSS - this access mode locks tape drives for exclusive use (only you, not shared with any other user) and has dire impacts on STAR's operations, from production to data restores. If you are caught using it, you will be banned from accessing HPSS (your privilege to access HPSS resources will be revoked).
Again - please, use the Data Carousel.
- Data Carousel Quick Start/Tutorial
- Accounting interface (see the result of your requests as they are processed)
- DataCarousel Input file generator for raw files (valid only for what the FastOffline system knows about)
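A minimal Data Carousel submission sketch, assuming the hpss_user.pl front-end covered in the Quick Start above (the list file and paths are hypothetical; each input line maps an HPSS file to a local destination):
% cat mylist.lis
/home/starsink/raw/daq/2007/hypothetical.daq /star/data01/mydir/hypothetical.daq
% hpss_user.pl -f mylist.lis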
- Archiving into HPSS
Several utilities exist. You can find the references on the RCF HPSS Service page. Those utilities will bring you directly into the Archive class of service. Note that the DataCarousel can retrieve files from ANY class of service. The preferred mode for archiving is the use of htar.
NOTE: You should NOT abuse those services to retrieve massive amounts of files from HPSS (your operation will otherwise clash with other operations, possibly stalling or slowing down data production). Use the DataCarousel instead for massive file retrieval. Abuse may lead to suppression of access to the archival service.
- For rftp, history is in a Hypernews post, Using rftp. If you save individual files and have lots of files in a directory, please avoid causing a meta-data lookup. A meta-data lookup happens when you 'ls -l'. As a reminder, please keep in mind that HPSS is made neither for small files nor for large numbers of files in a directory, but for massive large-file storage (on 2007/10/10 for example, a user crashed HPSS with a single 'ls -l' lookup of a 3000-file directory). In that regard, rftp is most useful if you first create an archive of your files yourself (tar, zip, ...) and push the archive into HPSS afterward. If this is not your mode of operation, the preferred method is htar, which provides a command-line direct HPSS archive creation interface.
- htar is the recommended mode for archiving into HPSS. This utility provides a tar-like interface allowing several files or an entire directory tree to be bundled together. Note the syntax of htar and especially the extract below from this thread:
If you want the file to be created in /home/<username>/<subdir1> and <subdir1> does not exist yet, use
% htar -Pcf /home/<username>/<subdir1>/<filename> <source>
If you want the file to be created in /home/<username>/<subdir2> and <subdir2> already exists, use
% htar -cf /home/<username>/<subdir2>/<filename> <source>
Please consult the help on the web for more information about htar.
- File size is limited to <55 GB; if this limit is exceeded, you will get Error -22. In this case, consider using split-tar. A simple example of how to use split-tar is:
% split-tar -s 55G -c blabla.tar blabla/
This will create at least blabla-000.tar, followed by the next sequences (001, 002, ...), each of 55 GBytes, until all files from directory blabla/ are packed. The magic 55 G suggested herein and in many posts works for any generation of drive from the past decade, but a limit of 100-150 GB should also work on most media at BNL as of 2016. See this post for a summary of past pointers.
- You may make a split-tar archive cross-compatible with htar by creating the htar indexes afterward. To do this, use a command such as
% htar -X -E -f blabla-000.tar
This will create blabla-000.tar.idx, which you will need to save in HPSS alongside the archive.
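To verify an archive before relying on it, the tar-like listing mode can be used (a sketch; -tvf lists the archive's table of contents):
% htar -tvf blabla-000.tar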