The pages here relate to the data readiness sub-group of the S&C team. This area comprises calibration, database, and quality assurance.
Please consult the responsibility chart page (an access-restricted node; log in to view it).
STAR Calibrations
In addition to the child pages listed below, please note the following:
For the run in winter 2006, the plan is to take primarily pp data. This may lead to different requirements than in the past.
There is some question as to whether certain tasks need to be done this year because the detector was not moved during the shutdown period. Skipping any such task should be explicitly justified first!
Previous runs:
Using tracks fit with SVT points plus a primary vertex alone, we can self-align the SVT using the residuals to the fits. This document explains how this can be done; the method only works when the SVT is already rather well aligned and only small-scale alignment calibration remains. The technique explained here also allows for calibration of the hybrid drift velocities.
TPC Calibration & Data-Readiness Tasks:
Notes:
* "Run", with a capital 'R', refers to a year's Run period, e.g. Run 10)
* Not all people who have worked on various tasks are listed as they were recalled only from (faulty) memory and only primary persons are shown. Corrections and additions are welcome.
To better understand the effect of distortions on momentum measurements in the TPC, the attached sagitta.pdf file shows the relationship between a track's sagitta and its transverse momentum.
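As a reminder of the underlying relation (a standard formula, not taken from the attached file): for a track of radius R measured over a chord of length L in a magnetic field B, the sagitta is

s \simeq L^2/(8R) = 0.3 B L^2/(8 p_T),

with B in Tesla, L and s in meters, and p_T in GeV/c, so a distortion of the measured sagitta translates directly into a relative error on the measured p_T.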
[Figures: Δ(r-φ) vs. r and φ, and Δ(r) vs. r and φ, shown at padrow 13 (with lines indicating the locations of rows 13 and 14) and at padrow 40 (with lines indicating the locations of rows 40 and 41).]
This is a recipe for the Run XI dE/dx calibration by Yi Guo.
In 2012, the procedure documentation was updated, including global T0 calibration:
Below are the older instructions.
The procedure here is basically to calculate two beamlines using only west tpc data and only east tpc data independently and then adjust the XTWIST and YTWIST parameters so that the east and west beamlines meet at z=0. The calibration needs to be done every run for each B field configuration. The obtained parameters are stored in the tpcGlobalPosition table with four different flavors: FullmagFNegative, FullMagFPositive, HalfMagFPositive and HalfMagFNegative.
To calculate the beamline intercept the refitting code (originally written by Jamie Dunlop) is used. An older evr-based version used by Javier Castillo for the 2005 heavy ion run can be found at ~startpc/tpcwrkExB_2005, and a version used for the 2006 pp run that uses the minuit vertex finder can be found at ~hjort/tpcwrkExB_2006. Note that for the evr-based version the value of the B field is hard coded at line 578 of pams/global/evr/evr_am.F. All macros referred to below can be found under both of the tpcwrkExB_200X directories referred to above, and some of them under ~hjort have been extensively rewritten.
Step-by-step outline of the procedure:
1. If using evr set the correct B field and compile.
2. Use the "make_runs.pl" script to prepare your dataset. It will create links to fast offline event.root files in your runsXXX subdirectory (create it first, along with outdirXXX). The script will look for files that were previously processed into the outdirXXX directory and skip over them.
3. Use the "submit.pl" script to submit your jobs. It has advanced options, but the standard usage is "submit.pl rc runsXXX outdirXXX", where "rc" indicates to use the code for reconstructed real events. The jobs will create .refitter.root files in your outdirXXX subdirectory.
4. Next you create a file that lists all of the .refitter.root files. A command something like this should do it: "ls outdirFF6094 | grep refitter | awk '{print "outdirFF6094/" $1}' > outdirFF6094/root.files"
5. Next you run the make_res.C macro (in StRoot/macros). Note that the input and output files are hard coded in this macro. This will create a histos.root file.
6. Finally you run plot_vtx.C (in StRoot/macros) which will create plots showing your beamline intercepts. Note that under ~hjort/tpcwrkExB_2006 there is also a macro called plot_diff.C which can be used to measure the offset between the east/west beams more directly (useful for pp where data isn't as good).
Once you have made a good measurement of the offsets an iterative procedure is used to find the XTWIST and YTWIST that will make the offset zero:
7. In StRoot/StDbUtilities/StMagUtilities.cxx change the XTWIST and YTWIST parameters to what was used to process the files you analyzed in steps 1-6, and then compile.
8. Run the macro fitDCA2new.C (in StRoot/macros). This macro comes from Jim Thomas, and you might want to consult with him to see if he has a newer, better version. An up-to-date version as of early 2006 is under ~hjort/tpcwrkExB_2006. When you run this macro it will first ask for a B field and the correction mode, which is 0x20 for this correction. Then it will ask for pt, rapidity, charge and Z0 position. Only the Z0 position is really important for our purposes here, and typical values to use would be "1.0 0.1 1 0.001". The code will then report the VertexX and VertexY coordinates, which we will call VertexX0 and VertexY0 in the following steps.
9. If we now take VertexX0 and VertexY0 and our measured beamline offsets we can calculate the values for VertexX and VertexY that we want to obtain when we run fitDCA2new.C - call them VertexX_target and VertexY_target:
VertexX_target = (West_interceptX - East_interceptX)/2 + VertexX0
VertexY_target = (West_interceptY - East_interceptY)/2 + VertexY0
The game now is to modify XTWIST and YTWIST in StMagUtilities, recompile, rerun fitDCA2new.C and obtain values for VertexX and VertexY that match VertexX_target and VertexY_target (within 10 microns for heavy ion runs in the past).
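For bookkeeping, the target computation and convergence check of steps 8 and 9 amount to the following minimal sketch (the function and variable names are hypothetical, not part of any STAR macro; the inputs are the intercepts from plot_vtx.C/plot_diff.C and the vertex coordinates reported by fitDCA2new.C, assumed to be in cm):

// Hypothetical helper illustrating the step-8/9 bookkeeping; all inputs assumed in cm.
#include <cmath>
#include <cstdio>

bool twistConverged(double westX, double eastX,            // east/west beamline intercepts at z=0
                    double westY, double eastY,
                    double vertexX0, double vertexY0,      // fitDCA2new.C output with the original XTWIST/YTWIST
                    double vertexX,  double vertexY)       // fitDCA2new.C output after the latest XTWIST/YTWIST change
{
  const double targetX = (westX - eastX)/2.0 + vertexX0;   // VertexX_target
  const double targetY = (westY - eastY)/2.0 + vertexY0;   // VertexY_target
  const double tolerance = 10.0e-4;                        // 10 microns, expressed in cm
  std::printf("targets: VertexX_target=%f  VertexY_target=%f\n", targetX, targetY);
  return std::fabs(vertexX - targetX) < tolerance && std::fabs(vertexY - targetY) < tolerance;
}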
10. Once you have found XTWIST and YTWIST parameters you are happy with they can be entered into the db table tpcGlobalPosition as PhiXZ and PhiYZ.
However - IMPORTANT NOTE: XTWIST = 1000 * PhiXZ , but YTWIST = -1000 * PhiYZ.
NOTE THE MINUS SIGN! What is stored in the database is PhiXZ and PhiYZ; XTWIST and YTWIST are what appear in the log files.
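In code form, the conversion is trivial but the sign is easy to get wrong, so a sketch (hypothetical helpers, not part of any STAR macro):

// XTWIST/YTWIST (log files) vs. PhiXZ/PhiYZ (tpcGlobalPosition table); note the minus sign on Y.
double phiXZfromTwist(double xtwist) { return  xtwist/1000.0; }  // XTWIST =  1000 * PhiXZ
double phiYZfromTwist(double ytwist) { return -ytwist/1000.0; }  // YTWIST = -1000 * PhiYZ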
Enter the values into the db using AddGlobalPosition.C and a file like tpcGlobalPosition*.C. To check the correction you either need to use files processed in fast offline with your new XTWIST and YTWIST values or request (re)processing of files.
Q: I am completely new to databases, what should I do first?
A: Please read this FAQ list and the database API documentation:
Database documentation
Then, please read the introductory page linked here (an access-restricted node).
Don't forget to log in; most of the information is STAR-specific and protected. If our documentation pages are missing some information (that's possible), please ask questions on the db-devel mailing list.
Q: I think I've encountered a database-related bug; how can I report it?
A: Please report it using the STAR RT system (create a ticket), or send your observations to the db-devel mailing list. Don't hesitate to send ANY db-related questions to the db-devel mailing list!
Q: I am a subsystem manager, and I have questions about a possible database structure for my subsystem. Whom should I talk to about this?
A: Dmitry Arkhipkin is the current STAR database administrator. You can contact him via email or phone, or just stop by his office at BNL:
Phone: (631)-344-4922
Email: arkhipkin@bnl.gov
Office: 1-182
Q: Why do I need the API at all, if I can access the database directly?
A: There are a few points to consider:
a) we need a consistent conversion of data sets from the storage format to C++ and Fortran;
b) our data formats change with time: we add new structures and modify old ones;
c) direct queries are less efficient than API calls: no caching, no load balancing;
d) direct queries mean more copy-paste code, which generally means more human errors.
In short, we need the API to enable schema evolution, data conversion, caching, and load balancing.
Q: Why do we need all those databases?
A: STAR has lots of data, and its volume is growing rapidly. To operate efficiently, we must use a proven solution suitable for large data-warehousing projects – that's why we have this setup; there is simply no part of it we can safely ignore without an overall performance penalty.
Q: It is so complex and hard to use, I'd stay with plain text files...
A: We have a clean, well-defined API for both the Offline and FileCatalog databases, so you don't have to worry about internal db activity. Most db usage examples are only a few lines long, so it really is easy to use. The documentation (in Drupal) is being improved constantly.
Q: I need to insert some data to database, how can I get write access enabled?
A: Please send an email with your rcas login and the desired database domain (e.g. "Calibrations/emc/[tablename]") to arkhipkin@bnl.gov (or the current database administrator). Write access is not for everyone, though - make sure that you are either the subsystem coordinator or have the proper permission for such a data upload.
Q: How can I read some data from database? I need simple code example!
A: Please read the linked example page (access-restricted; log in to view it).
Q: How can I write something to database? I need simple code example!
A: Please read the linked example page (access-restricted; log in to view it).
Q: I'm trying to set a '001122' timestamp, but I cannot get records from the db; what's wrong?
A: In C++, numbers starting with '0' are octal, so 001122 is really translated to 594! So, if you need to use the '001122' timestamp (or any timestamp with leading zeros), it should be written simply as '1122', omitting all leading zeros.
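A minimal C++ illustration of the pitfall:

// Leading zeros make an integer literal octal in C/C++, silently changing the value.
#include <iostream>

int main() {
  int withLeadingZeros = 001122;  // octal literal: 1*512 + 1*64 + 2*8 + 2 = 594
  int intended         = 1122;    // what was actually meant: drop the leading zeros
  std::cout << withLeadingZeros << " vs " << intended << std::endl;  // prints "594 vs 1122"
  return 0;
}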
Q: What time zone is used for database timestamps? I see EDT and GMT being used in RunLog...
A: All STAR databases use GMT timestamps, or UNIX time (seconds since epoch, no time zone). If you need to specify a date/time for a db request, please use a GMT timestamp.
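For example, to obtain the seconds-since-epoch value for a GMT date from C++ (a minimal sketch; timegm() is a glibc/BSD extension, which should be available on the Linux machines at RCF):

// Convert a GMT (UTC) calendar date into a UNIX timestamp, independent of the local time zone.
#include <ctime>
#include <cstdio>

int main() {
  std::tm t = {};              // all fields zero-initialized (00:00:00)
  t.tm_year = 2003 - 1900;     // years since 1900
  t.tm_mon  = 11 - 1;          // months are counted from 0
  t.tm_mday = 14;
  std::time_t gmt = timegm(&t);        // interprets the struct as GMT/UTC
  std::printf("%ld\n", (long)gmt);     // 1068768000 for 2003-11-14 00:00:00 GMT
  return 0;
}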
Q: We are told that we need to document our subsystem's tables. I don't have the privilege to create new pages (or our group has another person responsible for Drupal pages); what should I do?
A: Please create a blog page with the documentation - every STAR user has this ability by default. The blog page can be added to the subsystem documentation pages later (the webmaster can do that).
Q: Which file(s) is used by Load Balancer to locate databases, and what is the order of precedence for those files (if many available)?
A: The files searched by the LB are, in order of precedence:
1. $DB_SERVER_LOCAL_CONFIG env. var., which should point to a new-LB-version schema xml file (set by default);
2. $DB_SERVER_GLOBAL_CONFIG env. var., which should point to a new-LB-version schema xml file (not set by default);
3. $STAR/StDb/servers/dbLoadBalancerGlobalConfig.xml : fallback for the LB, new schema expected.
If no usable LB configuration has been found yet, the following files are used:
1. $STDB_SERVERS/dbServers.xml - old schema expected;
2. $HOME/dbServers.xml - old schema expected;
3. $STAR/StDb/servers/dbServers.xml - old schema expected.
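A minimal sketch of that precedence (purely illustrative; this is not the actual StDbLib implementation and the helper names are hypothetical):

// Hypothetical illustration of the lookup order listed above; the real logic lives in StDbLib.
#include <cstdlib>
#include <fstream>
#include <string>
#include <vector>

static bool usable(const std::string& path) { return std::ifstream(path.c_str()).good(); }

std::string findLoadBalancerConfig() {
  const char* star   = std::getenv("STAR");
  const char* local  = std::getenv("DB_SERVER_LOCAL_CONFIG");   // 1. new schema (set by default)
  const char* global = std::getenv("DB_SERVER_GLOBAL_CONFIG");  // 2. new schema (not set by default)
  std::vector<std::string> candidates;
  if (local)  candidates.push_back(local);
  if (global) candidates.push_back(global);
  if (star)   candidates.push_back(std::string(star) + "/StDb/servers/dbLoadBalancerGlobalConfig.xml"); // 3.
  // Fallback: old dbServers.xml schema, in the order listed above.
  if (std::getenv("STDB_SERVERS")) candidates.push_back(std::string(std::getenv("STDB_SERVERS")) + "/dbServers.xml");
  if (std::getenv("HOME"))         candidates.push_back(std::string(std::getenv("HOME")) + "/dbServers.xml");
  if (star)                        candidates.push_back(std::string(star) + "/StDb/servers/dbServers.xml");
  for (size_t i = 0; i < candidates.size(); ++i)
    if (usable(candidates[i])) return candidates[i];   // first readable file wins
  return "";                                           // no usable configuration found
}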
Database tips and tricks that could be useful for STAR activities are collected in this section.
STAR Databases: TIMESTAMP
There are three timestamps used in STAR databases: beginTime, entryTime, and deactive.
entryTime and deactive are essential for 'reproducibility' and 'stability' in production. The beginTime is the STAR user timestamp. One manifestation of this is the time recorded by daq at the beginning of a run; it is valid until the beginning of the next run, so the end of validity is the next beginTime. In this example the time range will contain many event times, which are also defined by the daq system. The beginTime can also be used in calibration/geometry to define a range of valid values.

EXAMPLE (et = entryTime): The beginTime represents a 'running' timeline that marks changes in db records with respect to daq's event timestamp. In this example, say at some time et1 I put an initial record into the db with daqtime=bt1. This data will now be used for all daqTimes later than bt1. Now I add a second record at et2 (the time I write to the db) with beginTime=bt2 > bt1. At this point the 1st record is valid from bt1 to bt2 and the second is valid from bt2 to infinity. Now I add a 3rd record at et3 with bt3 < bt1, so that the 3rd record is valid from bt3 to bt1.
Let's say that after we put in the 1st record but before we put in the second one, Lydia runs a tagged production that we'll want to 'use' forever. Later I want to reproduce some of this production (e.g. embedding...) but the database has changed (we've added the 2nd and 3rd entries). I need to view the db as it existed prior to et2. To do this, whenever we run production we define a productionTimestamp at that production time, pt1 (which in this example is < et2). pt1 is passed to the StDbLib code, and the code requests only data that was entered before pt1. This is how production is 'reproducible'.

The mechanism also provides 'stability'. Suppose at time et2 the production was still running: use of pt1 is a barrier that keeps the production from 'seeing' the later db entries.

Now let's assume that the 1st production is over, we have all 3 entries, and we want to run a new production. However, we decide that the 1st entry is no good and the 3rd entry should be used instead. We could delete the 1st entry so that the 3rd entry is valid from bt3 to bt2, but then we could not reproduce the original production. So what we do is 'deactivate' the 1st entry with a timestamp, d1, and run the new production at pt2 > d1. The sql is written so that the 1st entry is ignored as long as pt2 > d1. But I can still run a production with pt1 < d1, which means the 1st entry was valid at time pt1, so it IS used. To deactivate an entry, email your request to the database expert.
In essence, the API will request data as follows: entryTime < productionTime < deactive, OR entryTime < productionTime AND deactive == 0. To put this to use with the BFC, a user must use the dbv switch. For example, a chain that includes dbv20020802 will return values from the database as if today were August 2, 2002. In other words, the switch provides a user with a snapshot of the database from the requested time (which of course includes valid values older than that time). This ensures the reproducibility of production.
Below is an example of the actual queries executed by the API:
select unix_timestamp(beginTime) as bTime,eemcDbADCconf.* from eemcDbADCconf Where nodeID=16 AND flavor In('ofl') AND (deactive=0 OR deactive>=1068768000) AND unix_timestamp(entryTime)<=1068768000 AND beginTime<=from_unixtime(1054276488) AND elementID In(1) Order by beginTime desc limit 1
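In code form, the selection rule applied to each record reduces to a predicate like the following (an illustrative sketch, not the actual StDbLib source; it follows the boundary conventions of the query above):

// A record entered at entryTime and possibly deactivated at time 'deactive' (0 = still active)
// is visible to a production with timestamp productionTime if
//   entryTime <= productionTime  AND  (deactive == 0 OR deactive >= productionTime)
#include <ctime>

bool visibleToProduction(std::time_t entryTime, std::time_t deactive, std::time_t productionTime) {
  const bool enteredInTime = (entryTime <= productionTime);
  const bool stillActive   = (deactive == 0) || (deactive >= productionTime);
  return enteredInTime && stillActive;
}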
For a description of format see ....
Welcome to the Quality assurance and quality control pages.
Procedure proposal for production and QA in Year4 run
Jérôme LAURET & Lanny RAY, 2004
Summary: The qualitative increase in data volume for run 4, together with the finite cpu capacity at RCF, precludes the possibility of multiple reconstruction passes through the full raw data volume next year. This new computing situation, together with recent experiences involving production runs which were not pre-certified prior to full-scale production, motivates a significant change in the data quality assurance (QA) effort in STAR. This note describes the motivation and the proposed implementation plan.
Introduction
The projection for the next RHIC run (also called the Year4 run, which will start by the end of 2003) indicates a factor of five increase in the number of collected events compared to preceding runs. This will increase the required data production turn-around time by an order of magnitude, from months to one year per full-scale production run. The qualitative increase in the reconstruction demands, combined with an increasingly aggressive physics analysis program, will strain the available data processing resources and poses a severe challenge to STAR and the RHIC computing community for delivering STAR's scientific results on a reasonable time scale. This situation will become more and more problematic as our Physics program evolves to include rare probes. It is not unexpected and was anticipated since before the inception of RCF. The STAR decadal plan (a 10-year projection of STAR activities and development) clearly describes the need for several upgrade phases, including a factor of 10 increase in data acquisition rate and analysis throughput by 2007.
Typically, 1.2 passes represents an ideal, minimal number of passes through the raw data in order to produce calibrated data summary tapes for physics analysis. However, it is noteworthy that in STAR we have typically processed the raw data an average of 3.5 times, where at each step major improvements in the calibrations were made which enabled more accurate reconstruction, resulting in greater precision in the physics measurements. The Year 4 data sample in STAR will include the new ¾-barrel EMC data, which makes it unlikely that sufficiently accurate calibrations and reconstruction can be achieved with only the ideal 1.2 passes, as we foresee the need for additional calibration passes through the entire data set in order to accumulate enough statistics to push the energy calibration to the high-pT limit.
While drastically diverging from the initial computing requirement plans (1), this mode of operation, in conjunction with the expanded production time table, calls for a strengthening of the procedures for calibration, production and quality assurance.
The following table summarizes the expectations for ~70 million events with a mix of central and minbias triggers. Numbers of files and data storage requirements are also included for guidance.
Au+Au 200 | 35 M central | 35 M minbias | Total
---|---|---|---
No DAQ100 (1 pass) | 329 days | 152 days | 481 days
No DAQ100 (2 passes) | 658 days | 304 days | 962 days
Assuming DAQ100 (1 pass) | 246 days | 115 days | 361 days
Assuming DAQ100 (2 passes) | 493 days | 230 days | 723 days
Total storage estimated (raw) | x | x | 203 TB
Total storage estimated | x | x | 203 TB
Quality Assurance: Goals and proposed procedure for QA and productions
The goal of the QA activities in STAR is the validation of data and software, up to DST production. While QA testing can never be exhaustive, the intention is that data that pass the QA testing stage should be considered highly reliable for downstream physics analysis. In addition, QA testing should be performed soon after production of the data, so that errors and problems can be caught and fixed in a timely manner.
QA processes are run independently of the data taking and DST production. These processes contain the accumulated knowledge of the collaboration with respect to potential modes of failure of data taking and DST production, along with those physics distributions that are most sensitive to the health of the data and DST production software. The results probe the data in various ways:
At the most basic level, the questions asked are whether the data can be read and whether all the components expected in a given dataset are present. Failures at this level are often related to problems with computing hardware and software infrastructure.
At a more sophisticated level, distributions of physics-related quantities are examined, both as histograms and as scalar quantities extracted from the histograms and other distributions. These distributions are compared to those of previous runs that are known to be valid, and the stability of the results is monitored. If changes are observed, they must be understood in terms of changing running conditions or controlled changes in the software; otherwise an error flag should be raised. (Deviations are not always bad, of course, and can signal new physics: QA must be used with care in areas where there is a danger of biasing the physics results of STAR.)
The focus of the QA activities until summer 2000 was on Offline DST production for the DEV branch of the library. With the start of data taking, the scope of QA broadened considerably. There are in fact two different servers running autoQA processes:
Offline QA. This autoQA-generated web page accesses QA results for all the varieties of Offline DST production:
Real data production by the Fast Offline framework. This is used to catch gross errors in data taking, online trigger, and calibration, allowing the situation to be corrected before too much data is accumulated (this framework also provides on-the-fly calibration as the data are produced).
Nightly tests of real and Monte Carlo data (almost always using the DEV and NEW branches of the library). This is used principally for the validation of the migration of library versions.
Large scale production of real and Monte Carlo data (almost always using the PRO branch of the library). This is used to monitor the stability of DSTs for physics.
Online QA. This autoQA-generated web page accesses QA results for data in the Online event pool, both raw data and DST production that is run on the Online processors.
The QA dilemma
While a QA shift is usually organized during data taking, the later, official production runs were only encouraged (but not mandated) to be regularly QA-ed; typically there has not been an organized QA effort for post-experiment DST production runs. The absence of organized quality assurance efforts following the experiment permitted several post-production problems to arise. These were eventually discovered at the (later) physics analysis stage, but by then the entire production run had been wasted. Examples include the following:
missing physics quantities in the DSTs (e.g. V0, Kinks, etc ...)
missing detector information or collections of information, due to pilot error or lack of code support
improperly calibrated and unusable data
...
The net effect of such late discoveries is a drastic increase in the production cycle time, where entire production passes have to be repeated, which could have been prevented by a careful QA procedure.
Production cycles and QA procedure
To address this problem we propose the following production and QA procedure for each major production cycle.
A data sample (e.g. from a selected trigger setup or detector configuration) of not more than 100k events (Au+Au) or 500k events (p+p) will be produced prior to the start of the production of the entire data sample.
This data sample will remain available on disk for a period of two weeks or until all members of “a” QA team (as defined here) have approved the sample (whichever comes first).
After the two week review period, the remainder of the sample is produced with no further delays, with or without the explicit approval of everyone in the QA team.
Production schedules will be vigorously maintained. Missing quantities which are detected after the start of the production run do not necessarily warrant a repetition of the entire run.
The above policy does not apply to special or unique data samples involving calibration or reconstruction studies nor would it apply to samples having no overlaps with other selections. Such unique data samples include, for example, those containing a special trigger, magnetic field setting, beam-line constraint (fill transition), etc., which no other samples have and which, by their nature, require multiple reconstruction passes and/or special attention.
In order to carry out timely and accurate Quality Assurance evaluations during the proposed two-week period, we propose the formation of a permanent QA team consisting of:
One or two members per Physics Working group. This manpower will be under the responsibility of the PWG conveners. The aim of these individuals will be to rigorously check, via the autoQA system or analysis codes specific to the PWG, for the presence of the required physics quantities of interest to that PWG which are understood to be vital for the PWG’s Physics program and studies.
One or more detector sub-system experts from each of the major detector sub-systems in STAR. The goal of these individuals will be to ensure the presence and sanity of the data specific to that detector sub-system.
With the understanding that the outcome of such a procedure and QA team is a direct positive impact on the Physics capabilities of each PWG, we recommend that this QA service work be done without shift signups or shift credit, as is presently being done for DAQ100 and ITTF testing.
Summary
Facing important challenges driven by the data volume and Physics needs, we have proposed an organized procedure for QA and production relying on cohesive feedback from the PWGs and detector sub-system experts within time-constraint guidelines. The intent is clearly to bring the data to readiness in the shortest possible turn-around time while avoiding the need for later re-production, which wastes CPU cycles and human hours.
STAR Offline QA Documentation (start here!). Quick Links: Shift Requirements, Automated Browser Instructions, Online RunLog Browser
Automated Offline QA Browser
QA Shift Report Forms. Quick Links: Issue Browser/Editor, Dashboard, Report Archive
As a minimal check on effects caused by any changes to reconstruction code, the following code and procedures are to be exercised:
A suite of datasets has been selected which should serve as a reference basis for any changes. These datasets include:
Real data from Run 7 AuAu at 200 GeV
Simulated data using year 2007 geometry with AuAu at 200 GeV
Real data from Run 8 pp at 200 GeV
Simulated data using year 2008 geometry with pp at 200 GeV
These datasets should be processed with BFC as follows to generate histograms in a hist.root file:
root4star -b -q -l
root4star -b -q -l
root4star -b -q -l
The RecoQA.C macro generates CINT files from the hist.root files
root4star -b -q -l 'RecoQA.C("st_physics_8113044_raw_1040042.hist.root")'
root4star -b -q -l 'RecoQA.C("rcf1296_02_100evts.hist.root")'
root4star -b -q -l 'RecoQA.C("st_physics_9043046_raw_2030002.hist.root")'
The CINT files are then useful for comparison to the previous reference, or storage as the new reference for a given code library. To view these plots, simply execute the CINT file with root:
root -l st_physics_8113044_raw_1040042.hist_1.CC
root -l st_physics_8113044_raw_1040042.hist_2.CC
root -l rcf1296_02_100evts.hist_1.CC
root -l rcf1296_02_100evts.hist_2.CC
root -l st_physics_9043046_raw_2030002.hist_1.CC
root -l st_physics_9043046_raw_2030002.hist_2.CC
One can similarly execute the reference CINT files for visual comparison:
root -l $STAR/StRoot/qainfo/st_physics_8113044_raw_1040042.hist_1.CC
root -l $STAR/StRoot/qainfo/st_physics_8113044_raw_1040042.hist_2.CC
root -l $STAR/StRoot/qainfo/rcf1296_02_100evts.hist_1.CC
root -l $STAR/StRoot/qainfo/rcf1296_02_100evts.hist_2.CC
root -l $STAR/StRoot/qainfo/st_physics_9043046_raw_2030002.hist_1.CC
root -l $STAR/StRoot/qainfo/st_physics_9043046_raw_2030002.hist_2.CC
Steps 1-3 above should be followed immediately upon establishing a new code library. At that point, the CINT files should be placed in the appropriate CVS directory, checked in, and then checked out (migrated) into the newly established library:
cvs co StRoot/qainfo
mv *.CC StRoot/qainfo
cvs ci -m "Update for library SLXXX" StRoot/qainfo
cvs tag SLXXX StRoot/info/*.CC
cd $STAR
cvs update StRoot/info
Missing information will be filled in soon. We may also consolidate some of these steps into a single script yet to come.
Helpful links:
BBC | BTOF | BEMC | EPD |
eTOF | GMT | iTPC/TPC | HLT |
MTD | VPD | ZDC |
To join the Meeting: https://bluejeans.com/967856029
To join via Room System: Video Conferencing System: bjn.vc -or- 199.48.152.152, Meeting ID: 967856029
To join via phone:
1) Dial: +1.408.740.7256 (United States), +1.888.240.2560 (US Toll Free), +1.408.317.9253 (Alternate number) (see all numbers - http://bluejeans.com/numbers)
2) Enter Conference ID: 967856029
Meeting URL: https://bluejeans.com/563179247?src=join_info
Meeting ID: 563 179 247
Want to dial in from a phone? Dial one of the following numbers:
+1.408.740.7256 (US (San Jose))
+1.888.240.2560 (US Toll Free)
+1.408.317.9253 (US (Primary, San Jose))
+41.43.508.6463 (Switzerland (Zurich, German))
+31.20.808.2256 (Netherlands (Amsterdam))
+39.02.8295.0790 (Italy (Italian))
+33.1.8626.0562 (Paris, France)
+49.32.221.091256 (Germany (National, German))
(see all numbers - https://www.bluejeans.com/premium-numbers)
Enter the meeting ID and passcode followed by #
Connecting from a room system? Dial: bjn.vc or 199.48.152.152 and enter your meeting ID & passcode
Topic: STAR QA Board
Time: This is a recurring meeting. Meet anytime.
Join Zoom Meeting: https://riceuniversity.zoom.us/j/95314804042?pwd=ZUtBMzNZM3kwcEU3VDlyRURkN3JxUT09
Meeting ID: 953 1480 4042
Passcode: 2021
One tap mobile: +13462487799,,95314804042# US (Houston); +12532158782,,95314804042# US (Tacoma)
Dial by your location: +1 346 248 7799 US (Houston); +1 253 215 8782 US (Tacoma); +1 669 900 6833 US (San Jose); +1 646 876 9923 US (New York); +1 301 715 8592 US (Washington D.C.); +1 312 626 6799 US (Chicago)
Find your local number: https://riceuniversity.zoom.us/u/amvmEfhce
Join by SIP: 95314804042@zoomcrc.com
Join by H.323: 162.255.37.11 (US West); 162.255.36.11 (US East); 115.114.131.7 (India Mumbai); 115.114.115.7 (India Hyderabad); 213.19.144.110 (Amsterdam Netherlands); 213.244.140.110 (Germany); 103.122.166.55 (Australia); 149.137.40.110 (Singapore); 64.211.144.160 (Brazil); 69.174.57.160 (Canada); 207.226.132.110 (Japan)
Meeting ID: 953 1480 4042
Passcode: 2021
Weekly on Fridays at noon EST/EDT
Zoom information:
=========================
Topic: STAR QA board meeting
Mailing List:
Summary Page by Rongrong:
https://drupal.star.bnl.gov/STAR/pwg/common/bes-ii-run-qa
Run QA: Ashik Ikbal, Li-Ke Liu (Prithwish Tribedy, Yu Hu as code developers)
General TPC QA: Lanny Ray (Texas)
PWG Volunteers:
CF: Yevheniia Khyzhniak (Ohio), Muhammad Ibrahim Abdulhamid Elsayed (Egypt)
FCV: Han-Sheng Li (Purdue), Yicheng Feng (Purdue), Niseem Magdy (SBU)
LFSUPC: Hongcan Li (CCNU)
HP: Andrew Tamis (Yale), Ayanabha Das (CTU)